What goes inside the black boxes that process data to make decisions has always been shrouded in mystery
Post Date – 12:45 AM, Wednesday – 5/31/23

by GHP Raju
Hyderabad: Inventions and discoveries such as the wheel, fire, the printing press, the steam engine, penicillin and the telephone have changed our civilisation in incredible ways. Likewise, the Internet and digital media have ushered in dramatic changes in nearly every aspect of our lives over the past two decades.
Artificial intelligence (AI) is now the most powerful digital decision-making tool, but the ethical dilemmas it raises remain unresolved. AI uses vast amounts of data generated through the internet and digital media to make decisions based on machine learning, and it has become the buzzword for government and private enterprise efforts alike. Some key areas where AI is widely used include natural language processing (NLP), e-commerce, manufacturing and robotics, customer service, banking and finance, and healthcare. Unresolved ethical questions confront policymakers in all of these fields.
Artificial intelligence intervention
Three key reasons make the use of AI in decision-making imperative: large volumes of complex digital data, extraordinary numerical computing power, and the need for rapid decision-making through machine learning. Yet even as AI is widely adopted and decision-making becomes more mechanised, regulators, governments, and even the judiciary rarely discuss or address some of the core ethical issues associated with the process.
For example, artificial intelligence is widely used in customer service and e-commerce. AI algorithms analyse customer preferences, purchase history and behaviour to provide personalised product recommendations. This may seem like a huge advantage to customers, but it often comes at the cost of personal privacy. Various service providers quietly collect, store and rely on personal data such as mobile numbers, email IDs, personal preferences and locations to leverage to their advantage and maximise profits. Personal data privacy is rarely discussed, let alone addressed through corrective intervention, in any quarter.
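The recommendation mechanism described above can be illustrated with a minimal sketch. This is not any particular platform's algorithm; the user names, product names and the simple overlap-based similarity are all hypothetical, chosen only to show how purchase histories drive suggestions:

```python
from collections import Counter

# Hypothetical purchase histories: user -> set of products bought.
histories = {
    "asha":  {"phone", "earbuds", "case"},
    "bilal": {"phone", "charger"},
    "chen":  {"earbuds", "case", "stand"},
}

def recommend(user, histories, top_n=2):
    """Suggest products bought by users whose histories overlap with ours."""
    own = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(own & items)   # shared purchases act as a similarity weight
        for item in items - own:     # only suggest products the user hasn't bought
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("asha", histories))  # ['stand', 'charger']
```

Even this toy version makes the privacy trade-off concrete: the quality of the suggestions depends entirely on how much of each customer's history the provider has collected and retained.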
Likewise, a patient’s medical history is private data collected and stored by hospitals. Patients are rarely informed about the nature of the data that hospitals collect, how long and in what form they intend to store it, and whether adequate protections are in place to prevent data theft. Patients and/or their families have the right to know the data security measures taken by the hospital. But a lack of public awareness allows hospitals to avoid taking responsibility for their patients’ health information.
Black box dilemma
Another serious problem with an AI-supported decision-making process is how it reaches its decisions. What goes inside the black boxes that process data to make decisions has always been shrouded in mystery. In some US states, AI-based predictive policing software is being deployed in crime analysis to predict an individual’s criminal potential. The software produced results suggesting that Black Americans have a high propensity to commit crime. These are absurd, racially biased inferences drawn using artificial intelligence.
If this software were adopted in our society, where social bias based on caste, religion, and region prevails, the results would be as predictably biased as in the United States. The problem is not the decision-making ability of AI, but the lack of explanation of the decision-making process of AI inside the black box. With decision-making happening inside the black box of artificial intelligence, who is responsible for the ill effects of such biased decision-making is an ethical dilemma that policymakers face when dealing with predictive policing software.
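One remedy discussed for the black-box problem is to use models whose decisions can be traced feature by feature. The sketch below is purely illustrative, not any real policing system: the features and hand-set weights are hypothetical, and it simply shows what an "explanation" of a risk score looks like when the model is transparent rather than opaque:

```python
# Hypothetical, hand-set coefficients for a transparent risk score.
# A location-based feature is explicitly zeroed to avoid proxy bias.
WEIGHTS = {
    "prior_offences": 0.8,
    "age_under_25":   0.3,
    "neighbourhood":  0.0,
}

def risk_score(features):
    """Return the total score and each feature's contribution (the 'explanation')."""
    contributions = {k: WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = risk_score({"prior_offences": 2, "age_under_25": 1})
print(round(score, 2))  # each term of this total is visible in `why`
print(why)
```

With a black-box model, only the final score is visible; here, anyone auditing the system can see exactly which inputs drove the decision and challenge a biased weight, which is precisely what current predictive policing deployments do not allow.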
Artificial intelligence and robot police
Nursing robots are being developed to meet the medical needs of the elderly. Some have malfunctioned and harmed or even killed patients, for instance by administering incorrect doses of medication. Who is responsible for the fatal consequences of their decisions? Similarly, consider the well-remembered Hollywood movie “Robocop”, in which a half-human, half-robot starts out enforcing the law but quickly goes rogue and hurts innocent citizens (albeit due to an internal circuit failure). Although robot police could increase the efficiency of crime prevention and detection exponentially, the moral dilemma of accountability remains unresolved.
Artificial Intelligence and Robot Judges
Could we introduce robot judges into our criminal justice system to improve the delivery of justice? What if we employed robot judges on an experimental basis to make judicial decisions and orders based on the evidence presented by the prosecution, the arguments of the defence attorneys, case law and other legal provisions? Robot judges would work 24/7 (unaffected by load shedding), reducing case pendency and processing cases with extreme speed, while strictly adhering to legally mandated procedures.
One thing they cannot do, however, is weigh the moral element of the case at hand. A robot judge, like a robot policeman, cannot feel empathy or sympathy, and cannot take humanity into account in judicial matters. This is a huge limitation of AI decision machines.
AI and the Unemployed
Another ethical dilemma facing policymakers around the world is the choice between “efficiency” and “employment”. Robot police are highly effective in preventing and detecting crime; one robot cop may equal ten human officers in efficiency. But when educated youth need jobs, what will policymakers choose? If AI-enabled machines start taking jobs from humans, it will lead to extraordinary social unrest, increased crime, and spell doom for the nation.
Likewise, if nursing robots replace human nurses in providing care to patients, the ethical, social and economic consequences will become unacceptable, and displaced workers would certainly fight against such a policy decision. Google CEO Sundar Pichai emphasised this point, saying: “We must address the ethics and morals of artificial intelligence because it will have a significant impact on society.”
Necessary evil?
Improving the efficiency not only of administrative decision-making but also of delivering good governance to the people is undoubtedly an immediate need. With the adoption of digital technologies and AI decision-making machines, government and private business entities will be able to better serve their people and customers. AI tools are widely used by governments for direct benefit transfer (DBT) of funds and PDS, national digital health missions, crop yield forecasting, pest and disease management, soil health assessment, weather forecasting and more. Ethical dilemmas must be effectively addressed by policymakers in these areas while using artificial intelligence.
Private business entities such as mobile service providers, shopping malls, private hospitals and digital platforms make extensive use of AI-enabled machines and collect vast amounts of personal data from customers and patients without adequate concern for data security, privacy, theft, misuse and other ethical commitments.
Scientists have expressed serious ethical concerns about artificial intelligence decision-making machines. Renowned astrophysicist Stephen Hawking said: “The development of comprehensive artificial intelligence may herald the end of humanity.” Technology industrialist Elon Musk emphasised that “artificial intelligence is a fundamental risk to the existence of human civilization.” Policymakers in India must consider ethical dilemmas while using AI tools in all policymaking to increase efficiency, transparency and accountability.

