By Euny Hong
Artificial intelligence (AI) has been a technological aspiration for decades, but only in recent years has its practical realization begun to catch up with what scientists imagined it could be. Now, AI systems and tools are found in countless industries and products, from smartphones to self-driving cars, and across the health care, retail, manufacturing, and banking sectors, among many others.
It’s easy to understand why there is a tremendous push to develop AI capabilities. Computers that can reason and make decisions in a human-like way can work at a speed and scale no human counterpart can match. The potential to bring down costs, streamline systems, and accelerate research and technological progress is a major draw of AI. This is one reason why worldwide business spending on AI has been projected to reach $110 billion per year by 2024.
But as with any powerful technology that develops quickly, there are reasons to be hesitant or even concerned. There is, after all, a big difference between viewing AI as a technology that automates simple, repetitive tasks requiring no significant decisions, and viewing it as capable of producing highly complex, sophisticated systems that learn over time. The former was a long-standing view of what AI might be; the latter is increasingly what AI actually represents.
Below, we provide a general overview of some of the many ethical considerations and implications of AI. Note, however, that this is a complex area of inquiry and one that is evolving alongside the rapid development of AI systems.
AI technology is largely unregulated by governments at the moment. This means that individual companies and developers are essentially free to create AI systems entirely of their own design. While AI used in a targeted way for a particular company’s internal operations, or even a customer-facing application, may seem to have little relevance outside of that use case, privately developed AI still has the potential for a much greater impact on society. A recent example is OpenAI’s ChatGPT chatbot, released in November 2022. Within weeks, ChatGPT became widely popular around the world. Users have adopted the technology for a variety of tasks, ranging from deepening their research to cheating on school essays. As of March 2023, a free version of ChatGPT remains available.
Algorithmic processes offer significant opportunities for making systems more efficient, but one concern is that AI lacks a core element of human judgment. It may also amplify the biases and discriminatory elements already present in existing systems. Political philosopher Michael Sandel of Harvard University has posed the question: “Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”
Take the example of a company’s hiring practices. There are already many questions of bias, discrimination, and fair practice wrapped up in this work, even without AI. If an AI system is designed to screen resumes for a company, it could magnify these concerns, and even lend them a veneer of scientific legitimacy, since it is easy to assume that AI systems are objective even when they are not. On the other hand, an AI designed to read resumes could improve access for applicants by screening a much wider pool than a human team ever could. One study estimates AI could help to fill an additional 3.3 million jobs.
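To make the amplification mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: a model trained on past hiring decisions that happened to favor one school learns to reproduce that preference. The feature names, the tiny dataset, and the scoring step are all invented for illustration; no real screening system is built this way.

```python
# Hypothetical sketch: a resume screener trained on biased historical decisions
# reproduces the bias. Assumes scikit-learn is installed; data is invented.
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, attended_school_A]
# Historical decisions advanced school A applicants regardless of experience.
X = [
    [2, 1], [5, 1], [3, 1], [6, 1],   # school A applicants
    [2, 0], [5, 0], [3, 0], [6, 0],   # all other applicants
]
y = [1, 1, 1, 1, 0, 0, 0, 0]          # 1 = historically advanced to interview

model = LogisticRegression().fit(X, y)

# An experienced candidate from outside school A is scored down, while a less
# experienced school A candidate is scored up, because the model has learned
# the historical preference rather than anything about job performance.
print(model.predict_proba([[7, 0]])[0][1])  # low probability of advancing
print(model.predict_proba([[2, 1]])[0][1])  # high probability of advancing
```

The point of the sketch is simply that a model has no notion of fairness on its own: it optimizes for agreement with the historical labels, whatever patterns those labels contain.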
This brief example illustrates many of the ethical considerations of AI as they relate to human judgment and bias: How can AI systems be designed to be objective, rather than taking on the inherent biases of the humans who create them? How can AI be prevented from amplifying existing discriminatory practices? And is it necessary to educate the broader public about the inherent subjectivity of some AI systems?
Another key ethical concern for developers of AI is surveillance and privacy. AI systems draw on vast pools of information in order to learn, and this information often comes from users who may not even be aware that their data is being collected and processed in this way. This has great potential to benefit a user, but it may also constitute a violation of privacy, and the same data can be used for harm.
Surveillance is a more specific concern related to computer vision, a subfield of AI that trains computers to process and recognize visual data much as a human would. Computer vision systems power Face ID scanners on smartphones, among many other tools. The potential for this technology to be used to surveil and even control unwitting individuals is high; the Chinese government, for example, has already been accused of stifling behavior through facial recognition surveillance systems.
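Part of why surveillance worries ethicists is how little effort basic face detection now takes. The sketch below uses the pretrained face detector that ships with the opencv-python package; "crowd.jpg" is a placeholder filename, and this generic example is not the system used by any government or vendor.

```python
# Minimal face-detection sketch, assuming opencv-python is installed.
import cv2

# Load a pretrained Haar-cascade face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("crowd.jpg")                   # placeholder image file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # the detector expects grayscale
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")

# Each detection is a bounding box (x, y, width, height); draw and save them.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("crowd_annotated.jpg", image)
```

Detection alone is not identification, but it is the first step in most face recognition pipelines, and the fact that it fits in a dozen lines of freely available code is part of what makes the surveillance question so pressing.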
As with cryptocurrency, AI systems can require vast amounts of energy due to the countless parameters and huge amounts of data they utilize. Generating this energy can have an environmental impact, and AI developers—as well as companies that use AI—should be aware of these implications.
Discussions of AI ethics, and the best practices that grow out of them, are constantly adapting to the changing technology. Companies developing or utilizing AI systems must make difficult decisions about who is responsible for overseeing ethical concerns related to AI, how to set an ethical standard across an organization, how to identify problems, and, perhaps most importantly, how to operationalize solutions.