What it takes to create and deploy ethical artificial intelligence

Artificial intelligence “acts” unethically in ways different from humans, even if the damage both can cause is similar. Both humans and AI can invade people’s privacy, discriminate, or cause physical harm, but artificial intelligence does not act with intent to cause such harm. Rather, the damage stems from the way artificial intelligence collects and processes data.

Currently, artificial intelligence cannot achieve consciousness, though at least one Google engineer has disagreed. Today, the type of artificial intelligence that companies create and incorporate into their operations and decision-making systems is artificial narrow intelligence, which refers to a computer’s ability to perform a single task, or a limited set of tasks, extremely well. In most cases that task is performed through what is known as “machine learning”: the system processes data, discovers (or constructs) a pattern among the data points, and uses that pattern to identify what the algorithm should find and/or to provide a solution.
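To make this concrete, here is a minimal sketch of that kind of single-task pattern learning (using scikit-learn; the data points and labels are invented purely for illustration):

```python
# A toy illustration of narrow AI: the model performs one task well
# (classifying points) by finding a pattern in the data it is shown.
from sklearn.linear_model import LogisticRegression

# Invented data points: two features per example, one binary label.
X = [[1.0, 2.0], [2.0, 1.5], [1.5, 1.0],
     [8.0, 9.0], [9.0, 8.5], [8.5, 9.5]]
y = [0, 0, 0, 1, 1, 1]  # the pattern: small values -> 0, large -> 1

model = LogisticRegression().fit(X, y)

# The learned pattern generalizes to unseen points -- and nothing more:
# there is no understanding or intent, only a decision boundary.
print(model.predict([[1.2, 1.8], [9.2, 8.8]]))  # expected: [0 1]
```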

In many ways, machine learning is similar to human analogical or inductive reasoning. Analogical reasoning is when a person compares two cases to determine whether they are similar enough to apply the conclusions of one case to the other. It’s a way to establish a pattern across things that may be diverse but share a common theme. Inductive reasoning is when a person draws a conclusion or a broad generalization from a set of specific examples. The larger the set, the more accurate the conclusion can be. It’s a way to find the pattern among the details.

The big difference between these two forms of human reasoning and machine learning is that machine learning is more vulnerable to what G. E. Moore called the naturalistic fallacy: confusing what currently exists with what should be seen as moral. For example, the naturalistic fallacy can confuse the existence of systemic bias with the assumption that it should persist. Among AI ethics scandals, there is no shortage of examples of AI algorithms reinforcing discriminatory practices simply because discrimination is already rampant. Recently, the Department of Justice and the Equal Employment Opportunity Commission warned that AI hiring tools may discriminate against applicants with disabilities.
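A minimal sketch of how this happens mechanically (the dataset is invented and deliberately biased for illustration): a model trained on historical hiring decisions learns the discrimination embedded in them, because to the algorithm whatever was decided in the past simply is the pattern to reproduce:

```python
# Toy illustration of the naturalistic fallacy in machine learning:
# a model trained on biased historical decisions reproduces the bias.
from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, has_disability (0 or 1)]
# Labels reflect a biased past: qualified applicants with a disability
# were rejected anyway.
X = [[9, 0], [8, 0], [9, 1], [8, 1], [3, 0], [2, 1]]
y = [1,      1,      0,      0,      0,      0]  # 1 = hired

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two equally qualified applicants: the model has "discovered" that
# disability predicts rejection and applies it as if it were a rule.
print(model.predict([[9, 0], [9, 1]]))  # expected: [1 0]
```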

People often fall into this trap as well, but they may recognize, after the fact or before acting, how biased, discriminatory, or unethical their conclusions are. Machine learning and narrow artificial intelligence do not have this metacognitive ability. People should therefore be able to nullify unethical decisions as they occur or, more essentially, avoid them through the way they develop learning algorithms.

In his new book Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, Reid Blackman shows how it is possible to develop AI that can identify and avoid the naturalistic fallacy. He also provides a clear roadmap for how companies can create governance structures to identify and mitigate the ethical risks that AI can pose. One of the greatest lessons of his book is that efforts to identify and reduce bias and potentially discriminatory conclusions must begin before selecting the learning algorithm’s dataset and training the machine to learn. Unethical AI, as Blackman puts it, is “the result of not thinking about the consequences, not monitoring the AI ‘in the wild’, not knowing what to look for when developing and acquiring AI.” Identification and mitigation strategies should include decisions about which data points should be considered relevant and how inputs should be weighted or prioritized, as well as whether to establish thresholds for particular outputs and what objective function to establish or test against, as the sketch below illustrates. It will also be necessary to think carefully beforehand about what discrimination and bias mean in practice.
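A hedged sketch of what such decisions can look like in code (the feature names, weights, and threshold are illustrative assumptions, not Blackman’s prescriptions): which columns count as relevant, how examples are weighted, and where the output threshold sits are all explicit, reviewable choices rather than defaults:

```python
# Illustrative only: the mitigation levers named above, made explicit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Invented data; 'protected' stands in for an attribute such as disability.
qualification = rng.normal(5, 2, n)
protected = rng.integers(0, 2, n)
label = (qualification + rng.normal(0, 1, n) > 5).astype(int)

# Decision 1 -- relevance: exclude the protected attribute from the inputs.
X = qualification.reshape(-1, 1)

# Decision 2 -- weighting: up-weight the smaller group so the objective
# function is not dominated by the majority group's examples.
weights = np.where(protected == 1, 2.0, 1.0)
model = LogisticRegression().fit(X, label, sample_weight=weights)

# Decision 3 -- thresholds: choose the score cutoff deliberately rather
# than accepting the default 0.5, and audit outcomes per group.
scores = model.predict_proba(X)[:, 1]
threshold = 0.4  # an explicit policy choice, subject to review
decisions = (scores >= threshold).astype(int)
for group in (0, 1):
    rate = decisions[protected == group].mean()
    print(f"group {group}: selection rate {rate:.2f}")
```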

This last point brings Blackman to the second biggest lesson of his book: ethical AI requires ethics experts on AI development and procurement teams. Since unethical AI results not only from programming flaws but also from a failure to fully consider how a company’s values and ethical requirements should be incorporated into the learning algorithm, companies cannot leave ethical challenges to wishful thinking. Blackman warns that “it’s unfair to expect data scientists, engineers and product owners to do the kind of work for which they don’t have in-depth expertise,” letting them make decisions on complex ethical, social and political issues that carry reputational, regulatory and legal risks they are not equipped for and cannot be equipped for in the short (or even long) term. Because of the complexity of AI and the scale at which its decisions will impact large groups of consumers, patients and citizens (depending on where the AI is being used), it is essential to include on the AI development or procurement team those who are trained to consider the differences between ethical concepts and their operationalization, as well as those who know the difference between the biases currently influencing decision-making and how decision-making must take place in order to be ethical.

Of course, ethics experts shouldn’t be the only ones on the team who can speak to moral challenges; that would sap the effort and undermine the ability to create ethical AI. Just as the ethicists on the team should at least be familiar with the factors that create ethical risks, such as product development and customer preferences, so the other members of the team should be familiar with how their domains can lead to unethical AI. In addition, leaders of companies using artificial intelligence must ensure that organizational ethical values are established as a priority, and must provide core training and ongoing practice not only in recognizing ethical risks, but also in preventing and mitigating them.
