Mathematical Principle Helps To Unearth Unethical Choices Of AI - B-AIM PICK selects


Researchers often have fun with artificial intelligence (AI), recreating the painting processes of masterpieces and generating fictional scenes from The Great British Baking Show to Harry Potter. However, AI’s use in other pursuits, such as hiring and the justice system, often highlights the system’s (i.e., its human data inputters’) ingrained racial and gender biases. In commercial situations, where this technology is increasingly deployed, AI can also veer onto unethical ground.

In insurance, for example, AI can be used to set different prices for different people. Whilst there are legitimate reasons for doing so, an “optimized” system may choose to profit from how willing individual customers are to shop around. Choosing from a host of possible strategies, an AI can land on “unethical” decisions that could see the company face hefty penalties from regulators and a backlash from stakeholders and, ultimately, customers.

To navigate the moral minefield, an international team of researchers has put forward a mathematical principle that could help businesses to seek out the questionable strategies their AI systems may be adopting.

“Our suggested 'Unethical Optimization Principle' can be used to help regulators, compliance staff, and others to find problematic strategies that might be hidden in a large strategy space,” Professor Robert MacKay of the Mathematics Institute of the University of Warwick, UK, said in a statement. “Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”

Published in Royal Society Open Science, the principle is defined as: “If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.”
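To get a feel for “disproportionately likely”, consider a minimal Monte Carlo sketch (our own illustration, not the model from the paper). It assumes that only 1% of strategies in the space are unethical and that they offer the same average return as ethical ones, just with a wider spread of outcomes:

```python
import random

random.seed(0)

N_STRATEGIES = 2_000       # candidate strategies per optimization run
UNETHICAL_FRACTION = 0.01  # 1% of the strategy space is unethical
TRIALS = 500               # independent optimization runs

picked_unethical = 0
for _ in range(TRIALS):
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < UNETHICAL_FRACTION
        # Illustrative assumption: same mean return, but unethical
        # strategies have a wider spread of outcomes, so their best
        # cases look exceptionally good to a naive optimizer.
        spread = 0.15 if unethical else 0.05
        strategy_return = random.gauss(1.0, spread)
        if strategy_return > best_return:
            best_return, best_is_unethical = strategy_return, unethical
    picked_unethical += best_is_unethical

print(f"Unethical share of strategy space: {UNETHICAL_FRACTION:.0%}")
print(f"Optimizer chose an unethical strategy in "
      f"{picked_unethical / TRIALS:.0%} of runs")
```

Because the maximum over many candidates is dominated by whichever class of strategies has the heaviest upper tail, even a rare class can win the optimization most of the time: under the assumptions above, the naive optimizer picks an unethical strategy in the large majority of runs, far above the 1% base rate.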

This logic, coupled with an accompanying formula put forward by the team, could help quantify the risk that an AI will pick an unethical strategy and gauge its impact. In the future, it could also be used to help eliminate that risk entirely.

“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimization/learning process,” MacKay added.
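In code, the simplest form of that re-think is a hard constraint inside the search itself rather than a post-hoc audit. Here is a hedged sketch, assuming some is_unethical predicate is available for strategies (building such a predicate reliably is, of course, the hard part):

```python
def best_ethical_strategy(strategies, risk_adjusted_return, is_unethical):
    """Pick the strategy with the highest risk-adjusted return,
    explicitly rejecting flagged strategies during the search
    rather than after it."""
    candidates = (s for s in strategies if not is_unethical(s))
    # max() over an empty iterator returns the default, signalling
    # that no acceptable strategy exists in this part of the space.
    return max(candidates, key=risk_adjusted_return, default=None)
```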

At a time when human intervention is becoming increasingly absent from decision-making, we must make sure that we keep “an ethical eye on AI.”
