Artificial intelligence will challenge regulators
The effects of artificial intelligence on societies are still poorly understood, hindering effective regulation
Despite the recent rapid evolution and adoption of artificial intelligence (AI) systems, there is scant legislation in any jurisdiction regulating their development and use. Unease about potentially harmful consequences in areas ranging from job markets to civil rights will put pressure on governments to fill this regulatory vacuum.
What next
Regulating AI will be challenging because of characteristics inherent in its development, such as the geographic dispersal of the teams building it, the difficulty of identifying unconscious biases in algorithms, and the inability to predict how the technology will evolve or be applied. Governments will nonetheless take a more active role. However, rather than covering AI systems in general, forthcoming regulation will focus on more narrowly defined technologies or on specific problems associated with AI use.
Subsidiary Impacts
- AI will gain prominence in debates about the shortcomings of modern democracies.
- Court battles over AI bias will become increasingly common.
- Corporations will struggle to avoid a backlash against AI failures.
Analysis
AI has the potential to boost productivity and improve public services. However, as its adoption has increased in recent years, preeminent names in the technology industry and leading scientists, from Microsoft founder Bill Gates and Tesla chief executive Elon Musk to physicist Stephen Hawking, have warned about the risks, including 'killer robots' and even the extinction of humanity.
Governments face mounting pressure to address the impact of AI on employment and a range of other areas.
Democracy
Research from 2013 showed that Facebook 'likes' can be used to predict personal traits, such as political leanings and sexual orientation, with great accuracy. This can be used to manipulate voting behaviour, as many assert happened in the Brexit referendum and 2016 US election.
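A minimal sketch of this kind of inference is given below, using entirely synthetic data rather than the 2013 study's dataset or method: a standard classifier learns to predict a binary trait from a sparse matrix of 'likes'. All variable names and numbers are invented for illustration.

```python
# Illustrative sketch only: synthetic stand-in for the 2013 research, showing
# that a binary trait can be predicted from a sparse 0/1 matrix of 'likes'.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_users, n_pages = 5_000, 200

likes = (rng.random((n_users, n_pages)) < 0.05).astype(int)   # sparse like matrix
page_signal = rng.normal(size=n_pages)                         # some pages carry signal
trait = (likes @ page_signal + rng.normal(scale=0.5, size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```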
Researchers such as Hult University's Olaf Groth warn that internet platforms that mine people's online behaviour to tailor services to them may undermine free will by framing or inducing people's choices and failing to account for 'non-clickable' opinions and worldviews. The result could be that someone's 'world' becomes 'smaller' even with an immense choice of content and political views at their disposal. Social media has already been shown to 'lock' internet users into like-minded digital political communities, increasing political polarisation (see PROSPECTS 2017-22: Artificial intelligence - December 6, 2016).
AI software can learn human prejudices
Bias
Some AI systems have displayed ethically problematic behaviour. COMPAS, an AI-based software tool used in US courts to calculate defendants' risk of recidivism, was found to flag black defendants as at higher risk of committing a future violent crime 77% more often than white defendants; its developer, Northpointe, argues that its algorithm does not include race in its calculations.
The algorithm considered questions such as "in your neighbourhood, have some of your friends or family been crime victims?", probably labelling people living in higher-crime areas as at higher risk of committing new offences. The problem is that higher-crime areas are often poorer areas, and a much higher share of black Americans than white Americans live in poverty, so black defendants are more likely to have 'bad postcodes' and to see their risk assessments affected as a result.
Such algorithms take into account a large number of data points (137 in this case), so no single data point determines the final result, but each can skew the system's output.
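The proxy effect at issue can be shown with a minimal sketch. The data below are synthetic and the model is a plain logistic regression, not Northpointe's algorithm; the point is only that excluding a protected attribute does not stop a correlated variable from skewing scores by group.

```python
# Illustrative sketch only: synthetic data showing how a proxy feature can
# skew risk scores by group even when the protected attribute is excluded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy feature: neighbourhood crime rate, correlated with group membership.
crime_rate = rng.normal(loc=0.3 + 0.4 * group, scale=0.2, size=n)

# Outcome generated from the proxy alone (a simplifying assumption).
reoffend = (rng.random(n) < 0.2 + 0.3 * crime_rate).astype(int)

model = LogisticRegression().fit(crime_rate.reshape(-1, 1), reoffend)
risk = model.predict_proba(crime_rate.reshape(-1, 1))[:, 1]

# Average predicted risk differs by group although 'group' was never a feature.
print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
```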
Low diversity in the AI field, which is dominated (at least in the West) by white men from privileged backgrounds, creates a risk that unconscious biases will be written into algorithms.
Inscrutability
A problem in the COMPAS case is that Northpointe does not reveal how its algorithm calculates probability because it is proprietary. An even more formidable challenge, from a regulatory perspective, is presented by computer 'neural networks', complex systems that simulate the activity of the human brain, notably its layers of neural connections through which input is passed (see INTERNATIONAL: Adoption of machine learning will rise - September 25, 2017).
The 'deep learning' process that takes place in neural networks is often described as a 'black box'. In it, known input leads to specific output, but how that happens is unknown because the 'neurons' make their connections autonomously.
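The sketch below illustrates the point with a small off-the-shelf neural network trained on a public dataset; it is not a model of the systems discussed here. The trained model is simply arrays of numeric weights, none of which individually explains any prediction.

```python
# Illustrative sketch only: a small neural network trained on a public dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0)
net.fit(X, y)

print("training accuracy:", round(net.score(X, y), 3))

# The fitted model is a set of weight matrices; no single weight 'explains'
# why a given input was classified one way rather than another.
print("weight matrix shapes:", [w.shape for w in net.coefs_])
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("total learned parameters:", n_params)
```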
An example is Deep Patient, a programme developed by researchers at the Mount Sinai Hospital in New York. After being 'fed' data from some 700,000 patients, the software proved highly accurate at anticipating disease. Yet physicians were puzzled by the fact that the software was able to predict even the onset of psychiatric disorders that are extremely difficult for doctors to anticipate.
This poses two challenges. First, from a regulatory perspective, it may be difficult for doctors (in this case) to act on a decision they cannot justify; they may legally be unable to treat a patient who is not yet exhibiting symptoms.
Second, the ability to challenge a decision is also important. For example, regulators would need to clarify what recourse a client has when a bank rejects their mortgage application on the basis of an algorithm, without any further explanation.
Unease about the fairness and safety of deep learning is growing. For example, New York University's AI Now Institute has urged public agencies responsible for "high stakes domains" such as healthcare, criminal justice and education to stop using 'black box' AI systems.
Initiatives and proposals
Regulators in different jurisdictions are trying to address AI-related concerns through new legislation:
- In September, the US House of Representatives passed and sent to the Senate for consideration the SELF DRIVE Act, which establishes a federal framework for regulating autonomous vehicles.
- The EU General Data Protection Regulation (GDPR) will come into force in May 2018 (see PROSPECTS 2018: Cybersecurity - November 29, 2017). The GDPR gives EU citizens the right to know when firms are making automated decisions about them, as well as to an explanation of how these decisions were reached and how to challenge them. Critics argue that these provisions are weaker than their proponents suggest.
The EU's data protection regulation may be weaker than its proponents suggest
Suggestions for regulating AI vary. There have been calls for a European watchdog that could send independent investigators to scrutinise the use of AI by organisations. Other stakeholders support the implementation of standards and corporate social responsibility initiatives.
Difficulties
AI systems are often the product of components developed by dispersed and uncoordinated teams, each requiring limited infrastructure. For example, individual users anywhere can modify the open-source machine-learning library scikit-learn at will. On one day alone, April 2, 2015, nine different users modified scikit-learn's code from four different countries: Switzerland, France, the United States and India.
International cooperation will be necessary to regulate different aspects of AI development and application. AI will likely create a 'tragedy of the commons' problem on a global scale.
For example, states worldwide might agree that one area of AI should not be developed because it is dangerous (for example, 'killer robots'). However, the technology may confer advantages on an individual state that possesses it. Therefore if one state (for example, the United States) develops it, then others (for example, China) will feel obliged to do likewise. This behaviour is rational for each state involved but sub-optimal at the global level.
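This logic can be made explicit with an illustrative payoff matrix; the numbers below are invented for the example. Whatever the other state does, each is individually better off developing, so both converge on the mutually worse outcome.

```python
# Illustrative payoff matrix with invented numbers: two states choose whether
# to 'restrain' or 'develop' a risky AI capability. Higher payoffs are better.
# Keys are (choice of state A, choice of state B); values are (payoff A, payoff B).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # jointly best outcome
    ("restrain", "develop"):  (0, 4),  # A falls behind, B gains an edge
    ("develop",  "restrain"): (4, 0),
    ("develop",  "develop"):  (1, 1),  # arms race: worse for both than mutual restraint
}

choices = ["restrain", "develop"]

def best_response_a(b_choice):
    """State A's payoff-maximising choice given state B's choice."""
    return max(choices, key=lambda a: payoffs[(a, b_choice)][0])

def best_response_b(a_choice):
    """State B's payoff-maximising choice given state A's choice."""
    return max(choices, key=lambda b: payoffs[(a_choice, b)][1])

for other in choices:
    print(f"If B plays {other!r}, A's best response is {best_response_a(other)!r}")
    print(f"If A plays {other!r}, B's best response is {best_response_b(other)!r}")

# Both states' dominant strategy is 'develop', so the outcome is (1, 1),
# even though mutual restraint at (3, 3) would leave both better off.
```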
The timing of regulation can also be challenging. All AI existing today is 'narrow', ie, only able to perform one specific task, such as playing chess or recognising speech. Introducing legislation for narrow fields of AI may not work if scientists develop 'general' AI capable of solving complex problems in different domains like humans can, or 'artificial superintelligence' that exceeds human capabilities in all fields (see INTERNATIONAL: The singularity is distant - October 11, 2017).