Have you ever given an order to a virtual assistant? Avoided a traffic jam thanks to a smart navigation system? What about those targeted offers that appear on various websites, continually reminding you of items you once looked up online? All of the above are made possible by big data analysis performed by Artificial Intelligence (AI) systems, whose use is steadily increasing. However, as AI begins to affect every dimension of our society, it also creates both opportunities and challenges for human rights.
The above examples of AI-powered technologies were designed to make specific tasks easier. So, with all these promising innovations, why should we care about human rights in the field of AI? The most straightforward answer is that AI affects us all and will continue to do so in the future. Although basing certain decisions on mathematical calculations can bring significant benefits in many sectors, relying too heavily on AI can also backfire on its users, deepen injustices and infringe on people's rights.
In the past, the threats of AI have been represented in films through sci-fi supervillains, such as HAL 9000 in Stanley Kubrick's 2001: A Space Odyssey or the T-800 in James Cameron's The Terminator. In January 2015, prominent scientists and tech entrepreneurs, including Stephen Hawking and Elon Musk, signed an open letter warning about the potential dangers of advanced AI. Does AI genuinely have the potential to take over the world? What threats does it pose to human rights? And can these threats be avoided?
Most people still think of killer robots as purely fictional, but current advances in technology may soon make them a reality. Formally known as lethal autonomous weapons systems (LAWS), weapons that select and attack human targets without human intervention do not exist yet, but experts predict it is only a matter of years before they do. Many argue that giving machines such power on the battlefield is an immoral application of technology, as maintaining human control over any combat robot is crucial to safeguarding humanitarian protections and effective legal accountability. Scientists are therefore urging a ban on LAWS, supported by international coalitions such as the Campaign to Stop Killer Robots.
Furthermore, as AI is data-driven, where does that leave privacy? Lilian Edwards, a law professor at the University of Strathclyde in Glasgow, put it bluntly: ‘Big data is completely opposed to the basis of data protection (…) I think people have been very glib about saying we can make the two reconcilable, because it’s very difficult.’
In the digital world, privacy rests on our capacity to control how our data is stored and exchanged between different parties. AI-driven systems usually collect vast amounts of data, often without the knowledge or consent of their users. As a result, AI can be used to identify people who wish to remain anonymous, infer sensitive information, such as one's political views, from non-sensitive data, unfairly profile people, or make far-reaching decisions based on this data that can negatively affect people's lives.
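To make this concrete, here is a minimal sketch, using entirely synthetic data and the scikit-learn library, of how a sensitive attribute can be inferred from seemingly harmless behavioural data; every feature, coefficient, and value below is invented purely for illustration.

```python
# A minimal sketch of attribute inference: a classifier trained on
# innocuous features (here, synthetic activity counts) learns to
# predict a sensitive attribute it was never explicitly given.
# All data below is randomly generated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 2000
# Non-sensitive features: e.g. counts of interactions with various
# categories of online content.
X = rng.poisson(lam=3.0, size=(n, 10)).astype(float)

# Synthetic "sensitive attribute" correlated with two of the features,
# standing in for something like a political view.
logits = 0.8 * X[:, 0] - 0.6 * X[:, 3] - 0.5
y = (logits + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Well above chance: the sensitive attribute leaks through
# seemingly harmless behavioural data.
print(f"inference accuracy: {clf.score(X_test, y_test):.2f}")
```

The point is not the specific model: any statistical learner will pick up such correlations if they exist, which is why collecting "only non-sensitive" data does not by itself protect privacy.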
Last but not least, there are growing concerns that some AI systems are not always fair in their decision-making. The data used to train deep-learning systems can easily reflect the biases of the people who assemble it, or carry the prejudices of history, encoding patterns that reproduce discrimination. If algorithmic bias is not corrected, it can have severe consequences and reinforce existing discrimination, especially against marginalised and impoverished communities.
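One simple way such bias is detected in practice is by comparing a model's decision rates across demographic groups. The sketch below, with hypothetical scores and a hypothetical decision threshold, shows a first-pass fairness check of this kind.

```python
# A minimal sketch of how historical bias surfaces in a model's outputs:
# we measure the rate of positive decisions per group. The groups,
# score distributions, and 0.5 threshold are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model scores for two demographic groups. Group B's scores
# are shifted down, as would happen if its historical outcomes were worse
# in the training data.
scores_a = rng.normal(loc=0.60, scale=0.15, size=500)
scores_b = rng.normal(loc=0.45, scale=0.15, size=500)

threshold = 0.5  # decision cut-off (e.g. "approve the application")
rate_a = float(np.mean(scores_a >= threshold))
rate_b = float(np.mean(scores_b >= threshold))

print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")
# Demographic parity difference: a common first-pass fairness metric.
print(f"parity gap: {abs(rate_a - rate_b):.2f}")
```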
In June 2017, Amnesty International's Secretary General Salil Shetty addressed the AI for Good Global Summit in Geneva, urging states to commit to harnessing the benefits of AI while safeguarding human rights. The benefits of AI in the technologies people use daily are widely known, but big data can also serve good causes, helping to address inequalities and even to correct existing biases in AI itself.
Sophia (Hanson Robotics Ltd.) speaking at the AI for Good Global Summit in 2017. Source: Flickr
Microsoft Research's FATE team (Fairness, Accountability, Transparency, and Ethics in AI) was created to develop AI systems that are both innovative and ethical. By uncovering biases in the data AI systems learn from, the group aims to help developers build systems that give users better insights without exposing them to discrimination.
When precise data is unavailable, it can be difficult for policymakers to back racial justice initiatives. In 2015, Yeshimabeit Milner founded Data 4 Black Lives (D4BL), a group of activists, organisers, and mathematicians seeking to mobilise scientists around racial justice issues. D4BL is built on the idea of using data science, including AI and machine learning, to create concrete, measurable change in the lives of Black people and to empower them.
In the medical field, the use of AI is already widespread, with the potential to revolutionise both preventive and curative strategies. For example, AI has already proved useful in cancer detection and diagnosis, with researchers suggesting that AI can now detect some types of cancer more reliably than clinicians. Advances in AI may eventually help us treat diseases that are currently difficult to cure.
AI can also be used to make medical predictions about patients. In a study published in May 2018 in npj Digital Medicine, an algorithm analysed de-identified electronic health records from over 216,000 adult hospitalisations and predicted unplanned readmissions, length of hospital stay, and in-hospital mortality more accurately than traditional predictive approaches.
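For a sense of what such a prediction task looks like, here is a heavily simplified sketch. The cited study used deep learning over raw electronic health records; this toy version uses synthetic tabular features and logistic regression purely to illustrate the task of readmission prediction, not the paper's actual method.

```python
# A much-simplified illustration of readmission risk prediction.
# All patient features and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

n = 5000
age = rng.integers(18, 95, size=n)
prior_admissions = rng.poisson(1.0, size=n)
length_of_stay = rng.gamma(shape=2.0, scale=2.0, size=n)
X = np.column_stack([age, prior_admissions, length_of_stay]).astype(float)

# Synthetic label: readmission risk rises with each feature.
risk = 0.02 * age + 0.5 * prior_admissions + 0.1 * length_of_stay
y = (risk + rng.normal(scale=1.5, size=n) > np.median(risk)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Evaluate with AUC, a standard metric for clinical risk models.
probs = model.predict_proba(X_te)[:, 1]
print(f"readmission AUC: {roc_auc_score(y_te, probs):.2f}")
```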
AI is bound to affect our social, political, and economic rights. Consequently, we need to guarantee that these systems are developed and used in ways that uphold fairness, accountability, and transparency. Banning LAWS would require treaties and an institutional framework built with the help of state and non-state actors. To address threats to privacy, laws must be created, reviewed or amended; action has already been taken in the EU, where the General Data Protection Regulation (GDPR) gives Internet users more control over what is collected and shared about them. Finally, to combat discrimination caused by AI bias, more diverse teams should develop AI systems and build machine-learning and deep-learning models that are inclusive, auditable, and adjustable.
Like any technology, AI can be both beneficial and harmful. In conclusion, we must acknowledge that algorithms are not neutral: they reflect the data they are given, and if biased data is fed into an algorithm, discriminatory results will follow. It is ultimately human decisions, and the data humans provide, that determine whether AI systems cause harm. The only way to ensure that the AI revolution remains a positive milestone for humanity is to govern it strictly and to ensure it safeguards human rights above all.