
Artificial Intelligence – Panacea or Pandora’s box?



The digital revolution has brought with it an inundation of data, leading to the rise of algorithms and systems that are capable of “learning” from data. Artificial Intelligence (AI), the study of such “intelligent” algorithms, has seen a tremendous rise in recent years. “AI is the new electricity. It will transform every industry and create huge economic value,” says Andrew Ng, a computer scientist and businessman at the forefront of AI technology (Jewell, 2019). Just last month, Google DeepMind’s AI system AlphaFold made huge progress on the protein folding problem, a 50-year-old challenge in biology with the potential to revolutionize drug design (Senior, 2020).


The recent breakthroughs and promises of AI technology have led to its proliferation in industries as diverse as finance, manufacturing, health care, and consumer electronics. AI is already invading our homes and personal spaces. “Smart” products have become the norm. AI is in our phones, homes, cars, banks, and hospitals. Amidst all this hype, one might be forgiven for wondering whether AI is truly the solution to all our problems.


With the increased use of AI technology in the modern world, we are also beginning to understand its limitations. Despite the recent successes and potential, not all is as rosy as it seems with AI. Recent research shows how algorithms that learn from data can sometimes turn out to be brittle, unfair, unexplainable, or privacy-violating. In this essay, we will explore four important aspects in which AI is lacking: privacy, robustness, fairness, and explainability. Hopefully, this brief exploration of AI’s shortcomings will convince you that AI is no silver bullet and compel you to be on guard the next time you encounter unwarranted AI hype.


1. Privacy


In 2006, Netflix announced a $1 million prize for the best algorithm that could learn to predict a subscriber’s movie preferences from a dataset containing the anonymized movie ratings of about 500,000 of its subscribers. While teams from all over the world competed to build the best AI movie recommendation engine, researchers from the University of Texas at Austin had a different idea. They wanted to see whether it was possible to reveal the identities of the anonymous subscribers in the Netflix dataset using other data available on the internet. Using publicly available information from the Internet Movie Database (IMDb), they successfully de-anonymized several users in the dataset, thereby revealing sensitive information, such as political and religious leanings, that could be inferred from their movie ratings (Narayanan & Shmatikov, 2008). In 2010, Netflix cancelled a planned follow-up competition in response to a class-action lawsuit over privacy violations.
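To make the idea concrete, here is a minimal Python sketch of such a linkage attack: an “anonymous” rating record is matched against public profiles by counting overlapping (movie, rating) pairs. The data and the matching rule below are hypothetical; the actual study used a more careful scoring over ratings and approximate rating dates.

```python
# Toy illustration of a linkage (de-anonymization) attack: match an
# "anonymous" rating record against public profiles by overlapping
# (movie, rating) pairs. Hypothetical data, not the actual Netflix study.

anonymous_record = {"Movie A": 5, "Movie B": 2, "Movie C": 4, "Movie D": 1}

public_profiles = {
    "alice": {"Movie A": 5, "Movie C": 4, "Movie D": 1},
    "bob":   {"Movie A": 3, "Movie E": 5},
}

def overlap_score(anon, public):
    """Count how many (movie, rating) pairs the two records share."""
    return sum(1 for movie, rating in public.items() if anon.get(movie) == rating)

# The public profile with the largest overlap is the most likely identity.
best_match = max(public_profiles,
                 key=lambda name: overlap_score(anonymous_record, public_profiles[name]))
print(best_match)  # -> "alice"
```

Even this crude matching rule identifies the right profile once enough ratings overlap, which is why simply stripping names from a dataset offers little protection.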


A more concerning example of a “privacy attack” of this kind comes from genomics, where researchers found that it is possible to detect which individuals took part in a genomics study using only the summary statistics from genome-wide association studies (GWAS) that are released to the public (Sankararaman et al., 2009). De-anonymization attacks like these have also been used to reveal the identities of “anonymous” users of social networks (Narayanan & Shmatikov, 2009), and even to trace specific individuals from their anonymized mobility data (Gambs, Killijian, & Cortez, 2013).


Incidents like these show that ensuring privacy in the presence of intelligent algorithms is not as simple as anonymizing the datasets. With the rise of data collection and devices that track our every move, it is even more of a challenge for AI algorithms to ensure that our right to privacy is not violated, while still managing to learn valuable insights from the data. At the heart of the challenge is the seeming paradox of learning nothing about any specific individual, while still learning useful information from the collective data. Privacy is a tricky problem, even for “smart” algorithms.
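One prominent line of work that tries to resolve this paradox is differential privacy, in which carefully calibrated random noise is added to aggregate answers so that no single individual’s data can be pinned down. Below is a minimal Python sketch of the Laplace mechanism for a simple counting query; the data and the privacy budget (epsilon) are arbitrary choices for illustration, not part of any system discussed above.

```python
import numpy as np

def private_count(values, predicate, epsilon=0.5, rng=np.random.default_rng(0)):
    """Release a count with Laplace noise calibrated to the query's sensitivity.

    Adding or removing one person changes a counting query by at most 1, so
    noise drawn from Laplace(scale=1/epsilon) masks any individual's presence
    while keeping the aggregate answer roughly accurate.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data: ages of survey participants.
ages = [23, 37, 41, 29, 52, 61, 33, 45]
print(private_count(ages, lambda age: age >= 40))  # noisy answer near the true value, 4
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released answer, which is exactly the trade-off between individual privacy and collective insight described above.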


2. Robustness


In 2014, researchers discovered a troubling fact about state-of-the-art AI algorithms designed for image classification. Image classification is a classic task in computer vision, a sub-field of AI that specializes in enabling computers to learn to “see” from images and video. Given a training dataset of images belonging to various categories (for example, species of plants, animals, objects, etc.), the goal of an image classification algorithm is to learn to predict the correct category of a previously unseen image with high accuracy. The researchers showed that several state-of-the-art algorithms that can classify thousands of categories of images, beating even human-level accuracy on benchmark datasets, are extremely sensitive to tiny perturbations of the data. To “fool” an algorithm into misclassifying targeted images, it is enough to contaminate the test image with perturbations so small that human eyes cannot even perceive the difference! This method of adulterating test data with imperceptible corruptions designed to make algorithms misbehave is termed an “adversarial attack”.


Figure 1: Adversarial attack on an image classification algorithm. The algorithm is fooled into classifying an image of a panda as a gibbon with high confidence. Source: "Explaining and Harnessing Adversarial Examples" by Goodfellow et al. (2015).
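The attack in Figure 1 is the fast gradient sign method of Goodfellow et al.: every pixel is nudged a tiny step in the direction that increases the classifier’s loss. The sketch below applies the same idea to a toy logistic-regression “classifier” in NumPy; the weights and the input are made up purely for illustration and stand in for a real image model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression over 100 "pixels"
# with made-up weights (purely hypothetical, not a trained model).
w = rng.normal(size=100)
b = 0.0

def predict_prob(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model classifies confidently as class 1.
x = 0.1 * rng.normal(size=100) + 0.05 * w

# Fast gradient sign method: for cross-entropy loss with true label y = 1,
# the gradient of the loss with respect to x is (p - 1) * w, so stepping
# along its sign pushes the prediction toward the wrong class.
epsilon = 0.1                      # per-pixel perturbation budget
grad = (predict_prob(x) - 1.0) * w
x_adv = x + epsilon * np.sign(grad)

print("clean prediction:       %.3f" % predict_prob(x))
print("adversarial prediction: %.3f" % predict_prob(x_adv))
```

A perturbation of at most 0.1 per “pixel” is enough to flip a confident prediction, mirroring how the imperceptible noise in Figure 1 flips a panda into a gibbon.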

Ever since the initial discovery, these “adversarial attacks” have morphed into various forms that expose the brittleness of AI algorithms across tasks in computer vision, voice recognition, and natural language processing. A particularly troubling variant is the “adversarial sticker” attack, where stickers with noise-like patterns are placed next to a target object to fool an AI object-detection algorithm into misbehaving. In 2019, security researchers from Tencent successfully used such an attack to spoof Tesla's Autopilot: with just three small stickers placed at a road intersection, they caused the car to swerve into the wrong lane (Ackerman, 2019). Similar attacks have also succeeded in fooling AI diagnostic systems into misdiagnosing images of tumors (Finlayson et al., 2019).


Adversarial attacks raise serious questions about the capabilities of state-of-the-art AI algorithms in adversarial environments. If small corruptions to data are enough to make AI algorithms misbehave, can we really trust them in safety-critical applications like autonomous driving or automated medical diagnosis? While prediction accuracy is the name of the game when it comes to judging the performance of AI algorithms, it is equally important to pay attention to robustness in the presence of unexpected corruptions that tend to occur in the real world.


3. Fairness


In September 2020, Ph.D. student Colin Madland tweeted a photo of a Black faculty member to show that Zoom kept blocking out the man's head when he used a virtual background, despite good lighting. In an unexpected turn of events, Madland then discovered that Twitter's own image-cropping algorithm cut the same Black faculty member's face out of the tweet's image preview in favor of Madland's white face! In a single Twitter thread, Madland exposed the racial shortcomings of AI algorithms used by two companies – Zoom and Twitter.

Figure 2: Twitter thread by Colin Madland that exposes algorithmic bias in Zoom and Twitter. Link: https://twitter.com/colinmadland/status/1307111822772842496


AI’s troubles in identifying faces from minority groups are not new. A study by the US National Institute of Standards and Technology (NIST) found that face-recognition algorithms produced false positives about 10 times more frequently for Black women than for white women. The algorithms consistently performed worse for women than for men and gave the best performance on white male faces, which presumably make up much of the data on which these algorithms are trained. The troubling fact is that some of these algorithms are actively used by government agencies in various countries, including the FBI, US Customs and Border Protection, and police forces in the US, UAE, Australia, France, and India (Simonite, 2019).
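Disparities of this kind are straightforward to quantify once predictions and group labels are available. The snippet below computes a per-group false-positive rate on a handful of hypothetical face-matching decisions; real audits such as NIST's apply the same idea to millions of comparisons.

```python
from collections import defaultdict

# Hypothetical face-matching audit records: (group, true_match, predicted_match).
records = [
    ("group_a", False, True), ("group_a", False, False), ("group_a", True, True),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
]

# False positive rate per group: the fraction of true non-matches that the
# system nevertheless declared a match.
false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, is_match, predicted_match in records:
    if not is_match:
        negatives[group] += 1
        false_positives[group] += int(predicted_match)

for group in negatives:
    print(group, false_positives[group] / negatives[group])
# A fair system would show roughly equal false-positive rates across groups.
```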


The social implications of “algorithmic bias” may range from minor inconveniences in using social media to being classified as ‘high risk’ by a crime prediction algorithm used by the police. The data and the algorithms that go into AI are made in the image of their creators – humans. Hence, it is natural to expect that our biases and prejudices trickle down into our creations. To think of AI-enabled products as unthinking, unfeeling, and neutral would be a mistake.


4. Explainability


In 2016, a Bloomberg report found that Amazon appeared to exclude zip codes with majority-Black populations from its same-day delivery service. When asked for an explanation, Amazon's PR director commented that a number of factors go into determining where Amazon can deliver the same day, including the number of Prime members in the area, the local demand, and the distance to the nearest fulfillment center (Letzter, 2016). While Amazon's algorithm may not have explicitly used the racial composition of a neighborhood in deciding where to offer the service (doing so would be illegal), it inadvertently produced that outcome, and without a proper explanation.

As in the Amazon example, it is common for AI algorithms to make decisions that are often highly useful but come without an explanation. As datasets grow large and complex, so do the algorithms that learn from them. Consequently, some of the most sophisticated algorithms appear to be a “black box” even to their designers. The output produced by such black-box algorithms often cannot be interpreted by humans and remains unexplained. This is the case with many modern AI systems.
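One family of techniques for peeking inside such black boxes is post-hoc explanation, for example measuring how much a model's accuracy drops when each input feature is randomly shuffled. The sketch below shows this "permutation importance" idea on a synthetic dataset using scikit-learn; it is an illustration of the general technique, not of how any particular company's system works.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic decision problem standing in for an opaque business model.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy degrades;
# large drops indicate features the black box relies on heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Explanations like these are only approximations of what the model is doing, but they at least give affected people and regulators something to interrogate.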


We live in an age where algorithms have a say in determining who gets a bank loan, who gets picked for a job, or who is granted parole in criminal proceedings. Decisions like these have a significant impact on people's lives, and it is only reasonable that those affected have a right to an explanation. In the current state of AI, explainability seems to be yet another important but often ignored criterion for progress.


Privacy, robustness, fairness, and explainability – four aspects that are vital for the successful functioning of social systems, and four aspects that are severely lacking in today’s AI systems. If we want AI to be an enabler of societal progress, it is important that we bolster AI’s capabilities in all four. Failing to do so will at best hinder the progress and proliferation of AI technology, and at worst pose serious legal and moral challenges to the functioning of a technology-driven society.


The challenges facing AI are not imagined problems of some distant future. Ensuring privacy, robustness, fairness, and explainability poses practical challenges that affect people today. Whether or not AI turns out to be an existential threat to humanity in the remote future seems a less urgent concern in light of these more basic shortcomings.


It is also important to realize that the solutions to these challenges cannot come from the isolated technical wizardry of a few, but must emerge through mindful analysis involving all stakeholders. Interdisciplinary conferences like the ACM Conference on Fairness, Accountability, and Transparency (FAccT), which bring together experts from diverse fields such as computer science, the humanities, and law, are a welcome development. Government regulation like the European Union’s GDPR (General Data Protection Regulation), which requires AI systems to honor users’ rights to privacy and to an explanation, is also a step in the right direction, but only time will tell how effective these measures will be.


References:


Ackerman, E. (2019, April). Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane. Retrieved from IEEE Spectrum: https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane

Gambs, S., Killijian, M.-O., & Cortez, M. N. (2013). De-anonymization attack on geolocated data. Journal of Computer and System Sciences.

Goasduff, L. (2020, September). 2 Megatrends Dominate the Gartner Hype Cycle for Artificial Intelligence, 2020. Retrieved from Gartner: https://www.gartner.com/smarterwithgartner/2-megatrends-dominate-the-gartner-hype-cycle-for-artificial-intelligence-2020/

Letzter, R. (2016, April). Some Amazon Prime services seem to exclude many predominantly black zip codes. Retrieved from Business Insider: https://www.businessinsider.com/amazon-prime-doesnt-serve-black-areas-2016-4

Lynch, S. (2017, March). Andrew Ng: Why AI Is the New Electricity. Retrieved from https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity
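
Narayanan, A., & Shmatikov, V. (2008). Robust De-anonymization of Large Sparse Datasets. IEEE Symposium on Security and Privacy. IEEE.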

Narayanan, A., & Shmatikov, V. (2009). De-anonymizing Social Networks. IEEE Symposium on Security and Privacy. IEEE.

Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019, March). Adversarial attacks on medical machine learning. Retrieved from Science: https://science.sciencemag.org/content/363/6433/1287

Senior, A. W. (2020). Improved protein structure prediction using potentials from deep learning. Nature.

Simonite, T. (2019, July). The Best Algorithms Struggle to Recognize Black Faces Equally. Retrieved from Wired: https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/

Sankararaman, S., Obozinski, G., Jordan, M. I., & Halperin, E. (2009). Genomic privacy and limits of individual detection in a pool. Retrieved from Nature Genetics: https://www.nature.com/articles/ng.436

