Artificial Intelligence and Race
By: Hooshyar Afsar
Introduction
Artificial intelligence has been in the news over the last two years, with purported consequences ranging from confident promises that it will replace jobs to threats of ending humanity as we know it. The purpose of this article is to give a brief history of this field of computer science, distinguish hype from reality, analyze bias and racism in the field, and finally look at how it impacts our community.
What is Artificial Intelligence?
Artificial intelligence (AI) refers to intelligence exhibited by machines (computer hardware) and/or software. The term “intelligence,” by itself, is subject to many questions from different viewpoints. From a racial justice perspective, it has been used as one of the human traits invoked to create an unscientific hierarchy of humans and justify Western European and North American “white supremacy.” For more on this, please see my article in Peyk #205. [1] On a related note, the IQ measure of intelligence has long been marred by racism [2] and by inattention to economic and social factors. Many neuroscientists believe that the notions of human consciousness, being, and creativity, along with the use of advanced linguistic techniques, are better measures for distinguishing human beings from other hominids, primates, or animals in general. [3]
While Alan Turing, the renowned British computer scientist and father of the Turing Test, first used the term “machine intelligence” in 1941, the term AI was coined in the 1955 proposal for a workshop held at Dartmouth College in 1956. [4] The field went largely unnoticed for decades until advancements in computer science, in theory, hardware, and software, allowed it to re-emerge as a viable technology in the 1980s, when there were many optimistic and premature predictions about AI’s potential to replace human beings within a short span of time. [5] While those predictions did not come true, advancements continued in many areas of computer science, most prominently image processing and facial recognition, computer vision, speech synthesis and recognition, robotics, and music generation. While significant progress was made in each of those areas, multiple developments facilitated a major breakthrough in 2017, as described below.
Fast Computer Hardware and Big Data Facilitate “Large Language Models” Breakthrough
In computer hardware, there has been an ongoing revolution since the 1970s that has delivered ever more powerful and cheaper processing. [6] This process has been augmented by GPUs (Graphics Processing Units), which were originally intended solely for rendering images and video. In the 2010s, GPUs were put to work well beyond that original purpose, notably in computer games and self-driving car technology. Then, in the late 2010s and early 2020s, GPU usage started to make a significant difference in the speed of neural networks and natural language processing.
Neural networks are an approach in computer science that uses principles of human neurology to loosely simulate, in software, the networks of neurons in the brain. The theories behind neural networks have existed for decades, yet until the second half of the 2010s they never became practical for public use. While there was evidence during the 1980s that neural networks could predict the next few sentences in a conversation, the majority of computer scientists did not believe that these machine learning methods could become a viable application for general use. [7]
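To make the idea concrete, here is a minimal sketch of a neural network in Python. It is an illustration only, not taken from any production system: a single hidden layer of weighted units learns the simple XOR function by repeatedly adjusting its weights to reduce prediction error, the same basic mechanism, at a vastly smaller scale, that today’s large models use.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function (output is 1 when the inputs differ).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: each layer applies its weights, then a nonlinearity.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    # Backward pass: nudge the weights to shrink the prediction error.
    grad_output = (output - y) * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_output
    W1 -= 0.5 * X.T @ grad_hidden

print(output.round(2))  # converges toward [[0], [1], [1], [0]]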
In addition to the computer hardware revolution discussed above, another major factor contributing to a breakthrough was data. Every computer process involves data: data comes in, it is processed for various purposes, and results come out. In the first decade of the twenty-first century, huge progress was made in storing, updating, and making available all kinds of data at high speed and relatively low cost. The advent of the “Cloud” and vast global networks of computers further transformed the size, speed, and availability of data, at such a scale that the term “Big Data” was coined to describe the phenomenon. [8]
So fast, relatively cheap computer hardware, vast networks of computers in the Cloud, and the advent of Big Data together facilitated a revolution in AI in 2017. Up to that point, progress had been made in disparate fields of AI, such as robotics, speech recognition, computer vision, image generation, and speech generation, each largely separate from the others. In 2017, a common interface linking all these fields together propelled the growth rate to a whole new level: neural networks became capable of providing that common interface in the form of a “Large Language Model.” It took a few years for the public to fully catch wind of the development, but in the 2020s, ChatGPT by OpenAI became something the general public started using and “bragging” about. While this was a significant development, the 1980s hype re-emerged at a much bigger scale, with credible scientists and mainstream media talking about the threat of AI replacing humans and their jobs and eventually ending human civilization as we know it. [9]
Yet, this is not the general public’s first encounter with AI.
This Is Actually Our Second Public Encounter with AI
While the public has been exposed to many aspects of AI over the past few decades, the first active, fully interactive encounter with AI occurred in the 2000s and 2010s with the emergence and proliferation of social media platforms such as Facebook, Instagram, and TikTok.
Although the initial promises of social media, “giving everyone a voice,” “connecting with friends,” and “joining like-minded communities,” sounded noble, that is not how things turned out. In fact, prominent computer scientists at the Center for Humane Technology assert that humanity lost its first encounter with AI. [10] Their reasoning is sound and is based on the overwhelming negative outcomes of social media: addiction to the platforms, disinformation and fake news, sexualization of young girls, polarization, mental health crises resulting in suicide, and censorship. These trends have overwhelmed the positive aspects of social media.
These negative outcomes are the product of AI-based algorithms embedded in the platforms, which monitor even our simplest interactions in order to predict our behavior and desires: which pictures, videos, or posts we look at, how much time we spend on them, and so on. Mega platforms such as Facebook, Instagram, and YouTube use this constant automated monitoring of our behavior for the sales and marketing of all kinds of products, whether real or virtual. It is rather clear that these negative outcomes persist because the public’s interest has not been protected by an up-to-date set of laws and regulations. The lion’s share of laws and regulations that exist today regarding privacy, the use of our data, and the monitoring of our behavior date to the nineteenth century, long before the advent of computers, let alone AI technology. [11]
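For readers who want a sense of the mechanics, here is a schematic Python sketch of engagement-based ranking. It is our own illustration, not any platform’s actual code, and the signals and weights are invented; the point is simply that the feed is ordered by a prediction of attention, not by accuracy or well-being.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    watch_seconds_by_similar_users: float  # behavioral signal
    clicks_by_similar_users: int           # behavioral signal

def predicted_engagement(post: Post) -> float:
    # Stand-in for a trained model: score each item by how much
    # attention similar users gave it in the past (weights invented).
    return 0.8 * post.watch_seconds_by_similar_users + 5.0 * post.clicks_by_similar_users

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed surfaces whatever is predicted to hold attention longest,
    # regardless of whether the content is accurate or healthy.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([Post("a", 2.0, 1), Post("b", 40.0, 12)])
print([p.post_id for p in feed])  # -> ['b', 'a']: the stickier post rises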
Now, in the first half of the 2020s, with the advent of Large Language Models and generative AI platforms such as ChatGPT and Google Bard, humanity’s second encounter with AI has begun. Again, we hear positive promises: AI will make us more efficient, help us write and code faster, solve impossible scientific challenges, solve climate change, and make a lot of money. Although there is a certain level of exaggeration and hype, the potential of AI to deliver at least some of these promises is undeniable. And again, according to the Center for Humane Technology, there are significant dangers: racial and gender bias (among others), loss of jobs, lack of transparency about the validity of the data used and the scope of what these platforms can do, and, last but not least, AI acting creepy. [12] An example of this last one is the case of attorney Steven Schwartz, whose use of ChatGPT for legal research landed him and his colleague Peter LoDuca in hot water with a federal judge after six fake cases generated by the AI were cited in a legal brief. [13]
Racism in AI—From Facial Recognition to Intelligence
One might ask, how could racism impact a field of computer science? Well, it does, and it all starts with how Black and Brown people are represented in the field and in IT in general. While Black people represent 13.6% of the U.S. population, they make up only 4.2% of computer scientists in the U.S. [14] Although there is no data on the percentage of Black women computer scientists, if we assume it is proportional to the overall percentage of women computer scientists (21.2%), their representation drops to less than 1% (21.2% of 4.2% is roughly 0.9%). In fact, with its strong white and male majority, the whole field of computer science lacks diversity and inclusion. This fact, with or without intention, results in a lack of sensitivity to discrimination and racism, culminating in both algorithmic and data bias.
It is not possible to write about bias in AI without mentioning two Black women computer scientists who pioneered raising this issue and have been at the forefront of exposing its impact: Timnit Gebru and Joy Buolamwini.
A war refugee from Eritrea at the age of 15, Gebru faced and dealt with racism from the time she came to the U.S. and started high school in Wisconsin. She became the first Black woman to earn a PhD in computer science from Stanford, one of the most highly regarded CS departments in the world. While at Google, where she co-led the Ethical AI team, she noticed bias in the field, leading her to publish a paper with six other renowned computer scientists on the dangers of AI and bias in the field, which ultimately got her fired. [15] The paper exposed significant bias in the data and algorithms used in AI and offered ideas to curtail and eliminate that bias. For example, every AI application, including ChatGPT, has input data called the “training set.” The paper shows how “overtly abusive language patterns in training data… leads to risks of harm, including encountering derogatory language and experiencing discrimination at the hands of others who reproduce racist, sexist, ableist, extremist or other harmful ideologies reinforced through interactions with synthetic language.” [16]
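As a concrete, if greatly simplified, illustration of one remedy discussed in that line of work, auditing and curating training data before use, here is a Python sketch that flags documents containing terms from a blocklist. The terms are hypothetical placeholders, and a real audit is far more sophisticated: a term list alone misses coded language and over-flags quoted or reclaimed speech.

# Hypothetical placeholder terms; a real list would be curated by experts.
BLOCKLIST = {"slur_a", "slur_b"}

def flag_documents(corpus):
    """Return (index, document) pairs whose text contains blocklisted terms."""
    flagged = []
    for i, doc in enumerate(corpus):
        words = {w.strip(".,!?").lower() for w in doc.split()}
        if words & BLOCKLIST:
            flagged.append((i, doc))
    return flagged

corpus = ["an ordinary sentence", "a sentence containing slur_a"]
print(flag_documents(corpus))  # -> [(1, 'a sentence containing slur_a')]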
Joy Buolamwini’s story concerns bias and racism in facial recognition. While an undergraduate at the Georgia Institute of Technology, she realized that facial recognition software was so biased that it recognized her face only if she wore a white mask. She continued her research in this crucial area during her PhD at MIT, an endeavor that led to the documentary Coded Bias, which points to a lack of AI regulation resulting in violations of human rights. It shows that certain algorithms in AI technologies discriminate by race and gender in many areas, such as housing, career opportunities, healthcare, credit, education, and policing. [17] Buolamwini has testified before Congressional committees about needed regulation, and she founded a digital advocacy organization, the Algorithmic Justice League.
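The core technique behind findings like Buolamwini’s is disaggregated evaluation: instead of reporting one overall accuracy number, error rates are computed per demographic group, which is where the disparities become visible. Here is a minimal Python sketch with invented numbers, not her actual data.

from collections import defaultdict

# Hypothetical records: (demographic group, was the prediction correct?)
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [correct, seen]
for group, correct in results:
    totals[group][0] += int(correct)
    totals[group][1] += 1

for group, (correct, seen) in totals.items():
    print(f"{group}: {correct / seen:.0%} accuracy ({seen} samples)")
# The overall accuracy here is 60%, which masks a 100% vs. 33% split.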
There is a large segment of AI proponents who advocate for Artificial General Intelligence, or AGI, a system that is supposed to know everything and automate all the tasks that humans do. Gebru, Buolamwini, and many other advocates of racial and gender justice in technology believe that such a goal is deeply strange and inherently undemocratic. There is an alarming trend among AGI proponents whose advocacy points toward a resurgence of eugenics.
Started in the nineteenth century by Charles Darwin’s cousin Francis Galton (and contrary to the wishes and aspirations of Darwin himself), eugenics was a branch of scientific racism that was fully debunked in the scientific community during the last decades of the twentieth century, especially with the conclusion of the Human Genome Project in 2003. Eugenicists believed in “cleansing” the human race through inhumane experimentation and the sterilization of Black and Brown people. Germany’s Nazi Party regarded the California sterilization program, which remained in effect until 1979, as a role model to follow, and the Nazis carried out such experimentation during the Holocaust in their concentration camps. According to Gebru, we now face a whole new brand of neo-eugenicists who openly embrace transhumanism and a utopian vision that mimics the ideas of the twentieth-century eugenicists. The basic belief here is that through the merger of machine and human (also known as the singularity), humans will become so advanced that they will be able to “colonize” the whole universe. Among the prominent advocates of these ideas are Elon Musk, head of Tesla and SpaceX, and Sam Altman, CEO of OpenAI, the creator of ChatGPT. [18]
How Could Our Community Be Affected?
In U.S. mainstream media propaganda, Iran is a byword for the enemy; regardless of what we think about the Islamic Republic, its existence impacts our community in terms of how all of us are viewed and, to a certain extent, treated. According to Gallup’s latest poll, 82% of Americans have a negative view of Iran. [19] Many in the Iranian American community may think they are not affected by this phenomenon since they are considered Americans. While there is no comprehensive study of how Americans view Iranian Americans, there is evidence of second-generation Iranians growing up in the U.S. with their experience marred by stereotyping and open racism, as documented by Professor Neda Maghbouleh in her book The Limits of Whiteness: Iranian Americans and the Everyday Politics of Race. [20] These trends exert a powerful influence on the training data and algorithms embedded in AI, as shown in previous sections and documented by Timnit Gebru and her colleagues in the paper that led to her firing from Google.
However, the future is not necessarily bleak for our community and for communities of color around the world. There is a strong push for new regulations in the European Union and the United States. There are also many AI scientists who are proponents of racial and gender equality and have developed methods to detect and remove bias in data and algorithms. They call for transparency about what AI platforms actually do and what training data they use.
By supporting such efforts, and even becoming active in making AI safe for humanity, all of us, and especially our community advocates, can help make humanity’s second encounter with AI a constructive one, full of possibilities for the future of the global community.
Hooshyar Afsar is one of the founders of the Racism Awareness Project (RAP), an educational program on the history and present-day reality of racism in the United States and its impact on the Iranian American community. RAP has held a variety of educational forums across the United States. Mr. Afsar has written several articles and book reviews on the topic for Peyk and other publications. He can be reached at hoosh.afsar@rapusa.org.
References:
[1] – https://peykmagazine.com/en/2023/05/08/the-myth-of-an-aryan-race/
[2] – https://via.library.depaul.edu/cgi/viewcontent.cgi?article=1270&context=law-review
[3] – Seth, Anil. Being You: A New Science of Consciousness. New York: Penguin Audio, 2021, Chapter 13.
[4] – Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks, 1993, Page 49.
[5] – Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks, 1993.
[6] – https://iscaconf.org/isca2018/docs/HennessyPattersonTuringLectureISCA4June2018.pdf
[7] – https://aws.amazon.com/what-is/neural-network/#:~:text=A%20neural%20network%20is%20a,that%20resembles%20the%20human%20brain.
[8] – https://www.oracle.com/big-data/what-is-big-data/#:~:text=Put%20simply%2C%20big%20data%20is,been%20able%20to%20tackle%20before.
[9] – https://www.youtube.com/watch?v=xoVJKj8lcNQ&t=884s
[10] – https://www.humanetech.com/podcast/the-ai-dilemma#:~:text=Humanity’s%20’First%20Contact’%20moment%20with,to%20make%20the%20same%20mistakes.
[11] – https://www.priviq.com/blog/social-dilemma-data-privacy-and-protection-regulations
[12] – https://youtu.be/xoVJKj8lcNQ?t=3810
[13] – https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html
[14] – https://www.zippia.com/computer-scientist-jobs/demographics/
[15] – https://dl.acm.org/doi/pdf/10.1145/3442188.3445922?uuid=f2qngt2LcFCbgtaZ2024
[16] – On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? https://dl.acm.org/doi/pdf/10.1145/3442188.3445922
[17] – https://www.codedbias.com/
[18] – https://www.youtube.com/watch?v=P7XT4TWLzJw
[19] – https://news.gallup.com/poll/116236/iran.aspx
[20] – Maghbouleh, Neda. The Limits of Whiteness: Iranian Americans and the Everyday Politics of Race. Stanford, CA: Stanford University Press, 2017.