African Americans and Technology in History
This is part 7 of my series, The Black Purposes of Web3, where I post my undergraduate thesis in sections. Read the series intro.
This post corresponds to the first sections of chapter 5 ("Race and Technology"), and is adapted closely from my original writing.
The Web3 world (and the associated "crypto" world) is known to be dominated by white men, and this is reflected in conversations about Web3 and its potential impacts. Across discussions of Web3 on blogs, in news articles, and on Twitter, the voices of marginalized social groups are far less prominent. The study of technology's impacts on society, especially with regard to race, is an important topic in science and technology studies and should be applied to other technical fields as well. When questioning the potential impacts of Web3 on society, it is not enough to consider the views of society's most privileged; we must also look to the experiences of other groups.
One such group is African Americans, who have a long history of varied interactions with and impacts from technology. Because they are more vulnerable to the downsides of new technologies, or simply experience them differently, viewing innovations through this lens can provide deeper insight into their broader implications. With Web3, there have been claims that its core principles are especially beneficial for African Americans, particularly as a new avenue for building wealth through decentralized finance, NFTs, and the creator economy.[1] Given the history of how new technologies have affected Black people in America, especially absent any proactive consideration of impacts or interventions, it is important not to take these claims at face value. As with the history of the web itself, looking to the past at themes of interaction and impact helps us understand what may happen with Web3, and how it may or may not benefit this societal group.
African Americans and Technology in History
In the early 2000s, with the rise of personal computer and Internet use, a large body of research emerged on the "digital divide": the gap between demographics that have access to the Internet and computers and those that do not, both within the United States and globally. Studies most often showed that privileged societal groups have greater access to and literacy with these technologies than marginalized groups, which predictably translated to African Americans having less access than the average American or their White counterparts.[2]
Indeed, even with the recent narrowing of the digital divide between racial groups in the United States, differences remain in both general access and other dimensions of use. Whereas much of the original work on the topic used a binary understanding of the divide, simply distinguishing the "haves" from the "have nots," subsequent research moved to studying its nuances in various ways. Eszter Hargittai proposed that, instead of focusing on the "digital divide," the "term 'digital inequality' better encompasses the various dimensions along which differences will exist even after access to the [Internet] is nearly universal."[3]
One example is differences in how technologies are used, rather than whether they are used at all. Observations of early computer use in 1989 showed that computers in economically disadvantaged (and presumably majority African American) schools were used as "high-tech flashcards to run students through rote drills," whereas economically advantaged White schools used computers to expose students to programming and other high-level skills that better equipped them for the workplace.[4] In this way, a divide appeared early on in how technology was used, in turn rendering African Americans consumers of technology rather than producers.[4] This demonstrates how unequal technology access both results from and reproduces racial disparities.[2]
More recently, danah boyd[5] examined how teenage use of the Web 2.0 social networking sites Facebook and MySpace differed by race and class, and how this gap in turn affected the public perception of each site. Even though most students she interviewed and observed used MySpace at some point, she noticed that when Facebook became available, White students were the main ones who preferred it, while students of color remained on MySpace. She likened the creation of this racial gap to "white flight" from urban areas to the suburbs, with White teens leaving MySpace for Facebook, and pointed out how, in both cases, the racial demographics of each context shaped the conditions and experiences within them.[5] Different racial groups technically had the same access to these Web 2.0 platforms, but the divergence in preferences and resulting experiences of use created another type of digital divide.
The initial gap in access to digital technologies, as well as how they were used, is important to keep in mind when analyzing the interactions between African Americans and technology currently and in the past. However, this relationship and its impacts should not be reduced to only the digital divide in a way that views African Americans as naturally deprived in a technological sense. There are other notable themes to consider in this area, such as surveillance and privacy, coded racial bias, and technological creativity.
In a previous post, I discussed privacy concerns arising from large companies' participation in the data economy. For African Americans, these concerns are exacerbated in the context of Web 2.0 and other digital technologies alike. Surveillance of Black people in America is not new; it was a phenomenon well before the advent and spread of modern technology. What exacerbates it is discriminatory surveillance, the "surveillance of, or privacy intrusions on, certain groups as opposed to others."[6]
A prime example is the use of surveillance technologies in law enforcement and other areas of the United States penal system. In the purported effort to predict and monitor crime, "police focus their attention and resources on [B]lack communities at a disproportionately high rate relative to drug use and crime rates."[7] Such is the case in Chicago, where the Chicago Police Department uses a system called ShotSpotter, a network of hidden microphones meant to detect gunshots and summon armed police to the location.[8] However, the data shows that the sensors are "placed almost exclusively in majority Black and brown neighborhoods, based on population data from the U.S. Census," indicating a higher level of surveillance of African Americans than of others in the city.[8]
Higher levels of exposure tend to put African Americans at risk for harm both in the real world and online, and these different realms often interact. One example of this is how predatory targeting, racial profiling, and discrimination are linked to data brokers creating profiles of users based on the data extracted from Web 2.0 applications and other sources.[9] In particular, in 2013 the U.S. Senate Commerce Committee reported that data brokers sold profiles of financially vulnerable people to companies that provide financially risky products so that those companies could market products aimed at people with bad credit, increasing the likelihood of those same people falling into more debt.[9] For African Americans, increased data collection and monitoring not only violates privacy but also opens the door to increased targeting. As Ruha Benjamin, a sociologist who studies race, technology, and justice, puts it, the practice of marketing to niche groups based on data profiles can have "a serious downside when tailoring morphs into targeting and stereotypical containment" due to "tech developers... encoding race, ethnicity, and gender as immutable characteristics that can be measured, bought, and sold."[7]
Another notable theme both in the past and currently is racial bias embedded in technological innovations. There has been a significant amount of research and reporting about biases found in artificial intelligence systems and other applications, and how these technologies came to exhibit such discrimination – by reflecting and recreating biases found in the real world. Computer scientist and researcher Joy Buolamwini, among many others, has contributed extensive analysis and critiques of this phenomenon. Notable examples include the high error rate demonstrated by facial recognition systems when applied to darker-skinned individuals,[7] as well as Amazon's artificial intelligence recruiting tool that demonstrated a clear bias against resumes with female identifiers.[10]
Apart from artificial intelligence, racial bias also appears in the delivery of online advertisements, as demonstrated by computer scientist Latanya Sweeney. After searching for her own name on Google and seeing ads for criminal records of someone with that name, she found that Google had a pattern of serving ads suggestive of a criminal record for names associated with Black people, as opposed to neutral ads, or no ads at all, for names associated with White people.[11] In line with the discussion about racial bias, Sweeney asserts that "delivering ads suggestive of arrest much more often for searches of black-identifying names than for white-identifying names is an example of unwanted discrimination... because the ads appear regardless of whether actual arrest records exist for the names in the company's database."[11]
The advertising model of Web 2.0, based on collecting large amounts of user data, seems to reproduce existing real-world biases that are exhibited in user behavior. This is but one illustration of a pervasive pattern that Benjamin calls "the New Jim Code": "the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era."[7] Using this lens to examine other technologies and systems, as Benjamin has skillfully done in her book Race After Technology, highlights the myriad ways in which African Americans and people of color are subject to discrimination and bias at the hands of seemingly neutral innovations.
Sentiment Towards Technology
Beyond the documented barriers of access, education, and systemic discrimination, I was curious about whether sentiment and attitudes toward technology might also play a role in adoption patterns. To explore this, I analyzed survey data from the National Science Foundation's "Public Attitudes Toward and Understanding of Science and Technology" dataset from 1979-2006, comparing responses between Black and White participants.
The dataset included questions about interest in technology, opinions about whether technology makes life better or worse, and actual technology use patterns. I focused on survey years 1999 and 2001, which were the most recent years that included the relevant questions about attitudes and sentiment. While this data is dated, it provides insight into patterns that may persist.
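As a rough illustration of the comparison described above, the sketch below computes group-level agreement rates for a survey item. The column names and responses here are hypothetical placeholders, not the NSF dataset's actual variable codes or data; a real analysis would recode the survey's response categories before aggregating.

```python
# Sketch of the group-comparison approach: percent of each racial group
# agreeing with a survey item. Column names ("race", "q_change_too_fast")
# and the responses below are hypothetical, not real NSF survey data.
import pandas as pd

def agreement_rates(df: pd.DataFrame, question: str) -> pd.Series:
    """Percent agreeing per group, with responses coded 1 = agree, 0 = disagree."""
    return df.groupby("race")[question].mean().mul(100).round(1)

# Tiny synthetic example
responses = pd.DataFrame({
    "race": ["Black", "Black", "Black", "White", "White", "White", "White"],
    "q_change_too_fast": [1, 1, 0, 1, 0, 0, 0],
})

rates = agreement_rates(responses, "q_change_too_fast")
# rates["Black"] -> 66.7, rates["White"] -> 25.0
```

Because agreement is coded 0/1, the group mean is simply the proportion agreeing, which is what the percentages reported below represent.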
Key findings from the analysis: Black participants showed more negative sentiment toward technology compared to White participants. A higher percentage of Black respondents agreed with statements like "science makes our way of life change too fast" (46.7% vs 38.6%) and "technological discoveries will eventually destroy the earth" (37.6% vs 26.5%). Conversely, fewer Black participants agreed that "science and technology are making our lives healthier, easier, and more comfortable" (80.8% vs 92.7%). Black participants also reported lower interest in technology issues and spent less time using the web at home.
These findings suggest that negative sentiment toward technology—not just lack of access or education—may contribute to lower adoption rates among African Americans. Even when informed about technological innovations, some African Americans expressed unfavorable attitudes. This is important context for understanding potential barriers to Web3 adoption, as it indicates that simply providing access and education may not be sufficient if underlying skepticism remains.
References
1. A. Jenkins, "Why Web3 Matters for Black Creators," Forbes, 2022.
2. R. S. Burt, "Structural Holes and Good Ideas," American Journal of Sociology, vol. 110, no. 2, pp. 349-399, 2004.
3. E. Hargittai, "Second-Level Digital Divide: Differences in People's Online Skills," First Monday, vol. 7, no. 4, 2002.
4. M. Warschauer, Technology and Social Inclusion: Rethinking the Digital Divide. MIT Press, 2004.
5. d. boyd, "White Flight in Networked Publics? How Race and Class Shaped American Teen Engagement with MySpace and Facebook," in Race After the Internet, 2012.
6. D. Lyon, "Surveillance, Power and Everyday Life," in The Oxford Handbook of Information and Communication Technologies, 2007.
7. R. Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code. Polity, 2019.
8. City of Chicago Office of Inspector General, "Review of the Chicago Police Department's 'Predictive Risk Models,'" 2021.
9. Federal Trade Commission, "Data Brokers: A Call for Transparency and Accountability," 2014.
10. J. Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, 2018.
11. L. Sweeney, "Discrimination in Online Ad Delivery," Communications of the ACM, vol. 56, no. 5, pp. 44-54, 2013.