October 18, 2020 – Module 6: Ethics, Racism, and Search Engines

“The public generally trusts information found in search engines” (Noble, 38).

To fully grapple with, and potentially disentangle my thoughts from, all of this week’s readings, I would have to write a much longer essay. To spare my classmates (and maybe myself?) that endeavor, I will focus primarily on the assigned monograph. Noble’s message stuck with me on every page of the book, and her argument is deeply important for scholars in today’s world. We often think about gaps, silences, and bias in the archives, and there is a move to consider these in digital collections and research. But now we must grapple with discrimination in essentially every facet of Internet use. Yikes.

The argument, however, is not comfortable. Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism refuses to soften its claims or shy away from hard topics. In her introduction, Noble recounts how a Google search for ‘black girls’ that she made in 2011 returned page after page of hardcore pornography. She lists the websites and includes a screenshot of the search, along with the sexualized images displayed at the top of the results. She goes on to argue that search engines and computer algorithms are inherently biased, racist, and sexist, focusing on Google and Black people’s experiences with it. Noble condemns how private interests and advertising revenue perpetuate society’s discriminatory ideals through a medium the public treats as a neutral and factual way of disseminating information.

Going back to Noble’s introduction, she outlines a specific argument that runs throughout the whole book regarding the pornification of Black women and other minorities. I felt uncomfortable reading some of the book’s language, since she neither shied away from the graphic words used in porn nor withheld the screenshots of what her Google search pulled up. Noble’s point required her to state these words and make us uncomfortable, because her point is uncomfortable: search engines and algorithms perpetuate and reinforce racism and discrimination. This example sets the stage for other (less salacious) issues present in human-created algorithms, in the search engines most of us trust, and in Google itself as a private company. Shock value can work, people.

The core of this monograph is racism in information sciences as it affects Black women, but Noble’s argument extends well beyond that one problem. Other examples throughout the book, such as Google searches for the word “Jew” leading to anti-Semitic websites even as Google blocked Nazi memorabilia from appearing in certain countries, show a wider range of issues. These include the removal of information: “this indicates that Google can in fact remove objectionable hits, it is equally troubling, because the company provided search results without informing searchers that information was being deleted” (Noble, 45). Google added a disclaimer to search results for “Jew,” yet didn’t remove the hits. Interesting, yes; scary in lots of ways. In Chapter Three, Noble also writes about Dylann Roof, the white supremacist who murdered nine Black people at Emanuel African Methodist Episcopal Church in South Carolina in 2015. Roof used Google searches to formulate his racist ideals. Googling “black on white crimes” led him to the site of the Council of Conservative Citizens, “a modern reincarnation of the old White Citizens Council,” which fought to preserve segregation (Noble, 112). The search led him to no reputable websites discussing interracial violence, or at least none that weren’t ‘cloaked websites.’

Discriminatory search algorithms do more than project problematic media ideals about marginalized groups. They perpetuate racism, sexism, and discrimination, and they make that discrimination profitable for companies such as Google. They also make it easy for people to find problematic and inaccurate sources with certain keywords, which can help create violence and strife, as “[k]nowledge management reflects the same social biases that exist in society, because human beings are at the epicenter of information curation” (Noble, 141).
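To make this feedback loop concrete, here is a deliberately simplified sketch in Python. It is my own toy illustration, not Noble’s analysis and not Google’s actual (proprietary) system: a ranker that orders results purely by accumulated clicks will keep surfacing whatever users already click on most, existing biases included.

```python
# Toy illustration only: a ranker that orders results by accumulated clicks.
# The point is simply that popularity-driven ranking recycles whatever users
# already do, so biased clicking behavior becomes biased rankings.
from collections import defaultdict

class ClickFeedbackRanker:
    def __init__(self):
        self.clicks = defaultdict(int)  # result URL -> click count

    def record_click(self, url):
        self.clicks[url] += 1

    def rank(self, candidates):
        # Sort by popularity alone; no notion of accuracy, quality, or harm.
        return sorted(candidates, key=lambda u: self.clicks[u], reverse=True)

ranker = ClickFeedbackRanker()
results = ["sensational-site.example", "reputable-site.example"]

# If early users disproportionately click the sensational result...
for _ in range(10):
    ranker.record_click("sensational-site.example")
ranker.record_click("reputable-site.example")

# ...it is ranked first for every later user, reinforcing the pattern.
print(ranker.rank(results))
# ['sensational-site.example', 'reputable-site.example']
```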

Noble’s call to action centers on bringing more Black people and other minoritized people into the tech industry, where there is a distinct dearth of both. She also argues that tech design should include people who have studied marginalized groups and their histories, such as scholars of African-American history or women’s and gender studies, and that users need an education in understanding that web searches ARE NOT neutral. In “Algorithmic Accountability: A Primer,” authors Robyn Caplan, Joan Donovan, Lauren Hanson, and Jeanna Matthews build on an understanding of algorithmic issues that feeds off Noble’s work and others’. They write, “[c]ritically, algorithms do not make mistakes, humans do” (Caplan, Donovan, Hanson, and Matthews, 22). These scholars similarly recognize that lack of oversight and capitalism perpetuate the issue; they call for independent auditing of algorithms by people such as journalists and also argue for more government intervention. Noble, as well, calls for “public search engine alternatives, united with public-interest journalism and librarianship, to ensure that the public has access to the highest quality information available” (Noble, 152).

Noble’s point is an overarching call to scholars and non-scholars alike, since everyone uses the Internet, Google, and algorithms. For historians, primary research often begins with Google, even if that only means googling articles and historians to see their work. By relying on search engine algorithms to do this work for us, we are relying on a problematic system with roots in sexism, racism, and discrimination. (For an article about issues with JSTOR’s Topics, including the removal of the topic ‘women’ for a period of time, read Sharon Block’s “Erasure, Misrepresentation and Confusion: Investigating JSTOR Topics on Women’s and Race Histories” in Digital Humanities Quarterly.) Truly fixing the issue requires the systemic changes Noble calls for, but for historians, understanding the problem is a good first step. A first step only, but one I needed to learn about to even take.

Sources Used:

Block, Sharon. “Erasure, Misrepresentation and Confusion: Investigating JSTOR Topics on Women’s and Race Histories,” Digital Humanities Quarterly 14, no. 1 (2020).

Caplan, Robyn, Joan Donovan, Lauren Hanson, and Jeanna Matthews. “Algorithmic Accountability: A Primer,” 2018.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press, 2018.

Ziegler, S. L. “Open Data in Cultural Heritage Institutions: Can We Be Better Than Data Brokers?” Digital Humanities Quarterly 14, no. 2 (2020).

6 replies on “October 18, 2020 – Module 6: Ethics, Racism, and Search Engines”

Caroline, this blog really synthesized a lot of the key takeaways from Noble’s work. That first quote, “The public generally trusts information found in search engines,” is scarily true, and I would say that the general public trusts MOST content they see on the internet, which is even more troubling. In my first blog for this class, I somewhat confidently stated that the internet has democratized research. I am now rolling back that statement now that we are this far into the class. Yes, materials are more easily available, but Noble has convinced me that the web space is far from democratic. So much needs systemic change, and the vastness of the internet makes any change seem like a drop in the bucket. Ugh…..

The barrage of readings this week was definitely a lot to take in. And each one was such a heavy-hitter, too! I share a similar degree of discomfort with the use of so many internet services now, especially now that such fundamental issues with them have been revealed.

But even with this knowledge, as both regular users and historians, are we capable of breaking from their grasp? Many of these algorithms provide admittedly useful services. How can we balance our social awareness with daily function?

Caroline, I agree with Jayme; your blog highlights Noble’s main points really well. As I finished reading Noble’s work, I kept asking myself: how does one regulate the coding to ensure that results are shown without bias? Noble offers several viable solutions to this question. My other question is: should Google be determining for me which results I read? I am one of those individuals who will read all 18 pages of results and have often found the answer to my question on the 10th or 12th page. My other response was: why would people establish algorithms to produce these kinds of results? Why? I was shocked by Noble’s results and mimicked her methods by searching for “Asian girls” and “Russian girls.” The results were disheartening.

How can libraries make sure that they provide bona fide, legitimate, non-pornographic results to searches like these? Can libraries do a better job of directing the public to vetted search engines and databases?

Your post really brings out the anti-racist sentiment at the core of Noble’s work. You heavily emphasize how uncomfortable the book is, and that is a(n unfortunate?) side effect of having anti-racist conversations. Racism is always an uncomfortable subject, and the realities that flow from it are terrible. So bringing such awful things to light is fantastic, and clearly the book has affected all of us who read it. This is one of those books everyone should have to read to better understand how the world works. I think the conversation about how Dylann Roof was radicalized, in particular, needs to be broadly understood and railed against. The internet is a dark place, and radicalization is too easy with algorithms nudging people toward similar hateful communities.

“Discriminatory search algorithms do more than project problematic media ideals about marginalized groups. They perpetuate racism, sexism, and discrimination, and they make that discrimination profitable for companies such as Google. They also make it easy for people to find problematic and inaccurate sources with certain keywords, which can help create violence.”

This is such an important point, and I’m glad you brought it up. There is almost no economic imperative to change the ways these algorithms function — Google has an indomitable market share, and just telling everyone to “not use Google” doesn’t really seem feasible. And yet we also don’t really have any legal/regulatory way to drive this (at least within the U.S.), even though it’s clear Google is able to deprioritize and hide certain content — because they do it for companies in which they have shared interest! https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1463&context=jbl

@Cassandra and Caroline… you ask why someone would establish algorithms to produce those kinds of results. I don’t think developers do (for the most part); rather, they write code that seeks out and repeats the hits that have been most popular. It is by and large user driven. Noble’s points still stand, but we need to look at our unrestricted access to this junk and examine the heart of a culture that is devouring it.
