EDI in Metadata
Picturing the Right Pooch
If someone asks you to imagine a dog, what do you picture in your mind? A poodle? A corgi? Your own dog? Is there a reason you may be more likely to think of a golden retriever than a greyhound? Or maybe you are one of the few people who were not imagining a particular breed at all, but a mixture of all the dogs in the world? The online world was created by humans and feeds off the information we give it, so, just like us, the answers a search engine provides when we ask "What is a dog?" may hold biases.
It is important for users to have access to viable, unbiased resources so that they can make well-informed decisions. Many find it hard to believe that in the age of advanced search engines like Google or Yahoo, the information they find can be faulty or outdated. It is only reasonable to hold these massive search engines to a high standard, as they are in constant use for everything from looking up Buzzfeed quizzes on what dog breed you are, to searching for philosophical texts needed to write an academic paper, and everything in between.
So, what makes these search engines biased, and why is it problematic? Search engine ranking uses a set of algorithms designed to organize results, promoting whatever the algorithms judge most relevant to us and most likely to keep us engaged online. These systems draw on factors like our search history, the searches of others, and even digital "tags" (similar to hashtags) that can be attached to content we upload.
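The personalization described above can be sketched as a toy scoring function. Everything here, including the weighting scheme, the tags, and the sample documents, is invented for illustration and does not reflect any real search engine's algorithm; it only shows how blending a user's history into a relevance score can give two people different answers to the same query.

```python
# Toy illustration of personalized ranking: each document is scored by
# how well its tags overlap with the user's recent search history.
# All weights and data below are hypothetical.

def rank_results(documents, user_history, history_weight=2.0):
    """Return documents sorted by a simple relevance score.

    Each document is a dict with 'title' and 'tags' (a set of strings);
    user_history is a set of tags drawn from the user's past searches.
    A tag shared with the user's history counts extra, which is how
    personalization can skew results toward what a user already sees.
    """
    def score(doc):
        base = len(doc["tags"])                      # generic relevance
        personal = len(doc["tags"] & user_history)   # personalized boost
        return base + history_weight * personal

    return sorted(documents, key=score, reverse=True)

docs = [
    {"title": "Golden retrievers 101",
     "tags": {"dog", "golden-retriever", "pets"}},
    {"title": "Greyhound racing history",
     "tags": {"dog", "greyhound"}},
]

# Two users ask the same question but see different orderings.
retriever_fan = rank_results(docs, {"golden-retriever", "pets"})
greyhound_fan = rank_results(docs, {"greyhound"})
print(retriever_fan[0]["title"])  # "Golden retrievers 101"
print(greyhound_fan[0]["title"])  # "Greyhound racing history"
```

The key design point is that the "personal" term is a feedback loop: the more a user's history resembles one kind of content, the more that content is boosted, which in turn shapes the next round of history.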
Some of the clearest examples of search engine bias, and of the harm it can cause, appear during political events like elections, when national attention drives an uptick in searches. This is troublesome because users are no longer exposed to impartial sources with which to educate themselves; instead they see search results that cater to their presumed political leaning, opening an opportunity for extremists and Internet "trolls" to spread misinformation tailored to those leanings.
How can this problem be tackled? Surprisingly enough, the COVID-19 pandemic became a call to action to limit this type of misleading behaviour online, presumably sparked by controversy and conflict over the efficacy and reliability of vaccines. Social media platforms and search engines alike are now taking increased responsibility for what is promoted through their platforms, with features like sensitive content warnings, cautionary notices for stunts performed by professionals, and bias or misinformation warnings on vaccine-related (and other) posts.

