SABF Author

Team Gisela de la Villa

Blog Team 2017


Diversity in tech and why we need it

It is a widely known fact within the IT industry that there’s not a lot of diversity among the people who build the internet. Why is this a problem, and why should we address it?

Technology is everywhere. We use technology to communicate with our peers at work, with our families, with our friends. We use technology to search for information, we use it to get our news, we use it to learn and to grow. Being such an omnipresent factor in our lives, in everyone’s lives, it is imperative that technology is built for everyone. Moreover, it’s important that technology is built by everyone.

While it is true that most people are born naturally empathetic, there’s only so far our empathy can take us. To give a silly yet relatable example, last weekend we forgot to purchase vegan sweets for an event. The reason: it had always been our (then absent) vegan friend who thought about those things. One can make an extra effort to be empathetic and walk in someone else’s shoes, but without sharing their reality we can only do so to some extent.

Of course, the lack of empathy when building a product can go beyond sensitivities and affect functionality as well. A clear example of this is facial recognition algorithms. Take the case of Joy Buolamwini, an African-American MIT student, whose face was not consistently recognised by the face-detection algorithms she was using in her studies. To test her assignments, she even had to resort to wearing a white mask to increase contrast in low-light environments and have her face detected.

Does this mean that whoever created the face-detection algorithms is racist, or that the algorithm has a deliberate racist bias? Not at all. Most face-detection programs use artificial intelligence: a neural network is trained on a set of samples (in this case, faces), from which it learns the patterns to match against. The main cause of black faces not being recognised, or Asian eyes being detected as closed, is that the set of samples used to train the neural network was not diverse enough.
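To make the idea concrete, here is a deliberately tiny, hypothetical sketch (not a real face detector; every number and function name is invented for illustration): a one-feature “detector” trained only on bright samples learns a narrow range of what a face looks like and rejects perfectly valid darker inputs, while the same code trained on a diverse sample set does not.

```python
# Toy illustration of training-set bias: the model only ever learns
# the range of values it was shown during training.

def train_detector(samples):
    """Learn the min/max of a single brightness feature, plus a small margin."""
    lo, hi = min(samples), max(samples)
    margin = (hi - lo) * 0.1
    return (lo - margin, hi + margin)

def detects_face(model, brightness):
    """A face is 'detected' only if its brightness falls in the learned range."""
    lo, hi = model
    return lo <= brightness <= hi

# Non-diverse training set: only bright samples (0-255 brightness scale).
biased_model = train_detector([180, 190, 200, 210, 220])

# Diverse training set: samples spanning a wide range of skin tones.
diverse_model = train_detector([60, 90, 120, 160, 200, 220])

print(detects_face(biased_model, 70))   # darker face, biased model  -> False
print(detects_face(diverse_model, 70))  # same face, diverse model   -> True
```

The code is identical in both cases; only the training data differs. That is exactly the point: the failure is in the samples, not in any malicious intent of the programmer.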

While it can seem hard for us, as individuals, to influence how a phone’s face unlock detects Asian eyes or how crime-prevention algorithms identify suspects, the truth is that we all have a part to play. Diversity is key, and we can all start by encouraging others to become involved. Examples of this are Rails Girls and Django Girls, among others, organisations aimed at increasing the proportion of women in tech, and Black Girls Code, which aims to increase the number of women of color in the digital space. Another great example is the Algorithmic Justice League, created by the aforementioned Joy Buolamwini to highlight algorithmic bias.

If you identify with any of these stories, get involved. If you have ever found it difficult to use an app or website because of your ethnicity, age or disabilities, get your community involved. Educate them, attract them to the industry. Increase diversity in development teams and in test groups. If you haven’t, if you’ve never had any struggles at all, make a special effort to become aware of social bias. Start by looking at your surroundings. Inspect the company you work at and analyse whether it’s diverse enough. Encourage diversity. Improve tech.

Disinformation Era

Imagine the following situation: it’s Tuesday, it’s late, and you’re just arriving home. The day has been dreadfully long, so you choose to browse your favourite social network to unwind for a while. Your feed is full of the same old: jokes about the latest politician in the headlines, video clips of some corny pop artist, memes about some Turkish chef, and an avalanche of baby pictures and first-wedding-anniversary memorabilia. You scroll, scroll, scroll, until you find a video of a cat. Now, that’s relaxing.

This behaviour is hardly surprising. The excess of information overloads our senses, causing us to shut down. There is so much of it around that it really is an effort to take it all in. We tend to absorb only the information that comes preprocessed, the easy bits. This may be tightly bound to the fact that laziness is an evolutionary trait in humans[1]: we’re built to save energy in a calorie-restricted environment. Of course, that’s not our current reality, but the evolutionary trait remains.

Which leads us to the main causes of disinformation: the lack of diversification and the lack of verification of sources.

Let’s start with the lack of diversification of sources. Believe it or not, there are people who rely exclusively on social media to stay informed about current events. Facebook, Twitter, even 9gag! One of the main issues with this approach is that the information found on such media is highly biased. The feed is composed of people we choose to follow, people we choose to befriend. With that in mind, the information and points of view we are presented with are limited.

“Tell me who your friends are, and I’ll tell you what you know”

Not only are we conditioned by our choice of whom to follow or befriend, but social media will also keep feeding us only a subset of the available information. Social networks determine what to show us in our main feed based on what we have searched for, what we have liked, and whose profiles we have opened in the past[2], thus creating a feedback loop of related content. We are therefore presented only with information that an algorithm calculated we would like. The posts we see, the ads, the clickbait: all relate to our history and encase us in a pattern which in itself provides the algorithm with more detailed information about our perceived preferences.
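The loop can be sketched in a few lines. This is an assumed toy model, not any real platform’s ranking algorithm: the feed always recommends the topic with the highest engagement count, and merely viewing a recommendation counts as further engagement, so a small initial preference quickly locks the user in.

```python
# Toy model of a recommendation feedback loop: recommend what the user
# engaged with most, and let each view feed back into the counts.

def recommend(engagement):
    """Pick the topic with the highest engagement count so far."""
    return max(engagement, key=engagement.get)

# Hypothetical starting counts: a slight lean towards politics.
engagement = {"politics": 3, "sports": 2, "science": 2}

shown = []
for _ in range(5):
    topic = recommend(engagement)
    shown.append(topic)
    engagement[topic] += 1  # viewing the recommendation counts as engagement

print(shown)  # ['politics', 'politics', 'politics', 'politics', 'politics']
```

One extra initial click is enough for the other topics to never surface again; real ranking systems are vastly more sophisticated, but the self-reinforcing dynamic is the same.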

In addition to that, some social networks give you the option of hiding certain types of posts, either by author or, more dangerously, by content similarity. In this case, people choose to ignore information. Of course, you might want to block content from someone you dislike (just unfollow/unfriend them, trust me on this one), but an alternative reason might be that the information we want to block makes us uncomfortable. We experience cognitive dissonance: mental stress caused by holding two contradictory ideas simultaneously[3], or by being presented with evidence that contradicts our beliefs. The ways of resolving this discomfort are changing our beliefs, which is the most difficult and unlikely of all; ignoring the new information that causes the discomfort; or seeking sources that coincide with our beliefs and allow us to deem the new evidence erroneous. This last one is what is called confirmation bias[4].

This ultimately leads us to the second main cause of disinformation: the lack of verification of sources. On one hand, our need to resolve cognitive dissonance through confirmation bias predisposes us to believe whatever sides with our views, regardless of the source. We will gladly accept the words of whoever confirms our theories and ideas, even when we might be wrong (there are still people who believe the Earth is flat). It’s quite unlikely for someone to seek alternative sources, trying to find points of view that contradict their own truth. On the other hand, our lazy nature leads us to blindly believe any plausible information presented to us, without going to the extent of cross-checking it against reliable sources.

Of course, not all the information we find on the Internet is true. The best way of finding reliable information is by consulting reliable sources. A potential source-reliability ranking could be the following (from most to least reliable):

  1. Official documents, laws, and decrees (true by the very nature of their enactment)
  2. Scientific papers (highly reliable thanks to the supporting research and scientific evidence, though slightly less so because every study invites attempts to disprove it)
  3. Highly renowned newspapers (you would expect serious newspapers to verify their sources and to have editors who sanitize publications)
  4. Less renowned newspapers (articles are less rigorous and sometimes lean towards sensationalism)
  5. Social media (absolutely unreliable, where every John and Jane can write whatever they please)

In this schema, information can only be as reliable as the least reliable source quoted as a reference (e.g. if a major newspaper shares news from a less renowned newspaper, the information will only have level-4 reliability). With this in mind, anything found on social media has to be regarded as highly unreliable. And yet, some people end up believing even the most ridiculous Alternative Facts[5].
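This “weakest link” rule is simple enough to state as code. A small sketch, using the level numbers from the ranking above (1 = official documents, 5 = social media; the function name is ours):

```python
# A claim's reliability is that of the *least* reliable source in its
# citation chain; with this numbering, a higher level means less reliable.

def chain_reliability(levels):
    """Reliability level of a citation chain = its worst (highest) level."""
    return max(levels)

# A major newspaper (3) quoting a less renowned one (4): level 4 overall.
print(chain_reliability([3, 4]))  # 4

# A scientific paper (2) citing official statistics (1): level 2 overall.
print(chain_reliability([2, 1]))  # 2
```

Note the asymmetry: citing a stronger source never upgrades a claim, while citing a weaker one always downgrades it.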

While there doesn’t seem to be a way of fixing disinformation globally, there is a way of tackling it on a personal level: inform yourself, look for reliable sources that confirm what you have read or heard, look for alternative points of view, and try to avoid confirmation bias. If you’re too lazy to do it on your own, use a reliable fact checker (like Chequeado.com or Politifact). Don’t settle for the apparent truth.

Keep informed.

 

[3]: Festinger, L. (1957). A Theory of Cognitive Dissonance. California: Stanford University Press.