Google isn’t evil. You’re making it so.

Lately, I’ve been noticing a growing moral panic around how digital tools warp our perception of reality and harm society, politics and other areas of real life.

It comes in different shapes and forms: Facebook and its fake-news-glorifying algorithm, Google tagging black people as gorillas, the perception bubbles our social media feeds build around us, the ubiquity of nude photo leaks, and the increasingly scary risk of hacking, illustrated by the Mirai botnet.

It seems like the Internet is becoming an increasingly scary place, and some of that nastiness is seeping into our everyday lives. But are big corporations or new technologies to blame? Or are we the ones molding and defining technology in ugly ways?

And more importantly: if we are to blame (not governments, corporations or technologies, but us), how are we supposed to make things right?

But first, let me talk about some evil, evil internet robots.

The story of an evil, evil internet robot

In March 2016, Microsoft released Tay, an AI designed to develop “conversational understanding” through interaction with real users on Twitter. In less than 24 hours Tay had gone full Nazi, and its creators had to pull the plug.

That didn’t stop Microsoft from giving Tay a second chance once they had filtered out most of the offensive stuff the bot was learning from its fellow Twitter users. That didn’t go well either.

What Microsoft learned that day is what anyone who has seen Jurassic Park knew from the very beginning: “life, uh, finds a way.” In this case, it means that if you throw a bot onto a platform like Twitter, where you’re open to any interaction from hundreds of millions of users, the lowest common denominator will win.

There’s no way to filter out the endless means human ingenuity will find to screw with your bot. If enough people want to screw it up, consider it screwed. Life finds a way.

Machine learning is not evil. But it has issues.

The example of Tay illustrates the biggest problem with artificial intelligence, big data and machine learning. And it doesn’t even have much to do with AI or machine learning at all: it’s a very human problem. The assumptions these systems work with are often inaccurate, because they’re fed flawed data or designed with flawed logic.

The second issue is that we assume that machines, being much better than humans at analyzing and processing data, will also be better at making the right choices. So we’ve learned to rely on their output without challenging it, not realizing it is as flawed as ours; sometimes in exactly the same ways.

In short, the problem of AI and big data can be summarized as junk data in… junk data out.
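To make this concrete, here’s a minimal sketch (in Python, with entirely made-up data) of how a model trained on biased historical decisions simply reproduces that bias. The “training” is deliberately trivial, but the failure mode is the same one that bites real systems:

```python
# A minimal sketch of "junk data in, junk data out": a toy model that
# learns from historical decisions reproduces whatever bias those
# decisions contained. All data here is invented for illustration.
from collections import Counter

# Hypothetical historical loan decisions, biased against group "B"
# for reasons that have nothing to do with creditworthiness.
history = [
    ("A", "approved"), ("A", "approved"), ("A", "denied"),
    ("B", "denied"), ("B", "denied"), ("B", "approved"),
]

def train(records):
    """Learn the majority decision per group -- that's the whole 'model'."""
    votes = {}
    for group, decision in records:
        votes.setdefault(group, Counter())[decision] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in votes.items()}

model = train(history)
print(model)  # {'A': 'approved', 'B': 'denied'} -- the bias survived training
```

Nothing in the training step is malicious; the junk was already in the data.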

We’re starting to see an understandable concern about the tools used to run institutions all around the globe: from an AI judging a beauty contest with a bias against non-white contestants, to risk assessment software used in US judicial processes with a potential bias against African Americans.

“Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.” (ProPublica)

This doesn’t mean AIs or machine learning are flawed per se, but that they replicate the issues present in our societies at large.

When people started complaining about the racism of Google Images, the issue was perfectly explained by Antoine Allen in a series of videos on his YouTube channel.

In short, Google is not being evil when it selects offensive pictures to represent minorities. Google is just a digital window onto how society views and represents those minorities.

How am I making the internet evil?

This rant was inspired by a piece on food stamps in Jacobin, a magazine promoting social justice with an unapologetically left-wing approach to news and politics.

That’s why it came as a surprise to me that they used this picture to illustrate their article about food stamps.

The image shows a non-white family shopping, tagged on the USDA’s Flickr as Hispanic/Latino.

In the article, the writer paints a colorful picture of why the stigmatization of food stamp beneficiaries along the lines of the NYT’s argument is scientifically incorrect. But the image selection introduces a bias of its own (whether from the author or from whoever chose the featured image): the suggestion that Hispanic/Latino minorities are more likely to use food stamps.

It is important to mention that this issue is not limited to race, religion or gender; it requires a complex approach to understand how a society’s views and practices are perpetuated through culture.

Things we do online every day have a lasting impact on how AIs understand the world. Simple actions like linking to another page, clicking a link or letting a video play in our Facebook feed may have an unintended impact. It’s up to each one of us to decide if and how we want to address this, but being aware of the issue is always a good start.

What can I do to prevent the Internet from being evil?

First, as we saw at the beginning of the article, life always finds a way. Algorithms will be refined, AIs improved and the field will advance spectacularly; but our cultural practices and representations will permeate the digital world no matter how sophisticated the filters we add become.

So, it’s up to us. You’ll be able to make a bigger difference if you’re an academic, work in media or have a large audience, but anyone can help make the internet a better place.

Be conscious of your personal bias: If you’re a developer or academic working with AIs, be aware that your assumptions may be biased. That doesn’t mean you’re evil; just that your environment and context don’t apply to most people in the world. And that’s OK.

If you’re providing a dataset for a machine learning algorithm, try to be as inclusive and fair as possible in terms of representation and, if in doubt, seek professional assistance from race, gender or cultural counselors.
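As a first step, even a quick sanity check over your data can surface representation problems before training. This is only a sketch with toy data, not a real fairness audit:

```python
# A quick sanity check on how groups are represented in a labeled
# dataset. The (group, label) samples below are invented for illustration.
from collections import Counter

samples = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0)]  # (group, label)

totals = Counter(group for group, _ in samples)
positives = Counter(group for group, label in samples if label == 1)

for group, total in totals.items():
    rate = positives[group] / total
    print(f"group {group}: {total} samples, positive rate {rate:.0%}")

# Wildly different sample counts or positive rates across groups are
# worth investigating before a model bakes them in as ground truth.
```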

Be mindful of what you click: The algorithms behind Google, YouTube or Facebook learn from user behavior to make the most successful content more prominent. Clicking an article in your news feed, clicking a search result, or watching an entire ad or video on YouTube or Facebook tells the algorithm that that content matters.

This generally works, but occasionally you’ll come across content that shouldn’t be amplified, be it fake news, hate speech or plain trolling. In those cases, it’s better to ignore it, no matter how click-baity or offensive it is.

If you are seriously concerned about a specific piece of content and feel you need to do something about it, remember that sharing it will only give it more publicity. Most platforms let you report it, and you can probably rally the people around you to do the same.

Many algorithms take negative feedback into account, and at some point it may trigger a manual review. Here, there’s strength in numbers. This is especially important in cases where specific results can trigger violent responses or are designed to spread hate.
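A toy model of this dynamic, using an entirely hypothetical scoring formula, shows why both clicks and reports matter: every click pushes content up, while enough reports can push it into a manual review queue:

```python
# A toy sketch of engagement-driven ranking. The scoring formula and
# review threshold are hypothetical; real ranking systems are far more
# complex, but the incentive structure is similar: every click is a vote.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    clicks: int = 0
    reports: int = 0

REVIEW_THRESHOLD = 3  # hypothetical: enough reports trigger a manual review

def score(item: Item) -> float:
    # Clicks push content up; reports push it down.
    return item.clicks - 5 * item.reports

def needs_review(item: Item) -> bool:
    return item.reports >= REVIEW_THRESHOLD

feed = [Item("solid reporting", clicks=40),
        Item("outrage bait", clicks=90, reports=4)]
feed.sort(key=score, reverse=True)
for item in feed:
    print(item.title, score(item), "-> manual review" if needs_review(item) else "")
```

Note that in this sketch the outrage bait still outranks the solid reporting despite its reports; only the review queue can stop it, which is why coordinated reporting matters.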

Be mindful of the words you use online: Word clustering is a technique used by search engines and other software to understand the correlation between words. When two words appear in the same sentence or near each other, these systems draw a conclusion: there’s a connection between them.

That’s why it’s often better to refer to people by their names or simply as people, and to avoid details such as sex, race or mental health history unless they’re relevant to the story.
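For illustration, here’s a minimal co-occurrence counter over a made-up corpus. Real systems use far more sophisticated embeddings, but the underlying signal is this simple:

```python
# A minimal sketch of word clustering via co-occurrence: the more often
# two words appear near each other, the stronger the association a
# system will infer. Corpus and window size are made up for illustration.
from collections import Counter
from itertools import combinations

corpus = [
    "immigrant family uses food stamps",
    "family shops for groceries",
    "immigrant family shops for food",
]

WINDOW = 4  # count pairs of words within 4 positions of each other

pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, j in combinations(range(len(words)), 2):
        if j - i <= WINDOW:
            pairs[tuple(sorted((words[i], words[j])))] += 1

# Pairs like ("family", "immigrant") come out strongly associated,
# purely because of how the sentences were written.
print(pairs.most_common(3))
```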

Be mindful of what you search: Google and other search engines use your searches to feed their autocomplete suggestions.

If you run a search implying that the Holocaust is a hoax, even if you don’t believe it, you’ll be making that suggestion more visible to people who may be more impressionable than you.
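A toy autocomplete built on nothing but query frequency (a simplification of what real engines do) makes the mechanism visible: every search is a small vote for what the next person gets suggested:

```python
# A toy autocomplete that ranks suggestions by raw query frequency.
# The query log and counts are invented; real engines use many more
# signals, but user searches remain one of the inputs.
from collections import Counter

query_log = Counter({
    "holocaust museum": 50,
    "holocaust history": 30,
    "holocaust hoax": 5,   # a handful of searches keep this suggestion alive
})

def suggest(prefix: str, k: int = 3):
    matches = {q: n for q, n in query_log.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(suggest("holocaust"))  # frequency decides what gets suggested
```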

Be mindful of the images you use online: If you upload a picture of a Hispanic family to illustrate an article about food stamps, it’s likely that Google will draw a parallel. The same problem would occur with a picture of any other group, white people included.

Using illustrations, icons or more conceptual pictures can be a good way to avoid the issue. For example, a picture of food, a supermarket or a shopping cart would have worked just as well in that case. As a bonus, illustrations and icons let you use SVGs, which are better for your site’s load times.

Don’t feed the troll: This is an old rule, but it still rings true. Facebook or Twitter will give greater prominence to comments with more replies and interaction.

In any conversation online you’ll find plenty of offensive or hateful speech. Even if you feel the need to reply, try to resist: it’ll only make that comment more relevant, and in the worst-case scenario you’ll end up arguing with a bot. Some argue that fighting trolls can be effective, but from an algorithmic perspective, ignoring, downvoting or reporting them works better.
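A sketch with a hypothetical ranking formula shows why ignoring works better than rebutting: the ranker can’t tell an angry rebuttal from enthusiastic agreement; both are just engagement:

```python
# Why "don't feed the troll" holds algorithmically: a toy comment
# ranker (the weights are hypothetical) counts any reply as engagement.
def comment_rank(replies: int, likes: int, reports: int) -> float:
    return 2 * replies + likes - 4 * reports

troll_ignored = comment_rank(replies=0, likes=1, reports=6)
troll_fed = comment_rank(replies=25, likes=1, reports=6)  # 25 angry rebuttals

print(troll_ignored, troll_fed)  # feeding the troll multiplies its visibility
```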

Be conscious of connected devices: You should be aware of the dangers that connected devices pose to the internet. Malware like Mirai can infect entire armies of these devices and use them to mount DDoS attacks, jeopardizing Internet access for millions of people.

It’s important to understand that there’s no easy fix for this yet, so it helps to think twice about whether you really need a connected device. If you do, stick to reputable brands with a good track record on security, so that if their devices are ever compromised, they’ll be able to patch them.

A cheap, no-name connected device is always a no-no.

Educate IRL: This may seem utopian, but I would like to end the article on a positive note.

When the issue is that some of the ugliest aspects of society are using the online world as a platform, addressing and confronting those awful realities offline is also a solution. Yes, it looks impossible, but in the last 100 years Western societies have gone through a decolonisation process, women have gained access to the workplace and the right to vote in most of the Western world, and the ugliest parts of segregation have been banished.

We’re facing great challenges, but doubling down on the effort to make the world a better place will also make the Internet better, inasmuch as it’s a reflection of our own failures as a society and culture. Things may not look good at the moment, but remember: “the arc of the moral universe is long, but it bends towards justice.”
