Is your AI plotting to kill you? The dangers of AI and Machine Learning.

Machine Learning and AI have become increasingly popular over the last few years. Increases in processing power and improved hardware have made new applications in the field of AI possible, and those applications are rapidly becoming more and more human-like.

This delivers a lot of useful possibilities, but it also comes at a cost. This article provides a non-exhaustive list of things that are wrong with AI and machine learning today, but of course it will also give you the ammunition to fight these bad AIs.

Before we dive into the bad stuff, let’s talk a bit about AI itself. What is it exactly? The definition I give to it is as follows:

Artificial intelligence is computer software or a computer system that is capable of performing tasks that were previously deemed possible only for humans.

So, it’s a system using intelligent behavior. And those systems are becoming more and more intelligent. But how did we get there?

AI isn’t something new. As early as 1943, McCulloch & Pitts described the first model of a neural network, whose behavior mimics the workings of the human brain. In 1950, Alan Turing published a paper asking whether machines can think, in which he also described the Turing Test: a test that determines whether an AI can be distinguished from a human by having a conversation with it. Today’s chatbots prove that this test still cannot be passed convincingly.
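
To give an idea of how simple that original model was, here is a small illustrative sketch in Python (my own toy example, not the authors’ formulation): a McCulloch-Pitts-style neuron just sums weighted binary inputs and fires when a threshold is reached.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    # Sum the weighted inputs and fire (output 1) if the threshold is reached
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these hand-picked weights and threshold the neuron behaves like a logical AND
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts_neuron([a, b], weights=[1, 1], threshold=2))

Modern neural networks are essentially many of these units stacked together, with the weights learned from data instead of chosen by hand.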

In 1956, the first official AI research started at Dartmouth College, and this flourishing of AI continued until 1973, when the lack of progress in the field led governments to halt funding. This was the start of the first AI winter. That winter did not last long: in 1980, Japan started funding research again, and the rest of the world soon followed. But disillusionment with the limited hardware power of the time caused funding to dry up again, and the second AI winter began in 1987.

By the late 90s, interest started to rise again. Supercomputers were built that beat humans at popular and hard-to-master games like chess, Jeopardy! and Go. Progress was made in image recognition and natural language processing, and AI quickly made its way into our daily lives. Interest is definitely back. But with all these new possibilities, some negative aspects of AI are also surfacing…

AI is threatening our privacy

The use of computer vision has definitely changed our lives for the better. We can search through images on our phone by describing what is in them, we can enter information by simply scanning our credit card or a bank transfer form, and AI systems can better detect and outline cancerous cells in radiology imagery. We even use it for fun little gimmicks such as detecting whether or not an item is a hotdog, or distinguishing Chihuahuas from muffins.

But along with these great capabilities, there’s another very useful technique that is also used in less-than-ideal applications: facial recognition.

Facial recognition is used for a lot of great things. You can use it to easily unlock your phone or to look for pictures of your significant other in your massive photo library. But it is also used to identify strangers on the internet, or for mass surveillance. Combine facial recognition with our omnipresence on the internet nowadays, and a simple picture of someone passing you by on the street is enough to find out who that person is.

In China, facial recognition is used to detect violations by people on the street, like crossing against a red light as a pedestrian. Such violations affect your social score: a low score makes it harder to get a loan or buy a car, while a high score can get you preferential treatment at the hospital. Unfortunately, apart from the perverse effects of this score, the detection system is far from perfect. This was shown by the false accusation of Dong Mingzhu, a Chinese businesswoman who was publicly shamed for jaywalking when the system had actually detected her face on an advertisement on a passing bus while the pedestrian light was red.

In Hong Kong, facial recognition is used to identify people protesting against the government. But apart from governmental mass surveillance, facial recognition can also be used by mere mortals like ourselves. The FindFace service indexed profiles from VKontakte, a Russian social network, and let you find user profiles based on a picture you provided. Reporters from The New York Times built their own mass surveillance system for about 100 dollars, using public camera feeds and facial recognition services from a cloud provider.
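
To show how low the barrier has become, here is a minimal sketch of the same idea using the open-source face_recognition Python library rather than a cloud service; the image files are hypothetical and only serve as an illustration.

import face_recognition

# A reference photo of a known person and a photo taken on the street (hypothetical files)
known_image = face_recognition.load_image_file("alice.jpg")
street_image = face_recognition.load_image_file("street_photo.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]

# Compare every face found in the street photo against the known face
for encoding in face_recognition.face_encodings(street_image):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match: {match} (distance {distance:.2f})")

Scale this up with a scraped photo database and a few public camera feeds, and you essentially have the 100-dollar surveillance system the reporters described.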

So, because automatically identifying people is so easy, it is becoming harder and harder to move around the streets or the internet anonymously. Researchers are using Generative Adversarial Networks (GANs) to generate patches that mislead AI systems, and these patterns are even being incorporated into clothing. But neither of these solutions makes you invisible, obviously.
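
Generating such patches is beyond the scope of this article, but the underlying idea of adversarial perturbations can be sketched with the much simpler fast gradient sign method (FGSM), shown below with PyTorch. This is an illustrative variant, not the researchers’ actual GAN-based approach, and face.jpg is a hypothetical input image.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained image classifier (newer torchvision versions use the weights= argument instead)
model = models.resnet18(pretrained=True).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("face.jpg")).unsqueeze(0)  # hypothetical input image
image.requires_grad_(True)

# Take the class the model currently predicts and compute the loss for that class
output = model(image)
predicted = output.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(output, predicted)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", output.argmax().item(), "after:", model(adversarial).argmax().item())

The perturbed image often gets a different label even though the change is barely visible to a human; adversarial patches apply the same principle to a printable region of an image or a piece of clothing.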

Protect yourself! With a little help from Machine Learning

The use of these GANs brings us to another thing AI excels at, but which also comes at a cost: making fake things.

GANs are used for all kinds of fun experiments. Everyone has seen or used apps that transform your pictures into the style of famous artists. You can also convert drawn sketches into realistic images using the power of AI, or use FaceApp to see what you will look like when you are older. The privacy issues of such apps have long been discussed, but they pose another potential danger: insurance agencies, for instance, might use this information to check whether you are aging more rapidly than you should because of an unhealthy lifestyle.

But AI should also make some companies concerned about their business model… Have a look at the pictures below:

These all look like pretty nice pictures of pretty nice people. Except they’re not. They are actually images generated by a neural network built by Generated Faces in their 100K Faces project. Pretty concerning if you’re currently making stock photos for a living.

AI isn’t only good at making images; it’s also really good at generating text. So good that OpenAI didn’t want to release their latest text-generation model, GPT-2, to the public because it could be too dangerous to use: spreading fake news would become a lot easier than before. AI is great at creating voices too. You can create your own digital voice with Lyrebird AI, for example. Although my personal experiments still produced obviously computer-generated voices, I attribute that to the limited amount of annotated recordings of my voice that are available.
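
Coming back to text generation for a second: here is a minimal sketch of how accessible it has become, generating text with one of the publicly released GPT-2 models through the Hugging Face transformers library (assuming the library is installed; the prompt is just an example).

from transformers import pipeline

# Load a small, publicly available GPT-2 model and let it continue a prompt
generator = pipeline("text-generation", model="gpt2")
result = generator("Breaking news: scientists have discovered",
                   max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])

A few lines like these are enough to produce passable filler text, which is exactly why large-scale fake news generation is such a concern.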

These things can be put to great use. Google Assistant can make restaurant bookings for you in 43 US states (I guess the accents are too bad in the others). But there’s also a downside. In the wrong hands, this technology allows for the creation of fake pornography, fake x-ray goggles, or phishing attempts. And we are already fully capable of making people appear to say things they never actually said, as the well-known deepfake video of Obama demonstrates.

Funnily enough, the solution to all this fake AI is… AI. Companies and research institutions are building machine learning models that can detect material created by an AI engine. And so begins a long game of cat and mouse between the good and bad AIs out there.

But not everything is the fault of those vicious computer models… The people behind those models are to blame as well.

People are awful — and their AI will be no different

Apart from the evil implementations of AI we’ve seen before, people are also making outright fake applications. The x-ray app mentioned before is a great example, but you also have Faception, a company that claims it can identify whether you’re a terrorist just by looking at your picture. I don’t link to them, because they really don’t deserve the attention.

But apart from that, AI models are usually built on historical data produced by humans, and that can lead to unexpected results. There are examples of crime-predicting AI built on falsified police data, introducing racial bias into the results. TayTweets, a Twitter bot developed by Microsoft to help it better understand teen language, quickly turned into a swearing, racist neo-Nazi.

And it can happen to anyone. Big companies like Microsoft, IBM, Google and Amazon have all developed tools with gender and racial bias because they were trained on unbalanced or biased input data.
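
A tiny synthetic sketch (my own toy example, not any of these companies’ actual systems) shows how easily this happens: if one group dominates the training data, a model can score well overall while failing badly on the under-represented group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # One toy feature; the label depends on the feature, but the relationship
    # is inverted for the minority group to mimic a group-specific pattern
    X = rng.normal(size=(n, 1))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

X_major, y_major = make_group(950, flip=False)  # well-represented group
X_minor, y_minor = make_group(50, flip=True)    # under-represented group

model = LogisticRegression().fit(np.vstack([X_major, X_minor]),
                                 np.concatenate([y_major, y_minor]))

print("accuracy on majority group:", model.score(X_major, y_major))  # high
print("accuracy on minority group:", model.score(X_minor, y_minor))  # very low

The overall accuracy still looks fine, which is exactly why these problems go unnoticed until the model is confronted with the group it rarely saw during training.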

The problem is that your AI will only be as good as the data you put into it, and people are biased and make mistakes. Geckoboard made a list of common mistakes in data analysis and thinking patterns, some of which you can find in the image below. Thinking, Fast and Slow is also a great reference on this point.

What can you do?

The message here is definitely not that we should ban AI systems for good. Some regulation can be useful, but I believe in the power of the people. Below you can find four tips that can help you build better AI systems, and a better world!

1. Get informed

And you’re already doing great! By knowing the limitations of AI and its common pitfalls, you will overcome potential problems more easily.

2. Have a mixed team

Introducing more opinions and knowledge into your team is really valuable. Men think differently from women, and different experiences lead to different insights. And believe me, the diversity at InfoFarm has already helped us tremendously with the problems we have tackled in the past.

3. Don’t be evil

Once the motto of Google, but removed from its code of conduct in 2018, this phrase is still valuable advice for all developers of machine learning applications. Be careful with what you produce, and if you know your application can be put to bad use, resist the urge to release it out into the open. Because really, you know it will be exploited.

4. Data Science is best done by Data Scientists

It is becoming increasingly easy to develop AI systems and machine learning models, thanks to easy-to-use modules, packages and SaaS solutions from cloud providers. Anyone with a little coding knowledge and the help of some tutorials can develop an AI model and release it into the open. But only trained data scientists are aware of the limitations of those models, and can create software that doesn’t introduce bias or only perform well on the data it was trained on while working terribly in real-life applications. And those things take time. The first 90% of an AI model is easy to develop, but getting the last 10% right can take a lot of time and effort.
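
A minimal sketch of that last point, using a made-up dataset: an unconstrained model can look perfect on the data it was trained on and still fall apart on data it has never seen, which is why evaluation on held-out data is one of the basics a trained data scientist will never skip.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A noisy synthetic dataset, split into data the model sees and data it does not
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained decision tree will memorize the training set, noise included
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training data:", model.score(X_train, y_train))  # close to 1.0
print("accuracy on unseen data:  ", model.score(X_test, y_test))    # noticeably lower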

And of course, if you want any assistance on this, InfoFarm will be more than happy to help you.

NV InfoFarm, Ben Vermeersch, 17 October 2019