
How biases slip into our technology

Picture an employee of a large tech company, someone who works on a game system in which players control the game with hand gestures and voice commands. What kind of person comes to mind? What gender is the employee? What age, and what skin color do you envision?

Chances are you just pictured a white male around thirty. And not entirely without reason, because the majority of game designers more or less fit that profile. This creates a uniformity within the sector that affects the products these developers work on, and that effect is not always positive, as it turns out. Here are a few examples.

In 2010, a Microsoft employee tested the new Kinect for the Xbox. The system worked fine for her husband, but considerably less well for her and their children. It turned out the system had only been tested on men between the ages of 18 and 35, so it was less able to recognize the movements of women and children.

Nowadays, more and more people are working with systems that incorporate artificial intelligence. A game system like this is a relatively harmless example. But imagine a product with more impact, such as a voice or facial recognition system used for security purposes, or a system that screens resumes in a hiring process.

Practice shows that the makers of these systems are often not representative of their users. As a result, voice systems, for example, have difficulty understanding higher voices, and it has been widely documented that facial recognition systems have a harder time recognizing dark-skinned faces. That is not because the systems are poorly built, but because they are trained predominantly on the voices and faces of white men. These are classic examples of biased artificial intelligence: the system copies the prejudices of its makers. When the makers are a homogeneous group, such a system can cause real problems in daily life.
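
To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is synthetic and invented for illustration (the two groups, their feature patterns, and the 95/5 split are assumptions, not real data); it only demonstrates how a model trained on a skewed sample performs much worse on the under-represented group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature data; the way the label depends on the second feature
    # differs per group, so each group has its own pattern to learn.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- the kind of skew described above.
Xa, ya = make_group(1900, shift=1.0)   # over-represented group
Xb, yb = make_group(100, shift=-1.0)   # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on equal-sized fresh samples per group.
for name, shift in [("group A", 1.0), ("group B", -1.0)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
# Group A scores high while group B lands near chance level: the model
# has mostly learned group A's pattern and applies it to everyone.
```

The remedy in this toy case is the same as in practice: make the training data representative of everyone who will actually use the system.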

Another well-known example comes from Amazon. The company tried to use AI to screen CVs. The system was trained on previously submitted resumes, the majority of which, as it turned out afterwards, came from men. This taught the system that men were "better candidates." The algorithm showed how easily such a system copies prejudices from society, and in doing so exposed the one-sidedness of the hiring process in recent years.
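
A hypothetical sketch of that dynamic is shown below. To be clear, this is not Amazon's actual system; the feature names, proportions, and penalty are invented. It shows how a screening model trained on historically biased decisions learns to penalize a resume feature that merely correlates with gender, even though gender itself is never an input.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

skill = rng.normal(size=n)                 # what we would want to select on
is_woman = rng.random(n) < 0.3             # historically fewer women in the pool
# A proxy term that appears on some women's resumes, e.g. "women's chess club".
proxy = (is_woman & (rng.random(n) < 0.8)).astype(float)

# Historical hiring labels: equally skilled women were hired less often.
hired = (skill - 1.0 * is_woman + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train on skill and the proxy only -- gender itself is not a feature.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

print("coefficient on skill:", round(model.coef_[0][0], 2))  # comes out positive
print("coefficient on proxy:", round(model.coef_[0][1], 2))  # comes out negative
# The negative proxy coefficient means the model reproduces the historical
# bias without ever "seeing" gender directly.
```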

Product designers should be an enrichment to a product, not a limitation. Artificial intelligence takes its input as the truth and inherits the tunnel vision of its creator. And while the power of tech companies keeps growing, the same cannot be said of the diversity within the industry. The big question that remains: do we want such a system, maker included, to make relatively important decisions?

The most obvious solution would be to put together more diverse teams, something that Laura Manders, employeneur at TMC, strongly supports. She experienced the homogeneity of the technical world throughout her education and saw few role models, in her studies or in the media, let alone girls like herself working on technology: "I was always the only woman, or one of the few, during internships." That homogeneity has since eased, and Laura now works in a more varied team: "I have colleagues of all age categories and a good male-female ratio. It's a kind of small society, and it works."

Creating a varied team is not that easy, however. In the meantime, tech developers need to find other ways to test their technology, such as representative testing teams, which are easier to set up than having the technology developed by a more diverse group of people. Manders cites a practical example of the positive effect: "A large tech developer created teams from all disadvantaged groups within the company. Think of different ethnicities, political affiliations and sexual orientations. All these groups tested the products to remove any prejudice from them."

In the end, everyone is biased at some level. Your upbringing shapes specific ideas, and the information you gather during your life depends on where and under what circumstances you grow up. All of us carry these unconscious biases, and they serve an important function: most of our actions happen unconsciously, and it is precisely those unconscious biases that allow us to make quick decisions in daily life.

Biases don't have to be a problem; we just have to realize that we build these prejudices and ideas into the technology we develop. Resolving discrimination in algorithmic systems, however, does not happen overnight. It is a process and takes time, just like fighting discrimination in other facets of society. To prevent major complications from developing, a large group of AI researchers is working on algorithms that help detect (and limit) hidden biases in training data. Time will tell whether this brings the result we are hoping for: artificial intelligence systems that work for everyone!
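
As an illustration of what such a check can look like, here is a minimal sketch with toy data (real fairness toolkits offer many more metrics and mitigations than this). It measures how far a labelled dataset deviates from demographic parity and then computes reweighing weights in the style of Kamiran and Calders, which make each group-label combination count as if the data were balanced.

```python
import numpy as np

def demographic_parity_gap(labels, group):
    # Difference in positive-label rate between the two groups (0 = parity).
    labels, group = np.asarray(labels), np.asarray(group)
    return abs(labels[group == 0].mean() - labels[group == 1].mean())

# Toy training data: group 1 receives positive labels far less often.
rng = np.random.default_rng(2)
group = (rng.random(10_000) < 0.4).astype(int)
labels = (rng.random(10_000) < np.where(group == 1, 0.2, 0.6)).astype(int)

print(f"positive-rate gap: {demographic_parity_gap(labels, group):.2f}")  # ~0.40

# Reweighing: weight each example by the expected frequency of its
# (group, label) pair divided by the observed frequency, so a model
# trained with these weights effectively sees a balanced dataset.
p_group = np.bincount(group) / len(group)
p_label = np.bincount(labels) / len(labels)
joint = np.array([[np.mean((group == g) & (labels == y)) for y in (0, 1)]
                  for g in (0, 1)])
weights = p_group[group] * p_label[labels] / joint[group, labels]
```

Weights like these can typically be passed straight into a training routine, for example via the sample_weight argument that scikit-learn estimators accept in fit.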

Bram Thelen

Director Data Science | Nanotechnology | Physics, Netherlands

Tel: +31 (0)6 52 89 25 70
