
'Companies must analyze their data before they run an algorithm'

Artificial intelligence (AI) is becoming increasingly important for businesses and consumers. But it is much more than just technology. According to Natasja van de L'Isle, data science consultant at TMC, the outcome of AI systems can contain bias for multiple reasons, with potentially huge consequences. AI systems are not always objective, in the Netherlands too. “Bias in data can be a problem, just like incorrect training or incorrect evaluation.”

These days you’ll find algorithms everywhere you go. They determine search results on Google and what you see on social media. But algorithms are also frequently used in other areas and industries: think of security camera systems in shops, double checks at self-service checkouts and online credit checks.

An algorithm is a piece of code that solves a problem, and it can be self-learning. Data goes into the algorithm, which leads to a result. We speak of artificial intelligence when algorithms make independent decisions based on data or signals from their environment and learn from them. However, algorithms learn from what people put into them. “If there are conscious or unconscious biases in the data, the algorithm will carry these into the outcome. Data may contain prejudice regarding gender, age, ethnic background or place of residence. Take, for example, the selection of job applicants. It is important to check whether the data is a good reflection of the target group, but also how it relates to the rest of society. To avoid bias, companies should first thoroughly analyze their data before running an algorithm. They need to know the context in which the data was collected, so it becomes clear what flaws the data may contain.”
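As a concrete illustration of such a pre-run analysis, the sketch below compares the group distribution in a dataset against a reference distribution for the target population. The column name, group labels and reference shares are all hypothetical; a real check would use the categories and census or market figures relevant to the application at hand.

```python
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare the share of each group in the data with a reference share.

    `reference` maps group label -> expected fraction in the target population.
    A large gap signals that the dataset may not reflect the group it is
    meant to describe, so model outcomes should be examined per group.
    """
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = observed.get(group, 0.0)
        rows.append({"group": group,
                     "share_in_data": round(share, 3),
                     "share_in_population": expected,
                     "gap": round(share - expected, 3)})
    return pd.DataFrame(rows)

# Hypothetical applicant data: one demographic column, made-up reference shares.
applicants = pd.DataFrame({"gender": ["m", "m", "m", "f", "m", "f", "m", "m"]})
print(representation_gap(applicants, "gender", {"m": 0.5, "f": 0.5}))
```

A gap of zero means the group is represented exactly as expected; any sizeable deviation is a prompt to investigate how the data was collected before training on it.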

Objective assessment

Several examples show that this does not always happen. In the Netherlands, more and more companies perform credit checks for other organizations. The NS uses this service: with artificial intelligence, another company checks whether someone can apply for an OV or NS business card in the form of a subscription. If someone wishes to object to a negative outcome of this check, they can only do so with the company that performs the checks; NS itself does not take responsibility for the outcome. This became clear from several reports via the Consumers' Association reporting point 'the dupe of your data': several people were unable to get an NS subscription because of this automatic credit check. “An application is not always assessed on the basis of paid data sources such as BKR registrations. Frequently used sources are the Chamber of Commerce, daily and weekly newspapers, collection agencies and/or clients with whom someone has (had) a financial relationship, to name a few examples. This means the data is not always accurate or up-to-date and may even contain information about another person. The result is an AI system that cannot draw correct conclusions. Yet, without checking the outcome, it is assumed someone is not creditworthy.”


Measuring prejudice and fairness

“If you work with AI applications where people are involved, there must be an extra check in the process. In addition, it is important not only to use data, algorithms and artificial intelligence correctly, but also to train the system correctly. If you don’t, a system can unconsciously build in bias. At the Tax Authorities, for example, a second nationality was an official selection criterion for determining whether there was an increased risk of fraud. As a result, groups of people were wrongly labeled as 'fraudsters'. There are several methods with which you can - and should - measure the bias and fairness of your data and outcomes. You just don't always know why a system makes a certain prediction, and you want to prevent things from going wrong at all times, as in the Allowances affair. That happened through the misuse of AI: an algorithm outcome that was incorrect and a lack of control over that outcome. As long as it concerns people, checks must always be done by people.”
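One widely used family of such fairness checks compares outcomes across groups. The sketch below computes, for a hypothetical set of automated decisions, the selection rate per group and two common summary numbers: the demographic parity difference and the disparate impact ratio. The decisions and group labels are made up for illustration; what counts as an acceptable value depends on the application and on legal context.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions (e.g. 'approved') per group."""
    pos, total = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        pos[group] += decision
    return {g: pos[g] / total[g] for g in total}

# Hypothetical outcomes of an automated check: 1 = approved, 0 = rejected.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
lo, hi = min(rates.values()), max(rates.values())
print("selection rates:", rates)
print("demographic parity difference:", hi - lo)  # 0 means equal rates
print("disparate impact ratio:", lo / hi)         # 1 means equal rates
```

Here group "a" is approved 60% of the time and group "b" only 20%, a gap that such a metric surfaces immediately - exactly the kind of automated red flag that should then trigger a human check.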

Far-reaching consequences

For companies that work with algorithms and artificial intelligence, but also for consumers, it is important to understand that - and how - prejudices arise in AI, especially when the data involved is linked to personal data. We live in an era where data is the new gold. But if data about you is incorrect or used in the wrong way, it can have a negative impact with far-reaching consequences. “We should not blindly trust the outcome of algorithms, and there should be supervision of companies that use them, especially when it comes to people. The government must make this a priority and companies must recognize - and take - their responsibility. Consumers, too, need to be aware of what is at stake. Understanding what an algorithm does and why artificial intelligence makes certain choices is important for everyone!”

By adding explainable AI to the algorithm, individual decisions can be explained. In addition, it can reveal possible strengths and weaknesses of such a decision. Does a decision seem (too) biased? Then the system can be changed or improved. This ultimately leads to a better AI system that everyone benefits from!
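As a minimal illustration of this idea, the sketch below trains a small scikit-learn model on made-up data and uses permutation importance, one common model-agnostic explanation technique, to show which input features actually drive its decisions. The feature names and data are hypothetical; dedicated explainable-AI tooling (such as SHAP or LIME) goes considerably further, down to explanations of single decisions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical application data: two informative features and one noise feature.
n = 500
income    = rng.normal(50, 15, n)   # informative
debts     = rng.normal(10, 5, n)    # informative
shoe_size = rng.normal(42, 3, n)    # pure noise
X = np.column_stack([income, debts, shoe_size])
y = (income - 2 * debts + rng.normal(0, 5, n) > 30).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debts", "shoe_size"], result.importances_mean):
    print(f"{name:9s} importance: {score:.3f}")
```

If a feature that should be irrelevant, like shoe size here, turns out to carry real weight, that is precisely the kind of (too) biased decision-making the quote describes, and a signal that the system must be changed.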

Would you like to know more? Feel free to reach out

Bram Thelen

Director Data Science | Nanotechnology | Physics, Netherlands

Tel.: +31 (0)6 52 89 25 70
