EW: Fear of Algorithms Offers Opportunities for Innovation

January 23, 2023 – Published on EW Magazine.nl

Discriminatory algorithms are all over the news. IT suppliers need to explain how their algorithms work, writes organizational innovator Laurens Waling on EW Podium. When they reveal how data works for users and how users keep control over that data, a world of possibilities opens up.

The Dutch Data Protection Authority (AP) is establishing a new Directorate for Algorithm Coordination to oversee ‘discriminatory algorithms’. The privacy watchdog is doing this in response to incidents like the childcare benefits scandal. We don’t know how big the discrimination problem is with algorithms. Nor has anyone investigated how this problem compares to (unconsciously) discriminatory individuals who work without algorithms. Nevertheless, all publicity is welcome for suppliers of innovative algorithms. They can show how they maximize diversity and inclusion with algorithms and put the user in control. And the new AP directorate can promote the right algorithms.

Give users control over their personal data

With the proliferation of IT applications, every system keeps its own database, and that is a significant problem. Job seekers have to upload their CV to one platform after another and then lose all control over that data. Technology companies realize this is inconvenient for everyone: costs rise, data goes stale, and applications become less attractive.

The trend is therefore to give end users control over their own 'data pod': a personal data vault in which the user determines, at any given moment, which apps may borrow their data. The Dutch government's MijnOverheid already works this way: citizens decide which agencies have access to their data. Neutral initiatives such as the Solid project of internet pioneer Tim Berners-Lee help IT suppliers decouple data from their applications. This way, users always remain in control and can grant different applications access to the same data source.
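To make the pod idea concrete: an application never copies the CV into its own database; it asks the vault for permission to read it, and the user can withdraw that permission again. The sketch below is a minimal illustration of that pattern, not a real Solid implementation; the DataPod class, the resource names and the app name are all invented for the example.

```python
# Minimal, hypothetical sketch of the 'data pod' idea: the user owns the
# storage, and applications may only read what they have explicitly been
# granted. None of these names come from the Solid libraries themselves.
from dataclasses import dataclass, field


@dataclass
class DataPod:
    """Personal data vault controlled by its owner."""
    owner: str
    resources: dict = field(default_factory=dict)  # e.g. {"cv": {...}}
    grants: dict = field(default_factory=dict)     # app name -> set of resource names

    def grant(self, app: str, resource: str) -> None:
        self.grants.setdefault(app, set()).add(resource)

    def revoke(self, app: str, resource: str) -> None:
        self.grants.get(app, set()).discard(resource)

    def read(self, app: str, resource: str):
        if resource not in self.grants.get(app, set()):
            raise PermissionError(f"{app} has no access to '{resource}'")
        return self.resources[resource]


# The job seeker keeps a single CV in a single place ...
pod = DataPod(owner="alice", resources={"cv": {"skills": ["Python", "Dutch"]}})

# ... and decides per application who may borrow it, and until when.
pod.grant("job-board-a", "cv")
print(pod.read("job-board-a", "cv"))  # allowed: returns the CV data
pod.revoke("job-board-a", "cv")
# pod.read("job-board-a", "cv")       # would now raise PermissionError
```

The Solid project standardizes exactly this separation between storage (the pod) and the applications that are granted access to it.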


Show how algorithms work

A second challenge is that algorithms do not work transparently. Fortunately, government organizations are increasingly trying to offer more transparency, but complete transparency is virtually impossible. Under the hood, an enormous number of data operations are performed to make the calculations possible; accounting for every single one of them would be endless. How exactly such a calculation takes place is rarely relevant to users, as long as the results are surprisingly good.

However, users do want to be able to trust that undesirable variables such as age, gender, religion, or name are not used. The supplier can build the exclusion of such variables into the application and show users that this is the case. Suppliers who apply 'privacy by design' let users decide for themselves whether, for example, their location and education level are included or explicitly excluded, to prevent potential discrimination.
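In practice, 'building the exclusion in' can be as simple as stripping the protected variables from the feature set before the model is ever fitted, and showing the user which variables remain. Below is a minimal sketch along those lines; the column names, the example data and the user-chosen exclusion of location are assumptions for illustration, not a description of any specific product.

```python
# Illustrative sketch: the model is only fitted on columns that are not on
# the exclusion list, and that list can be shown to the end user.
# Column names, data and the user's optional exclusions are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

ALWAYS_EXCLUDED = {"age", "gender", "religion", "name"}  # never used
user_excluded = {"location"}                             # user opted out

candidates = pd.DataFrame({
    "name":        ["A", "B", "C", "D"],
    "age":         [23, 54, 31, 40],
    "gender":      ["f", "m", "x", "f"],
    "religion":    ["-", "-", "-", "-"],
    "location":    ["Utrecht", "Breda", "Zwolle", "Delft"],
    "years_exp":   [1, 20, 8, 12],
    "skill_score": [0.7, 0.9, 0.8, 0.6],
    "hired":       [0, 1, 1, 0],   # label
})

excluded = ALWAYS_EXCLUDED | user_excluded | {"hired"}
features = [c for c in candidates.columns if c not in excluded]
print("Variables the model uses:", features)  # transparent to the user

model = LogisticRegression()
model.fit(candidates[features], candidates["hired"])
```

Dropping columns alone does not rule out indirect proxies, but it does make the chosen exclusions explicit and verifiable for the user.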


Highlight the diversity behind the algorithms and their creators

There are two more important points for IT suppliers if they want to gain trust in their algorithms. Who builds the algorithms? And on what data are the algorithms trained?

A recurring concern is that a group of 'white, young, male programmers' has limited empathy for issues of diversity and inclusion, and therefore builds algorithms that take too little account of them. Suppliers can counter this by demonstrating that they have a diverse team and an inclusive culture. In addition, suppliers should train their algorithms on as large and diverse a dataset as possible; their AI providers can assist them with this.
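One way to make 'as large and diverse a dataset as possible' verifiable is to report how the training data is distributed over the relevant groups before training starts. A small sketch of such a representation check; the dataset, the column names and the warning threshold are invented for the example.

```python
# Sketch of a representation check: report the share of each group in the
# training data so that skew is visible and can be corrected before training.
# The data, column names and the 75% threshold are assumptions.
import pandas as pd

training_data = pd.DataFrame({
    "gender":   ["m", "m", "m", "m", "m", "m", "m", "f"],
    "age_band": ["<30", "30-50", "<30", "30-50", "50+", "<30", "30-50", "50+"],
})

for column in ["gender", "age_band"]:
    shares = training_data[column].value_counts(normalize=True)
    print(f"{column}:\n{shares.round(2)}\n")
    if shares.max() > 0.75:  # illustrative threshold
        print(f"Warning: training data is skewed on '{column}'")
```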


Objective system and subjective human need each other

Systems cannot solve everything. A recruitment app can suggest the best candidate in a completely unbiased way, but during the interview employers still get an impression of name, age, gender, and education level, and will inevitably let that influence their choice. Companies would be wise to train managers and recruiters to prefer the objective calculations of the app over their own biases.

Algorithms can help tremendously in reaching more candidates and making a better selection. Yet humans should keep the final judgment on whether someone fits the team in their own hands. Even tech idealists recognize this: something like a 'click' is simply intangible. Just as humans with their unconscious biases need an objective system, a learning system needs feedback from humans. Hand in hand, the magical 1+1=3 emerges.
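That division of roles (the system ranks, the human decides, and the decision flows back as feedback the system can learn from) can be sketched as a simple loop. All names and scores below are invented for illustration; a real recruitment application would be far richer, but the pattern is the same.

```python
# Minimal sketch of the human-in-the-loop pattern: the algorithm ranks
# candidates, a human makes the final call, and that decision is logged as
# feedback the system can later learn from. All names are invented.
def rank_candidates(candidates, score):
    """Algorithm: propose an ordering based on scores only."""
    return sorted(candidates, key=score, reverse=True)


def human_final_decision(shortlist):
    """Human: keeps the final judgment (the 'click' the system cannot see)."""
    print("Shortlist proposed by the system:", shortlist)
    return shortlist[0]  # stand-in for the outcome of the interviews


feedback_log = []  # later used to retrain or recalibrate the system

candidates = {"cand-1": 0.71, "cand-2": 0.84, "cand-3": 0.66}
shortlist = rank_candidates(candidates, score=candidates.get)
hired = human_final_decision(shortlist)
feedback_log.append({"shortlist": shortlist, "hired": hired})
```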

