Maria Polycarpou

Digital Rights: Uber Drivers - Victims of the Algorithm?

In July 2020, a group of UK Uber drivers filed legal action against Uber to uncover its algorithms. The drivers had been banned from the platform after the app's automated systems accused them of fraudulent use. Uber gave them little to no information about how their personal data was used by the automated decision-making system that banned them, and this lack of transparency spurred the App Drivers and Couriers Union (ADCU) to take international legal action to demand their data. The General Data Protection Regulation (GDPR) requires companies to give individuals access to their data and an explanation of algorithmic management, and the Union alleges that Uber has not adhered to these rules. This article will explore the impact of Uber's arguably obscure policies on its drivers, the GDPR protection regime on which the claim is based, and the future of algorithmic decision-making in the digital economy.

The Uber Algorithm

Over time, our lives have become increasingly regulated by algorithms that improve with data: machine learning systems. The machine learning process comprises three stages: the data-gathering stage; the modelling stage, where the data is processed through the algorithm; and the output stage. The algorithm's 'rules' can be continually refined as the system is exposed to new data. Uber uses such automated data-processing systems to identify fraudulent activity and terminates drivers based on that analysis. Many large companies rely on automated data processing and machine learning to keep their applications running quickly and efficiently. With more than 103 million monthly users served by more than 5 million drivers worldwide, it is difficult to see a viable alternative for a platform of this scale.
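
To make the three stages concrete, the sketch below trains a toy classifier and uses it to score a new account. It is purely illustrative: the features, labels, threshold and choice of model are invented for this example and bear no relation to Uber's actual (and undisclosed) system.

```python
# Illustrative sketch of the three machine-learning stages described above.
# This is NOT Uber's system; all features, labels and thresholds are hypothetical.

from sklearn.linear_model import LogisticRegression

# Stage 1: data gathering -- each row is a (hypothetical) driver's activity,
# e.g. [trips declined per week, log-outs during surge pricing].
X_train = [
    [0, 1],
    [2, 0],
    [9, 7],
    [8, 6],
]
# Historical labels applied by the platform: 0 = normal, 1 = flagged as 'fraud'.
y_train = [0, 0, 1, 1]

# Stage 2: modelling -- the algorithm infers 'rules' from the data,
# and can be refit as new data arrives, refining those rules over time.
model = LogisticRegression()
model.fit(X_train, y_train)

# Stage 3: output -- a score for a new driver, which an automated system
# could then act on (e.g. deactivating the account above some threshold).
new_driver = [[7, 5]]
fraud_probability = model.predict_proba(new_driver)[0][1]
print(f"Estimated 'fraud' probability: {fraud_probability:.2f}")
if fraud_probability > 0.5:
    print("Automated decision: flag account for deactivation")
```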

However, there are numerous concerns regarding automated data processing, chief among them the obscurity surrounding it. Frank Pasquale has characterised these algorithms as 'black boxes', a metaphor that works in two senses: like an aeroplane's black box, which records everything that happens during a flight, these algorithms record every piece of data supplied to them; and, like a sealed box, their inner workings remain hidden from view. In the context of large companies that opacity is deliberate, because the algorithms are trade secrets. There are also concerns that these algorithms are not as neutral as once believed but are, in fact, proxies for discrimination, since they can absorb the biases of stereotyped data.
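
The 'proxy for discrimination' point can be shown with a toy example. In the sketch below, all data is fabricated: drivers in two zones behave identically, but historical reviewers flagged one zone far more often, and a model trained on those labels simply reproduces the bias.

```python
# Toy illustration of biased training data producing biased outputs.
# All data here is fabricated for illustration only.

from sklearn.linear_model import LogisticRegression

# Single feature: pickup zone (0 or 1), standing in for a proxy attribute
# (e.g. a neighbourhood that correlates with a protected characteristic).
# Behaviour is identical across zones, but past reviewers flagged
# zone-1 drivers as 'fraudulent' far more often.
X = [[0], [0], [0], [0], [1], [1], [1], [1]]
y = [0, 0, 0, 1, 1, 1, 1, 0]  # biased historical flags

model = LogisticRegression().fit(X, y)

# The model now assigns a much higher 'fraud' score to zone-1 drivers,
# reproducing the historical bias rather than measuring real behaviour.
print(model.predict_proba([[0]])[0][1])  # low score for zone 0
print(model.predict_proba([[1]])[0][1])  # high score for zone 1
```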


In this particular case, the ADCU contends that Uber is using an overly broad definition of 'fraud' to undercut its obligations under workers' rights law by disguising performance-related dismissals. The company's 'Community Guidelines' define 'fraud' to include declining offered work and strategically logging out to await surge pricing. The drivers in question made subject access requests to Uber over the previous year, asking the company to reveal detailed data about how the algorithm profiles them. The ride-sharing giant provided little or no data in response. This behaviour feeds the obscurity concern described above: Uber drivers have had their employment terminated without a clear explanation or the ability to defend themselves.

GDPR

The GDPR provides access rights for individuals who are subject to wholly automated decision-making processes that have a legal or similarly significant effect on them. This is relevant here because Uber's algorithms determine drivers' continued employment by assessing their supposed level of fraudulent activity. Article 22 of the GDPR restricts such decision-making, and Article 15(1)(h) requires data controllers to provide affected individuals with meaningful information about the logic involved. This would be the first Article 22 case to pierce the veil of algorithmic black-box secrecy and provide a check on it.


Moreover, the Regulation does not prescribe the level of detail that must be given, only that the information must be 'meaningful'. This case could therefore set the boundaries of Article 22. Article 22(3) also sets out additional safeguards where the processing is based on contract or explicit consent, which is most certainly the case for Uber and its drivers. These safeguards include the right to obtain human intervention, to express one's point of view, and to contest the decision. They may satisfy the Uber drivers' search for answers, but do they go far enough?

Looking to the Future

Big data processing will remain part of the future economy; such systems are the key drivers of the technological monoliths. However, the algorithmically generated 'scores' they produce raise critical questions about their compatibility with rights and values such as privacy, autonomy and non-discrimination. The Uber case may be the first Article 22 case under the GDPR in which these questions are examined. If 'meaningful' information can be satisfied by mere human 'rubber-stamping', then not only Uber drivers but all of us could be victims of the algorithm.


The complaint was filed in the Netherlands, as Uber's international headquarters is situated in Amsterdam. The outcome remains to be seen.
