Can you trust AI? New legislation may help to address that question


Beyond the practical questions to be resolved in any AI application – i.e. ensuring that it consistently works as intended – there are also quite a few legal issues to be addressed. Privacy and data protection are likely the most visible examples: how can we be certain that our data isn’t collected or used unlawfully by an AI application? A citizen can easily raise questions with an organisation and file complaints against it; but an AI application is inherently more opaque and harder for the average person to hold to account.

Within the EU, the notorious General Data Protection Regulation (GDPR) is the principal legal framework for resolving such questions. It contains rules ensuring transparency, establishing citizens’ rights to their data, and protecting them against certain forms of automated decision making, including by AIs. As a result, European citizens can be relatively confident that they will not be confronted with decisions that are seemingly made entirely by an AI, without human oversight or recourse to human intervention.

But privacy isn’t the only relevant legal topic. There is also the much simpler and more intuitive question of whether an AI service is safe, and, as a logical complement, whom a citizen can turn to in case of harm. On that point, no specific legislation exists, either at the EU level or within the European Member States. There is a European directive on product liability, but it dates back to 1985 – barring a few smaller but more recent amendments – and focuses on products alone. While AI can of course be baked into products, some of the most exciting applications are entirely service-based or consist principally of a logical framework incorporated into a software package. The application of product liability rules to these is complex at best.

Not only are AI applications complex to begin with, their creation and use also often involve a complex chain of stakeholders: the persons who designed the logic, those who programmed it, those who integrated the resulting software into a hardware component, and those who incorporated that component into a bigger product, potentially comprising other AI modules. The final product is then circulated by a network of distributors and importers, sold by wholesalers, brought to end users by retailers, and used by them, possibly in innovative and unintended ways. Whom does one turn to if things go sour?

To alleviate these risks to some extent, the European Commission has published a new Proposal for a Regulation laying down harmonised rules on artificial intelligence – the so-called AI Regulation. The proposed Regulation puts forward rules to enhance transparency and minimise the risks to safety and fundamental rights. These rules must be applied and adhered to before AI systems can be brought to the European market.

In order not to needlessly slow down innovation, the Regulation generally focuses on so-called ‘high-risk’ AI use cases, i.e. situations where the risks that AI systems pose are particularly high, as determined by specific criteria defined in the Regulation. Whether an AI system is classified as high-risk depends on the intended purpose of the system, on the severity of the possible harm, and on the probability of its occurrence. High-risk systems include e.g. applications that involve biometric identification and classification of persons, management of critical infrastructure (such as road traffic or energy distribution), law enforcement, migration and border management, and the administration of justice. In all of these situations, AIs can conceivably make critical decisions that strongly affect individuals, and the proposal aims to ensure that AI isn’t deployed or used too casually.

Such controversial use cases are of course not contemplated by a project such as TEAMING.AI, which focuses on the opposite perspective: how can AI help to facilitate interactions between persons and machines in a production context? The goal isn’t to exploit persons or deny them rights or choices, but rather to ensure that their interactions with machines – which already happen today in almost any industrial context – can be as intuitive, easy, pleasant and efficient as possible for employees.

Nonetheless, the proposed Regulation will also steer TEAMING.AI’s efforts to implement state-of-the-art ethical and legal safeguards. Beyond the high-risk use cases mentioned above, the proposal also specifically examines the use of AI in an employment relationship, notably “AI intended to be used for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships”. That is certainly a part of TEAMING.AI’s objectives.

As a result, TEAMING.AI must respect a set of specifically designed requirements, both during the project set-up and during its execution. These include the use of high-quality datasets to train the AI, the establishment of appropriate documentation to enhance traceability of the AI’s assessment processes, the sharing of adequate information with the user, the design and implementation of appropriate human oversight measures, and the achievement of the highest standards in terms of robustness, safety, cybersecurity and accuracy.

Formally, these obligations do not yet apply to TEAMING.AI, since the AI Regulation is still at the proposal stage. If adopted, it would in all likelihood not become effective until the end of the TEAMING.AI project. Nonetheless, the TEAMING.AI team wants to be forward-looking and comply with the high expectations of European policy makers wherever possible. For this reason, the requirements of the AI Regulation are integrated into TEAMING.AI’s policies and will be automatically evaluated during piloting activities. In that way, TEAMING.AI can contribute to an efficient, safe, ethically sound and legally compliant workspace.

 Author: TIMELEX
