
ITRE

Sponsored by Leonardo

Committee on Industry, Research and Energy

I, Robot: With the rising importance of machines and Artificial Intelligence moving society into a new technocentric age, what should be the ethical and legal framework for the use and development of AI? 

Written by Thetis Georgiou (CY)


The Topic in Depth

In 1950, Alan Turing posed the question “Can machines think?”, a starting point for the idea of Artificial Intelligence (AI). Today, the term AI refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.
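To make this definition concrete, here is a minimal, purely illustrative sketch in Python (a toy thermostat; none of it comes from the original text) of the perceive-decide-act loop that the definition describes: the system reads its environment, chooses an action with some autonomy, and acts towards a specific goal.

    # A toy sketch of the definition above: analyse the environment, then act
    # with some autonomy towards a specific goal (here, holding 21 degrees C).
    # Entirely illustrative; names and numbers are assumptions, not a real API.
    def perceive(environment: dict) -> float:
        return environment["temperature"]

    def decide(reading: float, goal: float = 21.0) -> str:
        return "heat" if reading < goal else "idle"

    def act(environment: dict, action: str) -> None:
        environment["temperature"] += 0.5 if action == "heat" else -0.1

    env = {"temperature": 18.0}
    for step in range(10):  # the perceive-decide-act loop
        action = decide(perceive(env))
        act(env, action)
        print(f"step {step}: {action} -> {env['temperature']:.1f} C")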

Digital assistants, self-driving vehicles, smart home devices and weather prediction are only a few examples of AI’s presence in everyday life. In reality, AI goes far beyond mere convenience: it can play a key role in tackling global challenges in crucial areas such as energy, education and health. For example, AI is starting to exceed the diagnostic accuracy of human doctors. Yet as much as AI is a key driver of the fourth industrial revolution, it also poses risks to fundamental rights such as data protection and privacy.

 

AI could boost European economic activity by almost 20% by 2030. To fully exploit the potential of an AI-powered future, the European Commission published the White Paper on Artificial Intelligence in February 2020, aiming to promote the uptake of AI while addressing the associated risks, and followed it with a public consultation. The results of the consultation included criticism of the risk-based approach outlined in the Paper, under which only high-risk AI sectors and applications would be subject to new regulatory requirements. Critics argue that a binary distinction between high- and low-risk AI applications is too simplistic.

 

Furthermore, AI raises key questions of accountability. Because of characteristics specific to AI systems, the question of who is responsible when harm is caused by these complex systems currently has numerous answers. Since a system can change after deployment, through updates or through self-learning during operation, liability might fall on the developer who designed the algorithm or on the consumer who purchased and used the system.

A key characteristic of many AI systems is that the process leading from input data to result is not visible. Such systems are described as black boxes; their opacity is an obstacle to transparency and shows why explainable AI matters for ensuring the fairness and robustness of these systems. However, it is argued that strictly regulating the degree of transparency in AI applications could decrease their accuracy and impose high costs, compromising the advances AI brings. This would be the case for AI systems built on algorithms whose structures are too complex for human experts to understand, such as deep learning models, which consist of thousands of artificial neurons working together in a diffuse way to solve a problem.
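As a minimal sketch of the black-box problem (synthetic data and scikit-learn; none of this appears in the original text), the snippet below trains a small neural network, shows that its internal “explanation” is nothing more than numeric weight matrices, and then applies one simple post-hoc explainability technique, permutation feature importance, to approximate which inputs matter.

    # Assumes scikit-learn is installed; the data is synthetic and illustrative.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X, y)

    # The model's internals are opaque weight matrices, not human-readable rules.
    print([w.shape for w in model.coefs_])  # [(6, 32), (32, 32), (32, 1)]

    # Post-hoc explanation: score each feature by how much shuffling it
    # degrades accuracy (permutation feature importance).
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: importance {score:.3f}")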

 

Data is a key driver of AI development, which makes personal information extremely valuable. While the General Data Protection Regulation addresses many privacy and security concerns, there are worries that strict regulation of data could limit the development of AI. At the other end of the spectrum, many have strong reservations about remote biometric identification because of the privacy risks it poses.

 

AI systems are only as good as the data they learn from, which means that biased or incomplete training data will be reflected in the deployed models. In documented cases, this has led to discrimination and bias. The EU is already facing criticism for funding technologies that lack transparency regarding their ethical assessment. A recent example is the project iBorderCtrl, which includes an AI-based ‘video lie detector’ to assess the trustworthiness of travellers at the EU’s borders; its case is being investigated by the European Court of Justice.
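How skewed training data resurfaces in a deployed model can be shown with a short sketch (entirely synthetic “hiring” data, invented for illustration and unrelated to any real system, including iBorderCtrl): a classifier trained on historically biased labels reproduces that bias in its own predictions.

    # Synthetic, hypothetical example; feature names and numbers are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)       # protected attribute (0 or 1)
    skill = rng.normal(0.0, 1.0, n)     # the genuinely relevant feature
    # Historical labels were biased: at equal skill, group 1 was hired less often.
    hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n) > 0.4).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The trained model reproduces the historical bias in its predictions.
    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"group {g}: predicted hiring rate {rate:.2f}")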

 

The lack of a fully defined ethical and legal framework for AI is alarming, considering the rapid rate at which AI is transforming our world. Regulating such complex and fast-growing technologies challenges traditional regulatory models, which often take months, or even years, to come into force. Misplaced regulation could hamper innovation and derail the enormous benefits that AI brings. An AI framework therefore cannot be static; it calls for an iterative regulatory approach that is as dynamic and adaptive as the technologies it governs.

Media

Topic Content, created by Marichka Nadverniuk (UA)

Food for Thought

Is the risk-based approach outlined in the White Paper on AI an effective way to regulate AI applications, or is there an alternative approach?

Are there cases where AI could be used for the identification and profiling of individuals?

 

To what extent should data be regulated as a key enabler of AI?

Who should be responsible in cases where harm is caused by AI systems?

How can we strike a balance between allowing for innovation and competitiveness on the one hand and security and precaution on the other hand?
