The increasing presence of AI models in everyday life means that regulatory and risk concerns have burgeoned. Risk-related, legal and ethical issues surround chatbots, profiling and business applications. Additionally, with everyone now able to access AI models that produce written, verbal, facial and visual/artistic output, we need to be concerned with fake news, plagiarism, breaches of copyright (the image above was generated by ChatGPT), cyber-security and the exploitation of children. As advisers and users, we need a developing understanding of how AI operates, including its model architecture and its training and testing, as AI becomes increasingly part of the real world. We need to consider the risks, ethics and regulations that affect these new technologies.
While forms of AI have been around for some time, it is only since ChatGPT and other freely accessible platforms entered the consumer world a couple of years ago that interest has surged in generative AI specifically, as opposed to other digital uses such as marketing, data analysis, information and news retrieval, and computation. AI applications are now prominent in language models, transportation (e.g. driverless systems), cyber security, medicine, pharmaceuticals and many other areas.
What to look out for:
The current regulatory framework has been considered by W Legal in earlier short articles, but we are now increasingly focussed on the introduction of the EU AI Act (which applies in EU countries rather than the UK, but affects companies operating in the 27 member states). The UK and EU versions of the GDPR are now evolving somewhat separately, particularly in the areas of automated decision-making and profiling.
Risk Classification of AI Systems:
You will need to assess the risk level of AI systems (e.g., minimal, limited, high or unacceptable risk). High-risk AI systems face stricter requirements, particularly in areas such as healthcare, recruitment and law enforcement. In such cases, you will need to establish a risk management framework, ensure human oversight and maintain documentation.
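The tiered approach described above can be sketched as a simple lookup. The tier names follow the Act's broad risk categories, but the obligations listed against each tier are simplified illustrations for this sketch, not an exhaustive or authoritative legal list.

```python
# Illustrative mapping of EU AI Act risk tiers to example obligations.
# The obligations shown are simplified assumptions, not legal advice.
RISK_TIERS = {
    "unacceptable": ["prohibited: may not be placed on the EU market"],
    "high": [
        "risk management framework",
        "human oversight",
        "technical documentation",
    ],
    "limited": ["transparency: inform users they are interacting with AI"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}


def obligations_for(tier: str) -> list[str]:
    """Return the example obligations for a risk tier (empty if unknown)."""
    return RISK_TIERS.get(tier.lower(), [])
```

In practice the classification itself is the hard part; a lookup like this only helps once a system's tier has been properly assessed.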
Transparency in AI Systems:
Ensure that high-risk AI systems are transparent, and that individuals are informed when interacting with AI, especially in automated decision-making processes. Provide easy access to information on AI system functionalities.
Human Oversight:
Ensure that appropriate human oversight mechanisms are in place for high-risk AI systems, and that decisions made by AI can be reviewed by qualified individuals before they have significant effects on individuals’ lives.
AI Governance and Accountability:
Establish governance structures that ensure AI systems are developed, tested, and used responsibly. Carry out necessary impact assessments and assign responsibility for compliance with the EU AI Act and GDPR to data protection officers (DPOs) and AI compliance officers.
You must ensure, when operating in the EU, that prohibited biometric analysis and profiling, as well as facial scanning, are not taking place.
What action should you be considering?
Be alert to the risk of AI fabricating false and/or misleading information (known as “hallucination”). AI systems need regular and, in some cases, continuous monitoring. Ensure that data retention and transfers comply with the laws and regulations of any jurisdiction where data is transferred or stored. Attention needs to be paid to regulations protecting consumers, particularly children. In many cases, full disclosure of AI usage and consent may be needed.
We can help you consider aspects of the changing regulations in other countries through our extensive overseas legal and compliance contacts, particularly in the US, EU and China. Clearly, issues are emerging all the time, as we saw at the recent Paris AI Summit, with legal and regulatory challenges and an evolving understanding of where AI is being used.
What are the potential liabilities?
Given that UK firms could be caught by both the GDPR and the EU AI Act, breaches could result in substantial penalties under each framework. Fines may be imposed under both, with upper limits for the most serious breaches of 4% of worldwide annual turnover or €20m under the GDPR, and 7% or €35m under the EU AI Act (in each case whichever is higher), in addition to civil litigation and reputational damage.
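The “percentage of turnover or fixed amount, whichever is higher” structure of these caps can be illustrated with a short calculation. The function below is a sketch: the percentage and fixed-amount inputs are parameters supplied by the caller, and the precise caps and their basis should always be checked against the applicable regulation's own wording.

```python
def fine_cap(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Upper fine limit: the higher of a percentage of worldwide annual
    turnover or a fixed amount (assumed basis; verify against the
    relevant regulation)."""
    return max(pct * turnover_eur, fixed_eur)


# Example: under a 4% / €20m cap, a firm with €1bn worldwide turnover
# faces a ceiling driven by the percentage; a €100m firm faces the
# fixed amount instead.
large_firm_cap = fine_cap(1_000_000_000, 0.04, 20_000_000)  # 40_000_000
small_firm_cap = fine_cap(100_000_000, 0.04, 20_000_000)    # 20_000_000
```

The practical point is that the fixed amount acts as a floor on the ceiling: even firms with modest turnover remain exposed to the full fixed-sum maximum.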
Compliance with GDPR and the EU AI Act requires a proactive approach, embedding privacy and ethical considerations throughout the development, deployment, and monitoring of AI systems. Firms should take a comprehensive approach, not only to meet legal requirements but also to ensure the ethical and responsible use of AI that benefits data subjects, creates trust, and minimises potential harm.
Our firm is well equipped and ready to provide expert guidance to ensure compliance in these areas. Please do not hesitate to contact us if we can be of assistance.
Written by David Ellis, Consultant Barrister – Regulation and Compliance