Why AI could be a legal nightmare for years to come

'AI' written on what appears to be a motherboard element soldered into place
(Image credit: Getty Images)

While the number of artificial intelligence (AI) tools on the market increases every day, and more businesses are eager to integrate generative AI into their workloads, regulation is still immature.

Many countries are investing heavily in these systems, with locked-in development cycles and an entrenched obsession with building ‘world-beating’ AI to keep up with competing nations.

But the spring period for AI may soon come to an end, under the harsh sun of pending regulation. In some regions, AI developers have less than a year to make sure their house is in order, or they could face the full force of the law.

Developers across the EU are staring down eye-watering fines if they implement AI systems in breach of privacy rights, while the US and UK have set out long-term goals of eliminating bias without impinging on innovation.

Legislation is bubbling across the world

Major legislation has been in the works for many years, but the speed at which it is being realized varies by region.

How the US is regulating AI

In the US, AI has been a point of focus for the White House since at least the Obama era. The Biden administration proposed its AI Bill of Rights in October 2022, with the aim of guiding policy to protect US citizens from unsafe AI systems and biased algorithms.

In May, the White House also released a National AI R&D Strategic Plan to guide safe and ethical investment in AI. In August, it launched the AI Cyber Challenge, a $20 million competition run in collaboration with private sector AI firms such as Google, OpenAI, and Microsoft to fund developer bids for systems that can protect US critical national infrastructure (CNI).

Public-private collaboration, rather than a carrot and stick approach, is at the heart of the US strategy. The White House has received voluntary commitments to uphold transparency and safety in AI development from 15 prominent software firms including OpenAI, Google, Microsoft, Amazon, IBM, and Nvidia.

Without strong regulation to hold these companies to account, though, such commitments are far from binding. Although the White House has said it is “developing an executive order” to enforce responsibility in AI development, this has yet to materialize. The inclusion of the highly controversial Palantir – described by Open Democracy as a “spy-tech firm” – on the list of voluntary agreements will not inspire confidence among privacy rights groups.

How the EU is regulating AI

The EU’s AI Act has passed all the major hurdles to being signed into law. The bill seeks to regulate the development and implementation of AI systems according to the risks each poses. Developers working on ‘high-risk’ AI systems – including those that can infringe on the fundamental rights of citizens – will be subject to transparency requirements. These include, for example, disclosing training data.

The law also sets out criteria to define what an ‘unacceptable risk’ might be; early drafts suggested this includes systems capable of subliminally influencing users, and an eleventh-hour amendment added real-time biometric detection.

The EU has also set out penalties for non-compliance with these requirements for developer transparency and adherence to its risk classifications: offending companies could pay €20 million ($21.4 million) or 4% of their annual worldwide turnover, akin to GDPR fines.

How the UK is regulating AI

Through its AI Whitepaper, the UK wants to strike a balance between risk aversion and support for innovation. Some industry insiders hail the document, which is under consultation, as the right move, but the UK still lags behind the EU when it comes to AI law.

The UK government has rejected the classification of AI technologies by risk, with the hope that a contextual approach based on ‘principles’ such as safety, transparency, and contestability will encourage innovation.

While some welcome this, the UK government has left the door open to changes down the line that might muddy the picture. It currently favors non-statutory AI regulation, in which existing regulators apply AI principles where possible within their remits, but it has said this approach could be ripped up and replaced with a statutory framework depending on the results of its experiment, with harsher regulations and new regulatory powers on the cards.

Many developers appear happy with the UK government’s approach to date, but more could be put off by its mercurial nature and inability to give guarantees.

What about the rest of the world?

In Australia, the Albanese government ran a consultation process on AI regulation from June to August 2023. Submissions will be used to inform the government’s future AI policies.

“Italy has already banned the use of ChatGPT and Ireland this week blocked the launch of Google Bard,” says Michael Queenan, CEO and co-founder of Nephos Technologies. 

“France and Germany have expressed interest in following in their footsteps. So, will they follow the AI Act or continue acting independently? The trouble is that you can’t legislate globally. Regulating AI use in the EU won’t stop it from being developed elsewhere; it will just take the innovation out of the region.”

How will EU regulations affect US companies?

The EU’s approach could mean that firms in regions such as the US, which lacks centralized AI regulation, enjoy developmental freedoms that allow them to maintain their edge over competitors.

Similarly, some argue the AI Act’s stringency could drive companies out of the region altogether. Sam Altman, CEO at OpenAI, warned his company could leave the EU if it found AI Act regulations too difficult to implement, prompting criticism from lawmakers. But Altman backpedaled just days later, tweeting OpenAI has “no plans to leave”.

As with the GDPR, it’s likely AI firms will follow the guidelines as prescribed to continue their operations within the EU.

“It remains to be seen if these new proposed solutions will offer a meaningful distinction,” Mona Schroedel, a data protection specialist at national law firm Freeths, tells ITPro. “If a ‘homegrown’ data center is run, staffed, and supported within the EU, rather than outsourcing certain elements of the services, then such an offering would provide a real alternative for companies wishing to avoid falling foul of the rules governing international transfers.

“We have seen a number of data controllers seek exactly such a streamlined environment to avoid the complications of having to map the data flow within systems which may ultimately be accessible from outside the EU. For those in Europe, this is likely to be the best way to ensure that appropriate safeguards are in place at all times for compliance purposes.”

Firms in the US could benefit from the EU-US Data Transfer Framework, which is set to ease the transfer of data across the Atlantic. But experts doubt the framework will stand the test of time, with Gartner’s Nader Henein telling ITPro it’s likely to be overturned within five years. In this way, EU law on AI could prove a headache for US firms, especially if US regulation proves significantly different from the risk-based EU approach.

At present, it’s almost impossible for copyright holders to know exactly where their IP may have been used to train a model, and for what purpose. Discussions around how AI could kill art as we know it have been raging for some time, and the advent of models such as DALL·E 2 or Stable Diffusion has added fuel to the fire.

A core complaint from the art community is that AI art generators can only produce images because they have been trained on vast amounts of work created by real artists. Therefore, the argument goes, their output is tantamount to plagiarism and should be treated as such.

It’s possible this has failed to register on the radar of many businesses. Few outside the media industry have in-house art styles that could be cribbed for AI training purposes, and the benefits of hassle-free stock images may outweigh immediate concerns over sourcing.

Nevertheless, this is something of a legal time bomb, and one that could only become more difficult and costly to remedy as time goes on.

The legal implications of AI-generated content

The album cover from AI artist ghostwriter977

(Image credit: ghostwriter977)

‘Heart on My Sleeve’ by ghostwriter977 is an AI-generated song purporting to be a collaboration between Drake and The Weeknd. Its creator used music from both artists to generate a virtual model of their voices.

Its creator could face legal action from Universal Music Group (UMG), which says the song breaches copyright. But it could prove a thorny case, as the question of whether using IP to generate a new product constitutes fair use remains unresolved.

Many governments and regulators are pondering these concerns right now, and they will only intensify as LLMs become more sophisticated.

The EU’s AI Act contains provisions that would require AI developers to disclose the data sources used to train LLMs, and this may provide rights holders with the transparency and redress necessary to quell potential lawsuits.

“What we need to see happen is that technology really needs to be accessible to individuals in a way that is open, transparent, can be audited and understood,” Liv Erickson, ecosystem development lead at Mozilla, tells ITPro.

“We really need to understand how to tackle these challenges emerging around bias, misinformation, and this black box of not necessarily understanding what these systems are being trained on.”

It’s also all but certain that new technologies like real-time deepfakes will be weaponized in the coming years. Models like Microsoft’s VALL-E can already synthesize a person’s voice from only three seconds of input audio, and future models could allow people’s voices or likenesses to be replicated to promote products, damage reputations, or enable spear phishing.

We’re in an AI gold rush, and the pace at which the technology is developing has inevitably forced the hand of public bodies the world over. While the precise developments to come cannot be predicted, one can say with certainty the legal landscape for AI will look very different in just a few years.

The EU has been known as a kingmaker when it comes to regulation in the 21st century, with GDPR perhaps the best example of the Union setting a precedent that companies the world over have to follow. It looks set to retain this crown as we push through the century’s third decade and work through AI regulations.

That said, there’s ample potential for the US to prove itself as a regulatory cradle, playing host to the lion’s share of home-grown AI firms. The Biden administration’s Inflation Reduction Act is an example of renewed strength when it comes to market stimulation and leadership, having prompted a scrambled response from the EU – rather than the reverse. 

If it can replicate this approach in AI, fuelling innovation while enforcing demands with regard to workers’ rights and reporting, the strength of US regulation could be the decisive factor for the future of the AI market.

Collaborating with the UK on the likes of the Atlantic Declaration may only strengthen the standing of both countries, but much has yet to be proved. Not only is the UK’s legislative approach to AI untested, but its standing as an international hub for AI has yet to be established.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.