The challenge of regulating hyper-realistic generative AI
-- By XuanyiLee - 23 Dec 2024
The release of ChatGPT in 2022 marked the first mainstream adoption of artificial intelligence ("AI") by everyday users. Since then, Big Tech has been locked in an AI arms race as each competitor seeks to consolidate its position in the lucrative AI market. For instance, Google has funnelled resources into Gemini, its own AI tool, whilst Microsoft has continued its investments in OpenAI. More recently, however, Grok-2, the large language model chatbot developed by xAI and deployed on X (formerly Twitter), has generated controversy over the rapid development of AI. On 9 December 2024, Grok-2 was upgraded with Aurora, an autoregressive image generation model that enables Grok to generate hyper-realistic images from user prompts on X. The controversy stems from Grok's seemingly more relaxed ethical guardrails compared to those of OpenAI's DALL-E 3 or Google's ImageFX. For instance, English Premier League footballer Kyle Walker complained when AI-generated images of him dressed as a suicide bomber circulated on social media (B Morse, Manchester City defender Kyle Walker condemns ‘vile, racist and threatening’ online abuse, calls for action, CNN, 13 December 2024, https://edition.cnn.com/2024/12/13/sport/kyle-walker-online-abuse-manchester-city-spt-intl/index.html). These developments have brought a difficult legal problem to the forefront – how do we regulate the use of AI?
The AI Arms Race
The rapid development of AI is clearly a transformative moment in our technological history. AI has undoubtedly provided significant opportunities for innovation and economic growth. Tools such as Gemini and ChatGPT have empowered ordinary individuals to streamline their tasks, complement the creation of original content and improve the overall efficiency of work. At the same time, the emergence of AI technologies has come with the risk of abuse – as exemplified by the use of image-generative AI to harass and threaten individuals online. When AI tools come with relaxed ethical guardrails, malicious actors can exploit the transformative power of AI to produce harmful and defamatory content. The hyper-realism of this content can cause serious reputational and emotional damage.
AI’s legal vacuum: existing frameworks and challenges
Current legal frameworks are unfortunately ill-equipped to regulate AI effectively. Traditional defamation, privacy and intellectual property laws struggle to address the unique challenges posed by AI-generated content. For instance, defamation laws often turn on human intent, but in the case of Grok-2, defamatory content is arguably created by an AI without direct human oversight (beyond the user's prompt). Determining liability – whether it lies with AI developers, users or the platform – remains a highly contentious issue without a simple answer.
Existing intellectual property doctrine likewise struggles to answer AI-related questions with clarity. For instance, it is not immediately clear who owns AI-generated content. If Grok-2 generates an ‘original’ image based on a user prompt, should the user or Grok-2's developers own the rights to that image? The answer is likely interlinked with the question of liability for defamation discussed above.
Freedom of speech and ethical guardrails: striking a balance
The controversy surrounding Grok-2’s image-generative capabilities highlights the need for robust guidelines governing the development and deployment of AI. Ideally, companies would responsibly implement safeguards that prevent abuse whilst maintaining the creative and commercial potential of AI. For example, OpenAI’s DALL-E 3 incorporates restrictions that refuse prompts likely to generate harmful or misleading content. Some form of guardrail is certainly essential to foster public trust in AI technology.
At the same time, overly restrictive guardrails could stifle the very innovation that AI is meant to drive, and limit the utility and practical benefits the technology could provide. Striking the right balance requires input from a diverse range of stakeholders, including policymakers, AI developers, and users. A collaborative effort, grounded in healthy discourse about what we as a society want AI to do for us, could ensure that AI is developed responsibly without compromising its potential to drive progress. Conversely, an overly restrictive regulatory regime could deter investment and innovation in the AI sector: companies may be reluctant to develop new AI technology if lawmakers institute overly burdensome compliance requirements, which would ultimately prove to be a lose-lose situation for all.
Potential regulatory approaches
Clearly, governments must take proactive steps to establish clear guidelines on the use and development of AI. One potential approach is to adopt a risk-based regulatory framework that categorises AI based on the potential for harm. For instance, AI tools used for autonomous vehicles would warrant much stricter oversight than large language models. Doing so would allow for a more nuanced regulatory regime that does not put blanket restrictions on all AI tools without considering the end use cases for each type of technology.
Another approach would be to impose accountability measures on AI developers and platforms. Companies could be required to conduct regular audits of their AI models and implement robust mechanisms to detect and prevent misuse. Transparency requirements could compel companies to disclose how their AI systems are trained, as well as any ethical considerations underlying their development. California has taken a step in this direction with its AI Transparency Act, signed into law in September 2024 (A Kourinian, H Waltzman, M Leibner, New California law will require AI transparency and disclosure measures, 23 Sept 2024, https://www.mayerbrown.com/en/insights/publications/2024/09/new-california-law-will-require-ai-transparency-and-disclosure-measures). For instance, the Act requires AI developers to make an AI detection tool available to users at no cost, allowing users to verify whether the content they are viewing has been created or altered by generative AI. California is the first US state to create specific requirements for the ‘watermarking’ of AI content. This type of accountability helps garner public trust in the use of AI, and represents a good starting point for other lawmakers to build on.
Conclusion
Ultimately, the rapid development of AI technology underscores the urgent need for policymakers to consider comprehensive regulation of the AI industry. While AI offers unprecedented opportunities for innovation and creativity, it also poses significant risks of abuse that demand careful oversight. By establishing clear legal frameworks that balance regulation against innovation, policymakers can ensure that AI is used responsibly as a catalyst for societal progress.