UK (Parliament Politic Magazine) – The House of Lords, an influential institution responsible for scrutinizing legislation and shaping public policy, has recently declared its intention to investigate Large Language Models (LLMs) and has issued a call for evidence.
As the upper chamber of the UK Parliament, it is deeply concerned that the rapid advancement of this form of generative AI is surpassing our comprehension of its potential dangers. Consequently, the House of Lords is eager to explore ways in which the government and lawmakers can effectively mitigate the associated risks.
British Government Plans To Regulate the Development of Artificial Intelligence
The British Government has recently unveiled its AI White Paper, outlining its strategy for regulating the development of artificial intelligence. The current plans are based on principles and will not initially be enforced by law. Instead, the government aims to provide guidance that encourages innovation.
These principles primarily revolve around ensuring safety, security, transparency, fairness, accountability, governance, and paths to contestability. By focusing on these aspects, the government aims to address concerns related to the potential risks and ethical implications of AI.
However, as businesses and individuals delve into the rapidly evolving realm of generative AI, experts have cautioned that it could pose a threat to jobs and challenge our traditional notions of work. Recognizing the need for a comprehensive understanding of the subject, the House of Lords is actively seeking evidence to inform its decision-making on LLMs over the next three years.
European Union Also Sets Its Own Approach For Artificial Intelligence Regulation
In the midst of these developments, the European Union finds itself at a crossroads in terms of regulation. A significant milestone has been achieved with the recent political agreement that paves the way for the forthcoming EU AI Act.
Anticipated to have far-reaching implications, similar to the international reverberations caused by GDPR in the realm of data protection, the AI Act is under close scrutiny. However, industry giants like Microsoft and Google have already voiced their dissent, contending that the EU’s definition of ‘high-risk AI’ is excessively wide-ranging.
UK Parliament Calls For Large Language Model Investigation
The call for evidence by the House of Lords Communications and Digital Committee highlights the widespread experimentation with the potential of this technology by governments, businesses, and individuals.
The opportunities that arise from generative AI are vast, with Goldman Sachs estimating a potential addition of $7 trillion (approximately £5.5 trillion) to the global economy over a decade.
However, it is important to acknowledge that this advancement may also cause economic disruption, as automation could expose around 300 million jobs to potential replacement. Nevertheless, it is worth noting that the same process could also create numerous new roles.
The rapid pace of development coupled with a limited understanding of the capabilities of these models has raised concerns among experts regarding the growing risk of harm. In response, several industry figures have called for urgent reviews or the temporary suspension of new release plans.
The House of Lords Communications and Digital Committee’s call for evidence sheds light on the immense potential of generative AI, while also emphasizing the need for caution and thorough evaluation of its impact.
The House Of Lords Claims That LLMs Generate Fictitious Answers That Could Be Dangerous
The Committee acknowledges that LLMs have the potential to produce conflicting or false responses, commonly referred to as ‘hallucinations’. This poses a significant risk, especially in industries lacking adequate safeguards. Furthermore, as repeatedly emphasized on diginomica, these tools have the capacity to rapidly disseminate misinformation, which is a cause for concern.
Additionally, the inquiry highlights the presence of biased or harmful content within training datasets. Moreover, the opaque, 'black box' nature of machine learning algorithms makes it difficult to understand why a model acts as it does, or to predict how it will behave in the future.
Consequently, the Committee is deeply concerned about the obstacles these issues pose to the secure, ethical, and reliable development of LLMs. Ultimately, these challenges undermine the potential to fully leverage the benefits they could offer.
The Committee's central concern is determining what actions must be taken within the next one to three years to ensure that the UK is well-prepared to address the risks, and seize the opportunities, presented by LLMs.