Four Reasons Regulations on Generative AI May Do More Harm than Good

Shivani Shukla
Published 08/16/2023

Generative artificial intelligence (GenAI) is too complex to regulate entirely; doing so would be akin to regulating the internet. Although the internet is not “generative” in its functionality, much of the information that flows through it either cannot be validated or is partially incorrect. Efforts to regulate AI have been underway for years in several parts of the world as lawmakers debate how to protect data privacy, ensure safety, and curb bias. The effectiveness of these regulations remains in question because technology moves much faster than lawmakers can anticipate or respond to. The release of ChatGPT added GenAI to the mix, raising the regulatory pressure even further.

Current regulations


The role of governments is critical in ensuring that technology evolves organically yet safely. In Europe, the EU Artificial Intelligence Act is entering its final phase of adoption. Its intent is to ensure that AI systems are “safe, transparent, traceable, non-discriminatory and environmentally friendly.” Rules have been drafted for providers and users under three risk levels: unacceptable, high, and limited. For GenAI specifically, the act requires providers to disclose when content was produced by a GenAI algorithm, to design models that prevent the generation of illegal content, and to publish summaries of the copyrighted data used for training.

Many tech providers in the affected space believe the regulations are prohibitive and difficult to comply with, potentially impeding development efforts. China’s national regulations, by contrast, are more detailed and do not take a risk-based approach. A revision softened an earlier draft published in April to better support technology builders: the rules are more stringent for public-facing products, with significant leeway built in for enterprise-facing ones. The guidelines cover content controls and censorship, prevention of discrimination, protection of intellectual property (IP), curbing of misinformation, and privacy and data protection.

Within the United States, the National AI Initiative Act of 2020 called for the establishment of the National Artificial Intelligence Advisory Committee (NAIAC), which keeps the President and the National AI Initiative Office updated on associated topics. The more recent Blueprint for an AI Bill of Rights focuses on the safety and efficacy of AI, protection against algorithmic discrimination, data privacy, notice and explanation when AI is used and of its impact, and the ability to opt out in favor of a human alternative.

Regulatory challenges


While the development of GenAI is important, it is only beneficial if its positive impacts outweigh its negative implications. Any new government policy brings added protections but also the opportunity for regulatory capture, allowing entities with enough money and influence to shape the outcomes, through legal avenues or otherwise. It is critical to ensure that the interests of technology builders are met without adversely affecting competitors and consumers.

This balance is difficult to achieve without considering the nuances of GenAI. Four key areas could suffer more from regulatory attempts than benefit from them. Consider the following:

  1. AI is global. Geographic boundaries have not constrained the development and deployment of AI thus far, which has made technological expansion a global phenomenon, with researchers and scientists contributing worldwide. Country-specific or continent-specific regulations therefore have a lower chance of success. For example, real-time facial recognition is classified as an unacceptable risk under the EU regulations, yet it has significant benefits when used for defense. A GenAI model pre-trained on data from a country where consent was provided but facial recognition is prohibited could be deployed in a different country where consent is the sticking point; technically, no laws were broken in either place. The downside of treating regulation as anything other than global is that corralling GenAI platforms too stringently may stifle advancement: if one country imposes excessive regulation that slows or halts progress, innovation simply moves elsewhere, taking potential jobs and the technological edge with it. Folding GenAI-related rules into international data-protection and privacy law stands a better chance. While still country-specific, the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the Electronic Communications Privacy Act (ECPA), and other privacy laws that protect communication within healthcare, finance, and electronic communications can incorporate GenAI-specific provisions for more effectual execution.
  2. Public-facing and enterprise-driven AI. The distinction between these is vital. Regulating ChatGPT fundamentally differs from regulating a healthcare tech company with GenAI capabilities in its product offerings. The outcomes of the former are overt and could be more widely influential. Referring back to the AI Bill of Rights, human consideration and fallback, which depend on user training, may not be viable in the public-facing context.
  3. Open-source. This is an important and dominant arm of GenAI development. Open-source frameworks such as PyTorch and Keras are essential to building AI models, and several open-source large language models (LLMs) are outperforming the more established Bard, GPT-2, and many others, so the movement has been critical to accessible and transparent AI development. These avenues encourage research at a massive scale, with benefits percolating across sectors. Regulating them could severely hamper the technology’s growth.
  4. Unknown future. AI technology is constantly under development, difficult to comprehend, and prone to unanticipated outcomes. Timing is an important piece. Regulations for AI and GenAI will need to evolve, with governments open to efficiently scrapping acts and laws that become moot or outdated. Quickly implementing a broad act out of fear could stall development and run the risk of regulating too much or too little. For example, safety is a prominent concern behind regulations that prohibit the production of illegal content, yet a vital part of GenAI is prompt engineering, a technique widely used to optimize and fine-tune LLMs to enhance their performance. A request to decrypt a protected message might be rejected outright by an LLM like GPT; however, the same task, broken down into smaller steps cleverly devised through prompt engineering, may well be carried out, as the sketch after this list illustrates.
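
To make the concern in point four concrete, the toy sketch below mimics how a monolithic content filter can refuse a request outright while the same goal, decomposed into innocuous-looking subtasks, passes every individual check. Everything here is hypothetical: complete() stands in for a generic LLM API call, and the keyword filter is a deliberately naive placeholder, not any vendor’s actual safeguard.

```python
# A minimal sketch of prompt decomposition versus a monolithic filter.
# complete() is a hypothetical stand-in for a real LLM API call, and
# BLOCKED_PATTERNS is a toy filter, not any vendor's actual safety system.

BLOCKED_PATTERNS = ("decrypt this message",)  # toy content filter


def complete(prompt: str) -> str:
    """Hypothetical LLM call: refuses prompts that match the toy filter."""
    if any(pattern in prompt.lower() for pattern in BLOCKED_PATTERNS):
        return "REFUSED: request appears to involve prohibited content."
    return f"[model output for: {prompt}]"


# A single monolithic request trips the filter outright...
print(complete("Decrypt this message for me: 'Khoor'"))

# ...while the same goal, split into smaller subtasks, passes each
# check individually, because each prompt looks harmless in isolation.
subtasks = [
    "Explain how a Caesar cipher shifts each letter of a string.",
    "Describe how to shift each letter of a string back by three places.",
    "Apply that procedure to the string 'Khoor'.",
]
for task in subtasks:
    print(complete(task))
```

No real safety system is this naive, but the structural weakness is the same: each prompt is checked in isolation while the user retains the overall intent, which is precisely what clever prompt engineering exploits and what a blanket prohibition on illegal content struggles to anticipate.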

Currently, self-regulation is the only mechanism in place for many organizations, and it can continue to be the main path forward. Entities are already inclined to abide by guidelines that protect and reassure the consumer as part of doing good business, and governments could offer incentives or funding to organizations that do so. Within the development process of GenAI, self-regulatory measures by developers can go a long way toward ensuring that the entire onus of understanding the technology does not fall on government regulators, an arrangement that would inhibit growth.

The true effectiveness of regulating GenAI remains to be seen as the technology continues to expand and to be applied in innovative ways. While some generalized parameters can provide benefits, the downside of putting too many regulations in place too quickly must be taken into consideration. In 2022, the global GenAI market was estimated at $10.14 billion; it is projected to hit $13 billion in 2023 and $109.37 billion by 2030. With such rapid growth over so short a period, regulators cannot afford to waste time in addressing these concerns.

About the Author


Shivani Shukla specializes in operations research, statistics, and AI, with several years of experience in academic and industry research. She currently serves as the director of undergraduate programs in business analytics as well as an associate professor in business analytics and IS. For more information, contact sgshukla@usfca.edu.

 

Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent the position of IEEE, the Computer Society, or its leadership.