
The Case Against Early and Restrictive AI Regulation

Balancing Innovation with Risk Mitigation

April 2024

by Blake Duggan

Artificial intelligence (AI), with its rapidly increasing capabilities, ignites both excitement and trepidation, along with understandable demands for swift and extensive regulation to prevent misuse. While responsible and ethical AI development is essential, rushing into broad regulatory control (including the risk of industry-driven regulatory capture) can inadvertently stifle the innovation needed to maximize AI's potential and address the very problems those regulations intend to solve.


To offer necessary context: regulatory capture is an economic theory positing that commercially motivated players actively seek controls that create barriers to entry, producing a marketplace where regulatory agencies are dominated by the very industries they are charged with regulating. An agency charged with acting in the public interest can instead end up acting in ways that benefit incumbent firms in the industry it is supposed to regulate.(1)

 

According to Nobel laureate George Stigler,


"Regulation is acquired by an industry and is often designed and operated primarily for its benefit." -- George Stigler


The power and resources of large technology companies present similar risks in today's rapidly evolving AI landscape.



Innovation Is the Engine of Human Progress.


Technological innovation has been a consistent driver of economic growth and societal well-being. Advancements across diverse fields like medicine, communication, and transportation have dramatically improved lives.(2) AI stands poised to accelerate this progress, potentially transforming fields like healthcare, scientific research, environmental protection, and countless others. Potential benefits range from enhanced disease diagnosis and treatment to new tools for combating climate change and optimizing resource allocation.(2)


  • Healthcare: AI is transforming healthcare with precision medicine (analyzing patient data for personalized treatments), improved medical imaging analysis (helping doctors diagnose diseases earlier), and accelerated drug discovery (finding new treatments faster).

  • Scientific Research: AI breakthroughs like DeepMind's AlphaFold are revolutionizing protein structure prediction (helping researchers design new drugs), and scientists are using AI to analyze astronomical data for discoveries about the universe.

  • Environmental Protection: AI applications include refined climate modeling (for better forecasts and decision-making), biodiversity monitoring (to protect endangered species), and optimizing smart grids (for energy efficiency and sustainability).



There Are Costs to Regulating Too Much and Too Early.


Despite its enormous potential, AI understandably raises concerns about ethical implications, algorithmic bias, data privacy violations, potential job displacement, and even the distant, speculative possibility of machines exceeding human-level intelligence. These concerns deserve careful consideration and proactive solutions to ensure that AI's development and use align with human values and societal goals. 


Premature, heavy-handed regulation risks suppressing the very innovation that could solve the problems we seek to address. History serves as a cautionary tale: cases where industries exerted undue influence over the agencies meant to oversee them have repeatedly hindered progress and limited consumer benefits. Examples abound, from the financial sector, where complex regulations often favor established incumbents, to industries like telecommunications, where past attempts at restrictive governance have hampered innovation.


A particularly instructive example is the early deployment of broadband internet service. In the early days of the internet, legacy telcos held considerable power over the infrastructure that delivered this service. In some cases, these telcos lobbied for regulations that would limit competitors' ability to provide internet service or build their own infrastructure (e.g., laying new cable or installing new or upgraded telephone poles). This regulatory capture reduced competition, leaving consumers with fewer choices of internet providers and producing higher prices, slower speeds, and delays in the rollout of cutting-edge broadband technologies in certain areas.


Highly regulated sectors like healthcare, education, and housing demonstrate how excessive restrictions often lead to inflated costs, limited consumer choice, and, ultimately, diminished progress for society.(3)


Consider a few examples:


  • Healthcare: Complex approval processes for new drugs and medical devices have contributed to high costs and delayed life-saving treatments. Similarly, overly strict regulations on telehealth services across state lines have limited access to care, especially in rural and underserved areas.


  • Education: Emphasis on standardized testing has been criticized for narrowing curricula, reducing creativity, and contributing to higher costs. Similarly, overly complex accreditation rules and occupational licensing requirements have created barriers to entry for new providers and teaching methods.


  • Housing: Restrictive zoning limits the supply of affordable housing in many urban areas, driving up prices and contributing to housing shortages. Similarly, overly complex or outdated building codes can increase construction costs and discourage the use of new materials and techniques.


The rapid and dynamic nature of AI development demands regulations that can adapt alongside the technology, ensuring continued innovation in crucial fields. We must strike the right balance: cultivating AI innovation demands a nuanced approach that acknowledges the potential benefits alongside the risks. If we replicate the regulatory patterns seen in these sectors, the field will inevitably suffer under the burden of compliance, losing its agility and innovative edge.



We Believe Regulatory Frameworks and "Good-Actor" Collaboration Win.


At mXa, we believe a more collaborative, iterative model prioritizing ethical principles and adaptive oversight holds far more promise. We are seeing many of these tenets take shape inside regulatory frameworks, like the current administration's recently issued Executive Order on AI Innovation (4), which seeks to guide public and private interests towards:


  1. Responsible AI Development: Safety, security, equity, and a commitment to civil rights. It emphasizes the need for AI systems to be developed with the utmost care to ensure:

    • They are safe, secure, and reliable.

    • They avoid biases and discrimination.

    • They protect individual rights and freedoms.


  2. Harnessing AI for Progress and Competitiveness: Fostering innovation, competition, and collaboration in AI. It promotes the idea that:

    • Innovation is needed to maintain American leadership in AI.

    • Collaboration is crucial for unlocking AI's full potential.

    • Economic benefits should accrue to the American workforce.


  3. Protecting Citizens and Setting Global Standards: Protection of citizen rights, privacy, and consumer interests, as well as responsible AI use by the government itself. It recognizes:

    • The need to protect consumers as they interact with AI on an increasing basis.

    • The need for robust privacy regulations to safeguard personal information.

    • The government's role as an ethical role model in its own use of AI.

    • The U.S. role in shaping responsible AI standards globally.


We must develop safeguards to protect against potential misuse. At the same time, we should prioritize fostering industry-led safeguards, supporting adaptive regulation, promoting AI literacy across society, and weaving ethical considerations into the very fabric of AI development. This approach maximizes the chances of harnessing AI's power for the greater good while mitigating potential harms, allowing it to become a transformative force for positive change across diverse domains.

 


Public Policy Regarding AI Should Incorporate Five Key Elements:


1) Recognize the Challenge of AI Use Case Diversity (not all innovations should be subject to the same regulatory method, timeline, or dose). 

Remember that AI isn't a single technology but a diverse set of tools that essentially represents all future software innovations.(5) A "one-size-fits-all" regulatory approach will inevitably fail. Why do we say this?


  • There is diversity of application: AI is used across a wide spectrum of fields, from finance to education. A single set of rules can't address the unique challenges of each sector and usage type.

  • We are operating at a high pace of change: AI is evolving rapidly. Regulations need to be dynamic and adaptable to avoid stifling innovation by becoming outdated.

  • We are subject to black-box complexity: Complex AI systems can be difficult to fully understand, so regulatory oversight must be targeted, taking into account the specific technology and its potential risks and benefits.


Instead, we need nuanced and adaptable regulations that account for the distinct uses and risk profiles of various AI applications. A system designed for language generation may require entirely different governance from one deployed for autonomous vehicles. Regulations should evolve in harmony with the dynamic nature of AI. When government regulation forces a single direction on companies, it raises barriers to entry for smaller parties and makes it harder for new entrants to compete on the specific benefits smaller brands offer. Financial regulation offers an example: we have seen people gravitate toward bigger organizations instead of working with regional or local ones.


2) Encourage Industry-Led Self-Regulation

Encouraging and empowering self-regulation by good actors, along with the development of technical standards within the AI industry, is crucial. Companies and researchers at the forefront of AI development are well positioned to forecast potential risks and proactively create solutions. This transparent, collaborative approach can foster responsibility, build public trust, and promote accountability. Organizations dedicated to setting AI standards play a vital role in driving best practices, ensuring transparency, and safeguarding ethical development.


3) Curate an Informed Public (Be Transparent About Risks & Safeguards)

Alongside focused regulation, fostering AI literacy is essential.(6) Public anxieties often arise from misunderstandings about the technology. Promoting open conversations and clear explanations of AI will demystify the field and empower individuals to participate meaningfully in shaping its future. This inclusive dialogue allows for constructive criticism and identifies areas where concrete regulation is truly necessary.


4) Leverage Ethics as the North Star Guiding Principle

The debate surrounding AI isn't just about efficiency; it's about aligning technological progress with societal well-being and human values. Ethical AI development should be at the heart of all endeavors. This means building diverse teams to combat biases, ensuring algorithmic transparency, prioritizing user privacy, and constantly analyzing potential negative impacts before they become entrenched.


5) Guard Against Regulatory Capture (Remember: Early Innovators Want This to Preserve Their Market Advantage and Stifle Competition!)

We must be vigilant against regulatory capture, where regulations ultimately become tools that benefit the regulated industry rather than protect the public interest. This risk is significant in the AI sector. Powerful technology companies could exploit the complexity of AI to shape regulations that serve their purposes, potentially restricting competition and stifling innovation from smaller entities. This underscores the importance of ensuring that multiple stakeholders, especially those representing consumers and broader social interests, have meaningful seats at the table where AI regulations are shaped. It also emphasizes that while industry collaboration is important, strong oversight mechanisms are crucial.



The Bottom Line: AI's potential is vast; we must support innovation to reap the rewards while guarding against negative impacts on society.


AI innovation is going to happen, but its development is also an area of massive social responsibility we can't avoid. As outlined above, we must develop safeguards against misuse while prioritizing industry-led safeguards, adaptive regulation, broad AI literacy, and ethics woven into the fabric of AI development. That is how we harness AI's power for the greater good while mitigating potential harms.


At mXa, we're keen to engage in meaningful conversations about challenging topics. 

 

  • Do you share this belief in finding a balanced path between AI innovation and responsible regulation? 

  • What other factors must we consider as we seek to regulate AI effectively without hindering its vast potential for good? 


Let's use this platform for a thoughtful, collaborative discussion on how to shape the future of AI innovation.



About us: mXa, built on the 20+ year foundation of Method360, was founded to serve fast-growth companies and the unique challenges they face. We understand that inorganic and organic growth provoke change, ambiguity, and uncertainty that can deeply burden the organizations involved. By seeking to understand the human element in M&A and fast-growth environments, mXa embraces a unique, contrarian approach to advising clients, seeking to realize maximum value in alignment with their business objectives.



Interested in learning more about our capabilities or discussing your M&A or AI story? We're here to help.




Citations

  1. Stigler, George J. (1971). The Theory of Economic Regulation. Bell Journal of Economics and Management Science, 2(1), pp. 3-21.

