Amid Growing Concerns Over AI, Sen. Josh Hawley Is Pushing for Regulation


As artificial intelligence grows in reach and in its potential impact on many aspects of society, lawmakers have become concerned enough to consider new rules to regulate it. One of those lawmakers, Sen. Josh Hawley of Missouri, has a long history of engaging on Big Tech issues, and AI fits squarely into that pattern.


Axios is reporting this morning that Hawley is circulating a framework for potential federal guardrails on AI, and he may be able to find bipartisan support for such a move. According to the report, the plan rests on five principles.

The details: Hawley is unveiling, and circulating to colleagues, a framework for AI legislation focused on corporate guardrails. The five principles in the outline, according to a document viewed by Axios, are:

  1. Creating a legal avenue for individuals to sue companies over harm caused by AI models.
  2. Imposing stiff fines for AI models collecting sensitive personal data without consent.
  3. Restricting companies from making AI technology available to children or advertising it to them.
  4. Banning the import of AI-related technology from China and prohibiting American companies from assisting China’s development of AI.
  5. Requiring licenses to create generative AI models.

Most of those principles are likely to garner bipartisan support, though some groups on the right – such as those opposed to more licensing and regulation – may question the need for such legislation.


In an interview with Axios, Hawley said, “Looking at different pieces of this is going to be important, and so that’s why I think we absolutely need to make sure that individuals have real power here.” He does not want that power – or vast troves of personal data – concentrated in the hands of major corporations.


But the push for regulation needs to be quick yet nuanced. The tech space thrives on innovation, and legislating with broad strokes could stifle it – a concern lawmakers will need to address in the coming weeks and months if they want to get bills to the floors of their chambers, at both the state and federal level. Hawley, for his part, is moving now, with a plan intended to allow safe innovation in the AI realm without smothering it.

One crucial factor driving the need for regulation, however, is the ethical dilemma posed by AI. As AI systems become more sophisticated and autonomous, they are entrusted with critical decision-making tasks that carry real-life consequences. From autonomous vehicles to facial recognition algorithms, these technologies hold immense power, demanding ethical guidelines to govern their behavior and prevent harm to individuals or society as a whole.

Furthermore, privacy concerns (as Hawley notes) loom large in the age of AI. As algorithms process vast amounts of data, the risk of unauthorized access, misuse, or breaches becomes a pressing concern. Robust legislation may be essential to protect personal information and ensure that AI systems operate within legal boundaries.


And while Hawley doesn’t directly address it, accountability is another area that may necessitate new regulation. With AI increasingly making decisions and executing actions, determining liability in cases of error or harm becomes complex. Clear legal frameworks are needed to allocate responsibility, establish accountability mechanisms, and ensure that developers and users can be held liable for any AI-related damages.

The push for advancement in AI will clearly bring a push for regulation. Hawley is out front early on that, and there may well be a need for it. But legislation and regulation need to be firm without becoming a jackhammer that breaks what it’s intended to assist.
