Tech Giants Roll Out Labels for AI-Generated Content to Combat Disinformation

The age of artificial intelligence is upon us, and as the technology grows more ubiquitous, it brings new challenges with it. Companies are already facing a slew of issues over how their tools are being used and are taking steps to address these difficulties.

However, looming large over the growing prevalence of AI is the specter of government regulation. Indeed, there can be no doubt that our intrepid lawmakers in Congress are salivating over the idea of expanding the state’s power by imposing rules and restrictions on these tools.

Meta, the parent company of Facebook, and OpenAI have already taken steps to keep AI technology from being used to peddle disinformation or for other inappropriate purposes. Meta recently announced that it is launching a new initiative to add “AI generated” labels to images created using software like DALL-E and Midjourney. Meta Global Affairs President Nick Clegg explained that this move is about “clearly labeling AI-generated imagery” to ensure that its users are not deceived.

In the coming months, Meta will start adding “AI generated” labels to images created by tools from Google, Microsoft, OpenAI, Adobe, Midjourney and Shutterstock, Meta Global Affairs President Nick Clegg said in a blog post Tuesday. Meta already applies a similar “imagined with AI” label to photorealistic images created with its own AI generator tool.

Clegg said Meta is working with other leading firms developing artificial intelligence tools to implement common technical standards — essentially, certain invisible metadata or watermarks stored within images — that will allow its systems to identify AI-generated images made with their tools.
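For readers curious what those “invisible” markers actually are, here is a minimal Python sketch of the general idea: a provenance tag stored in a PNG file’s metadata, invisible in the rendered image but readable by software. The field names are hypothetical, and the real standards Clegg refers to, such as C2PA, are far more elaborate, cryptographically signed formats.

```python
# A simplified illustration (NOT the actual C2PA standard): store an
# invisible provenance tag in a PNG's text metadata, then read it back.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image(src_path: str, dst_path: str) -> None:
    """Embed a hypothetical 'ai_generated' marker in the PNG's metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # invisible to viewers
    meta.add_text("generator", "example-model-v1")  # hypothetical field
    img.save(dst_path, pnginfo=meta)

def is_ai_generated(path: str) -> bool:
    """Report whether the marker is present; absence proves nothing."""
    return Image.open(path).info.get("ai_generated") == "true"
```

A platform like Facebook could run a check of this kind on every upload and attach a visible label whenever the tag is found.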

Meta’s labels will roll out across Facebook, Instagram and Threads in multiple languages.

Meta’s announcement comes as online information experts, lawmakers and even some tech executives raise alarms that new AI tools capable of producing realistic images — paired with social media’s ability to rapidly disseminate content — risk spreading false information that could mislead voters ahead of 2024 elections in the United States and dozens of other countries.

In a post on its blog, OpenAI also announced that it is taking steps to prevent its technology from being used for nefarious purposes. It is rolling out a system that embeds provenance metadata in images to alert users that a given picture was generated using artificial intelligence.

Images generated with ChatGPT on the web and our API serving the DALL·E 3 model will now include C2PA metadata. This change will also roll out to all mobile users by February 12th. People can use sites like Content Credentials Verify to check if an image was generated by the underlying DALL·E 3 model through OpenAI’s tools. This should indicate the image was generated through our API or ChatGPT unless the metadata has been removed.

Metadata like C2PA is not a silver bullet to address issues of provenance. It can easily be removed either accidentally or intentionally. For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API.
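OpenAI’s caveat is easy to demonstrate: because the metadata lives in the file container rather than in the pixels themselves, any operation that re-encodes the image quietly discards it. Continuing the hypothetical sketch above, re-saving just the pixel data produces a file with no trace of the tag.

```python
# Demonstrates OpenAI's caveat: metadata lives in the file, not the pixels,
# so re-encoding (as screenshots and many upload pipelines do) discards it.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixel data into a fresh file; the text chunks are lost."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)  # saved without pnginfo, so no provenance tag

# After strip_metadata("tagged.png", "clean.png"), the earlier
# is_ai_generated("clean.png") check returns False even though the
# picture itself is unchanged.
```

That asymmetry is the point of OpenAI’s hedge: the tag can confirm an image’s origin when it survives, but its absence proves nothing.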

These moves appear to be a decent start when it comes to preventing AI technology from being abused. Indeed, a glaring example of how AI can be misused was highlighted when someone generated sexually suggestive AI images of singer Taylor Swift.

On the other hand, AI images can also be used for innocent fun. When the Senate was going through its silly dress code controversy, I couldn’t help but use AI to lean into the absurdity.

However, even though these images were completely ridiculous, there were still many who believed they were real. I got several emails from fact-checkers asking if they were real or generated by AI. One of them even contacted Sen. Cory Booker’s office to make sure he did not actually wear pink booty shorts at the Capitol building!

Still, my images were harmless. The worst that could happen would be some people thinking Sen. Rand Paul was mocking the dress code fiasco by showing up in a red robe. But what if someone had posted pictures of high-profile individuals engaged in questionable activities, and the images were realistic enough to fool people? It is not hard to see where this could lead, is it?

Yet even if Meta and OpenAI are taking steps to prevent negative outcomes, it is only a matter of time before government intervention becomes a reality. With more regulation will come stifled innovation and a distinct decline in competition. Of course, this will only make the situation worse. But some of the larger companies might actually welcome more restrictions because they help cement those companies’ place in the industry while making it even more difficult for smaller competitors to introduce innovation into the market.

Of course, if these companies do manage to take enough precautions, it could hold off Big Brother for a bit. But it will take a concerted battle to prevent the state from taking over as much of the industry as it can.
