Before the next election, Meta says it will identify more AI-generated images

Paresh Jadhav

Meta has announced plans to identify more AI-generated images ahead of upcoming elections, responding to concerns from information experts and lawmakers that these tools could be misused to spread election disinformation in the United States and dozens of other nations.

The company already uses invisible watermarks and metadata to label images generated by its own AI imaging software, and will expand that labeling to images produced by tools from Adobe, Google, OpenAI, Shutterstock and Midjourney.

AI-generated images

Meta, the owner of Instagram, Facebook and Threads, recently announced it will soon label images created with artificial intelligence as part of a plan to prevent election-related misinformation from spreading on its platforms. Working closely with other tech giants, it will use watermarks or metadata to identify content produced with AI tools.

Meta said the labels will appear on images uploaded to its platforms and will be offered in multiple languages, and that it is working to keep these markers from being stripped out when shared photos are edited or altered.
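In practice, provenance labels like these typically travel as metadata embedded in the image file, for example an IPTC/XMP digital source type value such as "trainedAlgorithmicMedia", alongside invisible watermarks baked into the pixels themselves. As a rough illustration only, and not Meta's actual detection pipeline, the sketch below scans a file's embedded metadata text for that marker; a hypothetical helper like this would miss images whose metadata has been stripped, which is exactly why invisible watermarking is used as a second layer.

```python
import re
import sys

# Marker used by the IPTC digital source type vocabulary for AI-generated
# content. This is a heuristic scan of the raw file bytes (where XMP/IPTC
# metadata is stored as text); editing or re-saving an image can strip it,
# so a negative result proves nothing.
AI_MARKER = re.compile(rb"trainedAlgorithmicMedia")

def looks_ai_generated(path: str) -> bool:
    """Return True if the file carries an AI-generation metadata marker."""
    with open(path, "rb") as f:
        data = f.read()
    return bool(AI_MARKER.search(data))

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        status = "AI marker found" if looks_ai_generated(image_path) else "no marker"
        print(f"{image_path}: {status}")
```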

Tech experts and lawmakers have expressed concerns that new AI programs capable of producing realistic images, videos and audio could be used to deceive voters ahead of upcoming elections in the US and elsewhere. A robocall that mimicked the voice of President Biden and a viral image of Pope Francis wearing a puffy white coat were among the examples that raised such alarms.

Deepfakes

Deepfakes are digitally altered photos, videos and audio files that appear real but are in fact fabricated. They can be used to manipulate emotions or influence public opinion; for instance, a manipulated video of Democratic House Speaker Nancy Pelosi was slowed down to make her appear impaired while speaking.

Experts caution that it is important to know which signs to look for when trying to spot an AI-generated image or video, such as blurry details, strange lighting or disfigured hands. Image and video generators often struggle to reproduce human hands accurately, sometimes adding extra fingers or contorting them unnaturally, but these clues are not always reliable as the tools improve.

Although increasingly widespread, deepfakes remain difficult to create: producing a convincing one takes considerable time, money and skill, even as tools such as FakeApp and FaceSwap-GAN make the process easier than ever. Their impact on future elections and other high-profile events also remains uncertain.

Meta’s response

Meta, which owns Facebook, Instagram and Threads, will start labeling images generated by other companies' AI tools in the coming months, using invisible markers built into each file so that the watermarks cannot easily be stripped out.

This initiative aims to address concerns that artificially generated or altered images, videos and audio could spread false information ahead of global elections this year. Tech executives and experts alike are increasingly wary that advances in generative AI will allow convincing deepfakes to be created and disseminated more widely.

The company will collaborate with technology partners to promote common technical standards for identifying when an image was AI-generated, and has introduced a policy requiring advertisers to disclose when they use AI to create or alter political ads on its platforms. The policy will take effect within the year and will apply globally.

Legality

Meta, the parent company of Facebook and Instagram, recently announced it is developing tools to detect AI-generated images created with other companies' image generators. Meta already tags photorealistic images created with its own AI as "Imagined with AI".

According to the company, it is building the capability to detect these invisible markers accurately and at scale, something it says will be particularly crucial during election years, when misinformation targeting voters can become prevalent.

The US Copyright Office recently ruled that an artist cannot register an artwork created entirely by artificial intelligence (AI), because copyright law requires at least one human author.

