AI videos spread misinformation amid Hurricane Melissa’s landfall

SPOKANE, Wash. – The rise of artificial intelligence has made it harder to tell real images from fake ones, a problem on full display during Hurricane Melissa’s landfall in Jamaica and Cuba.

Every AI-created image shown in this story, including those of a fake Hurricane Melissa and its alleged damage, is clearly labeled as fake. These images have spread widely on social media without proper labeling, making it challenging for users to discern their authenticity.

“We really are living in a world where we have to try to decipher what’s real and what’s fake, but the AI technology has advanced so fast that it’s harder and harder to do so,” Agnieszka McPeak, Associate Dean of Gonzaga Law, stated.

Legal responsibility for misleading images of humanitarian disasters like Hurricane Melissa is not straightforward. McPeak explained the legal landscape.

“Washington state has a new law, actually, where you can have some remedies when people use your likeness, but when it comes to a mass disaster, something that isn’t like a deepfake of a person’s voice or face, there’s far fewer protections,” McPeak said.

Social media companies currently remain immune from lawsuits regarding content shared on their platforms.

“One of the issues is that we have social media where we can very easily distribute this type of content, and the platforms themselves have broad immunity from any sort of lawsuits from the content they allow on their platforms,” McPeak said.

Traditional media outlets, however, face different standards.

“It’s section 230, and it only covers interactive online computer systems from being liable for the content that users share versus something like a newspaper or media organization that can be liable for any falsehoods they share,” McPeak said.

This means news organizations have a legal duty to vet content, unlike social media platforms, which follow their own policies.

“Facebook can’t get sued, but the New York Times can,” McPeak said.

Lisa Waanen Jones, a professor at Washington State University, emphasized the social duty of individuals to verify information.

“The good news is, a lot of the verification techniques that we’ve been teaching for a long time are still really relevant,” Jones said.

Jones teaches a class on visual communication and advises students to question the sources and production of images.

“What are the sources, trying to understand how an image was produced and how it reaches people,” Jones said.

She noted the emotional response AI-generated disaster scenarios can elicit.

“If we see something and respond in an emotional way, that can be a red flag that it’s misinformation, but it’s also, you know, what makes us human is being able to extend compassion to people somewhere else in the world,” Jones said.

The takeaway: even when no single person is directly harmed, misinformation can cause broader social harm. Exercising discretion in what we consume and share matters most during fast-moving natural disasters, when false content can have real societal impact.

FOX28 Spokane©