Friday, February 20, 2026

Microsoft has a new plan to show what’s real and what’s AI online


Hany Farid, a professor at UC Berkeley who specializes in digital forensics but wasn’t involved in the Microsoft research, says that if the industry adopted the company’s blueprint, it would be meaningfully harder to deceive the public with manipulated content. Sophisticated individuals or governments can work to bypass such tools, he says, but the new standard could eliminate a significant portion of misleading material.

“I don’t think it solves the problem, but I think it takes a nice big chunk out of it,” he says.

Still, there are reasons to see Microsoft’s approach as an example of somewhat naïve techno-optimism. There is growing evidence that people are swayed by AI-generated content even when they know it’s false. And in a recent study of pro-Russian AI-generated videos about the war in Ukraine, comments pointing out that the videos were made with AI received far less engagement than comments treating them as genuine.

“Are there people who, no matter what you tell them, are going to believe what they believe?” Farid asks. “Yes.” But, he adds, “there is a vast majority of Americans and citizens around the world who I do think want to know the truth.”

That desire has not exactly led to urgent action from tech companies. Google started adding a watermark to content generated by its AI tools in 2023, which Farid says has been useful in his investigations. Some platforms use C2PA, a provenance standard Microsoft helped launch in 2021. But the full suite of changes Microsoft suggests, powerful as they are, may remain mere suggestions if they threaten the business models of AI companies or social media platforms.
