When used in the context of movies and memes, deepfakes can occasionally be a source of entertainment.
But they’re also a growing concern. In the age of fake news and misinformation, deepfakes — i.e. AI-generated or AI-manipulated photos, videos, and audio files — could be used to confuse and mislead people.
Microsoft, however, has other ideas.
On Tuesday, the company announced two new pieces of technology, both of which aim to give readers the tools they need to tell what’s real from what isn’t.
The first of these, the Microsoft Video Authenticator, analyses images and videos to give “a percentage chance, or confidence score, that the media is artificially manipulated,” per a blog on Microsoft’s official site. The tool works by detecting the blending boundary of a deepfake, along with subtle fading and greyscale elements that the human eye may not pick up on.
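To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of boundary analysis — not Microsoft’s actual algorithm, whose details aren’t public. It assumes a NumPy image array and scores how much the sharpest pixel-intensity jump stands out from the image’s typical edge strength, since a pasted or blended region often leaves an abnormally abrupt boundary:

```python
import numpy as np

def manipulation_confidence(image: np.ndarray) -> float:
    """Crude 0-100 score of how abruptly the sharpest edge in the image
    stands out from the typical edge.  Illustrative heuristic only."""
    # Collapse colour channels to a single greyscale plane.
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    # Absolute intensity changes between horizontally and vertically
    # neighbouring pixels.
    gx = np.abs(np.diff(gray, axis=1)).ravel()
    gy = np.abs(np.diff(gray, axis=0)).ravel()
    grads = np.concatenate([gx, gy])
    # A pasted region leaves an edge far sharper than the image average.
    ratio = grads.max() / (grads.mean() + 1e-9)
    return float(min(100.0, ratio))

# Example: a smooth gradient scores low; the same canvas with a
# hard-edged pasted square scores much higher.
smooth = np.tile(np.linspace(0, 255, 100), (100, 1))
pasted = np.zeros((100, 100))
pasted[40:60, 40:60] = 255.0
print(manipulation_confidence(smooth), manipulation_confidence(pasted))
```

A production detector would of course use a trained model over many such cues rather than a single hand-crafted ratio; the sketch only shows why blending boundaries are a useful signal.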