
AI can detect manipulated media — and, it turns out, AI can also be deliberately fooled by it.
Now, a New York-based startup is working to prevent photos and selfies uploaded to social media from being used by AI systems for facial recognition. The team’s software alters images, but the changes are imperceptible to the human eye.
The result? Photos posted online still look natural to users while becoming unrecognisable to AI systems, leaving algorithms unable to link them to a given person when compared against the faces in their databases.
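The article does not disclose the startup’s method, but tools in this space typically add a small, tightly bounded “adversarial” perturbation to every pixel: large enough to disrupt the features a recognition model relies on, small enough to be invisible. Below is a minimal sketch of that bounding idea in numpy; for illustration the perturbation here is random noise, whereas real cloaking systems derive it from a face-recognition model’s gradients.

```python
import numpy as np

def perturb_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a perturbation bounded by +/- epsilon per 8-bit pixel channel.

    Illustrative only: a real system would compute the perturbation from a
    face-recognition model's gradients, not from random noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip back to the valid 8-bit range so the result is still a normal image.
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A synthetic 64x64 RGB "photo" (uniform grey) stands in for an upload.
photo = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = perturb_image(photo)

# No pixel channel moved by more than epsilon -- invisible at 8-bit depth.
max_change = int(np.abs(cloaked.astype(int) - photo.astype(int)).max())
print(max_change)  # at most 2
```

The key property is the epsilon bound: human perception cannot distinguish a 2-level shift in a 0–255 channel, but a carefully directed shift of that size can push an image across a model’s decision boundary.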
In contrast to the laissez-faire privacy laws of many countries, the startup says its system is committed to upholding privacy and does not store users’ original images.
A research team created a similar system to protect musicians’ intellectual property online.



