
Training AI systems requires less and less energy. At the same time, modern AI tools are becoming ever more adept at creating deepfakes, making impersonations look deceptively real.
Now a German startup has developed an AI platform that detects manipulation in media files. The software, which is also hosted in Germany, combines several AI models.
These check whether a face has been swapped and whether speech or facial expressions have been altered, even when the lips are in sync with the words being spoken.
The system also reveals irregularities in image frequency and compression, checks for signs of blood flow in the skin, and scours the internet for similar material. System accuracy: 98%.
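The platform's overall verdict rests on combining several independent checks. As a rough illustration only, the sketch below shows one common way such an ensemble could work: each detector returns a manipulation score and the scores are merged into a single verdict. All function names, weights, and the threshold here are assumptions for illustration, not the startup's actual implementation.

```python
# Hypothetical ensemble sketch; names, scores, and threshold are
# illustrative assumptions, not the real product's internals.

def combine_scores(scores, weights=None, threshold=0.5):
    """Weighted average of per-check manipulation scores.

    Each score runs from 0.0 (looks authentic) to 1.0 (looks
    manipulated). Returns (overall_score, is_manipulated).
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}  # equal weighting
    total = sum(weights[name] for name in scores)
    overall = sum(scores[name] * weights[name] for name in scores) / total
    return overall, overall >= threshold

# Example: four detectors independently score the same video clip.
checks = {
    "face_swap": 0.91,    # face-replacement artifacts detected
    "lip_sync": 0.20,     # lips match the audio, so this check passes
    "frequency": 0.75,    # irregular image frequency / compression
    "blood_flow": 0.88,   # implausible skin blood-flow signal
}
score, manipulated = combine_scores(checks)
```

The point of the ensemble design is that a sophisticated fake can defeat any single check (here, the lip sync looks fine) while the combined evidence still flags the clip.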
This is useful, for instance, in domestic violence cases in which videos purporting to show consent have been fabricated.
