Speech Archivator deals with problems emerging from deepfake technology. Advances in deepfakes have made it possible to manipulate or generate visual and audio content with a high potential to deceive. While this is a fascinating technology, it is also dangerous. Political (and other) powers can misuse it to create fake videos that are indistinguishable from real ones. Such videos can spread false beliefs and opinions, just as typical fake news does. The ultimate goal of Speech Archivator is to monitor live-stream videos that include speeches by people with a high impact on society. We use an artificial neural network to detect faces in the video. The video segments containing specific persons are then uploaded to safe decentralized storage such as IPFS, where they can't be manipulated and anyone can always retrieve the original video.
How It's Made
This project is written in Python and uses a ResNet convolutional neural network to perform face detection in videos. The network is implemented with the PyTorch library. When we detect a face of a specific person in the video, we cut out the video segments containing that person. Then we upload these segments to IPFS.
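The segment-cutting step above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the detector has already produced a per-frame boolean flag (True when the target person's face was found), and the hypothetical helper `flags_to_segments` groups those flags into contiguous time ranges.

```python
def flags_to_segments(flags, fps):
    """Convert per-frame detection flags (True = target person present)
    into a list of (start_seconds, end_seconds) segments.

    Hypothetical helper for illustration; the face detector itself
    (e.g. a ResNet-based model in PyTorch) and the IPFS upload
    are out of scope here.
    """
    segments = []
    start = None  # frame index where the current segment began
    for i, present in enumerate(flags):
        if present and start is None:
            start = i                       # segment opens
        elif not present and start is not None:
            segments.append((start / fps, i / fps))  # segment closes
            start = None
    if start is not None:                   # segment runs to the end
        segments.append((start / fps, len(flags) / fps))
    return segments

# Example: at 1 frame per second, the person appears in frames 1-2.
print(flags_to_segments([False, True, True, False], fps=1))  # [(1.0, 3.0)]
```

The resulting time ranges could then be cut from the source video with a tool such as ffmpeg, and each clip pinned to IPFS (e.g. with `ipfs add`).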