Deep Fake Video Detection Using Recurrent Neural Networks

Authors

  • Abdul Jamsheed V., Department of Computer Applications, National Institute of Technology, Tiruchirappalli, India
  • Janet B., Department of Computer Applications, National Institute of Technology, Tiruchirappalli, India

Keywords:

Deep Fake, Deep Learning, Face Manipulation, Convolutional-LSTM, Fake Videos

Abstract

Generative adversarial networks have progressed to the point where it is very difficult to distinguish real content from fake. In recent times, face-manipulation tools have been used to generate credible face swaps in videos that leave very little trace of manipulation, commonly referred to as "AI-based deep fake videos". These realistic fake videos are now used for pornography, blackmail, political disinformation, and more. Creating deep fake videos is a simple task, but detecting them is a major challenge, and advances in AI-based deep fake creation have made older detection systems less accurate. In this work, we describe a new deep-learning-based method that effectively separates manipulated videos from real ones. Our system combines a CNN and an LSTM: a convolutional neural network (CNN) extracts frame-level features, and these features are then fed to a long short-term memory (LSTM) recurrent neural network that classifies videos as real or fake. We compared our results with existing methods and found them competitive. Training data for the deep fake detection model were drawn from several sources, including the Deepfake Detection Challenge (DFDC) dataset, FaceForensics, and Celeb-DF. We obtained a competitive accuracy of 92 percent while using a simple architecture.

 
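The pipeline the abstract describes (per-frame CNN features folded into a video-level representation by an LSTM, then a real/fake decision) can be sketched in NumPy. This is only a structural illustration under loud assumptions: the fixed random projection standing in for the CNN, the dimensions, and the linear read-out are invented for the sketch, and the parameters are untrained, so it shows the data flow rather than the authors' actual model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_lstm(input_dim, hidden_dim, seed=0):
    # Random, untrained parameters; training is out of scope for this sketch.
    rng = np.random.default_rng(seed)
    return {
        "W": rng.standard_normal((4 * hidden_dim, input_dim + hidden_dim)) * 0.1,
        "b": np.zeros(4 * hidden_dim),
    }

def lstm_forward(seq, params, hidden_dim):
    # Fold the frame-feature sequence into one video-level descriptor
    # (the final hidden state) via the standard LSTM recurrence.
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    for x in seq:
        z = params["W"] @ np.concatenate([x, h]) + params["b"]
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

def classify_video(frames, feat_dim=8, hidden_dim=8, seed=1):
    # CNN stand-in: a fixed random projection from flattened frames to
    # frame-level features (a real system would use a trained CNN here).
    rng = np.random.default_rng(seed)
    w_cnn = rng.standard_normal((frames.shape[1], feat_dim)) * 0.1
    features = frames @ w_cnn  # shape: (num_frames, feat_dim)
    h = lstm_forward(features, init_lstm(feat_dim, hidden_dim), hidden_dim)
    w_out = rng.standard_normal(hidden_dim)  # linear read-out, also untrained
    return sigmoid(w_out @ h)  # score in (0, 1), interpreted as P(fake)

# Toy usage: a "video" of 10 frames, each flattened to a 32-dim vector.
video = np.random.default_rng(2).standard_normal((10, 32))
p_fake = classify_video(video)
```

The key design point the sketch preserves is the division of labour: the CNN sees single frames, while the LSTM alone models the temporal ordering across frames that frame-by-frame classifiers miss.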



Published

2021-04-30

How to Cite

[1]
A. J. V. and J. B., “Deep Fake Video Detection Using Recurrent Neural Networks”, Int. J. Sci. Res. Comp. Sci. Eng., vol. 9, no. 2, pp. 22–26, Apr. 2021.

Section

Research Article
