AI Tool DeepGuard Exposes Fake Images and Strengthens Digital Security

With deepfakes and AI-generated images becoming increasingly sophisticated, DeepGuard steps in as a cutting-edge solution to detect and trace manipulated visuals, safeguarding identities and preventing fraud.

Figure: Examples of real images from the MS-COCO and Flickr30k datasets, and the corresponding prompts used to generate fake images with SD 3, DALL·E 3, and Imagen.

Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what's real and what's fake is getting harder and harder. 

A research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Centre has developed an innovative solution to accurately distinguish between fake and genuine images and identify the source of the artificial image. 

The solution, known as 'DeepGuard,' combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labeled data, making smarter and more reliable predictions.
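To illustrate how such a detect-then-attribute pipeline can be structured, here is a minimal sketch in Python using scikit-learn on synthetic feature vectors. This is not the authors' DeepGuard implementation: the feature distributions, model choices, and generator labels are all hypothetical, chosen only to show the combination of an ensemble binary classifier (real vs. fake) with a multi-class classifier that attributes a fake to a source generator.

```python
# Illustrative sketch only -- NOT the authors' DeepGuard implementation.
# Synthetic 16-dimensional "image features"; real images cluster near 0,
# AI-generated ones near 1. All numbers here are invented for the demo.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 0.5, size=(200, 16))
X_fake = rng.normal(1.0, 0.5, size=(300, 16))
X = np.vstack([X_real, X_fake])
y_binary = np.array([0] * 200 + [1] * 300)   # 0 = real, 1 = fake
# Hypothetical source labels for the fakes (e.g. 1/2/3 could stand
# for generators such as SD 3, DALL-E 3, Imagen).
y_source = np.array([0] * 200 + [1] * 100 + [2] * 100 + [3] * 100)

# Stage 1: ensemble binary classifier -- two learners, majority vote.
detector = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="hard",
)
detector.fit(X, y_binary)

# Stage 2: multi-class classifier, trained only on fakes, attributes
# each detected fake to one of the candidate generators.
attributor = RandomForestClassifier(n_estimators=50, random_state=0)
attributor.fit(X[y_binary == 1], y_source[y_binary == 1])

def analyze(features):
    """Return 'real', or the predicted generator id for a fake image."""
    if detector.predict(features.reshape(1, -1))[0] == 0:
        return "real"
    return int(attributor.predict(features.reshape(1, -1))[0])
```

In a real system the feature vectors would come from a learned image encoder rather than random draws, but the two-stage structure — decide real vs. fake first, then attribute the fake to a generator — is the pattern the article describes.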

It is a tool that can be used to investigate and prosecute criminal activity, such as fraud, or by the media to verify that the images used in their stories are authentic, preventing misinformation or unintentional bias.

DeepGuard has been developed by a research team led by Dr. Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, and involving Dr. Stavros Shiaeles from the University of Portsmouth's PAIDS Research Centre and School of Computing.

Dr Shiaeles said: "With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence, damage reputations, and even be used to incite harm by adults to children. People are also profiteering disingenuously on social media platforms like TikTok, where images of models are being turned into characters and animated in different scenarios in games or for entertainment. 

"DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts."

The research, published in the journal Electronics, will also support further academic research in this area, with additional datasets available to academics.

During its development, the team reviewed and analyzed image manipulation and detection methods, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images, such as changes in expression, pose, voice, or other facial or bodily features. 

Journal reference:
  • Namani, Y., Reghioua, I., Bendiab, G., Labiod, M. A., & Shiaeles, S. (2024). DeepGuard: Identification and Attribution of AI-Generated Synthetic Images. Electronics, 14(4), 665. DOI: 10.3390/electronics14040665, https://www.mdpi.com/2079-9292/14/4/665

