How technology can detect fake news in videos

Social media is an important channel for the spread of fake news and disinformation. This situation has been exacerbated by recent advances in photo and video editing and artificial intelligence tools, which make it easy to tamper with audiovisual files, for example with so-called deepfakes, which combine and overlay images, audio and video clips to create montages that look like real footage.

Researchers from the K-riptography and Information Security for Open Networks (KISON) and the Communication Networks & Social Change (CNSC) groups of the Internet Interdisciplinary Institute (IN3) at the Universitat Oberta de Catalunya (UOC) have launched a new project to develop innovative technology that, using artificial intelligence and data-hiding techniques, should help users automatically distinguish between original and counterfeit multimedia content, thus helping to minimize the reposting of fake news. DISSIMILAR is an international initiative led by the UOC, which includes researchers from Warsaw University of Technology (Poland) and Okayama University (Japan).

“The project has two objectives: firstly, to provide content creators with tools to watermark their creations, making any modification easily detectable; and secondly, to offer social media users tools based on latest-generation signal processing and machine learning methods to detect fake digital content,” explains Professor David Megías, principal investigator at KISON and director of the IN3. In addition, DISSIMILAR aims to “incorporate the cultural dimension and the point of view of the end user throughout the project”, from the design of the tools to the study of their usability in the different phases.

The Danger of Bias

Currently, there are essentially two types of tools for detecting fake news. First, there are automatic tools based on machine learning, of which only a few prototypes currently exist. Second, there are fake news detection platforms involving human intervention, as is the case with Facebook and Twitter, which require the participation of people to determine whether specific content is real or fake. According to David Megías, this centralized solution can be affected by “various biases” and encourage censorship. “We believe that an objective assessment based on technological tools might be a better option, provided that users have the final say in deciding whether or not to trust certain content on the basis of a pre-evaluation,” he explains.

For Megías, there is no “single silver bullet” that can detect fake news: rather, the detection has to be done with a combination of different tools. “That’s why we chose to explore information concealment (watermarks), digital content forensic analysis techniques (based in large part on signal processing) and, of course, machine learning,” he noted.
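
By way of illustration only, here is a minimal sketch of how the outputs of such tools might be combined into a single score. The detector inputs and weights are hypothetical assumptions made for the example, not the DISSIMILAR design:

```python
def fuse_scores(watermark_intact: bool, forensic_score: float, ml_score: float) -> float:
    """Fuse three signals into one authenticity score in [0, 1].

    forensic_score and ml_score are assumed to be each detector's estimated
    probability that the content is authentic; watermark_intact reports
    whether a fragile watermark survived unchanged. All of this is an
    illustrative assumption, not the project's actual architecture.
    """
    # A broken fragile watermark is strong evidence of tampering.
    watermark_score = 1.0 if watermark_intact else 0.0
    # Simple weighted average; a real system would calibrate these weights.
    return 0.5 * watermark_score + 0.25 * forensic_score + 0.25 * ml_score

if __name__ == "__main__":
    print(fuse_scores(True, 0.9, 0.8))   # 0.925 -> probably authentic
    print(fuse_scores(False, 0.6, 0.4))  # 0.25  -> probably tampered
```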

Automatically verify multimedia files

Digital watermarking encompasses a series of data-hiding techniques that embed imperceptible information in the original file so that a multimedia file can be verified “easily and automatically”. “It can be used to indicate the legitimacy of content by, for example, confirming that a video or photo has been distributed by an official news agency. It can also be used as an authentication mark, which would be destroyed if the content were modified, or to trace the origin of the data: in other words, to establish whether the source of the information (for example, a Twitter account) is spreading fake content,” explains Megías.
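
As a toy illustration of this idea (a minimal sketch, not the robust, keyed watermarking schemes a real system would use), a secret bit pattern can be hidden in the least significant bits of an image's pixels; any subsequent edit to the marked pixels destroys the pattern and exposes the modification:

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bits of the first pixels."""
    flat = image.flatten()                               # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def verify_lsb(image: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the embedded pattern is still intact."""
    return np.array_equal(image.flatten()[: bits.size] & 1, bits)

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "photo"
mark = rng.integers(0, 2, size=256, dtype=np.uint8)          # secret bit pattern

marked = embed_lsb(image, mark)
print(verify_lsb(marked, mark))    # True: content unmodified

tampered = marked.copy()
tampered[:8, :8] = 0               # simulate an edit to a corner of the image
print(verify_lsb(tampered, mark))  # False: the fragile mark has been destroyed
```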

Digital Content Forensic Analysis Techniques

The project will combine the development of watermarks with the application of forensic analysis techniques for digital content. The goal is to use signal processing technology to detect the intrinsic distortions produced by the devices and programs used in creating or modifying an audiovisual file. These processes lead to a series of changes, such as sensor noise or optical distortion, which can be detected through machine learning models. “The idea is that the combination of all these tools improves the results compared to using separate solutions,” said Megías.
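
The sketch below illustrates one classic forensic signal of this kind, fixed sensor noise, on synthetic data; the scene model, noise levels and filter choice are assumptions made for the example, not the project's actual pipeline. A camera's noise fingerprint is estimated from the high-frequency residual left after denoising several of its photos, and a questioned image is then checked for that fingerprint:

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """High-frequency residual left after denoising; fixed sensor noise
    (a PRNU-like fingerprint) concentrates in this residual."""
    return image - median_filter(image, size=3)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two noise residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
H = W = 128
camera_a = rng.normal(0.0, 2.0, size=(H, W))  # hypothetical fixed noise of camera A
camera_b = rng.normal(0.0, 2.0, size=(H, W))  # ... and of a different camera B

def photo(camera: np.ndarray, brightness: float) -> np.ndarray:
    """Simulate a photo: a smooth scene plus fixed camera noise and shot noise."""
    scene = brightness * np.add.outer(np.linspace(0, 1, H), np.linspace(0, 1, W))
    return scene + camera + rng.normal(0.0, 1.0, size=(H, W))

# Estimate camera A's fingerprint by averaging residuals over several photos.
fingerprint = np.mean(
    [noise_residual(photo(camera_a, b)) for b in range(50, 100, 10)], axis=0
)

print(correlation(noise_residual(photo(camera_a, 80.0)), fingerprint))  # noticeably higher: same sensor
print(correlation(noise_residual(photo(camera_b, 80.0)), fingerprint))  # close to zero: different sensor
```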

Studies with users in Catalonia, Poland and Japan

One of the main features of DISSIMILAR is its ‘holistic’ approach and collection of the ‘perceptions and cultural components surrounding fake news’. With this in mind, several user-oriented studies will be conducted, split into different phases. “First, we want to know how users interact with the news, what interests them, what media they consume depending on their interests, what they use as a basis to identify certain content as fake news, and what they are willing to do to check its veracity. If we can identify these things, it will make it easier for the technology tools we design to help prevent the spread of fake news,” explains Megías.

These perceptions will be gauged in different places and cultural contexts, through user group studies in Catalonia, Poland and Japan, so that their idiosyncrasies can be incorporated when designing the solutions. “This is important because, for example, each country has governments and/or public authorities with greater or lesser degrees of credibility. This has an effect on how the news is followed and on support for fake news: if I don’t believe the word of the authorities, why should I pay any attention to news coming from these sources? This could be seen during the COVID-19 crisis: in countries in which there was less trust in the public authorities, there was less compliance with recommendations and rules on handling the pandemic and vaccination,” said Andrea Rosales, a CNSC researcher.

A product that is easy to use and understand

In the second phase, users will participate in the design of the tool to “ensure that the product is well received, easy to use and understandable,” said Andrea Rosales. “We would like them to be involved with us throughout the whole process up to the production of the final prototype, as this will help us respond better to their needs and priorities, and do what other solutions have not been able to do,” added David Megías.

In the future, this user adoption could be a factor that drives social networking platforms to adopt the solutions developed in this project. “If our experiments bear fruit, it would be great if they integrated these technologies. For now, we would be happy to have a working prototype and a proof of concept that could encourage social media platforms to adopt these technologies in the future,” concluded David Megías.

Previous research was published in the Special issue on the ARES Workshops 2021.

More information:
D. Megías et al, Architecture of a fake news detection system combining digital watermarking, signal processing and machine learning, Special issue on the ARES Workshops 2021 (2022). DOI: 10.22667/JOWUA.2022.03.31.033

A. Qureshi et al, Detecting Deepfake Videos Using Digital Watermarks, 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) (2021). ieeexplore.ieee.org/document/9689555

D. Megías et al, DISSIMILAR: Towards the detection of fake news through information concealment, signal processing and machine learning, 16th International Conference on Availability, Reliability and Security (ARES 2021) (2021). DOI: 10.1145/3465481.3470088

Provided by Universitat Oberta de Catalunya (UOC)

Citation: How technology can detect fake news in videos (2022, June 29) retrieved June 29, 2022 from https://techxplore.com/news/2022-06-technology-fake-news-videos.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
