For Matthew Stamm, PhD, associate professor of electrical and computer engineering at Drexel University, detecting manipulated media is more than just an intellectual pursuit — it’s a critical tool for safeguarding truth in an age of pervasive digital deception.

Stamm’s journey into the world of digital forensics began during his doctoral studies at the University of Maryland in the late 2000s. “I liked the idea of solving a puzzle while doing something I wasn’t supposed to be able to do,” he recalled. “By exploiting some hidden mathematical rules, you could figure out something about an image that wasn’t supposed to be detectable.”
Initially, Stamm focused on identifying edited images by analyzing the invisible traces left by different camera models and editing processes. His approach involved scrutinizing the relationships between neighboring pixels to discern what he calls the image’s “micro-structures.”
“We’re not looking at the actual content of the image, but rather the subtle statistical artifacts introduced by things like the camera’s sensor, compression algorithms, or editing tools,” Stamm explained. “If one part of an image has pixel micro-structures that are inconsistent with the rest of it, that tells us something is amiss.”
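The idea of comparing pixel-level statistics across regions can be sketched in a few lines. The toy example below is purely illustrative (it is not Stamm's actual feature set): it uses a simple high-pass residual, each pixel minus the average of its four neighbors, and shows that a region resampled during editing has noticeably different residual statistics than plain sensor-like noise.

```python
import numpy as np

def residual_stats(patch):
    """Standard deviation of a simple high-pass residual:
    each pixel minus the average of its four neighbors."""
    p = patch.astype(float)
    neighbors = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
                 np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)) / 4.0
    residual = (p - neighbors)[1:-1, 1:-1]  # drop the wrap-around border
    return residual.std()

rng = np.random.default_rng(0)
# A "camera" patch of plain sensor-like noise ...
camera_patch = rng.normal(128, 10, size=(32, 32))
# ... versus a patch upsampled 2x during editing, which leaves
# duplicated pixels and unusually smooth residuals
edited_patch = np.repeat(np.repeat(rng.normal(128, 10, size=(16, 16)),
                                   2, axis=0), 2, axis=1)

print(residual_stats(camera_patch) > residual_stats(edited_patch))  # True: the edited region is "too smooth"
```

Real forensic systems use far richer statistics, but the principle is the same: regions with mismatched micro-structure statistics betray an edit.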
Central to Stamm’s approach is the application of machine learning to identify fakes. His team pioneered the use of constrained convolutional neural networks, which are specially designed to focus on hidden, pixel-level clues about how an image was made or changed.
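The core constraint behind those layers is simple to state. In the published formulation from Stamm's lab, the first layer's filters are forced to act as prediction-error filters: after each training update, the center weight is reset to -1 and the remaining weights are rescaled to sum to 1, so the whole filter sums to zero, suppressing image content while passing micro-structure residue. Below is a minimal numpy sketch of just that projection step; the surrounding training loop is omitted.

```python
import numpy as np

def apply_constraint(filt):
    """Project a filter onto the constrained-convolution form:
    center weight fixed to -1, remaining weights rescaled to sum to 1,
    so the filter sums to zero and acts as a prediction-error filter."""
    w = filt.copy()
    c = w.shape[0] // 2
    w[c, c] = 0.0
    w /= w.sum()      # non-center weights now sum to 1
    w[c, c] = -1.0    # whole filter now sums to 0
    return w

rng = np.random.default_rng(1)
w = apply_constraint(rng.uniform(0.1, 1.0, size=(5, 5)))
print(w[2, 2], round(abs(w.sum()), 6))  # -1.0 0.0
```

In practice this projection is applied after every gradient step, keeping the network focused on forensic residue rather than scene content.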

They also developed novel two-phase training approaches, allowing systems to pick up on new, previously unseen manipulation types. Another key innovation was creating similarity networks that can compare forensic traces between image patches, even for unknown camera models or editing operations.
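The similarity-network idea reduces to learning feature vectors whose distance reflects whether two patches carry matching forensic traces. The sketch below stands in for that with hand-made vectors and plain cosine similarity; the feature values and the 0.9 threshold are invented for illustration, whereas the real system learns both the features and the comparison end to end.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_source(feat_a, feat_b, threshold=0.9):
    """Decide whether two patches share forensic traces (e.g., the same
    camera model) by comparing their feature vectors. Schematic only."""
    return cosine_similarity(feat_a, feat_b) >= threshold

# Toy "forensic features": patches from the same source cluster together
cam_a  = np.array([0.90, 0.10, 0.05])
cam_a2 = np.array([0.85, 0.15, 0.10])
cam_b  = np.array([0.10, 0.90, 0.40])

print(same_source(cam_a, cam_a2))  # True: consistent traces
print(same_source(cam_a, cam_b))   # False: traces disagree
```

Because the comparison operates on traces rather than on a catalog of known cameras or edits, the same machinery generalizes to sources the network never saw during training.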
“By teaching computers to recognize these subtle statistical patterns, we can detect manipulations that would be invisible to the human eye,” Stamm noted. This work laid the foundation for more advanced forensic AI systems that could keep up with an ever-increasing volume of deepfakes that would otherwise be unmanageable by human teams.
As deepfake technology continues to advance, Stamm has shifted focus to video manipulation detection — a particularly thorny challenge due to the way video compression can introduce local inconsistencies that mimic the traces of editing.
“The way that video coding works is you’re compressing every part of the video, both spatially and temporally, with different compression strengths, which creates these kinds of trace anomalies in unaltered video,” Stamm said.
To overcome this, Stamm’s team developed a system called VideoFACT (Video Forensics using Attention, Context and Traces) that utilizes advanced mathematical models, temporal data across frames, and machine learning techniques like attention mechanisms to identify fakes. It’s able to spot manipulated videos regardless of whether they’ve been altered with deepfakes, inpainting, splicing, speed changes or traditional editing.
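The role of attention in a detector like this can be illustrated with a toy pooling step: instead of averaging anomaly scores over all patches, which would dilute a small tampered region, saliency weights let the suspicious patch dominate the video-level score. Everything below is schematic and is not VideoFACT's actual architecture; the scores and saliencies are hand-set for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def video_score(patch_scores, patch_saliency):
    """Attention-style pooling: weight each patch's anomaly score by a
    saliency weight so one small tampered region is not averaged away
    by many clean patches. (A real model learns the saliencies.)"""
    weights = softmax(np.asarray(patch_saliency, dtype=float))
    return float(weights @ np.asarray(patch_scores, dtype=float))

# Four patches; in the second video, patch 1 looks anomalous and salient
clean    = video_score([0.10, 0.20, 0.10, 0.15], [0.0, 0.0, 0.0, 0.0])
tampered = video_score([0.10, 0.95, 0.10, 0.15], [0.0, 5.0, 0.0, 0.0])

print(tampered > clean)  # True: attention surfaces the anomalous patch
```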

Audio forensics presents yet another frontier. As AI voice cloning reaches unprecedented levels of realism, fraudsters have begun using it to mimic executives and dupe companies out of millions of dollars. Stamm is adapting his forensic techniques to audio, but says it remains an area ripe for further research.
To stay ahead of increasingly sophisticated fakes, Stamm employs a multi-stage training regimen for his models using large datasets of manipulated media. The goal is to teach the software to learn generalizable features rather than merely memorize specific forgeries. Still, he likens it to an “arms race” against determined scammers.
Professional and government agencies have recognized the critical importance of Stamm’s research in an era of escalating digital threats. He served on the IEEE Information Forensics and Security Technical Committee from 2018 to 2020 and has been a key contributor to several DARPA programs focused on AI security and information integrity.
Stamm’s work continues to shape national policy and research initiatives. He is part of the National AI Research Resource (NAIRR) Pilot program, which was established following the Biden Administration’s executive order on AI. Separately, Stamm led Drexel’s successful effort to join the U.S. AI Safety Institute Consortium (AISIC), where he serves on the Synthetic Image Working Group. This consortium, formed by the National Institute of Standards and Technology (NIST), brings together experts to develop guidelines for trustworthy AI systems.
At the state level, Stamm has advised Pennsylvania officials on strategies to combat digital misinformation ahead of the 2024 election and has briefed the state’s Electoral Security Task Force on emerging threats.
“The government, at all levels, has recognized that AI-powered misinformation poses serious risks to national security, elections, and public trust,” Stamm said. “By bringing technologists together with policymakers, legal experts, and social scientists, we can start to develop a comprehensive strategy. But we’re racing against the clock.”

Yet Stamm emphasizes that technology alone can’t solve what he sees as an impending societal crisis — the collapse of trust in text, images and video themselves. He believes we need a top-to-bottom rethinking of how we create, consume and authenticate media for the AI age, one that brings together experts across engineering, computer science, psychology, political science, journalism and more.
“The 20th century paradigm was that information came from known, reputable sources. You focused on preventing unauthorized access,” Stamm said. “Now, information comes from unknown third parties, and we must assume it could be manipulated. This is a behavioral challenge we’re only beginning to grapple with.”
While there are no easy answers, Stamm believes cross-disciplinary collaboration, public awareness and a renewed commitment to truth-seeking are essential. The stakes, he says, couldn’t be higher.
“If we can’t agree on what’s real, it corrodes every aspect of society — politics, journalism, science, justice, you name it,” Stamm said. “So this work, in my view, is utterly essential if we want to protect trust and reality itself in the digital age.”