The Race Against Deepfakes

Rose Karuna

Old and Cranky Xherdan
VVO Supporter 🍦🎈👾❤
Joined
Sep 24, 2018
Messages
412
Location
Central Florida
SL Rez
2005
Joined SLU
2007
What is horrifying is that if the FBI, Homeland Security, or the local cop shop want to frame you, they have all the tools to do it, and they have them cheaply. If they want to change the scenario on the video cameras they wear after they murder someone in cold blood, they can. There is no check and balance anymore, and you can no longer believe what you see. If your crappy neighbor wants to make your life miserable, they can create videos and give them to the cops (or everyone in your neighborhood) "proving" that you did X, Y, and Z, and you have no recourse other than to try to prove the video is fake. The onus falls on the victim shown in a fake video to prove it's fake, not on the person who posts or provides the video to prove it's real.
 

Sean Gorham

Verti's Minion
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
74
SL Rez
2005
Joined SLU
2007-09-27
SLU Posts
1928
We had this same problem when Photoshop was new. We developed tools to detect doctored photos. We'll develop tools to detect doctored videos, too. It's always been an arms race. Then the media will latch on to some other new horror story.
 

Kara Spengler

Queer OccupyE9 Sluni-Goon
Joined
Sep 20, 2018
Messages
2,965
Location
SL: November RL: DC
SL Rez
2007
Joined SLU
December, 2008
SLU Posts
23289
I dunno, I'm just less concerned about this whole business than people keep telling me I should be. The worst outcome I can see is that videos will need provenance by way of adequate source citation, which is already true of photographs and printed quotes, and we still use those without too much trouble. I need convincing that anything worse will come of the arrival of deepfakes.

The scenario proposed at the beginning of that article, of a presidential candidate's actual supporters seeing a deepfake video of their candidate saying she's withdrawing from the race and believing it over the insistence of the candidate herself saying live that she's not withdrawing and the video was faked, strikes me as just vastly silly. People forget that political figures, particularly their campaigns, are instantly connected with supporters via social media; it's not as if the video would come out and the candidate couldn't respond for hours or even a day. The deepfake video that just emerged of Mark Zuckerberg talking about "controlling humanity" or whatever is pretty convincing evidence of that; there isn't a soul on the internet who actually thinks it's real.

The discussion of the recent Pelosi video doesn't really belong here, because it wasn't a deepfake. It was just slowed-down video, an editing effect that could easily have been done 30 years ago with the same result.
Yes, as deepfakes rise, videos will be seen as less evidentiary than they are now. If someone tries to use a video (faked or otherwise) against someone, it will just come down to 'prove it is me'.

Of course, this also means legitimate security footage will drop a peg as court evidence, but that just opens the door for some new technology. Did anyone REALLY like having cameras everywhere anyway?
 

Eunoli

Well-known member
Joined
Sep 20, 2018
Messages
337
The risk here isn't in followers of a politician believing the fake. Here's a scenario: two days before the next general election, foreign trolls post a deepfake that looks very much like the Democratic candidate being caught secretly saying something extremely racist or sexist. It's a scenario we've seen happen too often in real life. When it happens at a critical moment like that, how many people will believe it until it's far too late and they've either changed their vote or stayed home?
 

Dakota Tebaldi

Well-known member
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
2,348
Location
Gulf Coast, USA
Joined SLU
02-22-2008
SLU Posts
16791
Yeah but again, even such a video as that would be extremely easy to expose.

Deepfakes have two important weaknesses. The first is that the computer has to superimpose the face and fake mouth movements onto an already-existing video, so exposing the fake can be as simple as recognizing the original, unaltered source video used to make it. The second is that accurate voices are still quite a ways off. Most deepfake demonstration videos, for instance the ones featuring Obama, use an actual unaltered speech by Obama; sampling a person's voice and stringing together disconnected bits to make new statements doesn't quite "work" yet, and it's very obvious where it's been done. And the biggest tell in the Zuckerberg video, if you didn't recognize the background footage it was made from, was the voice, which sounds nothing like Zuckerberg's actual voice.
 

Sid

Boooooh!!!
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
2,073
Location
NL
SL Rez
2007
Joined SLU
2009
I was once very impressed by Pac-Man. That was only one generation ago. Look at game graphics these days.
The 2020 elections will be okay, but will that still be the case for elections and news in 30 years?
There's no reason to believe graphics have reached the limit of what's possible. Deepfakes will become a major problem in the future, IMHO.
 

Han Held

It's all part of my Rococo N Roll Fantasy
Joined
Sep 20, 2018
Messages
480
Location
Anchorage
Joined SLU
September, 2010
SLU Posts
7705
The answer, obviously, is to get photo and video editing software out of the hands of the general public.
It's for our own good...

(I suspect that the 'arms race' crowd is closer to the truth than not: this will be a problem, but not an insurmountable one.)
 

Kalel

hypnotized
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
279
Location
Miami,FL
SL Rez
2006
Joined SLU
2010
SLU Posts
1965
The second is that accurate voices are still quite a ways off; most deepfake demonstration videos, for instance the ones featuring Obama, use an actual unaltered speech by Obama; sampling a person's voice and stringing together disconnected bits to make new statements doesn't quite "work" and is very obvious where it's done.
This Project VoCo video by Adobe is a few years old... I think it works quite well... it would make my job easier if I had it for missing voice talent lines...

Go from 2 to 5 minutes in.

 

Cristiano

I AM BABY GROOT
Admin
Joined
Sep 19, 2018
Messages
2,145
SL Rez
2002
Joined SLU
Nov 2003
SLU Posts
35836
I don't think the concerns about this technology are overblown. It is way beyond Photoshopped images. As others have said, it's the potential that the video would be believed to be real in the moment and something terrible could occur because of it, even if the video is ultimately proven to be fake. It will be too late if it affects an election or someone dies over a fake video.
 

Ashiri

√(-1)
Joined
Sep 20, 2018
Messages
440
Location
RL: NZ
SL Rez
2007
SLU Posts
-1
I don't think the concerns about this technology are overblown. It is way beyond Photoshopped images. As others have said, it's the potential that the video would be believed to be real in the moment and something terrible could occur because of it, even if the video is ultimately proven to be fake. It will be too late if it affects an election or someone dies over a fake video.
Or if one nation declares war against another.
 

GoblinCampFollower

Well-known member
Joined
Sep 20, 2018
Messages
406
We had this same problem when Photoshop was new. We developed tools to detect doctored photos. We'll develop tools to detect doctored videos, too. It's always been an arms race. Then the media will latch on to some other new horror story.
I think it is a near certainty that the fakers will eventually win this arms race once and for all, because resolution is finite. Say that in 10 years the flaws in a deepfake are still obvious in a 50-megapixel image. What happens if they compress it down to 1024x1024? It is entirely possible that the AI will be good enough to be flawless within some finite amount of detail like that.
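The finite-information point above can be made concrete with a quick back-of-the-envelope calculation. The sketch below is purely illustrative: `image_bits` and the resolutions are stand-in numbers I've chosen, not anything from the thread.

```python
# Back-of-the-envelope: a finite-resolution image carries a fixed number of
# raw bits, so the set of details a detector could ever inspect is bounded.

def image_bits(width: int, height: int, channels: int = 3, bits_per_channel: int = 8) -> int:
    """Total bits of raw information in an uncompressed image."""
    return width * height * channels * bits_per_channel

full = image_bits(8660, 5773)    # roughly a 50-megapixel frame
small = image_bits(1024, 1024)   # the same frame downscaled

# Downscaling discards the vast majority of the raw information --
# exactly the fine detail where a faker's flaws would otherwise show up.
print(full, small)
```

Lossy compression on top of downscaling discards even more, which is why artifacts that are obvious at full resolution can vanish entirely in a small, recompressed copy.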
 

Katheryne Helendale

🐱 Kitty Queen🐱
Joined
Sep 20, 2018
Messages
2,798
Location
Right... Behind... You...
SL Rez
2008
Joined SLU
October 2009
(I suspect that the 'arms race' crowd is closer to the truth than not -this will be a problem, not an insurmountable one though)
It's going to become a very big problem, particularly in the near term. However, I think the technology to catch these deepfakes will also mature and catch up with the technology used to create them.
 

Sean Gorham

Verti's Minion
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
74
SL Rez
2005
Joined SLU
2007-09-27
SLU Posts
1928
AI and math will always find stuff that the human senses can't. I'm not too worried.
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
2,977
SLU Posts
18459
I'm not too worried about the effects of this technology in court cases -- as Argent suggests, proving the chain of evidence will become even more important, as will developing forensic techniques to detect this kind of forgery, and also video evidence is generally not the only evidence available.

Usually, at least in my experience, video evidence is most effective when it supplements other evidence. A police video, for example, recording a car chase makes the course of the incident much easier for everyone to follow, and similarly a video of a late night brawl is usually most useful in supplementing witness statements to make it easier to understand what was happening during a fast-moving, chaotic incident.

My main worry is that it simply aids the campaign to undermine trust and confidence in news reporting altogether, since it makes it so much easier to question the authenticity of reports you don't like -- imagine, for example, if a video of Trump eventually surfaces purporting to be the infamous Moscow "pee tapes", or outtakes from The Apprentice showing him saying all manner of racist and sexist things, or a video purporting to show misconduct by military personnel or the police.

Whatever its authenticity, the whole thing is likely to turn into a huge argument about whether the video is fake or not, which will never get settled, and everyone ends up that bit less confident in the credibility of anything they see or read.

We've seen that already plenty of times, with heated arguments about the authenticity of what appear to be bona fide news clips of missile and bomb attacks in the Middle East, for example -- do they show genuine injuries and destruction or is it all staged?

"Truthers" will have a field day and everyone ends up not sure what to believe.

This rings all sorts of alarm bells. Maybe this is a flashback to my time, in an earlier life, as someone who spent a fair bit of time in Moscow and St Petersburg back in the "wild East" days just after the collapse of the USSR, but I really would suggest people read Nothing is True and Everything is Possible: Adventures in Modern Russia by Peter Pomerantsev and then The Road to Unfreedom: Russia, Europe, America by Timothy Snyder, and consider the implications of this new technology. You can bet there are some very smart people in Moscow and St P who are very excited about the implications of all this, and some not-so-smart people in the US with far more money than is good for them who are more than happy not only to, in a phrase attributed to Lenin, sell them (the Bolsheviks) the rope with which to hang them (the capitalists), but also to give them the money with which to buy it.

From Chapter 5 of The Road to Unfreedom:

[In the Russia of the 2010s] Ninety percent of Russians relied upon television for their news. Surkov was the head of public relations for Pervyi Kanal, the country’s most important channel, before he became a media manager for Boris Yeltsin and Vladimir Putin. He oversaw the transformation of Russian television from a true plurality representing various interests into a false plurality where images differed but the message was the same. In the mid-2010s, the state budget of Pervyi Kanal was about $850 million a year. Its employees and those of other Russian state networks were taught that power was real but that the facts of the world were not. Russia’s deputy minister of communications, Alexei Volin, described their career path: “They are going to work for The Man, and The Man will tell them what to write, what not to write, and how this or that thing should be written. And The Man has the right to do it, because he pays them.” Factuality was not a constraint: Gleb Pavlovsky, a leading political technologist, explained, “You can just say anything. Create realities.” International news came to substitute for regional and local news, which all but disappeared from television. Foreign coverage meant the daily registration of the eternal current of Western corruption, hypocrisy, and enmity. Nothing in Europe or America was worthy of emulation. True change was impossible—that was the message.

RT, Russia’s television propaganda sender for foreign audiences, had the same purpose: the suppression of knowledge that might inspire action, and the coaxing of emotion into inaction. It subverted the format of the news broadcast by its straight-faced embrace of baroque contradiction: inviting a Holocaust denier to speak and identifying him as a human rights activist; hosting a neo-Nazi and referring to him as a specialist on the Middle East. In the words of Vladimir Putin, RT was “funded by the government, so it cannot help but reflect the Russian government’s official position.” That position was the absence of a factual world, and the level of funding was about $400 million a year. Americans and Europeans found in the channel an amplifier of their own doubts—sometimes perfectly justified—in the truthfulness of their own leaders and the vitality of their own media. RT’s slogan, “Question More,” inspired an appetite for more uncertainty. It made no sense to question the factuality of what RT broadcast, since what it broadcast was the denial of factuality. As its director said: “There is no such thing as objective reporting.” RT wished to convey that all media lied, but that only RT was honest by not pretending to be truthful.

Factuality was replaced by a knowing cynicism that asked nothing of the viewer but the occasional nod before sleep.
 

GoblinCampFollower

Well-known member
Joined
Sep 20, 2018
Messages
406
AI and math will always find stuff that the human senses can't. I'm not too worried.
Yes, an infinite-resolution image would give an AI infinite opportunity to find what is amiss. The AI would know that the microscopic folds in your skin look wrong, proving it's a fake.

HOWEVER: in any finite-resolution image, say 512x512, the amount of information is strictly finite. The number of things the deepfake AI has to get right to make it look "real" is finite, and the logically possible analyses of finite data are finite too.

This kind of AI is still bound by mathematical limits on statistical significance that have been known for 200 years. Modern AI can churn through vast data faster than a human can, but when the data set to analyze is very limited (as it often is in the real world), AIs struggle to do anything a human statistician couldn't.

I work with AI and data analysis in my actual job; I do this for a living. Machine learning produces probabilistic answers and absolutely can be wrong. News headlines will brag about tests where the AI was actually just a little better than random chance. We'll have situations where the AI is only 24% sure that a fake image is fake.
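That probabilistic behavior is easy to illustrate. The toy sketch below is not a real detector; the feature name, weights, and inputs are invented for illustration, standing in for whatever features an actual deepfake classifier would learn.

```python
import math

def fake_probability(artifact_score: float, weight: float = 1.5, bias: float = -2.0) -> float:
    """Toy logistic model: maps an artifact-feature score to P(image is fake).

    The weight and bias here are made-up stand-ins, not trained values.
    """
    return 1.0 / (1.0 + math.exp(-(weight * artifact_score + bias)))

# A heavily compressed fake may leave only weak artifacts, so the model can
# sit well under 50% confidence on an image that really is fake.
weak_evidence = fake_probability(0.5)    # faint artifacts
strong_evidence = fake_probability(4.0)  # blatant artifacts
print(f"{weak_evidence:.2f} {strong_evidence:.2f}")
```

The output is always a probability, never a verdict; wherever the decision threshold is set, some fakes will land on the wrong side of it.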
 

Sean Gorham

Verti's Minion
VVO Supporter 🍦🎈👾❤
Joined
Sep 19, 2018
Messages
74
SL Rez
2005
Joined SLU
2007-09-27
SLU Posts
1928
I probably should have written "almost" before "always" in my previous post. Right now AI and algorithms are limited by human ingenuity.
 

Innula Zenovka

Nasty Brit
VVO Supporter 🍦🎈👾❤
Joined
Sep 20, 2018
Messages
2,977
SLU Posts
18459
What is horrifying is that if the FBI, Homeland security or the local cop shop want to frame you they have all the tools to do it and they have them cheaply.
Presumably, though, if they are minded to do that, they can already achieve much the same results by planting some illegal drugs, of which they must have a ready supply in the evidence locker, on their target.
 

danielravennest

Active member
Joined
Sep 21, 2018
Messages
1,013
SLU Posts
9073
What is horrifying is that if the FBI, Homeland security or the local cop shop want to frame you they have all the tools to do it and they have them cheaply. If they want to change the scenario on their video cameras that they wear after they murder someone in cold blood, they can. There is no check and balance anymore and you can no longer believe what you see. If your crappy neighbor wants to make your life miserable they can create videos and give them to the cops (or everyone in your neighborhood) "proving" that you did X,Y and Z and you have no recourse other than to try and prove the video is fake. The onus of a fake video is on the victim presented in the video, not on the person who posts or provides the video to prove that it's real.
This is where the technology underlying Bitcoin - the blockchain - can be of service. Blocks in the blockchain are timestamped with a difficult-to-fake cryptographic checksum, and the blocks are linked in sequence, so that if one is altered, it is immediately evident. Bitcoin happens to use this to secure financial transactions, but it works for any kind of data whatsoever, such as a video stream.

For a police camera, what you do is periodically checksum the video stream and send the checksum off to a neutral party to store. If the video is later tampered with, the checksums won't match any more, and you know it has been altered. With the rise of cheap video editing, the default assumption has to be that any video that isn't timestamped and checksummed in real time is unproven hearsay; it would have to be corroborated with other evidence before being accepted.
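A minimal sketch of that scheme, assuming nothing about real body-cam hardware (the segment contents and names below are placeholders): each recorded segment is hashed together with the previous hash, so altering any segment breaks every checksum that follows it.

```python
import hashlib

def chain_checksums(segments: list[bytes]) -> list[str]:
    """Hash chain: each entry covers its segment plus the previous hash."""
    chain: list[str] = []
    prev = b""
    for seg in segments:
        digest = hashlib.sha256(prev + seg).hexdigest()
        chain.append(digest)
        prev = digest.encode()
    return chain

original = [b"segment-1", b"segment-2", b"segment-3"]
recorded = chain_checksums(original)           # deposited with a neutral party

tampered = [b"segment-1", b"edited!!!", b"segment-3"]
replayed = chain_checksums(tampered)

# Everything from the altered segment onward mismatches.
print([a == b for a, b in zip(recorded, replayed)])  # [True, False, False]
```

Verifying later is just recomputing the chain from the footage in evidence and comparing it against the copy the neutral party holds.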

Ironically, other technology can also come to your aid to show a video was fake. For example, your phone's GPS knows where you have been, so you can use it to prove you weren't there when the crime was committed. Another case is multiple video streams of the same event: a police body cam, say, plus an ATM camera covering the same scene. If the videos don't match, you know something has been altered. Of course, the videos have to be under independent control.
 