This Bill Hader Deepfake Video Is Amazing. It’s Also Terrifying for Our Future.

Everything you need to know about the technology that poses real dangers to our democracy.

BY KRISTINA LIBBY | AUG 13, 2019

Imagine this: You click on a news clip and see the President of the United States at a press conference with a foreign leader. The dialogue is real. The news conference is real. You share with a friend. They share with a friend. Soon, everyone has seen it. Only later you learn that the President’s head was superimposed on someone else’s body. None of it ever actually happened.

Sound farfetched? Not if you’ve seen this wild video from YouTube user Ctrl Shift Face:

https://hmg-h-cdn.hearstapps.com/videos/hader-1565727618.mp4

In the clip, comedian Bill Hader shares a story about his encounters with Tom Cruise and Seth Rogen. As Hader, a skilled impressionist, does his best Cruise and Rogen, those actors’ faces seamlessly, frighteningly melt into his own. The technology makes Hader’s impressions that much more vivid, but it also illustrates how easy—and potentially dangerous—it is to manipulate video content.

What Is a Deepfake?

The Hader video is an expertly crafted deepfake. Most deepfake technology is built on generative adversarial networks (GANs), a class of machine-learning models invented in 2014 by Ian Goodfellow, then a Ph.D. student and now a researcher at Apple.

GANs let algorithms move beyond classifying data to generating it. Inside a GAN, two neural networks compete: a generator creates images, while a discriminator tries to tell them apart from real ones, and each improves against the other until the fakes become convincing. Starting from as little as a single photo, a well-trained GAN can produce a video clip of that person. Samsung’s AI Center recently released research explaining the science behind this approach.
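To make the adversarial game concrete, here is a minimal sketch in PyTorch (the framework is our choice; the article doesn’t specify one) that trains a generator against a discriminator on a toy one-dimensional distribution instead of images. Every network size, learning rate, and variable name is illustrative:

```python
# Minimal GAN sketch: a generator learns to mimic a toy Gaussian
# distribution while a discriminator learns to tell real samples
# from generated ones. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # generated samples from noise

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call the fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach())  # samples should cluster near 4.0
```

Scaled up from a single number to millions of pixels, with convolutional networks in place of these tiny ones, the same push-and-pull produces footage like the Hader clip.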

“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” said the researchers behind the paper. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
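Very loosely, the quoted result amounts to “pretrain broadly, then adapt quickly to one person.” The sketch below fine-tunes a stand-in generator on a handful of landmark-to-frame pairs; it is a toy under stated assumptions, not the paper’s actual method, which meta-learns the initialization itself:

```python
# Hedged sketch of few-shot adaptation: start from a generator that,
# in the real system, would already be trained on thousands of
# talking-head videos, then fine-tune briefly on a few frames of one
# person. The random init and L1 loss here are mere stand-ins.
import torch
import torch.nn as nn

# Stand-in "person-agnostic" generator: facial landmarks -> face image.
generator = nn.Sequential(
    nn.Linear(68 * 2, 256), nn.ReLU(), nn.Linear(256, 3 * 64 * 64), nn.Tanh()
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

# "Just a few images" of the target person: (landmarks, frame) pairs.
landmarks = torch.randn(8, 68 * 2)            # stand-in landmark vectors
frames = torch.rand(8, 3 * 64 * 64) * 2 - 1   # stand-in frames in [-1, 1]

for step in range(200):  # quick, person-specific fine-tune
    loss = nn.functional.l1_loss(generator(landmarks), frames)
    opt.zero_grad(); loss.backward(); opt.step()
```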

For now, the technique is limited to talking-head videos. But with 47 percent of Americans getting their news through online video, what happens when GANs can make people appear to dance, clap their hands, or do anything else their creators choose?

Why Are Deepfakes Dangerous?

Set aside, for a moment, the fact that more than 30 nations are actively engaged in cyberwar at any given time. Even then, the biggest concern with deepfakes might be sites like the ill-conceived Deepnudes, where celebrity faces, and the faces of ordinary women, could be superimposed onto pornographic video.

Deepnudes’ founder eventually canceled the site’s launch, fearing that “the probability that people will misuse it is too high.” But what else would people do with fabricated pornography?

“At the most basic level, deepfakes are lies disguised to look like truth,” says Andrea Hickerson, Director of the School of Journalism and Mass Communications at the University of South Carolina. “If we take them as truth or evidence, we can easily make false conclusions with potentially disastrous consequences.”

A lot of the fear about deepfakes rightfully concerns politics, Hickerson says. “What happens if a deepfake video portrays a political leader inciting violence or panic? Might other countries be forced to act if the threat was immediate?”

With the 2020 elections approaching and the continued threat of cyberattacks and cyberwar, we have to seriously consider a few scary scenarios:

  • Weaponized deepfakes will be used in the 2020 election cycle to further ostracize, insulate, and divide the American electorate.
  • Weaponized deepfakes will be used to influence not only the voting behavior but also the consumer preferences of hundreds of millions of Americans.
  • Weaponized deepfakes will be used in spear phishing and other established cyberattack strategies to target victims more effectively.

This means that deepfakes put companies, individuals, and the government at increased risk.

“The problem isn’t the GAN technology, necessarily,” says Ben Lamm, CEO of the AI company Hypergiant Industries. “The problem is that bad actors currently have an outsized advantage and there are not solutions in place to address the growing threat. However, there are a number of solutions and new ideas emerging in the AI community to combat this threat. Still, the solution must be humans first.”

What’s Being Done to Fight Deepfakes?

Last month, the U.S. House of Representatives’ Intelligence Committee sent a letter to Twitter, Facebook, and Google asking how the social media sites planned to combat deepfakes in the 2020 election. The inquiry came in large part after President Trump tweeted out a doctored video of House Speaker Nancy Pelosi, technically a “cheapfake” made with conventional editing rather than AI:

The letter followed a January request from Congress asking the Director of National Intelligence to provide a formal report on deepfake technology. While legislative inquiry is critical, it may not be enough.

Government agencies like DARPA, along with researchers at Carnegie Mellon, the University of Washington, Stanford University, and the Max Planck Institute for Informatics, are also experimenting with deepfake technology. They are studying both how to use GAN technology and how to combat it.

By feeding algorithms both deepfake and real video, they hope to teach computers to identify when something is a deepfake. If this sounds like an arms race, that’s because it is: we’re using technology to fight technology in a race that won’t end.
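The defensive side of that race can be pictured with an equally small sketch: a classifier trained to label face crops from video frames as real or fake. Again, the framework, dataset layout, and every hyperparameter are our own illustrative assumptions, not any lab’s actual system:

```python
# Toy deepfake detector: a small CNN labels 64x64 face crops as
# real (0) or fake (1). Architecture and numbers are illustrative.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                                      # one logit per crop
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(frames, labels):
    """frames: (N, 3, 64, 64) face crops; labels: (N, 1), 1 = deepfake."""
    loss = loss_fn(classifier(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Random tensors stand in for real extracted frames and labels.
print(training_step(torch.randn(8, 3, 64, 64),
                    torch.randint(0, 2, (8, 1)).float()))
```

Each time detectors like this improve, generators can be retrained to beat them, which is exactly why the race has no finish line.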

Maybe the solution isn’t tech. Additional recent research suggests that mice might just be the key. Researchers at the University of Oregon Institute of Neuroscience think that “a mouse model, given the powerful genetic and electrophysiological tools for probing neural circuits available for them, has the potential to powerfully augment a mechanistic understanding of phonetic perception.”

This means mice could inform next-generation algorithms that could detect fake video and audio. Nature could counteract technology, but it’s still an arms race.

While advances in detection technology could help spot deepfakes, it may be too late. Once trust in a technology is corroded, it’s nearly impossible to restore. If we corrupt people’s faith in video, how long until faith is lost in the news on television, in the clips on the internet, or in live-streamed historic events?

“Deepfake videos threaten our civic discourse and can cause serious reputational and psychic harm to individuals,” says Sharon Bradford Franklin, Policy Director for New America’s Open Technology Institute. “They also make it even more challenging for platforms to engage in responsible moderation of online content.”

“While the public is understandably calling for social media companies to develop techniques to detect and prevent the spread of deepfakes,” she continues, “we must also avoid establishing legal rules that will push too far in the opposite direction, and pressure platforms to engage in censorship of free expression online.”

If restrictive legislation isn’t the solution, should the technology just be banned? While many argue yes, new research suggests GANs might help improve “multi-resolution schemes [that] enable better image quality and prevents patch artifacts” in X-rays, and that other medical uses could be right around the corner.

Is that enough to outweigh the damage? Medicine is important. But so is ensuring the foundation of our democracy and our press.

How to Spot a Deepfake

Many Americans have already lost their faith in the news. And as deepfake technology grows, the cries of fake news are only going to get louder.

“The best way to protect yourself from a deepfake is to never take a video at face value,” says Hickerson. “We can’t assume seeing is believing. Audiences should independently seek out related contextual information and pay special attention to who is sharing a video and why. Generally speaking, people are sloppy about what they share on social media. Even if your best friend shares it, you should think about where she got it. Who or what is the original source?”

Until governments, technologists, or companies find an answer, the response has to be driven by individuals. And if there isn’t an immediate push for one, it could be too late.

What we should all do is demand that the platforms propagating this material be held accountable, that the government push to ensure the technology’s positive uses outweigh its negatives, and that education teach us to recognize deepfakes and have enough sense not to share them.

Otherwise, we may find ourselves in a cyberwar that a hacker started with nothing but a doctored video. What then?
