This AI makes Robert De Niro perform lines in flawless German

(credit: Paul Bednyakov | Getty Images)

You talkin’ to me… in German?

New deepfake technology allows Robert De Niro to deliver his famous line from Taxi Driver in flawless German—with realistic lip movements and facial expressions. The AI software manipulates an actor’s lips and facial expressions to make them convincingly match the speech of someone speaking the same lines in a different language. The artificial-intelligence-based tech could reshape the movie industry, in both alluring and troubling ways.

The technology is related to deepfaking, which uses AI to paste one person’s face onto someone else. It promises to allow directors to effectively reshoot movies in different languages, making foreign versions less jarring for audiences and more faithful to the original. But the power to automatically alter an actor’s face so easily might also prove controversial if not used carefully.

#ai, #deepfakes, #gaming-culture, #movies

Deepfake tech takes on satellite maps

While the concept of “deepfakes,” or AI-generated synthetic imagery, has been decried primarily in connection with involuntary depictions of people, the technology is dangerous (and interesting) in other ways as well. For instance, researchers have shown that it can be used to manipulate satellite imagery to produce real-looking — but totally fake — overhead maps of cities.

The study, led by Bo Zhao from the University of Washington, is not intended to alarm anyone but rather to show the risks and opportunities involved in applying this rather infamous technology to cartography. In fact, their approach has as much in common with “style transfer” techniques — redrawing images in an impressionistic, crayon or other arbitrary fashion — as with deepfakes as they are commonly understood.

The team trained a machine learning system on satellite images of three different cities: Seattle, nearby Tacoma and Beijing. Each has its own distinctive look, just as a painter or medium does. For instance, Seattle tends to have larger overhanging greenery and narrower streets, while Beijing is more monochrome and — in the images used for the study — the taller buildings cast long, dark shadows. The system learned to associate details of a street map (like Google or Apple’s) with those of the satellite view.

The resulting machine learning agent, when given a street map, returns a realistic-looking faux satellite image of what that area would look like if it were in any of those cities. In the following image, the map corresponds to the top right satellite image of Tacoma, while the lower versions show how it might look in Seattle and Beijing.

Four images show a street map and a real satellite image of Tacoma, and two simulated satellite images of the same streets in Seattle and Beijing.

Image Credits: Zhao et al.

A close inspection will show that the fake maps aren’t as sharp as the real one, and there are probably some logical inconsistencies, like streets that go nowhere. But at a glance the Seattle and Beijing images are perfectly plausible.

One only has to think for a few minutes to conceive of uses for fake maps like this, both legitimate and otherwise. The researchers suggest that the technique could be used to simulate imagery of places for which no satellite imagery is available — like one of these cities in the days before such things were possible, or for a planned expansion or zoning change. The system doesn’t have to imitate another place altogether — it could be trained on a more densely populated part of the same city, or one with wider streets.

It could conceivably even be used, as this rather more whimsical project was, to make realistic-looking modern maps from ancient hand-drawn ones.

Should technology like this be bent to less constructive purposes, the paper also looks at ways to detect such simulated imagery using careful examination of colors and features.
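That color-and-feature examination can be pictured with a toy example. The sketch below is a minimal illustration of one such check — comparing a suspect image’s per-channel color histograms against those of known-real imagery and flagging large divergences — and every name and threshold in it is invented; the paper’s actual detector uses richer features.

```python
# Hypothetical sketch of histogram-based screening for simulated imagery.
# Bin counts, the chi-square threshold, and function names are all invented.

def color_histogram(pixels, bins=8):
    """Per-channel histogram of (r, g, b) pixels, each normalized to sum to 1."""
    hists = [[0.0] * bins for _ in range(3)]
    for px in pixels:
        for ch, value in enumerate(px):
            hists[ch][min(value * bins // 256, bins - 1)] += 1
    n = float(len(pixels))
    return [[count / n for count in h] for h in hists]

def chi_square_distance(h1, h2, eps=1e-9):
    """Chi-square distance summed over the three channel histograms."""
    return sum(
        (a - b) ** 2 / (a + b + eps)
        for ch1, ch2 in zip(h1, h2)
        for a, b in zip(ch1, ch2)
    )

def looks_synthetic(suspect_pixels, reference_pixels, threshold=0.5):
    """Flag an image whose color distribution strays far from the reference."""
    d = chi_square_distance(color_histogram(suspect_pixels),
                            color_histogram(reference_pixels))
    return d > threshold
```

A real detector would also examine spatial features (edges, textures, GAN artifacts), but the principle — statistical divergence from known-real imagery — is the same.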

The work challenges the general assumption of the “absolute reliability of satellite images or other geospatial data,” said Zhao in a UW news article, and certainly, as with other media, that kind of thinking has to go by the wayside as new threats appear. You can read the full paper in the journal Cartography and Geographic Information Science.

#aerospace, #artificial-intelligence, #deepfakes, #mapping, #maps, #satellite-imagery, #science, #space, #tc, #university-of-washington

Deep science: AI is in the air, water, soil and steel

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week brings a few unusual applications of or developments in machine learning, as well as a particularly unusual rejection of the method for pandemic-related analysis.

One hardly expects to find machine learning in the domain of government regulation, if only because one assumes federal regulators are hopelessly behind the times when it comes to this sort of thing. So it may surprise you that the U.S. Environmental Protection Agency has partnered with researchers at Stanford to algorithmically root out violators of environmental rules.

When you see the scope of the issue, it makes sense. EPA authorities need to process millions of permits and observations pertaining to Clean Water Act compliance, things such as self-reported amounts of pollutants from various industries and independent reports from labs and field teams. The Stanford-designed process sorted through these to isolate patterns like which types of plants, in which areas, were most likely to affect which demographics. For instance, wastewater treatment plants in urban peripheries may tend to underreport pollution and put communities of color at risk.

The very process of reducing the compliance question to something that can be computationally parsed and compared helped clarify the agency’s priorities, showing that while the technique could identify more permit holders with small violations, it may draw attention away from general permit types that act as a fig leaf for multiple large violators.

Another large source of waste and expense is processing scrap metal. Tons of it goes through sorting and recycling centers, where the work is still mostly done by humans, and as you might imagine, it’s a dangerous and dull job. Eversteel is a startup out of the University of Tokyo that aims to automate the process so that a large proportion of the work can be done before human workers even step in.

Image of scrap metal with AI-detected labels for various kinds of items overlaid.

Image Credits: Eversteel

Eversteel uses a computer vision system to classify incoming scrap into nearly two dozen categories, and to flag impure (i.e., made of an unrecyclable alloy) or anomalous items for removal. It’s still at an early stage, but the industry isn’t going anywhere, and the lack of any large data set for training their models (they had to make their own, informed by steelworkers and imagery) showed Eversteel that this was indeed virgin territory for AI. With luck, they’ll be able to commercialize their system and attract the funding they need to break into this large but tech-starved industry.
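The classify-and-flag step can be sketched with a standard human-in-the-loop pattern: a classifier sorts what it is confident about and routes anything low-confidence to a worker. The class names, logits and threshold below are invented for illustration; Eversteel’s actual categories and models are not public.

```python
import math

# Illustrative sketch only. SCRAP_CLASSES, the logits, and min_confidence
# are invented stand-ins, not Eversteel's real taxonomy or model.

SCRAP_CLASSES = ["heavy_steel", "light_steel", "turnings", "impure_alloy"]

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def route_item(logits, min_confidence=0.8):
    """Sort an item automatically, pull it from the stream, or flag it for review."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    label = SCRAP_CLASSES[best]
    if probs[best] < min_confidence:
        return ("review", label)   # anomalous: prediction too uncertain to trust
    if label == "impure_alloy":
        return ("remove", label)   # unrecyclable alloy: remove before processing
    return ("sort", label)
```

The point of the confidence gate is that human workers only step in for the genuinely ambiguous items, which is exactly the division of labor described above.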

Another unusual but potentially helpful application of computer vision is in soil monitoring, a task every farmer has to do regularly to track water and nutrient levels. When they do manage to automate it, it’s done in a rather heavy-handed way. A team from the University of South Australia and Middle Technical University in Baghdad shows that the sensors, hardware and thermal cameras used now may be overkill.

Buckets of soil shown under various lights.

Image Credits: UNISA/Middle Technical University

Surprisingly, their answer is a standard RGB digital camera, which analyzes the color of the soil to estimate moisture. “We tested it at different distances, times and illumination levels, and the system was very accurate,” said Ali Al-Naji, one of the creators. It could (and is planned to) be used to make a cheap but effective smart irrigation system that could improve crop yield for those who can’t afford industry-standard systems.
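The underlying idea — wetter soil photographs darker — can be sketched as a simple calibration from mean image brightness to a moisture fraction. The calibration constants below are placeholders for illustration, not values from the study, which fits a more sophisticated model.

```python
# Hedged sketch: map mean image brightness to a 0..1 moisture fraction.
# dry_brightness / wet_brightness would be calibrated per site against
# reference moisture readings; the numbers here are invented.

def mean_brightness(pixels):
    """Average luma of (r, g, b) pixels using the Rec. 601 weights."""
    return sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels) / len(pixels)

def estimate_moisture(pixels, dry_brightness=180.0, wet_brightness=60.0):
    """Linearly interpolate brightness between dry and wet calibration points."""
    b = mean_brightness(pixels)
    frac = (dry_brightness - b) / (dry_brightness - wet_brightness)
    return max(0.0, min(1.0, frac))  # clamp outside the calibration range
```

A per-site calibration like this is also why such a system can stay cheap: the only hardware is an ordinary RGB camera.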

#artificial-intelligence, #cognitive-science, #deepfakes, #ec-food-climate-and-sustainability, #health, #machine-learning, #science, #smart-speaker, #tc, #ultrasound, #video

PA woman charged with using deepfakes to harass teenage cheerleaders

The manipulated images showed the cheerleaders holding much less innocent things than pompoms.

(credit: Michael Moeller | EyeEm | Getty Images)

A woman in eastern Pennsylvania allegedly created a series of deepfake videos in a harassment and bullying campaign meant to intimidate teenage girls in competition with her daughter and get them kicked off a local cheerleading team.

Hilltown Township police earlier this month charged Raffaela Spone with three counts of cyber harassment of a child after she allegedly began harassing the teenagers last July, according to Bucks County District Attorney Matthew Weintraub.

The girls received voice and text messages saying, “You should kill yourself,” followed by doctored videos taken from images on their social media profiles and altered to make them appear nude, vaping, or drinking. The altered images included captions reading, “toxic traits, revenge, dating boys, and smoking” and “was drinking at the shore, smokes pot, and uses ‘attentionwh0re69’ as a screen name.” The images and videos were also sent to coaches for the team, seemingly in an attempt to have the girls removed from the team.

#crime, #deepfakes, #policy

Deepfake Videos of Eerie Tom Cruise Revive Debate

A tool that allows old photographs to be animated, and viral videos of a Tom Cruise impersonation, shined new light on digital impersonations.

#artificial-intelligence, #computers-and-the-internet, #cruise-tom, #deepfakes, #myheritage, #pornography, #rumors-and-misinformation, #video-recordings-downloads-and-streaming

MyHeritage now lets you animate old family photos using deepfakery

Genealogy service MyHeritage is using AI-enabled synthetic media as a tool for manipulating real emotions and capturing user data: it has just launched a new feature — called ‘deep nostalgia’ — that lets users upload a photo of a person (or several people) to see individual faces animated by algorithm.

The Black Mirror-style pull of seeing long-lost relatives — or famous people from another era — brought to a synthetic approximation of life, eyes swivelling, faces tilting as if they’re wondering why they’re stuck inside this useless digital photo frame, has led to an inexorable stream of social shares since it was unveiled yesterday at a family history conference.

MyHeritage’s AI-powered viral marketing playbook with this deepfakery isn’t a complicated one: They’re going straight for tugging on your heartstrings to grab data which can be used to drive sign-ups for their other (paid) services. (Selling DNA tests is their main business.)

It’s free to animate a photo using the ‘deep nostalgia’ tech on MyHeritage’s site but you don’t get to see the result until you hand over at least an email (along with the photos you want animating, ofc) — and agree to its T&Cs and privacy policy. Both of which have attracted a number of concerns, over the years.

Last year, for example, the Norwegian Consumer Council reported MyHeritage to the national consumer protection and data authorities after a legal assessment of the T&Cs found the contract it asks customers to sign to be “incomprehensible”.

In 2018 MyHeritage also suffered a major data breach — and data from that breach was later found for sale on the dark web, among a wider cache of hacked account info pertaining to several other services.

The company — which, as we reported earlier this week, is being acquired by a US private equity firm for ~$600M — is doubtless relying on the deep pull of nostalgia to smooth over any individual misgivings about handing over data and agreeing to its terms.

The face animation technology itself is impressive enough — if you set aside the ethics of encouraging people to drag their long lost relatives into the uncanny valley to help MyHeritage cross-sell DNA testing (with all the massive privacy considerations around putting that kind of data in the hands of a commercial entity).

Looking at the inquisitive face of my great-grandmother, I do have to wonder what she would have made of all this.

The facial animation feature is powered by Israeli company D-ID, a TechCrunch Disrupt Battlefield alum — which started out building tech to digitally de-identify faces, with an eye on protecting images and video from being identified by facial recognition algorithms.

It released a demo video of the photo-animating technology last year. The tech uses a driver video to animate the photo — mapping the facial features of the photo onto that base driver to create a ‘live portrait’, as D-ID calls it.

“The Live Portrait solution brings still photos to life. The photo is mapped and then animated by a driver video, causing the subject to move its head and facial features, mimicking the motions of the driver video,” D-ID said in a press release. “This technology can be implemented by historical organizations, museums, and educational programs to animate well-known figures.”
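At a very high level, that mapping step can be pictured as retargeting facial-landmark motion from the driver video onto the still photo. D-ID’s real system warps pixels with a learned generative model; this toy sketch, with invented coordinates, only moves landmark points by the driver’s offset from its rest pose.

```python
# Toy illustration of driver-based reenactment at the landmark level.
# Real "live portrait" systems synthesize pixels with a trained model;
# all coordinates and the rest-pose convention here are assumptions.

def animate_frame(photo_landmarks, driver_rest, driver_frame):
    """Offset each photo landmark by the driver's motion relative to its rest pose."""
    return [
        (px + dx - rx, py + dy - ry)
        for (px, py), (rx, ry), (dx, dy)
        in zip(photo_landmarks, driver_rest, driver_frame)
    ]
```

Run per frame of the driver video, this produces the head tilts and feature motion described in the press release — the hard part, which the sketch omits, is rendering plausible pixels around the moved points.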

It’s offering live portraits as part of a wider ‘AI Face’ platform which will offer third parties access to other deep learning, computer vision and image processing technologies. D-ID bills the platform as a ‘one-stop shop’ for synthesized video creation.

Other tools include a ‘face anonymization’ feature which replaces one person’s face on video with another’s (such as for documentary film makers to protect a whistleblower’s identity); and a ‘talking heads’ feature that can be used for lip syncing or to replace the need to pay actors to appear in content such as marketing videos as it can turn an audio track into a video of a person appearing to speak those words.

The age of synthesized media is going to be a weird one, that’s for sure.


#artificial-intelligence, #d-id, #deep-nostalgia, #deepfakes, #myheritage, #synthesized-media

Reface grabs $5.5M seed led by A16z to stoke its viral face-swap video app

Buzzy face-swap video app Reface, which lends users celebrity ‘superpowers’ by turning their selfies into “eerily realistic” famous video clips at the tap of a button, has caught the attention of Andreessen Horowitz. The Silicon Valley venture firm leads a $5.5 million seed round in the deep tech entertainment startup, announced today.

Reface tells us its apps (iOS and Android) have been downloaded some 70 million times since it launched in January 2020 — up from 20M when we spoke to one of its (seven) co-founders back in August. It’s also attained ‘top five’ leading app status in around 100 countries, the US included — as well as bagging a ‘top app’ award in the annual Google Play best of. Quite the year, then.

That kind of viral growth clip has been turning heads all over the place. As well as nabbing a16z for its seed lead, Reface has pulled in funding from a number of prominent angel investors across the gaming, music, film/content creation and tech industries. 

This includes — from the gaming industry — Ilkka Paananen, CEO of Supercell; and David Helgason, founder of Unity Technologies. From the world of music: Scooter Braun (known for managing top pop stars like Justin Bieber and Ariana Grande); and Adam Leber, a manager to Britney Spears and Miley Cyrus, and an Uber investor. 

On the film/content creation side its angels include Matt Stone, Trey Parker, and Peter Serafinowicz (via Deep Voodoo); Bryan Baum and Matt Kives, founder of K5 Global (whose clients have included the likes of Bruce Willis, Jesse Eisenberg and Eric Stonestreet); and Natalia Vodianova, a model, philanthropist, and actress.

Tech industry investors joining the round as angels are: Josh Elman (ex-investment partner at Greylock and on the boards of Medium, Operator, and Jelly); and Sriram Krishnan (investor and former product lead at Microsoft, Facebook, Snap and Twitter).

It’s the kind of broad-based excitement that can be generated when hot trend streams like ‘no code’ and viral social video get crossed. (At least if, like a rubbery face mask, we stretch the definition of ‘no code’ to cover — in Reface’s case — a push-button, AI tool for pro-style content creation; the ‘no code’ label typically refers to b2b tools that simplify app building but the common theme is supercharged accessibility.)

With such a sparkling portfolio of early stage backers Reface’s Ukrainian founders are surely proving the value of sticking with it where deep tech is concerned. As we reported back in the summer, three of the founders began working together almost a decade ago — honing their machine learning chops straight out of university. Their tenacity is now paying off in viral spades.

“The Reface team has taken their highly sophisticated, machine learning technology and transformed it into a consumer experience that is seamless to use and fun to share with your friends,” said Connie Chan, general partner at Andreessen Horowitz in a supporting statement on the funding.

“We’re just beginning to see the potential applications for their core technology across consumer, entertainment, and marketing experiences and the Reface team has the creativity and expertise to help shape that future,” she added.

“I believe that Reface has the potential to be the next-generation personalization platform that enables the gamification of movies, sports, music videos, and many other fields that people are passionate about,” added Supercell’s Paananen in another statement. “I’m excited to see the team grow Reface into a community that allows people to create active personal connections with artists and each other through content they love.” 

Reface’s co-founders, Denys Dmytrenko, Oles Petriv, Ivan Altsybieiev, Roman Mogylnyi, Yaroslav Boiko, Dima Shvets and Kyrylo Syhyda (Image credit: Reface)

Reface says the seed funding will allow it to step on the growth gas. Including stepping up work on a tool that’s capable of detecting its own fakes, which it wants to build to shrink the risk of the tech being misused.

Earlier this year the startup told us the detection tech would be ready by fall so it’s evidently taking a bit longer than expected. But garnering viral growth for its celeb-video face-swaps may well have reconfigured its priorities a tad.

A previously slated fall launch of UGC video for face-swapping has also not yet fully materialized.

A community that’s currently fuelled by creating and sharing high production value celebrity video clips seems a very different kind of ‘eerie’ vs letting users loose on face-swapping themselves onto the body of their kid brother, say, or grandparent (not to mention the wider risks of not quality controlling the base material for face swaps). So taking time to get robust controls in place makes good business sense. As does focusing on stoking the viral boom with fresh celeb content by keeping content partners happy.

Asked about the delay, Reface told us UGC video has been “partly” launched at this point, since users can download their GIFs. “It’s still in beta as we’re testing and improving — detection system, moderation, communication with users — to make sure that content will not be misused,” it said, adding: “Regarding video, we have a bunch of creators who provide us with content directly. This way we can test all the UGC mechanics. We plan to launch the UGC option to the public by the end of Q1.” 

On the still-in-development detection tool, Reface said the plan is to launch it alongside UGC.

“We are training our models to maximize the detection quality,” it told us on that, adding that it hopes to have the tool finalized in April 2021.

Reface’s overarching ambition is to build “the biggest platform of personalized content” — monetizing that by partnering with content holders and celebrities to offer head-turning “creative digital marketing solutions”.

Having a near captive audience for buzzy social content during the pandemic has clearly helped the mission, even as it’s boosted social rivals like Snap.

With so many bored kids stuck at home with their phones this year, there’s been an opportunity for growth across the board of social media. (And a16z is a backer of several other social plays, including audio-based social network Clubhouse, and — for kids — the Roblox social gaming platform, to name just two.)

In August 2020, Reface says, it went viral and ranked number one in the US App Store — (briefly) surpassing TikTok and Instagram. Celebrities including Justin Bieber, Snoop Dogg, Britney Spears, Joe Rogan, Chris Brown, Miley Cyrus and Dua Lipa have all shared their refaced videos on social media this year, it also notes.

This year it’s inked partnerships with entertainment industry luminaries to promote new video launches, including Bieber, Cyrus and John Legend, as well as working with Amazon Prime to advertise the Borat movie premiere — racking up “millions” more shares and refaces.  

“Funding from Andreessen Horowitz will allow us to accelerate this growth, empower our team with new talents and improve technology, as we will continue work on a fake videos detection tool to guarantee responsible use of our AI technology,” co-founder Denis Dmitrenko added in a statement. 

#andreessen-horowitz, #apps, #artificial-intelligence, #deepfakes, #fundings-exits, #machine-learning, #reface, #social, #social-apps, #synthesized-media, #tc

Sentinel loads up with $1.35M in the deepfake detection arms race

Estonia-based Sentinel, which is developing a detection platform for identifying synthesized media (aka deepfakes), has closed a $1.35 million seed round from some seasoned angel investors — including Jaan Tallinn (Skype), Taavet Hinrikus (Transferwise), Ragnar Sass & Martin Henk (Pipedrive) — and Baltic early-stage VC firm United Angels VC.

The challenge of building tools to detect deepfakes has been likened to an arms race — most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation aimed at November’s US election. “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” it warned, before suggesting there’s still short term value in trying to debunk malicious fakes with “advanced detection technologies”.

Sentinel co-founder and CEO, Johannes Tammekänd, agrees on the arms race point — which is why its approach to this ‘goal-post-shifting’ problem entails offering multiple layers of defence, following a cyber security-style template. He says rival tools — mentioning Microsoft’s detector and another rival, Deeptrace, aka Sensity — are, by contrast, only relying on “one fancy neural network that tries to detect defects”, as he puts it.

“Our approach is we think it’s impossible to detect all deepfakes with only one detection method,” he tells TechCrunch. “We have multiple layers of defence that if one layer gets breached then there’s a high probability that the adversary will get detected in the next layer.”

Tammekänd says Sentinel’s platform offers four layers of deepfake defence at this stage: An initial layer based on hashing known examples of in-the-wild deepfakes to check against (and which he says is scalable to “social media platform” level); a second layer comprised of a machine learning model that parses metadata for manipulation; a third that checks for audio changes, looking for synthesized voices etc; and lastly a technology that analyzes faces “frame by frame” to look for signs of visual manipulation.  

“We take input from all of those detection layers and then we finalize the output together [as an overall score] to have the highest degree of certainty,” he says.
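One simple way to picture that fusion step — assuming, as the quote suggests but Sentinel has not published, that each layer emits an independent manipulation probability — is a noisy-OR combination with a known-hash shortcut for layer one. Everything below, including the use of exact rather than perceptual hashes, is a simplification for illustration.

```python
import hashlib

# Hypothetical sketch of the layered pipeline described above. Layer names,
# the noisy-OR fusion rule, and exact (rather than perceptual/robust) hashing
# are assumptions for illustration, not Sentinel's implementation.

KNOWN_DEEPFAKE_HASHES = set()  # layer 1: database of in-the-wild deepfakes

def register_known_fake(video_bytes):
    """Add a confirmed deepfake to the layer-1 hash database."""
    KNOWN_DEEPFAKE_HASHES.add(hashlib.sha256(video_bytes).hexdigest())

def fuse_scores(layer_scores):
    """Noisy-OR fusion: probability that at least one layer caught the fake."""
    p_all_miss = 1.0
    for p in layer_scores.values():
        p_all_miss *= 1.0 - p
    return 1.0 - p_all_miss

def analyze(video_bytes, metadata_score, audio_score, visual_score):
    """Return an overall manipulation score in [0, 1]."""
    # Layer 1: an exact match against known deepfakes is conclusive.
    if hashlib.sha256(video_bytes).hexdigest() in KNOWN_DEEPFAKE_HASHES:
        return 1.0
    # Layers 2-4 each emit an independent manipulation probability.
    return fuse_scores({
        "metadata": metadata_score,
        "audio": audio_score,
        "visual": visual_score,
    })
```

The appeal of this shape is exactly what Tammekänd describes: an adversary who evades one layer still has to beat the fused score from the others.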

“We already reached the point where somebody can’t say with 100% certainty if a video is a deepfake or not. Unless the video is somehow ‘cryptographically’ verifiable… or unless somebody has the original video from multiple angles and so forth,” he adds.

Tammekänd also emphasizes the importance of data in the deepfake arms race — over and above any specific technique. Sentinel’s boast on this front is that it’s amassed the “largest” database of in-the-wild deepfakes to train its algorithms on.

It has an in-house verification team working on data acquisition by applying its own detection system to suspect media, with three human verification specialists who “all have to agree” in order for it to verify the most sophisticated organic deepfakes. 

“Every day we’re downloading deepfakes from all the major social platforms — YouTube, Facebook, Instagram, TikTok, then there’s Asian ones, Russian ones, also porn sites as well,” he says.

“If you train a deepfake model based on let’s say Facebook data-sets then it doesn’t really generalize — it can detect deepfakes like itself but it doesn’t generalize well with deepfakes in the wild. So that’s why the detection is really 80% the data engine.”

Not that Sentinel can always be sure. Tammekänd gives the example of a short video clip released by Chinese state media of a poet who was thought to have been killed by the military — in which he appeared to say he was alive and well and told people not to worry.

“Although our algorithms show that, with a very high degree of certainty, it is not manipulated — and most likely the person was just brainwashed — we can’t say with 100% certainty that the video is not a deepfake,” he says.  

Sentinel’s founders, who are ex-NATO, Monese and UK Royal Navy, actually started working on a very different startup idea back in 2018 — called Sidekik — building Black Mirror-esque tech which ingested comms data to create a ‘digital clone’ of a person in the form of a tonally similar chatbot (or audiobot).

The idea was that people could use this virtual double to hand off basic admin-style tasks. But Tammekänd says they became concerned about the potential for misuse — hence pivoting to deepfake detection.

They’re targeting their technology at governments, international media outlets and defence agencies — with early clients, after the launch of their subscription service in Q2 this year, including the European Union External Action Service and the Estonian Government.

Their stated aim is to help to protect democracies from disinformation campaigns and other malicious information ops. So that means they’re being very careful about who gets access to their tech. “We have a very heavy vetting process,” he notes. “For example we work only with NATO allies.”

“We have had requests from Saudi Arabia and China but obviously that is a no-go from our side,” Tammekänd adds.

A recent study the startup conducted suggests exponential growth of deepfakes in the wild (i.e. found anywhere online) — with more than 145,000 examples identified so far in 2020, indicating a ninefold year-on-year growth. 

Tools to create deepfakes are certainly getting more accessible. And while plenty are, at face value, designed to offer harmless fun/entertainment — such as the likes of selfie-shifting app Reface — it’s clear that without thoughtful controls (including deepfake detection systems) the synthesized content they enable could be misappropriated to manipulate unsuspecting viewers.

Scaling up deepfake detection technology to the level of media swapping going on on social media platforms today is one major challenge Tammekänd mentions. 

“Facebook or Google could scale up [their own deepfake detection] but it would cost so much today that they would have to put in a lot of resources and their revenue would obviously fall drastically — so it’s fundamentally a triple standard; what are the business incentives?” he suggests.

There is also the risk posed by very sophisticated, very well funded adversaries — creating what he describes as “deepfake zero day” targeted attacks (perhaps state actors, presumably pursuing a very high value target).

“Fundamentally it is the same thing in cyber security,” he says. “Basically you can mitigate [the vast majority] of the deepfakes if the business incentives are right. You can do that. But there will always be those deepfakes which can be developed as zero days by sophisticated adversaries. And nobody today has a very good method or let’s say approach of how to detect those.

“The only known method is the layered defence — and hope that one of those defence layers will pick it up.”

Sentinel co-founders, Kaspar Peterson (left) & Johannes Tammekänd (right). Photo Credit: Sentinel

It’s certainly getting cheaper and easier for any Internet user to make and distribute plausible fakes. And as the risks posed by deepfakes rise up political and corporate agendas — the European Union is readying a Democracy Action Plan to respond to disinformation threats, for example — Sentinel is positioning itself to sell not only deepfake detection but bespoke consultancy services, powered by learnings extracted from its deepfake data-set.

“We have a whole product — meaning we just don’t offer a ‘black box’ but also provide prediction explainability, training data statistics in order to mitigate bias, matching against already known deepfakes and threat modelling for our clients through consulting,” the startup tells us. “Those key factors have made us the choice of clients so far.”

Asked what he sees as the biggest risks that deepfakes pose to Western society, Tammekänd says, in the short term, the major worry is election interference. 

“One probability is that during the election — or a day or two days before — imagine Joe Biden saying ‘I have a cancer, don’t vote for me’. That video goes viral,” he suggests, sketching one near term risk. 

“The technology’s already there,” he adds, noting that he had a recent call with a data scientist from one of the consumer deepfake apps who told him they’d been contacted by different security organizations concerned about just such a risk.

“From a technical perspective it could definitely be pulled off… and once it goes viral for people seeing is believing,” he adds. “If you look at the ‘cheap fakes’ that have already had a massive impact, a deepfake doesn’t have to be perfect, actually, it just has to be believable in a good context — so there’s a large number of voters who can fall for that.”

Longer term, he argues the risk is really massive: People could lose trust in digital media, period. 

“It’s not only about videos, it can be images, it can be voice. And actually we’re already seeing the convergence of them,” he says. “So what you can actually simulate are full events… that I could watch on social media and all the different channels.

“So we will only trust digital media that is verified, basically — that has some method of verification behind that.”

Another even more dystopian AI-warped future is that people will no longer care what’s real or not online — they’ll just believe whatever manipulated media panders to their existing prejudices. (And given how many people have fallen down bizarre conspiracy rabbit holes seeded by a few textual suggestions posted online, that seems all too possible.)

“Eventually people don’t care. Which is a very risky premise,” he suggests. “There’s a lot of talk about where are the ‘nuclear bombs’ of deepfakes? Let’s say it’s just a matter of time when a deepfake of a politician comes out that will do massive damage but… I don’t think that’s the biggest systematic risk here.

“The biggest systematic risk is, if you look from the perspective of history, what has happened is information production has become cheaper and easier and sharing has become quicker. So everything from Gutenberg’s printing press, TV, radio, social media, Internet. What’s happening now is the information that we consume on the Internet doesn’t have to be produced by another human — and thanks to algorithms you can on a binary time-scale do it on a mass scale and in a hyper-personalized way. So that’s the biggest systematic risk. We will not fundamentally understand what is reality anymore online. What is human and what is not human.”

The potential consequences of such a scenario are myriad: from social division on steroids, with ever more confusion and chaos engendering rising anarchy and violent individualism, to, perhaps, a mass switching off, if large swathes of the mainstream simply decide to stop listening to the Internet because so much online content is nonsense.

From there things could even go full circle — back to people “reading more trusted sources again”, as Tammekänd suggests. But with so much shapeshifting at stake, one thing looks like a safe bet: Smart, data-driven tools that help people navigate an ever more chameleonic and questionable media landscape will be in demand.

TechCrunch’s Steve O’Hear contributed to this report 

#ai, #artificial-intelligence, #deepfakes, #disinformation, #europe, #fundings-exits, #media, #sentinel, #synthesized-media, #tc

Microsoft launches a deepfake detector tool ahead of US election

Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.

The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.

“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
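Microsoft has not published Video Authenticator's internals, so the following is a conceptual sketch only: it illustrates the *interface* the blog post describes (a per-frame manipulation confidence score) using a crude stand-in heuristic that flags abrupt grayscale jumps, a rough proxy for the "blending boundary" artifacts mentioned. All function names here are hypothetical.

```python
# Conceptual sketch only: a stand-in for a per-frame deepfake confidence
# score. A real detector would be a trained neural network; this heuristic
# just measures sharp grayscale transitions (a crude blend-boundary proxy).

def frame_confidence(gray_row):
    """Score one frame (modeled as a row of 0-255 grayscale values).

    Returns a 0.0-1.0 'chance of manipulation', higher when neighboring
    pixels jump sharply, as a pasted face boundary might.
    """
    if len(gray_row) < 2:
        return 0.0
    jumps = [abs(a - b) for a, b in zip(gray_row, gray_row[1:])]
    return min(1.0, max(jumps) / 255)

def score_video(frames):
    """Yield a real-time-style confidence score for each frame in turn."""
    return [round(frame_confidence(f), 2) for f in frames]

smooth = [100, 102, 101, 103]   # gradual shading, low suspicion
spliced = [100, 102, 230, 103]  # hard edge, e.g. a blend boundary
print(score_video([smooth, spliced]))  # prints [0.01, 0.5]
```

The point is the shape of the output, not the heuristic: a score per frame, streamed as the video plays.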

If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real — perhaps with malicious intent to misinform people.

And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.

While AI tech is used to generate realistic deepfakes, identifying visual disinformation with technology is still a hard problem — and critical thinking remains the best tool for spotting high-tech BS.

Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.

Its blog post warns, though, that the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”

This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just, in the case of a dataset the researchers hadn’t had prior access to.

Microsoft, meanwhile, says its Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.

It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and its internal advisory body on AI, Ethics and Effects in Engineering and Research (the AETHER Committee) — as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.

The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.
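The hash-and-certify idea described above can be sketched in a few lines. Microsoft's actual system uses certificates (public-key signatures); as a simplified stdlib-only stand-in, this sketch substitutes an HMAC with a producer-held key, and all names are illustrative rather than Microsoft's API.

```python
import hashlib
import hmac

# Simplified sketch of hash-and-certify content authenticity. A real system
# would use public-key certificates; here a producer-held HMAC key stands in.

PRODUCER_KEY = b"newsroom-signing-key"  # hypothetical producer secret

def certify(content: bytes) -> dict:
    """Producer side: attach a content hash plus a signature over that hash."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "hash": digest, "sig": signature}

def verify(package: dict) -> bool:
    """Reader side: re-hash the content and validate the signature."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["hash"] and hmac.compare_digest(expected, package["sig"])

original = certify(b"video bytes ...")
assert verify(original)                               # untouched: passes

tampered = dict(original, content=b"altered bytes")
assert not verify(tampered)                           # any edit breaks the check
```

Because the signature travels in the metadata, a browser-extension reader can run the `verify` step wherever the content surfaces, which is the "high degree of accuracy" claim in a nutshell.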

The certification will also provide the viewer with details about who produced the media.

Microsoft is hoping this digital watermarking authenticity system will end up underpinning the Trusted News Initiative announced last year by the UK’s publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.

It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.

“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.

While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.

This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.

The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.

The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.

“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.

#artificial-intelligence, #canada, #computer-graphics, #deep-learning, #deepfakes, #disinformation, #election-interference, #facebook, #media, #media-literacy, #microsoft-research, #online-content, #san-francisco, #science-and-technology, #social-media, #special-effects, #synthetic-media, #the-new-york-times, #united-kingdom, #united-states, #university-of-washington, #usa-today

Deepfake video app Reface is just getting started on shapeshifting selfie culture

A bearded Rihanna gyrates and sings about shining bright like a diamond. A female Jack Sparrow looks like she’d be a right laugh over a pint. The cartoon contours of The Incredible Hulk lend envious tint to Donald Trump’s awfully familiar cheek bumps.

Selfie culture has a fancy new digital looking glass: Reface (previously Doublicat) is an app that uses AI-powered deepfake technology to let users try on another face/form for size. Aka “face swap videos”, in its marketing parlance.

Deepfake technology — or synthesized media, to give it its less pejorative label — is just getting into its creative stride, according to Roman Mogylnyi, CEO and co-founder of RefaceAI, which makes the eponymous app whose creepily lifelike output you may have noticed bubbling up in your social streams in recent months.

The startup has Ukrainian founders — as well as Mogylnyi, there’s Oles Petriv, Yaroslav Boiko, Dima Shvets, Denis Dmitrenko, Ivan Altsybieiev and Kyle Sygyda — but the business is incorporated in the US. Doubtless it helps to be nearer to Hollywood studios whose video clips power many of the available face swaps. (Want to see Titanic‘s Rose Hall recast with Trump’s visage staring out of Kate Winslet’s body? No we didn’t either — but once you’ve hit the button it’s horribly hard to unsee… 😷)

TechCrunch noticed a bunch of male friends WhatsApp-group-sharing video clips of themselves as scantily clad female singers and figured the developers must be onto something — à la FaceApp, or the earlier selfie trend of style transfer (a craze that was sparked by Prisma and cloned mercilessly by tech giants).

Reface’s deepfake effects are powered by a class of machine learning frameworks known as GANs (generative adversarial networks), which is how it’s able to get such relatively slick results, per Mogylnyi. In a nutshell, it generates a new animated face using the twin inputs (the selfie and the target video), rather than trying to mask one on top of the other.
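The distinction between masking and generating can be made concrete with a toy contrast. This is not RefaceAI's actual model (a real generator is a trained neural network); every name below is illustrative, and the "generator" is a stand-in passed in as a function.

```python
# Conceptual contrast only (not RefaceAI's model). A masking swap pastes
# source pixels over the target; a GAN-style swap synthesizes a new face
# conditioned on BOTH inputs: identity from the selfie, pose/expression
# from the driving video frame.

def masking_swap(source_face, target_frame):
    # Naive overlay: lighting and pose mismatch with the scene remains.
    return {**target_frame, "face": source_face}

def gan_style_swap(identity_vector, target_frame, generator):
    # Generative approach: the output face is drawn fresh, consistent
    # with the target's pose and expression but carrying the new identity.
    conditioning = {
        "identity": identity_vector,          # from the user's selfie
        "pose": target_frame["pose"],         # from the driving video
        "expression": target_frame["expression"],
    }
    return generator(conditioning)

# Stand-in generator; a real one is a trained neural network decoder.
toy_generator = lambda c: {
    "face": ("synthesized", c["identity"], c["pose"], c["expression"])
}

frame = {"pose": "3/4-left", "expression": "smile", "face": "original"}
swapped = gan_style_swap("id-vec-123", frame, toy_generator)
```

The practical upshot: because the face is generated rather than pasted, it inherits the target clip's motion and lighting for free, which is why the results look comparatively slick.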

Deepfake technology has of course been around for a number of years at this point, but the Reface team’s focus is on making the tech accessible and easy to use — serving it up as a push-button smartphone app with no need for more powerful hardware, and near-instant transformation from a single selfie snap. (It says it turns selfies into face vectors representing a user’s distinguishing facial features — and pledges that uploaded photos are removed from its Google Cloud platform “within an hour”.)

No tech expertise, nor much effort, is needed to achieve a lifelike effect. The inexorable social shares flowing from such a user-friendly application then double as product marketing.

It was a similar story with the AI tech underpinning Prisma — which left that app open to merciless cloning, though it was initially only transforming photos. But Mogylnyi believes the team behind the video face swaps has enough of a head (ha!) start to avoid a similar fate.

He says usage of Reface has been growing “really fast” since it added high-res videos this June — having initially launched with only far grainier GIF face swaps on offer. In terms of metrics the startup is not disclosing monthly active users but says it’s had around 20 million downloads at this point across 100 countries. (On Google Play the app has almost a full five-star rating, from nearly 150k reviews.)

“I understand that an interest from huge companies might come. And it’s obvious. They see that it’s a great thing — personalization is the next trend, and they are all moving in the same direction, with Bitmoji, Memoji, all that stuff — but we see personalized, hyperrealistic face swapping as the next big thing,” Mogylnyi tells TechCrunch.

“Even for [tech giants] it takes time to create such a technology. Even speaking about our team we have a brilliant team, brilliant minds, and it took us a long time to get here. Even if you spawn many teams to work on the same problems surely you will get somewhere… but currently we’re ahead and we’re doing our best to work on new technologies to keep in pace,” he adds.

Reface’s app is certainly having a moment right now, bagging top download slots on the iOS App Store and Google Play in 100 countries — helped, along the way, by its reflective effects catching the eye of the likes of Elon Musk and Britney Spears (who Mogylnyi says have retweeted examples of its content).

But he sees this bump as just the beginning — predicting much bigger things coming down the synthesized pipe as more powerful features are switched on. The influx of bitesized celebrity face swaps signals an incoming era of personalized media, which could have a profoundly transformative effect on culture.

Mogylnyi’s hope is that wide access to synthesized media tools will increase humanity’s empathy and creativity — providing those who engage with the tech limitless chances to (auto)vicariously experience things they maybe otherwise couldn’t ever (or haven’t yet) — and so imagine themselves into new possibilities and lifestyles.

He reckons the tech will also open up opportunities for richly personalized content communities to grow up around stars and influencers — extending how their fans can interact with them.

“Right now the way influencers exist is only one way; they’re just giving their audience the content. In my understanding in our case we’ll let influencers have the possibility to give their audience access to the content and to feel themselves in it. It’s one of the really cool things we’re working on — so it will be a part of the platform,” he says.

“What’s interesting about new-gen social networks [like TikTok] is that people can both be like consumers and providers at the same time… So in our case people will also be able to be providers and consumers but on the next level because they will have the technology to allow themselves to feel themselves in the content.”

“I used to play basketball in school years but I had an injury and I was dreaming about a pro career but I had to stop playing really hard. I’ll never know how my life would have gone if I was a pro basketball player so I have to be a startup entrepreneur right now instead… So in the case with our platform I actually will have a chance to see how my pro basketball career would look like. Feel myself in the content and life this life,” he adds.

This vision is really the mirror opposite of the concerns that are typically attached to deepfakes, around the risk of people being taken in, tricked, shamed or otherwise manipulated by intentionally false imagery.

So it’s noteworthy that Reface is not letting users loose on its technology in a way that could risk an outpouring of problem content. For example, you can’t yet upload your own video to make into a deepfake — although the ability to do so is coming. For now, you have to pick from a selection of preloaded celebrity clips and GIFs which no one would mistake for the real deal.

That’s a very deliberate decision, with Mogylnyi emphasizing they want to be responsible in how they bring the tech to market.

User-generated video and a lot more — full-body swaps are touted for next year — are coming, though. But before they turn on more powerful content generation functionality they’re working on building counter tech to reliably detect such generated content. Mogylnyi says it will only open up usage once they’re confident of being able to spot their own fakes.

“It will be this autumn, actually,” he says of launching UGC video (plus the deepfake detection capability). “We’ll launch it with our Face Studio… which will be a tool for content creators, for small studios, for small post production studios, maybe some music video makers.”

“We also have five different technologies in our pipeline which we’ll show in the upcoming half a year,” he adds. “There are also other technologies and features based on current tech [stack] that we’ll be launching… We’ll allow users to swap faces in pictures with the new stack and also a couple of mechanics based on face swapping as well, and also separate technologies as well we’re aiming to put into the app.”

He says higher quality video swapping is another focus, alongside building out more technologies for post production studios. “Face Studio will be like an overall tool for people who want full access to our technologies,” he notes, saying the pro tool will launch later this year.

The Ukrainian team behind the app has been honing its deep tech chops for years — starting to work together back in 2011, straight out of university, and going on to set up a machine learning dev shop in 2013.

Work with post production studios followed, as they were asked to build face swapping technology to help budget-strapped film production studios do more while moving their actors around less.

By 2018, with plenty of expertise under their belt, they saw the potential for making deepfake technology more accessible and user friendly — launching the GIF version of the app late last year, and going on to add video this summer when they also rebranded the app to Reface. The rest looks like it could be viral face swapping tech history…

So where does all this digital shapeshifting end up? “In our dreams and in our vision we see the app as a personalization platform where people will be able to live different lives during their one lifetime. So everyone can be anyone,” says Mogylnyi. “What’s the overall problem right now? People are scrolling content, not looking deep into it. And when I see people just using our app they always try to look inside — to look deeply into the picture. And that’s what really inspires us. So we understand that we can take the way people are browsing and the way they are consuming content to the next level.”

#apps, #artificial-intelligence, #britney-spears, #deep-learning, #deepfakes, #elon-musk, #europe, #gif, #machine-learning, #prisma, #reface, #selfie, #social, #social-networks, #special-effects, #tc, #united-states

Facebook’s ‘Deepfake Detection Challenge’ yields promising early results

The digitally face-swapped videos known as deepfakes aren’t going anywhere, but if platforms want to be able to keep an eye on them, they need to find them first. Doing so was the object of Facebook’s “Deepfake Detection Challenge,” launched last year. After months of competition the winners have emerged, and they’re… better than guessing. It’s a start!

Since their emergence in the last year or two, deepfakes have advanced from a niche toy created for AI conferences to easily downloaded software that anyone can use to create convincing fake video of public figures.

“I’ve downloaded deepfake generators that you just double click and they run on a Windows box — there’s nothing like that for detection,” said Facebook CTO Mike Schroepfer in a call with press.

This is likely to be the first election year where malicious actors attempt to influence the political conversation using fake videos of candidates generated in this fashion. Given Facebook’s precarious position in public opinion, it’s very much in their interest to get out in front of this.

The competition started last year with the debut of a brand new database of deepfake footage. Until then there was little for researchers to play with — a handful of medium size sets of manipulated video, but nothing like the huge sets of data used to evaluate and improve things like computer vision algorithms.

Facebook footed the bill to have 3,500 actors record thousands of videos, each present as both an original and a deepfaked version. A bunch of other “distractor” modifications were also made, to force any algorithm hoping to spot fakes to pay attention to the important part: the face, obviously.

Researchers from all over participated, submitting thousands of models that attempt to decide whether a video is a deepfake or not. Here are six videos, three of which are deepfakes. Can you tell which is which? (The answers are at the bottom of the post.)

At first, these algorithms were no better than chance. But after many iterations and some clever tuning, they managed to reach more than 80 percent accuracy in identifying fakes. Unfortunately, when deployed on a reserved set of videos that the researchers had not been provided, the highest accuracy was about 65 percent.

It’s better than flipping a coin, but not by much. Fortunately, that was pretty much expected and the results are actually very promising. In artificial intelligence research, the hardest step is going from nothing to something — after that it’s a matter of getting better and better. But finding out if the problem can even be solved by AI is a big step. And the competition seems to indicate that it can.

Examples of a source video and multiple distractor versions. Image Credits: Facebook

An important note is that the dataset created by Facebook was deliberately made to be more representative and inclusive than others out there, not just larger. After all, AI is only as good as the data that goes into it, and bias found in AI can often be traced back to bias in the dataset.

“If your training set doesn’t have the appropriate variance in the ways that real people look, then your model will not have a representative understanding of that. I think we went through pains to make sure this dataset was fairly representative,” Schroepfer said.

I asked whether any groups or types of faces or situations were less likely to be identified as fake or real, but Schroepfer wasn’t sure. In response to my questions about representation in the dataset, a statement from the team read:

In creating the DFDC dataset, we considered many factors and it was important that we had representation across several dimensions including self-identified age, gender, and ethnicity. Detection technology needs to work for everyone so it was important that our data was representative of the challenge.

The winning models will be made open source in an effort to spur the rest of the industry into action, but Facebook is working on its own deepfake detection product that Schroepfer said would not be shared. The adversarial nature of the problem — the bad guys learn from what the good guys do and adjust their approach, basically — means that telling everyone exactly what’s being done to prevent deepfakes may be counterproductive.

#artificial-intelligence, #deepfakes, #facebook, #science, #social

When audio deepfakes put words in Jay-Z’s mouth, did he have a legal case?

Jay-Z's visage hovers over a pair of robot DJs.

Enlarge / This is not an article about Daft Punk remixing or mashing up Jay-Z classics. It’s a photo illustration about machine learning models being applied to famous people’s voices. (But, hey, we’re ready for that Daft Punk + Jay-Z collab over here.) (credit: Getty Images / Sam Machkovech)

In late April, audio clips surfaced that appeared to capture Jay-Z rapping several unexpected texts. Did you ever imagine you’d hear Jay-Z do Shakespeare’s “To Be, Or Not to Be” soliloquy from Hamlet? How about Billy Joel’s “We Didn’t Start the Fire,” or a decade-old 4chan meme? All of these unlikely recitations were, of course, fake: “entirely computer-generated using a text-to-speech model trained on the speech patterns of Jay-Z,” according to a YouTube description. More specifically, they were deepfakes.

“Deepfakes” are super-realistic videos, photos, or audio falsified through sophisticated artificial intelligence. The better-known deepfakes are probably videos, which can be as silly as Green Day frontman Billie Joe Armstrong’s face superimposed on Will Ferrell’s, or as disturbing as non-consensual porn and political disinformation. But audio deepfakes — AI-generated imitations of human voices — are possible, too. Two days after the Jay-Z YouTubes were posted, they were removed due to a copyright claim. But just as quickly, they returned. The takedowns may have been a first attempt to challenge audio deepfake makers, but musicians and fans could potentially be grappling with the weird consequences of machine-generated voice manipulations long into the future.

Here’s a breakdown of Jay-Z’s copyright dispute, the laws around audio deepfakes, and what all this could mean in the years to come.

Read 11 remaining paragraphs | Comments

#deepfakes, #gaming-culture, #jay-z, #pitchfork, #voice-synthesis

The real threat of fake voices in a time of crisis

As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post these rules publicly and allow the public to contribute their comments to the proposed rules online. But are federal public comment websites — a vital institution for American democracy — secure in this time of crisis? Or are they vulnerable to bot attack?

In December 2019, we published a new study to see firsthand just how vulnerable the public comment process is to an automated attack. Using publicly available artificial intelligence (AI) methods, we successfully generated 1,001 comments of deepfake text, computer-generated text that closely mimics human speech, and submitted them to the Centers for Medicare & Medicaid Services’ (CMS) website for a proposed federal rule that would institute mandatory work reporting requirements for citizens on Medicaid in Idaho.

The comments we produced using deepfake text constituted over 55% of the 1,810 total comments submitted during the federal public comment period. In a follow-up study, we asked people to identify whether comments were from a bot or a human. Respondents were only correct half of the time — the same probability as random guessing.
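The study's comments were produced with a far more capable neural language model, but even a toy bigram Markov chain over a handful of seed comments shows the mechanics the passage describes: learn word transitions once, then sample endless novel variants at near-zero marginal cost. This sketch is purely illustrative and is not the study's method.

```python
import random

# Toy illustration of automated comment generation: a bigram Markov chain.
# Real deepfake text comes from neural language models; the economics are
# the same — train once, generate unlimited distinct-looking comments.

def train_bigrams(comments):
    """Map each word to the list of words observed to follow it."""
    chain = {}
    for text in comments:
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, max_words=15, seed=0):
    """Random-walk the chain from a start word to produce one 'comment'."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

seeds = [
    "I oppose the proposed work reporting requirements for Medicaid",
    "I oppose the new rule because reporting requirements harm patients",
]
chain = train_bigrams(seeds)
print(generate(chain, "I"))
```

Each call with a different seed yields a different recombination of the training phrases — which is precisely why volume alone is no longer evidence of genuine public sentiment.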

deepfake text question

Image Credits: Zang/Weiss/Sweeney

The example above is deepfake text generated by the bot that all survey respondents thought was from a human.

We ultimately informed CMS of our deepfake comments and withdrew them from the public record. But a malicious attacker would likely not do the same.

Previous large-scale fake comment attacks on federal websites have occurred, such as the 2017 attack on the FCC website regarding the proposed rule to end net neutrality regulations.

During the net neutrality comment period, firms hired by industry group Broadband for America used bots to create comments expressing support for the repeal of net neutrality. They then submitted millions of comments, sometimes even using the stolen identities of deceased voters and the names of fictional characters, to distort the appearance of public opinion.

A retroactive text analysis of the comments found that 96-97% of the more than 22 million comments on the FCC’s proposal to repeal net neutrality were likely coordinated bot campaigns. These campaigns used relatively unsophisticated and conspicuous search-and-replace methods — easily detectable even on this mass scale. But even after investigations revealed the comments were fraudulent and made using simple search-and-replace-like computer techniques, the FCC still accepted them as part of the public comment process.
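A sketch can show why search-and-replace campaigns are "easily detectable": two comments built from one template differ only at the swapped slots, so their token-level similarity is suspiciously high. The threshold below is illustrative, not taken from the FCC analysis.

```python
import difflib

# Sketch: flag likely template variants by token-level similarity.
# Threshold is illustrative; a real analysis would tune it on known data.

def similarity(a: str, b: str) -> float:
    """Token-level similarity ratio between two comments (0.0-1.0)."""
    return difflib.SequenceMatcher(None, a.split(), b.split()).ratio()

def flag_templated(comments, threshold=0.8):
    """Return index pairs of comments that look like variants of one template."""
    pairs = []
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            if similarity(comments[i], comments[j]) >= threshold:
                pairs.append((i, j))
    return pairs

comments = [
    "I strongly oppose the plan to end net neutrality regulations in America",
    "I firmly oppose the plan to end net neutrality regulations in America",
    "Please fund more rural broadband infrastructure projects",
]
print(flag_templated(comments))  # prints [(0, 1)]
```

Neural deepfake text defeats exactly this check: generated comments share no fixed skeleton, so pairwise similarity stays in the range of genuine human submissions.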

Even these relatively unsophisticated campaigns were able to affect a federal policy outcome. However, our demonstration of the threat from bots submitting deepfake text shows that future attacks can be far more sophisticated and much harder to detect.

The laws and politics of public comments

Let’s be clear: The ability to communicate our needs and have them considered is the cornerstone of the democratic model. As enshrined in the Constitution and defended fiercely by civil liberties organizations, each American is guaranteed a role in participating in government through voting, through self-expression and through dissent.

search and replace FCC questions

Image Credits: Zang/Weiss/Sweeney

When it comes to new rules from federal agencies that can have sweeping impacts across America, public comment periods are the legally required method to allow members of the public, advocacy groups and corporations that would be most affected by proposed rules to express their concerns to the agency and require the agency to consider these comments before they decide on the final version of the rule. This requirement for public comments has been in place since the passage of the Administrative Procedure Act of 1946. In 2002, the e-Government Act required the federal government to create an online tool to receive public comments. Over the years, there have been multiple court rulings requiring the federal agency to demonstrate that they actually examined the submitted comments and publish any analysis of relevant materials and justification of decisions made in light of public comments [see Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U. S. 402, 416 (1971); Home Box Office, supra, 567 F.2d at 36 (1977), Thompson v. Clark, 741 F. 2d 401, 408 (CADC 1984)].

In fact, we only had a public comment website from CMS to test for vulnerability to deepfake text submissions in our study, because in June 2019, the U.S. Supreme Court ruled in a 7-1 decision that CMS could not skip the public comment requirements of the Administrative Procedure Act in reviewing proposals from state governments to add work reporting requirements to Medicaid eligibility rules within their state.

The impact of public comments on a federal agency’s final rule can be substantial, political science research shows. For example, in 2018, Harvard University researchers found that banks that commented on Dodd-Frank-related rules by the Federal Reserve obtained $7 billion in excess returns compared to non-participants. When they examined the submitted comments to the “Volcker Rule” and the debit card interchange rule, they found significant influence from submitted comments by different banks during the “sausage-making process” from the initial proposed rule to the final rule.

Beyond corporations commenting directly under their official names, we’ve also seen how an industry group, Broadband for America, submitted millions of fake comments in 2017 in support of the FCC’s rule to end net neutrality, in order to create the false perception of broad political support for the rule among the American public.

Technology solutions to deepfake text on public comments

While our study highlights the threat of deepfake text to disrupt public comment websites, this doesn’t mean we should end this long-standing institution of American democracy. Rather, we need to identify how technology can be used for innovative solutions that accept public comments from real humans while rejecting deepfake text from bots.

There are two stages in the public comment process — (1) comment submission and (2) comment acceptance — where technology can provide potential solutions.

In the first stage, comment submission, technology can be used to prevent bots from submitting deepfake comments in the first place, raising the cost for an attacker, who would need to recruit large numbers of humans instead. One technological solution many are already familiar with is the CAPTCHA box at the bottom of internet forms, which asks us to identify a word — either visually or audibly — before we can click submit. CAPTCHAs add an extra step that makes the submission process considerably more difficult for a bot. While these tools can be improved for accessibility for disabled individuals, they would be a step in the right direction.

However, CAPTCHAs would not stop an attacker willing to pay for low-cost labor abroad to solve CAPTCHA tests and submit deepfake comments. One way around that may be to require strict identification with every submission, but that would remove the possibility of anonymous comments, which are currently accepted by agencies such as CMS and the Food and Drug Administration (FDA). Anonymous comments protect the privacy of individuals who may be significantly affected by a proposed rule on a sensitive topic such as healthcare but do not wish to disclose their identity. Thus, the technological challenge is to build a system that separates the user authentication step from the comment submission step, so that only authenticated individuals can submit a comment, and can do so anonymously.
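One cryptographic building block for this kind of separation is the blind signature: an authenticator signs a token without ever seeing it, and the comment endpoint later verifies the signature without learning who requested it. The sketch below shows the classic RSA blind-signature flow with deliberately tiny toy parameters; a real system would use a vetted cryptographic library and production-size keys, and the primes and function names here are illustrative assumptions only.

```python
from math import gcd
import secrets

# Toy RSA key for illustration only: small demo primes, NOT secure.
P, Q = 1000003, 1000033
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))  # private signing exponent


def blind(message: int) -> tuple[int, int]:
    """User blinds a token before sending it to the authenticator."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if gcd(r, N) == 1:
            break
    return (message * pow(r, E, N)) % N, r


def sign_blinded(blinded: int) -> int:
    """Authenticator signs without seeing the underlying token."""
    return pow(blinded, D, N)


def unblind(blind_sig: int, r: int) -> int:
    """User removes the blinding factor, recovering a valid signature."""
    return (blind_sig * pow(r, -1, N)) % N


def verify(message: int, signature: int) -> bool:
    """Comment endpoint checks the token without learning identity."""
    return pow(signature, E, N) == message % N
```

The authenticated user submits the blinded token during login, unblinds the returned signature, and later attaches the (message, signature) pair to an anonymous comment; the server can confirm the submitter authenticated at some point but cannot link the comment back to the login.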

Finally, in the second stage, comment acceptance, better technology can be used to distinguish deepfake text from human submissions. While our study found that our sample of over 100 surveyed people could not identify the deepfake text examples, more sophisticated spam detection algorithms may succeed in the future. As machine learning methods advance, we may see an arms race between deepfake text generation and deepfake text identification algorithms.
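Such detectors often start from simple statistical signatures: machine-generated text can be more repetitive, with word distributions that differ from human writing. The toy Naive Bayes sketch below illustrates the principle only; the two tiny corpora are invented for this example, and a real detector would train far richer models on large labelled datasets.

```python
import math
from collections import Counter

# Invented miniature corpora for illustration; real training data
# would be large sets of labelled human and machine comments.
HUMAN = [
    "my mother relies on this clinic and the proposed cuts scare me",
    "i have seen firsthand how this rule hurts small farms in our county",
]
MACHINE = [
    "i support this rule because it is a good rule that is good",
    "this rule is a rule that supports the rule and i support it",
]


def train(texts):
    """Count word occurrences across a labelled corpus."""
    counts = Counter()
    for text in texts:
        counts.update(text.split())
    return counts


def log_likelihood(text, counts, vocab_size):
    """Log-probability of the text under a unigram model."""
    total = sum(counts.values())
    score = 0.0
    for word in text.split():
        # Laplace (add-one) smoothing so unseen words don't zero out.
        score += math.log((counts[word] + 1) / (total + vocab_size))
    return score


def classify(text):
    """Label text with whichever corpus makes it more likely."""
    human_c, machine_c = train(HUMAN), train(MACHINE)
    vocab = len(set(human_c) | set(machine_c))
    if log_likelihood(text, human_c, vocab) >= log_likelihood(text, machine_c, vocab):
        return "human"
    return "machine"
```

Even this crude unigram model separates the repetitive "machine" style from the specific, varied "human" style in its own toy data, which is exactly the gap that generation and detection algorithms would race to close from both sides.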

The challenge today

While future technologies may offer more comprehensive solutions, the threat of deepfake text to our American democracy is real and present today. Thus, we recommend that all federal public comment websites adopt state-of-the-art CAPTCHAs as an interim measure of security, a position that is also supported by the 2019 U.S. Senate Subcommittee on Investigations’ Report on Abuses of the Federal Notice-and-Comment Rulemaking Process.

In order to develop more robust technological solutions, we will need a collaborative effort between the government, researchers and innovators in the private sector. That's why we at Harvard University have joined the Public Interest Technology University Network along with 20 other educational institutions, New America, the Ford Foundation and the Hewlett Foundation. Collectively, we are dedicated to inspiring a new generation of civic-minded technologists and policy leaders. Through curriculum, research and experiential learning programs, we hope to build the field of public interest technology and a future where technology is made and regulated with the public in mind from the beginning.

While COVID-19 has disrupted many parts of American society, it hasn’t stopped federal agencies under the Trump administration from continuing to propose new deregulatory rules that can have long-lasting legacies that will be felt long after the current pandemic has ended. For example, on March 18, 2020, the Environmental Protection Agency (EPA) proposed new rules about limiting which research studies can be used to support EPA regulations, which have received over 610,000 comments as of April 6, 2020. On April 2, 2020, the Department of Education proposed new rules for permanently relaxing regulations for online education and distance learning. On February 19, 2020, the FCC re-opened public comments on its net neutrality rules, which in 2017 saw 22 million comments submitted by bots, after a federal court ruled that the FCC ignored how ending net neutrality would affect public safety and cellphone access programs for low-income Americans.

Federal public comment websites offer the only way for the American public and organizations to express their concerns to the federal agency before the final rules are determined. We must adopt better technological defenses to ensure that deepfake text doesn’t further threaten American democracy during a time of crisis.

#ajit-pai, #artificial-intelligence, #column, #coronavirus, #covid-19, #deepfakes, #federal-communications-commission, #harvard-university, #machine-learning, #net-neutrality, #opinion, #policy, #security, #social, #tc

New Google Play policies to cut down on ‘fleeceware,’ deepfakes, and unnecessary location tracking

Google is today announcing a series of policy changes aimed at eliminating untrustworthy apps from its Android app marketplace, the Google Play store. The changes are meant to give users more control over how their data is used, tighten subscription policies, and help prevent deceptive apps and media — including those involving deepfakes — from becoming available on the Google Play Store.

Background Location

The first of these new policies is focused on the location tracking permissions requested by some apps.

Overuse of location tracking is an area Google has struggled to rein in. In Android 10, users gained the ability to restrict apps to accessing location only while the app was in use, similar to what's been available on iOS. With the debut of Android 11, Google gave users even more control: the ability to grant a temporary "one-time" permission to sensitive data, like location.

In February, Google said it would also soon require developers to get user permission before accessing background location data, after noting that many apps were asking for unnecessary user data. The company found that a number of these apps could have provided the same experience to users by accessing location only while the app was in use — there was no advantage to the app running in the background.

Of course, there is an advantage for developers who collect location data: it can be sold to third parties through trackers that supply advertisers with detailed information about an app's users, earning the developer additional income.

The new change to Google Play policies now requires that developers get approval to access background location in their app.

But Google is giving developers time to comply. It says no action will be taken for new apps until August 2020 or on existing apps until November 2020.


A second policy is focused on subscription-based apps. Subscriptions have become a booming business industry-wide. They're often a better way for apps to generate revenue than other monetization methods, like paid downloads, ads, or in-app purchases.

However, many subscription apps are duping users into paying by not making it easy or obvious how to dismiss a subscription offer in order to use the free parts of an app, or not being clear about subscription terms or the length of free trials, among other things.

The new Google Play policy says developers will need to be explicit about their subscription terms, trials and offers, by telling users the following:

  • Whether a subscription is required to use all or parts of the app. (And if not required, allow users to dismiss the offer easily.)
  • The cost of the subscription
  • The frequency of the billing cycle
  • Duration of free trials and offers
  • The pricing of introductory offers
  • What is included with a free trial or introductory offer
  • When a free trial converts to a paid subscription
  • How users can cancel if they do not want to convert to a paid subscription

That means the “fine print” has to be included on the offer’s page, and developers shouldn’t use sneaky tricks like lighter font to hide the important bits, either.

This change aims to address the rampant "fleeceware" problem across the Google Play store. Multiple studies have shown that subscription apps have gotten out of control. In fact, one study from January stated that over 600 million Android users had installed "fleeceware" apps from the Play Store. To be fair, the problem is not limited to Android: the iOS App Store was recently found to have the same issue, with more than 3.5 million users having installed "fleeceware."

Developers have until June 16, 2020 to come into compliance with this policy, Google says.


The final update has to do with the Play Store’s “Deceptive Behavior” policy.

This wasn’t detailed in Google’s official announcements about the new policies, but Google tells us it’s also rolling out updated rules around deceptive content and apps.

Before, Google's policy restricted apps that tried to deceive users — like apps claiming functionally impossible capabilities, apps lying in their listings about their content or features, or apps mimicking the Android OS, among others.

The updated policy is meant to better ensure all apps are clear about their behavior once they're downloaded. In particular, it's meant to prevent manipulated content (aka "deepfakes") from being available on the Play Store.

Google tells us this policy change won’t impact apps that allow users to make deepfakes that are “for fun” — like those that allow users to swap their face onto GIFs, for example. These will fall under an exception to the rule, which allows deepfakes which are “obvious satire or parody.”

However, it will take aim at apps that manipulate and alter media in a way that isn’t conventionally obvious or acceptable.

For example:

  • Apps adding a public figure to a demonstration during a politically sensitive event.
  • Apps using public figures or media from a sensitive event to advertise media altering capability within an app’s store listing.
  • Apps that alter media clips to mimic a news broadcast.

In particular, the policy will focus on apps that promote misleading imagery that could cause harm related to politics, social issues, or sensitive events. Apps must also disclose or watermark altered media if it isn't obvious that the media has been altered.

Similar bans on manipulated media have been enacted across social media platforms, including Facebook, Twitter and WeChat. Apple's App Store Developer Guidelines don't reference "deepfakes" by name, though they ban apps with false or defamatory information, outside of satire and humor.

Google says the apps currently available on Google Play have 30 days to comply with this change.

In Google’s announcement, the company said it understood these were difficult times for people, which is why it’s taken steps to minimize the short-term impact of these changes. In other words, it doesn’t sound like the policy changes will soon result in any mass banning or big Play Store clean-out — rather, they’re meant to set the stage for better policing of the store in the future.


#android, #android-apps, #apps, #deepfakes, #developers, #google, #google-play, #location, #mobile, #play-store, #privacy, #subscriptions