The Federal Trade Commission has announced a settlement with Zoom, after it accused the video calling giant of engaging in “a series of deceptive and unfair practices that undermined the security of its users,” in part by claiming the encryption was stronger than it actually was.
Cast your mind back to earlier this year, at the height of the pandemic lockdown, which forced millions to work from home and rely on Zoom for work meetings and remote learning. At the time, Zoom claimed video calls were protected by “end-to-end” encryption, a way of scrambling calls that makes it near-impossible for anyone — even Zoom — to listen in.
But those claims were false.
“In reality, the FTC alleges, Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised,” said the FTC in a statement Monday. “Zoom’s misleading claims gave users a false sense of security, according to the FTC’s complaint, especially for those who used the company’s platform to discuss sensitive topics such as health and financial information.”
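The distinction the FTC is drawing can be illustrated with a rough sketch: it comes down to who holds the decryption key. This toy Python example uses a trivial XOR operation standing in for a real cipher, and all names are invented for illustration — it is not Zoom’s actual protocol.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- illustration only, not real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Transport-style encryption (what the FTC alleges Zoom actually provided):
# the service generates and retains the meeting key, so it CAN decrypt.
server_held_key = secrets.token_bytes(32)
ciphertext = xor_cipher(b"sensitive meeting audio", server_held_key)
server_can_read = xor_cipher(ciphertext, server_held_key)  # recovers plaintext

# End-to-end encryption (what was promised): key material is generated on
# participants' devices and never shared with the server, which relays
# only ciphertext it has no means to decrypt.
client_only_key = secrets.token_bytes(32)
e2e_ciphertext = xor_cipher(b"sensitive meeting audio", client_only_key)
```

In both cases the traffic looks scrambled in transit; the difference is purely key custody, which is why a provider can truthfully say calls are “encrypted” while still being able to read them.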
Zoom quickly admitted it was wrong, prompting the company to launch a 90-day turnaround effort that included the rollout of end-to-end encryption to its users. That encryption eventually arrived months later, in late October — but not without another backtrack, after Zoom initially said free users would not get end-to-end encryption.
The FTC also alleged in its complaint that Zoom stored some meeting recordings unencrypted on its servers for up to two months, and compromised users’ security by covertly installing a web server on their computers so that they could jump into meetings faster. This, the FTC said, “was unfair and violated the FTC Act.” Zoom pushed out an update that removed the web server, but Apple also intervened to remove the vulnerable component from its customers’ computers.
In its statement, the FTC said Zoom is prohibited from misrepresenting its security and privacy practices going forward, and has agreed to start a vulnerability management program and implement stronger security across its internal network.
Zoom did not immediately respond to a request for comment.
Facebook’s photo and video portability tool has added support for two more third-party services that users can send data to via encrypted transfer — namely, cloud storage providers Dropbox and (EU-based) Koofr.
The tech giant debuted the photo porting tool in December last year, initially offering users in its EU HQ location of Ireland the ability to port their media directly to Google Photos, before going on to open up access in more markets. It completed a global rollout of that first offering in June.
Facebook users in all its markets now have three options to choose from if they want to transfer Facebook photos and videos elsewhere. A company spokesman confirmed support for other (unnamed) services is also in the works, telling us: “There will be more partnership announcements in the coming months.”
The transfer tool is based on code developed via Facebook’s participation in the Data Transfer Project — a collaborative effort started last year, with backing from other tech giants including Apple, Google, Microsoft and Twitter.
To access the tool, Facebook users need to navigate to the ‘Your Facebook Information’ menu and select ‘Transfer a copy of your photos and videos’. Facebook will then prompt you to re-enter your password prior to initiating the transfer. You will then be asked to select a destination service from the three on offer (Google Photos, Dropbox or Koofr) and asked to enter your password for that third party service — kicking off the transfer.
Users will receive a notification on Facebook and via email when the transfer has been completed.
The encrypted transfers work from both the desktop version of Facebook or its mobile app.
Last month, in comments sent to the FTC ahead of a portability hearing scheduled for later this month, the tech giant signalled that it would be expanding the scope of its data portability offerings — including hinting it might offer direct transfers for more types of content in future, such as events or even users’ “most meaningful” posts.
For now, though, Facebook only supports direct, encrypted transfers for photos and videos uploaded to Facebook.
While Google and Dropbox are familiar names, the addition of a smaller, EU-based cloud storage provider in the list of supported services does stand out a bit. On that, Facebook’s spokesperson told us it reached out to discuss adding Koofr to the transfer tool after a staffer came across an article on Mashable discussing it as an EU cloud storage solution.
A bigger question is when — or whether — Facebook will offer direct photo portability to users of its photo sharing service, Instagram. It has not mentioned anything specific on that front when discussing its plans to expand portability.
When we asked Facebook about bringing the photo porting tool to Instagram, a spokesman told us: “Facebook have prioritised portability tools on Facebook at the moment but look forward to exploring expansion to the other apps in the future.”
In a blog post announcing the new destinations for users of the Facebook photo & video porting tool, the tech giant repeats its call for lawmakers to come up with “clearer rules” to govern portability, writing that: “We want to continue to build data portability features people can trust. To do that, the Internet needs clearer rules about what kinds of data should be portable and who is responsible for protecting that data as it moves to different services. Policymakers have a vital role to play in this.”
It also writes that it’s keen for other companies to join the Data Transfer Project — “to expand options for people and push data portability innovation forward”.
In recent years Facebook has been lobbying for what it calls ‘the right regulation’ to wrap around portability — releasing a white paper on the topic last year which plays up what it couches as privacy and security trade-offs in a bid to influence regulatory thinking around requirements on direct data transfers.
Portability is in the frame as a possible tool for helping rebalance markets in favor of new entrants or smaller players as lawmakers dig into concerns around data-fuelled barriers to competition in an era of platform giants.
A lot happened in cybersecurity over the past week.
The University of Utah paid almost half a million dollars to stop hackers from leaking sensitive student data after a ransomware attack. Two major ATM makers patched flaws that could’ve allowed for fraudulent cash withdrawals from vulnerable ATMs. Grant Schneider, the U.S. federal chief information security officer, is leaving his post after more than three decades in government. And, a new peer-to-peer botnet is spreading like wildfire and infecting millions of machines around the world.
In this week’s column, we look at how Uber’s handling of its 2016 data breach put the company’s former chief security officer in hot water with federal prosecutors. And, what is “vishing” and why should companies take note?
THE BIG PICTURE
Uber’s former security chief charged with data breach cover-up
Joe Sullivan, Uber’s former security chief, was indicted this week by federal prosecutors for allegedly trying to cover up a data breach in 2016 that saw 57 million rider and driver records stolen.
Sullivan paid $100,000 in a “bug bounty” payment to the two hackers, who were also charged over the breach, in exchange for their signing a nondisclosure agreement. It wasn’t until a year after the breach that former Uber chief executive Travis Kalanick was forced out and replaced by Dara Khosrowshahi, who fired Sullivan after learning of the cyberattack. Sullivan now serves as Cloudflare’s chief security officer.
The payout itself isn’t the issue, as some had claimed. Prosecutors in San Francisco took issue with how Sullivan allegedly tried to bury the breach, which later resulted in a massive $148 million settlement with the Federal Trade Commission.
Facebook is considering expanding the types of data its users are able to port directly to alternative platforms.
In comments on portability sent to US regulators ahead of an FTC hearing on the topic next month, Facebook says it intends to expand the scope of its data portability offerings “in the coming months”.
It also offers some “possible examples” of how it could build on the photo portability tool it began rolling out last year — suggesting it could in future allow users to transfer media they’ve produced or shared on Facebook to a rival platform or take a copy of their “most meaningful posts” elsewhere.
Allowing Facebook-based events to be shared to third party cloud-based calendar services is another example cited in Facebook’s paper.
It suggests expanding portability in such ways could help content creators build their brands on other platforms or help event organizers by enabling them to track Facebook events using calendar based tools.
However there are no firm commitments from Facebook to any specific portability product launches or expansions of what it offers currently.
“We remain committed to ensuring the current product remains stable and performant for people and we are also exploring how we might extend this tool, mindful of the need to preserve the privacy of our users and the integrity of our services,” Facebook writes of its photo transfer tool.
On whether it will expand support for porting photos to other rival services (i.e. not just Google Photos) Facebook has this non-committal line to offer regulators: “Supporting these additional use cases will mean finding more destinations to which people can transfer their data. In the short term, we’ll pursue these destination partnerships through bilateral agreements informed by user interest and expressions of interest from potential partners.”
Beyond allowing photo porting to Google Photos, Facebook users have long been able to download a copy of some of the information it holds on them.
But the kind of portability regulators are increasingly interested in is about going much further than that — meaning offering mechanisms that enable easy and secure data transfers to other services in a way that could encourage and support fast-moving competition to attention-monopolizing tech giants.
The Federal Trade Commission is due to host a public workshop on September 22, 2020, which it says will “examine the potential benefits and challenges to consumers and competition raised by data portability”.
The regulator notes that the topic has gained interest following the implementation of major privacy laws that include data portability requirements — such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
It asked for comment submissions by August 21, which is what Facebook’s paper is responding to.
In comments to the Reuters news agency, Facebook’s privacy and public policy manager, Bijan Madhani, said the company wants to see “dedicated portability legislation” coming out of any post-workshop recommendations.
Reuters reports that Facebook supports a portability bill that’s doing the rounds in Congress — the Access Act, sponsored by Democratic Senators Richard Blumenthal and Mark Warner and Republican Senator Josh Hawley — which would require large tech platforms to let their users easily move their data to other services.
Madhani dubbed it only a good first step, though, adding that the company will continue to engage with lawmakers on shaping its contents.
“Although some laws already guarantee the right to portability, our experience suggests that companies and people would benefit from additional guidance about what it means to put those rules into practice,” Facebook also writes in its comments to the FTC.
Ahead of dipping its toe into portability via the photo transfer tool, Facebook released a white paper on portability last year, seeking to shape the debate and influence regulatory thinking around any tighter or more narrowly defined portability requirements.
Facebook founder Mark Zuckerberg has pushed the European Commission to narrow the types of data that should fall under portability rules. In a public discussion with commissioner Thierry Breton in May, he raised the example of the Cambridge Analytica Facebook data misuse scandal, claiming the episode illustrated the risks of too much platform “openness” — and arguing that there are “direct trade-offs about openness and privacy”.
Zuckerberg went on to press for regulation that helps industry “balance these two important values around openness and privacy”. So it’s clear the company is hoping to shape the conversation about what portability should mean in practice.
Or, to put it another way, Facebook wants to be able to define which data can flow to rivals and which can’t.
“Our position is that portability obligations should not mandate the inclusion of observed and inferred data types,” Facebook writes in further comments to the FTC — lobbying to put broad limits on how much insight rivals would be able to gain into Facebook users who wish to take their data elsewhere.
Both its white paper and comments to the FTC plough this preferred furrow of making portability into a ‘hard problem’ for regulators, by digging up downsides and fleshing out conundrums — such as how to tackle social graph data.
On portability requests that wrap up data on what Facebook refers to as “non-requesting users”, its comments to the FTC work to sow doubt about the use of consent mechanisms to allow people to grant each other permission to have their data exported from a particular service — with the company questioning whether services “could offer meaningful choice and control to non-requesting users”.
“Would requiring consent inappropriately restrict portability? If not, how could consent be obtained? Should, for example, non-requesting users have the ability to choose whether their data is exported each time one of their friends wants to share it with an app? Could an approach offering this level of granularity or frequency of notice lead to notice fatigue?” Facebook writes, skipping lightly over the irony given the levels of fatigue its own apps’ default notifications can generate for users.
Facebook also appears to be advocating for an independent body or regulator to focus on policy questions and liability issues tied to portability, writing in a blog post announcing its FTC submission: “In our comments, we encourage the FTC to examine portability in practice. We also ask it to recommend dedicated federal portability legislation and provide advice to industry on the policy and regulatory tensions we highlight, so that companies implementing data portability have the clear rules and certainty necessary to build privacy-protective products that enhance people’s choice and control online.”
In its FTC submission the company goes on to suggest that “an independent mechanism or body” could “collaboratively set privacy and security standards to ensure data portability partnerships or participation in a portability ecosystem that are transparent and consistent with the broader goals of data portability”.
Facebook then further floats the idea of an accreditation model under which recipients of user data “could demonstrate, through certification to an independent body, that they meet the data protection and processing standards found in a particular regulation, such as the [EU’s] GDPR or associated code of conduct”.
“Accredited entities could then be identified with a seal and would be eligible to receive data from transferring service providers. The independent body (potentially in consultation with relevant regulators) could work to assess compliance of certifying entities, revoking accreditation where appropriate,” it further suggests.
However its paper also notes the risk that requiring accreditation might present a barrier to entry for the small businesses and startups that might otherwise be best positioned to benefit from portability.
Until late last year, social video app TikTok was using an extra layer of encryption to conceal a tactic for tracking Android users via the MAC address of their device, a tactic that skirted Google’s policies and offered users no way to opt out, The Wall Street Journal reports. Users were also not informed of this form of tracking, per its report.
Its analysis found that this concealed tracking ended in November as US scrutiny of the company dialled up, after at least 15 months during which TikTok had been gathering the fixed identifier without users’ knowledge.
A MAC address is a unique and fixed identifier assigned to an Internet connected device — which means it can be repurposed for tracking the individual user for profiling and ad targeting purposes, including by being able to re-link a user who has cleared their advertising ID back to the same device and therefore to all the prior profiling they wanted to jettison.
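To see why a fixed identifier defeats an ad-ID reset, consider this toy Python sketch of a profile store keyed by MAC address. The field names and tracking logic are our own invention for illustration, not a description of TikTok’s actual system.

```python
# Hypothetical profile store keyed by the device's fixed MAC address.
profiles: dict = {}

def track(mac: str, ad_id: str, event: str) -> dict:
    # Because the MAC address never changes, it re-links the device to
    # its old profile even after the user resets their (supposedly
    # resettable) advertising ID.
    profile = profiles.setdefault(mac, {"ad_ids": [], "events": []})
    if ad_id not in profile["ad_ids"]:
        profile["ad_ids"].append(ad_id)
    profile["events"].append(event)
    return profile

track("02:00:5e:10:00:01", "ad-id-old", "viewed_video_A")
# The user resets their advertising ID hoping to shed the old profile...
profile = track("02:00:5e:10:00:01", "ad-id-new", "viewed_video_B")
# ...but the fixed MAC key means the prior history comes straight back.
```

This is exactly the re-linking the paragraph above describes: the “cleared” advertising ID is simply appended to the same device profile.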
TikTok appears to have exploited a known bug on Android to gather users’ MAC addresses which Google has still failed to plug, per the WSJ.
A spokeswoman for TikTok did not deny the substance of its report, nor engage with specific questions we sent — including regarding the purpose of this opt-out-less tracking. Instead she sent the below statement, attributed to a spokesperson, in which the company reiterates what has become a go-to claim that it has never given US user data to the Chinese government:
Under the leadership of our Chief Information Security Officer (CISO) Roland Cloutier, who has decades of experience in law enforcement and the financial services industry, we are committed to protecting the privacy and safety of the TikTok community. We constantly update our app to keep up with evolving security challenges, and the current version of TikTok does not collect MAC addresses. We have never given any US user data to the Chinese government nor would we do so if asked.
“We always encourage our users to download the most current version of TikTok,” the statement added.
With all eyes on TikTok, as the latest target of the Trump administration’s war on Chinese tech firms, scrutiny of the social video app’s handling of user data has inevitably dialled up.
And while no popular social app platform has its hands clean when it comes to user tracking and profiling for ad targeting, TikTok being owned by China’s ByteDance means its flavor of surveillance capitalism has earned it unwelcome attention from the US president — who has threatened to ban the app unless it sells its US business to a US company within a matter of weeks.
Trump’s fixation on China tech, generally, is centered on the claim that the tech firms pose threats to national security in the West via access to Western networks and/or user data.
The US government is able to point to China’s Internet security law, which requires firms to provide the Chinese Communist Party with access to user data — hence TikTok’s emphatic denial of handing any over. But the existence of the law makes such denials difficult to stick.
TikTok’s problems with user data don’t stop there, either. Yesterday it emerged that France’s data protection watchdog has been investigating TikTok since May, following a user complaint.
The CNIL’s concerns about how the app handled a user request to delete a video have since broadened to encompass issues related to how transparently it communicates with users, as well as to transfers of user data outside the EU — which, in recent weeks, have become even more legally complex in the region.
Compliance with EU rules on data access rights for users and the processing of minors’ information are other areas of stated concern for the regulator.
Under EU law any fixed identifier (e.g. a MAC address) is treated as personal data — meaning it falls under the bloc’s GDPR data protection framework, which places strict conditions on how such data can be processed, including requiring companies to have a legal basis to collect it in the first place.
If TikTok was concealing its tracking of MAC addresses from users it’s difficult to imagine what legal basis it could claim — consent would certainly not be possible. The penalties for violating GDPR can be substantial (France’s CNIL slapped Google with a $57M fine last year under the same framework, for example).
The WSJ’s report notes that the FTC has said MAC addresses are considered personally identifiable information under the Children’s Online Privacy Protection Act — implying the app could also face a regulatory probe on that front, to add to its pile of US problems.
Presented with the WSJ’s findings, Senator Josh Hawley (R., Mo.) told the newspaper that Google should remove TikTok’s app from its store. “If Google is telling users they won’t be tracked without their consent and knowingly allows apps like TikTok to break its rules by collecting persistent identifiers, potentially in violation of our children’s privacy laws, they’ve got some explaining to do,” he said.
We’ve reached out to Google for comment.
Most gamers may not view Apple as a games company to the same degree that they see Sony with PlayStation or Microsoft with Xbox, but the iPhone-maker continues to steer the industry with decisions made in the Apple App Store.
The company made the news a couple of times late this week for App Store decisions: once for denying a gaming app, and once for approving one.
The denial was Microsoft’s xCloud gaming app, something the Xbox folks weren’t too psyched about. Microsoft xCloud is one of the Xbox’s most substantial software platform plays in quite some time, allowing gamers to live-stream titles from the cloud and play console-quality games across a number of devices. It’s a huge effort that’s been in preview for a bit, but is likely going to officially launch next month. The app had been in a Testflight preview for iOS, but as Microsoft looked to push it to primetime, Apple said not so fast.
The app that was approved was the Facebook Gaming app which Facebook has been trying to shove through the App Store for months to no avail. It was at last approved Friday after the company stripped one of its two central features, a library of playable mobile games. In a curt statement to The New York Times, Facebook COO Sheryl Sandberg said, “Unfortunately, we had to remove gameplay functionality entirely in order to get Apple’s approval on the stand-alone Facebook Gaming app.”
Microsoft’s Xbox team also took the unusually aggressive step of calling out Apple in a statement that reads, in-part, “Apple stands alone as the only general purpose platform to deny consumers from cloud gaming and game subscription services like Xbox Game Pass. And it consistently treats gaming apps differently, applying more lenient rules to non-gaming apps even when they include interactive content.”
Microsoft is still a $1.61 trillion company so don’t think I’m busting out the violin for them, but iOS is the world’s largest gaming platform, something CEO Tim Cook proudly proclaimed when the company launched its own game subscription platform, Apple Arcade, last year. Apple likes to play at its own pace, and all of these game-streaming platforms popping up at the same time seem poised to overwhelm them.
There are a few things about cloud gaming apps that seem at odds with some of the App Store’s rules, yet these rules are, of course, just guidelines written by Apple. For Apple’s part, the company basically said (full statement later) that the App Store has curators for a reason, and that approving apps like these would mean it can’t individually review the apps, which compromises the App Store experience.
To say that’s “the reason” seems disingenuous because the company has long approved platforms to operate on the App Store without stamping approval on the individual pieces of content that can be accessed. With “Games” representing the App Store’s most popular category, Apple likely cares much more about keeping their own money straight.
Analysis from CNBC pinned Apple’s 2019 App Store total revenue at $50 billion.
When cloud gaming platforms like xCloud scale with zero iOS support, millions of Apple customers, myself included, are going to be pissed that their iPhone can’t do something that their friend’s phone can. Playing console-class titles on the iPhone would be a substantial feature upgrade for consumers. There are about 90 million Xbox Live users out there, a substantial number of whom are iPhone owners, I would imagine. The games industry is steadily rallying around game subscription networks and cloud gaming as a way to encourage consumers to sample more titles and discover more indie hits.
I’ve seen enough of these sagas to realize that sometimes parties will kick off these fights purely as a tactic to get their way in negotiations and avoid workarounds, but it’s a tactic that really only works when consumers have a reason to care. Most of the bigger App Store developer spats have played in the background and come to light later, but at this point the Xbox team undoubtedly sees that Apple isn’t positioned all that well to wage an App Store war in the midst of increased antitrust attention over a cause that seems wholly focused on maintaining their edge in monetizing the games consumers play on Apple screens.
CEO Tim Cook spent an awful lot of time in his Congressional Zoom room answering questions about perceived anticompetitiveness on the company’s application storefront.
The big point of tension I could see happening behind closed doors is that plenty of these titles offer in-game transactions, and just because that in-app purchase framework is being live-streamed from a cloud computer doesn’t mean a user isn’t still experiencing that content on an Apple device. I’m not sure whether this is actually the point of contention, but it seems like it would be a major threat to Apple’s ecosystem-wide in-app purchase rake.
The App Store does not currently support cloud gaming on Nvidia’s GeForce Now platform or Google’s Stadia, both of which are also available on Android phones. Both of these platforms are more limited in scope than Microsoft’s offering, which is expected to launch with wider support and pick up wider adoption.
While I can understand Apple’s desire not to ship gaming titles that might not function properly on an iPhone because of system constraints, that argument doesn’t apply so well to cloud gaming, where apps relay button presses to the cloud and the cloud sends back the next engine-rendered frames of the game. Apple is being forced to get pretty particular about which types of apps fall under the “reader” designation. The inherent interactivity of a cloud gaming platform seems to be the differentiation Apple is pushing here — as well as interfaces that let gamers launch titles directly, in a way that’s far more specialized than some generic remote desktop app.
All of these platforms arrive after the company already launched Apple Arcade, a non-cloud gaming product made in the image of what Apple would like to think are the values it fosters in the gaming world: family friendly indie titles with no intrusive ads, no bothersome micro-transactions and Apple’s watchful review.
Apple’s driver’s seat position in the gaming world has been far from a wholly positive influence for the industry. Apple has acted as a gatekeeper, but the fact is plenty of the “innovations” pushed through as a result of App Store policies have been great for Apple but questionable for the development of a gamer-friendly games industry.
Apple facilitated the advent of free-to-play games by pushing in-app purchases which have been abused recklessly over the years as studios have been irresistibly pushed to structure their titles around principles of addiction. Mobile gaming has been one of the more insane areas of Wild West startup growth over the past decade and Apple’s mechanics for fueling quick transactions inside these titles has moved fast and broken things.
Take a look at the 200 top grossing games in the App Store (data via Sensor Tower) and you’ll see that 199 of them rely solely on in-app micro-transactions to reach that status — Microsoft’s Minecraft, ranked 50th, costs $6.99 to download, though it also offers in-app purchases.
In 2013, the company settled a class-action lawsuit that kicked off after parents sued Apple for making it too easy for kids to make in-app purchases. In 2014, Apple settled a case with the FTC over the same mechanism for $32 million. This year, a lawsuit filed against Apple questioned the legality of “loot box” in-app purchases, which give gamers randomized digital rewards.
“Through the games it sells and offers for free to consumers through its AppStore, Apple engages in predatory practices enticing consumers, including children to engage in gambling and similar addictive conduct in violation of this and other laws designed to protect consumers and to prohibit such practices,” read that most recent lawsuit filing.
This is, of course, not how Apple sees its role in the gaming industry. In a statement to Business Insider responding to the company’s denial of Microsoft’s xCloud, Apple laid out its messaging.
The App Store was created to be a safe and trusted place for customers to discover and download apps, and a great business opportunity for all developers. Before they go on our store, all apps are reviewed against the same set of guidelines that are intended to protect customers and provide a fair and level playing field to developers.
Our customers enjoy great apps and games from millions of developers, and gaming services can absolutely launch on the App Store as long as they follow the same set of guidelines applicable to all developers, including submitting games individually for review, and appearing in charts and search. In addition to the App Store, developers can choose to reach all iPhone and iPad users over the web through Safari and other browsers on the App Store.
The impact has — quite obviously — not been uniformly negative, but Apple has played fast and loose with industry changes when they benefit the mothership. I won’t pretend plenty of Sony’s and Microsoft’s actions over the years haven’t offered similar affronts to gamers, but Apple, which operates the world’s largest gaming platform, exercises its industry-wide sway too often, and gamers should be cautious about trusting the App Store owner to make decisions that have their best interests at heart.
Twitter is facing a Federal Trade Commission probe and believes it will likely owe a fine of up to $250 million after being caught using phone numbers intended for two-factor authentication for advertising purposes.
The company received a draft complaint from the FTC on July 28, it disclosed in its regular quarterly filing with the Securities and Exchange Commission. The complaint alleges that Twitter is in violation of its 2011 settlement with the FTC over the company’s “failure to safeguard personal information.”
That agreement included a provision banning Twitter from “misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information, including the measures it takes to prevent unauthorized access to nonpublic information and honor the privacy choices made by consumers.” In October 2019, however, Twitter admitted that phone numbers and email addresses users provided it with for the purpose of securing their accounts were also used “inadvertently” for advertising purposes between 2013 and 2019.
Twitter has disclosed it’s facing a potential fine of more than a hundred million dollars as a result of a probe by the Federal Trade Commission (FTC) which believes the company violated a 2011 consent order by using data provided by users for a security purpose to target them with ads.
In an SEC filing, reported on earlier by the New York Times, Twitter revealed it received the draft complaint from the FTC late last month. The activity the regulator is complaining about is alleged to have taken place between 2013 and 2019.
Last October the social media firm publicly disclosed that phone numbers and email addresses users had provided to set up two-factor authentication, to bolster the security of their accounts, had also been used to serve them targeted ads — blaming the snafu on its tailored audiences program, which allows companies to target ads against their own marketing lists.
Twitter found that when advertisers uploaded their own marketing lists (of emails and/or phone numbers), it had matched those lists against data users had submitted purely to set up two-factor authentication on their Twitter accounts.
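The failure described above is essentially a missing purpose check at match time. The sketch below is purely illustrative — it is not Twitter’s actual system, and all the names in it are hypothetical — but it shows the idea: contact records tagged with the purpose they were collected for, and an audience matcher that refuses to use security-only records for ads.

```python
# Illustrative sketch only -- not Twitter's actual system. Each contact
# record carries the purposes the user supplied it for; the matcher only
# uses records whose purposes include advertising.
from dataclasses import dataclass


@dataclass
class ContactRecord:
    user_id: str
    value: str            # email address or phone number
    purposes: frozenset   # e.g. {"2fa"} or {"ads", "2fa"}


def match_audience(marketing_list, records, allowed_purpose="ads"):
    """Match an advertiser's uploaded list against user contact records,
    keeping only records the user supplied for the allowed purpose."""
    uploaded = {v.strip().lower() for v in marketing_list}
    return sorted(
        r.user_id
        for r in records
        if r.value.lower() in uploaded and allowed_purpose in r.purposes
    )


records = [
    ContactRecord("u1", "alice@example.com", frozenset({"2fa"})),        # security only
    ContactRecord("u2", "bob@example.com", frozenset({"ads", "2fa"})),   # opted in to ads
]

# The 2FA-only record must not land in an ad audience.
print(match_audience(["alice@example.com", "bob@example.com"], records))  # → ['u2']
```

Dropping the purpose check (matching on the contact value alone) reproduces the kind of behavior the FTC complaint describes.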
“The allegations relate to the Company’s use of phone number and/or email address data provided for safety and security purposes for targeted advertising during periods between 2013 and 2019,” Twitter writes in the SEC filing. “The Company estimates that the range of probable loss in this matter is $150.0 million to $250.0 million and has recorded an accrual of $150.0 million.”
“The matter remains unresolved, and there can be no assurance as to the timing or the terms of any final outcome,” it adds.
We’ve reached out to Twitter with questions. Update: A company spokeswoman said it had nothing to add outside this statement:
Following the announcement of our Q2 financial results, we received a draft complaint from the FTC alleging violations of our 2011 consent order. Following standard accounting rules we included an estimated range for settlement in our 10Q filed on August 3.
The company has had a torrid few weeks on the security front, suffering a major security incident last month after hackers gained access to its internal account management tools, enabling them to access accounts of scores of verified Twitter users, including Bill Gates, Elon Musk and Joe Biden, and use them to send cryptocurrency scam tweets. Police have since charged three people with the hack, including a 17-year-old Florida teen.
In June Twitter also disclosed a security lapse that may have exposed some business customers’ information. It was also forced to report another crop of security incidents last year — including one in which a researcher identified a bug that allowed him to discover phone numbers associated with millions of Twitter accounts.
Twitter also admitted it gave account location data to one of its partners, even if the user had opted-out of having their data shared; and inadvertently gave its ad partners more data than it should have.
Additionally, the company is now at the front of a long queue of tech giants pending enforcement in Europe, related to major GDPR complaints — where regional fines for data violations can scale to 4% of a company’s global annual turnover. Twitter’s lead data protection regulator, Ireland’s DPC, submitted a draft decision related to a probe of one of its security breaches to the bloc’s other data agencies in May — with a final decision slated as likely this summer.
The decision relates to an investigation the regulator instigated following yet another major security fail by Twitter in 2018 — when it revealed a bug had resulted in some passwords being stored in plain text.
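Storing passwords in plain text, as in the 2018 incident, means anyone with access to the database (or a copy of it) can read every credential directly. The standard practice is to store only a salted, slow hash. A minimal sketch using Python’s standard library follows; real deployments typically reach for dedicated libraries such as bcrypt or argon2, and the iteration count here is only an example.

```python
# Minimal sketch of salted password hashing with the stdlib's PBKDF2.
# Only the salt and digest are stored -- never the plain-text password.
import hashlib
import hmac
import os


def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))                   # → False
```

Because the hash is one-way and salted per user, a leaked database does not directly reveal passwords, and identical passwords produce different stored digests.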
As we reported at the time, it’s pretty unusual for a company of such size to make such a basic security mistake. But Twitter has a very long history of failing to protect users’ data — with hacking incidents dating all the way back to 2009 leading to the 2011 FTC consent order.
Under the terms of that settlement Twitter was barred for 20 years from misleading consumers about the safety of their data in order to resolve FTC charges that it had “deceived consumers and put their privacy at risk by failing to safeguard their personal information”.
It also agreed to establish and maintain “a comprehensive information security program”, with independent auditor assessments taking place every other year for 10 years.
Given the terms of that order, a fine does indeed look inevitable. However, the wider failing here is that of US regulators — which, for over a decade, have failed to grapple with the exploitative, surveillance-based business models that have led to breaches and security lapses at a number of data-mining adtech giants, not just Twitter.
The social media company said the agency was examining whether it had misused people’s personal information to serve ads.
Members of Congress will be able to grill tech C.E.O.s at a hearing. Let’s hope they don’t waste the opportunity.
My college economics professor, Dr. Charles Britton, often said, “There’s no such thing as a free lunch.” The common principle, known as TINSTAAFL, implies that even if something appears to be free, there is always a cost to someone, even if it is not the individual receiving the benefit.
For decades, the ad-supported ecosystem enjoyed much more than a proverbial free lunch. Brands, technology providers, publishers and platforms successfully transformed data provided by individuals into massive revenue gains, creating some of the world’s most profitable corporations. So if TINSTAAFL is correct, what is the true cost of monetizing this data? Consumer trust, as it turns out.
Studies overwhelmingly demonstrate that the majority of people believe data collection and data use lack the necessary transparency and control. After a few highly publicized data breaches brought a spotlight on the lack of appropriate governance and regulation, people began to voice concerns that companies had operated with too little oversight for far too long, and unfairly benefited from the data individuals provided.
With increased attention, momentum and legislative activity in multiple individual states, we have never been in a better position to pass a federal data privacy law that can rebalance the system and set standards that rebuild trust with the people providing the data.
Over the last two decades, we’ve seen that individuals benefit from regulated use of data. The competitiveness of the banking markets is partly a result of laws around the collection and use of data for credit decisions. In exchange for data collection and use, individuals now have the ability to go online and get a home loan or buy a car with instant credit. A federal law would strengthen the value exchange and provide rules for companies around the collection and utilization of data, as well as establish consistency and uniformity, which can create a truly national market.
In order to close the gap and pass a law that properly balances the interests of people, society and commerce, the business sector must first unify on the need and the current political reality. Most already agree that a federal law should be preemptive of state laws, and many voices with legitimate differences of opinion have come a long way toward a consensus. Further unification on the following three assertions could help achieve bipartisan support:
A federal law must recognize that one size does not fit all. While some common sense privacy accountability requirements should be universal, a blanket approach for accountability practices is unrealistic. Larger enterprises with significant amounts of data on hand should have stricter requirements than other entities and be required to appoint a Data Ethics Officer and document privacy compliance processes and privacy reviews.
They should be required to regularly perform internal and external audits of data collection and use. These audits should be officer-certified and filed with a regulator. While larger companies are equipped to absorb this burden, imposing the same standards on smaller businesses would force them to forgo using the data they need to innovate and thrive. Instead, accountability requirements should be “right-sized,” based on the amount and type of data collected and its intended use.
A federal law must properly empower the designated regulatory authority. The stated mission of the Federal Trade Commission is to protect American consumers. As the government agency of record for data privacy regulation and enforcement, the FTC has already imposed billions of dollars in penalties for privacy violations. However, in a modern world where every company collects and uses data, the FTC cannot credibly monitor or enforce federal regulation without substantially increasing funding and staffing.
With increased authority, and equipped with skilled teams to diligently monitor the companies holding the most consumer data, the FTC — with state attorneys general designated as backups — can hold them accountable by imposing meaningful remedial actions and fines.
A federal law must acknowledge that a properly crafted private right of action is appropriate and necessary. The earlier points build an effective foundation for the protection of people’s privacy rights, but there will still be situations where a person should have access to the judicial system to seek redress. Certainly, if a business does not honor the data rights of an individual as defined by federal law, people should have the right to bring an action for equitable relief. If a person has suffered actual physical or significant economic harm directly caused by violation of a federal data privacy law, they should be able to bring suit if, after giving notice, the FTC declines to pursue.
Too many leaders have been unwilling to venture toward possible common ground, but public opinion dictates that more must be done; if Congress does not act, states, counties, parishes and cities will inevitably continue to. It is just as certain that those data privacy laws will be inconsistent, creating a patchwork of rules based on geography and leading to unnecessary friction and complexity. Consider how much time is spent sorting through the 50 discrete data breach laws that exist today, an expense that could easily be mitigated with a single national standard.
It is clear that responsible availability of data is critical to fostering innovation. American technology has led the world into this new data-driven era, and it’s time for our laws to catch up.
To drive economic growth and benefit all Americans, we need to properly balance the interests of people, society at-large and business, and pass a data law that levels the playing field and allows American enterprise to continue thinking with data. It should ensure that transparency and accountability are fostered and enforced and help rebuild trust in the system.
Coming together to support the passage of a comprehensive and preemptive federal data privacy law is increasingly important. If not, we are conceding that we’re okay with Americans remaining distrustful of the industry, and that the rest of the world should set the standards for us.
Scammers are out to get personal information that could lead to identity theft.
Attorney General William Barr’s attention to the Justice Department investigation shows the high stakes for the agency and for him.
Inquiries in California and Washington are a sign that the scrutiny of the tech giant continues to intensify.
A complaint to the Federal Trade Commission by a parent and two advocacy groups says the company “attracts, encourages and facilitates mass shooters.”
Twenty consumer groups said the video app had failed to make some changes it agreed to carry out last year to settle federal charges.
As the U.S. government’s small-business rescue fund reopens today, outrage remains about bigger companies that tapped the first round of loans.
In warning letters sent on Friday, the agency cracked down for the first time on claims about earnings opportunities amid the pandemic.
By telephone, phishing emails, text messages or social media promotions, unscrupulous actors are using their warped creativity to separate people from their cash.
A padlock—whether it uses a combination, a key, or “smart” tech—has exactly one job: to keep your stuff safe so other people can’t get it. Tapplock, Inc., based in Canada, produces such a product. The company’s locks unlock with a fingerprint or an app connected by Bluetooth to your phone. Unfortunately, the Federal Trade Commission said, the locks are full of both digital and physical vulnerabilities that leave users’ stuff, and data, at risk.
“We allege that Tapplock promised that its Internet-connected locks were secure, but in fact the company failed to even test if that claim was true,” Andrew Smith, director of the FTC’s Bureau of Consumer Protection, said in a written statement. “Tech companies should remember the basics—when you promise security, you need to deliver security.”
Back in 2018, cigarette maker Altria—formerly known as Philip Morris—apparently saw the writing on the wall for the tobacco industry’s future. In December of that year, the company dropped a cool $12.8 billion to gain a 35 percent minority stake in e-cigarette firm Juul. The Juul deal seemed like a particularly clever way to gain a massive toehold in the vaping market as traditional tobacco cigarette use waned—too clever, it seems, as now the Federal Trade Commission is suing to unwind the deal.
The transaction “eliminated competition in violation of federal antitrust laws,” the FTC said yesterday, announcing the unanimous vote to move forward with the suit.
At the time of the acquisition, Juul was the leading US e-cigarette brand, the FTC alleges, but Altria’s own MarkTen product was already the second most popular brand by market share. Instead of continuing to compete, however, Altria arranged to reap the benefits of its competitor without outright acquiring it.
YouTube has been criticized for continuing to host coronavirus disinformation on its video sharing platform during a global health emergency.
Two US advocacy groups which campaign for online safety undertook an 18-day investigation of the video sharing platform in March — finding what they say were “dozens” of examples of dubious videos, including videos touting bogus vaccines the sellers claimed would protect buyers from COVID-19.
They also found videos advertising medical masks of unknown quality for sale.
There have been concerns about shortages of masks for front-line medical staff, as well as the risk of online scammers hawking low-grade kit that does not offer the claimed protection against the virus.
Google said last month that it would temporarily take down ads for masks from its ad network but sellers looking to exploit the coronavirus crisis appear to be circumventing the ban by using YouTube’s video sharing platform as an alternative digital shop window to lure buyers.
Researchers working for the Digital Citizens Alliance (DCA) and the Coalition for a Safer Web (CSW) initiated conversations with sellers they found touting dodgy coronavirus wares on YouTube — and were offered useless ‘vaccines’ for purchase and hundreds of masks of unknown quality.
“There was ample reason to believe the offers for masks were dubious as well [as the vaccines], as highlighted by interactions with representatives from some of the sellers,” they said.
Their report includes screengrabs of some of the interactions with the sellers. In one a seller tells the researchers they don’t accept credit cards — but they do accept CashApp, PayPal, Google or Amazon gift cards or Bitcoin.
The same seller offered the researchers vaccines priced at $135 each, and suggested they purchase MMR/Varicella when asked which one is “the best”. Such a vaccine, even if it functioned for MMR/Varicella, would obviously offer no protection against COVID-19.
Another seller was found to be hawking “COVID-19 drugs” using a YouTube account name “Real ID Card Fake Passport Producer”.
“How does a guy calling himself ‘Real ID Card Fake Passport Producer’ even get a page on YouTube?” said Eric Feinberg, lead researcher for CSW, in a statement accompanying the report. “It’s all too easy to get ahold of these guys. We called some of them. Once you contact them, they are relentless. They’ll call you back at all hours and hound you until you buy something. They’ll call you in the middle of the night. They are predators looking to capitalize on our fear.”
A spokesman for the DCA told us the researchers compiled the report based on content from around 60 videos they identified hawking coronavirus-related ‘cures’ or kit between March 6-24.
“There are too many to count. Everyday, I find more,” added Feinberg.
The groups are also critical of how YouTube’s platform risks lending credibility to coronavirus disinformation because the platform now displays official CDC-branded banners under any COVID-19 related material — including the dubious videos their report highlights.
“YouTube also mixes trusted resources with sites that shouldn’t be trusted and that could confuse consumers — especially when they are scared and desperate,” said DCA executive director, Tom Galvin, in a statement. “It’s hard enough to tell who’s legitimate and who’s not on YouTube.”
The DCA and CSW have written letters to the US Department of Justice and the Federal Trade Commission laying out their findings and calling for “swift action” to hold bad actors accountable.
“YouTube, and its parent company Google, are shirking their formal policy that prohibits content that capitalizes off sensitive events,” they write in a letter to attorney general Barr.
“Digital Citizens is sharing this information in the hopes your Justice Department will act swiftly to hold bad actors, who take advantage of the coronavirus, accountable. In this crisis, strong action will deter others from engaging in criminal or illicit acts that harm consumers or add to confusion and anxiety,” they add.
Responding to the groups’ findings a YouTube spokesperson said some of the videos the researchers had identified had not received many views.
After we contacted the company about the content YouTube told us it had removed three channels identified by the researchers in the report for violating its Community Guidelines.
In a statement YouTube added:
Our thoughts are with everyone affected by the coronavirus around the world. We’re committed to providing helpful information at this critical time, including raising authoritative content, reducing the spread of harmful misinformation and showing information panels, using WHO / CDC data, to help combat misinformation. To date, there have been over 5B impressions on our information panels for coronavirus related videos and searches. We also have clear policies against COVID-19 misinformation and we quickly remove videos violating these policies when flagged to us.
The DCA and CSW also recently undertook a similar review of Facebook’s platform — finding sellers touting masks for sale despite the tech giant’s claimed ban on such content. “Facebook promised CNN when they did a story on our report about them that the masks would be gone a week ago, but the researchers from CSW are still finding the masks now,” their spokesman told us.
Earlier this week the Tech Transparency Project also reported still being able to find masks for sale on Facebook’s platform. It found examples of masks showing up in Google’s targeted ads too.