A Senate proposal for a new US agency to protect Americans’ data is back

Democratic Senator Kirsten Gillibrand has revived a bill that would establish a new U.S. federal agency to shield Americans from the invasive practices of tech companies operating in their own backyard.

Last year, Gillibrand (D-NY) introduced the Data Protection Act, a legislative proposal that would create an independent agency designed to address modern concerns around privacy and tech that existing government regulators have proven ill-equipped to handle.

“The U.S. needs a new approach to privacy and data protection and it’s Congress’ duty to step forward and seek answers that will give Americans meaningful protection from private companies that value profits over people,” Sen. Gillibrand said.

The revamped bill, which retains its core promise of a new “Data Protection Agency,” is co-sponsored by Ohio Democrat Sherrod Brown and returns to the new Democratic Senate with a few modifications.

In the spirit of all of the tech antitrust regulation chatter going on right now, the 2021 version of the bill would also empower the Data Protection Agency to review any major tech merger involving a data aggregator or other deals that would see the user data of 50,000 people change hands.

Other additions to the bill would establish an office of civil rights to “advance data justice” and allow the agency to evaluate and penalize high-risk data practices, like the use of algorithms, biometric data and harvesting data from children and other vulnerable groups.

Gillibrand calls the notion of updating regulation to address modern tech concerns “critical” — and she’s not alone. Democrats and Republicans seldom find common ground in 2021, but a raft of new bipartisan antitrust bills shows that Congress has at last grasped how important it is to rein in tech’s most powerful companies before the opportunity to do so slips away altogether.

The Data Protection Act lacks the bipartisan sponsorship enjoyed by the set of new House tech bills, but with interest in taking on big tech at an all-time high, it could attract more support. Of all of the bills targeting the tech industry in the works right now, this one isn’t likely to go anywhere without more bipartisan interest, but that doesn’t mean its ideas aren’t worth considering.

Like some other proposals wending their way through Congress, this bill recognizes that the FTC has failed to meaningfully punish big tech companies for their bad behavior. In Gillibrand’s vision, the Data Protection Agency could rise to modern regulatory challenges where the FTC has failed. In other proposals, the FTC would be bolstered with new enforcement powers or infused with cash that could help the agency’s bite match its bark.

It’s possible that modernizing the tools that federal agencies have at hand won’t be sufficient. Cutting back more than a decade of overgrowth from tech’s data giants won’t be easy, particularly because the stockpile of Americans’ data that made those companies so wealthy is already out in the wild.

A new agency dedicated to wresting control of that data from powerful tech companies could close the gap between Europe’s robust data protections and the absence of federal regulation in the U.S. Until something does, Silicon Valley’s data hoarders will eagerly fill the power vacuum themselves.

#congress, #data-security, #europe, #federal-trade-commission, #policy, #regulation, #senate, #tc, #terms-of-service, #the-battle-over-big-tech, #united-states


Macron says G7 countries should work together to tackle toxic online content

In a press conference at the Élysée Palace, French President Emmanuel Macron reiterated his focus on online regulation, and more particularly toxic content. He called for more international cooperation as the Group of Seven (G7) summit is taking place later this week in the U.K.

“The third big topic that could benefit from efficient multilateralism and that we’re going to bring up during this G7 summit is online regulation,” Macron said. “This topic, and I’m sure we’ll talk about it again, is essential for our democracies.”

Macron also used that opportunity to sum up France’s efforts on this front. “During the summer of 2017, we launched an initiative to tackle online terrorist content with then Prime Minister Theresa May. At first, and as crazy as it sounds today, we mostly failed. Because of free speech, people told us to mind our own business, more or less.”

In 2019, there was a horrendous mass mosque shooting in Christchurch, New Zealand. And you could find multiple copies of the shooting videos on Facebook, YouTube and Twitter. Macron invited New Zealand Prime Minister Jacinda Ardern, several digital ministers of the G7 and tech companies to Paris.

They all signed a nonbinding pledge called the Christchurch Call. Essentially, tech companies that operate social platforms agreed to increase their efforts when it comes to blocking toxic content — and terrorist content in particular.

Facebook, Twitter, Google (and YouTube), Microsoft, Amazon and other tech companies signed the pledge. Seventeen countries and the European Commission also backed the Christchurch Call. There was one notable exception — the U.S. didn’t sign it.

“This strategy led to some concrete results because all online platforms that signed it have followed through,” Macron said. “Evidence of this lies in what happened in France last fall when we faced terrorist attacks.” In October 2020, French middle-school teacher Samuel Paty was beheaded by a terrorist.

“Platforms flagged content and removed content within an hour,” he added.

Over time, more countries and online platforms announced their support for the Christchurch Call. In May, President Joe Biden joined the international bid against toxic content. “Given the number of companies incorporated in the U.S., it’s a major step and I welcome it,” Macron said today.

But what comes next after the Christchurch Call? First, Macron wants to convince more countries to back the call — China and Russia, for instance, are not among its supporters.

“The second thing is that we have to push forward to create a framework for all sorts of online hate speech, racist speech, anti-semitic speech and everything related to online harassment,” Macron said.

He then briefly referred to French regulation on this front. Last year, France’s law on hate speech on online platforms was ruled largely unconstitutional by France’s Constitutional Council, the top authority in charge of deciding whether a new law complies with the constitution.

The list of hate-speech content was long and broad while potential fines were very high. The Constitutional Council feared that online platforms would censor content a bit too quickly.

But that doesn’t seem to be stopping Macron from backing new regulation on online content at the European level and at the G7 level.

“It’s the only way to build an efficient framework that we can bring at the G20 summit and that can help us fight against wild behavior in online interactions — and therefore wild behavior in our new world order,” Macron said, using the controversial ‘wild behavior’ metaphor (ensauvagement). That term was first popularized by far-right political figures.

According to him, if world leaders fail to find some common ground when it comes to online regulation, it’ll lead to internet fragmentation. Some countries may choose to block several online services, for instance.

And yet, recent events have shown us that this ship has sailed already. The Nigerian government suspended Twitter operations in the country just a few days ago. It’s easy to agree to block terrorist content, but it becomes contentious quite quickly when you want to moderate other content.

#emmanuel-macron, #europe, #hate-speech, #macron, #policy, #regulation, #social, #toxic-content


Bosch sees a place for renewable fuels, challenging proposed European Union engine ban

Bosch executives on Thursday criticized proposed EU regulations that would ban the internal combustion engine by 2025, saying that lawmakers “shy away” from discussing the consequences of such a ban on employment.

Although the company reported it is creating jobs through its new businesses, particularly its fuel cell business, and said it was filling more than 90% of these positions internally, it also said an all- or mostly-electric transportation revolution would likely affect jobs. As a case in point, the company told reporters that ten Bosch employees are needed to build a diesel powertrain system, three for a gasoline system — but only one for an electrical powertrain.

Instead, Bosch sees a place for renewable synthetic fuels and hydrogen fuel cells alongside electrification. Renewable synthetic fuels made from hydrogen are a different technology from hydrogen fuel cells. Fuel cells use hydrogen to generate electricity, while hydrogen-derived fuels can be combusted in a modified internal combustion engine (ICE).

“An opportunity is being missed if renewable synthetic fuel derived from hydrogen and CO2 remains off-limits in road transport,” Bosch CEO Volkmar Denner said.

“Climate action is not about the end of the internal-combustion engine,” he continued. “It’s about the end of fossil fuels. And while electromobility and green charging power make road transport carbon neutral, so do renewable fuels.”

Electric solutions have limits, Denner said, particularly in powering heavy-duty vehicles. The company earlier this month established a joint venture with Chinese automaker Qingling Motors to build fuel cell powertrains in a test fleet of 70 trucks.

Bosch’s confidence in hydrogen fuel cells and synthetic fuels isn’t to the exclusion of battery-electric mobility. The company, which is one of the world’s largest suppliers of automotive and industrial components, said its electromobility business is growing by almost 40 percent, and the company projects annual sales of electrical powertrain components to increase to around €5 billion ($6 billion) by 2025, a fivefold increase.

However, the German company said it was “keeping its options open” by also investing €600 million ($721.7 million) in fuel cell powertrains in the next three years.

“Ultimately Europe won’t be able to achieve climate neutrality without a hydrogen economy,” Denner said.

Bosch has not been immune from the effects of the global semiconductor shortage, which continues to drag into 2021. Board member Stefan Asenkerschbaumer warned that there is a risk the shortage “will stifle the recovery that was forecast” for this year. Taiwan Semiconductor Manufacturing Company executives told investors earlier this month that the situation may persist into 2022.

#automotive, #bosch, #electric-vehicles, #european-union, #regulation, #renewable-fuels, #synthetic-fuels, #tc, #transportation


EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.
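
To make that ceiling concrete, here is a minimal sketch of the penalty cap as described above — the greater of 4% of global annual turnover or €20 million. The function name and example turnover figures are illustrative, not taken from the draft regulation.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    # Penalty ceiling for prohibited AI practices as described above:
    # 4% of global annual turnover, or EUR 20 million if that is greater.
    return max(0.04 * global_annual_turnover_eur, 20_000_000.0)

# A firm with EUR 100B in turnover faces a cap of EUR 4B; a small firm
# with EUR 50M in turnover is still exposed to the EUR 20M floor.
print(max_fine_eur(100_000_000_000))  # 4000000000.0
print(max_fine_eur(50_000_000))       # 20000000.0
```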

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.
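
As a rough illustration of that two-step logic — and only an illustration, since the draft defines neither numeric scales nor thresholds — a hypothetical screening routine might look like this:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    may_cause_listed_harm: bool  # step 1: could the intended use cause one of the listed harms?
    severity: float              # step 2a: severity of the possible harm (placeholder 0..1 scale)
    probability: float           # step 2b: probability of occurrence (placeholder 0..1 scale)

def is_high_risk(use_case: UseCase, threshold: float = 0.5) -> bool:
    # Step 1: if no listed harm is possible, the use-case is not classified as high risk.
    if not use_case.may_cause_listed_harm:
        return False
    # Step 2: weigh severity against probability; the threshold here is invented.
    return use_case.severity * use_case.probability >= threshold

# e.g. a recruitment-screening system judged likely to affect professional
# opportunities would come out as high risk under this toy scoring.
print(is_high_risk(UseCase(True, 0.8, 0.7)))  # True
print(is_high_risk(UseCase(True, 0.3, 0.2)))  # False
```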

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met such systems would not be barred from the EU market under the legislative plan.

Other requirements include security measures and that the AI achieves consistent accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a draft that leaked early last year, before the subsequent White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessment for high risk AIs is also envisaged as an ongoing process rather than a one-off exercise, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.
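
A hedged sketch of what such a post-market trigger could look like in practice — the drift tolerance and accuracy metric are invented for illustration; the draft itself only speaks of changes that were “not pre-determined and assessed”:

```python
def needs_new_conformity_assessment(assessed_accuracy: float,
                                    observed_accuracy: float,
                                    tolerance: float = 0.02,
                                    intended_purpose_changed: bool = False) -> bool:
    # Re-assess when a continuously learning system drifts beyond the behaviour
    # evaluated at conformity time, or when its intended purpose changes.
    drifted = abs(observed_accuracy - assessed_accuracy) > tolerance
    return drifted or intended_purpose_changed

# Assessed at 94% accuracy, now observed at 90%: outside the invented tolerance,
# so a fresh conformity assessment would be triggered.
print(needs_new_conformity_assessment(0.94, 0.90))  # True
```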

The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market, and to conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen, and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 

#ai, #artificial-intelligence, #behavioral-advertising, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #facial-recognition, #general-data-protection-regulation, #policy, #regulation, #tc


Nvidia wants to buy CPU designer Arm—Qualcomm is not happy about it

Some current Arm licensees view the proposed acquisition as highly toxic. (credit: Aurich Lawson / Nvidia)

In September 2020, Nvidia announced its intention to buy Arm, the license holder for the CPU technology that powers the vast majority of mobile and high-powered embedded systems around the world.

Under the proposed deal, Nvidia would acquire Arm from Japanese conglomerate SoftBank for $40 billion—a number that is difficult to put into perspective. Forty billion dollars would represent one of the largest tech acquisitions of all time, but 40 Instagrams or so doesn’t seem like that much to pay for control of the architecture supporting every well-known smartphone in the world, plus a staggering array of embedded controllers, network routers, automobiles, and other devices.

Today’s Arm doesn’t sell hardware

Arm’s business model is fairly unusual in the hardware space, particularly from a consumer or small business perspective. Arm’s customers—including hardware giants such as Apple, Qualcomm, and Samsung—aren’t buying CPUs the way you’d buy an Intel Xeon or AMD Ryzen. Instead, they’re purchasing the license to design and/or manufacture CPUs based on Arm’s intellectual property. This typically means selecting one or more reference core designs, putting several of them in one system on chip (SoC), and tying them all together with the necessary cache and other peripherals.
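
As a loose illustration of that licensing flow — the core names, cache size and peripherals below are invented, not actual Arm products — a licensee’s chip definition conceptually looks something like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LicensedCore:
    name: str    # a reference core design licensed from Arm (name illustrative)
    count: int   # how many instances go on the chip

@dataclass
class SoC:
    # Toy model of the flow described above: pick reference core designs,
    # instantiate several on one chip, and tie them together with cache
    # and the peripherals the product needs.
    cores: List[LicensedCore]
    shared_cache_mb: int
    peripherals: List[str] = field(default_factory=list)

phone_soc = SoC(
    cores=[LicensedCore("big-core", 4), LicensedCore("little-core", 4)],
    shared_cache_mb=8,
    peripherals=["GPU", "ISP", "modem"],
)
print(phone_soc)
```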


#acquisition, #antitrust, #arm, #cpu, #gpu, #merger, #mobile-cpu, #nvidia, #processors, #qualcomm, #regulation, #tech


Tech and health companies including Microsoft and Salesforce team up on digital COVID-19 vaccination records

A new cross-industry initiative is seeking to establish a standard for digital vaccination records that can be used universally to identify COVID-19 vaccination status for individuals, in a way that is both secured via encryption and traceable and verifiable, so the records’ contents can be trusted. The so-called ‘Vaccination Credential Initiative’ includes a range of big-name companies from both the healthcare and the tech industry, including Microsoft, Oracle, Salesforce and Epic, as well as the Mayo Clinic, Safe Health, Change Healthcare and the CARIN Alliance, to name a few.

The effort is beginning with existing, recognized standards already in use in digital healthcare programs, like the SMART Health Cards specification, which adheres to HL7 FHIR (Fast Healthcare Interoperability Resources), a standard created for digital health records to make them interoperable between providers. The final product that the initiative aims to establish is an “encrypted digital copy of their immunization credentials to store in a digital wallet of their choice,” with a backup available as a printed QR code containing W3C-standard verifiable credentials for individuals who don’t own, or prefer not to use, smartphones.
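
For a rough sense of what such a record could look like on the wire, here is a minimal, purely illustrative sketch: a FHIR-style Immunization resource compressed and base64url-encoded so it could fit in a printed QR code. The field values are invented, and the real SMART Health Cards specification defines its own signing and QR encoding, which this sketch does not attempt to reproduce.

```python
import base64
import json
import zlib

# Illustrative FHIR-style Immunization resource; values are placeholders.
immunization = {
    "resourceType": "Immunization",
    "status": "completed",
    "vaccineCode": {"coding": [{"system": "http://hl7.org/fhir/sid/cvx", "code": "207"}]},
    "patient": {"reference": "Patient/example"},
    "occurrenceDateTime": "2021-01-10",
}

def to_qr_payload(resource: dict) -> str:
    # Compress and base64url-encode the record so it stays small enough to
    # print as a QR code; a production system would also sign the payload.
    raw = json.dumps(resource, separators=(",", ":")).encode("utf-8")
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode("ascii")

print(to_qr_payload(immunization))
```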

Vaccination credentials aren’t a new thing – they’ve existed in some form or another since the 1700s. But their use and history is also mired in controversy and accusations of inequity, since this is human beings we’re dealing with. And already with COVID-19, there are efforts underway to make access to certain geographies dependent upon negative COVID-19 test results (though such results don’t guarantee that an individual doesn’t have COVID-19 or won’t transfer it to others).

A recent initiative by LA County is already providing digital immunization records to individuals via a partnership with Healthvana, facilitated by Apple’s Wallet technology. But Healthvana’s CEO and founder was explicit in telling me that that isn’t about providing a proof of immunity for use in determining an individual’s social or geographic access. Instead, it’s about informing and supporting patients for optimal care outcomes.

It sounds like this initiative is much more about using a COVID-19 immunization record as a literal passport of sorts. It’s right in the name of the initiative, for once (‘Credential’ is pretty explicit). The companies involved also at least seem cognizant of the potential pitfalls of such a program, as MITRE’s chief digital health physician Dr. Brian Anderson said that “we are working to ensure that underserved populations have access to this verification,” and added that “just as COVID-19 does not discriminate based on socio-economic status, we must ensure that convenient access to records crosses the digital divide.”

Other quotes from Oracle and Salesforce, and additional member leaders, confirm that the effort is focused on fostering a reopening of social and economic activity, including “resuming travel,” “get[ting] back to public life,” and “get[ting] concerts and sporting events going again.” Safe Health also says that it will help facilitate a “privacy-preserving health status verification” solution that is at least in part “blockchain-enabled.”

Given the urgency of solutions that can lead to a safe re-opening, and a way to keep tabs on the massive, global vaccination program that’s already underway, it makes sense that a modern approach would include a digital version of historic vaccination record systems. But such an approach, while it leverages new conveniences and modes made possible by smartphones and the internet, also opens itself up to new potential pitfalls and risks that will no doubt be highly scrutinized, particularly by public interest groups focused on privacy and equitable treatment.

#articles, #ceo, #encryption, #epic, #health, #healthcare, #healthvana, #mayo-clinic, #microsoft, #oracle, #regulation, #salesforce, #smartphones, #standards, #tc, #vaccination


In a parting gift, EPA finalizes rules to limit its use of science

BARATARIA BAY, Louisiana – JULY 14: A young seagull rests on a boom used to contain the oil spill, July 14, 2010. In the future, should this bird be killed by the oil, nobody could be held responsible. (credit: Mario Tama / Getty Images)

With the days counting down to the inauguration of President-elect Joe Biden, the Trump administration has been undertaking a series of actions that will make it more difficult for its replacements to reverse any of its policies or pursue new ones. This is especially true in the area of environmental regulations, where both the Environmental Protection Agency and the Department of Interior have recently issued decisions.

Over the last few days, Interior has issued new rules that will allow industries to kill migratory birds with impunity, and the department has moved ahead with plans to lease portions of the Arctic National Wildlife Refuge for drilling tomorrow. Meanwhile, the EPA has finally pushed through a new rule that could severely limit the ability of the agency to establish future regulations. The only small bit of consolation is that the EPA’s final rule is less awful than some earlier drafts.

Only the science we like

The EPA’s new rule, which will be formally published tomorrow, is an attempt to set additional standards for the evidence it considers when establishing new regulations for pollutants. In principle, the rule sounds great: it wants the data behind the scientific papers it uses to be made publicly available before it can be used to support regulatory decisions. In reality, the rule is problematic, because many of these studies rely on patient records that need to be kept confidential. In other cases, the organizations with the best information on some environmental hazards are the companies that produce or work with them, and they may not be interested in sharing proprietary data.


#environmental-laws, #epa, #government, #interior, #policy, #regulation, #science


Italy fines Apple $12 million over iPhone marketing claims

The iPhone 11 Pro Max. (credit: Samuel Axon)

Italy has again hit Apple with a fine for what the country’s regulators deem to be misleading marketing claims, though the fine is only €10 million ($12 million)—a pittance for a company like Apple.

This time around, Italy’s Autorita Garante della Concorrenza e del Mercato (AGCM) claims that Apple told consumers that many iPhone models are water resistant but that the iPhones are not as resistant as Apple says. In one example, Apple claimed the iPhone 8 was rated IP67 for water and dust resistance, meaning the phone could survive for up to 30 minutes under three feet of water. But the Italian regulator says that’s only true under controlled lab conditions using static, pure water.

An announcement by the AGCM specifically names the iPhone 8, iPhone 8 Plus, iPhone XR, iPhone XS, iPhone XS Max, iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max. Presumably, the claims would also apply to the iPhone 12 line, but that line was only just introduced to the market.


#agcm, #antitrust, #apple, #eu, #iphone, #italy, #regulation, #smartphone, #tech, #water-resistance


Coalition for App Fairness, a group fighting for app store reforms, adds 20 new partners

The Coalition for App Fairness (CAF), a newly-formed advocacy group pushing for increased regulation over app stores, has more than doubled in size with today’s announcement of 20 new partners — just one month after its launch. The organization, led by top app publishers and critics including Epic Games, Deezer, Basecamp, Tile, Spotify and others, debuted in late September to fight back against Apple and Google’s control over app stores, and particularly the stores’ rules around in-app purchases and commissions.

The coalition claims both Apple and Google engage in anti-competitive behavior, as they require publishers to use the platforms’ own payment mechanisms, and charge 30% commission on these forced in-app purchases. In some cases, those commissions are collected from apps where Apple and Google offer a direct competitor. For example, both app stores take a commission from Spotify, which competes with Google’s YouTube Music and Apple’s own Apple Music.

The group also calls out Apple more specifically for not allowing app publishers any means of reaching the iOS user base except through the App Store that Apple controls. Google, however, allows apps to be sideloaded, so it is less of a concern on that platform.

The coalition launched last month with 13 app publishers as its initial members, and invited other interested parties to sign up to join.

Since then, CAF says “hundreds” of app developers expressed interest in the organization. It’s been working through applications to evaluate prospective members, and is today announcing its latest cohort of new partners.

This time, the app publishers aren’t necessarily big household names, like Spotify and Epic Games, but instead represent a wide variety of apps, ranging from studios to startups.

The apps also hail from a number of app store categories, including Business, Education, Entertainment, Developer Tools, Finance, Games, Health & Fitness, Lifestyle, Music, Navigation, News, Productivity, Shopping, Sport, and Travel.

The new partners include: development studio Beonex, health app Breath Ball, social app Challenge by Eristica, shopping app Cladwell, fitness app Down Dog Yoga, developer tool Gift Card Offerwall, game maker Green Heart Games, app studio Imagine BC, business app Passbase, music app Qobuz, lifestyle app QuackQuack and Qustodio, game Safari Forever, news app Schibsted, app studio Snappy Mob, education app SpanishDict, navigation app Sygic, app studio Vertical Motion, education app YARXI, and the Mobile Marketing Association.

With the additions, CAF now includes members from Austria, Australia, Canada, France, Germany, India, Israel, Malaysia, Norway, Singapore, Slovakia, Spain, United Kingdom, and the United States.

The new partners have a range of complaints against the app stores, and particularly Apple.

SpanishDict, for instance, was frustrated by weeks of rejections with no recourse and inconsistently applied policies, it says. It also didn’t want to use Apple’s new authentication system, Sign In With Apple, but Apple made this a requirement for being included on the App Store.

Passbase, a Sign In With Apple competitor, also argues that Apple applied its rules unfairly, denying its submission but allowing its competitors on the App Store.

While some of the app partners are speaking out against Apple for the first time, others have already detailed their struggles publicly.

Eristica posted on its own website how Apple shut down its seven-year-old social app business, which allowed users to challenge each other to dares to raise money for charity. The company claims it pre-moderated the content to ensure dangerous and harmful content wasn’t published, and employed human moderators, but was still rejected over dangerous content.

Meanwhile, TikTok remained on the App Store, despite hosting harmful challenges, like the pass out challenge, cereal challenge, salt and ice challenge and others, Eristica says.

Apple, of course, tends to use its policies to shape what kind of apps it wants to host on its App Store — and an app that focused on users daring one another may have been seen as a potential liability.

That said, Eristica presents a case where it claims to have followed all the rules and made all the changes Apple said it wanted, and yet still couldn’t get back in.

Down Dog Yoga also recently made waves by calling out Apple for rejecting its app because it refused to auto-charge customers at the end of its free trial.

The issue, in this case, wasn’t just that Apple wants a cut of developers’ businesses, it also wanted to dictate how those businesses are run.

Another new CAF partner, Qustodio, was among the apps impacted by Apple’s 2018 parental control app ban, which arrived shortly after Apple introduced its own parental control software in iOS.

The app developer had then co-signed a letter asking that Apple release a Screen Time API rather than banning parental control apps — a consideration that TechCrunch had earlier suggested should have been Apple’s course of action in the first place.

Under increased regulatory scrutiny, Apple eventually relented and allowed the apps back on the App Store last year.

Not all partners are some little guy getting crushed by App Store rules. Some may have run afoul of rules designed to protect consumers, like Apple’s crackdown on offerwalls. Gift Card Offerwall’s SDK, for example, was used to incentivize app monetization and in-app purchases, which isn’t something consumers tend to welcome.

Despite increased regulatory pressure and antitrust investigations into their business practices, both Apple and Google have modified their app store rules in recent weeks to ensure they’re clear about their right to collect in-app purchases from developers.

Meanwhile, Apple and CAF member Epic Games are engaged in a lawsuit over the Fortnite ban, as Epic chose to challenge the legality of the app store business model in the court system.

Other CAF members, including Spotify and Tile, have testified in antitrust investigations against Apple’s business practices, as well.

“Apple must be held accountable for its anticompetitive behavior. We’re committed to creating a level playing field and fair future, and we’re just getting started,” CAF said in an announcement about the new partners. It says it’s still open to new members.

#advocacy, #app-developer, #app-stores, #app-store, #apple, #apple-inc, #apps, #basecamp, #coalition-for-app-fairness, #developers, #epic-games, #google, #itunes, #policy, #regulation, #spotify, #tile


Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance Facebook, certainly a social media company, received a record $5 billion fine last year for failure to comply with rules set by the FTC. But not because the company violated its social media regulations — there aren’t any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)

Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators


The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that when violated result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments like this one, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress


What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are much more likely to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something that is plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation or GDPR, a set of rules, or rather augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex of a topic to address here in the detail it deserves, and also reaches beyond the question of who exactly regulates social media. Europe’s role in that question of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.

Image: Capitol building (Credit: Bryce Durbin/TechCrunch)

What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even an agency like the FAA lags behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders; they are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies grounded in, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority these companies and their activities — the scope of which we have only the faintest notion of — will remain in a blissful limbo, picking and choosing which rules to abide by and which to fulminate and lobby against. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe — perhaps it’s time we asked them to trust us for once.

#facebook, #fcc, #ftc, #gdpr, #government, #instagram, #regulation, #social, #social-media, #social-networks, #tc, #twitter

0

Twitter hack probe leads to call for cybersecurity rules for social media giants

An investigation into this summer’s Twitter hack by the New York State Department of Financial Services (NYSDFS) has ended with a stinging rebuke for how easily Twitter let itself be duped by a “simple” social engineering technique — and with a wider call for key social media platforms to be regulated on security.

In the report, the NYSDFS points, by way of contrasting example, to how quickly regulated cryptocurrency companies acted to prevent the Twitter hackers from scamming even more people — arguing this demonstrates that tech innovation and regulation aren’t mutually exclusive.

Its point is that the biggest social media platforms have huge societal power (with all the associated consumer risk) but no regulated responsibilities to protect users.

The report concludes this is a problem U.S. lawmakers need to get on and tackle stat — recommending that an oversight council be established (to “designate systemically important social media companies”) and an “appropriate” regulator appointed to ‘monitor and supervise’ the security practices of mainstream social media platforms.

“Social media companies have evolved into an indispensable means of communications: more than half of Americans use social media to get news, and connect with colleagues, family, and friends. This evolution calls for a regulatory regime that reflects social media as critical infrastructure,” the NYSDFS writes, before going on to point out there is still “no dedicated state or federal regulator empowered to ensure adequate cybersecurity practices to prevent fraud, disinformation, and other systemic threats to social media giants”.

“The Twitter Hack demonstrates, more than anything, the risk to society when systemically important institutions are left to regulate themselves,” it adds. “Protecting systemically important social media against misuse is crucial for all of us — consumers, voters, government, and industry. The time for government action is now.”

We’ve reached out to Twitter for comment on the report.

Among the key findings from the Department’s investigation is that the hackers broke into Twitter’s systems by calling employees and claiming to be from Twitter’s IT department — a simple social engineering method that allowed them to trick four employees into handing over their log-in credentials. From there they were able to access the Twitter accounts of high profile politicians, celebrities, and entrepreneurs, including Barack Obama, Kim Kardashian West, Jeff Bezos, Elon Musk, and a number of cryptocurrency companies — using the hijacked accounts to tweet out a crypto scam to millions of users.

Twitter has previously confirmed that a “phone spear phishing” attack was used to gain credentials.

Per the report, the hackers’ “double your bitcoin” scam messages, which contained links to make a payment in bitcoins, enabled them to steal more than $118,000 worth of bitcoins from Twitter users.

A considerably larger sum was prevented from being stolen as a result of swift action taken by regulated crypto companies — namely Coinbase, Square, Gemini Trust Company and Bitstamp — which the Department said blocked scores of attempted transfers by the fraudsters.

“This swift action blocked over 6,000 attempted transfers worth approximately $1.5 million to the Hackers’ bitcoin addresses,” the report notes.

Twitter is also called out for not having a cybersecurity chief in post at the time of the hack — after failing to replace Michael Coates, who left in March. (Last month it announced Rinki Sethi had been hired as CISO).

“Despite being a global social media platform boasting over 330 million average monthly users in 2019, Twitter lacked adequate cybersecurity protection,” the NYSDFS writes. “At the time of the attack, Twitter did not have a chief information security officer, adequate access controls and identity management, and adequate security monitoring — some of the core measures required by the Department’s first-in-the-nation cybersecurity regulation.”

European Union data protection law already bakes in security requirements as part of a comprehensive privacy and security framework (with major penalties possible for security breaches). However an investigation by the Irish DPC into a 2018 Twitter security incident has yet to conclude after a draft decision failed to gain the backing of the other EU data watchdogs this August — triggering a further delay to the pan-EU regulatory process.

#crypto, #hack, #policy, #regulation, #security, #social, #social-media, #twitter

0

The next big tech hearing is scheduled for October 28

A day after the Senate Commerce Committee moved forward with plans to subpoena the CEOs of Twitter, Facebook and Google, it looks like some of the most powerful leaders in tech will testify willingly.

Twitter announced late Friday that Jack Dorsey would appear virtually before the committee on October 28, just days before the U.S. election. While Twitter is the only company that’s openly agreed to the hearing so far, Politico reports that Sundar Pichai and Mark Zuckerberg also plan to appear.

Members of both parties on the committee planned to use the hearings to examine Section 230, the key legal shield that protects online platforms from liability for the content their users create.

As we’ve discussed previously, the political parties approach Section 230 from very different perspectives. Democrats see threatening changes to Section 230 as a way to force platforms to take toxic content like misinformation and harassment more seriously.

Many Republicans believe tech companies should be stripped of Section 230 protections because platforms have an anti-conservative bias — a claim that the facts don’t bear out.

Twitter had some choice words about that perspective, calling claims of political bias an “unsubstantiated allegation that we have refuted on many occasions to Congress” and noting that those accusations have been “widely disproven” by researchers.

“We do not enforce our policies on the basis of political ideology,” the company added.

It sounds like the company and members of the Senate have very different agendas. Twitter indicated that it plans to use the hearing’s timing to steer the conversation toward the election. Politico also reports that the scope of the hearing will be broadened to include “data privacy and media consolidation” — not just Section 230.

A spokesperson tweeting on the company’s public policy account insisted that the hearing “must be constructive,” addressing how tech companies can protect the integrity of the vote.

“At this critical time, we’re committed to keeping our focus squarely on what matters the most to our company: joint efforts to protect our shared democratic conversation from harm — from both foreign and domestic threats,” a Twitter spokesperson wrote.

Regardless of the approach, dismantling Section 230 could prove catastrophic for the way the internet as we know it works, so the stakes are high, both for tech companies and for regular internet users.

#congress, #regulation, #section-230, #section-230-of-the-communications-decency-act, #senate-hearings, #tc, #twitter

0

Draft EU data rules target Apple, Google, Facebook, Amazon

Image credit: Walter Zerla | Getty Images

European regulators once again have the behavior of the biggest US tech companies—Amazon, Apple, Facebook, and Google among them—squarely in their sights as they move forward with a proposal to reform how digital marketplaces and data sharing operate.

An early draft of the Digital Services Act, under consideration by the European Parliament, would not only require tech firms to share data with smaller rivals but would also limit the ways companies can use customer data they’ve already collected, the Financial Times was first to report.

Under the proposal, tech firms with the potential to act as gatekeepers “shall not pre-install exclusively their own applications nor require from any third-party operating system developers or hardware manufacturers to pre-install exclusively gatekeepers’ own application,” according to Reuters. The draft also mandates that gatekeeper companies will not be permitted to use data collected on their platforms to target users unless that data is also shared with rival firms.

#antitrust, #competition, #europe, #european-commission, #european-parliament, #european-union, #laws, #policy, #regulation

0

Uber wins latest London licence appeal

Uber has won its appeal against having its licence to operate withdrawn in London.

In today’s judgement the court decided it was satisfied with process improvements made by the ride-hailing company, including around its communication with the city’s transport regulator.

The new licence comes with 21 conditions, jointly suggested to the Magistrate by Uber and TfL.

However it’s still not clear how long Uber will be granted a licence for — with the judge wanting to hear more evidence before taking a decision.

We’ve reached out to Uber and TfL for comment.

The ride-sharing giant has faced a multi-year battle to have its licence reinstated after Transport for London, the city’s transport regulator, took the shock decision not to issue a renewal in 2017 — citing safety concerns and deeming Uber not “fit and proper” to hold a private hire operator licence.

It went on to win a provisional appeal back in 2018 — when a UK court granted it a 15-month licence to give it time to continue working on meeting TfL’s requirements. However last November the regulator once again denied a full licence renewal — raising a range of new safety issues.

Despite that Uber has been able to continue operating in London throughout the appeals process — albeit, with ongoing uncertainty over the future of its licence. Now it will be hoping this is in the past.

In the appeal, Uber’s key argument was it is now “fit and proper” to hold a licence — claiming it’s listened to the regulator’s concerns and learnt from errors, making major changes to address issues related to passenger safety.

For example Uber pointed to improvements in its governance and document review systems, including a freeze on drivers who had not taken a trip for an extended period; real-time driver ID verification; and new scrutiny teams and processes; as well as the launch of ‘Programme Zero’ — which aims to prevent all breaches of licence conditions.

It also argued system flaws were not widespread — claiming only 24 of the 45,000 drivers using the app had exploited its system to its knowledge.

It also argued it now cooperates effectively and proactively with TfL and police forces, denying it conceals any failures. Furthermore, it claimed denying its licence would have a “profound effect” on groups at risk of street harassment — such as women and ethnic minorities, as well as disabled people.

It’s certainly fair to say the Uber of 2020 has travelled some distance from the company whose toxic internal culture included developing proprietary software to try to thwart regulatory oversight and eventually led to a major change of guard of its senior management.

However it’s notable that the court has chosen to leave the length of Uber’s licence open for debate. So while it’s a win for Uber, it comes with some watchful caveats.

Offering commentary on today’s ruling, Anna McCaffrey, a senior counsel for the law firm Taylor Wessing, highlighted this element of the judgement. “The Magistrates Court agreed that Uber had made improvements and addressed TfL safety concerns. However, the fact that the length of extension is up for debate, rather than securing Uber’s preferred five year licence, demonstrates that Uber will have to work hard to continue to prove to TfL and the Court that it has really changed. If not, Uber is likely to find itself back in Court facing the same battle next year,” she noted in a statement.

She also pointed out that a decision is still pending from the Supreme Court to “finally settle” the question as to whether Uber’s drivers are workers or self-employed — another long-running legal saga for Uber in the UK.

The company is also facing fresh legal challenges related to its algorithmic management of drivers. So there’s still plenty of work for its lawyers.

The App Drivers and Couriers Union (ADCU), meanwhile, offered a cautious welcome of the court’s decision to grant Uber’s licence renewal — given how many of its members are picking up jobs via its platform.

However the union also called for the mayor of London to break up what it dubbed Uber’s “monopoly” by imposing limits on the numbers of drivers who can register on its platform. In a statement, ADCU president, Yaseen Aslam, argued: “The reduced scale will give both Uber and Transport for London the breathing space necessary to ensure all compliance obligations -– including worker rights — are met in future.”

Update: Uber has now sent this statement — attributed to Jamie Heywood, regional general manager for Northern & Eastern Europe: “This decision is a recognition of Uber’s commitment to safety and we will continue to work constructively with TfL. There is nothing more important than the safety of the people who use the Uber app as we work together to keep London moving.”

#apps, #europe, #lawsuit, #london, #regulation, #ride-hailing, #tfl, #uber

0

Apple iCloud, Google Drive and Dropbox probed over ‘unfair’ T&Cs in Italy

Italy’s competition authority has opened an investigation into cloud storage services operated by Apple, Dropbox and Google, in response to a number of complaints alleging unfair commercial practices.

In a press release announcing the probe, the AGCM says it’s opened six investigations in all. The services of concern are Google’s Drive, Apple iCloud and the eponymous Dropbox cloud storage service.

As well as allegations of unfair commercial practices, the regulator said it’s looking into complaints of violations of Italy’s Consumer Rights Directive.

A further complaint alleges the presence of vexatious clauses in the contract.

We’ve reached out to the three tech giants for comment.

All three cloud storage services are being investigated over complaints of unfair practices related to the collection of user data for commercial purposes — such as a lack of proper information or valid consent for such commercial data collection — per the press release.

Dropbox is also being accused of failing to clearly communicate contractual conditions such as procedures for withdrawing from a contract or exercising a right to reconsider. Access to out-of-court dispute settlement mechanisms is also being looked at by the regulator.

Other contractual conditions probed over concerns of unfairness include clauses with sweeping rights for providers to suspend and interrupt the service; liability exemptions even in the event of loss of documents stored in the user’s cloud space; the possibility of unilateral modification of the contractual conditions; and the prevalence of the English version of the contract text over the Italian version.

In recent years the European Commission has made a pan-EU push for social media firms to clarify their T&Cs — which led to Facebook agreeing to plainer worded T&Cs last year, as well as making some additional tweaks, such as amending its power to unilaterally amend contracts.

#antitrust, #apple-icloud, #apps, #cloud, #dropbox, #europe, #google-drive, #italy, #regulation

0

Google pushes Europe to limit ‘gatekeeper’ platform rules

Google has made its pitch to shape the next decades of digital regulation across the European Union, submitting a 135-page response yesterday to the consultation on the forthcoming Digital Services Act (DSA) — which will update the bloc’s long-standing rules around ecommerce.

The package also looks set to introduce specific rules for so-called “gatekeeper platforms” which wield outsized market power thanks to digital network effects. Hence Mountain View’s dialled-up attention to detail.

The lion’s share of Google’s submission focuses on lobbying against the prospect of ex ante regulation for such platform giants — something the European Commission has nonetheless signalled is front of mind as it looks at how to rein in platform power.

This type of regulatory intervention aims to identify competition problems and shape responses ‘before the event’ by imposing obligations on players that hold significant market power, rather than relying on after-the-fact competition enforcement once market harm has been established.

“A blanket approach to ex ante competition regulation could have unintended consequences on user experience as well as multiplying costs for European businesses,” it writes, urging lawmakers to take a long, hard look at existing regulation to see whether it can already do the job of ensuring markets are “working properly”.

“Where the evidence shows meaningful gaps, the next step ought to be to consider how one can modernise those existing rules and procedures to address the underlying concerns before turning to consideration of new and distinct regulatory frameworks,” it adds.

If EU lawmakers must go ahead with ex ante regulation of platforms giants, Google — an adtech giant — is especially keen that they do not single out any specific business models. So it definitely wouldn’t be a fan of ex ante regs applied only to surveillance-fuelled ad-targeting platforms. Funny that. 

“The criteria for identifying ‘gatekeeper power’ should be independent of the particular business model that a platform uses, making no distinction as between platforms that operate business models based on advertising, subscriptions, sales commissions, or sales of hardware,” Google writes.

“Digital platforms often operate using different business and monetization strategies, across multiple markets, geographies, and sectors, with varying degrees of competitive strength in each. Regulators should not favor or discriminate against any business, business model, or technology from the outset,” it goes on.

“In certain sectors, the platform may have market power; in others, it may be a new entrant or marginal player. The digital ecosystem is extremely diverse and evolving rapidly and it would be misguided for gatekeeper designations to be evaluated by reference to the position of an entire company or corporate group.”

Nor should lawmakers opt for what Google dubs “an overly simplistic” assessment of what constitutes a gatekeeper — giving the example of number of users as an inadequate way to determine whether a platform giant has significant market power in a given moment. (Relevant: Google market share of search in Europe exceeds 90%.)

“Recent competition enforcement demonstrates the range of platforms that have been found to have market power (e.g., Microsoft, Google, Facebook, Amazon, and Apple) and other platforms may be found to have market power in the future (borne out, for example, by the UK CMA’s investigation into online auction platform services),” it writes. “The gatekeeper assessment should therefore recognize that a range of platforms — operating a range of different business models (e.g., ad-funded, subscription-based, commission-based, hardware sales) — may hold ‘market power’ in different circumstances and vis-à-vis different platform participants.”

The tech giant can also be seen pushing a familiar talking point for when its business is accused of profiting, parasitically, off of others’ content — suggesting that when regulators assess gatekeeper status by considering how economically dependent traditional businesses are on a limited number of online platforms, they should look favorably on those platforms “through which a materially significant proportion of business (e.g. in the form of highly valuable traffic) is channeled”.

But of course it would say that clicks are just as good as all the ad dollars it’s making.

Google is also pushing for regular review of any gatekeeper designations to ensure any obligations keep pace with fast-moving markets and competition shifts (it points to the recent rise of TikTok by way of example).

It also doesn’t want gatekeeper designations to apply universally across all markets — arguing instead they should only apply in the specific market where a platform is “found to have ‘gatekeeper’ power”.

“Large digital platforms tend to operate across multiple markets and sectors, with varying degrees of competitive strength in each,” Google argues, adding that: “Applying ex ante rules outside these markets would create a risk of deterring pro-competitive market entry through excessive regulation, thereby depriving SMEs and consumers of attractive new products.”

That would stand in contrast to the EU’s modus operandi around competition law enforcement — where a business that’s been judged to be dominant in one market (like Google is in search) has what competition chief Margrethe Vestager likes to refer to as a “special responsibility” not to abuse its market power to leverage that advantage in any other market, not only the one it’s been found to hold most of the market power.

At the same time as Google is lobbying for limits on any gatekeeper designations, the tech giant wants to see certain types of rules applied universally to all players. Here it gives the examples of privacy, transparency (such as for fees) and ranking decisions.

Data portability is another area it’s urging rules to be applied industry-wide.

It also wants to see any online ad rules applied universally, not just to gatekeeper platforms. But it’s also very keen for hard limits on any such rules.

“It will be important that any interventions seeking to achieve more transparency and accountability are carefully designed to avoid inadvertently hampering the ability of online advertising tools to deliver the value that publishers and advertisers have come to expect,” the adtech giant writes, lobbying to reduce the amount of transparency and accountability set down in law by invoking claims of privacy risks to user data; threats to commercial IP; and ‘bad actors’ gaming the system if it’s not allowed to continue being (an ad-fraud-tastic) blackbox.

“Consideration of these measures will therefore require the balancing of factors including protection of users’ personal data and partners’ commercially sensitive information, and potential harm to users and competition through disclosure of data signals that allow ‘bad actors’ to game the system, or rivals to copy innovations. We stand ready to engage with the Commission on these issues,” Google intones.

On updating ecommerce rules and liability — which is a stated aim of the DSA plan — Google is cautiously supportive of regulatory changes to reflect what it describes as “the digital transformation of the last two decades”, while pushing to retain core elements of the current e-Commerce Directive regime, including the country-of-origin principle and the freedom to provide cross-border digital services.

For example, it wants to see more expansive definitions of digital services to allow for more specific rules for certain types of businesses — pushing for a move away from the ‘active’ and ‘passive’ host distinction so that platforms can respond more proactively in a content moderation context without inviting liability by doing so, while suggesting hosting services may be better served by retaining the current regime (Article 14 of the e-Commerce Directive).

On liability for illegal content it is lobbying for clear lines to be drawn between illegal material and what’s “lawful-but-harmful”.

“Where Member States believe a category of content is sufficiently harmful, their governments may make that content illegal directly, through democratic processes, in a clear and proportionate manner, rather than through back-door regulation of amorphously-defined harms,” it writes.

It also wants the updated law to retain the general prohibition on content monitoring obligations — and downplays the potential of AI to offer any ‘third way’ there.

“While breakthroughs in machine learning and other technology are impressive, the technology is far from perfect, and less accurate on more nuanced or context-dependent content. Their mandated use would be inappropriate, and could lead to restrictions on lawful content and on citizens’ fundamental rights,” Google warns. “The DSA can help prevent risks to fundamental rights by ensuring that companies are not forced to prioritise speed of removal over careful decision-making,” it adds, saying it encounters “many grey-area cases that require appropriate time to evaluate the law and context”.

“We remain concerned about recent laws that enable imposition of large penalties if short, fixed turn-around times are not met,” it goes on, pointing to a recent ruling by the French Constitutional Council which struck down an online hate speech law on freedom of expression grounds.

“Any new standard should safeguard fundamental rights by ensuring an appropriate balance between speed and accuracy of removal,” Google adds.

You can read its full submission — including answers to the Commission’s questionnaire — here.

The Commission’s DSA consultation closes on September 8. EU lawmakers have previously said they will come forward with a draft proposal for the new rules by the end of the year.

#competition, #digital-markets, #digital-services-act, #europe, #european-commission, #google, #policy, #regulation

0

TikTok chief Kevin Mayer launches stinging attack on Facebook

Visitors visit the booth of Douyin (TikTok) at the 2019 smart expo in Hangzhou, China, Oct. 18, 2019. (Image credit: Costfoto | Barcroft Media | Getty Images)

Kevin Mayer, the chief executive of TikTok, has accused Facebook of trying to destroy the Chinese app’s US business by smearing it with “maligning attacks.”

In his first public comments since joining TikTok from Disney, Mr. Mayer issued an 800-word defense of the viral video app, which is under pressure from US regulators and may even be banned by the White House.

Without TikTok, he said, American advertisers “would again be left with few choices”. He described Instagram Reels, a new video service from Facebook that will launch in the US in early August, as a “copycat product” and noted that a similar service from Facebook called Lasso had “failed quickly”.

#facebook, #gaming-culture, #policy, #regulation, #tiktok

0

US investors try to buy TikTok from Chinese owner

Image credit: Getty Images

A group of US tech investors has launched an ambitious plan to buy TikTok from its Chinese owner, as the popular short video app tries to escape being banned by the White House.

The investors, led by the venture capital firms General Atlantic and Sequoia Capital, are in discussions with the US Treasury and other regulators to see if spinning out TikTok and firewalling it from its Chinese parent would satisfy US concerns about the app, according to two people involved in the process.

Last weekend, President Donald Trump’s election campaign placed ads on Facebook suggesting that TikTok was “spying” on US users, a claim the company has denied. Other critics have noted the app’s huge influence as it sits on the mobile phones of tens of millions of Americans.

#gaming-culture, #policy, #regulation, #social-media, #tiktok

0

With pandemic-era acquisitions, big tech is back in the antitrust crosshairs

With many major sectors totally frozen and reeling from losses, tech’s biggest players are proving themselves to be the exception to the rule yet again. On Friday, Facebook confirmed its plans to buy Giphy, a popular gif search engine, in a deal believed to be worth $400 million.

Facebook has indicated it wants to forge new developer and content relationships for Giphy, but what the world’s largest social network really wants with the popular gif platform might be more than meets the eye. As Bloomberg and other outlets have suggested, it’s possible that Facebook really wants the company as a lens into how users engage with its competitors’ social platforms. Giphy’s gif search tools are currently integrated into a number of messaging platforms, including TikTok, Twitter and Apple’s iMessage.

In 2018, Facebook famously got into hot water over its use of a mobile app called Onavo, which gave the company a peek into mobile usage beyond Facebook’s own suite of apps—and violated Apple’s policies around data collection in the process. After that loophole closed, Facebook was so desperate for this kind of insight on the competition that it paid people—including teens—to sideload an app granting the company root access and allowing Facebook to view all of their mobile activity, as TechCrunch revealed last year.

For lawmakers and other regulatory powers, the Giphy buy could ring two separate sets of alarm bells: one for the further evidence of anti-competitive behavior stacking the deck in the tech industry and another for the deal’s potential consumer privacy implications.

“The Department of Justice or the Federal Trade Commission must investigate this proposed deal,” Minnesota Senator Amy Klobuchar said in a statement provided to TechCrunch. “Many companies, including some of Facebook’s rivals, rely on Giphy’s library of sharable content and other services, so I am very concerned about this proposed acquisition.”

In proposed legislation late last month, Sen. Elizabeth Warren (D-MA) and Rep. Alexandria Ocasio-Cortez (D-NY) called for a freeze on big mergers, warning that huge companies might view the pandemic as a chance to consolidate power by buying smaller businesses at fire sale rates.

In a statement, a spokesperson for Sen. Warren called the Facebook news “yet another example of a giant company using the pandemic to further consolidate power,” noting the company’s “history of privacy violations.”

“We need Senator Warren’s plan for a moratorium on large mergers during this crisis, and we need enforcers who will break up Big Tech,” the spokesperson said.

News of Facebook’s latest moves come just days after a Wall Street Journal report revealed that Uber is looking at buying Grubhub, the food delivery service it competes with directly through Uber Eats.

That news also raised eyebrows among pro-regulation lawmakers who’ve been looking to break up big tech. Rep. David Cicilline (D-RI), who chairs the House’s antitrust subcommittee, called that deal “a new low in pandemic profiteering.”

“This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support,” Cicilline said in a statement on the Grubhub acquisition.

The early days of the pandemic may have taken some of the antitrust attention off of tech’s biggest companies, but as the government and the American people fall into a rhythm during the coronavirus crisis, that’s unlikely to last. On Friday, the Wall Street Journal reported that the Department of Justice and a collection of state attorneys general are in the process of filing antitrust lawsuits against Google, with the case expected to hit in the summer months.

#amy-klobuchar, #congress, #elizabeth-warren, #facebook, #government, #mergers-and-acquisitions, #regulation, #tc, #uber

0

Powerful House committee demands Jeff Bezos testify after ‘misleading’ statements

Amazon is in hot water with a powerful congressional committee interested in the company’s potentially anticompetitive business practices.

In a bipartisan letter sent Friday to Jeff Bezos, the House Judiciary committee demanded that the Amazon CEO explain discrepancies between his own prior statements and recent reporting from the Wall Street Journal. Specifically, the letter addressed Amazon’s apparent practice of diving into its trove of data on products and third-party sellers to come up with its own Amazon-branded competing products.

As the Journal notes, Amazon “has long asserted, including to Congress, that when it makes and sells its own products, it doesn’t use information it collects from the site’s individual third-party sellers—data those sellers view as proprietary.”

In documents and interviews with many former employees, the Journal found that Amazon does indeed consult that information when making decisions about pricing, product features and the kinds of products with the most potential to make the company money.

In the letter, the House Judiciary Committee accuses Bezos of making “misleading, and possibly criminally false or perjurious” statements to the committee when asked about the practice in the past.

“It is vital to the Committee, as part of its critical work investigating and understanding competition issues in the digital market, that Amazon respond to these and other critical questions concerning competition issues in digital markets,” the committee wrote, adding that it would subpoena the tech CEO if necessary.

While the coronavirus crisis has taken some of the heat off of tech’s mounting regulatory worries in the U.S., the committee’s actions make it clear that plenty of lawmakers are still interested in taking tech companies to task, even with so many aspects of life still up in the air.

#amazon, #congress, #government, #house-judiciary-committee, #regulation, #tc

0