EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high-profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

#ai-regulation, #artificial-intelligence, #biometrics, #edps, #europe, #european-union, #facial-recognition, #law-enforcement, #policy, #privacy, #surveillance, #wojciech-wiewiorowski


New privacy bill would end law enforcement practice of buying data from brokers

A new bill known as the Fourth Amendment is Not for Sale Act would seal up a loophole that intelligence and law enforcement agencies use to obtain troves of sensitive and identifying information to which they wouldn’t otherwise have legal access.

The new legislation, proposed by Senators Ron Wyden (D-OR) and Rand Paul (R-KY), would require government agencies to obtain a court order to access data from brokers. Court orders are already required when the government seeks analogous data from mobile providers and tech platforms.

“There’s no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider,” Wyden said. Wyden describes the loophole as a way that police and other agencies buy data to “end-run the Fourth Amendment.”

Paul criticized the government for using the current data broker loophole to circumvent Americans’ constitutional rights. “The Fourth Amendment’s protection against unreasonable search and seizure ensures that the liberty of every American cannot be violated on the whims, or financial transactions, of every government officer,” Paul said.

Critically, the bill would also ban law enforcement agencies from buying data on Americans when it was obtained through hacking, violations of terms of service or “from a user’s account or device.”

That bit highlights the questionable practices of Clearview AI, a deeply controversial tech company that sells access to a facial recognition search engine. Clearview’s platform collects pictures of faces scraped from across the web, including social media sites, and sells access to that data to police departments around the country and federal agencies like ICE.

In scraping their sites for data to sell, Clearview has run afoul of just about every major social media platform’s terms of service. Facebook, YouTube, Twitter, LinkedIn and Google have all denounced Clearview for using data culled from their services, and some have even sent cease-and-desist letters ordering the data broker to stop.

The bill would also expand privacy laws to apply to infrastructure companies that own cell towers and data cables, seal up workarounds that allow intelligence agencies to obtain metadata from Americans’ international communications without review by a FISA court and ensure that agencies seek probable cause orders to obtain location and web browsing data.

The bill isn’t just some nascent proposal. It’s already attracted bipartisan support from a number of key co-sponsors, including Senate Majority Leader Chuck Schumer and Bernie Sanders on the Democratic side and Republicans Mike Lee and Steve Daines. A House version of the legislation was also introduced Wednesday.

 

#bernie-sanders, #cell-towers, #chuck-schumer, #clearview-ai, #facial-recognition, #google, #government, #mass-surveillance, #rand-paul, #ron-wyden, #tc


EU lawmakers propose strict curbs on use of facial recognition


(Image credit: John Lamb / The Image Bank / Getty Images)

EU regulators have proposed strict curbs on the use of facial recognition in public spaces, limiting the controversial technology to a small number of public-interest scenarios, according to new draft legislation seen by the Financial Times.

In a confidential 138-page document, officials said facial recognition systems infringed on individuals’ civil rights and therefore should only be used in scenarios in which they were deemed essential, for instance in the search for missing children and the policing of terrorist events.

The draft legislation added that “real-time” facial recognition—which uses live tracking, rather than past footage or photographs—in public spaces by the authorities should only ever be used for limited periods of time, and it should be subject to prior consent by a judge or a national authority.


#european-union, #facial-recognition, #law-enforcement, #policy, #privacy


MEPs call for European AI rules to ban biometric surveillance in public

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal’s exemption from the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

#ai, #ai-regulation, #artificial-intelligence, #biometrics, #discrimination, #europe, #european-parliament, #european-union, #facial-recognition, #fundamental-rights, #law-enforcement, #mass-surveillance, #meps, #national-security, #policy, #privacy, #surveillance


Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means.”

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protection for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However, the judgement, issued on February 24, was a default judgement — and Uber says it was not aware of the case until last week, claiming that was why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed.”

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE denied that correct procedure was not followed but welcomed the opportunity for Uber to respond to questions over how its driver ID systems operate in court, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human “anti-fraud” teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default judgement “robo-firing” challenge specifically targets Uber’s Hybrid Real-Time ID System — a system that incorporates facial recognition checks and which labor activists recently found misidentifying drivers in a number of instances.

It also pointed to a separate development this week in the U.K., where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of one of the drivers whose licences it had revoked after Uber routinely notified it of a dismissal (also triggered by Uber’s real time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the U.K. capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms.”

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent U.K. Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will seek to apply the same defence against the earlier “robo-firing” charge — when it argued its anti-fraud systems do not equate to automated decision making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 

#app-drivers-couriers-union, #artificial-intelligence, #automated-decisions, #europe, #european-union, #facial-recognition, #gdpr, #general-data-protection-regulation, #gig-worker, #james-farrar, #labor, #lawsuit, #london, #netherlands, #transport-for-london, #uber, #united-kingdom


EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). It’s not abundantly clear from this draft, though, exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
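To make the recital’s two-step logic concrete, here is a minimal, purely illustrative sketch (in Python) of how a provider might triage an intended purpose: first asking whether the use could cause any of the listed harms, then weighing the severity and probability of that harm. The harm strings, severity scale and thresholds below are invented for the example — the draft defines no such numbers.

```python
# Illustrative only: a toy version of the draft's two-step high-risk triage.
# All enums, thresholds and harm strings below are assumptions for the example.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    SEVERE = 3


@dataclass
class IntendedPurpose:
    description: str
    potential_harms: List[str] = field(default_factory=list)  # harms of the kind the draft lists
    severity: Severity = Severity.LOW                          # severity of the possible harm
    probability: float = 0.0                                   # estimated probability of occurrence (0..1)


def is_high_risk(purpose: IntendedPurpose,
                 severity_floor: Severity = Severity.MODERATE,
                 probability_floor: float = 0.1) -> bool:
    # Step 1: could the intended use cause any listed harm at all?
    if not purpose.potential_harms:
        return False
    # Step 2: if so, weigh severity of the possible harm and probability of occurrence.
    return (purpose.severity.value >= severity_floor.value
            and purpose.probability >= probability_floor)


# Example: a recruitment-screening tool, one of the draft's own high risk examples.
recruitment = IntendedPurpose(
    description="CV-screening system used to rank job applicants",
    potential_harms=["adverse impact on professional opportunities of persons"],
    severity=Severity.SEVERE,
    probability=0.3,
)
print(is_high_risk(recruitment))  # True -> mandatory pre-market conformity assessment
```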

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.

Other requirements cover security and consistent accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before last year’s White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessment is also envisaged as an ongoing obligation for high risk AIs, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market and conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate.

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 

#ai, #artificial-intelligence, #behavioral-advertising, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #facial-recognition, #general-data-protection-regulation, #policy, #regulation, #tc


Facebook gets a C – Startup rates the ‘ethics’ of social media platforms, targets asset managers

By now you’ve probably heard of ESG (Environmental, Social, Governance) ratings for companies, or ratings for their carbon footprint. Well, now a UK company has come up with a way of rating the ‘ethics’ of social media companies.
  
EthicsGrade is an ESG ratings agency, focusing on AI governance. Headed up by Charles Radclyffe, the former head of AI at Fidelity, it uses AI-driven models to create a more complete picture of the ESG of organizations, harnessing Natural Language Processing to automate the analysis of huge data sets. This includes tracking controversial topics and public statements.

Frustrated with the green-washing of some ‘environmental’ stocks, Radclyffe realized that the AI governance of social media companies was not being properly considered, despite presenting an enormous risk to investors in the wake of such scandals as the manipulation of Facebook by companies such as Cambridge Analytica during the US Election and the UK’s Brexit referendum.

EthicsGrade Industry Summary Scorecard – Social Media

The idea is that these ratings are used by companies to better see where they should improve. But the twist is that asset managers can also see where the risks of AI might lie.

Speaking to TechCrunch he said: “While at Fidelity I got a reputation within the firm for being the go-to person, for my colleagues in the investment team, who wanted to understand the risks within the technology firms that we were investing in. After being asked a number of times about some dodgy facial recognition company or a social media platform, I realized there was actually a massive absence of data around this stuff as opposed to anecdotal evidence.”

He says that when he left Fidelity he decided EthicsGrade would set out to cover not just ESG but also AI ethics for platforms that are driven by algorithms.

He told me: “We’ve built a model to analyze technology governance. We’ve covered 20 industries. So most of what we’ve published so far has been non-tech companies because these are risks that are inherent in many other industries, other than simply social media or big tech. But over the next couple of weeks, we’re going live with our data on things which are directly related to tech, starting with social media.”

Essentially, what they are doing closely parallels what is being done in the ESG space.

“The question we want to be able to answer is how does Tik Tok compare against Twitter or Wechat as against WhatsApp. And what we’ve essentially found is that things like GDPR have done a lot of good in terms of raising the bar on questions like data privacy and data governance. But in a lot of the other areas that we cover, such as ethical risk or a firm’s approach to public policy, are indeed technical questions about risk management,” says Radclyffe.

But, of course, they are effectively rating algorithms. Are the ratings they are giving the social platforms themselves derived from algorithms? EthicsGrade says they are training their own AI through NLP as they go so that they can automate what is currently a very human-analyst-centric process, just as Sustainalytics et al did years ago in the environmental arena.

So how are they coming up with these ratings? EthicsGrade says they are evaluating “the extent to which organizations implement transparent and democratic values, ensure informed consent and risk management protocols, and establish a positive environment for error and improvement.” And this is achieved, they say, all through publicly available data — policy, website, lobbying etc. In simple terms, they rate the governance of the AI — not necessarily the algorithms themselves, but what checks and balances are in place to ensure that the outcomes and inputs are ethical and managed.

“Our goal really is to target asset owners and asset managers,” says Radclyffe. “So if you look at any of these firms like, let’s say Twitter, 29% of Twitter is owned by five organizations: it’s Vanguard, Morgan Stanley, Blackrock, State Street and ClearBridge. If you look at the ownership structure of Facebook or Microsoft, it’s the same firms: Fidelity, Vanguard and BlackRock. And so really we only need to win a couple of hearts and minds, we just need to convince the asset owners and the asset managers that questions like the ones journalists have been asking for years are pertinent and relevant to their portfolios and that’s really how we’re planning to make our impact.”

Asked if they look at content of things like Tweets, he said no: “We don’t look at content. What we concern ourselves is how they govern their technology, and where we can find evidence of that. So what we do is we write to each firm with our rating, with our assessment of them. We make it very clear that it’s based on publicly available data. And then we invite them to complete a survey. Essentially, that survey helps us validate data of these firms. Microsoft is the only one that’s completed the survey.”

Ideally, firms will “verify the information, that they’ve got a particular process in place to make sure that things are well-managed and their algorithms don’t become discriminatory.”

In an age increasingly driven by algorithms, it will be interesting to see if this idea of rating them for risk takes off, especially amongst asset managers.

#articles, #artificial-intelligence, #asset-management, #blackrock, #environmentalism, #esg, #europe, #facebook, #facial-recognition, #fidelity, #finance, #governance, #microsoft, #morgan-stanley, #natural-language-processing, #social-media, #tc, #technology, #twitter, #united-kingdom, #united-states


Uber under pressure over facial recognition checks for drivers

Uber’s use of facial recognition technology for a driver identity system is being challenged in the UK where the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE) have called for Microsoft to suspend the ride-hailing giant’s use of B2B facial recognition after finding multiple cases where drivers were mis-identified and went on to have their licence to operate revoked by Transport for London (TfL).

The union said it has identified seven cases of “failed facial recognition and other identity checks” leading to drivers losing their jobs and license revocation action by TfL.

When Uber launched the “Real Time ID Check” system in the UK, in April 2020, it said it would “verify that driver accounts aren’t being used by anyone other than the licensed individuals who have undergone an Enhanced DBS check”. It said then that drivers could “choose whether their selfie is verified by photo-comparison software or by our human reviewers”.

In one misidentification case the ADCU said the driver was dismissed from employment by Uber and his license was revoked by TfL. The union adds that it was able to assist the member to establish his identity correctly forcing Uber and TfL to reverse their decisions. But it highlights concerns over the accuracy of the Microsoft facial recognition technology — pointing out that the company suspended the sale of the system to US police forces in the wake of the Black Lives Matter protests of last summer.

Research has shown that facial recognition systems can have an especially high error rate when used to identify people of color — and the ADCU cites a 2018 MIT study which found Microsoft’s system can have an error rate as high as 20% (accuracy was lowest for dark skinned women).

The union said it’s written to the Mayor of London to demand that all TfL private hire driver license revocations based on Uber reports using evidence from its Hybrid Real Time Identification systems are immediately reviewed.

Microsoft has been contacted for comment on the call for it to suspend Uber’s licence for its facial recognition tech.

The ADCU said Uber rushed to implement a workforce electronic surveillance and identification system as part of a package of measures implemented to regain its license to operate in the UK capital.

Back in 2017, TfL made the shock decision not to grant Uber a licence renewal — ratcheting up regulatory pressure on its processes and maintaining this hold in 2019 when it again deemed Uber ‘not fit and proper’ to hold a private hire vehicle licence.

Safety and security failures were a key reason cited by TfL for withholding Uber’s licence renewal.

Uber has challenged TfL’s decision in court and it won another appeal against the licence suspension last year — but the renewal granted was for only 18 months (not the full five years). It also came with a laundry list of conditions — so Uber remains under acute pressure to meet TfL’s quality bar.

Now, though, labor activists are piling pressure on Uber from the other direction too — pointing out that no regulatory standard has been set around the workplace surveillance technology that the ADCU says TfL encouraged Uber to implement. No equalities impact assessment has even been carried out by TfL, it adds.

WIE confirmed to TechCrunch that it’s filing a discrimination claim in the case of one driver, called Imran Raja, who was dismissed after Uber’s Real ID check — and had his license revoked by TfL.

His licence was subsequently restored — but only after the union challenged the action.

A number of other Uber drivers who were also misidentified by Uber’s facial recognition checks will be appealing TfL’s revocation of their licences via the UK courts, per WIE.

A spokeswoman for TfL told us it is not a condition of Uber’s licence renewal that it must implement facial recognition technology — only that Uber must have adequate safety systems in place.

The relevant condition of its provisional licence on ‘driver identity’ states:

ULL shall maintain appropriate systems, processes and procedures to confirm that a driver using the app is an individual licensed by TfL and permitted by ULL to use the app.

We’ve also asked TfL and the UK’s Information Commissioner’s Office for a copy of the data protection impact assessment Uber says was carried out before the Real-Time ID Check was launched — and will update this report if we get it.

Uber, meanwhile, disputes the union’s assertion that its use of facial recognition technology for driver identity checks risks automating discrimination because it says it has a system of manual (human) review in place that’s intended to prevent failures.

It accepts, though, that the system clearly failed in the case of Raja — who only got his Uber account back (and an apology) after the union’s intervention.

Uber said its Real Time ID system involves an automated ‘picture matching’ check on a selfie that the driver must provide at the point of log in, with the system comparing that selfie with a (single) photo of them held on file. 

If there’s no machine match, the system sends the query to a three-person human review panel to conduct a manual check. Uber said checks will be sent to a second human panel if the first can’t agree. 
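That description amounts to a short escalation protocol, sketched below for illustration in Python. The function and reviewer interfaces are hypothetical stand-ins rather than Uber’s or Microsoft’s actual APIs, and the second panel’s majority rule is an assumption, since Uber has not said how that panel reaches its decision.

```python
# Minimal sketch of the escalation flow Uber describes for its Real-Time ID Check:
# automated picture-matching first, then up to two three-person human panels.
# All names and interfaces here are hypothetical.
from typing import Callable, Sequence

Matcher = Callable[[bytes, bytes], bool]   # compares a selfie to the photo on file


def verify_driver_identity(selfie: bytes,
                           photo_on_file: bytes,
                           machine_match: Matcher,
                           first_panel: Sequence[Matcher],
                           second_panel: Sequence[Matcher]) -> bool:
    # Step 1: automated picture-matching check against the single photo held on file.
    if machine_match(selfie, photo_on_file):
        return True

    # Step 2: no machine match -> a three-person human panel reviews manually.
    votes = [reviewer(selfie, photo_on_file) for reviewer in first_panel]
    if all(votes):
        return True
    if not any(votes):
        return False

    # Step 3: the first panel can't agree -> escalate to a second panel,
    # which takes a majority decision in this sketch.
    second_votes = [reviewer(selfie, photo_on_file) for reviewer in second_panel]
    return sum(second_votes) > len(second_votes) / 2
```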

In a statement the tech giant told us:

“Our Real-Time ID Check is designed to protect the safety and security of everyone who uses the app by ensuring the correct driver or courier is using their account. The two situations raised do not reflect flawed technology — in fact one of the situations was a confirmed violation of our anti-fraud policies and the other was a human error.

“While no tech or process is perfect and there is always room for improvement, we believe the technology, combined with the thorough process in place to ensure a minimum of two manual human reviews prior to any decision to remove a driver, is fair and important for the safety of our platform.”

In two of the cases referred to by the ADCU, Uber said that in one instance a driver had shown a photo during the Real-Time ID Check instead of taking a selfie as required to carry out the live ID check — hence it argues it was not wrong for the ID check to have failed as the driver was not following the correct protocol.

In the other instance Uber blamed human error on the part of its manual review team(s) who (twice) made an erroneous decision. It said the driver’s appearance had changed and its staff were unable to recognize the face of the (now bearded) man who sent the selfie as the same person in the clean-shaven photo Uber held on file.

Uber was unable to provide details of what happened in the other five identity check failures referred to by the union.

It also declined to specify the ethnicities of the seven drivers the union says were misidentified by its checks.

Asked what measures it’s taking to prevent human errors leading to more misidentifications in future, Uber declined to provide a response.

Uber said it has a duty to notify TfL when a driver fails an ID check — a step which can lead to the regulator suspending the license, as happened in Raja’s case. So any biases in its identity check process clearly risk having disproportionate impacts on affected individuals’ ability to work.

WIE told us it knows of three TfL licence revocations that relate solely to facial recognition checks.

“We know of more [UberEats] couriers who have been deactivated but no further action since they are not licensed by TfL,” it noted.

TechCrunch also asked Uber how many driver deactivations have been carried out and reported to TfL in which it cited facial recognition in its testimony to the regulator — but again the tech giant declined to answer our questions.

WIE told us it has evidence that facial recognition checks are incorporated into geo-location-based deactivations Uber carries out.

It said that in one case a driver who had their account revoked was given an explanation by Uber relating solely to location but TfL accidentally sent WIE Uber’s witness statement — which it said “included facial recognition evidence”.

That suggests a wider role for facial recognition technology in Uber’s identity checks vs the one the ride-hailing giant gave us when explaining how its Real Time ID system works. (Again, Uber declined to answer follow up questions about this or provide any other information beyond its on-the-record statement and related background points.)

But even just focusing on Uber’s Real Time ID system, there’s the question of how much say Uber’s human review staff actually have in the face of machine suggestions combined with the weight of wider business imperatives (like an acute need to demonstrate regulatory compliance on the issue of safety).

James Farrar, the founder of WIE, queries the quality of the human checks Uber has put in place as a backstop for facial recognition technology which has a known discrimination problem.

“Is Uber just confecting legal plausible deniability of automated decision making or is there meaningful human intervention,” he told TechCrunch. “In all of these cases, the drivers were suspended and told the specialist team would be in touch with them. A week or so typically would go by and they would be permanently deactivated without ever speaking to anyone.”

“There is research out there to show when facial recognition systems flag a mismatch humans have bias to confirm the machine. It takes a brave human being to override the machine. To do so would mean they would need to understand the machine, how it works, its limitations and have the confidence and management support to over rule the machine,” Farrar added. “Uber employees have the risk of Uber’s license to operate in London to consider on one hand and what… on the other? Drivers have no rights and there are in excess so expendable.”

He also pointed out that Uber has previously said in court that it errs on the side of customer complaints rather than give the driver benefit of the doubt. “With that in mind can we really trust Uber to make a balanced decision with facial recognition?” he asked.

Farrar further questioned why Uber and TfL don’t show drivers the evidence that’s being relied upon to deactivate their accounts — to give them a chance to challenge it via an appeal on the actual substance of the decision.

“IMHO this all comes down to tech governance,” he added. “I don’t doubt that Microsoft facial recognition is a powerful and mostly accurate tool. But the governance of this tech must be intelligent and responsible. Microsoft are smart enough themselves to acknowledge this as a limitation.

“The prospect of Uber pressured into surveillance tech as a price of keeping their licence… and a 94% BAME workforce with no worker rights protection from unfair dismissal is a recipe for disaster!”

The latest pressure on Uber’s business processes follows hard on the heels of a major win for Farrar and other former Uber drivers and labor rights activists after years of litigation over the company’s bogus claim that drivers are ‘self employed’, rather than workers under UK law.

On Tuesday Uber responded to last month’s Supreme Court quashing of its appeal saying it would now treat drivers as workers in the market — expanding the benefits it provides.

However the litigants immediately pointed out that Uber’s ‘deal’ ignored the Supreme Court’s assertion that working time should be calculated when a driver logs onto the Uber app. Instead Uber said it would calculate working time entitlements when a driver accepts a job — meaning it’s still trying to avoid paying drivers for time spent waiting for a fare.

The ADCU therefore estimates that Uber’s ‘offer’ underpays drivers by between 40%-50% of what they are legally entitled to — and has said it will continue its legal fight to get a fair deal for Uber drivers.

At an EU level, where regional lawmakers are looking at how to improve conditions for gig workers, the tech giant is now pushing for an employment law carve out for platform work — and has been accused of trying to lower legal standards for workers.

In additional Uber-related news this month, a court in the Netherlands ordered the company to hand over more of the data it holds on drivers, following another ADCU+WIE challenge. The court rejected the majority of the drivers’ requests for more data, but notably it did not object to drivers seeking to use data rights established under EU law to obtain information collectively to further their ability to collectively bargain against a platform — paving the way for more (and more carefully worded) challenges as Farrar spins up his data trust for workers.

The applicants also sought to probe Uber’s use of algorithms for fraud-based driver terminations under an article of EU data protection law that provides for a right not to be subject to solely automated decisions in instances where there is a legal or significant effect. In that case the court accepted Uber’s explanation at face value that fraud-related terminations had been investigated by a human team — and that the decisions to terminate involved meaningful human decisions.

But the issue of meaningful human intervention/oversight of platforms’ algorithmic suggestions/decisions is shaping up to be a key battleground in the fight to regulate the human impacts of and societal imbalances flowing from powerful platforms which have both a god-like view of users’ data and an allergy to complete transparency.

The latest challenge to Uber’s use of facial recognition-linked terminations shows that interrogation of the limits and legality of its automated decisions is far from over — really, this work is just getting started.

Uber’s use of geolocation for driver suspensions is also facing legal challenge.

Pan-EU legislation now being negotiated by the bloc’s institutions also aims to increase platform transparency requirements — with the prospect of added layers of regulatory oversight and even algorithmic audits coming down the pipe for platforms in the near future.

Last week the same Amsterdam court that ruled on the Uber cases also ordered India-based ride-hailing company Ola to disclose data about its facial-recognition-based ‘Guardian’ system — aka its equivalent to Uber’s Real Time ID system. The court said Ola must provide applicants with a wider range of data than it currently does — including disclosing a ‘fraud probability profile’ it maintains on drivers and data within a ‘Guardian’ surveillance system it operates.

Farrar says he’s thus confident that workers will get transparency — “one way or another”. And after years fighting Uber through UK courts over its treatment of workers, his tenacity in pursuit of rebalancing platform power cannot be in doubt.

 

#app-drivers-couriers-union, #artificial-intelligence, #europe, #facial-recognition, #james-farrer, #lawsuit, #microsoft, #policy, #tfl, #uber, #worker-info-exchange


Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban the use of facial recognition software for its police department, growing the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, 13 members of the city council voted in favor of the ban with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment into more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned not only that AI-powered face recognition systems would disproportionately target communities of color, but that the tech has been demonstrated to have technical shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition but also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems though didn’t include a similar provision for private companies.

#clearview-ai, #facial-recognition, #government, #minnesota, #surveillance, #tc

0

Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.

As part of the enforcement the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.

The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.

Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.

Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.

The IMY’s full decision can be found here (in Swedish).

“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.

The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place are not a reason to reduce a penalty fee, so it’s not entirely clear why the police avoided a bigger fine.)

The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.

The IMY said it investigated the police’s use of the controversial technology following reports in local media.

Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.

European Union data protection law puts a high bar on the processing of special category data, such as biometrics.

Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.

Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.

The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.

However, the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).

It did not order deletion of the photos themselves. It also did not issue a pan-EU order banning the collection of any European resident’s photos, as it could have done and as the European privacy campaign group noyb had been pushing for.

noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and ask it to delete any data it has on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.

European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence, with draft legislation expected to be put forward this year, although the Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).

Earlier this month the controversial facial recognition company was ruled illegal by Canadian privacy authorities — who warned they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.

Clearview said it had stopped providing its tech to Canadian customers last summer.

It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.

Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.

 

#artificial-intelligence, #clearview-ai, #eu-data-protection-law, #europe, #facial-recognition, #gdpr, #privacy, #sweden, #tc

0

Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping social media sites without their permission, prompting Facebook, LinkedIn and Twitter to send cease and desist letters demanding it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

That’s an argument the company continues to make in court, where it faces a class action suit citing Illinois’ biometric protection law, the same law that last year dinged Facebook to the tune of $550 million.

The Canadian privacy watchdog rejected Clearview’s arguments, and said it would “pursue other actions” if the company does not follow its recommendations, which included stopping the collection of Canadians’ data and deleting all previously collected images. Clearview said in July that it had stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service had been using the startup’s technology.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.

#articles, #canada, #clearview-ai, #digital-rights, #facebook, #facial-recognition, #facial-recognition-software, #human-rights, #illinois, #law-enforcement, #mass-surveillance, #new-york, #privacy, #security, #social-issues, #spokesperson, #terms-of-service

0

Cybersecurity startup SpiderSilk raises $2.25M to help prevent data breaches

Dubai-based cybersecurity startup SpiderSilk has raised $2.25 million in a pre-Series A round, led by venture firms Global Ventures and STV.

In the past two years, SpiderSilk has discovered some of the biggest data breaches: Blind, the allegedly anonymous social network, exposed private complaints by Silicon Valley employees; a lab leaked highly sensitive Samsung source code; an inadvertently public code repository revealed apps, code and apartment building camera footage belonging to controversial facial recognition startup Clearview AI; and a massive spill of unencrypted customer card numbers at now-defunct MoviePass may have been the final nail in the already-beleaguered subscription service’s casket.

Many of those discoveries came from the company’s proprietary internet scanner, SpiderSilk co-founder and chief security officer Mossab Hussein told TechCrunch.

Any company would want its data locked down, but mistakes happen and misconfigurations can leave sensitive internal corporate data accessible from the internet. SpiderSilk helps its customers understand their attack surface by looking for things that are exposed but shouldn’t be.

The cybersecurity startup uses its scanner to map out a company’s assets and attack surfaces to detect vulnerabilities and data exposures, and it also simulates cyberattacks to help customers understand where vulnerabilities are in their defenses.

“The attack surface management and threat detection platform we built scans the open internet on a continuous basis in order to attribute all publicly accessible assets back to organizations that could be affected by them, either directly or indirectly,” SpiderSilk’s co-founder and chief executive Rami El Malak told TechCrunch. “As a result, the platform regularly uncovers exploits and highlights how no organization is immune from infrastructure visibility blind-spots.”
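The article doesn’t detail how SpiderSilk’s scanner works internally, but the basic idea of checking an asset inventory for services that shouldn’t be internet-facing can be illustrated at a toy level. The sketch below is a generic Python illustration, not SpiderSilk’s product; the host list and port-to-service map are placeholder assumptions, and it should only ever be pointed at systems you own or are authorized to test.

```python
# A toy illustration of looking for "things that are exposed but shouldn't be":
# check whether hosts in an asset inventory accept connections on ports commonly
# associated with services that are rarely meant to be internet-facing.
# Generic sketch only -- not SpiderSilk's scanner. Host list and port map are
# placeholders; only scan systems you own or are authorized to test.
import socket

PORTS_OF_INTEREST = {
    9200: "Elasticsearch",
    27017: "MongoDB",
    6379: "Redis",
    3389: "RDP",
}

def check_exposure(host, timeout=2.0):
    """Return (port, service) pairs the host accepts TCP connections on."""
    exposed = []
    for port, service in PORTS_OF_INTEREST.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                exposed.append((port, service))
        except OSError:
            continue  # closed, filtered, or unreachable
    return exposed

if __name__ == "__main__":
    inventory = ["assets.example.com"]  # placeholder asset inventory
    for host in inventory:
        findings = check_exposure(host)
        if findings:
            print(f"{host} exposes: {findings}")
```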

El Malak said the funding will help to build out its security, engineering and data science teams, as well as its marketing and sales. He said the company is expanding its presence to North America with sales and engineering teams.

It’s the company’s second round of funding, after a seed round of $500,000 in November 2019, also led by Global Ventures and several angel investors.

“The SpiderSilk team are outstanding partners, solving a critical problem in the ever-complex world of cybersecurity, and protecting companies online from the increasing threats of malicious activity,” said Basil Moftah, general partner at Global Ventures.

#clearview-ai, #computer-security, #computing, #cybersecurity-startup, #data-security, #dubai, #facial-recognition, #general-partner, #north-america, #open-internet, #samsung, #security, #social-network, #spidersilk, #vulnerability

0

This site posted every face from Parler’s Capitol Hill insurrection videos

Image Credits: Getty Images | Wired

When hackers exploited a bug in Parler to download all of the right-wing social media platform’s contents last week, they were surprised to find that many of the pictures and videos contained geolocation metadata revealing exactly how many of the site’s users had taken part in the invasion of the US Capitol building just days before. But the videos uploaded to Parler also contain an equally sensitive bounty of data sitting in plain sight: thousands of images of unmasked faces, many of whom participated in the Capitol riot. Now one website has done the work of cataloging and publishing every one of those faces in a single, easy-to-browse lineup.

Late last week, a website called Faces of the Riot appeared online, showing nothing but a vast grid of more than 6,000 images of faces, each one tagged only with a string of characters associated with the Parler video in which it appeared. The site’s creator tells WIRED that he used simple open source machine learning and facial recognition software to detect, extract, and deduplicate every face from the 827 videos that were posted to Parler from inside and outside the Capitol building on January 6, the day when radicalized Trump supporters stormed the building in a riot that resulted in five people’s deaths. The creator of Faces of the Riot says his goal is to allow anyone to easily sort through the faces pulled from those videos to identify someone they may know or recognize who took part in the mob, or even to reference the collected faces against FBI wanted posters and send a tip to law enforcement if they spot someone.
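The site’s code isn’t public, but the detect, extract and deduplicate pipeline its creator describes can be sketched with widely used open-source tools. The snippet below is a minimal illustration assuming the OpenCV and face_recognition Python packages, a local videos/ folder of clips and a faces/ output folder; the one-frame-per-second sampling and the 0.6 match tolerance are illustrative choices, not the site’s actual settings.

```python
# Minimal sketch of a detect / extract / deduplicate pipeline over downloaded
# videos, using OpenCV and the open-source face_recognition library.
# Paths, the sampling interval and the 0.6 match tolerance are illustrative.
import glob
import os

import cv2
import face_recognition

os.makedirs("faces", exist_ok=True)
known_encodings = []  # 128-d embeddings of faces already catalogued
face_id = 0

for path in glob.glob("videos/*.mp4"):
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % 30 == 0:  # sample roughly one frame per second of 30 fps video
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            locations = face_recognition.face_locations(rgb)
            encodings = face_recognition.face_encodings(rgb, locations)
            for (top, right, bottom, left), enc in zip(locations, encodings):
                # Deduplicate: skip faces that match one already saved.
                if any(face_recognition.compare_faces(known_encodings, enc, tolerance=0.6)):
                    continue
                known_encodings.append(enc)
                cv2.imwrite(f"faces/{face_id:05d}.jpg", frame[top:bottom, left:right])
                face_id += 1
        frame_idx += 1
    cap.release()
```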

#biz-it, #capitol-hill, #dc, #facial-recognition, #gaming-culture, #insurrection, #policy, #washington

0

Facial recognition reveals political party in troubling new research

Researchers have created a machine learning system that they claim can determine a person’s political party, with reasonable accuracy, based only on their face. The study, from a group that also showed that sexual preference can seemingly be inferred this way, candidly addresses and carefully avoids the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.

The study, which appeared this week in the Nature journal Scientific Reports, was conducted by Stanford University’s Michal Kosinski. Kosinski made headlines in 2017 with work that found that a person’s sexual preference could be predicted from facial data.

The study drew criticism not so much for its methods as for the very idea that something that’s notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and afterwards, was done specifically to challenge those assumptions and was as surprising and disturbing to him as it was to others. The idea was not to build a kind of AI gaydar — quite the opposite, in fact. As the team wrote at the time, it was necessary to publish in order to warn others that such a thing may be built by people whose interests went beyond the academic:

We were really disturbed by these results and spent much time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only for one’s well-being, but also for one’s safety.

We felt that there is an urgent need to make policymakers and LGBTQ communities aware of the risks that they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.

Similar warnings may be sounded here, for while political affiliation at least in the U.S. (and at least at present) is not as sensitive or personal an element as sexual preference, it is still sensitive and personal. A week hardly passes without reading of some political or religious “dissident” or another being arrested or killed. If oppressive regimes could obtain what passes for probable cause by saying “the algorithm flagged you as a possible extremist,” instead of for example intercepting messages, it makes this sort of practice that much easier and more scalable.

The algorithm itself is not some hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the U.S., Canada, and the U.K., as well as American Facebook users. The people whose faces were used identified as politically conservative or liberal as part of the site’s questionnaire.

The algorithm was based on open-source facial recognition software, and after basic processing to crop to just the face (that way no background items creep in as factors), the faces are reduced to 2,048 scores representing various features — as with other face recognition algorithms, these aren’t necessarily intuitive things like “eyebrow color” and “nose type” but more computer-native concepts.

Chart showing how faces are cropped and reduced to neural network representations.

Image Credits: Michal Kosinski / Nature Scientific Reports

The system was given political affiliation data sourced from the people themselves, and with this it diligently began to study the differences between the facial stats of people identifying as conservatives and those identifying as liberal. Because it turns out, there are differences.

Of course it’s not as simple as “conservatives have bushier eyebrows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy and simple. After all, if political party identification correlates with both age and skin color, that makes for a simple prediction algorithm right there. But although the software mechanisms used by Kosinski are quite standard, he was careful to cover his bases in order that this study, like the last one, can’t be dismissed as pseudoscience.

The most obvious way of addressing this is by having the system make guesses as to the political party of people of the same age, gender, and ethnicity. The test involved being presented with two faces, one of each party, and guessing which was which. Obviously chance accuracy is 50 percent. Humans aren’t very good at this task, performing only slightly above chance, about 55 percent accurate.

The algorithm managed to reach as high as 71 percent accuracy when predicting the political party of two demographically similar individuals, and 73 percent when presented with two individuals of any age, ethnicity or gender (but still guaranteed to be one conservative and one liberal).

Image Credits: Michal Kosinski / Nature Scientific Reports
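To make the pairwise protocol concrete, here is a minimal sketch of how such an evaluation can be run once face descriptors are in hand. The 2,048-dimensional descriptors and party labels below are random placeholders standing in for the study’s real embeddings and self-reported affiliations, and the logistic-regression scorer is a plausible stand-in rather than the paper’s exact model, so on this synthetic data the pairwise accuracy hovers around the 50 percent chance level.

```python
# Sketch of the pairwise evaluation: given one liberal and one conservative face,
# pick which is which by comparing classifier scores. Descriptors and labels here
# are random placeholders, so accuracy stays near the 50% chance level.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2048))    # placeholder 2,048-d face descriptors
y = rng.integers(0, 2, size=2000)    # 0 = liberal, 1 = conservative (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # modelled probability of "conservative"

lib_scores = scores[y_test == 0]
con_scores = scores[y_test == 1]
pairs = 10_000
correct = 0
for _ in range(pairs):
    # Present one face from each group; the guess is correct when the
    # conservative face gets the higher "conservative" score.
    correct += rng.choice(con_scores) > rng.choice(lib_scores)
print(f"pairwise accuracy: {correct / pairs:.2%}")
```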

Getting three out of four may not seem like a triumph for modern AI, but considering people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski has been careful to cover other bases as well; this doesn’t appear to be a statistical anomaly or exaggeration of an isolated result.

The idea that your political party may be written on your face is an unnerving one, for while one’s political leanings are far from the most private of info, it’s also something that is very reasonably thought of as being intangible. People may choose to express their political beliefs with a hat, pin, or t-shirt, but one generally considers one’s face to be nonpartisan.

If you’re wondering which facial features in particular are revealing, unfortunately the system is unable to report that. In a sort of para-study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether those were good predictors of politics, but none led to more than a small increase in accuracy over chance or human expertise.

“Head orientation and emotional expression stood out: Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in author’s notes for the paper. But what they added left more than 10 percentage points of accuracy not accounted for: “That indicates that the facial recognition algorithm found many other features revealing political orientation.”

The knee-jerk defense of “this can’t be true – phrenology was snake oil” doesn’t hold much water here. It’s scary to think it’s true, but it doesn’t help us to deny what could be a very important truth, since it could be used against people very easily.

As with the sexual orientation research, the point here is not to create a perfect detector for this information, but to show that it can be done in order that people begin to consider the dangers that creates. If for example an oppressive theocratic regime wanted to crack down on either non-straight people or those with a certain political leaning, this sort of technology gives them a plausible technological method to do so “objectively.” And what’s more, it can be done with very little work or contact with the target, unlike digging through their social media history or analyzing their purchases (also very revealing).

We have already heard of China deploying facial recognition software to find members of the embattled Uyghur religious minority. And in our own country this sort of AI is trusted by authorities as well — it’s not hard to imagine police using the “latest technology” to, for instance, classify faces at a protest, saying “these 10 were determined by the system as being the most liberal,” or what have you.

The idea that a couple of researchers using open-source software and a medium-sized database of faces (for a government, this is trivial to assemble in the unlikely event it does not have one already) could do so anywhere in the world, for any purpose, is chilling.

“Don’t shoot the messenger,” said Kosinski. “In my work, I am warning against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits – scholars, policymakers, and citizens should take notice.”

#artificial-intelligence, #facial-recognition, #machine-learning, #michal-kosinski, #stanford-university, #tc

0

FTC settlement with Ever orders data and AIs deleted after facial recognition pivot

The maker of a defunct cloud photo storage app that pivoted to selling facial recognition services has been ordered to delete user data and any algorithms trained on it, under the terms of an FTC settlement.

The regulator investigated complaints the Ever app — which gained earlier notoriety for using dark patterns to spam users’ contacts — had applied facial recognition to users’ photographs without properly informing them what it was doing with their selfies.

Under the proposed settlement, Ever must delete photos and videos of users who deactivated their accounts and also delete all face embeddings (i.e. data related to facial features which can be used for facial recognition purposes) that it derived from photos of users who did not give express consent to such a use.

Moreover, it must delete any facial recognition models or algorithms developed with users’ photos or videos.

This full suite of deletion requirements — not just data but anything derived from it and trained off of it — is causing great excitement in legal and tech policy circles, with experts suggesting it could have implications for other facial recognition software trained on data that wasn’t lawfully processed.

Or, to put it another way, tech giants that surreptitiously harvest data to train AIs could find their algorithms in hot water with the US regulator.

The quick background here is that the Ever app shut down last August, claiming it had been squeezed out of the market by increased competition from tech giants like Apple and Google.

However the move followed an investigation by NBC News — which in 2019 reported that app maker Everalbum had pivoted to selling facial recognition services to private companies, law enforcement and the military (using the brand name Paravision) — apparently repurposing people’s family snaps to train face reading AIs.

NBC reported Ever had only added a “brief reference” to the new use in its privacy policy after journalists contacted it to ask questions about the pivot in April of that year.

In a press release yesterday, reported earlier by The Verge, the FTC announced the proposed settlement with Ever received unanimous backing from commissioners.

One commissioner, Rohit Chopra, issued a standalone statement in which he warns that current gen facial recognition technology is “fundamentally flawed and reinforces harmful biases”, saying he supports “efforts to enact moratoria or otherwise severely restrict its use”.

“Until such time, it is critical that the FTC meaningfully enforce existing law to deprive wrongdoers of technologies they build through unlawful collection of Americans’ facial images and likenesses,” he adds.

Chopra’s statement highlights the fact that commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that “derive much of their value from ill-gotten data”, as he puts it — flagging an earlier settlement with Google and YouTube under which the tech giant was allowed to retain algorithms and other technologies “enhanced by illegally obtained data on children”.

And he dubs the Ever decision “an important course correction”.

Ever has not been fined under the settlement — something Chopra describes as “unfortunate” (saying it’s related to commissioners “not having restated this precedent into a rule under Section 18 of the FTC Act”).

He also highlights the fact that Ever avoided processing the facial data of a subset of users in States which have laws against facial recognition and the processing of biometric data — citing that as an example of “why it’s important to maintain States’ authority to protect personal data”. (NB: Ever also avoided processing EU users’ biometric data; another region with data protection laws.)

“With the tsunami of data being collected on individuals, we need all hands on deck to keep these companies in check,” he goes on. “State and local governments have rightfully taken steps to enact bans, moratoria, and other restrictions on the use of these technologies. While special interests are actively lobbying for federal legislation to delete state data protection laws, it will be important for Congress to resist these efforts. Broad federal preemption would severely undercut this multifront approach and leave more consumers less protected.

“It will be critical for the Commission, the states, and regulators around the globe to pursue additional enforcement actions to hold accountable providers of facial recognition technology who make false accuracy claims and engage in unfair, discriminatory conduct.”

Paravision has been contacted for comment on the FTC settlement.

#artificial-intelligence, #biometrics, #data-protection, #ever, #facial-recognition, #ftc, #paravision, #privacy

0

Insurrectionists’ social media presence gives feds an easy way to ID them

The seditionists who broke into the US Capitol on Wednesday were not particularly subtle and did not put any particular effort into avoiding being identified. (Image Credits: Saul Loeb | AFP | Getty Images)

Law enforcement agencies trying to track down insurrectionists who participated in yesterday’s events at the US Capitol have a wide array of tools at their disposal thanks to the ubiquity of cameras and social media.

Both local police and the FBI are seeking information about individuals who were “actively instigating violence” in Washington, DC, on January 6. While media organizations took thousands of photos that police can use, law enforcement also has more advanced technologies at its disposal to identify participants, following what several other agencies have done in recent months.

Several police departments, such as Miami, Philadelphia, and New York City, turned to facial recognition platforms—including the highly controversial Clearview AI—during the widespread summer 2020 demonstrations against police brutality and in support of Black communities. In Philadelphia, for example, police used software to compare protest footage against Instagram photos to identify and arrest a protestor. In November, The Washington Post reported that investigators from 14 local and federal agencies in the DC area have used a powerful facial recognition system more than 12,000 times since 2019.

#dc, #facial-recognition, #fbi, #insurrection, #law-enforcement, #livestreams, #police, #policy, #sedition, #washington

0

2020 was a disaster, but the pandemic put security in the spotlight

Let’s preface this year’s predictions by acknowledging and admitting how hilariously wrong we were when this time last year we said that 2020 “showed promise.”

In fairness (almost) nobody saw a pandemic coming.

The pandemic remains a global disaster of epic proportions that has forced billions of people into lockdown and left economies in tatters, with companies (including startups) struggling to stay afloat. The mass shift to working from home brought security challenges with it, like how to protect your workforce when employees are working outside the security perimeter of their offices. But it has also forced us to find solutions to some of the most complex challenges, like pulling off a secure election and securing the supply chain for the vaccines that will bring our lives back to some semblance of normality.

With 2020 wrapping up, many of the security headaches exposed by the pandemic will linger into the new year. This is what to expect.

Working from home has given hackers new avenues for attacks

The sudden lockdowns in March drove millions to work from home. But hackers quickly found new and interesting ways to target big companies by targeting the employees themselves. VPNs were a big target because of outstanding vulnerabilities that many companies didn’t bother to fix. Bugs in enterprise software left corporate networks open to attack. The flood of personal devices logging onto the network — and the influx of malware with it — introduced fresh havoc.

Sophos says that this mass decentralizing of the workforce has turned us all into our own IT departments. We have to patch our own computers, install security updates, and there’s no IT just down the hallway to ask if that’s a phishing email.

Companies are having to adjust to the cybersecurity challenges, since working from home is probably here to stay. Managed service providers, or outsourced IT departments, have a “huge opportunity to benefit from the work-from-home shift,” said Grayson Milbourne, security intelligence director at cybersecurity firm Webroot.

Ransomware has become more targeted and more difficult to escape

File-encrypting malware, or ransomware, is getting craftier and sneakier. Where traditional ransomware would encrypt and hold a victim’s files hostage in exchange for a ransom payout, the newer and more advanced strains first steal a victim’s files, encrypt the network and then threaten to publish the stolen files if the ransom isn’t paid.

This data-stealing ransomware makes escaping an attack far more difficult because a victim can’t just restore their systems from a backup (if there is one). CrowdStrike’s chief technology officer Michael Sentonas calls this new wave of ransomware “double extortion” because victims are forced to respond to the data breach as well.

The healthcare sector is under the closest guard because of the pandemic. Despite promises from some (but not all) ransomware groups that hospitals would not be deliberately targeted during the pandemic, medical practices were far from immune. 2020 saw several high profile attacks. A ransomware attack at Universal Health Services, one of the largest healthcare providers in the U.S., caused widespread disruption to its systems. Just last month U.S. Fertility confirmed a ransomware attack on its network.

These high-profile incidents are becoming more common because hackers are targeting their victims very carefully. These hyperfocused attacks require a lot more skill and effort but improve the hackers’ odds of landing a larger ransom — in some cases earning the hackers millions of dollars from a single attack.

“This coming year, these sophisticated cyberattacks will put enormous stress on the availability of services — in everything from rerouted healthcare services impacting patient care, to availability of online and mobile banking and finance platforms,” said Sentonas.

#computer-security, #cyberattacks, #encryption, #enterprise-software, #facial-recognition, #government, #law-enforcement, #malware, #privacy, #ransomware, #security, #u-s-government

0

Massachusetts governor won’t sign police reform bill with facial recognition ban

Massachusetts Governor Charlie Baker has returned a police reform bill back to the state legislature, asking lawmakers to strike out several provisions — including one for a statewide ban on police and public authorities using facial recognition technology, the first of its kind in the United States.

The bill, which also banned police from using rubber bullets and tear gas, was passed on December 1 by both the state’s House and Senate after senior lawmakers overcame months of deadlock to reach a consensus. Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man who was killed by a white Minneapolis police officer, later charged with his murder.

Baker said in a letter to lawmakers that he objected to the ban, saying the use of facial recognition helped to convict several criminals, including a child sex offender and a double murderer.

In an interview with The Boston Globe, Baker said that he’s “not going to sign something that is going to ban facial recognition.”

Under the bill, police and public agencies across the state would be prohibited from using facial recognition, with a single exception to run facial recognition searches against the state’s driver license database with a warrant. The state would be required to publish annual transparency figures on the number of searches made by officers going forward.

The Massachusetts House voted 92-67 to pass the bill, and the Senate voted 28-12; neither was a veto-proof majority.

The Boston Globe said that Baker did not outright say he would veto the bill. After the legislature hands a revised (or the same) version of the bill back to the governor, it’s up to Baker to sign it, veto it, or — under Massachusetts law, he could allow it to become law without his signature by waiting 10 days.

“Unchecked police use of surveillance technology also harms everyone’s rights to anonymity, privacy, and free speech. We urge the legislature to reject Governor Baker’s amendment and to ensure passage of commonsense regulations of government use of face surveillance,” said Carol Rose, executive director of the ACLU of Massachusetts.

A spokesperson for Baker’s office did not immediately return a request for comment.

#driver, #facial-recognition, #george-floyd, #government, #governor, #learning, #massachusetts, #officer, #security, #senate, #spokesperson, #surveillance, #video-surveillance

0

Massachusetts lawmakers vote to pass a statewide police ban on facial recognition

Massachusetts lawmakers have voted to pass a new police reform bill that will ban police departments and public agencies from using facial recognition technology across the state.

The bill was passed by both the state’s House and Senate on Tuesday, a day after senior lawmakers announced an agreement that ended months of deadlock.

The police reform bill also bans the use of chokeholds and rubber bullets, limits the use of chemical agents like tear gas, and allows police officers to intervene to prevent the use of excessive and unreasonable force. But following objections from police groups, the bill does not remove qualified immunity, a controversial legal doctrine that shields serving police from legal action for misconduct.

Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man who was killed by a white Minneapolis police officer, since charged with his murder.

Critics have for years complained that facial recognition technology is flawed, biased, and disproportionately misidentifies people and communities of color. But the bill grants police an exception to run facial recognition searches against the state’s driver’s license database with a warrant. In granting that exception, the state will have to publish annual transparency figures on the number of searches made by officers.

The Massachusetts Senate voted 28-12 to pass, and the House voted 92-67. The bill will now be sent to Massachusetts governor Charlie Baker for his signature.

In the absence of privacy legislation from the federal government, laws curtailing the use of facial recognition are popping up on a state and city level. The patchwork nature of that legislation means that state and city laws have room to experiment, creating an array of blueprints for future laws that can be replicated elsewhere.

Portland, Oregon passed a broad ban on facial recognition tech this September. The ban, one of the most aggressive in the nation, blocks city bureaus from using the technology but will also prohibit private companies from deploying facial recognition systems in public spaces. Months of clashes between protesters and aggressive law enforcement in that city raised the stakes on Portland’s ban.

Earlier bans in Oakland, San Francisco, and Boston focused on forbidding their city governments from using the technology but, like Massachusetts, stopped short of limiting its use by private companies. San Francisco’s ban passed in May of last year, making the international tech hub the first major city to ban the use of facial recognition by city agencies and police departments.

At the same time that cities across the U.S. are acting to limit the creep of biometric surveillance, those same systems are spreading at the federal level. In August, Immigration and Customs Enforcement (ICE) signed a contract for access to a facial recognition database created by Clearview AI, a deeply controversial company that scrapes facial images from online sources, including social media sites.

While most activism against facial recognition only pertains to local issues, at least one state law has proven powerful enough to make waves on a national scale. In Illinois, the Biometric Information Privacy Act (BIPA) has ensnared major tech companies including Amazon, Microsoft and Alphabet for training facial recognition systems on Illinois residents without permission.

#biometrics, #bipa, #facial-recognition, #privacy, #security, #surveillance

0

Who’s building the grocery store of the future?

The future of grocery stores will be a win-win for both stores and customers.

On one hand, stores want to decrease their operational expenditures that come from hiring cashiers and conducting inventory management. On the other hand, consumers want to decrease the friction of buying groceries. This friction includes both finding high-quality groceries at consumers’ personal price points and waiting in long lines for checkout. The future of grocery stores promises to alleviate, and even eliminate, these points of friction.

Amazon’s foray into grocery store technology provides a succinct introduction to the state of the industry. Amazon’s first act was its Amazon Go store, which opened in Seattle in early 2018. When customers enter an Amazon Go store, they swipe the Amazon app at the entrance, enabling Amazon to link purchases to their accounts. As they shop, a collection of ceiling cameras and shelf sensors identifies the items they pick up and places them in a virtual shopping cart. When they’re done shopping, Amazon automatically charges them for the items they grabbed.

Earlier this year, Amazon opened a 10,400-square-foot Go store, about five times bigger than the largest prior location. At larger store sizes, however, tracking people and products gets more computationally complex and larger SKU counts become more difficult to manage. This is especially true if the computer vision AI-based system also must be retrofitted into buildings that come with nooks and crannies that can obstruct camera angles and affect lighting.

Perhaps Amazon’s confidence in its ability to scale its Go stores comes from vertical integration that enables it to optimize customer experiences through control over store format, product selection and placement.

While Amazon Go is vertically integrated, in Amazon’s second act, it revealed a separate, more horizontal strategy: Earlier this year, Amazon announced that it would license its cashierless Just Walk Out technology.

In Just Walk Out-enabled stores, shoppers enter the store using a credit card. They don’t need to download an app or create an Amazon account. Using cameras and sensors, the Just Walk Out technology detects which products shoppers take from or return to the shelves and keeps track of them. When done shopping, as in an Amazon Go store, customers can “just walk out” and their credit card will be charged for the items in their virtual cart.

Just Walk Out may enable Amazon to penetrate the market much more quickly, as Amazon promises that existing stores can be retrofitted in “as little as a few weeks.” Amazon can also get massive amounts of data to improve its computer vision systems and machine learning algorithms, accelerating the speed with which it can leverage those capabilities elsewhere.

In Amazon’s third and latest act, Amazon in July announced its Dash Cart, a departure from its two prior strategies. Rather than equipping stores with ceiling cameras and shelf sensors, Amazon is building smart carts that use a combination of computer vision and sensor fusion to identify items placed in the cart. Customers take barcoded items off shelves, place them in the cart, wait for a beep, and then one of two things happens: Either the shopper gets an alert telling him to try again, or the shopper receives a green signal to confirm the item was added to the cart correctly.

For items that don’t have a barcode, the shopper can add them to the cart by manually adding them on the cart screen and confirming the measured weight of the product. When a customer exits through the store’s Amazon Dash Cart lane, sensors automatically identify the cart, and payment is processed using the credit card on the customer’s Amazon account. The Dash Cart is specifically designed for small- to medium-sized grocery trips that fit two grocery bags and is currently only available in an Amazon Fresh store in California.
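The cart’s add-item flow described above boils down to a small piece of state-machine logic: confirm a recognized item with a beep, raise an alert on a mismatch, and total the virtual cart at the exit lane. The sketch below is a hypothetical illustration of that flow; the class, catalog format and weight tolerance are assumptions, not Amazon’s implementation.

```python
# Hypothetical sketch of a Dash Cart-style virtual cart: beep/alert on barcode
# scans, weight confirmation for manually entered items, automatic total at exit.
# Catalog format, tolerance and return values are illustrative assumptions.

class VirtualCart:
    def __init__(self, catalog):
        # catalog maps product code -> (name, price, expected_weight_grams)
        self.catalog = catalog
        self.items = []

    def add_barcoded_item(self, barcode):
        """Shopper places a barcoded item in the cart."""
        if barcode not in self.catalog:
            return "alert: item not recognized, please try again"
        self.items.append(barcode)
        return f"beep: {self.catalog[barcode][0]} added"

    def add_manual_item(self, code, measured_weight_grams, tolerance=0.05):
        """Shopper enters a non-barcoded item on the cart screen and confirms its weight."""
        name, _price, expected = self.catalog[code]
        if abs(measured_weight_grams - expected) / expected > tolerance:
            return f"alert: weight mismatch for {name}, please confirm"
        self.items.append(code)
        return f"beep: {name} added"

    def checkout_total(self):
        """Total charged to the shopper's card at the Dash Cart lane."""
        return sum(self.catalog[code][1] for code in self.items)

# Example trip: a barcoded box of cereal plus bananas entered manually by weight.
cart = VirtualCart({"012345": ("cereal", 4.99, 450), "4011": ("bananas", 1.20, 600)})
print(cart.add_barcoded_item("012345"))                         # beep: cereal added
print(cart.add_manual_item("4011", measured_weight_grams=615))  # beep: bananas added
print(round(cart.checkout_total(), 2))                          # 6.19
```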

The pessimistic interpretation of Amazon’s foray into grocery technology is that its three strategies are mutually incompatible, reflecting a lack of conviction on the correct strategy to commit to. Indeed, the vertically integrated smart store strategy suggests Amazon is willing to incur massive fixed costs to optimize the customer experience. The modular smart store strategy suggests Amazon is willing to make the tradeoff in customer experience for faster market penetration.

The smart cart strategy suggests that smart stores are too complex to capture all customer behaviors correctly, thus requiring Amazon to restrict the freedom of user behavior. The more charitable interpretation, however, is that, well, Amazon is one of the most customer-centric companies in the world, and it has the capital to experiment with different approaches to figure out what works best.

While Amazon serves as a helpful case study to the current state of the industry, many other players exist in the space, all using different approaches to build an aspect of the grocery store of the future.

Cashierless checkout

According to some estimates, people spend more than 60 hours per year standing in checkout lines. Cashierless checkout changes everything, as shoppers are immediately identified upon entry and can grab products from the shelf and leave the store without having to interact with a cashier. Different companies have taken different approaches to cashierless checkout:

Smart shelves: Like Amazon Go, some companies utilize computer vision mounted on ceilings and advanced sensors on shelves to detect when shoppers take an item from the shelf. Companies associate the correct item with the correct shopper, and the shopper is charged for all the items they grabbed when they are finished with their shopping journey. Standard Cognition, Zippin and Trigo are some of the leaders in computer vision and smart shelf technology.

Smart carts and baskets: Like Amazon’s Dash Cart, some companies are moving the AI and the sensors from the ceilings and shelves to the cart. When a shopper places an item in their cart, the cart can detect exactly which item was placed and the quantity of that item. Caper Labs, for instance, is pursuing a smart cart approach. Its cart has a credit card reader for the customer to checkout without a cashier.

Touchless checkout kiosks: Touchless checkout kiosk stations use overhead cameras that verify and charge a customer for their purchase. For instance, Mashgin built a kiosk that uses computer vision to quickly verify a customer’s items when they’re done shopping. Customers can then pay using a credit card without ever having to scan a barcode.

Self-scanning: Some companies still require customers to scan items themselves, but once items are scanned, checkout becomes quick and painless. Supersmart, for instance, built a mobile app for customers to quickly scan products as they add them to their carts. When customers are finished shopping, they scan a QR code at a Supersmart kiosk, which verifies that the items in the cart match the items scanned using the mobile app. Amazon’s Dash Cart, described above, also requires a level of human involvement in manually adding certain items to the cart.

Notably, even with the approaches detailed above, cashiers may not be going anywhere just yet because they still play important roles in the customer shopping experience. Cashiers, for instance, help to bag a customer’s items quickly and efficiently. Cashiers can also conduct random checks of customers’ bags as they leave the store and check IDs for alcohol purchases. Finally, cashiers can untangle tricky corner cases where automated systems fail to detect or validate certain shoppers’ carts. Grabango and FutureProof are therefore building hybrid cashierless checkout systems that keep a human in the loop.
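The verification step in the self-scanning and hybrid approaches above amounts to reconciling two tallies: what the shopper’s app recorded versus what the kiosk or camera system detected, with any mismatch escalated to a staff member, the human in the loop. The sketch below is a generic illustration of that reconciliation; the function name and data shapes are assumptions, not any vendor’s API.

```python
# Generic sketch of reconciling a shopper's self-scanned items against what a
# kiosk or camera system detected; mismatches go to a staff member for review.
# Function name and data shapes are illustrative assumptions.
from collections import Counter

def reconcile(app_scanned, kiosk_detected):
    """Return (approved, discrepancies) for two lists of product codes.

    discrepancies maps product code -> (scanned_count, detected_count) for every
    item the two tallies disagree on; approved is True when they fully match.
    """
    scanned = Counter(app_scanned)
    detected = Counter(kiosk_detected)
    discrepancies = {
        code: (scanned.get(code, 0), detected.get(code, 0))
        for code in set(scanned) | set(detected)
        if scanned.get(code, 0) != detected.get(code, 0)
    }
    return (not discrepancies, discrepancies)

# Example: the shopper scanned two yogurts but the kiosk only detected one,
# so the transaction is flagged for the human in the loop.
ok, issues = reconcile(["yogurt", "yogurt", "bread"], ["yogurt", "bread"])
print(ok, issues)  # False {'yogurt': (2, 1)}
```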

Advanced software analytics

#amazon, #amazon-go, #artificial-intelligence, #cashierless-checkout, #column, #ecommerce, #facial-recognition, #food, #grocery-store, #inventory-management, #labor, #payments, #point-of-sale, #real-estate, #retail, #robotics

0

Portland, Maine passes referendum banning facial surveillance

As we’re currently sifting through all of the national and local votes from last night’s elections, here’s a small but important victory for privacy advocates out of Portland, Maine. Per the Bangor Daily News, the city passed “Referendum Question B,” designed to curb government and police use of facial recognition technology.

According to the initiative:

An Act to Ban Facial Surveillance by Public Officials in Portland will ban the city of Portland and its departments and officials from using or authorizing the use of any facial surveillance software on any groups or member of the public, and provides a right to members of the public to sue if facial surveillance data is illegally gathered and/or used.

It’s one of four progressive measures that passed last night in the city. Other successful measures include a $15/hour minimum wage and a cap on rent increases. It also joins other recent local ordinances. Other cities to pass similar legislation include San Francisco, Boston and the other Portland, which offered a pretty sweeping ban back in September.

Meanwhile, earlier this week, an arrest was made in Washington, DC using facial recognition. The individual was reportedly identified using an image found on Twitter.

#2020-election, #apps, #facial-recognition, #maine, #portland, #privacy

0

Human Capital: Uber Eats hit with claims of ‘reverse racism’

With less than one week left until the election, DoorDash made a late contribution of $3.75 million to try to ensure California’s gig worker ballot measure Prop 22 passes. Meanwhile, Coinbase is looking for a head of diversity and inclusion and Uber was hit with claims of reverse racism.

All that and more in this week’s edition of Human Capital, a weekly newsletter where we unpack all-things labor and D&I. To receive this in your inbox every Friday at 1 p.m. PT, be sure to sign up here.

Let’s jump in.

Employees at surveillance startup Verkada reportedly used tech to harass co-workers

Oof. Just when we thought we were safe from surveillance, we’ve found yet another reason not to trust people with facial recognition tech. Just to be clear, the first part of that was sarcasm. Anyway, Vice reported earlier this week that some Verkada employees used the startup’s tech to take photos of their female colleagues and then made sexually explicit jokes.

When other employees reported the incident to human resources, Verkada CEO Filip Kaliszan simply gave the offenders a choice of leaving the company or having their share of stock reduced. After the Vice story went out, however, Verkada fired the three employees in question.

Coinbase is looking for a head of D&I

Coinbase is on the hunt for a director of belonging, inclusion and diversity. It’s worth noting Coinbase previously had a head of D&I, Tariq Meyers, but he began focusing on an employee support task force role as a result of COVID-19 in April, according to his LinkedIn page. Meyers later left the company in August, which was before Coinbase CEO Brian Armstrong took a stance about not speaking out about social issues.

That stance led to 5% of Coinbase’s employees opting to take a severance package to leave the company. Two of those employees were Coinbase Global Head of Marketing John Russ and Coinbase VP Dan Yoo.

“We believe that it’s possible to be 100% committed to an inclusive workplace that values diversity where everyone is safe and belongs (and as part of that, working to root out and eliminate any intolerance or bias that exists at the company), and simultaneously maintain laser focus on our mission,” the job posting states. “To this end, we have made a public stance that Coinbase won’t issue external statements on topics beyond the scope of our mission of building a more open financial system and expanding economic freedom, while also redoubling our commitment to making the company an amazing place to work for all employees, regardless of background.”

Precursor VC promotes Sydney Thomas to Principal

Image Credits: Precursor Ventures

Sydney Thomas, who started her career at Precursor Ventures as an intern, was promoted to Principal. That means she’s able to deploy capital to startups on behalf of the fund.

“This is a promotion that has been earned through hard work, aptitude and a clear demonstration that Sydney embodies all of the values we hold dear here at Precursor,” the firm wrote in a blog post. “She has already made a number of investments on behalf of the firm and will continue to do so going forward.”

Indian engineers allege caste bias in tech industry

The Washington Post’s Nitasha Tiku shed some light on caste-based discrimination in the tech ecosystem. Specifically, 30 female Indian engineers who are part of the Dalit caste and work for companies like Apple, Google, Microsoft and Cisco, say they have faced caste bias. As Tiku explains, those in the Dalit caste are part of the lowest rank castes within India’s social hierarchy.

PayPal puts money into Black and Latinx-led VC funds

PayPal is investing $50 million in a handful of early-stage funds led by Black and Latinx venture capitalists. The investment is part of PayPal’s $530 million commitment to support Black-owned businesses.

The funds receiving money include Chingona Ventures, Fearless Fund, Harlem Capital, Precursor Ventures, Slauson & Co, VamosVentures, Zeal Capital Partners and another undisclosed fund.

Reddit elevates its VP of people and culture

Nellie Peshkov, formerly Reddit’s VP of People and Culture, is now Chief People Officer. Her appointment to the C-suite is part of the much-needed, growing trend of tech companies elevating employees focused on diversity and inclusion to the highest leadership ranks.

Uber Eats hit with claims of “reverse racism”

Uber said it has received more than 8,500 demands for arbitration as a result of it ditching delivery fees for Black-owned restaurants via Uber Eats.

Uber Eats made this change in June, following racial justice protests around the police killing of George Floyd, an unarmed Black man. Uber Eats said it wanted to make it easier for customers to support Black-owned businesses in the U.S. and Canada. To qualify, the restaurant must be a small or medium-sized business and, therefore, not part of a franchise. In contrast, delivery fees are still in place for other restaurants.

In one of these claims, viewed by TechCrunch, a customer says Uber Eats violates the Unruh Civil Rights Act by “charging discriminatory delivery fees based on race (of the business owner).” That claim seeks $12,000 as well as a permanent injunction that would prevent Uber from continuing to offer free delivery from Black-owned restaurants.

Uber driver claims rating system is racially biased

Uber is no stranger to lawsuits, so this one shouldn’t come as a surprise. Uber is now facing a lawsuit regarding its customer ratings and how the company deactivates drivers whose ratings fall below a certain threshold. The suit alleges the system “constitutes race discrimination, as it is widely recognized that customer evaluations of workers are frequently racially biased.”

In a statement to NPR, Uber called the suit “flimsy” and said “ridesharing has greatly reduced bias for both drivers and riders, who now have fairer, more equitable access to work and transportation than ever before.”

Yes on Prop 22 gets another $3.75 million influx of cash

DoorDash put an additional $3.75 million into the Yes on 22 campaign, according to a late contribution filing. Proposition 22 is the California ballot measure that aims to keep gig workers classified as independent contractors.

The latest influx of cash brought Yes on 22’s total contributions north of $200 million. As of October 14, the campaign had raised $189 million. But thanks to a number of late contributions, the total put toward Yes on 22 comes out to $202,955,106.38, or roughly $203 million.

Prop 22 became the most-funded California ballot measure long ago, and it has now also surpassed the $200 million mark.

TechCrunch Sessions: Justice is back

I am pleased to announce TechCrunch Sessions: Justice is officially happening again! Save the date for March 3, 2021.

We’ll explore inclusive hiring, access to funding for Black, Latinx and Indigenous people, and workplace tools to foster inclusion and belonging. We’ll also examine the experiences of gig workers and formerly incarcerated people who are often left out of Silicon Valley’s wealth cycle. Rounding out the program will be a discussion about the role of venture capital in creating a more inclusive tech ecosystem. We’ll discuss all of that and more at TC Sessions: Justice.

#coinbase, #diversity, #facial-recognition, #labor, #tc, #uber

0

President Trump’s Twitter accessed by security expert who guessed password “maga2020!”

A Dutch security researcher says he accessed President Trump’s @realDonaldTrump Twitter account last week by guessing his password: “maga2020!”.

Victor Gevers, a security researcher at the GDI Foundation and chair of the Dutch Institute for Vulnerability Disclosure, which finds and reports security vulnerabilities, told TechCrunch he guessed the president’s account password and was successful on the fifth attempt.

The account was not protected by two-factor authentication, granting Gevers access to the president’s account.

After logging in, he emailed US-CERT, a division of Homeland Security’s cyber unit Cybersecurity and Infrastructure Security Agency (CISA), to disclose the security lapse, which TechCrunch has seen. Gevers said the president’s Twitter password was changed shortly after.

A screenshot from inside Trump’s Twitter account. (Image: Victor Gevers)

It’s the second time Gevers has gained access to Trump’s Twitter account.

The first time was in 2016, when Gevers and two others extracted and cracked Trump’s password from the 2012 LinkedIn breach. The researchers took his password, “yourefired” (his catchphrase from the television show The Apprentice), and found it let them into his Twitter account. Gevers reported the breach to local authorities in the Netherlands, with suggestions on how Trump could improve his password security. One of the passwords he suggested at the time was “maga2020!”, he said. Gevers said he “did not expect” the password to work years later.

Dutch news outlet RTL News first reported the story.

Trump’s account is said to have been locked down with extra protections after he became president, though Twitter has not said publicly what those protections entail. His account was untouched by the hackers who broke into Twitter’s network in July and abused an “admin tool” to hijack high-profile accounts and spread a cryptocurrency scam.

A spokesperson for the White House and the Trump campaign did not immediately comment. A Twitter spokesperson did not comment on the record. A spokesperson for CISA did not immediately confirm the report.

Gevers has previously reported security incidents involving a facial recognition database used to track Uyghur Muslims and a vulnerability in Oman’s stock exchange.

#chair, #donald-trump, #facial-recognition, #netherlands, #oman, #operating-systems, #president, #security, #social-media, #software, #spokesperson, #trump, #united-states, #white-house

0

Microsoft and partners aim to shrink the ‘data desert’ limiting accessible AI

AI-based tools like computer vision and voice interfaces have the potential to be life-changing for people with disabilities, but the truth is those AI models are usually built with very little data sourced from those people. Microsoft is working with several nonprofit partners to help make these tools reflect the needs and everyday realities of people living with conditions like blindness and limited mobility.

Consider, for example, a computer vision system that recognizes objects and can describe what is on a table. Chances are that algorithm was trained with data collected by able-bodied people, from their point of view — likely standing.

A person in a wheelchair looking to do the same thing might find the system isn’t nearly as effective from that lower angle. Similarly, a blind person may not know whether they are holding the camera in the right position, or keeping it there long enough, for the algorithm to do its work, so they must rely on trial and error.

Or consider a face recognition algorithm that’s meant to tell when you’re paying attention to the screen for some metric or another. What’s the likelihood that, among the faces used to train that system, any significant number have things like a ventilator, a puff-and-blow controller, or a headstrap obscuring part of the face? These “confounders” can significantly affect accuracy if the system has never seen anything like them.

Facial recognition software that fails on people with dark skin, or has lower accuracy on women, is a common example of this sort of “garbage in, garbage out.” Less commonly discussed but no less important is the visual representation of people with disabilities, or of their point of view.
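That failure mode is easy to surface with a disaggregated evaluation: instead of reporting a single accuracy number, break it out by subgroup. The sketch below is a minimal illustration of that idea in Python; it is not drawn from Microsoft’s pipeline, and the arrays, group labels and numbers are hypothetical.

# A minimal sketch: measuring how accuracy differs across subgroups of a test set.
# `y_true`, `y_pred` and `group` are hypothetical arrays you would build from your
# own evaluation data; none of this reflects Microsoft's actual tooling.
import numpy as np

def accuracy_by_group(y_true, y_pred, group):
    """Return overall accuracy and a per-group breakdown."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    overall = float((y_true == y_pred).mean())
    per_group = {
        g: float((y_true[group == g] == y_pred[group == g]).mean())
        for g in np.unique(group)
    }
    return overall, per_group

# Toy example: a model that does well on faces without assistive devices
# but poorly on faces partially covered by a ventilator.
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
group  = ["none", "none", "none", "none", "none",
          "ventilator", "ventilator", "ventilator", "ventilator", "ventilator"]

overall, per_group = accuracy_by_group(y_true, y_pred, group)
print(overall)    # 0.5
print(per_group)  # {'none': 0.8, 'ventilator': 0.2}

A single headline accuracy of 50% hides the fact that almost all of the errors land on one subgroup, which is exactly the kind of gap the efforts described here are trying to close.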

Microsoft today announced a handful of efforts co-led by advocacy organizations that hope to do something about this “data desert” limiting the inclusivity of AI.

The first is a collaboration with Team Gleason, an organization formed to improve awareness around the neuromotor degenerative disease amyotrophic lateral sclerosis, or ALS (it’s named after former NFL star Steve Gleason, who was diagnosed with the disease some years back).

Their concern is the one above regarding facial recognition. People living with ALS have a huge variety of symptoms and assistive technologies, and those can interfere with algorithms that have never seen them before. That becomes an issue if, for example, a company wanted to ship gaze tracking software that relied on face recognition, as Microsoft would surely like to do.

“Computer vision and machine learning don’t represent the use cases and looks of people with ALS and other conditions,” said Team Gleason’s Blair Casey. “Everybody’s situation is different and the way they use technology is different. People find the most creative ways to be efficient and comfortable.”

Project Insight is the name of a new joint effort with Microsoft that will collect face imagery of volunteer users with ALS as they go about their business. In time that face data will be integrated with Microsoft’s existing cognitive services, but also released freely so others can improve their own algorithms with it.

They aim to have a release in late 2021. If that timeframe seems long, Microsoft’s Mary Bellard, of the company’s AI for Accessibility effort, pointed out that the team is basically starting from scratch, and that getting it right is important.

“Research leads to insights, insights lead to models that engineers bring into products. But we have to have data to make it accurate enough to be in a product in the first place,” she said. “The data will be shared — for sure this is not about making any one product better, it’s about accelerating research around these complex opportunities. And that’s work we don’t want to do alone.”

Another opportunity for improvement is in sourcing images from users who don’t use an app the way most people do. As with the person with impaired vision or the wheelchair user mentioned above, there is a shortage of data captured from their perspective. Two efforts aim to address this.

Images taken by people needing objects in them to be identified or located.

Image Credits: ORBIT

One, with City University of London, is the expansion and eventual public release of the Object Recognition for Blind Image Training (ORBIT) project, which is assembling a dataset for identifying everyday objects — a can of pop, a keyring — using a smartphone camera. Unlike other datasets, though, this one will be sourced entirely from blind users, meaning the algorithm will learn from the start to work with the kind of data it will be given later anyway.

AI-captioned images

Image Credits: Microsoft

The other is an expansion of VizWiz to better encompass this kind of data. The tool is used by people who need help right away in telling, say, whether a cup of yogurt is expired or if there’s a car in the driveway. Microsoft worked with the app’s creator, Danna Gurari, to improve the app’s existing database of tens of thousands of images with associated questions and captions. They’re also working to alert a user when their image is too dark or blurry to analyze or submit.
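The article doesn’t describe how that quality check works under the hood, but a common approach is to flag low mean brightness as “too dark” and a low variance of the Laplacian as “too blurry.” The sketch below shows that idea with OpenCV; the thresholds and file name are illustrative assumptions, not VizWiz’s actual implementation.

# A minimal sketch of one common way to flag photos that are too dark or too
# blurry to analyze. This is NOT VizWiz's implementation; the thresholds are
# illustrative and would need tuning on real data.
import cv2

DARK_THRESHOLD = 40.0    # mean pixel intensity (0-255) below which we call it "too dark"
BLUR_THRESHOLD = 100.0   # variance of the Laplacian below which we call it "too blurry"

def check_image_quality(path):
    """Return (is_dark, is_blurry) for the image at `path`."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"Could not read image: {path}")
    mean_brightness = float(gray.mean())
    # Variance of the Laplacian is a standard sharpness measure: blurry images
    # have few strong edges, so the variance is low.
    blur_score = float(cv2.Laplacian(gray, cv2.CV_64F).var())
    return mean_brightness < DARK_THRESHOLD, blur_score < BLUR_THRESHOLD

if __name__ == "__main__":
    # "yogurt_label.jpg" is a hypothetical file used only for illustration.
    is_dark, is_blurry = check_image_quality("yogurt_label.jpg")
    if is_dark:
        print("Image looks too dark -- try again with more light.")
    if is_blurry:
        print("Image looks blurry -- hold the camera steady and retake.")

Giving that feedback at capture time, rather than after a failed analysis, is what saves a blind user from the trial-and-error loop described earlier.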

Inclusivity is complex because it’s about people and systems that, perhaps without even realizing it, define “normal” and then don’t work outside of those norms. If AI is going to be inclusive, “normal” needs to be redefined and that’s going to take a lot of hard work. Until recently, people weren’t even talking about it. But that’s changing.

“This is stuff the ALS community wanted years ago,” said Casey. “This is technology that exists — it’s sitting on a shelf. Let’s put it to use. When we talk about it, people will do more, and that’s something the community needs as a whole.”

#accessibility, #als, #artificial-intelligence, #computer-vision, #disabilities, #face-recognition, #facial-recognition, #microsoft, #tc, #team-gleason

0

Portland adopts strictest facial recognition ban in nation to date

A helpful neon sign in Portland, Ore. (credit: Seth K. Hughes | Getty Images)

City leaders in Portland, Oregon, yesterday adopted the most sweeping ban on facial recognition technology passed anywhere in the United States so far.

The Portland City Council voted on two ordinances related to facial recognition: one prohibiting use by public entities, including the police, and the other limiting its use by private entities. Both measures passed unanimously, according to local NPR and PBS affiliate Oregon Public Broadcasting.

The first ordinance (PDF) bans the “acquisition and use” of facial recognition technologies by any bureau of the city of Portland. The second (PDF) prohibits private entities from using facial recognition technologies “in places of public accommodation” in the city.


#face-recognition, #facial-recognition, #laws, #oregon, #policy, #portland, #privacy, #racism, #surveillance

0

Portland passes expansive city ban on facial recognition tech

The city council in Portland, Oregon, passed legislation Wednesday that’s widely regarded as the most aggressive municipal ban on facial recognition technology so far.

Through a pair of ordinances, Portland will both prohibit city bureaus from using the controversial technology and stop private companies from employing it in public areas. Oakland, San Francisco and Boston have all banned their governments from using facial recognition tech, but Portland’s ban on corporate uses in public spaces breaks new ground.

The draft ordinance proposing the private ban cites the risk of “biases against Black people, women, and older people” baked into facial recognition systems. Evidence of bias in these systems has been widely observed by researchers and even by the U.S. federal government, in a study published late last year. Known flaws in these systems can lead to false positives with serious consequences, given facial recognition’s law enforcement applications.

City Council Commissioner Jo Ann Hardesty linked concerns around high-tech law enforcement tools to ongoing protests in Portland, which have taken place for more than three months. Last month, the U.S. Marshals Service confirmed that it used a small aircraft to surveil crowds near the protest’s epicenter at the Multnomah County Justice Center in downtown Portland.

Hardesty called the decision to ban local law enforcement from employing facial recognition tech “especially important” for the moment Portland now finds itself in.

“No one should have something as private as their face photographed, stored, and sold to third parties for a profit,” Hardesty said. “No one should be unfairly thrust into the criminal justice system because the tech algorithm misidentified an innocent person.”

The ACLU also celebrated Wednesday’s vote as a historic digital privacy win.

“With today’s vote, the community made clear we hold the real power in this city,” ACLU of Oregon Interim Executive Director Jann Carson said. “We will not let Portland turn into a surveillance state where police and corporations alike can track us wherever we go.”

Portland’s dual bans on the public and private use of facial recognition may serve as a roadmap for other cities looking to carve out similar digital privacy policies — an outcome privacy advocates are hoping for.

“Now, cities across the country must look to Portland and pass bans of their own,” Fight for the Future’s Lia Holland said. “We have the momentum, and we have the will to beat back this dangerous and discriminatory technology.”

#digital-privacy, #facial-recognition, #surveillance, #tc

0