Mass surveillance must have meaningful safeguards, says ECHR

The highest chamber of the European Court of Human Rights (ECHR) has delivered a blow to anti-surveillance campaigners in Europe by failing to find that bulk interception of digital comms is inherently incompatible with human rights law — which enshrines individual rights to privacy and freedom of expression.

However today’s Grand Chamber judgement underscores the need for such intrusive intelligence powers to be operated with what the judges describe as “end-to-end safeguards”.

Governments in Europe that fail to do so are opening such laws up to further legal challenge under the European Convention on Human Rights.

The Grand Chamber ruling also confirms that the UK’s historic surveillance regime — under the Regulation of Investigatory Powers Act 2000 (aka RIPA) — was unlawful because it lacked the necessary safeguards.

Per the court, ‘end-to-end’ safeguards mean that bulk intercept powers need to involve assessments at each stage of the process of the necessity and proportionality of the measures being taken; that bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation are being defined; and that the operation should be subject to supervision and independent ‘ex post facto’ review.

The Grand Chamber judgement identified a number of deficiencies with the bulk regime operated in the UK at the time of RIPA — including that bulk interception had been authorised by the Secretary of State, rather than by a body independent of the executive; categories of search terms defining the kinds of communications that would become liable for examination had not been included in the application for a warrant; and search terms linked to an individual (e.g. specific identifiers such as an email address) had not been subject to prior internal authorisation.

The court also found that the UK’s bulk intercept regime had breached Article 10 (freedom of expression) because it had not contained sufficient protections for confidential journalistic material.

The regime used for obtaining comms data from communication service providers, meanwhile, was found to have violated Articles 8 (right to privacy and family life/comms) and 10 “as it had not been in accordance with the law”.

However, the court held that the regime by which the UK could request intelligence from foreign governments and/or intelligence agencies had had sufficient safeguards in place to protect against abuse and to ensure that UK authorities had not used such requests as a means of circumventing their duties under domestic law and the Convention.

“The Court considered that, owing to the multitude of threats States face in modern society, operating a bulk interception regime did not in and of itself violate the Convention,” it added in a press release.

The RIPA regime has since been replaced by the UK’s Investigatory Powers Act (IPA) — which put bulk intercept powers explicitly into law (albeit with claimed layers of oversight).

The IPA has also been subject to a number of human rights challenges — and in 2018 the government was ordered by the UK High Court to revise parts of the law which had been found to be incompatible with human rights law.

Today’s Grand Chamber judgement relates specifically to RIPA and to a number of legal challenges brought against the UK’s mass surveillance regime by journalists and privacy and digital rights campaigners in the wake of the 2013 mass surveillance revelations by NSA whistleblower Edward Snowden, challenges which the court heard simultaneously.

In a similar ruling back in 2018, the lower Chamber found that some aspects of the UK’s regime violated human rights law — with a majority vote then finding that its bulk interception regime had violated Article 8 because there was insufficient oversight (such as of selectors and filtering, and of the search and selection of intercepted communications for examination) as well as inadequate safeguards governing the selection of related comms data.

Human rights campaigners followed up by requesting and securing a referral to the Grand Chamber — which has now handed down its view.

It unanimously found there had been a violation of Article 8 in respect of the regime for obtaining communications data from communication service providers.

But by 12 votes to 5 it ruled there had been no violation of Article 8 in respect of the UK’s regime for requesting intercepted material from foreign governments and intelligence agencies.

In another unanimous vote the Grand Chamber found there had been a violation of Article 10, concerning both the bulk interception regime and the regime for obtaining communications data from comms service providers.

But, again, by 12 votes to 5 it ruled there had been no violation of Article 10 in respect of the regime for requesting intercepted material from foreign governments and intelligence agencies.

Responding to the judgement in a statement, the privacy rights group Big Brother Watch — which was one of the parties involved in the challenges — said the judgement “confirms definitively that the UK’s bulk interception practices were unlawful for decades”, thereby vindicating Snowden’s whistleblowing.

The organization also highlighted a dissenting opinion from Judge Pinto de Albuquerque, who wrote that:

“Admitting non-targeted bulk interception involves a fundamental change in how we view crime prevention and investigation and intelligence gathering in Europe, from targeting a suspect who can be identified to treating everyone as a potential suspect, whose data must be stored, analysed and profiled (…) a society built upon such foundations is more akin to a police state than to a democratic society. This would be the opposite of what the founding fathers wanted for Europe when they signed the Convention in 1950.”

In further remarks on the judgement, Silkie Carlo, director of Big Brother Watch, added: “Mass surveillance damages democracies under the cloak of defending them, and we welcome the Court’s acknowledgement of this. As one judge put it, we are at great risk of living in an electronic ‘Big Brother’ in Europe. We welcome the judgment that the UK’s surveillance regime was unlawful, but the missed opportunity for the Court to prescribe clearer limitations and safeguards means that risk is current and real.”

“We will continue our work to protect privacy, from parliament to the courts, until intrusive mass surveillance practices are ended,” she added.

Privacy International — another party to the case — sought to put a positive spin on the outcome, saying the Grand Chamber goes further than the ECHR’s 2018 ruling by “providing for new and stronger safeguards, adding a new requirement of prior independent or judicial authorisation for bulk interception”.

“Authorisation must be meaningful, rigorous and check for proper ‘end-to-end safeguards’,” it added in a statement.

Also commenting publicly, the Open Rights Group’s executive director, Jim Killock, said: “The court has shown that the UK Government’s legal framework was weak and inadequate when we took them to court with Big Brother Watch and Constanze Kurz in 2013. The court has set out clear criteria for assessing future bulk interception regimes, but we believe these will need to be developed into harder red lines in future judgments, if bulk interception is not to be abused.”

“As the court sets out, bulk interception powers are a great power, secretive in nature, and hard to keep in check. We are far from confident that today’s bulk interception is sufficiently safeguarded, while the technical capacities continue to deepen. GCHQ continues to share technology platforms and raw data with the US,” Killock went on to say, couching the judgment as “an important step on a long journey”.

 

#big-brother-watch, #counter-terrorism, #echr, #edward-snowden, #europe, #european-court-of-human-rights, #gchq, #investigatory-powers-act, #mass-surveillance, #national-security, #national-security-agency, #open-rights-group, #policy, #privacy, #security, #surveillance, #tc, #uk-government, #united-kingdom, #united-states


This crypto surveillance startup — ‘We’re bomb sniffing dogs’ — just raised Series A funding

Solidus Labs, a company that says its surveillance and risk-monitoring software can detect manipulation across cryptocurrency trading platforms, is today announcing $20 million in Series A funding led by Evolution Equity Partners, with participation from Hanaco Ventures, Avon Ventures, 645 Ventures, the cryptocurrency derivatives exchange FTX, and a sprinkling of former government officials, including former CFTC commissioner Chris Giancarlo and former SEC commissioner Troy Paredes.

It’s pretty great timing, given the various signals coming from the U.S. government just last week that it’s intent on improving its crypto monitoring efforts — such as the U.S. Treasury’s call for stricter cryptocurrency compliance with the IRS.

Of course, Solidus didn’t spring into existence last week. Rather, Solidus was founded in 2017 by several former Goldman Sachs employees who worked on the firm’s electronic trading desk for equities. At the time, Bitcoin was only just becoming buzzy, but while the engineers anticipated different use cases for the cryptocurrency, they also recognized that a lack of compliance tools would be a barrier to its adoption by bigger financial institutions, so they left to build them.

Fast forward and today Solidus employs 30 people, has raised $23.75 million altogether, and is in the process of doubling its head count to address growing demand. We talked with Solidus’s New York-based cofounder and CEO Asaf Meir — who was himself one of those former Goldman engineers — about the company late last week. Excerpts from that chat follow, edited lightly for length.

TC: Who are your customers?

AM: We work with exchanges, broker dealers, OTC desks, liquidity providers, and regulators — anyone who is exposed to the risk of buying and selling cryptocurrencies, crypto assets or digital assets, whatever you want to call them.

TC: What are you promising to uncover for them?

AM: What we detect, largely speaking, is volume and price manipulation, and that has to do with wash trading, spoofing, layering, pump and dumps, and an additional growing library of crypto native alerts that truly only exist in our unique market.

We had a 400% increase in inbound demand over 2020 driven largely by two factors, I think. One is regulatory scrutiny. Globally, regulators have gone off to market participants, letting them know that they have to ask for permission not forgiveness. The second reason — which I like better — is the drastic institutional increase in appetite toward exposure for this asset class. Every institution, the first question they ask any executing platform is: ‘What are your risk mitigation tools? How do you make sure there is market integrity?’

TC: We talked a couple of months ago, and you mentioned having a growing pipeline of customers, like the trading platform Bittrex in Seattle. Is demand coming primarily from the U.S.?

AM: We have demand in Asia and in Europe, as well, so we will be opening offices there, too.

TC: Is your former employer Goldman a customer?

AM: I can’t comment on that, but I would say there isn’t a bank right now that isn’t thinking about how they’re going to get exposure to crypto assets, and in order to do that in a safe, compliant and robust way, they have to employ crypto-specific solutions.

Right now, there’s the new frontier — the clients we’re currently working with, which are these crypto-pure exchanges, broker dealers, liquidity providers, and even traditional financial institutions that are coming into crypto and opening a crypto operation or a crypto desk. Then there’s the new new frontier: your NFTs, stablecoins, indexes, lending platforms, decentralized protocols and God knows what [else] all of a sudden reaching out to us, telling us they want to do the right thing, to ensure the users on their platform are well-protected, and that trading activities are audited, and [to enlist us] to prevent any manipulation.

TC: How does your subscription service work and who is building the tech?

AM: We consume private data from our clients — all their trading data — and we then put it in our detection models, the results of which we ultimately surface through insights and alerts on our dashboard, which they have access to.

As for who is building it, we have a lot of fintech engineers who are coming from Goldman and Morgan Stanley and Citi and bringing that traditional knowledge of large trading systems at scale; we also have incredible data scientists out of Israel whose expertise is in anomaly detection, which they are applying to financial crime, working with us.

TC: What do these crimes look like?

AM: When we started out, there was much more wholesale manipulation happening whether through wash trading or pump-and-dumps — things that are more easy to perform. What we’re seeing today are extremely sophisticated manipulation schemes where bad actors are able to exploit different executing platforms. We’re quite literally surfacing new alerts that if you were to use a legacy, rule-based system you wouldn’t be able to [surface] because you’re not really sure what you’re looking for. We oftentimes have an alert that we haven’t named yet; we just know that this type of behavior is considered manipulative in nature and that our client should be looking into it.

TC: Can you elaborate a bit more about these new anomalies?

AM: I’m conflicted about how much can we share of our clients’ private data. But one thing we’re seeing is [a surge in] account extraction attacks, which is when through different ways, bad actors are able to gain access to an account’s funds and are able in a sophisticated way to trade out of the exchange or broker dealer or custodian. That’s happening in different social engineering-related ways, but we’re able, through account deviation and account profiling, to alert the exchange or broker dealer or financial institution we’re working with to avoid that.

We’re about detection and prevention, not about tracing [what went wrong and where] after the fact. And we can do that regardless of knowing even personal identifiable information about that account. It’s not about the name or the IP address; it’s all about the attributes of trading. In fact, if we have an exchange in Hong Kong that’s experiencing a pump-and-dump on a certain coin pair, we can preemptively warn the rest of our client base so they can take steps to prepare and protect themselves.

TC: On the prevention front, could you also stop that activity on the Hong Kong exchange? Are you empowered by your clients to step in if you detect something anomalous?

AM: We’re bomb sniffing dogs, so we’re not coming to disable the bot. We know how to take the data and point out manipulation, but it’s then up to the financial institution to handle the case.

Pictured above: seated, left to right, are CTO Praveen Kumar and CEO Asaf Meir; standing is COO Chen Arad.

#645-ventures, #analytics, #asaf-meir, #blockchain, #chainalysis, #crypto, #elementus, #evolution-equity-partners, #ftx, #hanaco-ventures, #recent-funding, #solidus-labs, #startups, #surveillance, #tc, #venture-capital


US towns are buying Chinese surveillance tech tied to Uighur abuses

At least a hundred U.S. counties, towns, and cities have bought China-made surveillance systems that the U.S. government has linked to human rights abuses, according to contract data seen by TechCrunch.

Some municipalities have spent tens of thousands of dollars or more to buy surveillance equipment made by two Chinese technology companies, Hikvision and Dahua, even after the companies were added to the U.S. government’s economic blacklist in 2019 over their links to China’s ongoing efforts to suppress ethnic minorities in Xinjiang, where most Uighur Muslims live. Congress also banned U.S. federal agencies from buying new Hikvision and Dahua technology or renewing contracts over fears that it could help the Chinese government to conduct espionage.

But those federal actions broadly do not apply at the state and city level, allowing local governments to buy these China-made surveillance systems — including video cameras and thermal imaging scanners — largely uninhibited, so long as federal funds are not used to buy the equipment.

Details of the contracts were provided by GovSpend, which tracks federal and state government spending, to TechCrunch via IPVM, a leading news publication on video surveillance, which has followed the Hikvision and Dahua bans closely.

The biggest spender, according to the data and as previously reported by IPVM, was the Board of Education in Fayette County, Georgia, which spent $490,000 in August 2020 on dozens of Hikvision thermal cameras used for temperature checks at its public schools.

A statement provided by Fayette County Public Schools spokesperson Melinda Berry-Dreisbach said the cameras were purchased from its longtime security vendor, an authorized dealer for Hikvision. The statement did not address whether the Board of Education was aware of Hikvision’s links to human rights abuses. Berry-Dreisbach did not respond to our follow-up questions.

IPVM research found many thermal scanners, including Hikvision and Dahua models, produced inaccurate readings, prompting the U.S. Food and Drug Administration to issue a public health alert warning that misreported readings could present “potentially serious public health risks.”

Nash County in North Carolina, which has a population of 95,000 residents, spent more than $45,000 between September and December 2020 to buy Dahua thermal cameras. County Manager Zee Lamb forwarded emails that confirmed the purchases and that the gear was deployed at the county’s public schools, but did not comment.

The data also shows that the Parish of Jefferson in Louisiana, which includes part of the city of New Orleans, spent $35,000 on Hikvision surveillance cameras and video storage between October 2019 and September 2020. A parish spokesperson did not comment.

Only one municipality we contacted addressed the links between the technology they bought and human rights abuses. Kern County in California spent more than $15,000 on Hikvision surveillance cameras and video recording equipment in June 2020 for its probation department offices. The contract data showed a local vendor, Tel Tec Security, supplied the Hikvision technology to the county.

Ryan Alsop, chief administrative officer for Kern County, said he was “not familiar at all with the issues you’re referencing with regard to Hikvision,” when asked about Hikvision’s links to human rights abuses.

“Again, we didn’t contract with Hikvision, we contracted with Tel Tec Security,” said Alsop.

Kern County spent more than $15,000 on Hikvision equipment at its county probation service offices. (Data: GovSpend/supplied)

A spokesperson for the City of Hollywood in Florida, which spent close to $30,000 on Hikvision thermal cameras, said the Chinese technology maker “was the only major manufacturer with a viable solution that was ready for delivery; would serve the defined project scope; and was within the project budget.” The cameras were used to take employees’ body temperatures to curb the spread of COVID-19. The spokesperson did not address the links to human rights abuses but noted that the federal ban did not apply to the city.

Maya Wang, a senior researcher at Human Rights Watch, said a lack of privacy regulations at the local level contributed to municipalities buying this technology.

“One of the problems is that these kinds of cameras, regardless of the country of origin and regardless of whether or not they’re even linked to human rights abuses, have been introduced to various parts of the country — especially at state and city levels — without any kind of regulation to ensure that they comply with privacy standards,” said Wang in a call. “There is, again, no kind of regulatory framework to vet the companies based on their track record, whether or not they have abused human rights in their practices, such that we can evaluate or choose better companies, and encourage the ones with better privacy protections to win, essentially.”

Chief among the U.S. government’s allegations is that Beijing has relied heavily on Hikvision, Dahua, and others to supply the surveillance technology it uses to monitor the Uighur population as part of its ongoing efforts to suppress the ethnic group, efforts that Beijing has repeatedly denied.

United Nations watchdogs say Beijing has detained more than a million Uighurs in internment camps in recent years as part of these efforts, which led to the U.S. blacklisting of the two surveillance technology makers.

In adding the companies to the government’s economic blacklist, the Commerce Department said Hikvision and Dahua “have been implicated in human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs, and other members of Muslim minority groups.” The Biden administration called the human rights abuses a “genocide.”

IPVM has also reported extensively on how the companies’ surveillance technology has been used to suppress the Uighurs. Dahua was found to have race detection in its code for providing “real-time Uighur warnings” to police.

Earlier this year, the Thomson Reuters Foundation found that half of London’s councils and the 20 largest U.K. cities were using technology linked to Uighur abuses. The Guardian also found that Hikvision surveillance technology was used in U.K. schools.

When reached, Dahua pointed to a blog post with a statement, and claimed that “contrary to some reporting in the media, our company has never developed any technology or solution that seeks to target a specific ethnic group.” The statement added: “Claims to the contrary are simply false and we are aware of no evidence that has ever been put forward to support such claims.”

Hikvision did not respond to a request for comment.



#china, #dahua, #government, #hikvision, #human-rights, #privacy, #security, #surveillance, #u-s-government


If you don’t want robotic dogs patrolling the streets, consider CCOPS legislation

Boston Dynamics’ robot “dogs,” or similar versions thereof, are already being employed by police departments in Hawaii, Massachusetts and New York. Operating partly under the veil of experimentation, these police forces have given few answers about the benefits and costs of using these powerful surveillance devices.

The American Civil Liberties Union, in a position paper on CCOPS (community control over police surveillance), proposes an act to promote transparency and protect civil rights and liberties with respect to surveillance technology. To date, 19 U.S. cities have passed CCOPS laws, which means, in practical terms, that virtually all other communities don’t have a requirement that police are transparent about their use of surveillance technologies.

For many, this ability to use new, unproven technologies in a broad range of ways presents a real danger. Stuart Watt, a world-renowned expert in artificial intelligence and the CTO of Turalt, is not amused.

Even seemingly fun and harmless “toys” have all the necessary functions and features to be weaponized.

“I am appalled both by the principle of the dogbots and by them in practice. It’s a big waste of money and a distraction from actual police work,” he said. “Definitely communities need to be engaged with. I am honestly not even sure what the police forces think the whole point is. Is it to discourage through a physical surveillance system, or is it to actually prepare people for some kind of enforcement down the line?

“Chunks of law enforcement have forgotten the whole ‘protect and serve’ thing, and do neither,” Watt added. “If they could use artificial intelligence to actually protect and actually serve vulnerable people, the homeless, folks addicted to drugs, sex workers, those in poverty and maligned minorities, it’d be tons better. If they have to spend the money on AI, spend it to help people.”

The ACLU is advocating exactly what Watt suggests. In proposed language to city councils across the nation, the ACLU makes it clear that:

The City Council shall only approve a request to fund, acquire, or use a surveillance technology if it determines the benefits of the surveillance technology outweigh its costs, that the proposal will safeguard civil liberties and civil rights, and that the uses and deployment of the surveillance technology will not be based upon discriminatory or viewpoint-based factors or have a disparate impact on any community or group.

From a legal perspective, Anthony Gualano, a lawyer and special counsel at Team Law, believes that CCOPS legislation makes sense on many levels.

“As police increase their use of surveillance technologies in communities around the nation, and the technologies they use become more powerful and effective to protect people, legislation requiring transparency becomes necessary to check what technologies are being used and how they are being used.”

For those not only worried about this Boston Dynamics dog, but all future incarnations of this supertech canine, the current legal climate is problematic because it essentially allows our communities to be testing grounds for Big Tech and Big Government to find new ways to engage.

Just last month, public pressure forced the New York Police Department to suspend use of a robotic dog, quite unassumingly named Digidog. The tech hound had already been placed on temporary leave once due to public pushback, but the NYPD then used it at a public housing building in March. That went over about as well as you could expect, leading to discussions as to the immediate fate of this technology in New York.

The New York Times phrased it perfectly, observing that “the NYPD will return the device earlier than planned after critics seized on it as a dystopian example of overly aggressive policing.”

While these bionic dogs are powerful enough to take a bite out of crime, the police forces seeking to use them have a lot of public relations work to do first. A great place to begin would be for the police to actively and positively participate in CCOPS discussions, explaining what the technology involves, and how it (and these robots) will be used tomorrow, next month and potentially years from now.

#american-civil-liberties-union, #artificial-intelligence, #boston-dynamics, #column, #law-enforcement, #mass-surveillance, #opinion, #robotics, #security, #surveillance, #surveillance-technologies, #united-states


EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high profile voice to the critique that the Commission hasn’t lived up to its much trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

#ai-regulation, #artificial-intelligence, #biometrics, #edps, #europe, #european-union, #facial-recognition, #law-enforcement, #policy, #privacy, #surveillance, #wojciech-wiewiorowski


MEPs call for European AI rules to ban biometric surveillance in public

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore a heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal’s exemption from the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

#ai, #ai-regulation, #artificial-intelligence, #biometrics, #discrimination, #europe, #european-parliament, #european-union, #facial-recognition, #fundamental-rights, #law-enforcement, #mass-surveillance, #meps, #national-security, #policy, #privacy, #surveillance


US privacy, consumer, competition and civil rights groups urge ban on ‘surveillance advertising’

Ahead of another big tech vs Congress ‘grab your popcorn’ grilling session, scheduled for March 25 — when US lawmakers will once again question the CEOs of Facebook, Google and Twitter on the unlovely topic of misinformation — a coalition of organizations across the privacy, antitrust, consumer protection and civil rights spaces has called for a ban on “surveillance advertising”, further amplifying the argument that “big tech’s toxic business model is undermining democracy”.

The close to 40-strong coalition behind this latest call to ban ‘creepy ads’ which rely on the mass tracking and profiling of web users in order to target them with behavioral ads includes the American Economic Liberties Project, the Campaign for a Commercial Free Childhood, the Center for Digital Democracy, the Center for Humane Technology, Epic.org, Fair Vote, Media Matters for America, the Tech Transparency Project and The Real Facebook Oversight Board, to name a few.

“As leaders across a broad range of issues and industries, we are united in our concern for the safety of our communities and the health of democracy,” they write in the open letter. “Social media giants are eroding our consensus reality and threatening public safety in service of a toxic, extractive business model. That’s why we’re joining forces in an effort to ban surveillance advertising.”

The coalition is keen to point out that less toxic non-tracking alternatives (like contextual ads) exist, while arguing that greater transparency and oversight of adtech infrastructure could help clean up a range of linked problems, from junk content and rising conspiracism to ad fraud and denuded digital innovation.

“There is no silver bullet to remedy this crisis – and the members of this coalition will continue to pursue a range of different policy approaches, from comprehensive privacy legislation to reforming our antitrust laws and liability standards,” they write. “But here’s one thing we all agree on: It’s time to ban surveillance advertising.”

“Big Tech platforms amplify hate, illegal activities, and conspiracism — and feed users increasingly extreme content — because that’s what generates the most engagement and profit,” they warn.

“Their own algorithmic tools have boosted everything from white supremacist groups and Holocaust denialism to COVID-19 hoaxes, counterfeit opioids and fake cancer cures. Echo chambers, radicalization, and viral lies are features of these platforms, not bugs — central to the business model.”

The coalition also warns over surveillance advertising’s impact on the traditional news business, noting that shrinking revenues for professional journalism is raining more harm down upon the (genuine) information ecosystem democracies need to thrive.

The potshots are well rehearsed at this point although it’s an oversimplification to blame the demise of traditional news on tech giants so much as ‘giant tech’: aka the industrial disruption wrought by the Internet making so much information freely available. But dominance of the programmatic adtech pipeline by a couple of platform giants clearly doesn’t help. (Australia’s recent legislative answer to this problem is still too new to assess for impacts but there’s a risk its news media bargaining code will merely benefit big media and big tech while doing nothing about the harms of either industry profiting off of outrage.)

“Facebook and Google’s monopoly power and data harvesting practices have given them an unfair advantage, allowing them to dominate the digital advertising market, siphoning up revenue that once kept local newspapers afloat. So while Big Tech CEOs get richer, journalists get laid off,” the coalition warns, adding: “Big Tech will continue to stoke discrimination, division, and delusion — even if it fuels targeted violence or lays the groundwork for an insurrection — so long as it’s in their financial interest.”

Among a laundry list of harms the coalition is linking to the dominant ad-based online business models of tech giants Facebook and Google is the funding of what they describe as “insidious misinformation sites that promote medical hoaxes, conspiracy theories, extremist content, and foreign propaganda”.

“Banning surveillance advertising would restore transparency and accountability to digital ad placements, and substantially defund junk sites that serve as critical infrastructure in the disinformation pipeline,” they argue, adding: “These sites produce an endless drumbeat of made-to-go-viral conspiracy theories that are then boosted by bad-faith social media influencers and the platforms’ engagement-hungry algorithms — a toxic feedback loop fueled and financed by surveillance advertising.”

Other harms they point to are the risks posed to public health by platforms’ amplification of junk/bogus content such as COVID-19 conspiracy theories and vaccine misinformation; the risk of discrimination through unfairly selective and/or biased ad targeting, such as job ads that illegally exclude women or ethnic minorities; and the perverse economic incentives for ad platforms to amplify extremist/outrageous content in order to boost user engagement with content and ads, thereby fuelling societal division and driving partisanship as a byproduct of the fact platforms benefit financially from more content being spread.

The coalition also argues that the surveillance advertising system is “rigging the game against small businesses” because it embeds platform monopolies — which is a neat counterpoint to tech giants’ defensive claim that creepy ads somehow level the playing field for SMEs vs larger brands.

“While Facebook and Google portray themselves as lifelines for small businesses, the truth is they’re simply charging monopoly rents for access to the digital economy,” they write, arguing that the duopoly’s “surveillance-driven stranglehold over the ad market leaves the little guys with no leverage or choice” — opening them up to exploitation by big tech.

The current market structure — with Facebook and Google controlling close to 60% of the US ad market — is thus stifling innovation and competition, they further assert.

“Instead of being a boon for online publishers, surveillance advertising disproportionately benefits Big Tech platforms,” they go on, noting that Facebook made $84.2BN in 2020 ad revenue and Google made $134.8BN off advertising “while the surveillance ad industry ran rife with allegations of fraud”.

The campaign being kicked off is by no means the first call for a ban on behavioral advertising but given how many signatories are backing this one it’s a sign of the scale of the momentum building against a data-harvesting business model that has shaped the modern era and allowed a couple of startups to metamorphose into society- and democracy-denting giants.

That looks important as US lawmakers are now paying close attention to big tech impacts — and have a number of big tech antitrust cases actively on the table. Although it was European privacy regulators that were among the first to sound the alarm over microtargeting’s abusive impacts and risks for democratic societies.

Back in 2018, in the wake of the Facebook data misuse and voter targeting scandal involving Cambridge Analytica, the UK’s ICO called for an ethical pause on the use of online ad tools for political campaigning — penning a report entitled Democracy Disrupted? Personal information and political influence.

It’s no small irony that the self-same regulator has so far declined to take any action against the adtech industry’s unlawful use of people’s data — despite warning in 2019 that behavioral advertising is out of control.

The ICO’s ongoing inaction seems likely to have fed into the UK government’s decision that a dedicated unit is required to oversee big tech.

In recent years the UK has singled out the online ad space for antitrust concern — saying it will establish a pro-competition regulator to tackle big tech’s dominance, following a market study of the digital advertising sector carried out in 2019 by its Competition and Markets Authority which reported substantial concerns over the power of the adtech duopoly.

Last month, meanwhile, the European Union’s lead data protection supervisor urged not a pause but a ban on targeted advertising based on tracking internet users’ digital activity — calling on regional lawmakers to incorporate the lever into a major reform of digital services rules which is intended to boost operators’ accountability, among other goals.

The European Commission’s proposal had avoided going so far. But negotiations over the Digital Services Act and Digital Markets Act are ongoing.

Last year the European Parliament also backed a tougher stance on creepy ads. Again, though, the Commission’s framework for tackling online political ads does not suggest anything so radical — with EU lawmakers pushing for greater transparency instead.

It remains to be seen what US lawmakers will do but with US civil society organizations joining forces to amplify an anti-ad-targeting message there’s rising pressure on them to clean up toxic adtech in their own backyard.

Commenting in a statement on the coalition’s website, Zephyr Teachout, an associate professor of law at Fordham Law School, said: “Facebook and Google possess enormous monopoly power, combined with the surveillance regimes of authoritarian states and the addiction business model of cigarettes. Congress has broad authority to regulate their business models and should use it to ban them from engaging in surveillance advertising.”

“Surveillance advertising has robbed newspapers, magazines, and independent writers of their livelihoods and commoditized their work — and all we got in return were a couple of abusive monopolists,” added David Heinemeier Hansson, creator of Ruby on Rails, in another supporting statement. “That’s not a good bargain for society. By banning this practice, we will return the unique value of writing, audio, and video to the people who make it rather than those who aggregate it.”

With US policymakers paying increasingly close attention to adtech, it’s interesting to see Google is accelerating its efforts to replace support for individual-level tracking with what it’s branded as a ‘privacy-safe’ alternative (FLoC).

Yet the tech it’s proposed via its Privacy Sandbox will still enable groups (cohorts) of web users to be targeted by advertisers, with ongoing risks for discrimination, the targeting of vulnerable groups of people and societal-scale manipulation — so lawmakers will need to pay close attention to the detail of the ‘Privacy Sandbox’ rather than Google’s branding.

“This is, in a word, bad for privacy,” warned the EFF, writing about the proposal back in 2019. “A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate.”

“FLoC is the opposite of privacy-preserving technology,” it added. “Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google’s future, they will sit back, relax, and let your browser do the work for them.”

#advertising-tech, #behavioral-ads, #facebook, #google, #microtargeting, #misinformation, #online-ads, #policy, #privacy, #surveillance


One company wants to sell the feds location data from every car on Earth

Cars driving down I-80 in Berkeley, California, in May 2018, when there were still places to go. (credit: David Paul Morris | Bloomberg | Getty Images)

There is a strange sort of symmetry in the world of personal data this week: one new report has identified a company that wants to sell the US government granular car location data from basically every vehicle in the world, while a group of privacy advocates is suing another company for providing customer data to the feds.

A surveillance contractor called Ulysses can “remotely geolocate vehicles in nearly every country except for North Korea and Cuba on a near real-time basis,” Vice Motherboard reports.

Ulysses obtains vehicle telematics data from embedded sensors and communications sensors that can transmit information such as seatbelt status, engine temperature, and current vehicle location back to automakers or other parties.


#data-privacy, #location-data, #personal-data, #policy, #privacy, #surveillance, #ulysses


Hackers access security cameras inside Cloudflare, jails, and hospitals

Hackers say they broke into the network of Silicon Valley startup Verkada and gained access to live video feeds from more than 150,000 surveillance cameras the company manages for Cloudflare, Tesla, and a host of other organizations.

The group published videos and images they said were taken from offices, warehouses, and factories of those companies as well as from jail cells, psychiatric wards, banks, and schools. Bloomberg News, which first reported the breach, said footage viewed by a reporter showed staffers at Florida hospital Halifax Health tackling a man and pinning him to a bed. Another video showed a handcuffed man in a police station in Stoughton, Massachusetts, being questioned by officers.

“I don’t think the claim ‘we hacked the internet’ has ever been as accurate as now,” Tillie Kottmann, a member of a hacker collective calling itself APT 69420 Arson Cats, wrote on Twitter.


#biz-it, #hacking, #privacy, #security-cameras, #surveillance, #tech


A race to reverse engineer Clubhouse raises security concerns

As live audio chat app Clubhouse ascends in popularity around the world, concerns about its data practices also grow.

The app is currently only available on iOS, so some developers set out in a race to create Android, Windows and Mac versions of the service. While these endeavors may not be ill-intentioned, the fact that it takes programmers little effort to reverse engineer and fork Clubhouse — that is, when developers create new software based on its original code — is sounding an alarm about the app’s security.

The common goal of these unofficial apps, as of now, is to broadcast Clubhouse audio feeds in real-time to users who cannot access the app otherwise because they don’t have an iPhone. One such effort is called Open Clubhouse, which describes itself as a “third-party web application based on flask to play Clubhouse audio.” The developer confirmed to TechCrunch that Clubhouse blocked its service five days after its launch without providing an explanation.

“[Clubhouse] asks a lot of information from users, analyzes those data and even abuses them. Meanwhile, it restricts how people use the app and fails to give them the rights they deserve. To me, this constitutes monopoly or exploitation,” said Open Clubhouse’s developer nicknamed AiX.

Clubhouse could not immediately be reached for comment on this story.

AiX wrote the program “for fun” and wanted it to broaden Clubhouse’s access to more people. Another similar effort came from a developer named Zhuowei Zhang, who created Hipster House to let those without an invite browse rooms and users, and those with an invite to join rooms as a listener though they can’t speak — Clubhouse is invite-only at the moment. Zhang stopped developing the project, however, after noticing a better alternative.

These third-party services, despite their innocuous intentions, can be exploited for surveillance purposes, as Jane Manchun Wong, a researcher known for uncovering upcoming features in popular apps through reverse engineering, noted in a tweet.

“Even if the intent of that webpage is to bring Clubhouse to non-iOS users, without a safeguard, it could be abused,” said Wong, referring to a website rerouting audio data from Clubhouse’s public rooms.

Clubhouse lets people create public chat rooms, which are available to any user who joins before a room reaches its maximum capacity, and private rooms, which are only accessible to room hosts and users authorized by the hosts.

But not all users are aware of the open nature of Clubhouse’s public rooms. During its brief window of availability in China, the app was flooded with mainland Chinese users debating politically sensitive issues from Taiwan to Xinjiang, which are heavily censored in Chinese cyberspace. Some vigilant Chinese users speculated about the possibility of being questioned by the police for delivering sensitive remarks. While no such event has been publicly reported, the Chinese authorities have banned the app since February 8.

Clubhouse’s design is by nature at odds with the state of communication it aims to achieve. The app encourages people to use their real identity — registration requires a phone number and an existing user’s invite. Inside a room, everyone can see who else is there. This setup instills trust and comfort in users when they speak as if speaking at a networking event.

But the third-party apps that are able to extract Clubhouse’s audio feeds show that the app isn’t even semi-public: It’s public.

More troublesome is that users can “ghost listen,” as developer Zerforschung found. That is, users can hear a room’s conversation without having their profile displayed to the room participants. Eavesdropping is made possible by establishing communication directly with Agora, a service provider employed by Clubhouse. As multiple security researchers found, Clubhouse relies on Agora’s real-time audio communication technology. Sources have also confirmed the partnership with TechCrunch.

Some technical explanation is needed here. When a user joins a chatroom on Clubhouse, the app makes a request to Agora’s infrastructure, as the Stanford Internet Observatory discovered. To make the request, the user’s phone contacts Clubhouse’s application programming interface (API), which then creates “tokens” (short-lived credentials that authorize an action) to establish a communication pathway for the app’s audio traffic.

Now, the problem is there can be a disconnect between Clubhouse and Agora, allowing the Clubhouse end, which manages user profiles, to be inactive while the Agora end, which transmits audio data, remains active, as technology analyst Daniel Sinclair noted. That’s why users can continue to eavesdrop on a room without having their profile displayed to the room’s participants.
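To make that two-step flow easier to picture, here is a minimal, purely illustrative Python sketch. Everything in it (the endpoint URL, field names and function names) is an assumption made for illustration rather than Clubhouse’s or Agora’s real API; it simply mirrors the architecture described above, in which the app’s own backend issues a token and a separate media provider carries the audio, so the two planes can fall out of sync.

```python
# Purely illustrative sketch of the flow described above.
# All endpoint URLs, field names and functions are hypothetical,
# not Clubhouse's or Agora's actual APIs.
import requests

APP_API = "https://app-backend.example.invalid"  # hypothetical app backend


def join_room(room_id: str, session_token: str) -> dict:
    """Step 1: the client asks the app's own backend to join a room.

    The backend records the user's presence in the room and returns a
    short-lived token authorising the client to pull the room's audio
    from the third-party media provider.
    """
    resp = requests.post(
        f"{APP_API}/join_channel",
        headers={"Authorization": session_token},
        json={"channel": room_id},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"channel": ..., "audio_token": ...}


def listen(channel: str, audio_token: str) -> None:
    """Step 2: open the audio session directly with the media provider.

    If this media session stays alive after the user's presence record on
    the app backend has lapsed, the user keeps hearing the room without
    being shown in it, i.e. the 'ghost listening' problem noted above.
    """
    # Placeholder for a real-time audio SDK call.
    print(f"Streaming audio for {channel} using token {audio_token[:8]}...")


# Illustrative usage (would fail against the fake endpoint above):
# info = join_room("room-123", "user-session-token")
# listen(info["channel"], info["audio_token"])
```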

The Agora partnership has sparked other forms of worries. The company, which operates mainly from the U.S. and China, noted in its IPO prospectus that its data may be subject to China’s cybersecurity law, which requires network operators in China to assist police investigations. That possibility, as the Stanford Internet Observatory points out, is contingent on whether Clubhouse stores its data in China.

While the Clubhouse API is banned in China, the Agora API appears unblocked. Tests by TechCrunch find that users currently need a VPN to join a room, an action managed by Clubhouse, but can listen to the room conversation, which is facilitated by Agora, with the VPN off. What’s the safest way for China-based users to access the app, given the official attitude is that it should not exist? It’s also worth noting that the app was not available on the Chinese App Store even before its ban, and Chinese users had downloaded the app through workarounds.

The Clubhouse team may be overwhelmed by data questions in the past few days, but these early observations from researchers and hackers may urge it to fix its vulnerabilities sooner, paving its way to grow beyond its several million loyal users and $1 billion valuation mark.

#audio, #clubhouse, #privacy, #security, #social-audio, #social-networking, #surveillance, #tc, #voice-chat


Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban the use of facial recognition software by its police department, growing the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, all 13 members of the city council voted in favor of the ban, with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment in favor of more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned not only that AI-powered facial recognition systems would disproportionately target communities of color, but also that the technology has demonstrated technical shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition and also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though it didn’t include a similar provision for private companies.

#clearview-ai, #facial-recognition, #government, #minnesota, #surveillance, #tc


Privacy complaint targets European parliament’s COVID-19 test-booking site

The European Parliament is being investigated by the EU’s lead data regulator over a complaint that a website it set up for MEPs to book coronavirus tests may have violated data protection laws.

The complaint, which has been filed by six MEPs and is being supported by the privacy campaign group noyb, alleges third party trackers were dropped without proper consent and that cookie banners presented to visitors were confusing and deceptively designed.

It also alleges personal data was transferred to the US without a valid legal basis, making reference to a landmark legal ruling by Europe’s top court last summer (aka Schrems II).

The European Data Protection Supervisor (EDPS), which oversees EU institutions’ compliance with data rules, confirmed receipt of the complaint and said it has begun investigating.

It also said the “litigious cookies” had been disabled following the complaints, adding that the parliament told it no user data had in fact been transferred outside the EU.

“A complaint was indeed filed by some MEPs about the European Parliament’s coronavirus testing website; the EDPS has started investigating it in accordance with Article 57(1)(e) EUDPR (GDPR for EU institutions),” an EDPS spokesman told TechCrunch. “Following this complaint, the Data Protection Office of the European Parliament informed the EDPS that the litigious cookies were now disabled on the website and confirmed that no user data was sent to outside the European Union.”

“The EDPS is currently assessing this website to ensure compliance with EUDPR requirements. EDPS findings will be communicated to the controller and complainants in due course,” it added.

MEP Alexandra Geese, of Germany’s Greens, filed the initial complaint with the EDPS on behalf of other parliamentarians.

Two of the MEPs that have joined the complaint and are making their names public are Patrick Breyer and Mikuláš Peksa — both members of the Pirate Party, in Germany and the Czech Republic respectively.

We’ve reached out to the European Parliament and the company it used to supply the testing website for comment.

The complaint is noteworthy for a couple of reasons. Firstly because the allegations of a failure to uphold regional data protection rules look pretty embarrassing for an EU institution. Data protection may also feel especially important for “politically exposed persons like Members and staff of the European Parliament”, as noyb puts it.

Back in 2019 the European Parliament was also sanctioned by the EDPS over use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections — in the regulator’s first ever such enforcement of an EU institution.

So it’s not the first time the parliament has gotten into hot water over its attention to detail vis-a-vis third party data processors (the parliament’s COVID-19 test registration website is being provided by a German company called Ecolog Deutschland GmbH). Once may be an oversight, twice starts to look sloppy…

Secondly, the complaint could offer a relatively quick route for a referral to the EU’s top court, the CJEU, to further clarify interpretation of Schrems II — a ruling that has implications for thousands of businesses involved in transferring personal data out of the EU — should there be a follow-on challenge to a decision by the EDPS.

“The decisions of the EDPS can be directly challenged before the Court of Justice of the EU,” noyb notes in a press release. “This means that the appeal can be brought directly to the highest court of the EU, in charge of the uniform interpretation of EU law. This is especially interesting as noyb is working on multiple other cases raising similar issues before national DPAs.”

Guidance for businesses transferring data out of the EU that are trying to understand how (or often whether) they can comply with data protection law post-Schrems II is so far limited to what EU regulators have put out.

Further interpretation by the CJEU could bring more clarity, and, depending on how the cookie crumbles (if you’ll pardon the pun), less wiggle room for processors wanting to keep schlepping Europeans’ data over the pond legally.

noyb notes that the complaint asks the EDPS to prohibit transfers that violate EU law.

“Public authorities, and in particular the EU institutions, have to lead by example to comply with the law,” said Max Schrems, honorary chairman of noyb, in a statement. “This is also true when it comes to transfers of data outside of the EU. By using US providers, the European Parliament enabled the NSA to access data of its staff and its members.”

Per the complaint, concerns about third party trackers and data transfers were initially raised to the parliament last October — after an MEP used a tracker scanning tool to analyze the COVID-19 test booking website and found a total of 150 third-party requests and a cookie were placed on her browser.

Specifically, the EcoCare COVID-19 testing registration website was found to drop a cookie from the US-based company Stripe, as well as including many more third-party requests from Google and Stripe.
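For illustration, a rough sketch of the kind of static scan described above might look like the following. The placeholder URL and the tag list are assumptions; a real audit, like the one the MEP ran, would also need to execute JavaScript and record live network requests, which this sketch does not do.

```python
# Rough sketch of a static third-party scan. The URL is a placeholder, and this
# only parses HTML; a real audit would also execute JavaScript and log the
# network requests the page actually makes.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class ResourceCollector(HTMLParser):
    """Collects src/href URLs of embedded scripts, images, iframes and links."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img", "iframe") and attrs.get("src"):
            self.urls.append(attrs["src"])
        if tag == "link" and attrs.get("href"):
            self.urls.append(attrs["href"])


page_url = "https://example.com/"  # placeholder, not the actual booking site
html = urlopen(page_url).read().decode("utf-8", errors="replace")

collector = ResourceCollector()
collector.feed(html)

first_party = urlparse(page_url).hostname
third_party_hosts = {
    urlparse(urljoin(page_url, u)).hostname
    for u in collector.urls
    if urlparse(urljoin(page_url, u)).hostname not in (None, first_party)
}
print(sorted(third_party_hosts))  # hosts the page pulls resources from
```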

The complaint also notes that a data protection notice on the site informed users that data on their usage generated by the use of Google Analytics is “transmitted to and stored on a Google server in the US”.

Where consent was concerned, the site was found to serve users with two different, conflicting data protection notices, one of which contained a (presumably copy-pasted) reference to Brussels Airport.

Different consent flows were also presented, depending on the user’s region, with some visitors being offered no clear opt-out button. The cookie notices were also found to contain a ‘dark pattern’ nudge toward a bright green button for ‘accepting all’ processing, as well as confusing wording around the alternatives.

A screengrab of the cookie consent prompt that the parliament’s COVID-19 test booking website displayed at the time of writing – with still no clearly apparent opt-out for non-essential cookies (Image credit: TechCrunch)

The EU has stringent requirements for (legally) gathering consent for (non-essential) cookies and other third-party tracking technologies, which state that consent must be clearly informed, specific and freely given.

In 2019, Europe’s top court further confirmed that consent must be obtained prior to dropping non-essential trackers. (Health-related data also generally carries a higher consent-bar to process legally in the EU, although in this case the personal information relates to appointment registrations rather than special category medical data).

The complaints allege that EU cookie consent requirements are not being met on the website.

The presence of requests to US-based services (and the reference to storing data in the US) is, meanwhile, a legal problem in light of the Schrems II judgement.

The US no longer enjoys legally frictionless flows of personal data out of the EU after the CJEU torpedoed the adequacy arrangement the Commission had granted (invalidating the EU-US Privacy Shield mechanism), which in turn means transfers of EU people’s data to US-based companies are complicated.

Data controllers are responsible for assessing each such proposed transfer, on a case by case basis. A data transfer mechanism called Standard Contractual Clauses was not invalidated by the CJEU. But the court made it clear SCCs can only be used for transfers to third countries where data protection is essentially equivalent to the legal regime offered in the EU — doing so at the same time as saying the US does not meet that standard.

Guidance from the European Data Protection Board in the wake of the ruling suggests that some EU-US data transfers may be possible to carry out in compliance with European law, such as those that involve encrypted data to which the receiving US-based entity has no access.

However the bar for compliance varies depending on the specific context and case.

Additionally, for a subset of companies that are definitely subject to US surveillance law (such as Google) the compliance bar may be impossibly high — as surveillance law is the main legal sticking point for EU-US transfers.

So, once again, it’s not a good look for the parliament to have had a notice on its COVID-19 testing website saying personal data would be transferred to a Google server in the US. (Even if that functionality had not been activated, as seems to have been claimed.)

Another reason the complaint against the European Parliament is noteworthy is that it further highlights how much web infrastructure in use within Europe could be risking legal sanction for failing to comply with regional data protection rules. If the European Parliament can’t get it right, who is?

noyb filed a raft of complaints against EU websites last year which it had identified still sending data to the US via Google Analytics and/or Facebook Connect integrations a short while after the Schrems II ruling. (Those complaints are being looked into by DPAs across the EU.)

Facebook’s EU data transfers are also very much on the hook here. Earlier this month the tech giant’s lead EU data regulator agreed to ‘swiftly resolve’ a long-standing complaint over its transfers.

Schrems filed that complaint all the way back in 2013. He told us he expects the case to be resolved this year, likely within around six to nine months. So a final decision should come in 2021.

He has previously suggested the only way for Facebook to fix the data transfers issue is to federate its service, storing European users’ data locally. While last year the tech giant was forced to deny it would shut its service in Europe if its lead EU regulator followed through on enforcing a preliminary order to suspend transfers (which it blocked by applying for a judicial review of the Irish DPC’s processes).

The alternative outcome Facebook has been lobbying for is some kind of a political resolution to the legal uncertainty clouding EU-US data transfers. However the European Commission has warned there’s no quick fix — and reform of US surveillance law is needed.

So with options for continued icing of EU data protection enforcement against US tech giants melting fast, in the face of bar-setting CJEU rulings and ongoing strategic litigation like this latest noyb-supported complaint, pressure is only going to keep building for pro-privacy reform of US surveillance law. Not that Facebook has openly come out in support of reforming FISA yet.

#cookie-consent, #covid-19, #data-protection, #europe, #european-parliament, #noyb, #privacy, #schrems-ii, #surveillance, #tc


Teledyne to acquire FLIR in $8 billion cash and stock deal

Industrial sensor giant Teledyne is set to acquire sensing company FLIR in a deal valued at around $8 billion in a mix of stock and cash, pending approvals, with an expected closing date sometime in the middle of this year. While both companies make sensors, aimed primarily at industrial and commercial customers, they actually focus on different specialties, which Teledyne said in a press release makes FLIR’s business complementary to, rather than competitive with, its existing offerings.

FLIR’s technology has appeared in the consumer market via add-on thermal cameras designed for mobile devices, including the iPhone. These are useful for things like identifying the source of drafts and potential plumbing leaks, but the company’s main business, which includes not only thermal imaging but also visible light imaging, video analytics and threat detection technology, serves deep-pocketed customers including the aerospace and defense industries.

Teledyne also serves aerospace and defense customers, including NASA, as well as healthcare, marine and climate monitoring agencies. The company’s suite of offerings includes seismic sensors, oscilloscopes and other instrumentation, as well as digital imaging, but FLIR’s products cover some areas not currently addressed by Teledyne, and in more depth.

#aerospace, #california, #companies, #digital-imaging, #flir, #healthcare, #imaging, #iphone, #mobile-devices, #surveillance, #tc, #thermal-imaging


How your digital trails wind up in the hands of the police

(Image credit: Tracy J. Lee | Getty Images)

Michael Williams’ every move was being tracked without his knowledge—even before the fire. In August, Williams, an associate of R&B star and alleged rapist R. Kelly, allegedly used explosives to destroy a potential witness’s car. When police arrested Williams, the evidence cited in a Justice Department affidavit was drawn largely from his smartphone and online behavior: text messages to the victim, cell phone records, and his search history.

The investigators served Google a “keyword warrant,” asking the company to provide information on any user who had searched for the victim’s address around the time of the arson. Police narrowed the search, identified Williams, then filed another search warrant for two Google accounts linked to him. They found other searches: the “detonation properties” of diesel fuel, a list of countries that do not have extradition agreements with the US, and YouTube videos of R. Kelly’s alleged victims speaking to the press. Williams has pleaded not guilty.


#policy, #privacy, #surveillance


SARS-CoV-2’s spread to wild mink not yet a reason to panic

Image of a mink at the base of a tree. (Image credit: Eric Sonstroem / Flickr)

Did anyone have “mink farms” on their 2020 catastrophe bingo cards? It turns out that the SARS-CoV-2 virus readily spreads to mink, leading to outbreaks on mink farms in Europe and the United States. Denmark responded by culling its entire mink population, which naturally went wrong as mink bodies began resurfacing from their mass graves, forcing the country to rebury them. Because 2020 didn’t seem apocalyptic enough.

More seriously, health authorities are carefully monitoring things like mink farms because the spread of the virus to our domesticated animals raises two risks. One is that the virus will be under different evolutionary selection in these animals, producing mutant strains that then pose different risks if they transfer back to humans. So far, fortunately, that seems not to be happening. The second risk is that these animals will provide a reservoir from which the virus can spread back to humans, circumventing pandemic control focused on human interactions.

Heightening those worries, mid-December saw a report that the US Department of Agriculture had found a wild mink near a mink farm that had picked up the virus, presumably from its domesticated peers. Fortunately, so far at least, the transfer to wild populations seems very limited.


#biology, #covid-19, #disease, #medicine, #mink, #sars-cov-2, #science, #surveillance


Kazakhstan spies on citizens’ HTTPS traffic; browser makers fight back

Surveillance camera peering into a laptop computer. (Image credit: Thomas Jackson | Stone | Getty Images)

Google, Mozilla, Apple, and Microsoft said they’re joining forces to stop Kazakhstan’s government from decrypting and reading HTTPS-encrypted traffic sent between its citizens and overseas social media sites.

All four of the companies’ browsers recently received updates that block a root certificate the government has been requiring some citizens to install. The self-signed certificate caused traffic sent to and from select websites to be encrypted with a key controlled by the government. Under industry standards, HTTPS keys are supposed to be private and under the control of the site operator only.
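Conceptually, refusing a specific root certificate comes down to rejecting any chain that contains a certificate with a known fingerprint. The sketch below illustrates that idea only; the PEM and the blocklisted fingerprint are placeholders, and real browsers implement the block inside their root store and revocation machinery rather than in application code like this.

```python
# Conceptual sketch: reject any certificate chain containing a blocklisted root,
# identified by its SHA-256 fingerprint. The fingerprint and test certificate
# below are placeholders, not the actual Kazakh root.
import base64
import hashlib


def pem_to_der(pem: str) -> bytes:
    body = "".join(
        line for line in pem.splitlines()
        if line and not line.startswith("-----")
    )
    return base64.b64decode(body)


def sha256_fingerprint(pem: str) -> str:
    return hashlib.sha256(pem_to_der(pem)).hexdigest()


BLOCKED_ROOT_FINGERPRINTS = {
    "00" * 32,  # placeholder standing in for the blocked government root
}


def chain_is_acceptable(chain_pems):
    return all(
        sha256_fingerprint(cert) not in BLOCKED_ROOT_FINGERPRINTS
        for cert in chain_pems
    )


fake_cert = "-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
print(chain_is_acceptable([fake_cert]))  # True: its fingerprint is not blocklisted
```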

A thread on Mozilla’s bug-reporting site first reported the certificate in use on December 6. The Censored Planet website later reported that the certificate worked against dozens of Web services, mostly belonging to Google, Facebook, and Twitter, and identified the specific sites affected.


#biz-it, #censorship, #encryption, #https, #policy, #spying, #surveillance


Massachusetts governor won’t sign police reform bill with facial recognition ban

Massachusetts Governor Charlie Baker has returned a police reform bill back to the state legislature, asking lawmakers to strike out several provisions — including one for a statewide ban on police and public authorities using facial recognition technology, the first of its kind in the United States.

The bill, which also banned police from using rubber bullets and tear gas, was passed on December 1 by both the state’s House and Senate after senior lawmakers overcame months of deadlock to reach a consensus. Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man killed by a white Minneapolis police officer who was later charged with his murder.

Baker said in a letter to lawmakers that he objected to the ban, saying the use of facial recognition helped to convict several criminals, including a child sex offender and a double murderer.

In an interview with The Boston Globe, Baker said that he’s “not going to sign something that is going to ban facial recognition.”

Under the bill, police and public agencies across the state would be prohibited from using facial recognition, with a single exception to run facial recognition searches against the state’s driver license database with a warrant. The state would be required to publish annual transparency figures on the number of searches made by officers going forward.

The Massachusetts House voted 92-67 to pass the bill, and the Senate voted 28-12; neither was a veto-proof majority.

The Boston Globe said that Baker did not outright say he would veto the bill. After the legislature hands a revised (or the same) version of the bill back to the governor, it’s up to Baker to sign it, veto it or, under Massachusetts law, allow it to become law without his signature by waiting 10 days.

“Unchecked police use of surveillance technology also harms everyone’s rights to anonymity, privacy, and free speech. We urge the legislature to reject Governor Baker’s amendment and to ensure passage of commonsense regulations of government use of face surveillance,” said Carol Rose, executive director of the ACLU of Massachusetts.

A spokesperson for Baker’s office did not immediately return a request for comment.

#driver, #facial-recognition, #george-floyd, #government, #governor, #learning, #massachusetts, #officer, #security, #senate, #spokesperson, #surveillance, #video-surveillance


German secure email provider Tutanota forced to monitor an account, after regional court ruling

German e2e encrypted email provider Tutanota has been ordered by a regional court to develop a function that allows it to monitor an individual account.

The encrypted email service provider has been fighting a number of such orders in its home country.

The ruling, which was reported in the German press late last month, contradicts an earlier Hanover court finding that Tutanota, a provider of web-based email, is not a telecommunications service.

The order by the Cologne court comes under a German law (known as “TKG”) which requires telecommunications service providers to disclose data to law enforcement/intelligence agencies if they receive a lawful intercept request.

The Cologne court ruling also runs counter to a 2019 decision by Europe’s top court, the CJEU, which found that another web-based email service, Gmail, is not an ‘electronic communications service’ as defined in EU law — meaning it can’t be subject to common EU rules for telcos.

Tutanota co-founder Matthias Pfau described the Cologne ruling as “absurd” — and confirmed it’s appealing.

“The argumentation is as follows: Although we are no longer a provider of telecommunications services, we would be involved in providing telecommunications services and must therefore still enable telecommunications and traffic data collection,” he told TechCrunch.

“From our point of view — and German law experts agree with us — this is absurd. Neither does the court state what telecommunications service we are involved in nor do they name the actual provider of the telecommunications service.

“The telecommunications service cannot be email, because we provide it completely ourselves. And if we were to participate, we would have to have a business relationship with the actual provider.”

Despite the absurdity of a regional court treating an email provider as an ISP — in apparent contradiction of earlier CJEU guidance — Tutanota is nonetheless required to comply with the order, and develop a surveillance function for the specific inbox, while its appeal continues.

A spokeswoman for Tutanota confirmed it has told the court it will develop the function by the end of this year — whereas she suggested its appeals process is likely to take “months” more to run its course.

“We are going to the higher court in parallel. We are already preparing an appeal to the Bundesgerichtshof [Germany’s Federal Court of Justice],” she added.

The Cologne court order is for a surveillance function to be implemented on a single Tutanota account that had been used for an extortion attempt. The Tutanota spokeswoman said the monitoring function will only apply to future emails this account receives — it will not affect emails previously received.

She added that the account in question appears to no longer be in use.

While after-the-fact monitoring seems unlikely to make any difference to the specific case, the suspicion is that the court wants to set a precedent, raising the hackles of security watchers who are worried about the risk of digital service providers being compelled to bake backdoors into their services in the region.

Last month a draft resolution of the Council of the European Union triggered substantial concern that EU lawmakers are considering a ban on e2e encryption as part of an anti-terrorism security push. However the draft document discussed only “lawful and targeted access” — while expressing support for “strong encryption”.

Returning to the Tutanota surveillance order, it can only be made to apply to unencrypted emails linked to the specific account.

This is because the email service provider applies e2e encryption to its own users’ content — meaning it does not hold decryption keys so is unable to decrypt the data — though it also allows users to receive emails from email services that do not apply e2e encryption (hence it can be compelled to provide that data in plain text).
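A toy model may help illustrate why the order can only reach part of the mail. The sketch below is an assumption-laden stand-in rather than Tutanota’s actual architecture: mail that arrives already end-to-end encrypted is opaque to the server, while mail arriving as plaintext over ordinary SMTP is visible for a moment and can be copied by a court-ordered tap before being stored.

```python
# Toy model, not Tutanota's real architecture: mail that arrives already
# end-to-end encrypted is an opaque blob to the server, while mail arriving as
# plaintext over SMTP is briefly visible and can be copied by a monitoring tap.
from dataclasses import dataclass, field


@dataclass
class SealedForUser:
    """Ciphertext only the user's client can open; the server holds no key."""
    blob: bytes


@dataclass
class MailServer:
    tap_enabled: bool = False
    tap_log: list = field(default_factory=list)
    mailbox: list = field(default_factory=list)

    def receive_e2e(self, sealed: SealedForUser) -> None:
        # Already encrypted by the sender's client: nothing readable to log.
        self.mailbox.append(sealed)

    def receive_plaintext_smtp(self, body: str) -> None:
        if self.tap_enabled:
            # A court-ordered monitoring function can only ever see this path.
            self.tap_log.append(body)
        # Stand-in: a real system would now encrypt the body for the user.
        self.mailbox.append(SealedForUser(blob=body.encode()))


server = MailServer(tap_enabled=True)
server.receive_e2e(SealedForUser(blob=b"\x8f\x01\x42"))  # opaque to the server
server.receive_plaintext_smtp("unencrypted external mail")
print(server.tap_log)  # only the plaintext SMTP message shows up here
```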

However, if the EU were to legislate to compel e2e encryption service providers to provide decrypted data in response to lawful intercept requests, it would effectively outlaw the use of e2e encryption.

That’s the scenario of most concern — though no such law has yet been proposed by any EU institutions. (And would very likely face fierce opposition in the European parliament, as well as more broadly, from academia, civil society, consumer protection, and privacy and digital rights groups, among others.)

“According to the ruling of the Cologne Regional Court, we were obliged to release unencrypted incoming and outgoing emails from one mailbox. Emails that are encrypted end-to-end in Tutanota cannot be decrypted by us, not even after the court order,” noted Pfau.

“Tutanota is one of the few mail providers that encrypts the entire mailbox, also calendar and contacts. The encrypted data cannot be decrypted by us, because only the user has the key to decrypt it.”

“This decision shows again why end-to-end encryption is so important,” he added. 

#e2e-encryption, #europe, #germany, #security, #surveillance, #tutanota


Feds logged website visitors in 2019, citing Patriot Act authority

(Image credit: Peter Dazeley | Getty Images)

The federal government gathered up visitor logs for some websites in 2019, the Office of the Director of National Intelligence disclosed in letters made public this week. And the feds cited authority derived from a provision of the Patriot Act to do it.

Director of National Intelligence John Ratcliffe confirmed these actions in a November 6 letter to Sen. Ron Wyden (D-Ore.), part of an exchange (PDF) first obtained and published by the New York Times.

The exchange begins with a May 20 letter from Wyden to the ODNI asking then-director Richard Grenell to explain if and how the federal government uses section 215 of the Patriot Act to obtain IP addresses and other Web browsing information. At the time, the Senate had just passed legislation re-authorizing the law. Wyden was among the privacy advocates in the Senate pushing to amend the law to prevent the FBI from using Section 215 to obtain users’ search and browsing histories, but his measure did not succeed.


#patriot-act, #policy, #privacy, #section-215, #surveillance


Massachusetts lawmakers vote to pass a statewide police ban on facial recognition

Massachusetts lawmakers have voted to pass a new police reform bill that will ban police departments and public agencies from using facial recognition technology across the state.

The bill was passed by both the state’s House and Senate on Tuesday, a day after senior lawmakers announced an agreement that ended months of deadlock.

The police reform bill also bans the use of chokeholds and rubber bullets, limits the use of chemical agents like tear gas, and allows police officers to intervene to prevent the use of excessive and unreasonable force. But following objections from police groups, the bill does not remove qualified immunity, a controversial doctrine that shields serving police from legal action for misconduct.

Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man killed by a white Minneapolis police officer who has since been charged with his murder.

Critics have for years complained that facial recognition technology is flawed, biased, and disproportionately misidentifies people and communities of color. But the bill grants police an exception to run facial recognition searches against the state’s driver’s license database with a warrant. In granting that exception, the state will have to publish annual transparency figures on the number of searches made by officers.

The Massachusetts Senate voted 28-12 to pass, and the House voted 92-67. The bill will now be sent to Massachusetts governor Charlie Baker for his signature.

In the absence of privacy legislation from the federal government, laws curtailing the use of facial recognition are popping up on a state and city level. The patchwork nature of that legislation means that state and city laws have room to experiment, creating an array of blueprints for future laws that can be replicated elsewhere.

Portland, Oregon passed a broad ban on facial recognition tech this September. The ban, one of the most aggressive in the nation, blocks city bureaus from using the technology but will also prohibit private companies from deploying facial recognition systems in public spaces. Months of clashes between protesters and aggressive law enforcement in that city raised the stakes on Portland’s ban.

Earlier bans in Oakland, San Francisco, and Boston focused on forbidding their city governments from using the technology but, like Massachusetts, stopped short of limiting its use by private companies. San Francisco’s ban passed in May of last year, making the international tech hub the first major city to ban the use of facial recognition by city agencies and police departments.

At the same time that cities across the U.S. are acting to limit the creep of biometric surveillance, those same systems are spreading at the federal level. In August, Immigration and Customs Enforcement (ICE) signed a contract for access to a facial recognition database created by Clearview AI, a deeply controversial company that scrapes facial images from online sources, including social media sites.

While most activism against facial recognition only pertains to local issues, at least one state law has proven powerful enough to make waves on a national scale. In Illinois, the Biometric Information Privacy Act (BIPA) has ensnared major tech companies including Amazon, Microsoft and Alphabet for training facial recognition systems on Illinois residents without permission.

#biometrics, #bipa, #facial-recognition, #privacy, #security, #surveillance


San Diego’s spying streetlights stuck switched “on,” despite directive

Two of San Diego’s camera-equipped smart streetlights at twilight in August 2020. (Image credit: Bing Guan | Bloomberg | Getty Images)

Over the past few years, streetlights in the city of San Diego have become “smart,” equipped with a slate of cameras and sensors that report back data on the city and its denizens. Following protests from privacy activists, the city mayor ordered the network disabled for the time being—but it turns out that city staff can’t turn the cameras off just yet without plunging the city into literal darkness.

Thousands of streetlight cameras were supposed to be disabled this fall, the Voice of San Diego reports, but there is no software switch for doing so. In lieu of disabling the cameras, the vendor responsible for them at the time instead simply cut off the city’s network access to the devices.

The Smart Streetlight project began five years ago, in 2015, when San Diego Mayor Kevin Faulconer announced (PDF) a new “partnership” with GE Lighting to deploy “a software-defined lighting technology that will help San Diego solve some of the city’s infrastructure challenges.”


#cameras, #policy, #san-diego, #smart-city, #street-lights, #streetlights, #surveillance


Family tracking app Life360 launches ‘Bubbles,’ a location-sharing feature inspired by teens on TikTok

Helicopter parenting turned into surveillance with the debut of family tracking apps like Life360. While the app can alleviate parental fears when setting younger kids loose in the neighborhood, Life360’s teenage users have hated the app’s location tracking features so much that avoiding and dissing the app quickly became a TikTok meme. Life360 could have ignored the criticism — after all, teens aren’t the app’s paying subscribers; it’s the parents. But Life360 CEO Chris Hulls took a different approach. He created a TikTok account and started a dialogue with the app’s younger users. As a result of these conversations, the company has now launched a new privacy-respecting feature, “Bubbles.”

Bubbles work by allowing any Life360 Circle member to share a circle representing their generalized location instead of their exact whereabouts. To set a bubble, the user adjusts the size of the circle on the map anywhere from 1 to 25 miles in diameter, for a period of 1 to 6 hours. After this temporary bubble is created, Life360’s other existing safety and messaging features remain enabled, but parents won’t be able to see precisely where their teen is, other than somewhere in the bubble.

Image Credits: Life360

For example, a teen could tell their parents they were hanging out with some friends in a given part of town after school, then set a bubble accordingly. But without popping that bubble, the parents wouldn’t know if their teenager was at a friend’s house, out driving around, at a park, out shopping, and so on. The expectation is that parents and teens should communicate with one another rather than relying on cyberstalking. Plus, parents need to respect that teens deserve to have more freedom to make choices, even if they will sometimes break the rules and then have to suffer the consequences.
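As a rough illustration of those mechanics, the sketch below models a bubble as a center point, a diameter and an expiry time, and shows what other Circle members would see while it is active. The names and validation limits are assumptions drawn from the figures quoted above, not Life360’s actual code.

```python
# Illustrative data model only; the names are made up and the 1-25 mile and
# 1-6 hour bounds simply mirror the ranges quoted above, not Life360's code.
import time
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Bubble:
    center_lat: float
    center_lng: float
    diameter_miles: float  # article: 1 to 25 miles
    expires_at: float      # article: 1 to 6 hours from creation

    @classmethod
    def create(cls, lat: float, lng: float, diameter_miles: float, hours: float) -> "Bubble":
        if not 1 <= diameter_miles <= 25:
            raise ValueError("diameter must be between 1 and 25 miles")
        if not 1 <= hours <= 6:
            raise ValueError("duration must be between 1 and 6 hours")
        return cls(lat, lng, diameter_miles, time.time() + hours * 3600)

    def active(self) -> bool:
        return time.time() < self.expires_at


def location_shown(exact: Tuple[float, float], bubble: Optional[Bubble]):
    """What other Circle members see: the bubble while active, else the exact fix."""
    if bubble is not None and bubble.active():
        return ("approximate", bubble.center_lat, bubble.center_lng, bubble.diameter_miles)
    return ("exact", exact[0], exact[1])


bubble = Bubble.create(45.52, -122.68, diameter_miles=10, hours=2)
print(location_shown((45.5231, -122.6765), bubble))  # coarse circle, not the exact point
```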

A location bubble isn’t un-poppable, however. The bubble will burst if a car crash or other emergency is detected, the company says. A parent can also choose to override the setting and pop the bubble for any reason — like if they don’t hear from the teen for a long period of time or suspect the teen may be unsafe. This could encourage a teen to increase their direct communication with a parent in order to reassure them that they are safe, rather than risk their parent turning tracking back on.

But parents are actively discouraged from popping bubbles out of fear. Before the bubble is burst, the app will ask the user if they’re sure they want to do so, also reminding them that the member will be notified that their bubble was burst. This gives parents a moment to pause and reconsider whether it’s really enough of an emergency to break their teen’s trust and privacy.

Image Credits: Life360

The feature isn’t necessarily going to solve the problems for teens who want to sneak out or just be un-tracked entirely, which is where many of the complaints have stemmed from in recent years. Instead, it’s meant to represent a compromise in the battle between adult surveillance of kids’ every move and teenagers’ needs to have more personal freedom.

Hulls says the idea for the new feature was inspired by conversations he had with teens on TikTok about Life360’s issues.

“Teens are a core part of the family unit – and our user base – and we value their input,” said Hulls. “After months of communicating with both parents and teens, I am proud to launch a feature that was designed with the whole family in mind, continuing our mission of redefining how safety is delivered to families,” he added.

Before joining TikTok, the Life360 mobile app had been subject to a downrating campaign where teen users rated the app with just one star in hopes of getting it kicked off the App Store. (Apps are not automatically removed for low ratings, but that hasn’t stopped teens from trying this tactic with anything they don’t like, from Google Classroom’s app to the Trump 2020 app, at times.)

In his TikTok debut, Hulls appeared as Darth Vader then took off the mask to reveal, in his own words, “just your standard, awkward tech CEO.” In the months since, his account has posted and reacted to Life360 memes, answered questions, asked for — and even paid for — helpful user feedback. One of the ideas resulting from the collaboration was “ghost mode,” which is now being referred to at launch as “Bubbles” — a name generated by a TikTok contest to brand the feature.

In addition to sourcing ideas on TikTok, Hulls used the platform to rehabilitate the Life360 brand among teens, explaining how he created the app after Hurricane Katrina to help families reconnect after big emergencies, for example. (True). His videos also suggested that he was now on teens’ side and that building “ghost mode” was going to piss off parents or even lose him his job. (Highly debatable.)

In a related effort, the company posted a YouTube parody video to explain the app’s benefits to parents and teens. The video, suggested to teen users through a notification, hit over a million views in 24 hours.

Many teens, ultimately, came around. “i’m crying he seems so nice,” said one commenter. “ngl it’s the parents not the app,” admitted another.

In other words, the strategy worked. Hulls’ “life360ceo” TikTok account has since gained over 231,000 followers and its videos have been “liked” 6.5 million times. Teens have also turned their righteous anger back to where it may actually belong — at their cyberstalking parents, not the tech enabling the location-tracking.

Bubbles is now part of the most recent version of the Life360 app, a free download on iOS and Android. The company offers an optional upgrade to premium plans for families in need of extra features, like location history, crash detection and roadside assistance, among other things.

Family trackers are a large and growing business. As of June 2020, Life360 had 25 million monthly active users located in more than 195 countries. The company’s annualized monthly revenue was forecasted at $77.9 million, a 26% increase year-over-year.

To celebrate the launch of Bubbles, this past Saturday, Life360 launched a branded Hashtag Challenge on TikTok, #ghostmode, for a $10,000 prize. As of today, the hashtag already has 1.4 billion views.

#apps, #life360, #mobile, #privacy, #surveillance, #tiktok


Russian surveillance tech startup NtechLab nets $13M from sovereign wealth funds

NtechLab, a startup that helps analyze footage captured by Moscow’s 100,000 surveillance cameras, just closed an investment of more than 1 billion rubles ($13 million) to further its global expansion.

The five-year-old company sells software that recognizes faces, silhouettes and actions in videos. It’s able to do so on a vast scale in real time, allowing clients to react promptly to situations. That’s a key “differentiator” of the company, co-founder Artem Kukharenko told TechCrunch.

“There could be systems which can process, for example, 100 cameras. When there are a lot of cameras in a city, [these systems] connect 100 cameras from one part of the city, then disconnect them and connect another hundred cameras in another part of the city, so it’s not so interesting,” he suggested.

The latest round, financed by Russia’s sovereign wealth fund, the Russian Direct Investment Fund, and an undisclosed sovereign wealth fund from the Middle East, certainly carries more strategic than financial importance. The company broke even last year, with revenue reaching $8 million, three times the figure from the previous year, and it expects to finish 2020 at a similar growth pace.

Nonetheless, the new round will enable the startup to develop new capabilities such as automatic detection of aggressive behavior and vehicle recognition as it seeks new customers in its key markets of the Middle East, Southeast Asia and Latin America. City contracts have been a major revenue driver for the firm, but it has plans to woo non-government clients, such as those in the entertainment industry, finance, trade and hospitality.

The company currently boasts clients in 30 cities across 15 countries in the Commonwealth of Independent States (CIS) bloc, Middle East, Latin America, Southeast Asia and Europe.

These customers may procure from a variety of hardware vendors featuring different graphic processing units (GPUs) to carry out computer vision tasks. As such, NtechLab needs to ensure it’s constantly in tune with different GPU suppliers. Ten years ago, Nvidia was the go-to solution, recalled Kukharenko, but rivals such as Intel and Huawei have cropped up in recent times.

The Moscow-based startup began life as consumer software that allowed users to find someone’s online profile by uploading a photo of the person. It later pivoted to video and has since attracted government clients keen to deploy facial recognition in law enforcement. For instance, during the COVID-19 pandemic, the Russian government has used NtechLab’s system to monitor large gatherings and implement access control.

Around the world, authorities have rushed to implement similar forms of public health monitoring and tracking for virus control. While these projects are usually well-meaning, they inspire a much-needed debate around privacy, discrimination, and other consequences brought by the scramble for large-scale data solutions. NtechLab’s view is that when used properly, video surveillance generally does more good than harm.

“If you can monitor people quite [effectively], you don’t need to close all people in the city… The problem is people who don’t respect the laws. When you can monitor these people and [impose] a penalty on them, you can control the situation better,” argued Alexander Kabakov, the other co-founder of the company.

As it expands globally, NtechLab inevitably comes across customers who misuse or abuse its algorithms. While it claimed to keep all customer data private and have no control over how its software is used, the company strives to “create a process that can be in compliance with local laws,” said Kukharenko.

“We vet our partners so we can trust them, and we know that they will not use our technology for bad purposes.”

#ai, #artificial-intelligence, #funding, #russia, #surveillance, #tc


Anduril launches a smarter drone and picks up more money to build a virtual border wall

The company building the virtual border wall has a new version of its stealthy fast-flying drones — and a fresh contract with Customs and Border Protection to match. Anduril, a young defense-friendly tech company from the founder of Oculus, received $36 million from Customs and Border Protection this month for its AI-powered autonomous surveillance towers.

Anduril has flourished over the course of its short Trump-era lifespan, attracting surprising interest from defense agencies considering that the company has only existed for three years. In July, CBP awarded Anduril $25 million for a previous set of surveillance towers. The agency plans to implement 200 towers by 2022 in an ongoing relationship with the contractor worth more than $200 million.

The unusual company is iterating on its hardware innovations quickly, which makes sense for a company founded by Palmer Luckey, the controversial figure who spearheaded consumer VR through Oculus. Luckey, a big Trump booster in tech, attracted plenty of talent from the now Facebook-owned VR company when he struck out on his own with the new venture. The company has also collected a number of former employees from Peter Thiel-founded Palantir, which grew its own federal contract business and is in the process of going public.

While the company kept completely quiet in its early launch days, it’s opened up about its drone capabilities in particular over the last year. Anduril previously did a press push around the launch of a counter-UAS drone it calls “Anvil” that can identify a target and knock it out of the sky. (The company would prefer if you don’t call them “attack drones.”) Now, Anduril is launching the fourth iteration of its small, ultra-quiet “Ghost” drones, adding some key features.

Ghost drones are capable of staying aloft for long stretches and communicating what they see to a central AI-powered nervous system. They combine their data with feeds from Anduril’s sentry towers and any other hardware, relaying it all back to the company’s Lattice software platform, which flags anything of interest. In the case of CBP, that looks like a system autonomously identifying someone crossing the U.S. border and sending a push alert to border agents.

Ghost 4 is the latest version of the Ghost drone, boasting 100 minutes of flight time and a “near-silent acoustic signature” that makes it difficult to detect. The Ghost 4 drones now apparently pack Anduril’s Lattice AI software on board, which allows them to operate and identify potential targets in spots with low connectivity or “contested” areas. The new version of the Ghost drone also allows one operator to command a group of Ghost drones to form a swarm, collecting data across many devices.

According to the company, the Ghost 4 is designed for an array of mission types, including “aerial intelligence, surveillance and reconnaissance, cargo delivery, counter intrusion, signal intelligence and electronic warfare.” With the system’s modular, customizable design, Anduril continues to cast a wide net, though for now it’s mostly won contracts for perimeter and border surveillance.

The company began its work with CBP through pilot programs in Texas and San Diego starting in 2018. By the following year, Anduril had formalized its relationship on the U.S. southern border, with a number of its sentry towers operating in CBP’s San Diego sector, an order for more in Texas and a new pilot program testing a cold weather variation of its hardware at northern border sites in Montana and Vermont.

In July, Anduril announced that it had raised $200 million from investors including Andreessen Horowitz and Thiel’s Founders Fund, bringing its valuation to around $2 billion three years in. “We founded Anduril because we believe there is value in Silicon Valley technology companies partnering with the Department of Defense,” Anduril CEO Brian Schimpf said at the time.

The Department of Defense was exploring use cases with a previous version of the Ghost drone, and it’s clear the company would like to expand that nascent business. It’s not that far off: Anduril landed a $13.5 million contract last year to surround Marine Corps bases in Arizona, Japan and Hawaii with a “virtual ‘digital fortress’” and has recruited talent specifically to liaise with the military. Now that the company’s work is established as a line item in the homeland security budget, the door is open for Anduril to seal the deal on even more lucrative defense work.

#anduril, #defense, #department-of-defense, #surveillance, #tc, #trump-administration


Portland adopts strictest facial recognition ban in nation to date

A helpful neon sign in Portland, Ore. (Image credit: Seth K. Hughes | Getty Images)

City leaders in Portland, Oregon, yesterday adopted the most sweeping ban on facial recognition technology passed anywhere in the United States so far.

The Portland City Council voted on two ordinances related to facial recognition: one prohibiting use by public entities, including the police, and the other limiting its use by private entities. Both measures passed unanimously, according to local NPR and PBS affiliate Oregon Public Broadcasting.

The first ordinance (PDF) bans the “acquisition and use” of facial recognition technologies by any bureau of the city of Portland. The second (PDF) prohibits private entities from using facial recognition technologies “in places of public accommodation” in the city.


#face-recognition, #facial-recognition, #laws, #oregon, #policy, #portland, #privacy, #racism, #surveillance


Portland passes expansive city ban on facial recognition tech

The city council in Portland, Oregon passed legislation Wednesday that’s widely regarded as the most aggressive municipal ban on facial recognition technology so far.

Through a pair of ordinances, Portland will both prohibit city bureaus from using the controversial technology and stop private companies from employing it in public areas. Oakland, San Francisco and Boston have all banned their governments from using facial recognition tech, but Portland’s ban on corporate uses in public spaces breaks new ground.

The draft ordinance proposing the private ban cites the risk of “biases against Black people, women, and older people” baked into facial recognition systems. Evidence of bias in these systems has been widely observed by researchers and even by the U.S. federal government, in a study published late last year. Known flaws in these systems can lead to false positives with serious consequences, given facial recognition’s law enforcement applications.

City Council Commissioner Jo Ann Hardesty linked concerns around high-tech law enforcement tools to ongoing protests in Portland, which have taken place for more than three months. Last month, the U.S. Marshals Service confirmed that it used a small aircraft to surveil crowds near the protest’s epicenter at the Multnomah County Justice Center in downtown Portland.

Hardesty called the decision to ban local law enforcement from employing facial recognition tech “especially important” for the moment Portland now finds itself in.

“No one should have something as private as their face photographed, stored, and sold to third parties for a profit,” Hardesty said. “No one should be unfairly thrust into the criminal justice system because the tech algorithm misidentified an innocent person.”

The ACLU also celebrated Wednesday’s vote as a historic digital privacy win.

“With today’s vote, the community made clear we hold the real power in this city,” ACLU of Oregon Interim Executive Director Jann Carson said. “We will not let Portland turn into a surveillance state where police and corporations alike can track us wherever we go.”

Portland’s dual bans on the public and private use of facial recognition may serve as a roadmap for other cities looking to carve out similar digital privacy policies — an outcome privacy advocates are hoping for.

“Now, cities across the country must look to Portland and pass bans of their own,” Fight for the Future’s Lia Holland said. “We have the momentum, and we have the will to beat back this dangerous and discriminatory technology.”

#digital-privacy, #facial-recognition, #surveillance, #tc


CBP does not make it clear Americans can opt-out of airport face scanning, watchdog says

A government watchdog has criticized U.S. border authorities for failing to properly post notices about the agency’s use of facial recognition at airports, including instructions on how Americans can opt out.

U.S. Customs and Border Protection (CBP), tasked with protecting the border and screening immigrants, has deployed its face-scanning technology in 27 U.S. airports as part of its Biometric Entry-Exit Program.

The program was set up to catch visitors who overstay their visas.

Foreign nationals must complete a facial recognition check before they are allowed to enter and leave the United States, but U.S. citizens are allowed to opt out.

But the Government Accountability Office (GAO) said in a new report out Wednesday that CBP did “not consistently” provide notices that informed Americans that they would be scanned as they depart the United States.

A notice warning passengers of CBP’s use of facial recognition at U.S. airports. The GAO said these notices were not always clear that U.S. citizens can opt out. (Image: Twitter/Juli Lyskawa)

“These notices are intended to provide travelers with information about CBP’s use of facial recognition technology at locations where this technology has been deployed, and how data collected will be used. The notices should also provide information on procedures for opting out, if applicable, among other things,” according to the watchdog. “However, we found that CBP’s notices were not always current or complete, provided limited information on how to request to opt out of facial recognition, and were not always available.”

Some of the notices were outdated and contained wrong or inconsistent information, the watchdog said. But CBP officials told the GAO that printing new signs is “costly” and “not practical” after each policy change.

CBP uses the airlines to collect biometric scans of a traveler’s face before boarding a plane. The data is fed into a database run by CBP, where face scans are held for two weeks for U.S. citizens and up to 75 years for nonimmigrant visitors.

As part of this cooperation, CBP is required to conduct audits to ensure that airlines are compliant with the agency’s data collection and privacy practices. But the watchdog found that CBP had only audited one airline, and as of May had “not yet audited the majority of its airline business partners to ensure they are adhering to CBP’s privacy requirements.”

The watchdog took issue with this following the 2019 data breach involving CBP subcontractor Perceptics, a license plate recognition company, which CBP accused of transferring travelers’ license plate data to its network without permission.

Hackers stole about 100,000 traveler images and license plate records in the breach, which were later posted on the dark web.

CBP said it concurred with the watchdog’s five overall recommendations.

#biometrics, #facial-recognition, #government, #government-accountability-office, #national-security, #privacy, #security, #surveillance, #u-s-customs-and-border-protection


College contact-tracing app readily leaked personal data, report finds

A surveillance camera mounted on a wall on a sunny day. (Image credit: Thomas Winz / Getty)

In an attempt to mitigate the potential spread of COVID-19, one Michigan college is requiring all students to install an app that will track their live locations at all times. Unfortunately, researchers have already found two major vulnerabilities in the app that can expose students’ personal and health data.

Albion College informed students two weeks before the start of the fall term that they would be required to install and run the contact tracing app, called Aura.

Exposure notification apps being deployed by states, based on the iOS and Android framework that Apple and Google announced earlier this year, are designed to minimize harms to privacy. That framework basically uses a phone’s Bluetooth capabilities as a proximity sensor, to see if the phone it’s installed on has been near a phone of someone who reports having tested positive for COVID-19.
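The contrast with Aura is easier to see with a simplified model of that proximity-based design: phones broadcast short-lived random identifiers over Bluetooth, remember the identifiers they hear, and match them locally against identifiers later published by people who test positive. The sketch below illustrates that idea only; it is not Apple and Google’s actual cryptographic protocol.

```python
# Simplified model of proximity-based exposure notification; not Apple and
# Google's actual protocol. Phones swap rotating random identifiers over
# Bluetooth and match them locally, so no location ever leaves the device.
import secrets


class Phone:
    def __init__(self):
        self.my_ids = []        # identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers heard from nearby phones

    def broadcast_new_id(self) -> bytes:
        rpi = secrets.token_bytes(16)  # rotating random identifier
        self.my_ids.append(rpi)
        return rpi

    def hear(self, rpi: bytes) -> None:
        self.heard_ids.add(rpi)

    def check_exposure(self, published_positive_ids) -> bool:
        # Matching happens on the device against a downloaded list of identifiers.
        return bool(self.heard_ids & set(published_positive_ids))


alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_new_id())  # the two phones were near each other

# Later, Alice tests positive and uploads only the identifiers she broadcast.
print(bob.check_exposure(alice.my_ids))  # True: Bob is warned without any location data
```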


#college, #covid-19, #data-privacy, #education, #higher-education, #location-data, #policy, #privacy, #student-privacy, #student-rights, #surveillance


Palantir moves its HQ from Palo Alto to Denver as plans to go public percolate

Between the IPO buzz and a raft of new federal contracts for COVID-19 work, it’s been a year of big moves for Palantir. Now, the company is making a more literal one: decamping from its Palo Alto headquarters to Denver, Colorado.

The decision to relocate its Palo Alto headquarters, first reported by the Denver Business Journal, comes after the company filed SEC paperwork last month to take the company public. The most recent whispers say Palantir is aiming for a direct listing in late September rather than a traditional IPO.

While its chief executive’s vocal complaints about a cultural mismatch played a role in Palantir’s decision to relocate its main office away from the Bay Area, cost of living improvements and proximity to clients in the center of the country also factored into the decision.

For a company with around 2,500 employees, Palantir maintains a surprising array of office locations, both in the U.S. and internationally. Palantir’s Palo Alto office will likely remain a hub for its developers and software engineers. The company’s New York and London offices currently house a large portion of its product development work.

Palantir CEO Alex Karp announced plans to move the company’s headquarters away from California in an Axios interview back in May.

“We haven’t picked a place yet, but it’s going to be closer to the East Coast than the West Coast,” Karp said, adding that Colorado would be his guess for where the headquarters would land.

In the same interview, Karp railed against what he called Silicon Valley’s “monoculture,” a reference to left-leaning views that generally characterize both Bay Area culture and the company’s vocal critics.

While Silicon Valley is far from monocultural by any traditional measure, Karp cites an “increasing intolerance” in the region — particularly for the company’s own federal defense work. Palantir continued to seek contracts with federal law enforcement agencies, even as some tech companies dropped or declined to pursue them.

Palantir’s work supplying software for ICE’s deportation efforts is a particular nexus of controversy. “… It’s a de minimis part of our work, finding people in our country who are undocumented, but it’s a legitimate, complex issue,” Karp told CNBC in Davos earlier this year.

Google famously declined to renew a Pentagon contract known as Project Maven in 2018 after an internal backlash. Peter Thiel, the co-founder of Palantir and one of the Trump administration’s closest allies in tech, slammed Google’s decision as “very problematic.”

All Palantir employees not currently working with customers in the field are working from home with no set plan to return to the office at this time. Karp, a frequent critic of Silicon Valley’s regional myopia, currently runs the company from his home in the libertarian enclave of New Hampshire.

“I’m pretty happy outside the monoculture in New Hampshire and I like living free here,” Karp told Axios, referencing the state’s motto “Live free or die.”

#defense, #government, #palantir, #surveillance, #tc


Secret Service buys location data that would otherwise need a warrant

Dozens of apps on your phone know where you are, whether you’re home, at a doctor’s appointment, at the airport, or sitting still in a blank white room to pose artfully for a photo shoot. (credit: JGI | Tom Grill | Getty Images)

An increasing number of law enforcement agencies, including the US Secret Service, are simply buying their way into data that would ordinarily require a warrant, a new report has found, and at least one US senator wants to put a stop to it.

The Secret Service paid about $2 million in 2017-2018 to a firm called Babel Street to use its service Locate X, according to a document (PDF) Vice Motherboard obtained. The contract outlines what kind of content, training, and customer support Babel Street is required to provide to the Secret Service.

Locate X provides location data harvested and collated from a wide variety of other apps, tech site Protocol reported earlier this year. Users can “draw a digital fence around an address or area, pinpoint mobile devices that were within that area, and see where else those devices have traveled” in the past several months, Protocol explained.
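As a rough illustration of what such a query involves, here is a minimal sketch assuming a flat list of location pings with device_id, lat, lon and timestamp fields. The function names and data layout are assumptions for illustration only; this is not Babel Street’s Locate X API.

```python
# Illustrative geofence query over aggregated location pings; hypothetical data model.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def devices_in_fence(pings, center, radius_m, t_start, t_end):
    # Step 1: which devices pinged inside the fence during the time window?
    return {p["device_id"] for p in pings
            if t_start <= p["timestamp"] <= t_end
            and haversine_m(p["lat"], p["lon"], center[0], center[1]) <= radius_m}

def follow_devices(pings, device_ids):
    # Step 2: everywhere else those same devices have been seen.
    return [p for p in pings if p["device_id"] in device_ids]
```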


#big-data, #data-privacy, #location-data, #policy, #privacy, #surveillance, #the-feds, #united-states-secret-service


A new technique can detect newer 4G ‘stingray’ cell phone snooping

Security researchers say they have developed a new technique to detect modern cell-site simulators.

Cell site simulators, known as “stingrays,” impersonate cell towers and can capture information about any phone in their range, including, in some cases, calls, messages and data. Police secretly deploy stingrays hundreds of times a year across the United States, often capturing data on innocent bystanders in the process.

Little is known about stingrays, because they are deliberately shrouded in secrecy. Developed by Harris Corp. and sold exclusively to police and law enforcement, stingrays are covered under strict nondisclosure agreements that prevent police from discussing how the technology works. But what we do know is that stingrays exploit flaws in the way that cell phones connect to 2G cell networks.

Most of those flaws are fixed in the newer, faster and more secure 4G networks, though not all. Newer cell site simulators, called “Hailstorm” devices, take advantage of similar flaws in 4G that let police snoop on newer phones and devices.

Some phone apps claim they can detect stingrays and other cell site simulators, but most produce unreliable results.

But now researchers at the Electronic Frontier Foundation have discovered a new technique that can detect Hailstorm devices.

The EFF’s latest project, dubbed “Crocodile Hunter” and named after the Australian nature conservationist Steve Irwin, who was killed by a stingray’s barb in 2006, detects cell site simulators by decoding nearby 4G signals to determine whether a cell tower is legitimate.

Every time your phone connects to the 4G network, it runs through a checklist, known as a handshake, to make sure that the phone is allowed to connect to the network. It does this by exchanging a series of unencrypted messages with the cell tower, some of which include unique details about the user’s phone, such as its IMSI number and its approximate location. The tower, for its part, broadcasts messages known as the master information block (MIB) and the system information block (SIB) to help the phone connect to the network.

“This is where the heart of all of the vulnerabilities lie in 4G,” said Cooper Quintin, a senior staff technologist at the EFF, who headed the research.

Quintin and fellow researcher Yomna Nasser, who authored the EFF’s technical paper on how cell site simulators work, found that collecting and decoding the MIB and SIB messages over the air can identify potentially illegitimate cell towers.

This became the foundation of the Crocodile Hunter project.

A rare public photo of a stingray, manufactured by Harris Corp. Image Credits: U.S. Patent and Trademark Office

Crocodile Hunter is open-source, allowing anyone to run it, but it requires a stack of both hardware and software to work. Once up and running, Crocodile Hunter scans for 4G cellular signals, begins decoding the tower data, and uses trilateration to visualize the towers on a map.
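The trilateration step can be sketched as a small least-squares problem: given a few measurement points with estimated distances to a tower, solve for the tower’s position. The sketch below is a generic illustration under simplifying assumptions (planar coordinates, distances already estimated from signal measurements), not Crocodile Hunter’s actual code.

```python
# Minimal trilateration sketch; assumes distances to the tower were already
# estimated (e.g. from signal strength) at several known measurement points.
import numpy as np

def trilaterate(points, distances):
    """Least-squares estimate of a transmitter's (x, y) position.

    points    -- list of (x, y) measurement locations
    distances -- estimated distance from each location to the transmitter
    """
    (x0, y0), d0 = points[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(points[1:], distances[1:]):
        # Subtract the first range equation to linearise the system.
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return est  # approximate (x, y) of the tower

print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~ (5, 5)
```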

But the system does require some thought and human input to find anomalies that could identify a real cell site simulator. Those anomalies can include cell towers that appear out of nowhere, towers that seem to move or don’t match known maps of existing towers, and towers broadcasting MIB and SIB messages that don’t seem to make sense.
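Those heuristics amount to simple checks over decoded tower records. The sketch below is a hypothetical illustration of that kind of flagging, with made-up field names, a made-up baseline of known towers and an arbitrary distance threshold, rather than Crocodile Hunter’s real schema; as the next paragraph notes, a flag is only a starting point for manual verification.

```python
# Hypothetical anomaly checks over decoded 4G broadcast data. Field names,
# thresholds and the known_towers baseline are illustrative assumptions.
from math import hypot

def distance_m(p, q):
    # Planar distance between two (x, y) positions in metres, for simplicity.
    return hypot(p[0] - q[0], p[1] - q[1])

def anomaly_flags(tower, known_towers, valid_network_ids):
    flags = []
    baseline = known_towers.get(tower["cell_id"])
    if baseline is None:
        flags.append("tower not seen in previous scans")            # appeared out of nowhere
    elif distance_m(tower["position"], baseline["position"]) > 500:
        flags.append("tower appears to have moved between scans")   # arbitrary 500 m threshold
    if tower["network_id"] not in valid_network_ids:
        flags.append("network identifier not expected in this region")
    if not tower.get("sib_consistent", True):
        flags.append("MIB/SIB fields look inconsistent")
    return flags  # empty list means nothing suspicious; any flag still needs verification
```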

That’s why verification is important, Quintin said, and stingray-detecting apps don’t do this.

“Just because we find an anomaly, doesn’t mean we found the cell site simulator. We actually need to go verify,” he said.

In one test, Quintin traced a suspicious-looking cell tower to a truck outside a conference center in San Francisco. It turned out to be a legitimate mobile cell tower, contracted to expand the cell capacity for a tech conference inside. “Cells on wheels are pretty common,” said Quintin. “But they have some interesting similarities to cell site simulators, namely in that they are a portable cell that isn’t usually there and suddenly it is, and then leaves.”

In another test, carried out earlier this year at the ShmooCon security conference in Washington, D.C., where cell site simulators have been found before, Quintin found two suspicious cell towers using Crocodile Hunter: one broadcasting a mobile network identifier associated with a Bermuda cell network, and another that didn’t appear to be associated with any cell network at all. Neither made much sense, given that Washington, D.C. is nowhere near Bermuda.

Quintin said the project is aimed at helping to detect cell site simulators, but conceded that police will continue to use them for as long as cell networks remain vulnerable to their use, a problem that could take years to fix.

Instead, Quintin said phone makers could do more at the device level to prevent attacks, such as letting users switch off access to legacy 2G networks so they can effectively opt out of older stingray attacks. Meanwhile, cell networks and industry groups should work to fix the vulnerabilities that Hailstorm devices exploit.

“None of these solutions are going to be foolproof,” said Quintin. “But we’re not even doing the bare minimum yet.”



#black-hat-2020, #cell-phones, #dc, #def-con-2020, #electronic-frontier-foundation, #law-enforcement, #mobile-phone, #mobile-security, #privacy, #san-francisco, #security, #surveillance, #telecommunications, #united-states, #washington-dc
