Facebook is a hub of sex trafficking recruitment in the US, report says


Facebook is the most commonly used social media platform for human sex trafficking recruitment in the US, according to a new report published by the Human Trafficking Institute.

Last year, 59 percent of victims in active cases who were recruited through social media were found through Facebook, the report states, with 41 percent of all recruitment taking place online.

“The Internet has become the dominant tool that traffickers use to recruit victims, and they often recruit them on a number of very common social networking websites,” Victor Boutros, CEO of the Human Trafficking Institute, told CBS News. “Facebook overwhelmingly is used by traffickers to recruit victims in active sex trafficking cases.”


#facebook, #law-enforcement, #policy, #sex-trafficking, #social-media


Ring won’t say how many users had footage obtained by police

Ring gets a lot of criticism, not just for its massive surveillance network of home video doorbells and its problematic privacy and security practices, but also for giving that doorbell footage to law enforcement. While Ring is making moves towards transparency, the company refuses to disclose how many users had their data given to police.

The video doorbell maker, acquired by Amazon in 2018, has partnerships with at least 1,800 U.S. police departments (and growing) that can request camera footage from Ring doorbells. Prior to a change this week, any police department that Ring partnered with could privately request doorbell camera footage from Ring customers for an active investigation. Ring will now let its police partners publicly request video footage from users through its Neighbors app.

The change ostensibly gives Ring users more control over when police can access their doorbell footage, but it ignores privacy concerns that police can access users’ footage without a warrant.

Civil liberties advocates and lawmakers have long warned that police can obtain camera footage from Ring users through a legal back door because the cameras in Ring’s sprawling doorbell network are owned by private users. Police can still serve Ring with a legal demand, such as a subpoena for basic user information, or a search warrant or court order for video content, assuming there is evidence of a crime.

Ring received over 1,800 legal demands during 2020, more than double the number it received a year earlier, according to a transparency report that Ring quietly published in January. Ring does not disclose sales figures but says it has “millions” of customers. But the report leaves out context that most transparency reports include: how many users or accounts had footage given to police when Ring was served with a legal demand?

When reached, Ring declined to say how many users had footage obtained by police.

That number of users or accounts subject to searches is not inherently secret, but rather an obscure side effect of how companies decide — if at all — to disclose when the government demands user data. Though they are not obligated to, most tech companies publish transparency reports once or twice a year to show how often user data is obtained by the government.

Transparency reports were a way for companies subject to data requests to push back against damning allegations of intrusive bulk government surveillance by showing that only a fraction of a company’s users are subject to government demands.

But context is everything. Facebook, Apple, Microsoft, Google, and Twitter all reveal how many legal demands they receive, and also specify how many users or accounts had data turned over. In some cases, the number of users or accounts affected can be two or three times the number of demands received, since a single demand can name several accounts.

Ring’s parent, Amazon, is a rare exception among the big tech giants: it does not break out the specific number of users whose information was turned over to law enforcement.

“Ring is ostensibly a security camera company that makes devices you can put on your own homes, but it is increasingly also a tool of the state to conduct criminal investigations and surveillance,” Matthew Guariglia, policy analyst at the Electronic Frontier Foundation, told TechCrunch.

Guariglia added that Ring could release not only the number of users subject to legal demands, but also how many users have previously responded to police requests through the app.

Ring users can opt out of receiving requests from police, but opting out would not stop law enforcement from obtaining a legal order from a judge for their data. Users can also switch on end-to-end encryption to prevent anyone other than themselves, including Ring, from accessing their videos.

#amazon, #apple, #articles, #electronic-frontier-foundation, #encryption, #facebook, #google, #hardware, #judge, #law-enforcement, #microsoft, #neighbors, #operating-systems, #privacy, #ring, #security, #smart-doorbell, #software, #terms-of-service, #transparency-report


Maryland and Montana are restricting police access to DNA databases

Maryland and Montana have become the first U.S. states to pass laws that make it tougher for law enforcement to access DNA databases.

The new laws, which aim to safeguard the genetic privacy of millions of Americans, focus on consumer DNA databases, such as 23andMe, Ancestry, GEDmatch and FamilyTreeDNA, all of which let people upload their genetic information and use it to connect with distant relatives and trace their family tree. While popular — 23andMe has more than three million users, and GEDmatch more than one million — many are unaware that some of these platforms share genetic data with third parties, from the pharmaceutical industry and scientists to law enforcement agencies.

When used by law enforcement through a technique known as forensic genetic genealogy searching (FGGS), officers can upload DNA evidence found at a crime scene to identify possible suspects, the most famous example being the identification of the Golden State Killer in 2018. Investigators uploaded a DNA sample taken at the time of a 1980 murder linked to the serial killer into GEDmatch and subsequently identified distant relatives of the suspect — a critical breakthrough that led to the arrest of Joseph James DeAngelo.

While law enforcement agencies have seen success in using consumer DNA databases to aid with criminal investigations, privacy advocates have long warned of the dangers of these platforms. Not only can these DNA profiles help trace distant ancestors, but the vast troves of genetic data they hold can divulge a person’s propensity for various diseases, predict addiction and drug response, and even be used by companies to create images of what they think a person looks like.

While Ancestry and 23andMe have kept their genetic databases closed to law enforcement without a warrant, GEDmatch (which was acquired by a crime scene DNA company in December 2019) and FamilyTreeDNA have previously shared their databases with investigators.

To ensure the genetic privacy of the accused and their relatives, Maryland will, starting October 1, require law enforcement to get a judge’s sign-off before using genetic genealogy, and will limit its use to serious crimes like murder, kidnapping, and human trafficking. It also says that investigators can only use databases that explicitly tell users that their information could be used to investigate crimes. 

In Montana, where the new rules are somewhat narrower, law enforcement will need a warrant before using a DNA database unless the user has waived their right to privacy.

The laws “demonstrate that people across the political spectrum find law enforcement use of consumer genetic data chilling, concerning and privacy-invasive,” said Natalie Ram, a law professor at the University of Maryland. “I hope to see more states embrace robust regulation of this law enforcement technique in the future.”

The introduction of these laws has also been roundly welcomed by privacy advocates, including the Electronic Frontier Foundation. Jennifer Lynch, surveillance litigation director at the EFF, described the restrictions as a “step in the right direction,” but called for more states — and the federal government — to crack down further on FGGS.

“Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it and the unbridled discretion of law enforcement to search it,” Lynch said.

“Companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country.”

A spokesperson for 23andMe told TechCrunch: “We fully support legislation that provides consumers with stronger privacy protections. In fact we are working on legislation in a number of states to increase consumer genetic privacy protections. Customer privacy and transparency are core principles that guide 23andMe’s approach to responding to legal requests and maintaining customer trust. We closely scrutinize all law enforcement and regulatory requests and we will only comply with court orders, subpoenas, search warrants or other requests that we determine are legally valid. To date we have not released any customer information to law enforcement.”

GEDmatch and FamilyTreeDNA, both of which opt users into law enforcement searches by default, told the New York Times that they have no plans to change their existing policies around user consent in response to the new regulation. 

Ancestry did not immediately comment.


#23andme, #ancestry, #dna, #electronic-frontier-foundation, #federal-government, #gedmatch, #genetics, #health, #judge, #law-enforcement, #maryland, #montana, #privacy, #security, #the-new-york-times, #united-states


Proton, the privacy startup behind e2e encrypted ProtonMail, confirms passing 50M users

End-to-end encrypted email provider ProtonMail has officially confirmed it’s passed 50 million users globally as it turns seven years old.

It’s a notable milestone for a service provider that intentionally does not have a data business — opting instead for a privacy pledge based on zero-access architecture, meaning it has no way to decrypt the contents of ProtonMail users’ emails.

Although, to be clear, the 50M+ figure applies to total users of all its products (which includes a VPN offering), not just users of its e2e encrypted email. (It declined to break out email users vs other products when we asked.)

Commenting in a statement, Andy Yen, founder and CEO, said: “The conversation about privacy has shifted surprisingly quickly in the past seven years. Privacy has gone from being an afterthought, to the main focus of a lot of discussions about the future of the Internet. In the process, Proton has gone from a crowdfunded idea of a better Internet, to being at the forefront of the global privacy wave. Proton is an alternative to the surveillance capitalism model advanced by Silicon Valley’s tech giants, that allows us to put the needs of users and society first.”

ProtonMail, which was founded in 2014, has diversified into offering a suite of products — including the aforementioned VPN and a calendar offering (Proton Calendar). A cloud storage service, Proton Drive, is also slated for public release later this year.

For all these products it claims to take the same ‘zero access’, hands-off approach to user data. Albeit, comparing e2e encrypted email with an encrypted VPN service is a bit of an apples-and-oranges exercise — since the issue with VPN services is that they can see activity (i.e. where the packets, encrypted or otherwise, are going), and that metadata can add up to a log of your Internet activity (even with e2e encryption of the packets themselves).
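
To make the metadata point concrete, here is a small, purely hypothetical sketch (invented flow records, nothing to do with Proton’s actual systems) of how much a VPN operator could reconstruct from connection metadata alone:

```python
from collections import defaultdict
from datetime import datetime

# Invented flow records, as a VPN operator could observe them: the TLS
# payload is opaque, but account, destination and timing are all visible.
flows = [
    ("user42", "news.example.com",      "2021-05-20T09:01:12"),
    ("user42", "clinic.example.org",    "2021-05-20T09:03:40"),
    ("user42", "jobsearch.example.net", "2021-05-20T09:07:05"),
]

browsing_log = defaultdict(list)
for account, destination, ts in flows:
    browsing_log[account].append((datetime.fromisoformat(ts), destination))

# Without decrypting a single byte, the metadata reconstructs a per-user
# browsing history -- which is why "no logging" is a policy promise,
# not a technical guarantee like zero-access email.
for account, visits in sorted(browsing_log.items()):
    for when, host in sorted(visits):
        print(account, when.isoformat(), host)
```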

Proton claims it doesn’t track or record its VPN users’ web browsing. And given its wider privacy-dependent reputation, that’s at least a more credible claim than the average VPN service can make. Nonetheless, you do still have to trust Proton not to do that (or not to be forced to do it by, e.g., law enforcement). It’s not the same technical ‘zero access’ guarantee as it can offer for its e2e encrypted email.

Proton does also offer a free VPN — which, as we’ve said before, can be a red flag for data logging risk — but the company specifies that users of the paid version subsidize free users. So, again, the claim is zero logging but you still need to make a judgement call on whether to trust that.

From Snowden to 50M+

Over ProtonMail’s seven-year run, privacy has certainly gained cachet as a brand promise — which is why you can now see data-mining giants like Facebook making ludicrous claims about ‘pivoting’ their people-profiling surveillance empires to ‘privacy’. So, as ever, PR that’s larded with claims of ‘respect for privacy’ demands very close scrutiny.

And while it’s clearly absurd for an adtech giant like Facebook to try to cloak the fact that its business model relies on stripping away people’s privacy with claims to the contrary, in Proton’s case the privacy claim is very strong indeed — since the company was founded with the goal of being “immune to large scale spying”. Spying such as that carried out by the NSA.

ProtonMail’s founding idea was to build a system “that does not require trusting us”.
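
To illustrate the pattern behind that claim: in a zero-access design, mail is encrypted on the sender’s device with the recipient’s public key, so the server only ever stores ciphertext it holds no key for. ProtonMail’s actual implementation is built on OpenPGP; the hybrid RSA sketch below (using the Python cryptography library) is an illustration of the general technique, not Proton’s code.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient keypair -- in a zero-access design this is generated on the
# recipient's device, and the private key never reaches the server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_for_recipient(plaintext: bytes, recipient_pub):
    """Hybrid encryption: a fresh symmetric key encrypts the message,
    and the recipient's public key wraps that symmetric key."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    wrapped_key = recipient_pub.encrypt(
        session_key,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    # The server stores only (wrapped_key, ciphertext); without the
    # recipient's private key it cannot recover the session key.
    return wrapped_key, ciphertext

wrapped, ct = encrypt_for_recipient(b"meet at noon", public_key)
```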

Usage of e2e encryption has grown enormously since 2013 — when disclosures by NSA whistleblower Edward Snowden revealed the extent of data gathering by government mass surveillance programs, which were shown (il)liberally tapping into Internet cables and mainstream digital services to grab people’s data without their knowledge or consent. That growth has certainly been helped by consumer-friendly services like ProtonMail making robust encryption far more accessible. But there are worrying moves by lawmakers in a number of jurisdictions that clash with the core idea and threaten access to e2e encryption.

In the wake of the Snowden disclosures, ‘Five Eyes’ countries steadily amped up international political pressure on e2e encryption. Australia, for example, passed an anti-encryption law in 2018 — which grants police powers to issue ‘technical notices’ forcing companies operating on its soil to help the government hack, implant malware, undermine encryption or insert backdoors.

In 2016, meanwhile, the UK reaffirmed its surveillance regime — passing a law that gives the government powers to compel companies to remove or not implement e2e encryption. Under the Investigatory Powers Act, a statutory instrument called a Technical Capability Notice (TCN) can be served on comms service providers to compel decrypted access. (And as the ORG noted in April, there’s no way to track usage, as the law gags providers from reporting anything at all about a TCN application, including that it even exists.)

More recently, UK ministers have kept up public pressure on e2e encryption — framing it as an existential threat to child protection. Simultaneously they are legislating — via an Online Safety Bill, out in draft earlier this month — to put a legally binding obligation on service providers to ‘prevent bad things from happening on the Internet’ (as the ORG neatly sums it up). And while the bill is still at the draft stage, private messaging services are in scope — putting the law on a potential collision course with messaging services that use e2e encryption.

The U.S., meanwhile, has declined to reform warrantless surveillance.

And if you think the EU is a safe space for e2e encryption, there are reasons to be concerned in continental Europe too.

EU lawmakers have recently made a push for what they describe as “lawful access” to encrypted data — without specifying exactly how that might be achieved, i.e. without breaking and/or backdooring e2e encryption and therefore undoing the digital security they also say is vital.

In a further worrying development, EU lawmakers have proposed automated scanning of encrypted communications services — aka a provision called ‘chatcontrol’ that’s ostensibly targeted at prosecuting those who share child exploitation content — which raises further questions over how such laws might intersect with ‘zero access’ services like ProtonMail.

The European Pirate Party has been sounding the alarm — and dubs the ‘chatcontrol’ proposal “the end of the privacy of digital correspondence” — warning that “securely encrypted communication is at risk”.

A plenary vote on the proposal is expected in the coming months — so where exactly the EU lands on that remains to be seen.

ProtonMail, meanwhile, is based in Switzerland, which is not a member of the EU and has one of the stronger reputations for privacy laws globally. However, the country also backed beefed-up surveillance powers in 2016 — extending the digital snooping capabilities of its own intelligence agencies.

It does also adopt some EU regulations — so, again, it’s not clear whether or not any pan-EU automated scanning of message content could end up being applied to services based in the country.

The threats to e2e encryption are certainly growing, even as usage of such properly private services keeps scaling.

Asked whether it has concerns, ProtonMail pointed out that the EU’s current temporary chatcontrol proposal is voluntary — meaning it would be up to the company in question to decide its own policy. Although it accepts there is “some support” in the Commission for the chatcontrol proposals to be made mandatory.

“It’s not clear at this time whether these proposals could impact Proton specifically [i.e. if they were to become mandatory],” the spokesman also told us. “The extent to which a Swiss company like Proton might be impacted by such efforts would have to be assessed based on the specific legal proposal. To our knowledge, none has been made for now.”

“We completely agree that steps have to be taken to combat the spread of illegal explicit material. However, our concern is that the forced scanning of communications would be an ineffective approach and would instead have the unintended effect of undermining many of the basic freedoms that the EU was established to protect,” he added. “Any form of automated content scanning is incompatible with end-to-end encryption and by definition undermines the right to privacy.”

So while Proton is rightly celebrating that a steady commitment to zero access infrastructure over the past seven years has helped its business grow to 50M+ users, there are reasons for all privacy-minded folk to be watchful of what the next years of political developments might mean for the privacy and security of all our data.

 

#andy-yen, #australia, #computer-security, #e2e, #e2e-encryption, #edward-snowden, #email-encryption, #encryption, #end-to-end-encryption, #europe, #european-union, #facebook, #human-rights, #internet-cables, #law-enforcement, #online-safety-bill, #privacy, #proton, #protonmail, #switzerland, #tc, #united-kingdom, #united-states, #vpn, #web-browsing


If you don’t want robotic dogs patrolling the streets, consider CCOPS legislation

Boston Dynamics’ robot “dogs,” or similar versions thereof, are already being employed by police departments in Hawaii, Massachusetts and New York. Partly under the veil of experimentation, these police forces have given few answers about the benefits and costs of using these powerful surveillance devices.

The American Civil Liberties Union, in a position paper on CCOPS (community control over police surveillance), proposes an act to promote transparency and protect civil rights and liberties with respect to surveillance technology. To date, 19 U.S. cities have passed CCOPS laws, which means, in practical terms, that virtually all other communities have no requirement that police be transparent about their use of surveillance technologies.

For many, this ability to use new, unproven technologies in a broad range of ways presents a real danger. Stuart Watt, a world-renowned expert in artificial intelligence and the CTO of Turalt, is not amused.

Even seemingly fun and harmless “toys” have all the necessary functions and features to be weaponized.

“I am appalled both by the principle of the dogbots and by them in practice. It’s a big waste of money and a distraction from actual police work,” he said. “Definitely communities need to be engaged with. I am honestly not even sure what the police forces think the whole point is. Is it to discourage through a physical surveillance system, or is it to actually prepare people for some kind of enforcement down the line?

“Chunks of law enforcement have forgotten the whole ‘protect and serve’ thing, and do neither,” Watt added. “If they could use artificial intelligence to actually protect and actually serve vulnerable people, the homeless, folks addicted to drugs, sex workers, those in poverty and maligned minorities, it’d be tons better. If they have to spend the money on AI, spend it to help people.”

The ACLU is advocating exactly what Watt suggests. In proposed language to city councils across the nation, the ACLU makes it clear that:

The City Council shall only approve a request to fund, acquire, or use a surveillance technology if it determines the benefits of the surveillance technology outweigh its costs, that the proposal will safeguard civil liberties and civil rights, and that the uses and deployment of the surveillance technology will not be based upon discriminatory or viewpoint-based factors or have a disparate impact on any community or group.

From a legal perspective, Anthony Gualano, a lawyer and special counsel at Team Law, believes that CCOPS legislation makes sense on many levels.

“As police increase their use of surveillance technologies in communities around the nation, and the technologies they use become more powerful and effective to protect people, legislation requiring transparency becomes necessary to check what technologies are being used and how they are being used.”

For those worried not only about this Boston Dynamics dog but about all future incarnations of this supertech canine, the current legal climate is problematic because it essentially allows our communities to be testing grounds for Big Tech and Big Government to find new ways to engage.

Just last month, public pressure forced the New York Police Department to suspend use of a robotic dog, quite unassumingly named Digidog. The NYPD had deployed the tech hound at a public housing building in March; that went over about as well as you could expect, and the ensuing pushback led to discussions as to the immediate fate of this technology in New York.

The New York Times phrased it perfectly, observing that “the NYPD will return the device earlier than planned after critics seized on it as a dystopian example of overly aggressive policing.”

While these bionic dogs are powerful enough to take a bite out of crime, the police forces seeking to use them have a lot of public relations work to do first. A great place to begin would be for the police to actively and positively participate in CCOPS discussions, explaining what the technology involves, and how it (and these robots) will be used tomorrow, next month and potentially years from now.

#american-civil-liberties-union, #artificial-intelligence, #boston-dynamics, #column, #law-enforcement, #mass-surveillance, #opinion, #robotics, #security, #surveillance, #surveillance-technologies, #united-states


EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high-profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

#ai-regulation, #artificial-intelligence, #biometrics, #edps, #europe, #european-union, #facial-recognition, #law-enforcement, #policy, #privacy, #surveillance, #wojciech-wiewiorowski


Europe lays out plan for risk-based AI rules to boost trust and uptake

European Union lawmakers have presented their risk-based proposal for regulating high risk applications of artificial intelligence within the bloc’s single market.

The plan includes prohibitions on a small number of use-cases that are considered too dangerous to people’s safety or EU citizens’ fundamental rights, such as a China-style social credit scoring system or certain types of AI-enabled mass surveillance.

Most uses of AI won’t face any regulation (let alone a ban) under the proposal but a subset of so-called “high risk” uses will be subject to specific regulatory requirements, both ex ante and ex post.

There are also transparency requirements for certain use-cases — such as chatbots and deepfakes — where EU lawmakers believe that potential risk can be mitigated by informing users that they are interacting with something artificial.

The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an excellence ecosystem that’s aligned with European values.

“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said EVP Margrethe Vestager, announcing adoption of the proposal at a press conference.

“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”

Under the proposal, mandatory requirements are attached to a “high risk” category of applications of AI — meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights (such as the right to non-discrimination).

Examples of high risk AI use-cases that will be subject to the highest level of regulation on use are set out in annex 3 of the regulation — which the Commission said it will have the power to expand by delegated acts, as use-cases of AI continue to develop and risks evolve.

For now, the cited high risk examples fall into the following categories: Biometric identification and categorisation of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes.

Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.

The makers of high risk applications will have a set of ex ante obligations to comply with before bringing their product to market, including around the quality of the data-sets used to train their AIs and a level of human oversight over not just design but use of the system — as well as ongoing, ex post requirements, in the form of post-market surveillance.

Commission officials suggested the vast majority of applications of AI will fall outside this highly regulated category. Makers of those ‘low risk’ AI systems will merely be encouraged to adopt (non-legally binding) codes of conduct on use.

Penalties for infringing the rules on specific AI use-case bans have been set at up to 6% of global annual turnover or €30M (whichever is greater), while violations of the rules related to high risk applications can scale up to 4% (or €20M).
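
As a quick worked example of that ‘percentage of turnover or fixed floor, whichever is greater’ structure (the rates and floors are from the proposal; the function is just illustrative arithmetic):

```python
def max_fine(global_turnover_eur: float, rate: float, floor_eur: float) -> float:
    # Penalty cap: a share of global annual turnover or a fixed floor,
    # whichever is greater.
    return max(rate * global_turnover_eur, floor_eur)

# Prohibited-use violation, company with EUR 1bn global turnover:
print(max_fine(1e9, 0.06, 30e6))    # 60000000.0 -> the 6% figure applies
# Same violation for a firm with EUR 100M turnover: the EUR 30M floor wins.
print(max_fine(100e6, 0.06, 30e6))  # 30000000.0
# High-risk obligations breach, EUR 1bn turnover:
print(max_fine(1e9, 0.04, 20e6))    # 40000000.0
```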

Enforcement will involve multiple agencies in each EU Member State — with the proposal intending oversight be carried out by existing (relevant) agencies, such as product safety bodies and data protection agencies.

That raises immediate questions over adequate resourcing of national bodies, given the additional work and technical complexity they will face in policing the AI rules; and also how enforcement bottlenecks will be avoided in certain Member States. (Notably, the EU’s General Data Protection Regulation is also overseen at the Member State level and has suffered from lack of uniformly vigorous enforcement.)

There will also be an EU-wide database set up to create a register of high risk systems implemented in the bloc (which will be managed by the Commission).

A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support a consistent application of the regulation — in a mirror to the European Data Protection Board which offers guidance for applying the GDPR.

In step with rules on certain uses of AI, the plan includes measures to co-ordinate EU Member State support for AI development — such as by establishing regulatory sandboxes to help startups and SMEs develop and test AI-fuelled innovations — and via the prospect of targeted EU funding to support AI developers.

Internal market commissioner Thierry Breton said investment is a crucial piece of the plan.

“Under our Digital Europe and Horizon Europe program we are going to free up a billion euros per year. And on top of that we want to generate private investment and a collective EU-wide investment of €20BN per year over the coming decade — the ‘digital decade’ as we have called it,” he said. “We also want to have €140BN which will finance digital investments under Next Generation EU [COVID-19 recovery fund] — and going into AI in part.”

Shaping rules for AI has been a key priority for EU president Ursula von der Leyen who took up her post at the end of 2019. A white paper was published last year, following a 2018 AI for EU strategy — and Vestager said that today’s proposal is the culmination of three years’ work.

Breton added that providing guidance for businesses to apply AI will give them legal certainty and Europe an edge. “Trust… we think is vitally important to allow the development we want of artificial intelligence,” he said. “[Applications of AI] need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”

A version of today’s proposal leaked last week — leading to calls by MEPs to beef up the plan, such as by banning remote biometric surveillance in public places.

In the event, the final proposal does treat remote biometric surveillance as a particularly high risk application of AI — and there is a prohibition in principle on the use of the technology in public by law enforcement.

However, use is not completely proscribed: there are a number of exceptions under which law enforcement would still be able to make use of it, subject to a valid legal basis and appropriate oversight.

Today’s proposal kicks off the start of the EU’s co-legislative process, with the European Parliament and Member States via the EU Council set to have their say on the draft — meaning a lot could change ahead of agreement on a final pan-EU regulation.

Commissioners declined to give a timeframe for when legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could be done asap. It could, nonetheless, be several years before the AI regulation is ratified and in force.


#ai, #artificial-intelligence, #digital-regulation, #europe, #european-data-protection-board, #european-union, #general-data-protection-regulation, #law-enforcement, #margrethe-vestager, #policy, #science-and-technology, #thierry-breton, #ursula-von-der-leyen


EU lawmakers propose strict curbs on use of facial recognition


EU regulators have proposed strict curbs on the use of facial recognition in public spaces, limiting the controversial technology to a small number of public-interest scenarios, according to new draft legislation seen by the Financial Times.

In a confidential 138-page document, officials said facial recognition systems infringed on individuals’ civil rights and therefore should only be used in scenarios in which they were deemed essential, for instance in the search for missing children and the policing of terrorist events.

The draft legislation added that “real-time” facial recognition—which uses live tracking, rather than past footage or photographs—in public spaces by the authorities should only ever be used for limited periods of time, and it should be subject to prior consent by a judge or a national authority.


#european-union, #facial-recognition, #law-enforcement, #policy, #privacy


MEPs call for European AI rules to ban biometric surveillance in public

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, the leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal having an exemption on the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

#ai, #ai-regulation, #artificial-intelligence, #biometrics, #discrimination, #europe, #european-parliament, #european-union, #facial-recognition, #fundamental-rights, #law-enforcement, #mass-surveillance, #meps, #national-security, #policy, #privacy, #surveillance


Jamaica’s JamCOVID pulled offline after third security lapse exposed travelers’ data

Jamaica’s JamCOVID app and website were taken offline late on Thursday following a third security lapse, which exposed quarantine orders for more than half a million travelers to the island.

JamCOVID was set up last year to help the government process travelers arriving on the island. Quarantine orders are issued by the Jamaican Ministry of Health and instruct travelers to stay in their accommodation for two weeks to prevent the spread of COVID-19.

These orders contain the traveler’s name and the address of where they are ordered to stay.

But a security researcher told TechCrunch that the quarantine orders were publicly accessible from the JamCOVID website because they were not protected with a password. Although the files were accessible from anyone’s web browser, the researcher asked not to be named for fear of legal repercussions from the Jamaican government.

More than 500,000 quarantine orders were exposed, some dating back to March 2020.
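
In practical terms, “not protected with a password” means an unauthenticated web request for one of these documents simply succeeded. A minimal, hypothetical sketch of that check (the URL is invented, not JamCOVID’s actual path structure):

```python
import requests

def is_publicly_readable(url: str) -> bool:
    # A 200 response to an unauthenticated GET means anyone's browser can
    # fetch the document; a 401/403 would indicate some access control.
    resp = requests.get(url, timeout=10)
    return resp.status_code == 200

# e.g. is_publicly_readable("https://jamcovid.example/quarantine-orders/00001.pdf")
```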

TechCrunch shared these details with the Jamaica Gleaner, which was first to report on the security lapse after the news outlet verified the data spillage with local cybersecurity experts.

Amber Group, which was contracted to build and maintain the JamCOVID coronavirus dashboard and immigration service, pulled the service offline a short time after TechCrunch and the Jamaica Gleaner contacted the company on Thursday evening. JamCOVID’s website was replaced with a holding page that said the site was “under maintenance.” At the time of publication, the site had returned.

Amber Group’s chief executive Dushyant Savadia did not return a request for comment.

Matthew Samuda, a minister in Jamaica’s Ministry of National Security, also did not respond to a request for comment or our questions — including if the Jamaican government plans to continue its contract or relationship with Amber Group.

This is the third security lapse involving JamCOVID in the past two weeks.

Last week, Amber Group secured an exposed cloud storage server hosted on Amazon Web Services that was left open and public, despite containing more than 70,000 negative COVID-19 lab results and over 425,000 immigration documents authorizing travel to the island. Savadia said in response that there were “no further vulnerabilities” with the app. Days later, the company fixed a second security lapse after leaving a file containing private keys and passwords for the service on the JamCOVID server.

The Jamaican government has repeatedly defended Amber Group, which says it provided the JamCOVID technology to the government “for free.” Amber Group’s Savadia has previously been quoted as saying that the company built the service in “three days.”

In a statement on Thursday, Jamaica’s prime minister Andrew Holness said JamCOVID “continues to be a critical element” of the country’s immigration process and that the government was “accelerating” efforts to migrate the JamCOVID database — though specifics were not given.

An earlier version of this report misspelled the Jamaican Gleaner newspaper. We regret the error.

#amazon-web-services, #countries, #cybersecurity, #data-leaks, #government, #law-enforcement, #privacy, #quarantine, #security, #web-browser, #web-services


Facebook has been helping law enforcement identify Capitol rioters

Supporters of former President Donald Trump, including Jake Angeli, a QAnon supporter known for his painted face and horned hat, enter the US Capitol on January 6. (credit: Saul Loeb/AFP via Getty Images)

Facebook has gone out of its way to help law enforcement officials identify those who participated in the January 6 riot at the US Capitol, the company said in a Thursday conference call with reporters.

“We were appalled by the violence,” said Monika Bickert, Facebook’s vice president of content policy. “We were monitoring the assault in real time and made appropriate referrals to law enforcement to assist their efforts to bring those responsible to account.”

She added that this “includes helping them identify people who posted photos of themselves from the scene, even after the attack was over” and that Facebook is “continuing to share more information with law enforcement in response to valid legal requests.”


#facebook, #january-6, #law-enforcement, #policy, #qanon, #rioting


Base Operations raises $2.2 million to modernize physical enterprise security

Typically when we talk about tech and security, the mind naturally jumps to cybersecurity. But equally important, especially for global companies with large, multinational organizations, is physical security – a key function at most medium-to-large enterprises, yet one that, to date, hasn’t really done much to take advantage of recent advances in technology. Enter Base Operations, a startup founded by risk management professional Cory Siskind in 2018. Base Operations just closed its $2.2 million seed funding round and will use the money to capitalize on its recent launch of a street-level threat mapping platform for use in supporting enterprise security operations.

The funding, led by Good Growth Capital and including investors like Magma Partners, First In Capital, Gaingels and First Round Capital founder Howard Morgan, will be used primarily for hiring, as Base Operations looks to continue its team growth after doubling its employee base this past month. It’ll also be put to use extending and improving the company’s product and growing the startup’s global footprint. I talked to Siskind about her company’s plans on the heels of this round, as well as the wider opportunity and how her company is serving the market in a novel way.

“What we do at Base Operations is help companies keep their people and operations secure with ‘Micro Intelligence,’ which is street-level threat assessments that facilitate a variety of routine security tasks in the travel security, real estate and supply chain security buckets,” Siskind explained. “Anything that the Chief Security Officer would be in charge of, but not cyber – so anything that intersects with the physical world.”

Siskind has first-hand experience about the complexity and challenges that enter into enterprise security, since she began her career working for global strategic risk consultancy firm Control Risks in Mexico City. Because of her time in the industry, she’s keenly aware of just how far physical and political security operations lag behind their cybersecurity counterparts. It’s an often-overlooked aspect of corporate risk management, particularly since in the past it’s been something that most employees at North American companies only ever encounter periodically, when their roles involve frequent travel. The events of the past couple of years have changed that, however.

“This was the last bastion of a company that hadn’t been optimized by a SaaS platform, basically, so there was some resistance and some allegiance to legacy players,” Siskind told me. “However, the events of 2020 sort of turned everything on its head, and companies realized that the security department, and what happens in the physical world, is not just about compliance – it’s actually a strategic advantage to invest in those sort of services, because it helps you maintain business continuity.”

The COVID-19 pandemic, increased frequency and severity of natural disasters, and global political unrest all had significant impact on businesses worldwide in 2020, and Siskind says that this has proven a watershed moment in how enterprises consider physical security in their overall risk profile and strategic planning cycles.

“[Companies] have just realized that if you don’t invest in how to keep your operations running smoothly in the face of rising catastrophic events, you’re never going to achieve the profits that you need, because it’s too choppy, and you have all sorts of problems,” she said.

Base Operations addresses this problem by taking available data from a range of sources and pulling it together to inform threat profiles. Their technology is all about making sense of the myriad stream of information we encounter daily – taking the wash of news that we sometimes associate with ‘doom-scrolling’ on social media, for instance, and combining it with other sources using machine learning to extrapolate actionable insights.

Those sources of information include “government statistics, social media, local news, data from partnerships, like NGOs and universities,” Siskind said. That data set powers their Micro Intelligence platform, and while the startup’s focus today is on helping enterprises keep people safe, while maintaining their operations, you can easily see how the same information could power everything from planning future geographical expansion, to tailoring product development to address specific markets.
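
As a purely hypothetical illustration of that kind of multi-source aggregation (invented data and weights, not Base Operations’ actual pipeline), rolling incident records from different feeds into a per-area threat score might look like this:

```python
SEVERITY = {"robbery": 3, "protest": 1}  # invented weights

# Hypothetical incident records, assumed already extracted (by ML or
# human analysts) from news, social media, NGO and government feeds.
incidents = [
    {"area": "district-a", "type": "robbery", "source": "local-news"},
    {"area": "district-a", "type": "protest", "source": "social-media"},
    {"area": "district-b", "type": "robbery", "source": "gov-statistics"},
]

def threat_score(area: str) -> int:
    # Sum severity over recent incidents in the area; a real system would
    # also weight by source reliability, recency and corroboration.
    return sum(SEVERITY.get(i["type"], 1) for i in incidents if i["area"] == area)

for area in sorted({i["area"] for i in incidents}):
    print(area, threat_score(area))  # district-a 4, district-b 3
```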

Siskind saw there was a need for this kind of approach to an aspect of business that’s essential but has been relatively slow to adopt new technologies. From her vantage point two years ago, however, she couldn’t have anticipated just how urgent the need for better, more scalable enterprise security solutions would become, and Base Operations now seems perfectly positioned to help meet that need.

#artificial-intelligence, #computer-security, #cryptography, #data-security, #enterprise, #first-round-capital, #funding, #law-enforcement, #machine-learning, #magma-partners, #malware, #mexico-city, #real-estate, #risk-management, #saas, #security, #security-guard, #social-media, #startup, #tc


Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has since faced a slew of criticism for scraping those photos from social media sites, also without permission, prompting Facebook, LinkedIn and Twitter to send cease and desist letters demanding it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

That’s a challenge the company continues to face in court, where it is fighting a class action suit citing Illinois’ biometric protection law — the same law that last year dinged Facebook to the tune of $550 million.

The Canadian privacy watchdog rejected Clearview’s arguments and said it would “pursue other actions” if the company does not follow its recommendations, which included stopping the collection on Canadians and deleting all previously collected images. Clearview said in July that it had stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service were using it.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.

#articles, #canada, #clearview-ai, #digital-rights, #facebook, #facial-recognition, #facial-recognition-software, #human-rights, #illinois, #law-enforcement, #mass-surveillance, #new-york, #privacy, #security, #social-issues, #spokesperson, #terms-of-service


MetroMile says a website bug let a hacker obtain driver’s license numbers

Car insurance startup MetroMile said it has fixed a security flaw on its website that allowed a hacker to obtain driver’s license numbers.

The San Francisco-based insurance startup disclosed the security breach in its latest 8-K filing with the U.S. Securities and Exchange Commission.

MetroMile said a bug in the quote form and application process on the company’s website allowed the hacker to “obtain personal information of certain individuals, including individuals’ driver’s license numbers.” It’s not clear exactly how the form allowed the hacker to obtain driver’s license numbers or how many individuals had their driver’s license numbers obtained.

The disclosure added: “Metromile immediately took steps to contain and remediate the issue, including by releasing software fixes, notified its insurance carrier, and has continued its ongoing operations. Metromile is working diligently with security experts and legal counsel to ascertain how the incident occurred, identify additional containment and remediation measures, and notify affected individuals, law enforcement, and regulatory bodies, as appropriate.”

Rick Chen, a spokesperson for MetroMile, said that the company has so far confirmed that driver’s license numbers were accessed, but that the “investigation is still ongoing.”

MetroMile has not disclosed the security incident on its website or its social channels. Chen said the company plans to notify affected individuals of the incident.

News of the security incident landed as the company confirmed a $50 million investment from former Uber executive Ryan Graves, who will also join the company’s board. It comes just weeks after the auto insurance startup announced it was planning to go public via a special-purpose acquisition company — or SPAC — in a $1.3 billion deal.

#articles, #automotive, #computer-security, #computing, #data-security, #driver, #executive, #insurance, #law-enforcement, #metromile, #ryan-graves, #san-francisco, #security, #security-breaches, #startup-company, #u-s-securities-and-exchange-commission, #uber


Location broker X-Mode continues to track users despite app store bans

Hundreds of Android apps, far more than previously disclosed, have sent granular user location data to X-Mode, a data broker known to sell location data to U.S. military contractors.

The apps include messaging apps, a free video and file converter, several dating sites, and religion and prayer apps — each accounting for tens of millions of downloads to date, according to new research.

Sean O’Brien, principal researcher at ExpressVPN Digital Security Lab, and Esther Onfroy, co-founder of the Defensive Lab Agency, found close to 200 Android apps that at some point over the past year contained X-Mode tracking code.

Some of the apps were still sending location data to X-Mode as recently as December when Apple and Google told developers to remove X-Mode from their apps or face a ban from the app stores.

But weeks after the ban took effect, one popular U.S. transit map app that had been installed hundreds of thousands of times was still downloadable from Google Play — and was still sending location data to X-Mode.

The new research, now published, is believed to be the broadest review to date of apps that collaborate with X-Mode, one of dozens of companies in a multibillion-dollar industry that buys and sells access to the location data collected from ordinary phone apps, often for the purposes of serving targeted advertising.

But X-Mode has faced greater scrutiny for its connections to government work, amid fresh reports that U.S. intelligence bought access to commercial location data to search for Americans’ past movements without first obtaining a warrant.

X-Mode pays app developers to include its tracking code, known as a software development kit, or SDK, in exchange for collecting and handing over the user’s location data. Users opt in to this tracking by accepting the app’s terms of use and privacy policies. But not all apps that use X-Mode disclose to their users that their location data may end up with the data broker or be sold to military contractors.

X-Mode’s ties to military contractors (and by extension the U.S. military) were first disclosed by Motherboard, which reported that a popular prayer app with more than 98 million downloads worldwide sent granular movement data to X-Mode.

In November, Motherboard found that another previously unreported Muslim prayer app called Qibla Compass sent data to X-Mode. O’Brien’s findings corroborate that and also point to several more Muslim-focused apps as containing X-Mode. By conducting network traffic analysis, Motherboard verified that at least three of those apps did at some point send location data to X-Mode, although none of the versions currently on Google Play do so. You can read Motherboard’s full story here.

X-Mode’s chief executive Josh Anton told CNN last year that the data broker tracks 25 million devices in the U.S., and told Motherboard its SDK had been used in about 400 apps.

In a statement to TechCrunch, Anton said:

“The ban on X-Mode’s SDK has broader ecosystem implications considering X-Mode collected similar mobile app data as most advertising SDKs. Apple and Google have set the precedent that they can determine private enterprises’ ability to collect and use mobile app data even when a majority of our publishers had secondary consent for the collection and use of location data.

We’ve recently sent a letter to Apple and Google to understand how we can best resolve this issue together so that we can both continue to use location data to save lives and continue to power the tech communities’ ability to build location-based products. We believe it’s important to ensure that Apple and Google hold X-Mode to the same standard they hold upon themselves when it comes to the collection and use of location data.”

The researchers also published new endpoints that apps using X-Mode’s SDK are known to communicate with, which O’Brien said he hoped would help others discover which apps are sending — or have historically sent — users’ location data to X-Mode.

“We hope consumers can identify if they’re the target of one of these location trackers and, more importantly, demand that this spying end. We want researchers to build off of our findings in the public interest, helping to shine light on these threats to privacy, security, and rights,” said O’Brien.

TechCrunch analyzed the network traffic of about two dozen of the most downloaded Android apps in the researchers’ findings to look for apps communicating with any of the known X-Mode endpoints, and confirmed that several of the apps were at some point sending location data to X-Mode.

We also used the endpoints identified by the researchers to look for other popular apps that may have communicated with X-Mode.
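
The triage step here is simple in principle: extract the hostnames an app talks to (with an intercepting proxy such as mitmproxy, for instance) and compare them against the researchers’ published endpoint list. Below is a minimal Python sketch of that comparison; the endpoint names in it are placeholders for illustration, not X-Mode’s real hosts, which are published in the researchers’ report.

# Given hostnames observed in an app's network capture, flag any that match
# known tracker endpoints. The endpoints below are hypothetical placeholders.
KNOWN_TRACKER_ENDPOINTS = {
    "sdk.example-tracker.com",
    "events.example-tracker.io",
}

def flag_tracker_traffic(observed_hosts: list[str]) -> list[str]:
    """Return the observed hostnames that match a known tracker endpoint."""
    return sorted({
        host for host in observed_hosts
        if any(host == ep or host.endswith("." + ep)
               for ep in KNOWN_TRACKER_ENDPOINTS)
    })

if __name__ == "__main__":
    capture = ["cdn.app-assets.example", "sdk.example-tracker.com", "api.weather.example"]
    print(flag_tracker_traffic(capture))  # ['sdk.example-tracker.com']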

At least one app identified by TechCrunch slipped through Google’s app store ban.

New York Subway in Google Play, until it was removed by Google. (Image: TechCrunch)

New York Subway, a popular app for navigating the New York City subway system that has been downloaded 250,000 times, according to data provided by Sensor Tower, was still listed in Google Play as of this week. But the app, which had not been updated since the app store bans were implemented, was still sending location data to X-Mode.

As soon as the app loads, a splash screen immediately asks for the user’s consent to send data to X-Mode for ads, analytics and market research, but the app did not mention X-Mode’s government work.

Desoline, the Israel-based app maker, did not respond to multiple requests for comment, but removed references to X-Mode from its privacy policy a short while after we reached out. At the time of writing, the app has not returned to Google Play.

A Google spokesperson confirmed the company removed the app from Google Play.

Using the researchers’ list of apps, TechCrunch also found that previous versions of two highly popular apps, Moco and Video MP3 Converter, which account for more than 115 million downloads to date, are still sending user location data to X-Mode. That poses a privacy risk to users who install Android apps from outside Google Play, and to those still running older versions of the apps.

Neither app maker responded to a request for comment. Google would not say if it had removed any other apps for similar violations or what measures it would take, if any, to protect users running older app versions that are still sending location data to X-Mode.

None of the corresponding and namesake apps for Apple’s iOS that we tested appeared to communicate with X-Mode’s endpoints. When reached, Apple declined to say if it had blocked any apps after its ban went into effect.

“The sensors in smartphones provide rich data that can be exploited to limit our movements, our free expression, and our autonomy,” said O’Brien. “Location spying poses a serious threat to human rights because it peers into the most sensitive aspects of our lives and who we associate with.”

The newly published research is likely to bring fresh scrutiny to how ordinary smartphone apps are harvesting and selling vast amounts of personal data on millions of Americans, often without the user’s explicit consent.

Several federal agencies, including the Internal Revenue Service and Homeland Security, are under investigation by government watchdogs for buying and using location data from various data brokers without first obtaining a warrant. Last week it emerged that intelligence analysts at the Defense Intelligence Agency buy access to commercial databases of Americans’ location data.

Critics say the government is exploiting a loophole left by a 2018 Supreme Court ruling, Carpenter v. United States, which stopped law enforcement from obtaining cell phone location data directly from the cell carriers without a warrant.

Now the government says it doesn’t believe it needs a warrant for what it can buy directly from brokers.

Sen. Ron Wyden, a vocal privacy critic whose office has been investigating the data broker industry, previously drafted legislation that would grant the Federal Trade Commission new powers to regulate and fine data brokers.

“Americans are sick of learning that their location data is being sold by data brokers to anyone with a credit card. Industry self-regulation clearly isn’t working — Congress needs to pass tough legislation, like my Mind Your Own Business Act, to give consumers effective tools to prevent their data being sold and to give the FTC the power to hold companies accountable when they violate Americans’ privacy,” said Wyden.


Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents with SecureDrop.

#apps, #broker, #government, #law-enforcement, #location-data, #mobile-app, #new-york-city, #policy, #privacy, #security, #smartphone

0

Fired former data scientist Rebekah Jones arrested, tests positive for COVID-19

Enlarge / Florida’s handling of the pandemic has been… a mixed bag. This beach was hopping on May 20, around the same time Jones publicly claimed the state fired her for refusing to manipulate Florida’s COVID-19 data. (credit: Mike Ehrmann | Getty Images)

Florida police have arrested former state data scientist Rebekah Jones, accusing her of violating a state law that prohibits accessing computer systems without authorization.

Jones on Saturday disclosed the arrest warrant herself, writing in a series of tweets that she and her lawyer had arranged with Florida police for her to present herself to law enforcement in Tallahassee on Sunday evening. (Jones and her family currently live in the Maryland suburbs of Washington, DC.) Jones was released from jail Monday after posting $2,500 bail and reportedly testing positive for COVID-19 while in police custody.

The warrant alleges that Jones gained unauthorized access to state Department of Health systems in November to send an unauthorized message to employees and to download a list of contact information for approximately 19,000 people.

Read 7 remaining paragraphs | Comments

#arrests, #florida, #law-enforcement, #policy, #rebekah-jones

0

How law enforcement gets around your smartphone’s encryption

Enlarge / Surveillance, symbolic image: data security, data sovereignty. (credit: Westend61 | Getty Images)

Lawmakers and law enforcement agencies around the world, including in the United States, have increasingly called for backdoors in the encryption schemes that protect your data, arguing that national security is at stake. But new research indicates governments already have methods and tools that, for better or worse, let them access locked smartphones thanks to weaknesses in the security schemes of Android and iOS.

Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade’s worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently bypass, using special hacking tools. The researchers have dug into the current mobile privacy state of affairs and provided technical recommendations for how the two major mobile operating systems can continue to improve their protections.

“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”

Read 19 remaining paragraphs | Comments

#android, #biz-it, #encryption, #ios, #law-enforcement, #security, #smartphones, #tech

0

Scraped Parler data is a metadata goldmine

Embattled social media platform Parler is offline after Apple, Google and Amazon pulled the plug on the site following the violent riot at the U.S. Capitol last week that left five people dead.

But while the site is gone (for now), millions of posts published to the site since the riot are not.

A lone hacker scraped millions of posts, videos and photos published to the site after the riot but before the site went offline on Monday, preserving a huge trove of potential evidence for law enforcement investigating the attempted insurrection, many of whose participants allegedly used the platform to plan and coordinate the breach of the Capitol.

The hacker and internet archivist, who goes by the online handle @donk_enby, scraped the social network and uploaded copies to the Internet Archive, which hosts old and historical versions of web pages.

In a tweet, @donk_enby said she scraped data from Parler that included deleted and private posts, and the videos contained “all associated metadata.”

Metadata is information about a file — such as when it was made and on what device. This information is usually embedded in the file itself. The scraped videos from Parler appear to also include the precise location data of where the videos were taken. That metadata could be a goldmine of evidence for authorities investigating the Capitol riot, tying some rioters to their Parler accounts or helping police unmask rioters based on their location data.

Most web services remove metadata when you upload your photos and videos, but Parler apparently didn’t.
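
To illustrate what’s at stake, here is a minimal Python sketch, assuming the Pillow imaging library, that reads the GPS coordinates embedded in a photo’s EXIF metadata. (Video metadata of the kind found in the Parler files needs other tooling, such as exiftool.)

# Read the GPS position embedded in a photo's EXIF data. Uses Pillow's
# _getexif(), which returns the raw EXIF tags for JPEG images.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path: str):
    """Return (lat, lon) in decimal degrees, or None if no GPS data."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)  # 34853 is the EXIF "GPSInfo" tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms, ref):
        d, m, s = (float(x) for x in dms)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_from_photo("photo.jpg"))  # e.g. (38.8899, -77.0091)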

Parler quickly became the social network of choice after President Trump was deplatformed from Twitter and Facebook for inciting the riot on January 6. But the tech giants said Parler violated their rules by not having a content moderation policy – which is what drew many users to the site.

Many of the posts made calls to “burn down [Washington] D.C.,” while others called for violence and the execution of Vice President Mike Pence.

Already several rioters have been arrested and charged with breaking into the Capitol building. Many of the rioters weren’t wearing masks (the pandemic notwithstanding), making it easier for them to be identified. But thanks to Parler’s own security blunder, many more could soon face an unwelcome knock at the door.

#amazon, #computing, #internet-archive, #law-enforcement, #microblogging, #operating-systems, #parler, #president, #real-time-web, #security, #social-network, #software, #trump, #vice-president, #washington, #web-services

0

Michelle Obama calls on Silicon Valley to permanently ban Trump and prevent platform abuse by future leaders

In a new statement, former First Lady Michelle Obama calls on Silicon Valley specifically to address its role in the violent insurrection attempt by pro-Trump rioters at the U.S. Capitol building on Wednesday. Obama’s statement also calls out the starkly different treatment the primarily white pro-Trump rioters received from law enforcement compared with that of mostly peaceful BLM supporters during their lawful demonstrations (as opposed to Wednesday’s criminal activity), but it also includes a specific demand of the tech industry’s leaders and platform operators.

“Now is the time for companies to stop enabling this monstrous behavior – and go even further than they have already by permanently banning this man from their platforms and putting in place policies to prevent their technology from being used by the nation’s leaders to fuel insurrection,” Obama wrote in her statement, which she shared on Twitter and on Facebook.

The call for action goes beyond what most social platforms have done already: Facebook has banned Trump, but though it describes the term of the suspension as “indefinite,” it left open the possibility of restoring his accounts in as little as two weeks, once Joe Biden has officially assumed the presidency. Twitter, meanwhile, initially removed three tweets it found offended its rules by inciting violence, and then locked Trump’s account pending his deletion of the same. Earlier on Thursday, Twitter confirmed that Trump had removed these, and that his account would be restored 12 hours after their deletion. Twitch has also disabled Trump’s channel at least until the end of his term, while Shopify has removed Trump’s official merchandise stores from its platform.

No social platform thus far has permanently banned Trump, so far as TechCrunch is aware, which is what Obama is calling for in her statement. And while both Twitter and Facebook have discussed how Trump’s recent behavior has violated their policies regarding use of their platforms, neither has yet provided any detailed information about how they’ll address similar behavior from other world leaders going forward. In other words, we don’t yet know what would be different (if anything) should another Trump-styled megalomaniac take office and use available social channels in a similar manner.

Obama is hardly the only political figure to call for action from social media platforms around “sustained misuse of their platforms to sow discord and violence,” as Senator Mark Warner put it in a statement on Wednesday. Likely once the dust clears from this week’s events, Facebook, Twitter, YouTube, et al. will face renewed scrutiny from lawmakers and public interest groups around any corrective action they’re taking.

#articles, #capitol-riot, #deception, #donald-trump, #joe-biden, #law-enforcement, #mark-warner, #michelle-obama, #qanon, #shopify, #social-media, #social-media-platforms, #tc, #trump, #twitch, #twitter

0

Insurrectionists’ social media presence gives feds an easy way to ID them

Enlarge / The seditionists who broke into the US Capitol on Wednesday were not particularly subtle and did not put any particular effort into avoiding being identified. (credit: Saul Loeb | AFP | Getty Images)

Law enforcement agencies trying to track down insurrectionists who participated in yesterday’s events at the US Capitol have a wide array of tools at their disposal thanks to the ubiquity of cameras and social media.

Both local police and the FBI are seeking information about individuals who were “actively instigating violence” in Washington, DC, on January 6. While media organizations took thousands of photos that police can use, investigators also have more advanced technologies at their disposal to identify participants, following what several other agencies have done in recent months.

Several police departments, such as Miami, Philadelphia, and New York City, turned to facial recognition platforms—including the highly controversial Clearview AI—during the widespread summer 2020 demonstrations against police brutality and in support of Black communities. In Philadelphia, for example, police used software to compare protest footage against Instagram photos to identify and arrest a protestor. In November, The Washington Post reported that investigators from 14 local and federal agencies in the DC area have used a powerful facial recognition system more than 12,000 times since 2019.

Read 10 remaining paragraphs | Comments

#dc, #facial-recognition, #fbi, #insurrection, #law-enforcement, #livestreams, #police, #policy, #sedition, #washington

0

COVID-19 contact-tracing data is fair game for police, Singapore says

Enlarge / A user in Singapore holding the TraceTogether device, which can be used for COVID-19 contact tracing in lieu of a smartphone app. (credit: Roslan Rahman | AFP | Getty Images)

The government of Singapore said this week it has used data gathered for COVID-19 mitigation purposes in criminal investigations, sparking privacy concerns about contact tracing both in Singapore and elsewhere in the world.

Singapore’s contact-tracing app, TraceTogether, has been adopted by nearly 80 percent of the country’s population, according to The Guardian, and Singaporeans are required to use it to enter certain gathering places such as shopping malls.

TraceTogether’s privacy statement originally read, “Data will only be used for Covid-19 contact tracing,” but it was updated this week to add, “Authorised Police officers may invoke Criminal Procedure Code (CPC) powers to request users to upload their TraceTogether data for criminal investigations. The Singapore Police Force is empowered under the CPC to obtain any data, including TraceTogether data, for criminal investigations,” The Register reports.

Read 8 remaining paragraphs | Comments

#contact-tracing, #coronavirus, #covid-19, #data-privacy, #law-enforcement, #personal-privacy, #police, #policy, #privacy, #singapore

0

2020 was a disaster, but the pandemic put security in the spotlight

Let’s preface this year’s predictions by acknowledging and admitting how hilariously wrong we were when this time last year we said that 2020 “showed promise.”

In fairness, (almost) nobody saw a pandemic coming.

The pandemic is, and remains, a global disaster of epic proportions that’s forced billions of people into lockdown and left economies in tatters, with companies (including startups) struggling to stay afloat. The mass shift to working from home brought security challenges with it, like how to protect your workforce when employees are working outside the security perimeter of their offices. But it also forced us to find solutions to some of the most complex challenges, like pulling off a secure election and securing the supply chain for the vaccines that will bring our lives back to some semblance of normality.

With 2020 wrapping up, many of the security headaches exposed by the pandemic will linger into the new year. This is what to expect.

Working from home has given hackers new avenues for attacks

The sudden lockdowns in March drove millions to work from home. But hackers quickly found new and interesting ways to target big companies by targeting the employees themselves. VPNs were a big target because of outstanding vulnerabilities that many companies didn’t bother to fix. Bugs in enterprise software left corporate networks open to attack. The flood of personal devices logging onto the network — and the influx of malware with it — introduced fresh havoc.

Sophos says that this mass decentralization of the workforce has turned us all into our own IT departments: we have to patch our own computers and install security updates, and there’s no IT desk down the hallway to ask whether that email is a phishing attempt.

Companies are having to adjust to the cybersecurity challenges, since working from home is probably here to stay. Managed service providers, or outsourced IT departments, have a “huge opportunity to benefit from the work-from-home shift,” said Grayson Milbourne, security intelligence director at cybersecurity firm Webroot.

Ransomware has become more targeted and more difficult to escape

File-encrypting malware, or ransomware, is getting craftier and sneakier. Where traditional ransomware would encrypt and hold a victim’s files hostage in exchange for a ransom payout, the newer and more advanced strains first steal a victim’s files, encrypt the network and then threaten to publish the stolen files if the ransom isn’t paid.

This data-stealing ransomware makes escaping an attack far more difficult because a victim can’t just restore their systems from a backup (if there is one). CrowdStrike’s chief technology officer Michael Sentonas calls this new wave of ransomware “double extortion” because victims are forced to respond to the data breach as well.

The healthcare sector has been under the closest guard because of the pandemic. Despite promises from some (but not all) ransomware groups that hospitals would not be deliberately targeted during the pandemic, medical practices were far from immune, and 2020 saw several high-profile attacks. A ransomware attack at Universal Health Services, one of the largest healthcare providers in the U.S., caused widespread disruption to its systems. Just last month U.S. Fertility confirmed a ransomware attack on its network.

These high-profile incidents are becoming more common because hackers are targeting their victims very carefully. These hyperfocused attacks require a lot more skill and effort but improve the hackers’ odds of landing a larger ransom — in some cases earning the hackers millions of dollars from a single attack.

“This coming year, these sophisticated cyberattacks will put enormous stress on the availability of services — in everything from rerouted healthcare services impacting patient care, to availability of online and mobile banking and finance platforms,” said Sentonas.

#computer-security, #cyberattacks, #encryption, #enterprise-software, #facial-recognition, #government, #law-enforcement, #malware, #privacy, #ransomware, #security, #u-s-government

0

Florida police raid home of former state coronavirus data manager

Enlarge / Workers removing a sign from a drive-through COVID-19 testing site in Orlando, Florida, in October 2020. (credit: Paul Hennessy | NurPhoto | Getty Images)

Police on Monday raided the Florida home of data scientist Rebekah Jones, who alleged in May that she was fired from her job collating COVID-19 data for the state because she refused to “manipulate” data to make the governor’s agenda look more favorable.

“At 8:30 this morning, state police came into my house and took all my hardware and tech,” Jones said in a Twitter thread on Monday afternoon. Her initial post included a 30-second video of armed officers pointing guns up a staircase and shouting for Jones’ husband and children to come down before another officer shouted, “search warrant!” loudly to no one in particular.

“They pointed a gun in my face. They pointed guns at my kids,” Jones added. “They took my phone and the computer I use every day to post the case numbers in Florida, and school cases for the entire country. They took evidence of corruption at the state level.”

Read 16 remaining paragraphs | Comments

#coronavirus, #covid-19, #florida, #law-enforcement, #police, #policy, #ron-desantis

0

Decrypted: How Twitter was hacked, GitHub DMCA backfires

One week to the U.S. presidential election and things are getting spicy.

It’s not just the rhetoric — hackers are actively working to disrupt the election, officials have said, and last week they offered a concrete example and an unusually quick assignment of blame.

On Wednesday night, Director of National Intelligence John Ratcliffe blamed Iran for an email operation designed to intimidate voters in Florida into voting for President Trump “or else.” Ratcliffe, who didn’t take any questions from reporters and has been accused of politicizing the typically impartial office, said Iran had used voter registration data — which is largely public in the U.S. — to send emails that looked like they came from the far-right group the Proud Boys. Google security researchers also linked the campaign to Iran, which denied claims of its involvement. It’s estimated about 2,500 emails went through in the end, with the rest getting caught in spam filters.

The announcement was thin on detail. But experts like John Hultquist, who heads intelligence analysis at FireEye-owned security firm Mandiant, said the incident is “clearly aimed at undermining voter confidence,” just as the Russians attempted during the 2016 election.


THE BIG PICTURE

Twitter was hacked using a fake VPN portal, New York investigation finds

The hackers who broke into Twitter’s network used a fake VPN page to steal the credentials — and two-factor authentication code — of an employee, an investigation by New York’s Department of Financial Services found. The state financial regulator got involved after the hackers hijacked user accounts using an internal “admin tool” to spread a cryptocurrency scam.

In a report published last week, the department said the hackers called several Twitter employees and used social engineering to trick one employee into entering their username and password on a site that looked like the company’s VPN portal, which most employees use to access the network from home during the pandemic.

“As the employee entered their credentials into the phishing website, the hackers would simultaneously enter the information into the real Twitter website. This false log-in generated a [two-factor authentication] notification requesting that the employees authenticate themselves, which some of the employees did,” the report read. Once on the network with the employee’s VPN credentials, the hackers used that access to work out how to reach the company’s internal tools.

Twitter said in September that its employees would receive hardware security keys, which would make it far more difficult for a repeat phishing attack to be successful.
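
Why hardware keys help: a one-time code is origin-blind (an employee can be tricked into typing it into a look-alike site), whereas a FIDO2/WebAuthn security key signs a challenge bound to the origin it actually saw. The toy model below, using an ordinary HMAC rather than the real WebAuthn protocol and made-up domain names, shows why a relayed response fails verification.

import hashlib
import hmac

# Toy model, not real WebAuthn: the signature covers the origin the
# authenticator actually saw, so a response phished on a look-alike domain
# cannot be replayed against the real site. Domains are hypothetical.
SECRET = b"key-material-sealed-inside-the-token"

def assert_login(challenge: bytes, origin: str) -> bytes:
    return hmac.new(SECRET, challenge + origin.encode(), hashlib.sha256).digest()

challenge = b"server-nonce-1234"
relayed = assert_login(challenge, "vpn-portal.example.com")  # the fake portal
expected = assert_login(challenge, "vpn.corp.example")       # the real portal
print(hmac.compare_digest(relayed, expected))  # False: the relay fails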

Open-source YouTube download tool hit by DMCA takedown, which backfires

#android, #computer-security, #decrypted, #encryption, #github, #iphone, #iran, #law-enforcement, #mandiant, #president, #security, #social, #social-engineering, #startups, #team8, #trump-administration, #united-states

0

This is how police request customer data from Amazon

Anyone can access portions of a web portal, used by law enforcement to request customer data from Amazon, even though the portal is supposed to require a verified email address and password.

Amazon’s law enforcement request portal allows police and federal agents to submit formal requests for customer data along with a legal order, like a subpoena, a search warrant, or a court order. The portal is publicly accessible from the internet, but law enforcement must register an account with the site in order to allow Amazon to “authenticate” the requesting officer’s credentials before they can make requests.

Only time-sensitive emergency requests can be submitted without an account, but this requires the user to “declare and acknowledge” that they are an authorized law enforcement officer before they can submit a request.

The portal does not display customer data or allow access to existing law enforcement requests. But parts of the website still load without needing to log in, including its dashboard and the “standard” request form used by law enforcement to request customer data.

The portal provides a rare glimpse into how Amazon handles law enforcement requests.

This form allows law enforcement to request customer data using a wide variety of data points, including Amazon order numbers, serial numbers of Amazon Echo and Fire devices, credit card details and bank account numbers, gift cards, delivery and shipping numbers, and even the Social Security number of delivery drivers.

It also allows law enforcement to obtain records related to Amazon Web Services accounts by submitting domain names or IP addresses related to the request.

Assuming this was a bug, we sent Amazon several emails prior to publication but did not hear back.

Amazon is not the only tech company with a portal for law enforcement requests. Many of the bigger tech companies with millions or even billions of users around the world, like Google and Twitter, have built portals to allow law enforcement to request customer and user data.

Motherboard reported a similar issue earlier this month that allowed anyone with an email address to access law enforcement portals set up by Facebook and WhatsApp.

#amazon-web-services, #computing, #law-enforcement, #neighbors, #officer, #privacy, #publishing, #retailers, #security, #web-portal

0

Senate’s encryption backdoor bill is ‘dangerous for Americans,’ says Rep. Lofgren

A Senate bill that would compel tech companies to build backdoors to allow law enforcement access to encrypted devices and data would be “very dangerous” for Americans, said a leading House Democrat.

Law enforcement frequently spars with tech companies over their use of strong encryption, which protects user data from hackers and theft but which, the government says, makes it harder to catch criminals accused of serious crimes. Tech companies like Apple and Google have in recent years doubled down on their security efforts by securing data with encryption that even they cannot unlock.

Senate Republicans in June introduced their latest “lawful access” bill, renewing previous efforts to force tech companies to allow law enforcement access to a user’s data when presented with a court order.

“It’s dangerous for Americans, because it will be hacked, it will be utilized, and there’s no way to make it secure,” Rep. Zoe Lofgren, whose congressional seat covers much of Silicon Valley, told TechCrunch at Disrupt 2020. “If we eliminate encryption, we’re just opening ourselves up to massive hacking and disruption,” she said.

Lofgren’s comments echo those of critics and security experts, who have long criticized efforts to undermine encryption, arguing that there is no way to build a backdoor for law enforcement that could not also be exploited by hackers.

Several previous efforts by lawmakers to weaken and undermine encryption have failed. Currently, law enforcement has to use existing tools and techniques to find weaknesses in phones and computers. The FBI claimed for years that it had thousands of devices that it couldn’t get into, but admitted in 2018 that it repeatedly overstated the number of encrypted devices it had and the number of investigations that were negatively impacted as a result.

Lofgren has served in Congress since 1995, through the first so-called “Crypto Wars,” when the security community fought federal efforts to limit access to strong encryption. In 2016, Lofgren was part of an encryption working group on the House Judiciary Committee. The group’s final report, bipartisan but not binding, found that any measure to undermine encryption “works against the national interest.”

Still, it’s a talking point that the government continues to push, even as recently as this year when U.S. Attorney General William Barr said that Americans should accept the security risks that encryption backdoors pose.

“You cannot eliminate encryption safely,” Lofgren told TechCrunch. “And if you do, you will create chaos in the country and for Americans, not to mention others around the world,” she said. “It’s just an unsafe thing to do, and we can’t permit it.”

#apple, #attorney-general, #computer-security, #congress, #crypto-wars, #cryptography, #disrupt-2020, #encryption, #government, #law-enforcement, #security, #senate, #united-states, #william-barr, #zoe-lofgren

0

Apple opens up — slightly — on Hong Kong’s national security law

After Beijing unilaterally imposed a new national security law on Hong Kong on July 1, many saw the move as an effort by Beijing to crack down on dissent and protests in the semi-autonomous region.

Soon after, a number of tech giants — including Microsoft, Twitter and Google — said they would stop processing requests for user data from Hong Kong authorities, fearing that the requested data could end up in the hands of Beijing.

But Apple was noticeably absent from the list. Instead, Apple said it was “assessing” the new law.

When reached by TechCrunch, Apple did not say how many requests for user data it had received from Hong Kong authorities since the new national security law went into effect. But the company reiterated that it doesn’t receive requests for user content directly from Hong Kong. Instead, it relies on a long-established so-called mutual legal assistance treaty, allowing U.S. authorities to first review requests from foreign governments.

Apple said it stores iCloud data for Hong Kong users in the United States, so any request by Hong Kong authorities for user content has to be approved first by the Justice Department, and a warrant has to be issued by a U.S. federal judge before the data can be handed over to Hong Kong.

The company said that it received a limited number of non-content requests from Hong Kong related to fraud or stolen devices, and that the number of requests it received from Hong Kong authorities since the introduction of the national security law will be included in an upcoming transparency report.

Hong Kong authorities made 604 requests for device information, 310 requests for financial data, and 10 requests for user account data during 2019.

The report also said that Apple received 5,295 requests from U.S. authorities during the second half of last year for data related to 80,235 devices, a seven-fold increase from the previous six months.

Apple also received 4,095 requests from U.S. authorities for user data stored in iCloud on 31,780 accounts, twice the number of accounts affected during the previous six months.

Most of the requests related to ongoing return and repair fraud investigations, Apple said.

The report said Apple received 2,522 requests from U.S. authorities to preserve data on 6,741 user accounts while law enforcement obtained the proper legal process to access the data.

Apple also said it received between 0 and 499 national security requests for non-content data, affecting between 15,500 and 15,999 users or accounts, an increase of 40% over the previous report.

Tech companies are only allowed to report the number of national security requests in ranges, per rules set out by the Justice Department.

The company also published two FBI national security letters, or NSLs, from 2019, which the company petitioned to make public. These letters are subpoenas issued by the FBI with no judicial oversight, often with a gag order preventing the company from disclosing their existence. Since the introduction of the USA Freedom Act in 2015, the FBI has been required to periodically review the gag orders and lift them when they are no longer deemed necessary.

Apple also said it received 54 requests from governments to remove 258 apps from its app store. China filed the vast majority of requests.

#apple, #department-of-justice, #government, #icloud, #law-enforcement, #operating-systems, #security, #transparency-report

0

Decrypted: Uber’s former security chief charged, FBI’s ‘vishing’ warning

A lot happened in cybersecurity over the past week.

The University of Utah paid almost half a million dollars to stop hackers from leaking sensitive student data after a ransomware attack. Two major ATM makers patched flaws that could’ve allowed for fraudulent cash withdrawals from vulnerable ATMs. Grant Schneider, the U.S. federal chief information security officer, is leaving his post after more than three decades in government. And, a new peer-to-peer botnet is spreading like wildfire and infecting millions of machines around the world.

In this week’s column, we look at how Uber’s handling of its 2016 data breach put the company’s former chief security officer in hot water with federal prosecutors. And, what is “vishing” and why should companies take note?


THE BIG PICTURE

Uber’s former security chief charged with data breach cover-up

Joe Sullivan, Uber’s former security chief, was indicted this week by federal prosecutors for allegedly trying to cover up a data breach in 2016 that saw 57 million rider and driver records stolen.

Sullivan paid the two hackers, who were also charged over the breach, $100,000 in a “bug bounty” payment in exchange for their signing a nondisclosure agreement. It wasn’t until a year after the breach that former Uber chief executive Travis Kalanick was forced out and replaced with Dara Khosrowshahi, who fired Sullivan after learning of the cyberattack. Sullivan now serves as Cloudflare’s chief security officer.

The payout itself isn’t the issue, as some had claimed. Prosecutors in San Francisco took issue with how Sullivan allegedly tried to bury the breach, which later resulted in a massive $148 million settlement with the Federal Trade Commission.

#computer-security, #crime, #data-breach, #decrypted, #federal-trade-commission, #law-enforcement, #peer-to-peer, #privacy, #san-francisco, #security, #social-engineering, #telephony, #united-states

0

Leaked S-1 says Palantir would fight an order demanding its encryption keys

Palantir, the secretive data analytics startup founded by billionaire investor Peter Thiel, would challenge a government order seeking the company’s encryption keys, according to a leaked document.

TechCrunch has obtained a leaked copy of Palantir’s S-1, filed with U.S. regulators to take the company public. We’ve covered some ground already, including looking at Palantir’s financials, its customers, and some of the company’s self-identified risk factors.

But despite close relationships with law enforcement and government customers — including the U.S. government — Palantir indicated where it would draw the line if it was served a legal demand for its data.

From the leaked S-1 filing:

From time to time, government entities may seek our assistance with obtaining information about our customers or could request that we modify our platforms in a manner to permit access or monitoring. In light of our confidentiality and privacy commitments, we may legally challenge law enforcement or other government requests to provide information, to obtain encryption keys, or to modify or weaken encryption.

The S-1 touches on a particularly thorny issue in the U.S., given repeated efforts by the Trump administration to undermine and weaken encryption at the request of law enforcement, who say that encryption used by U.S. tech and internet giants makes it harder to investigate crimes.

But despite the close ties between Palantir co-founder Peter Thiel and the administration, Palantir’s position on encryption aligns closer with that of other Silicon Valley tech companies, which say strong encryption protects their users and customers from hackers and data theft.

In June, the government doubled down on its anti-encryption position with the introduction of two bills which, if passed, would force tech giants to build encryption backdoors into their systems.

Tech companies — including Apple, Facebook, Google, Microsoft, and Twitter — strongly opposed the bills, arguing that backdoors “would leave all Americans, businesses, and government agencies dangerously exposed to cyber threats from criminals and foreign adversaries.” (Verizon Media, which owns TechCrunch, is also a member of the coalition.)

Orders demanding a company’s encryption keys are rare but not unheard of.

In 2013 the government ordered Lavabit, an encrypted email provider, to turn over the site’s encryption keys. It was later confirmed, though long suspected, that the government wanted access to the Lavabit account belonging to NSA whistleblower Edward Snowden.

More recently, the FBI launched legal action in 2016 to compel Apple to build a custom backdoor that would have allowed federal agents access to an encrypted iPhone belonging to one of the San Bernardino shooters, Syed Rizwan Farook, who, with his wife Tashfeen Malik, killed 14 people and injured 22 others. The FBI dropped the case after hiring hackers to break into the shooter’s iPhone without Apple’s help.

Palantir did not say in the S-1 if it had received a legal order to date. But the S-1 filing said that the company risks “adverse political, business, and reputational consequences” regardless of whether or not the company challenged a legal order in court.

A Palantir spokesperson did not return a request for comment.

#apple, #co-founder, #computing, #cryptography, #edward-snowden, #email-encryption, #encryption, #finance, #government, #iphone, #law-enforcement, #mass-surveillance, #palantir, #peter-thiel, #privacy, #security, #spokesperson, #trump-administration, #u-s-government, #united-states, #verizon

0

Cops in Miami, NYC arrest protesters from facial recognition matches

Enlarge / Demonstrators marching on a roadway during a protest against police brutality and the death of George Floyd, on May 31, 2020, in Miami, Florida. (credit: Joe Raedle | Getty Images)

Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact.

Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising “constitutionally protected activities” such as protesting, according to the report.

“If someone is peacefully protesting and not committing a crime, we cannot use it against them,” Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, “We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future.”

Read 10 remaining paragraphs | Comments

#black-lives-matter, #clearview, #clearview-ai, #facial-recognition, #law-enforcement, #police, #policy, #protests, #racism

0

A new technique can detect newer 4G ‘stingray’ cell phone snooping

Security researchers say they have developed a new technique to detect modern cell-site simulators.

Cell site simulators, known as “stingrays,” impersonate cell towers and can capture information about any phone in their range — including, in some cases, calls, messages and data. Police secretly deploy stingrays hundreds of times a year across the United States, often capturing the data of innocent bystanders in the process.

Little is known about stingrays, because they are deliberately shrouded in secrecy. Developed by Harris Corp. and sold exclusively to police and law enforcement, stingrays are covered under strict nondisclosure agreements that prevent police from discussing how the technology works. But what we do know is that stingrays exploit flaws in the way that cell phones connect to 2G cell networks.

Most of those flaws are fixed in the newer, faster and more secure 4G networks, though not all. Newer cell site simulators, called “Hailstorm” devices, take advantage of similar flaws in 4G that let police snoop on newer phones and devices.

Some phone apps claim they can detect stingrays and other cell site simulators, but most produce unreliable results.

But now researchers at the Electronic Frontier Foundation have discovered a new technique that can detect Hailstorm devices.

The EFF’s latest project, dubbed “Crocodile Hunter” — named after the Australian conservationist Steve Irwin, who was killed by a stingray’s barb in 2006 — detects cell site simulators by decoding nearby 4G signals to determine whether a cell tower is legitimate.

Every time your phone connects to the 4G network, it runs through a checklist — known as a handshake — to make sure that the phone is allowed to connect to the network. It does this by exchanging a series of unencrypted messages with the cell tower, including unique details about the user’s phone — such as its IMSI number and its approximate location. These messages, known as the master information block (MIB) and the system information block (SIB), are broadcast by the cell tower to help the phone connect to the network.

“This is where the heart of all of the vulnerabilities lie in 4G,” said Cooper Quintin, a senior staff technologist at the EFF, who headed the research.

Quintin and fellow researcher Yomna Nasser, who authored the EFF’s technical paper on how cell site simulators work, found that collecting and decoding the MIB and SIB messages over the air can identify potentially illegitimate cell towers.

This became the foundation of the Crocodile Hunter project.

A rare public photo of a stingray, manufactured by Harris Corp. Image Credits: U.S. Patent and Trademark Office

Crocodile Hunter is open-source, allowing anyone to run it, but it requires a stack of both hardware and software to work. Once up and running, Crocodile Hunter scans for 4G cellular signals, begins decoding the tower data, and uses trilateration to visualize the towers on a map.
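
Trilateration itself is plain geometry: given estimated distances to a tower from three or more known observation points, the tower’s position falls out of a small system of equations. Here is a toy Python sketch that assumes the distance estimates are already in hand; in practice they are inferred from noisy signal-strength readings, so real tools fit many samples rather than exactly three.

# Estimate a tower's position from three (x, y, distance) observations.
# Subtracting the circle equations pairwise yields a 2x2 linear system.
def trilaterate(p1, p2, p3):
    (x1, y1, d1), (x2, y2, d2), (x3, y3, d3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the observation points are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three observations taken from different spots; the tower comes out at (3, 4).
print(trilaterate((0, 0, 5.0), (6, 0, 5.0), (0, 8, 5.0)))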

But the system does require some thought and human input to find anomalies that could identify a real cell site simulator. Those anomalies can look like cell towers that appear out of nowhere, towers that seem to move or don’t match known mappings of existing towers, or towers broadcasting MIB and SIB messages that don’t make sense.
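
As a rough illustration, those checks can be expressed as a few heuristics over the decoded broadcast data. The field names and thresholds below are illustrative guesses, not Crocodile Hunter’s actual schema.

from dataclasses import dataclass

@dataclass
class Tower:
    mcc: str       # mobile country code from SIB1, e.g. "310" for the U.S.
    mnc: str       # mobile network code
    cell_id: int
    lat: float
    lon: float

EXPECTED_MCCS = {"310", "311", "312"}  # U.S. codes; Bermuda's, by contrast, is "350"

def anomalies(observed: Tower, known: dict[int, Tower]) -> list[str]:
    """Flag an observed tower against a database of known, mapped towers."""
    flags = []
    if observed.mcc not in EXPECTED_MCCS:
        flags.append(f"unexpected network {observed.mcc}/{observed.mnc}")
    prior = known.get(observed.cell_id)
    if prior is None:
        flags.append("tower not in known mappings")
    elif abs(prior.lat - observed.lat) + abs(prior.lon - observed.lon) > 0.05:
        flags.append("tower appears to have moved")
    return flags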

That’s why verification is important, Quintin said, and stingray-detecting apps don’t do this.

“Just because we find an anomaly, doesn’t mean we found the cell site simulator. We actually need to go verify,” he said.

In one test, Quintin traced a suspicious-looking cell tower to a truck outside a conference center in San Francisco. It turned out to be a legitimate mobile cell tower, contracted to expand the cell capacity for a tech conference inside. “Cells on wheels are pretty common,” said Quintin. “But they have some interesting similarities to cell site simulators, namely in that they are a portable cell that isn’t usually there and suddenly it is, and then leaves.”

In another test, carried out earlier this year at the ShmooCon security conference in Washington, D.C., where cell site simulators have been found before, Quintin found two suspicious cell towers using Crocodile Hunter: one broadcasting a mobile network identifier associated with a Bermuda cell network, and another that didn’t appear to be associated with any cell network at all. Neither made much sense, given that Washington, D.C. is nowhere near Bermuda.

Quintin said the project is aimed at helping to detect cell site simulators, but conceded that police will continue to use them for as long as cell networks remain vulnerable, a problem that could take years to fix.

Instead, Quintin said that phone makers could do more at the device level to prevent attacks, such as letting users switch off access to legacy 2G networks so they can effectively opt out of legacy stingray attacks. Meanwhile, cell networks and industry groups should work to fix the vulnerabilities that Hailstorm devices exploit.

“None of these solutions are going to be foolproof,” said Quintin. “But we’re not even doing the bare minimum yet.”


Send tips securely over Signal and WhatsApp to +1 646-755-8849 or send an encrypted email to: zack.whittaker@protonmail.com

#black-hat-2020, #cell-phones, #dc, #def-con-2020, #electronic-frontier-foundation, #law-enforcement, #mobile-phone, #mobile-security, #privacy, #san-francisco, #security, #surveillance, #telecommunications, #united-states, #washington-dc

0