Amazon studio plans lighthearted show of Ring surveillance footage

Amazon’s combining its endless reach with its constant surveillance—but for laughs. (credit: Getty Images)

For some people, the term “Ring Nation” might evoke a warrantless surveillance dystopia overseen by an omnipotent megacorp. To Amazon-owned MGM, Ring Nation is a clip show hosted by comedian Wanda Sykes, featuring dancing delivery people and adorable pets.

Deadline reports that the show, due to debut on September 26, is “the latest example of corporate synergy at Amazon.” Amazon owns household video security brand Ring, Hollywood studio MGM, and Big Fish, the producer of Ring Nation.

Viral videos captured by doorbell cameras have been hot for a while now. You can catch them on late-night talk shows, the r/CaughtOnRing subreddit, and on millions of TikTok users’ For You pages. Amazon’s media properties, perhaps sensing an opportunity to capitalize and soften Ring’s image, are sallying forth with an officially branded offering.

#amazon, #gaming-culture, #mgm, #privacy, #ring, #surveillance, #tech

FTC aims to counter the “massive scale” of online data collection

FTC Chair Lina Khan said the commission intends to act on commercial data collection, which happens at “a massive scale and in a stunning array of contexts.” (credit: Getty Images)

The Federal Trade Commission has kicked off the rulemaking process for privacy regulations that could restrict online surveillance and punish bad data-security practices. It’s a move that some privacy advocates say is long overdue, as similar Congressional efforts face endless uncertainty.

The Advance Notice of Proposed Rulemaking, approved on a 3-2 vote along partisan lines, was spurred by commercial data collection, which occurs at “a massive scale and in a stunning array of contexts,” FTC Chair Lina M. Khan said in a press release. Companies surveil online activity, friend networks, browsing and purchase history, location data, and other details; analyze it with opaque algorithms; and sell it through “the massive, opaque market for consumer data,” Khan said.

Companies can also fail to secure that data or use it to make services addictive to children. They can also potentially discriminate against customers based on legally protected statuses like race, gender, religion, and age, the FTC said. What’s more, the release said, some companies make taking part in their “commercial surveillance” required for service or charge a premium to avoid it, employing dark patterns to keep the systems in place.

#alvaro-bedoya, #american-data-privacy-and-protection-act, #ftc, #lina-khan, #online-privacy, #policy, #privacy, #privacy-protection, #surveillance

Amazon finally admits giving cops Ring doorbell data without user consent

More than 10 million people rely on Ring video doorbells to monitor what’s happening directly outside the front doors of their homes. The popularity of the technology has raised a question that concerns privacy advocates: Should police have access to Ring video doorbell recordings without first gaining user consent?

Ring recently revealed how often the answer to that question has been yes. The Amazon company responded to an inquiry from US Senator Ed Markey (D-Mass.), confirming that there have been 11 cases in 2022 where Ring complied with police “emergency” requests. In each case, Ring handed over private recordings, including video and audio, without letting users know that police had access to—and potentially downloaded—their data. This raises many concerns about increased police reliance on private surveillance, a practice that’s long gone unregulated.

Ring says it will only “respond immediately to urgent law enforcement requests for information in cases involving imminent danger of death or serious physical injury to any person.” Its policy is to review any requests for assistance from police, then make “a good-faith determination whether the request meets the well-known standard, grounded in federal law, that there is imminent danger of death or serious physical injury to any person requiring disclosure of information without delay.”

#amazon, #amazon-ring, #police, #policy, #surveillance

The danger of license plate readers in post-Roe America

A license plate reader in California. (credit: Gado | Getty Images)

Since the United States Supreme Court overturned Roe v. Wade last month, America’s extensive surveillance state could soon be turned against those seeking abortions or providing abortion care.

Currently, nine states have almost entirely banned abortion, and more are expected to follow suit. Many Republican lawmakers in these states are discussing the possibility of preventing people from traveling across state lines to obtain an abortion. If such plans are enacted and withstand legal scrutiny, one of the key technologies that could be deployed to track people trying to cross state lines is the automated license plate reader (ALPR). ALPRs are employed heavily by police forces across the US, but they’re also used by private actors.

#license-plate-reader, #license-plate-readers, #policy, #privacy, #roe-v-wade, #surveillance

COVID cases are again on the rise globally as testing, health measures decline

World Health Organization (WHO) Director-General Tedros Adhanom Ghebreyesus (L) and WHO Technical Lead Maria Van Kerkhove attend a daily press briefing on COVID-19 at the WHO headquarters on March 2, 2020, in Geneva. (credit: Getty | Fabrice Coffrini)

After weeks of decline, the global tally of COVID-19 cases is now ticking back up. This uptick is raising concerns that we could see yet another surge amid relaxed health measures and the rise of the omicron subvariant BA.2, the most highly transmissible version of the virus identified to date.

According to the latest COVID-19 situation report by the World Health Organization, the global tally of new weekly cases increased 8 percent for the week ending on March 13, totaling over 11 million cases. Cases are increasing in the Western Pacific, European, and African regions. South Korea, Vietnam, Germany, France, and the Netherlands reported the highest numbers of new cases.

“These increases are occurring despite reductions in testing in some countries, which means the cases we are seeing are just the tip of the iceberg,” director-general of the World Health Organization, Dr. Tedros Adhanom Ghebreyesus, said in a press briefing Wednesday.

#ba-2, #cases, #covid-19, #infectious-disease, #omicron, #public-health, #science, #surveillance, #testing, #vaccine

>1,000 Android phones found infected by creepy new spyware

More than 1,000 Android users have been infected with newly discovered malware that surreptitiously records audio and video in real time, downloads files, and performs a variety of other creepy surveillance activities.

In all, researchers from security firm Zimperium uncovered 23 apps that covertly installed spyware the firm calls PhoneSpy. The malware offers a full-featured array of capabilities that, besides eavesdropping and document theft, also includes transmitting GPS location data, modifying Wi-Fi connections, and performing overlay attacks to harvest passwords for Facebook, Instagram, Google, and the Kakao Talk messaging application.

“These malicious Android apps are designed to run silently in the background, constantly spying on their victims without raising any suspicion,” Zimperium researcher Aazim Yaswant wrote. “We believe the malicious actors responsible for PhoneSpy have gathered significant amounts of personal and corporate information on their victims, including private communications and photos.”

#android, #malware, #surveillance, #tech

Biden’s new FTC nominee is a digital privacy advocate critical of Big Tech

President Biden made his latest nomination to the Federal Trade Commission this week, tapping digital privacy expert Alvaro Bedoya to join the agency as it takes a hard look at the tech industry.

Bedoya is the founding director of the Center on Privacy & Technology at Georgetown’s law school and previously served as chief counsel for former Senator Al Franken and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Bedoya has worked on legislation addressing some of the most pressing privacy issues in tech, including stalkerware and facial recognition systems.

In 2016, Bedoya co-authored a report titled “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a year-long investigation into police use of facial recognition systems in the U.S. The report examined law enforcement’s reliance on facial recognition systems and biometric databases at the state level and argued that regulations are desperately needed to curtail potential abuses and algorithmic failures before the technology inevitably becomes even more commonplace.

Bedoya also isn’t shy about calling out Big Tech. In a New York Times op-ed a few years ago, he took aim at Silicon Valley companies giving user privacy lip service in public while quietly funneling millions toward lobbyists to undermine consumer privacy. The new FTC nominee singled out Facebook specifically, pointing to the company’s efforts to undermine the Illinois Biometric Information Privacy Act, a state law that serves as one of the only meaningful checks on invasive privacy practices in the U.S.

Bedoya argued that the tech industry would have an easier time shaping a single, sweeping piece of privacy regulation with its lobbying efforts than a flurry of targeted, smaller bills. Antitrust advocates in Congress taking aim at tech today seem to have learned the same lesson.

“We cannot underestimate the tech sector’s power in Congress and in state legislatures,” Bedoya wrote. “If the United States tries to pass broad rules for personal data, that effort may well be co-opted by Silicon Valley, and we’ll miss our best shot at meaningful privacy protections.”

If confirmed, Bedoya would join big tech critic Lina Khan, a recent Biden FTC nominee who now chairs the agency. Khan’s focus on antitrust and Amazon in particular would dovetail with Bedoya’s focus on adjacent privacy concerns, making the pair a formidable regulatory presence as the Biden administration seeks to rein in some of the tech industry’s most damaging excesses.

#biden, #biden-administration, #big-tech, #biometrics, #congress, #consumer-privacy, #facial-recognition, #federal-trade-commission, #government, #lina-khan, #privacy, #surveillance, #tc, #united-states

The past, present and future of IoT in physical security

When Axis Communications released the first internet protocol (IP) camera after the 1996 Olympic games in Atlanta, there was some initial confusion. Connected cameras weren’t something the market had been clamoring for, and many experts questioned whether they were even necessary.

Today, of course, traditional analog cameras have been almost completely phased out as organizations have recognized the tremendous advantage that IoT devices can offer, but the technology felt like a considerable risk in those early days.

To say that things have changed since then would be a dramatic understatement. The growth of the Internet of Things (IoT) represents one of the ways physical security has evolved. Connected devices have become the norm, opening up exciting new possibilities that go far beyond recorded video. Further developments, such as the improvement and widespread acceptance of the IP camera, have helped power additional breakthroughs including improved analytics, increased processing power, and the growth of open-architecture technology. On the 25th anniversary of the initial launch of the IP camera, it is worth reflecting on how far the industry has come — and where it is likely to go from here.

Tech improvements herald the rise of IP cameras

Comparing today’s IP cameras to those available in 1996 is almost laughable. While they were certainly groundbreaking at the time, those early cameras could record just one frame every 17 seconds — quite a change from what cameras can do today.

But despite this drawback, those on the cutting edge of physical security understood what a monumental breakthrough the IP camera could represent. After all, creating a network of cameras would enable more effective remote monitoring, which — if the technology could scale — would enable them to deploy much larger systems, tying together disparate groups of cameras. Early applications might include watching oil fields, airport landing strips or remote cell phone towers. Better still, the technology had the potential to usher in an entirely new world of analytics capabilities.

Of course, better chipsets were needed to make that endless potential a reality. Groundbreaking or not, the limited frame rate of the early cameras was never going to be effective enough to drive widespread adoption of traditional surveillance applications. Solving this problem required a significant investment of resources, but before long these improved chipsets brought IP cameras from one frame every 17 seconds to 30 frames per second. Poor frame rate could no longer be listed as a justification for shunning IP cameras in favor of their analog cousins, and developers could begin to explore the devices’ analytics potential.

Perhaps the most important technological leap was the introduction of embedded Linux, which made IP cameras more practical from a developer point of view. During the 1990s, most devices used proprietary operating systems, which made them difficult to develop for.

Even within the companies themselves, proprietary systems meant that developers had to be trained on a specific technology, costing companies both time and money. There were a few attempts at standardization within the industry, such as the Wind River operating system, but these ultimately failed. They were too small, with limited resources behind them — and besides, a better solution already existed: Linux.

Linux offered a wide range of benefits, not the least of which was the ability to collaborate with other developers in the open source community. This was a road that ran two ways. Because most IP cameras lacked the hard disk necessary to run Linux, a file system known as JFFS was developed that would allow a device to use a flash memory chip as its storage. That technology was contributed to the open source community, and while it is currently on its third iteration, it remains in widespread use today.

Compression technology represented a similar challenge, with the more prominent data compression models in the late ’90s and early 2000s poorly suited for video. At the time, video storage involved individual frames being stored one-by-one — a data storage nightmare. Fortunately, the H.264 compression format, which was designed with video in mind, became much more commonplace in 2009.

By the end of that year, more than 90% of IP cameras and most video management systems used the H.264 compression format. It is important to note that improvements in compression capabilities have also enabled manufacturers to improve their video resolution as well. Before the new compression format, video resolution had not changed since the ’60s with NTSC/PAL. Today, most cameras are capable of recording in high definition (HD).
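
To see why storing individual frames was a data storage nightmare and why a video-oriented codec mattered so much, consider a rough back-of-the-envelope comparison for a single 1080p camera recording around the clock. The per-frame size and bitrate below are illustrative assumptions rather than figures from the column.

```python
# Rough storage comparison for one 1080p camera recording 24 hours at 30 fps.
# The ~200 KB-per-frame and ~4 Mbit/s figures are assumed, illustrative values.

SECONDS_PER_DAY = 24 * 60 * 60
FPS = 30

# Storing every frame individually (Motion JPEG style), assuming ~200 KB per frame.
jpeg_frame_kb = 200
mjpeg_gb_per_day = SECONDS_PER_DAY * FPS * jpeg_frame_kb / 1024 / 1024

# Streaming H.264 at an assumed ~4 Mbit/s for a typical 1080p scene.
h264_mbps = 4
h264_gb_per_day = h264_mbps / 8 * SECONDS_PER_DAY / 1024

print(f"Frame-by-frame storage: ~{mjpeg_gb_per_day:.0f} GB per day")
print(f"H.264 stream:           ~{h264_gb_per_day:.0f} GB per day")
```

Under those assumptions, the H.264 stream needs roughly a tenth of the storage of frame-by-frame recording, which is the order-of-magnitude difference that made continuous recording and higher resolutions practical.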

1996: First IP camera is released.
2001: Edge-based analytics with video motion detection arrive.
2006: First downloadable, edge-based analytics become available.
2009: Full HD becomes the standard video resolution; H.264 compression goes mainstream.
2015: Smart compression revolutionizes video storage.

The growth of analytics

Analytics is not exactly a “new” technology — customers requested various analytics capabilities even in the early days of the IP camera — but it is one that has seen dramatic improvement. Although it might seem quaint by today’s high standards, video motion detection was one of the earliest analytics loaded onto IP cameras.

Customers needed a way to detect movement within certain parameters to avoid having a tree swaying in the wind, or a squirrel running by, trigger a false alarm. Further refinement of this type of detection and recognition technology has helped automate many aspects of physical security, triggering alerts when potentially suspicious activity is detected and ensuring that it is brought to human attention. By taking human fallibility out of the equation, analytics has turned video surveillance from a reactive tool to a proactive one.
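
As a rough illustration of that earliest analytic, the sketch below implements basic frame-differencing motion detection in Python with OpenCV. It assumes a camera or video file is available; the blur kernel, threshold, and minimum blob area are the kinds of tunable parameters the column alludes to for filtering out swaying branches and passing squirrels, not values from any real product.

```python
# Minimal frame-differencing motion detector (illustrative sketch, not a product).
import cv2

cap = cv2.VideoCapture(0)                    # 0 = default camera; a file path also works
ok, first = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(first, cv2.COLOR_BGR2GRAY), (21, 21), 0)

MIN_AREA = 500                               # ignore tiny changes (leaves, sensor noise)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    diff = cv2.absdiff(prev, gray)           # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("motion detected")             # in practice: raise an alert or event

    prev = gray

cap.release()
```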

Reliable motion detection remains one of the most widely used analytics, and while false alarms can never be entirely eliminated, modern improvements have made it a reliable way to detect potential intruders. Object detection is also growing in popularity and is increasingly capable of classifying cars, people, animals and other objects.

License plate recognition is popular in many countries (though less so in the United States), not just for identifying vehicles involved in criminal activity, but for uses as simple as parking recognition. Details like car model, shirt color or license plate number are easy for the human eye to miss — but thanks to modern analytics, that data is cataloged and stored for easy reference. The advent of technology like deep learning, which features better pattern recognition and object classification through improved labeling and categorization, will drive further advancements in this area of analytics.

The rise of analytics also helps highlight why the security industry has embraced open-architecture technology. Simply put, it is impossible for a single manufacturer to keep up with every application that its customers might need. By using open-architecture technology, they can empower those customers to seek out the solutions that are right for them, without the need to specifically tailor the device for certain use cases. Hospitals might look to add audio analytics to detect signs of patient distress; retail stores might focus on people counting or theft detection; law enforcement might focus on gunshot detection — with all of these applications housed within the same device model.

It is also important to note that the COVID-19 pandemic drove interesting new uses for both physical security devices and analytics — though some applications, such as using thermal cameras for fever measurement, proved difficult to implement with a high degree of accuracy. Within the healthcare industry, camera usage increased significantly — something that is unlikely to change. Hospitals have seen the benefit of cameras within patient rooms, with video and intercom technology enabling healthcare professionals to monitor and communicate with patients while maintaining a secure environment.

Even simple analytics like cross-line detection can generate an alert if a patient who is a fall risk attempts to leave a designated area, potentially reducing accidents and overall liability. The fact that analytics like this bear only a passing mention today highlights how far physical security has come since the early days of the IP camera.
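
Cross-line detection of the sort mentioned above reduces, once an object is already being tracked, to a simple geometric test: did the tracked point move from one side of a virtual line to the other between frames? The sketch below shows that test; the line endpoints and track coordinates are made-up values, and a production system would also confine the check to the line segment rather than the infinite line.

```python
# Toy cross-line check: has a tracked point moved across a virtual line?

def side(p, a, b):
    # Sign of the cross product tells which side of the (infinite) line a->b point p is on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(prev_pos, cur_pos, a, b):
    # Opposite signs on consecutive frames mean the track crossed the line.
    return side(prev_pos, a, b) * side(cur_pos, a, b) < 0

# Hypothetical virtual line across a doorway, plus two consecutive positions
# of a tracked person (pixel coordinates).
line_a, line_b = (100, 400), (500, 400)
print(crossed((300, 420), (300, 380), line_a, line_b))   # True: the line was crossed
```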

Looking to the future of security

That said, an examination of today’s trends can provide a glimpse into what the future might hold for the security industry. For instance, video resolution will certainly continue to improve.

Ten years ago, the standard resolution for video surveillance was 720p (1 megapixel), and 10 years before that it was the analog NTSC/PAL resolution of 572×488, or 0.3 megapixels. Today, the standard resolution is 1080p (2 megapixels), and a healthy application of Moore’s law indicates that 10 years from now it will be 4K (8 megapixels).
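
Those resolution figures are easy to sanity-check; the pixel dimensions below are the commonly cited ones for each format, mapped onto the column's timeline, and only the arithmetic is being illustrated.

```python
# Pixel counts behind the resolution milestones mentioned in the column.
resolutions = {
    "Analog NTSC/PAL (~20 years ago)": (572, 488),
    "720p (~10 years ago)":            (1280, 720),
    "1080p (today)":                   (1920, 1080),
    "4K (projected, ~10 years out)":   (3840, 2160),
}

for name, (w, h) in resolutions.items():
    print(f"{name:33s} {w}x{h} = {w * h / 1e6:.1f} MP")
```

Each step works out to roughly a two- to four-fold jump in pixel count per decade, which is the "healthy application of Moore's law" the column leans on.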

As ever, the amount of storage that higher-resolution video generates is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video possible.

Cybersecurity will also be a growing concern for both manufacturers and end users.

Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect.

Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.

While new regulations are coming, it’s important to remember that regulation always lags behind, and companies that wish to have a positive reputation will need to adhere to their own ethical guidelines. More and more consumers now list ethical considerations among their major concerns—especially in the wake of the COVID-19 pandemic—and today’s businesses will need to strongly consider how to broadcast and enforce responsible product use.

Change is always around the corner

Physical security has come a long way since the IP camera was introduced, but it is important to remember that these changes, while significant, took place over more than two decades. Changes take time — often more time than you might think. Still, it is impossible to compare where the industry stands today to where it stood 25 years ago without being impressed. The technology has evolved, end users’ needs have shifted, and even the major players in the industry have come and gone according to their ability to keep up with the times.

Change is inevitable, but careful observation of today’s trends and how they fit into evolving security needs can help developers and device manufacturers understand how to position themselves for the future. The pandemic highlighted the fact that today’s security devices can provide added value in ways that no one would have predicted just a few short years ago, further underscoring the importance of open communication, reliable customer support and ethical behavior.

As we move into the future, organizations that continue to prioritize these core values will be among the most successful.

#column, #facial-recognition, #hardware, #internet-protocol, #ip-camera, #linux, #opinion, #physical-security, #security, #surveillance, #tc

Uber asked contractor to allow video surveillance in employee homes, bedrooms

For years, employers have used surveillance to keep tabs on their employees on the job. Cameras have watched as workers moved cash in and out of registers, GPS has reported on the movements of employees driving company vehicles, and software has been monitoring people’s work email.

Now, with more work being done remotely, many of those same surveillance tools are entering people’s homes. A marketing company in Minnesota forced employees to install software that would record videos of employees’ screens and even cut their hours if they took a bathroom break that was too long. A New York e-commerce company told employees that they would have to install monitoring software on their personal computers that would log keystrokes and mouse movements—and they’d have to install an app on their phones that would track their movements throughout the workday.

The situation isn’t limited to the US, either. One multinational company appears to be testing the boundaries of what’s an acceptable level of surveillance for remote workers. Teleperformance, one of the world’s largest call center companies, is reportedly requiring some employees to consent to video monitoring in their homes. Employees in Colombia told NBC News that their new contract granted the company the right to use AI-powered cameras to observe and record their workspaces. The contract also requires employees to share biometric data like fingerprints and photos of themselves, and workers have to agree to share data and images that may include children under 18.

#civil-rights, #employees, #policy, #privacy, #surveillance, #uber

Despite controversies and bans, facial recognition startups are flush with VC cash

If efforts by states and cities to pass privacy regulations curbing the use of facial recognition are anything to go by, you might fear the worst for the companies building the technology. But a recent influx of investor cash suggests the facial recognition startup sector is thriving, not suffering.

Facial recognition is one of the most controversial and complex policy areas in play. The technology can be used to track where you go and what you do. It’s used by public authorities and in private businesses like stores. But facial recognition has been shown to be flawed and inaccurate, often misidentifies non-white faces, and disproportionately affects communities of color. Its flawed algorithms have already been used to send innocent people to jail, and privacy advocates have raised countless concerns about how this kind of biometric data is stored and used.

With the threat of federal legislation looming, some of the biggest facial recognition companies like Amazon, IBM, and Microsoft announced they would stop selling their facial recognition technology to police departments to try to appease angry investors, customers, and even their own employees who protested the deployment of such technologies by the U.S. government and immigration authorities.

The pushback against facial recognition didn’t stop there. Since the start of the year, Maine, Massachusetts, and the city of Minneapolis have all passed legislation curbing or banning the use of facial recognition in some form, following in the steps of many other cities and states before them and setting the stage for others, like New York, which are eyeing legislation of their own.

In those same six or so months, investors have funneled hundreds of millions into several facial recognition startups. A breakdown of Crunchbase data by FindBiometrics shows a sharp rise in venture funding for facial recognition companies, with well over $500 million invested so far in 2021, compared to $622 million for all of 2020.

About half of that $500 million comes from one startup alone. Israel-based startup AnyVision raised $235 million in a Series C round earlier this month from SoftBank’s Vision Fund 2 for its facial recognition technology that’s used in schools, stadiums, casinos, and retail stores. Macy’s is a known customer and uses the face-scanning technology to identify shoplifters. It’s a steep funding round compared to a year earlier, when Microsoft publicly pulled its investment in AnyVision’s Series A following an investigation by former U.S. attorney general Eric Holder into reports that the startup’s technology was being used by the Israeli government to surveil residents in the West Bank.

Paravision, the company marred by controversy after it was accused of using facial recognition on its users without informing them, raised $23 million in a funding round led by J2 Ventures.

And last week, Clearview AI, the controversial facial recognition startup that is the subject of several government investigations and multiple class-action suits for allegedly scraping billions of profile photos from social media sites, confirmed to The New York Times it raised $30 million from investors who asked “not to be identified,” only that they are “institutional investors and private family offices.” That is to say, while investors are happy to see their money go towards building facial recognition systems, they too are all too aware of the risks and controversies associated with attaching their names to the technology.

Although the applications and customers of facial recognition wildly vary, there’s still a big market for the technology.

Many of the cities and towns with facial recognition bans also have carve-outs that allow its use in some circumstances, or broad exemptions for private businesses that can freely buy and use the technology. The exclusion of many China-based facial recognition companies, like Hikvision and Dahua, which the government has linked to human rights abuses against the Uighur Muslim minority in Xinjiang, as well as dozens of other startups blacklisted by the U.S. government, has helped push out some of the greatest competition from the most lucrative U.S. markets, like government customers.

But as facial recognition continues to draw scrutiny, investors are urging companies to do more to make sure their technologies are not being misused.

In June, a group of 50 investors with more than $4.5 trillion in assets called on dozens of facial recognition companies, including Amazon, Facebook, Alibaba and Huawei, to build their technologies ethically.

“In some instances, new technologies such as facial recognition technology may also undermine our fundamental rights. Yet this technology is being designed and used in a largely unconstrained way, presenting risks to basic human rights,” the statement read.

It’s not just ethics, but also a matter of trying to future-proof the industry from inevitable further political headwinds. In April, the European Union’s top data protection watchdog called for an end to facial recognition in public spaces across the bloc.

“As mass surveillance expands, technological innovation is outpacing human rights protection. There are growing reports of bans, fines, and blacklistings of the use of facial recognition technology. There is a pressing need to consider these questions,” the statement added.

#anyvision, #clearview-ai, #dahua, #face-id, #facial-recognition, #funding, #hikvision, #huawei, #microsoft, #paravision, #privacy, #retail-stores, #security, #surveillance, #video-surveillance

UK’s Mindtech raises $3.25M from In-Q-Tel, among others, to train CCTV cameras on synthetic humans

Imagine a world where no one’s privacy is breached, no faces are scanned into a gargantuan database, and no privacy laws are broken. This is a world that is fast approaching. Could companies simply dump the need for real-world CCTV footage, and switch to synthetic humans, acting out potential scenarios a million times over? That’s the tantalizing prospect of a new UK startup that has attracted funding from an influential set of investors.

UK-based Mindtech Global has developed what it describes as an end-to-end synthetic data creation platform. In plain English, its system can imagine visual scenarios such as someone’s behavior inside a store, or crossing the street. This data is then used to train AI-based computer vision systems for customers such as big retailers, warehouse operators, healthcare, transportation systems and robotics. It literally trains a ‘synthetic’ CCTV camera inside a synthetic world.

It’s now closed a $3.25 million early-stage funding round led by UK regional backer NPIF – Mercia Equity Finance, with Deeptech Labs and In-Q-Tel.

That last investor is significant. In-Q-Tel invests in startups that support US intelligence capabilities and is based in Arlington, Virginia…

Mindtech’s Chameleon platform is designed to help computers understand and predict human interactions. As we all know, current approaches to training AI vision systems require companies to source data such as CCTV footage. The process is fraught with privacy issues, costly, and time-consuming. Mindtech says Chameleon solves that problem, as its customers quickly “build unlimited scenes and scenarios using photo-realistic smart 3D models”.

An added bonus is that these synthetic humans can be used to train AI vision systems to weed out human failings around diversity and bias.

Mindtech CEO Steve Harris

Steve Harris, CEO of Mindtech, said: “Machine learning teams can spend up to 80% of their time sourcing, cleaning, and organizing training data. Our Chameleon platform solves the AI training challenge, freeing the industry to focus on higher-value tasks like AI network innovation. This round will enable us to accelerate our growth, enabling a new generation of AI solutions that better understand the way humans interact with each other and the world around them.”

So what can you do with it? Consider the following: A kid slips from its parent’s hand at the mall. The synthetic CCTV running inside Mindtech’s scenario is trained thousands of times over how to spot it in real time and alert staff. Another: a delivery robot meets kids playing in a street and works out how to avoid them. Finally: a passenger on the platform is behaving erratically too close to the rails – the CCTV is trained to automatically spot them and send help.

Nat Puffer, Managing Director (London), In-Q-Tel commented: “Mindtech impressed us with the maturity of their Chameleon platform and their commercial traction with global customers. We’re excited by the many applications this platform has across diverse markets and its ability to remove a significant roadblock in the development of smarter, more intuitive AI systems.”

Miles Kirby, CEO, Deeptech Labs said: “As a catalyst for deeptech success, our investment, and accelerator program supports ambitious teams with novel solutions and the appetite to build world-changing companies. Mindtech’s highly-experienced team are on a mission to disrupt the way AI systems are trained, and we’re delighted to support their journey.”

There is, of course, potential for darker applications, such as spotting petty theft inside supermarkets, or perhaps ‘optimising’ hard-pressed warehouse workers in some dystopian fashion. However, in theory, Mindtech’s customers can use this platform to rid themselves of the biases of middle-managers, and better serve customers.

#arlington, #articles, #artificial-general-intelligence, #artificial-intelligence, #ceo, #chameleon, #cybernetics, #europe, #healthcare, #london, #machine-learning, #surveillance, #tc, #technology, #united-kingdom, #virginia

Apple under pressure over iPhone security after NSO spyware claims

Mobile devices can make people vulnerable to online piracy through privacy settings, Bydgoszcz, Poland, on August 7, 2016. (credit: NurPhoto | Getty Images)

Apple has come under pressure to collaborate with its Silicon Valley rivals to fend off the common threat of surveillance technology after a report alleged that NSO Group’s Pegasus spyware was used to target journalists and human rights activists.

Amnesty International, which analysed dozens of smartphones targeted by clients of NSO, said Apple’s marketing claims about its devices’ superior security and privacy had been “ripped apart” by the discovery of vulnerabilities in even the most recent versions of its iPhones and iOS software.

“Thousands of iPhones have potentially been compromised,” said Danna Ingleton, deputy director of Amnesty’s tech unit. “This is a global concern—anyone and everyone is at risk, and even technology giants like Apple are ill-equipped to deal with the massive scale of surveillance at hand.”

#0-day, #apple, #biz-it, #ios, #iphone, #nso, #pegasus, #surveillance, #tech

New York City’s new biometrics privacy law takes effect

A new biometrics privacy ordinance has taken effect across New York City, putting new limits on what businesses can do with the biometric data they collect on their customers.

From Friday, businesses that collect biometric information — most commonly in the form of facial recognition and fingerprints — are required to conspicuously post notices and signs to customers at their doors explaining how their data will be collected. The ordinance applies to a wide range of businesses — retailers, stores, restaurants, and theaters, to name a few — which are also barred from selling, sharing, or otherwise profiting from the biometric information that they collect.

The move will give New Yorkers — and its millions of visitors each year — greater protections over how their biometric data is collected and used, while also serving to dissuade businesses from using technology that critics say is discriminatory and often doesn’t work.

Businesses can face stiff penalties for violating the law, but can escape fines if they fix the violation quickly.

The law is by no means perfect, as none of these laws ever are. For one, it doesn’t apply to government agencies, including the police. Of the businesses that the ordinance does cover, it exempts employees of those businesses, such as those required to clock in and out of work with a fingerprint. And the definition of what counts as a biometric will likely face challenges that could expand or narrow what is covered.

New York is the latest U.S. city to enact a biometric privacy law, after Portland, Oregon passed a similar ordinance last year. But the law falls short of the stronger biometric privacy laws in effect elsewhere.

Illinois has the Biometric Information Privacy Act, a law that grants residents the right to sue over any use of their biometric data without consent. Facebook this year settled for $650 million in a class-action suit that Illinois residents filed in 2015 after the social networking giant used facial recognition to tag users in photos without their permission.

Albert Fox Cahn, the executive director of the New York-based Surveillance Technology Oversight Project, said the law is an “important step” to learn how New Yorkers are tracked by local businesses.

“A false facial recognition match could mean having the NYPD called on you just for walking into a Rite Aid or Target,” he told TechCrunch. He also said that New York should go further by outlawing systems like facial recognition altogether, as some cities have done.

#articles, #biometrics, #face-id, #facebook, #facial-recognition, #facial-recognition-software, #illinois, #learning, #new-york, #new-york-city, #new-yorkers, #oregon, #portland, #privacy, #rite-aid, #security, #surveillance, #techniques

Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

Such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use, is the argument.

The EDPB is responsible for ensuring a harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he’s gone further, with the EDPB joining in with the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral micromarketing of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoCs risks creating a legal discrimination risk — based on how individual mobile users are grouped together for ad targeting purposes. Certainly, concerns have been raised over the potential for FLoCs to scale bias and predatory advertising. And it’s also interesting that Google avoided running early tests in Europe, likely owning to the EU’s data protection regime.)

In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” —  except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However, by failing to include a prohibition on biometric surveillance in public in the proposed regulation, the Commission is arguably risking just such a system being developed on the sly — i.e. by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing [emphasis ours]:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continues as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.

#andrea-jelinek, #artificial-intelligence, #biometrics, #data-protection, #data-protection-law, #edpb, #edps, #europe, #european-data-protection-board, #european-union, #facial-recognition, #general-data-protection-regulation, #law-enforcement, #privacy, #surveillance, #united-kingdom, #wojciech-wiewiorowski

Ring gave cops free cameras to build and promote surveillance network

When Ring wanted to boost sales of its surveillance cameras and burnish its self-styled image as a crime-fighting company, it embarked on a brand-ambassador marketing campaign that would be familiar to many startups. But rather than chase down Instagram influencers or beat bloggers, the company instead wooed officers at the Los Angeles Police Department.

For years, including during Amazon’s early ownership of the company, Ring gave no fewer than 100 LAPD officers free devices or discount codes worth tens of thousands of dollars, and possibly more, according to a new report from the Los Angeles Times.

Emails obtained by the LA Times through a public records request reveal Ring employees encouraging LAPD officers to “spread the word about how this doorbell is proven to reduce crime in neighborhoods” and offering freebies and discounts.

#civil-liberties, #influencers, #police, #policy, #ring, #surveillance

UK’s ICO warns over ‘big data’ surveillance threat of live facial recognition in public

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance in public — to set out what is dubbed the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping in — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of the tech.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka, its implementation of the EU GDPR which was transposed into national law before Brexit), per the ICO opinion, including data protection principles set out in UK GDPR Article 5, including lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as their age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’: if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on), it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK’s data adequacy agreement from the EU is dependent on the UK maintaining essentially equivalent protections for people’s data. Without this coveted data adequacy status, UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.

 

#artificial-intelligence, #biometrics, #clearview-ai, #data-protection, #data-protection-law, #elizabeth-denham, #europe, #european-commission, #european-union, #facial-recognition, #general-data-protection-regulation, #information-commissioners-office, #law-enforcement, #privacy, #privacy-international, #safe-harbor, #surveillance, #tc, #uk-government, #united-kingdom

US lawmakers want to restrict police use of ‘Stingray’ cell tower simulators

According to BuzzFeed News, Democratic Senator Ron Wyden and Representative Ted Lieu will introduce legislation later today that seeks to restrict police use of international mobile subscriber identity (IMSI) catchers. More commonly known as Stingrays, IMSI catchers and cell-site simulators are frequently used by police to collect information on suspects and to intercept calls, SMS messages and other forms of communication. Law enforcement agencies in the US currently do not require a warrant to use the technology. The Cell-Site Simulator Act of 2021 seeks to change that.

IMSI catchers mimic cell towers to trick mobile phones into connecting with them. Once connected, they can collect data a device sends out, including its location and subscriber identity key. Cell-site simulators pose a two-fold problem.

The first is that they’re blunt instruments of surveillance. When used in a populated area, IMSI catchers can collect data from bystanders. The second is that they can also pose a safety risk to the public: while IMSI catchers act like a cell tower, they don’t function as one, and they can’t transfer calls to a public wireless network. They can therefore prevent a phone from connecting to 9-1-1. Despite the dangers they pose, their use is widespread. In 2018, the American Civil Liberties Union found at least 75 agencies in 27 states and the District of Columbia owned IMSI catchers.

In trying to address those concerns, the proposed legislation would require law enforcement agencies to make a case before a judge for why they should be allowed to use the technology. They would also need to explain why other surveillance methods wouldn’t be as effective. Moreover, it seeks to ensure those agencies delete any data they collect from people not listed on a warrant.

Although the bill reportedly doesn’t lay out a time limit on IMSI catcher use, it does push agencies to use the devices for the least amount of time possible. It also details exceptions where police could use the technology without a warrant. For instance, it would leave the door open for law enforcement to use the devices in contexts like bomb threats where an IMSI catcher can prevent a remote detonation.

“Our bipartisan bill ends the secrecy and uncertainty around Stingrays and other cell-site simulators and replaces it with clear, transparent rules for when the government can use these invasive surveillance devices,” Senator Ron Wyden told BuzzFeed News.

The bill has support from some Republicans. Senator Steve Daines of Montana and Representative Tom McClintock of California are co-sponsoring the proposed legislation. Organizations like the Electronic Frontier Foundation and the Electronic Privacy Information Center have also endorsed the bill.

This article was originally published on Engadget.

 

#american-civil-liberties-union, #california, #catcher, #column, #electronic-frontier-foundation, #imsi-catcher, #judge, #law-enforcement, #mobile-phone, #mobile-phones, #mobile-security, #montana, #ron-wyden, #sim-card, #sms, #surveillance, #technology, #ted-lieu, #telecommunications, #united-states

Mass surveillance must have meaningful safeguards, says ECHR

The highest chamber of the European Court of Human Rights (ECHR) has delivered a blow to anti-surveillance campaigners in Europe by failing to find that bulk interception of digital comms is inherently incompatible with human rights law — which enshrines individual rights to privacy and freedom of expression.

However, today’s Grand Chamber judgement underscores the need for such intrusive intelligence powers to be operated with what the judges describe as “end-to-end safeguards”.

Governments in Europe that fail to do so are opening such laws up to further legal challenge under the European Convention on Human Rights.

The Grand Chamber ruling also confirms that the UK’s historic surveillance regime — under the Regulation of Investigatory Powers Act 2000 (aka RIPA) — was unlawful because it lacked the necessary safeguards.

Per the court, ‘end-to-end’ safeguards mean that bulk intercept powers need to involve assessments at each stage of the process of the necessity and proportionality of the measures being taken; that bulk interception should be subject to independent authorisation at the outset, when the object and scope of the operation are being defined; and that the operation should be subject to supervision and independent ‘ex post facto’ review.

The Grand Chamber judgement identified a number of deficiencies with the bulk regime operated in the UK at the time of RIPA — including that bulk interception had been authorised by the Secretary of State, rather than by a body independent of the executive; categories of search terms defining the kinds of communications that would become liable for examination had not been included in the application for a warrant; and search terms linked to an individual (e.g. specific identifiers such as an email address) had not been subject to prior internal authorisation.

The court also found that the UK’s bulk intercept regime had breached Article 10 (freedom of expression) because it had not contained sufficient protections for confidential journalistic material.

The regime used for obtaining comms data from communication service providers, meanwhile, was found to have violated Articles 8 (the right to respect for private and family life and correspondence) and 10 “as it had not been in accordance with the law”.

However, the court held that the regime by which the UK could request intelligence from foreign governments and/or intelligence agencies had had sufficient safeguards in place to protect against abuse and to ensure that UK authorities had not used such requests as a means of circumventing their duties under domestic law and the Convention.

“The Court considered that, owing to the multitude of threats States face in modern society, operating a bulk interception regime did not in and of itself violate the Convention,” it added in a press release.

The RIPA regime has since been replaced by the UK’s Investigatory Powers Act (IPA) — which put bulk intercept powers explicitly into law (albeit with claimed layers of oversight).

The IPA has also been subject to a number of human rights challenges — and in 2018 the government was ordered by the UK High Court to revise parts of the law which had been found to be incompatible with human rights law.

Today’s Grand Chamber judgement relates specifically to RIPA and to a number of legal challenges, heard simultaneously by the ECHR, that were brought against the UK’s mass surveillance regime by journalists and privacy and digital rights campaigners in the wake of the 2013 mass surveillance revelations by NSA whistleblower Edward Snowden.

In a similar ruling back in 2018, the lower Chamber found some aspects of the UK’s regime violated human rights law — with a majority vote then finding that its bulk interception regime had violated Article 8 because there was insufficient oversight (such as of selectors and filtering, and of the search and selection of intercepted communications for examination), as well as inadequate safeguards governing the selection of related comms data.

Human rights campaigners followed up by requesting and securing a referral to the Grand Chamber — which has now handed down its view.

It unanimously found there had been a violation of Article 8 in respect of the regime for obtaining communications data from communication service providers.

But by 12 votes to 5 it ruled there had been no violation of Article 8 in respect of the UK’s regime for requesting intercepted material from foreign governments and intelligence agencies.

In another unanimous vote the Grand Chamber found there had been a violation of Article 10, concerning both the bulk interception regime and the regime for obtaining communications data from comms service providers.

But, again, by 12 votes to 5 it ruled there had been no violation of Article 10 in respect of the regime for requesting intercepted material from foreign governments and intelligence agencies.

Responding to the judgement in a statement, the privacy rights group Big Brother Watch — which was one of the parties involved in the challenges — said the judgement “confirms definitively that the UK’s bulk interception practices were unlawful for decades”, thereby vindicating Snowden’s whistleblowing.

The organization also highlighted a dissenting opinion from Judge Pinto de Albuquerque, who wrote that:

“Admitting non-targeted bulk interception involves a fundamental change in how we view crime prevention and investigation and intelligence gathering in Europe, from targeting a suspect who can be identified to treating everyone as a potential suspect, whose data must be stored, analysed and profiled (…) a society built upon such foundations is more akin to a police state than to a democratic society. This would be the opposite of what the founding fathers wanted for Europe when they signed the Convention in 1950.”

In further remarks on the judgement, Silkie Carlo, director of Big Brother Watch, added: “Mass surveillance damages democracies under the cloak of defending them, and we welcome the Court’s acknowledgement of this. As one judge put it, we are at great risk of living in an electronic ‘Big Brother’ in Europe. We welcome the judgment that the UK’s surveillance regime was unlawful, but the missed opportunity for the Court to prescribe clearer limitations and safeguards mean that risk is current and real.”

“We will continue our work to protect privacy, from parliament to the courts, until intrusive mass surveillance practices are ended,” she added.

Privacy International — another party to the case — sought to put a positive spin on the outcome, saying the Grand Chamber goes further than the ECHR’s 2018 ruling by “providing for new and stronger safeguards, adding a new requirement of prior independent or judicial authorisation for bulk interception”.

“Authorisation must be meaningful, rigorous and check for proper ‘end-to-end safeguards’,” it added in a statement.

Also commenting publicly, the Open Rights Group’s executive director, Jim Killock, said: “The court has shown that the UK Government’s legal framework was weak and inadequate when we took them to court with Big Brother Watch and Constanze Kurz in 2013. The court has set out clear criteria for assessing future bulk interception regimes, but we believe these will need to be developed into harder red lines in future judgments, if bulk interception is not to be abused.”

“As the court sets out, bulk interception powers are a great power, secretive in nature, and hard to keep in check. We are far from confident that today’s bulk interception is sufficiently safeguarded, while the technical capacities continue to deepen. GCHQ continues to share technology platforms and raw data with the US,” Killock went on to say, couching the judgment as “an important step on a long journey”.

 

#big-brother-watch, #counter-terrorism, #echr, #edward-snowden, #europe, #european-court-of-human-rights, #gchq, #investigatory-powers-act, #mass-surveillance, #national-security, #national-security-agency, #open-rights-group, #policy, #privacy, #security, #surveillance, #tc, #uk-government, #united-kingdom, #united-states

This crypto surveillance startup — ‘We’re bomb sniffing dogs’ — just raised Series A funding

Solidus Labs, a company that says its surveillance and risk-monitoring software can detect manipulation across cryptocurrency trading platforms, is today announcing $20 million in Series A funding led by Evolution Equity Partners, with participation from Hanaco Ventures, Avon Ventures, 645 Ventures, the cryptocurrency derivatives exchange FTX, and a sprinkling of former government officials, including former CFTC commissioner Chris Giancarlo and former SEC commissioner Troy Paredes.

It’s pretty great timing, given the various signals coming from the U.S. government just last week that it’s intent on improving its crypto monitoring efforts — such as the U.S. Treasury’s call for stricter cryptocurrency compliance with the IRS.

Of course, Solidus didn’t spring into existence last week. Rather, Solidus was founded in 2017 by several former Goldman Sachs employees who worked on the firm’s electronic trading desk for equities. At the time, Bitcoin was only just becoming buzzier, but while the engineers anticipated different use cases for the cryptocurrency, they also recognized that a lack of compliance tools would be a barrier to its adoption by bigger financial institutions, so they left to build them.

Fast forward to today, and Solidus employs 30 people, has raised $23.75 million altogether, and is in the process of doubling its head count to address growing demand. We talked with Solidus’s New York-based cofounder and CEO Asaf Meir — who was himself one of those former Goldman engineers — about the company late last week. Excerpts from that chat follow, edited lightly for length.

TC: Who are your customers?

AM: We work with exchanges, broker dealers, OTC desks, liquidity providers, and regulators — anyone who is exposed to the risk of buying and selling cryptocurrencies, crypto assets or digital assets, whatever you want to call them.

TC: What are you promising to uncover for them?

AM: What we detect, largely speaking, is volume and price manipulation, and that has to do with wash trading, spoofing, layering, pump and dumps, and an additional growing library of crypto native alerts that truly only exist in our unique market.

We had a 400% increase in inbound demand over 2020 driven largely by two factors, I think. One is regulatory scrutiny. Globally, regulators have gone off to market participants, letting them know that they have to ask for permission not forgiveness. The second reason — which I like better — is the drastic institutional increase in appetite toward exposure for this asset class. Every institution, the first question they ask any executing platform is: ‘What are your risk mitigation tools? How do you make sure there is market integrity?’

TC: We talked a couple of months ago, and you mentioned having a growing pipeline of customers, like the trading platform Bittrex in Seattle. Is demand coming primarily from the U.S.?

AM: We have demand in Asia and in Europe as well, so we will be opening offices there, too.

TC: Is your former employer Goldman a customer?

AM: I can’t comment on that, but I would say there isn’t a bank right now that isn’t thinking about how they’re going to get exposure to crypto assets, and in order to do that in a safe, compliant and robust way, they have to employ crypto-specific solutions.

Right now, there’s the new frontier — the clients we’re currently working with, which are these crypto-pure exchanges, broker dealers, liquidity providers, and even traditional financial institutions that are coming into crypto and opening a crypto operation or a crypto desk. Then there’s the new new frontier: your NFTs, stablecoins, indexes, lending platforms, decentralized protocols and God knows what [else] all of a sudden reaching out to us, telling us they want to do the right thing, to ensure the users on their platform are well-protected, and that trading activities are audited, and [to enlist us] to prevent any manipulation.

TC: How does your subscription service work and who is building the tech?

AM: We consume private data from our clients — all their trading data — and we then put it into our detection models, which we ultimately surface through insights and alerts on our dashboard, which they have access to.

As for who is building it, we have a lot of fintech engineers who are coming from Goldman and Morgan Stanley and Citi and bringing that traditional knowledge of large trading systems at scale; we also have incredible data scientists out of Israel whose expertise is in anomaly detection, which they are applying to financial crime, working with us.

TC: What do these crimes look like?

AM: When we started out, there was much more wholesale manipulation happening whether through wash trading or pump-and-dumps — things that are more easy to perform. What we’re seeing today are extremely sophisticated manipulation schemes where bad actors are able to exploit different executing platforms. We’re quite literally surfacing new alerts that if you were to use a legacy, rule-based system you wouldn’t be able to [surface] because you’re not really sure what you’re looking for. We oftentimes have an alert that we haven’t named yet; we just know that this type of behavior is considered manipulative in nature and that our client should be looking into it.

TC: Can you elaborate a bit more about these new anomalies?

AM: I’m conflicted about how much we can share of our clients’ private data. But one thing we’re seeing is [a surge in] account extraction attacks, which is when, through different ways, bad actors are able to gain access to an account’s funds and are able, in a sophisticated way, to trade out of the exchange or broker dealer or custodian. That’s happening in different social engineering-related ways, but we’re able, through account deviation and account profiling, to alert the exchange or broker dealer or financial institution we’re working with to avoid that.

We’re about detection and prevention, not about tracing [what went wrong and where] after the fact. And we can do that regardless of knowing even personal identifiable information about that account. It’s not about the name or the IP address; it’s all about the attributes of trading. In fact, if we have an exchange in Hong Kong that’s experiencing a pump-and-dump on a certain coin pair, we can preemptively warn the rest of our client base so they can take steps to prepare and protect themselves.
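
As an illustration of the kind of “account deviation” profiling Meir describes, here is a minimal sketch (Python) that flags accounts whose current trading attributes drift far from their own historical baseline. The feature names, z-score threshold and data shapes are assumptions made for the example; this is not Solidus’s actual model, just the general pattern of profiling trading behaviour without touching personal identifiers.

# Hypothetical sketch of account-deviation profiling over trading attributes.
# Feature names and the threshold are illustrative assumptions, not Solidus's model.
from statistics import mean, stdev

def deviation_alerts(history, current, threshold=4.0):
    """history: {account: list of past feature dicts}; current: {account: feature dict}.
    Features are trading attributes only; no names, IPs or other identifiers."""
    alerts = []
    for account, snapshot in current.items():
        past = history.get(account, [])
        if len(past) < 10:                      # not enough baseline to profile yet
            continue
        for feature, value in snapshot.items():
            series = [p[feature] for p in past if feature in p]
            if len(series) < 10 or stdev(series) == 0:
                continue
            z = abs(value - mean(series)) / stdev(series)
            if z > threshold:                   # far outside the account's own norm
                alerts.append((account, feature, round(z, 1)))
    return alerts

# Example: an account that normally trades small suddenly moves funds out at scale.
history = {"acct-42": [{"trade_size": 1.0 + 0.1 * i, "withdrawal_ratio": 0.05} for i in range(20)]}
current = {"acct-42": {"trade_size": 40.0, "withdrawal_ratio": 0.9}}
print(deviation_alerts(history, current))       # e.g. [('acct-42', 'trade_size', 64.3)]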

TC: On the prevention front, could you also stop that activity on the Hong Kong exchange? Are you empowered by your clients to step in if you detect something anomalous?

AM: We’re bomb sniffing dogs, so we’re not coming to disable the bot. We know how to take the data and point out manipulation, but it’s then up to the financial institution to handle the case.

Pictured above: Seated, left to right, are CTO Praveen Kumar and CEO Asaf Meir. Standing is COO Chen Arad.

#645-ventures, #analytics, #asaf-meir, #blockchain, #chainalysis, #crypto, #elementus, #evolution-equity-partners, #ftx, #hanaco-ventures, #recent-funding, #solidus-labs, #startups, #surveillance, #tc, #venture-capital

US towns are buying Chinese surveillance tech tied to Uighur abuses

At least a hundred U.S. counties, towns, and cities have bought China-made surveillance systems that the U.S. government has linked to human rights abuses, according to contract data seen by TechCrunch.

Some municipalities have spent tens of thousands of dollars or more to buy surveillance equipment made by two Chinese technology companies, Hikvision and Dahua, which were added to the U.S. government’s economic blacklist in 2019 after being linked to China’s ongoing efforts to suppress ethnic minorities in Xinjiang, where most Uighur Muslims live. Congress also banned U.S. federal agencies from buying new Hikvision and Dahua technology or renewing contracts over fears that it could help the Chinese government conduct espionage.

But those federal actions broadly do not apply at the state and city level, allowing local governments to buy these China-made surveillance systems — including video cameras and thermal imaging scanners — largely uninhibited, so long as federal funds are not used to buy the equipment.

Details of the contracts were provided by GovSpend, which tracks federal and state government spending, to TechCrunch via IPVM, a leading news publication on video surveillance, which has followed the Hikvision and Dahua bans closely.

The biggest spender, according to the data and as previously reported by IPVM, was the Board of Education in Fayette County, Georgia, which spent $490,000 in August 2020 on dozens of Hikvision thermal cameras used for temperature checks at its public schools.

A statement provided by Fayette County Public Schools spokesperson Melinda Berry-Dreisbach said the cameras were purchased from its longtime security vendor, an authorized dealer for Hikvision. The statement did not address whether the Board of Education was aware of Hikvision’s links to human rights abuses. Berry-Dreisbach did not respond to our follow-up questions.

IPVM research found many thermal scanners, including Hikvision and Dahua models, produced inaccurate readings, prompting the U.S. Food and Drug Administration to issue a public health alert warning that misreported readings could present “potentially serious public health risks.”

Nash County in North Carolina, which has a population of 95,000 residents, spent more than $45,000 between September and December 2020 to buy Dahua thermal cameras. County Manager Zee Lamb forwarded emails that confirmed the purchases and that the gear was deployed at the county’s public schools, but did not comment.

The data also shows that the Parish of Jefferson in Louisiana, which includes part of the city of New Orleans, spent $35,000 on Hikvision surveillance cameras and video storage between October 2019 and September 2020. A parish spokesperson did not comment.

Only one municipality we contacted addressed the links between the technology they bought and human rights abuses. Kern County in California spent more than $15,000 on Hikvision surveillance cameras and video recording equipment in June 2020 for its probation department offices. The contract data showed a local vendor, Tel Tec Security, supplied the Hikvision technology to the county.

Ryan Alsop, chief administrative officer for Kern County, said he was “not familiar at all with the issues you’re referencing with regard to Hikvision,” when asked about Hikvision’s links to human rights abuses.

“Again, we didn’t contract with Hikvision, we contracted with Tel Tec Security,” said Alsop.

Kern County spent more than $15,000 on Hikvision equipment at its county probation service offices. (Data: GovSpend/supplied)

A spokesperson for the City of Hollywood in Florida, which spent close to $30,000 on Hikvision thermal cameras, said the Chinese technology maker “was the only major manufacturer with a viable solution that was ready for delivery; would serve the defined project scope; and was within the project budget.” The cameras were used to take employees’ body temperatures to curb the spread of COVID-19. The spokesperson did not address the links to human rights abuses but noted that the federal ban did not apply to the city.

Maya Wang, a senior researcher at Human Rights Watch, said a lack of privacy regulations at the local level contributed to municipalities buying this technology.

“One of the problems is that these kinds of cameras, regardless of the country of origin and regardless of whether or not they’re even linked to human rights abuses, have been introduced to various parts of the country — especially at state and city levels — without any kind of regulation to ensure that they comply with privacy standards,” said Wang in a call. “There is, again, no kind of regulatory framework to vet the companies based on their track record, whether or not they have abused human rights in their practices, such that we can evaluate or choose better companies, and encourage the ones with better privacy protections to win, essentially.”

Chief among the U.S. government’s allegations is that Beijing has relied heavily on Hikvision, Dahua, and others to supply the surveillance technology it uses to monitor the Uighur population as part of its ongoing efforts to suppress the ethnic group, efforts Beijing has repeatedly denied.

United Nations watchdogs say Beijing has detained more than a million Uighurs in internment camps in recent years as part of these efforts, which led to the U.S. blacklisting of the two surveillance technology makers.

In adding the companies to the government’s economic blacklist, the Commerce Department said Hikvision and Dahua “have been implicated in human rights violations and abuses in the implementation of China’s campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs, and other members of Muslim minority groups.” The Biden administration called the human rights abuses a “genocide.”

IPVM has also reported extensively on how the companies’ surveillance technology has been used to suppress the Uighurs. Dahua was found to have race detection in its code for providing “real-time Uighur warnings” to police.

Earlier this year, the Thomson Reuters Foundation found half of London’s councils and the largest 20 U.K. cities were using the technology linked to Uighur abuses. The Guardian also found that Hikvision surveillance technology was used in U.K. schools.

When reached, Dahua pointed to a blog post with a statement, and claimed that “contrary to some reporting in the media, our company has never developed any technology or solution that seeks to target a specific ethnic group.” The statement added: “Claims to the contrary are simply false and we are aware of no evidence that has ever been put forward to support such claims.”

Hikvision did not respond to a request for comment.



#china, #dahua, #government, #hikvision, #human-rights, #privacy, #security, #surveillance, #u-s-government

If you don’t want robotic dogs patrolling the streets, consider CCOPS legislation

Boston Dynamics’ robot “dogs,” or similar versions thereof, are already being employed by police departments in Hawaii, Massachusetts and New York. Partly under the veil of experimentation, these police forces are giving few answers about the benefits and costs of using these powerful surveillance devices.

The American Civil Liberties Union, in a position paper on CCOPS (community control over police surveillance), proposes an act to promote transparency and protect civil rights and liberties with respect to surveillance technology. To date, 19 U.S. cities have passed CCOPS laws, which means, in practical terms, that virtually all other communities don’t have a requirement that police be transparent about their use of surveillance technologies.

For many, this ability to use new, unproven technologies in a broad range of ways presents a real danger. Stuart Watt, a world-renowned expert in artificial intelligence and the CTO of Turalt, is not amused.

Even seemingly fun and harmless “toys” have all the necessary functions and features to be weaponized.

“I am appalled both by the principle of the dogbots and by them in practice. It’s a big waste of money and a distraction from actual police work,” he said. “Definitely communities need to be engaged with. I am honestly not even sure what the police forces think the whole point is. Is it to discourage through a physical surveillance system, or is it to actually prepare people for some kind of enforcement down the line?

“Chunks of law enforcement have forgotten the whole ‘protect and serve’ thing, and do neither,” Watt added. “If they could use artificial intelligence to actually protect and actually serve vulnerable people, the homeless, folks addicted to drugs, sex workers, those in poverty and maligned minorities, it’d be tons better. If they have to spend the money on AI, spend it to help people.”

The ACLU is advocating exactly what Watt suggests. In proposed language to city councils across the nation, the ACLU makes it clear that:

The City Council shall only approve a request to fund, acquire, or use a surveillance technology if it determines the benefits of the surveillance technology outweigh its costs, that the proposal will safeguard civil liberties and civil rights, and that the uses and deployment of the surveillance technology will not be based upon discriminatory or viewpoint-based factors or have a disparate impact on any community or group.

From a legal perspective, Anthony Gualano, a lawyer and special counsel at Team Law, believes that CCOPS legislation makes sense on many levels.

“As police increase their use of surveillance technologies in communities around the nation, and the technologies they use become more powerful and effective to protect people, legislation requiring transparency becomes necessary to check what technologies are being used and how they are being used.”

For those not only worried about this Boston Dynamics dog, but all future incarnations of this supertech canine, the current legal climate is problematic because it essentially allows our communities to be testing grounds for Big Tech and Big Government to find new ways to engage.

Just last month, public pressure forced the New York Police Department to suspend use of a robotic dog, quite unassumingly named Digidog. The NYPD had deployed the tech hound at a public housing building in March; this went over about as well as you could expect, and the ensuing public pushback led to discussions as to the immediate fate of this technology in New York.

The New York Times phrased it perfectly, observing that “the NYPD will return the device earlier than planned after critics seized on it as a dystopian example of overly aggressive policing.”

While these bionic dogs are powerful enough to take a bite out of crime, the police forces seeking to use them have a lot of public relations work to do first. A great place to begin would be for the police to actively and positively participate in CCOPS discussions, explaining what the technology involves, and how it (and these robots) will be used tomorrow, next month and potentially years from now.

#american-civil-liberties-union, #artificial-intelligence, #boston-dynamics, #column, #law-enforcement, #mass-surveillance, #opinion, #robotics, #security, #surveillance, #surveillance-technologies, #united-states

EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high-profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

#ai-regulation, #artificial-intelligence, #biometrics, #edps, #europe, #european-union, #facial-recognition, #law-enforcement, #policy, #privacy, #surveillance, #wojciech-wiewiorowski

MEPs call for European AI rules to ban biometric surveillance in public

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore a heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However, the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal having an exemption on the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

#ai, #ai-regulation, #artificial-intelligence, #biometrics, #discrimination, #europe, #european-parliament, #european-union, #facial-recognition, #fundamental-rights, #law-enforcement, #mass-surveillance, #meps, #national-security, #policy, #privacy, #surveillance

US privacy, consumer, competition and civil rights groups urge ban on ‘surveillance advertising’

Ahead of another big tech vs Congress ‘grab your popcorn’ grilling session, scheduled for March 25 — when US lawmakers will once again question the CEOs of Facebook, Google and Twitter on the unlovely topic of misinformation — a coalition of organizations across the privacy, antitrust, consumer protection and civil rights spaces has called for a ban on “surveillance advertising”, further amplifying the argument that “big tech’s toxic business model is undermining democracy”.

The close to 40-strong coalition behind this latest call to ban ‘creepy ads’ which rely on the mass tracking and profiling of web users in order to target them with behavioral ads includes the American Economic Liberties Project, the Campaign for a Commercial Free Childhood, the Center for Digital Democracy, the Center for Humane Technology, Epic.org, Fair Vote, Media Matters for America, the Tech Transparency Project and The Real Facebook Oversight Board, to name a few.

“As leaders across a broad range of issues and industries, we are united in our concern for the safety of our communities and the health of democracy,” they write in the open letter. “Social media giants are eroding our consensus reality and threatening public safety in service of a toxic, extractive business model. That’s why we’re joining forces in an effort to ban surveillance advertising.”

The coalition is keen to point out that less toxic non-tracking alternatives (like contextual ads) exist, while arguing that greater transparency and oversight of adtech infrastructure could help clean up a range of linked problems, from junk content and rising conspiracism to ad fraud and denuded digital innovation.

“There is no silver bullet to remedy this crisis – and the members of this coalition will continue to pursue a range of different policy approaches, from comprehensive privacy legislation to reforming our antitrust laws and liability standards,” they write. “But here’s one thing we all agree on: It’s time to ban surveillance advertising.”

“Big Tech platforms amplify hate, illegal activities, and conspiracism — and feed users increasingly extreme content — because that’s what generates the most engagement and profit,” they warn.

“Their own algorithmic tools have boosted everything from white supremacist groups and Holocaust denialism to COVID-19 hoaxes, counterfeit opioids and fake cancer cures. Echo chambers, radicalization, and viral lies are features of these platforms, not bugs — central to the business model.”

The coalition also warns over surveillance advertising’s impact on the traditional news business, noting that shrinking revenues for professional journalism are raining more harm down upon the (genuine) information ecosystem democracies need to thrive.

The potshots are well rehearsed at this point although it’s an oversimplification to blame the demise of traditional news on tech giants so much as ‘giant tech’: aka the industrial disruption wrought by the Internet making so much information freely available. But dominance of the programmatic adtech pipeline by a couple of platform giants clearly doesn’t help. (Australia’s recent legislative answer to this problem is still too new to assess for impacts but there’s a risk its news media bargaining code will merely benefit big media and big tech while doing nothing about the harms of either industry profiting off of outrage.)

“Facebook and Google’s monopoly power and data harvesting practices have given them an unfair advantage, allowing them to dominate the digital advertising market, siphoning up revenue that once kept local newspapers afloat. So while Big Tech CEOs get richer, journalists get laid off,” the coalition warns, adding: “Big Tech will continue to stoke discrimination, division, and delusion — even if it fuels targeted violence or lays the groundwork for an insurrection — so long as it’s in their financial interest.”

Among a laundry list of harms the coalition is linking to the dominant ad-based online business models of tech giants Facebook and Google is the funding of what they describe as “insidious misinformation sites that promote medical hoaxes, conspiracy theories, extremist content, and foreign propaganda”.

“Banning surveillance advertising would restore transparency and accountability to digital ad placements, and substantially defund junk sites that serve as critical infrastructure in the disinformation pipeline,” they argue, adding: “These sites produce an endless drumbeat of made-to-go-viral conspiracy theories that are then boosted by bad-faith social media influencers and the platforms’ engagement-hungry algorithms — a toxic feedback loop fueled and financed by surveillance advertising.”

Other harms they point to are the risks posed to public health by platforms’ amplification of junk/bogus content such as COVID-19 conspiracy theories and vaccine misinformation; the risk of discrimination through unfairly selective and/or biased ad targeting, such as job ads that illegally exclude women or ethnic minorities; and the perverse economic incentives for ad platforms to amplify extremist/outrageous content in order to boost user engagement with content and ads, thereby fuelling societal division and driving partisanship as a byproduct of the fact platforms benefit financially from more content being spread.

The coalition also argues that the surveillance advertising system is “rigging the game against small businesses” because it embeds platform monopolies — which is a neat counterpoint to tech giants’ defensive claim that creepy ads somehow level the playing field for SMEs vs larger brands.

“While Facebook and Google portray themselves as lifelines for small businesses, the truth is they’re simply charging monopoly rents for access to the digital economy,” they write, arguing that the duopoly’s “surveillance-driven stranglehold over the ad market leaves the little guys with no leverage or choice” — opening them up to exploitation by big tech.

The current market structure — with Facebook and Google controlling close to 60% of the US ad market — is thus stifling innovation and competition, they further assert.

“Instead of being a boon for online publishers, surveillance advertising disproportionately benefits Big Tech platforms,” they go on, noting that Facebook made $84.2BN in 2020 ad revenue and Google made $134.8BN off advertising “while the surveillance ad industry ran rife with allegations of fraud”.

The campaign being kicked off is by no means the first call for a ban on behavioral advertising, but given how many signatories are backing this one it’s a sign of the scale of the momentum building against a data-harvesting business model that has shaped the modern era and allowed a couple of startups to metamorphose into society- and democracy-denting giants.

That looks important as US lawmakers are now paying close attention to big tech impacts — and have a number of big tech antitrust cases actively on the table. Although it was European privacy regulators that were among the first to sound the alarm over microtargeting’s abusive impacts and risks for democratic societies.

Back in 2018, in the wake of the Facebook data misuse and voter targeting scandal involving Cambridge Analytica, the UK’s ICO called for an ethical pause on the use of online ad tools for political campaigning — penning a report entitled Democracy Disrupted? Personal information and political influence.

It’s no small irony that the self-same regulator has so far declined to take any action against the adtech industry’s unlawful use of people’s data — despite warning in 2019 that behavioral advertising is out of control.

The ICO’s ongoing inaction seems likely to have fed into the UK government’s decision that a dedicated unit is required to oversee big tech.

In recent years the UK has singled out the online ad space for antitrust concern — saying it will establish a pro-competition regulator to tackle big tech’s dominance, following a market study of the digital advertising sector carried out in 2019 by its Competition and Markets Authority which reported substantial concerns over the power of the adtech duopoly.

Last month, meanwhile, the European Union’s lead data protection supervisor urged not a pause but a ban on targeted advertising based on tracking internet users’ digital activity — calling on regional lawmakers to incorporate the lever into a major reform of digital services rules which is intended to boost operators’ accountability, among other goals.

The European Commission’s proposal had avoided going so far. But negotiations over the Digital Services Act and Digital Markets Act are ongoing.

Last year the European Parliament also backed a tougher stance on creepy ads. Again, though, the Commission’s framework for tackling online political ads does not suggest anything so radical — with EU lawmakers pushing for greater transparency instead.

It remains to be seen what US lawmakers will do but with US civil society organizations joining forces to amplify an anti-ad-targeting message there’s rising pressure to clean up the toxic adtech in its own backyard.

Commenting in a statement on the coalition’s website, Zephyr Teachout, an associate professor of law at Fordham Law School, said: “Facebook and Google possess enormous monopoly power, combined with the surveillance regimes of authoritarian states and the addiction business model of cigarettes. Congress has broad authority to regulate their business models and should use it to ban them from engaging in surveillance advertising.”

“Surveillance advertising has robbed newspapers, magazines, and independent writers of their livelihoods and commoditized their work — and all we got in return were a couple of abusive monopolists,” added David Heinemeier Hansson, creator of Ruby on Rails, in another supporting statement. “That’s not a good bargain for society. By banning this practice, we will return the unique value of writing, audio, and video to the people who make it rather than those who aggregate it.”

With US policymakers paying increasingly close attention to adtech, it’s interesting to see Google is accelerating its efforts to replace support for individual-level tracking with what it’s branded as a ‘privacy-safe’ alternative (FLoC).

Yet the tech it has proposed via its Privacy Sandbox will still enable groups (cohorts) of web users to be targeted by advertisers, with ongoing risks for discrimination, the targeting of vulnerable groups of people and societal-scale manipulation — so lawmakers will need to pay close attention to the detail of the ‘Privacy Sandbox’ rather than Google’s branding.

“This is, in a word, bad for privacy,” warned the EFF, writing about the proposal back in 2019. “A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate.”

“FLoC is the opposite of privacy-preserving technology,” it added. “Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google’s future, they will sit back, relax, and let your browser do the work for them.”
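
For a sense of how a cohort ID can function as that kind of behavioural summary: accounts of Chrome’s origin trial describe the FLoC ID as being derived from a locality-sensitive hash (SimHash) of a user’s browsing history, so that similar histories map to similar IDs. The toy sketch below (Python) illustrates only that general idea; the hash width, inputs and clustering steps in the real system differ, so treat every detail here as a simplification rather than Chrome’s actual algorithm.

# Toy illustration of a FLoC-style cohort ID: a SimHash over visited domains.
# A simplified sketch of the general idea only, not Chrome's actual algorithm.
import hashlib

def simhash(domains, bits=16):
    counts = [0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Users with overlapping histories tend to share bits, so they land in nearby cohorts.
    return sum(1 << i for i in range(bits) if counts[i] > 0)

alice = ["news.example", "recipes.example", "gardening.example"]
bob = ["news.example", "recipes.example", "travel.example"]
print(simhash(alice), simhash(bob))  # similar histories tend to produce similar IDs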

#advertising-tech, #behavioral-ads, #facebook, #google, #microtargeting, #misinformation, #online-ads, #policy, #privacy, #surveillance

One company wants to sell the feds location data from every car on Earth

Cars driving down I-80 in Berkeley, California, in May, 2018 when there were still places to go.

Enlarge / Cars driving down I-80 in Berkeley, California, in May, 2018 when there were still places to go. (credit: David Paul Morris | Bloomberg | Getty Images)

There is a strange sort of symmetry in the world of personal data this week: one new report has identified a company that wants to sell the US government granular car location data from basically every vehicle in the world, while a group of privacy advocates is suing another company for providing customer data to the feds.

A surveillance contractor called Ulysses can “remotely geolocate vehicles in nearly every country except for North Korea and Cuba on a near real-time basis,” Vice Motherboard reports.

Ulysses obtains vehicle telematics data from embedded sensors and communications sensors that can transmit information such as seatbelt status, engine temperature, and current vehicle location back to automakers or other parties.

Read 9 remaining paragraphs | Comments

#data-privacy, #location-data, #personal-data, #policy, #privacy, #surveillance, #ulysses

Hackers access security cameras inside Cloudflare, jails, and hospitals

Hackers access security cameras inside Cloudflare, jails, and hospitals

Enlarge (credit: Getty Images)

Hackers say they broke into the network of Silicon Valley startup Verkada and gained access to live video feeds from more than 150,000 surveillance cameras the company manages for Cloudflare, Tesla, and a host of other organizations.

The group published videos and images they said were taken from offices, warehouses, and factories of those companies as well as from jail cells, psychiatric wards, banks, and schools. Bloomberg News, which first reported the breach, said footage viewed by a reporter showed staffers at Florida hospital Halifax Health tackling a man and pinning him to a bed. Another video showed a handcuffed man in a police station in Stoughton, Massachusetts, being questioned by officers.

“I don’t think the claim ‘we hacked the internet’ has ever been as accurate as now,” Tillie Kottmann, a member of a hacker collective calling itself APT 69420 Arson Cats, wrote on Twitter.

Read 6 remaining paragraphs | Comments

#biz-it, #hacking, #privacy, #security-cameras, #surveillance, #tech

A race to reverse engineer Clubhouse raises security concerns

As live audio chat app Clubhouse ascends in popularity around the world, concerns about its data practices also grow.

The app is currently only available on iOS, so some developers set out in a race to create Android, Windows and Mac versions of the service. While these endeavors may not be ill-intentioned, the fact that it takes programmers little effort to reverse engineer and fork Clubhouse — that is, when developers create new software based on its original code — is sounding an alarm about the app’s security.

The common goal of these unofficial apps, as of now, is to broadcast Clubhouse audio feeds in real-time to users who cannot access the app otherwise because they don’t have an iPhone. One such effort is called Open Clubhouse, which describes itself as a “third-party web application based on flask to play Clubhouse audio.” The developer confirmed to TechCrunch that Clubhouse blocked its service five days after its launch without providing an explanation.

“[Clubhouse] asks a lot of information from users, analyzes those data and even abuses them. Meanwhile, it restricts how people use the app and fails to give them the rights they deserve. To me, this constitutes monopoly or exploitation,” said Open Clubhouse’s developer nicknamed AiX.

Clubhouse could not be immediately reached for comment on this story.

AiX wrote the program “for fun” and wanted it to broaden Clubhouse’s access to more people. Another similar effort came from a developer named Zhuowei Zhang, who created Hipster House to let those without an invite browse rooms and users, and those with an invite to join rooms as a listener though they can’t speak — Clubhouse is invite-only at the moment. Zhang stopped developing the project, however, after noticing a better alternative.

These third-party services, despite their innocuous intentions, can be exploited for surveillance purposes, as Jane Manchun Wong, a researcher known for uncovering upcoming features in popular apps through reverse engineering, noted in a tweet.

“Even if the intent of that webpage is to bring Clubhouse to non-iOS users, without a safeguard, it could be abused,” said Wong, referring to a website rerouting audio data from Clubhouse’s public rooms.

Clubhouse lets people create public chat rooms, which are available to any user who joins before a room reaches its maximum capacity, and private rooms, which are only accessible to room hosts and users authorized by the hosts.

But not all users are aware of the open nature of Clubhouse’s public rooms. During its brief window of availability in China, the app was flooded with mainland Chinese users debating politically sensitive issues, from Taiwan to Xinjiang, that are heavily censored in Chinese cyberspace. Some vigilant Chinese users speculated that they could be questioned by the police for making sensitive remarks. While no such event has been publicly reported, Chinese authorities have banned the app since February 8.

Clubhouse’s design is inherently at odds with the sense of semi-privacy it tries to foster. The app encourages people to use their real identities — registration requires a phone number and an existing user’s invite. Inside a room, everyone can see who else is there. This setup instills trust and comfort in users when they speak, as if they were speaking at a networking event.

But the third-party apps that are able to extract Clubhouse’s audio feeds show that the app isn’t even semi-public: It’s public.

More troublesome is that users can “ghost listen,” as developer Zerforschung found: they can hear a room’s conversation without having their profile displayed to the room’s participants. Eavesdropping is made possible by establishing communication directly with Agora, a service provider employed by Clubhouse. As multiple security researchers found, Clubhouse relies on Agora’s real-time audio communication technology, and sources have also confirmed the partnership to TechCrunch.

Some technical explanation is needed here. When a user joins a chatroom on Clubhouse, the app makes a request to Agora’s infrastructure, as the Stanford Internet Observatory discovered. To make the request, the user’s phone contacts Clubhouse’s application programming interface (API), which then creates “tokens,” credentials that authenticate an action, to establish a communication pathway for the app’s audio traffic.
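
To make that flow concrete, here is a minimal sketch of the general pattern: an app backend mints a signed, short-lived token that a separate audio provider will accept. Every name in it (the shared secret, the functions, the token fields) is an assumption for illustration; this is not Clubhouse’s or Agora’s actual code.

    import hashlib
    import hmac
    import json
    import time

    # Hypothetical signing secret the app backend shares with the audio provider.
    SHARED_SECRET = b"demo-app-certificate"

    def issue_audio_token(user_id: str, room_id: str, ttl_seconds: int = 3600) -> str:
        """What an app API might do when a client asks to join a room:
        mint a signed, short-lived token the audio provider will accept."""
        payload = {"user": user_id, "room": room_id, "exp": int(time.time()) + ttl_seconds}
        body = json.dumps(payload, sort_keys=True).encode()
        signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        return body.hex() + "." + signature

    def join_audio_channel(token: str) -> str:
        """Stand-in for the audio provider: verify the token, then open a stream."""
        body_hex, signature = token.split(".")
        body = bytes.fromhex(body_hex)
        expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("invalid token")
        payload = json.loads(body)
        if payload["exp"] < time.time():
            raise PermissionError("expired token")
        return "streaming audio for room " + payload["room"] + " to " + payload["user"]

    # Client flow: ask the app's API for a token, then talk to the audio provider directly.
    token = issue_audio_token("alice", "startup-chat")
    print(join_audio_channel(token))

In this pattern, once the client holds the token, the audio connection no longer needs to pass through the app’s own servers.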

The problem is that there can be a disconnect between Clubhouse and Agora, allowing the Clubhouse end, which manages user profiles, to go inactive while the Agora end, which transmits audio data, remains active, as technology analyst Daniel Sinclair noted. That’s why users can continue to eavesdrop on a room without having their profiles displayed to the room’s participants.
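
A toy continuation of the sketch above shows why that disconnect enables ghost listening. The class and method names are again invented for illustration; the point is only that the displayed-profile list and the audio session are two separate pieces of state, and removing a user from one does not automatically close the other.

    class RoomDirectory:
        """Models the app side: which profiles are displayed in a room."""
        def __init__(self):
            self.visible = {}

        def show(self, room, user):
            self.visible.setdefault(room, set()).add(user)

        def hide(self, room, user):
            self.visible.get(room, set()).discard(user)

    class AudioSessions:
        """Models the provider side: who holds an open audio stream."""
        def __init__(self):
            self.streams = {}

        def connect(self, room, user):
            self.streams.setdefault(room, set()).add(user)

    directory, audio = RoomDirectory(), AudioSessions()

    # A normal join updates both sides.
    directory.show("startup-chat", "alice")
    audio.connect("startup-chat", "alice")

    # "Ghost listening": the profile is withdrawn on the app side,
    # but nothing tears down the audio session on the provider side.
    directory.hide("startup-chat", "alice")
    print(directory.visible["startup-chat"])  # set() -> alice is invisible
    print(audio.streams["startup-chat"])      # {'alice'} -> still receiving audio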

The Agora partnership has sparked other worries. The company, which operates mainly from the U.S. and China, noted in its IPO prospectus that its data may be subject to China’s cybersecurity law, which requires network operators in China to assist police investigations. That possibility, as the Stanford Internet Observatory points out, is contingent on whether Clubhouse stores its data in China.

While the Clubhouse API is blocked in China, the Agora API appears unblocked. Tests by TechCrunch found that users currently need a VPN to join a room, an action managed by Clubhouse, but can listen to the room’s conversation, which is facilitated by Agora, with the VPN off. That leaves China-based users guessing at the safest way to access the app, given that the official stance is that it should not exist. It’s also worth noting that the app was not available on the Chinese App Store even before its ban, and Chinese users had downloaded it through workarounds.

The Clubhouse team may have been overwhelmed by data questions over the past few days, but these early observations from researchers and hackers may push it to fix its vulnerabilities sooner, paving the way for it to grow beyond its several million loyal users and $1 billion valuation.

#audio, #clubhouse, #privacy, #security, #social-audio, #social-networking, #surveillance, #tc, #voice-chat

Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban the use of facial recognition software for its police department, growing the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, 13 members of the city council voted in favor of the ban with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund the city’s police department in June before backing away from that commitment into more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned not only that AI-powered facial recognition systems would disproportionately target communities of color, but also that the technology has demonstrated shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition and also forbid private companies from deploying the technology in public spaces. Earlier legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though it didn’t include a similar provision for private companies.

#clearview-ai, #facial-recognition, #government, #minnesota, #surveillance, #tc