IRS says you can now create an account without submitting to facial recognition

Illustration of a man taking a selfie with a phone and having his face scanned. (credit: Getty Images | Spencer Whalen/EyeEm)

The Internal Revenue Service today said that selfies collected from taxpayers will be deleted and that it has deployed a new verification option as an alternative to its controversial facial recognition system. The IRS’s use of the ID.me facial recognition service has been criticized by privacy and civil rights advocates as well as lawmakers from both the Democratic and Republican parties.

Two weeks ago, the IRS responded to the bipartisan backlash by saying it “will transition away from using a third-party service for facial recognition to help authenticate people creating new online accounts” and “quickly develop and bring online an additional authentication process that does not involve facial recognition.” Today, the IRS announced that a new option for creating accounts without facial recognition is “now available for taxpayers.”

Instead of providing a selfie, “taxpayers will have the option of verifying their identity during a live, virtual interview with agents; no biometric data—including facial recognition—will be required if taxpayers choose to authenticate their identity through a virtual interview,” the IRS said.


#facial-recognition, #id-me, #irs, #policy

IRS stops requiring selfies after facial recognition system is widely panned

A man using a smartphone to take a selfie; the illustration has lines extending from the phone to his face to indicate that facial recognition is being used. (credit: Getty Images | imaginima)

The Internal Revenue Service is dropping a controversial facial recognition system that requires people to upload video selfies when creating new IRS online accounts.

“The IRS announced it will transition away from using a third-party service for facial recognition to help authenticate people creating new online accounts,” the agency said today. “The transition will occur over the coming weeks in order to prevent larger disruptions to taxpayers during filing season. During the transition, the IRS will quickly develop and bring online an additional authentication process that does not involve facial recognition.”

The IRS has been using the third-party system ID.me for facial recognition of taxpayers. Privacy and civil rights advocates and lawmakers from both major parties have objected to the system. The IRS wasn’t demanding ID.me verification for filing tax returns but was requiring it for related services, such as viewing account information, applying for payment plans online, requesting transcripts, and using the Child Tax Credit Update Portal.


#facial-recognition, #irs, #policy

After tagging people for 10 years, Facebook to stop most uses of facial recognition

With an image of himself on a screen in the background, Facebook co-founder and CEO Mark Zuckerberg testifies before the House Financial Services Committee in the Rayburn House Office Building on Capitol Hill, October 23, 2019, in Washington, DC. (credit: Chip Somodevilla/Getty Images)

Facebook introduced facial recognition in 2010, allowing users to automatically tag people in photos. The feature was intended to ease photo sharing by eliminating a tedious task for users. But over the years, facial recognition became a headache for the company itself—it drew regulatory scrutiny along with lawsuits and fines that have cost the company hundreds of millions of dollars.

Today, Facebook (which recently renamed itself Meta) announced that it would be shutting down its facial recognition system and deleting the facial recognition templates of more than 1 billion people.

The change, while significant, doesn’t mean that Facebook is forswearing the technology entirely. “Looking ahead, we still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation,” said Jérôme Pesenti, Facebook/Meta’s vice president of artificial intelligence. “We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts.”


#algorithmic-bias, #algorithms, #facebook, #facial-recognition, #meta, #policy

Biden’s new FTC nominee is a digital privacy advocate critical of Big Tech

President Biden made his latest nomination to the Federal Trade Commission this week, tapping digital privacy expert Alvaro Bedoya to join the agency as it takes a hard look at the tech industry.

Bedoya is the founding director of the Center on Privacy & Technology at Georgetown’s law school and previously served as chief counsel for former Senator Al Franken and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Bedoya has worked on legislation addressing some of the most pressing privacy issues in tech, including stalkerware and facial recognition systems.

In 2016, Bedoya co-authored a report titled “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a year-long investigation that dove deeply into police use of facial recognition systems in the U.S. The report examined law enforcement’s reliance on facial recognition systems and biometric databases at the state level. It argued that regulations are desperately needed to curtail potential abuses and algorithmic failures before the technology inevitably becomes even more commonplace.

Bedoya also isn’t shy about calling out Big Tech. In a New York Times op-ed a few years ago, he took aim at Silicon Valley companies giving user privacy lip service in public while quietly funneling millions toward lobbyists to undermine consumer privacy. The new FTC nominee singled out Facebook specifically, pointing to the company’s efforts to undermine the Illinois Biometric Information Privacy Act, a state law that serves as one of the only meaningful checks on invasive privacy practices in the U.S.

Bedoya argued that the tech industry’s lobbying would have an easier time shaping a single, sweeping piece of privacy regulation than a flurry of targeted, smaller bills. Antitrust advocates in Congress taking aim at tech today seem to have learned the same lesson.

“We cannot underestimate the tech sector’s power in Congress and in state legislatures,” Bedoya wrote. “If the United States tries to pass broad rules for personal data, that effort may well be co-opted by Silicon Valley, and we’ll miss our best shot at meaningful privacy protections.”

If confirmed, Bedoya would join big tech critic Lina Khan, a recent Biden FTC nominee who now chairs the agency. Khan’s focus on antitrust and Amazon in particular would dovetail with Bedoya’s focus on adjacent privacy concerns, making the pair a formidable regulatory presence as the Biden administration seeks to rein in some of the tech industry’s most damaging excesses.

#biden, #biden-administration, #big-tech, #biometrics, #congress, #consumer-privacy, #facial-recognition, #federal-trade-commission, #government, #lina-khan, #privacy, #surveillance, #tc, #united-states

Amazon partners with AXS to install Amazon One palm readers at entertainment venues

Amazon’s biometric scanner for retail, the Amazon One palm reader, is expanding beyond the e-commerce giant’s own stores. The company announced today that it has signed its first third-party customer, ticketing company AXS, which will implement the Amazon One system at Denver, Colorado’s Red Rocks Amphitheatre as an option for contactless entry for event-goers.

This is the first time the Amazon One system will be used outside an Amazon-owned retail store, and the first time it will be used for entry into an entertainment venue. Amazon says it expects AXS to roll out the system to more venues in the future but didn’t offer any specifics as to which ones or when.

At Red Rocks, guests will be able to associate their AXS Mobile ID with Amazon One at dedicated stations before they enter the amphitheatre, or they can enroll at a second station once inside in order to use the reader at future AXS events. The enrollment process takes about a minute and customers can choose to enroll either one or both palms. Once set up, ticketholders can use a dedicated entry line for Amazon One users.

“We are proud to work with Amazon to continue shaping the future of ticketing through cutting-edge innovation,” said Bryan Perez, CEO of AXS, in a statement. “We are also excited to bring Amazon One to our clients and the industry at a time when there is a need for fast, convenient, and contactless ticketing solutions. At AXS, we are continually deploying new technologies to develop secure and smarter ticketing offerings that improve the fan experience before, during, and after events,” he added.

Amazon’s palm reader was first introduced amid the pandemic in September 2020, as a way for shoppers to pay at Amazon Go convenience stores using their palm. To use the system, customers would first insert their credit card and then hover their palm over the device to associate their unique palm print with their payment mechanism. After setup, customers could enter the store just by holding their palm above the biometric scanner for a second or so. Amazon touted the system as a safer, “contactless” means of payment, as customers aren’t supposed to actually touch the reader. (Hopefully, that’s the case, considering the pandemic rages on.)

On the tech side, Amazon One uses computer vision technology to create the palm signatures, it said.

In the months that followed, Amazon expanded the biometric system to several more stores, including other Amazon Go convenience stores, Amazon Go Grocery stores, and its Amazon Books and Amazon 4-star stores. This April, it brought the system to select Whole Foods locations. To encourage more sign-ups, Amazon even introduced a $10 promotional credit to enroll your palm prints at its supported stores.

When palm prints are linked to Amazon accounts, the company is able to collect data from customers’ offline activity to target ads, offers, and recommendations over time. And the data remains with Amazon until a customer explicitly deletes it or goes at least two years without using the feature.

While the system offers an interesting take on contactless payments, Amazon’s track record in this area has raised privacy concerns. The company had in the past sold biometric facial recognition services to law enforcement in the U.S. Its facial recognition technology was the subject of a data privacy lawsuit. And it was found to be still storing Alexa voice data even after users deleted their audio files.

Amazon has responded by noting that its palm print images are encrypted and sent to a secure area built for Amazon One in the cloud, where Amazon creates the customers’ palm signatures. It has also noted that customers can unenroll either from a device or from its website, one.amazon.com, once all of their transactions have been processed.

#alexa, #amazon, #amazon-music, #ceo, #computer-vision-technology, #computing, #denver, #e-reader, #ecommerce, #facial-recognition, #privacy, #retail-store, #retailers, #technology, #united-states, #whole-foods

Biden’s FTC pick is a privacy champion who wants limits on facial recognition

Illustration of a woman’s eye being scanned with technology. (credit: Getty Images | Yuichiro Chino)

President Joe Biden will reportedly nominate Georgetown law professor and privacy researcher Alvaro Bedoya to the Federal Trade Commission. Bedoya is the founding director of Georgetown Law’s Center on Privacy & Technology, where he has focused heavily on facial recognition and other forms of surveillance.

Bedoya co-authored a 2016 report about “unregulated police face recognition in America” after a “year-long investigation that revealed that most American adults are enrolled in a police face recognition network and that vendor companies were doing little to address the race and gender bias endemic to face scanning software,” according to Bedoya’s bio on the Georgetown Law website. The investigation led to Congressional hearings as well as “a slate of laws reining in the technology across the country, and the first-ever comprehensive bias audit of the technology by the National Institute of Standards & Technology.”

Before starting the privacy center at Georgetown, Bedoya was chief counsel for the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Bedoya’s nomination hasn’t been officially announced but was reported today by media outlets including Axios and The Washington Post. Biden’s announcement is expected to be made today, the Post wrote.


#biden, #facial-recognition, #ftc, #policy

The past, present and future of IoT in physical security

When Axis Communications released the first internet protocol (IP) camera after the 1996 Olympic games in Atlanta, there was some initial confusion. Connected cameras weren’t something the market had been clamoring for, and many experts questioned whether they were even necessary.

Today, of course, traditional analog cameras have been almost completely phased out as organizations have recognized the tremendous advantage that IoT devices can offer, but in those early days the technology felt like a significant risk.

To say that things have changed since then would be a dramatic understatement. The growth of the Internet of Things (IoT) represents one of the ways physical security has evolved. Connected devices have become the norm, opening up exciting new possibilities that go far beyond recorded video. Further developments, such as the improvement and widespread acceptance of the IP camera, have helped power additional breakthroughs including improved analytics, increased processing power, and the growth of open-architecture technology. On the 25th anniversary of the initial launch of the IP camera, it is worth reflecting on how far the industry has come — and where it is likely to go from here.

Tech improvements herald the rise of IP cameras

Comparing today’s IP cameras to those available in 1996 is almost laughable. While they were certainly groundbreaking at the time, those early cameras could record just one frame every 17 seconds — quite a change from what cameras can do today.

But despite this drawback, those on the cutting edge of physical security understood what a monumental breakthrough the IP camera could represent. After all, creating a network of cameras would enable more effective remote monitoring, which — if the technology could scale — would enable them to deploy much larger systems, tying together disparate groups of cameras. Early applications might include watching oil fields, airport landing strips or remote cell phone towers. Better still, the technology had the potential to usher in an entirely new world of analytics capabilities.

Of course, better chipsets were needed to make that endless potential a reality. Groundbreaking or not, the limited frame rate of the early cameras was never going to be effective enough to drive widespread adoption of traditional surveillance applications. Solving this problem required a significant investment of resources, but before long these improved chipsets brought IP cameras from one frame every 17 seconds to 30 frames per second. Poor frame rate could no longer be listed as a justification for shunning IP cameras in favor of their analog cousins, and developers could begin to explore the devices’ analytics potential.

Perhaps the most important technological leap was the introduction of embedded Linux, which made IP cameras more practical from a developer point of view. During the 1990s, most devices used proprietary operating systems, which made them difficult to develop for.

Even within the companies themselves, proprietary systems meant that developers had to be trained on a specific technology, costing companies both time and money. There were a few attempts at standardization within the industry, such as the Wind River operating system, but these ultimately failed. They were too small, with limited resources behind them — and besides, a better solution already existed: Linux.

Linux offered a wide range of benefits, not the least of which was the ability to collaborate with other developers in the open source community. This was a road that ran two ways. Because most IP cameras lacked the hard disk necessary to run Linux, a filesystem known as JFFS (the Journalling Flash File System) was developed to allow a device to use a flash memory chip as a hard disk. That technology was contributed to the open source community, and while it is currently on its third iteration, it remains in widespread use today.

Compression technology represented a similar challenge, as the prominent data compression formats of the late ’90s and early 2000s were poorly suited to video. At the time, video storage involved individual frames being stored one by one — a data storage nightmare. Fortunately, the H.264 compression format, which was designed with video in mind, became much more commonplace in 2009.

By the end of that year, more than 90% of IP cameras and most video management systems used the H.264 compression format. It is important to note that improvements in compression have also enabled manufacturers to improve video resolution: before the new format arrived, resolution had not changed since the NTSC/PAL standards of the 1960s. Today, most cameras are capable of recording in high definition (HD).
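To make the storage point concrete, here is a minimal, illustrative sketch (not from the original column) of transcoding a stored clip to H.264 by calling the ffmpeg command-line tool from Python. It assumes ffmpeg with the libx264 encoder is installed; the file names are hypothetical.

```python
# Illustrative only: re-encode a stored surveillance clip to H.264 via ffmpeg.
# Assumes ffmpeg with libx264 is installed; "raw_clip.avi" is a hypothetical file.
import subprocess

def reencode_to_h264(src: str, dst: str, crf: int = 23) -> None:
    """Transcode a clip to H.264; lower CRF means higher quality and larger files."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # input clip (e.g., motion JPEG or uncompressed frames)
            "-c:v", "libx264",    # H.264 software encoder
            "-preset", "medium",  # speed vs. compression trade-off
            "-crf", str(crf),     # constant rate factor (quality target)
            "-an",                # many surveillance clips carry no audio
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    reencode_to_h264("raw_clip.avi", "clip_h264.mp4")
```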

1996: First IP camera is released.
2001: Edge-based analytics with video motion detection arrive.
2006: First downloadable, edge-based analytics become available.
2009: Full HD becomes the standard video resolution; H.264 compression goes mainstream.
2015: Smart compression revolutionizes video storage.

The growth of analytics

Analytics is not exactly a “new” technology — customers requested various analytics capabilities even in the early days of the IP camera — but it is one that has seen dramatic improvement. Although it might seem quaint by today’s high standards, video motion detection was one of the earliest analytics loaded onto IP cameras.

Customers needed a way to detect movement within certain parameters to avoid having a tree swaying in the wind, or a squirrel running by, trigger a false alarm. Further refinement of this type of detection and recognition technology has helped automate many aspects of physical security, triggering alerts when potentially suspicious activity is detected and ensuring that it is brought to human attention. By taking human fallibility out of the equation, analytics has turned video surveillance from a reactive tool to a proactive one.

Motion detection remains one of the most widely used analytics, and while false alarms can never be entirely eliminated, modern improvements have made it a dependable way to detect potential intruders. Object detection is also growing in popularity and is increasingly capable of classifying cars, people, animals and other objects.
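As a rough illustration of how this kind of analytic works (a minimal sketch, not any vendor’s actual implementation), the snippet below applies OpenCV background subtraction to a camera stream and ignores small motion regions so that swaying branches or a passing squirrel are less likely to trigger an alert. The RTSP URL and area threshold are assumptions for the example; OpenCV 4 is assumed.

```python
# Minimal sketch: motion detection via background subtraction with OpenCV 4.
# The stream URL and MIN_AREA threshold are illustrative assumptions.
import cv2

RTSP_URL = "rtsp://camera.example.local/stream"  # hypothetical camera stream
MIN_AREA = 5000  # ignore motion regions smaller than this many pixels

def watch_for_motion(url: str = RTSP_URL) -> None:
    cap = cv2.VideoCapture(url)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (value 127) and noise before looking for movement.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) >= MIN_AREA for c in contours):
            print("motion detected")  # in practice, raise an alert for an operator
    cap.release()

if __name__ == "__main__":
    watch_for_motion()
```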

License plate recognition is popular in many countries (though less so in the United States), not just for identifying vehicles involved in criminal activity, but for uses as simple as parking recognition. Details like car model, shirt color or license plate number are easy for the human eye to miss — but thanks to modern analytics, that data is cataloged and stored for easy reference. The advent of technology like deep learning, which features better pattern recognition and object classification through improved labeling and categorization, will drive further advancements in this area of analytics.

The rise of analytics also helps highlight why the security industry has embraced open-architecture technology. Simply put, it is impossible for a single manufacturer to keep up with every application that its customers might need. By using open-architecture technology, they can empower those customers to seek out the solutions that are right for them, without the need to specifically tailor the device for certain use cases. Hospitals might look to add audio analytics to detect signs of patient distress; retail stores might focus on people counting or theft detection; law enforcement might focus on gunshot detection — with all of these applications housed within the same device model.

It is also important to note that the COVID-19 pandemic drove interesting new uses for both physical security devices and analytics — though some applications, such as using thermal cameras for fever measurement, proved difficult to implement with a high degree of accuracy. Within the healthcare industry, camera usage increased significantly — something that is unlikely to change. Hospitals have seen the benefit of cameras within patient rooms, with video and intercom technology enabling healthcare professionals to monitor and communicate with patients while maintaining a secure environment.

Even simple analytics like cross-line detection can generate an alert if a patient who is a fall risk attempts to leave a designated area, potentially reducing accidents and overall liability. The fact that analytics like this bear only a passing mention today highlights how far physical security has come since the early days of the IP camera.
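For a sense of how simple such an analytic can be, here is a minimal, hypothetical sketch of the cross-line idea: a tracked point has crossed a virtual boundary when the sign of its cross product relative to the line flips between frames. A real product would also track objects over time and confirm the crossing falls within the drawn segment; the coordinates below are purely illustrative.

```python
# Minimal sketch of cross-line detection: a sign flip relative to the boundary
# between consecutive positions indicates a crossing. Coordinates are hypothetical.
from typing import Tuple

Point = Tuple[float, float]

def side_of_line(a: Point, b: Point, p: Point) -> float:
    """Positive on one side of the line through a and b, negative on the other."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_line(a: Point, b: Point, prev_pos: Point, curr_pos: Point) -> bool:
    return side_of_line(a, b, prev_pos) * side_of_line(a, b, curr_pos) < 0

# Example: a boundary drawn across a doorway and a tracked person's position.
door_a, door_b = (0.0, 0.0), (100.0, 0.0)
print(crossed_line(door_a, door_b, (50.0, 10.0), (50.0, -5.0)))  # True -> raise alert
```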

Looking to the future of security

That said, an examination of today’s trends can provide a glimpse into what the future might hold for the security industry. For instance, video resolution will certainly continue to improve.

Ten years ago, the standard resolution for video surveillance was 720p (1 megapixel), and 10 years before that it was the analog NTSC/PAL resolution of 572×488, or 0.3 megapixels. Today, the standard resolution is 1080p (2 megapixels), and a healthy application of Moore’s law indicates that 10 years from now it will be 4K (8 megapixels).
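The megapixel figures above follow directly from the pixel counts; a quick, purely illustrative check:

```python
# Quick arithmetic check of the resolution figures cited above.
resolutions = {
    "analog NTSC/PAL": (572, 488),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "4K": (3840, 2160),
}
for label, (w, h) in resolutions.items():
    print(f"{label}: {w} x {h} = {w * h / 1e6:.2f} megapixels")
# analog ~0.28 MP, 720p ~0.92 MP, 1080p ~2.07 MP, 4K ~8.29 MP
```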

As ever, the amount of storage that higher-resolution video generates is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video possible.

Cybersecurity will also be a growing concern for both manufacturers and end users.

Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect.

Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.

While new regulations are coming, it’s important to remember that regulation always lags behind, and companies that wish to have a positive reputation will need to adhere to their own ethical guidelines. More and more consumers now list ethical considerations among their major concerns—especially in the wake of the COVID-19 pandemic—and today’s businesses will need to strongly consider how to broadcast and enforce responsible product use.

Change is always around the corner

Physical security has come a long way since the IP camera was introduced, but it is important to remember that these changes, while significant, took place over more than two decades. Changes take time — often more time than you might think. Still, it is impossible to compare where the industry stands today to where it stood 25 years ago without being impressed. The technology has evolved, end users’ needs have shifted, and even the major players in the industry have come and gone according to their ability to keep up with the times.

Change is inevitable, but careful observation of today’s trends and how they fit into today’s evolving security needs can help today’s developers and device manufacturers understand how to position themselves for the future. The pandemic highlighted the fact that today’s security devices can provide added value in ways that no one would have predicted just a few short years ago, further underscoring the importance of open communication, reliable customer support and ethical behavior.

As we move into the future, organizations that continue to prioritize these core values will be among the most successful.

#column, #facial-recognition, #hardware, #internet-protocol, #ip-camera, #linux, #opinion, #physical-security, #security, #surveillance, #tc

Lawmakers ask Amazon what it plans to do with palm print biometric data

A group of senators sent new Amazon CEO Andy Jassy a letter Friday pressing the company for more information about how it scans and stores customer palm prints for use in some of its retail stores.

The company rolled out the palm print scanners through a program it calls Amazon One, encouraging people to make contactless payments in its brick and mortar stores without the use of a card. Amazon introduced its Amazon One scanners late last year, and they can now be found in Amazon Go convenience and grocery stores, Amazon Books and Amazon 4-star stores across the U.S. The scanners are also installed in eight Washington state-based Whole Foods locations.

In the new letter, Senators Amy Klobuchar (D-MN), Bill Cassidy (R-LA) and Jon Ossoff (D-GA) press Jassy for details about how Amazon plans to expand its biometric payment system and if the data collected will help the company target ads.

“Amazon’s expansion of biometric data collection through Amazon One raises serious questions about Amazon’s plans for this data and its respect for user privacy, including about how Amazon may use the data for advertising and tracking purposes,” the senators wrote in the letter, embedded below.

The lawmakers also requested information on how many people have enrolled in Amazon One to date, how Amazon will secure the sensitive data and if the company has ever paired the palm prints with facial recognition data it collects elsewhere.

“In contrast with biometric systems like Apple’s Face ID and Touch ID or Samsung Pass, which store biometric information on a user’s device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks,” the senators wrote. “… Data security is particularly important when it comes to immutable customer data, like palm prints.”

The company controversially introduced a $10 credit for new users who enroll their palm prints in the program, prompting an outcry from privacy advocates who see it as a cheap tactic to coerce people to hand over sensitive personal data.

There’s plenty of reason to be skeptical. Amazon has faced fierce criticism for its other big biometric data project, the AI facial recognition software known as Rekognition, which the company provided to U.S. law enforcement agencies before eventually backtracking with a moratorium on policing applications for the software last year.

#amazon, #amy-klobuchar, #andy-jassy, #artificial-intelligence, #biometric-scanning, #biometrics, #ceo, #facial-recognition, #facial-recognition-software, #privacy, #tc, #united-states, #whole-foods

Despite controversies and bans, facial recognition startups are flush with VC cash

If efforts by states and cities to pass privacy regulations curbing the use of facial recognition are anything to go by, you might fear the worst for the companies building the technology. But a recent influx of investor cash suggests the facial recognition startup sector is thriving, not suffering.

Facial recognition is one of the most controversial and complex policy areas in play. The technology can be used to track where you go and what you do. It’s used by public authorities and in private businesses like stores. But facial recognition has been shown to be flawed and inaccurate, often misidentifies non-white faces, and disproportionately affects communities of color. Its flawed algorithms have already been used to send innocent people to jail, and privacy advocates have raised countless concerns about how this kind of biometric data is stored and used.

With the threat of federal legislation looming, some of the biggest facial recognition companies like Amazon, IBM, and Microsoft announced they would stop selling their facial recognition technology to police departments to try to appease angry investors, customers, and even their own employees who protested the deployment of such technologies by the U.S. government and immigration authorities.

The pushback against facial recognition didn’t stop there. Since the start of the year, Maine, Massachusetts, and the city of Minneapolis have all passed legislation curbing or banning the use of facial recognition in some form, following in the steps of many other cities and states before them and setting the stage for others, like New York, which are eyeing legislation of their own.

In those same six or so months, investors have funneled hundreds of millions into several facial recognition startups. A breakdown of Crunchbase data by FindBiometrics shows a sharp rise in venture funding for facial recognition companies, with well over $500 million invested in 2021 so far, compared to $622 million for all of 2020.

About half of that $500 million comes from one startup alone. Israel-based startup AnyVision raised $235 million in a Series C round earlier this month from SoftBank’s Vision Fund 2 for its facial recognition technology, which is used in schools, stadiums, casinos, and retail stores. Macy’s is a known customer and uses the face-scanning technology to identify shoplifters. It’s a steep funding round compared to a year earlier, when Microsoft publicly pulled its investment in AnyVision’s Series A following an investigation by former U.S. Attorney General Eric Holder into reports that the startup’s technology was being used by the Israeli government to surveil residents in the West Bank.


Paravision, the company marred by controversy after it was accused of using facial recognition on its users without informing them, raised $23 million in a funding round led by J2 Ventures.

And last week, Clearview AI, the controversial facial recognition startup that is the subject of several government investigations and multiple class-action suits for allegedly scraping billions of profile photos from social media sites, confirmed to The New York Times that it raised $30 million from investors who asked “not to be identified,” saying only that they are “institutional investors and private family offices.” That is to say, while investors are happy to see their money go toward building facial recognition systems, they are all too aware of the risks and controversies associated with attaching their names to the technology.

Although the applications and customers of facial recognition wildly vary, there’s still a big market for the technology.

Many of the cities and towns with facial recognition bans also have carve-outs that allow its use in some circumstances, or broad exemptions for private businesses that can freely buy and use the technology. Meanwhile, the exclusion of many China-based facial recognition companies, like Hikvision and Dahua, which the U.S. government has linked to human rights abuses against the Uighur Muslim minority in Xinjiang, along with dozens of other startups blacklisted by the U.S. government, has pushed some of the strongest competition out of the most lucrative U.S. markets, such as government customers.

But as facial recognition continues to draw scrutiny, investors are urging companies to do more to make sure their technologies are not being misused.

In June, a group of 50 investors with more than $4.5 trillion in assets called on dozens of facial recognition companies, including Amazon, Facebook, Alibaba and Huawei, to build their technologies ethically.

“In some instances, new technologies such as facial recognition technology may also undermine our fundamental rights. Yet this technology is being designed and used in a largely unconstrained way, presenting risks to basic human rights,” the statement read.

It’s not just ethics, but also a matter of trying to future-proof the industry from inevitable further political headwinds. In April, the European Union’s top data protection watchdog called for an end to facial recognition in public spaces across the bloc.

“As mass surveillance expands, technological innovation is outpacing human rights protection. There are growing reports of bans, fines, and blacklistings of the use of facial recognition technology. There is a pressing need to consider these questions,” the statement added.

#anyvision, #clearview-ai, #dahua, #face-id, #facial-recognition, #funding, #hikvision, #huawei, #microsoft, #paravision, #privacy, #retail-stores, #security, #surveillance, #video-surveillance

Maine’s facial recognition law shows bipartisan support for protecting privacy

Maine has joined a growing number of cities, counties and states that are rejecting dangerously biased surveillance technologies like facial recognition.

The new law, which is the strongest statewide facial recognition law in the country, not only received broad, bipartisan support, but it passed unanimously in both chambers of the state legislature. Lawmakers and advocates spanning the political spectrum — from the progressive lawmaker who sponsored the bill to the Republican members who voted it out of committee, from the ACLU of Maine to state law enforcement agencies — came together to secure this major victory for Mainers and anyone who cares about their right to privacy.

Maine is just the latest success story in the nationwide movement to ban or tightly regulate the use of facial recognition technology, an effort led by grassroots activists and organizations like the ACLU. From the Pine Tree State to the Golden State, national efforts to regulate facial recognition demonstrate a broad recognition that we can’t let technology determine the boundaries of our freedoms in the digital 21st century.

Facial recognition technology poses a profound threat to civil rights and civil liberties. Without democratic oversight, governments can use the technology as a tool for dragnet surveillance, threatening our freedoms of speech and association, due process rights, and right to be left alone. Democracy itself is at stake if this technology remains unregulated.


We know the burdens of facial recognition are not borne equally, as Black and brown communities — especially Muslim and immigrant communities — are already targets of discriminatory government surveillance. Making matters worse, face surveillance algorithms tend to have more difficulty accurately analyzing the faces of darker-skinned people, women, the elderly and children. Simply put: The technology is dangerous when it works — and when it doesn’t.

But not all approaches to regulating this technology are created equal. Maine is among the first in the nation to pass comprehensive statewide regulations. Washington was the first, passing a weak law in the face of strong opposition from civil rights, community and religious liberty organizations. The law passed in large part because of strong backing from Washington-based megacorporation Microsoft. Washington’s facial recognition law would still allow tech companies to sell their technology, worth millions of dollars, to every conceivable government agency.

In contrast, Maine’s law strikes a different path, putting the interests of ordinary Mainers above the profit motives of private companies.

Maine’s new law prohibits the use of facial recognition technology in most areas of government, including in public schools and for surveillance purposes. It creates carefully carved out exceptions for law enforcement to use facial recognition, creating standards for its use and avoiding the potential for abuse we’ve seen in other parts of the country. Importantly, it prohibits the use of facial recognition technology to conduct surveillance of people as they go about their business in Maine, attending political meetings and protests, visiting friends and family, and seeking out healthcare.

In Maine, law enforcement must now — among other limitations — meet a probable cause standard before making a facial recognition request, and they cannot use a facial recognition match as the sole basis to arrest or search someone. Nor can local police departments buy, possess or use their own facial recognition software, ensuring shady technologies like Clearview AI will not be used by Maine’s government officials behind closed doors, as has happened in other states.

Maine’s law and others like it are crucial to preventing communities from being harmed by new, untested surveillance technologies like facial recognition. But we need a federal approach, not only a piecemeal local approach, to effectively protect Americans’ privacy from facial surveillance. That’s why it’s crucial for Americans to support the Facial Recognition and Biometric Technology Moratorium Act, a bill introduced by members of both houses of Congress last month.

The ACLU supports this federal legislation that would protect all people in the United States from invasive surveillance. We urge all Americans to ask their members of Congress to join the movement to halt facial recognition technology and support it, too.

#artificial-intelligence, #biometrics, #clearview-ai, #column, #facial-recognition, #facial-recognition-software, #government, #law-enforcement, #maine, #opinion, #privacy, #surveillance-technologies, #tc

New York City’s new biometrics privacy law takes effect

A new biometrics privacy ordinance has taken effect across New York City, putting new limits on what businesses can do with the biometric data they collect on their customers.

From Friday, businesses that collect biometric information — most commonly in the form of facial recognition and fingerprints — are required to conspicuously post notices and signs to customers at their doors explaining how their data will be collected. The ordinance applies to a wide range of businesses — retailers, stores, restaurants, and theaters, to name a few — which are also barred from selling, sharing, or otherwise profiting from the biometric information that they collect.

The move will give New Yorkers — and the city’s millions of visitors each year — greater protections over how their biometric data is collected and used, while also serving to dissuade businesses from using technology that critics say is discriminatory and often doesn’t work.

Businesses can face stiff penalties for violating the law, but can escape fines if they fix the violation quickly.

The law is by no means perfect, as none of these laws ever are. For one, it doesn’t apply to government agencies, including the police. Of the businesses that the ordinance does cover, it exempts employees of those businesses, such as those required to clock in and out of work with a fingerprint. And the definition of what counts as a biometric will likely face challenges that could expand or narrow what is covered.

New York is the latest U.S. city to enact a biometric privacy law, after Portland, Oregon, passed a similar ordinance last year. But the law falls short of stronger biometric privacy laws in effect elsewhere.

Illinois, for example, has the Biometric Information Privacy Act, which grants residents the right to sue over any use of their biometric data without consent. Facebook this year settled for $650 million in a class-action suit that Illinois residents filed in 2015 after the social networking giant used facial recognition to tag users in photos without their permission.

Albert Fox Cahn, the executive director of the New York-based Surveillance Technology Oversight Project, said the law is an “important step” to learn how New Yorkers are tracked by local businesses.

“A false facial recognition match could mean having the NYPD called on you just for walking into a Rite Aid or Target,” he told TechCrunch. He also said that New York should go further by outlawing systems like facial recognition altogether, as some cities have done.


#articles, #biometrics, #face-id, #facebook, #facial-recognition, #facial-recognition-software, #illinois, #learning, #new-york, #new-york-city, #new-yorkers, #oregon, #portland, #privacy, #rite-aid, #security, #surveillance, #techniques

Dozens of Chinese phone games now require facial scans to play at night

A child on the street is fascinated by what is on a smartphone. (credit: Aurich Lawson | Getty Images)

Tencent, the world’s largest Chinese video game publisher, has taken an extreme step to comply with its nation’s rules about limiting minors’ access to video games. As of this week, the publisher has added a facial recognition system, dubbed “Midnight Patrol,” to over 60 of its China-specific smartphone games, and it will disable gameplay in popular titles like Honor of Kings if users either decline the facial check or fail it.

In all affected games, once a gameplay session during the nation’s official gaming curfew hours (10 pm to 8 am) exceeds an unspecified amount of time, the game in question will be interrupted by a prompt to scan the player’s face. Should an adult fail the test for any reason, Tencent makes its “too bad, so sad” attitude clear in its announcement: users can try to play again the next day.

This week’s change doubles down on a limited facial-scan system implemented by Tencent in the Chinese version of Honor of Kings in 2018. Since that rollout, we’ve yet to hear exactly how the system works. Does it determine a user’s age based on facial highlights? Does it cross-reference existing facial data—and possibly leverage any of its home nation’s public facial-scanning systems? Tencent has not clarified any of Midnight Patrol’s technical details.


#china, #facial-recognition, #gaming-culture, #tencent

Ban biometric surveillance in public to safeguard rights, urge EU bodies

There have been further calls from EU institutions to outlaw biometric surveillance in public.

In a joint opinion published today, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, have called for draft EU regulations on the use of artificial intelligence technologies to go further than the Commission’s proposal in April — urging that the planned legislation should be beefed up to include a “general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, in any context”.

Such technologies are simply too harmful to EU citizens’ fundamental rights and freedoms — like privacy and equal treatment under the law — to permit their use, is the argument.

The EDPB is responsible for ensuring a harmonized application of the EU’s privacy rules, while the EDPS oversees EU institutions’ own compliance with data protection law and also provides legislative guidance to the Commission.

EU lawmakers’ draft proposal on regulating applications of AI contained restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions which quickly attracted major criticism from digital rights and civil society groups, as well as a number of MEPs.

The EDPS himself also quickly urged a rethink. Now he’s gone further, with the EDPB joining in with the criticism.

The EDPB and the EDPS have jointly fleshed out a number of concerns with the EU’s AI proposal — while welcoming the overall “risk-based approach” taken by EU lawmakers — saying, for example, that legislators must be careful to ensure alignment with the bloc’s existing data protection framework to avoid rights risks.

“The EDPB and the EDPS strongly welcome the aim of addressing the use of AI systems within the European Union, including the use of AI systems by EU institutions, bodies or agencies. At the same time, the EDPB and EDPS are concerned by the exclusion of international law enforcement cooperation from the scope of the Proposal,” they write.

“The EDPB and EDPS also stress the need to explicitly clarify that existing EU data protection legislation (GDPR, the EUDPR and the LED) applies to any processing of personal data falling under the scope of the draft AI Regulation.”

As well as calling for the use of biometric surveillance to be banned in public, the pair have urged a total ban on AI systems using biometrics to categorize individuals into “clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights”.

That’s an interesting concern in light of Google’s push, in the adtech realm, to replace behavioral micromarketing of individuals with ads that address cohorts (or groups) of users, based on their interests — with such clusters of web users set to be defined by Google’s AI algorithms.

(It’s interesting to speculate, therefore, whether FLoC creates a legal discrimination risk — based on how individual mobile users are grouped together for ad targeting purposes. Certainly, concerns have been raised over the potential for FLoCs to scale bias and predatory advertising. And it’s also interesting that Google avoided running early tests in Europe, likely owing to the EU’s data protection regime.)

In another recommendation today, the EDPB and the EDPS also express a view that the use of AI to infer emotions of a natural person is “highly undesirable and should be prohibited” —  except for what they describe as “very specified cases, such as some health purposes, where the patient emotion recognition is important”.

“The use of AI for any type of social scoring should be prohibited,” they go on — touching on one use-case that the Commission’s draft proposal does suggest should be entirely prohibited, with EU lawmakers evidently keen to avoid any China-style social credit system taking hold in the region.

However, by failing to include a prohibition on biometric surveillance in public in the proposed regulation, the Commission is arguably risking just such a system being developed on the sly — i.e., by not banning private actors from deploying technology that could be used to track and profile people’s behavior remotely and en masse.

Commenting in a statement, the EDPB’s chair Andrea Jelinek and the EDPS Wiewiórowski argue as much, writing [emphasis ours]:

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms. This calls for an immediate application of the precautionary approach. A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI. The proposed regulation should also prohibit any type of use of AI for social scoring, as it is against the EU fundamental values and can lead to discrimination.”

In their joint opinion they also express concerns about the Commission’s proposed enforcement structure for the AI regulation, arguing that data protection authorities (within Member States) should be designated as national supervisory authorities (“pursuant to Article 59 of the [AI] Proposal”) — pointing out the EU DPAs are already enforcing the GDPR (General Data Protection Regulation) and the LED (Law Enforcement Directive) on AI systems involving personal data; and arguing it would therefore be “a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions across the EU” if they were given competence for supervising the AI Regulation too.

They are also not happy with the Commission’s plan to give itself a predominant role in the planned European Artificial Intelligence Board (EAIB) — arguing that this “would conflict with the need for an AI European body independent from any political influence”. To ensure the Board’s independence the proposal should give it more autonomy and “ensure it can act on its own initiative”, they add.

The Commission has been contacted for comment.

The AI Regulation is one of a number of digital proposals unveiled by EU lawmakers in recent months. Negotiations between the different EU institutions — and lobbying from industry and civil society — continues as the bloc works toward adopting new digital rules.

In another recent and related development, the UK’s information commissioner warned last week over the threat posed by big data surveillance systems that are able to make use of technologies like live facial recognition — although she claimed it’s not her place to endorse or ban a technology.

But her opinion makes it clear that many applications of biometric surveillance may be incompatible with the UK’s privacy and data protection framework.

#andrea-jelinek, #artificial-intelligence, #biometrics, #data-protection, #data-protection-law, #edpb, #edps, #europe, #european-data-protection-board, #european-union, #facial-recognition, #general-data-protection-regulation, #law-enforcement, #privacy, #surveillance, #united-kingdom, #wojciech-wiewiorowski

UK’s ICO warns over ‘big data’ surveillance threat of live facial recognition in public

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance in public — to set out what is dubbed as the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have already been tapping into the technology — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of the tech.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion. That includes the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK having a data adequacy agreement from the EU is dependent on the UK having essentially equivalent protections for people’s data. Without this coveted data adequacy status UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.

 

#artificial-intelligence, #biometrics, #clearview-ai, #data-protection, #data-protection-law, #elizabeth-denham, #europe, #european-commission, #european-union, #facial-recognition, #general-data-protection-regulation, #information-commissioners-office, #law-enforcement, #privacy, #privacy-international, #safe-harbor, #surveillance, #tc, #uk-government, #united-kingdom

Supreme Court revives LinkedIn case to protect user data from web scrapers

The Supreme Court has given LinkedIn another chance to stop a rival company from scraping personal information from users’ public profiles, a practice LinkedIn says should be illegal but one that could have broad ramifications for internet researchers and archivists.

LinkedIn lost its case against hiQ Labs in 2019 after the U.S. Ninth Circuit Court of Appeals ruled that the Computer Fraud and Abuse Act (CFAA) does not prohibit a company from scraping data that is publicly accessible on the internet.

The Microsoft-owned social network argued that the mass scraping of its users’ profiles was in violation of the CFAA, which prohibits accessing a computer without authorization.

hiQ Labs, which uses public data to analyze employee attrition, argued at the time that a ruling in LinkedIn’s favor “could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.” (hiQ Labs has also been sued by Facebook, which claims it scraped public data not only from Facebook and Instagram but also from Amazon, Twitter, and YouTube.)

The Supreme Court said it would not take on the case, but instead ordered the appeals court to hear the case again in light of its recent ruling, which found that a person cannot violate the CFAA if they improperly access data on a computer they have permission to use.

The CFAA was once dubbed the “worst law” in the technology law books by critics who have long argued that its outdated and vague language failed to keep up with the pace of the modern internet.

Journalists and archivists have long scraped public data as a way to save and archive copies of old or defunct websites before they shut down. But other cases of web scraping have sparked anger and concerns over privacy and civil liberties. In 2019, a security researcher scraped millions of Venmo transactions, which the company does not make private by default. Clearview AI, a controversial facial recognition startup, claimed it scraped over 3 billion profile photos from social networks without their permission.

 

#amazon, #clearview-ai, #computer-fraud-and-abuse-act, #congress, #facebook, #facial-recognition, #hacking, #linkedin, #microsoft, #privacy, #security, #social-network, #social-networks, #supreme-court, #twitter, #venmo, #web-scraping

EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high profile voice to the critique that the Commission hasn’t lived up to its much-trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

#ai-regulation, #artificial-intelligence, #biometrics, #edps, #europe, #european-union, #facial-recognition, #law-enforcement, #policy, #privacy, #surveillance, #wojciech-wiewiorowski

New privacy bill would end law enforcement practice of buying data from brokers

A new bill known as the Fourth Amendment is Not for Sale Act would seal up a loophole that intelligence and law enforcement agencies use to obtain troves of sensitive and identifying information to which they wouldn’t otherwise have legal access.

The new legislation, proposed by Senators Ron Wyden (D-OR) and Rand Paul (R-KY), would require government agencies to obtain a court order to access data from brokers. Court orders are already required when the government seeks analogous data from mobile providers and tech platforms.

“There’s no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider,” Wyden said. Wyden describes the loophole as a way that police and other agencies buy data to “end-run the Fourth Amendment.”

Paul criticized the government for using the current data broker loophole to circumvent Americans’ constitutional rights. “The Fourth Amendment’s protection against unreasonable search and seizure ensures that the liberty of every American cannot be violated on the whims, or financial transactions, of every government officer,” Paul said.

Critically, the bill would also ban law enforcement agencies from buying data on Americans when it was obtained through hacking, violations of terms of service or “from a user’s account or device.”

That bit highlights the questionable practices of Clearview AI, a deeply controversial tech company that sells access to a facial recognition search engine. Clearview’s platform collects pictures of faces scraped from across the web, including social media sites, and sells access to that data to police departments around the country and federal agencies like ICE.

In scraping their sites for data to sell, Clearview has run afoul of just about every major social media platform’s terms of service. Facebook, YouTube, Twitter, LinkedIn and Google have all denounced Clearview for using data culled from their services and some have even sent cease-and-desists ordering the data broker to stop.

The bill would also expand privacy laws to apply to infrastructure companies that own cell towers and data cables; seal up workarounds that allow intelligence agencies to obtain metadata from Americans’ international communications without review by a FISA court; and ensure that agencies seek probable cause orders to obtain location and web browsing data.

The bill isn’t just some nascent proposal. It’s already attracted bipartisan support from a number of key co-sponsors, including Senate Majority Leader Chuck Schumer and Bernie Sanders on the Democratic side and Republicans Mike Lee and Steve Daines. A House version of the legislation was also introduced Wednesday.

 

#bernie-sanders, #cell-towers, #chuck-schumer, #clearview-ai, #facial-recognition, #google, #government, #mass-surveillance, #rand-paul, #ron-wyden, #tc

EU lawmakers propose strict curbs on use of facial recognition

Enlarge (credit: John Lamb / The Image Bank / Getty Images)

EU regulators have proposed strict curbs on the use of facial recognition in public spaces, limiting the controversial technology to a small number of public-interest scenarios, according to new draft legislation seen by the Financial Times.

In a confidential 138-page document, officials said facial recognition systems infringed on individuals’ civil rights and therefore should only be used in scenarios in which they were deemed essential, for instance in the search for missing children and the policing of terrorist events.

The draft legislation added that “real-time” facial recognition—which uses live tracking, rather than past footage or photographs—in public spaces by the authorities should only ever be used for limited periods of time, and it should be subject to prior consent by a judge or a national authority.

Read 9 remaining paragraphs | Comments

#european-union, #facial-recognition, #law-enforcement, #policy, #privacy

MEPs call for European AI rules to ban biometric surveillance in public

A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.

They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.

The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, this leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.

“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.

They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.

“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore heightened risk from discriminatory tools supercharged with AI).

“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.

The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.

However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.

They also express alarm at the proposal having an exemption on the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.

“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”

“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”

The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.

It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.

#ai, #ai-regulation, #artificial-intelligence, #biometrics, #discrimination, #europe, #european-parliament, #european-union, #facial-recognition, #fundamental-rights, #law-enforcement, #mass-surveillance, #meps, #national-security, #policy, #privacy, #surveillance

Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge

Labor activists challenging Uber over what they allege are ‘robo-firings’ of drivers in Europe have trumpeted winning a default judgement in the Netherlands — where the Court of Amsterdam ordered the ride-hailing giant to reinstate six drivers who the litigants claim were unfairly terminated “by algorithmic means.”

The court also ordered Uber to pay the fired drivers compensation.

The challenge references Article 22 of the European Union’s General Data Protection Regulation (GDPR) — which provides protection for individuals against purely automated decisions with a legal or significant impact.

The activists say this is the first time a court has ordered the overturning of an automated decision to dismiss workers from employment.

However the judgement, issued on February 24, was a default ruling — and Uber says it was not aware of the case until last week, claiming that was why it did not contest it (nor, indeed, comply with the order).

It had until March 29 to do so, per the litigants, who are being supported by the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE).

Uber argues the default judgement was not correctly served and says it is now making an application to set the default ruling aside and have its case heard “on the basis that the correct procedure was not followed.”

It envisages the hearing taking place within four weeks of its Dutch entity, Uber BV, being made aware of the judgement — which it says occurred on April 8.

“Uber only became aware of this default judgement last week, due to representatives for the ADCU not following proper legal procedure,” an Uber spokesperson told TechCrunch.

A spokesperson for WIE denied that correct procedure was not followed but welcomed the opportunity for Uber to respond to questions over how its driver ID systems operate in court, adding: “They [Uber] are out of time. But we’d be happy to see them in court. They will need to show meaningful human intervention and provide transparency.”

Uber pointed to a separate judgement by the Amsterdam Court last month — which rejected another ADCU- and WIE-backed challenge to Uber’s anti-fraud systems, with the court accepting its explanation that algorithmic tools are mere aids to human “anti-fraud” teams who it said take all decisions on terminations.

“With no knowledge of the case, the Court handed down a default judgement in our absence, which was automatic and not considered. Only weeks later, the very same Court found comprehensively in Uber’s favour on similar issues in a separate case. We will now contest this judgement,” Uber’s spokesperson added.

However WIE said this default-judgement “robo-firing” challenge specifically targets Uber’s Hybrid Real-Time ID System — a system that incorporates facial recognition checks and which labor activists recently found had misidentified drivers in a number of instances.

It also pointed to a separate development this week in the U.K., where it said the City of London Magistrates Court ordered the city’s transport regulator, TfL, to reinstate the licence of one of the drivers it had revoked after Uber routinely notified it of a dismissal (also triggered by Uber’s real time ID system, per WIE).

Reached for comment on that, a TfL spokesperson said: “The safety of the travelling public is our top priority and where we are notified of cases of driver identity fraud, we take immediate licensing action so that passenger safety is not compromised. We always require the evidence behind an operator’s decision to dismiss a driver and review it along with any other relevant information as part of any decision to revoke a licence. All drivers have the right to appeal a decision to remove a licence through the Magistrates’ Court.”

The regulator has been applying pressure to Uber since 2017 when it took the (shocking to Uber) decision to revoke the company’s licence to operate — citing safety and corporate governance concerns.

Since then Uber has been able to continue to operate in the U.K. capital but the company remains under pressure to comply with a laundry list of requirements set by TfL as it tries to regain a full operator licence.

Commenting on the default Dutch judgement on the Uber driver terminations in a statement, James Farrar, director of WIE, accused gig platforms of “hiding management control in algorithms.”

“For the Uber drivers robbed of their jobs and livelihoods this has been a dystopian nightmare come true,” he said. “They were publicly accused of ‘fraudulent activity’ on the back of poorly governed use of bad technology. This case is a wake-up call for lawmakers about the abuse of surveillance technology now proliferating in the gig economy. In the aftermath of the recent U.K. Supreme Court ruling on worker rights gig economy platforms are hiding management control in algorithms. This is misclassification 2.0.”

In another supporting statement, Yaseen Aslam, president of the ADCU, added: “I am deeply concerned about the complicit role Transport for London has played in this catastrophe. They have encouraged Uber to introduce surveillance technology as a price for keeping their operator’s license and the result has been devastating for a TfL licensed workforce that is 94% BAME. The Mayor of London must step in and guarantee the rights and freedoms of Uber drivers licensed under his administration.”  

When pressed on the driver termination challenge being specifically targeted at its Hybrid Real-Time ID system, Uber declined to comment in greater detail — claiming the case is “now a live court case again”.

But its spokesman suggested it will seek to apply the same defence it used against the earlier “robo-firing” charge — when it argued its anti-fraud systems do not equate to automated decision making under EU law because “meaningful human involvement [is] involved in decisions of this nature”.

 

#app-drivers-couriers-union, #artificial-intelligence, #automated-decisions, #europe, #european-union, #facial-recognition, #gdpr, #general-data-protection-regulation, #gig-worker, #james-farrar, #labor, #lawsuit, #london, #netherlands, #transport-for-london, #uber, #united-kingdom

EU plan for risk-based AI rules to set fines as high as 4% of global turnover, per leaked draft

European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.

The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.

At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.

Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.

The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.

Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.

Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.

What’s high risk AI?

Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.

“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.

“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.

Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”

Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
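To make the draft’s two-step test concrete, here is a minimal, purely illustrative Python sketch of how such a classification could be expressed. The data structure, the numeric severity and probability scales and the threshold are all assumptions made for the example; nothing here is defined in the leaked text.

```python
# Hypothetical sketch of the two-step "high risk" test described above:
# step 1 asks whether the intended purpose maps to any listed harm,
# step 2 weighs the severity and probability of that harm.
from dataclasses import dataclass

@dataclass
class IntendedPurpose:
    description: str
    possible_harms: list    # e.g. ["adverse impact on professional opportunities"]
    severity: float         # 0.0 (negligible) to 1.0 (severe) -- illustrative scale
    probability: float      # 0.0 to 1.0 -- illustrative scale

def is_high_risk(purpose: IntendedPurpose, risk_threshold: float = 0.25) -> bool:
    # Step 1: no listed harm means not high risk under this toy model.
    if not purpose.possible_harms:
        return False
    # Step 2: combine severity and probability of the possible harm.
    return purpose.severity * purpose.probability >= risk_threshold

recruitment_ai = IntendedPurpose(
    description="CV-screening system used in recruitment",
    possible_harms=["adverse impact on professional opportunities"],
    severity=0.7,
    probability=0.5,
)
print(is_high_risk(recruitment_ai))  # True under these illustrative numbers
```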

So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met such systems would not be barred from the EU market under the legislative plan.

Other requirements include measures in the area of security and ensuring the AI achieves consistency of accuracy in performance — with a stipulation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.

“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.

“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.

“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”

Prohibited practices and biometrics

Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.

AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.

A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.

On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn-out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.

The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”

It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before last year’s White Paper steered away from a ban.

In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).

“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”

AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.

Conformity assessments for high risk AIs are not envisaged as a one-off exercise, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”

“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.

The carrot for compliant businesses is getting to display a ‘CE’ mark, helping them win the trust of users and gain friction-free access across the bloc’s single market.

“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”

Transparency for bots and deepfakes

As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market and conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.

It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).

“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.

“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
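As a rough illustration of what that labelling obligation might look like in practice, here is a hedged sketch. The field names, the exemption list and the label string are invented for the example and are not taken from the draft.

```python
# Illustrative only: attach a disclosure label to generated media unless the
# declared use falls under one of the (invented) carve-outs mentioned above.
from dataclasses import dataclass, field

EXEMPT_USES = {"satire", "parody", "arts_and_sciences", "public_security"}

@dataclass
class GeneratedMedia:
    content_id: str
    media_type: str                      # "image", "audio" or "video"
    use_context: str                     # declared purpose of publication
    labels: list = field(default_factory=list)

def apply_transparency_label(media: GeneratedMedia) -> GeneratedMedia:
    # Label synthetic content unless an exemption applies.
    if media.use_context not in EXEMPT_USES:
        media.labels.append("artificially created or manipulated")
    return media

clip = GeneratedMedia(content_id="clip-001", media_type="video", use_context="advertising")
print(apply_transparency_label(clip).labels)  # ['artificially created or manipulated']
```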

What about enforcement?

While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).

So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.

We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate?

“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.

The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.

“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.

The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.

 

#ai, #artificial-intelligence, #behavioral-advertising, #europe, #european-commission, #european-data-protection-board, #european-union, #facebook, #facial-recognition, #general-data-protection-regulation, #policy, #regulation, #tc

Facebook gets a C – Startup rates the ‘ethics’ of social media platforms, targets asset managers

By now you’ve probably heard of ESG (Environmental, Social, Governance) ratings for companies, or ratings for their carbon footprint. Well, now a UK company has come up with a way of rating the ‘ethics’ of social media companies.
  
EthicsGrade is an ESG ratings agency, focusing on AI governance. Headed up by Charles Radclyffe, the former head of AI at Fidelity, it uses AI-driven models to create a more complete picture of the ESG of organizations, harnessing Natural Language Processing to automate the analysis of huge data sets. This includes tracking controversial topics and public statements.

Frustrated with the green-washing of some ‘environmental’ stocks, Radclyffe realized that the AI governance of social media companies was not being properly considered, despite presenting an enormous risk to investors in the wake of such scandals as the manipulation of Facebook by companies such as Cambridge Analytica during the US Election and the UK’s Brexit referendum.

EthicsGrade Industry Summary Scorecard – Social Media

The idea is that these ratings are used by companies to better see where they should improve. But the twist is that asset managers can also see where the risks of AI might lie.

Speaking to TechCrunch he said: “While at Fidelity I got a reputation within the firm for being the go-to person, for my colleagues in the investment team, who wanted to understand the risks within the technology firms that we were investing in. After being asked a number of times about some dodgy facial recognition company or a social media platform, I realized there was actually a massive absence of data around this stuff as opposed to anecdotal evidence.”

He says that when he left Fidelity he decided EthicsGrade would set out to cover not just ESGs but also AI ethics for platforms that are driven by algorithms.

He told me: “We’ve built a model to analyze technology governance. We’ve covered 20 industries. So most of what we’ve published so far has been non-tech companies because these are risks that are inherent in many other industries, other than simply social media or big tech. But over the next couple of weeks, we’re going live with our data on things which are directly related to tech, starting with social media.”

Essentially, what they are doing closely parallels what is being done in the ESG space.

“The question we want to be able to answer is how does Tik Tok compare against Twitter or Wechat as against WhatsApp. And what we’ve essentially found is that things like GDPR have done a lot of good in terms of raising the bar on questions like data privacy and data governance. But in a lot of the other areas that we cover, such as ethical risk or a firm’s approach to public policy, are indeed technical questions about risk management,” says Radclyffe.

But, of course, they are effectively rating algorithms. Are the ratings they are giving the social platforms themselves derived from algorithms? EthicsGrade says they are training their own AI through NLP as they go so that they can automate what is currently very human analysts centric, just as ‘sustainalytics’ et al did years ago in the environmental arena.

So how are they coming up with these ratings? EthicsGrade says they are evaluating “the extent to which organizations implement transparent and democratic values, ensure informed consent and risk management protocols, and establish a positive environment for error and improvement.” And this is all achieved, they say, through publicly available data – policy, website, lobbying etc. In simple terms, they rate the governance of the AI, not necessarily the algorithms themselves, but what checks and balances are in place to ensure that the outcomes and inputs are ethical and managed.
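EthicsGrade has not published its models, so the following is only a toy sketch of the general approach the company describes: scan a firm’s public documents for governance ‘signals’ and roll them up into a score. The signal list, keywords and 0-100 scale are invented for illustration.

```python
# Toy governance "signals" -- invented keyword groups, not EthicsGrade's actual criteria.
GOVERNANCE_SIGNALS = {
    "transparency": ["algorithmic transparency", "model documentation", "audit"],
    "consent": ["informed consent", "opt-out", "data subject rights"],
    "risk_management": ["impact assessment", "bias testing", "incident response"],
}

def score_document(text):
    # True/False per signal, based on a simple keyword scan of one public document.
    text = text.lower()
    return {signal: any(keyword in text for keyword in keywords)
            for signal, keywords in GOVERNANCE_SIGNALS.items()}

def grade(documents):
    # Average share of signals found per document, scaled to a 0-100 score.
    if not documents:
        return 0.0
    per_doc = [sum(score_document(doc).values()) / len(GOVERNANCE_SIGNALS) for doc in documents]
    return 100 * sum(per_doc) / len(per_doc)

public_docs = [
    "Our annual report describes bias testing and an algorithmic transparency policy.",
    "Privacy policy: users may opt-out at any time; we conduct impact assessments.",
]
print(round(grade(public_docs), 1))  # 66.7 for this made-up pair of documents
```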

“Our goal really is to target asset owners and asset managers,” says Radclyffe. “So if you look at any of these firms like, let’s say Twitter, 29% of Twitter is owned by five organizations: it’s Vanguard, Morgan Stanley, Blackrock, State Street and ClearBridge. If you look at the ownership structure of Facebook or Microsoft, it’s the same firms: Fidelity, Vanguard and BlackRock. And so really we only need to win a couple of hearts and minds, we just need to convince the asset owners and the asset managers that questions like the ones journalists have been asking for years are pertinent and relevant to their portfolios and that’s really how we’re planning to make our impact.”

Asked if they look at content of things like Tweets, he said no: “We don’t look at content. What we concern ourselves is how they govern their technology, and where we can find evidence of that. So what we do is we write to each firm with our rating, with our assessment of them. We make it very clear that it’s based on publicly available data. And then we invite them to complete a survey. Essentially, that survey helps us validate data of these firms. Microsoft is the only one that’s completed the survey.”

Ideally, firms will “verify the information, that they’ve got a particular process in place to make sure that things are well-managed and their algorithms don’t become discriminatory.”

In an age increasingly driven by algorithms, it will be interesting to see if this idea of rating them for risk takes off, especially amongst asset managers.

#articles, #artificial-intelligence, #asset-management, #blackrock, #environmentalism, #esg, #europe, #facebook, #facial-recognition, #fidelity, #finance, #governance, #microsoft, #morgan-stanley, #natural-language-processing, #social-media, #tc, #technology, #twitter, #united-kingdom, #united-states

Uber under pressure over facial recognition checks for drivers

Uber’s use of facial recognition technology for a driver identity system is being challenged in the UK where the App Drivers & Couriers Union (ADCU) and Worker Info Exchange (WIE) have called for Microsoft to suspend the ride-hailing giant’s use of B2B facial recognition after finding multiple cases where drivers were mis-identified and went on to have their licence to operate revoked by Transport for London (TfL).

The union said it has identified seven cases of “failed facial recognition and other identity checks” leading to drivers losing their jobs and license revocation action by TfL.

When Uber launched the “Real Time ID Check” system in the UK, in April 2020, it said it would “verify that driver accounts aren’t being used by anyone other than the licensed individuals who have undergone an Enhanced DBS check”. It said then that drivers could “choose whether their selfie is verified by photo-comparison software or by our human reviewers”.

In one misidentification case the ADCU said the driver was dismissed from employment by Uber and his license was revoked by TfL. The union adds that it was able to assist the member to establish his identity correctly, forcing Uber and TfL to reverse their decisions. But it highlights concerns over the accuracy of the Microsoft facial recognition technology — pointing out that the company suspended the sale of the system to US police forces in the wake of the Black Lives Matter protests of last summer.

Research has shown that facial recognition systems can have an especially high error rate when used to identify people of color — and the ADCU cites a 2018 MIT study which found Microsoft’s system can have an error rate as high as 20% (accuracy was lowest for dark skinned women).
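For context on how such disparities are typically quantified, here is an illustrative sketch of a per-group false non-match calculation for a face verification system. The records and group labels are made up, and this is not the methodology of the MIT study itself.

```python
from collections import defaultdict

# Each record: (demographic_group, system_said_match, actually_same_person) -- made-up data.
results = [
    ("darker-skinned women", False, True),
    ("darker-skinned women", True, True),
    ("lighter-skinned men", True, True),
    ("lighter-skinned men", True, True),
]

def false_non_match_rate(records):
    """Share of genuine verification attempts the system wrongly rejected, per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, said_match, same_person in records:
        if same_person:              # only count genuine attempts here
            totals[group] += 1
            if not said_match:
                errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(false_non_match_rate(results))
# {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0} -- a disparity worth flagging
```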

The union said it’s written to the Mayor of London to demand that all TfL private hire driver license revocations based on Uber reports using evidence from its Hybrid Real Time Identification systems are immediately reviewed.

Microsoft has been contacted for comment on the call for it to suspend Uber’s licence for its facial recognition tech.

The ADCU said Uber rushed to implement a workforce electronic surveillance and identification system as part of a package of measures implemented to regain its license to operate in the UK capital.

Back in 2017, TfL made the shock decision not to grant Uber a licence renewal — ratcheting up regulatory pressure on its processes and maintaining this hold in 2019 when it again deemed Uber ‘not fit and proper’ to hold a private hire vehicle licence.

Safety and security failures were a key reason cited by TfL for withholding Uber’s licence renewal.

Uber has challenged TfL’s decision in court and it won another appeal against the licence suspension last year — but the renewal granted was for only 18 months (not the full five years). It also came with a laundry list of conditions — so Uber remains under acute pressure to meet TfL’s quality bar.

Now, though, labor activists are piling pressure on Uber from the other direction too — pointing out that no regulatory standard has been set around the workplace surveillance technology that the ADCU says TfL encouraged Uber to implement. No equalities impact assessment has even been carried out by TfL, it adds.

WIE confirmed to TechCrunch that it’s filing a discrimination claim in the case of one driver, called Imran Raja, who was dismissed after Uber’s Real ID check — and had his license revoked by TfL.

His licence was subsequently restored — but only after the union challenged the action.

A number of other Uber drivers who were also misidentified by Uber’s facial recognition checks will be appealing TfL’s revocation of their licences via the UK courts, per WIE.

A spokeswoman for TfL told us it is not a condition of Uber’s licence renewal that it must implement facial recognition technology — only that Uber must have adequate safety systems in place.

The relevant condition of its provisional licence on ‘driver identity’ states:

ULL shall maintain appropriate systems, processes and procedures to confirm that a driver using the app is an individual licensed by TfL and permitted by ULL to use the app.

We’ve also asked TfL and the UK’s Information Commissioner’s Office for a copy of the data protection impact assessment Uber says was carried out before the Real-Time ID Check was launched — and will update this report if we get it.

Uber, meanwhile, disputes the union’s assertion that its use of facial recognition technology for driver identity checks risks automating discrimination because it says it has a system of manual (human) review in place that’s intended to prevent failures.

Albeit it accepts that that system clearly failed in the case of Raja — who only got his Uber account back (and an apology) after the union’s intervention.

Uber said its Real Time ID system involves an automated ‘picture matching’ check on a selfie that the driver must provide at the point of log in, with the system comparing that selfie with a (single) photo of them held on file. 

If there’s no machine match, the system sends the query to a three-person human review panel to conduct a manual check. Uber said checks will be sent to a second human panel if the first can’t agree. 
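Taking Uber’s description above at face value, the escalation logic might look something like the following sketch. The function names, similarity threshold and voting rules are assumptions made for illustration, not Uber’s actual implementation.

```python
from typing import Callable, List

def real_time_id_check(
    similarity: float,                         # output of some face-matching model, 0..1
    first_panel: List[Callable[[], bool]],     # each reviewer returns True if "same person"
    second_panel: List[Callable[[], bool]],
    threshold: float = 0.8,                    # illustrative cut-off, not Uber's
) -> bool:
    # Step 1: automated picture-matching check against the photo held on file.
    if similarity >= threshold:
        return True
    # Step 2: three-person human review panel; a unanimous verdict is accepted.
    first_votes = [review() for review in first_panel]
    if len(set(first_votes)) == 1:
        return first_votes[0]
    # Step 3: disagreement escalates to a second panel; majority decides in this sketch.
    second_votes = [review() for review in second_panel]
    return sum(second_votes) > len(second_votes) / 2

# Example: a weak automated match, a split first panel, then the second panel confirms.
print(real_time_id_check(
    similarity=0.62,
    first_panel=[lambda: True, lambda: False, lambda: True],
    second_panel=[lambda: True, lambda: True, lambda: False],
))  # True
```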

In a statement the tech giant told us:

“Our Real-Time ID Check is designed to protect the safety and security of everyone who uses the app by ensuring the correct driver or courier is using their account. The two situations raised do not reflect flawed technology — in fact one of the situations was a confirmed violation of our anti-fraud policies and the other was a human error.

“While no tech or process is perfect and there is always room for improvement, we believe the technology, combined with the thorough process in place to ensure a minimum of two manual human reviews prior to any decision to remove a driver, is fair and important for the safety of our platform.”

In two of the cases referred to by the ADCU, Uber said that in one instance a driver had shown a photo during the Real-Time ID Check instead of taking a selfie as required to carry out the live ID check — hence it argues it was not wrong for the ID check to have failed as the driver was not following the correct protocol.

In the other instance Uber blamed human error on the part of its manual review team(s) who (twice) made an erroneous decision. It said the driver’s appearance had changed and its staff were unable to recognize the face of the (now bearded) man who sent the selfie as the same person in the clean-shaven photo Uber held on file.

Uber was unable to provide details of what happened in the other five identity check failures referred to by the union.

It also declined to specify the ethnicities of the seven drivers the union says were misidentified by its checks.

Asked what measures it’s taking to prevent human errors leading to more misidentifications in future Uber declined to provide a response.

Uber said it has a duty to notify TfL when a driver fails an ID check — a step which can lead to the regulator suspending the license, as happened in Raja’s case. So any biases in its identity check process clearly risk having disproportionate impacts on affected individuals’ ability to work.

WIE told us it knows of three TfL licence revocations that relate solely to facial recognition checks.

“We know of more [UberEats] couriers who have been deactivated but no further action since they are not licensed by TfL,” it noted.

TechCrunch also asked Uber how many driver deactivations have been carried out and reported to TfL in which it cited facial recognition in its testimony to the regulator — but again the tech giant declined to answer our questions.

WIE told us it has evidence that facial recognition checks are incorporated into geo-location-based deactivations Uber carries out.

It said that in one case a driver who had their account revoked was given an explanation by Uber relating solely to location but TfL accidentally sent WIE Uber’s witness statement — which it said “included facial recognition evidence”.

That suggests a wider role for facial recognition technology in Uber’s identity checks vs the one the ride-hailing giant gave us when explaining how its Real Time ID system works. (Again, Uber declined to answer follow up questions about this or provide any other information beyond its on-the-record statement and related background points.)

But even just focusing on Uber’s Real Time ID system, there’s the question of how much say Uber’s human review staff actually have in the face of machine suggestions combined with the weight of wider business imperatives (like an acute need to demonstrate regulatory compliance on the issue of safety).

James Farrer, the founder of WIE, queries the quality of the human checks Uber has put in place as a backstop for facial recognition technology which has a known discrimination problem.

“Is Uber just confecting legal plausible deniability of automated decision making or is there meaningful human intervention,” he told TechCrunch. “In all of these cases, the drivers were suspended and told the specialist team would be in touch with them. A week or so typically would go by and they would be permanently deactivated without ever speaking to anyone.”

“There is research out there to show when facial recognition systems flag a mismatch humans have bias to confirm the machine. It takes a brave human being to override the machine. To do so would mean they would need to understand the machine, how it works, its limitations and have the confidence and management support to over rule the machine,” Farrer added. “Uber employees have the risk of Uber’s license to operate in London to consider on one hand and what… on the other? Drivers have no rights and there are in excess so expendable.”

He also pointed out that Uber has previously said in court that it errs on the side of customer complaints rather than give the driver benefit of the doubt. “With that in mind can we really trust Uber to make a balanced decision with facial recognition?” he asked.

Farrer further questioned why Uber and TfL don’t show drivers the evidence that’s being relied upon to deactivate their accounts — to give them a chance to challenge it via an appeal on the actual substance of the decision.

“IMHO this all comes down to tech governance,” he added. “I don’t doubt that Microsoft facial recognition is a powerful and mostly accurate tool. But the governance of this tech must be intelligent and responsible. Microsoft are smart enough themselves to acknowledge this as a limitation.

“The prospect of Uber pressured into surveillance tech as a price of keeping their licence… and a 94% BAME workforce with no worker rights protection from unfair dismissal is a recipe for disaster!”

The latest pressure on Uber’s business processes follows hard on the heels of a major win for Farrer and other former Uber drivers and labor rights activists after years of litigation over the company’s bogus claim that drivers are ‘self-employed’, rather than workers under UK law.

On Tuesday, Uber responded to last month’s Supreme Court dismissal of its appeal by saying it would now treat drivers in the UK market as workers, expanding the benefits it provides.

However, the litigants immediately pointed out that Uber’s ‘deal’ ignored the Supreme Court’s assertion that working time should be calculated from when a driver logs onto the Uber app. Instead, Uber said it would calculate working time entitlements from when a driver accepts a job — meaning it’s still trying to avoid paying drivers for time spent waiting for a fare.

The ADCU therefore estimates that Uber’s ‘offer’ underpays drivers by between 40% and 50% of what they are legally entitled to — and it has said it will continue its legal fight to get a fair deal for Uber drivers.

At an EU level, where regional lawmakers are looking at how to improve conditions for gig workers, the tech giant is now pushing for an employment law carve-out for platform work — and has been accused of trying to lower legal standards for workers.

In other Uber-related news this month, a court in the Netherlands ordered the company to hand over more of the data it holds on drivers, following another ADCU+WIE challenge, although the court rejected the majority of the drivers’ requests for additional data. Notably, though, it did not object to drivers seeking to use data rights established under EU law to obtain information collectively in order to further their ability to collectively bargain against a platform — paving the way for more (and more carefully worded) challenges as Farrer spins up his data trust for workers.

The applicants also sought to probe Uber’s use of algorithms for fraud-based driver terminations under an article of EU data protection law that provides a right not to be subject to solely automated decisions where there is a legal or similarly significant effect. In that case, the court accepted at face value Uber’s explanation that fraud-related terminations had been investigated by a human team — and that the decisions to terminate were meaningful human decisions.

But the issue of meaningful human intervention and oversight of platforms’ algorithmic suggestions and decisions is shaping up to be a key battleground in the fight to regulate the human impacts of, and societal imbalances flowing from, powerful platforms that have both a god-like view of users’ data and an allergy to complete transparency.

The latest challenge to Uber’s use of facial recognition-linked terminations shows that interrogation of the limits and legality of its automated decisions is far from over — really, this work is just getting started.

Uber’s use of geolocation for driver suspensions is also facing legal challenge.

Pan-EU legislation now being negotiated by the bloc’s institutions also aims to increase platform transparency requirements — raising the prospect of added layers of regulatory oversight and even algorithmic audits for platforms in the near future.

Last week the same Amsterdam court that ruled on the Uber cases also ordered India-based ride-hailing company Ola to disclose data about its facial-recognition-based ‘Guardian’ system — aka its equivalent of Uber’s Real Time ID system. The court said Ola must provide applicants with a wider range of data than it currently does, including disclosing the ‘fraud probability profile’ it maintains on drivers and the data held within the Guardian surveillance system.

Farrer says he’s thus confident that workers will get transparency — “one way or another”. And after years spent fighting Uber through UK courts over its treatment of workers, his tenacity in pursuit of rebalancing platform power cannot be in doubt.

 

#app-drivers-couriers-union, #artificial-intelligence, #europe, #facial-recognition, #james-farrer, #lawsuit, #microsoft, #policy, #tfl, #uber, #worker-info-exchange

Minneapolis bans its police department from using facial recognition software

Minneapolis voted Friday to ban its police department from using facial recognition software, adding to the list of major cities that have implemented local restrictions on the controversial technology. After an ordinance on the ban was approved earlier this week, 13 members of the city council voted in favor, with no opposition.

The new ban will block the Minneapolis Police Department from using any facial recognition technology, including software by Clearview AI. That company sells access to a large database of facial images, many scraped from major social networks, to federal law enforcement agencies, private companies and a number of U.S. police departments. The Minneapolis Police Department is known to have a relationship with Clearview AI, as is the Hennepin County Sheriff’s Office, which will not be restricted by the new ban.

The vote is a landmark decision in the city that set off racial justice protests around the country after a Minneapolis police officer killed George Floyd last year. The city has been in the throes of police reform ever since, leading the nation by pledging to defund its police department in June before backing away from that commitment in favor of more incremental reforms later that year.

Banning the use of facial recognition is one targeted measure that can rein in emerging concerns about aggressive policing. Many privacy advocates are concerned not only that AI-powered facial recognition systems would disproportionately target communities of color, but also that the technology has demonstrated technical shortcomings in discerning non-white faces.

Cities around the country are increasingly looking to ban the controversial technology and have implemented restrictions in many different ways. In Portland, Oregon, new laws passed last year block city bureaus from using facial recognition and also forbid private companies from deploying the technology in public spaces. Previous legislation in San Francisco, Oakland and Boston restricted city governments from using facial recognition systems, though it didn’t include a similar provision for private companies.

#clearview-ai, #facial-recognition, #government, #minnesota, #surveillance, #tc

Sweden’s data watchdog slaps police for unlawful use of Clearview AI

Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.

As part of the enforcement, the police must conduct further training and education of staff in order to avoid any future processing of personal data in breach of data protection rules and regulations.

The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.

Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.

Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.

“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.

The IMY’s full decision can be found here (in Swedish).

“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.

The fine (SEK 2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question — which the watchdog notes would be SEK 10M. (The authority’s decision notes that not knowing the rules or having inadequate procedures in place is not a reason to reduce a penalty fee, so it’s not entirely clear why the police avoided a bigger fine.)

The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.

The IMY said it investigated the police’s use of the controversial technology following reports in local media.

Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.

European Union data protection law puts a high bar on the processing of special category data, such as biometrics.

Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.

Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.

The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.

However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).

It did not order deletion of the photos themselves. Nor did it issue a pan-EU order banning the collection of any European resident’s photos, as it could have done and as the European privacy campaign group noyb had been pushing for.

noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and to ask it to delete any data it holds on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.

European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence, with draft legislation expected to be put forward this year, although the Commission intends it to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).

Earlier this month, Canadian privacy authorities ruled the controversial facial recognition company’s practices illegal — warning they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.

Clearview said it had stopped providing its tech to Canadian customers last summer.

It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.

Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.

 

#artificial-intelligence, #clearview-ai, #eu-data-protection-law, #europe, #facial-recognition, #gdpr, #privacy, #sweden, #tc