A group of senators sent new Amazon CEO Andy Jassy a letter Friday pressing the company for more information about how it scans and stores customer palm prints for use in some of its retail stores.
The company rolled out the palm print scanners through a program it calls Amazon One, encouraging people to make contactless payments in its brick-and-mortar stores without a card. Amazon introduced the scanners late last year, and they can now be found in Amazon Go convenience and grocery stores, Amazon Books and Amazon 4-star stores across the U.S., as well as in eight Whole Foods locations in Washington state.
In the new letter, Senators Amy Klobuchar (D-MN), Bill Cassidy (R-LA) and Jon Ossoff (D-GA) press Jassy for details about how Amazon plans to expand its biometric payment system and whether the collected data will be used to help the company target ads.
“Amazon’s expansion of biometric data collection through Amazon One raises serious questions about Amazon’s plans for this data and its respect for user privacy, including about how Amazon may use the data for advertising and tracking purposes,” the senators wrote in the letter, embedded below.
The lawmakers also requested information on how many people have enrolled in Amazon One to date, how Amazon will secure the sensitive data and whether the company has ever paired the palm prints with facial recognition data it collects elsewhere.
“In contrast with biometric systems like Apple’s Face ID and Touch ID or Samsung Pass, which store biometric information on a user’s device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks,” the senators wrote. “… Data security is particularly important when it comes to immutable customer data, like palm prints.”
The company controversially introduced a $10 credit for new users who enroll their palm prints in the program, prompting an outcry from privacy advocates who see it as a cheap tactic to entice people into handing over sensitive personal data.
There’s plenty of reason to be skeptical. Amazon has faced fierce criticism for its other big biometric data project, the AI facial recognition software known as Rekognition, which the company provided to U.S. law enforcement agencies before eventually backtracking with a moratorium on policing applications for the software last year.
Maine has joined a growing number of cities, counties and states that are rejecting dangerously biased surveillance technologies like facial recognition.
The new law, which is the strongest statewide facial recognition law in the country, not only received broad, bipartisan support, but it passed unanimously in both chambers of the state legislature. Lawmakers and advocates spanning the political spectrum — from the progressive lawmaker who sponsored the bill to the Republican members who voted it out of committee, from the ACLU of Maine to state law enforcement agencies — came together to secure this major victory for Mainers and anyone who cares about their right to privacy.
Maine is just the latest success story in the nationwide movement to ban or tightly regulate the use of facial recognition technology, an effort led by grassroots activists and organizations like the ACLU. From the Pine Tree State to the Golden State, national efforts to regulate facial recognition demonstrate a broad recognition that we can’t let technology determine the boundaries of our freedoms in the digital 21st century.
Facial recognition technology poses a profound threat to civil rights and civil liberties. Without democratic oversight, governments can use the technology as a tool for dragnet surveillance, threatening our freedoms of speech and association, due process rights, and right to be left alone. Democracy itself is at stake if this technology remains unregulated.
We know the burdens of facial recognition are not borne equally, as Black and brown communities — especially Muslim and immigrant communities — are already targets of discriminatory government surveillance. Making matters worse, face surveillance algorithms tend to have more difficulty accurately analyzing the faces of darker-skinned people, women, the elderly and children. Simply put: The technology is dangerous when it works — and when it doesn’t.
But not all approaches to regulating this technology are created equal. Maine is among the first in the nation to pass comprehensive statewide regulations. Washington was the first, passing a weak law in the face of strong opposition from civil rights, community and religious liberty organizations. The law passed in large part because of strong backing from Washington-based megacorporation Microsoft, and it would still allow tech companies to sell their technology, worth millions of dollars, to virtually every government agency.
In contrast, Maine’s law strikes a different path, putting the interests of ordinary Mainers above the profit motives of private companies.
Maine’s new law prohibits the use of facial recognition technology in most areas of government, including in public schools and for surveillance purposes. It carves out careful exceptions for law enforcement, setting standards for the technology’s use and avoiding the potential for abuse we’ve seen in other parts of the country. Importantly, it prohibits the use of facial recognition technology to conduct surveillance of people as they go about their business in Maine, attending political meetings and protests, visiting friends and family, and seeking out healthcare.
In Maine, law enforcement must now — among other limitations — meet a probable cause standard before making a facial recognition request, and they cannot use a facial recognition match as the sole basis to arrest or search someone. Nor can local police departments buy, possess or use their own facial recognition software, ensuring shady technologies like Clearview AI will not be used by Maine’s government officials behind closed doors, as has happened in other states.
Maine’s law and others like it are crucial to preventing communities from being harmed by new, untested surveillance technologies like facial recognition. But we need a federal approach, not only a piecemeal local approach, to effectively protect Americans’ privacy from facial surveillance. That’s why it’s crucial for Americans to support the Facial Recognition and Biometric Technology Moratorium Act, a bill introduced by members of both houses of Congress last month.
The ACLU supports this federal legislation, which would protect all people in the United States from invasive surveillance. We urge all Americans to ask their members of Congress to join the movement to halt facial recognition technology by supporting this bill.
A new biometrics privacy ordinance has taken effect across New York City, putting new limits on what businesses can do with the biometric data they collect on their customers.
As of Friday, businesses that collect biometric information — most commonly facial recognition and fingerprints — are required to post conspicuous notices at their doors explaining to customers how their data will be collected. The ordinance applies to a wide range of businesses — retailers, stores, restaurants, and theaters, to name a few — which are also barred from selling, sharing, or otherwise profiting from the biometric information that they collect.
The move will give New Yorkers — and the city’s millions of visitors each year — greater protections over how their biometric data is collected and used, while also serving to dissuade businesses from using technology that critics say is discriminatory and often doesn’t work.
Businesses can face stiff penalties for violating the law, but can escape fines if they fix the violation quickly.
The law is by no means perfect, as none of these laws ever are. For one, it doesn’t apply to government agencies, including the police. Of the businesses that the ordinance does cover, it exempts employees of those businesses, such as those required to clock in and out of work with a fingerprint. And the definition of what counts as a biometric will likely face challenges that could expand or narrow what is covered.
New York is the latest U.S. city to enact a biometric privacy law, after Portland, Oregon, passed a similar ordinance last year. But the law falls short of stronger biometric privacy laws already in effect elsewhere.
Illinois has the Biometric Information Privacy Act, a law that grants residents the right to sue over any use of their biometric data without consent. Facebook this year settled for $650 million a class-action suit that Illinois residents filed in 2015 after the social networking giant used facial recognition to tag users in photos without their permission.
Albert Fox Cahn, the executive director of the New York-based Surveillance Technology Oversight Project, said the law is an “important step” to learn how New Yorkers are tracked by local businesses.
“A false facial recognition match could mean having the NYPD called on you just for walking into a Rite Aid or Target,” he told TechCrunch. He also said that New York should go further by outlawing systems like facial recognition altogether, as some cities have done.
Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.
The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping those photos from social media sites without permission, prompting Facebook, LinkedIn and Twitter to send cease-and-desist letters demanding that it stop.
In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”
Clearview rebuffed the allegations, claiming Canada’s privacy laws do not apply because the company doesn’t have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.
The company continues to face that argument in court, where it is defending a class-action suit under Illinois’ biometric privacy law, the same statute that last year cost Facebook $550 million.
The Canadian privacy watchdog rejected Clearview’s arguments and said it would “pursue other actions” if the company does not follow its recommendations, which include stopping the collection of Canadians’ images and deleting all previously collected ones. Clearview said in July that it had stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service were using it.
“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”
A spokesperson for Clearview AI did not immediately return a request for comment.
Since widespread protests over racial inequality began, IBM has announced it will cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to “put in place stronger regulations to govern the ethical use of facial recognition technology.”
But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and embrace the involvement of the entire community.
We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can’t be just a subfield of computer science (CS) and computer engineering (CE), like it is right now. We must create an academic discipline of AI that takes the complexity of human behavior into account. We need to move from computer science-owned AI to computer science-enabled AI. The problems with AI don’t occur in the lab; they occur when scientists move the tech into the real world of people. Training data in the CS lab often lacks the context and complexity of the world you and I inhabit. This flaw perpetuates biases.
AI-powered algorithms have been found to display bias against people of color and against women. In 2014, for example, Amazon found that an AI algorithm it developed to automate headhunting had taught itself to discriminate against female candidates. MIT researchers reported in January 2019 that facial recognition software is less accurate at identifying people with darker skin. Most recently, in a study late last year, researchers at the National Institute of Standards and Technology (NIST) found evidence of racial bias in nearly 200 facial recognition algorithms.
In spite of the countless examples of AI errors, the zeal continues; this is why the IBM and Amazon announcements generated so much positive news coverage. Global use of artificial intelligence grew by 270% from 2015 to 2019, and the market is expected to generate revenue of $118.6 billion by 2025. According to Gallup, nearly 90% of Americans are already using AI products in their everyday lives, often without even realizing it.
Beyond a 12-month hiatus, we must acknowledge that while building AI is a technology challenge, using AI requires disciplines outside software development, such as social science, law and politics. But despite our increasingly ubiquitous use of AI, AI as a field of study is still lumped into the fields of CS and CE. At North Carolina State University, for example, algorithms and AI are taught in the CS program. MIT houses the study of AI under both CS and CE. AI must make it into humanities programs, race and gender studies curricula, and business schools. Let’s develop an AI track in political science departments. In my own program at Georgetown University, we teach AI and Machine Learning concepts to Security Studies students. This needs to become common practice.
Without a broader approach to the professionalization of AI, we will almost certainly perpetuate biases and discriminatory practices in existence today. We just may discriminate at a lower cost — not a noble goal for technology. We require the intentional establishment of a field of AI whose purpose is to understand the development of neural networks and the social contexts into which the technology will be deployed.
In computer engineering, a student studies programming and computer fundamentals. In computer science, they study computational and programmatic theory, including the basis of algorithmic learning. These are solid foundations for the study of AI, but they should be considered components only: necessary for understanding the field, yet not sufficient on their own.
For the public to grow comfortable with the broad deployment of AI, so that tech companies like Amazon, IBM and countless others can deploy these innovations, the entire discipline needs to move beyond the CS lab. It needs practitioners from disciplines like psychology, sociology, anthropology and neuroscience, and it needs an understanding of human behavior patterns and of the biases built into data-generation processes. I could not have created the software I developed to identify human trafficking, money laundering and other illicit behaviors without my background in behavioral science.
Responsibly managing machine learning processes is no longer just a desirable component of progress but a necessary one. We have to recognize the pitfalls of human bias and the errors of replicating these biases in the machines of tomorrow, and the social sciences and humanities provide the keys. We can only accomplish this if a new field of AI, encompassing all of these disciplines, is created.
The arrest of a man for a crime he didn’t commit shows the dangers of facial recognition technology.
In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.
The company said it hoped the moratorium “might give Congress enough time to put in place appropriate rules” for the technology.
In a surprise blog post, Amazon said it will put the brakes on providing its facial recognition technology to police for one year, but refuses to say if the move applies to federal law enforcement agencies.
The moratorium comes two days after IBM said in a letter it was leaving the facial recognition market altogether. Arvind Krishna, IBM’s chief executive, cited a “pursuit of justice and racial equity” in light of the recent protests sparked by the killing of George Floyd by a white police officer in Minneapolis last month.
Amazon’s statement — just 102 words in length — did not say why it was putting the moratorium in place, but noted that Congress “appears ready” to work on stronger regulations governing the use of facial recognition, again without providing any details. The move is likely a response to the Justice in Policing Act, a bill that, if passed, would restrict how police can use facial recognition technology.
“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” said Amazon in the unbylined blog post.
But the statement did not say if the moratorium would apply to the federal government, the source of most of the criticism against Amazon’s facial recognition technology. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.
Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies, like Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview the company would provide Rekognition to “any” government department.
Amazon spokesperson Kristin Brown declined to comment further or say if the moratorium applies to federal law enforcement.
There are dozens of companies providing facial recognition technology to police, but Amazon is by far the biggest, and it has come under the heaviest scrutiny since its Rekognition face-scanning technology was shown to be biased against people of color.
In 2018, the ACLU found that Rekognition falsely matched 28 members of Congress as criminals in a mugshot database. Amazon criticized the results, claiming the ACLU had lowered the facial recognition system’s confidence threshold. But a year later, the ACLU of Massachusetts found that Rekognition had falsely matched 27 New England professional athletes against a mugshot database. Both tests disproportionately mismatched Black people, the ACLU found.
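The dispute over thresholds comes down to a simple trade-off: a face-matching system returns a similarity score for each candidate in the database, and the chosen threshold decides which scores count as a “match.” Amazon said the ACLU used the default 80% confidence setting rather than the 99% it recommends for law enforcement. A minimal sketch of why that matters — the scores and names below are entirely hypothetical, not output from Rekognition or any real system:

```python
# Hypothetical similarity scores (0-100) between one probe photo and
# entries in a mugshot database. Names and numbers are illustrative only.
candidate_scores = {
    "person_a": 99.2,  # the genuine match
    "person_b": 87.5,  # a different person with similar features
    "person_c": 81.3,  # another lookalike
    "person_d": 62.0,  # clearly a different person
}

def matches(scores, threshold):
    """Return the candidates whose similarity score clears the threshold."""
    return [name for name, score in scores.items() if score >= threshold]

# At a strict 99% threshold, only the genuine match survives.
print(matches(candidate_scores, 99))  # ['person_a']

# At a permissive 80% threshold, two lookalikes also become "matches" —
# the kind of false positives the ACLU tests surfaced.
print(matches(candidate_scores, 80))  # ['person_a', 'person_b', 'person_c']
```

The sketch shows why both sides could point at the same system: raising the threshold cuts false matches but can also miss true ones, and nothing in the software itself forces an operator to pick the stricter setting.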
Almost exactly a year ago, investors brought a proposal to Amazon’s annual shareholder meeting that would have banned Amazon from selling its facial recognition technology to the government or law enforcement. Amazon defeated the vote by a wide margin.
The ACLU acknowledged Amazon’s move to pause sales of Rekognition, which it called a “threat to our civil rights and liberties,” but called on the company and other firms to do more.
Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.
The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.
The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”
Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors that continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures and Eldridge Industries.
Microsoft first staked out its position on how it would approach facial recognition technologies in 2018, when Microsoft President Brad Smith issued a statement calling on government to come up with clear regulations around facial recognition in the U.S.
Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.
We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.
The six principles Microsoft laid out were fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.
Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.
Now, after determining that it is too difficult to control how the companies it holds minority stakes in deploy facial recognition technologies, Microsoft is suspending its outside investments in the technology.
“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”