How Andy Jassy, Amazon’s Next C.E.O., Was a ‘Brain Double’ for Jeff Bezos

Mr. Jassy, who will become Amazon’s chief this summer, has spent more than two decades absorbing lessons from Mr. Bezos.

#amazon-com-inc, #appointments-and-executive-changes, #bezos-jeffrey-p, #cloud-computing, #computers-and-the-internet, #e-commerce, #executives-and-management-theory, #facial-recognition-software, #jassy-andrew-r, #labor-and-jobs

Clearview AI ruled ‘illegal’ by Canadian privacy authorities

Controversial facial recognition startup Clearview AI violated Canadian privacy laws when it collected photos of Canadians without their knowledge or permission, the country’s top privacy watchdog has ruled.

The New York-based company made its splashy newspaper debut a year ago by claiming it had collected over 3 billion photos of people’s faces and touting its connections to law enforcement and police departments. But the startup has faced a slew of criticism for scraping social media sites without their permission, prompting Facebook, LinkedIn and Twitter to send cease-and-desist letters demanding that it stop.

In a statement, Canada’s Office of the Privacy Commissioner said its investigation found Clearview had “collected highly sensitive biometric information without the knowledge or consent of individuals,” and that the startup “collected, used and disclosed Canadians’ personal information for inappropriate purposes, which cannot be rendered appropriate via consent.”

Clearview rejected the allegations, arguing that Canada’s privacy laws do not apply because the company does not have a “real and substantial connection” to the country, and that consent was not required because the images it scraped were publicly available.

That argument is also being tested in court, where the company faces a class-action suit under Illinois’ biometric privacy law, the same statute that cost Facebook $550 million last year.

The Canadian privacy watchdog rejected Clearview’s arguments and said it would “pursue other actions” if the company does not follow its recommendations, which include stopping the collection of Canadians’ images and deleting all previously collected images. Clearview said in July that it had stopped providing its technology to Canadian customers after it emerged that the Royal Canadian Mounted Police and the Toronto Police Service had been using it.

“What Clearview does is mass surveillance and it is illegal,” said Daniel Therrien, Canada’s privacy commissioner. “It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

A spokesperson for Clearview AI did not immediately return a request for comment.

#articles, #canada, #clearview-ai, #digital-rights, #facebook, #facial-recognition, #facial-recognition-software, #human-rights, #illinois, #law-enforcement, #mass-surveillance, #new-york, #privacy, #security, #social-issues, #spokesperson, #terms-of-service

As Jeff Bezos Takes Off, Meet His Earthbound Successor

His loyal lieutenant will take Amazon’s helm as the company faces ever-growing scrutiny.

#amazon-com-inc, #antitrust-laws-and-competition-issues, #bezos-jeffrey-p, #cook-timothy-d, #e-commerce, #facial-recognition-software, #jassy-andrew-r

Here’s a Way to Learn if Facial Recognition Systems Used Your Photos

An online tool targets only a small slice of what’s out there, but may open some eyes to how widely artificial intelligence research fed on personal images.

#computer-vision, #computers-and-the-internet, #facial-recognition-software, #flickr, #megvii-technology-ltd, #photography, #privacy, #tabriz-parisa, #university-of-washington

Flawed Facial Recognition Leads to Arrest and Jail for New Jersey Man

A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.

#black-people, #clearview-ai-inc, #facial-recognition-software, #false-arrests-convictions-and-imprisonments, #suits-and-litigation-civil, #ton-that-hoan

Alibaba’s Software Can Find Uighur Faces, It Told China Clients

The website for the tech titan’s cloud business described facial recognition software that could detect members of a minority group whose persecution has drawn international condemnation.

#alibaba-group-holding-ltd, #china, #cloud-computing, #facial-recognition-software, #huawei-technologies-co-ltd, #racial-profiling, #surveillance-of-citizens-by-government, #uighurs-chinese-ethnic-group, #video-recordings-downloads-and-streaming, #xinjiang-china

Intel and Nvidia Chips Power a Chinese Surveillance System

Intel and Nvidia chips power a supercomputing center that tracks people in a place where the government suppresses minorities, raising questions about the tech industry’s responsibility.

#artificial-intelligence, #cameras, #china, #computer-chips, #computer-vision, #computers-and-the-internet, #facial-recognition-software, #human-rights-and-human-rights-violations, #intel-corporation, #muslims-and-islam, #nvidia-corporation, #surveillance-of-citizens-by-government, #uighurs-chinese-ethnic-group, #xinjiang-china

Can We Make Our Robots Less Biased Than Us?

A.I. developers are committing to end the injustices in how their technology is often made and used.

#artificial-intelligence, #black-lives-matter-movement, #black-people, #buolamwini-joy, #discrimination, #facial-recognition-software, #police-brutality-misconduct-and-shootings, #race-and-ethnicity, #robots-and-robotics, #your-feed-science

Training Facial Recognition on Some New Furry Friends: Bears

In their spare time, two Silicon Valley developers aided conservationists in developing artificial intelligence to help keep track of individual bears.

#alaska, #artificial-intelligence, #bears, #british-columbia-canada, #computer-vision, #computers-and-the-internet, #conservation-of-resources, #ecology-and-evolution-journal, #facial-recognition-software, #katmai-national-park-alaska, #research, #your-feed-animals, #your-feed-science

Activists Turn Facial Recognition Tools Against the Police

“We’re now approaching the technological threshold where the little guys can do it to the big guys,” one researcher said.

#belarus, #cirio-paolo, #crowdsourcing-internet, #demonstrations-protests-and-riots, #facial-recognition-software, #france, #george-floyd-protests-2020, #howell-christopher-1978, #innovation, #police, #police-brutality-misconduct-and-shootings, #portland-ore

Europe Feels Squeeze as Tech Competition Heats Up Between U.S. and China

As the rapid pace of change mixes with national security issues, Europe’s role as a global regulator is increasingly tested — and may not be enough.

#amazon-com-inc, #apple-inc, #artificial-intelligence, #china, #computers-and-the-internet, #european-union, #facebook-inc, #facial-recognition-software, #google-inc, #privacy, #regulation-and-deregulation-of-industry, #social-media, #tiktok-bytedance, #united-states-international-relations

Facial Recognition Start-Up Mounts a First Amendment Defense

Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

#abrams-floyd, #citizens-united-v-federal-election-commission-supreme-court-decision, #clearview-ai-inc, #facial-recognition-software, #freedom-of-speech-and-expression

This Tool Could Protect Your Photos From Facial Recognition

Researchers at the University of Chicago want you to be able to post selfies without worrying that the next Clearview AI will use them to identify you.

#clearview-ai-inc, #computers-and-the-internet, #facebook-inc, #facial-recognition-software, #photography, #privacy, #social-media, #ton-that-hoan

Why Is a Tech Executive Installing Security Cameras Around San Francisco?

Chris Larsen knows that a crypto mogul spending his own money for a city’s camera surveillance system might sound creepy. He’s here to explain why it’s not.

#black-lives-matter-movement, #boudin-chesa, #cameras, #crime-and-criminals, #facial-recognition-software, #larsen-chris-1960, #police, #police-reform, #privacy, #ripple-labs-inc, #san-francisco-calif, #security-and-warning-systems, #surveillance, #virtual-currency

We need a new field of AI to combat racial bias

Since widespread protests over racial inequality began, IBM has announced it will cancel its facial recognition programs to advance racial equity in law enforcement, and Amazon has suspended police use of its Rekognition software for one year to “put in place stronger regulations to govern the ethical use of facial recognition technology.”

But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature beyond the computer science lab and engage the broader community.

We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can’t remain just a subfield of computer science (CS) and computer engineering (CE), as it is right now. We must create an academic discipline of AI that takes the complexity of human behavior into account. We need to move from computer science-owned AI to computer science-enabled AI. The problems with AI don’t occur in the lab; they occur when scientists move the tech into the real world of people. Training data in the CS lab often lacks the context and complexity of the world you and I inhabit. This flaw perpetuates biases.

AI-powered algorithms have been found to display bias against people of color and against women. In 2014, for example, Amazon found that an AI algorithm it developed to automate headhunting had taught itself to discriminate against female candidates. MIT researchers reported in January 2019 that facial recognition software is less accurate at identifying people with darker pigmentation. Most recently, in a study late last year, researchers at the National Institute of Standards and Technology (NIST) found evidence of racial bias in nearly 200 facial recognition algorithms.
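For readers unfamiliar with how such bias is quantified, the sketch below computes a false match rate per demographic group from labeled comparison results, roughly the kind of metric the NIST study reported. It is a minimal illustration with made-up records, group labels and a made-up threshold, not NIST’s actual protocol or data.

```python
from collections import defaultdict

# Hypothetical comparison records: (similarity_score, same_person, group).
# A real evaluation would use millions of labeled image pairs.
comparisons = [
    (0.91, False, "group_a"),
    (0.42, False, "group_a"),
    (0.38, False, "group_a"),
    (0.88, False, "group_b"),
    (0.83, False, "group_b"),
    (0.95, True,  "group_b"),
]

THRESHOLD = 0.80  # illustrative decision threshold

def false_match_rates(records, threshold):
    """Share of impostor pairs (different people) scored at or above the threshold, per group."""
    impostors = defaultdict(int)
    false_matches = defaultdict(int)
    for score, same_person, group in records:
        if not same_person:            # impostor pair: declaring a match here is an error
            impostors[group] += 1
            if score >= threshold:
                false_matches[group] += 1
    return {g: false_matches[g] / impostors[g] for g in impostors}

# Large gaps between groups are the kind of disparity the NIST study documented.
print(false_match_rates(comparisons, THRESHOLD))
```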

In spite of the countless examples of AI errors, the zeal continues. This is why the IBM and Amazon announcements generated so much positive news coverage. Global use of artificial intelligence grew by 270% from 2015 to 2019, and the market is expected to generate revenue of $118.6 billion by 2025. According to Gallup, nearly 90% of Americans are already using AI products in their everyday lives – often without even realizing it.

Beyond a 12-month hiatus, we must acknowledge that while building AI is a technology challenge, using AI responsibly requires disciplines beyond software development, such as social science, law and politics. But despite our increasingly ubiquitous use of AI, AI as a field of study is still lumped into CS and CE. At North Carolina State University, for example, algorithms and AI are taught in the CS program. MIT houses the study of AI under both CS and CE. AI must make it into humanities programs, race and gender studies curricula, and business schools. Let’s develop an AI track in political science departments. In my own program at Georgetown University, we teach AI and Machine Learning concepts to Security Studies students. This needs to become common practice.

Without a broader approach to the professionalization of AI, we will almost certainly perpetuate biases and discriminatory practices in existence today. We just may discriminate at a lower cost — not a noble goal for technology. We require the intentional establishment of a field of AI whose purpose is to understand the development of neural networks and the social contexts into which the technology will be deployed.

In computer engineering, a student studies programming and computer fundamentals. In computer science, they study computational and programmatic theory, including the basis of algorithmic learning. These are solid foundations for the study of AI, but they are only components: necessary for understanding the field, not sufficient on their own.

For the population to gain comfort with broad deployment of AI, so that tech companies like Amazon and IBM, and countless others, can deploy these innovations, the entire discipline needs to move beyond the CS lab. We need people who work in disciplines like psychology, sociology, anthropology and neuroscience, and who understand human behavior patterns and the biases embedded in data generation processes. I could not have created the software I developed to identify human trafficking, money laundering and other illicit behaviors without my background in behavioral science.

Responsibly managing machine learning processes is no longer just a desirable component of progress but a necessary one. We have to recognize the pitfalls of human bias and the errors of replicating these biases in the machines of tomorrow, and the social sciences and humanities provide the keys. We can only accomplish this if a new field of AI, encompassing all of these disciplines, is created.

#artificial-intelligence, #cambridge-university, #column, #cybernetics, #developer, #diversity, #facial-recognition, #facial-recognition-software, #ibm, #law-enforcement, #machine-learning, #mit, #neuroscience, #opinion, #rekognition, #saas

When the Police Treat Software Like Magic

The arrest of a man for a crime he didn’t commit shows the dangers of facial recognition technology.

#facial-recognition-software, #race-and-ethnicity

Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

#facial-recognition-software, #michigan, #police, #police-brutality-misconduct-and-shootings

Council Forces N.Y.P.D. to Disclose Use of Drones and Other Spy Tech

The bill, which the mayor is likely to sign, compels the police to disclose the technology they use and data they collect.

#black-lives-matter-movement, #center-on-privacy-and-technology-at-georgetown-law-center, #city-council-nyc, #civil-rights-and-liberties, #facial-recognition-software, #george-floyd-protests-2020, #law-and-legislation, #new-york-city, #police, #police-reform, #surveillance-of-citizens-by-government

Corporate America’s Black Lives Matter Pledges Fall Short

It’s time for companies to move beyond mere tweets and hire more black employees at every level.

#amazon-com-inc, #black-lives-matter-movement, #black-people, #boards-of-directors, #corporations, #discrimination, #facial-recognition-software, #george-floyd-protests-2020, #police-brutality-misconduct-and-shootings, #race-and-ethnicity, #ring-inc, #snap-inc, #starbucks-corporation, #twitter, #united-states

Did Washington Get in Uber’s Way?

Uber had been courting Grubhub for months, but the food-delivery company will sell itself to a European rival instead. Regulations are partly to blame.

#amazon-com-inc, #facial-recognition-software, #federal-reserve-system, #george-floyd-protests-2020, #grubhub-inc, #just-eat-takeaway-com, #mergers-acquisitions-and-divestitures, #uber-technologies-inc, #united-states-economy

Amazon Facial Recognition Moratorium

The company said it hoped the moratorium “might give Congress enough time to put in place appropriate rules” for the technology.

#amazon-com-inc, #black-people, #computer-vision, #computers-and-the-internet, #facial-recognition-software, #international-business-machines-corporation, #police

Amazon won’t say if its facial recognition moratorium applies to the feds

In a surprise blog post, Amazon said it will put the brakes on providing its facial recognition technology to police for one year, but refuses to say if the move applies to federal law enforcement agencies.

The moratorium comes two days after IBM said in a letter it was leaving the facial recognition market altogether. Arvind Krishna, IBM’s chief executive, cited a “pursuit of justice and racial equity” in light of the recent protests sparked by the killing of George Floyd by a white police officer in Minneapolis last month.

Amazon’s statement — just 102 words in length — did not say why it was putting the moratorium in place, but noted that Congress “appears ready” to work on stronger regulations governing the use of facial recognition — again without providing any details. The move is likely a response to the Justice in Policing Act, a bill that would, if passed, restrict how police can use facial recognition technology.

“We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” said Amazon in the unbylined blog post.

But the statement did not say if the moratorium would apply to the federal government, the source of most of the criticism against Amazon’s facial recognition technology. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.

Amazon is known to have pitched its facial recognition technology, Rekognition, to federal agencies such as Immigration and Customs Enforcement. Last year, Amazon’s cloud chief Andy Jassy said in an interview that the company would provide Rekognition to “any” government department.

Amazon spokesperson Kristin Brown declined to comment further or say if the moratorium applies to federal law enforcement.

Dozens of companies provide facial recognition technology to police, but Amazon is by far the biggest, and it has come under the most scrutiny since its Rekognition face-scanning technology was shown to exhibit bias against people of color.

In 2018, the ACLU found that Rekognition falsely matched 28 members of Congress to mugshots of people who had been arrested. Amazon criticized the results, claiming the ACLU had lowered the facial recognition system’s confidence threshold. But a year later, the ACLU of Massachusetts found that Rekognition had falsely matched 27 New England professional athletes against a mugshot database. Both tests disproportionately misidentified Black people, the ACLU found.
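The dispute over those tests hinges on that confidence (similarity) threshold: Rekognition only returns face matches whose similarity score meets the threshold the caller passes in, so the same image pair can register as a “match” at one setting and no match at a stricter one. Below is a minimal sketch of that behavior using boto3’s CompareFaces call; the image filenames and region are placeholders, and this is not the ACLU’s actual test setup, which searched a collection of mugshots.

```python
import boto3

# Placeholder image files: in practice, a probe photo and a candidate mugshot.
with open("probe.jpg", "rb") as f:
    probe = f.read()
with open("candidate.jpg", "rb") as f:
    candidate = f.read()

rekognition = boto3.client("rekognition", region_name="us-east-1")

# 80 is the API's default similarity threshold; Amazon has said it recommends
# 99 for law enforcement use cases. Matches scoring below the threshold are
# simply not returned, so the chosen threshold changes the apparent results.
for threshold in (80.0, 99.0):
    response = rekognition.compare_faces(
        SourceImage={"Bytes": probe},
        TargetImage={"Bytes": candidate},
        SimilarityThreshold=threshold,
    )
    similarities = [round(m["Similarity"], 1) for m in response["FaceMatches"]]
    print(f"threshold={threshold}: {len(similarities)} match(es) {similarities}")
```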

Almost exactly a year ago, investors brought a proposal to Amazon’s annual shareholder meeting that would have banned Amazon from selling its facial recognition technology to government or law enforcement. Amazon defeated the vote by a wide margin.

The ACLU acknowledged Amazon’s move to pause sales of Rekognition, which it called a “threat to our civil rights and liberties,” but called on the company and other firms to do more.

#amazon, #american-civil-liberties-union, #andy-jassy, #computing, #congress, #face-id, #facial-recognition, #facial-recognition-software, #federal-government, #george-floyd, #massachusetts, #minneapolis, #privacy, #publishing, #security

A.C.L.U. Accuses Clearview AI of Privacy ‘Nightmare Scenario’

The facial recognition start-up violated the privacy of Illinois residents by collecting their images without their consent, the civil liberties group says in a new lawsuit.

#american-civil-liberties-union, #clearview-ai-inc, #computer-vision, #computers-and-the-internet, #facial-recognition-software, #illinois, #privacy, #suits-and-litigation-civil, #surveillance-of-citizens-by-government

Divesting from one facial recognition startup, Microsoft ends outside investments in the tech

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week.

The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision’s technology was being used by the Israeli government to surveil residents in the West Bank.

The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision’s technology was used to monitor border crossings between the West Bank and Israel, but did not “power a mass surveillance program in the West Bank.”

Microsoft’s venture capital arm, M12 Ventures, backed AnyVision as part of the company’s $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth, OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures and Eldridge Industries.

Microsoft first staked out its position on facial recognition technologies in 2018, when its president, Brad Smith, issued a statement calling on government to come up with clear regulations around facial recognition in the U.S.

Smith’s calls for more regulation and oversight became more strident by the end of the year, when Microsoft issued a statement on its approach to facial recognition.

Smith wrote:

We and other tech companies need to start creating safeguards to address facial recognition technology. We believe this technology can serve our customers in important and broad ways, and increasingly we’re not just encouraged, but inspired by many of the facial recognition applications our customers are deploying. But more than with many other technologies, this technology needs to be developed and used carefully. After substantial discussion and review, we have decided to adopt six principles to manage these issues at Microsoft. We are sharing these principles now, with a commitment and plans to implement them by the end of the first quarter in 2019.

The principles Microsoft laid out were fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance.

Critics took the company to task for its investment in AnyVision, saying that the decision to back a company working with the Israeli government on wide-scale surveillance ran counter to the principles it had set out for itself.

Now, after determining that it is too difficult to control how facial recognition technologies are deployed by companies in which it holds only a minority stake, the company is suspending its outside investments in the technology.

“For Microsoft, the audit process reinforced the challenges of being a minority investor in a company that sells sensitive technology, since such investments do not generally allow for the level of oversight or control that Microsoft exercises over the use of its own technology,” the company wrote in a statement on its M12 Ventures website. “Microsoft’s focus has shifted to commercial relationships that afford Microsoft greater oversight and control over the use of sensitive technologies.”

#anyvision, #brad-smith, #dfj-growth, #eldridge-industries, #eric-holder, #facial-recognition, #facial-recognition-software, #israel, #law-enforcement, #learning, #lightspeed-venture-partners, #m12, #mass-surveillance, #microsoft, #national-security, #president, #prevention, #qualcomm-ventures, #security, #skills, #surveillance, #tc, #united-states

Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich

Investors and clients of the facial recognition start-up freely used the app on dates and at parties — and to spy on the public.

#artificial-intelligence, #catsimatidis-john-a, #clearview-ai-inc, #computers-and-the-internet, #facial-recognition-software, #gristedes, #high-net-worth-individuals, #hot-ones-web-original-program, #kutcher-ashton, #mobile-applications, #schwartz-richard-j-1958, #thiel-peter-a, #ton-that-hoan, #venture-capital
