A 2018 letter from the bureau to the Israeli government is the clearest documentary evidence to date that the agency weighed using the spyware for law enforcement operations.
The Eastern European countries had sought to buy Pegasus, spyware made by an Israeli firm, to carry out intelligence operations against Russia, according to people with knowledge of the discussions.
An official investigation refuted claims that the police had illegally hacked dozens of civilians using spyware from NSO Group, an Israeli company that has long attracted global scrutiny.
The decision reflected rising concerns about the domestic use of spyware made by NSO Group, based in Israel, which has long been a target of criticism abroad.
A whistleblower has accused Pegasus spyware-maker NSO Group of offering “bags of cash” to security company Mobileum in exchange for access to cellular networks in 2017. According to reports yesterday by The Guardian and The Washington Post, former Mobileum VP Gary Miller made his allegations in a complaint to the US Department of Justice and in an interview with news organizations that are part of the “Pegasus Project” consortium.
Miller alleged that during the Mobileum/NSO Group meeting, “a member of his own company’s leadership at Mobileum asked what NSO believed the ‘business model’ was of working with Mobileum, since Mobileum did not sell access to the global signalling networks as a product,” The Guardian wrote. “According to Miller, and a written disclosure he later made to federal authorities, the response allegedly made by [NSO co-founder Omri] Lavie was ‘we drop bags of cash at your office.'”
NSO Group, an Israeli company that was recently blacklisted by the US government, was allegedly seeking access to the SS7 network. Mobileum’s various security products include an SS7 firewall, and the company’s website warns that “modestly priced access to the SS7 network is now available to hackers on a modest budget.”
A tale about Israel, Pegasus and the world.
A Times investigation reveals how Israel reaped diplomatic gains around the world from NSO’s Pegasus spyware — a tool America itself purchased but is now trying to ban.
The announcement came months after the U.S. government blacklisted the Israeli firm that produces Pegasus, the technology used to target the journalists.
The U.S. intelligence community offered steps that would mitigate — but not stop — spyware developed by firms like the NSO Group.
In February 2019, an Israeli woman sat across from the son of Uganda’s president and made an audacious pitch—would he want to secretly hack any phone in the world?
Lt. General Muhoozi Kainerugaba, in charge of his father’s security and a long-whispered successor to Yoweri Museveni, was keen, said two people familiar with the sales pitch.
After all, the woman, who had ties to Israeli intelligence, was pitching him Pegasus, a piece of spyware so powerful that Middle East dictators and autocratic regimes had been paying tens of millions for it for years.
Two journalists and one politician said they received alerts warning them of “state-sponsored” attacks on their iPhones. At least one of those attacks was linked to the powerful Israeli cyberespionage tool, Pegasus.
The hack is the first known case of the spyware, known as Pegasus, being used against American officials.
The iPhones of nine US State Department officials were infected by powerful and stealthy malware developed by NSO Group, the Israeli exploit seller that has come under increasing scrutiny as its wares have been used against journalists, lawyers, activists, and US allies.
The US officials, either stationed in Uganda or focusing on issues related to that country, received warnings like this one from Apple informing them their iPhones were being targeted by hackers. Citing unnamed people with knowledge of the attacks, Reuters said the hackers used software from NSO.
No clicking required
As previously reported, NSO software known as Pegasus uses exploits sent through messaging apps that infect iPhones and Android devices without requiring targets to click links or take any other action. From there, the devices run hard-to-detect malware that can download photos, contacts, text messages, and other data. The malware also allows the operator to listen to audio and view video in real time.
Apple is suing NSO Group Technologies, the Israeli military-grade spyware manufacturer that created surveillance software used to target the mobile phones of journalists, political dissidents, and human rights activists, to block it from using Apple products.
The iPhone maker’s lawsuit, filed on Tuesday in federal court in California, alleged that NSO, the largest known Israeli cyber warfare company, had spied on and targeted Apple users. It is seeking damages as well as an order stopping NSO from using any Apple software, device, or services.
NSO develops and sells its spyware, known as Pegasus, which exploits vulnerabilities in iPhones and Android smartphones and allows those who deploy it to infiltrate a target’s device unnoticed.
Apple accused NSO Group, the Israeli surveillance company, of “flagrant” violations of its software, as well as federal and state laws.
The accusation, which has not been independently verified, raises new questions over whether Israel is using software made by NSO Group to spy on Palestinians.
The hacking renewed scrutiny of the relationship between the Israeli government and the NSO Group, a surveillance company blacklisted by the United States.
The US has blacklisted Pegasus spyware maker NSO Group, saying that the Israeli company “developed and supplied spyware to foreign governments that used this tool to maliciously target government officials, journalists, businesspeople, activists, academics, and embassy workers.”
The Biden administration’s Commerce Department today announced a final rule that adds NSO Group and three other foreign companies to the Entity List “for engaging in activities that are contrary to the national security or foreign policy interests of the United States.” The other three companies are Israel-based Candiru, Russia-based Positive Technologies, and Singapore-based Computer Security Initiative Consultancy. Exports and transfers of their products will be restricted.
As we explained in a previous article, “Pegasus is frequently installed through ‘zero-click’ exploits, such as those sent by text messages, which require no interaction from victims.” Pegasus can jailbreak or root an iPhone or Android phone and make copies of call histories, text messages, calendar entries, and contacts. Pegasus can also activate cameras and microphones to eavesdrop, track a target’s movements, “and steal messages from end-to-end encrypted chat apps.”
Invasive hacking software sold to countries to fight terrorism is easily abused. Researchers say my phone was hacked twice, probably by Saudi Arabia.
Researchers at Citizen Lab found that NSO Group, an Israeli spyware company, had infected Apple products without so much as a click.
The illusion of privacy is getting harder and harder to maintain.
The shadowy world of private spyware has long caused alarm in cybersecurity circles, as authoritarian governments have repeatedly been caught targeting the smartphones of activists, journalists, and political rivals with malware purchased from unscrupulous brokers. The surveillance tools these companies provide frequently target iOS and Android, which have seemingly been unable to keep up with the threat. But a new report suggests the scale of the problem is far greater than feared—and has placed added pressure on mobile tech makers, particularly Apple, from security researchers seeking remedies.
This week, an international group of researchers and journalists from Amnesty International, Forbidden Stories, and more than a dozen other organizations published forensic evidence that a number of governments worldwide—including Hungary, India, Mexico, Morocco, Saudi Arabia, and the United Arab Emirates—may be customers of the notorious Israeli spyware vendor NSO Group. The researchers studied a leaked list of 50,000 phone numbers associated with activists, journalists, executives, and politicians who were all potential surveillance targets. They also looked specifically at 37 devices infected with, or targeted by, NSO’s invasive Pegasus spyware. They even created a tool so you can check whether your iPhone has been compromised.
Over the weekend, an international consortium of news outlets reported that several authoritarian governments — including Mexico, Morocco and the United Arab Emirates — used spyware developed by NSO Group to hack into the phones of thousands of their most vocal critics, including journalists, activists, politicians and business executives.
A leaked list of 50,000 phone numbers of potential surveillance targets was obtained by Paris-based journalism nonprofit Forbidden Stories and Amnesty International and shared with the reporting consortium, including The Washington Post and The Guardian. Researchers analyzed the phones of dozens of victims to confirm they were targeted by the NSO’s Pegasus spyware, which can access all of the data on a person’s phone. The reports also confirm new details of the government customers themselves, which NSO Group closely guards. Hungary, a member of the European Union where privacy from surveillance is supposed to be a fundamental right for its 500 million residents, is named as an NSO customer.
The reporting shows for the first time how many individuals are likely targets of NSO’s intrusive device-level surveillance. Previous reporting had put the number of known victims in the hundreds or more than a thousand.
NSO Group sharply rejected the claims. NSO has long said that it doesn’t know who its customers target, which it reiterated in a statement to TechCrunch on Monday.
Researchers at Amnesty, whose work was reviewed by the Citizen Lab at the University of Toronto, found that NSO can deliver Pegasus by sending a victim a link which when opened infects the phone, or silently and without any interaction at all through a “zero-click” exploit, which takes advantage of vulnerabilities in the iPhone’s software. Citizen Lab researcher Bill Marczak said in a tweet that NSO’s zero-clicks worked on iOS 14.6, which until today was the most up-to-date version.
Amnesty’s researchers showed their work by publishing meticulously detailed technical notes and a toolkit that they said may help others identify if their phones have been targeted by Pegasus.
The Mobile Verification Toolkit, or MVT, works on both iPhones and Android devices, but slightly differently. Amnesty said that more forensic traces were found on iPhones than on Android devices, which makes compromises easier to detect on iPhones. MVT lets you take an entire iPhone backup (or a full system dump if you jailbreak your phone) and scan it for any indicators of compromise (IOCs) known to be used by NSO to deliver Pegasus, such as domain names used in NSO’s infrastructure that might be sent by text message or email. If you have an encrypted iPhone backup, you can also use MVT to decrypt your backup without having to make a whole new copy.
The toolkit works on the command line, so it’s not a refined and polished user experience, and it requires some basic knowledge of how to navigate the terminal. We got it working in about 10 minutes, plus the time to create a fresh backup of an iPhone, which you will want to do if you want your results to be current to the hour. To get the toolkit ready to scan your phone for signs of Pegasus, you’ll need to feed in Amnesty’s IOCs, which it publishes on its GitHub page. Any time the indicators of compromise file updates, download and use a fresh copy.
Once you set off the process, the toolkit scans your iPhone backup file for any evidence of compromise. The process took about a minute or two to run and spit out several files in a folder with the results of the scan. If the toolkit finds a possible compromise, it will say so in the output files. In our case, we got one “detection,” which turned out to be a false positive and was removed from the IOCs after we checked with the Amnesty researchers. A new scan using the updated IOCs returned no signs of compromise.
Given that it’s more difficult to detect an Android infection, MVT takes a similar but simpler approach, scanning your Android device backup for text messages with links to domains known to be used by NSO. The toolkit also lets you scan for potentially malicious applications installed on your device.
The toolkit is — as command line tools go — relatively simple to use, and since the project is open source, it surely won’t be long before someone builds a user interface for it. The project’s detailed documentation will help you — as it did us.
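The core of the text-message check described above is simple indicator matching: pull any domains out of each message and compare them against a published IOC list. The sketch below illustrates that idea in Python. It is not MVT’s actual implementation, and the domains and messages are made up for illustration; the real indicators are the STIX2 files Amnesty publishes on GitHub.

```python
import re

# Hypothetical stand-ins for NSO-linked domains. The real indicators of
# compromise come from Amnesty's published IOC files, not this list.
MALICIOUS_DOMAINS = {"bogusdomain-example.net", "another-ioc-example.com"}

def extract_domains(text):
    """Pull anything that looks like a domain out of a message body."""
    return set(re.findall(r"(?:[\w-]+\.)+[a-z]{2,}", text.lower()))

def check_messages(messages, iocs=MALICIOUS_DOMAINS):
    """Return messages whose links match a known indicator of compromise."""
    hits = []
    for msg in messages:
        matched = extract_domains(msg["body"]) & iocs
        if matched:
            hits.append({"message": msg, "matched": sorted(matched)})
    return hits

# Two sample messages: one carrying a link to a flagged domain, one benign.
messages = [
    {"sender": "+10000000000",
     "body": "Breaking news: https://bogusdomain-example.net/article"},
    {"sender": "+10000000001", "body": "See you at lunch tomorrow?"},
]
print(check_messages(messages))
```

A real scan adds a lot on top of this, such as parsing the backup’s SQLite databases and checking process names and file paths, but domain matching against a current IOC list is the part you re-run whenever the indicators file updates.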
- A new ‘digital violence’ platform maps dozens of victims of NSO Group’s spyware
- Google, Cisco and VMware join Microsoft to oppose NSO Group in WhatsApp spyware case
- NSO used real people’s location data to pitch its contact-tracing tech, researchers say
- Dozens of journalists’ iPhones hacked with NSO ‘zero-click’ spyware, says Citizen Lab
- A passwordless server run by spyware maker NSO sparks contact-tracing privacy concerns
You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more.
Smartphones belonging to more than three dozen journalists, human rights activists, and business executives have been infected with powerful spyware that an Israeli firm sells, purportedly to catch terrorists and criminals, The Washington Post and other publications reported.
The handsets were infected with Pegasus, full-featured spyware developed by NSO Group. The Israel-based exploit seller has come under intense scrutiny in recent years after repressive governments in the United Arab Emirates, Mexico, and other countries have been found using the malware against journalists, activists, and other groups not affiliated with terrorism or crime.
Pegasus is frequently installed through “zero-click” exploits, such as those sent by text messages, which require no interaction from victims. After the exploits surreptitiously jailbreak or root a target’s iPhone or Android device, Pegasus immediately trawls through a wealth of the device’s resources. It copies call histories, text messages, calendar entries, and contacts. It is capable of activating the cameras and microphones of compromised phones to eavesdrop on nearby activities. It can also track a target’s movements and steal messages from end-to-end encrypted chat apps.
For the first time, researchers have mapped all the known targets, including journalists, activists, and human rights defenders, whose phones were hacked by Pegasus, a spyware developed by NSO Group.
Forensic Architecture, an academic unit at Goldsmiths, University of London that investigates human rights abuses, scoured dozens of reports from human rights groups, carried out open-source research, and interviewed dozens of the victims themselves to compile over a thousand data points, including device infections. Together, these show the relations and patterns between the digital surveillance carried out by NSO’s government customers and the real-world intimidation, harassment, and violence that the victims are also subjected to.
By mapping out these data points on a bespoke platform, the researchers can show how the nation-states that use Pegasus to spy on their victims often also target other people in the victims’ networks, and how the surveillance is entangled with assaults, arrests, and disinformation campaigns against not only the targets but also their families, friends, and colleagues.
Although the thousand-plus data points represent only a portion of governments’ overall use of Pegasus, the project aims to give researchers and investigators tools and data documenting NSO’s activities worldwide, activities the spyware maker goes to great lengths to keep out of the public eye.
Pegasus “activates your camera, your microphone, all that which forms an integral part of your life.” Mexican journalist Carmen Aristegui
Israel-based NSO Group develops Pegasus, a spyware that allows its government customers near-unfettered access to a victim’s device, including their personal data and their location. NSO has repeatedly declined to name its customers but reportedly has government contracts in at least 45 countries, said to include Rwanda, Israel, Bahrain, Saudi Arabia, Mexico, and the United Arab Emirates — all of which have been accused of human rights abuses — as well as Western nations, like Spain.
Forensic Architecture’s researcher-in-charge Shourideh Molavi said the new findings reveal “the extent to which the digital domain we inhabit has become the new frontier of human rights violations, a site of state surveillance and intimidation that enables physical violations in real space.”
The platform presents visual timelines of how victims are targeted by both spyware and physical violence as part of government campaigns to target their most outspoken critics.
Omar Abdulaziz, a Saudi video blogger and activist living in exile in Montreal, had his phone hacked by the Pegasus malware in 2018, shortly after Saudi emissaries tried to convince him to return to the kingdom. Weeks later, two of his brothers in Saudi Arabia were arrested and his friends detained.
Abdulaziz, a confidant of Washington Post journalist Jamal Khashoggi, whose murder was approved by Saudi Arabia’s de facto ruler Crown Prince Mohammed bin Salman, also had information about his Twitter account obtained by a “state-sponsored” actor, who later transpired to be a Saudi spy employed by Twitter. It was this stolen data, which included Abdulaziz’s phone number, that helped the Saudis penetrate his phone and read his messages with Khashoggi in real time, Yahoo News reported this week.
Mexican journalist Carmen Aristegui is another known victim, whose phone was hacked several times over 2015 and 2016 by a government customer of Pegasus, likely Mexico. The University of Toronto’s Citizen Lab found that her son, Emilio, a minor at the time, also had his phone targeted while he lived in the United States. The timeline of the digital intrusions against Aristegui, her son, and her colleagues shows that the hacking efforts intensified following their exposure of corruption by Mexico’s then-president Enrique Peña Nieto.
“It’s a malware that activates your camera, your microphone, all that which forms an integral part of your life,” said Aristegui in an interview with journalist and filmmaker Laura Poitras, who contributed to the project. Speaking of her son whose phone was targeted, Aristegui said: “To know that a kid who is simply going about his life, and going to school tells us about the kinds of abuse that a state can exert without counterweight.” (NSO has repeatedly claimed it does not target phones in the United States, but offers a similar technology to Pegasus, dubbed Phantom, through U.S.-based subsidiary, Westbridge Technologies.)
“A phenomenal damage is caused to the journalistic responsibility when the state — or whoever — uses these systems of ‘digital violence’,” said Aristegui. “It ends up being a very damaging element for journalists, which affects the right of a society to keep itself informed.”
The platform also draws on recent findings from an Amnesty International investigation into NSO Group’s corporate structure, which shows how NSO’s spyware has proliferated to states and governments using a complex network of companies to hide its customers and activities. Forensic Architecture’s platform follows the trail of private investment since NSO’s founding in 2010, which “likely enabled” the sale of the spyware to governments that NSO would not ordinarily have access to because of Israeli export restrictions.
“NSO Group’s Pegasus spyware needs to be thought of and treated as a weapon developed, like other products of Israel’s military industrial complex, in the context of the ongoing Israeli occupation. It is disheartening to see it exported to enable human rights violations worldwide,” said Eyal Weizman, director of Forensic Architecture.
The platform launched shortly after NSO published its first so-called transparency report this week, which human rights defenders and security researchers panned as devoid of any meaningful detail. Amnesty International said the report reads “more like a sales brochure.”
In a statement, NSO Group said it cannot comment on research it has not seen, but claimed it “investigates all credible claims of misuse, and NSO takes appropriate action based on the results of its investigations.”
NSO Group maintained that its technology “cannot be used to conduct cybersurveillance within the United States, and no customer has ever been granted technology that would enable them to access phones with U.S. numbers,” and declined to name any of its government customers.
As governments scrambled to lock down their populations after the COVID-19 pandemic was declared last March, some countries had plans underway to reopen. By June, Jamaica became one of the first countries to open its borders.
Tourism represents about one-fifth of Jamaica’s economy. In 2019 alone, four million travelers visited Jamaica, bringing thousands of jobs to its three million residents. But as COVID-19 stretched into the summer, Jamaica’s economy was in free fall, and tourism was its only way back — even if that came at the expense of public health.
The Jamaican government contracted with Amber Group, a technology company headquartered in Kingston, to build a border entry system allowing residents and travelers back onto the island. The system was named JamCOVID and was rolled out as an app and a website to allow visitors to get screened before they arrive. To cross the border, travelers had to upload a negative COVID-19 test result to JamCOVID before boarding their flight from high-risk countries, including the United States.
Amber Group’s CEO Dushyant Savadia boasted that his company developed JamCOVID in “three days” and that it effectively donated the system to the Jamaican government, which in turn pays Amber Group for additional features and customizations. The rollout appeared to be a success, and Amber Group later secured contracts to roll out its border entry system to at least four other Caribbean islands.
But last month TechCrunch revealed that JamCOVID exposed immigration documents, passport numbers, and COVID-19 lab test results on close to half a million travelers — including many Americans — who visited the island over the past year. Amber Group had set the access to the JamCOVID cloud server to public, allowing anyone to access its data from their web browser.
Whether the data exposure was caused by human error or negligence, it was an embarrassing mistake for a technology company — and, by extension, the Jamaican government — to make.
And that might have been the end of it. Instead, the government’s response became the story.
A trio of security lapses
By the end of the first wave of coronavirus, contact tracing apps were still in their infancy and few governments had plans in place to screen travelers as they arrived at their borders. It was a scramble for governments to build or acquire technology to understand the spread of the virus.
As part of an investigation into a broad range of these COVID-19 apps and services, TechCrunch found that JamCOVID was storing data on an exposed, passwordless server.
This wasn’t the first time TechCrunch found security flaws or exposed data through our reporting. It also was not the first pandemic-related security scare. Israeli spyware maker NSO Group left real location data on an unprotected server that it used for demonstrating its new contact tracing system. Norway was one of the first countries with a contact tracing app, but pulled it after the country’s privacy authority found the continuous tracking of citizens’ location was a privacy risk.
Just as we have with any other story, we contacted who we thought was the server’s owner. We alerted Jamaica’s Ministry of Health to the data exposure on the weekend of February 13. But after we provided specific details of the exposure to ministry spokesperson Stephen Davidson, we did not hear back. Two days later, the data was still exposed.
After we spoke to two American travelers whose data was spilling from the server, we narrowed down the owner of the server to Amber Group. We contacted its chief executive Savadia on February 16, who acknowledged the email but did not comment, and the server was secured about an hour later.
We ran our story that afternoon. After we published, the Jamaican government issued a statement claiming the lapse was “discovered on February 16” and was “immediately rectified,” neither of which were true.
Instead, the government responded by launching a criminal investigation into whether there was any “unauthorized” access to the unprotected data that led to our first story, which we perceived to be a thinly veiled threat directed at this publication. The government said it had contacted its overseas law enforcement partners.
When reached, a spokesperson for the FBI declined to say whether the Jamaican government had contacted the agency.
Things didn’t get much better for JamCOVID. In the days that followed the first story, the government engaged a cloud and cybersecurity consultant, Escala 24×7, to assess JamCOVID’s security. The results were not disclosed, but the company said it was confident there was “no current vulnerability” in JamCOVID. Amber Group also said that the lapse was a “completely isolated occurrence.”
A week went by, and TechCrunch alerted Amber Group to two more security lapses. Following the attention from the first report, a security researcher found private keys and passwords for JamCOVID’s servers and databases exposed on its website. A third lapse spilled quarantine orders for more than half a million travelers.
Amber Group and the government claimed it faced “cyberattacks, hacking and mischievous players.” In reality, the app was just not that secure.
The security lapses come at a politically inconvenient time for the Jamaican government, as it attempts to launch a national identification system, or NIDS, for the second time. NIDS will store biographic data on Jamaican nationals, including their biometrics, such as their fingerprints.
The repeat effort comes two years after the government’s first law was struck down by Jamaica’s High Court as unconstitutional.
Critics have cited the JamCOVID security lapses as a reason to drop the proposed national database. A coalition of privacy and rights groups cited the recent issues with JamCOVID for why a national database is “potentially dangerous for Jamaicans’ privacy and security.” A spokesperson for Jamaica’s opposition party told local media that there “wasn’t much confidence in NIDS in the first place.”
It’s been more than a month since we published the first story and there are many unanswered questions, including how Amber Group secured the contract to build and run JamCOVID, how the cloud server became exposed, and if security testing was conducted before its launch.
TechCrunch emailed both the Jamaican prime minister’s office and Jamaica’s national security minister Matthew Samuda to ask how much, if anything, the government donated or paid to Amber Group to run JamCOVID and what security requirements, if any, were agreed upon for JamCOVID. We did not get a response.
Amber Group also has not said how much it has earned from its government contracts. Amber Group’s Savadia declined to disclose the value of the contracts to one local newspaper. Savadia did not respond to our emails with questions about its contracts.
Following the second security lapse, Jamaica’s opposition party demanded that the prime minister release the contracts that govern the agreement between the government and Amber Group. Prime Minister Andrew Holness said at a press conference that the public “should know” about government contracts but warned “legal hurdles” may prevent disclosure, such as for national security reasons or when “sensitive trade and commercial information” might be disclosed.
That came days after the government denied a request by local newspaper The Jamaica Gleaner to obtain contracts revealing the salaries of state officials, citing a legal clause that prevents the disclosure of an individual’s private affairs. Critics argue that taxpayers have a right to know how much government officials are paid from public funds.
Jamaica’s opposition party also asked what was done to notify victims.
Government minister Samuda initially downplayed the security lapse, claiming just 700 people were affected. We scoured social media for proof but found nothing. To date, we’ve found no evidence that the Jamaican government ever informed travelers of the security incident: neither the hundreds of thousands of affected travelers whose information was exposed, nor the 700 people the government claims to have notified with a notice it has never publicly released.
TechCrunch emailed the minister to request a copy of the notice that the government allegedly sent to victims, but we did not receive a response. We also asked Amber Group and Jamaica’s prime minister’s office for comment. We did not hear back.
Many of the victims of the security lapse are from the United States. Neither of the two Americans we spoke to in our first report was notified of the breach.
Spokespeople for the attorneys general of New York and Florida, whose residents’ information was exposed, told TechCrunch that they had not heard from either the Jamaican government or the contractor, despite state laws requiring data breaches to be disclosed.
The reopening of Jamaica’s borders came at a cost. The island saw over a hundred new cases of COVID-19 in the month that followed, the majority arriving from the United States. From June to August, the number of new coronavirus cases each day climbed from dozens to hundreds.
To date, Jamaica has reported over 39,500 cases and 600 deaths caused by the pandemic.
Prime Minister Holness reflected on the decision to reopen the borders in parliament last month, while announcing the country’s annual budget. He said the country’s economic decline last year was “driven by a massive 70% contraction in our tourist industry.” More than 525,000 travelers — both residents and tourists — have arrived in Jamaica since the borders opened, Holness said, a figure slightly more than the number of travelers’ records found on the exposed JamCOVID server in February.
Holness defended reopening the country’s borders.
“Had we not done this, the fallout in tourism revenues would have been 100% instead of 75%, there would be no recovery in employment, our balance of payments deficit would have worsened, overall government revenues would have been threatened, and there would be no argument to be made about spending more,” he said.
Both the Jamaican government and Amber Group benefited from opening the country’s borders. The government wanted to revive its falling economy, and Amber Group enriched its business with fresh government contracts. But neither paid enough attention to cybersecurity, and victims of their negligence deserve to know why.
Spyware maker NSO Group used real phone location data on thousands of unsuspecting people when it demonstrated its new COVID-19 contact-tracing system to governments and journalists, researchers have concluded.
NSO, a private intelligence company best known for developing and selling governments access to its Pegasus spyware, went on the charm offensive earlier this year to pitch its contact-tracing system, dubbed Fleming, aimed at helping governments track the spread of COVID-19. Fleming is designed to allow governments to feed location data from cell phone companies to visualize and track the spread of the virus. NSO gave several news outlets each a demo of Fleming, which NSO says helps governments make public health decisions “without compromising individual privacy.”
But in May, a security researcher told TechCrunch that he found an exposed database storing thousands of location data points used by NSO to demonstrate how Fleming works — the same demo seen by reporters weeks earlier.
TechCrunch reported the apparent security lapse to NSO, which quickly secured the database, but said that the location data was “not based on real and genuine data.”
NSO’s claim that the location data wasn’t real differed from reports in Israeli media, which said NSO had used phone location data obtained from advertising platforms, known as data brokers, to “train” the system. Academic and privacy expert Tehilla Shwartz Altshuler, who was also given a demo of Fleming, said NSO told her that the data was obtained from data brokers, which sell access to vast troves of aggregate location data collected from the apps installed on millions of phones.
TechCrunch asked researchers at Forensic Architecture, an academic unit at Goldsmiths, University of London that studies and examines human rights abuses, to investigate. The researchers published their findings on Wednesday, concluding that the exposed data was likely based on real phone location data.
The researchers said if the data is real, then NSO “violated the privacy” of 32,000 individuals across Rwanda, Israel, Bahrain, Saudi Arabia and the United Arab Emirates — countries that are reportedly customers of NSO’s spyware.
The researchers analyzed a sample of the exposed phone location data by looking for patterns they expected to see with real people’s location data, such as a concentration of people in major cities and by measuring the time it took for individuals to travel from one place to another. The researchers also found spatial irregularities that would be associated with real data, such as star-like patterns that are caused by a phone trying to accurately pinpoint its location when the line of sight to the satellite is obstructed by tall buildings.
“The spatial ‘irregularities’ in our sample — a common signature of real mobile location tracks — further support our assessment that this is real data. Therefore, the dataset is most likely not ‘dummy’ nor computer generated data, but rather reflects the movement of actual individuals, possibly acquired from telecommunications carriers or a third-party source,” the researchers said.
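The travel-time check the researchers describe is simple to illustrate. The sketch below is hypothetical code, not Forensic Architecture’s actual tooling; the function names and the 900 km/h airliner-speed ceiling are assumptions. It computes the speed implied by each pair of consecutive location fixes and flags physically implausible jumps, which tend to riddle synthetic data but are rare in real movement tracks:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def implausible_speeds(track, max_kmh=900.0):
    """Flag consecutive fixes whose implied speed exceeds max_kmh.

    `track` is a list of (timestamp_seconds, lat, lon) tuples for one
    device, sorted by time. Real movement rarely exceeds airliner speed;
    uniformly random synthetic points usually do.
    """
    flags = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(track, track[1:]):
        dt_h = (t2 - t1) / 3600.0
        if dt_h <= 0:
            continue  # skip duplicate or out-of-order timestamps
        speed = haversine_km(la1, lo1, la2, lo2) / dt_h
        if speed > max_kmh:
            flags.append((t1, t2, speed))
    return flags
```

Real tracks trip this check occasionally (a bad GPS fix can teleport a phone), so an analyst would look at the rate of violations across the whole dataset rather than at any single jump.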
The researchers built maps, graphs, and visualizations to explain their findings, while preserving the anonymity of the individuals whose location data was fed into NSO’s Fleming demo.
Gary Miller, a mobile network security expert and founder of cyber intelligence firm Exigent Media, reviewed some of the datasets and graphs, and concluded it was real phone location data.
Miller said the number of data points increased around population hubs. “If you take a scatter plot of cell phone locations at a given point in time, there will be consistency in the number of points in suburban versus urban locations,” he said. Miller also found evidence of people traveling together, which he said “looked consistent with real phone data.”
He also said that even “anonymized” location data sets can be used to tell a lot about a person, such as where they live and work, and who they visit. “One can learn a lot of details about individuals simply by looking at location movement patterns,” he said.
“If you add up all of the similarities it would be very difficult to conclude that this was not actual mobile network data,” he said.
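Miller’s warning about “anonymized” location data is easy to demonstrate: the place a phone sleeps is usually where its owner does. The following is a minimal, hypothetical sketch — the roughly 1 km grid, the fixed UTC night window, and the function name are all assumptions, and a real analysis would account for local time zones — showing how a home location can be guessed from nothing but timestamped coordinates:

```python
from collections import Counter
from datetime import datetime, timezone

def likely_home(points, grid=0.01):
    """Guess a device's home location from 'anonymized' fixes.

    `points` is a list of (unix_timestamp, lat, lon) tuples. Coordinates
    are snapped to a ~1 km grid, and the most common night-time cell
    (22:00-06:00 UTC here, for simplicity) is returned. No name or device
    ID is needed.
    """
    nights = Counter()
    for ts, lat, lon in points:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        if hour >= 22 or hour < 6:
            cell = (round(lat / grid) * grid, round(lon / grid) * grid)
            nights[cell] += 1
    return nights.most_common(1)[0][0] if nights else None
```

The same counting trick with a daytime window yields a likely workplace, and the pair together often identifies a person uniquely.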
John Scott-Railton, a senior researcher at Citizen Lab, said the data likely originated from phone apps that use a blend of direct GPS data, nearby Wi-Fi networks, and the phone’s in-built sensors to try to improve the quality of the location data. “But it’s never really perfect,” he said. “If you’re looking at advertising data — like the kind that you buy from a data broker — it would look a lot like this.”
Scott-Railton also said that using simulated data for a contact-tracing system would be “counterproductive,” as NSO would “want to train [Fleming] on data that is as real and representative as possible.”
“Based on what I saw, the analysis provided by Forensic Architecture is consistent with the previous statements by Tehilla Shwartz Altshuler,” said Scott-Railton, referring to the academic who said NSO told her the demo was based on real data.
“The whole situation paints a picture of a spyware company once more being cavalier with sensitive and potentially personal information,” he said.
NSO rejected the researchers’ findings.
“We have not seen the supposed examination and have to question how these conclusions were reached. Nevertheless, we stand by our previous response of May 6, 2020. The demo material was not based on real and genuine data related to infected COVID-19 individuals,” said an unnamed spokesperson. (NSO’s earlier statement made no reference to individuals with COVID-19.)
“As our last statement details, the data used for the demonstrations did not contain any personally identifiable information (PII). And, also as previously stated, this demo was a simulation based on obfuscated data. The Fleming system is a tool that analyzes data provided by end users to help healthcare decision-makers during this global pandemic. NSO does not collect any data for the system, nor does NSO have any access to collected data.”
NSO did not answer our specific questions, including where the data came from and how it was obtained. The company claims on its website that Fleming is “already being operated by countries around the world,” but declined to confirm or deny its government customers when asked.
The Israeli spyware maker’s push into contact tracing has been seen as a way to repair its image, as the company battles a lawsuit in the United States that could see it reveal more about the governments that buy access to its Pegasus spyware.
NSO is currently embroiled in a lawsuit with Facebook-owned WhatsApp, which last year blamed NSO for exploiting an undisclosed vulnerability in WhatsApp to infect some 1,400 phones with Pegasus, including those of journalists and human rights defenders. NSO says it should be afforded legal immunity because it acts on behalf of governments. But Microsoft, Google, Cisco, and VMware filed an amicus brief this week in support of WhatsApp, calling on the court to reject NSO’s claim to immunity.
The amicus brief came shortly after Citizen Lab found evidence that dozens of journalists were also targeted with Pegasus spyware by NSO customers, including Saudi Arabia and the United Arab Emirates. NSO disputed the findings.
A coalition of companies has filed an amicus brief in support of a legal case brought by WhatsApp against Israeli intelligence firm NSO Group, which WhatsApp accuses of using an undisclosed vulnerability in the messaging app to hack into at least 1,400 devices, some of which were owned by journalists and human rights activists.
NSO develops and sells governments access to its Pegasus spyware, allowing its nation-state customers to stealthily hack into the devices of their targets. Spyware like Pegasus can track a victim’s location, read their messages and listen to their calls, steal their photos and files, and siphon off private information from their device. The spyware is often installed by tricking a target into opening a malicious link, or sometimes by exploiting never-before-seen vulnerabilities in apps or phones to silently infect victims. The company has drawn ire for selling to authoritarian regimes, like Saudi Arabia, Ethiopia, and the United Arab Emirates.
Last year, WhatsApp found and patched a vulnerability that it said was being abused to deliver the government-grade spyware, in some cases without the victim knowing. Months later, WhatsApp sued NSO to understand more about the incident, including which of its government customers was behind the attack.
NSO has repeatedly disputed the allegations, but was unable to convince a U.S. court to drop the case earlier this year. NSO’s main legal defense is that it is afforded legal immunities because it acts on behalf of governments.
But a coalition of tech companies has sided with WhatsApp, and is now asking the court not to allow NSO to claim, or be subject to, immunity.
Microsoft (including its subsidiaries LinkedIn and GitHub), Google, Cisco, VMware, and the Internet Association, which represents dozens of tech giants including Amazon, Facebook, and Twitter, warned that the development of spyware and espionage tools — including the hoarding of the vulnerabilities used to deliver them — makes ordinary people less safe and secure, and also runs the risk of these tools falling into the wrong hands.
In a blog post, Microsoft’s customer security and trust chief Tom Burt said NSO should be held accountable for the tools it builds and the vulnerabilities it exploits.
“Private companies should remain subject to liability when they use their cyber-surveillance tools to break the law, or knowingly permit their use for such purposes, regardless of who their customers are or what they’re trying to achieve,” said Burt. “We hope that standing together with our competitors today through this amicus brief will help protect our collective customers and global digital ecosystem from more indiscriminate attacks.”
A spokesperson for NSO did not immediately comment.
Citizen Lab researchers say they have found evidence that dozens of journalists had their iPhones silently compromised with spyware known to be used by nation states.
For more than a year, London-based reporter Rania Dridi and at least 36 journalists, producers and executives working for the Al Jazeera news agency were targeted with a so-called “zero-click” attack that exploited a now-fixed vulnerability in Apple’s iMessage. The attack invisibly compromised the devices without having to trick the victims into opening a malicious link.
Citizen Lab, the internet watchdog at the University of Toronto, was asked to investigate earlier this year after one of the victims, Al Jazeera investigative journalist Tamer Almisshal, suspected that his phone may have been hacked.
In a technical report out Sunday and shared with TechCrunch, the researchers say they believe the journalists’ iPhones were infected with the Pegasus spyware, developed by Israel-based NSO Group.
The researchers analyzed Almisshal’s iPhone and found that between July and August it had connected to servers known to be used by NSO for delivering the Pegasus spyware. The device revealed a burst of network activity that suggests the spyware may have been delivered silently over iMessage.
Logs from the phone show that the spyware was likely able to secretly record the microphone and phone calls, take photos using the phone’s camera, access the victim’s passwords, and track the phone’s location.
Citizen Lab said the bulk of the hacks were likely carried out by at least four NSO customers, including the governments of Saudi Arabia and the United Arab Emirates, citing evidence it found in similar attacks involving Pegasus.
The researchers found evidence that two other NSO customers hacked into one and three Al Jazeera phones respectively, but that they could not attribute the attacks to a specific government.
A spokesperson for Al Jazeera, which just broadcast its reporting of the hacks, did not immediately comment.
NSO sells governments and nation states access to its Pegasus spyware as a prepackaged service by providing the infrastructure and the exploits needed to launch the spyware against the customer’s targets. But the spyware maker has repeatedly distanced itself from what its customers do and has said it does not know who its customers target. Some of NSO’s known customers include authoritarian regimes, like Saudi Arabia and the United Arab Emirates. Saudi Arabia allegedly used the surveillance technology to spy on the communications of columnist Jamal Khashoggi shortly before his murder, which U.S. intelligence concluded was likely ordered by the kingdom’s de facto ruler, Crown Prince Mohammed bin Salman.
Citizen Lab said it also found evidence that Dridi, a journalist at Arabic television station Al Araby in London, had fallen victim to a zero-click attack. The researchers said Dridi was likely targeted by the UAE government.
In a phone call, Dridi told TechCrunch that her phone may have been targeted because of her close association to a person of interest to the UAE.
Dridi’s phone, an iPhone XS Max, was targeted for a longer period, likely between October 2019 and July 2020. The researchers found evidence that she was targeted on two separate occasions with a zero-day attack — an exploit for a vulnerability that has not been previously disclosed and for which a patch is not yet available — because her phone was running the latest version of iOS both times.
“My life is not normal anymore. I don’t feel like I have a private life again,” said Dridi. “To be a journalist is not a crime,” she said.
Citizen Lab said its latest findings reveal an “accelerating trend of espionage” against journalists and news organizations, and that the growing use of zero-click exploits makes it increasingly difficult — though evidently not impossible — to detect because of the more sophisticated techniques used to infect victims’ devices while covering their tracks.
When reached on Saturday, NSO said it was unable to comment on the allegations as it had not seen the report, and declined to say whether Saudi Arabia or the UAE were customers, or to describe what processes — if any — it puts in place to prevent customers from targeting journalists.
“This is the first we are hearing of these assertions. As we have repeatedly stated, we do not have access to any information related to the identities of individuals upon whom our system is alleged to have been used to conduct surveillance. However, when we receive credible evidence of misuse, combined with the basic identifiers of the alleged targets and timeframes, we take all necessary steps in accordance with our product misuse investigation procedure to review the allegations,” said a spokesperson.
“We are unable to comment on a report we have not yet seen. We do know that CitizenLab regularly publishes reports based on inaccurate assumptions and without a full command of the facts, and this report will likely follow that theme. NSO provides products that enable governmental law enforcement agencies to tackle serious organized crime and counterterrorism only, but, as stated in the past, we do not operate them. Nevertheless, we are committed to ensuring our policies are adhered to, and any evidence of a breach will be taken seriously and investigated.”
Citizen Lab said it stood by its findings.
Spokespeople for the Saudi and UAE governments in New York did not respond to an email requesting comment.
The attacks not only put a renewed focus on the shadowy world of surveillance spyware, but also on the companies having to defend against it. Apple rests much of its public image on advocating privacy for its users and building secure devices, like iPhones, designed to be hardened against the bulk of attacks. But no technology is impervious to security bugs. In 2016, Reuters reported that UAE-based cybersecurity firm DarkMatter bought a zero-click exploit to target iMessage, which it referred to as “Karma.” The exploit worked even if the user did not actively use the messaging app.
Apple told TechCrunch that it had not independently verified Citizen Lab’s findings but that the vulnerabilities used to target the reporters were fixed in iOS 14, released in September.
“At Apple, our teams work tirelessly to strengthen the security of our users’ data and devices. iOS 14 is a major leap forward in security and delivered new protections against these kinds of attacks. The attack described in the research was highly targeted by nation-states against specific individuals. We always urge customers to download the latest version of the software to protect themselves and their data,” said an Apple spokesperson.
NSO is currently embroiled in a legal battle with Facebook, which last year blamed the Israeli spyware maker for using a similar, previously undisclosed zero-click exploit in WhatsApp to infect some 1,400 devices with the Pegasus spyware.
Facebook discovered and patched the vulnerability, stopping the attack in its tracks, but said that more than 100 human rights defenders, journalists and “other members of civil society” had fallen victim.
Facebook-owned WhatsApp has revealed six previously undisclosed vulnerabilities, which the company has now fixed. The vulnerabilities are being reported on a dedicated security advisory website that will serve as the new resource providing a comprehensive list of WhatsApp security updates and associated Common Vulnerabilities and Exposures (CVE).
WhatsApp said five of the six vulnerabilities were fixed in the same day, while the remaining bug took a couple of days to remediate. Although some of the bugs could have been remotely triggered, the company said it found no evidence of hackers actively exploiting the vulnerabilities.
Around one-third of the new vulnerabilities were reported through the company’s Bug Bounty Program, while the others were discovered in routine code reviews and by automated systems.
WhatsApp is one of the world’s most popular apps with more than two billion users around the world. But it’s also a persistent target for hackers, who try to find and exploit vulnerabilities in the platform.
The new website was launched as part of the company’s efforts to be more transparent about vulnerabilities targeting the messaging app, and in response to user feedback. The company says the WhatsApp community has been asking for a centralized location for tracking security vulnerabilities, since app store policies mean WhatsApp isn’t always able to detail security fixes in its app release notes.
The new dashboard will update monthly, or sooner if it has to warn users of an active attack. It will also offer an archive of past CVEs dating back to 2018. While the website’s main focus will be on CVEs in WhatsApp’s code, if the company files a CVE with the public database MITRE for a vulnerability it found in third-party code, it will denote that on the WhatsApp Security Advisory page, as well.
Last year, WhatsApp went public after fixing a vulnerability allegedly used by Israeli spyware maker NSO Group. WhatsApp sued the spyware maker, alleging the company used the vulnerability to covertly deliver its Pegasus spyware to some 1,400 devices — including more than 100 human rights defenders and journalists.
NSO denied the allegations.
John Scott-Railton, a senior researcher at Citizen Lab, whose work has included investigating NSO Group, welcomed the news.
“This is good, and we know that bad actors make use of extensive resources to acquire and weaponize vulnerabilities,” he told TechCrunch. “WhatsApp sending the signal that it’s going to move regularly to identify and patch in this way seems like yet another way to raise the cost for bad actors.”
In a blog post, WhatsApp said: “We are very committed to transparency and this resource is intended to help the broader technology community benefit from the latest advances in our security efforts. We strongly encourage all users to ensure they keep their WhatsApp up-to-date from their respective app stores and update their mobile operating systems whenever updates are available.”
Facebook also said Thursday that it has codified its vulnerability disclosure policy, allowing the company to warn developers of security vulnerabilities in third-party code that Facebook and WhatsApp rely on.
A U.S. federal court judge ruled on Thursday that WhatsApp and parent company Facebook’s lawsuit against Israeli mobile surveillance software company NSO Group can go forward. Phyllis Hamilton, Chief Judge of the United States District Court for the Northern District of California, denied most of the arguments NSO Group made when it filed a motion to dismiss the suit in April (a copy of her decision is embedded below).
Last October, WhatsApp and Facebook filed a complaint alleging that NSO Group exploited an audio-calling vulnerability in the messaging app to send malware to about 1,400 mobile devices, including ones that belonged to journalists, human rights activists, political dissidents, diplomats and senior government officials.
WhatsApp and Facebook also claim that NSO Group developed spyware called Pegasus that extracted data, including messages, browser history and contacts, from phones, and sold support services to customers including the Kingdom of Bahrain, the United Arab Emirates and Mexico.
In its motion to dismiss the lawsuit, one of NSO Group’s arguments was that its business dealings with foreign governments, which it said use its technology to fight terrorism and other serious crimes, granted it immunity from lawsuits filed in U.S. courts under the Foreign Sovereign Immunity Act (FSIA). In her decision, Judge Hamilton wrote that NSO Group failed to qualify because it was not incorporated or formed in the U.S.
In an email to TechCrunch, a WhatsApp spokesperson said “We are pleased with the Court’s decision permitting us to move ahead with our claims that NSO engaged in unlawful conduct. The decision also confirms that WhatsApp will be able to obtain relevant documents and other information about NSO’s practices.”
TechCrunch has also contacted NSO Group for comment. When the lawsuit was filed in October, the company stated, “In the strongest possible terms, we dispute today’s allegations and will vigorously fight them.”
There is a darker side to cybersecurity that’s frequently overlooked.
Just as you have an entire industry of people working to keep systems and networks safe from threats, commercial adversaries are working to exploit them. We’re not talking about red-teamers, who work to ethically hack companies from within. We’re referring to exploit markets that sell details of security vulnerabilities and the commercial spyware companies that use those exploits to help governments and hackers spy on their targets.
These for-profit surveillance companies flew under the radar for years, but have only recently gained notoriety. But now, they’re getting unwanted attention from U.S. lawmakers.
In this week’s Decrypted, we look at the technologies police use against the public.
THE BIG PICTURE
Secrecy over protest surveillance prompts call for transparency
Last week we looked at how the Justice Department granted the Drug Enforcement Administration new powers to covertly spy on protesters. But that leaves a big question: What kind of surveillance technology do federal agencies have, and what happens to people’s data once it is collected?
While some surveillance is easy to spot — drones and police helicopters overhead — privacy advocates worry that law enforcement is also using less obvious technologies, like facial recognition and access to phone records, CNBC reports. Many police departments around the U.S. also use “stingray” devices that spoof cell towers to trick nearby phones into turning over their call, message and location data.
A newly released draft intelligence bill, passed by the Senate Intelligence Committee last week, would require the government to detail the threats posed by commercial spyware and surveillance technology.
The annual intelligence authorization bill, published Thursday, would take aim at private sector spyware makers, like NSO Group and Hacking Team, which build spyware and hacking tools designed to surreptitiously break into victims’ devices for conducting surveillance. Both NSO Group and Hacking Team say they only sell their hacking tools to governments, but critics say their customers have included despotic and authoritarian regimes like Saudi Arabia and Bahrain.
If passed, the bill would instruct the Director of National Intelligence to submit a report to both House and Senate intelligence committees within six months on the “threats posed by the use by foreign governments and entities of commercially available cyber intrusion and other surveillance technology” against U.S. citizens, residents and federal employees.
The report would also have to note if any spyware or surveillance technology is built by U.S. companies and what export controls should apply to prevent that technology from getting into the hands of unfriendly foreign governments.
Sen. Ron Wyden (D-OR) was the only member of the Senate Intelligence Committee to vote against the bill, citing a broken, costly declassification system, but praised the inclusion of the commercial spyware provision.
Commercial spyware and surveillance technology became a mainstream talking point two years ago after the murder of Washington Post columnist Jamal Khashoggi, which U.S. intelligence concluded was personally ordered by Saudi crown prince Mohammed bin Salman, the country’s de facto leader. A lawsuit filed by a Saudi dissident and friend of Khashoggi accuses NSO Group of selling its mobile hacking tool, dubbed Pegasus, to the Saudi regime, which allegedly used the technology to spy on Khashoggi shortly before his murder. NSO denies the claims.
NSO is currently embroiled in a legal battle with Facebook for allegedly exploiting a now-fixed vulnerability in WhatsApp to deliver its spyware to the cell phones of 1,400 users, including government officials, journalists and human rights activists, using Amazon cloud servers based in the U.S. and Germany.
In a separate incident, human rights experts at the United Nations have called for an investigation into allegations that the Saudi government used its spyware to hack into the phone of Amazon chief executive Jeff Bezos. NSO has repeatedly denied the allegations.
John Scott-Railton, a senior researcher at the Citizen Lab, part of the Munk School at the University of Toronto, told TechCrunch that the bill’s draft provisions “couldn’t come at a more important time.”
“Reporting throughout the security industry, as well as actions taken by Apple, Google, Facebook and others have made it clear that [spyware] is a problem at scale and is dangerous to U.S. national security and these companies,” said Scott-Railton. “Commercial spyware, when used by governments, is the ‘next Huawei’ in terms of the security of Americans and needs to be treated as a serious security threat,” he said.
“They brought this on themselves by claiming for years that everything was fine while evidence mounted in every sector of the U.S. and global society that there was a problem,” he said.
The debate over encryption continues to drag on without end.
In recent months, the discourse has largely swung away from encrypted smartphones to focus instead on end-to-end encrypted messaging. But a recent press conference by the heads of the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) showed that the debate over device encryption isn’t dead; it was merely resting. And it just won’t go away.
At the presser, Attorney General William Barr and FBI Director Chris Wray announced that after months of work, FBI technicians had succeeded in unlocking the two iPhones used by the Saudi military officer who carried out a terrorist shooting at the Pensacola Naval Air Station in Florida in December 2019. The shooter died in the attack, which was quickly claimed by Al Qaeda in the Arabian Peninsula.
Early this year — a solid month after the shooting — Barr had asked Apple to help unlock the phones (one of which was damaged by a bullet), which were older iPhone 5 and 7 models. Apple provided “gigabytes of information” to investigators, including “iCloud backups, account information and transactional data for multiple accounts,” but drew the line at assisting with the devices. The situation threatened to revive the 2016 “Apple versus FBI” showdown over another locked iPhone following the San Bernardino terror attack.
After the government went to federal court to try to dragoon Apple into doing investigators’ job for them, the dispute ended anticlimactically when the government got into the phone itself after purchasing an exploit from an outside vendor the government refused to identify. The Pensacola case culminated much the same way, except that the FBI apparently used an in-house solution instead of a third party’s exploit.
You’d think the FBI’s success at a tricky task (remember, one of the phones had been shot) would be good news for the Bureau. Yet an unmistakable note of bitterness tinged the laudatory remarks at the press conference for the technicians who made it happen. Despite the Bureau’s impressive achievement, and despite the gobs of data Apple had provided, Barr and Wray devoted much of their remarks to maligning Apple, with Wray going so far as to say the government “received effectively no help” from the company.
This diversion tactic worked: in news stories covering the press conference, headline after headline after headline highlighted the FBI’s slam against Apple instead of focusing on what the press conference was nominally about: the fact that federal law enforcement agencies can get into locked iPhones without Apple’s assistance.
That should be the headline news, because it’s important. That inconvenient truth undercuts the agencies’ longstanding claim that they’re helpless in the face of Apple’s encryption and thus the company should be legally forced to weaken its device encryption for law enforcement access. No wonder Wray and Barr are so mad that their employees keep being good at their jobs.
By reviving the old blame-Apple routine, the two officials managed to evade a number of questions that their press conference left unanswered. What exactly are the FBI’s capabilities when it comes to accessing locked, encrypted smartphones? Wray claimed the technique developed by FBI technicians is “of pretty limited application” beyond the Pensacola iPhones. How limited? What other phone-cracking techniques does the FBI have, and which handset models and which mobile OS versions do those techniques reliably work on? In what kinds of cases, for what kinds of crimes, are these tools being used?
We also don’t know what’s changed internally at the Bureau since that damning 2018 Inspector General postmortem on the San Bernardino affair. Whatever happened with the FBI’s plans, announced in the IG report, to lower the barrier within the agency to using national security tools and techniques in criminal cases? Did that change come to pass, and did it play a role in the Pensacola success? Is the FBI cracking into criminal suspects’ phones using classified techniques from the national security context that might not pass muster in a court proceeding (were their use to be acknowledged at all)?
Further, how do the FBI’s in-house capabilities complement the larger ecosystem of tools and techniques for law enforcement to access locked phones? Those include third-party vendors GrayShift and Cellebrite’s devices, which, in addition to the FBI, count numerous U.S. state and local police departments and federal immigration authorities among their clients. When plugged into a locked phone, these devices can bypass the phone’s encryption to yield up its contents, and (in the case of GrayShift) can plant spyware on an iPhone to log its passcode when police trick a phone’s owner into entering it. These devices work on very recent iPhone models: Cellebrite claims it can unlock any iPhone for law enforcement, and the FBI has unlocked an iPhone 11 Pro Max using GrayShift’s GrayKey device.
In addition to Cellebrite and GrayShift, which have a well-established U.S. customer base, the ecosystem of third-party phone-hacking companies includes entities that market remote-access phone-hacking software to governments around the world. Perhaps the most notorious example is the Israel-based NSO Group, whose Pegasus software has been used by foreign governments against dissidents, journalists, lawyers and human rights activists. The company’s U.S. arm has attempted to market Pegasus domestically to American police departments under another name. Which third-party vendors are supplying phone-hacking solutions to the FBI, and at what price?
Finally, who else besides the FBI will be the beneficiary of the technique that worked on the Pensacola phones? Does the FBI share the vendor tools it purchases, or its own home-rolled ones, with other agencies (federal, state, tribal or local)? Which tools, which agencies and for what kinds of cases? Even if it doesn’t share the techniques directly, will it use them to unlock phones for other agencies, as it did for a state prosecutor soon after purchasing the exploit for the San Bernardino iPhone?
We have little idea of the answers to any of these questions, because the FBI’s capabilities are a closely held secret. What advances and breakthroughs it has achieved, and which vendors it has paid, we (who provide the taxpayer dollars to fund this work) aren’t allowed to know. And the agency refuses to answer questions about encryption’s impact on its investigations even from members of Congress, who can be privy to confidential information denied to the general public.
The only public information coming out of the FBI’s phone-hacking black box is nothingburgers like the recent press conference. At an event all about the FBI’s phone-hacking capabilities, Director Wray and AG Barr cunningly managed to deflect the press’s attention onto Apple, dodging any difficult questions, such as what the FBI’s abilities mean for Americans’ privacy, civil liberties and data security, or even basic questions like how much the Pensacola phone-cracking operation cost.
As the recent PR spectacle demonstrated, a press conference isn’t oversight. Instead of exerting its oversight power, mandating more transparency, or requiring an accounting and cost/benefit analysis of the FBI’s phone-hacking expenditures — instead of demanding a straight and conclusive answer to the eternal question of whether, in light of the agency’s continually evolving capabilities, there’s really any need to force smartphone makers to weaken their device encryption — Congress is coming up with dangerous legislation such as the EARN IT Act, which risks undermining encryption just when a population forced by COVID-19 to do everything online from home can least afford it.
The best-case scenario now is that the federal agency that proved its untrustworthiness by lying to the Foreign Intelligence Surveillance Court can crack into our smartphones, but maybe not all of them; that maybe it isn’t sharing its toys with state and local police departments (which are rife with domestic abusers who’d love to get access to their victims’ phones); that unlike third-party vendor devices, maybe the FBI’s tools won’t end up on eBay where criminals can buy them; and that hopefully it hasn’t paid taxpayer money to the spyware company whose best-known government customer murdered and dismembered a journalist.
The worst-case scenario would be that, between in-house and third-party tools, pretty much any law enforcement agency can now reliably crack into everybody’s phones, and yet nevertheless this turns out to be the year they finally get their legislative victory over encryption anyway. I can’t wait to see what else 2020 has in store.
As countries work to reopen after weeks of lockdown, contact-tracing apps are helping authorities understand the spread of COVID-19, the disease caused by the novel coronavirus.
While most governments lean toward privacy-focused apps that use Bluetooth signals to create an anonymous profile of a person’s whereabouts, others, like Israel, use location and cell phone data to track the spread of the virus.
Security researcher Bob Diachenko discovered one of NSO’s contact-tracing systems on the internet, unprotected and without a password, for anyone to access. After he contacted the company, NSO pulled the unprotected database offline. Diachenko said he believes the database contains dummy data.
NSO told TechCrunch that the system was only for demonstrating its technology and denied it was exposed because of a security lapse. NSO is still waiting for the Israeli government’s approval to feed cell records into the system. But experts say the system should not have been open to begin with, and that centralized databases of citizens’ location data pose a security and privacy risk.
NSO began work in March on its contact-tracing system, codenamed Fleming.
Fleming is designed to “pour” in confirmed coronavirus test data from the health authorities and phone location data from the cell networks to identify people who may have been exposed to a person with the virus. Anyone who came into close proximity to a person diagnosed with coronavirus would be notified.
The unprotected database was hosted on an Amazon Web Services server in Frankfurt, where the data protection regime is one of the strictest in the world.
It contained about six weeks of location data, from around March 10 to April 23. It also included the specific dates, times and locations of a “target” — the term NSO used in the database to describe people — who may have come into contact with a potentially infected person.
The data also included the duration of the encounter to help score the likelihood of a transmitted infection.
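NSO has not published how Fleming actually weighs these factors. As a purely illustrative sketch, a score that rises with proximity and with the duration of an encounter could be computed along these lines — the 2-meter and 15-minute cutoffs below are assumptions for illustration, not NSO’s parameters:

```python
from dataclasses import dataclass


@dataclass
class Encounter:
    distance_m: float    # estimated distance between the two phones
    duration_min: float  # how long they stayed within that distance


def exposure_score(enc: Encounter,
                   max_distance_m: float = 2.0,
                   min_duration_min: float = 15.0) -> float:
    """Toy risk score in [0, 1]: closer and longer encounters score higher.

    The thresholds (2 m, 15 min) are illustrative assumptions only;
    Fleming's real scoring logic has not been made public.
    """
    if enc.distance_m > max_distance_m:
        return 0.0  # too far apart to count as a contact at all
    proximity = 1.0 - enc.distance_m / max_distance_m
    duration = min(enc.duration_min / min_duration_min, 1.0)
    return proximity * duration


# A 1 m contact lasting 30 minutes outranks a brief encounter at 1.5 m.
close_long = exposure_score(Encounter(distance_m=1.0, duration_min=30.0))
far_brief = exposure_score(Encounter(distance_m=1.5, duration_min=5.0))
```

Any system of this kind, however its weights are chosen, necessarily depends on how precisely the underlying location data can place two phones near each other.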
“NSO Group has successfully developed ‘Fleming’, an innovative, unique and purely analytical system designed to respond to the coronavirus pandemic,” said Oren Ganz, a director at NSO Group. “Fleming has been designed for the benefit of government decision-makers, without compromising individual privacy. This system has been demonstrated worldwide with great transparency to media organizations, and approximately 100 individual countries,” he said.
TechCrunch was also given a demonstration of how the system works.
“This transparent demo, the same shown to individual countries and media organizations, was the one located on the open random server in question, and the very same demo observed today by TechCrunch. All other speculation about this overt, open system is not correct, and does not align with the basic fact this transparent demonstration has been seen by hundreds of people in media and government worldwide,” said Ganz.
John Scott-Railton, a senior researcher at the Citizen Lab, part of the Munk School at the University of Toronto, said that any database storing location data poses a privacy risk.
“Not securing a server would be an embarrassment for a school project,” said Scott-Railton. “For a billion-dollar company to not password protect a secretive project that hopes to handle location and health data suggest a quick and sloppy roll out.”
“NSO’s case is the precedent that proves the problem: rushed COVID-19 tracking efforts will imperil our privacy and online safety,” he said.
Israel’s two tracing systems
As global coronavirus infections began to spike in March, the Israeli government passed an emergency law giving its domestic security service Shin Bet “unprecedented access” to collect vast amounts of cell data from the phone companies to help identify possible infections.
By the end of March, Israeli defense minister Naftali Bennett said the government was working on a new contact tracing system, separate from the one used by Shin Bet.
It was later revealed that NSO was building the second contact-tracing system.
Tehilla Shwartz Altshuler, a privacy expert and a senior fellow at the Israel Democracy Institute, told TechCrunch that she too was given a demonstration of Fleming over a Zoom call in the early days of the outbreak.
Without the authority to obtain cell records, NSO told her that it used location data gathered from advertising platforms, or so-called data brokers. Israeli media also reported that NSO used advertising data for “training” the system.
Data brokers amass and sell vast troves of location data collected from the apps installed on millions of phones. The apps that track your movements and whereabouts are often also selling those locations to data brokers, which then resell the data to advertisers to serve more targeted ads.
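The exact schema of any given broker’s feed varies and is not public; as a hedged illustration, the records they trade are typically keyed on a phone’s resettable advertising identifier and pair it with a coordinate and timestamp, roughly like this (all field names are assumptions for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AdLocationRecord:
    """Illustrative shape of a broker-sourced location record.

    Real broker feeds differ, but commonly key on the device's
    resettable advertising identifier (Apple IDFA / Google AAID),
    which lets records for one phone be linked across many apps.
    """
    ad_id: str        # resettable advertising identifier
    latitude: float
    longitude: float
    timestamp: datetime


record = AdLocationRecord(
    ad_id="38400000-8cf0-11bd-b23e-10b96e40000d",  # made-up example ID
    latitude=32.0853,    # example coordinate (Tel Aviv)
    longitude=34.7818,
    timestamp=datetime(2020, 3, 10, 9, 30, tzinfo=timezone.utc),
)
```

Because the same advertising identifier appears in records sold by many apps, a buyer can stitch together a timeline of one device’s movements without ever knowing the owner’s name — which is what makes such feeds usable for “training” a tracing system.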
NSO denied it used location data from a data broker for its Fleming demo.
“The Fleming demo is not based on real and genuine data,” said Ganz. “The demo is rather an illustration of public obfuscated data. It does not contain any personal identifying information of any sort.”
Since governments began to outline their plans for contact-tracing systems, experts have warned that phone location data is too coarse for the job — cell-tower and advertising-derived fixes can be off by tens or hundreds of meters, far more than the close range that defines an epidemiologically meaningful contact — and so can produce both false positives and false negatives. Currently, NSO’s system appears to rely on this data for its core functions.
“This kind of location data will not get you a reliable measure of whether two people came into close contact,” said Scott-Railton.
NSO’s connection to the Middle East
Israel is not the only government interested in Fleming. Bloomberg reported in March that a d