Ukrainian police arrest multiple Clop ransomware gang suspects

Multiple suspects believed to be linked to the Clop ransomware gang have been detained in Ukraine after a joint operation from law enforcement agencies in Ukraine, South Korea, and the United States.

The Cyber Police Department of the National Police of Ukraine confirmed that six arrests were made after searches at 21 residences in the capital Kyiv and nearby regions. While it’s unclear whether the defendants are affiliates or core developers of the ransomware operation, they are accused of running a “double extortion” scheme, in which victims who refuse to pay the ransom are threatened with the leak of data stolen from their networks prior to their files being encrypted.

“It was established that six defendants carried out attacks of malicious software such as ‘ransomware’ on the servers of American and [South] Korean companies,” alleged Ukraine’s national police force in a statement.

The police also seized equipment from the alleged Clop ransomware gang, which is said to be behind total financial damages of about $500 million. The haul includes computer equipment, several cars, including a Tesla and a Mercedes, and 5 million Ukrainian hryvnia (around $185,000) in cash. The authorities also claim to have successfully shut down the server infrastructure used by the gang members to launch previous attacks.

“Together, law enforcement has managed to shut down the infrastructure from which the virus spreads and block channels for legalizing criminally acquired cryptocurrencies,” the statement added.

These attacks first began in February 2019, when the group attacked four Korean companies and encrypted 810 internal services and personal computers. Since then, Clop — often styled as “Cl0p” — has been linked to a number of high-profile ransomware attacks. These include the breach of U.S. pharmaceutical giant ExecuPharm in April 2020 and the attack on South Korean e-commerce giant E-Land in November that forced the retailer to close almost half of its stores.

Clop is also linked to the ransomware attack and data breach at Accellion, which saw hackers exploit flaws in the IT provider’s File Transfer Appliance (FTA) software to steal data from dozens of its customers. Victims of this breach include Singaporean telecom Singtel, law firm Jones Day, grocery store chain Kroger, and cybersecurity firm Qualys.

At the time of writing, the dark web portal that Clop uses to share stolen data is still up and running, although it hasn’t been updated for several weeks. However, law enforcement typically replaces the targets’ website with their own logo in the event of a successful takedown, which suggests that members of the gang could still be active.

“The Cl0p operation has been used to disrupt and extort organizations globally in a variety of sectors including telecommunications, pharmaceuticals, oil and gas, aerospace, and technology,” said John Hultquist, vice president of analysis at Mandiant’s threat intelligence unit. “The actor FIN11 has been strongly associated with this operation, which has included both ransomware and extortion, but it is unclear if the arrests included FIN11 actors or others who may also be associated with the operation.”

Hultquist said the efforts of the Ukrainian police “are a reminder that the country is a strong partner for the U.S. in the fight against cybercrime and authorities there are making the effort to deny criminals a safe harbor.”

The alleged perpetrators face up to eight years in prison on charges of unauthorized interference in the work of computers, automated systems, computer networks, or telecommunications networks and laundering property obtained by criminal means.

News of the arrests comes as international law enforcement turns up the heat on ransomware gangs. Last week, the U.S. Department of Justice announced that it had seized most of the ransom paid to members of DarkSide by Colonial Pipeline.

Your boss might tell you the office is more secure, but it isn’t

For the past 18 months, employees have enjoyed increased flexibility, and ultimately a better work-life balance, as a result of the mass shift to remote working necessitated by the pandemic. Most don’t want this arrangement, which did away with lengthy commutes and superfluous meetings, to end: Buffer’s 2021 State of Remote Work report shows over 97% of employees would like to continue working remotely at least some of the time.

Companies, including some of the biggest names in tech, appear to have a different outlook and are beginning to demand that staff start to return to the workplace.

While most of the reasoning around this shift back to the office centers on the need for collaboration and socialization, another argument your employer might make is that the office is more secure. After all, we’ve seen an unprecedented rise in cybersecurity threats during the pandemic, from phishing attacks using Covid as bait to ransomware attacks that have crippled entire organizations.

Tessian research shared with TechCrunch shows that while none of the attacks have been linked to staff working remotely, 56% of IT leaders believe their employees have picked up bad cybersecurity behaviors since working from home. Similarly, 70% of IT leaders believe staff will be more likely to follow company security policies around data protection and data privacy while working in the office.

“Despite the fact that this was an emerging issue prior to the pandemic I do believe many organizations will use security as an excuse to get people back into the office, and in doing so actually ignore the cyber risks they are already exposed to,” Matthew Gribben, a cybersecurity expert and former GCHQ consultant, told TechCrunch.

“As we’ve just seen with the Colonial Pipeline attack, all it takes is one user account without MFA enabled to bring down your business, regardless of where the user is sat.”
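Gribben's point about MFA is concrete: a time-based one-time password (TOTP), the most common second factor, is derived from a shared secret and the current time, so it protects an account no matter where the user is sitting. A minimal sketch of the standard RFC 6238 derivation (illustrative only; real deployments use a vetted authenticator library):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 defaults)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at test time T=59:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> 287082
```

Because the code changes every 30 seconds and is bound to a secret the attacker never sees, a stolen password alone is not enough to log in.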

Will Emmerson, CIO at Claromentis, has already witnessed some companies using cybersecurity as a ploy to accelerate the shift to in-person working. “Some organizations are already using cybersecurity as an excuse to get team members to get back into the office,” he says. “Often it’s large firms with legacy infrastructure that relies on a secure perimeter and that haven’t adopted a cloud-first approach.”


The bigger companies can try to argue for a return to the traditional 9-to-5, but we’ve already seen a bunch of smaller startups embrace remote working as a permanent arrangement. Rather, it will be larger and more risk-averse companies, says Craig Hattersley, CTO of cybersecurity startup SOC.OC, a BAE Systems spin-off, that “begrudgingly let their staff work at home throughout the pandemic, so will seize any opportunity to reverse their new policies.”

“Although I agree that some companies will use the increase of cybersecurity threats to demand their employees go back to the office, I think the size and type of organization will determine their approach,” he says. “A lack of direct visibility of individuals by senior management could lead to a fear that staff are not fully managed.”

While some organizations will use cybersecurity as an excuse to get employees back into the workplace, many believe the traditional office is no longer the most secure option. After all, not only have businesses overhauled cybersecurity measures to cater to dispersed workforces over the past year, but we’ve already seen hackers start to refocus their attention on those returning to the post-COVID office.

“There is no guarantee that where a person is physically located will change the trajectory of increasingly complex cybersecurity attacks, or that employees will show a reduction in mistakes because they are sitting within the walls of an office building,” says Dr. Margaret Cunningham, principal research scientist at Forcepoint.

Some businesses will attempt to get all staff back into the workplace, but this is simply no longer viable: as a result of 18 months of home-working, many employees have moved farther away from their employer’s offices, while others, having found themselves more productive and less distracted, will push back against five days of commutes every week. In fact, a recent study shows that almost 40% of U.S. workers would consider quitting if their bosses made them return to the office full time.

That means most employers will have to, whether they like it or not, embrace a hybrid approach going forward, whereby employees work from the office three days a week and spend two days at home, or vice versa.

This, in itself, makes the cybersecurity argument far less viable. Sam Curry, chief security officer at Cybereason, tells TechCrunch: “The new hybrid phase getting underway is unlike the other risks companies encountered.

“We went from working in the office to working from home and now it will be work-from-anywhere. Assume that all networks are compromised and take a least-trust perspective, constantly reducing inherent trust and incrementally improving. To paraphrase Voltaire, perfection is the enemy of good.”

Elisity raises $26M Series A to scale its AI cybersecurity platform

Elisity, a self-styled innovator that provides behavior-based enterprise cybersecurity, has raised $26 million in Series A funding.

The funding round was co-led by Two Bear Capital and AllegisCyber Capital, the latter of which has invested in a number of cybersecurity startups including Panaseer, with previous seed investor Atlantic Bridge also participating.

Elisity, which is led by industry veterans from Cisco, Qualys, and Viptela, says the funding will help it meet growing enterprise demand for its cloud-delivered Cognitive Trust platform, which it claims is the only platform intelligent enough to understand how assets and people connect beyond corporate perimeters.

The platform looks to help organizations transition from legacy access approaches to zero trust, a security model based on maintaining strict access controls and not trusting anyone — even employees — by default, across their entire digital footprint. This enables organizations to adopt a ‘work-from-anywhere’ model, according to the company, which notes that most companies today continue to rely on security and policies based on physical location or low-level networking constructs, such as VLAN, IP and MAC addresses, and VPNs.

Cognitive Trust, the company claims, can analyze the unique identity and context of people, apps and devices, including Internet of Things (IoT) and operational technology (OT), wherever they’re working. Using its AI-driven behavioral intelligence, the company says, the platform can also continuously assess risk and instantly optimize access, connectivity and protection policies.
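To illustrate what continuous, risk-based access decisions look like in general (a hypothetical toy model, not Elisity's actual engine — the signal names, weights and thresholds below are invented), each connection attempt can be scored from behavioral signals and mapped to a policy action:

```python
def risk_score(signals):
    """Toy weighted sum of behavioral risk signals (hypothetical weights)."""
    weights = {
        "new_device": 0.4,        # asset never seen for this identity
        "new_location": 0.3,      # connecting from an unfamiliar network
        "unusual_hour": 0.2,      # outside the user's normal working pattern
        "sensitive_target": 0.1,  # e.g. an OT or IoT management interface
    }
    return sum(w for name, w in weights.items() if signals.get(name))

def access_policy(signals):
    """Map the score to an action; thresholds are illustrative."""
    score = risk_score(signals)
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up authentication"
    return "deny"

print(access_policy({"unusual_hour": True}))                      # allow
print(access_policy({"new_device": True, "new_location": True}))  # deny
```

The point of such a model is that the decision follows the identity and its behavior rather than a network location, which is what distinguishes this approach from VLAN- or IP-based controls.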

“CISOs are facing ever increasing attack surfaces caused by the shift to remote work, reliance on cloud-based services (and often multi-cloud), and the convergence of IT/OT networks,” said Mike Goguen, founder and managing partner at Two Bear Capital. “Elisity addresses all of these problems by not only enacting a zero trust model, but by doing so at the edge and within the behavioral context of each interaction. We are excited to partner with the CEO, James Winebrenner, and his team as they expand the reach of their revolutionary approach to enterprise security.”

Founded in 2018, Elisity, whose competitors include the likes of Vectra AI and Lastline, closed a $7.5 million seed round in August that same year, led by Atlantic Bridge. With its seed round, Elisity began scaling its engineering, sales and marketing teams to ramp up ahead of the platform’s launch.

Now it’s looking to scale in order to meet growing enterprise demand, which comes as many organizations move to a hybrid working model and seek the tools to help them secure distributed workforces. 

“When the security perimeter is no longer the network, we see an incredible opportunity to evolve the way enterprises connect and protect their people and their assets, moving away from strict network constructs to identity and context as the basis for secure access,” said Winebrenner. 

“With Elisity, customers can dispense with the complexity, cost and protracted timeline enterprises usually encounter. We can onboard a new customer in as little as 45 minutes, rather than months or years, moving them to an identity-based access policy, and expanding to their cloud and on-prem[ise] footprints over time without having to rip and replace existing identity providers and network infrastructure investments. We do this without making tradeoffs between productivity for employees and the network security posture.”

Elisity, which is based in California, currently employs around 30 staff. However, it currently has no women in its leadership team, nor on its board of directors. 

Supreme Court revives LinkedIn case to protect user data from web scrapers

The Supreme Court has given LinkedIn another chance to stop a rival company from scraping personal information from users’ public profiles, a practice LinkedIn says should be illegal but one that could have broad ramifications for internet researchers and archivists.

The Microsoft-owned social network argued that the mass scraping of its users’ profiles was in violation of the Computer Fraud and Abuse Act, or CFAA, which prohibits accessing a computer without authorization.

LinkedIn lost its case against hiQ Labs in 2019 after the U.S. Ninth Circuit Court of Appeals ruled that the CFAA does not prohibit a company from scraping data that is publicly accessible on the internet.

hiQ Labs, which uses public data to analyze employee attrition, argued at the time that a ruling in LinkedIn’s favor “could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.” (hiQ Labs has also been sued by Facebook, which claims it scraped public data from Facebook and Instagram, as well as Amazon, Twitter, and YouTube.)

The Supreme Court said it would not take on the case, but instead ordered the appeals court to hear the case again in light of its recent ruling, which found that a person cannot violate the CFAA if they improperly access data on a computer they have permission to use.

The CFAA was once dubbed the “worst law” in the technology law books by critics who have long argued that its outdated and vague language failed to keep up with the pace of the modern internet.

Journalists and archivists have long scraped public data as a way to save and archive copies of old or defunct websites before they shut down. But other cases of web scraping have sparked anger and concerns over privacy and civil liberties. In 2019, a security researcher scraped millions of Venmo transactions, which the company does not make private by default. Clearview AI, a controversial facial recognition startup, claimed it scraped over 3 billion profile photos from social networks without their permission.
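The mechanics are simple, which is part of why the practice is so hard to police: anything rendered in public HTML can be pulled out with a few lines of parsing code and no special access. A toy sketch using only Python's standard library (the markup and `profile-` class names are hypothetical):

```python
from html.parser import HTMLParser

class ProfileScraper(HTMLParser):
    """Toy scraper: collects text inside elements whose class starts with 'profile-'."""
    def __init__(self):
        super().__init__()
        self._field = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "class" and value and value.startswith("profile-"):
                self._field = value[len("profile-"):]

    def handle_data(self, data):
        if self._field and data.strip():
            self.fields[self._field] = data.strip()
            self._field = None

# In practice the page would be fetched over plain HTTP, like any browser would.
page = '<div class="profile-name">Jane Doe</div><div class="profile-title">Engineer</div>'
scraper = ProfileScraper()
scraper.feed(page)
print(scraper.fields)  # {'name': 'Jane Doe', 'title': 'Engineer'}
```

Nothing here requires credentials or bypasses any technical barrier, which is exactly the line the Ninth Circuit was asked to draw under the CFAA.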

 

Fraud protection startup nSure AI raises $6.8M in seed funding

Fraud protection startup nSure AI has raised $6.8 million in seed funding, led by DisruptiveAI, Phoenix Insurance, AXA-backed venture builder Kamet, Moneta Seeds and private investors.

The round will help the company bolster the predictive AI and machine learning algorithms that power nSure AI’s “first of its kind” fraud protection platform. Prior to this round, the company received $550,000 in pre-seed funding from Kamet in March 2019.

The Tel Aviv-headquartered startup, which currently has 16 employees, provides fraud detection for high-risk digital goods, such as electronic gift cards, airline tickets, software, and games. While most fraud detection tools analyze each online transaction in an attempt to decide which purchases to approve and decline, nSure AI’s risk engine leverages deep learning techniques to accurately identify fraudulent transactions.

nSure AI, which is backed by insurance company AXA, said it has a 98% approval rating on average for purchases, compared to an industry average of 80%, allowing retailers to recapture nearly $100 billion a year in revenue lost by declining legitimate customers. The company is so confident in its technology that it will accept liability for any fraudulent transaction allowed by the platform.

nSure AI’s founders Alex Zeltcer and Ziv Isaiah started the company after experiencing the unique challenges faced by retailers of digital assets. In the first week of their online gift card business, they found that 40% of sales were fraudulent, resulting in chargebacks. After no other fraud detection service met their needs, the founders began developing their own platform for supporting the sale of high-risk digital goods.

Alex Zeltcer, co-founder and chief executive, said the investment “enables us to register thousands of new merchants, who can feel confident selling higher-risk digital goods, without accepting fraud as a part of business.”

nSure AI, which currently monitors and manages millions of transactions every month, has approved close to $1 billion in volume since going live in 2019.

Google will let enterprises store their Google Workspace encryption keys

As ubiquitous as Google Docs has become in the last year alone, a major criticism often overlooked by the countless workplaces that use it is that it isn’t end-to-end encrypted, allowing Google — or any requesting government agency — access to a company’s files. But Google is finally addressing that key complaint with a round of updates that will let customers shield their data by storing their own encryption keys.

Google Workspace, the company’s enterprise offering that includes Google Docs, Slides and Sheets, is adding client-side encryption so that a company’s data will be indecipherable to Google.

Companies using Google Workspace can store their encryption keys with one of four partners for now: Flowcrypt, Futurex, Thales, or Virtru, which are compatible with Google’s specifications. The move is largely aimed at regulated industries — like finance, healthcare, and defense — where intellectual property and sensitive data are subject to intense privacy and compliance rules.

The real magic lands later in the year when Google will publish details of an API that will let enterprise customers build their own in-house key service, allowing workplaces to retain direct control of their encryption keys. That means if the government wants a company’s data, it has to knock on the company’s front door rather than sneak around the back by serving the key holder with a legal demand.
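The underlying idea is that whoever holds the key controls the data: the provider stores only ciphertext, which is indecipherable on its own. The sketch below is a toy illustration with hypothetical names, using a SHA-256 counter-mode keystream purely for demonstration; a real client-side encryption scheme would use a vetted AEAD cipher such as AES-GCM, keyed through the customer's key service:

```python
import hashlib
import secrets

def keystream_xor(key, nonce, data):
    """Toy stream cipher: XOR data with a SHA-256-in-counter-mode keystream.
    Illustration only -- use a vetted AEAD (e.g. AES-GCM) in practice."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The customer generates and holds the key; the provider stores only ciphertext.
customer_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
ciphertext = keystream_xor(customer_key, nonce, b"Q3 board minutes")

# With the key, the round trip works; without it, the stored blob is useless.
assert keystream_xor(customer_key, nonce, ciphertext) == b"Q3 board minutes"
```

Destroying or withholding `customer_key` is what makes a legal demand served on the storage provider come back empty-handed.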

Google has published technical details of how the client-side encryption feature works; the feature will roll out as a beta in the coming weeks.

Tech companies giving their corporate customers control of their own encryption keys has been a growing trend in recent years. Slack and cloud vendor Egnyte are ahead of the pack here, allowing their enterprise users to store their own encryption keys and effectively cutting themselves out of the surveillance loop. But Google has dragged its feet on encryption for so long that startups are working to build alternatives that bake in encryption from the ground up.

Google said it’s also pushing out new trust rules for how files are shared in Google Drive to give administrators more granularity on how different levels of sensitive files can be shared, and new data classification labels to mark documents with a level of sensitivity such as “secret” or “internal”.

The company said it’s improving its malware protection efforts by now blocking phishing and malware shared from within organizations. The aim is to help cut down on employees mistakenly sharing malicious documents.

7 new security features Apple quietly announced at WWDC

Apple went big on privacy during its Worldwide Developer Conference (WWDC) keynote this week, showcasing features from on-device Siri audio processing to a new privacy dashboard for iOS that makes it easier than ever to see which apps are collecting your data and when.

While typically vocal about security during the Memoji-filled, two-hour-long(!) keynote, the company also quietly introduced several new security and privacy-focused features during its WWDC developer sessions. We’ve rounded up some of the most interesting — and important.

Passwordless login with iCloud Keychain

Apple is the latest tech company taking steps to ditch the password. During its “Move beyond passwords” developer session, it previewed Passkeys in iCloud Keychain, a method of passwordless authentication powered by WebAuthn, and Face ID and Touch ID.

The feature, which will ultimately be available in both iOS 15 and macOS Monterey, means you no longer have to set a password when creating an account on a website or app. Instead, you’ll simply pick a username and then use Face ID or Touch ID to confirm it’s you. The passkey is stored in your keychain and synced across your Apple devices using iCloud — so you don’t have to remember it, nor do you have to carry around a hardware authenticator key.

“Because it’s just a single tap to sign in, it’s simultaneously easier, faster and more secure than almost all common forms of authentication today,” said Garrett Davidson, an Apple authentication experience engineer. 

While it’s unlikely to be available on your iPhone or Mac any time soon — Apple says the feature is still in its ‘early stages’ and it’s currently disabled by default — the move is another sign of the growing momentum behind eliminating passwords, which are prone to being forgotten, reused across multiple services and exposed through phishing attacks. Microsoft previously announced plans to make Windows 10 password-free, and Google recently confirmed that it’s working towards “creating a future where one day you won’t need a password at all”.
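The security claim behind passkeys rests on standard public-key challenge-response: the server stores only a public key and verifies a signature over a fresh challenge, so there is no reusable secret to phish or leak. A stripped-down sketch of that idea using textbook RSA with deliberately tiny primes (illustration only; WebAuthn uses proper key sizes, padding and attestation, none of which appear here):

```python
import hashlib

# Toy RSA parameters (tiny primes -- never use sizes like this in practice).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: stays on the device

def sign(challenge):
    """'Authenticator' side: sign the server's challenge with the private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge, signature):
    """Server side: check the signature using only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = b"server-nonce-1337"
assert verify(challenge, sign(challenge))        # fresh challenge verifies
assert not verify(b"other-nonce", sign(challenge))  # signatures don't transfer
```

Because each login signs a one-time challenge, a phishing site that captures the exchange learns nothing it can replay elsewhere — which is the property passwords fundamentally lack.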

Microphone indicator in macOS

macOS has a new indicator to tell you when the microphone is on. (Image: Apple)

Since the introduction of iOS 14, iPhone users have been able to keep an eye on which apps are accessing their microphone via a green or orange dot in the status bar. Now it’s coming to the desktop too.

In macOS Monterey, users will be able to see which apps are accessing their Mac’s microphone in Control Center, MacRumors reports, which will complement the existing hardware-based green light that appears next to a Mac’s webcam when the camera is in use.

Secure paste

iOS 15, which will include a bunch of privacy-bolstering tools from Mail Privacy Protection to App Privacy Reports, is also getting a feature called Secure Paste that will help to shield your clipboard data from other apps.

This feature will enable users to paste content from one app to another without the second app being able to access the information on the clipboard until you paste it. This is a significant improvement over iOS 14, which would notify users when an app read data from the clipboard but did nothing to prevent it from happening.

“With secure paste, developers can let users paste from a different app without having access to what was copied until the user takes action to paste it into their app,” Apple explains. “When developers use secure paste, users will be able to paste without being alerted via the [clipboard] transparency notification, helping give them peace of mind.”

While this feature sounds somewhat insignificant, it’s being introduced following a major privacy issue that came to light last year. In March 2020, security researchers revealed that dozens of popular iOS apps — including TikTok — were “snooping” on users’ clipboard without their consent, potentially accessing highly sensitive data.

Advanced Fraud Protection for Apple Card

Payments fraud is more prevalent than ever as a result of the pandemic, and Apple is looking to do something about it. As first reported by 9to5Mac, the company has previewed Advanced Fraud Protection, a feature that will let Apple Card users generate new card numbers in the Wallet app.

While details remain thin — the feature isn’t live in the first iOS 15 developer beta — Apple’s explanation suggests that Advanced Fraud Protection will make it possible to generate new security codes, the three-digit number you enter at checkout, when making online purchases.

“With Advanced Fraud Protection, Apple Card users can have a security code that changes regularly to make online Card Number transactions even more secure,” the brief explainer reads. We’ve asked Apple for some more information. 

‘Unlock with Apple Watch’ for Siri requests

As a result of the widespread mask-wearing necessitated by the pandemic, Apple introduced an ‘Unlock with Apple Watch’ option in iOS 14.5 that enabled users to unlock their iPhone and authenticate Apple Pay payments using an Apple Watch instead of Face ID.

The scope of this feature is expanding with iOS 15, as the company has confirmed that users will soon be able to use this alternative authentication method for Siri requests, such as adjusting phone settings or reading messages. Currently, users have to enter a PIN, password or use Face ID to do so.

“Use the secure connection to your Apple Watch for Siri requests or to unlock your iPhone when an obstruction, like a mask, prevents Face ID from recognizing your face,” Apple explains. “Your watch must be passcode protected, unlocked, and on your wrist close by.”

Standalone security patches

To ensure iPhone users who don’t want to upgrade to iOS 15 straight away are up to date with security updates, Apple is going to start decoupling patches from feature updates. When iOS 15 lands later this year, users will be given the option to update to the latest version of iOS or to stick with iOS 14 and simply install the latest security fixes. 

“iOS now offers a choice between two software update versions in the Settings app,” Apple explains (via MacRumors). “You can update to the latest version of iOS 15 as soon as it’s released for the latest features and most complete set of security updates. Or continue on ‌iOS 14‌ and still get important security updates until you’re ready to upgrade to the next major version.”

This feature sees Apple following in the footsteps of Google, which has long rolled out monthly security patches to Android users.

‘Erase all contents and settings’ for Mac

Wiping a Mac has been a laborious task that has required you to erase your device completely then reinstall macOS. Thankfully, that’s going to change. Apple is bringing the “erase all contents and settings” option that’s been on iPhones and iPads for years to macOS Monterey.

The option will let you factory reset your MacBook with just a click. “System Preferences now offers an option to erase all user data and user-installed apps from the system, while maintaining the operating system currently installed,” Apple says. “Because storage is always encrypted on Mac systems with Apple Silicon or the T2 chip, the system is instantly and securely ‘erased’ by destroying the encryption keys.”

Volkswagen says a vendor’s security lapse exposed 3.3 million drivers’ details

Volkswagen says more than 3.3 million customers had their information exposed after one of its vendors left a cache of customer data unsecured on the internet.

The car maker said in a letter that the vendor, used by Volkswagen, its subsidiary Audi, and authorized dealers in the U.S. and Canada, left the customer data, which spans 2014 to 2019, unprotected over a nearly two-year window between August 2019 and May 2021.

The data, which Volkswagen said was gathered for sales and marketing, contained personal information about customers and prospective buyers, including their name, postal and email addresses, and phone number.

But more than 90,000 customers across the U.S. and Canada also had more sensitive data exposed, including information relating to loan eligibility. The letter said most of the sensitive data was driver’s license numbers, but that a “small” number of records also included a customer’s date of birth and Social Security numbers.

Volkswagen did not name the vendor, and a company spokesperson did not immediately comment.

It’s the latest security incident involving driver’s license numbers in recent months. Insurance giants Metromile and Geico admitted earlier this year that their quote forms had been abused by scammers trying to obtain driver’s license numbers. Several other car insurance companies have also reported similar incidents involving the theft of driver’s license numbers. Geico said it was likely an effort by scammers to file and cash fraudulent unemployment benefits in another person’s name.

Volkswagen’s letter, however, did not say if the company had evidence that the data exposed by the vendor was misused.

 

Security flaws found in Samsung’s stock mobile apps

A mobile security startup has found seven security flaws in Samsung’s pre-installed mobile apps, which it says if abused could have allowed attackers broad access to a victim’s personal data.

Oversecured said the vulnerabilities were found in several apps and components bundled with Samsung phones and tablets. Oversecured founder Sergey Toshin told TechCrunch that the vulnerabilities were verified on a Samsung Galaxy S10+ but that all Samsung devices could be potentially affected because the baked-in apps are responsible for system functionality.

Toshin said the vulnerabilities could have allowed a malicious app on the same device to steal a victim’s photos, videos, contacts, call records and messages, and change settings “without any user consent or notice” by hijacking the permissions from Samsung’s stock apps.

One of the flaws could have allowed the theft of data by exploiting a vulnerability in Samsung’s Secure Folder app, which has a “large set” of rights across the device. In a proof-of-concept, Toshin showed the bug could be used to steal contacts data. Another bug in Samsung’s Knox security software could have been abused to install other malicious apps, while a bug in Samsung DeX could have been used to scrape data from user notifications from apps, email inboxes, and messages.

Oversecured published technical details of the vulnerabilities in a blog post, and said it reported the bugs to Samsung, which fixed the flaws.

Samsung confirmed the flaws affected “selected” Galaxy devices but would not provide a list of specific devices. “There have been no known reported issues globally and users should be assured that their sensitive information was not at risk,” the company said, though it provided no evidence for this claim. “We addressed the potential vulnerability by developing and issuing security patches via software update in April and May, 2021 as soon as we identified this issue.”

The startup, which launched earlier this year after self-funding $1 million in bug bounty payouts, uses automation to search for vulnerabilities in Android code. Toshin has previously found similar security flaws in TikTok and in Android’s Google Play app.

#android, #apps, #computing, #google-play, #knox, #mobile-phones, #mobile-security, #oversecured, #russia, #samsung, #security, #security-software, #smartphones, #technology, #vulnerability

RSA spins off fraud and risk intelligence unit as Outseer

RSA Security has spun out its fraud and risk intelligence business into a standalone company called Outseer that will double down on payment security tools amid an “unprecedented” rise in fraudulent transactions.

Led by CEO Reed Taussig, who was appointed head of RSA’s Anti-Fraud Business Unit last year after previously serving as CEO of ThreatMetrix, the new company will focus solely on fraud detection and management and payments authentication services.

Outseer will continue to operate under the RSA umbrella and will inherit three core services, which are already used by more than 6,000 financial institutions, from the company: Outseer Fraud Manager (formerly RSA Adaptive Authentication), a risk-based account monitoring service; 3-D Secure (formerly Adaptive Authentication for eCommerce), a card-not-present and digital payment authentication mapping service; and FraudAction, which detects and takes down phishing sites, dodgy apps and fraudulent social media pages.

Outseer says its product portfolio is supported by deep investments in data and science, including a global network of verified fraud and transaction data, and a risk engine that the company claims delivers 95% fraud detection rates.

Commenting on the spinout, Taussig said: “Outseer is the culmination of decades of science-driven innovation in anti-fraud and payments authentication solutions. As the digital economy continues to deepen, the Outseer mission to liberate the world from transactional fraud is essential. Our role as a revenue enabler for the global economy will only strengthen as every digital business continues to scale.”

RSA, meanwhile, will continue to focus on integrated risk management and security products, including Archer for risk management, NetWitness for threat detection and response, and SecurID for identity and access management (IAM) capabilities.

The spinout comes less than a year after private equity firm Symphony Technology Group (STG), which recently bought FireEye’s product business for $1.2 billion, acquired RSA Security from Dell Technologies for more than $2 billion. Dell had previously acquired RSA as part of its purchase of EMC in 2016.

It also comes amid a huge rise in online fraud fueled by the COVID-19 pandemic. The Federal Trade Commission said in March that more than 217,000 Americans had filed a coronavirus-related fraud report since January 2020, with losses to COVID-linked fraud totaling $382 million. Similarly, the Consumer Financial Protection Bureau fielded 542,300 fraud complaints in 2020, a 54% increase over 2019.

RSA said that with the COVID-19 pandemic having fueled “unprecedented” growth in fraudulent transactions, Outseer will focus its innovation on payments authentication, mapping to the EMV 3-D Secure 2.x payment standard, and incorporating new technology integrations across the payments and commerce ecosystem. 

“Outseer’s reason for being isn’t just focused on eliminating payments and account fraud,” Taussig added. “These fraudulent transactions are often the pretext for more sinister drug and human trafficking, terrorism, and other nefarious behavior. Outseer has the ability to help make the world a safer place.”

Valuation information for Outseer was not disclosed, nor were headcount figures mentioned in the spinout announcement. Outseer didn’t immediately respond to TechCrunch’s request for more information. 

#3-d, #access-management, #articles, #ceo, #consumer-financial-protection-bureau, #crime, #deception, #e-commerce, #emc, #emv, #federal-trade-commission, #fireeye, #fraud, #head, #identity-theft, #online-fraud, #payments, #phishing, #risk-management, #rsa-security, #security, #symphony-technology-group, #threatmetrix

Recorded Future launches its new $20M Intelligence Fund for early-stage startups

Threat intelligence company Recorded Future is launching a $20 million fund for early-stage startups developing novel data intelligence tools.

The Intelligence Fund will provide seed and Series A funding to startups that already have venture capital funding, Recorded Future says, as well as equip them with resources to help with the development and integration of intelligence applications in order to accelerate their go-to-market strategy. 

Recorded Future, which provides customers with information to help them better understand the external cyber threats they are facing, will invest in startups that aim to tackle significant problems that require novel approaches using datasets and collection platforms, which the company says could be anything from technical internet sensors to satellites. It’s also keen to invest in startups building intelligence analysis toolsets that make use of technologies such as artificial intelligence and machine learning, as well as intelligence-driven applications that can be integrated into its own Intelligence Platform and ecosystem.

Recorded Future co-founder and chief executive Christopher Ahlberg said: “In a world of aggressive uncertainty, intelligence is the only equalizer. With the launch of the Intelligence Fund, we are investing in the next generation of entrepreneurs who share our vision for securing the world with intelligence.” 

So far, the Intelligence Fund has invested in two companies, the first being SecurityTrails, which provides customers with a comprehensive overview of current and historical domain and IP address data. The second investment went to Gemini Advisory, a fraud intelligence platform specializing in finding compromised data on the dark web, which Recorded Future went on to acquire earlier this year for $52 million in a bid to bolster its own threat intelligence capabilities. 

Recorded Future told TechCrunch that future investments could also be made with an eye to acquiring, but added that funding could also be given purely on the basis that the startup would make a good business or technology partner. Recorded Future was itself acquired by private equity firm Insight Partners back in 2019 for $780 million. The acquisition effectively bought out the company’s earlier investors, including Google’s venture arm GV, and In-Q-Tel, the non-profit venture arm of the U.S. intelligence community.

Commenting on the launch of the fund, Michael Triplett, managing partner at Insight Partners, said: “Cyberattacks continue to impact global enterprises across the globe, and we’re excited to see Recorded Future invest in intelligence startups tackling the business-critical issues that organizations face today. 

“The Intelligence Fund will provide the resources needed by entrepreneurs to build applications with data and mathematics at the core.” 

#christopher-ahlberg, #computing, #crunchbase, #dark-web, #entrepreneurship, #finance, #information-technology, #insight-partners, #machine-learning, #managing-partner, #prediction, #recorded-future, #security, #startup-company, #startups

Aserto announces $5.1M seed to build authorization as a service

Aserto, a new startup from a couple of tech industry vets who want to build an Authorization as a Service solution, announced a $5.1 million seed round today from Costanoa Ventures, Heavybit Industries and several industry luminaries.

The company’s two founders, CEO Omri Gazitt and CTO Gert Drapers, have decades of experience building some of the industry’s identity building blocks, including SAML, OAuth 2.0 and OpenID. As the two considered what to do next last year, they felt that authorization would be a natural extension of their work in identity, and an area where there were few good solutions for developers.

“If you look at authorization, it really hasn’t moved forward at all. The access part is really stuck in the world of the 2000s. And we wanted to figure out essentially what authorization would look like in the age of SaaS and cloud. We feel like that’s another 10-year mission with a lot of pain right now, and a lot of value that we can deliver,” Gazitt told me.

While there are other early-stage startups attacking a similar problem, Gazitt believes their experience gives them a leg up, and he sees it as a critical area for developers. “If you think about authorization, it really is in the critical path of every application request. Every time I send your SaaS application a request, that authorization system has to be live to 100% availability, and it has to do its work in probably a millisecond of latency budget. Otherwise it’s putting too much burden on the application,” he explained.

What the company is doing is creating a sophisticated service that does much of the work for developers, giving them fine-grained, role-based access control driven by policies, using what they call a “policy-as-code approach to authoring, editing, storing, versioning, building, deploying and managing authorization rules.” The solution is built using the CNCF Open Policy Agent (OPA) project.
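Aserto’s actual engine builds on OPA, whose policies are written in Rego, but the policy-as-code idea can be illustrated with a minimal Python sketch: the rules live in versionable data, and every request is checked against them at runtime. All names below are hypothetical, not Aserto’s or OPA’s API.

```python
# Minimal policy-as-code sketch (hypothetical names; not Aserto's or OPA's API).
# The policy is plain data, so it can be authored, reviewed, versioned,
# and deployed through the same pipeline as application code.
POLICY = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Decide a single request; in production this check sits in the
    critical path, so it must answer within a tight latency budget."""
    return action in POLICY.get(role, set())
```

A real policy engine adds context (resource, tenant, user attributes) and evaluates compiled rules, but the request-time check has this shape: `is_allowed("editor", "delete")` evaluates to `False`.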

For now, the company is still working with early customers, but it is also expanding the private beta today to include additional companies that could benefit from this kind of solution.

Casey Aylward, who is leading the investment at Costanoa, sees a wide-open space and an experienced team ready to attack it. “I get really excited thinking about what are these big ecosystem plays in the open source world? Where should we be investing? And I think this is going to be one of them, and the way Omri and Gert are thinking about the problem and approaching the ecosystem is really, really key,” she said.

The company launched in August 2020, but it was really the culmination of many discussions that Aylward and Gazitt had over the previous couple of years around authorization and how to attack the problem. In addition to the two founders, they have six employees spread across four continents.

When it comes to diversity, Gazitt and Drapers are both immigrants and their lead investor is female, bringing an element of diversity to the company from the start, but it’s also something they are thinking about as they build the company. Having Aylward and other women involved certainly helps bring that to the forefront.

“It’s quite frankly super valuable that Casey [and other women investors on the team, Martina Lauchengco, GP at Costanoa, and Dana Oshiro, GP/GM at Heavybit] can basically project the point of view that we’re not just a bunch of men sitting around the table with solutions, so that’s super helpful, and […] you can’t start thinking about it too early. There’s no such thing as too early with diversity,” Gazitt said.

While Gazitt and Drapers both work in Seattle, their team is spread far and wide, and he says the plan is to be a remote-first company. In fact, he has spent a lot of time talking to GitLab and HashiCorp, two companies that have successfully built remote-first organizations, to learn more about how to do that right.

#aserto, #authorization, #casey-aylward, #costanoa-ventures, #developer, #funding, #recent-funding, #security, #startups, #tc

Ring won’t say how many users had footage obtained by police

Ring gets a lot of criticism, not just for its massive surveillance network of home video doorbells and its problematic privacy and security practices, but also for giving that doorbell footage to law enforcement. While Ring is making moves towards transparency, the company refuses to disclose how many users had their data given to police.

The video doorbell maker, acquired by Amazon in 2018, has partnerships with at least 1,800 U.S. police departments (and growing) that can request camera footage from Ring doorbells. Prior to a change this week, any police department that Ring partnered with could privately request doorbell camera footage from Ring customers for an active investigation. Ring will now let its police partners publicly request video footage from users through its Neighbors app.

The change ostensibly gives Ring users more control over when police can access their doorbell footage, but it ignores privacy concerns that police can access users’ footage without a warrant.

Civil liberties advocates and lawmakers have long warned that police can obtain camera footage from Ring users through a legal back door because Ring’s sprawling network of doorbell cameras are owned by private users. Police can still serve Ring with a legal demand, such as a subpoena for basic user information, or a search warrant or court order for video content, assuming there is evidence of a crime.

Ring received over 1,800 legal demands during 2020, more than double the number from a year earlier, according to a transparency report that Ring quietly published in January. Ring does not disclose sales figures but says it has “millions” of customers. But the report leaves out context that most transparency reports include: how many users or accounts had footage given to police when Ring was served with a legal demand?

When reached, Ring declined to say how many users had footage obtained by police.

That number of users or accounts subject to searches is not inherently secret, but an obscure side effect of how companies decide — if at all — to disclose when the government demands user data. Though they are not obligated to, most tech companies publish transparency reports once or twice a year to show how often user data is obtained by the government.

Transparency reports were a way for companies subject to data requests to push back against damning allegations of intrusive bulk government surveillance by showing that only a fraction of a company’s users are subject to government demands.

But context is everything. Facebook, Apple, Microsoft, Google, and Twitter all reveal how many legal demands they receive, but they also specify how many users or accounts had data turned over. In some cases, the number of users or accounts affected can be two or three times the number of demands received.

Ring’s parent, Amazon, is a rare exception among the big tech giants in that it does not break out the specific number of users whose information was turned over to law enforcement.

“Ring is ostensibly a security camera company that makes devices you can put on your own homes, but it is increasingly also a tool of the state to conduct criminal investigations and surveillance,” Matthew Guariglia, policy analyst at the Electronic Frontier Foundation, told TechCrunch.

Guariglia added that Ring could release not only the number of users subject to legal demands, but also how many users have previously responded to police requests through the app.

Ring users can opt out of receiving requests from police, but this option would not stop law enforcement from obtaining a legal order from a judge for your data. Users can also switch on end-to-end encryption to prevent anyone other than the user, including Ring, from accessing their videos.

#amazon, #apple, #articles, #electronic-frontier-foundation, #encryption, #facebook, #google, #hardware, #judge, #law-enforcement, #microsoft, #neighbors, #operating-systems, #privacy, #ring, #security, #smart-doorbell, #software, #terms-of-service, #transparency-report

Maryland and Montana are restricting police access to DNA databases

Maryland and Montana have become the first U.S. states to pass laws that make it tougher for law enforcement to access DNA databases.

The new laws, which aim to safeguard the genetic privacy of millions of Americans, focus on consumer DNA databases, such as 23andMe, Ancestry, GEDmatch and FamilyTreeDNA, all of which let people upload their genetic information and use it to connect with distant relatives and trace their family tree. While popular — 23andMe has more than three million users, and GEDmatch more than one million — many are unaware that some of these platforms share genetic data with third parties, from the pharmaceutical industry and scientists to law enforcement agencies.

When used by law enforcement through a technique known as forensic genetic genealogy searching (FGGS), officers can upload DNA evidence found at a crime scene to make connections on possible suspects, the most famous example being the identification of the Golden State Killer in 2018. This saw investigators upload a DNA sample taken at the time of a 1980 murder linked to the serial killer into GEDmatch and subsequently identify distant relatives of the suspect — a critical breakthrough that led to the arrest of Joseph James DeAngelo.

While law enforcement agencies have seen success in using consumer DNA databases to aid with criminal investigations, privacy advocates have long warned of the dangers of these platforms. Not only can these DNA profiles help trace distant ancestors, but the vast troves of genetic data they hold can divulge a person’s propensity for various diseases, predict addiction and drug response, and even be used by companies to create images of what they think a person looks like.

While Ancestry and 23andMe have kept their genetic databases closed to law enforcement without a warrant, GEDmatch (which was acquired by a crime scene DNA company in December 2019) and FamilyTreeDNA have previously shared their databases with investigators.

To ensure the genetic privacy of the accused and their relatives, Maryland will, starting October 1, require law enforcement to get a judge’s sign-off before using genetic genealogy, and will limit its use to serious crimes like murder, kidnapping, and human trafficking. It also says that investigators can only use databases that explicitly tell users that their information could be used to investigate crimes. 

In Montana, where the new rules are somewhat narrower, law enforcement would need a warrant before using a DNA database unless the users waived their rights to privacy.

The laws “demonstrate that people across the political spectrum find law enforcement use of consumer genetic data chilling, concerning and privacy-invasive,” said Natalie Ram, a law professor at the University of Maryland. “I hope to see more states embrace robust regulation of this law enforcement technique in the future.”

The introduction of these laws has also been roundly welcomed by privacy advocates, including the Electronic Frontier Foundation. Jennifer Lynch, surveillance litigation director at the EFF, described the restrictions as a “step in the right direction,” but called for more states — and the federal government — to crack down further on FGGS.

“Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it and the unbridled discretion of law enforcement to search it,” Lynch said.

“Companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country.”

A spokesperson for 23andMe told TechCrunch: “We fully support legislation that provides consumers with stronger privacy protections. In fact we are working on legislation in a number of states to increase consumer genetic privacy protections. Customer privacy and transparency are core principles that guide 23andMe’s approach to responding to legal requests and maintaining customer trust. We closely scrutinize all law enforcement and regulatory requests and we will only comply with court orders, subpoenas, search warrants or other requests that we determine are legally valid. To date we have not released any customer information to law enforcement.”

GEDmatch and FamilyTreeDNA, both of which opt users into law enforcement searches by default, told the New York Times that they have no plans to change their existing policies around user consent in response to the new regulation. 

Ancestry did not immediately comment.

#23andme, #ancestry, #dna, #electronic-frontier-foundation, #federal-government, #gedmatch, #genetics, #health, #judge, #law-enforcement, #maryland, #montana, #privacy, #security, #the-new-york-times, #united-states

Network security startup ExtraHop skips and jumps to $900M exit

Last year, Seattle-based network security startup ExtraHop was riding high, quickly approaching $100 million in ARR and even making noises about a possible IPO in 2021. But there will be no IPO, at least for now, as the company announced this morning it has been acquired by a pair of private equity firms for $900 million.

The firms, Bain Capital Private Equity and Crosspoint Capital Partners, are buying a security solution that provides controls across a hybrid environment, something that could be useful as more companies find themselves in a position where they have some assets on-site and some in the cloud.

The company is part of the narrower Network Detection and Response (NDR) market. According to Jesse Rothstein, ExtraHop’s chief technology officer and co-founder, it’s a technology that is suited to today’s threat landscape. “I will say that ExtraHop’s north star has always really remained the same, and that has been around extracting intelligence from all of the network traffic in the wire data. This is where I think the network detection and response space is particularly well-suited to protecting against advanced threats,” he told TechCrunch.

The company uses analytics and machine learning to figure out if there are threats and where they are coming from, regardless of how customers are deploying infrastructure. Rothstein said he envisions a world where environments have become more distributed with less defined perimeters and more porous networks.

“So the ability to have this high quality detection and response capability utilizing next generation machine learning technology and behavioral analytics is so very important,” he said.

Max de Groen, managing partner at Bain, says his company was attracted to the NDR space, and saw ExtraHop as a key player. “As we looked at the NDR market, ExtraHop, which […] has spent 14 years building the product, really stood out as the best individual technology in the space,” de Groen told us.

Security remains a frothy market with lots of growth potential. We continue to see a mix of startups and established platform players jockeying for position, and private equity firms often try to establish a package of services. Last week, Symphony Technology Group bought FireEye’s product group for $1.2 billion, just a couple of months after snagging McAfee’s enterprise business for $4 billion as it tries to cobble together a comprehensive enterprise security solution.

#bain-capital, #cloud, #crosspoint-capital, #ec-cloud-and-enterprise-infrastructure, #ec-enterprise-applications, #enterprise, #exit, #extrahop, #fundings-exits, #ma, #mergers-and-acquisitions, #private-equity, #seattle, #security, #startups

CISA launches platform to let hackers report security bugs to US federal agencies

The Cybersecurity and Infrastructure Security Agency has launched a vulnerability disclosure program allowing ethical hackers to report security flaws to federal agencies.

The platform, launched with the help of cybersecurity companies Bugcrowd and Endyna, will allow civilian federal agencies to receive, triage and fix security vulnerabilities from the wider security community.

The move to launch the platform comes less than a year after the federal cybersecurity agency, better known as CISA, directed the civilian federal agencies it oversees to develop and publish their own vulnerability disclosure policies. These policies set the rules of engagement for security researchers by outlining which online systems can be tested (and how), and which are off-limits.

It’s not uncommon for private companies to run VDP programs to allow hackers to report bugs, often in conjunction with a bug bounty that pays hackers for their work. But while the U.S. Department of Defense has for years warmed to hackers, the civilian federal government has been slower to follow.

Bugcrowd, which last year raised $30 million at Series D, said the platform will “give agencies access to the same commercial technologies, world-class expertise, and global community of helpful ethical hackers currently used to identify security gaps for enterprise businesses.”

The platform will also help CISA share information about security flaws between other agencies.

The platform launches after a bruising few months for government cybersecurity, including a Russian-led espionage campaign against at least nine U.S. federal government agencies by hacking software house SolarWinds, and a China-linked cyberattack that backdoored thousands of Microsoft Exchange servers, including in the federal government.

#bugcrowd, #cisa, #computer-security, #computing, #cyberattack, #cybercrime, #cyberwarfare, #federal-government, #government, #information-technology, #internet-security, #security, #solarwinds, #united-states

Apple’s new encrypted browsing feature won’t be available in China, Saudi Arabia and more: report

Apple announced a handful of privacy-focused updates at its annual software developer conference on Monday. One, called Private Relay, particularly piques the interest of Chinese users living under the country’s censorship system, for it encrypts all browsing history so nobody can track or intercept the data.

As my colleague Romain Dillet explains:

When Private Relay is turned on, nobody can track your browsing history — not your internet service provider, nor anyone standing in the middle of your request between your device and the server you’re requesting information from. We’ll have to wait a bit to learn more about how it works exactly.

The excitement didn’t last long. Apple told Reuters that Private Relay won’t be available in China alongside Belarus, Colombia, Egypt, Kazakhstan, Saudi Arabia, South Africa, Turkmenistan, Uganda and the Philippines.

Apple couldn’t be immediately reached by TechCrunch for comment.

Virtual private networks or VPNs are popular tools for users in China to bypass the “great firewall” censorship apparatus, accessing web services that are otherwise blocked or slowed down. But VPNs don’t necessarily protect users’ privacy because they simply funnel all the traffic through VPN providers’ servers instead of users’ internet providers, so users are essentially entrusting VPN firms with protecting their identities. Private Relay, on the other hand, doesn’t even allow Apple to see one’s browsing activity.
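The design relies on splitting knowledge across two hops: the ingress relay sees your IP address but only an encrypted destination, while the egress relay decrypts the destination but never sees your IP. A purely conceptual sketch of that split (hypothetical names; not Apple’s implementation):

```python
# Conceptual "split knowledge" two-hop relay (illustrative, not Apple's design).
def ingress_relay(client_ip: str, sealed_destination: str) -> dict:
    # Knows who is connecting; the destination stays sealed (encrypted) to it.
    return {"sees_ip": client_ip, "sees_destination": None,
            "forward": sealed_destination}

def egress_relay(sealed_destination: str, unseal) -> dict:
    # Can unseal the destination, but only ever hears from the ingress relay,
    # so it never learns the original client IP.
    return {"sees_ip": None, "sees_destination": unseal(sealed_destination)}

# Neither hop alone can pair the user's IP with the site being visited.
hop1 = ingress_relay("203.0.113.7", "sealed:example.com")
hop2 = egress_relay(hop1["forward"], lambda s: s.removeprefix("sealed:"))
```

Under this split, a VPN provider’s position (seeing both ends of the connection) is what gets designed away.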

In an interview with Fast Company, Craig Federighi, Apple’s senior vice president of software engineering, explained why the new feature may be superior to VPNs:

“We hope users believe in Apple as a trustworthy intermediary, but we didn’t even want you to have to trust us [because] we don’t have this ability to simultaneously source your IP and the destination where you’re going to — and that’s unlike VPNs. And so we wanted to provide many of the benefits that people are seeking when in the past they’ve decided to use a VPN, but not force that difficult and conceivably perilous privacy trade-off in terms of trusting a single intermediary.”

It’s unclear whether Private Relay will simply be excluded from system upgrades for users in China and the other countries where it’s restricted, or whether it will be blocked by internet providers in those regions. It also remains to be seen whether the feature will be available to Apple users in Hong Kong, which has seen an increase in online censorship in the past year.

Like all Western tech firms operating in China, Apple is trapped between antagonizing Beijing and flouting the values it espouses at home. Apple has a history of caving in to Beijing’s censorship pressure, from migrating all user data in China to a state-run cloud center, cracking down on independent VPN apps in China, limiting free speech in Chinese podcasts, to removing RSS feed readers from the China App Store.

#apple, #asia, #beijing, #belarus, #china, #colombia, #craig-federighi, #egypt, #firewall, #government, #great-firewall, #internet-censorship, #internet-security, #internet-service, #isp, #kazakhstan, #philippines, #saudi-arabia, #security, #south-africa, #tc, #uganda, #vpn

Apple unveils new iOS 15 privacy features at WWDC

Apple kicked off its global annual developer conference, WWDC, with a ton of new features and technologies. TechCrunch has all the coverage here from the keynote. As with previous years, Apple has dropped a number of new security and privacy features.

New privacy dashboard keeps tabs on app tracking requests

Apple is bringing a new privacy dashboard to iOS 15 to make it easier to see which apps are collecting your data and when. It’s a continuation of Apple’s App Tracking Transparency feature that it rolled out earlier this year to block apps from siphoning off and selling your data to advertisers and data brokers. In iOS 15, you will be able to see which apps you have given permission to access your data — such as your location, microphone, contacts, and photos — and how often it’s accessed.

Mail will block invisible email trackers

Emails are not as private as you might think. Most marketing emails contain hidden pixel-sized images that reveal when you’ve opened an email. These trackers also collect information about you, including your IP address, which can be used to infer your location. Some browser extensions already block these invisible email trackers. But Apple said it’s bringing new privacy features to the Mail app that will make it more difficult for email senders — and ad trackers — to know which emails you’ve opened.
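An open tracker is usually just a remote one-by-one image whose URL identifies the recipient; when a mail client fetches it, the sender learns the message was opened. A toy detector for that pattern (a heuristic sketch using Python’s standard HTML parser; Apple has not published how Mail’s protection works):

```python
# Toy heuristic for spotting likely tracking pixels in email HTML
# (illustrative only; not how Mail's protection is implemented).
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        # A 1x1 remote image is the classic open-tracker shape.
        if a.get("width") == "1" and a.get("height") == "1":
            self.suspects.append(a.get("src"))

finder = PixelFinder()
finder.feed('<p>Hi!</p>'
            '<img src="https://track.example/p.gif?id=42" width="1" height="1">')
```

Real trackers also hide behind CSS (`display:none`, zero opacity) or unique per-recipient URLs, so production defenses tend to proxy or block all remote images rather than pattern-match.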

Siri will process speech on the device

Siri, Apple’s voice assistant, requires the internet to work, but will soon work offline. Apple said Siri will soon process speech on the device so that audio never leaves the device. Apple said this will help prevent unwanted audio recording, but will also help make Siri respond to requests faster.

As part of its new premium iCloud+ service, Apple is also rolling out Private Relay, which encrypts your Safari traffic and routes it so that it’s more difficult to track which websites you visit. TechCrunch’s Romain Dillet has more.

#apps, #privacy, #security

It’s time for security teams to embrace security data lakes

The average corporate security organization spends $18 million annually but is largely ineffective at preventing breaches, IP theft and data loss. Why? The fragmented approach we’re currently using in the security operations center (SOC) does not work.

Here’s a quick refresher on security operations and how we got where we are today: A decade ago, we protected our applications and websites by monitoring event logs — digital records of every activity that occurred in our cyber environment, ranging from logins to emails to configuration changes. Logs were audited, flags were raised, suspicious activities were investigated, and data was stored for compliance purposes.

The security-driven data stored in a data lake can be in its native format, structured or unstructured, and therefore dimensional, dynamic and heterogeneous, which gives data lakes their distinction and advantage over data warehouses.

As malicious actors and adversaries became more active, and their tactics, techniques and procedures (or TTPs, in security parlance) grew more sophisticated, simple logging evolved into an approach called “security information and event management” (SIEM), which involves using software to provide real-time analysis of security alerts generated by applications and network hardware. SIEM software uses rule-driven correlation and analytics to turn raw event data into potentially valuable intelligence.
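A rule-driven correlation of the sort a SIEM performs might look like the following sketch, which flags a successful login preceded by repeated failures from the same source. The event fields, ordering assumption and threshold are all hypothetical.

```python
# Hedged sketch of SIEM-style rule-driven correlation: alert when a burst of
# failed logins from one (ip, user) pair is followed by a success, a common
# signature of a brute-force attack that worked. Events are assumed to be
# ordered by time; the schema and threshold are illustrative assumptions.
def correlate_bruteforce(events, threshold=3):
    alerts, failures = [], {}
    for e in events:
        key = (e["ip"], e["user"])
        if e["status"] == "failure":
            failures[key] = failures.get(key, 0) + 1
        elif e["status"] == "success" and failures.get(key, 0) >= threshold:
            alerts.append({"ip": e["ip"], "user": e["user"],
                           "rule": "possible brute-force success"})
            failures[key] = 0  # reset so one burst yields one alert
    return alerts

sample = (
    [{"ip": "203.0.113.7", "user": "admin", "status": "failure"}] * 4
    + [{"ip": "203.0.113.7", "user": "admin", "status": "success"}]
)
print(correlate_bruteforce(sample))  # one alert for admin from 203.0.113.7
```

Real SIEM rules are expressed in vendor-specific query languages rather than hand-written loops, but the correlation idea is the same.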

Although it was no magic bullet (it’s challenging to implement and make everything work properly), the ability to find the so-called “needle in the haystack” and identify attacks in progress was a huge step forward.

Today, SIEMs still exist, and the market is largely led by Splunk and IBM QRadar. Of course, the technology has advanced significantly because new use cases emerge constantly. Many companies have finally moved into cloud-native deployments and are leveraging machine learning and sophisticated behavioral analytics. However, new enterprise SIEM deployments are fewer, costs are greater, and — most importantly — the overall needs of the CISO and the hard-working team in the SOC have changed.

New security demands are asking too much of SIEM

First, data has exploded and SIEM is too narrowly focused. The mere collection of security events is no longer sufficient because the aperture of this dataset is too narrow. While there is likely a massive amount of event data to capture and process, you are missing out on vast amounts of additional information such as OSINT (open-source intelligence), consumable external threat feeds, and valuable sources such as malware and IP reputation databases, as well as reports of dark web activity. There are endless sources of intelligence, far too many for the dated architecture of a SIEM.
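Folding one of those extra sources into the picture can be as simple as enriching each event against an IP-reputation feed. The feed contents and field names below are made up for illustration.

```python
# Sketch of enriching events with an external IP-reputation feed, one of the
# additional intelligence sources mentioned above. The feed contents and
# field names are hypothetical; real feeds are large and updated constantly.
BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # stand-in reputation feed

def enrich(events, bad_ips=BAD_IPS):
    """Annotate each event with a reputation verdict for its source IP."""
    return [
        dict(e, ip_reputation="malicious" if e["ip"] in bad_ips else "unknown")
        for e in events
    ]

evts = [{"ip": "203.0.113.7"}, {"ip": "192.0.2.1"}]
print(enrich(evts))  # first event flagged malicious, second unknown
```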

Second, data has exploded alongside costs: data growth plus hardware plus license costs equals a spiraling total cost of ownership. With so much infrastructure, both physical and virtual, the amount of information being captured has exploded. Machine-generated data has grown 50x, while the average security budget grows just 14% year over year.

The cost to store all of this information makes the SIEM cost-prohibitive. The average cost of a SIEM has skyrocketed to close to $1 million annually, and that covers only license and hardware costs. The economics force SOC teams to capture and/or retain less information in an attempt to keep costs in check, further reducing the SIEM’s effectiveness. I recently spoke with a SOC team that wanted to query large datasets for evidence of fraud, but doing so in Splunk was so cost-prohibitive, slow and arduous that the team began exploring alternatives.
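The kind of ad hoc query that team wanted to run is, conceptually, just SQL over a large event table. The sketch below uses SQLite as a stand-in for a real data-lake query engine; the table and column names are hypothetical.

```python
# Illustrative stand-in for querying a security data lake for fraud signals.
# SQLite substitutes here for a real lake engine (e.g. one reading Parquet at
# scale); the schema and the $5,000 threshold are assumptions for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("alice", "transfer", 120.0),
    ("alice", "transfer", 9800.0),
    ("bob", "transfer", 40.0),
])

# Flag users whose total transfer volume exceeds a review threshold.
rows = conn.execute(
    "SELECT user, SUM(amount) FROM events "
    "GROUP BY user HAVING SUM(amount) > 5000"
).fetchall()
print(rows)  # [('alice', 9920.0)]
```

Because a lake stores the raw data cheaply, this sort of exploratory query does not have to compete with the per-gigabyte ingest pricing of a SIEM.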

The shortcomings of the SIEM approach today are dangerous and terrifying. A recent Ponemon Institute survey of almost 600 IT security leaders found that, despite spending an average of $18.4 million annually and using an average of 47 products, a whopping 53% of IT security leaders “did not know if their products were even working.” It’s clearly time for change.

#column, #computer-security, #crowdstrike, #cybersecurity, #data-security, #developer, #ec-column, #ec-cybersecurity, #machine-learning, #security, #splunk, #tc


The rise of cybersecurity debt

Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.

But once again, amid this flurry of activity, we must ask and answer a fundamental question about the state of our cybersecurity defense: Why does this keep happening?

I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.

Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.

How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.

We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.

First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.

There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.

The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.

Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.

Ensuring this requires confidential computing, which encrypts data at rest, in transit and in use. Confidential computing makes it technically impossible for anyone without the encryption key, including your cloud provider, to access the data. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones who hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.

Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.

I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.

#cloud-computing, #cloud-infrastructure, #cloud-management, #colonial-pipeline, #column, #cybersecurity, #cyberwarfare, #data-security, #developer, #encryption, #opinion, #security, #software-development, #tc


AI cybersecurity provider SentinelOne files for $100M IPO

SentinelOne, a late-stage security startup that helps organizations secure their data using AI and machine learning, has filed for an IPO on the New York Stock Exchange (NYSE).

In an S-1 filing on Thursday, the security company revealed that for the three months ending April 30, its revenues increased by 108% year-on-year to $37.4 million and its customer base grew to 4,700, up from 2,700 a year prior. Despite this pandemic-fueled growth, SentinelOne’s net losses more than doubled from $26.6 million in 2020 to $62.6 million.

“We also expect our operating expenses to increase in the future as we continue to invest for our future growth, including expanding our research and development function to drive further development of our platform, expanding our sales and marketing activities, developing the functionality to expand into adjacent markets, and reaching customers in new geographic locations,” SentinelOne wrote in its filing.

The Mountain View-based company said it intends to list its Class A common stock using the ticker symbol “S” and that details about the price range and number of common shares to be put up for the IPO are yet to be determined. The S-1 filing also identifies Morgan Stanley, Goldman Sachs, Bank of America Securities, Barclays and Wells Fargo Securities as the lead underwriters.

SentinelOne raised $276 million in a funding round in November last year, tripling its $1 billion valuation from February 2020 to $3 billion. At the time, CEO and founder Tomer Weingarten told TechCrunch that an IPO “would be the next logical step” for the company.

SentinelOne, which was founded in 2013 and has raised a total of $696.5 million across eight rounds of funding, is looking to raise up to $100 million in its IPO, and said it intends to use the net proceeds to increase its visibility in the cybersecurity marketplace, as well as for product development and other “general corporate purposes.”

It added that it “may also use a portion of the net proceeds for the acquisition of, or investment in, technologies, solutions, or businesses that complement our business.” The company’s sole acquisition to date took place in February, when it bought high-speed logging startup Scalyr for $155 million.

SentinelOne is going public during a period of heightened public interest in cybersecurity. There has been a wave of high-profile cyberattacks during the COVID-19 pandemic, with hackers taking advantage of the widespread remote working it necessitated.

One of the biggest attacks saw Russian hackers breach the networks of IT company SolarWinds, enabling them to gain access to government agencies and corporations. SentinelOne’s endpoint protection solution was able to detect and stop the related malicious payload, protecting its customers.

“The world is full of criminals, state actors, and other hostile agents who seek to exfiltrate and exploit data to disrupt our way of life,” Weingarten said in SentinelOne’s SEC filing. “Our mission is to keep the world running by protecting and securing the core pillars of modern infrastructure: data and the systems that store, process, and share information. This is an endless mission as attackers evolve rapidly in their quest to disrupt operations, breach data, turn profit, and inflict damage.”

#artificial-intelligence, #barclays, #ceo, #cloud, #companies, #computing, #goldman-sachs, #initial-public-offering, #machine-learning, #morgan-stanley, #scalyr, #security, #sentinelone, #solarwinds, #system-administration, #u-s-securities-and-exchange-commission


TikTok just gave itself permission to collect biometric data on U.S. users, including ‘faceprints and voiceprints’

A change to TikTok’s U.S. Privacy Policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.

The biometric data collection details were introduced in the newly added section, “Image and Audio Information,” found under the heading of “Information we collect automatically” in the policy.

This is the part of TikTok’s Privacy Policy that lists the types of data the app gathers from users, which was already fairly extensive.

The first part of the new section explains that TikTok may collect information about the images and audio that are in users’ content, “such as identifying the objects and scenery that appear, the existence and location within an image of face and body features and attributes, the nature of the audio, and the text of the words spoken in your User Content.”

While that may sound creepy, other social networks perform object recognition on images you upload to power accessibility features (like describing what’s in an Instagram photo, for example), as well as for ad-targeting purposes. Identifying where a person and the scenery are can help with AR effects, while converting spoken words to text helps with features like TikTok’s automatic captions.

The policy also notes that this part of the data collection is for enabling “special video effects, for content moderation, for demographic classification, for content and ad recommendations, and for other non-personally-identifying operations.”

The more concerning part of the new section references a plan to collect biometric data.

It states:

We may collect biometric identifiers and biometric information as defined under US laws, such as faceprints and voiceprints, from your User Content. Where required by law, we will seek any required permissions from you prior to any such collection.

The statement itself is vague, as it doesn’t specify whether it’s considering federal law, state laws, or both. It also doesn’t explain, as the other part did, why TikTok needs this data, nor does it define the terms “faceprints” or “voiceprints.” It likewise doesn’t explain how it would go about seeking the “required permissions” from users, or whether it would look to state or federal laws to guide that process of gaining consent.

That’s important because as it stands today, only a handful of U.S. states have biometric privacy laws, including Illinois, Washington, California, Texas, and New York. If TikTok only requested consent, “where required by law,” it could mean users in other states would not have to be informed about the data collection.

Reached for comment, a TikTok spokesperson could not offer more details on the company’s plans for biometric data collection or how it may tie in to either current or future products.

“As part of our ongoing commitment to transparency, we recently updated our Privacy Policy to provide more clarity on the information we may collect,” the spokesperson said.

The company also pointed us to an article about its approach to data security, TikTok’s latest Transparency Report, and the recently launched privacy and security hub, which is aimed at helping people better understand their privacy choices on the app.


The biometric disclosure comes at a time when TikTok has been working to regain the trust of some U.S. users.

Under the Trump administration, the federal government attempted to ban TikTok from operating in the U.S. entirely, calling the app a national security threat because of its ownership by a Chinese company. TikTok fought back against the ban and went on record to state it only stores TikTok U.S. user data in its U.S. data centers and in Singapore.

It said it has never shared TikTok user data with the Chinese government nor censored content, despite being owned by Beijing-based ByteDance. And it said it would never do so, if asked.

Though the TikTok ban was initially stopped in the courts, the federal government appealed the rulings. But when President Biden took office, his administration put the appeal process on hold as it reviewed the actions taken by his predecessor. And although Biden has, as of today, signed an executive order to restrict U.S. investment in Chinese firms linked to surveillance, his administration’s position on TikTok remains unclear.

It is worth noting, however, that the new disclosure about biometric data collection follows a $92 million settlement in a class action lawsuit against TikTok, originally filed in May 2020, over the social media app’s violation of Illinois’ Biometric Information Privacy Act. The consolidated suit included more than 20 separate class actions filed against TikTok over the platform’s collection and sharing of personal and biometric information without user consent. Specifically, this involved the use of facial filter technology for special effects.

In that context, TikTok’s legal team may have wanted to quickly cover themselves from future lawsuits by adding a clause that permits the app to collect personal biometric data.

The disclosure, we should also point out, has only been added to the U.S. Privacy Policy, as other markets like the E.U. have stricter data protection and privacy laws.

The new section was part of a broader update to TikTok’s Privacy Policy, which included other changes both large and small, ranging from corrections of earlier typos to revamped or even entirely new sections. Most of these tweaks and changes could be easily explained, though — like new sections that clearly referenced TikTok’s e-commerce ambitions or adjustments aimed at addressing the implications of Apple’s App Tracking Transparency on targeted advertising.

In the grand scheme of things, TikTok still has plenty of data on its users, their content, and their devices, even without biometric data.

For example, TikTok policy already stated it automatically collects information about users’ devices, including location data based on your SIM card and IP addresses and GPS, your use of TikTok itself and all the content you create or upload, the data you send in messages on its app, metadata from the content you upload, cookies, the app and file names on your device, battery state, and even your keystroke patterns and rhythms, among other things.

This is in addition to the “Information you choose to provide,” which comes from when you register, contact TikTok or upload content. In that case, TikTok collects your registration info (username, age, language, etc.), profile info (name, photo, social media accounts), all your user-generated content on the platform, your phone and social network contacts, payment information, plus the text, images and video found in the device’s clipboard. (TikTok, as you may recall, got busted by Apple’s iOS 14 feature that alerted users to the fact that TikTok and other apps were accessing iOS clipboard content. Now, the policy says TikTok “may collect” clipboard data “with your permission.”)

The content of the Privacy Policy itself wasn’t of immediate concern to some TikTok users. Instead, it was the buggy rollout.

Some users reported seeing a pop-up message alerting them to the Privacy Policy update, but the page was not available when they tried to read it. Others complained of seeing the pop-up repeatedly. This issue doesn’t appear to be universal. In tests, we did not have an issue with the pop-up ourselves.

Additional reporting by Zack Whittaker

#apps, #security, #tc


Supreme Court limits US hacking law in landmark CFAA ruling

The Supreme Court has ruled that a police officer who searched a license plate database for an acquaintance in exchange for cash did not violate U.S. hacking laws.

The landmark ruling concludes a long-running case that clarifies the controversial Computer Fraud and Abuse Act, or CFAA, by putting limits on what kind of conduct can be prosecuted.

The court ruled 6-3 in favor of Nathan Van Buren, a former Georgia police sergeant who brought the case. Van Buren was prosecuted on two counts, one for accepting a kickback for accessing the database as a serving police officer, and another for violating the CFAA. His first conviction was overturned, but the CFAA conviction was upheld — until today.

Although Van Buren was allowed to access the license plate database, the legal question became whether or not he had exceeded his authorized access.

In the ruling, the Supreme Court said that the CFAA “covers those who obtain information from particular areas in the computer — such as files, folders, or databases — to which their computer access does not extend.” While Van Buren “plainly flouted” the police department’s policy restricting use of the database to law enforcement purposes, he did not violate the CFAA, Justice Amy Coney Barrett wrote in the majority opinion.

The CFAA was signed into law in 1986 to prosecute hackers who gain “unauthorized” access to a computer or network. But courts have been split on what “unauthorized” means. Legal experts have argued that a broad reading of the law could criminalize violating a site’s terms of service, such as lying on a dating profile or sharing a password to a streaming service. The court said that the government’s interpretation of the law “would attach criminal penalties to a breathtaking amount of commonplace computer activity.”

Not all the justices agreed. “Without valid law enforcement purposes, he was forbidden to use the computer to obtain that information,” wrote Justice Clarence Thomas, who filed a dissenting opinion joined by Justice Samuel Alito and Chief Justice John Roberts.

Civil liberties experts said Congress should act to amend the CFAA following the court’s ruling.

“This is an important and welcome decision that will help protect digital research and journalism that is urgently necessary. But more is needed,” said Alex Abdo, litigation director of the Knight First Amendment Institute. “Congress should amend the Computer Fraud and Abuse Act to eliminate any remaining uncertainty about the scope of the statute. It should also create a safe harbor for researchers and journalists who are working to study disinformation and discrimination online. Major technology companies should not have a veto over research and journalism that are manifestly in the public interest.”

#california, #cfaa, #computer-fraud-and-abuse-act, #georgia, #hacking, #security, #supreme-court, #united-states, #university-of-california, #university-of-california-berkeley


Fujifilm becomes the latest victim of a network-crippling ransomware attack

Japanese multinational conglomerate Fujifilm has been forced to shut down parts of its global network after falling victim to a suspected ransomware attack.

The company, which is best known for its digital imaging products but also produces high-tech medical kit, including devices for the rapid processing of COVID-19 tests, confirmed that its Tokyo headquarters was hit by a cyberattack on Tuesday evening.

“Fujifilm Corporation is currently carrying out an investigation into possible unauthorized access to its server from outside of the company. As part of this investigation, the network is partially shut down and disconnected from external correspondence,” the company said in a statement posted to its website.

“We want to state what we understand as of now and the measures that the company has taken. In the late evening of June 1, 2021, we became aware of the possibility of a ransomware attack. As a result, we have taken measures to suspend all affected systems in coordination with our various global entities.

“We are currently working to determine the extent and the scale of the issue. We sincerely apologize to our customers and business partners for the inconvenience this has caused.”

As a result of the partial network shutdown, Fujifilm USA added a notice to its website stating that it is currently experiencing problems affecting all forms of communications, including emails and incoming calls. In an earlier statement, Fujifilm confirmed that the cyberattack is also preventing the company from accepting and processing orders. 

Fujifilm has yet to respond to our request for comment.

While Fujifilm is keeping tight-lipped on further details, such as the identity of the ransomware used in the attack, Bleeping Computer reports that the company’s servers have been infected by Qbot. Advanced Intel CEO Vitali Kremez told the publication that the company’s systems were hit by the 13-year-old Trojan, typically initiated by phishing, last month.

The creators of Qbot, also known as QakBot or QuakBot, have a long history of partnering with ransomware operators. The malware previously worked with the ProLock and Egregor ransomware gangs, but is currently said to be linked with the notorious REvil group.

“Initial forensic analysis suggests that the ransomware attack on Fujifilm started with a Qbot trojan infection last month, which gave hackers a foothold in the company’s systems with which to deliver the secondary ransomware payload,” Ray Walsh, digital privacy expert at ProPrivacy, told TechCrunch. “Most recently, the Qbot trojan has been actively exploited by the REvil hacking collective, and it seems highly plausible that the Russian-based hackers are behind this cyberattack.”

REvil, also known as Sodinokibi, not only encrypts a victim’s files but also exfiltrates data from their network. The hackers typically threaten to publish the victim’s files if their ransom isn’t paid. But a site on the dark web used by REvil to publicize stolen data appeared offline at the time of writing.

Ransomware attacks have been on the rise since the start of the COVID-19 pandemic, so much so that they have become the biggest single money earner for cybercriminals. Threat hunting and cyber intelligence firm Group-IB estimates that the number of ransomware attacks grew by more than 150% in 2020, and that the average ransom demand increased more than twofold to $170,000.

At the time of writing, it’s unclear whether Fujifilm has paid any ransom to the hackers responsible for the attack on its systems.

#articles, #ceo, #computer-security, #crime, #crimes, #cyberattacks, #cybercrime, #cyberwarfare, #dark-web, #digital-imaging, #fujifilm, #hardware, #intel, #ransomware, #security


FireEye to sell products unit to Symphony-led group for $1.2B

Cybersecurity giant FireEye has agreed to sell its products business to a consortium led by private equity firm Symphony Technology Group for $1.2 billion.

The all-cash deal will split FireEye, the maker of network and email cybersecurity products, from its digital forensics and incident response arm Mandiant.

FireEye’s chief executive Kevin Mandia said the deal unlocks its “high-growth” Mandiant business, allowing it to stand alone as a separate business running incident response and security testing.

The move to split the two companies comes almost a decade after FireEye acquired Mandiant, and made Mandia chief executive.

Mandia said: “STG’s focus on fueling innovative market leaders in software and cybersecurity makes them an ideal partner for FireEye Products. We look forward to our relationship and collaboration on threat intelligence and expertise.”

STG managing partner William Chisholm said there is an “enormous untapped opportunity for the business that we are excited to crystallize by leveraging our significant security software sector experience and our market leading carve-out expertise.”

The company said the deal is expected to close by the end of the fourth quarter.

FireEye has become one of the more prominent names in cybersecurity, known for its research into hacking groups — some linked to governments — and its Mandiant unit for responding to major security incidents. Mandiant was called in to help Colonial Pipeline recover from a recent ransomware attack.

In December, FireEye admitted that its own networks had been hacked, a move praised across the cybersecurity industry for helping to speed up efforts that led to the discovery of the SolarWinds espionage attack, later attributed to Russian foreign intelligence.

FireEye becomes the latest cybersecurity giant added to STG’s portfolio. In March, Symphony bought McAfee’s enterprise business for $4 billion, having previously acquired RSA for $2 billion.

#colonial-pipeline, #computer-security, #computing, #cybercrime, #cyberwarfare, #fireeye, #information-technology, #kevin-mandia, #mandiant, #mcafee, #partner, #rsa, #rsa-security, #security, #solarwinds, #symphony-technology-group


Cybersecurity unicorn Exabeam raises $200M to fuel SecOps growth

Exabeam, a late-stage startup that helps organizations detect advanced cybersecurity threats, has landed a new $200 million funding round that values the company at $2.4 billion.

The Series F growth round was led by the Owl Rock division of Blue Owl Capital, with support from existing investors Acrew Capital, Lightspeed Venture Partners and Norwest Venture Partners.

The announcement of Exabeam’s latest funding, which the company says will help it on its mission to become “the number one trusted cloud SecOps platform in the market,” coincides with the news that CEO Nir Polak, who co-founded the company in 2013, will be replaced by former ForeScout chief executive Michael DeCesare.

DeCesare is a big name in the cybersecurity space, with more than 25 years of experience leading high-growth security companies. He joined ForeScout as CEO and president in February 2015 after four years as president of McAfee, which at the time was owned by Intel. Under his leadership, ForeScout raised nearly $117 million in an upsized IPO that valued the IoT security vendor at $800 million.

Polak, meanwhile, will shift to a chairman role at Exabeam and “will continue on as an active member of the executive team and remain at the company,” according to the funding announcement.

“Nir has built an incredibly robust, diverse and inclusive culture at Exabeam, and I am committed to helping it flourish,” said DeCesare. “I’m thrilled to join Nir and the whole leadership team to help drive the company through its next phase of growth.”

Exabeam, which has now raised $390 million in six rounds of outside funding, says it expects to use the new money to fuel scale, innovate and extend the company’s leadership. “It gives us the opportunity to triple down on our R&D efforts and continue engineering the most advanced UEBA, XDR and SIEM cloud security products available today,” commented Polak.

The company adds that it has made significant investments in its partner program over the last 12 months, which now includes more than 400 reseller, distributor, systems integrator, MSSP, MDR and consulting partners globally. Exabeam also has more than 500 technology integrations with cloud network, data lake and endpoint vendors including CrowdStrike, Okta and Snowflake.

It’s clearly expecting these investments to pay off, describing its “outcome-based approach” to external security as perfectly suited to support organizations as they manage exponential amounts of data and return to the post-COVID workplace in a variety of hybrid scenarios. After all, hackers are already beginning to target employees returning to the office, and this threat is only likely to grow as more companies dial back remote working and start welcoming staff back into workplaces.

“Exabeam is poised to be the next-gen leader in the cloud security analytics, XDR and SIEM markets,” Pravin Vazirani, Blue Owl Capital’s managing director and co-head of tech investing, said in a statement. “We led this round of funding to provide the company with the resources necessary to support its sustainable, long-term growth and value creation.”

#acrew-capital, #ceo, #chairman, #cloud, #cloud-applications, #companies, #crowdstrike, #exabeam, #executive, #forescout, #funding, #intel, #leader, #lightspeed, #lightspeed-venture-partners, #mcafee, #norwest-venture-partners, #okta, #president, #security, #software


Hackers are targeting employees returning to the post-COVID office

With COVID-19 restrictions lifting and employees starting to make their way back into offices, hackers are being forced to change tack. While remote workers have been scammers’ main target for the past 18 months due to the mass shift to home working necessitated by the pandemic, a new phishing campaign is attempting to exploit those who have started to return to the physical workplace.

The email-based campaign, observed by Cofense, is targeting employees with emails purporting to come from their CIO welcoming them back into offices.

The email looks legitimate enough, sporting the company’s official logo in the header and a signature spoofing the CIO. The bulk of the message outlines the new precautions and changes to business operations the company is making in response to the pandemic.

If an employee were to be fooled by the email, they would be redirected to what appears to be a Microsoft SharePoint page hosting two company-branded documents. “When interacting with these documents, it becomes apparent that they are not authentic and instead are phishing mechanisms to garner account credentials,” explains Dylan Main, threat analyst at Cofense’s Phishing Defense Center.

If a victim decides to interact with either document, a login panel appears and prompts them to enter credentials in order to access the files.

“This is uncommon among most Microsoft phishing pages where the tactic of spoofing the Microsoft login screen opens an authenticator panel,” Main continued. “By giving the files the appearance of being real and not redirecting to another login page, the user may be more likely to supply their credentials in order to view the updates.”

Another technique the hackers are employing is fake credential validation. The first few times login information is entered into the panel, the result is an error message stating: “Your account or password is incorrect.”

“After entering login information a few times, the employee will be redirected to an actual Microsoft page,” Main says. “This gives the appearance that the login information was correct, and the employee now has access to the OneDrive documents. In reality, the threat actor now has full access to the account owner’s information.”

While this is one of the first campaigns that’s been observed targeting employees returning to the workplace (Check Point researchers uncovered another last year), it’s unlikely to be the last. Both Google and Microsoft, for example, have started welcoming staff back to office cubicles, and some 75% of executives expect that at least 50% of employees will be back working in the office by July, according to a recent PwC study.

Threat actors typically adapt to exploit the global environment. Just as the shift to mass working over remote connections led to an increase in the number of attacks attempting to exploit remote login credentials, it’s likely the number of attacks targeting on-premise networks and office-based workers will continue to grow over the coming months.

#cio, #crime, #cybercrime, #email, #fancy-bear, #fraud, #google, #identity-theft, #internet, #microsoft, #outlook-com, #phishing, #security, #social-engineering, #spamming


Redacted comes out of stealth with $60M in funding and a new take on fighting cybercrime

The cybersecurity industry has no shortage of technology to fight against network intruders, app corrupters, email hackers and other cybercriminals. Today a startup called Redacted is coming out of stealth with a different approach to tackling malicious activity: it applies threat intelligence, then proactively goes after the hackers themselves to recover stolen data and disrupt their operations. It is also announcing $35 million in funding to fuel its growth.

The Series B is being led by Ten Eleven Ventures, with participation from Valor Equity Partners and SVB Capital. (Ten Eleven is a VC firm specializing in cybersecurity that has backed a number of other startups.) It brings the total raised by Redacted, which specifically styles itself “[redacted],” brackets included, to $60 million.

It’s always interesting when a startup comes out of nowhere with a substantial amount of VC backing, and it’s almost always because that startup has some interesting pedigree. That is the case here. The company is led by Max Kelly, who was previously the chief security officer at Facebook and before that held roles at the National Security Agency and U.S. Cyber Command. His co-founder, John Hering, was the founder and CEO of cybersecurity firm Lookout. The wider team, which the startup says has “more than 300 years of combined experience” in cyber defense, includes alumni of Facebook, Amazon, NASA JPL, Symantec, Cisco, the FBI, CIA, NSA, DIA, Army, Air Force, Navy, U.S. Marine Corps, U.S. Cyber Command and the UK’s GCHQ.

I’d actually heard about the company before: it works with another cyber startup I’ve covered called Cado, which provides cyber forensics tools to Redacted (among others). But when I mentioned I’d heard of the company previously, they suggested it was because I’d covered Ocado and its move into the U.S. market. Redacted is not particularly forthcoming about its customers, so my guess is that the grocery giant might be one of them.

The core of what Redacted does comes out of direct experience that Kelly said he had while working at Facebook, where he both built in-house threat response tools but also worked with third-party vendors to secure the social networking giant’s systems, employees and users.

“A big focus of the industry in [the] last 10 years was preventing the breach,” Kelly said. “But that was always a lie. There is nothing you can do to prevent a breach. The point is not to prevent the breach but the damage from it. Make sure people can’t get data out, and if they do, make sure you can get it back.”

There was also the issue of the size of Facebook itself.

“We couldn’t buy any security tools that worked because of the scale of the company,” he said. “So we thought about it and decided that the best approach would be to ask who is doing this, get them to stop.”

In an environment where cybercrime has taken on the profile of some of the most advanced innovations in technology, with both bad actors and security apps and services leaning on artificial intelligence and automation to do their work, it sounds almost too human an approach. But from how Kelly describes it, it sounds like there is a very human face to cybercrime, and the mere fact of identifying bad actors can get them to retreat.

It’s also a highly technical operation: the startup has also built tools, with some of its own tech and leaning on tech built by others, to find patterns in the work that cybercriminals do and eventually track them to where they are.

“If they’re in a place where they can be touched by law enforcement, that can be used to get them to stop,” he said. “But if not, then it’s just the awareness that they’d been seen and that generally causes them to retreat.”

The mix of what Redacted has built to date, he says, is being aimed at smaller, mid-sized and slightly larger corporates, particularly those that are not capable of building tools like this themselves.

The name, meanwhile, in my opinion says something about the nimble, but also very focused, approach the startup is taking. It comes from a period when the company hadn’t yet come up with a name for itself but was already operating commercially while in stealth mode (which actually is very standard among cybersecurity startups, I’ve found, who don’t really want a lot of attention for obvious reasons).

“We used it as a placeholder, but I realized, as I talked to people, that they were using the name ‘Redacted’ when referring to us,” Kelly said. He looked up redacted.com and saw it was available. “It was the universe telling me to use the name,” he said with a little smile.

“With the industry’s most advanced pursuit capabilities, Redacted has the power to teach attackers that companies will hold them accountable for attacks,” said Alex Doll, founder and managing general partner at Ten Eleven Ventures, in a statement. “Redacted’s cloud-native security platform also enables them to protect and defend companies that run their operations within a modern cloud architecture. Together, these features enable [redacted] to offer the most holistic and proactive security solution for companies in today’s elevated threat environment.” Doll is joining the board with this round.

#cybersecurity, #redacted, #security, #tc
