Want in on the next $100B in cybersecurity?

As a Battery Ventures associate in 1999, I used to spend my nights highlighting actual magazines called Red Herring, InfoWorld and The Industry Standard, plus my personal favorites StorageWorld and Mass High Tech (because the other VC associates rarely scanned these).

As a 23-year-old, I’d circle the names of much older CEOs who worked at companies like IBM, EMC, Alcatel or Nortel to learn more about what they were doing. The companies were building mainframe-to-server replication technologies, IP switches and nascent web/security services on top.

Flash forward 22 years and, in a way, nothing has changed. We have gone from command line to GUI to now API as the interface innovation. But humans still need an interface, one that works for more types of people on more types of devices. We no longer talk about the OSI stack — we talk about the decentralized blockchain stack. We no longer talk about compute, data storage and analysis on a mainframe, but rather on the cloud.

The problems and opportunities have stayed quite similar, but the markets and opportunities have gotten much larger. AWS and Azure cloud businesses alone added $23 billion of run-rate revenue in the last year, growing at 32% and 50%, respectively — high growth on an already massive base.

The size of the cybersecurity market, in particular, has gotten infinitely larger as software eats the world and more people are able to sit and feast at the table from anywhere on Earth (and, soon enough, space).

Over the course of the last few months, my colleague Spencer Calvert and I released a series of pieces about why this market opportunity is growing so rapidly: the rise of multicloud environments, data being generated and stored faster than anyone can keep up with it, SaaS applications powering virtually every function across an organization and CISOs’ rise in political power and strategic responsibility.

This all ladders up to an estimated — and we think conservative — $100 billion of new market value by 2025 alone, putting total market size at close to $280 billion.

In other words, opportunities are ripe for massive business value creation in cybersecurity. We think many unicorns will be built in these spaces, and while we are still in the early innings, there are a few specific areas where we’re looking to make bets (and one big-picture, still-developing area). Specifically, Upfront is actively looking for companies building in:

  1. Data security and data abstraction.
  2. Zero-trust, broadly applied.
  3. Supply chains.

Data security and abstraction

Data is not a new thesis, but I am excited to look at the change in data stacks from an initial cybersecurity lens. What set of opportunities can emerge if we view security at the bottom of the stack — foundational — rather than as an application at the top or to the side?

Image Credits: Upfront Ventures

For example, data is expanding faster than we can secure it. We first need to know where the (structured and unstructured) data is located and what is being stored, then confirm proper security posture and prioritize fixing the most important issues at the right speed.

Doing this at scale requires smart passive mapping, along with heuristics and rules to pull the signal from the noise in an increasingly data-rich (noisy) world. Open Raven, an Upfront portfolio company, is building a solution to discover and protect structured and unstructured data at scale across cloud environments. New large platform companies will be built in the data security space as the point of control moves from the network layer to the data layer.
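
To make “pulling the signal from the noise” a bit more concrete, here is a toy sketch of the rule-based classification layer such a system might start from. It is purely illustrative: the patterns and object names are hypothetical, and a platform like Open Raven is certainly far more sophisticated.

```python
import re

# Illustrative PII detectors; production systems use far richer heuristics.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_object(text: str) -> set:
    """Return the set of PII types detected in one stored object."""
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(text)}

# Hypothetical objects discovered in a cloud storage bucket.
objects = {
    "logs/app.log": "user=jane@example.com latency=120ms",
    "exports/q3.csv": "name,ssn\nJane,123-45-6789",
}
for path, contents in objects.items():
    findings = classify_object(contents)
    if findings:
        print(f"{path}: {sorted(findings)}")  # feeds the prioritization queue
```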

We believe Open Raven is poised to be a leader in this space and also will power a new generation of “output” or application companies yet to be funded. These companies could be as big as Salesforce or Workday, built with data abstracted and managed differently from the start.

If we look at security data at the point it is created or discovered, new platforms like Open Raven may lead to the emergence of an entirely new ecosystem of apps, ranging from those Open Raven is most likely to build in-house, like compliance workflows, to entirely new companies that rebuild the apps we have used for decades: everything from people-management systems to CRMs to product analytics to marketing attribution tools.

Platforms that lead with a security-first, foundational lens have the potential to power a new generation of applications companies with a laser-focus on the customer engagement layer or the “output” layer, leaving the data cataloging, opinionated data models and data applications to third parties that handle data mapping, security and compliance.

Image Credits: Upfront Ventures

Put simply, if full-stack applications look like the layers of the Earth, with UX as the crust, that crust can become better and deeper when foundational horizontal companies underneath meet all the requirements surrounding personally identifiable information and GDPR, requirements currently foisted upon companies that have data everywhere. This can free up new application companies to focus their creative talent even more deeply on the human-to-software engagement layer, building superhuman apps for every existing category.

Zero-trust

The term zero-trust was first coined in 2010, but new applications of the idea are still being discovered and large businesses are being built around it. Zero-trust, for those getting up to speed, is the assumption that anyone accessing your systems, devices, etc., is a bad actor.

This could sound paranoid, but think about the last time you visited a Big Tech campus. Could you walk past reception and security without a guest pass or name badge? Absolutely not. The same goes for virtual spaces and access. My first in-depth education in zero-trust security came through Fleetsmith, a young team building software to manage apps, settings and security preferences for organizations powered by Apple devices; I invested in the company in 2017. Zero-trust in the context of Fleetsmith was about device setup and permissions. Fleetsmith was acquired by Apple in mid-2020.

Around the same time as the Fleetsmith acquisition, I met Art Poghosyan and the team at Britive, who are also deploying zero-trust for dynamic permissioning in the cloud. Britive is built on the premise of zero-trust, just-in-time (JIT) access, whereby users are granted ephemeral access dynamically rather than through the legacy process of “checking out” and “checking in” credentials.

By granting temporary privileged access instead of “always-on” credentials, Britive is able to drastically reduce the cyber risks associated with over-privileged accounts, the time spent managing privileged access and the workflows needed to streamline privileged access management across multicloud environments.
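
To picture the difference, here is a minimal sketch of a just-in-time grant (hypothetical code, not Britive’s actual API): the credential exists only for a short window instead of standing indefinitely.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Grant:
    token: str
    scope: str
    expires_at: float

def grant_jit_access(user: str, scope: str, ttl_seconds: int = 900) -> Grant:
    """Issue an ephemeral credential instead of an always-on one."""
    print(f"granting {scope} to {user} for {ttl_seconds}s")
    return Grant(
        token=secrets.token_urlsafe(32),
        scope=scope,
        expires_at=time.time() + ttl_seconds,
    )

def is_valid(grant: Grant) -> bool:
    """Access evaporates on its own; nothing needs to be 'checked in'."""
    return time.time() < grant.expires_at

grant = grant_jit_access("alice@example.com", "prod-db:read")
assert is_valid(grant)  # valid only within the 15-minute window
```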

What’s next in zero-based trust (ZBT)? We see device and access as the new perimeter as workers flex devices and locations for their work, and we have invested around this thesis with Fleetsmith and now Britive. But we still think there is more ground to cover for ZBT to permeate more mundane processes. Passwords are an example of something that is, in theory, zero-trust (you must continually prove who you are), but they are woefully inadequate.

Phishing attacks that steal passwords are the most common path to data breaches. But how do you get users to adopt password managers, password rotation, two-factor authentication or even passwordless solutions? We want to back simple, elegant solutions that instill ZBT elements into common workflows.
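
Two-factor authentication is a good example of how simple the underlying mechanics can be. The sketch below is a minimal time-based one-time password (TOTP, RFC 6238) generator built on the Python standard library; it is illustrative, not a hardened implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # time step since the epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; both sides compute the same code
```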

Supply chains

Modern software is assembled using third-party and open-source components. This assembly line of public code packages and third-party APIs is known as a supply chain. Attacks that target this assembly line are referred to as supply chain attacks.

Some supply chain attacks can be mitigated by existing application-security tools: software composition analysis (SCA) tools like Snyk for open-source dependencies, Bridgecrew for automating security engineering and fixing misconfigurations, and Veracode for security scanning.
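
One primitive these tools build on is refusing to consume a dependency whose bytes have changed since it was vetted. A minimal sketch, with a hypothetical pinned digest:

```python
import hashlib
import hmac

# Digest recorded when the dependency was originally vetted (placeholder value).
PINNED_SHA256 = "0" * 64

def verify_artifact(path: str, pinned_hex: str) -> bool:
    """Hash a downloaded artifact and compare it against the pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return hmac.compare_digest(digest.hexdigest(), pinned_hex)

# A build step would halt when verification fails:
# if not verify_artifact("vendor/package.tar.gz", PINNED_SHA256):
#     raise SystemExit("artifact does not match its pinned digest")
```

Lockfile hash pinning in modern package managers works on the same principle. Notably, though, it would not have caught an attack like SolarWinds, where the vendor’s own build pipeline was compromised and the malicious update arrived properly signed.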

But other vulnerabilities can be extremely challenging to detect. Take the supply chain attack that took center stage — the SolarWinds hack of 2020 — in which a small snippet of code was altered in a SolarWinds update before spreading to 18,000 different companies, all of which relied on SolarWinds software for network monitoring or other services.

Image Credits: Upfront Ventures

How do you protect yourself from malicious code hidden in a version update of a trusted vendor that passed all of your security onboarding? How do you maintain visibility over your entire supply chain? Here we have more questions than answers, but securing supply chains is a space we will continue to explore, and we predict large companies will be built to securely vet, onboard, monitor and offboard third-party vendors, modules, APIs and other dependencies.

If you are building in any of the above spaces, or adjacent spaces, please reach out. We readily acknowledge that the cybersecurity landscape is rapidly changing, and if you agree or disagree with any of the arguments above, I want to hear from you!

PlanetScale raises $30M Series B for its database service

PlanetScale, the company behind the open-source Vitess database clustering system for MySQL that was first developed at YouTube, today announced that it has raised a $30 million Series B funding round led by Insight Partners, with participation from a16z and SignalFire. With this, the company has now raised a total of $55 million, according to Crunchbase.

Today’s announcement comes only a few weeks after PlanetScale launched its new hosted database platform, also dubbed PlanetScale. The company had previously offered a hosted version of Vitess, but with this new service, it is going a step further and offering what it calls a “developer-first database” that abstracts away all of the underlying infrastructure to ensure that developers won’t have to think about cloud zones, cluster sizes and other details.
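
In practice, that abstraction means a developer simply connects and queries; the infrastructure decisions happen elsewhere. A sketch of what this looks like with a standard MySQL client (the hostname, credentials and table below are placeholders):

```python
import mysql.connector  # pip install mysql-connector-python

# PlanetScale databases speak the MySQL wire protocol, so an ordinary
# client is all a developer needs; no cluster sizing in sight.
conn = mysql.connector.connect(
    host="example.us-east.psdb.cloud",  # placeholder host
    user="app_user",                    # placeholder credentials
    password="********",
    database="inventory",
)
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM products LIMIT 5")
for row in cursor.fetchall():
    print(row)
conn.close()
```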

Indeed, PlanetScale CEO and co-founder Jiten Vaidya was quite open about the limitations of this earlier product. “What we had built last year was pretty much hosted Vitess, which was no different than how a lot of cloud providers today give you databases,” he said. “So none of this ease of use, none of this elegance, none of these state-of-the-art experiences that the developers want and expect today, we had built into our product.”

But a few months ago, the company brought on former GitHub VP of Engineering Sam Lambert as its Chief Product Officer. Vaidya noted that Lambert brought a lot of developer empathy to PlanetScale and helped it launch this new product.

“People come to you because they’re not database experts, but they have data, they have problems,” Lambert said. “And too many companies, especially in the database world, do not think about the daily lives of their users like we do. They don’t think about the complete journey of what the user is actually trying to do, which is to provide value to their customers. They’re just very impressed with themselves for storing and retrieving data. And it’s like, yep, we’ve been doing that. We’ve been doing that since the 60s. Can we do something else now?”

The company’s users today include the likes of Slack, Figma, GitHub and Square, so it’s clearly delivering value to a lot of users. As Lambert noted, PlanetScale aims to offer them a product that is simple and easy to use. “Just because it is simple and easy to use, and beautiful, honestly — like just beautiful, well-designed tooling — it doesn’t mean it’s inferior. It doesn’t mean it’s missing anything. It means the others are missing the poetry and the additional elements of beauty that you can add to infrastructure products,” he said.

PlanetScale plans to use the new funding to scale its team globally and accelerate the adoption of its platform. Insight Partners Managing Director Nikhil Sachdev will join the company’s board, with the firm’s Managing Director Praveen Akkiraju also joining as a board observer.

“PlanetScale is setting a new bar for simplicity, performance and scalability for cloud-based databases in the serverless era,” said Sachdev. “The developer experience for databases has been painful for too long. PlanetScale is breaking that chain, solving longstanding problems related to scalability and reliability in an extremely elegant, tasteful, and useful way.”

Vercel raises $102M Series C for its front-end development platform

Vercel, the company behind the popular open-source Next.js React framework, today announced that it has raised a $102 million Series C funding round led by Bedrock Capital. Existing investors Accel, CRV, Geodesic Capital, Greenoaks Capital and GV also participated in this round, together with new investors 8VC, Flex Capital, GGV, Latacora, Salesforce Ventures and Tiger Global. In total, the company has now raised $163 million and its current valuation is $1.1 billion.

As Vercel notes, the company saw strong growth in recent months, with traffic to all sites and apps on its network doubling since October 2020. About half of the world’s 10,000 largest websites now use Next.js. Given the open-source nature of the Next.js framework, not all of these users are Vercel customers, of course, but its current paying customers include the likes of Carhartt, GitHub, IBM, McDonald’s and Uber.

Image Credits: Vercel

“For us, it all starts with a front-end developer,” Vercel CEO Guillermo Rauch told me. “Our goal is to create and empower those developers — and their teams — to create delightful, immersive web experiences for their customers.”

With Vercel, Rauch and his team took the Next.js framework and then built a serverless platform that specifically caters to this framework and allows developers to focus on building their front ends without having to worry about scaling and performance.

Older solutions, Rauch argues, were built in isolation from the cloud platforms and serverless technologies, leaving it up to the developers to deploy and scale their solutions. And while some potential users may also be content with using a headless content management system, Rauch argues that increasingly, developers need to be able to build solutions that can go deeper than the off-the-shelf solutions that many businesses use today.

Rauch also noted that developers really like Vercel’s ability to generate a preview URL for a site’s front end every time a developer edits the code. “So instead of just spending all your time in code review, we’re shifting the equation to spending your time reviewing or experiencing your front end. That makes the experience a lot more collaborative,” he said. “So now, designers, marketers, IT, CEOs […] can now come together in this collaboration of building a front end and say, ‘that shade of blue is not the right shade of blue.’”

“Vercel is leading a market transition through which we are seeing the majority of value-add in web and cloud application development being delivered at the front end, closest to the user, where true experiences are made and enjoyed,” said Geoff Lewis, founder and managing partner at Bedrock. “We are extremely enthusiastic to work closely with Guillermo and the peerless team he has assembled to drive this revolution forward and are very pleased to have been able to co-lead this round.”

Vantage raises $4M to help businesses understand their AWS costs

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French-Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and help businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”

Image Credits: Vantage

Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is because more and more, AWS users have become accustomed to infrastructure-as-code to do their own automatic provisioning. And with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time twelve times a year, where they would get a shock AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.
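
To give a feel for the prediction side, here is a deliberately simple sketch: fit a trend to observed daily spend and project the bill. Whatever Vantage actually ships is surely richer (per-service models, seasonality and so on), and the numbers below are made up.

```python
import numpy as np

# Hypothetical daily AWS spend for the first week of the month.
daily_spend = np.array([310.0, 305.5, 322.4, 318.9, 331.2, 340.8, 337.5])
days = np.arange(len(daily_spend))

# Fit a linear trend, then project it over a 30-day month.
slope, intercept = np.polyfit(days, daily_spend, 1)
projected_total = float(np.sum(slope * np.arange(30) + intercept))

print(f"Projected 30-day bill: ${projected_total:,.2f}")
```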

Image Credits: Vantage

While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool that shows you, essentially, all of your cloud costs in one space.”

That is likely the vision the investors bought into as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that those tools were born in a time when AWS had only a handful of services and only a few ways of interacting with them. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and CloudHealth are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), both on the development and go-to-market sides.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

Internxt gets $1M to be ‘the Coinbase of decentralized storage’

Valencia-based startup Internxt has been quietly working on an ambitious plan to make decentralized cloud storage massively accessible to anyone with an Internet connection.

It’s just bagged $1M in seed funding led by Angels Capital, a European VC fund owned by Juan Roig (aka Spain’s richest grocer and second wealthiest billionaire), and Miami-based The Venture City. It had previously raised around half a million dollars via a token sale to help fund early development.

The seed funds will be put towards its next phase of growth — its month-to-month growth rate is 30% and it tells us it’s confident it can at least sustain that — including planning a big boost to headcount so it can accelerate product development.

The Spanish startup has spent most of its short life to date developing a decentralized infrastructure that it argues is both inherently more secure and more private than mainstream cloud-based apps (such as those offered by tech giants like Google).

This is because files are not only encrypted in a way that means Internxt cannot access your data, but information is also stored in a highly decentralized way, split into tiny shards which are then distributed across multiple storage locations, with users of the network contributing storage space (and being recompensed for providing that capacity with — you guessed it — crypto).

“It’s a distributed architecture, we’ve got servers all over the world,” explains founder and CEO Fran Villalba Segarra. “We leverage and use the space provided by professionals and individuals. So they connect to our infrastructure and start hosting data shards and we pay them for the data they host — which is also more affordable because we are not going through the traditional route of just renting out a data center and paying them for a fixed amount of space.

“It’s like the Airbnb model or Uber model. We’ve kind of democratized storage.”

Internxt clocked up three years of R&D, beginning in 2017, before launching its first cloud-based apps: Drive (file storage), a year ago — and now Photos (a Google Photos rival).

So far it’s attracted around a million active users without paying any attention to marketing, per Villalba Segarra.

Internxt Mail is the next product in its pipeline — to compete with Gmail and also ProtonMail, a pro-privacy alternative to Google’s freemium webmail client (and for more on why it believes it can offer an edge there read on).

Internxt Send (file transfer) is another product billed as coming soon.

“We’re working on a G-Suite alternative to make sure we’re at the level of Google when it comes to competing with them,” he adds.

The issue Internxt’s architecture is designed to solve is that files which are stored in just one place are vulnerable to being accessed by others. Whether that’s the storage provider itself (who may, like Google, have a privacy-hostile business model based on mining users’ data); or hackers/third parties who manage to break the provider’s security — and can thus grab and/or otherwise interfere with your files.

Security risks when networks are compromised can include ransomware attacks — which have been on an uptick in recent years — whereby attackers that have penetrated a network and gained access to stored files then hold the information to ransom by walling off the rightful owner’s access (typically by applying their own layer of encryption and demanding payment to unlock the data).

The core conviction driving Internxt’s decentralization push is that files sitting whole on a server or hard drive are sitting ducks.

Its answer to that problem is an alternative file storage infrastructure that combines zero access encryption and decentralization — meaning files are sharded, distributed and mirrored across multiple storage locations, making them highly resilient against storage failures or indeed hack attacks and snooping.
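
A stripped-down sketch of that encrypt-then-shard flow, using the widely used `cryptography` library rather than Internxt’s actual code, looks like this:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_and_shard(data: bytes, key: bytes, shard_size: int = 1024):
    """Encrypt client-side, then split the ciphertext into shards."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    shards = [ciphertext[i:i + shard_size] for i in range(0, len(ciphertext), shard_size)]
    return nonce, shards

key = AESGCM.generate_key(bit_length=256)  # held only by the user, never the provider
nonce, shards = encrypt_and_shard(b"example file contents", key)
# Each shard can now be distributed to a different storage node; any single
# shard is both incomplete and unreadable without the user's key.
```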

The approach ameliorates cloud service provider-based privacy concerns because Internxt itself cannot access user data.

To make money its business model is simple, tiered subscriptions: With (currently) one plan covering all its existing and planned services — based on how much data you need. (It is also freemium, with the first 10GB being free.)

Internxt is by no means the first to see key user value in rethinking core Internet architecture.

Scotland’s MaidSafe has been trying to build an alternative decentralized Internet for well over a decade at this point — only starting alpha testing of its alt network (aka the Safe Network) back in 2016, after a decade of development. Its long-term mission to reinvent the Internet continues.

Another (slightly less veteran) competitor in the decentralized cloud storage space is Storj, which is targeting enterprise users. There’s also Filecoin and Sia — both also part of the newer wave of blockchain startups that sprung up after Bitcoin sparked entrepreneurial interest in cryptocurrencies and blockchain/decentralization.

How, then, is what Internxt’s doing different to these rival decentralized storage plays — all of which have been at this complex coal face for longer?

“We’re the only European based startup that’s doing this [except for MaidSafe, although it’s UK not EU based],” says Villalba Segarra, arguing that the European Union’s legal regime around data protection and privacy lends it an advantage vs U.S. competitors. “All the others, Storj, plus Sia, Filecoin… they’re all US-based companies as far as I’m aware.”

The other major differentiating factor he highlights is usability — arguing that the aforementioned competitors have been “built by developers for developers.” Whereas, he says, Internxt’s goal is to be the equivalent of a ‘Coinbase for decentralized storage’; aka, it wants to make a very complex technology highly accessible to non-technical Internet users.

“It’s a huge technology but in the blockchain space we see this all the time — where there’s huge potential but it’s very hard to use,” he tells TechCrunch. “That’s essentially what Coinbase is also trying to do — bringing blockchain to users, making it easier to use, easier to invest in cryptocurrency etc. So that’s what we’re trying to do at Internxt as well, bringing blockchain for cloud storage to the people. Making it easy to use with a very easy to use interface and so forth.

“It’s the only service in the distributed cloud space that’s actually usable — that’s kind of our main differentiating factor from Storj and all these other companies.”

“In terms of infrastructure it’s actually pretty similar to that of Sia or Storj,” he goes on — further likening Internxt’s ‘zero access’ encryption to Proton Drive’s architecture (aka, the file storage product from the makers of end-to-end encrypted email service ProtonMail) — which also relies on client side encryption to give users a robust technical guarantee that the service provider can’t snoop on your stuff. (So you don’t have to just trust the company not to violate your privacy.)

But while it’s also touting zero access encryption (it seems to be using off-the-shelf AES-256 encryption; it says it uses “military grade”, client-side, open source encryption that’s been audited by Spain’s S2 Grupo, a major local cybersecurity firm), Internxt takes the further step of decentralizing the encrypted bits of data too. And that means it can tout added security benefits, per Villalba Segarra.

“On top of that what we do is we fragment data and then distribute it around the world. So essentially what servers host are encrypted data shards — which is much more secure because if a hacker was ever to access one of these servers what they would find is encrypted data shards which are essentially useless. Not even we can access that data.

“So that adds a huge layer of security against hackers or third party [access] in terms of data. And then on top of that we build very nice interfaces with which the user is very used to using — pretty much similar to those of Google… and that also makes us very different from Storj and Sia.”

Storage space for Internxt users’ files is provided by users who are incentivized to offer up their unused capacity to host data shards, with micropayments of crypto for doing so. This means capacity could be coming from an individual user connecting to Internxt with just their laptop — or a datacenter company with large amounts of unused storage capacity. (And Villalba Segarra notes that a number of data center companies, such as OVH, are connected to its network.)

“We don’t have any direct contracts [for storage provision]… Anyone can connect to our network — so datacenters with available storage space, if they want to make some money on that they can connect to our network. We don’t pay them as much as we would pay them if we went to them through the traditional route,” he says, likening this portion of the approach to how Airbnb has both hosts and guests (or Uber needs drivers and riders).

“We are the platform that connects both parties but we don’t host any data ourselves.”

Internxt uses a reputation system to manage storage providers — to ensure network uptime and quality of service — and also applies blockchain ‘proof of work’ challenges to node operators to make sure they’re actually storing the data they claim.

“Because of the decentralized nature of our architecture we really need to make sure that it hits a certain level of reliability,” he says. “So for that we use blockchain technology… When you’re storing data in your own data center it’s easier in terms of making sure it’s reliable but when you’re storing it in a decentralized architecture it brings a lot of benefits — such as more privacy or it’s also more affordable — but the downside is you need to make sure that for example they’re actually storing data.”
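
A toy version of such a storage challenge (not Internxt’s actual protocol) shows the idea: every challenge carries a fresh random nonce, so a node can only answer correctly if it still holds the shard’s bytes.

```python
import hashlib
import os

def challenge_answer(shard: bytes, nonce: bytes) -> str:
    """Both sides compute a digest over the nonce plus the shard."""
    return hashlib.sha256(nonce + shard).hexdigest()

shard = b"...encrypted shard bytes..."
nonce = os.urandom(16)  # fresh per challenge, so old answers can't be replayed

# The coordinator knows the expected answer; the node must reproduce it.
expected = challenge_answer(shard, nonce)
claimed = challenge_answer(shard, nonce)  # an honest node reads its stored copy
assert claimed == expected  # a node that discarded the shard would fail here
```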

Payments to storage capacity providers are also made via blockchain tech — which Villalba Segarra says is the only way to scale and automate so many micropayments to ~10,000 node operators all over the world.

Discussing the issue of energy costs — given that ‘proof of work’ blockchain-based technologies are facing increased scrutiny over the energy consumption involved in carrying out the calculations — he suggests that Internxt’s decentralized architecture can be more energy efficient than traditional data centers because data shards are more likely to be located nearer to the requesting user — shrinking the energy required to retrieve packets vs always having to do so from a few centralized global locations.

“What we’ve seen in terms of energy consumption is that we’re actually much more energy efficient than a traditional cloud storage service. Why? Think about it, we mirror files and we store them all over the world… It’s actually impossible to access a file from Dropbox that is sent out from [a specific location]. Essentially when you access Dropbox or Google Drive and you download a file they’re going to be sending it out from their data center in Texas or wherever. So there’s a huge data transfer energy consumption there — and people don’t think about it,” he argues.

“Data center energy consumption is already 2%* of the whole world’s energy consumption if I’m not mistaken. So being able to use latency and being able to send your files from [somewhere near the user] — which is also going to be faster, which is all factored into our reputation system — so our algorithms are going to be sending you the files that are closer to you so that we save a lot of energy from that. So if you multiply that by millions of users and millions of terabytes that actually saves a lot of energy consumption and also costs for us.”

What about latency from the user’s point of view? Is there a noticeable lag when they try to upload or retrieve and access files stored on Internxt vs — for example — Google Drive?

Villalba Segarra says being able to store file fragments closer to the user also helps compensate for any lag. But he also confirms there is a bit of a speed difference vs mainstream cloud storage services.

“In terms of upload and download speed we’re pretty close to Google Drive and Dropbox,” he suggests. “Again these companies have been around for over ten years and their services are very well optimized and they’ve got a traditional cloud architecture which is also relatively simpler, easier to build and they’ve got thousands of [employees] so their services are obviously much better than our service in terms of speed and all that. But we’re getting really close to them and we’re working really fast towards bringing our speed [to that level] and also as many features as possible to our architecture and to our services.”

“Essentially how we see it is we’re at the level of Proton Drive or Tresorit in terms of usability,” he adds on the latency point. “And we’re getting really close to Google Drive. But an average user shouldn’t really see much of a difference and, as I said, we’re literally working as hard as possible to make our services as useable as those of Google. But we’re ages ahead of Storj, Sia, MaidSafe and so forth — that’s for sure.”

Internxt is doing all this complex networking with a team of just 20 people currently. But with the new seed funding tucked in its back pocket the plan now is to ramp up hiring over the next few months — so that it can accelerate product development, sustain its growth and keep pushing its competitive edge.

“By the time we do a Series A we should be around 100 people at Internxt,” says Villalba Segarra. “We are already preparing our Series A. We just closed our seed round but because of how fast we’re growing we are already being reached out to by a few other lead VC funds from the US and London.

“It will be a pretty big Series A. Potentially the biggest in Spain… We plan on growing until the Series A at at least a 30% month-to-month rate which is what we’ve been growing up until now.”

He also tells TechCrunch that the intention for the Series A is to do the funding at a $50M valuation.

“We were planning on doing it a year from now because we literally just closed our [seed] round but because of how many VCs are reaching out to us we may actually do it by the end of this year,” he says, adding: “But timeframe isn’t an issue for us. What matters most is being able to reach that minimum valuation.”

*Per the IEA, data centres and data transmission networks each accounted for around 1% of global electricity use in 2019

Elisity raises $26M Series A to scale its AI cybersecurity platform

Elisity, a self-styled innovator that provides behavior-based enterprise cybersecurity, has raised $26 million in Series A funding.

The funding round was co-led by Two Bear Capital and AllegisCyber Capital, the latter of which has invested in a number of cybersecurity startups including Panaseer, with previous seed investor Atlantic Bridge also participating.

Elisity, which is led by industry veterans from Cisco, Qualys, and Viptela, says the funding will help it meet growing enterprise demand for its cloud-delivered Cognitive Trust platform, which it claims is the only platform intelligent enough to understand how assets and people connect beyond corporate perimeters.

The platform looks to help organizations transition from legacy access approaches to zero trust, a security model based on maintaining strict access controls and not trusting anyone — even employees — by default, across their entire digital footprint. This enables organizations to adopt a ‘work-from-anywhere’ model, according to the company, which notes that most companies today continue to rely on security and policies based on physical location or low-level networking constructs, such as VLAN, IP and MAC addresses, and VPNs.

Cognitive Trust, the company claims, can analyze the unique identity and context of people, apps and devices, including Internet of Things (IoT) and operational technology (OT), wherever they’re working. Using its AI-driven behavioral intelligence, the platform can also continuously assess risk and instantly optimize access, connectivity and protection policies.

“CISOs are facing ever increasing attack surfaces caused by the shift to remote work, reliance on cloud-based services (and often multi-cloud), and the convergence of IT/OT networks,” said Mike Goguen, founder and managing partner at Two Bear Capital. “Elisity addresses all of these problems by not only enacting a zero trust model, but by doing so at the edge and within the behavioral context of each interaction. We are excited to partner with the CEO, James Winebrenner, and his team as they expand the reach of their revolutionary approach to enterprise security.”

Founded in 2018, Elisity, whose competitors include the likes of Vectra AI and Lastline, closed a $7.5 million seed round in August of that same year, led by Atlantic Bridge. With its seed round, Elisity began scaling its engineering, sales and marketing teams to ramp up ahead of the platform’s launch.

Now it’s looking to scale in order to meet growing enterprise demand, which comes as many organizations move to a hybrid working model and seek the tools to help them secure distributed workforces. 

“When the security perimeter is no longer the network, we see an incredible opportunity to evolve the way enterprises connect and protect their people and their assets, moving away from strict network constructs to identity and context as the basis for secure access,” said Winebrenner. 

“With Elisity, customers can dispense with the complexity, cost and protracted timeline enterprises usually encounter. We can onboard a new customer in as little as 45 minutes, rather than months or years, moving them to an identity-based access policy, and expanding to their cloud and on-prem[ise] footprints over time without having to rip and replace existing identity providers and network infrastructure investments. We do this without making tradeoffs between productivity for employees and the network security posture.”

Elisity, which is based in California, currently employs around 30 staff. However, it currently has no women in its leadership team, nor on its board of directors. 

How Microsoft Is Ditching the Video Game Console Wars

Known for the Xbox, Microsoft has been diversifying away from boxy hardware in favor of reaching millions more new gamers.

The rise of cybersecurity debt

Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.

But once again, amid this flurry of activity, we must ask and answer a fundamental question about the state of our cybersecurity defenses: Why does this keep happening?

I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.

Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.

How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.

We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.

First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.

There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.

The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.

Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.

Ensuring this requires confidential computing, which encrypts data at rest, in transit and in process. Confidential computing makes it technically impossible for anyone without the encryption key to access the data, not even your cloud provider. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones that hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.

Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.

I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.

Alibaba is making its cloud OS compatible with multiple chip architectures

Alibaba’s cloud computing unit is making its Apsara operating system compatible with processors based on Arm, x86 and RISC-V, among other architectures, the company announced at a conference on Friday.

Alibaba Cloud is one of the fastest-growing businesses for the Chinese e-commerce giant and the world’s fourth-largest public cloud service in the second half of 2020, according to market research firm IDC.

The global chip market has mostly been dominated by Intel’s x86 in personal computing and Arm for mobile devices. But RISC-V, an open-source chip architecture competitive with Arm’s technologies, is gaining popularity around the world, especially with Chinese developers. Started by academics at the University of California, Berkeley, RISC-V is open to all to use without licensing or patent fees and is generally not subject to America’s export controls.

The Trump Administration’s bans on Huawei and its rival ZTE over national security concerns have effectively severed ties between the Chinese telecom titans and American tech companies, including major semiconductor suppliers.

Arm was forced to review its relationship with Huawei and said it could continue licensing to the Chinese firm because its technology is of U.K. origin. But Huawei still struggles to find fabs that are both capable and permitted to actually manufacture chips designed using the architecture.

The U.S. sanctions led to a burst in activity around RISC-V in China’s tech industry as developers prepare for future tech restrictions by the U.S., with Alibaba at the forefront of the movement. Alibaba Cloud, Huawei and ZTE are among the 13 premier members of RISC-V International, which means they get a seat on its Board of Directors and Technical Steering Community.

In 2019, the e-commerce company’s semiconductor division T-Head launched its first processor, the Xuantie 910, which is based on RISC-V and used for cloud, edge and IoT applications. Having its operating system work with multiple chip architectures instead of one mainstream architecture could prepare Alibaba Cloud well for a future of chip independence in China.

“The IT ecosystem was traditionally defined by chips, but cloud computing fundamentally changed that,” Zhang Jianfeng, president of Alibaba Cloud’s Intelligence group, said at the event. “A cloud operating system can standardize the computing power of server chips, special-purpose chips and other hardware, so whether the chip is based on x86, Arm, RISC-V or a hardware accelerator, the cloud computing offerings for customers are standardized and of high-quality.”

Meanwhile, some argue that Chinese companies moving towards alternatives like RISC-V means more polarization of technology and standards, which is not ideal for global collaboration unless RISC-V becomes widely adopted in the rest of the world.

Microsoft brings more of its Azure services to any Kubernetes cluster

At its Build developer conference today, Microsoft announced a new set of Azure services (in preview) that businesses can now run on virtually any CNCF-conformant Kubernetes cluster with the help of its Azure Arc multi-cloud service.

Azure Arc, similar to tools like Google’s Anthos or AWS’s upcoming EKS Anywhere, provides businesses with a single tool to manage their container clusters across clouds and on-premises data centers. Since its launch back in late 2019, Arc has enabled some of the core Azure services to run directly in these clusters, though the early focus was on a small set of data services, with the team later adding some machine learning tools as well. With today’s update, the company is greatly expanding the set of containerized Azure services that work with Arc.

These new services include Azure App Service for building and managing web apps and APIs, Azure Functions for event-driven programming, Azure Logic Apps for building automated workflows, Azure Event Grid for event routing, and Azure API Management for… you guessed it… managing internal and external APIs.

“The app services are now Azure Arc-enabled, which means customers can deploy Web Apps, Functions, API gateways, Logic Apps and Event Grid services on pre-provisioned Kubernetes clusters,” Microsoft explained in its annual “Book of News” for this year’s Build. “This takes advantage of features including deployment slots for A/B testing, storage queue triggers and out-of-box connectors from the app services, regardless of run location. With these portable turnkey services, customers can save time building apps, then manage them consistently across hybrid and multicloud environments using Azure Arc.”
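
For context, an Azure Function itself is just ordinary code; below is a minimal HTTP-triggered function in Python (the accompanying function.json binding configuration is omitted). The point of today’s update is that, per Microsoft, the same function can now be deployed to an Arc-enabled Kubernetes cluster rather than only to Azure’s own infrastructure: the run location is a deployment-time decision, not a code change.

```python
import azure.functions as func  # pip install azure-functions

def main(req: func.HttpRequest) -> func.HttpResponse:
    # A plain HTTP-triggered function; whether it runs in Azure proper or
    # on an Arc-connected cluster is decided at deployment time.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```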

Esper raises $30M Series B for its IoT DevOps platform

There may be billions of IoT devices in use today, but the tooling around building (and updating) the software for them still leaves a lot to be desired. Esper, which today announced that it has raised a $30 million Series B round, builds the tools to enable developers and engineers to deploy and manage fleets of Android-based edge devices. The round was led by Scale Venture Partners, with participation from Madrona Venture Group, Root Ventures, Ubiquity Ventures and Haystack.

The company argues that there are thousands of device manufacturers who are building these kinds of devices on Android alone, but that scaling and managing these deployments comes with a lot of challenges. The core idea here is that Esper brings to device development the DevOps experience that software developers now expect. The company argues that its tools allow companies to forgo building their own internal DevOps teams and instead use its tooling to scale their Android-based IoT fleets for use cases that range from digital signage and kiosks to custom solutions in healthcare, retail, logistics and more.

“The pandemic has transformed industries like connected fitness, digital health, hospitality, and food delivery, further accelerating the adoption of intelligent edge devices. But with each new use case, better software automation is required,” said Yadhu Gopalan, CEO and co-founder at Esper. “Esper’s mature cloud infrastructure incorporates the functionality cloud developers have come to expect, re-imagined for devices.”

Image Credits: Esper

Mobile device management (MDM) isn’t exactly a new thing, but the Esper team argues that these tools weren’t created for this kind of use case. “MDMs are the solution now in the market. They are made for devices being brought into an environment,” Gopalan said. “The DNA of these solutions is rooted in protecting the enterprise and to deploy applications to them in the network. Our customers are sending devices out into the wild. It’s an entirely different use case and model.”

To address these challenges, Esper offers a range of tools and services that includes a full development stack for developers, cloud-based services for device management and hardware emulators to get started with building custom devices.

“Esper helped us launch our Fusion-connected fitness offering on three different types of hardware in less than six months,” said Chris Merli, founder at Inspire Fitness. “Their full stack connected fitness Android platform helped us test our application on different hardware platforms, configure all our devices over the cloud, and manage our fleet exactly to our specifications. They gave us speed, Android expertise, and trust that our application would provide a delightful experience for our customers.”

The company also offers solutions for running Android on older x86 Windows devices to extend the life of this hardware, too.

“We spent about a year and a half on building out the infrastructure,” said Gopalan. “Definitely. That’s the hard part and that’s really creating a reliable, robust mechanism where customers can trust that the bits will flow to the devices. And you can also roll back if you need to.”

Esper is working with hardware partners to launch devices that come with built-in Esper-support from the get-go.

Esper says it saw 70x revenue growth in the last year, an 8x growth in paying customers and a 15x growth in devices running Esper. Since we don’t know the baseline, those numbers are meaningless, but the investors clearly believe that Esper is on to something. Current customers include the likes of CloudKitchens, Spire Health, Intelity, Ordermark, Inspire Fitness, RomTech and Uber.

Construction tech upstart Assignar adds a Fifth Wall with $20M Series B

Construction technology may not be the sexiest of industries, but it is one where tremendous opportunity lies — considering it has historically lagged in productivity. And lags in productivity mean project delays, which typically cost everyone involved more time and more money.

There are a number of larger players in the space (think Procore, PlanGrid and Autodesk) that are tackling the problems from the perspective of the general contractor. But when it comes to the subcontractors that are hired by the general contractor to do 95% of the work, the pickings are few and far between.

Enter Assignar, a cloud-based construction tech startup that was originally born in Australia and is now based in Denver, Colorado. Co-founder and CEO Sean McCreanor was a contractor himself for many years, and grew frustrated with the lack of offerings available to him. So, as in the case of many founders, he set out to create the technology he wished existed.

And today, Assignar has raised $20 million in a Series B funding round led by real estate tech-focused venture firm Fifth Wall. 

Existing backer Tola Capital and new investor Ironspring Ventures also put money in the round, which brings Assignar’s total raised since its 2014 inception to $31 million.

“I had 100 crews and workers out in the field, lots of heavy equipment and project work, and was running the entire business on spreadsheets and whiteboards,” McCreanor recalls. “With Assignar, we essentially help the office connect to the field and vice versa.”

In a nutshell, Assignar’s operations platform is designed for use by “self-perform general and subcontractors” on public and private infrastructure projects. The company’s goal is to make the whole process smoother for large general contractors, developers and real estate owner-operators by providing a “real-time snapshot of granular field activity.”

Specifically, Assignar aims to streamline operations and schedules, track crews and equipment, and improve quality and safety, as well as measure and monitor productivity and progress with data on all projects. For example, it claims to be able to help match up the best crews and equipment for a specific job “more efficiently.”

The startup says it has hundreds of international customers working on multibillion-dollar projects in infrastructure, road, rail, heavy civil, utilities and other construction disciplines. Those customers range from specialist contractors with as few as five crews to multi-national, multibillion-dollar companies. Projects include things such as bridges and roads, for example.

Image Credits: Assignar

Assignar historically has “more than doubled” its revenue every year since inception, and in 2020 saw revenue increase by 75%.

“We could have grown faster but wanted to manage cash flow,” McCreanor told TechCrunch.

Assignar’s focus is particularly significant these days considering that the Biden administration’s Infrastructure Bill is nearing agreement, likely signaling an investment in infrastructure for communities across the U.S. 

The heavy civil and horizontal construction industry has long lacked a well-designed and ubiquitous operations platform, according to Fifth Wall Partner Vik Chawla.

“Assignar’s cloud-based software offers a detailed view on when and where different types of field activities are being performed,” he said. “It streamlines communications between headquarters and the field, allows for a reduction in paperwork, and brings time and cost savings to an industry where much of the planning, tracking and reporting are still done by hand, in Excel or on white boards.”

Assignar plans to use its new capital to grow its business in North America (which currently makes up about 25% of its revenue) and double its 65-person team by hiring for roles across all departments. The company also plans to invest in R&D and product development to further build out its core platform. Among the features it’s planning to develop is a contractor hub and a schedule recommendation engine that McCreanor says will leverage data, AI and machine learning “to support planning and execution processes.”

#architecture, #artificial-intelligence, #assignar, #australia, #autodesk, #biden-administration, #cloud, #cloud-based-software, #cloud-computing, #colorado, #construction, #construction-software, #construction-tech, #contractor, #denver, #fifth-wall, #funding, #fundings-exits, #heavy-equipment, #ironspring-ventures, #machine-learning, #north-america, #plangrid, #procore, #recent-funding, #startup, #startups, #tc, #tola-capital, #united-states, #venture-capital, #vik-chawla


Google updates Firebase with new personalization features, security tools and more

At its I/O developer conference, Google today announced a slew of updates to its Firebase developer platform, which, as the company also announced, now powers over 3 million apps.

There are a number of major updates here, most of which center on improving existing tools like Firebase Remote Config and Firebase’s monitoring capabilities, but there are also some completely new features, including the ability to create Android App Bundles and a new security tool called App Check.

“Helping developers be successful is what makes Firebase successful,” Firebase product manager Kristen Richards told me ahead of today’s announcements. “So we put helpfulness and helping developers at the center of everything that we do.” She noted that during the pandemic, Google saw a lot of people who started to focus on app development — both as learners and as professional developers. But the team also saw a lot of enterprises move to its platform as those companies looked to quickly bring new apps online.

Perhaps the marquee Firebase announcement at I/O is the updated Remote Config. It has always been a very powerful feature, allowing developers to make changes to live production apps on the go without having to release a new version of the app. Developers can use it for anything from A/B testing to providing tailored in-app experiences to specific user groups.
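
For a rough sense of what that looks like in code, here is a minimal sketch against Remote Config’s documented REST API, not Google’s own sample code. The project ID, parameter name and credentials file are placeholders, and error handling is omitted:

    # Sketch: fetch a Remote Config template, flip a parameter, republish.
    # PROJECT_ID, welcome_message and service-account.json are placeholders.
    import requests
    import google.auth.transport.requests
    from google.oauth2 import service_account

    SCOPES = ["https://www.googleapis.com/auth/firebase.remoteconfig"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)
    creds.refresh(google.auth.transport.requests.Request())

    url = ("https://firebaseremoteconfig.googleapis.com/v1/"
           "projects/PROJECT_ID/remoteConfig")
    headers = {"Authorization": f"Bearer {creds.token}"}

    # GET returns the live template; the ETag is required to publish changes.
    resp = requests.get(url, headers=headers)
    template, etag = resp.json(), resp.headers["ETag"]

    # Update a parameter and publish. No new app release is required.
    template["parameters"]["welcome_message"] = {
        "defaultValue": {"value": "Hello from Remote Config"}}
    requests.put(url, json=template, headers={**headers, "If-Match": etag})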

With this update, Google is introducing updates to the Remote Config console, to make it easier for developers to see how they are using this tool, as well as an updated publish flow and redesigned test results pages for A/B tests.

Image Credits: Google

What’s most important, though, is that Google is taking Remote Config a step further now by launching a new Personalization feature that helps developers automatically optimize the user experience for individual users. “It’s a new feature of [Remote Config] that uses Google’s machine learning to create unique individual app experiences,” Richards explained. “It’s super simple to set up and it automatically creates these personalized experiences that’s tailored to each individual user. Maybe you have something that you would like, which would be something different for me. In that way, we’re able to get a tailored experience, which is really what customers expect nowadays. I think we’re all expecting things to be more personalized than they have in the past.”

Image Credits: Google

Google is also improving a number of Firebase’s analytics and monitoring capabilities, including its Crashlytics service for tracking down app crashes. For game developers, that means improved support for games written with the help of the Unity platform, for example. And for all developers, Firebase’s Performance Monitoring service now processes data in real time, a major improvement over having performance data (especially on launch day) arrive with a delay of almost half a day.

Firebase is also now finally adding support for Android App Bundles, Google’s relatively new format for packaging up all of an app’s code and resources, with Google Play optimizing the actual APK with the right resources for the kind of device the app gets installed on. This typically leads to smaller downloads and faster installs.

On the security side, the Firebase team is launching App Check, now available in beta. App Check helps developers guard their apps against outside threats and is meant to automatically block any traffic to online resources like Cloud Storage, Realtime Database and Cloud Functions for Firebase (with others coming soon) that doesn’t provide valid credentials.
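
That enforcement is automatic for Firebase-managed resources, but App Check tokens can also be verified on a custom backend. A sketch of what that might look like in Python, assuming the Admin SDK’s app_check module and Flask (placeholder route, minimal error handling):

    # Sketch: reject requests that lack a valid App Check token.
    # Firebase clients send the token in the X-Firebase-AppCheck header.
    import firebase_admin
    from firebase_admin import app_check
    from flask import Flask, request, abort

    firebase_admin.initialize_app()  # reads GOOGLE_APPLICATION_CREDENTIALS
    app = Flask(__name__)

    @app.route("/api/data")
    def data():
        token = request.headers.get("X-Firebase-AppCheck", "")
        try:
            app_check.verify_token(token)  # raises if missing or invalid
        except Exception:
            abort(401)
        return {"ok": True}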

Image Credits: Google

The other update worth mentioning here is to Firebase Extensions, which launched a while ago but is getting support for a few more extensions today. The new extensions from Algolia, Mailchimp and MessageBird bring features like Algolia’s search capabilities or MessageBird’s communications features directly to the platform. Google itself is also launching a new extension that helps developers detect comments that could be considered “rude, disrespectful, or unreasonable in a way that will make people leave a conversation.”

#algolia, #android, #cloud-computing, #computing, #developer, #firebase, #google, #google-allo, #google-cloud, #google-i-o-2021, #google-play, #google-search, #machine-learning, #mailchimp, #operating-systems, #product-manager, #tc


Google Cloud launches Vertex AI, a new managed machine learning platform

At Google I/O today Google Cloud announced Vertex AI, a new managed machine learning platform that is meant to make it easier for developers to deploy and maintain their AI models. It’s a bit of an odd announcement at I/O, which tends to focus on mobile and web developers and doesn’t traditionally feature a lot of Google Cloud news, but the fact that Google decided to announce Vertex today goes to show how important it thinks this new service is for a wide range of developers.

The launch of Vertex is the result of quite a bit of introspection by the Google Cloud team. “Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”

Image Credits: Google

Wiley, who was general manager of AWS’s SageMaker AI service from 2016 to 2018 before coming to Google in 2019, noted that Google and others who were able to make machine learning work for themselves saw how transformational an impact it can have. But the way the big clouds started offering these services, he argued, was by launching dozens of services, “many of which were dead ends” (including some of Google’s own). “Ultimately, our goal with Vertex is to reduce the time to ROI for these enterprises, to make sure that they can not just build a model but get real value from the models they’re building.”

Vertex, then, is meant to be a very flexible platform that allows developers and data scientists across skill levels to quickly train models and then manage their entire lifecycle. Google says it takes about 80% fewer lines of code to train a model on Vertex than on some competing platforms, for example.
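
For a sense of what that looks like, here is a compressed sketch using the Vertex AI Python SDK (google-cloud-aiplatform); the project, bucket, dataset and column names are placeholders, and a real run takes hours, not seconds:

    # Sketch: train, deploy and query an AutoML tabular model on Vertex AI.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1",
                    staging_bucket="gs://my-bucket")

    ds = aiplatform.TabularDataset.create(
        display_name="churn", gcs_source="gs://my-bucket/churn.csv")

    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="churn-model",
        optimization_prediction_type="classification")
    model = job.run(dataset=ds, target_column="churned")

    # One managed endpoint then handles serving and scaling.
    endpoint = model.deploy(machine_type="n1-standard-4")
    print(endpoint.predict(instances=[{"tenure": "12", "plan": "basic"}]))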

Image Credits: Google

The service is also integrated with Vizier, Google’s AI optimizer that can automatically tune hyperparameters in machine learning models. This greatly reduces the time it takes to tune a model and allows engineers to run more experiments and do so faster.

Vertex also offers a “Feature Store” that helps users serve, share and reuse machine learning features, as well as Vertex Experiments, which helps teams accelerate the deployment of their models into production with faster model selection.

Deployment is backed by a continuous monitoring service and Vertex Pipelines, a rebrand of Google Cloud’s AI Platform Pipelines, which helps teams manage the workflows involved in preparing and analyzing data for their models, then training, evaluating and deploying them to production.

To give a wide variety of developers the right entry points, the service provides three interfaces: a drag-and-drop tool, notebooks for advanced users and — and this may be a bit of a surprise — BigQuery ML, Google’s tool for using standard SQL queries to create and execute machine learning models in its BigQuery data warehouse.
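
That last path deserves a quick illustration: BigQuery ML lets anyone comfortable with SQL train a model where the data already lives. A minimal, hypothetical example through the BigQuery Python client (the demo.users table and its columns are invented):

    # Sketch: train a logistic regression model entirely inside BigQuery.
    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE OR REPLACE MODEL `demo.churn_model`
        OPTIONS (model_type = 'logistic_reg') AS
        SELECT churned AS label, tenure, plan
        FROM `demo.users`
    """).result()  # .result() blocks until training finishes

    # Predictions come back as ordinary query rows.
    rows = client.query(
        "SELECT * FROM ML.PREDICT(MODEL `demo.churn_model`, "
        "TABLE `demo.users`)")
    for row in rows:
        print(row["predicted_label"])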

“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” said Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”

#amazon-sagemaker, #analyst, #andrew-moore, #artificial-intelligence, #aws, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #enterprise, #google, #google-cloud-platform, #google-i-o-2021, #harvard, #machine-learning, #product-management, #tc, #technology, #web-developers, #world-wide-web


Apple’s Compromises in China: 5 Takeaways

To stay on the good side of the Chinese authorities, the company has made decisions that contradict its carefully curated image.

#apple-inc, #censorship, #cloud-computing, #communist-party-of-china, #computer-security, #computers-and-the-internet, #data-centers, #guizhou-china, #guizhou-cloud-big-data-industry-co-ltd, #guo-wengui, #icloud


Censorship, Surveillance and Profits: A Hard Bargain for Apple in China

Apple built the world’s most valuable business on top of China. Now it has to answer to the Chinese government.

#apple-inc, #censorship, #cloud-computing, #computer-security, #computers-and-the-internet, #cook-timothy-d, #cue-eddy, #data-storage, #guizhou-cloud-big-data-industry-co-ltd, #guo-wengui, #icloud, #inner-mongolia, #iphone, #mobile-applications, #privacy, #software, #suits-and-litigation-civil, #surveillance-of-citizens-by-government


With $21M in funding, Code Ocean aims to help researchers replicate data-heavy science

Every branch of science is increasingly reliant on big data sets and analysis, which means a growing confusion of formats and platforms — more than inconvenient, this can hinder the process of peer review and replication of research. Code Ocean hopes to make it easier for scientists to collaborate by making a flexible, shareable format and platform for any and all datasets and methods, and it has raised a total of $21M to build it out.

Certainly there’s an air of “Too many options? Try this one!” to this (and here’s the requisite relevant XKCD). But Code Ocean isn’t creating a competitor to successful tools like Jupyter or Gitlab or Docker — it’s more of a small-scale container platform that lets you wrap up all the necessary components of your data and analysis in an easily shared format, whatever platform they live on natively.

The trouble appears when you need to share what you’re doing with another researcher, whether they’re on the bench next to you or at a university across the country. It’s important for replication purposes that data analysis — just like any other scientific technique — be done exactly the same way. But there’s no guarantee that your colleague will use the same structures, formats, notation, labels, and so on.

That doesn’t mean it’s impossible to share your work, but it does add a lot of extra steps as would-be replicators or iterators check and double check that all the methods are the same, that the same versions of the same tools are being used in the same order, with the same settings, and so on. A tiny inconsistency can have major repercussions down the road.

Turns out this problem is similar in a way to how many cloud services are spun up. Software deployments can be as finicky as scientific experiments, and one solution to this is containers, which, like tiny virtual machines, include everything needed to accomplish a computing task in a portable format compatible with many different setups. The idea is a natural one to transfer to the research world, where you can tie up the data, the software used, and the specific techniques and processes used to reach a given result all in one tidy package. That, at least, is the pitch Code Ocean offers for its platform and “Compute Capsules.”

Diagram showing how a "compute capsule" includes code, environment, and data.

Say you’re a microbiologist looking at the effectiveness of a promising compound on certain muscle cells. You’re working in R, writing in RStudio on an Ubuntu machine, and your data are such and such collected during an in vitro observation. While you would naturally declare all this when you publish, there’s no guarantee anyone has an Ubuntu laptop with a working RStudio setup around, so even if you provide all the code it might be for nothing.

If however you put it on Code Ocean, like this, it makes all the relevant code available, and capable of being inspected and run unmodified with a click, or being fiddled with if a colleague is wondering about a certain piece. It works through a single link and web app, cross platform, and can even be embedded on a webpage like a document or video. (I’m going to try to do that below, but our backend is a little finicky. The capsule itself is here.)

More than that, though, the Compute Capsule can be repurposed by others with new data and modifications. Maybe the technique you put online is a general purpose RNA sequence analysis tool that works as long as you feed it properly formatted data, and that’s something others would have had to code from scratch in order to take advantage of some platforms.

Well, they can just clone your capsule, run it with their own data, and get their own results in addition to verifying your own. This can be done via the Code Ocean website or just by downloading a zip file of the whole thing and getting it running on their own computer, if they happen to have a compatible setup. A few more example capsules can be found here.

Screenshot of the Code Ocean workbench environment.

Image Credits: Code Ocean

This sort of cross-pollination of research techniques is as old as science, but modern data-heavy experimentation often ends up siloed because it can’t easily be shared and verified even though the code is technically available. That means other researchers move on, build their own thing, and further reinforce the silo system.

Right now there are about 2,000 public compute capsules on Code Ocean, most of which are associated with a published paper. Most have also been used by others, either to replicate or try something new, and some, like ultra-specific open source code libraries, have been used by thousands.

Naturally there are security concerns when working with proprietary or medically sensitive data, and the enterprise product allows the whole system to run on a private cloud platform. That way it would be more of an internal tool, and at major research institutions that in itself could be quite useful.

Code Ocean hopes that being as inclusive as possible in terms of codebases, platforms, compute services and so on will make for a more collaborative environment at the cutting edge.

Clearly that ambition is shared by others, as the company has raised $21M so far: $6M in previously undisclosed investments and $15M in an A round announced today. The A round was led by Battery Ventures, with Digitalis Ventures, EBSCO, and Vaal Partners participating, as well as numerous others.

The money will allow the company to further develop, scale, and promote its platform. With luck, they’ll soon find themselves breathing the rarefied air of this sort of savvy SaaS — necessary, deeply integrated, and profitable.

 

#biotech, #cloud, #cloud-computing, #code-ocean, #funding, #fundings-exits, #recent-funding, #saas, #science, #startups


To Understand Amazon, We Must Understand Jeff Bezos

In “Amazon Unbound,” his second book about the company, Brad Stone focuses on its singular C.E.O.

#amazon-unbound-book, #amazon-com-inc, #bezos-jeffrey-p, #books-and-literature, #cloud-computing, #computers-and-the-internet, #workplace-environment


Google Cloud Run gets committed use discounts and new security features

Cloud Run, Google Cloud’s serverless platform for containerized applications, is getting committed use discounts. Users who commit to spending a given amount on using Cloud Run for a year will get a 17% discount on the money they commit. The company offers a similar pre-commitment discount scheme for VM-based Compute Engine instances, as well as automatic ‘sustained use’ discounts for machines that run for more than 25% of a month.
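
The math is simple enough to sketch, with an invented commitment figure rather than Google’s pricing calculator:

    # Illustrative only: what a 17% committed-use discount works out to.
    annual_commit = 120_000  # dollars of Cloud Run usage committed (made up)
    effective_cost = annual_commit * (1 - 0.17)
    print(f"${effective_cost:,.0f} instead of ${annual_commit:,}")
    # -> $99,600 instead of $120,000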

In addition, Google Cloud is introducing a number of new security features for Cloud Run, including the ability to mount secrets from the Google Cloud Secret Manager and binary authorization to help define and enforce policies about how containers are deployed on the service. Cloud Run users can now also use and manage their own encryption keys (by default, Cloud Run uses Google-managed keys), and a new Recommendation Hub inside of Cloud Run will offer users recommendations for how to better protect their Cloud Run services.

Aparna Sinha, who recently became the director of product management for Google Cloud’s serverless platform, noted that these updates are part of Google Cloud’s push to build what she calls the “next generation of serverless.”

“We’re really excited to introduce our new vision for serverless, which I think is going to help redefine this space,” she told me. “In the past, serverless has meant a certain narrower type of compute, which is focused on functions or a very specific kind of applications, web services, etc. — and what we are talking about with redefining serverless is focusing on the power of serverless, which is the developer experience and the ease of use, but broadening it into a much more versatile platform, where many different types of applications can be run, and building in the Google way of doing DevOps and security and a lot of integrations so that you have access to everything that’s the best of cloud.”

She noted that Cloud Run saw “tremendous adoption” during the pandemic, something she attributes to the fact that businesses were looking to speed up time-to-value from their applications. IKEA, for example, which famously had a hard time moving from in-store to online sales, bet on Google Cloud’s serverless platform to bring down the refresh time of its online store and inventory management system from three hours to less than three minutes after switching to this model.

“That’s kind of the power of serverless, I think, especially looking forward, the ability to build real-time applications that have data about the context, about the inventory, about the customer and can therefore be much more reactive and responsive,” Sinha said. “This is an expectation that customers will have going forward and serverless is an excellent way to deliver that as well as be responsive to demand patterns, especially when they’re changing so much in today’s uncertain environment.”

Since the container model gives businesses a lot of flexibility in what they want to run in these containers — and in how they want to develop these applications, since Cloud Run is language-agnostic — Google is now seeing a lot of other enterprises move to this platform as well, both to deploy completely new applications and to modernize some of their existing services.

For the companies that have predictable usage patterns, the committed use discounts should be an attractive option and it’s likely the more sophisticated organizations that are asking for the kinds of new security features that Google Cloud is introducing today.

“The next generation of serverless combines the best of serverless with containers to run a broad spectrum of apps, with no language, networking or regional restrictions,” Sinha writes in today’s announcement. “The next generation of serverless will help developers build the modern applications of tomorrow—applications that adapt easily to change, scale as needed, respond to the needs of their customers faster and more efficiently, all while giving developers the best developer experience.”

#aparna-sinha, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #developer, #encryption, #google, #google-cloud, #google-compute-engine, #ikea, #online-sales, #product-management, #serverless-computing, #web-services


The health data transparency movement is birthing a new generation of startups

In the early 2000s, Jeff Bezos gave a seminal TED Talk titled “The Electricity Metaphor for the Web’s Future.” In it, he argued that the internet would enable innovation on the same scale that electricity did.

We are at a similar inflection point in healthcare, with the recent movement toward data transparency birthing a new generation of innovation and startups.

Those who follow the space closely may have noticed that there are twin struggles taking place: a push for more transparency on provider and payer data, including anonymous patient data, and another for strict privacy protection for personal patient data. What’s the main difference?

This sector is still somewhat nascent — we are in the first wave of innovation, with much more to come.

Anonymized data is much more freely available, while personal data is being locked even tighter (as it should be) due to regulations like GDPR, CCPA and their equivalents around the world.

The former trend is enabling a host of new vendors and services that will ultimately make healthcare better and more transparent for all of us.

These new companies could not have existed five years ago. The Affordable Care Act was the first step toward making anonymized data more available. It required healthcare institutions (such as hospitals and healthcare systems) to publish data on costs and outcomes. This included the release of detailed data on providers.

Later legislation required biotech and pharma companies to disclose monies paid to research partners. And every physician in the U.S. is now required to be in the National Provider Identifier (NPI) registry, a comprehensive public database of providers.

All of this allowed the creation of new types of companies that give both patients and providers more control over their data. Here are some key examples of how.

Allowing patients to access all their own health data in one place

This is a key capability of patients’ newfound access to health data. Think of how often your providers aren’t aware of a treatment or test you’ve had elsewhere. Often you end up repeating a test simply because a provider doesn’t have a record of the original.

#artificial-intelligence, #cloud-computing, #column, #drug-discovery, #ec-column, #ec-consumer-health, #ec-market-map, #enterprise, #food-and-drug-administration, #health, #health-systems, #healthcare, #healthcare-data, #machine-learning, #startups, #united-states


Court Could Consider Whether Trump Interfered in Cloud Computing Contract

The decision could be a win for Amazon, which said it was passed over for the $10 billion Pentagon contract because of Mr. Trump’s animosity toward its founder, Jeff Bezos.

#amazon-com-inc, #bezos-jeffrey-p, #cloud-computing, #computers-and-the-internet, #defense-contracts, #defense-department, #microsoft-corp, #suits-and-litigation-civil, #trump-donald-j, #united-states-defense-and-military-forces


DigitalOcean says customer billing data ‘exposed’ by a security flaw

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date, and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the apparent flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR, and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-storage, #computing, #data-breach, #digitalocean, #enterprise, #security, #spokesperson, #web-hosting, #web-services, #world-wide-web


Canada’s newest unicorn: Clio raises $110M at a $1.6B valuation for legal tech

Clio, a software company that helps law practices run more efficiently with its cloud-based technology, announced Tuesday it has raised a $110 million Series E round co-led by T. Rowe Price Associates Inc. and OMERS Growth Equity.

The round propels the Vancouver, British Columbia-based company to unicorn status, valuing it at $1.6 billion. Clio last raised in September of 2019 when it brought in $250 million in a Series D financing. With the latest funding, Clio claims that it’s the “first legal practice management unicorn” globally. The investment also brings its total capital raised since its 2008 inception to $386 million.

Founder and CEO Jack Newton says he and Rian Gauvreau launched Clio during the 2008 recession after seeing the struggles solo lawyers and small firms faced when running a business. Historically, legal practice management software was limited to server-based solutions designed for enterprise businesses — not small law firms, Newton said. Clio was formed to change that.

Clio co-founders Jack Newton and Rian Gauvreau; Image courtesy of Clio

“Much like how Microsoft Windows defined the operating system for personal computers decades ago, Clio has developed a software platform for law firms and their clients that is cloud-based and client-centric by design,” Newton said.

The company’s platform aims to serve as “an operating system” for lawyers, offering cloud-based legal practice management, client intake and legal CRM software. Clio has more than 150,000 customers across 100 countries. Many of the lawyers using Clio are smaller and solo practitioners, but the company also serves larger firms such as Locks Law and King Law.

Newton said his vertical SaaS company helps legal professionals be more productive, grow their firms and “make legal services more accessible.” It also aims to help clients find lawyers more easily and vice versa.

Image Credits: Clio

Newton was tight-lipped about the company’s financials, saying only that since its 2019 raise, the company has seen “explosive” growth. That growth was further fueled by the COVID-19 pandemic and its push toward all things digital. He added that its current valuation was “fair” and achieved through a “thorough” vetting process.

Clio has focused on building out its core technology to an industry that has historically relied on pen and paper in many cases. It has also aimed to make legal technology more affordable for lawyers to use.

While change has been gradual, COVID-19 forced lawyers to fundamentally reevaluate how they run their law firms and how they deliver legal services to their clients, Newton said.

“Many firms realized that storing client data at the office was no longer an option as teams became distributed during COVID-19,” he added. “Lawyers and legal professionals who had hesitated to adopt technology in the past were suddenly forced to rapidly adapt to this new reality. While this technological change is in response to the crisis, it’s an enduring change.”

In 2018, Clio made its first acquisition, buying Lexicata, a Los Angeles-based legal tech startup, and Newton says more acquisitions are planned. The company intends to use its new capital to continue investing in its platform as well as in strategic partnerships. (Clio has currently partnered with over 150 apps.)

Clio also plans to, naturally, do some hiring. Specifically, it plans to boost its headcount by 40%, or 250 employees, with a focus on bolstering its product and engineering teams. (Clio currently has 600 employees.)

“Over the next few years we intend to completely redefine the way legal services are delivered and democratize access to legal aid by way of the cloud,” Newton told TechCrunch. “This investment allows us to expedite our plans and offer even more to our existing customers.”

Clio in particular is growing in the EMEA markets with a current focus on the United Kingdom and Ireland.

In a written statement, OMERS Growth Equity managing director Mark Shulgan said his firm has been following Clio for a number of years.

“We believe Clio has clearly established itself as a market-leading legal tech firm, and will deliver growth for decades to come,” he said.

#canada, #cars, #clio, #cloud, #cloud-computing, #crm, #funding, #fundings-exits, #ireland, #law-firm, #law-firms, #legal-services, #legal-tech, #legal-technology, #los-angeles, #microsoft-windows, #omers-growth-equity, #operating-system, #recent-funding, #saas, #software, #software-platform, #startups, #t-rowe-price, #vancouver, #venture-capital


Solving the security challenges of public cloud

Experts believe the data-lake market will hit a massive $31.5 billion in the next six years, a prediction that has led to much concern among large enterprises. Why? Well, an increase in data lakes equals an increase in public cloud consumption — which leads to a soaring amount of notifications, alerts and security events.

Around 56% of enterprise organizations handle more than 1,000 security alerts every day, and 70% of IT professionals have seen the volume of alerts double in the past five years, according to a 2020 Dark Reading report that cited research by Sumo Logic. In fact, many in the ONUG community see on the order of 1 million events per second. Yes, per second, which works out to tens of trillions of events per year.

Now that we are operating in a digitally transformed world, that number only continues to rise, leaving many enterprise IT leaders scrambling to handle these events and asking themselves if there’s a better way.

Compounding matters is the lack of a unified framework for dealing with public cloud security. End users and cloud consumers are forced to deal with increased spend on security infrastructure such as SIEMs, SOAR, security data lakes, tools, maintenance and staff — if they can find them — to operate with an “adequate” security posture.

Public cloud isn’t going away, and neither is the increase in data and security concerns. But enterprise leaders shouldn’t have to continue scrambling to solve these problems. We live in a highly standardized world. Standard operating processes exist for the simplest of tasks, such as elementary school student drop-offs and checking out a company car. But why isn’t there a standardized approach for dealing with security of the public cloud — something so fundamental now to the operation of our society?

The ONUG Collaborative had the same question. Security leaders from organizations such as FedEx, Raytheon Technologies, Fidelity, Cigna, Goldman Sachs and others came together to establish the Cloud Security Notification Framework. The goal is to create consistency in how cloud providers report security events, alerts and alarms, so end users receive improved visibility and governance of their data.

Here’s a closer look at the security challenges with public cloud and how CSNF aims to address the issues through a unified framework.

The root of the problem

A few key challenges are sparking the increased number of security alerts in the public cloud:

  1. Rapid digital transformation sparked by COVID-19.
  2. An expanded network edge created by the modern, work-from-home environment.
  3. An increase in the type of security attacks.

The first two challenges go hand in hand. In March of last year, when companies were forced to shut down their offices and shift operations and employees to a remote environment, the wall between cyber threats and safety came crashing down. This wasn’t a huge issue for organizations already operating remotely, but for major enterprises the pain points quickly boiled to the surface.

Numerous leaders have shared with me how security was outweighed by speed. Keeping everything up and running was prioritized over governance. Each employee effectively held a piece of the company’s network edge in their home office. Without basic governance controls in place or training to teach employees how to spot phishing or other threats, the door was left wide open for attacks.

In 2020, the FBI reported its cyber division was receiving nearly 4,000 complaints per day about security incidents, a 400% increase from pre-pandemic figures.

Another security issue is the growing intelligence of cybercriminals. The Dark Reading report said 67% of IT leaders claim a core challenge is a constant change in the type of security threats that must be managed. Cybercriminals are smarter than ever. Phishing emails, entrance through IoT devices and various other avenues have been exploited to tap into an organization’s network. IT teams are constantly forced to adapt and spend valuable hours focused on deciphering what is a concern and what’s not.

Without a unified framework in place, the volume of incidents will spiral out of control.

Where CSNF comes into play

CSNF should prove beneficial for cloud providers and IT consumers alike. Security platforms often require long integration timelines to wrap in all the data from siloed sources, including asset inventory, vulnerability assessments, IDS products and past security notifications, and those integrations can be expensive and inefficient.

But with a standardized framework like CSNF, the integration process for past notifications is pared down and contextual processes are improved for the entire ecosystem. That efficiently reduces spend and frees SecOps and DevSecOps teams to focus on more strategic tasks like security posture assessment, developing new products and improving existing solutions.

Here’s a closer look at the benefits a standardized approach can create for all parties:

  • End users: CSNF can streamline operations for enterprise cloud consumers, like IT teams, and allows improved visibility and greater control over the security posture of their data. This enhanced sense of protection from improved cloud governance benefits all individuals.
  • Cloud providers: CSNF can eliminate the barrier to entry currently prohibiting an enterprise consumer from using additional services from a specific cloud provider by freeing up added security resources. Also, improved end-user cloud governance encourages more cloud consumption from businesses, increasing provider revenue and providing confidence that their data will be secure.
  • Cloud vendors: Cloud vendors that provide SaaS solutions are spending more on engineering resources to deal with increased security notifications. But with a standardized framework in place, these additional resources would no longer be necessary. Instead of spending money on such specific needs along with labor, vendors could refocus core staff on improving operations and products such as user dashboards and applications.

Working together, all groups can effectively reduce friction from security alerts and create a controlled cloud environment for years to come.

What’s next?

CSNF is in the building phase. Cloud consumers have banded together to compile requirements, and consumers continue to provide guidance as a prototype is established. The cloud providers are now in the process of building the key component of CSNF, its Decorator, which provides an open-source multicloud security reporting translation service.
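
Since the framework is still being specified, any code is necessarily hypothetical, but the Decorator’s job is easy to sketch: take each provider’s native alert and re-emit it in one canonical shape. All field names below are invented for illustration:

    # Hypothetical sketch of a CSNF-style translation layer.
    def decorate(provider: str, raw: dict) -> dict:
        """Map a provider-specific alert onto one canonical event shape."""
        if provider == "aws":
            return {"source": "aws",
                    "severity": raw["Severity"],
                    "resource": raw["Resource"]["Id"],
                    "description": raw["Title"]}
        if provider == "azure":
            return {"source": "azure",
                    "severity": raw["properties"]["severity"],
                    "resource": raw["properties"]["resourceId"],
                    "description": raw["properties"]["alertDisplayName"]}
        raise ValueError(f"no translator for {provider}")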

The pandemic created many changes in our world, including new security challenges in the public cloud. Reducing IT noise must be a priority to continue operating with solid governance and efficiency, as it enhances a sense of security, eliminates the need for increased resources and allows for more cloud consumption. ONUG is working to ensure that the industry stays a step ahead of security events in an era of rapid digital transformation.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #cloud-storage, #column, #computer-security, #cybersecurity, #opinion, #security, #tc


5 emerging use cases for productivity infrastructure in 2021

When the world flipped upside down last year, nearly every company in every industry was forced to implement a remote workforce in just a matter of days — they had to scramble to ensure employees had the right tools in place and customers felt little to no impact. While companies initially adopted solutions for employee safety, rapid response and short-term air cover, they are now shifting their focus to long-term, strategic investments that empower growth and streamline operations.

As a result, categories that make up productivity infrastructure — cloud communications services, API platforms, low-code development tools, business process automation and AI software development kits — grew exponentially in 2020. This growth was boosted by an increasing number of companies prioritizing tools that support communication, collaboration, transparency and a seamless end-to-end workflow.

Productivity infrastructure is on the rise and will continue to be front and center as companies evaluate what their future of work entails and how to maintain productivity, rapid software development and innovation with distributed teams.

According to McKinsey & Company, the pandemic accelerated the share of digitally enabled products by seven years, and “the digitization of customer and supply-chain interactions and of internal operations by three to four years.” As demand continues to grow, companies are taking advantage of the benefits productivity infrastructure brings to their organization both internally and externally, especially as many determine the future of their work.

Automate workflows and mitigate risk

Developers rely on platforms throughout the software development process to connect data, process it, increase their go-to-market velocity and stay ahead of the competition with new and existing products. They have enormous amounts of end-user data on hand, and productivity infrastructure can remove barriers to access, integrate and leverage this data to automate the workflow.

Access to rich interaction data combined with pre-trained ML models, automated workflows and configurable front-end components enables developers to drastically shorten development cycles. Through enhanced data protection and compliance, productivity infrastructure safeguards critical data and mitigates risk while reducing time to ROI.

As the post-pandemic workplace begins to take shape, how can productivity infrastructure support enterprises where they are now and where they need to go next?

#artificial-intelligence, #business-process-management, #cloud-computing, #column, #ec-column, #ec-enterprise-applications, #ml, #productivity, #remote-work, #startups


Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 

#anthos, #apigee, #aws, #ceo, #chrome-os, #cisco, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #enterprise, #google, #google-cloud, #google-cloud-platform, #ibm, #kubernetes, #microsoft, #microsoft-windows, #red-hat, #sundar-pichai, #vmware


Pulumi launches version 3.0 of its infrastructure-as-code platform

Pulumi was one of the first of what is now a growing number of infrastructure-as-code startups and today, at its developer conference, the company is launching version 3.0 of its cloud engineering platform. With 70 new features and about 1,000 improvements since version 2.0, this is Pulumi’s biggest release yet.

The new release includes features that range from support for Google Cloud as an infrastructure provider (now in preview) to a new Automation API that turns Pulumi into a library that can be called from other applications. It basically allows developers to write tools that, for example, provision and configure their own infrastructure for each customer of a SaaS application.
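
A minimal sketch of that pattern with the Python flavor of the Automation API, assuming the pulumi package and the Pulumi CLI are installed; the inline program just exports a value so the example stays self-contained:

    # Sketch: drive Pulumi as a library to provision a per-customer stack.
    import pulumi
    from pulumi import automation as auto

    def program():
        # A real program would declare cloud resources here.
        pulumi.export("greeting", "provisioned for customer-123")

    stack = auto.create_or_select_stack(
        stack_name="customer-123",
        project_name="saas-per-tenant",
        program=program)

    result = stack.up(on_output=print)  # create or update this tenant's stack
    print(result.outputs["greeting"].value)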

Image Credits: Pulumi

The company is also launching Pulumi Packages and Components for creating opinionated infrastructure building blocks that developers can then call up from their preferred languages.

Also new is support for Pulumi’s CI/CD Assistant across all the company’s paid plans. This feature makes it easier to deploy cloud infrastructure and applications through more than a dozen popular CI/CD platforms, including the likes of AWS Code Services, Azure DevOps, CircleCI, GitLab CI, Google Cloud Build, Jenkins, Travis CI and Spinnaker. Until now, you needed to be on a Team Pro or Enterprise plan to use this, but it’s now available to all paying users.

In addition, the company is expanding some of its enterprise features with, for example, SAML SSO, SCIM synchronization and new role types.

“When we started out on Pulumi, we knew we wanted to enable developers and infrastructure teams to collaborate more closely to build more innovative software,” said Joe Duffy, Pulumi co-founder and CEO. “What we didn’t know yet is that we’d end up calling this ‘Cloud Engineering,’ that our customers would call it that too, and that they would go on this journey with us. We are now centering our entire platform around this core idea, which is now accelerating as the modern cloud continues to disrupt entire business models. Pulumi 3.0 is an exciting milestone in realizing this vision of the future — democratizing access to the cloud and helping teams build better software together — with much more to come.”

#api, #aws, #cloud-computing, #cloud-infrastructure, #co-founder, #computing, #continuous-integration, #devops, #gitlab, #identity-management, #jenkins, #joe-duffy, #pulumi, #software-engineering, #tc, #technology, #version-control


Newer Planes Are Providing Airlines a Trove of Useful Data

During the pandemic, older aircraft have been retired, resulting in a fleet that can collect more information about emissions and safety.

#airasia, #airlines-and-airplanes, #airports, #artificial-intelligence, #biometrics, #cloud-computing, #computers-and-the-internet, #mobile-applications
