IBM pushes qubit count over 400 with new processor

(Image credit: IBM)

Today, IBM announced the latest generation of its family of avian-themed quantum processors, the Osprey. With more than three times the qubit count of its previous-generation Eagle processor, Osprey is the first to offer more than 400 qubits, which indicates the company remains on track to release the first 1,000-qubit processor next year.

Despite the high qubit count, there’s no need to rush out and re-encrypt all your sensitive data just yet. While the error rates of IBM’s qubits have steadily improved, they’ve still not reached the point where all 433 qubits in Osprey can be used in a single algorithm without a very high probability of an error. For now, IBM is emphasizing that Osprey is an indication that the company can stick to its aggressive road map for quantum computing, and that the work needed to make it useful is in progress.
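To make that concrete, here is a back-of-the-envelope calculation (the numbers are illustrative assumptions, not IBM figures): with an error rate of roughly 1 percent per two-qubit gate, even a shallow circuit that touches all 433 qubits a handful of times has essentially no chance of finishing without a fault.

```python
# Illustrative only: assumed error rates, not IBM-published numbers.
error_per_gate = 0.01          # ~1% per two-qubit gate, a rough current figure
gates = 433 * 10               # a shallow circuit touching every qubit ~10 times
p_no_error = (1 - error_per_gate) ** gates
print(f"{gates} gates -> P(no error) ~ {p_no_error:.1e}")   # on the order of 1e-19
```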

On the road

To understand IBM’s announcement, it helps to understand the quantum computing market as a whole. There are now a lot of companies in the field, from startups to large, established companies like IBM, Google, and Intel. They’ve bet on a variety of technologies, from trapped atoms to spare electrons to superconducting loops. Pretty much all of them agree that to reach quantum computing’s full potential, we need to get to where qubit counts are in the tens of thousands, and error rates on each individual qubit are low enough that these can be linked together into a smaller number of error-corrected qubits.

#biz-it, #computer-science, #ibm, #physics, #quantum-computing, #quantum-mechanics, #science

Nvidia wants to speed up data transfer by connecting data center GPUs to SSDs 

(Image credit: Getty Images)

Microsoft brought DirectStorage to Windows PCs this week. The API promises faster load times and more detailed graphics by letting game developers make apps that load graphical data from the SSD directly to the GPU. Now, Nvidia and IBM have created a similar SSD/GPU technology, but they are aiming it at the massive data sets in data centers.

Instead of targeting console or PC gaming like DirectStorage, Big accelerator Memory (BaM) is meant to provide data centers quick access to vast amounts of data in GPU-intensive applications, like machine-learning training, analytics, and high-performance computing, according to a research paper spotted by The Register this week. Entitled “BaM: A Case for Enabling Fine-grain High Throughput GPU-Orchestrated Access to Storage” (PDF), the paper by researchers at Nvidia, IBM, and a few US universities proposes a more efficient way to run next-generation applications in data centers with massive computing power and memory bandwidth.

BaM also differs from DirectStorage in that the creators of the system architecture plan to make it open source.

#biz-it, #ibm, #nvidia, #ssd, #tech

IBM exec called older workers “dinobabies” who should go “extinct,” lawsuit says

Earl Sinclair and Baby Sinclair from the TV series Dinosaurs (not actual IBM employees). (Image credit: Disney)

A former high-level IBM executive wrote an internal message calling older workers “dinobabies” who should go “extinct,” according to a plaintiff’s filing in an age-discrimination lawsuit against IBM.

“In arbitration, Plaintiff’s counsel have obtained evidence showing high level executive communications demonstrating highly incriminating animus against older workers by” two former IBM executives who left the company in 2020, said the court filing submitted on Friday. The executives’ names and their positions were redacted. In one communication, an executive “applauds the use of the disparaging term ‘dinobabies’ to describe the older IBM employees” and described a “plan to oust them from IBM’s workforce,” the court filing said.

In the message, “he describes his plan to ‘accelerate change by inviting the “dinobabies” (new species) to leave’ and make them an ‘Extinct species,'” the filing said. “In another email, [name redacted] describes IBM’s ‘dated maternal workforce—this is what must change. They really don’t understand social or engagement. Not digital natives. A real threat for us,'” the filing said.

#age-discrimination, #ibm, #policy

IBM clears the 100-qubit mark with its new processor

(Image credit: IBM)

IBM has announced it has cleared a major hurdle in its effort to make quantum computing useful: it now has a quantum processor, called Eagle, with 127 functional qubits. This makes it the first company to clear the 100-qubit mark, a milestone that’s interesting because the interactions of that many qubits can’t be simulated using today’s classical computing hardware and algorithms.

But what may be more significant is that IBM now has a roadmap that would see it producing the first 1,000-qubit processor in two years. And, according to IBM Director of Research Darío Gil, that’s the point where calculations done with quantum hardware will start being useful.

What’s new

Gil told Ars that the new qubit count was a product of multiple developments that have been put together for the first time. One is that IBM switched to what it’s calling a “heavy hex” qubit layout, which it announced earlier this year. This layout connects qubits in a set of hexagons with shared sides, so each qubit is connected to two or, at most, three neighbors—on average, that’s a lower level of connectivity than some competing designs. But Gil argued that the tradeoff is worth it, saying “it reduces the level of connectivity, but greatly improves crosstalk.”
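For readers who want to see the shape of that layout, here is a minimal sketch (not IBM’s own tooling) that builds a heavy-hex-style coupling map by placing an extra qubit on every edge of a hexagonal lattice; it assumes the networkx package is available.

```python
import networkx as nx

def heavy_hex(rows: int, cols: int) -> nx.Graph:
    """Subdivide every edge of a hexagonal lattice with an extra 'edge' qubit."""
    hex_lattice = nx.hexagonal_lattice_graph(rows, cols)
    heavy = nx.Graph()
    for i, (u, v) in enumerate(hex_lattice.edges()):
        bridge = ("edge", i)            # the qubit sitting on this shared side
        heavy.add_edge(u, bridge)
        heavy.add_edge(bridge, v)
    return heavy

coupling = heavy_hex(2, 2)
print(coupling.number_of_nodes(), "qubits")
# Edge qubits have two neighbors; corner qubits top out at three.
print("neighbor counts:", sorted({deg for _, deg in coupling.degree()}))
```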

#computer-science, #ibm, #physics, #quantum-computers, #quantum-mechanics, #qubits, #science

Intel slipped—and its future now depends on making everyone else’s chips

(Image credit: Getty Images | Aurich Lawson)

Last month, Intel CEO Pat Gelsinger stepped to a podium on a hazy, wind-whipped day just outside Phoenix. “Isn’t this awesome!” Gelsinger exclaimed, gesturing over his shoulder. Behind him, two large pieces of construction equipment posed theatrically atop the ocher Arizona soil, framing an organized tangle of pipes, steel, and fencing at the company’s Ocotillo campus. “If this doesn’t get you excited, check your pulse,” he said with a chuckle. A handful of executives and government officials applauded at the appropriate points.

Despite the gathering dust storm, Gelsinger genuinely seemed to enjoy himself. He was in Arizona to announce not one but two new fabs that, when finished, will form a $20 billion bet that Intel can return to the leading edge of semiconductor manufacturing, one of the world’s most profitable, challenging, and cutthroat businesses.

“Semiconductors are a hot topic these days,” Gelsinger continued. “What aspect of your life is not being increasingly driven by digital transformation? If there was any question on that, COVID eliminated it.”

#features, #foundry, #ibm, #intel, #intel-foundry-services, #policy, #semiconductor, #tech-policy, #tsmc

IBM says AI can help track carbon pollution across vast supply chains

A container ship sails off the coast of Thailand. (Image credit: iStock)

Finding sources of pollution across vast supply chains may be one of the largest barriers to eliminating carbon pollution. For some sources like electricity or transportation, it’s relatively easy. But for others like agriculture or consumer electronics, tracing and quantifying greenhouse gas emissions can be a time-consuming, laborious process. It generally takes an expert around three to six months—sometimes more—to come up with an estimate for a single product.

Typically, researchers have to probe vast supply chains, comb the scientific literature, digest reports, and even interview suppliers. They may have to dive into granular details, estimating the footprint of everything from gypsum in drywall to tin solder on circuit boards. Massive databases of reference values offer crude shortcuts, but they can also introduce uncertainty in the estimate because they don’t capture the idiosyncrasies of many companies’ supply chains.
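The reference-value shortcut itself is simple arithmetic: multiply each item in a bill of materials by a generic emission factor and add it all up. The sketch below uses made-up placeholder numbers purely to show the mechanic, and why such estimates inherit the uncertainty of the factors behind them.

```python
# Placeholder figures only; real emission factors vary widely by region and supplier.
bill_of_materials_kg = {"gypsum": 12.0, "tin_solder": 0.03, "steel": 2.5}
emission_factor_kgco2e_per_kg = {"gypsum": 0.12, "tin_solder": 17.0, "steel": 1.9}

footprint = sum(qty * emission_factor_kgco2e_per_kg[item]
                for item, qty in bill_of_materials_kg.items())
print(f"estimated footprint: {footprint:.2f} kg CO2e")
```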

Enter IBM, which has placed a massive bet on offering artificial intelligence services to businesses. Some services, like the company’s Watson health care effort, didn’t live up to the promise. But IBM has refocused its efforts in recent years, and today it announced a new suite of tools for businesses to tackle two significant challenges posed by climate change: emissions reduction and adaptation.

#ai, #artificial-intelligence, #carbon-footprint, #climate-change, #ibm, #life-cycle-analysis, #policy

Tyk raises $35M for its open-source, open-ended approach to enterprise API management

APIs are the grease turning the gears and wheels for many organizations’ IT systems today, but as APIs grow in number and use, tracking how they work (or don’t work) together can become complex and potentially critical if something goes awry. Now, a startup that has built an innovative way to help with this is announcing some funding after getting traction with big enterprises adopting its approach.

Tyk, which has built a way for users to access and manage multiple internal enterprise APIs through a universal interface by way of GraphQL, has picked up $35 million, an investment that it will be using both for hiring and to continue enhancing and expanding the tools that it provides to users. Tyk has coined a term describing its approach to managing APIs and the data they produce — “universal data graph” — and today its tools are being used to manage APIs by some 10,000 businesses, including large enterprises like Starbucks, Societe Generale, and Domino’s.

Scottish Equity Partners led the round, with participation also from MMC Ventures — its sole previous investor from a round in 2019 after bootstrapping for its first five years. The startup is based out of London but works in a very distributed way — one of the co-founders is living in New Zealand currently — and it will be hiring and growing based on that principle, too. It has raised just over $40 million to date.

Tyk (pronounced like “tyke,” meaning a small, lively child) got its start as an open source side project for co-founder Martin Buhr, now the company’s CEO, while he was working elsewhere; in his words, it began as “a load testing thing.”

The shifts in IT towards service-oriented architectures, and building and using APIs to connect internal apps, led him to rethink the code and consider how it could be used to control APIs. Added to that was the fact that as far as Buhr could see, the API management platforms that were in the market at the time — some of the big names today include Kong, Apigee (now a part of Google), 3scale (now a part of RedHat and thus IBM), MuleSoft (now a part of Salesforce) — were not flexible enough for his needs. “So I built my own,” he said.

It was built as an open source tool, and some engineers at other companies started to use it. As it got more attention, some of the bigger companies interested in using it started to ask why he wasn’t charging for anything — as sure a sign as any that there was probably a business to be built here, and more credibility to come if he charged for it.

“So we made the gateway open source, and the management part went into a licensing model,” he said. And Tyk was born as a startup co-founded with James Hirst, who is now the COO, who worked with Buhr at a digital agency some years before.

The key motivation behind building Tyk has remained its unique selling point for customers working in increasingly complex environments.

“What sparked interest in Tyk was that companies were unhappy with API management as it exists today,” Buhr noted, citing architectures using multiple clouds and multiple containers, creating more complexity that needed better management. “It was just the right time when containerization, Kubernetes and microservices were on the rise… The way we approach the multi-data and multi-vendor cloud model is super flexible and resilient to partitions, in a way that others have not been able to do.”

“You engage developers and deliver real value and it’s up to them to make the choice,” added Hirst. “We are responding to a clear shift in the market.”

One of the next frontiers that Tyk will tackle will be what happens within the management layer, specifically when there are potential conflicts with APIs.

“When a team using a microservice makes a breaking change, we want to bring that up and report that to the system,” Buhr said. “The plan is to flag the issue and test against it, and be able to say that a schema won’t work, and to identify why.”

Even before that is rolled out, though, Tyk’s customer list and its growth speak to a business on the cusp of a lot more.

“Martin and James have built a world-class team and the addition of this new capital will enable Tyk to accelerate the growth of its API management platform, particularly around the GraphQL focused Universal Data Graph product that launched earlier this year,” said Martin Brennan, a director at SEP, in a statement. “We are pleased to be supporting the team to achieve their global ambitions.”

Keith Davidson, a partner at SEP, is joining the Tyk board as a non-executive director with this round.

#api, #api-gateway, #api-management, #apigee, #apis, #ceo, #cloud-computing, #co-founder, #computing, #coo, #developer, #enterprise, #europe, #funding, #google, #graphql, #ibm, #london, #microservices, #mmc-ventures, #mulesoft, #new-zealand, #salesforce, #scottish-equity-partners, #societe-generale, #starbucks, #technology, #tyk

Rezilion raises $30M help security operations teams with tools to automate their busywork

Security operations teams face a daunting task these days, fending off malicious hackers and their increasingly sophisticated approaches to cracking into networks. That also represents a gap in the market: building tools to help those security teams do their jobs. Today, an Israeli startup called Rezilion that is doing just that — building automation tools for DevSecOps, the area of IT that addresses the needs of security teams and the technical work that they need to do in their jobs — is announcing $30 million in funding.

Guggenheim Investments is leading the round with JVP and Kindred Capital also contributing. Rezilion said that unnamed executives from Google, Microsoft, CrowdStrike, IBM, Cisco, PayPal, JP Morgan Chase, Nasdaq, eBay, Symantec, RedHat, RSA and Tenable are also in the round. Previously, the company had raised $8 million.

Rezilion’s funding is coming on the back of strong initial growth for the startup in its first two years of operations.

Its customer base is made up of some of the world’s biggest companies, including two of the “Fortune 10” (the top 10 of the Fortune 500). CEO Liran Tancman, who co-founded Rezilion with CTO Shlomi Boutnaru, said that one of those two is one of the world’s biggest software companies, and the other is a major connected device vendor, but he declined to say which. (For the record, the top 10 includes Amazon, Apple, Alphabet/Google, Walmart and CVS.)

Tancman and Boutnaru had previously co-founded another security startup, CyActive, which was acquired by PayPal in 2015; the pair worked there together until leaving to start Rezilion.

There are a lot of tools out in the market now to help automate different aspects of developer and security operations. Rezilion focuses on a specific part of DevSecOps: large businesses have over the years put in place a lot of processes that they need to follow to try to triage and make the most thorough efforts possible to detect security threats. Today, that might involve inspecting every single suspicious piece of activity to determine what the implications might be.

The problem is that with the volume of information coming in, taking the time to inspect and understand each piece of suspicious activity can put enormous strain on an organization: it’s time-consuming, and as it turns out, not the best use of that time because of the signal to noise ratio involved. Typically, each vulnerability can take 6-9 hours to properly investigate, Tancman said. “But usually about 70-80% of them are not exploitable,” meaning they may be bad for some, but not for this particular organization and the code it’s using today. That represents a very inefficient use of the security team’s time and energy.

“Eight out of ten patches tend to be a waste of time,” Tancman said of the approach typically taken today. He believes that as its AI continues to grow and its knowledge and solution becomes more sophisticated, “it might soon be 9 out of 10.”

Rezilion has built a taxonomy and an AI-based system that essentially does that inspection work as a human would do: it spots any new, or suspicious, code, figures out what it is trying to do, and runs it against a company’s existing code and systems to see how and if it might actually be a threat to it or create further problems down the line. If it’s all good, it essentially whitelists the code. If not, it flags it to the team.
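As a toy illustration of that triage idea (this is not Rezilion’s code, just the general pattern): a reported vulnerability only matters if the affected code is actually present and reachable in the environment, so anything that never loads can be deprioritized automatically.

```python
# Hypothetical function names, for illustration only.
vulnerable_functions = {"openssl.renegotiate", "log4j.jndi_lookup"}
functions_loaded_at_runtime = {"openssl.handshake", "json.parse", "log4j.info"}

for vuln in sorted(vulnerable_functions):
    if vuln in functions_loaded_at_runtime:
        print(f"{vuln}: flag for the security team")
    else:
        print(f"{vuln}: whitelist (never loaded in this environment)")
```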

The product’s stickiness comes from how well Tancman and Boutnaru understand the way large enterprises, especially those with heavy technology stacks, operate these days in what has become a very challenging environment for cybersecurity teams.

“They are using us to accelerate their delivery processes while staying safe,” Tancman said. “They have strict compliance departments and have to adhere to certain standards,” in terms of the protocols they take around security work, he added. “They want to leverage DevOps to release that.”

He said Rezilion has generally won over customers in large part for simply understanding that culture and process and helping them work better within that: “Companies become users of our product because we showed them that, at a fraction of the effort, they can be more secure.” This has special resonance in the world of tech, although financial services, and other verticals that essentially leverage technology as a significant foundation for how they operate, are also among the startup’s user base.

Down the line, Rezilion plans to add remediation and mitigation into the mix to further extend what it can do with its automation tools, which is part of where the funding will be going, too, Boutnaru said. But he doesn’t believe it will ever replace the human in the equation altogether.

“It will just focus them on the places where you need more human thinking,” he said. “We’re just removing the need for tedious work.”

In that grand tradition of enterprise automation, then, it will be interesting to watch which other automation-centric platforms might make a move into security alongside the other automation they are building. For now, Rezilion is carving out an interesting enough area for itself to get investors interested.

“Rezilion’s product suite is a game changer for security teams,” said Rusty Parks, senior MD of Guggenheim Investments, in a statement. “It creates a win-win, allowing companies to speed innovative products and features to market while enhancing their security posture. We believe Rezilion has created a truly compelling value proposition for security teams, one that greatly increases return on time while thoroughly protecting one’s core infrastructure.”

#agile-software-development, #alphabet, #amazon, #apple, #articles, #artificial-intelligence, #automation, #ceo, #cisco, #computer-security, #crowdstrike, #cto, #cyactive, #devops, #ebay, #energy, #entrepreneurship, #europe, #financial-services, #funding, #google, #ibm, #jp-morgan-chase, #kindred-capital, #maryland, #microsoft, #paypal, #security, #software, #software-development, #startup-company, #symantec, #technology

New IBM Power E1080 server promises dramatic increases in energy efficiency, power

We know that large data centers running powerful servers use vast amounts of electricity. Anything that can reduce consumption would be a welcome change, especially in a time of climate upheaval. That’s where the new IBM Power E1080 server, which is powered by the latest Power10 processors, comes into play.

IBM claims it can consolidate the work of 126 competitive servers down to just two E1080s, saving 80% in energy costs, by the company’s estimation. What’s more, the company says, “The new server has set a new world record in a SAP benchmark that measures performance for key SAP applications, needing only half the resources used by x86 competitive servers to beat them by 40%.”

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, who closely follows the chip industry, says that the company’s bold claims about what these systems can achieve make sense from a hardware design perspective. “The company’s claims on SAP, Oracle and OpenShift workloads pass initial muster with me as it simply requires less sockets and physical processors to achieve the same performance. These figures were compared to Intel’s Cascade Lake that will be replaced with Sapphire Rapids (in the future),” he said.

Steve Sibley, vice president and business line executive in the Power Systems Group at IBM, says that the new server (and the Power10 chip running it) have been designed for customers looking for a combination of speed, power, efficiency and security. “If you look at what we deliver here with scale and performance, it gives customers even more agility to respond quickly to scale to their highest demands,” he said.

To give customers options, IBM lets them buy E1080 servers outright and install them in their own data centers, buy server access as a service from the IBM cloud (and possibly competitor clouds), or rent the servers, install them on premises, and pay by the minute to help mitigate the cost.

“Our systems are a little bit more expensive on what I call a base cost of acquisition standpoint, but we allow customers to actually purchase [E1080 servers] on an as-a-service basis with a by-the-minute level of granularity of what they’re paying for,” he said.

What’s more, this server, which is the first to be released based on the Power10 chip, is designed to run Red Hat software under the hood, giving the company another outlet for its 2018 $34 billion acquisition.

“Bringing Red Hat’s platform to this platform is a key way to modernize applications, both from just a RHEL (Red Hat Enterprise Linux) operating system environment, as well as OpenShift (the company’s container platform). The other place that has been key with our Red Hat acquisition and our capitalizing on it is that we’re leveraging their Ansible projects and products to drive management and automation on our platform, as well,” Sibley explained.

Since Arvind Krishna took over as CEO at IBM in April 2020, he has been trying to shift the focus of the company to hybrid computing, where some computing exists in the cloud and some on prem, which is the state many companies will find themselves in for many years to come. IBM hopes to leverage Red Hat as a management plane for a hybrid environment, while offering a variety of hardware and software tools and services.

While Red Hat continues to operate as a standalone entity inside IBM, and wants to remain a neutral company for customers, Big Blue is still trying to find ways to take advantage of its offerings whenever possible and using it to run its own systems, and the E1080 provides a key avenue for doing that.

The company says that it is taking orders for the new servers starting immediately and expects to begin shipping systems at the end of the month.

#arvind-krishna, #chips, #cloud, #enterprise, #hardware, #ibm, #ibm-red-hat-deal, #red-hat, #tc

Spain’s Factorial raises $80M at a $530M valuation on the back of strong traction for its ‘Workday for SMBs’

Factorial, a startup out of Barcelona that has built a platform that lets SMBs run human resources functions with the same kind of tools that typically are used by much bigger companies, is today announcing some funding to bulk up its own position: the company has raised $80 million, funding that it will be using to expand its operations geographically — specifically deeper into Latin American markets — and to continue to augment its product with more features.

CEO Jordi Romero, who co-founded the startup with Pau Ramon and Bernat Farrero, said in an interview that Factorial has seen a huge boom in growth in the last 18 months and now counts more than 75,000 customers across 65 countries, with the average size of each customer in the range of 100 employees, although they can be significantly (single-digit) smaller or potentially up to 1,000 (the “M” of SMB, or SME as it’s often called in Europe).

“We have a generous definition of SME,” Romero said of how the company first started with a target of 10-15 employees but is now working in the size bracket that it is. “But that is the limit. This is the segment that needs the most help. We see other competitors of ours are trying to move into SME and they are screwing up their product by making it too complex. SMEs want solutions that have as much data as possible in one single place. That is unique to the SME.” Customers can include smaller franchises of much larger organizations, too: KFC, Booking.com, and Whisbi are among those that fall into this category for Factorial.

Factorial offers a one-stop shop to manage hiring, onboarding, payroll management, time off, performance management, internal communications and more. For other services, such as actually running payroll or sourcing candidates, it partners and integrates closely with more localized third parties.

The Series B is being led by Tiger Global, with past investors CRV, Creandum, Point Nine and K Fund also participating, at a valuation we understand from sources close to the deal to be around $530 million post-money. Factorial has raised $100 million to date, including a $16 million Series A round in early 2020, just ahead of the Covid-19 pandemic really taking hold of the world.

That timing turned out to be significant: Factorial, as you might expect of an HR startup, was shaped by Covid-19 in a pretty powerful way.

The pandemic, as we have seen, massively changed how — and where — many of us work. In the world of desk jobs, offices largely disappeared overnight, with people shifting to working at home in compliance with shelter-in-place orders to curb the spread of the virus, and then in many cases staying there even after those were lifted as companies grappled both with balancing the best (and least infectious) way forward and their own employees’ demands for safety and productivity. Front-line workers, meanwhile, faced a completely new set of challenges in doing their jobs, whether it was to minimize exposure to the coronavirus, or dealing with giant volumes of demand for their services. Across both, organizations were facing economics-based contractions, furloughs, and in other cases, hiring pushes, despite being office-less to carry all that out.

All of this had an impact on HR. People who needed to manage others, and those working for organizations, suddenly needed — and were willing to pay for — new kinds of tools to carry out their roles.

But it wasn’t always like this. In the early days, Romero said the company had to quickly adjust to what the market was doing.

“We target HR leaders and they are currently very distracted with furloughs and layoffs right now, so we turned around and focused on how we could provide the best value to them,” Romero said to me during the Series A back in early 2020. Then, Factorial made its product free to use and found new interest from businesses that had never used cloud-based services before but needed to get something quickly up and running to use while working from home (and that cloud migration turned out to be a much bigger trend played out across a number of sectors). Those turning to Factorial had previously kept all their records in local files or at best a “Dropbox folder, but nothing else,” Romero said.

It also provided tools specifically to address the most pressing needs HR people had at the time, such as guidance on how to implement furloughs and layoffs, best practices for communication policies and more. “We had to get creative,” Romero said.

But it wasn’t all simple. “We did suffer at the beginning,” Romero now says. “People were doing furloughs and [frankly] less attention was being paid to software purchasing. People were just surviving. Then gradually, people realized they needed to improve their systems in the cloud, to manage remote people better, and so on.” So after a couple of very slow months, things started to take off, he said.

Factorial’s rise is part of a much bigger, longer-term trend in which the enterprise technology world has at long last started to turn its attention to how to take the tools that originally were built for larger organizations, and right-size them for smaller customers.

The metrics are completely different: large enterprises are harder to win as customers, but represent a giant payoff when they do sign up; smaller enterprises represent genuine scale since there are so many of them globally — 400 million, accounting for 95% of all firms worldwide. But so are the product demands, as Romero pointed out previously: SMBs also want powerful tools, but they need to work in a more efficient, out-of-the-box way.

Factorial is not the only HR startup that has been homing in on this, of course. Among the wider field are PeopleHR, Workday, Infor, ADP, Zenefits, Gusto, IBM, Oracle, SAP and Rippling; and a very close competitor out of Europe, Germany’s Personio, raised $125 million on a $1.7 billion valuation earlier this year, speaking not just to the opportunity but the success it is seeing in it.

But the major fragmentation in the market, the fact that there are so many potential customers, and Factorial’s own rapid traction are three reasons why investors approached the startup, which was not proactively seeking funding when it decided to go ahead with this Series B.

“The HR software market opportunity is very large in Europe, and Factorial is incredibly well positioned to capitalize on it,” said John Curtius, Partner at Tiger Global, in a statement. “Our diligence found a product that delighted customers and a world-class team well-positioned to achieve Factorial’s potential.”

“It is now clear that labor markets around the world have shifted over the past 18 months,” added Reid Christian, general partner at CRV, which led its previous round, which had been CRV’s first investment in Spain. “This has strained employers who need to manage their HR processes and properly serve their employees. Factorial was always architected to support employers across geographies with their HR and payroll needs, and this has only accelerated the demand for their platform. We are excited to continue to support the company through this funding round and the next phase of growth for the business.”

Notably, Romero told me that the fundraising process really evolved between the two rounds, with the first needing him flying around the world to meet people, and the second happening over video links, while he was recovering himself from Covid-19. Given that it was not too long ago that the most ambitious startups in Europe were encouraged to relocate to the U.S. if they wanted to succeed, it seems that it’s not just the world of HR that is rapidly shifting in line with new global conditions.

#barcelona, #booking-com, #brazil, #ceo, #crv, #enterprise, #europe, #factorial, #general-partner, #germany, #hiring, #human-resource-management, #human-resources, #ibm, #k, #k-fund, #labor, #mathematics, #onboarding, #oracle, #payroll, #people-management, #performance-management, #personnel, #sap, #software, #spain, #tiger-global-management, #united-states, #zenefits

A brief overview of IBM’s new 7 nm Telum mainframe CPU

Each Telum package consists of two 7nm, eight-core / sixteen-thread processors running at a base clock speed above 5GHz. A typical system will have sixteen of these chips in total, arranged in four-socket “drawers.” (Image credit: IBM)

From the perspective of a traditional x86 computing enthusiast—or professional—mainframes are strange, archaic beasts. They’re physically enormous, power-hungry, and expensive by comparison to more traditional data-center gear, generally offering less compute per rack at a higher cost.

This raises the question, “Why keep using mainframes, then?” Once you hand-wave the cynical answers that boil down to “because that’s how we’ve always done it,” the practical answers largely come down to reliability and consistency. As AnandTech’s Ian Cutress points out in a speculative piece focused on the Telum’s redesigned cache, “downtime of these [IBM Z] systems is measured in milliseconds per year.” (If true, that’s at least seven nines.)
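That claim is easy to sanity-check. A year contains roughly 31.6 million seconds, so seven nines of availability (99.99999 percent) allows a little over three seconds of downtime per year; downtime measured in milliseconds implies even more nines. A quick check, with the millisecond figure assumed purely for illustration:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~31.6 million seconds
print(f"7-nines downtime budget: {SECONDS_PER_YEAR * 1e-7:.2f} s/year")

downtime_s = 0.050                             # assume 50 ms/year, for illustration
availability = 1 - downtime_s / SECONDS_PER_YEAR
print(f"50 ms/year of downtime -> availability = {availability:.12f}")  # comfortably past seven nines
```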

IBM’s own announcement of the Telum hints at just how different mainframe and commodity computing’s priorities are. It casually describes Telum’s memory interface as “capable of tolerating complete channel or DIMM failures, and designed to transparently recover data without impact to response time.”

#biz-it, #ibm, #mainframe, #tech, #telum, #z-series

US giants top tech industry’s $100M+ a year lobbying blitz in EU

The scale of the tech industry’s spending to influence the European Union’s tech policy agenda has been laid out in a report published today by Corporate Europe Observatory and Lobbycontrol — which found hundreds of companies, groups and business associations shelling out a total of €97 million (~$115M) annually lobbying EU institutions.

The level of spending makes tech the biggest lobby sector in the region — ahead of pharma, fossil fuels, finance, and chemicals — per the report by the two lobbying transparency campaign groups.

The EU has a raft of digital legislation in train, including the Digital Markets Act, which is set to apply ex ante controls to the biggest ‘gatekeeper’ platforms to promote fair competition in the digital market by outlawing a range of abusive practices; and the Digital Services Act, which will increase requirements on a swathe of digital businesses — again with greater requirements for larger platforms — to try to bring online rules in line with offline requirements in areas like illegal content and products.

Brussels-based lawmakers are also eyeing ways to tackle online disinformation and threats to democratic processes, such as updating the EU’s rules for political ads running online and tightening the regulation of online ad targeting more generally.

The bloc is also in the process of agreeing a risk-based framework for applications of artificial intelligence.

Data reuse is another big EU regulatory focus.

At the same time, enforcement of the EU’s existing data protection framework (GDPR) — which is widely perceived to have been (mostly) weakly applied against tech giants — is another area where tech giants may be keen to influence regional policy, given that uniformly vigorous enforcement could threaten the surveillance-based business models of online ad giants like Google and Facebook.

Instead, multiple GDPR complaints against the pair are still sitting undecided on the desk of Ireland’s Data Protection Commission.

A small number of tech giants dominate EU lobbying, according to the report, which found ten companies are responsible for almost a third of the total spend — namely: Google, Facebook, Microsoft, Apple, Huawei, Amazon, IBM, Intel, Qualcomm and Vodafone — who collectively spend more than €32M a year to try to influence EU tech policy.

Google topped the lobbying list of Big Tech big spenders in the EU — spending €5.8M annually trying to influence EU institutions, per the report; followed by Facebook (€5.5M); Microsoft (€5.3M); Apple (€3.5M); and Huawei (€3M).


Unsurprisingly, US-based tech companies dominate industry lobbying in the EU — with the report finding a fifth of the companies lobbying the bloc on digital policy are US-based — although it suggests the true proportion is “likely even higher”.

Companies based in China (or Hong Kong), meanwhile, were found to account for less than one per cent of the total, suggesting Chinese tech firms are so far not invested in EU lobbying at anywhere near the level of their US counterparts.

“The lobbying surrounding proposals for a Digital Services pack, the EU’s attempt at reining in Big Tech, provides the perfect example of how the firms’ immense budget provides them with privileged access: Commission high-level officials held 271 meetings, 75 percent of them with industry lobbyists. Google and Facebook led the pack,” write the pair of transparency campaign groups.

The report also shines a light on how the tech industry routinely relies upon astroturfing to push favored policies — with tech companies not only lobbying individually but also being collectively organised into a network of business and trade associations that the report dubs “important lobby actors” too.

Per the report, business associations lobbying on behalf of Big Tech alone have a lobbying budget that “far surpasses that of the bottom 75 per cent of the companies in the digital industry”.

Such a structure can allow the wealthiest tech giants to push preferred policy positions under a guise of wider industry support — by also shelling out to fund such associations which then gives them an outsized influence over their lobbying output.

“Big Tech’s lobbying also relies on its funding of a wide network of third parties, including think tanks, SME and startup associations and law and economic consultancies to push through its messages. These links are often not disclosed, obfuscating potential biases and conflicts of interest,” the pair note, going on to highlight 14 think tanks and NGOs they found to have “close ties” to Big Tech firms.

“The ethics and practice of these policy organisations varies but some seem to have played a particularly active role in discussions surrounding the Digital Services pack, hosting exclusive or skewed debates on behalf of their funders or publishing scaremongering reports,” they continue.

“There’s an opacity problem here: Big Tech firms have fared poorly in declaring their funding of think tanks – mostly only disclosing these links after being pressured. And even still this disclosure is not complete. To this, Big Tech adds its funding of SME and startup associations; and the fact that law and economic experts hired by Big Tech also participate in policy discussions, often without disclosing their clients or corporate links.”

The 14 think tanks and NGOs the report links to Big Tech backers are: CERRE; CDI, EPC, CEPS, CER, Bruegel, Lisbon Council, CDT, TPN, Friends of Europe, ECIPE, European Youth Forum, German Marshall Fund and the Wilfried Martens Centre for European Studies.

The biggest spending tech giants were contacted for comment on the report. We’ll update this article with any response.

We have also reached out to the European Commission for comment.

The full report — entitled The Lobby Network: Big Tech’s Web of Influence in the EU — can be found here.

#amazon, #apple, #big-tech, #brussels, #digital-markets-act, #europe, #european-union, #facebook, #huawei, #ibm, #intel, #lobbying, #online-disinformation, #policy, #qualcomm, #united-states, #vodafone

Linux 5.14 set to boost future enterprise application security

Linux is set for a big release this Sunday August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.

A particular area of interest for both enterprise and cloud users is always security, and to that end Linux 5.14 will help with several new capabilities. Mike McGrath, vice president of Linux Engineering at Red Hat, told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit.

“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.
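There is no glibc or Python wrapper for this yet, but the new interface is a prctl() call. Here is a minimal sketch of a process opting itself into core scheduling; the constant values are taken to be those in linux/prctl.h for 5.14 and should be treated as assumptions.

```python
import ctypes, os

libc = ctypes.CDLL(None, use_errno=True)

PR_SCHED_CORE = 62          # prctl option added in 5.14 (assumed value)
PR_SCHED_CORE_CREATE = 1    # create a new core-scheduling cookie for this task
PIDTYPE_TGID = 1            # apply the cookie to the whole thread group

ret = libc.prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, os.getpid(), PIDTYPE_TGID, 0)
if ret != 0:
    print("prctl(PR_SCHED_CORE) failed:", os.strerror(ctypes.get_errno()))
else:
    # From here on, this process only shares an SMT core with tasks holding the same cookie.
    print("core-scheduling cookie created")
```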

Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half that will help to protect system memory in a better way than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, there is a capability known as memfd_secret() that will enable an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.

“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.
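There is no library wrapper for the new call yet either, so the sketch below goes through the raw syscall; the syscall number shown is the x86_64 one and, along with the kernel having the feature enabled, is an assumption. On kernels without secretmem support the call simply fails.

```python
import ctypes, mmap, os

libc = ctypes.CDLL(None, use_errno=True)
SYS_memfd_secret = 447                      # x86_64 syscall number (assumed)

fd = libc.syscall(SYS_memfd_secret, 0)
if fd < 0:
    print("memfd_secret unavailable:", os.strerror(ctypes.get_errno()))
else:
    os.ftruncate(fd, mmap.PAGESIZE)         # size the secret region
    secret = mmap.mmap(fd, mmap.PAGESIZE)   # mapped into this process only
    secret.write(b"key material the rest of the system cannot read")
    secret.close()
    os.close(fd)
```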

At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations. 

The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those that contribute to Linux kernel development include individual contributors, as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.

“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.

While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14. 

The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds (pictured above) first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.

McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.

The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view.  He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.

“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”

#cloud, #cloud-applications, #enterprise, #ibm, #linus-torvalds, #linux, #operating-systems, #red-hat, #security, #suse, #tc

Noetic Cyber emerges from stealth with $15M led by Energy Impact Partners

Noetic Cyber, a cloud-based continuous cyber asset management and controls platform, has launched from stealth with a Series A funding round of $15 million led by Energy Impact Partners.

The round was also backed by Noetic’s existing investors, TenEleven Ventures and GlassWing Ventures, and brings the total amount of funds raised by the startup to $20 million following a $5 million seed round. Shawn Cherian, a partner at Energy Impact Partners, will join the Noetic board, while Niloofar Razi Howe, a senior operating partner at the investment firm, will join Noetic’s advisory board.

“Noetic is a true market disruptor, offering an innovative way to fix the cyber asset visibility problem — a growing and persistent challenge in today’s threat landscape,” said Howe.

The Massachusetts-based startup claims to be taking a new approach to the cyber asset management problem. Unlike traditional solutions, Noetic is not agent-based, instead using API aggregation and correlation to draw insights from multiple security and IT management tools.

“What makes us different is that we’re putting orchestration and automation at the heart of the solution, so we’re not just showing security leaders that they have problems, but we’re helping them to fix them,” Paul Ayers, CEO and co-founder of Noetic Cyber tells TechCrunch.

Ayers was previously a top exec at PGP Corporation (acquired by Symantec for $370 million) and Vormetric (acquired by Thales for $400 million) and founded Noetic Cyber with Allen Roger and Allen Hadden, who have previously worked at cybersecurity vendors including Authentica, Raptor and Axent. All three were also integral to the development of Resilient Systems, which was acquired by IBM.

“The founding team’s experience in the security, orchestration, automation and response market gives us unique experience and insights to make automation a key pillar of the solution,” Ayers said. “Our model gives you the certainty to make automation possible, the goal is to find and fix problems continuously, getting assets back to a secure state.”

“The development of the technology has been impacted by the current cyber landscape, and the pandemic, as some of the market drivers we’ve seen around the adoption of cloud services, and the increased use of unmanaged devices by remote workers, are driving a great need for accurate cyber asset discovery and management.”

The company, which currently has 20 employees, says it plans to use the newly raised funds to double its headcount by the end of the year, as well as increase its go-to-market capability in the U.S. and the U.K. to grow its customer base and revenue.

“In terms of technology development, this investment allows us to continue to add development and product management talent to the team to build on our cyber asset management platform,” Ayers said. 

“The beauty of our approach is that it allows us to easily add more applications and use cases on top of our core asset visibility and management model. We will continue to add more connectors to support customer use cases and will be bringing a comprehensive controls package to market later in 2021, as well as a community edition in 2022.”

#api, #cloud-services, #computer-security, #computing, #cryptography, #cybercrime, #cyberwarfare, #data-security, #energy-impact-partners, #funding, #glasswing-ventures, #ibm, #information-technology, #malware, #massachusetts, #partner, #raptor, #resilient-systems, #security, #shawn-cherian, #symantec, #technology-development, #teneleven-ventures, #thales, #united-kingdom, #united-states, #vormetric

Why former Alibaba scientist wants to back founders outside the Ivory Tower

Min Wanli had a career path much coveted by anyone pursuing computer science. A prodigy, Min was accepted to a top research university in China at the age of 14. He subsequently obtained Ph.D. degrees in physics and statistics from the University of Chicago before spending nearly a combined decade at IBM and Google.

Like many young, aspiring Chinese scientists working in the United States, Min returned to China when the country’s internet boom was underway in the early 2010s. He joined Alibaba’s fledgling cloud arm and was at the forefront of applying its tech to industrial scenarios, like using visual identification to mitigate highway traffic and computing power to improve factory efficiency.

Then in July 2019, Min took a leap. He resigned from Alibaba Cloud, which had become a major growth driver for the e-commerce goliath and at the time China’s largest public cloud infrastructure provider (it still is). With no experience in investment, he started a new venture capital firm called North Summit Capital.

“A lot of enterprises were quite skeptical of ‘digital transformation’ around 2016 and 2017. But by 2019, after they had seen success cases [from Alibaba Cloud], they no longer questioned its viability,” said Min in his office overlooking a cluster of urban villages and highrise offices in Shenzhen. Clad in a well-ironed light blue shirt, he talked with a childlike, earnest smile.

“Suddenly, everyone wanted to go digital. But how am I supposed to meet their needs with a team of just 400-500 people?”

Min’s solution was not to serve the old-school factories and corporations himself but to finance and support a raft of companies to do so. Soon he closed the first fund for North Summit with “several hundreds of millions of dollars” from an undisclosed high-net-worth individual from the United Arab Emirates, whom Min had met when he represented Alibaba at a Dubai tech conference in 2018.

“Venture capital is like a magnifier through which I can connect with a lot of tech companies and share my lessons from the past, so they can quickly and effectively work with their clients from traditional industries,” Min said.

“For example, I’d discuss with my portfolio firms whether they should focus on selling hardware pieces or software first, or give them equal weight.”

Min strives to be deeply involved in the companies he backs. North Summit invests early, with check sizes so far ranging from roughly $5 million to $25 million. Min also started a technology service company called Quadtalent to provide post-investment support to his portfolio.

Photo: North Summit Capital’s office in Shenzhen

The notion of digital transformation is both buzzy and daunting for many investors due to the highly complex and segmented nature of traditional industries. But Min has a list of criteria to help narrow down his targets.

First, an investable area should be data-intensive. Subway tracks, for example, could benefit from implementing large numbers of sensors that monitor the rail system’s status. Second, an area’s manufacturing or business process should be capital-intensive, such as production lines that use exorbitant equipment. And lastly, the industry should be highly dependent on repetitive human experience, like police directing traffic.

Solving industrial problems requires not just founders’ computing ingenuity but, more critically, their experience in a traditional sector. As such, Min goes beyond the “Ivory Tower” of computer science wizards when he looks for entrepreneurs.

“What we need today is a type of inter-disciplinary talent who can do ‘compound algorithms.’ That means understanding sensor signals, business rationales, manufacturing, as well as computer algorithms. Applying neural network through an algorithmic black box without the other factors is simply futile.”

Min faces ample competition as investors hunt down the next ABB, Schneider, or Siemens of China. The country is driving towards technological independence in all facets of the economy and the national mandate takes on new urgency as COVID-19 disrupts global supply chains. The result is skyrocketing valuations for startups touting “industrial upgrade” solutions, Min noted.

But factory bosses don’t care whether their automation solution providers are underdogs or startup unicorns. “At the end of the day, the factory CFO will only ask, ‘how much more money does this piece of software or equipment help us save or make?’”

The investor is cautious about deploying his maiden fund. Two years into operation, North Summit has closed four deals: TopScore, a 17-year-old footwear manufacturer embracing automation; Lingumi, a London-based English learning app targeting Chinese pre-school kids; Aerodyne, a Malaysian drone service provider; and Extreme Vision, a marketplace connecting small-and-medium enterprises to affordable AI vision solutions. 

This year, North Summit aims to invest close to $100 million in companies inside and outside China. Optical storage and robotic process automation (RPA) are just two areas that have been on Min’s radar in recent days.

#abb, #alibaba, #alibaba-cloud, #alibaba-group, #asia, #china, #cloud-computing, #cloud-infrastructure, #computing, #dubai, #funding, #ibm, #manufacturing, #siemens, #tc, #united-arab-emirates, #university-of-chicago, #venture-capital

Jim Whitehurst steps down as president at IBM just 14 months after taking role

In a surprise move today, IBM announced that Jim Whitehurst, who came over in the Red Hat deal, would be stepping down as company president just 14 months after taking over in that role.

IBM didn’t give a lot of details as to why he was stepping away, but acknowledged his key role in bringing the $34 billion Red Hat deal to fruition and in knitting the two companies together after the deal closed. “Jim has been instrumental in articulating IBM’s strategy, but also, in ensuring that IBM and Red Hat work well together and that our technology platforms and innovations provide more value to our clients,” the company stated.

He will stay on as a senior advisor to CEO Arvind Krishna, but the move raises the question of why he is leaving after such a short time in the role, and what he plans to do next. Oftentimes after a deal of this magnitude closes, there is an agreement as to how long key executives will stay. It could be simply that the period has expired and Whitehurst wants to move on, but some saw him as the heir apparent to Krishna, and the move comes as a surprise when looked at in that context.

“I am surprised because I always thought Jim would be next in line as IBM CEO. I also liked the pairing between a lifer IBMer and an outsider,” Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, told TechCrunch.

Regardless, it leaves a big hole in Krishna’s leadership team as he works to transform the company into one that is primarily focused on hybrid cloud.  Whitehurst was undoubtedly in a position to help drive that change through his depth of industry knowledge and his credibility with the open source community from his time at Red Hat. He is not someone who would be easily replaced and the announcement didn’t mention anyone filling his role.

When IBM bought Red Hat in 2018 for $34 billion, it led to a cascading set of changes at both companies. First, Ginni Rometty stepped down as CEO at IBM and Arvind Krishna took over. At the same time, Jim Whitehurst, who had been Red Hat CEO, moved to IBM as president, and long-time employee Paul Cormier moved into his role.

The company also announced some other changes, including that long-time IBM executive Bridget van Kralingen is stepping away from her role as senior vice president of global markets. Rob Thomas, who had been senior vice president of IBM cloud and data platform, will step in to replace van Kralingen.

#arvind-krishna, #cloud, #enterprise, #ibm, #jim-whitehurst, #personnel

Companies navigate ethical minefield to build proof of vaccination apps

In the U.S., after you get vaccinated against COVID-19 you are given a small paper card issued by the CDC that is essentially the only evidence that you’ve received your shots. It might seem like a flimsy level of proof, one that you could easily lose, but replacing that paper copy with a digital one has become a political lightning rod in America.

In spite of that, many companies are attempting to attack the problem to produce a viable form of digital proof, sometimes called vaccine passports. For all intents and purposes, what many call a vaccine passport is simply proof you’ve been vaccinated that you can carry on your smartphone, rather than on a card in your wallet.

Some have argued against the digital approach for privacy reasons. Others have claimed it is a civil liberties issue, and some have pointed to equity issues related to not having equal access to appropriate technology or the internet.

That lack of consensus, along with the open ethical questions, has led some states, including Florida and Georgia, to ban the use of electronic vaccine passports, at least as far as requiring them to conduct state business or creating a centralized vaccination record-keeping system. In Iowa, the governor signed a law last month that prohibits businesses and the state from requiring any proof to access services, whether the card is physical or digital.

These are just a few examples of the patchwork of state laws and executive orders that has resulted in even more complexity for companies trying to develop products to solve this problem. But not every state is banning digital vaccination records. Earlier this month, California opened a registration system to request a digital record of your vaccination and New York announced a system earlier this year to download proof of vaccination to your smartphone. More on these approaches later.

We spoke to several experts to get their take on moving your vaccine card to the digital world to find out how this could work in spite of the obvious friction.

Practical issues

According to Dr. Shira I. Doron from Tufts Medical Center in Boston, whose specialties include infectious diseases and hospital epidemiology, it’s not as simple a matter as it may sound.

For starters, she says, states have not kept records in a consistent way. People have been getting vaccinated in all kinds of places, from school gyms to pharmacies to stadiums, and it’s not clear if those records have made their way to people’s primary care physicians, assuming they even have one.

“[Vaccine passports could work] if [a system] had been rolled out that way [with central record keeping in mind] from December 15th [when we started vaccinating], but it was not. So, if somebody takes it on to go backwards and issue that kind of proof to people, maybe a system like that could work — and of course there are a lot of people that have taken issue with the ethics of that,” she said.

For her, it comes down to infection rates. As they drop with more people getting vaccinated, the need for any kind of proof could fall away entirely, simply because we would be safer once the infection rate dropped below 10%. “I think that more ideally we get down to such a low infection rate and such high rate of vaccination that there is no longer a concern about people walking into a building,” she said.

Putting it on the blockchain

If the infection rate remains higher than desirable, or certain entities like universities want to require it, how do we offer proof of vaccination beyond the paper card? Some people are pointing to the blockchain, but the approach isn’t without controversy. New York State is using IBM’s blockchain technology for its proof of vaccination called Excelsior Pass, but privacy advocates worry that doing so could expose people’s personal medical information.

The idea with the IBM approach is that you go to your physician’s healthcare portal, or some other place that has your vaccine records and has partnered with IBM. The portal presents you with a QR code, which you can photograph with your phone and store in your phone’s digital wallet. You then present the QR code at a venue, which uses a companion scanning application to check it for proof of vaccination (or a recent negative test). Finally, the venue verifies your identity with a secondary form of ID, such as a driver’s license.
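
To make that flow concrete, here is a minimal sketch of what the venue-side check could look like. It is purely illustrative: the field names, the check_pass function and the HMAC-based verify_signature helper are hypothetical stand-ins, not IBM's actual scanning API, and a real system would use asymmetric signatures and a published credential format.

```python
import hashlib
import hmac
import json

def verify_signature(payload: dict, signature: str, issuer_key: bytes) -> bool:
    # Real systems would use asymmetric signatures; HMAC keeps this sketch self-contained.
    expected = hmac.new(issuer_key, json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def check_pass(record: dict, trusted_keys: dict, id_name: str, id_dob: str) -> bool:
    """Hypothetical venue-side check of a scanned pass."""
    issuer = record.get("issuer")
    if issuer not in trusted_keys:
        return False  # issuer the venue doesn't recognize
    if not verify_signature(record["payload"], record["signature"], trusted_keys[issuer]):
        return False  # credential was tampered with or forged
    payload = record["payload"]
    if not (payload.get("vaccinated") or payload.get("recent_negative_test")):
        return False  # neither proof of vaccination nor a recent negative test
    # Last step from the article: match the pass against a secondary photo ID.
    return payload.get("name") == id_name and payload.get("date_of_birth") == id_dob
```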

The question, then, is why use the blockchain at all in this instance. IBM Global VP of Payer and Emerging Business Networks Eric Piscini says there are three main reasons. “The first is that the immutability of the blockchain is extremely important, and that’s [a big reason] why we use it. The second piece, which is also very important is the decentralization of that platform so that [all of the vaccine data] is not just in one place. It’s decentralized and managed by different parties. […] The third piece […] is the audit trail, and not just for me as a consumer, but as an [entity] that is trying to verify me,” he explained.

But are those reasons enough to justify its use? Steve Wilson, an analyst at Constellation Research who specializes in end-user privacy, thinks the blockchain is an inappropriate technology to use for digital proof of vaccination. “Basically, I don’t see how blockchain adds anything to the digitizing of COVID vaccinations or tests. The purpose of blockchain is to crowd-source agreement on the ordering of some events, and logging that order in a shared record. What problem in vaccination management does that address?” he asked.

An open-source approach to the problem

When California released a digital vaccination record app last week, it went a different route, using an open-source framework called the Smart Health Cards Framework. The framework was developed by an organization called The Commons Project (TCP) along with a broad coalition of health and technology organizations including Oracle, Microsoft, Salesforce, Epic and others.

JP Pollack, co-founder of The Commons Project, Senior Researcher-in-Residence at Cornell Tech, and Assistant Professor at Weill Cornell Medicine, says that since the government has made clear it won’t be compiling vaccine records in a central database, and because the vaccine administration system itself is so fragmented, it’s even more challenging to create digital records. His organization is working to create a solution to that problem.

“What we’re working on at The Commons Project is a steering group called the Vaccination Credential Initiative or VCI. And the purpose of that group is basically to design and advocate for a specification, someday hopefully a standard, that makes it so that all of those disparate issuers of vaccines can issue the same vaccine record in a signed and portable format,” he said.

That comes in the form of a Smart Health Card app that TCP has developed. “The additional layer that we have built is what turns [your vaccine] information into what we’re calling the Smart Health Card. And basically it’s all of the information that goes on your CDC card — so your name, your date of birth, the type of vaccine that you received, the dates of your doses, lot numbers and where you received it. All of those kinds of things get packaged up into this credential, and that credential is then signed by the issuer,” he said.
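
The general pattern Pollack describes (the CDC-card fields packaged into a portable credential and signed by the issuer) can be sketched in a few lines of Python. This is not the actual Smart Health Cards encoding, which defines its own wire format; the field names and the use of an ECDSA key via the cryptography package are illustrative assumptions.

```python
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical issuer key pair; a real issuer guards its private key closely.
issuer_private_key = ec.generate_private_key(ec.SECP256R1())
issuer_public_key = issuer_private_key.public_key()

# The same fields that appear on the paper CDC card, per Pollack's description.
credential = {
    "name": "Jane Doe",
    "date_of_birth": "1980-01-01",
    "vaccine_type": "mRNA",
    "doses": [
        {"date": "2021-03-01", "lot": "A123", "site": "Example Pharmacy"},
        {"date": "2021-03-29", "lot": "B456", "site": "Example Pharmacy"},
    ],
}

# The issuer signs the serialized record; the signature travels with the
# credential so any verifier holding the issuer's public key can spot tampering.
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Verification raises InvalidSignature if the payload was altered in transit.
issuer_public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
```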

In addition to California, the state of Louisiana also went live with The Commons Project solution this week, and Walmart recently announced that anyone who received their vaccine through the retailer can now download a digital version of their vaccine record directly to the CommonHealth app (available on Android) or the CommonPass app (available on iOS or Android). The company also hinted that other companies that have administered the vaccine would be following Walmart’s lead in the coming weeks and providing access to digital records through the same apps.

The approach doesn’t necessarily resolve all of the criticisms around equitable access to technology, privacy or the ethics of being asked to show proof of vaccination, but it does provide an open way to deliver the information digitally for those who want it.

Regardless of the method your state chooses, if it indeed chooses any approach at all,  it will come with its own set of pros and cons. The paper CDC card, as Wilson points out, is similar in many ways to the “Yellow Card” vaccination record that people traveling overseas have been carrying for decades, and that has worked fine.

But in 2021, when approximately half the world’s population owns a smartphone and two-thirds have some sort of mobile phone, smart or otherwise, it makes sense to make this record available in digital form. The many startups and large companies trying to solve that problem will have to do more than come up with a clever solution. They will also need to figure out how to convince individuals, businesses and governments that it makes sense to even offer this approach, and that may be the biggest hurdle of all.

#apps, #covid-19, #health, #ibm, #tc, #vaccination, #vaccine-passports

Vercel raises $102M Series C for its front-end development platform

Vercel, the company behind the popular open-source Next.js React framework, today announced that it has raised a $102 million Series C funding round led by Bedrock Capital. Existing investors Accel, CRV, Geodesic Capital, Greenoaks Capital and GV also participated in this round, together with new investors 8VC, Flex Capital, GGV, Latacora, Salesforce Ventures and Tiger Global. In total, the company has now raised $163 million and its current valuation is $1.1 billion.

As Vercel notes, the company saw strong growth in recent months, with traffic to all sites and apps on its network doubling since October 2020. About half of the world’s largest 10,000 websites now use Next.js. Given the open-source nature of the Next.js framework, not all of these users are necessarily Vercel customers, but its current paying customers include the likes of Carhartt, GitHub, IBM, McDonald’s and Uber.

Image Credits: Vercel

“For us, it all starts with a front-end developer,” Vercel CEO Guillermo Rauch told me. “Our goal is to create and empower those developers — and their teams — to create delightful, immersive web experiences for their customers.”

With Vercel, Rauch and his team took the Next.js framework and then built a serverless platform that specifically caters to this framework and allows developers to focus on building their front ends without having to worry about scaling and performance.

Older solutions, Rauch argues, were built in isolation from the cloud platforms and serverless technologies, leaving it up to the developers to deploy and scale their solutions. And while some potential users may also be content with using a headless content management system, Rauch argues that increasingly, developers need to be able to build solutions that can go deeper than the off-the-shelf solutions that many businesses use today.

Rauch also noted that developers really like Vercel’s ability to generate a preview URL for a site’s front end every time a developer edits the code. “So instead of just spending all your time in code review, we’re shifting the equation to spending your time reviewing or experiencing your front end. That makes the experience a lot more collaborative,” he said. “So now, designers, marketers, IT, CEOs […] can now come together in this collaboration of building a front end and say, ‘that shade of blue is not the right shade of blue.’”

“Vercel is leading a market transition through which we are seeing the majority of value-add in web and cloud application development being delivered at the front end, closest to the user, where true experiences are made and enjoyed,” said Geoff Lewis, founder and managing partner at Bedrock. “We are extremely enthusiastic to work closely with Guillermo and the peerless team he has assembled to drive this revolution forward and are very pleased to have been able to co-lead this round.”

#bedrock-capital, #ceo, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #content-management-system, #developer, #funding, #fundings-exits, #geodesic-capital, #geoff-lewis, #github, #greenoaks-capital, #ibm, #managing-partner, #mcdonalds, #react, #recent-funding, #salesforce, #salesforce-ventures, #serverless-computing, #software, #startups, #tc, #tiger-global

Extra Crunch roundup: SaaS founder salaries, break-even neobanks, Google Search tips

Usually, a teacher who grades students on a curve is boosting the efforts of those who didn’t perform well on the test. In the case of cloud companies, however, it’s the other way around.

As of Q1 2021, startups in this sector have median Series A rounds around $8 million, reports PitchBook. With $100+ million Series D rounds becoming more common, company valuations are regularly boosted into the billions.

Andy Stinnes, a general partner at Cloud Apps Capital Partners, says founders who are between angel and Series A should seek out investors who are satisfied with $200,000 to $500,000 in ARR.


Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.


Usually specialist firms, these VCs are open to betting on startups that haven’t yet found product-market fit.

“At this phase of development, you need a committed partner who has both the time and the experience to guide you,” says Stinnes.

These observations aren’t just for active investors: This post is also a framework for new and seasoned founders who are getting ready to knock on doors and ask strangers for money.

Thanks very much for reading Extra Crunch this week!

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist

Maybe neobanks will break even after all

Alex returned from a week of vacation with a dispatch about the profitability of neobanks Revolut, Chime and Monzo.

“In short, while American consumer fintech Chime has disclosed positive EBITDA — an adjusted profitability metric — many neobanks that we’ve seen numbers from have demonstrated a stark inability to paint a path to profitability,” he writes.

“That could be changing.”

How to land the top spot in Google Search with featured snippets in 2021


Image Credits: IngaNielsen / Getty Images

“Google search is not what it used to be,” Ryan Sammy, the director of strategy at growth-marketing agency Fractl, writes in a guest post. “We all want to be No. 1 on the search results page, but these days, getting to that position isn’t enough. It might be worth your while to instead go after the top featured snippet position.”

Sammy writes that earning the featured snippet spot is “one of the best things you can do for your SEO.” But how do you land your page in the coveted snippet perch?

 

What does Red Hat’s sale to IBM tell us about Couchbase’s valuation?

Image Credits: Getty Images

After NoSQL provider Couchbase filed to go public, joining the ranks of the Great IPO Rush of 2021, Alex Wilhelm looked into its business model and financial performance, with a goal of better understanding the company — and market comps.

Alex used Red Hat, which recently sold to IBM for around $34 billion, as a comp, determining Couchbase “is worth around $900 million” if you use the Red Hat math.

“The Red Hat-Couchbase comparison is not perfect; 2019 is ages ago in technology time, the database company is smaller and other differences exist between the two companies,” Alex notes. “But Red Hat does allow us the confidence to state that Couchbase will be able to best its final private valuation in its public debut.”

How much to pay yourself as a SaaS founder


Image Credits: AlenaPaulus (opens in a new window) / Getty Images

Anna Heim interviewed SaaS entrepreneurs and investors to find out how much early-stage founders should pay themselves.

Startups run by CEOs who take home a small salary tend to do better over the long run, but there are other points to consider, such as geography, marital status, and frankly, what quality of life you desire.

Waterly founder Chris Sosnowski raised his own pay to $14/hour last year; at his prior job, his salary topped $100,000.

“We had saved money up for over a year before we cut out my pay,” he told Anna. “I can live my life without entertainment … so that’s what we did for 2020.”

How much are you willing to sacrifice?

The early-stage venture capital market is weird and chaotic

Alex Wilhelm and Anna Heim had been hearing that Series A raises were coming later, while Series Bs were coming in quick succession after startups landed an A.

That piqued their curiosity, so they put feelers out to a bunch of investors to understand what’s going on in early-stage venture capital markets.

In the first of a two-part series, Alex and Anna examine why seed stage is so chaotic, why As are slow, and why Bs are fast. In their first dispatch, they looked at the U.S. market.

Have you worked with a talented individual or agency who helped you find and keep more users? Respond to our survey and help us find the best startup growth marketers!

#advertising-tech, #alex-wilhelm, #chime, #couchbase, #entrepreneurship, #extra-crunch-roundup, #ibm, #information-technology, #red-hat, #revolut, #saas, #startups, #tc, #venture-capital

Merlyn Mind emerges from stealth with $29M and a hardware and software solution to help teachers with tech

We’ve chronicled, in great detail, the many layers of technology, services and solutions that have been wrapped around the world of education in recent years — and especially in the last year, which became a high watermark for digital learning tools because of Covid-19. Today, a startup called Merlyn Mind is coming out of stealth with a proposition that it believes helps tie a lot of this together in the K-12 classroom — a “digital assistant” that pairs custom hardware with software to “read” natural voice and remote-control commands from a teacher and control multimedia apps on a screen of choice. Along with this, Merlyn Mind is announcing $29 million in initial funding to build out its vision.

The funding is being led by specialist edtech investor Learn Capital, with other unnamed investors participating. It comes after Merlyn Mind spent about three years quietly building its first release and more recently piloting the service in 50+ classrooms in more than 20 schools.

Co-founded by longtime IBM scientists Satya Nitta (the CEO), Ravi Kokku and Sharad Sundararajan — all of whom spent several years leading education efforts in IBM’s Watson AI research division — Merlyn Mind is coming to the market with a patented, vertically integrated solution to what Nitta told me in an interview he believes, and has seen first-hand, to be a fundamental pain point in the world of edtech.

In effect, education and technology may have now been merged into a single term as far as the tech world is concerned, but in terms of practical, on-the-ground application, many teachers are not making the most of the tools they have in the classroom. The majority are, he believes, facing “cognitive overload” (which is not to mention the kids, who themselves probably are facing the same: a problem for it to tackle down the road, I hope), and they need help.

To be fair, this problem existed before the pandemic, with research from McKinsey & Co. published in 2020 (and gathered earlier) finding that teachers were already spending more than half of their time on administrative tasks, not teaching or thinking about how and what to teach or what help specific students might need. Other research from Learn Platform found that teachers potentially have as many as 900 different applications that they can use in a classroom (in practice, Nitta told me a teacher will typically use between 20 and 30 applications, sites and tech services in a day, although even that is a huge amount).

Post-Covid-19, there are other kinds of new complications to grapple with on top of all that. Not only are many educators now playing catch-up because of the months spent learning at home (it’s been widely documented that in many cases, students have fallen behind), but overall, education is coming away from our year+ of remote learning with a much stronger mandate to use more tech from now on, not less.

The help that Merlyn Mind is proposing comes in the form of what the startup describes as an “AI hub.” This includes a personal assistant called Symphony Classroom, a kind of Alexa-style voice interface tailored to the educational environment and built on a fork of Android; a smart speaker that looks a bit like a soundbar; and a consumer-style remote that can also be used for navigation and commands.

These then work with whatever screen the teacher opts to use, whether it is a TV, or an interactive whiteboard, or something else; along with any other connected devices that are used in the classroom, to open and navigate through different apps, including various Google apps, NearPod, Newsela, and so on. (That could potentially also include kids’ individual screens if they are being used.)

The idea is that if a teacher is in the middle of a lesson on a specific topic and a question comes up that can best be answered by illustrating a concept through another app, a teacher can trigger the system to navigate to a new screen to find that information and instantly show it to the students. The system can also be used to find a teacher’s own materials on file. The demo I saw worked well enough, although I would love to see how an ordinary teacher — the kind they’re hoping will use this — would fare.

Everyone knows the expression “hardware is hard,” so it’s interesting to see Merlyn addressing its problem with a hardware-forward approach.

Nitta was very ready with his defense for this one:

“I’ll tell you why we built our own hardware,” he told me. “There’s a bunch of AI processing that’s happening on the device, for various reasons, including latency and security. So it’s kind of an edge AI appliance. And the second thing is the microphones. They are designed for the classroom environment, and we wanted to have complete control over the tooling of these microphones for the processing, for the environment, and that is very hard to do. If you are taking a third-party microphone array off the shelf, it’s impossible, actually, you simply cannot.”

The startup’s early team is rounded out with alums from the likes of HP Education, Amazon, Google, Facebook, Broadcom and Roku, people who knew the challenges they were tackling, but also the payoff if it all works.

“We have a very, very talented team, and we basically said, right, this is going to be a lot of hard work that will take us three and a half years. We have to build our own piece of hardware… and we ended up building the entire voice stack from scratch ourselves, too,” Nitta continued. “It means we have end-to-end control of everything from the hardware all the way to the language models.”

He did point out though that over time, there will be some elements that will be usable without all the hardware, in particular when a teacher may suddenly have to teach outside the classroom again in a remote learning environment.

It’s a very ambitious concept, but where would education and learning be if not for taking leaps once in a while? That’s where investors stand on the startup, too.

“Just as we saw with the breakthrough edtech company Coursera which reached IPO this year and was started a decade ago by two machine learning professors, in today’s hypercompetitive market the best edtech companies need to start with an advanced technological core,” said Rob Hutter, founder and managing partner of Learn Capital. “Merlyn is one of the first companies to focus on the enhancement of live teaching in classrooms, and it is developing a solution that is so intuitive it allows teachers to leverage technology with mastery while using minimal effort.  This is a very promising platform.”

The proof will be in how it gets adopted when it finally launches commercially later this year, with pricing still to be announced.

#classrooms, #edtech, #education, #funding, #ibm, #merlyn-mind, #teachers, #watson

What does Red Hat’s sale to IBM tell us about Couchbase’s valuation?

The IPO rush of 2021 continued this week with a fresh filing from NoSQL provider Couchbase. The company raised hundreds of millions while private, making its impending debut an important moment for a number of private investors, including venture capitalists.

According to PitchBook data, Couchbase was last valued at a post-money valuation of $580 million when it raised $105 million in May 2020. The company — despite its expansive fundraising history — is not a unicorn heading into its debut to the best of our knowledge.

We’d like to uncover whether it will be one when it prices and starts to trade, so we dug into Couchbase’s business model and its financial performance, hoping to better understand the company and its market comps.

The Couchbase S-1

The Couchbase S-1 filing details a company that sells database tech. More specifically, Couchbase offers customers database technology that includes what NoSQL can offer (“schema flexibility,” in the company’s phrasing), as well as the ability to ask questions of their data with SQL queries.

Couchbase’s software can be deployed on clouds, including public clouds, in hybrid environments, and even on-prem setups. The company sells to large companies, attracting 541 customers by the end of its fiscal 2021, which together generated $107.8 million in annual recurring revenue, or ARR, by the close of last year.

Couchbase breaks its revenue into two main buckets. The first, subscription, includes software license income and what the company calls “support and other” revenues, which it defines as “post-contract support,” or PCS: a package of offerings, including “support, bug fixes and the right to receive unspecified software updates and upgrades” for the length of the contract.

The company’s second revenue bucket is services, which is self-explanatory and lower-margin than its subscription products.

#couchbase, #ec-cloud-and-enterprise-infrastructure, #enterprise, #fundings-exits, #ibm, #nosql, #red-hat, #startups

Nexford University lands $10.8M pre-Series A to scale its flexible remote learning platform

Two profound problems face the higher education sector globally — affordability and relevance. Whether you live in Africa, Europe, or the U.S., a major reason people don’t go to university or college, or drop out, is that they cannot afford tuition fees. Relevance, meanwhile, refers to the huge gap between what traditional universities teach and what global employers actually look for. It’s not a secret that universities focus a bit too much on theory.

Over the past few years, a number of alternative credential providers have emerged, trying to give students the skills they need to earn a living. Nexford University is one such platform, and today it has closed a $10.8 million pre-Series A funding round.

Dubai-based VC Global Ventures led the new round. Other investors include Future Africa’s new thematic fund (focused on education), angel investors, and family offices. Unnamed VCs from 10 countries, including the U.S., U.K., France, Dubai, Switzerland, Qatar, Nigeria, Egypt and Saudi Arabia, also took part.

To date, Nexford has raised $15.3 million, following the first tranche of $4.5 million in seed funding raised two years ago.

Fadl Al Tarzi launched Nexford University in 2019. The tech-enabled university is trying to close the affordability and relevance gaps by providing access to quality, affordable education.

“That way, you get the best of both worlds,” CEO Al Tarzi said to TechCrunch. “You get practical skills that you can put to work immediately or for your future career while actively keeping a job. So the whole experience is designed as a learning as a service model.”

Nexford University lets students study at their own pace. Once they apply and get admitted into either a degree program or a course program, they choose how fast or slow they want the program to be.


Fadl Al Tarzi (CEO, Nexford University)

The CEO says whatever students learn on the platform is directly applicable to their jobs. Currently, Nexford offers undergraduate degrees in business administration; 360° marketing; AI & automation; building a tech startup; business analytics; business in emerging markets; digital transformation; e-commerce; and product management. Its graduate degrees are business administration, advanced AI, e-commerce, hyperconnectivity, sustainability, and world business.

Nexford’s tuition structure is very different from that of traditional universities because it’s modelled monthly. Its accredited degrees cost between $3,000 and $4,000, paid in monthly instalments. In Nigeria, for instance, an MBA costs about $160 a month, while a bachelor’s degree costs $80 a month. The catch with the monthly instalment structure is that the faster a learner graduates, the less they pay.
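
A quick back-of-the-envelope calculation shows how that works in practice. The completion times below are illustrative assumptions, not Nexford's published figures; only the $160-a-month MBA price comes from the article.

```python
def total_tuition(monthly_fee: float, months_to_graduate: int) -> float:
    """Total paid under a monthly-instalment model: fewer months, lower total."""
    return monthly_fee * months_to_graduate

# The article's Nigerian MBA pricing of roughly $160 a month; the completion
# times are hypothetical.
for months in (18, 24):
    print(f"MBA finished in {months} months: ${total_tuition(160, months):,.0f}")
# 18 months comes to $2,880 versus $3,840 for 24 months, which is why a faster
# graduation means a smaller overall bill.
```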

What’s it like learning with Nexford University?

Nexford University doesn’t offer standardized and theoretical tests or assignments as most traditional universities do. Al Tarzi says the company employs what he calls a competency-based education model where students prove mastery by working on practical projects.

For instance, a student working on an accounting course will most likely need to create a P&L statement, analyze balance sheets and identify where an error is in order to correct it. The platform then gives the student different scenarios showing companies with different revenue and expense levels. The task? To analyze and extract certain ratios to help make sense of which company is profitable and the other unit economics involved.
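
To illustrate the kind of exercise being described, here is a small, hypothetical version of that comparison in Python; the company figures are invented purely for illustration.

```python
# Invented figures for two hypothetical companies in a scenario exercise.
companies = {
    "Company A": {"revenue": 1_200_000, "expenses": 1_050_000},
    "Company B": {"revenue": 800_000, "expenses": 860_000},
}

for name, figures in companies.items():
    profit = figures["revenue"] - figures["expenses"]
    margin = profit / figures["revenue"]
    verdict = "profitable" if profit > 0 else "loss-making"
    print(f"{name}: profit ${profit:,}, margin {margin:.1%} ({verdict})")

# Company A earns a 12.5% margin while Company B runs at a loss, the kind of
# conclusion the exercise asks learners to reach from the raw statements.
```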

Though Nexford plays in the edtech space, Al Tarzi doesn’t think the company is an edtech company. As a licensed and accredited online university, Nexford has a huge amount of automation across the organization and provides students with support from faculty and career advisors.

After offering degrees, Nexford puts on its placement hat, matching its graduates with partner employers.

There’s a big shortage of jobs in Nigeria, and despite the high unemployment, it’s actually difficult to find extremely qualified entry-level graduates. So Nexford has struck several partnerships where employers sponsor their employees, or soon-to-be employees, for upskilling and reskilling purposes.

One illustration is Sterling Bank, a local bank in the country. Most Nigerian banks have yearly routines where they hire graduates and put them on weeks-long training programs. Sterling Bank then employs any candidate it feels did well in these capital-intensive programs (eight weeks in most cases).

So Nexford has partnered with Sterling to fund the tuition of high-school leavers. When these students go through Nexford’s programs for the first year, they begin to get part-time placements at Sterling. Upon graduation, they get a job at the bank.

“That saves Sterling the training cost and our tuition fee is almost equal to the training that they provided for students. Also, students start paying back once they get placed, so it’s a win-win.”

Nexford University has learners from 70 countries, with Nigeria its biggest market yet. Nexford also has blue-chip partnerships with Microsoft, LinkedIn Learning, and IBM to provide access to tools, courses and programmes to improve the learning experience.

One of the major gains of this learning experience is how it prepares people for remote jobs. Nexford is bullish on its virtual skills grid, where people on the platform can land remote jobs regardless of their location.

“Across Sub Saharan Africa by the year 2026, there’s gonna be a shortage of about 100 million university seats as a result of huge growth in youth population not met by growth and supply. Even if you want to build universities fast, you wouldn’t be able to meet the demand. And that spirals down to the job market. We don’t think the local economy will produce enough jobs in Nigeria, for instance. But we want to enable people to get remote jobs across the world and not necessarily have to migrate.” 

Last year, Nexford’s revenues grew by 300%. This year, the company hopes to triple its enrollment, the CEO said.

Nexford is big on designing students’ curricula based on analysis of what employers need. Al Tarzi tells me that the company always follows the Big Data approach, asking itself, “how do we find out what employers worldwide are looking for and keep our curriculum alive and relevant?”

“We develop proprietary technology that enables us to analyze job vacancies as well as several other data sources, use AI to understand those data sets and build a curriculum based on those findings. So, in short, we start with the end in mind,” he answers.

The company is keen on improving its technology regardless. It wants to analyze skills more accurately and automate more functions to enhance the user experience. That’s what the funding will be used for, in addition to fuelling its regional expansion plans (particularly in Asia) and investing in growth and product development. On the latter, the online university says it will be launching partner programs with more employers globally to facilitate both placement and upskilling and reskilling.

Merging the world of tech with the traditional university model is no easy feat. The former is about efficiency, user-centricity and product, among other things. The latter embodies rigidity and continues to lag behind fast-paced innovation. And while there’s been a boom in edtech, most startups try to circumvent the industry’s bureaucracy by launching an app or a MOOC. Nexford’s model of running a degree-granting, licensed, accredited and regulated university is more challenging, but in it lies so much opportunity.

Iyin Aboyeji, Future Africa general partner, understands this. It’s one reason why Nexford is the first investment out of Future Africa’s soon-to-be-launched fund focused on the future of learning, and why he believes the company is a game-changer for higher education in Africa.

“During the pandemic, while many universities in Nigeria were shut down due to labour disputes, Nexford was already delivering an innovative and affordable new model of online higher education designed for a skills-based economy.”  

For Noor Sweid, general partner at Global Ventures, Nexford University is redressing the mismatch between the supply of talent and the demands of today’s digital economy. “We are thrilled to partner with Fadl and the Nexford team on their journey toward expanding access to universal quality higher education in emerging markets,” she said.

#africa, #artificial-intelligence, #asia, #education, #europe, #funding, #future-africa, #higher-education, #ibm, #massive-open-online-course, #microsoft, #nexford-university, #nigeria, #online-learning, #product-management, #saudi-arabia, #tc, #tech-startup, #united-states, #university

Google Analytics prepares for life after cookies

As consumer behavior and expectations around privacy have shifted — and operating systems and browsers have adapted to this — the age of cookies as a means of tracking user behavior is coming to an end. Few people will bemoan this, but advertisers and marketers rely on having insights into how their efforts translate into sales (and publishers like to know how their content performs as well). Google is obviously aware of this and it is now looking to machine learning to ready its tools like Google Analytics for this post-cookie future.


Vidhya Srinivasan, VP/GM, Advertising at Google

Last year, the company already brought several machine learning tools to Google Analytics. At the time, the focus was on alerting users to significant changes in their campaign performance, for example. Now, it is taking this a step further by using its machine learning systems to model user behavior when cookies are not available.

It’s hard to overestimate the importance of this shift, but according to Vidhya Srinivasan, Google’s VP and GM for Ads Buying, Analytics and Measurement, who joined the company two years ago after a long stint at Amazon (and IBM before that), it’s also the only way to go.

“The principles we outlined to drive our measurement roadmap are based on shifting consumer expectations and ecosystem paradigms. Bottom line: the future is consented. It’s modeled. It’s first-party. So that’s what we’re using as our guide for the next gen of our products and solutions,” she said in her first media interview after joining Google.

It’s still early days, and a lot of users may yet consent and opt in to tracking and sharing their data in some form or another. But the early indications are that this will be a minority of users. Unsurprisingly, first-party data and the data Google can gather from users who consent become increasingly valuable in this context.

Because of this, Google is now also making it easier to work with this so-called ‘consented data’ and to create better first-party data through improved integrations with tools like the Google Tag Manager.

Last year, Google launched Consent Mode, which helps advertisers manage cookie behavior based on local data-protection laws and user preferences. For advertisers in the EU and the U.K., Consent Mode allows them to adjust their Google tags based on a user’s choices, and soon Google will launch a direct integration with Tag Manager to make it easier to modify and customize these tags.

How Consent Mode works today.

What’s maybe more important, though, is that Consent Mode will now use conversion modeling for users who don’t consent to cookies. Google says this can recover about 70% of ad-click-to-conversion journeys that would otherwise be lost to advertisers.

In addition, Google is also making it easier for bring in first-party data (in a privacy-forward way) to Google Analytics to improve measurements and its models.

“Revamping a popular product with a long history is something people are going to have opinions about – we know that. But we felt strongly that we needed Google Analytics to be relevant to changing consumer behavior and ready for a cookie-less world – so that’s what we’re building,” Srinivasan said. “The machine learning that Google has invested in for years — that experience is what we’re putting in action to drive the modeling underlying this tech. We take having credible insights and reporting in the market seriously. We know that doing the work on measurement is critical to market trust. We don’t take the progress we’ve made for granted and we’re looking to continue iterating to ensure scale, but above all we’re prioritizing user trust.”

 

 

#advertising-tech, #amazon, #analytics, #articles, #computing, #european-union, #gm, #google, #google-analytics, #ibm, #machine-learning, #operating-systems, #tc, #tracking, #united-kingdom, #vp, #web-analytics, #world-wide-web

IBM creates the world’s first 2 nm chip

Thursday, IBM announced a breakthrough in integrated circuit design—the world’s first 2 nanometer process. IBM says its new process can produce CPUs capable of either 45 percent higher performance, or 75 percent lower energy use than modern 7 nm designs.

If you’ve followed recent processor news, you’re likely aware that Intel’s current desktop processors are still laboring along at 14 nm, while the company struggles to complete a migration downward to 10 nm—and that its rivals are on much smaller processes, with the smallest production chips being Apple’s new M1 processors at 5 nm. What’s less clear is exactly what that means in the first place.

Originally, process size referred to the literal two-dimensional size of a transistor on the wafer itself—but modern 3D chip fabrication processes have made a hash of that. Foundries still refer to a process size in nanometers, but it’s a “2D equivalent metric” only loosely coupled to reality, and its true meaning varies from one fabricator to the next.

Read 4 remaining paragraphs | Comments

#cpu-design, #ibm, #tech

Cloud infrastructure market keeps rolling in Q1 with almost $40B in revenue

Conventional wisdom over the last year has suggested that the pandemic has driven companies to the cloud much faster than they ever would have gone without that forcing event, with some suggesting it has compressed years of transformation into months. This quarter’s cloud infrastructure revenue numbers appear to be proving that thesis correct.

With The Big Three — Amazon, Microsoft and Google — all reporting this week, the market generated almost $40 billion in revenue, according to Synergy Research data. That’s up $2 billion from last quarter and up 37% over the same period last year. Canalys’s numbers were slightly higher at $42 billion.

As you might expect if you follow this market, AWS led the way with $13.5 billion for the quarter, up 32% year over year. That’s a run rate of $54 billion. While that is an eye-popping number, what’s really remarkable is the yearly revenue growth, especially for a company the size and maturity of Amazon. The law of large numbers would suggest this isn’t sustainable, but the pie keeps growing and Amazon continues to take a substantial chunk.
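
The run-rate figure is simple arithmetic (annualizing the quarter), and the growth rate also implies what the year-ago quarter looked like; a quick sketch using the article's numbers makes both explicit.

```python
aws_q1_2021 = 13.5      # $ billions, Q1 2021 per Synergy Research
yoy_growth = 0.32       # 32% year over year

run_rate = aws_q1_2021 * 4                          # annualize the quarter: ~$54B
implied_q1_2020 = aws_q1_2021 / (1 + yoy_growth)    # ~$10.2B a year earlier

print(f"Run rate: ${run_rate:.0f}B")
print(f"Implied Q1 2020 revenue: ${implied_q1_2020:.1f}B")
```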

Overall, AWS held steady with 32% market share. While the revenue numbers keep going up, Amazon’s market share has remained firm for years at around this number. It’s the other companies down market that are gaining share over time, most notably Microsoft, which is now at around 20% share, good for about $7.8 billion this quarter.

Google continues to show signs of promise under Thomas Kurian, hitting $3.5 billion, good for 9% share, as it makes a steady march toward double digits. Even IBM had a positive quarter, led by Red Hat and cloud revenue, good for 5%, or about $2 billion overall.

Synergy Research cloud infrastructure bubble map for Q1 2021. AWS is leader, followed by Microsoft and Google.

Image Credits: Synergy Research

John Dinsdale, chief analyst at Synergy, says that even though AWS and Microsoft have firm control of the market, that doesn’t mean there isn’t money to be made by the companies playing behind them.

“These two don’t have to spend too much time looking in their rearview mirrors and worrying about the competition. However, that is not to say that there aren’t some excellent opportunities for other players. Taking Amazon and Microsoft out of the picture, the remaining market is generating over $18 billion in quarterly revenues and growing at over 30% per year. Cloud providers that focus on specific regions, services or user groups can target several years of strong growth,” Dinsdale said in a statement.

Canalys, another firm that watches the same market as Synergy, had similar numbers with slight variations, certainly close enough to confirm one another’s findings. It has AWS at 32%, Microsoft at 19% and Google at 7%.

Canalys market share chart showing Amazon at 32%, Microsoft at 19% and Google at 7%

Image Credits: Canalys

Canalys analyst Blake Murray says that there is still plenty of room for growth, and we will likely continue to see big numbers in this market for several years. “Though 2020 saw large-scale cloud infrastructure spending, most enterprise workloads have not yet transitioned to the cloud. Migration and cloud spend will continue as customer confidence rises during 2021. Large projects that were postponed last year will resurface, while new use cases will expand the addressable market,” he said.

The numbers we see are hardly a surprise anymore, and as companies push more workloads into the cloud, the numbers will continue to impress. The only question now is if Microsoft can continue to close the market share gap with Amazon.

#amazon, #cloud, #cloud-infrastructure-market-share, #earnings, #enterprise, #google, #ibm, #microsoft, #synergy-research, #tc

IBM is acquiring cloud app management firm Turbonomic for up to $2B

IBM today made another acquisition to deepen its reach into providing enterprises with AI-based services to manage their networks and workloads. It announced that it is acquiring Turbonomic, a company that provides tools to manage application performance (specifically resource management), along with Kubernetes and network performance, as part of its bigger strategy to bring more AI into IT ops, or as IBM calls it, AIOps.

Financial terms of the deal were not disclosed, but according to data from PitchBook, Turbonomic was valued at nearly $1 billion — $963 million, to be exact — in its last funding round in September 2019. A Reuters report that rumored the deal a little earlier today valued it at between $1.5 billion and $2 billion, and a source tells us that figure is accurate.

The Boston-based company’s investors included General Atlantic, Cisco, Bain, Highland Capital Partners, and Red Hat. The last of these, of course, is now a part of IBM (so it was theoretically also an investor), and together Red Hat and IBM have been developing a range of cloud-based tools addressing telco, edge and enterprise use cases.

This latest deal will help extend that further, and it has more generally been an area where IBM has been aggressive recently. Last November, IBM acquired another company called Instana to bring application performance management into its stable, and it pointed out today that the Turbonomic deal will complement that, with the two technologies’ tools to be integrated together.

Turbonomic’s tools are particularly useful in hybrid cloud architectures, which involve not just on-premises and cloud workloads, but workloads that typically are extended across multiple cloud environments. While companies may adopt this architecture for resilience, cost, location or other practical reasons, the fact of the matter is that it can be a challenge to manage. Turbonomic’s tools automate management, analyze performance, and suggest changes for network operations engineers to make to meet usage demands.

“Businesses are looking for AI-driven software to help them manage the scale and complexity challenges of running applications cross-cloud,” said Ben Nye, CEO, Turbonomic, in a statement. “Turbonomic not only prescribes actions, but allows customers to take them. The combination of IBM and Turbonomic will continuously assure target application response times even during peak demand.”

The bigger picture for IBM is that it’s another sign of how the company is continuing to move away from its legacy business based around servers and deeper into services, and specifically services on the infrastructure of the future, cloud-based networks.

“IBM continues to reshape its future as a hybrid cloud and AI company,” said Rob Thomas, SVP, IBM Cloud and Data Platform, in a statement. “The Turbonomic acquisition is yet another example of our commitment to making the most impactful investments to advance this strategy and ensure customers find the most innovative ways to fuel their digital transformations.”

A large part of the AI promise in the world of network operations and IT ops is how it will allow companies to rely more on automation, another area where IBM has been very active. (In a very different application of this technology — in business services — this month, it acquired MyInvenio in Italy to bring process mining technology in house.)

The promise of automation, meanwhile, is lower operating costs, a critical issue for managing network performance and availability in hybrid cloud deployments.

“We believe that AI-powered automation has become inevitable, helping to make all information-centric jobs more productive,” said Dinesh Nirmal, General Manager, IBM Automation, in a statement. “That’s why IBM continues to invest in providing our customers with a one-stop shop of AI-powered automation capabilities that spans business processes and IT. The addition of Turbonomic now takes our portfolio another major step forward by ensuring customers will have full visibility into what is going on throughout their hybrid cloud infrastructure, and across their entire enterprise.”

#enterprise, #fundings-exits, #ibm

Red Hat CEO looks to maintain double-digit growth in second year at helm

Red Hat CEO Paul Cormier runs the centerpiece of IBM’s transformation hopes. When Big Blue paid $34 billion for his company in 2018, it was because it believed Red Hat could be the linchpin of the organization’s shift to a focus on hybrid computing.

In its most recent earnings report, IBM posted positive revenue growth for only the second time in 8 quarters, and it was Red Hat’s 15% growth that led the way. Cormier recognizes the role his company plays for IBM, and he doesn’t shy away from it.

As he told me in an interview this week ahead of the company’s Red Hat Summit, a lot of cloud technology is based on Linux, and as the company that originally made its name selling Red Hat Enterprise Linux (RHEL), he says that is a technology his organization is very comfortable working with. He sees the two companies working well together, with Red Hat benefitting from having IBM sell his company’s software while remaining neutral technologically, something that benefits customers and pushes the overall IBM vision.

Quite a first year

Even though Cormier has been with Red Hat for 20 years, he took over as its CEO after Arvind Krishna replaced Ginni Rometty as IBM’s chief executive, and long-time Red Hat CEO Jim Whitehurst moved over to a role at IBM last April. Cormier stepped in as leader just as the pandemic hit the U.S. with its full force.

“Going into my first year of a pandemic, no one knew what the business was going to look like, and not that we’re completely out of the woods yet, but we have weathered that pretty well,” he said.

Part of the reason for that is that, like many software companies, he has seen his customers shift to the cloud much faster than anyone previously thought. While the pandemic acted as a forcing event for digital transformation, it has left many companies managing a hybrid on-prem and cloud environment, a place where Red Hat can help.

“Having a hybrid architecture brings a lot of value […], but it’s complex. It just doesn’t happen by magic, and I think we helped a lot of customers, and it accelerated a lot of things by years of what was going to happen anyways,” Cormier told me.

As for the shift to working from home, Red Hat had 25% of its workforce doing that even before the pandemic, so the transition wasn’t as hard as you might think for a company of its size. “Most every meeting at Red Hat had someone on remotely [before the pandemic]. And so we just sort of flipped into that mode overnight. I think we had an easier time than others for that reason,” he said.

Acting as IBM’s growth engine

Red Hat’s 15% growth was a big reason for IBM showing modest revenue growth last quarter, something that has been hard to come by for the last seven years. On IBM’s earnings call with analysts, CEO Krishna and CFO Jim Kavanaugh both saw Red Hat maintaining that double-digit growth as key to driving the company toward more stable, positive revenue in the coming years.

Cormier says that he anticipates the same things that IBM expects — and that Red Hat is up to the task ahead of it. “We see that growth continuing to happen as it’s a huge market, and this is the way it’s really playing out. We share the optimism,” he explained.

While he understands that Red Hat must remain neutral and work with multiple cloud partners, IBM is free to push Red Hat, and having that kind of sales clout behind it is also helping drive Red Hat revenue. “What IBM does for us is they open the door for us in many more places. They are in many more countries than we were [prior to the acquisition], and they have a lot of high level relationships where they can open the door for us,” he said.

In fact, Cormier points out that IBM salespeople have quotas to push Red Hat in their biggest accounts. “IBM sales is very incentivized to bring Red Hat in to help solve customer problems with Red Hat products,” he said.

No pressure or anything

When you’re being billed as a savior of sorts for a company as storied as IBM, it wouldn’t be surprising for Cormier to feel the weight of those expectations. But if he does, he doesn’t seem to show it. While he acknowledges that there is pressure, he argues that it’s no different from being a public company; only the stakeholders have changed.

“Sure it’s pressure, but prior to [being acquired] we were a public company. I look at Arvind as the chairman of the board and IBM as our shareholders. Our shareholders put a lot of pressure on us too [when we were public]. So I don’t feel any more pressure with IBM and with Arvind than we had with our shareholders,” he said.

Although Red Hat represents only 5% of IBM’s revenue at present, Cormier knows it isn’t really about that number, per se. It’s about what his team does and how that fits in with IBM’s transformation strategy overall.

Being under pressure to deliver quarter after quarter is the job of any CEO, especially one who is running a company like Red Hat under a corporation like IBM, but Cormier, as always, appears to be comfortable in his own skin and confident in his company’s ability to keep chugging along with that double-digit growth. The market potential is definitely there. It’s up to Red Hat and IBM to take advantage.

#arvind-krishna, #cloud, #enterprise, #ibm, #paul-cormier, #red-hat

Google’s Anthos multi-cloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multi-cloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the ‘Google Cloud Services Platform,’ which launched three years ago). Hybrid and multi-cloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. And recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

 

#anthos, #apigee, #aws, #ceo, #chrome-os, #cisco, #cloud, #cloud-computing, #cloud-infrastructure, #computing, #enterprise, #google, #google-cloud, #google-cloud-platform, #ibm, #kubernetes, #microsoft, #microsoft-windows, #red-hat, #sundar-pichai, #vmware