Developer sabotages his own apps, then claims Aaron Swartz was murdered

The developer who sabotaged two of his own open source code libraries, causing disruptions for thousands of apps that used them, has a colorful past that includes embracing a QAnon theory involving Aaron Swartz, the well-known hacktivist and programmer who died by suicide in 2013.

Marak Squires, the author of two JavaScript libraries with more than 21,000 dependent apps and more than 22 million weekly downloads, updated his projects late last week after they remained unchanged for more than a year. The updates contained code to produce an infinite loop that caused dependent apps to spew gibberish, prefaced by the words “Liberty Liberty Liberty.” The update sent developers scrambling as they attempted to fix their malfunctioning apps.
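Disruptions like this spread automatically because npm version ranges such as `^1.4.0` pull in new releases on the next install. One common defense is pinning exact known-good versions in `package.json`. A sketch (the sabotaged libraries were widely reported to be `colors` and `faker`; treat the exact version numbers shown as illustrative):

```json
{
  "dependencies": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```

A committed lockfile (`package-lock.json`) combined with `npm ci` extends the same reproducibility to the entire dependency tree.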

What really happened with Aaron Swartz?

Squires provided no reason for the move, but in a readme file accompanying last week’s malicious update, he included the words “What really happened with Aaron Swartz?”

#aaron-swartz, #biz-it, #foss, #free-and-open-source-software, #open-source

Zeroday in ubiquitous Log4j tool poses a grave threat to the Internet

Exploit code has been released for a serious code-execution vulnerability in Log4j, an open source logging utility used in countless apps, including those run by large enterprise organizations and the Java version of Minecraft, several websites reported last Thursday.

Word of the vulnerability first came to light on sites catering to users of Minecraft, the best-selling game of all time. The sites warned that hackers could execute malicious code on Minecraft servers or clients by manipulating log messages, including text typed into chat messages. The picture became more dire still as Log4j was identified as the source of the vulnerability and exploit code was discovered posted online.
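The mechanics are easier to see with a toy model. Log4j expanded `${prefix:key}` lookups inside log messages, and its `jndi` lookup could fetch and run remote code, so any attacker-controlled string that reached a logger became an injection point. A simplified Python mimic of that expansion behavior (not Log4j itself, just the pattern):

```python
import re

def expand_lookups(message, resolvers):
    # Mimic Log4j-style ${prefix:key} expansion: each token is replaced by
    # whatever the named resolver returns. Unknown prefixes are left as-is.
    def resolve(match):
        prefix, key = match.group(1), match.group(2)
        resolver = resolvers.get(prefix, lambda k: match.group(0))
        return resolver(key)
    return re.sub(r"\$\{(\w+):([^}]*)\}", resolve, message)

# A benign "env"-style resolver. Log4j shipped a "jndi" resolver that could
# fetch and execute remote code, which is what made logging untrusted input
# so dangerous.
resolvers = {"env": lambda key: {"USER": "minecraft"}.get(key, "")}

chat_message = "hello ${env:USER}"  # attacker-controlled chat text
print(expand_lookups(chat_message, resolvers))  # hello minecraft
```

The fix in real Log4j disabled remote lookups; in the toy model, that corresponds to simply never registering a dangerous resolver.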

A big deal

“The Minecraft side seems like a perfect storm, but I suspect we are going to see affected applications and devices continue to be identified for a long time,” HD Moore, founder and CTO of network discovery platform Rumble, said. “This is a big deal for environments tied to older Java runtimes: Web front ends for various network appliances, older application environments using legacy APIs, and Minecraft servers, due to their dependency on older versions for mod compatibility.”

#biz-it, #log4j, #minecraft, #open-source, #vulnerability

Malicious packages sneaked into NPM repository stole Discord tokens

Researchers have found another 17 malicious packages in an open source repository, as the use of such repositories to spread malware continues to flourish.

This time, the malicious code was found in NPM, where 11 million developers trade more than 1 million packages among each other. Many of the 17 malicious packages appear to have been spread by different threat actors who used varying techniques and amounts of effort to trick developers into downloading malicious wares instead of the benign packages they intended.

This latest discovery continues a trend first spotted a few years ago, in which miscreants sneak information stealers, keyloggers, or other types of malware into packages available in NPM, RubyGems, PyPI, or another repository. In many cases, the malicious package has a name that’s a single letter off from that of a legitimate package. Often, the malicious package includes the same code and functionality as the package being impersonated and adds concealed code that carries out additional nefarious actions.
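The one-letter-off trick (typosquatting) can be caught mechanically. A rough sketch, assuming you maintain a list of popular package names to compare against, flags any candidate within one edit of a legitimate name:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # delete from a
                            curr[j - 1] + 1,        # insert into a
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def possible_typosquats(candidate, popular_packages):
    # Flag popular package names within one edit of the candidate name.
    return [p for p in popular_packages
            if p != candidate and edit_distance(candidate, p) == 1]

print(possible_typosquats("colours", ["colors", "colour", "lodash"]))
# ['colors', 'colour']
```

Real registry scanners layer heuristics like this with behavioral analysis, since a plausible name alone proves nothing.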

#biz-it, #malware, #open-source, #repositories

Malware downloaded from PyPI 41,000 times was surprisingly stealthy

PyPI—the open source repository that both large and small organizations use to download code libraries—was hosting 11 malicious packages that were downloaded more than 41,000 times, in one of the latest reported incidents threatening the software supply chain.

JFrog, a security firm that monitors PyPI and other repositories for malware, said the packages are notable for the lengths their developers took to camouflage the malicious code from network detection. Those lengths include a novel mechanism that uses what’s known as a reverse shell to proxy communications with control servers through the Fastly content delivery network. Another technique is DNS tunneling, something JFrog said it had never before seen in malicious software uploaded to PyPI.

A powerful vector

“Package managers are a growing and powerful vector for the unintentional installation of malicious code, and as we discovered with these 11 new PyPI packages, attackers are getting more sophisticated in their approach,” Shachar Menashe, senior director of JFrog research, wrote in an email. “The advanced evasion techniques used in these malware packages, such as novel exfiltration or even DNS tunneling (the first we’ve seen in packages uploaded to PyPI), signal a disturbing trend that attackers are becoming stealthier in their attacks on open source software.”

#biz-it, #malware, #open-source, #repositories

These parents built a school app. Then the city called the cops

Öppna Skolplattformen hoped to succeed where Skolplattform had failed.

Christian Landgren’s patience was running out. Every day the separated father of three was wasting precious time trying to get the City of Stockholm’s official school system, Skolplattform, to work properly. Landgren would dig through endless convoluted menus to find out what his children were doing at school. If working out what his children needed in their gym kit was a hassle, then working out how to report them as sick was a nightmare. Two years after its launch in August 2018, the Skolplattform had become a constant thorn in the side of thousands of parents across Sweden’s capital city. “All the users and the parents were angry,” Landgren says.

The Skolplattform wasn’t meant to be this way. Commissioned in 2013, the system was intended to make the lives of up to 500,000 children, teachers, and parents in Stockholm easier—acting as the technical backbone for all things education, from registering attendance to keeping a record of grades. The platform is a complex system that’s made up of three different parts, containing 18 individual modules that are maintained by five external companies. The sprawling system is used by 600 preschools and 177 schools, with separate logins for every teacher, student, and parent. The only problem? It doesn’t work.

#apis, #biz-it, #citizen-involvement, #coding, #open-source, #programming, #tech

Linux Foundation says companies are desperate for open source talent

It probably shouldn’t be considered “surprising” when a Linux certification entity reports that Linux certifications are highly desirable. (credit: Linux Foundation)

The Linux Foundation released its 2021 Open Source Jobs Report this month, which aims to inform both sides of the IT hiring process about current trends. The report accurately foreshadows many of its conclusions in the first paragraph, saying “the talent gap that existed before the pandemic has worsened due to an acceleration of cloud-native adoption as remote work has gone mainstream.” In other words: job-shopping Kubernetes and AWS experts are in luck.

The Foundation surveyed roughly 200 hiring managers and 750 open source professionals to find out which skills—and HR-friendly resume bullet points—are in the greatest demand. According to the report, college-degree requirements are trending down, but IT-certification requirements and/or preferences are trending up—and for the first time, “cloud-native” skills (such as Kubernetes management) are in higher demand than traditional Linux skills.

The hiring priority shift from traditional Linux to “cloud-native” skill sets implies that it’s becoming more possible to live and breathe containers without necessarily understanding what’s inside them—but you can’t have Kubernetes, Docker, or similar computing stacks without a traditional operating system beneath them. In theory, any traditional operating system could become the foundation of a cloud-native stack—but in practice, Linux is overwhelmingly what clouds are made of.

#biz-it, #linux, #linux-foundation, #open-source

Cryptocurrency launchpad hit by $3 million supply chain attack

SushiSwap’s chief technology officer says the company’s MISO platform has been hit by a software supply chain attack. SushiSwap is a community-driven decentralized finance (DeFi) platform that lets users swap, earn, lend, borrow, and leverage cryptocurrency assets all from one place. Launched earlier this year, Sushi’s newest offering, Minimal Initial SushiSwap Offering (MISO), is a token launchpad that lets projects launch their own tokens on the Sushi network.

Unlike cryptocurrency coins, which need a native blockchain and substantial groundwork, DeFi tokens are an easier alternative to implement, as they can function on an existing blockchain. For example, anybody can create their own “digital tokens” on top of the Ethereum blockchain without having to create a new cryptocurrency altogether.
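Conceptually, a token on an existing chain is just contract state: a balance ledger plus transfer rules, enforced by the host blockchain. A deliberately simplified Python sketch of that idea (real Ethereum tokens are Solidity contracts following the ERC-20 interface; this toy omits accounts, signatures, and consensus entirely):

```python
class Token:
    """A toy balance ledger illustrating why tokens are 'easy': they reuse
    the host chain's machinery and only define balances and transfers."""

    def __init__(self, supply, owner):
        # The whole initial supply starts in the creator's balance.
        self.balances = {owner: supply}

    def transfer(self, sender, recipient, amount):
        # The only rule a minimal token enforces: you can't spend
        # more than you hold.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

t = Token(1_000_000, "alice")
t.transfer("alice", "bob", 250)
print(t.balances)  # {'alice': 999750, 'bob': 250}
```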

Attacker steals $3 million in Ethereum via one GitHub commit

In a Twitter thread today, SushiSwap CTO Joseph Delong announced that an auction on MISO launchpad had been hijacked via a supply chain attack. An “anonymous contractor” with the GitHub handle AristoK3 and access to the project’s code repository had pushed a malicious code commit that was distributed on the platform’s front-end.

#biz-it, #cryptocurrency, #defi, #github, #miso, #open-source, #supply-chain-attack, #sushi, #tech

Mirantis launches cloud-native data center-as-a-service software

Mirantis has been around the block, starting way back as an OpenStack startup, but a few years ago the company began to embrace cloud-native development technologies like containers, microservices and Kubernetes. Today, it announced Mirantis Flow, a fully managed open source set of services designed to help companies manage a cloud-native data center environment, whether their infrastructure lives on-premises or in a public cloud.

“We’re about delivering to customers an open source-based cloud-to-cloud experience in the data center, on the edge, and interoperable with public clouds,” Adrian Ionel, CEO and co-founder at Mirantis, explained.

He points out that the biggest companies in the world, hyperscalers like Facebook, Netflix and Apple, have all figured out how to manage in a hybrid cloud-native world, but most companies lack their resources. Mirantis Flow aims to put the same types of capabilities the big companies enjoy within reach of more modest organizations.

While the large infrastructure cloud vendors like Amazon, Microsoft and Google have been designed to help with this very problem, Ionel says that these tend to be less open and more proprietary. That can lead to lock-in, which today’s large organizations are looking desperately to avoid.

“[The large infrastructure vendors] will lock you into their stack and their APIs. They’re not based on open source standards or technology, so you are locked in your single source, and most large enterprises today are pursuing a multi-cloud strategy. They want infrastructure flexibility,” he said. He added, “The idea here is to provide a completely open and flexible zero lock-in alternative to the [big infrastructure providers, but with the] same cloud experience and same pace of innovation.”

They do this by putting together a stack of open source solutions in a single service. “We provide virtualization on top as part of the same fabric. We also provide software-defined networking, software-defined storage and CI/CD technology with DevOps as a service on top of it, which enables companies to automate the entire software development pipeline,” he said.

As the company describes the service in a blog post published today, it includes “Mirantis Container Cloud, Mirantis OpenStack and Mirantis Kubernetes Engine, all workloads are available for migration to cloud native infrastructure, whether they are traditional virtual machine workloads or containerized workloads.”

For companies worried about migrating their VMware virtual machines to this solution, Ionel says they have already moved such VMs to the Mirantis solution with early customers. “This is a very, very simple conversion of the virtual machine from VMware standard to an open standard, and there is no reason why any application and any workload should not run on this infrastructure — and we’ve seen it over and over again in many, many customers. So we don’t see any bottlenecks whatsoever for people to move right away,” he said.

It’s important to note that this solution does not include hardware. It’s about bringing your own hardware infrastructure, either physical or as a service, or using a Mirantis partner like Equinix. The service is available now for $15,000 per month or $180,000 annually, which includes: 1,000 core/vCPU licenses for access to all products in the Mirantis software suite plus support for 20 virtual machine (VM) migrations or application onboarding and unlimited 24×7 support. The company does not charge any additional fees for control plane and management software licenses.

#cloud, #developer, #enterprise, #kubernetes, #mirantis, #open-source, #openstack, #tc

Confluent CEO Jay Kreps is coming to TC Sessions: SaaS for a fireside chat

As companies process ever-increasing amounts of data, moving it in real time is a huge challenge for organizations. Confluent is a streaming data platform built on top of the open source Apache Kafka project that’s been designed to process massive numbers of events. To discuss this, and more, Confluent CEO and co-founder Jay Kreps will be joining us at TC Sessions: SaaS on Oct 27th for a fireside chat.

Data is a big part of the story we are telling at the SaaS event, as it has such a critical role in every business. Kreps has said in the past that data streams are at the core of every business, from sales to orders to customer experiences. As he wrote in a company blog post announcing the company’s $250 million Series E in April 2020, Confluent is working to process all of this data in real time — and that was a big reason why investors were willing to pour so much money into the company.

“The reason is simple: though new data technologies come and go, event streaming is emerging as a major new category that is on a path to be as important and foundational in the architecture of a modern digital company as databases have been,” Kreps wrote at the time.

The company’s streaming data platform takes a multi-faceted approach to streaming and builds on the open source Kafka project. While anyone can download and use Kafka, as with many open source projects, companies may lack the resources or expertise to deal with the raw open source code. Many a startup has been built on open source to help simplify whatever the underlying project does, and Confluent and Kafka are no different.

Kreps told us in 2017 that companies using Kafka as a core technology include Netflix, Uber, Cisco and Goldman Sachs. But those companies have the resources to manage complex software like this. Mere mortal companies can pay Confluent to access a managed cloud version or they can manage it themselves and install it in the cloud infrastructure provider of choice.

The project was actually born at LinkedIn in 2011, when its engineers were tasked with building a tool to process the enormous number of events flowing through the platform. The company eventually open sourced the technology it had created, and Apache Kafka was born.
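The abstraction Kafka popularized can be sketched as an append-only log of events that each consumer reads at its own offset. A toy in-memory version (no partitions, brokers, or persistence, which is where the real engineering lives):

```python
class EventLog:
    """Toy event stream: producers append, each consumer tracks its own
    read position, and events are never mutated or deleted."""

    def __init__(self):
        self.events = []
        self.offsets = {}  # consumer name -> next index to read

    def publish(self, event):
        # Producers only ever append to the end of the log.
        self.events.append(event)

    def poll(self, consumer):
        # Hand this consumer everything it hasn't seen yet, then
        # advance its offset.
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch

log = EventLog()
log.publish({"type": "order_placed", "id": 1})
log.publish({"type": "order_shipped", "id": 1})
print(log.poll("billing"))  # both events, in order
print(log.poll("billing"))  # [] -- this consumer is caught up
```

Because offsets are per-consumer, a new subscriber (say, an analytics service) can replay the full history independently, which is the property that makes event streams useful as a system of record.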

Confluent launched in 2014 and raised over $450 million along the way. In its last private round in April 2020, the company scored a $4.5 billion valuation on a $250 million investment. As of today, it has a market cap of over $17 billion.

In addition to our discussion with Kreps, the conference will also include Google’s Javier Soltero, Amplitude’s Olivia Rose, as well as investors Kobie Fuller and Casey Aylward, among others. We hope you’ll join us. It’s going to be a thought-provoking lineup.

Buy your pass now to save up to $100 when you book by October 1. We can’t wait to see you in October!

#apache-kafka, #casey-aylward, #cisco, #cloud, #cloud-computing, #computing, #confluent, #developer, #enterprise, #event-streaming, #free-software, #goldman-sachs, #google, #javier-soltero, #jay-kreps, #kobie-fuller, #linkedin, #microsoft, #netflix, #open-source, #saas, #software, #software-as-a-service, #tc, #tc-sessions-saas-2021, #uber

Travis CI flaw exposed secrets of thousands of open source projects

Travis CI flaw exposed secrets of thousands of open source projects

Enlarge (credit: Getty Images)

A security flaw in Travis CI potentially exposed the secrets of thousands of open source projects that rely on the hosted continuous integration service. Travis CI is a software-testing solution used by over 900,000 open source projects and 600,000 users. A vulnerability in the tool made it possible for secure environment variables—signing keys, access credentials, and API tokens of all public open source projects—to be exfiltrated.

Worse, the developer community is upset about the poor handling of the vulnerability disclosure process and the brief “security bulletin” the community had to force out of Travis.

Environment variables injected into pull request builds

Travis CI is a popular software-testing tool thanks to its seamless integration with GitHub and Bitbucket.

#bitbucket, #biz-it, #data-leak, #github, #open-source, #secrets, #tech, #travis-ci, #vulnerability

DigitalOcean enhances serverless capabilities with Nimbella acquisition

As developers look for ways to simplify how they create software, serverless solutions, which let them write code without worrying about the underlying infrastructure required to run their applications, are becoming increasingly popular. DigitalOcean announced today that it is enhancing its existing offering in this area with the acquisition of serverless startup Nimbella. The companies did not share the terms of the deal.

With Nimbella, the company is getting a platform for building serverless applications that is built on Kubernetes, the open source container orchestration platform, and Apache OpenWhisk, itself an open source serverless development platform.

DigitalOcean CEO Yancey Spruill, who took over two years ago, refers to Nimbella’s capabilities as function as a service, with the goal of simplifying serverless development in an open source context for its target customers. “Serverless kinds of capabilities are taking a whole level of the infrastructure burden away from developers and businesses and we absorb that. We’ll allow our customers to have more configurability around the tools, which just removes burdens for them and allows them to go faster,” he said.

In practical terms, Nimbella CEO Anshu Agarwal says that means providing a specific set of tools to build sophisticated serverless applications and connect to other DigitalOcean services. “The capabilities that we will be adding to the DigitalOcean portfolio are a fast, function-as-a-service solution that also integrates with the underlying DigitalOcean services [like] managed databases, storage and other services that make it easier for a developer to develop full applications, not just addressing events, but doing things which are completely stateless,” Agarwal explained.

Spruill said that this wasn’t the company’s first foray into serverless. That began last year when it offered its initial serverless tooling, but it wanted to build on its current offering and Nimbella fit the bill.

DigitalOcean is a cloud Infrastructure as a Service and Platform as a Service provider, aiming at individual developers, startups and SMBs. While DigitalOcean’s $318 million 2020 revenue was a fraction of the $129 billion cloud market, it is proof that there is still money to be made even with a small slice of that market.

The companies did not discuss the terms of the deal, the number of employees involved or even the title that Agarwal would have when the deal closed, but the plan is to fully integrate Nimbella into the DigitalOcean portfolio and eventually make it a DigitalOcean-branded product some time in the first half of next year.

#cloud, #cloud-infrastructure-market, #digitalocean, #exit, #fundings-exits, #ma, #mergers-and-acquisitions, #open-source, #serverless, #startups, #tc

Explosion snags $6M on $120M valuation to expand machine learning platform

Explosion, a company that has combined an open source machine learning library with a set of commercial developer tools, announced a $6 million Series A today on a $120 million valuation. The round was led by SignalFire, and the company reported that today’s investment represents 5% of its value.

Oana Olteanu from SignalFire will be joining the board under the terms of the deal, which includes warrants of $12 million in additional investment at the same price.

“Fundamentally, Explosion is a software company and we build developer tools for AI and machine learning and natural language processing. So our goal is to make developers more productive and more focused on their natural language processing, so basically understanding large volumes of text, and training machine learning models to help with that and automate some processes,” company co-founder and CEO Ines Montani told me.

The company started in 2016, when Montani met her co-founder, Matthew Honnibal, in Berlin, where he was working on the spaCy open source machine learning library. Since then, that open source project has been downloaded over 40 million times.

In 2017, they added Prodigy, a commercial product for generating data for the machine learning model. “Machine learning is code plus data, so to really get the most out of the technologies you almost always want to train your models and build custom systems because what’s really most valuable are problems that are super specific to you and your business and what you’re trying to find out, and so we saw that the area of creating training data, training these machine learning models, was something that people didn’t pay very much attention to at all,” she said.
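The “code plus data” point is concrete: annotation tools like Prodigy turn raw text into labeled examples a model trains on, typically streamed as JSON lines. A sketch with illustrative field names (not necessarily Prodigy’s exact schema):

```python
import json

# Labeled examples in roughly the shape annotation tools emit: raw text
# plus a human accept/reject judgment for a proposed label. Field names
# here are illustrative.
examples = [
    {"text": "The delivery was fast and painless.",
     "label": "POSITIVE", "answer": "accept"},
    {"text": "Support never replied to my ticket.",
     "label": "POSITIVE", "answer": "reject"},
]

# Such tools typically stream these as JSON lines, one example per line:
jsonl = "\n".join(json.dumps(e) for e in examples)

# Only accepted judgments become training data for the model:
training_data = [(e["text"], e["label"])
                 for e in examples if e["answer"] == "accept"]
print(len(training_data))  # 1
```

The labor of producing records like these, at scale and with good quality, is the under-attended problem Montani describes.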

The next step is a product called Prodigy Teams, which is a big reason the company is taking on this investment. “Prodigy Teams is [a hosted service that] adds user management and collaboration features to Prodigy, and you can run it in the cloud without compromising on what people love most about Prodigy, which is the data privacy, so no data ever needs to get seen by our servers,” she said. They do this by letting the data sit on the customer’s private cluster in a private cloud, and then using Prodigy Teams’ management features in the public cloud service.

Today, they have 500 companies using Prodigy, including Microsoft and Bayer, in addition to a huge community of millions of open source users. They’ve built all this with just six early employees, a number that has recently grown to 17, and they hope to reach 20 by year’s end.

She believes if you’re thinking too much about diversity in your hiring process, you probably have a problem already. “If you go into hiring and you’re thinking like, oh, how can I make sure that the way I’m hiring is diverse, I think that already shows that there’s maybe a problem,” she said.

“If you have a company, and it’s 50 dudes in their 20s, it’s not surprising that you might have problems attracting people who are not white dudes in their 20s. But in our case, our strategy is to hire good people and good people are often very diverse people, and again if you play by the [startup] playbook, you could be limited in a lot of other ways.”

She said that they have never seen themselves as a traditional startup following some conventional playbook. “We didn’t raise any investment money [until now]. We grew the team organically, and we focused on being profitable and independent [before we got outside investment],” she said.

But more than the money, Montani says that they needed to find an investor that would understand and support the open source side of the business, even while they got capital to expand all parts of the company. “Open source is a community of users, customers and employees. They are real people, and [they are not] pawns in [some] startup game, and it’s not a game. It’s real, and these are real people,” she said.

“They deserve more than just my eyeballs and grand promises. […] And so it’s very important that even if we’re selling a small stake in our company for some capital [to build our next] product [that open source remains at] the core of our company and that’s something we don’t want to compromise on,” Montani said.

#artificial-intelligence, #developer, #developer-tools, #enterprise, #explosion, #funding, #machine-learning, #open-source, #recent-funding, #signalfire, #startups

Would the math work if Databricks were valued at $38B?

Databricks, the open source data lake and data management powerhouse, has been on quite a financial run lately. Today Bloomberg reported the company could be raising a new round worth at least $1.5 billion at an otherworldly $38 billion valuation. That price tag is up $10 billion from its last fundraise in February, when it snagged $1 billion at a $28 billion valuation.

Databricks declined to comment on the Bloomberg report and its possible new valuation.

The company has been growing like gangbusters, giving credence to the investor thesis that the more your startup makes, the more it is likely to make. Consider that Databricks closed 2020 with $425 million in annual recurring revenue, which in itself was up 75% from the previous year.

As revenue goes up so does valuation, and Databricks is a great example of that rule in action. In October 2019, the company raised $400 million at a seemingly modest $6.2 billion valuation (if a valuation like that can be called modest). By February 2021, that had ballooned to $28 billion, and today it could be up to $38 billion if that rumor turns out to be true.

One of the reasons that Databricks is doing so well is it operates on a consumption model. The more data you move through the Databricks product family, the more money it makes, and with data exploding, it’s doing quite well, thank you very much.

It’s worth noting that Databricks’s primary competitor, Snowflake, went public last year and has a market cap of almost $83 billion. In that context, the new figure doesn’t feel quite so outrageous. But what does it mean in terms of revenue to warrant a valuation like that? Let’s find out.

Valuation math

Let’s rewind the clock and observe the company’s recent valuation marks and various revenue results at different points in time:

  • Q3 2019: $200 million run rate, $6.2 billion valuation
  • Q3 2020: $350 million run rate, no known valuation change
  • EoY 2020: $425 million run rate, $28 billion valuation (Q1 valuation)
  • Q3 2021: Unclear run rate, possible $38 billion valuation

The company’s 2019 venture round gave Databricks a 31x run rate multiple. By the first quarter of 2021, that had swelled to a roughly 66x multiple if we compare its final 2020 revenue pace to its then-fresh valuation. Certainly software multiples were higher at the start of 2021 than they were in late 2019, but Databricks’s $28 billion valuation was still more than impressive; investors were betting on the company like it was going to be a key breakout winner, and a technology company that would go public eventually in a big way.

To see the company possibly raise more funds would therefore not be surprising. Presumably the company has had a good few quarters since its last round, given its history of revenue accretion. And there’s even more money available today for growing software companies than before.

But what to make of the $38 billion figure? If Databricks merely held onto its early 2021 run rate multiple, the company would need to have reached a roughly $575 million run rate, give or take. That would work out to around 36% growth in the last two-and-a-bit quarters, or less than $75 million in new run rate per quarter since the end of 2020.

Is that possible? Yeah. The company added $75 million in run rate between Q3 2020 and the end of the year. So you can back-of-the-envelope the company’s growth to make a $38 billion valuation somewhat reasonable at a flat multiple. (There’s some fuzz in all of our numbers, as we are discussing rough timelines from the company; we’ll be able to go back and do more precise math once we get the Databricks S-1 filing in due time.)
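The arithmetic above is easy to check directly (dollar figures in millions, taken from the article):

```python
# Back-of-the-envelope run-rate multiples from the article's figures.
def run_rate_multiple(valuation, run_rate):
    return valuation / run_rate

print(round(run_rate_multiple(6_200, 200)))   # 2019 round: 31x
print(round(run_rate_multiple(28_000, 425)))  # early 2021: 66x

# Holding that ~66x multiple flat, a $38B valuation implies this run rate:
implied = 38_000 / run_rate_multiple(28_000, 425)
print(round(implied))                    # ~577, i.e. roughly $575M
print(round((implied / 425 - 1) * 100))  # ~36% growth since end of 2020
```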

All this raises the question of whether Databricks should be able to command such a high multiple. There’s some precedent: public software company Monday.com, for example, recently sported a run rate multiple north of 50x. It earned that mark on the back of a strong first quarter as a public company.

Databricks securing a higher multiple while private is not crazy, though we wonder if the data-focused company is managing a similar growth rate. Monday.com grew 94% on a year-over-year basis in its most recent quarter.

All this is to say that you can make the math shake out for Databricks to raise at a $38 billion valuation, but built into that price is quite a lot of anticipated growth. Top quartile public software companies today trade for around 23x their forward revenues, and around 27x their present-day revenues, per Bessemer. To defend its possible new valuation when public, then, leaves quite a lot of work ahead of Databricks.

The company’s CEO, Ali Ghodsi, will join us at TC Sessions: SaaS on October 27th, and we should know by then if this rumor is, indeed, true. Either way, you can be sure we are going to ask him about it.

 

#cloud, #data-lakes, #databricks, #enterprise, #funding, #open-source, #rumors, #tc

Webiny nabs $3.5M seed to build serverless development framework on top of serverless CMS

Webiny, an early-stage startup that launched in 2019 with an open source, serverless CMS, had also developed a framework to help build that CMS, and found customers were interested in the framework itself to help build their own serverless apps. Today, Webiny announced a $3.5 million seed round to continue developing both pieces.

Microsoft’s venture fund M12 led the round, with participation from Samsung Next, Episode 1, Cota Capital and other unnamed investors. The company previously raised $348,000 in 2019.

Webiny founder Sven Al Hamad says that when the company launched, he had an inkling that serverless would be the future and started by building an open-source serverless CMS, but then something interesting happened.

“We spoke to more than 300 companies, who had actually approached us and they also believed that the future is going to be built on top of serverless infrastructure. While they were intrigued by the CMS we built, they were more intrigued in terms of how we built it because they had tried serverless and they had a poor experience,” Al Hamad explained.

It turned out that the Webiny team was spending the vast majority of its time building an underlying serverless framework in order to build the CMS on top of that, and he began to realize that maybe they should be marketing and selling both the framework and the CMS.

“There was still a lot of interest for the CMS, but a lot of companies wanted both, being able to use the CMS for some of the content platforms, but also being able to build custom APIs on top, custom business logic, all on top of serverless,” he said.

At that point, Al Hamad realized that his startup had two products, and that’s where things stand today as the company takes on this new capital to help build out the business. While he is still working on building a community and reports that he hosts a Slack community with close to 1,000 developers, the goal is to use this money to begin building commercial products on top of the open-source offerings.

That will involve some sort of enterprise offering with management features for complex environments, single sign-on, better security and so forth.

Serverless is an automated way of delivering infrastructure, letting developers concentrate on building the application without worrying about provisioning the correct amount of resources. But it requires a very specific way of programming that involves writing functions and triggers. Webiny’s serverless framework is designed to help developers build these specialized apps and the related bits that make it all work.
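To make the function-and-trigger idea concrete, here is a minimal sketch of a serverless handler. It mimics the AWS Lambda/API Gateway event shape purely because that style is widely familiar; it is not Webiny’s API (Webiny’s framework is JavaScript-based), and the event fields shown are assumptions for illustration.

```python
import json

# The "function" half of serverless: a handler the platform invokes in
# response to a trigger (here, an HTTP request event). The platform, not
# the developer, decides how much compute to allocate per invocation.
def handler(event, context=None):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

The "trigger" half is configuration: an HTTP route, queue message, or file upload is mapped to the handler, and the platform runs it on demand.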

The company currently has nine employees, with plans to add about six more over the remainder of 2021. Al Hamad says that diversity is top of mind, but there are challenges in a tight market for technical talent. “We are thinking openly about diversity, but the overall market in terms of the talent available is making it very hard for us to find that balance,” he said. There needs to be an effort across the entire system to train more diverse talent in STEM roles, he argues, but he will continue to try to build a diverse staff in spite of the challenges.

He says that his employees are spread out, but when it’s possible to be back in the office, he intends to make offices available where there are pools of people, while giving them the flexibility to decide when and if to come in.

#cloud, #cms, #developer, #funding, #m12, #microsoft-m12, #open-source, #recent-funding, #serverless, #startups, #tc, #webiny

The Perl Foundation is fragmenting over Code of Conduct enforcement

Enlarge / One of the Perl programming language’s best-loved nicknames is “the Swiss Army chainsaw.” The nickname also seems unfortunately applicable to Perl’s recent community discourse. (credit: Coffeatus via Getty Images)

The Perl community is in a shambles due to disputes concerning its (nonexistent) Code of Conduct, its (inconsistent) enforcement of community standards, and an inability to agree on what constitutes toxicity or a proper response to it.

At least five extremely senior Perl community members have resigned from their positions and/or withdrawn from working on Perl itself so far in 2021:

  • Community Affairs Team (CAT) chair Samantha McVey
  • The Perl Foundation (TPF) Board of Directors member Curtis Poe (author of Beginning Perl and Perl Hacks)
  • TPF Grant Committee member Elizabeth Mattijsen
  • TPF Perl Steering Committee member, key Perl Core developer, and former pumpking Sawyer X
  • Perl developer and SUSE engineer Sebastian Riedel

It’s difficult, if not impossible, to pin down the current infighting to a single core incident. With that said, the rash of resignations revolves entirely around problems with unprofessional conduct—and in most cases, a focus on interminable yak-shaving that does little or nothing to address the actual problems at hand.

Read 21 remaining paragraphs | Comments

#code-of-conduct, #culture-wars, #infighting, #open-source, #perl, #racism, #tech

Facebook engineers develop new open source time keeping appliance

Most people probably don’t realize just how much our devices are time driven, whether it’s your phone, your laptop or a network server. For the most part, timekeeping has been an esoteric chore, taken care of by a limited number of hardware manufacturers. While these devices served their purpose, a couple of Facebook engineers decided there had to be a better way. So they built a new, more accurate timekeeping device that fits on a PCI Express (PCIe) card, and contributed it to the Open Compute Project as open source.

At a basic level, says Oleg Obleukhov, a production engineer at Facebook, devices are simply pinging a timekeeping server to make sure each one is reporting the same time. “Almost every single electronic device today uses NTP — Network Time Protocol — which you have on your phone, on your watch, on your laptop, everywhere, and they all connect to these NTP servers where they just go and say, ‘what time is it’ and the NTP server provides the time,” he explained.
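The “what time is it” exchange is small enough to sketch. An NTP client sends a 48-byte UDP packet and reads a timestamp out of the reply; the byte layout below follows the published NTP wire format (RFC 5905), while the helper names are our own:

```python
import struct

# Seconds between the NTP epoch (1900) and the Unix epoch (1970).
NTP_EPOCH_OFFSET = 2_208_988_800

def build_ntp_request() -> bytes:
    # First byte packs leap indicator 0, version 4, mode 3 (client);
    # the remaining 47 bytes of a client request may be zero.
    return struct.pack("!B47x", (0 << 6) | (4 << 3) | 3)

def parse_transmit_time(packet: bytes) -> float:
    # The server's transmit timestamp occupies bytes 40-47:
    # 32 bits of whole seconds plus 32 bits of binary fraction.
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32
```

To actually query a server, you would send `build_ntp_request()` over UDP to port 123 of an NTP server and feed the 48-byte reply to `parse_transmit_time()`.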

Before Facebook developed a new way of doing this, there were basically two ways to check the time. If you were a developer, you probably used something like Facebook.com as a time checking mechanism, but a company like Facebook, working at massive scale, needed something that worked even when there wasn’t an internet connection. Companies running data centers rely on a hardware device known as a Stratum 1 time server, a big box that sits in the data center and has no other job than acting as the timekeeper.

Because these timekeeping boxes were built by a handful of companies over years, they were solid and worked, but it was hard to get new features. What’s more, companies like Facebook couldn’t control the boxes because of their proprietary nature. Obleukhov and his colleague, research scientist Ahmad Byagowi, began to attack the problem by looking for a way to create these devices themselves, building a PCIe card with off-the-shelf parts that you could stick into any PC with an open slot.

Facebook time keeping PCI card

Image Credits: Facebook

They literally drew the first design on an iPad and began to build that vision into a prototype. A time appliance relies on a couple of key components: a GNSS receiver and what’s called a high stability oscillator. In a blog post describing the project, Obleukhov and Byagowi explained the role of these two parts:

“It all starts from a GNSS receiver that provides the time of day (ToD) as well as the 1 pulse per second (PPS). When the receiver is backed by a high-stability oscillator (e.g., an atomic clock or an oven-controlled crystal oscillator), it can provide time that is nanosecond-accurate. The time is delivered across the network via an off-the-shelf network card,” the two engineers wrote.
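As a toy numerical illustration of how those two parts combine (the function and the numbers below are ours, not Facebook’s): the GNSS receiver labels each second, the PPS edge marks exactly when it begins, and counting oscillator cycles since that edge fills in the fraction.

```python
def timestamp(tod_seconds: int, cycles_since_pps: int, oscillator_hz: int) -> float:
    """Whole seconds come from the GNSS time of day (ToD); the fraction
    of a second is interpolated by counting how many oscillator cycles
    have elapsed since the last pulse-per-second (PPS) edge."""
    return tod_seconds + cycles_since_pps / oscillator_hz
```

With a 10 MHz oscillator, each cycle adds 100 ns of resolution; higher-stability (e.g., atomic) oscillators keep that fraction trustworthy between PPS edges.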

It all sounds pretty basic when described like this, but it’s actually quite complex and perhaps that’s why nobody had ever thought to attack the problem in this way, simply accepting that the current methods of determining time worked fine. But these two Facebook engineers were annoyed by the limitations of these approaches and decided to build something better themselves.

“A lot of it came from frustration. We were frustrated with whatever exists in the market, and we needed certain features like security features to maintain different things and monitor what’s going on. And we had to always ask the vendors [for these new features] and every time a request would take like six months to one year, and [it wouldn’t be exactly what we wanted] and we had to change things all the time, so that’s why we had to basically make this from scratch in this way,” Obleukhov said.

One thing that made it possible to put a timekeeping device on a PCIe card was the miniaturization of atomic clocks and oscillators. When their frustration with existing options met the current capabilities of the technology, the two realized they could do this themselves if they dedicated themselves to the task.

As the design began coming together, the engineers decided to make it flexible to enable engineers to play off the basic design and drop in whatever components met their needs. Some might need highly sophisticated expensive parts, but others could get away with much cheaper parts, depending on their requirements.

They also decided early on to open source the design process, and to involve the Open Compute Project so that other companies and engineers could contribute to the design. “It was actually going to be open source from the get-go, and the reason for that is we needed to have community support. I didn’t want it to be just one in-house project and let’s say if I lost interest or the businesses lost interest [it could go away]. I wanted this to [keep going] regardless [of what happened],” Obleukhov said.

Today there are a dozen vendors involved in the project and a number of cards out there, including the one designed by these engineers as well as a commercial offering from Orolia. The goal is to keep improving the design, and by making it open source, the community of companies and engineers involved can do exactly that.

#cloud, #developer, #engineering, #facebook, #hardware, #open-compute-project, #open-source, #tc

FOSS mobile app Stingle wants to privately, securely back up your photos

Stock photo of photo album open on a table.

Enlarge / Despite the encryption, Stingle Photos is a distinctly minimalist app which comes closer to the simple feel of an analog album than most of its competitors do. (credit: Kohei Hara / Getty Images)

With Google Photos killing off its Unlimited photo backup policy last November, the market for photo backup and sync applications opened up considerably. We reviewed one strong contender—Amazon Photos—in January, and freelancer Alex Kretzschmar walked us through several self-hosted alternatives in June.

Today, we’re looking at a new contender—Stingle Photos—which splits the difference, offering a FOSS mobile application which syncs to a managed cloud.

Trust no one

Arguably, encryption is Stingle Photos’ most important feature. Although the app uploads your photos to Stingle’s cloud service, the service’s operators can’t look at your photos. That’s because the app, which runs on your phone or tablet, encrypts them securely using Sodium cryptography.

Read 21 remaining paragraphs | Comments

#android, #foss, #free-and-open-source, #ios, #open-source, #open-source-software, #photo-backup, #tech

Software downloaded 30,000 times from PyPI ransacked developers’ machines

Enlarge

Open source packages downloaded an estimated 30,000 times from the PyPI open source repository contained malicious code that surreptitiously stole credit card data and login credentials and injected malicious code on infected machines, researchers said on Thursday.

In a post, researchers Andrey Polkovnichenko, Omer Kaspi, and Shachar Menashe of security firm JFrog said they recently found eight packages in PyPI that carried out a range of malicious activity. Based on searches on https://pepy.tech, a site that provides download stats for Python packages, the researchers estimate the malicious packages were downloaded about 30,000 times.

Systemic threat

The discovery is the latest in a long line of attacks in recent years that abuse the receptivity of open source repositories, which millions of software developers rely on daily. Despite their crucial role, repositories often lack robust security and vetting controls, a weakness that has the potential to cause serious supply chain attacks when developers unknowingly infect themselves or fold malicious code into the software they publish.

Read 14 remaining paragraphs | Comments

#biz-it, #open-source, #python, #repository, #supply-chain-attack, #tech

Audacity’s new owner is in another fight with the open source community

Enlarge / MuseScore (the website) offers access to hundreds of thousands of sheet music arrangements. MuseScore (the application) allows easy editing and modification, MIDI playback, and more. (credit: Muse Group)

Muse Group—owner of the popular audio-editing app Audacity—is in hot water with the open source community again. This time, the controversy isn’t over Audacity—it’s about MuseScore, an open source application which allows musicians to create, share, and download musical scores (especially, but not only, in the form of sheet music).

The MuseScore app itself is licensed GPLv3, which gives developers the right to fork its source and modify it. One such developer, Wenzheng Tang (“Xmader” on GitHub) went considerably further than modifying the app—he also created separate apps designed to bypass MuseScore Pro subscription fees.

After thoroughly reviewing the public comments made by both sides at GitHub, Ars spoke at length with Muse Group’s Head of Strategy Daniel Ray—known on GitHub by the moniker “workedintheory”—to get to the bottom of the controversy.

Read 30 remaining paragraphs | Comments

#audacity, #github, #muse-group, #musescore, #open-source, #tech

No, open source Audacity audio editor is not “spyware”

Enlarge / Familiar to many an at-home podcaster. (credit: Jim Salter)

Over the Fourth of July weekend, several open source news outlets began warning readers that the popular open source audio editing app Audacity is now “spyware.”

This would be very alarming if true—there aren’t any obvious successors or alternatives which meet the same use cases. Audacity is free and open source, relatively easy to use, cross platform, and ideally suited for simple “prosumer” tasks like editing raw audio into finished podcasts.

However, the negativity seems to be both massively overblown and quite late. While the team has announced that Audacity will begin collecting telemetry, it’s neither overly broad in scope nor aggressive in how it acquires the data—and the majority of the real concerns were addressed two months ago, to the apparent satisfaction of the actual Audacity community.

Read 17 remaining paragraphs | Comments

#audacity, #foss, #muse-group, #open-source, #tech, #telemetry

Ahoy, there’s malice in your repos—PyPI is the latest to be abused

Enlarge (credit: Getty Images)

Counterfeit packages downloaded roughly 5,000 times from the official Python repository contained secret code that installed cryptomining software on infected machines, a security researcher has found.

The malicious packages, which were available on the PyPI repository, in many cases used names that mimicked those of legitimate and often widely used packages already available there, Ax Sharma, a researcher at security firm Sonatype, reported. These so-called typosquatting attacks succeed when targets accidentally mistype a package name, for example typing “mplatlib” or “maratlib” instead of the legitimate and popular matplotlib.
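One common defensive screen is to flag newly published names that sit within a small edit distance of a popular package. A minimal sketch (the function names and the cutoff are illustrative, not anything PyPI actually runs):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def looks_like_typosquat(name: str, popular: list[str]) -> bool:
    # Flag names within a few edits of a popular package, but never the
    # popular name itself; the 40%-of-length cutoff is arbitrary.
    return any(0 < levenshtein(name, p) <= max(2, int(0.4 * len(p)))
               for p in popular)
```

Both names from the attack, “mplatlib” and “maratlib”, land within that window of matplotlib, while an unrelated name like “requests” does not.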

Sharma said he found six packages that installed cryptomining software that would use the resources of infected computers to mine cryptocurrency and deposit it in the attacker’s wallet. All six were published by someone using the PyPI username nedog123, in some cases as early as April. The packages and download numbers are:

Read 4 remaining paragraphs | Comments

#biz-it, #counterfeit, #malware, #npm, #open-source, #pypi, #rubygems, #tech

A revival at the intersection of open source and open standards

Our world has big problems to solve, and something desperately needed in that pursuit is the open-source and open-standards communities working together.

Let me give you a stark example, taken from the harsh realities of 2020. Last year, the United States experienced nearly 60,000 wildland fires that burned more than 10 million acres, resulting in more than 9,500 homes destroyed and at least 43 lives lost.

I served as a volunteer firefighter in California for 10 years and witnessed firsthand the critical importance of technology in helping firefighters communicate efficiently and deliver safety-critical information quickly. Typically, multiple agencies show up to fight these fires, bringing with them radios made by different manufacturers that each use proprietary software to set radio frequencies. As a result, reprogramming these radios so that teams can communicate with one another is an unnecessarily slow — and potentially life-threatening — process.

If the radio manufacturers had instead all contributed to an open-source implementation conforming to a standard, the radios could have been quickly aligned to the same frequencies. Radio manufacturers could have provided a valuable, life-saving tool rather than a time-wasting obstacle, and they could have shared the cost of developing such software. In this situation, like so many others, there is no competitive advantage to be gained from proprietary radio-programming software and many priceless benefits to gain by standardizing.

The benefit of coherent standards and corresponding open-source implementations is not unique to safety-critical situations like wildfires. There are many areas of our lives that could significantly benefit from a better integration of standards and open source.

Open source and open standards: What’s the difference?

“Open source” describes software that is publicly accessible and free for anyone to use, modify and share. It also describes a collaborative, community-oriented software development philosophy, with an open exchange of ideas, open participation, rapid prototyping, and open governance and transparency.

By contrast, the term “standard” refers to agreed-upon definitions of functionality. These requirements, specifications and guidelines ensure that products, services and systems perform in an interoperable way with quality, safety and efficiency.

Dozens of organizations exist for the purpose of establishing and maintaining standards. Examples include the International Organization for Standardization (ISO), the European Telecommunications Standards Institute (ETSI), and the World Wide Web Consortium (W3C). OASIS Open belongs in this category as well. A standard is “open” when it is developed via a consensus-building process, guided by organizations that are open, fair and transparent. Most people would agree that the standard-building process is careful and deliberate, ensuring consensus through compromise and resulting in long-lasting specifications and technical boundaries.

Where’s the common ground?

Open source and open standards are obviously different, but the objectives of these communities are the same: interoperability, innovation and choice. The main difference is how they accomplish those goals, and by that I’m referring primarily to culture and pace.

Chris Ferris, an IBM fellow and CTO of Open Technology, recently told me that with standards organizations, “it often seems the whole point is to slow things down. Sometimes it’s with good reason, but I’ve seen competition get the best of people, too. Open source seems to be much more collaborative and less contentious or competitive. That doesn’t mean that there aren’t competitive projects out there that are tackling the same domain.”

Another culture characteristic that affects pace is that open source is about writing code and standards organizations are about writing prose. Words outlive code with respect to long-term interoperability, so the standards culture is much more deliberate and thoughtful as it develops the prose that defines standards. Although standards are not technically static, the intent with a standard is to arrive at something that will serve without significant change for the long term. Conversely, the open-source community writes code with an iterative mindset, and the code is essentially in a state of continuous evolution. These two cultures sometimes clash when the communities try to move in concert.

If that’s the case, why try to find harmony?

Collaboration between open source and open standards will fuel innovation

The internet is a perfect example of what harmony between the open-source and open-standards communities can achieve. When the internet began as ARPANET, it relied on common shared communications standards that predated TCP/IP. With time, standards and open-source implementations brought us TCP/IP, HTTP, NTP, XML, SAML, JSON and many others, and also enabled the creation of additional key global systems implemented in open standards and code, like disaster warnings (OASIS CAP) and standardized global trade invoicing (OASIS UBL).

The internet has literally transformed our world. That level of technological innovation and transformative power is possible for the future, too, if we re-energize the spirit of collaboration between the open-standards and open-source communities.

Finding harmony and a natural path of integration

With all of the critical open-source projects residing in repositories today, there are many opportunities for collaboration on associated standards to ensure the long-term operability of that software. Part of our mission at OASIS Open is identifying those open-source projects and giving them a collaborative environment and all the scaffolding they need to build a standard without it becoming a difficult process.

Another point Ferris shared with me is the necessity for this path of integration to grow. The need is particularly acute, for instance, if you want your technology to be used in Asia: If you don’t have an international standard, Asian enterprises don’t even want to hear from you. We’re seeing the European community asserting a strong preference for standards as well. That is certainly a driver for open-source projects that want to play with some of the heavy hitters in the ecosystem.

Another area where you can see a growing need for integration is when an open-source project becomes bigger than itself, meaning it begins to impact a whole lot of other systems, and alignment is needed between them. An example would be a standard for telemetry data, which is now being used for so many different purposes, from observability to security. Another example is the software bill of materials, or SBOM. I know some things are being done in the open-source world to address the challenge of tracking the provenance of software. This is another case where, if we’re going to be successful at all, we need a standard to emerge.

It’s going to take a team effort

Fortunately, the ultimate goals of the open-source and open-standards communities are the same: interoperability, innovation and choice. We also have excellent proof points of how and why we need to work together, from the internet to Topology and Orchestration Specification for Cloud Applications (TOSCA) and more. In addition, major stakeholders are carrying the banner, acknowledging that for certain open-source projects we need to take a strategic, longer-term view that includes standards.

That’s a great start to a team effort. Now it’s time for foundations to step up to the plate and collaborate with each other and with those stakeholders.

#column, #interoperability, #open-source, #open-source-software, #standards, #tc

With buyout, Cloudera hunts for relevance in a changing market

When Cloudera announced its sale to a pair of private equity firms yesterday for $5.3 billion, along with a couple of acquisitions of its own, the company detailed a new path that could help it drive back towards relevance in the big data market.

When the company launched in 2008, Hadoop was in its early days. The open source project developed at Yahoo three years earlier was built to deal with the large amounts of data that the Internet pioneer generated. It became increasingly clear over time that every company would have to deal with growing data stores, and it seemed that Cloudera was in the right market at the right time.

And for a while things went well. Cloudera rode the Hadoop startup wave, garnering a cool billion in funding along the way, including a stunning $740 million check from Intel Capital in 2014. It then went public in 2017 to much fanfare.

But the markets had already started to shift by the time of its public debut. Hadoop, a highly labor-intensive way to manage data, was being supplanted by cheaper and less complex cloud-based solutions.

“The excitement around the original promise of the Hadoop market has contracted significantly. It’s incredibly expensive and complex to get it working effectively in an enterprise context,” Casey Aylward, an investor at Costanoa Ventures told TechCrunch.

The company likely saw that writing on the wall when it merged with another Hadoop-based company, Hortonworks, in 2019. That transaction valued the combined entity at $5.2 billion, almost the same amount it sold for yesterday, two years down the road. The decision to sell and go private may also have been spurred by Carl Icahn buying an 18% stake in the company that same year.

Looking to the future, Cloudera’s sale could provide the enterprise unicorn room as it regroups.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, sees the deal as a positive step for the company. “I think this is good news for Cloudera because it now has the capital and flexibility to dive head first into SaaS. The company invented the entire concept of a data life cycle, implemented initially on premises, then extended to private and public clouds,” Moorhead said.

Adam Ronthal, a Gartner research VP, agrees that it at least gives Cloudera more room to make necessary adjustments to its market strategy, as long as it doesn’t get stifled by its private equity overlords. “It should give Cloudera an opportunity to focus on their future direction with increased flexibility — provided they are able to invest in that future and that this does not just focus on cost cutting and maximizing profits. Maintaining a culture of innovation will be key,” Ronthal said.

Which brings us to the two purchases Cloudera also announced as part of its news package.

If you want to change direction in a hurry, there are worse ways than via acquisitions. And grabbing Datacoral and Cazena should help Cloudera alter its course more quickly than it could have managed on its own.

“[The] two acquisitions will help Cloudera capture some of the value on top of the lake storage layer — perhaps moving into different data management features and/or expanding into the compute layer for analytics and AI/ML use cases, where there has been a lot of growth and excitement in recent years,” Aylward said.

Chandana Gopal, research director for the future of intelligence at IDC, agrees that the transactions give Cloudera some more modern options that could help speed up the data wrangling process. “Both the acquisitions are geared towards making the management of cloud infrastructure easier for end-users. Our research shows that data prep and integration takes 70%-80% of an analyst’s time versus the time spent in actual analysis. It seems like both these companies’ products will provide technology to improve the data integration/preparation experience,” she said.

The company couldn’t stay on the path it was on forever, certainly not with an activist investor breathing down its neck. Its recent efforts could give it the time away from public markets it needs to regroup. How successful Cloudera’s turnaround proves to be will depend on whether the private equity companies buying it can both agree on the direction and strategy for the company, while providing the necessary resources to push the company in a new direction. All of that and more will determine if these moves pay off in the end.

#big-data, #cloud, #cloudera, #enterprise, #hadoop, #ma, #open-source, #tc

Stemma launches with $4.8M seed to build managed data catalogue

As companies increasingly rely on data to run their businesses, having accurate sources of data becomes paramount. Stemma, a new early-stage startup, has come up with a solution: a managed data catalogue that acts as an organization’s source of truth.

Today the company announced a $4.8 million seed investment led by Sequoia with assorted individual tech luminaries also participating. The product is also available for the first time today.

Company co-founder and CEO Mark Grover says the product is actually built on top of the open source Amundsen data catalogue project that he helped launch at Lyft to manage its massive data requirements. The problem was that with so much data, employees had to kludge together systems to confirm its validity. Ultimately, manual processes like asking someone in Slack or even creating a wiki failed under the weight of trying to keep up with the data’s volume and velocity.

“I saw this problem first-hand at Lyft, which led me to create the open source Amundsen project with a team of talented engineers,” Grover said. That project has 750 users at Lyft using it every week. Since it was open sourced, 35 companies like Brex, Snap and Asana have been using it.

What Stemma offers is a managed version of Amundsen that adds additional functionality like using intelligence to show data that’s meaningful to the person who is searching in the catalogue. It also can add metadata automatically to data as it’s added to the catalogue, creating documentation about the data on the fly, among other features.

The company launched last fall when Grover and co-founder and CTO Dorian Johnson decided to join forces and create a commercial product on top of Amundsen. Grover points out that Lyft was supportive of the move.

Today the company has five employees, in addition to the founders and has plans to add several more this year. As he does that, he is cognizant of diversity and inclusion in the hiring process. “I think it’s super important that we continue to invest in diversity, and the two ways that I think are the most meaningful for us right now is to have early employees that are from diverse groups, and that is the case within the first five,” he said. Beyond that, he says that as the company grows he wants to improve the ratio, while also looking at diversity in investors, board members and executives.

The company, which launched during COVID, is entirely remote right now and plans to remain that way for at least the short term. As the company grows, it will look at ways to build camaraderie, like organizing a regular cadence of employee offsite events.

#amundsen, #data-management, #enterprise, #funding, #open-source, #recent-funding, #sequoia-capital, #startups, #stemma

Databricks introduces Delta Sharing, an open source tool for sharing data

Databricks launched its fifth open source project today, a new tool called Delta Sharing designed to be a vendor neutral way to share data with any cloud infrastructure or SaaS product, so long as you have the appropriate connector. It’s part of the broader Databricks open source Delta Lake project.

As CEO Ali Ghodsi points out, data is exploding and moving data from Point A to Point B is an increasingly difficult problem to solve with proprietary tooling. “The number one barrier for organizations to succeed with data is sharing data, sharing it between different views, sharing it across organizations — that’s the number one issue we’ve seen in organizations,” Ghodsi explained.

Delta Sharing is an open source protocol designed to solve that problem. “This is the industry’s first ever open protocol, an open standard for sharing a data set securely. […] They can standardize on Databricks or something else. For instance, they might have standardized on using AWS Data Exchange, Power BI or Tableau — and they can then access that data securely.”
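In practice, a data provider hands a recipient a small profile file (an endpoint plus a bearer token), and any client that speaks the REST protocol can then enumerate what has been shared. Here is a hedged sketch of that first step, based on the published Delta Sharing protocol; the profile fields follow its spec, but treat the details as an illustration rather than a reference implementation:

```python
import json

def list_shares_request(profile_json: str) -> tuple[str, dict]:
    """Build the URL and headers for the protocol's first call,
    GET {endpoint}/shares, from a Delta Sharing profile file."""
    profile = json.loads(profile_json)
    url = profile["endpoint"].rstrip("/") + "/shares"
    headers = {"Authorization": f"Bearer {profile['bearerToken']}"}
    return url, headers

# Example of the profile a provider might hand a recipient (values invented):
sample_profile = json.dumps({
    "shareCredentialsVersion": 1,
    "endpoint": "https://sharing.example.com/delta-sharing/",
    "bearerToken": "<token>",
})
```

Because the contract is just HTTP plus JSON, a Databricks customer, a Tableau user, and a homegrown script can all consume the same share without running the same vendor’s stack.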

The tool is designed to work with multiple cloud infrastructure and SaaS services, and out of the gate there are multiple partners involved, including the Big Three cloud infrastructure vendors Amazon, Microsoft and Google, as well as data visualization and management vendors like Qlik, Starburst, Collibra and Alation, and data providers like Nasdaq, S&P and Foursquare.

Ghodsi said the key to making this work is the open nature of the project. By doing that and donating it to The Linux Foundation, he is trying to ensure that it can work across different environments. Another big aspect of this is the partnerships and the companies involved. When you can get big name companies involved in a project like this, it’s more likely to succeed because it works across this broad set of popular services. In fact, there are a number of connectors available today, but Databricks expects that number to increase over time as contributors build more connectors to other services.

Databricks operates on a consumption pricing model much like Snowflake, meaning the more data you move through its software, the more money it’s going to make, but the Delta Sharing tool means you can share with anyone, not just another Databricks customer. Ghodsi says that the open source nature of Delta Sharing means his company can still win, while giving customers more flexibility to move data between services.

The infrastructure vendors also love this model because the cloud data lake tools move massive amounts of data through their services and they make money too, which probably explains why they are all on board with this.

One of the big fears of modern cloud customers is being tied to a single vendor as they often were in the 1990s and early 2000s when most companies bought a stack of services from a single vendor like Microsoft, IBM or Oracle. On one hand, you had the veritable single throat to choke, but you were beholden to the vendor because the cost of moving to another one was prohibitively high. Companies don’t want to be locked in like that again and open source tooling is one way to prevent that.

Databricks was founded in 2013 and has raised almost $2 billion since. The latest round was in February for $1 billion at a $28 billion valuation, an astonishing number for a private company. Snowflake, a primary competitor, went public last September. As of today, it has a market cap of over $66 billion.

#cloud, #data-sharing, #databricks, #enterprise, #open-source, #tc

Airbyte announces $26M Series A for open source data connector platform

One of the major issues facing companies these days isn’t finding relevant data, so much as moving it to where it’s needed. Enter Airbyte, an early stage startup that is building an open source data integration platform to help solve that problem. Today the company announced a $26 million Series A, just a couple of months after announcing its $5.2 million seed round.

Benchmark led the investment with help from 8VC, Accel, SV Angel, Y Combinator and multiple tech industry luminaries. The company has raised over $31 million, all of it coming this year.

“What we’re building is an open source data integration platform to bring data wherever it is, whether it’s a database, a file or an API into the destination of your choice whether it is a data warehouse or a data lake,” company co-founder and CEO Michel Tricot told TechCrunch. This involves building connectors to various data types. The company is providing the open source platform and an SDK to build connectors, and inviting the community to add their own connectors, while building some too.

Things are moving quickly for the startup. In addition to the funding, it released its Connector Development Kit, or CDK, earlier this month. “It’s a local framework that enables you to build a custom connector within two hours instead of two or three days,” company co-founder John Lafleur explained. To this point, the community has contributed approximately 20% of the platform’s 70 connectors, but the two founders expect that percentage to increase as the CDK spreads through the community.
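For a sense of what a connector actually is: the Airbyte protocol defines a handful of operations every connector implements, including spec (describe configuration), check (verify connectivity), discover (report streams and schemas) and read (emit records). The toy source below sketches those operations in plain Python; it does not use the real Airbyte CDK, and the class and field names are illustrative only.

```python
# A toy source "connector" modeled loosely on the operations the Airbyte
# protocol defines (spec, check, discover, read). Not the real CDK API.
class InMemorySource:
    def __init__(self, rows):
        self.rows = rows

    def spec(self):
        # Describe the configuration this connector accepts.
        return {"properties": {"api_key": {"type": "string"}}}

    def check(self, config):
        # Verify we can reach the source with the given config.
        return "SUCCEEDED" if config.get("api_key") else "FAILED"

    def discover(self, config):
        # Report the streams (tables) available and their schemas.
        return [{"stream": "users",
                 "schema": {"id": "integer", "name": "string"}}]

    def read(self, config, stream):
        # Emit one record at a time for the requested stream.
        for row in self.rows:
            yield {"stream": stream, "record": row}

source = InMemorySource([{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}])
records = list(source.read({"api_key": "demo"}, "users"))
print(len(records))  # 2
```

A destination connector is the mirror image, consuming records and writing them to a warehouse or lake, which is how the platform moves data "wherever it is" to "the destination of your choice."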

Airbyte was founded just last year, and the company plans to spend this year expanding the rapidly growing community, which is up to 1,200 members and 500 active users so far. The long-term plan is to build a hosted version, which it will charge for, while continuing to work on the open source project.

Chetan Puttagunta, general partner at Benchmark, who is leading today’s investment, says that Benchmark has a long history of investing in open source startups including being an early investor in Red Hat, as well as Elastic, MongoDB, Acquia and many others.

He says that his firm approached Airbyte after seeing a lot of developer activity in the community in a short time. “We reached out to them just based on our involvement in the developer community. We started seeing Airbyte spike everywhere, and it started to become very quickly the de facto standard for how folks wanted to integrate data. And that was a remarkable achievement for a company that has been around for just several months.”

The rapid growth has led to the number of employees doubling to 14 in a short time. When it comes to diversity and inclusion, the founders have actually written a company handbook that includes a detailed section with definitions and goals around diversity and inclusion, not something you often see from an early stage company.

“We try to constantly improve on diversity inclusion and belonging, which is a continuous [thing]. [We] never would think it was done, We always have room to improve,” Tricot said.

#airbyte, #benchmark, #data-integration, #developer, #funding, #open-source, #recent-funding, #startups, #tc

How Online Sleuths Pantsed Putin

Bellingcat’s founder, Eliot Higgins, on the ethics of open source investigations and what separates his organization from online vigilantes.

#bellingcat, #central-intelligence-agency, #crowdsourcing-internet, #espionage-and-intelligence-services, #higgins-eliot, #investigation, #journalism, #navalny-aleksei-a, #open-source, #putin-vladimir-v, #russia, #skripal-sergei-v

Styra, the startup behind Open Policy Agent, nabs $40M to expand its cloud-native authorization tools

As cloud-native apps become increasingly central to how organizations operate, a startup founded by the creators of a popular open source tool for managing authorization in cloud-native application environments is announcing funding to expand its efforts to commercialize the opportunity.

Styra, the startup behind Open Policy Agent, has picked up $40 million in a Series B round of funding led by Battery Ventures. Also participating are previous backers A. Capital, Unusual Ventures and Accel, and new backers Capital One Ventures, Citi Ventures and Cisco Investments. Styra has disclosed that Capital One is also one of its customers, along with e-commerce site Zalando and the European Patent Office.

Styra is sitting on the classic opportunity of open source technology: scale and demand.

OPA — which can be used across Kubernetes, containerized and other environments — now has racked up some 75 million downloads and is adding some 1 million downloads weekly, with Netflix, Capital One, Atlassian and Pinterest among those that are using OPA for internal authorization purposes. The fact that OPA is open source is also important:

“Developers are at the top of the food chain right now,” CEO Bill Mann said in an interview, “They choose which technology on which to build the framework, and they want what satisfies their requirements, and that is open source. It’s a foundational change: if it isn’t open source it won’t pass the test.”

But while some of those adopting OPA have hefty engineering teams of their own to customize how it is used, the sheer number of downloads (and the potential active users that implies) speaks to the opportunity for a company to build tools that manage and customize OPA for specific use cases, for organizations that lack the resources (or appetite) to build and scale custom implementations themselves.

As with many of the enterprise startups getting funded at the moment, Styra has proven itself in particular over the last year, with the switch to remote work, workloads being managed across a number of environments and the ever-persistent need for better control over what people can and cannot access. Authorization is a particularly acute issue considering the many access points that need to be monitored: as networks continue to grow across multiple hubs and applications, having a single authorization tool for the whole stack becomes even more important.
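Mechanically, OPA decouples the authorization decision from application code: a service sends OPA a JSON “input” document describing who is doing what to which resource (via OPA’s REST Data API, e.g. POST /v1/data/&lt;policy path&gt;) and gets a decision back, with the policy itself written in OPA’s Rego language. The stdlib-only sketch below builds such an input document and evaluates a stand-in decision rule locally so the example is self-contained; in a real deployment OPA, not your code, evaluates the policy, and the rule here is an invented example.

```python
import json

def opa_query_body(user, action, resource):
    """Build the JSON body a service would POST to OPA's Data API
    (e.g. POST /v1/data/httpapi/authz) to request a decision."""
    return json.dumps(
        {"input": {"user": user, "action": action, "resource": resource}}
    )

def allow(user, action, resource):
    """Local stand-in for a decision a Rego policy might encode:
    admins can do anything; everyone else can only read public resources."""
    if user == "admin":
        return True
    return action == "read" and resource.startswith("public/")

body = opa_query_body("alice", "read", "public/reports")
print(allow("alice", "read", "public/reports"))   # True
print(allow("alice", "write", "public/reports"))  # False
```

Because every service queries the same policy engine with the same input shape, the policy becomes the single authorization tool for the whole stack described above.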

Styra said that some of the funding will be used to continue evolving its product, specifically by creating better and more efficient ways to apply authorization policies as code, and by bringing in more partners to expand the scope of what its technology can cover.

“We are extremely impressed with the Styra team and the progress they’ve made in this dynamic market to date,” said Dharmesh Thakker, a general partner at Battery Ventures. “Everyone who is moving to cloud, and adopting containerized applications, needs Styra for authorization—and in the light of today’s new, remote-first work environment, every enterprise is now moving to the cloud.” Thakker is joining the board with this round.

#applications, #cloud, #cloud-native, #containers, #developer, #enterprise, #funding, #kubernetes, #open-source, #styra

AWS releases tool to open source that turns on-prem software into SaaS

AWS announced today that it’s releasing a tool called AWS SaaS Boost as open source, distributed under the Apache 2.0 license. The tool, which was first announced at the AWS re:Invent conference last year, is designed to help companies transform their on-prem software into cloud-based Software as a Service.

In the charter for the software, the company describes its mission this way: “Our mission is to create a community-driven suite of extensible building blocks for Software-as-a-Service (SaaS) builders. Our goal is to foster an open environment for developing and sharing reusable code that accelerates the ability to deliver and operate multi-tenant SaaS solutions on AWS.”

What it effectively does is provide the tools to turn the application into one that lets you sign up users and let them use the app in a multi-tenant cloud context. Even though it’s open source, it is designed to get you to move your application into the AWS system, where you can access a number of AWS services such as AWS CloudFormation, AWS Identity and Access Management (IAM), Amazon Route 53, Elastic Load Balancing, AWS Lambda (Amazon’s serverless tool) and Amazon Elastic Container Service (Amazon’s container orchestration service), though presumably you could use alternative services if you were so inclined.

Making it open source gives companies that need this kind of service access to the source code, providing a level of comfort and the ability to contribute to the project, expand on the base product and give back to the community. That makes it a win for users, who get flexibility and the benefit of a community behind the tool, and a win for AWS, which gets that community working to improve and enhance the tool over time.

“Our objective with AWS SaaS Boost is to get great quality software based on years of experience in the hands of as many developers and companies as possible. Because SaaS Boost is open source software, anyone can help improve it. Through a community of builders, our hope is to develop features faster, integrate with a wide range of SaaS software, and to provide a high quality solution for our customers regardless of company size or location,” Amazon’s Adrian De Lucan wrote in a blog post announcing the intent to open source SaaS Boost.

This announcement comes just a couple of weeks after the company open sourced its DeepRacer device software, which runs its machine learning-fueled mini race cars. That said, Amazon has had a complex relationship with open source over the past couple of years, with companies like MongoDB, Elastic and CockroachDB altering their open source licenses to prevent Amazon from offering its own hosted versions of their software.

#aws, #cloud, #enterprise, #open-source, #saas, #tc

Emerging open cloud security framework has backing of Microsoft, Google and IBM

Each of the big cloud platforms has its own methodology for passing security information to logging and security platforms, leaving it to the vendors to find proprietary ways to translate that into a format that works for their tools. The Cloud Security Notification Framework (CSNF), a new working group that includes Microsoft, Google and IBM, is trying to create an open, standard way of delivering this information.

Nick Lippis, co-founder and co-chairman of ONUG, an open enterprise cloud community that is the primary driver of CSNF, says that what they’ve created is part standard and part open source. “What we’ve been really focusing on is how do we automate governance on the cloud. And so security was the place that was ripe for that where we can actually provide some value right away for the community,” he said.

While they’ve pulled in some of the big cloud vendors, they’ve also got large companies that consume cloud services, like FedEx, Pfizer and Goldman Sachs. Conspicuously missing from the group is AWS, by far the biggest player in the cloud infrastructure market. But Lippis says he hopes that as the project matures, other companies, including AWS, will join.

“There’s lots of security programs and industry programs that get out there and that people are asking them to join, and so some companies want to wait to see how well this pans out [before making a commitment to it],” Lippis said. His hope is that, over time, Amazon will come around and join the group; in the meantime, they are working to get to the point where everyone in the community feels good about what they’re doing.

The idea is to start with security alerts and find a way to build a common format to give companies the same kind of system they have in the data center to track security alerts in the cloud. The way they hope to do that is with this open dialogue between the cloud vendors and the companies involved with the group.
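To make the normalization problem concrete, here is a toy sketch of what a common-format translation layer does: take each provider’s native alert shape and map it onto one shared schema. Every field name below is hypothetical; CSNF had not published a finished schema at the time of this article, so this illustrates the idea rather than the actual format.

```python
# Toy illustration of the problem CSNF is tackling: each cloud emits
# security alerts in its own shape, and consumers want a single schema.
# All provider names and field names here are invented for illustration.
def normalize(provider, alert):
    """Map a provider-specific alert dict onto a common schema."""
    if provider == "cloud_a":
        return {"severity": alert["sev"].upper(),
                "resource": alert["resourceId"],
                "time": alert["ts"]}
    if provider == "cloud_b":
        return {"severity": alert["Severity"].upper(),
                "resource": alert["Target"],
                "time": alert["EventTime"]}
    raise ValueError("unknown provider: " + provider)

a = normalize("cloud_a", {"sev": "high", "resourceId": "vm-1",
                          "ts": "2021-05-01T12:00:00Z"})
b = normalize("cloud_b", {"Severity": "high", "Target": "vm-2",
                          "EventTime": "2021-05-01T12:05:00Z"})
print(a["severity"], b["severity"])  # HIGH HIGH
```

Today each security vendor maintains its own version of these mappings; a shared standard would let the translation be written once, by the providers themselves.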

“So the structure of that is that there’s a steering committee that is chaired by CISOs from these large cloud consumer brands, and also the cloud providers, and they provide voting and direction. And then there’s the working group where all the work is done. The beauty of what we do is that we have now consumers and also providers working together and collaborating,” he said.

Don Duet, a member of ONUG and CEO and co-founder of Concourse Labs, has been involved in the formation of the CSNF. He says that to keep the project focused, they are treating this as a data management problem and establishing a common vocabulary for everyone in the group to work with.

“How do you build a consensus on what are the types of terms that everybody can agree on and then you build the underlying basis so that the experts in your resource providers in this case, Cloud Service Providers, can bless how their data [connects] to those common standards,” Duet explained.

He says that particular problem is more organizational than technical: getting the various stakeholders together and building consensus. At this point, they have that process in place, and the next step is proving it by having the various companies involved test it out in the coming months.

After they get past the testing phase, in October they plan to demonstrate what this looks like in a before-and-after scenario, with the new framework and without it. As the group works toward these goals, the hope is that the framework will become more established and other companies and vendors will come on board, making this a more standard way of sharing security alerts. If all goes well, they hope to fold other security information into the framework over time.

#cloud, #cloud-infrastructure, #enterprise, #open-source, #security, #standards

Timescale grabs $40M Series B as it goes all in on cloud version of time series database

Timescale, makers of the open source TimescaleDB time series database, announced a $40 million Series B financing round today. The investment comes just over two years after it got a $15 million Series A.

Redpoint Ventures led today’s round with help from existing investors Benchmark, New Enterprise Associates, Icon Ventures and Two Sigma Ventures. The company reports it has now raised approximately $70 million.

TimescaleDB lets users measure data across a time dimension, that is, anything that changes over time. “What we found is we need a purpose-built database for it to handle scalability, reliability and performance, and we like to think of ourselves as the category-defining relational database for time series,” CEO and co-founder Ajay Kulkarni explained.

He says that the choice to build the database on top of Postgres when it launched four years ago was a key decision. “There are a few different databases that are designed for time series, but we’re the only one where developers get the purpose-built time series database plus a complete Postgres database all in one…,” he said.

While the company has an open source version, last year it decided that, rather than selling an enterprise version (as it had been doing), it would include all of that functionality in the free version of the product and bet entirely on the cloud for revenue.

“We decided that we’re going to make a bold bet on the cloud. We think cloud is where the future of database adoption is, and so in the last year, […] we made all of our enterprise features free. If you want to test it yourself, you get the whole thing, but if you want a managed service, then we’re available to run it for you,” he said.

The community approach is working to attract users, with over 2 million monthly active databases, some of which the company is betting will convert to the cloud service over time. Timescale is based in New York City, but it’s a truly remote organization with 60 employees spread across 20 countries and every continent except Antarctica.

He says that as a global company, it creates new dimensions of diversity and different ways of thinking about it. “I think one thing that is actually kind of an interesting challenge for us is what does D&I mean in a totally global org. A lot of people focus on diversity and inclusion within the U.S., but we think we’re doing better than most tech companies in terms of racial diversity, gender diversity,” he said.

And being remote first isn’t going to change even after the pandemic. “I think it may not work for every business, but I think like being remote first has been a real good thing for us,” he said.

#cloud, #enterprise, #funding, #open-source, #postgres, #recent-funding, #redpoint-ventures, #saas, #startups, #tc, #time-series-database, #timescale

Botpress nabs $15M Series A to help developers build conversational apps

Botpress, a Montreal-based early stage startup, wants to make it easier for developers to build conversational apps, meaning humans interact with the app by speaking instead of typing, clicking or tapping. Today it announced a $15 million Series A from Decibel and Inovia Capital.

“We’re trying to bring human-level digital assistance to the masses, and we do that by giving developers the tools they need to build conversational AI applications, so essentially conversational AI. […] It’s a new way to build and consume software by using human language as a user interface instead of using traditional graphical user interfaces,” Botpress founder and CEO Sylvain Perron told me.

The company has created an open source toolkit to help developers remove some of the complexity associated with creating these applications. “Developers choose us because we provide the right tools to build conversational AI without changing the normal workflow of building software,” Perron explained.

Several years ago, Perron was trying to create a bot application and couldn’t find any good guidance to help him, so he decided to build a solution. He released the first version of that tool in 2017, and today more than 100,000 developers worldwide use the open source toolkit, including a number of Fortune 500 companies.

Jon Sakoda, who is leading the investment at Decibel, says that the company is turning some of that enterprise interest into a business supporting those companies. “Today, we do have a commercial open source offering, which a lot of companies already pay for, but as I think you’ve seen in this current wave of successful open source companies, there’s always a lot of demand for a cloud product. And I think that this financing clearly allows Botpress to invest in building a turnkey cloud offering,” Sakoda says.

He says that what impresses him about Botpress is that developers can build a bot in less than an hour on a laptop, but having a cloud product will remove one more layer of complexity around deploying and scaling the bot in production.

The company, which has offices in Montreal and Quebec City (when they actually go to the office again), currently has 25 employees. The plan is to triple the team size over the next year as they put the investment to work. As they do this, Perron says that diversity and inclusion is a key goal in hiring.

“In our discussions, we want to make sure that we are a very inclusive company as we scale, especially scaling at this pace it’s very easy to […] fall into non inclusive ways, so that’s very top of mind for us, and we’re putting significant effort into making sure that we’re doing this right,” he said.

The company has been remote from its early days, and had just opened an office in Quebec City when the pandemic hit, so they haven’t had much opportunity to use it. He expects a hybrid approach when they are allowed back in the office, but it will be up to employees whether they come in or not.

#artificial-intelligence, #botpress, #chat-bots, #conversational-ai, #decibel, #developer, #developer-tools, #funding, #open-source, #recent-funding, #startups, #tc

Amazon announces it’s open sourcing DeepRacer device software

When Amazon debuted AWS DeepRacer in 2018, it was meant as a fun way to help developers learn machine learning. While it has evolved since and incorporated DeepRacer competitions, today the company announced a new wrinkle: it’s open sourcing the software it created to run these miniature cars.

At its core, the DeepRacer car is a mini computer running Ubuntu Linux and Amazon’s Robot Operating System (ROS). The company believes that by opening up the device software to developers, it will encourage more creative uses of the car by enabling them to change the car’s default behavior.

“With the open sourcing of the AWS DeepRacer device code you can quickly and easily change the default behavior of your currently track-obsessed race car. Want to block other cars from overtaking it by deploying countermeasures? Want to deploy your own custom algorithm to make the car go faster from point A to B? You just need to dream it and code it,” the company wrote in a blog post announcing the open source release.

Since introducing the cars in 2018, the company has developed in-person DeepRacer leagues and, more recently, virtual races. In fact, the company reorganized the leagues last month to encourage new people to get involved with the technology. Adding an open source component could increase interest further as developers get a chance to make this their own and add new layers of usage to the cars that haven’t been possible until now.

The idea behind all of this is to teach developers the basics of machine learning, as AWS’ Marcia Villalba wrote in a blog post last month:

“AWS DeepRacer is an autonomous 1/18th scale race car designed to test [reinforcement learning] models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.”
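The main artifact a DeepRacer developer writes is a reward function: a Python function the training environment calls with a params dict describing the car’s state, returning a score the reinforcement-learning algorithm tries to maximize. The signature and the 'track_width' / 'distance_from_center' keys below match AWS’s documented interface, but this particular reward shaping is just one common example, not an official policy.

```python
def reward_function(params):
    """Example DeepRacer reward function: prefer staying near the center line.

    The training environment calls this with a params dict;
    'track_width' and 'distance_from_center' are among its documented keys.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, rewarded progressively less.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3  # likely off track

print(reward_function({"track_width": 1.0, "distance_from_center": 0.05}))  # 1.0
```

With the device software now open, developers can go beyond reward shaping and change what the car itself does with the trained model.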

If you want to get involved customizing your car’s software, the project documentation is available on GitHub and on the AWS DeepRacer Open Source page, where you can get started with six sample projects.

#amazon, #aws-deepracer, #cloud, #developer, #machine-learning, #open-source

Tecton teams with founder of Feast open source machine learning feature store

Tecton, the company that pioneered the notion of the machine learning feature store, has teamed up with the founder of the open source feature store project called Feast. Today the company announced the release of version 0.10 of the open source tool.

The feature store is a concept that the Tecton founders came up with when they were engineers at Uber. Shortly thereafter, an engineer named Willem Pienaar read the founders’ Uber blog posts on building a feature store and went to work building Feast as an open source version of the concept.

“The idea of Tecton [involved bringing] feature stores to the industry, so we build basically the best in class, enterprise feature store. […] Feast is something that Willem created, which I think was inspired by some of the early designs that we published at Uber. And he built Feast and it evolved as kind of like the standard for open source feature stores, and it’s now part of the Linux Foundation,” Tecton co-founder and CEO Mike Del Balso explained.

Tecton later hired Pienaar, who is today an engineer at the company, where he leads its open source team. While the company did not start off with a plan to build an open source product, the two products are closely aligned, and it made sense to bring Pienaar on board.

“The products are very similar in a lot of ways. So I think there’s a similarity there that makes this somewhat symbiotic, and there is no explicit convergence necessary. The Tecton product is a superset of what Feast has. So it’s an enterprise version with a lot more advanced functionality, but at Feast we have a battle-tested feature store that’s open source,” Pienaar said.

As we wrote in a December 2020 story on the company’s $35 million Series B, it describes a feature store as “an end-to-end machine learning management system that includes the pipelines to transform the data into what are called feature values, then it stores and manages all of that feature data and finally it serves a consistent set of data.”
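That pipeline-store-serve description can be made concrete with a toy in-memory version: raw events are transformed into feature values, stored keyed by entity, and served as a consistent set at request time. All names below are invented for illustration; this mirrors neither Feast’s nor Tecton’s actual API.

```python
# A toy, in-memory illustration of the feature store described above.
class ToyFeatureStore:
    def __init__(self):
        self.online = {}  # entity_id -> {feature_name: value}

    def materialize(self, events):
        """Pipeline step: transform raw purchase events into feature values."""
        for e in events:
            feats = self.online.setdefault(
                e["user_id"], {"purchase_count": 0, "total_spend": 0.0}
            )
            feats["purchase_count"] += 1
            feats["total_spend"] += e["amount"]

    def get_online_features(self, entity_id, feature_names):
        """Serving step: return a consistent set of features for one entity."""
        feats = self.online.get(entity_id, {})
        return {name: feats.get(name) for name in feature_names}

store = ToyFeatureStore()
store.materialize([
    {"user_id": "u1", "amount": 20.0},
    {"user_id": "u1", "amount": 5.0},
])
print(store.get_online_features("u1", ["purchase_count", "total_spend"]))
# {'purchase_count': 2, 'total_spend': 25.0}
```

The value of a real feature store is that the same feature definitions feed both model training and low-latency online serving, so the model sees consistent data in both places.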

Del Balso says that from a business perspective, contributing to the open source feature store exposes his company to a different group of users, and the commercial and open source products can feed off one another as they build the two products.

“What we really like, and what we feel is very powerful here, is that we’re deeply in the Feast community and get to learn from all of the interesting use cases […] to improve the Tecton product. And similarly, we can use the feedback that we’re hearing from our enterprise customers to improve the open source project. That’s the kind of cross learning, and ideally that feedback loop involved there,” he said.

The plan is for Tecton to continue being a primary contributor with a team inside Tecton dedicated to working on Feast. Today, the company is releasing version 0.10 of the project.

#artificial-intelligence, #developer, #enterprise, #feature-stores, #linux-foundation, #machine-learning, #open-source, #tc

Streamlit nabs $35M Series B to expand machine learning platform

As a company founded by data scientists, Streamlit may be in a unique position to develop tooling to help companies build machine learning applications. It started with an open source project, but today the startup announced an expanded beta of a new commercial offering and $35 million in Series B funding.

Sequoia led the investment with help from previous investors Gradient Ventures and GGV Capital. Today’s round brings the total raised to $62 million, according to the company.

Data scientists can download the open source project and build a machine learning application, but it requires a certain level of technical aptitude to make all the parts work. Company co-founder and CEO Adrien Treuille says that so far 20,000 monthly active developers use the open source tooling to build Streamlit apps, which have been viewed millions of times.

As the project has gained that traction, some customers have emerged who would prefer a commercial service. “It’s great to have something free and that you can use instantly, but not every company is capable of bridging that into a commercial offering,” Treuille explained.

Company COO and co-founder Amanda Kelly says that the commercial offering called Streamlit for Teams is designed to remove some of the complexity around using the open source application. “The whole [process of] how do I actually deploy an app, put it in a container, make sure it scales, has the resources and is securely connected to data sources […] — that’s a whole different skill set. That’s a DevOps and IT skill set,” she said.

What Streamlit for Teams does is take care of all that in the background for end users, so they can concentrate on the app building part of the equation without help from the technical side of the company to deploy it.

Sonya Huang, a partner at Sequoia, who is leading the firm’s investment in Streamlit, says that she was impressed with the company’s developer focus and sees the new commercial offering as a way to expand usage of the applications that data scientists have been building in the open source project.

“Streamlit has a chance to define a better interface between data teams and business users by ushering in a new paradigm for interactive, data-rich applications,” Huang said.

They have data scientists at big-name companies like Uber, Delta Dental and John Deere using the open source product already. They have kept the company fairly lean with 27 employees up until now, but the plan is to double that number in the coming year with the new funding, Kelly says.

She says that the founding team recognizes that it’s important to build a diverse company. She admits that it’s not always easy to do in practice when, as a young startup, you are just fighting to stay alive, but she says the funding gives them the luxury to step back and begin hiring more deliberately.

“Literally right before this call, I was on with a consultant who is going to come in and work with the executive team, so that we’re all super clear about what we mean [when it comes to] diversity for us and how is this actually a really core part of our company, so that we can flow that into recruiting and people and engineering practices and make that a lived value within our company,” she said.

Streamlit for Teams is available in beta starting today. The company plans to make it generally available some time later this year.

#developer, #funding, #machine-learning, #open-source, #recent-funding, #sequoia, #startups, #streamlit, #tc

Buffer overruns, license violations, and bad code: FreeBSD 13’s close call


Enlarge / FreeBSD’s core development team, for the most part, does not appear to see the need to update their review and approval procedures. (credit: Aurich Lawson (after KC Green))

At first glance, Matthew Macy seemed like a perfectly reasonable choice to port WireGuard into the FreeBSD kernel. WireGuard is an encrypted point-to-point tunneling protocol, part of what most people think of as a “VPN.” FreeBSD is a Unix-like operating system that powers everything from Cisco and Juniper routers to Netflix’s network stack, and Macy had plenty of experience on its dev team, including work on multiple network drivers.

So when Jim Thompson, the CEO of Netgate, which makes FreeBSD-powered routers, decided it was time for FreeBSD to enjoy the same level of in-kernel WireGuard support that Linux does, he reached out to offer Macy a contract. Macy would port WireGuard into the FreeBSD kernel, where Netgate could then use it in the company’s popular pfSense router distribution. The contract was offered without deadlines or milestones; Macy was simply to get the job done on his own schedule.

With Macy’s level of experience—with kernel coding and network stacks in particular—the project looked like a slam dunk. But things went awry almost immediately. WireGuard founding developer Jason Donenfeld didn’t hear about the project until it surfaced on a FreeBSD mailing list, and Macy didn’t seem interested in Donenfeld’s assistance when offered. After roughly nine months of part-time development, Macy committed his port—largely unreviewed and inadequately tested—directly into the HEAD section of FreeBSD’s code repository, where it was scheduled for incorporation into FreeBSD 13.0-RELEASE.

Read 61 remaining paragraphs | Comments

#biz-it, #code-review, #features, #freebsd, #kernel, #kernel-development, #open-source, #open-source-software, #tech, #wireguard

Camunda snares $98M Series B as process automation continues to flourish

It’s clear that automated workflow tooling has become increasingly important for companies. Perhaps that explains why Camunda, a Berlin startup that makes open source process automation software, announced an €82 million Series B today. That translates into approximately $98 million U.S.

Insight Partners led the round with help from Series A investor Highland Europe. When combined with the $28 million Series A investment from December 2018, it brings the total raised to approximately $126 million.

What’s attracting this level of investment, says Jakob Freund, co-founder and CEO at Camunda, is that the company is solving a problem that goes beyond pure automation. “There’s a bigger thing going on which you could call end-to-end automation or end-to-end orchestration of endpoints, which can be RPA bots, for example, but also micro services and manual work [by humans],” he said.

He added, “Camunda has become this endpoint agnostic orchestration layer that sits on top of everything else.” That means that it provides the ability to orchestrate how the automation pieces work in conjunction with one another to create this full workflow across a company.
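The endpoint-agnostic orchestration Freund describes can be pictured as a workflow whose steps are handled by different kinds of endpoints, with the orchestrator caring only about the sequence. The following minimal Python sketch illustrates the idea; all names are hypothetical and this is not Camunda’s API:

```python
# Hypothetical sketch of endpoint-agnostic orchestration: each workflow
# step is a different kind of "endpoint" (an RPA bot, a microservice, a
# human task queue), and the orchestrator only sequences them.

def rpa_bot(order):
    order["extracted"] = True      # e.g. a bot scrapes data from a legacy UI
    return order

def microservice(order):
    order["validated"] = True      # e.g. a REST service validates the data
    return order

def human_task(order):
    order["approved"] = True       # e.g. a person signs off on the result
    return order

# The orchestration layer is just an ordered pipeline of heterogeneous steps.
WORKFLOW = [rpa_bot, microservice, human_task]

def run(order):
    for step in WORKFLOW:
        order = step(order)
    return order

result = run({"id": 42})
print(result)  # order has passed through all three endpoint types
```

In a real engine each step would run asynchronously and survive restarts, but the design point is the same: the orchestrator owns the end-to-end flow, not any individual endpoint.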

The company has 270 employees and approximately 400 customers at this point, including Goldman Sachs, Lufthansa, Universal Music Group, and Orange. Matt Gatto, managing director at Insight Partners, sees a tremendous market opportunity for the company, and that’s why his firm came in with such a big investment.

“Camunda’s success demonstrates how an open, standards-based, developer-friendly platform for end-to-end process automation can increase business agility and improve customer experiences, helping organizations truly transform to a digital enterprise,” Gatto said in a statement.

Camunda is not your typical startup. Its history actually dates back to 2008 as a business process management (BPM) consulting firm. It began the Camunda open source project in 2013, and that was the start of pivoting to become an open source software company with a commercial component built on top of that.

It took the funding at the end of 2018 because the market was beginning to catch up with the idea, and they wanted to build on that. It’s going so well that the company reports it is cash-flow positive, and it will use the additional funding to continue accelerating the business.

#berlin-startups, #business-process-automation, #camunda, #enterprise, #funding, #open-source, #recent-funding, #startups, #tc

Seven months after Drone acquisition, Harness announces significant updates

The running line from any acquired company’s CEO is that the company can do so much more with the resources of its acquirer than it could on its own. Just seven months after being acquired, Drone co-founder Brad Rydzewski says that his company has benefited greatly from being part of Harness, and today the company announced a significant overhaul of the open source project.

The artist formerly known as Drone is now called ‘Harness CI Community Edition,’ and Rydzewski says that Harness founder and CEO Jyoti Bansal kept his word when he said he was 100% committed to continuing to develop the open source Drone product.

“Over the past seven months since the acquisition, a lot of community work has been around taking advantage of the resources that Harness has been able to afford us as a project — like having access to a designer, having access to professional writers — these are luxuries for most open source projects,” Rydzewski told me.

He says that having access to these additional resources has enabled him to bring a higher level of polish to the project that just wouldn’t have been possible without joining Harness. At the same time, he says the CI team, which has grown from the project’s two co-founders to 15 people, has also been able to build out the professional CI tool as it has become part of the Harness toolset.

Chief among the updates to the community edition is a new sleeker interface that has a much more professional look and feel, according to Rydzewski. In addition, developers can see how projects move along the pipeline in a visualization tool, while benefiting from real-time debugging tools and new governance and security features.

All of this is an embarrassment of riches for Rydzewski, who was used to working on a shoestring budget prior to joining Harness. “Drone came from very humble beginnings as an open source project, but now I think it can hold its own next to any product in the market today, even products that have raised hundreds of millions of dollars,” he said.

#continuous-integration, #developer, #drone-io, #enterprise, #harness, #ma, #open-source, #tc

Oso announces $8.2M Series A to simplify authorization for developers

When we think about getting access to an application, we tend to focus on the authentication side — granting or denying people (or devices) entry. But there is another piece to this, and that’s authorization. This is related to what you can do once you are inside the application, and Oso, an early stage startup, has created an open source library for developers to make it easier to build authorization in their applications.

Today, the company announced an $8.2 million Series A led by Sequoia with participation from SV Angel, Company Ventures, Highland Capital and numerous angel investors. When combined with a $2.7 million seed round from 2019, it brings the total raised to $10.9 million.

Company co-founder and CEO Graham Neray says that developers have benefited from tools like Stripe and Twilio to normalize the use of third-party APIs to offload parts of the application that aren’t core to the value prop. Oso does the same thing, except for authorization.

“We help developers to speed up their authorization roadmaps by up to 4x, and the way that we do that is by providing this library, which comes with pre-built integrations, guides and an underlying policy language,” Neray explained.

He says that authorization is a misunderstood concept, and as though to confirm this, when I tried to explain Oso to a colleague, his first thought was that it is an Okta competitor. It’s not. As Neray explains, authorization and authentication are related but are in fact different, and each requires a different set of tools.

While tools like Okta grant you access, authorization determines what buttons you can click, what pages you can see and what data you can access. Most developers handle this manually by writing the authorization code themselves, linking it to Active Directory (or a similar tool) and fashioning a permissions matrix. Oso’s goal is to remove that burden and provide a set of tools to abstract away most of the complexity.
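The hand-rolled approach Neray describes often boils down to a hard-coded permissions matrix. The following minimal Python sketch shows what that looks like; the roles, resources and `is_allowed` helper are all illustrative, not Oso’s API:

```python
# Hypothetical hand-rolled permissions matrix: each role maps to the
# actions it may perform on each resource type. This is the kind of
# boilerplate a library like Oso aims to replace with a policy language.

PERMISSIONS = {
    "admin":  {"report": {"view", "edit", "delete"}, "invoice": {"view", "edit"}},
    "viewer": {"report": {"view"}, "invoice": {"view"}},
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    """Return True if the given role may perform the action on the resource type."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())

print(is_allowed("admin", "delete", "report"))   # True: admins have full access
print(is_allowed("viewer", "edit", "report"))    # False: viewers are read-only
print(is_allowed("guest", "view", "report"))     # False: unknown roles get nothing
```

The trouble with this pattern is that the matrix grows with every new role, resource and edge case (ownership, hierarchies, time-based rules), which is why Neray pitches a dedicated policy layer instead.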

The tool is open source, and for now the startup is concentrating on building a community of users to drive developer interest. Over time, they fully intend to build a commercial company on top of that, but are still thinking about how that will look.

For now, the company, which launched in 2018, has nine employees with plans to triple over the next 18 months. Neray and co-founder and CTO Sam Scott are thinking carefully about how to build a diverse, inclusive and equitable company as they grow. That means hiring from underrepresented groups, treating them fairly and making them feel like they belong. Neray says at this point, he is doing all of the hiring.

“I make a concerted effort to ensure that our pipeline is as diverse as I want the team to be — full stop — and that’s the only way to do it,” he said.

He adds that while building a diverse workforce is the morally right thing to do for him and his co-founder, there is also a practical business side to this too. “We don’t want to build an echo chamber with people from the same background, the same thought process and all the same upbringing,” he said.

When the company can return to the office, the plan is to have a home base, but let folks work where they want and how they want. “The plan is we will have an office in New York, and we will have remote team members. So in one form or another it will be hybrid,” Neray said.

#apis, #developer, #funding, #open-source, #oso, #recent-funding, #sequoia, #startups, #tc