Ransomware: A market problem deserves a market solution

REvil is a solid choice for a villain’s name: R Evil. Revil. Evil and yet fun. I could imagine Black Widow, Hulk and Spider-Man teaming up to topple the leadership of REvil Incorporated.

The criminal gang using the name REvil may have enabled ransomware attacks on thousands of small businesses worldwide this summer — but the ransomware problem is bigger than REvil, LockBit or DarkSide. REvil has disappeared from the internet, but the ransomware problem persists.

REvil is a symptom, not the cause. I advise Tony Stark and his fellow Avengers to look past any one criminal organization — because there is no evil mastermind. Ransomware is just the latest in the 50,000-year evolution of petty criminals discovering get-rich-quick schemes.

The massive boom in ransomware arises from the lack of centralized control. More than 304 million ransomware attacks hit global businesses last year, with costs surpassing $178,000 per event. Technology has created a market where countless petty criminals can make good money fast. The best way to fight this kind of threat is with a market-based approach.

The spike in global ransomware attacks reflects a massive “dumbing down” of criminal activity. People looking to make an illicit buck have many more options available to them today than they did even two years ago. Without technical chops, people can steal your data, hold it for ransom and coerce you to pay to get it back. Law enforcement has not yet mounted an effective response to this form of cybercrime, and large, sophisticated criminal networks have likewise not yet figured out how to control the encroaching upstarts.

The spike in ransomware attacks is attributable to the “as a service” economy. In this case, we’re talking about RaaS, or ransomware as a service. It works because each task in the ransomware chain benefits from the sophistication that division of labor and specialization enable.

Someone finds a vulnerable target. Someone provides bulletproof infrastructure outside of the jurisdiction of responsible law enforcement. Someone provides the malicious code. The players all come together without knowing each other’s names. No need to meet in person as Mr. Pink, Mr. Blonde and Mr. Orange because the ability to coordinate tasks has become simple. The rapid pace of technological innovation created a decentralized market, enabling amateurs to engage in high-dollar crimes.

There’s a gig economy for the underworld just like there is for the legal business world. I’ve built two successful software companies, even though I’m an economist. I use open source software and rent infrastructure via cloud technologies. I operated my first software company for six years before I sought outside capital, and I used that money for marketing and sales more than technology.

This tech advancement is both good and bad. The global economy did better than expected during a global pandemic because technology enabled many people to work from anywhere.

But the illicit markets of crime also benefited. REvil provided a service — a piece of a larger network — and earned a share of proceeds from ransomware attacks committed by others, much as Amazon takes a share of my company’s revenues for the services it provides to me.

To fight ransomware attacks, appreciate the economics — the markets that enable ransomware — and change the market dynamics. Specifically, do three things:

1. Analyze the market like a business executive

Any competitive business thinks about what’s allowing competitors to succeed and how to outcompete them. The person behind a ransomware strike is an entrepreneur or a worker in a firm engaged in cybercrime, so start with good business analytics using data and smart business questions.

Can the crypto technologies that enable the crime also be used to enable entity resolution and deny anonymity/pseudonymity? Can technology undermine a criminal’s ability to recruit, coordinate or move, store and spend the proceeds from criminal activities?

2. Define victory in market terms

Doing the analytics to understand competing firms allows one to more clearly see the market for ransomware. Eliminating one “firm” often creates a power vacuum that will be filled by another, provided the market remains the same.

REvil disappeared, but ransomware attacks persist. Victory in market terms means creating markets in which criminals choose not to engage in the activity in the first place. The goal is not to catch criminals, but to deter the crime. Victory against ransomware happens when arrests drop because attempted attacks drop to near zero.

3. Combat RaaS as an entrepreneur in a competitive market

Preventing ransomware means fighting criminal entrepreneurs, so the task requires thinking and fighting crime like an entrepreneur.

Crime-fighting entrepreneurs require collaboration — networks of government officials, banking professionals and technologists in the private sector across the globe must come together.

Artificial intelligence and machine learning now make it possible to securely share data, information and knowledge while preserving privacy. The tools of crime become the tools to combat crime.

No evil mastermind sits in their lair laughing at the chaos inflicted on the economy. Instead, growing numbers of amateurs are finding ways to make money quickly. Tackling the ransomware industry requires the same coordinated focus on the market that enabled amateurs to enter cybercrime in the first place. Iron Man would certainly agree.

#column, #computer-security, #crime, #cybercrime, #machine-learning, #malware, #open-source-software, #ransomware, #security, #security-breaches, #tc

The stars are aligning for federal IT open source software adoption

In recent years, the private sector has been spurning proprietary software in favor of open source software and development approaches. For good reason: The open source avenue saves money and development time by using freely available components instead of writing new code, enables new applications to be deployed quickly and eliminates vendor lock-in.

The federal government has been slower to embrace open source, however. Efforts to change are complicated by the fact that many agencies employ large legacy IT infrastructure and systems to serve millions of people and are responsible for a plethora of sensitive data. Washington spends tens of billions every year on IT, but with each agency essentially acting as its own enterprise, decision-making is far more decentralized than it would be at, say, a large bank.

While the government has made a number of moves in a more open direction in recent years, the story of open source in federal IT has often seemed more about potential than reality.

But there are several indications that this is changing and that the government is reaching its own open source adoption tipping point. The costs of producing modern applications to serve increasingly digital-savvy citizens keep rising, and agencies are under budget pressure to improve service while saving taxpayer dollars.

Sheer economics dictate an increased role for open source, as do a variety of other benefits. Because its source code is publicly available, open source software encourages continuous review by others outside the initial development team to promote increased software reliability and security, and code can be easily shared for reuse by other agencies.

Here are five signs I see that the U.S. government is increasingly rallying around open source.

More dedicated resources for open source innovation

Two initiatives have gone a long way toward helping agencies advance their open source journeys.

18F, a team within the General Services Administration that acts as a consultancy to help other agencies build digital services, is an ardent open source backer. Its work has included developing a new application for accessing Federal Election Commission data, as well as software that has allowed the GSA to improve its contractor hiring process.

18F — short for GSA headquarters’ address of 1800 F St. — reflects the same grassroots ethos that helped spur open source’s emergence and momentum in the private sector. “The code we create belongs to the public as a part of the public domain,” the group says on its website.

Five years ago this August, the Obama administration introduced a new Federal Source Code Policy that called on every agency to adopt an open source approach, create a source code inventory, and publish at least 20% of written code as open source. The administration also launched Code.gov, giving agencies a place to locate open source solutions that other departments are already using.

The results have been mixed, however. Most agencies are now consistent with the federal policy’s goals, though many still have work to do on implementation, according to Code.gov’s tracker. And a report by a Code.gov staffer found that some agencies were embracing open source more than others.

Still, Code.gov says the growth of open source in the federal government has gone further than initially estimated.

A push from the new administration

The American Rescue Plan, a $1.9 trillion pandemic relief bill that President Biden signed in early March 2021, contained $1 billion for the GSA’s Technology Modernization Fund, which finances new federal technology projects (the administration had initially sought $9 billion). In January, the White House said upgrading federal IT infrastructure and addressing recent breaches such as the SolarWinds hack was “an urgent national security issue that cannot wait.”

It’s fair to assume open source software will form the foundation of many of these efforts, because White House technology director David Recordon is a long-time open source advocate and once led Facebook’s open source projects.

A changing skills environment

Federal IT employees who spent much of their careers working on legacy systems are starting to retire, and their successors are younger people who came of age in an open source world and are comfortable with it.

About 81% of private sector hiring managers surveyed by the Linux Foundation said hiring open source talent is a priority and that they’re more likely than ever to seek out professionals with certifications. You can be sure the public sector is increasingly mirroring this trend as it recognizes a need for talent to support open source’s growing foothold.

Stronger capabilities from vendors

By partnering with the right commercial open source vendor, agencies can drive down infrastructure costs and more efficiently manage their applications. For example, vendors have made great strides in addressing security requirements laid out by policies such as the Federal Information Security Modernization Act (FISMA), Federal Information Processing Standards (FIPS) and the Federal Risk and Authorization Management Program (FedRAMP), making compliance easier to manage.

In addition, some vendors offer powerful infrastructure automation tools and generous support packages, so federal agencies don’t have to go it alone as they accelerate their open source strategies. Linux distributions like Ubuntu provide a consistent developer experience from laptop/workstation to the cloud, and at the edge, for public clouds, containers, and physical and virtual infrastructure.

This makes application development a well-supported activity, with 24/7 access to enterprise support teams through web portals, knowledge bases and phone.

The pandemic effect

Whether it’s accommodating more employees working from home or meeting higher citizen demand for online services, COVID-19 has forced large swaths of the federal government to up their digital game. Open source allows legacy applications to be moved to the cloud, new applications to be developed more quickly, and IT infrastructures to adapt to rapidly changing demands.

As these signs show, the federal government continues to move rapidly from talk to action in adopting open source.

Who wins? Everyone!

#column, #developer, #federal-election-commission, #free-software, #government, #linux, #linux-foundation, #open-source-software, #open-source-technology, #opinion, #policy, #solarwinds, #ubuntu

To prevent cyberattacks, the government should limit the scope of a software bill of materials

The May 2021 executive order from the White House on improving U.S. cybersecurity includes a provision for a software bill of materials (SBOM), a formal record containing the details and supply chain relationships of various components used in building a software product.

An SBOM is the full list of every item that’s needed to build an application. It enumerates all parts, including direct open-source software (OSS) dependencies, transitive (indirect) OSS dependencies, open-source packages, vendor agents, vendor application programming interfaces (APIs) and vendor software development kits.

Software developers and vendors often create products by assembling existing open-source and commercial software components, the executive order notes. An SBOM is useful to those who develop or manufacture software, those who select or purchase software and those who operate it.

As the executive order describes, an SBOM enables software developers to make sure open-source and third-party components are up to date. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. And those who operate software can use SBOMs to quickly determine whether they are at potential risk of a newly discovered vulnerability.

“A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration,” the executive order says. “The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM and using it to analyze known vulnerabilities are crucial in managing risk.”

An SBOM is intrinsically hierarchical. The finished product sits at the top, and beneath it the hierarchy includes all of the dependencies that provide the foundation for its functionality. A compromise of any one part in this structure can ripple up through everything built on top of it.
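
To make that concrete, here’s a minimal sketch of a machine-readable SBOM, built in Python as a CycloneDX-style JSON document. The application, component names and versions are hypothetical, chosen to show direct, transitive and vendor-supplied parts side by side.

```python
import json

# Minimal CycloneDX-style SBOM sketch. All names and versions below are
# hypothetical, for illustration only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "metadata": {
        "component": {"type": "application", "name": "billing-service", "version": "2.3.0"}
    },
    "components": [
        # Direct open-source dependency
        {"type": "library", "name": "requests", "version": "2.25.1",
         "purl": "pkg:pypi/requests@2.25.1"},
        # Transitive dependency pulled in by the one above
        {"type": "library", "name": "urllib3", "version": "1.26.4",
         "purl": "pkg:pypi/urllib3@1.26.4"},
        # Hypothetical vendor SDK shipped in the same build
        {"type": "library", "name": "acme-payments-sdk", "version": "0.9.2"},
    ],
}

print(json.dumps(sbom, indent=2))
```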

Not surprisingly, given the potential impact, there has been a lot of talk about the proposed SBOM provision since the executive order was announced. This is certainly true within the cybersecurity community. Anytime there are attacks such as the ones against Equifax or SolarWinds that involve software vulnerabilities being exploited, there is renewed interest in this type of concept.

Clearly, the intention of an SBOM is good. If software vendors are not upgrading dependencies to eliminate security vulnerabilities, the thinking goes, we need to be able to ask the vendors to share their lists of dependencies. That way, the fear of customer or public ridicule might encourage software producers to do a better job of upgrading dependencies.

However, this is an old and outmoded way of thinking. Modern applications and microservices use many dependencies. It’s not uncommon for a small application to use tens of dependencies, which in turn might pull in dependencies of their own. Soon the list of dependencies used by a single application can run into the hundreds. And if a modern application consists of a few hundred microservices, as many do, the list of dependencies can run into the thousands.

If a software vendor were to publish such an extensive list, how would the end users of that software really benefit? Yes, we can also ask the software vendor to publish which of the dependencies are vulnerable, and let’s say that list runs into the hundreds. Now what?

Clearly, having to upgrade hundreds of vulnerable dependencies is not a trivial task. A software vendor would be constantly deciding between adding new functionality that generates revenue and allows the company to stay ahead of its competitors versus upgrading dependencies that don’t do either.

If the government formalizes an SBOM mandate and starts to financially penalize vendors that have vulnerable dependencies, it is clear that, given the complexity associated with upgrading dependencies, software vendors might choose to pay fines rather than risk losing revenue or competitive advantage in the market.

Revenue drives market capitalization, which in turn drives executive and employee compensation. Fines, as small as they are, have negligible impact on the bottom line. In a purely economic sense, the choice is fairly obvious.

In addition, software vendors typically do not want to publish lists of all their dependencies because that provides a lot of information to hackers and other bad actors as well as to competitors. It’s bad enough that cybercriminals are able to find vulnerabilities on their own. Providing lists of dependencies gives them even more possible resources to discover weaknesses.

Customers and users of the software, for their part, don’t want to know all the dependencies. What would they gain from studying a list of hundreds of dependencies? Rather, software vendors and their customers want to know which dependencies, if any, make the application vulnerable. That really is the key question.

Prioritizing software composition analysis (SCA), which examines dependencies in the context of the application that uses them, can dramatically reduce the list of dependencies that actually make an application vulnerable.

Instead of publishing a list of 1,000 dependencies, or 100 that are vulnerable, organizations can publish a far more manageable list in the single digits. That is a problem organizations can much more easily deal with. Sometimes a software vendor can even fix an issue without upgrading the dependency, for example by making changes in its own code, an option that a bare list of vulnerable dependencies does not reveal.
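
Here’s a toy sketch of that prioritization in Python. The dependency names and call-graph data are made up, but it shows why context shrinks the list: a vulnerability only matters if the application can actually reach the vulnerable code.

```python
# Toy sketch of SCA-style prioritization: keep only the vulnerable
# dependencies whose flawed code the application actually reaches.
# All dependency names and call-graph data are made up.

vulnerable_deps = {
    # dependency -> function containing the known vulnerability
    "log-formatter": "format_jndi_lookup",
    "xml-parser": "parse_external_entities",
    "image-codec": "decode_tiff",
}

# Functions reachable from the application's entry points (hypothetical
# output of a call-graph analysis).
reachable_functions = {"format_jndi_lookup", "render_template", "open_socket"}

actionable = [dep for dep, func in vulnerable_deps.items()
              if func in reachable_functions]

print(f"Raw scan flags {len(vulnerable_deps)} dependencies; "
      f"{len(actionable)} are actually reachable: {actionable}")
```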

There is no reason to disdain the concept of SBOM outright. By all means, let’s make software vendors responsible for being transparent about what goes into their products. Plenty of organizations have paid a steep price, in the form of data breaches and other cybersecurity attacks, because of software vulnerabilities that could have been prevented.

Indeed, it’s heartening to see the federal government take cybersecurity so seriously and propose ways to enhance the protection of applications and data.

However, let’s make SBOM specific to the list of dependencies that actually make the application vulnerable. This serves both the vendor and its customers by cutting directly to the sources of vulnerabilities that can do damage. That way, we can address the issues at hand without creating unnecessary burdens.

#column, #cybersecurity, #government, #hacking, #open-source-software, #opinion, #policy, #security, #solarwinds, #tc, #united-states

FOSS mobile app Stingle wants to privately, securely back up your photos

Photo caption: Despite the encryption, Stingle Photos is a distinctly minimalist app which comes closer to the simple feel of an analog album than most of its competitors do. (credit: Kohei Hara / Getty Images)

With Google Photos killing off its Unlimited photo backup policy last November, the market for photo backup and sync applications opened up considerably. We reviewed one strong contender—Amazon Photos—in January, and freelancer Alex Kretzschmar walked us through several self-hosted alternatives in June.

Today, we’re looking at a new contender—Stingle Photos—which splits the difference, offering a FOSS mobile application which syncs to a managed cloud.

Trust no one

Arguably, encryption is Stingle Photos’ most important feature. Although the app uploads your photos to Stingle’s cloud service, the service’s operators can’t look at your photos. That’s because the app, which runs on your phone or tablet, encrypts them securely using Sodium cryptography.
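
Stingle’s actual protocol is more involved, but the general pattern of client-side Sodium encryption can be sketched with PyNaCl, Python’s libsodium binding. The point is that the key stays on the device, so the sync server only ever stores ciphertext.

```python
# Sketch of client-side encryption with libsodium (via PyNaCl).
# Illustrates the general pattern, not Stingle's actual protocol.
import nacl.secret
import nacl.utils

# The key is generated and kept on the device; the server never sees it.
key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

photo_bytes = b"...raw image data..."

# encrypt() generates a random nonce and bundles it with the ciphertext.
ciphertext = box.encrypt(photo_bytes)

# Only the ciphertext is uploaded; decrypting requires the on-device key.
assert box.decrypt(ciphertext) == photo_bytes
```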

#android, #foss, #free-and-open-source, #ios, #open-source, #open-source-software, #photo-backup, #tech

Tech leaders can be the secret weapon for supercharging ESG goals

Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.

What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.

Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.

CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.

Reducing environmental impact

As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.

Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.
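
As a toy illustration of what such sizing recommendations boil down to, the sketch below flags VMs whose average utilization suggests they are overprovisioned. The fleet data and thresholds are invented for the example.

```python
# Toy right-sizing sketch: flag VMs that look overprovisioned based on
# average CPU utilization. Fleet data and thresholds are invented.

fleet = [
    # (instance name, vCPUs, average CPU utilization over 30 days)
    ("web-1", 16, 0.12),
    ("web-2", 16, 0.55),
    ("batch-1", 32, 0.07),
]

TARGET_UTILIZATION = 0.5

for name, vcpus, avg_util in fleet:
    if avg_util < 0.2 and vcpus > 2:
        suggested = max(2, round(vcpus * avg_util / TARGET_UTILIZATION))
        print(f"{name}: {vcpus} vCPUs at {avg_util:.0%} average "
              f"-> consider {suggested} vCPUs")
```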

Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.

So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infra provisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
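
In code, weighing carbon against a constraint like latency can be as simple as the following sketch. The region names, intensity figures and latency budget are placeholders, not real data for any provider.

```python
# Sketch: pick the lowest-carbon region among those meeting a latency
# budget. Regions, intensities and the budget are placeholders.

candidate_regions = {
    "region-a": {"gco2e_per_kwh": 75, "latency_ms": 40},
    "region-b": {"gco2e_per_kwh": 420, "latency_ms": 25},
    "region-c": {"gco2e_per_kwh": 190, "latency_ms": 35},
}

MAX_LATENCY_MS = 45  # hypothetical application requirement

acceptable = {name: info for name, info in candidate_regions.items()
              if info["latency_ms"] <= MAX_LATENCY_MS}
greenest = min(acceptable, key=lambda name: acceptable[name]["gco2e_per_kwh"])

print(f"Deploy to {greenest}")  # region-a under these made-up numbers
```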

Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information. This is helping them track progress toward their target of reducing their energy intensity by 25% by 2025.
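
The arithmetic behind such estimators is straightforward: multiply usage from billing data by per-unit energy coefficients, then by the grid’s carbon intensity. The coefficients below are placeholders; the real tools document the values they actually use.

```python
# The gist of usage-based energy estimation. Coefficients here are
# placeholders; real tools publish their own measured values.

WATT_HOURS_PER_VCPU_HOUR = 2.0   # placeholder compute coefficient
WATT_HOURS_PER_TB_HOUR = 1.0     # placeholder storage coefficient
GRID_GCO2E_PER_KWH = 400         # placeholder grid carbon intensity

vcpu_hours = 120_000             # hypothetical monthly usage from billing data
storage_tb_hours = 50_000

kwh = (vcpu_hours * WATT_HOURS_PER_VCPU_HOUR
       + storage_tb_hours * WATT_HOURS_PER_TB_HOUR) / 1000

print(f"~{kwh:,.0f} kWh, ~{kwh * GRID_GCO2E_PER_KWH / 1000:,.0f} kg CO2e")
```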

Make social impact

Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.

Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.

When thinking about product design, a product needs to be as useful and effective as it is sustainable. By treating sustainability and societal impact as core elements of product innovation, there is an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions and launched Lush Lens — a virtual packaging app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of (plastic) packaging.

Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.

It is therefore critical to incorporate responsible AI practices so that the benefits of AI and ML are realized by your entire user base and inadvertent harm is avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.

Impact governance

Promoting governance does not stop with the board and CEO; CTOs play an important role, too.

Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.

It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.

These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.

#artificial-intelligence, #carbon-footprint, #cloud, #cloud-computing, #cloud-infrastructure, #cloud-services, #column, #energy, #environmentalism, #esg, #etsy, #greenhouse-gas-emissions, #greentech, #machine-learning, #open-source-software, #opinion, #sustainability, #tc, #thoughtworks

Finite State lands $30M Series B to help uncover security flaws in device firmware

Columbus, Ohio-based Finite State, a startup that provides supply chain security for connected devices and critical infrastructure, has raised $30 million in Series B funding.

The funding lands amid increased focus on the less-secure elements in an organization’s supply chain, such as Internet of Things devices and embedded systems. The problem, Finite State says, is largely fueled by device firmware, the foundational software that often includes components sourced from third-party vendors or open-source software. This means a security flaw can be baked into the finished product without the device manufacturer’s knowledge.

“Cyber attackers see firmware as a weak link to gain unauthorized access to critical systems and infrastructure,” Matt Wyckhouse, CEO of Finite State, tells TechCrunch. “The number of known cyberattacks targeting firmware has quintupled in just the last four years.”

The Finite State platform brings visibility to the supply chains that create connected devices and embedded systems. After unpacking and analyzing every file and configuration in a firmware build, the platform generates a complete bill of materials for software components, identifies known and possible zero-day vulnerabilities, shows a contextual risk score, and provides actionable insights that product teams can use to secure their software.

“By looking at every piece of their supply chain and every detail of their firmware — something no other product on the market offers — we enable manufacturers to ship more secure products, so that users can trust their connected devices more,” Wyckhouse says.

The company’s latest funding round was led by Energize Ventures, with participation from Schneider Electric Ventures and Merlin Ventures, and comes a year after Finite State raised a $12.5 million Series A round. It brings the total amount of funds raised by the firm to just shy of $50 million. 

The startup says it plans to use the funds to scale to meet the demands of the market. It plans to increase its headcount too; Finite State currently has 50 employees, a figure that’s expected to grow to more than 80 by the end of 2021.  

“We also want to use this fundraising round to help us get out the message: firmware isn’t safe unless it’s safe by design,” Wyckhouse added. “It’s not enough to analyze the code your engineers built when other parts of your supply chain could expose you to major security issues.”

Finite State was founded in 2017 by Matt Wyckhouse, founder and former CTO of Battelle’s Cyber Business Unit. The company showcased its capabilities in June 2019, when its widely-cited Huawei Supply Chain Assessment revealed numerous backdoors and major security vulnerabilities in the Chinese technology company’s networking devices that could be used in 5G networks. 

#articles, #battelle, #ceo, #columbus, #computer-security, #computing, #cto, #cyberwarfare, #energize-ventures, #firmware, #funding, #hardware, #huawei, #internet-of-things, #open-source-software, #security, #supply-chain, #supply-chain-management, #technology

The end of open source?

Several weeks ago, the Linux community was rocked by the disturbing news that University of Minnesota researchers had developed (but, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel — the idea being to distribute hard-to-detect behaviors, meaningless in themselves, that could later be aligned by attackers to manifest vulnerabilities.

This was quickly followed by the — in some senses, equally disturbing — announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.

Though exploit development and disclosure is often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.

Equally certainly, maintainers and project governance are duty-bound to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least some of the point — that this was research rather than pure malice, and that it casts light on a kind of software (and organizational) vulnerability that begs for technical and systemic mitigation.

I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale, complexity and free and open-source software’s (FOSS) increasingly critical importance to every kind of human undertaking. Let’s look at that complex of problems:

  • The biggest open-source projects now present big targets.
  • Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
  • They are evolving to commodify each other. For example, it’s becoming increasingly hard to state, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note of this and have begun reorganizing around “full-stack” portfolios and narratives.
  • In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS and other metrics appear to be in decline.
  • OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see benefit from participation.

Meanwhile, the threat landscape keeps evolving:

  • Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
  • Attacks are more financially, economically and politically profitable than ever.
  • Users are more vulnerable, exposed to more vectors than ever before.
  • The increasing use of public clouds creates new layers of technical and organizational monocultures that may enable and justify attacks.
  • Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
  • Software componentization enables new kinds of supply-chain attacks.
  • Meanwhile, all this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses and evolve to depend on cloud vendors and other entities to do the hard work of security.

The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.

This was a serious betrayal, effectively by “insiders” of a trust system that’s historically worked very well to produce robust and secure kernel releases. The abuse of trust itself changes the game, and the implied follow-on requirement — to bolster mutual human trust with systematic mitigations — looms large.

But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix, after all). And the threat is asymmetrical: As the classic line goes — blue team needs to protect against everything, red team only needs to succeed once.

I see a few opportunities for remediation:

  • Limit the spread of monocultures. Projects like AlmaLinux and AWS’s Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
  • Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness, and not despite it, but within many communities, this may require a culture change for existing contributors.
  • Accelerate commodification by simplifying the stack and verifying the components. Push appropriate responsibility for security up into the application layers.

Basically, what I’m advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels.

Regardless, we need to ensure that both companies and individuals provide the resources open source needs to continue.

#column, #developer, #kernel, #kubernetes, #linux, #open-source-software, #operating-systems, #opinion, #university-of-minnesota

r2c raises $27M to scale its security-focused code analysis service

This morning r2c, a startup building a SaaS service around the Semgrep open-source project, announced that it has closed a $27 million Series B. Felicis led the round, which the company said was a pre-emptive deal.

Prior investors Redpoint and Sequoia also participated in the fundraising event; r2c last raised a $13 million Series A in October 2020.

The startup fits into several trends that TechCrunch has explored in recent quarters, including what appears to be a growing number of open-source (OSS) startups raising capital, and more rounds coming together thanks to investors looking to get the jump on inside rounds before they can form.

On the OSS point, r2c works with Semgrep, which the company likens to a “code-aware grep.” Still confused? Don’t worry, this is all a bit technical, but interesting. Grep is a tool for searching through plain text that has been around for decades. Semgrep is related, but focused on finding things inside written code.

Given the sheer volume of code written daily in the world, you can imagine the ever-rising demand for finding particular bits of it quickly; Semgrep is an evolution of the original project, which was initially built inside Facebook.
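
To see why “code-aware” matters, compare a text search with a structural one. The sketch below uses Python’s standard ast module rather than Semgrep itself, but it shows the idea: a naive text pattern misses a call split across lines, while a walk over the syntax tree matches it regardless of formatting.

```python
# Illustration of "code-aware" search (using Python's ast module, not
# Semgrep itself): match calls to eval() by structure, not by text.
import ast

source = """
result = eval(  # call split across lines defeats a naive text pattern
    user_input
)
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"):
        print(f"eval() call found on line {node.lineno}")
```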

Per r2c CEO Isaac Evans, however, the project failed to attract much awareness. His startup has built what Evans described to TechCrunch as the “canonical” Semgrep fork, or version, and has crafted a software service around the code to make it easier for other companies to use.

There are many ways to generate revenue from open-source software. Two popular monetization routes are through support services or offers to host particular projects. But r2c is doing something a bit different. The startup sells a monthly, per-developer subscription (SaaS) that packages a broad set of security-focused rules across different coding languages, allowing companies to easily check their own software for possible security issues.

Or as Evans succinctly explained it, r2c offers something akin to application security in a box.

Focusing on cybersecurity is a reasonable tack for the company. Given the ever-growing number of breaches that the public endures, helping companies leak less data and suffer fewer intrusions is big business.

You don’t have to pay r2c, however. Semgrep is OSS, and the rules associated with various languages are available under an LGPL license. Developers could build their own version of what the company offers. But, Evans argued, that version won’t help you pick which rules to apply to your code, something his company is happy to do for a fee.

From a wide lens, r2c fits into the developer tools category. It is content to land and expand inside of companies, perhaps allowing it a lower cost of acquiring customers than we see at some SaaS startups. But that doesn’t mean that the company won’t go to market to sell its service. Per Evans, the startup has historically underinvested in marketing, something that it may now be able to focus more on thanks to its recent financing.

It is not uncommon to see companies with technically minded founders initially spend too little on the sales and marketing parts of operating a software business. But our impression after discussing the company’s plans with Evans is that r2c intends to get that part of its house in order.

Evans told TechCrunch that his company took aboard more cash because it doesn’t want to build the best search tool for, say, the C programming language. It wants to go broad, fusing what the CEO described as the “customizability of Semgrep” and wide language support.

Let’s see how quickly the company can staff up, bolster its marketing efforts and take on enterprise clients. Raising a Series B puts the company somewhere past its startup adolescence, so from here on out we can pester it for concrete growth numbers.

#fundings-exits, #open-source-software, #oss, #r2c, #startups, #tc

A revival at the intersection of open source and open standards

Our world has big problems to solve, and something desperately needed in that pursuit is the open-source and open-standards communities working together.

Let me give you a stark example, taken from the harsh realities of 2020. Last year, the United States experienced nearly 60,000 wildland fires that burned more than 10 million acres, resulting in more than 9,500 homes destroyed and at least 43 lives lost.

I served as a volunteer firefighter in California for 10 years and witnessed firsthand the critical importance of technology in helping firefighters communicate efficiently and deliver safety-critical information quickly. Typically, multiple agencies show up to fight these fires, bringing with them radios made by different manufacturers that each use proprietary software to set radio frequencies. As a result, reprogramming these radios so that teams can communicate with one another is an unnecessarily slow — and potentially life-threatening — process.

If the radio manufacturers had instead all contributed to an open-source implementation conforming to a standard, the radios could have been quickly aligned to the same frequencies. Radio manufacturers could have provided a valuable, life-saving tool rather than a time-wasting obstacle, and they could have shared the cost of developing such software. In this situation, like so many others, there is no competitive advantage to be gained from proprietary radio-programming software and many priceless benefits to gain by standardizing.

The benefit of coherent standards and corresponding open-source implementations is not unique to safety-critical situations like wildfires. There are many areas of our lives that could significantly benefit from a better integration of standards and open source.

Open source and open standards: What’s the difference?

“Open source” describes software that is publicly accessible and free for anyone to use, modify and share. It also describes a collaborative, community-oriented software development philosophy, with an open exchange of ideas, open participation, rapid prototyping, and open governance and transparency.

By contrast, the term “standard” refers to agreed-upon definitions of functionality. These requirements, specifications and guidelines ensure that products, services and systems perform in an interoperable way with quality, safety and efficiency.

Dozens of organizations exist for the purpose of establishing and maintaining standards. Examples include the International Organization for Standardization (ISO), the European Telecommunications Standards Institute (ETSI), and the World Wide Web Consortium (W3C). OASIS Open belongs in this category as well. A standard is “open” when it is developed via a consensus-building process, guided by organizations that are open, fair and transparent. Most people would agree that the standard-building process is careful and deliberate, ensuring consensus through compromise and resulting in long-lasting specifications and technical boundaries.

Where’s the common ground?

Open source and open standards are obviously different, but the objectives of these communities are the same: interoperability, innovation and choice. The main difference is how they accomplish those goals, and by that I’m referring primarily to culture and pace.

Chris Ferris, an IBM fellow and CTO of Open Technology, recently told me that with standards organizations, it often seems the whole point is to slow things down. Sometimes it’s with good reason, but I’ve seen competition get the best of people, too. Open source seems to be much more collaborative and less contentious or competitive. That doesn’t mean that there aren’t competitive projects out there that are tackling the same domain.

Another culture characteristic that affects pace is that open source is about writing code and standards organizations are about writing prose. Words outlive code with respect to long-term interoperability, so the standards culture is much more deliberate and thoughtful as it develops the prose that defines standards. Although standards are not technically static, the intent with a standard is to arrive at something that will serve without significant change for the long term. Conversely, the open-source community writes code with an iterative mindset, and the code is essentially in a state of continuous evolution. These two cultures sometimes clash when the communities try to move in concert.

If that’s the case, why try to find harmony?

Collaboration between open source and open standards will fuel innovation

The internet is a perfect example of what harmony between the open-source and open-standards communities can achieve. When the internet began as ARPANET, it relied on common shared communications standards that predated TCP/IP. With time, standards and open-source implementations brought us TCP/IP, HTTP, NTP, XML, SAML, JSON and many others, and also enabled the creation of additional key global systems implemented in open standards and code, like disaster warnings (OASIS CAP) and standardized global trade invoicing (OASIS UBL).

The internet has literally transformed our world. That level of technological innovation and transformative power is possible for the future, too, if we re-energize the spirit of collaboration between the open-standards and open-source communities.

Finding harmony and a natural path of integration

With all of the critical open-source projects residing in repositories today, there are many opportunities for collaboration on associated standards to ensure the long-term operability of that software. Part of our mission at OASIS Open is identifying those open-source projects and giving them a collaborative environment and all the scaffolding they need to build a standard without it becoming a difficult process.

Another point Ferris shared with me is the necessity for this path of integration to grow. The need is particularly acute if you want your technology to be used in Asia: If you don’t have an international standard, Asian enterprises don’t even want to hear from you. We’re seeing the European community asserting a strong preference for standards as well. That is certainly a driver for open-source projects that want to play with some of the heavy hitters in the ecosystem.

Another area where you can see a growing need for integration is when an open-source project becomes bigger than itself, meaning it begins to impact a whole lot of other systems, and alignment is needed between them. An example would be a standard for telemetry data, which is now being used for so many different purposes, from observability to security. Another example is the software bill of materials, or SBOM. I know some things are being done in the open-source world to address the challenge of tracking the provenance of software. This is another case where, if we’re going to be successful at all, we need a standard to emerge.

It’s going to take a team effort

Fortunately, the ultimate goals of the open-source and open-standards communities are the same: interoperability, innovation and choice. We also have excellent proof points of how and why we need to work together, from the internet to Topology and Orchestration Specification for Cloud Applications (TOSCA) and more. In addition, major stakeholders are carrying the banner, acknowledging that for certain open-source projects we need to take a strategic, longer-term view that includes standards.

That’s a great start to a team effort. Now it’s time for foundations to step up to the plate and collaborate with each other and with those stakeholders.

#column, #interoperability, #open-source, #open-source-software, #standards, #tc

Huawei officially launches Android alternative HarmonyOS for smartphones

Think you’re living in a hyper-connected world? Huawei’s proprietary HarmonyOS wants to eliminate delays and gaps in user experience when you move from one device onto another by adding interoperability to all devices, regardless of the system that powers them.

Two years after Huawei was added to the U.S. entity list that banned the Chinese telecom giant from accessing U.S. technologies, including core chipsets and Android developer services from Google, Huawei’s alternative smartphone operating system was unveiled.

On Wednesday, Huawei officially launched its proprietary operating system HarmonyOS for mobile phones. The firm began building the operating system in 2016 and made it open source for tablets, electric vehicles and smartwatches last September. Flagship devices such as the Mate 40 could upgrade to HarmonyOS starting Wednesday, with the operating system gradually rolling out to lower-end models in the coming quarters.

HarmonyOS is not meant to replace Android or iOS, Huawei said. Rather, its application is more far-reaching, powering not just phones and tablets but an increasing number of smart devices. To that end, Huawei has been trying to attract hardware and home appliance manufacturers to join its ecosystem.

To date, more than 500,000 developers are building applications based on HarmonyOS. It’s unclear whether Google, Facebook and other mainstream apps in the West are working on HarmonyOS versions.

Some Chinese tech firms have answered Huawei’s call. Smartphone maker Meizu hinted on its Weibo account that its smart devices might adopt HarmonyOS. Oppo, Vivo and Xiaomi, who are much larger players than Meizu, are probably more reluctant to embrace a rival’s operating system.

Huawei’s goal is to collapse all HarmonyOS-powered devices into one single control panel, which can, say, remotely pair the Bluetooth connections of headphones and a TV. A game that is played on a phone can be continued seamlessly on a tablet. A smart soymilk blender can customize a drink based on the health data gleaned from a user’s smartwatch.

Devices that aren’t already on HarmonyOS can also communicate with Huawei devices with a simple plug-in. Photos from a Windows-powered laptop can be saved directly onto a Huawei phone if the computer has the HarmonyOS plug-in installed. That raises the question of whether Android, or even iOS, could, one day, talk to HarmonyOS through a common language.

The HarmonyOS launch arrived days before Apple’s annual developer event scheduled for next week. A recent job posting from Apple mentioned a seemingly new concept, homeOS, which may have to do with Apple’s smart home strategy, as noted by MacRumors.

Huawei denied speculation that HarmonyOS is a derivative of Android and said no single line of its code is identical to Android’s. A spokesperson for Huawei declined to say whether the operating system is based on Linux, the kernel that powers Android.

Several tech giants have tried to introduce their own mobile operating systems, to no avail. Alibaba built AliOS based on Linux but has long stopped updating it. Samsung flirted with its own Tizen, but that operating system is limited to powering a few Internet of Things devices like smart TVs.

Huawei may have a better shot at drumming up developer interest than its predecessors. It’s still one of China’s largest smartphone brands, despite losing a chunk of its market after the U.S. government cut it off from critical chip suppliers, which could hamper its ability to make cutting-edge phones. HarmonyOS also has a chance to create an alternative for developers who are disgruntled with Android, if Huawei is able to capture their needs.

The U.S. sanctions do not block Huawei from using Android’s open-source software, which major Chinese smartphone makers use to build their third-party Android operating system. But the ban was like a death knell for Huawei’s consumer markets overseas as its phones abroad lost access to Google Play services.

#alibaba, #android, #apple, #asia, #bluetooth, #china, #facebook, #gadgets, #harmonyos, #huawei, #internet-of-things, #linux, #meizu, #microsoft-windows, #mobile, #mobile-linux, #mobile-operating-system, #mobile-phones, #open-source-software, #operating-system, #operating-systems, #smart-devices, #smartphone, #smartphones, #tc, #xiaomi

The open-source Contributor Covenant is now managed by the Organization for Ethical Source

Managing the technical side of open-source projects is often hard enough, but throw in the inevitable conflicts between contributors, who are often very passionate about their contributions, and things get even harder. One way to establish ground rules for open-source communities is the Contributor Covenant, created by Coraline Ada Ehmke back in 2014. Like so many projects in the open-source world, the Contributor Covenant was also a passion project for Ehmke. Over the years, its first two iterations have been adopted by organizations like the CNCF, Creative Commons, Apple, Google, Microsoft and the Linux project, in addition to hundreds of other projects.

Now, as work is starting on version 3.0, the Organization for Ethical Source (OES), of which Ehmke is a co-founder and executive director, will take over the stewardship of the project.

“Contributor Covenant was the first document of its kind as code of conduct for open-source projects — and it was incredibly controversial and actually remains pretty controversial to this day,” Ehmke told me. “But I come from the Ruby community, and the Ruby community really embraced the concept and also really embraced the document itself. And then it spread from there to lots of other open-source projects and other open-source communities.”

The core of the document is a pledge to “make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation,” and for contributors to act in ways that contribute to a diverse, open and welcoming community.

As Ehmke told me, one part that evolved over the course of the last few years is the addition of enforcement guidelines that are meant to help community leaders determine the consequences when members violate the code of conduct.

“One of the things that I try to do in this work is when people criticize the work, even if they’re not arguing in good faith, I try to see if there’s something in there that could be used as constructive feedback, something actionable,” Ehmke said. “A lot of the criticism for years for Contributor Covenant was people saying, ‘Oh, I’ll say one wrong thing and be permanently banned from our project, which is really grim and really unreasonable.’ What I took from that is that people are afraid of what consequences project leaders might impose on them for an infraction. Put that way, that’s kind of a reasonable concern.”

Ehmke described bringing the Covenant to the OES as an “exit to community,” similar to how companies will often bring their mature open-source projects under the umbrella of a foundation. She noted that the OES includes a lot of members with expertise in community management and project governance, which they will be able to bring to the project in a more formal way. “I’m still going to be involved with the evolution of Contributor Covenant, but it’s going to be developed under the working group model that the Organization for Ethical Source has established,” she explained.

For version 3.0, Ehmke hopes to turn the Covenant into what she described as more of a “toolkit” that will allow different communities to tailor it a bit more to their own goals and values (though still within the core ethical principles outlined by the OES).

“Microsoft’s adoption of Contributor Covenant represents our commitment to building healthy, diverse and inclusive communities, as well as our intention to contribute and build together with others in the ecosystem,” said Emma Irwin, a program manager in Microsoft’s Open Source Program Office. “I am honored to bring this intention and my expertise to the OES’s Contributor Covenant 3.0 working group.”

#apple, #contributor-covenant, #creative-commons, #developer, #google, #intellectual-property-law, #linux, #microsoft, #open-source-software, #ruby, #tc

Racist Computer Engineering Words: ‘Master,’ ‘Slave’ and the Fight Over Offensive Terms

Nearly a year after the Internet Engineering Task Force took up a plan to replace words that could be considered racist, the debate is still raging.

#cerf-vinton-g, #computers-and-the-internet, #engineering-and-engineers, #internet-engineering-task-force, #open-source-software, #standards-and-standardization, #world-wide-web-consortium

How we dodged risks and raised millions for our open-source machine learning startup

Open-source development has given rise to a slew of useful software in recent years. Many of the great technologies we use today were born out of open-source projects: Android, Firefox, VLC media player, MongoDB, Linux, Docker and Python, just to name a few, and many of these have also grown into very successful for-profit companies.

While there are some dedicated open-source investors such as the Apache Software Foundation incubator and OSS Capital, the majority of open-source companies will raise from traditional venture capital firms.

Our team has raised from traditional venture capital firms like Speedinvest, open-source-specific firms like OSS, and even from more hybrid firms like OpenOcean, which was created by the founders and senior leadership teams at MariaDB and MySQL. These companies understandably have a significant but not exclusive open-source focus.

Our area of innovation is an open-source AutoML server that reduces model training complexity and brings machine learning to the source of the data. Ultimately, we feel democratizing machine learning has the potential to truly transform the modern business world. As such, we successfully raised $5 million in seed funding to help bring our vision to the current marketplace.

Here, we aim to provide insights and advice for open-source startups that hope to follow a similar path for securing funding, and also detail some of the important risks your team needs to consider when crafting a business model to attract investment.

Strategies for acquiring open-source seed funding

Obviously, venture capitalists find many open-source software initiatives to be worthy investments. However, they need to understand the inherent risks involved in commercializing an innovative idea. Finding low-risk investments that lead to lucrative business opportunities remains an important goal for these firms.

In our experience, these risks fall into three major categories: market risk, execution risk and founders’ risk. Explaining all three to potential investors in a concise manner helps dispel their fears. In the end, low-risk, high-reward scenarios obviously attract tangible interest from sources of venture capital.

Ultimately, investment companies want startups to generate enough revenue to reach a valuation exceeding $1 billion. While that number is likely to increase over time, it remains a good starting point for initial funding discussions with investors. Annual revenue of $100 million serves as a good benchmark for achieving that valuation level.

Market risks in open-source initiatives

Market risks for open-source organizations tend to differ from those faced by traditional businesses seeking funding. Notably, investors in traditional startups are taking a larger leap of faith.

#artificial-intelligence, #column, #ec-ai, #ec-column, #ec-how-to, #entrepreneurship, #machine-learning, #open-source-software, #private-equity, #startups, #tc, #venture-capital

Buffer overruns, license violations, and bad code: FreeBSD 13’s close call

FreeBSD’s core development team, for the most part, does not appear to see the need to update their review and approval procedures. (credit: Aurich Lawson, after KC Green)

At first glance, Matthew Macy seemed like a perfectly reasonable choice to port WireGuard into the FreeBSD kernel. WireGuard is an encrypted point-to-point tunneling protocol, part of what most people think of as a “VPN.” FreeBSD is a Unix-like operating system that powers everything from Cisco and Juniper routers to Netflix’s network stack, and Macy had plenty of experience on its dev team, including work on multiple network drivers.

So when Jim Thompson, the CEO of Netgate, which makes FreeBSD-powered routers, decided it was time for FreeBSD to enjoy the same level of in-kernel WireGuard support that Linux does, he reached out to offer Macy a contract. Macy would port WireGuard into the FreeBSD kernel, where Netgate could then use it in the company’s popular pfSense router distribution. The contract was offered without deadlines or milestones; Macy was simply to get the job done on his own schedule.

With Macy’s level of experience—with kernel coding and network stacks in particular—the project looked like a slam dunk. But things went awry almost immediately. WireGuard founding developer Jason Donenfeld didn’t hear about the project until it surfaced on a FreeBSD mailing list, and Macy didn’t seem interested in Donenfeld’s assistance when offered. After roughly nine months of part-time development, Macy committed his port—largely unreviewed and inadequately tested—directly into HEAD, the main development branch of FreeBSD’s code repository, where it was scheduled for incorporation into FreeBSD 13.0-RELEASE.

#biz-it, #code-review, #features, #freebsd, #kernel, #kernel-development, #open-source, #open-source-software, #tech, #wireguard

Elon Musk declares you can now buy a Tesla with Bitcoin in the U.S.

Tesla made headlines earlier this year when it took a significant position in bitcoin, acquiring a roughly $1.5 billion stake at then-prices in early February. At the time, it also noted in an SEC filing disclosing the transaction that it could eventually accept the cryptocurrency as payment from customers for its vehicles. Now, Elon Musk says the company has made that a reality, at least for customers in the U.S., and he added that the plan is for the automaker to ‘hodl’ all of its bitcoin payments, too.

In terms of its infrastructure for accepting bitcoin payments, Tesla isn’t relying on any third-party networks or wallets — the company is “using only internal & open source software & operates Bitcoin nodes directly,” Musk said on Twitter. And when customers pay in bitcoin, those won’t be converted to fiat currency, the CEO says, but will instead presumably add to the company’s stockpile.

In February when Tesla revealed its bitcoin purchase, observers either lauded the company’s novel approach to converting its cash holdings, or criticized the plan for its attachment to an asset with significant price volatility. Many also pointed out that the environmental cost of mining bitcoin seems at odds with Tesla’s overall stated mission, given its carbon footprint. Commenters today echoed these concerns, noting the irony of Tesla accepting the grid-taxing cryptocurrency for its all-electric cars.

As for how the bitcoin payment process works today, Tesla has detailed that in an FAQ. Customers begin the payment process from their own bitcoin wallet, and have to set the exact amount for a vehicle deposit based on current rates, with the value of Tesla’s cars still set in U.S. dollars. The automaker further notes that in the case of any refunds, it’s buyer-beware in terms of any change in value relative to the U.S. dollar from time of purchase to time of refund.
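
The arithmetic behind that flow is a simple spot conversion: per the FAQ, the buyer must send the exact BTC amount implied by the dollar price at current rates. A minimal sketch, with made-up numbers:

```python
# Toy sketch of the spot conversion described above: a deposit priced in
# USD is converted into an exact BTC amount at the quoted rate.
# Both figures below are invented for illustration.
deposit_usd = 100.00        # hypothetical vehicle deposit, in USD
btc_usd_rate = 54_000.00    # hypothetical BTC/USD rate at quote time

deposit_btc = round(deposit_usd / btc_usd_rate, 8)  # BTC uses 8 decimal places
print(f"Send exactly {deposit_btc:.8f} BTC")
```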

Musk also said that the plan is to expand Bitcoin payments to other countries outside the U.S. by “later this year.” Depending on the market, that could require some regulatory work, but clearly Musk thinks it’s worth the effort. Meanwhile, Bitcoin is up slightly on the news early Wednesday morning.

#bitcoin, #car, #ceo, #cryptocurrencies, #cryptography, #currency, #digital-currencies, #electric-vehicles, #elon-musk, #mining, #mobility, #open-source-software, #tc, #tesla, #u-s-securities-and-exchange-commission, #united-states

$1.3M in grants go towards making the web’s open source infrastructure more equitable

Open source software is at the core of… well, practically everything online. But while much of it is diligently maintained in some respects, in others it doesn’t receive the kind of scrutiny that something so foundational deserves. $1.3 million worth of grants were announced today, split among 13 projects looking to ensure that open source software and development are being done equitably, sustainably and responsibly.

The research projects will look into a number of questions about the way open source digital infrastructure is being used, maintained, and otherwise affected. For instance, many municipalities rely on and create this sort of infrastructure constantly as the need for government software solutions grows, but what are the processes by which this is done? Which approaches or frameworks succeed, and why?

And what about the private companies that contribute to major open-source projects, often without consulting one another — how do they communicate and share priorities and dependencies? How could that be improved, and with what costs and benefits?

These and other questions aren’t the type that any single organization or local government is likely to take on spontaneously, and of course the costs of such studies aren’t trivial. But they were deemed interesting (and likely to generate new approaches and products) by a team of experts who sorted through about 250 applications over the last year.

The grantmaking operation is funded and organized by the Ford Foundation, Alfred P. Sloan Foundation, Open Society Foundations, Omidyar Network, and the Mozilla Open Source Support Program in collaboration with the Open Collective Foundation.

“There’s a dearth of funding for looking at the needs and potential applications of free and open source infrastructure. The public interest issues behind open source have been the missing piece,” said Michael Brennan, who’s leading the grant program at the Ford Foundation.

“The president of the foundation [Darren Walker] once said, ‘a just society relies on a just Internet,’ ” he quoted. “So our question is how do we create that just Internet? How do we create and sustain an equitable Internet that serves everyone equally? We actually have a lot more questions than answers, and few people are funding research into those questions.”

Even finding the right questions is part of the question, of course, but in basic research that’s expected. Early work in a field can seem frustratingly general or inconclusive because it’s as much about establishing the scope and general direction of the work as it is about suggesting actual courses of action.

“The final portfolio wasn’t just about the ‘objectively best’ ones, but how do we find a diversity of approaches and ideas, and tackle different aspects of this work, and also be representative of the diverse and global nature of the project?” Brennan said. “This year we also accepted proposals for both research and implementation. We want to see that the research is informing the building of that equitable and sustainable infrastructure.”

You can read the full research abstracts here, but these are the short versions, with the proposer’s name:

  • How are COVID data infrastructures created and transformed by builders and maintainers from the open source community? – Megan Finn (University of Washington, University of Texas, Northeastern University)
  • How is digital infrastructure a critical response to fight climate change? – Narrira Lemos de Souza
  • How do perceptions of unfairness when contributing to an open source project affect the sustainability of critical open source digital infrastructure projects? – Atul Pokharel (NYU)
  • Supporting projects to implement research-informed best practices at the time of need on governance, sustainability, and inclusion. – Danielle Robinson (Code for Science & Society)
  • Assessing Partnerships for Municipal Digital Infrastructure – Anthony Townsend (Cornell Tech)
  • Implement recommendations for funders of open source infrastructure with guides, programming, and models – Eileen Wagner, Molly Wilson, Julia Kloiber, Elisa Lindinger, and Georgia Bullen (Simply Secure & Superrr)
  • How can we build a “Creative Commons” for API Terms of Service, as contracts to automatically read, control and enforce API Terms of Service between infrastructure and applications? – Mehdi Medjaoui (APIdays, LesMainteneurs, Inno3)
  • An Indian case study of governance, implementation and the private sector’s role in open source infrastructure projects – Digital Asia Hub
  • Will cross-company visibility into shared free and open source dependencies lead to cross-company collaboration and efforts to sustain shared dependencies? – Duane O’Brien
  • How do open source tools contribute towards creating a multilingual internet? – Anushah Hossain (UC Berkeley)
  • How digital infrastructure projects could embrace cooperatives as a sustainable model for working – Jorge Benet (Cooperativa Tierra Común)
  • How do technical decision-makers assess the security ramifications of open source software components before adopting them in their projects and where can systemic interventions to the FOSS ecosystem be targeted to collectively improve its security? – Divyank Katira (Centre for Internet & Society in Bangalore)
  • How can African participation in the development, maintenance, and application of the global open source digital infrastructure be enhanced? – Alex Comninos (Research ICT Africa (RIA) and the University of Cape Town)

The projects will receive their grants soon, and later in the year (or whenever they’re ready) the organizers will coordinate some kind of event at which they can present their results. Brennan made it clear that the funders take no stake in the projects and aren’t retaining or publishing the research themselves; they’re just coordinating and offering support where it makes sense.

$1.3 million is an interesting number. For some, it’s peanuts. A startup might burn through that cash in a month or two. But in an academic context, a hundred grand can be the difference between work getting done and work being abandoned. The hope is that small injections at the base layer produce a better environment for the type of support the Ford Foundation and others provide as part of their other philanthropic and grantmaking efforts.

#cornell, #ford-foundation, #intellectual-property-law, #northeastern-university, #nyu, #omidyar-network, #open-source, #open-source-software, #philanthropy, #tc, #uc-berkeley, #university-of-texas, #university-of-washington

Scarf helps open-source developers track how their projects are being used

Almost by default, open-source developers get very little insight into who uses their projects. In part, that’s the beauty of open source, but for developers who want to monetize their projects, it’s also a bit of a curse, because so little data flows back to them. While you usually know who bought your proprietary software — and those tools often send back some telemetry, too — the same isn’t true for open-source code. Scarf is trying to change that.

In its earliest incarnation, Scarf founder Avi Press tried to go the telemetry route for getting this kind of data. He had written a few successful developer tools and as they got more popular, he realized that he was spending an increasingly large amount of time supporting his users.

Scarf co-founder and CEO Avi Press (Image Credits: Scarf)

“This project was now really sapping my time and energy, but also clearly providing value to big companies,” he said. “And that’s really what got me thinking that there’s probably an opportunity to maybe provide support or build features just for these companies, or do something to try to make some money from that, or really just better support those commercial users.” But he also quickly realized that he had virtually no data about how the project was being used beyond what people told him directly and download stats from GitHub and other places. So as he tried to monetize the project, he had very little data to inform his decisions, and no way of knowing which companies were already quietly using his code and might be worth approaching directly.

“If you were working at any old company — pushing code out to an app or a website — if you pushed out code without any observability, that would be reckless. You would get fired over something like that. Or maybe not, but it’s a really poor decision to make. And this is the norm for every domain of software — except open source.”

That led to the first version of Scarf: a package manager that would provide usage analytics and make it easy to sell different versions of a project. But that wasn’t quite something the community was ready to accept — and a lot of people questioned the open-source nature of the project.

“What really came out of those conversations, even chatting with people who were really, really against this kind of approach — everyone agrees that the package registries already have all of this data. So NPM and Docker and all these companies that have this data — there are many, many requests of developers for this data,” Press said, and noted that there is obviously a lot of value in this data.

So the new Scarf takes a more sophisticated approach. While it still offers an NPM library that phones home, as well as pixel tracking for documentation, its focus is now on registries. What the company is essentially launching this week is a kind of middle layer between the code and the registry: developers can, for example, point users of their containers to the Scarf registry first, with Scarf sitting in front of the Docker Hub or the GitHub Container Registry.

“You tell us, where are your containers located? And then your users pull the image through Scarf and Scarf just redirects the traffic to wherever it needs to go. But then all the traffic that flows through Scarf, we can expose that to the maintainers. What company did that pull come from? Was it on a laptop or on CI? What cloud provider was it on? What container runtime was it using? What version of the software did they pull down? And all of these things that are actually pretty trivial to answer from this traffic — and the registries could have been doing this whole time but unfortunately have not done so.”
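
A minimal sketch of that middle-layer mechanic, assuming a hypothetical upstream registry URL: log who pulled what, then hand the client off. Scarf’s real service also deals with registry auth, manifests and per-layer traffic; this only illustrates the redirect-and-observe idea.

```python
# Minimal registry "middle layer" sketch: record pull metadata, then
# redirect the client to the real registry. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://registry-1.docker.io"  # assumed backing registry

class RedirectingRegistry(BaseHTTPRequestHandler):
    def do_GET(self):
        # Because the pull flows through us, we can observe the client
        # before sending it on its way -- the data registries already have.
        print(f"pull: path={self.path} client={self.client_address[0]} "
              f"ua={self.headers.get('User-Agent')}")
        self.send_response(302)
        self.send_header("Location", UPSTREAM + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectingRegistry).serve_forever()
```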

To fund its efforts, Scarf recently raised a $2 million seed funding round led by Wave Capital, with participation from 468 Capital and a number of angel investors.

#computing, #developer, #docker, #energy, #free-software, #github, #go, #npm, #open-source-software, #programming-languages, #recent-funding, #scarf, #software, #startups, #wave-capital

The rise of the activist developer

The last few months have put technology and its role in society, especially in the United States, in the spotlight.

We need a serious conversation on the equitable and ethical use of tech, what can be done to combat the spread of misinformation and more. As we work to solve these problems, however, I hope this dialogue doesn’t overshadow one silver lining of the past year: The rise of the developer activists who are using tech for good.

They stepped up like never before to tackle numerous global issues, demonstrating they not only love solving incredibly hard problems, but can do it well and at scale.

The responsibility lies with all of us to empower this community to unleash their entrepreneurial growth mindset and ensure more people have the opportunity to create a sustainable future for all. I’m calling on my colleagues, our industry, our governments and more to join me in supporting a new wave of developer-led activism and renew efforts to collectively close the skills gap that exists today.

From the COVID-19 pandemic, to climate change, to racial injustice, developers are playing a crucial role in creating new technologies to help people navigate today’s volatile world. Many of these developers are working on social problems on their own time, using open-source software that they can share globally. This work is helping to save lives and going forward, will help millions more.

The international research community acted early to share data and genetic sequences with one another in open-source projects that helped advance our early understanding of coronavirus and how to mobilize efforts to stop it. The ability for researchers to track genetic codes around the world in near real-time is crucial to our response.

St. Jude Children’s Research Hospital was able to digitize its contract signature process in just 10 days during this critical time. A team of four developers hailing from Taiwan, Brazil, Mongolia and India helped farmers navigate climate change by using weather data to make more informed crop management decisions.

From the civil rights and anti-war movements of the 1950s and 1960s through the recent rallies supporting the Black Lives Matter movement, people have used passion and protests to shape the conversations that lead to a better future. Now, this rich history of people-powered action has an important new set of tools: The data, software and tech know-how that’s needed to mount a coordinated global and local response to our greatest challenges.

Today’s software developers are akin to civil engineers in the 1940s and 1950s who designed bridges and roads, creating an infrastructure that paved the path for enormous widespread progress.

The open-source code community already collaborates and shares, producing innovations that belong to everyone, focusing on progress over perfection. If a hurricane is about to create havoc in your community, don’t just fill sandbags, hit your keyboard and use open-source technologies to not only help your community, but to scale solutions to help others. DroneAID, for example, is an open-source tool that uses visual recognition to detect and count SOS icons on the ground from drones flying overhead, and then automatically plots emergency needs on a map for first responders.

A recent GitHub study shows that open-source project creation is up 25% since April of last year. Developers are signing on to contribute to open-source communities and virtual hackathons during their downtime, using their skills to create a more sustainable world.

In 2018, I helped found Call for Code with IBM, David Clark Cause and United Nations Human Rights to empower the global developer community, and a big part of our mission was to create the infrastructure needed to shepherd big ideas into real-world deployments. For our part, IBM provides the 24-million-person developer community access to the same technology being used by our enterprise clients, including our open hybrid cloud platform, AI, blockchain and quantum computing.

One winner, Prometeo, whose team includes a firefighter, a nurse and developers, created a system that uses artificial intelligence and the Internet of Things to safeguard firefighters as they battle blazes; it has been tested in multiple regions of Spain. We’ve seen developers help teachers share virtual information for homeschooling; measure the carbon footprint of consumer purchases; update small businesses on COVID-19 policies; help farmers navigate climate change; and improve the way businesses manage lines amid the pandemic.

This past year, Devpost partnered with the World Health Organization (WHO) and challenged developers to create COVID-19 mitigation solutions in categories including health, vulnerable populations and education. The Ford Foundation and Mozilla led a fellowship program to connect technologists, activists, journalists and scientists, and strengthen organizations working at the convergence of technology and social justice. The U.S. Digital Response (USDR) connected pro-bono technologists to work with government and organizations responding to crisis.

The most complex global and societal issues can be broken down into smaller solvable tech challenges. But to solve our most complex problems, we need the brains of every country, every class, every gender. The skills-gap crisis is a global phenomenon, making it critical that we equip the next generation of problem solvers with the training and resources they need to turn great ideas into impactful solutions.

This year, we can expect to see a newly energized community of developers working across the boundaries of companies, states and countries to take on some of the world’s biggest problems.

But they can’t do it alone. These developer activists need our support, encouragement and help pinpointing the most crucial problems to address, and they need the tools to bring solutions to every corner of the world.

The true power of technology lies with those who want to change the world for good. To ensure anyone who wants to create change has the tools, resources and skillsets to do so, we must renew our focus on closing the skills gap and addressing deep inequalities in our society.

Our future depends on getting this right.

#column, #covid-19, #developer, #free-software, #hackathon, #open-source-software, #opinion

He Created the Web. Now He’s Out to Remake the Digital World.

Tim Berners-Lee wants to put people in control of their personal data. He has technology and a start-up pursuing that goal. Can he succeed?

#antitrust-laws-and-competition-issues, #berners-lee-tim, #computer-security, #computers-and-the-internet, #facebook-inc, #open-source-software, #privacy, #software, #start-ups

Zilliz raises $43 million as investors rush to China’s open source software

For years, founders and investors in China had little interest in open source software because it did not seem like a viable business model. Zilliz’s latest financing round shows that attitude is changing. The three-year-old Chinese startup, which builds open source software for processing unstructured data, recently closed a Series B round of $43 million.

The investment, which catapults Zilliz’s to-date raise to over $53 million, is a sizable amount for any open source business around the world. Storied private equity firm Hillhouse Capital led the round joined by Trustbridge Partners, Pavilion Capital, and existing investors 5Y Capital (formerly Morningside) and Yunqi Partners.

Investors are going after Zilliz as they increasingly recognize open source as an effective software development strategy, Charles Xie, founder and CEO of Zilliz, told TechCrunch at an open source meetup in Shenzhen where he spoke as the first Chinese board chairperson for Linux Foundation’s AI umbrella, LF AI.

“Investors are seeing very good exits for open source companies around the world in recent years, from Elastic to MongoDB,” he added.

“When Starlord [Xie’s nickname] first told us his vision for data processing in the future digital age, we thought it was a crazy idea, but we chose to believe,” said 5Y Capital’s partner Liu Kai.

There’s one caveat for investing in the area: don’t expect to make money in the first 3 to 5 years. “But if you’re looking at an 8 to 10-year cycle, these [open source] companies can gain valuation at tens of billions of dollars,” Xie reckoned.

After six years as a software engineer at Oracle, Xie left the U.S. and headed home to start Zilliz in China. Like many Chinese entrepreneurs these days, Xie gave his startup an English name to mark the firm’s vision of being “global from day one.” While Zilliz set out in Shanghai, the goal is to relocate its headquarters to Silicon Valley once the firm delivers “robust technology and products” in the next 12 months, Xie said. China is an ideal starting point, both for its cheaper engineering talent and for the explosive growth of unstructured data — anything from molecular structures and shopping behavior to audio and video content.

“The amount of unstructured data in a region is in proportion to the size of its population and the level of its economic activity, so it’s easy to see why China is the biggest data source,” Xie observed.

On the other hand, China has seen rapid development in mobile internet and AI, especially in terms of real-life applications, which Xie argued makes China a suitable testing ground for data processing software.

So far Zilliz’s open source product Milvus has been “starred” over 4,440 times on GitHub and attracted some 120 contributors and 400 enterprise users around the world, half of whom are outside China. It’s done so without spending a penny on advertising; rather, user acquisition has come from its active participation on GitHub, Reddit, and other online developer communities.

Going forward, Zilliz plans to deploy its fresh capital in overseas recruitment, expanding its open source ecosystem, as well as research and development in its cloud-based products and services, which will eventually become a revenue driver as it starts monetizing in the second half of 2021.

#asia, #china, #data-management, #developer, #hillhouse-capital, #linux, #linux-foundation, #mongodb, #open-source, #open-source-software, #oracle, #pavilion-capital, #recent-funding, #saas, #shanghai, #trustbridge-partners, #yunqi-partners

Standing by developers through Google v. Oracle

The Supreme Court will hear arguments tomorrow in Google v. Oracle. This case raises a fundamental question for software developers and the open-source community: Whether copyright may prevent developers from using software’s functional interfaces — known as APIs — to advance innovation in software. The court should say no — free and open APIs protect innovation, competition and job mobility for software developers in America.

When we use an interface, we don’t need to understand (or care) about how the function on the other side of the interface is performed. It just works. When you sit down at your computer, the QWERTY keyboard allows you to rapidly put words on the screen. When you submit an online payment to a vendor, you are certain the funds will appear in the vendor’s account. It just works.

In the software world, interfaces between software programs are called “application programming interfaces” or APIs. APIs date back to the 1950s and allow developers to write programs that reuse other program functionality without knowing how that functionality is performed. If your program needs to sort a list, you could have it use a sorting program’s API to sort the list for your program. It just works.
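
To make the sorting example concrete, here is a trivial illustration (mine, not the briefs’): the caller depends only on the sort API, and any implementation behind that interface could be swapped in unchanged.

```python
# Calling a sort API without knowing -- or caring -- how it is implemented.
records = [("carol", 52), ("alice", 31), ("bob", 47)]

# sorted() is the interface; the algorithm behind it is a hidden detail.
by_age = sorted(records, key=lambda r: r[1])
print(by_age)  # [('alice', 31), ('bob', 47), ('carol', 52)]
```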

Developers have historically used software interfaces free of copyright concerns, and this freedom has accelerated innovation, software interoperation and developer job mobility. Developers using existing APIs save time and effort, allowing those savings to be refocused on new ideas. Developers can also reimplement APIs from one software platform to others, enabling innovation to flow freely across software platforms.

Importantly, reusing APIs gives developers job portability, since knowledge of one set of APIs is more applicable cross-industry. The upcoming Google v. Oracle decision could change this, harming developers, open-source software and the entire software industry.

Google v. Oracle and the platform API bargain

Google v. Oracle is the culmination of a decade-long dispute. Back in 2010, Oracle sued Google, arguing that Google’s Android operating system infringed Oracle’s rights in Java. After ten years, the dispute now boils down to whether Google’s reuse of Java APIs in Android was copyright infringement.

Prior to this case, almost everyone assumed that copyright did not cover the use of functional software interfaces like APIs. Under that assumption, competing platforms’ API reimplementation allowed developers to build new yet familiar things according to the API bargain: Everyone could use the API to build applications and platforms that interoperate with each other. Adhering to the API made things “just work.”

But if the Google v. Oracle decision indicates that API reimplementation requires copyright permission, the bargain falls apart. Nothing “just works” unless platform makers say so; they now dictate rules for interoperability — charging developers huge prices for the platform or stopping rival, compatible platforms from being built.

Free and open APIs are essential for modern developers

If APIs are not free and open, platform creators can stop competing platforms from using compatible APIs. This lack of competition blocks platform innovation and harms developers who cannot as easily transfer their skills from project to project, job to job.

MySQL, Oracle’s popular database, reimplemented mSQL’s APIs so third-party applications for mSQL could be “ported easily” to MySQL. If copyright had restricted reimplementation of those APIs, adoption of MySQL, reusability of old mSQL programs and the expansion achieved by the “LAMP” stack would have been stifled, and the whole ecosystem would be poorer for it. This and other examples of API reimplementation (IBM’s BIOS, Windows and WINE, UNIX and Linux, Windows and WSL, .NET and Mono) have driven perhaps the most amazing innovation in human history, with open-source software becoming critical digital infrastructure for the world.

Similarly, a copyright block on API-compatible implementations puts developers at the mercy of platform makers — both for their skills and their programs. Once a program is written for a given set of APIs, that program is locked in to the platform unless those APIs can also be used on other software platforms. And once a developer learns the skills for a given API, it’s much easier to reuse that knowledge than to retrain on APIs for another platform. If the platform creator decides to charge outrageous fees, or to end platform support, the developer is stuck. For nondevelopers, imagine this: The QWERTY layout is copyrighted and the copyright owner decides to charge $1,000 per keyboard. You would have a choice: Retrain your hands or pay up.

All software used by anyone was created by developers. We should give developers the right to freely reimplement APIs, as developer ability to shift applications and skills between software ecosystems benefits everyone — we all get better software to accomplish more.

I hope that the Supreme Court’s decision will pay heed to what developer experience has shown: Free and open APIs promote freedom, competition, innovation and collaboration in tech.

#android, #apis, #column, #developer, #google, #government, #java, #lawsuit, #open-source-software, #operating-system, #opinion, #oracle-corporation, #supreme-court

Goldman Sachs Now Has a Font

The custom-made Goldman Sans is ‘neutral, with a wink’ — or boring and derivative, according to fontheads.

#banking-and-financial-institutions, #design, #goldman-sachs-group-inc, #open-source-software, #software, #typography

Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself, though, was most recently owned by Lakend Labs, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.

“The mission of Mirantis is very simple: we want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is: How do we become more developer-centric and developer-focused? Because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet developers are now often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who were turned off by the complexity of Kubernetes, helping them get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ director of engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context on what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds, and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said that the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said that the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open-source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.

#api, #ceo, #cloud, #cloud-infrastructure, #computing, #developer, #docker, #enterprise, #exit, #free-software, #fundings-exits, #lens, #linux, #mirantis, #open-source-software, #startups

Merico raises $4.1M for its developer analytics platform

Merico, a startup that gives companies deeper insights into their developers’ productivity and code quality, today announced that it has raised a $4.1 million seed round led by GGV Capital with participation from Legend Star and previous investor Polychain Capital. The company was originally funded by the open source-centric firm OSS Capital.

“The mission of Merico is to empower every developer to build better and realize more value. We are excited that GGV Capital and our other investors see the importance of bringing more useful data to the software development process,” said Merico founder and CEO Jinglei Ren. “In today’s world, enabling remote contribution is more important than ever, and we at Merico are excited to continue our pursuit of bringing the most insightful and practical metrics to support both enterprise and open-source software teams.”

Merico head of business development Maxim Wheatley tells me that the company plans to use the new funding to enhance and expand its existing technology and marketing efforts. As a remote-first startup, Merico already has team members in the U.S., Brazil, France, Canada, India and China.

“In keeping with our roots and mission in open source, we will be focusing some of these new resources to engage more collaboratively with open source foundations, contributors and maintainers,” he added.

The idea behind Merico was born out of two key observations, Wheatley said. First of all, the team wanted to create a better way to analyze developer productivity and the quality of the code they generate. Some companies still simply use the number of lines of code generated by a developer to allocate bonuses for their teams, for example, which isn’t a great metric by any means. In addition, the team also wanted to find ways to better allocate income and recognition to the community members of open source projects based on the quality of their contributions.

The company’s tool is systems-agnostic because it bases its analysis on the codebase and workflow tools instead of looking at lines of code or commit counts, for example.

“Merico evaluates the actual code, in addition to related processes, and places productivity in the context of quality and impact,” said Wheatley. “In this process, we evaluate impact leveraging dependency relationships and examine fundamental indicators of quality including bug density, redundancy, modularity, test-coverage, documentation-coverage, code-smell, and more. By compiling these signals into a single point of truth, Merico can determine the quality and the productivity of a developer or a team in a manner that more accurately reflects the nature of the work.”
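
As a toy illustration of one of those indicators (and only a caricature: Merico’s system analyzes the code itself, not commit messages), bug density can be sketched as fix commits per thousand lines of code. All names and numbers below are invented.

```python
# Naive "bug density" indicator: bug-fix commits per thousand lines of code.
commits = [
    {"msg": "fix: null deref in parser", "lines_changed": 12},
    {"msg": "feat: add CSV export",      "lines_changed": 240},
    {"msg": "fix: off-by-one in pager",  "lines_changed": 3},
]
total_loc = 4_800  # hypothetical codebase size, in lines of code

bug_fixes = sum(1 for c in commits if c["msg"].startswith("fix"))
bug_density = bug_fixes / (total_loc / 1000)
print(f"bug density: {bug_density:.2f} fixes per KLOC")  # 0.42
```
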
As of now, Merico supports code written in Java, JavaScript (Vue.js and React.js), TypeScript, Go, C, C++, Ruby and Python, with support for other languages coming later.

“Merico’s technology delivers the most advanced code analytics that we’ve seen on the market,” said GGV’s Jenny Lee. “With the Merico team, we saw an opportunity to empower the organizations of tomorrow with insight. In this era of remote transformation, there’s never been a more critical time to bring this visibility to the enterprise and to open source. We can’t wait to see how this technology drives innovation in both technology and management.”

#articles, #ceo, #developer, #economy, #free-software, #ggv-capital, #jenny-lee, #manufacturing, #open-source-software, #polychain-capital, #productivity, #software-development, #tc

Google launches the Open Usage Commons, a new organization for managing open-source trademarks

Google, in collaboration with a number of academic leaders and its consulting partner SADA Systems, today announced the launch of the Open Usage Commons, a new organization that aims to help open-source projects manage their trademarks.

To be fair, at first glance, open-source trademarks may not sound like a major problem (or even a really interesting topic), but there’s more here than meets the eye. As Google’s director of open source Chris DiBona told me, trademarks have increasingly become an issue for open-source projects, not necessarily because there have been legal disputes over them, but because commercial entities that want to use the logo or name of an open-source project on their websites, for example, don’t have the reassurance that they are free to use those trademarks.

“One of the things that’s been rearing its ugly head over the last couple years has been trademarks,” he told me. “There’s not a lot of trademarks in open-source software in general, but particularly at Google, and frankly the higher tier, the more popular open-source projects, you see them more and more over the last five years. If you look at open-source licensing, they don’t treat trademarks at all the way they do copyright and patents, even Apache, which is my favorite license, they basically say, nope, not touching it, not our problem, you go talk.”

Traditionally, open-source licenses didn’t cover trademarks because there simply weren’t a lot of trademarks in the ecosystem to worry about. One of the exceptions here was Linux, a trademark that is now managed by the Linux Mark Institute on behalf of Linus Torvalds.

As a result, commercial companies aren’t sure how to handle this situation, and developers also don’t know how to respond when these companies ask them questions about their trademarks.

“What we wanted to do is give guidance around how you can share trademarks in the same way that you would share patents and copyright in an open-source license […],” DiBona explained. “And the idea is to basically provide that guidance, you know, provide that trademarks file, if you will, that you include in your source code.”
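
To make that concrete, here is a hypothetical sketch of what such a trademarks file might contain. The field names and wording are invented for illustration; this is not an official Open Usage Commons format.

```
# TRADEMARKS -- hypothetical example, not an official Open Usage Commons format
Project:    ExampleProject
Marks:      "ExampleProject" word mark; the ExampleProject logo (logo.svg)
Steward:    Open Usage Commons
Permitted:  Unmodified use of the name and logo to refer to the project or
            to truthfully describe compatibility with it.
Restricted: Any use implying endorsement, or use in the branding of a
            modified version, requires the steward's written permission.
```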

Google itself is putting three of its own open-source trademarks into this new organization: the Angular web application framework for mobile, the Gerrit code review tool and the Istio service mesh. “All three of them are kind of perfect for this sort of experiment because they’re under active development at Google, they have a trademark associated with them, they have logos and, in some cases, a mascot.”

One of those mascots is Diffi, the Kung Fu Code Review Cuckoo, because, as DiBona noted, “we were trying to come up with literally the worst mascot we could possibly come up with.” It’s now up to the Open Usage Commons to manage that trademark.

DiBona also noted that all three projects have third parties shipping products based on these projects (think Gerrit as a service).

Another thing DiBona stressed is that this is an independent organization. Besides himself, Jen Phillips, a senior engineering manager for open source at Google, is also on the board. But the team also brought in SADA’s CTO Miles Ward (who was previously at Google); Allison Randal, the architect of the Parrot virtual machine and a member of the boards of directors of the Perl Foundation and the OpenStack Foundation, among others; Charles Isbell, the dean of the Georgia Institute of Technology College of Computing; and Cliff Lampe, a professor at the School of Information at the University of Michigan and a “rising star,” as DiBona pointed out.

“These are people who really have the best interests of computer science at heart, which is why we’re doing this,” DiBona noted. “Because the thing about open source — people talk about it all the time in the context of business and all the rest. The reason I got into it is because through open source we could work with other people in this sort of fertile middle space and sort of know what the deal was.”

#computing, #developer, #enterprise, #google, #intellectual-property-law, #linus-torvalds, #linux, #michigan, #open-source, #open-source-software, #openstack-foundation, #perl, #tc, #university-of-michigan, #virtual-machine

VESoft raises $8M to meet China’s growing need for graph databases

Sherman Ye founded VESoft in 2018 when he saw a growing demand for graph databases in China. Its predecessors like Neo4j and TigerGraph had already been growing aggressively in the West for a few years, while China was just getting to know the technology that leverages graph structures to store data sets and depict their relationships, such as those used for social media analysis, e-commerce recommendations, and financial risk management.

VESoft is ready for further growth after closing an $8 million funding round led by Redpoint China Ventures, an investment firm launched by Silicon Valley-based Redpoint Ventures in 2005. Existing investor Matrix Partners China also participated in the Series pre-A round. The new capital will allow the startup to develop products and expand to markets in North America, Europe, and other parts of Asia.

The 30-person team comprises former employees of Alibaba, Facebook, Huawei and IBM. It’s based in Hangzhou, a scenic city known for its rich history and for housing Alibaba and its financial affiliate Ant Financial, where Ye previously worked as a senior engineer after a four-year stint with Facebook in California. From 2017 to 2018, the entrepreneur noticed that Ant Financial’s customers were increasingly interested in adopting graph databases as an alternative to relational databases, a model that has been popular since the ’80s and normally organizes data into tables.

“While relational databases are capable of achieving many functions carried out by graph databases… they deteriorate in performance as the quantity of data grows,” Ye told TechCrunch during an interview. “We didn’t use to have so much data.”

Information explosion is one reason why Chinese companies are turning to graph databases, which can handle millions of transactions to discover patterns within scattered data. The technology’s rise is also a response to new forms of online businesses that depend more on relationships.

“Take recommendations for example. The old model recommends content based purely on user profiles, but the problem of relying on personal browsing history is it fails to recommend new things. That was fine for a long time as the Chinese [internet] market was big enough to accommodate many players. But as the industry becomes saturated and crowded… companies need to ponder how to retain existing users, lengthen their time spent, and win users from rivals.”

The key lies in serving people content and products they find appealing. Graph databases come in handy, suggested Ye, when services try to predict users’ interests or behavior, because the model uncovers what their friends or people within their social circles like. “That’s a lot more effective than feeding them what’s trending.”
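
As a rough illustration of that relationship-centric query (a toy, not VESoft’s engine), a friends-of-friends recommendation is just a short traversal over an adjacency list; the names below are made up.

```python
# Toy friends-of-friends recommendation over an adjacency-list graph.
# A real graph database runs queries like this at scale, with indexes
# and a query language, but the traversal idea is the same.
follows = {
    "ada": {"bob", "cai"},
    "bob": {"cai", "dee"},
    "cai": {"dee"},
    "dee": set(),
}

def recommend(user):
    """Accounts followed by the user's follows, but not yet by the user."""
    direct = follows[user]
    candidates = set()
    for friend in direct:
        candidates |= follows[friend]
    return candidates - direct - {user}

print(recommend("ada"))  # {'dee'}
```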

Neo4j’s comparison of relational and graph databases

The company has made its software open source, a move the founder believes can help cultivate a community of graph-database users and educate the market in China. It will also allow VESoft to reach more engineers in the English-speaking world who are well acquainted with open-source culture.

“There is no such thing as being ‘international’ or ‘domestic’ for a technology-driven company. There are no boundaries between countries in the open-source world,” reckoned Ye.

When it comes to generating income, the startup plans to launch a paid version for enterprises, which will come with customized plug-ins and hosting services.

Nebula Graph, VESoft’s database product, is now serving 20 enterprise clients in areas spanning social media, e-commerce and finance, including big names like food delivery giant Meituan, popular social commerce app Xiaohongshu and e-commerce leader JD.com. A number of overseas companies are also trialing Nebula.

The time is ripe for enterprise-facing startups with a technological moat in China, as the consumer market has been carved up by incumbents like Tencent and Alibaba. That makes fundraising relatively easy for VESoft. The founder is confident that Chinese companies are rapidly catching up with their Western counterparts in the space, because the gargantuan amount of data and the myriad ways it is used in the country “will propel the technology forward.”

#ant-financial, #asia, #china, #data-management, #database, #databases, #enterprise, #graph-database, #graph-databases, #hangzhou, #matrix-partners-china, #neo4j, #nosql, #open-source-software, #redpoint-ventures

Priyanka Sharma takes over the leadership of the Cloud Native Computing Foundation

The Cloud Native Computing Foundation, the Linux Foundation-based home of open-source projects like Kubernetes, OpenTracing and Envoy, today announced that Dan Kohn, the long-time executive director of the organization, is stepping down, with Priyanka Sharma, the director of Cloud Native Alliances at GitLab, stepping into the general manager role. Kohn will continue to be part of the Linux Foundation, where he will launch a new initiative “to help public health authorities use open source software to fight COVID-19 and other epidemics.”

Sharma, who once presented a startup she co-founded in the TechCrunch Disrupt Battlefield competition, became part of the overall cloud-native community during her time as head of marketing and strategic partnerships at Lightstep, a role she took on in 2016. Her involvement with the OpenTracing project snowballed into a deeper relationship with the CNCF, she told me. “Once I joined GitLab, I was fortunate enough to be elected to the board of the CNCF — and until the 31st, I am in that role,” she told me. “That was really helpful, but that gave me the context about how does such a successful foundation and community run — what is the governance piece here — which, when I was on the community side, I wasn’t that involved in.”

Kohn had been at the helm of the CNCF since 2016 and guided the organization from its early days to becoming one of the most successful open-source foundations of all time. Its biannual conferences draw thousands of developers from all over the world. While its marquee project is obviously Kubernetes, Kohn and his team at the foundation put a lot of emphasis on the overall ecosystem. The organization’s mission, after all, is “to make cloud native computing ubiquitous.” Today, the CNCF is home to 10 graduated projects, like Kubernetes, Prometheus, Envoy, Jaeger and Vitess, as well as 16 so-called “incubating projects,” like OpenTracing, Linkerd, Rook and etcd.

“Priyanka’s contributions to CNCF as a speaker, governing board member, and community leader over the last several years has been invaluable,” said Kohn in a statement. “I think she is a great choice to lead the organization to its next stage.”

Sharma says she’ll start her tenure by listening to the community. “Cloud native has become the de facto standard,” she said. “We’re doing great with regard to technology adoption, growth — but as things evolve — with the number of people already in the foundation that is on track to be 600 members — I think as a community, and there is so much growth opportunity there, we can go deeper into developer engagement, have more conversations around education and understanding of all the projects. Now we have 10 graduated projects — not just Kubernetes — so there’s lots of adoption to happen there. So I’m just very excited for the second wave that we will have.”

Now that everybody knows what DevOps and containers are, she wants to bring more people into the fold — and also look at new technologies like serverless computing and service meshes. “We’ve been off to a blockbuster start and now I think we have to mature a little and go deeper,” she said.

It’s worth noting that current CNCF CTO Chris Aniszczyk will continue in his role at the foundation. “The cloud native community has grown leaps and bounds in the last few years as companies look for more flexible and innovative solutions to meet their continuously evolving infrastructure and application needs,” he said. “As CNCF now reaches nearly 50 projects and 90,000 contributors globally, I’m thrilled to have an opportunity to work with Priyanka to cultivate and grow our cloud native community in its next evolution.”

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing, #computing, #envoy, #free-software, #gitlab, #kubernetes, #linux, #linux-foundation, #open-source-software, #priyanka-sharma, #tc

Runa Capital closes Fund III at $157M, with an added focus on quantum computing

VC fund Runa Capital launched with $135 million in 2010 and is perhaps best known for its investment in NGINX, which powers many websites today. In more recent years, it has participated in or led investments in startups such as Zipdrug ($10.8 million), Rollbar this year ($11 million) and Monedo (€20 million).

Headquartered in San Francisco, it has now completed the final closing of its $157 million Runa Capital Fund III, which, the firm says, exceeded its original target of $135 million.

The firm typically invests between $1 million and $10 million in early-stage companies, predominantly in Series A rounds, and has a strong interest in cloud infrastructure, open-source software, AI and machine intelligence, and B2B SaaS in markets such as finance, education and healthcare.

Dmitry Chikhachev, co-founder and managing partner of Runa Capital, said in a statement: “We are excited to see many of our portfolio companies’ founders investing in Runa Capital III, along with tech-savvy LPs from all parts of the world, who supported us in all of our funds from day one… We invested in deep tech long before it became the mainstream for venture capital, betting on Nginx in 2011, Wallarm and ID Quantique in 2013, and MariaDB in 2014.”

Going forward, the firm says it aims to concentrate much of its firepower on machine learning and quantum computing.

In addition, Jinal Jhaveri, ex-CEO & Founder of Schoolmint, a former portfolio company of Runa Capital which was acquired by Hero K12, has joined the firm as a Venture Partner.

Runa operates from its HQ in Palo Alto as well as offices throughout Europe. Its newest office opened in Berlin in early 2020, reflecting Runa Capital’s growing German portfolio. German investments have included Berlin-based Smava and Mambu, as well as the recently added Monedo (formerly Kreditech), Vehiculum and N8N (a co-investment with Sequoia Capital). Other investments made from the third fund include Rollbar, Reelgood, Forest Admin, Uploadcare and Oxygen.

N8N and three other startups were funded through Runa Capital’s recently established seed program that focuses on smaller investments up to $100k.

#artificial-intelligence, #berlin, #cloud-infrastructure, #companies, #europe, #finance, #healthcare, #kreditech, #machine-learning, #nginx, #open-source-software, #palo-alto, #quantum-computing, #rollbar, #runa-capital, #san-francisco, #schoolmint, #sequoia-capital, #tc, #venture-capital, #zipdrug