Several weeks ago, the Linux community was rocked by the disturbing news that University of Minnesota researchers had developed (but, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel — the idea being to distribute hard-to-detect behaviors, meaningless in themselves, that could later be aligned by attackers to manifest vulnerabilities.
This was quickly followed by the — in some senses, equally disturbing — announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.
Though exploit development and disclosure are often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.
It’s equally certain that maintainers and project governance are duty-bound to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) that they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least some of the point — that this was research rather than pure malice, and that it casts light on a kind of software (and organizational) vulnerability that calls for technical and systemic mitigation.
I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale and complexity, and with the increasingly critical importance of free and open-source software (FOSS) to every kind of human undertaking. Let’s look at that complex of problems:
- The biggest open-source projects now present big targets.
- Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
- They are evolving to commodify each other. For example, it’s becoming increasingly hard to state, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note of this and have begun reorganizing around “full-stack” portfolios and narratives.
- In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS and other metrics seem to be in decline.
- OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see benefit from participation.
Meanwhile, the threat landscape keeps evolving:
- Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
- Attacks are more financially, economically and politically profitable than ever.
- Users are more vulnerable, exposed to more vectors than ever before.
- The increasing use of public clouds creates new layers of technical and organizational monocultures that may enable and justify attacks.
- Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
- Software componentization enables new kinds of supply-chain attacks.
- Meanwhile, all this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses and evolve to depend on cloud vendors and other entities to do the hard work of security.
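The supply-chain point above can be made concrete with a minimal sketch: one basic mitigation is to pin a cryptographic digest for each dependency artifact and reject anything that doesn’t match. The artifact name and contents below are purely illustrative, not real packages.

```python
import hashlib

# Hypothetical pinned digest for a vendored dependency artifact.
# In practice this would come from a lockfile recorded at release time.
PINNED = {
    "libwidget-1.2.tar.gz": hashlib.sha256(b"trusted release bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's SHA-256 matches its pinned digest."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unknown artifacts are rejected, not trusted by default
    return hashlib.sha256(data).hexdigest() == expected

print(verify_artifact("libwidget-1.2.tar.gz", b"trusted release bytes"))  # True
print(verify_artifact("libwidget-1.2.tar.gz", b"tampered bytes"))         # False
```

Digest pinning doesn’t stop a malicious commit from entering the upstream release itself — which is exactly the gap the kernel incident exposed — but it does close off tampering between the publisher and the consumer.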
The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.
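To see why the targeting step is cheap, consider a toy stand-in for the static analysis described above: simply scanning for code units that maintainers have already flagged as needing attention. The sample source and marker set here are hypothetical, far cruder than real static analysis tooling, but they illustrate how an attacker could choose plausible places to propose a “fix”.

```python
import re

# Maintainer attention markers that flag code already known to need work.
MARKERS = re.compile(r"\b(TODO|FIXME|XXX)\b")

def candidate_sites(source: str) -> list[int]:
    """Return 1-based line numbers carrying maintainer attention markers."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if MARKERS.search(line)]

sample = """\
int frob(struct widget *w) {
    /* FIXME: error path leaks w->buf */
    return do_frob(w);
}
"""
print(candidate_sites(sample))  # [2]
```

A patch submitted against such a site arrives with built-in plausibility: it addresses a known problem, so reviewers are primed to accept it.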
This was a serious betrayal, effectively by “insiders” of a trust system that’s historically worked very well to produce robust and secure kernel releases. The abuse of trust itself changes the game, and the implied follow-on requirement — to bolster mutual human trust with systematic mitigations — looms large.
But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix, after all). And the threat is asymmetrical: as the classic line goes, the blue team needs to protect against everything; the red team only needs to succeed once.
I see a few opportunities for remediation:
- Limit the spread of monocultures. Efforts like AlmaLinux and AWS’s Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
- Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness, and not despite it, but within many communities, this may require a culture change for existing contributors.
- Accelerate commodification by simplifying the stack and verifying the components. Push appropriate responsibility for security up into the application layers.
Basically, what I’m advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels.
Regardless, we need to ensure that both companies and individuals provide the resources open source needs to continue.
The U.S. Food and Drug Administration (FDA) has authorized the manufacture of the Coventor ventilator, a new hardware design developed by the University of Minnesota. The project sought to create a ventilator that could provide the same level of life-saving care as existing ventilator models, but at a much lower cost, to help ramp up production quickly and make the devices affordable to the health institutions that need them.
The Coventor becomes the first of these novel ventilator designs to earn an Emergency Use Authorization (EUA) from the FDA. Just like it sounds, an EUA isn’t the full traditional medical device approval the drug and device regulator would ordinarily issue, but an emergency, temporary grant made in times of crisis to help provide access to resources that are in short supply or that lack the usual full chain of approvals.
The coronavirus pandemic is potentially the best example of such a crisis in modern memory, and the respiratory illness caused by COVID-19 requires treatment including intubation and ventilator breathing support for the most severe cases. Ventilator hardware has been in short supply given the volume of cases, both in the U.S. and abroad, and a number of solutions have been proposed including new hardware designs and modifications to other types of medical breathing apparatus to account for the gap.
U of M’s Coventor, developed with a team including engineering and medical school faculty, is a desktop-sized device that costs around $1,000 to produce, making it a much more viable alternative if sold at cost to medical facilities when compared to the $20,000 to $25,000 retail price of your average existing hospital-grade ventilator hardware.
Both medical device maker Medtronic (the company that’s also working with Tesla on its ventilator manufacturing plans) and Boston Scientific (which will be producing the Coventor for distribution following this approval) contributed to the development of the design. The University also announced today that it would be making the Coventor’s specs open-source so that it can be manufactured globally, provided other companies seek and secure similar approvals from the FDA and relevant international health authorities.