Examining btrfs, Linux’s perpetually half-finished filesystem

We don’t recommend allowing btrfs to directly manage a complex array of disks—floppy or otherwise. (credit: Faustino Carmona Guerrero via Getty Images)

Btrfs—short for “B-Tree File System” and frequently pronounced “butter” or “butter eff ess”—is the most advanced filesystem present in the mainline Linux kernel. In some ways, btrfs simply seeks to supplant ext4, the default filesystem for most Linux distributions. But btrfs also aims to provide next-gen features that break the simple “filesystem” mold, combining the functionality of a RAID array manager, a volume manager, and more.

We have good news and bad news about this. First, btrfs is a perfectly cromulent single-disk ext4 replacement. But if you’re hoping to replace ZFS—or a more complex stack built on discrete RAID management, volume management, and a simple filesystem—the picture isn’t quite so rosy. Although the btrfs project has fixed many of the glaring problems it launched with in 2009, other problems remain essentially unchanged 12 years later.
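
If you just want to kick the tires on that single-disk case, the basics are a command apiece. This is a minimal sketch, not a recommended layout; the device name is a placeholder, and mkfs.btrfs will destroy whatever is already on it:

    # Create and mount a single-disk btrfs filesystem (WARNING: wipes /dev/sdX)
    sudo mkfs.btrfs /dev/sdX
    sudo mount /dev/sdX /mnt

    # Subvolumes and snapshots are the headline next-gen features
    sudo btrfs subvolume create /mnt/data
    sudo btrfs subvolume snapshot /mnt/data /mnt/data-snap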

History

Chris Mason is the founding developer of btrfs, which he began working on in 2007 while working at Oracle. This leads many people to believe that btrfs is an Oracle project—it is not. The project belonged to Mason, not to his employer, and it remains a community project unencumbered by corporate ownership to this day. In 2009, btrfs 1.0 was accepted into the mainline Linux kernel 2.6.29.


#btrfs, #features, #filesystems, #linux, #tech, #zfs

Android to take an “upstream first” development model for the Linux kernel

The Linux Plumbers Conference is this week, and since Android is one of the biggest distributors of the Linux kernel in the world, Google software engineer Todd Kjos stopped by for a progress report from the Android team. Android 12—which will be out any day now—promises to bring Android closer than ever to mainline Linux by shipping Google’s “Generic Kernel Image” (GKI) to end-users.

Traditionally, the Linux kernel is forked several times before it hits an Android phone, usually by each stakeholder in an Android device. First, Google forks the Linux kernel into “Android common”—the Linux kernel plus a bunch of phone- and Android-specific changes. Then SoC vendors like Qualcomm, Samsung, or MediaTek fork Android Common to make an SoC-specific kernel for each major chip release. Then each device gets a fork of the SoC kernel for device-specific hardware support.

Android’s kernel fragmentation is a huge mess, and you can imagine how long and difficult the road is for a bugfix at the top of the fork tree to reach the bottom, where end-users live. The official Android.com documentation notes that “These modifications can be extensive, to the point that as much as 50% of the code running on a device is out-of-tree code (not from upstream Linux or from AOSP common kernels).” It’s also a big time sink, and even Google’s own phones typically ship with kernels that are already two years old at launch.


#android, #google, #linux, #tech

Linux Foundation says companies are desperate for open source talent

It probably shouldn’t be considered “surprising” when a Linux certification entity reports that Linux certifications are highly desirable. (credit: Linux Foundation)

The Linux Foundation released its 2021 Open Source Jobs Report this month, which aims to inform both sides of the IT hiring process about current trends. The report accurately foreshadows many of its conclusions in the first paragraph, saying “the talent gap that existed before the pandemic has worsened due to an acceleration of cloud-native adoption as remote work has gone mainstream.” In other words: job-shopping Kubernetes and AWS experts are in luck.

The Foundation surveyed roughly 200 hiring managers and 750 open source professionals to find out which skills—and HR-friendly resume bullet points—are in the greatest demand. According to the report, college-degree requirements are trending down, but IT-certification requirements and/or preferences are trending up—and for the first time, “cloud-native” skills (such as Kubernetes management) are in higher demand than traditional Linux skills.

The hiring priority shift from traditional Linux to “cloud-native” skill sets implies that it’s becoming more possible to live and breathe containers without necessarily understanding what’s inside them—but you can’t have Kubernetes, Docker, or similar computing stacks without a traditional operating system beneath them. In theory, any traditional operating system could become the foundation of a cloud-native stack—but in practice, Linux is overwhelmingly what clouds are made of.


#biz-it, #linux, #linux-foundation, #open-source

Command line wizardry, part two: Variables and loops in Bash

Getting the hang of iteratively building commands interactively is all it really takes to become a command line wizard. (credit: Bashar Shglila / Getty Images)

In our first tutorial on command line wizardry, we covered simple redirection and the basics of sed, awk, and grep. Today, we’re going to introduce the concepts of simple variable substitution and loops—again, with a specific focus on the Bash command line itself, rather than Bash scripting.

If you need to write a script for repeated use—particularly one with significant logical branching and evaluation—I strongly recommend a “real language” instead of Bash. Luckily, there are plenty of options. I’m personally a big fan of Perl, in part because it’s available on pretty much any *nix system you’ll ever encounter. Others might reasonably choose, say, Python or Go instead, and I wouldn’t judge.

The real point is that we’re focusing on the command line itself. Everything below is something you can easily learn to think in and use in real time with a little practice.
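
As a taste of what's ahead, here's a minimal sketch of both ideas typed straight at a Bash prompt; the directory and filenames are placeholders, not anything from a particular system:

    # Simple variable substitution at the interactive prompt
    logdir=/var/log
    echo "Checking $logdir for log files..."

    # A quick one-off for loop over a glob, typed in real time
    for f in "$logdir"/*.log; do
        wc -l "$f"
    done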


#bash, #bsd, #cli, #cli-wizardry, #command-line, #command-line-tutorial, #command-line-wizardry, #features, #linux, #tech

Crypto’s networked collaboration will drive Web 3.0

Web 1.0 was the static web, Web 2.0 the social web, and Web 3.0 will be the decentralized web. It will move us from a world in which communities contribute but don’t own or profit to one where, through collaboration, they can do both.

By breaking away from traditional business models centered around benefiting large corporations, Web3 brings the possibility of community-centered economies of scale. This collaborative spirit and its associated incentive mechanisms are attracting some of the most talented and ambitious developers today, unlocking projects that were previously not possible.

Web3 might not be the final answer, but it’s the current iteration, and innovation isn’t always obvious in the beginning.

Web3, as Ki Chong Tran once said, is “The next major iteration of the internet, which promises to wrest control from the centralized corporations that today dominate the web.” Web3-enabled collaboration is made possible by decentralized networks that no single entity controls.

In closed-source business models, users trust a business to manage funds and execute services. With open-source projects, users trust the technology to perform these tasks. In Web2, the bigger network wins. In Web3, whoever builds the biggest network together wins.

In a decentralized world, not only is participation open to all, but the incentive structure is also designed so that the greater the number of participants, the more everybody succeeds.

Learning from Linux

Linux, which is behind a majority of Web2’s websites, changed the paradigm for how the internet was developed and provides a clear example of how collaborative processes can drive the future of technology. Linux wasn’t developed by an incumbent tech giant, but by a group of volunteer programmers who used networked collaboration, which is when people freely share information without central control.

In The Cathedral & The Bazaar, author Eric S. Raymond shares his observations of the Linux kernel development process and his experiences managing open source projects. Raymond depicts a time when the popular mindset was to develop complex operating systems carefully coordinated by a small, exclusionary group of people — “cathedrals,” which are corporations and financial institutions.

Linux evolved in a completely different way. Raymond explains, “Quality was maintained not by rigid standards or autocracy, but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of Darwinian selection on the mutations introduced by developers. To the amazement of almost everyone, this worked quite well.” This Linux development model, or “bazaar” model as Raymond puts it, assumes that “bugs are generally shallow phenomena” when exposed to an army of hackers without significant coordination.

#blockchain, #column, #cryptocurrency, #decentralization, #ec-column, #linux, #operating-systems, #proof-of-stake, #web3

The past, present and future of IoT in physical security

When Axis Communications released the first internet protocol (IP) camera after the 1996 Olympic games in Atlanta, there was some initial confusion. Connected cameras weren’t something the market had been clamoring for, and many experts questioned whether they were even necessary.

Today, of course, traditional analog cameras have been almost completely phased out as organizations have recognized the tremendous advantage that IoT devices can offer, but that technology felt like a tremendous risk during those early days.

To say that things have changed since then would be a dramatic understatement. The growth of the Internet of Things (IoT) represents one of the ways physical security has evolved. Connected devices have become the norm, opening up exciting new possibilities that go far beyond recorded video. Further developments, such as the improvement and widespread acceptance of the IP camera, have helped power additional breakthroughs including improved analytics, increased processing power, and the growth of open-architecture technology. On the 25th anniversary of the initial launch of the IP camera, it is worth reflecting on how far the industry has come — and where it is likely to go from here.

Tech improvements herald the rise of IP cameras

Comparing today’s IP cameras to those available in 1996 is almost laughable. While they were certainly groundbreaking at the time, those early cameras could record just one frame every 17 seconds — quite a change from what cameras can do today.

But despite this drawback, those on the cutting edge of physical security understood what a monumental breakthrough the IP camera could represent. After all, creating a network of cameras would enable more effective remote monitoring, which — if the technology could scale — would enable them to deploy much larger systems, tying together disparate groups of cameras. Early applications might include watching oil fields, airport landing strips or remote cell phone towers. Better still, the technology had the potential to usher in an entirely new world of analytics capabilities.

Of course, better chipsets were needed to make that endless potential a reality. Groundbreaking or not, the limited frame rate of the early cameras was never going to be effective enough to drive widespread adoption of traditional surveillance applications. Solving this problem required a significant investment of resources, but before long these improved chipsets brought IP cameras from one frame every 17 seconds to 30 frames per second. Poor frame rate could no longer be listed as a justification for shunning IP cameras in favor of their analog cousins, and developers could begin to explore the devices’ analytics potential.

Perhaps the most important technological leap was the introduction of embedded Linux, which made IP cameras more practical from a developer point of view. During the 1990s, most devices used proprietary operating systems, which made them difficult to develop for.

Even within the companies themselves, proprietary systems meant that developers had to be trained on a specific technology, costing companies both time and money. There were a few attempts at standardization within the industry, such as the Wind River operating system, but these ultimately failed. They were too small, with limited resources behind them — and besides, a better solution already existed: Linux.

Linux offered a wide range of benefits, not the least of which was the ability to collaborate with other developers in the open source community. This was a road that ran two ways. Because most IP cameras lacked the hard disk necessary to run Linux, software known as JFFS (the Journalling Flash File System) was developed to let a device use a flash memory chip as a hard disk. That technology was contributed to the open source community, and while it is currently on its third iteration, it remains in widespread use today.

Compression technology represented a similar challenge, with the more prominent data compression models in the late ’90s and early 2000s poorly suited for video. At the time, video storage involved individual frames being stored one-by-one — a data storage nightmare. Fortunately, the H.264 compression format, which was designed with video in mind, became much more commonplace in 2009.

By the end of that year, more than 90% of IP cameras and most video management systems used the H.264 compression format. It is important to note that improvements in compression capabilities have also enabled manufacturers to improve their video resolution as well. Before the new compression format, video resolution had not changed since the ’60s with NTSC/PAL. Today, most cameras are capable of recording in high definition (HD).

1996: First IP camera is released.
2001: Edge-based analytics with video motion detection arrive.
2006: First downloadable, edge-based analytics become available.
2009: Full HD becomes the standard video resolution; H.264 compression goes mainstream.
2015: Smart compression revolutionizes video storage.

The growth of analytics

Analytics is not exactly a “new” technology — customers requested various analytics capabilities even in the early days of the IP camera — but it is one that has seen dramatic improvement. Although it might seem quaint by today’s high standards, video motion detection was one of the earliest analytics loaded onto IP cameras.

Customers needed a way to detect movement within certain parameters to avoid having a tree swaying in the wind, or a squirrel running by, trigger a false alarm. Further refinement of this type of detection and recognition technology has helped automate many aspects of physical security, triggering alerts when potentially suspicious activity is detected and ensuring that it is brought to human attention. By taking human fallibility out of the equation, analytics has turned video surveillance from a reactive tool to a proactive one.

Reliable motion detection remains one of the most widely used analytics, and while false alarms can never be entirely eliminated, modern improvements have made it a reliable way to detect potential intruders. Object detection is also growing in popularity and is increasingly capable of classifying cars, people, animals and other objects.

License plate recognition is popular in many countries (though less so in the United States), not just for identifying vehicles involved in criminal activity, but for uses as simple as parking recognition. Details like car model, shirt color or license plate number are easy for the human eye to miss or fail to notice — but thanks to modern analytics, that data is cataloged and stored for easy reference. The advent of technology like deep learning, which features better pattern recognition and object classification through improved labeling and categorization, will drive further advancements in this area of analytics.

The rise of analytics also helps highlight why the security industry has embraced open-architecture technology. Simply put, it is impossible for a single manufacturer to keep up with every application that its customers might need. By using open-architecture technology, they can empower those customers to seek out the solutions that are right for them, without the need to specifically tailor the device for certain use cases. Hospitals might look to add audio analytics to detect signs of patient distress; retail stores might focus on people counting or theft detection; law enforcement might focus on gunshot detection — with all of these applications housed within the same device model.

It is also important to note that the COVID-19 pandemic drove interesting new uses for both physical security devices and analytics — though some applications, such as using thermal cameras for fever measurement, proved difficult to implement with a high degree of accuracy. Within the healthcare industry, camera usage increased significantly — something that is unlikely to change. Hospitals have seen the benefit of cameras within patient rooms, with video and intercom technology enabling healthcare professionals to monitor and communicate with patients while maintaining a secure environment.

Even simple analytics like cross-line detection can generate an alert if a patient who is a fall risk attempts to leave a designated area, potentially reducing accidents and overall liability. The fact that analytics like this bear only a passing mention today highlights how far physical security has come since the early days of the IP camera.

Looking to the future of security

That said, an examination of today’s trends can provide a glimpse into what the future might hold for the security industry. For instance, video resolution will certainly continue to improve.

Ten years ago, the standard resolution for video surveillance was 720p (1 megapixel), and 10 years before that it was the analog NTSC/PAL resolution of 572×488, or 0.3 megapixels. Today, the standard resolution is 1080p (2 megapixels), and a healthy application of Moore’s law indicates that 10 years from now it will be 4K (8 megapixels).

As ever, the amount of storage that higher-resolution video generates is the limiting factor, and the development of smart storage technologies such as Zipstream has helped tremendously in recent years. We will likely see further improvements in smart storage and video compression that will help make higher-resolution video possible.

Cybersecurity will also be a growing concern for both manufacturers and end users.

Recently, one of Sweden’s largest retailers was shut down for a week because of a hack, and others will meet the same fate if they continue to use poorly secured devices. Any piece of software can contain a bug, but only developers and manufacturers committed to identifying and fixing these potential vulnerabilities can be considered reliable partners. Governments across the globe will likely pass new regulations mandating cybersecurity improvements, with California’s recent IoT protection law serving as an early indicator of what the industry can expect.

Finally, ethical behavior will continue to become more important. A growing number of companies have begun foregrounding their ethics policies, issuing guidelines for how they expect technology like facial recognition to be used — not abused.

While new regulations are coming, it’s important to remember that regulation always lags behind, and companies that wish to have a positive reputation will need to adhere to their own ethical guidelines. More and more consumers now list ethical considerations among their major concerns—especially in the wake of the COVID-19 pandemic—and today’s businesses will need to strongly consider how to broadcast and enforce responsible product use.

Change is always around the corner

Physical security has come a long way since the IP camera was introduced, but it is important to remember that these changes, while significant, took place over more than two decades. Changes take time — often more time than you might think. Still, it is impossible to compare where the industry stands today to where it stood 25 years ago without being impressed. The technology has evolved, end users’ needs have shifted, and even the major players in the industry have come and gone according to their ability to keep up with the times.

Change is inevitable, but careful observation of today’s trends and how they fit into today’s evolving security needs can help today’s developers and device manufacturers understand how to position themselves for the future. The pandemic highlighted the fact that today’s security devices can provide added value in ways that no one would have predicted just a few short years ago, further underscoring the importance of open communication, reliable customer support and ethical behavior.

As we move into the future, organizations that continue to prioritize these core values will be among the most successful.

#column, #facial-recognition, #hardware, #internet-protocol, #ip-camera, #linux, #opinion, #physical-security, #security, #surveillance, #tc

System76’s updated 15-inch Pangolin laptop ships with Ryzen 7 5700U CPU

Specs at a glance: System76 Pangolin
OS Pop!_OS 21.04 or Ubuntu Linux 20.04
CPU Ryzen 5 5500U or Ryzen 7 5700U
RAM 8GiB DDR4 (upgradable to 64GiB)
GPU AMD Vega 7 integrated
SSD 240GB to 2TB NVMe
Battery 49 Wh Li-ion
Wi-Fi Intel dual-band Wi-Fi 6
Display 15-inch 1080p matte
Camera 720p
Connectivity
  • two USB-A 2.0 ports
  • one USB-A 3.2 port
  • one USB-C 3.2 port
  • one gigabit Ethernet port
  • 3.5 mm phone/mic combo jack
  • DC power jack
  • full-size HDMI 2.0 out
  • Kensington lock slot
Entry-level price $1,200 (Ryzen 5500U, 8GiB RAM, 240GB NVMe)

This week, System76—probably the best-known Linux-only laptop vendor—announced the latest update to its lightweight 15-inch Pangolin laptop series. The newest models of Pangolin are available and shipping today; customers have a choice between a six-core Ryzen 5 5500U and an eight-core Ryzen 7 5700U processor.

Pangolin was already the first System76 laptop model to offer AMD Ryzen processors, with last-generation Ryzen 4500U and 4700U models announced last December. This year’s model bumps up both the processor generation and asking price significantly—last year’s Ryzen 4500U Pangolin started at $850, offering 8GiB of RAM and a 240GB SSD in the entry-level trim. The new 5500U-powered Pangolin runs $1,200 for the same specs.

AMD Ryzen + Linux for the win

The increase in price likely reflects additional public awareness of mobile Ryzen’s outstanding Linux kernel support as well as its significant raw performance advantage over most competing Intel CPUs. Although we didn’t get the chance to test System76’s Ryzen 7 4700U, Acer’s 4700U-powered Swift 3—which isn’t even designed as an OEM Linux laptop—remains one of our all-time favorite systems for dedicated Linux users.


#linux, #linux-laptop, #oem-linux, #oem-linux-laptop, #pop_os, #system76, #tech, #ubuntu

Linux/BSD Command Line wizardry: Learn to think in sed, awk, and grep

(credit: jozefmicic via Getty Images)

As a relatively isolated junior sysadmin, I remember seeing answers on Experts Exchange and later Stack Exchange that baffled me. Authors and commenters might chain 10 commands together with pipes and angle brackets—something I never did in day-to-day system administration. Honestly, I doubted the real-world value of that. Surely, this was just an exercise in e-braggadocio, right?

Trying to read the man pages for the utilities most frequently seen in these extended command chains didn’t make them seem more approachable, either. For example, the sed man page weighs in at around 1,800 words alone without ever really explaining how regular expressions work or the most common uses of sed itself.

If you find yourself in the same boat, grab a beverage and buckle in. Instead of giving you encyclopedic listings of every possible argument and use case for each of these ubiquitous commands, we’re going to teach you how to think about them—and how to easily, productively incorporate them in your own daily command-line use.
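
To make the pipeline idea concrete before we dive in, here's a hedged taste of the style of command chain this series builds toward; the log path and awk field position are assumptions that vary by distribution, but the shape (grep to filter, awk to extract, sort and uniq to summarize) is the point:

    # Rank the usernames that show up most often in failed SSH logins
    grep 'Failed password' /var/log/auth.log \
        | awk '{print $(NF-5)}' \
        | sort | uniq -c | sort -rn | head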


#awk, #bash, #command-line, #features, #grep, #linux, #sed, #tech, #wizardry

Linux 5.14 set to boost future enterprise application security

Linux is set for a big release this Sunday, August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.

A particular area of interest for both enterprise and cloud users is always security, and to that end Linux 5.14 will help with several new capabilities. Mike McGrath, vice president of Linux Engineering at Red Hat, told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit.

“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.

Another area of security innovation in Linux 5.14 is a feature, in development for over a year and a half, that will protect system memory better than before. Attacks against Linux and other operating systems often target memory as a primary attack surface to exploit. With the new kernel, there is a capability known as memfd_secret() that will enable an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.

“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.
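
A quick, hedged way to check whether a given machine is ready for this; the paths and symbol names below are assumptions that can vary by architecture and kernel configuration:

    # memfd_secret() needs kernel 5.14 or newer
    uname -r

    # If secret-memory support was compiled in, related symbols appear here
    grep memfd_secret /proc/kallsyms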

At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations. 

The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those that contribute to Linux kernel development include individual contributors as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.

“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.

While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14. 

The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.

McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.

The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view.  He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.

“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”

#cloud, #cloud-applications, #enterprise, #ibm, #linus-torvalds, #linux, #operating-systems, #red-hat, #security, #suse, #tc

The stars are aligning for federal IT open source software adoption

In recent years, the private sector has been spurning proprietary software in favor of open source software and development approaches. For good reason: The open source avenue saves money and development time by using freely available components instead of writing new code, enables new applications to be deployed quickly and eliminates vendor lock-in.

The federal government has been slower to embrace open source, however. Efforts to change are complicated by the fact that many agencies employ large legacy IT infrastructure and systems to serve millions of people and are responsible for a plethora of sensitive data. Washington spends tens of billions every year on IT, but with each agency essentially acting as its own enterprise, decision-making is far more decentralized than it would be at, say, a large bank.

While the government has made a number of moves in a more open direction in recent years, the story of open source in federal IT has often seemed more about potential than reality.

But there are several indications that this is changing and that the government is reaching its own open source adoption tipping point. The costs of producing modern applications to serve increasingly digital-savvy citizens keep rising, and budget-constrained agencies must find ways to improve service while saving taxpayer dollars.

Sheer economics dictate an increased role for open source, as do a variety of other benefits. Because its source code is publicly available, open source software encourages continuous review by others outside the initial development team to promote increased software reliability and security, and code can be easily shared for reuse by other agencies.

Here are five signs I see that the U.S. government is increasingly rallying around open source.

More dedicated resources for open source innovation

Two initiatives have gone a long way toward helping agencies advance their open source journeys.

18F, a team within the General Services Administration that acts as a consultancy to help other agencies build digital services, is an ardent open source backer. Its work has included developing a new application for accessing Federal Election Commission data, as well as software that has allowed the GSA to improve its contractor hiring process.

18F — short for GSA headquarters’ address of 1800 F St. — reflects the same grassroots ethos that helped spur open source’s emergence and momentum in the private sector. “The code we create belongs to the public as a part of the public domain,” the group says on its website.

Five years ago this August, the Obama administration introduced a new Federal Source Code Policy that called on every agency to adopt an open source approach, create a source code inventory, and publish at least 20% of written code as open source. The administration also launched Code.gov, giving agencies a place to locate open source solutions that other departments are already using.

The results have been mixed, however. Most agencies are now consistent with the federal policy’s goal, though many still have work to do in implementation, according to Code.gov’s tracker. And a report by a Code.gov staffer found that some agencies were embracing open source more than others.

Still, Code.gov says the growth of open source in the federal government has gone farther than initially estimated.

A push from the new administration

The American Rescue Plan, a $1.9 trillion pandemic relief bill that President Biden signed in early March 2021, contained $9 billion for the GSA’s Technology Modernization Fund, which finances new federal technology projects. In January, the White House said upgrading federal IT infrastructure and addressing recent breaches such as the SolarWinds hack was “an urgent national security issue that cannot wait.”

It’s fair to assume open source software will form the foundation of many of these efforts, because White House technology director David Recordon is a long-time open source advocate and once led Facebook’s open source projects.

A changing skills environment

Federal IT employees who spent much of their careers working on legacy systems are starting to retire, and their successors are younger people who came of age in an open source world and are comfortable with it.

About 81% of private sector hiring managers surveyed by the Linux Foundation said hiring open source talent is a priority and that they’re more likely than ever to seek out professionals with certifications. You can be sure the public sector is increasingly mirroring this trend as it recognizes a need for talent to support open source’s growing foothold.

Stronger capabilities from vendors

By partnering with the right commercial open source vendor, agencies can drive down infrastructure costs and more efficiently manage their applications. For example, vendors have made great strides in addressing security requirements laid out by policies such as the Federal Information Security Modernization Act (FISMA), Federal Information Processing Standards (FIPS) and the Federal Risk and Authorization Management Program (FedRAMP), making it easier to deal with compliance.

In addition, some vendors offer powerful infrastructure automation tools and generous support packages, so federal agencies don’t have to go it alone as they accelerate their open source strategies. Linux distributions like Ubuntu provide a consistent developer experience from laptop/workstation to the cloud, and at the edge, for public clouds, containers, and physical and virtual infrastructure.

This makes application development a well-supported activity, with 24/7 access to world-class enterprise support teams via phone, web portals and knowledge bases.

The pandemic effect

Whether it’s accommodating more employees working from home or meeting higher citizen demand for online services, COVID-19 has forced large swaths of the federal government to up their digital game. Open source allows legacy applications to be moved to the cloud, new applications to be developed more quickly, and IT infrastructures to adapt to rapidly changing demands.

As these signs show, the federal government continues to move rapidly from talk to action in adopting open source.

Who wins? Everyone!

#column, #developer, #federal-election-commission, #free-software, #government, #linux, #linux-foundation, #open-source-software, #open-source-technology, #opinion, #policy, #solarwinds, #ubuntu

Jolla hits profitability ahead of turning ten, eyes growth beyond mobile

A milestone for Jolla, the Finnish startup behind Sailfish OS, which formed almost a decade ago when a band of Nokia staffers left to keep the torch burning for a mobile Linux-based alternative to Google’s Android: today it’s announcing that it has hit profitability.

The mobile OS licensing startup describes 2020 as a “turning point” for the business — reporting revenues that grew 53% YoY, and EBITDA (which provides a snapshot of operational efficiency) standing at 34%.

It has a new iron in the fire too, having recently started offering a new licensing product (called AppSupport for Linux Platforms) which, as the name suggests, provides Linux platforms with standalone compatibility with general Android applications—without a customer needing to license the full Sailfish OS (the latter has, of course, had baked-in Android app compatibility since 2013).

Jolla says AppSupport has had some “strong” early interest from automotive companies looking for solutions to develop their in-car infotainment systems — as it offers embedded Linux-compatible platforms the capability to run Android apps without needing to opt for Google’s automotive offerings. And while plenty of car makers have opted for Android, there are still players Jolla could net for its ‘Google-free’ alternative.

Embedded Linux systems run in plenty of other places too, so Jolla is hopeful of wider demand. The software could be used to enable an IoT device to run a particularly popular app, for example, as a value add for customers.

“Jolla is doing fine,” says CEO and co-founder Sami Pienimäki. “I’m happy to see the company turning profitable last year officially.

“In general it’s the overall maturity of the asset and the company that we start to have customers here and there — and it’s been honestly a while that we’ve been pushing this,” he goes on, fleshing out the reasons behind the positive numbers with trademark understatement. “The company is turning ten years in October so it’s been a long journey. And because of that we’ve been steadily improving our efficiency and our revenue.

“Our revenue grew over 50% since 2019 to 2020 and we made €5.4M revenue. At the same time the cost base of the operation has stabilized quite well so the sum of those resulted to nice profitability.”

While the consumer mobile OS market has — for years — been almost entirely sewn up by Google’s Android and Apple’s iOS, Jolla licenses its open source Sailfish OS to governments and business as an alternative platform they can shape to their needs — without requiring any involvement of Google.

Perhaps unsurprisingly, Russia was one of the early markets that tapped in.

The case for digital sovereignty in general — and an independent (non-US-based) mobile OS platform provider, specifically — has been strengthened in recent years as geopolitical tensions have played out via the medium of tech platforms; leading to, in some cases, infamous bans on foreign companies being able to access US-based technologies.

In a related development this summer, China’s Huawei launched its own Android alternative for smartphones, which it’s called HarmonyOS.

Pienimäki is welcoming of that specific development — couching it as a validation of the market in which Sailfish plays.

“I wouldn’t necessarily see Huawei coming out with the HarmonyOS value proposition and the technology as a competitor to us — I think it’s more proving the point that there is appetite in the market for something else than Android itself,” he says when we ask whether HarmonyOS risks eating Sailfish’s lunch.

“They are tapping into that market and we are tapping into that market. And I think both of our strategies and messages support each other very firmly.”

Jolla has been working on selling Sailfish into the Chinese market for several years — and that sought-after business remains a work in progress at this stage. But, again, Pienimäki says Jolla doesn’t see Huawei’s move as any kind of blocker to its ambitions of licensing its Android alternative in the Far East.

“The way we see the Chinese market in general is that it’s been always open to healthy competition and there is always competing solutions — actually heavily competing solutions — in the Chinese market. And Huawei’s offering one and we are happy to offer Sailfish OS for this very big, challenging market as well.”

“We do have good relationships there and we are building a case together with our local partners also to access the China market,” he adds. “I think in general it’s also very good that big corporations like Huawei really recognize this opportunity in general — and this shapes the overall industry so that you don’t need to, by default, opt into Android always. There are other alternatives around.”

On AppSupport, Jolla says the automotive sector is “actively looking for such solutions”, noting that the “digital cockpit is a key differentiator for car makers” — and arguing that this makes it a strategically important piece for them to own and control.

“There’s been a lot of, let’s say, positive vibes in that sector in the past few years — newcomers on the block like Tesla have really shaken the industry so that the traditional vendors need to think differently about how and what kind of user experience they provide in the cockpit,” he suggests.

“That’s been heavily invested and rapidly developing in the past years but I’m going to emphasize that at the same time, with our limited resources, we’re just learning where the opportunities for this technology are. Automotive seems to have a lot of appetite but then [we also see potential in] other sectors — IoT… heavy industry as well… we are openly exploring opportunities… but as we know automotive is very hot at the moment.”

“There is plenty of general Linux OS base in the world for which we are offering a good additional piece of technology so that those operating solutions can actually also tap into — for example — selected applications. You can think of like running the likes of Spotify or Netflix or some communications solutions specific for a certain sector,” he goes on.

“Most of those applications are naturally available both for iOS and Android platforms. And the capability to run those applications, as they exist, independently on top of a Linux platform — that creates a lot of interest.”

In another development, Jolla is in the process of raising a new growth financing round — it’s targeting €20M — to support its push to market AppSupport and also to put towards further growing its Sailfish licensing business.

It sees growth potential for Sailfish in Europe, which remains the biggest market for licensing the mobile OS. Pienimäki also says it’s seeing “good development” in certain parts of Africa. Nor has it given up on its ambitions to crack into China.

The growth round was opened to investors in the summer and hasn’t yet closed — but Jolla is confident of nailing the raise.

“We are really turning a next chapter in the Jolla story so exploring to new emerging opportunities — that requires capital and that’s what we are looking for. There’s plenty of money available these days, on the investor front, and we are seeing good traction there together with the investment bank with whom we are working,” says Pienimäki.

“There’s definitely an appetite for this and that will definitely put us in a better position to invest further — both to Sailfish OS and the AppSupport technology. And in particular to the go-to market operation — to make this technology available for more people out there in the market.”

#africa, #android, #appsupport, #automotive, #china, #europe, #google, #harmonyos, #huawei, #jolla, #linux, #meego, #mobile, #mobile-linux, #nokia, #operating-systems, #russia, #sailfish, #sailfish-os, #sami-pienimaki, #smartphones, #tc, #tesla

Elastic acquisition spree continues as it acquires security startup CMD

Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.

CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV. 

Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.

Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.

Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
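
As a hedged illustration of the sort of visibility eBPF provides, the widely used bpftrace front end (a separate install, run as root) can attach a tiny eBPF program to a syscall tracepoint and report every program executed on a box in real time:

    # Print each program executed system-wide, live, via an eBPF tracepoint probe
    sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve { printf("%s -> %s\n", comm, str(args->filename)); }'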

CMD isn’t the only startup that has been building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.

Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.

“We have a saying at Elastic – while you observe, why not protect?” Banon said. “With CMD if you look at everything that they do, they also have this deep passion and belief that it starts with observability. “

It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.

“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.

That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.

“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.

#canada, #cloud, #cloud-computing, #cloud-infrastructure, #cmd, #elasticsearch, #facebook, #kubernetes, #linux, #open-source-technology, #security, #shay-banon, #vancouver

Not-a-Linux distro review: SerenityOS is a Unix-y love letter to the ‘90s

Today, I test-drove an in-development operating system project that seems almost disturbingly tailored to me specifically: SerenityOS. I cannot possibly introduce SerenityOS more accurately than its own website does:

SerenityOS is a love letter to ’90s user interfaces with a custom Unix-like core. It flatters with sincerity by stealing beautiful ideas from various other systems. Roughly speaking, the goal is a marriage between the aesthetic of late-1990s productivity software and the power-user accessibility of late-2000s *nix. This is a system by us, for us, based on the things we like.

Every word of this introduction is almost surgically accurate. To someone in SerenityOS’s target demographic—someone like myself (and likely many Arsians), who grew up with NT4 systems but matured on modern Linux and BSD—SerenityOS hits like a love letter from the ex you never quite forgot.

SerenityOS isn’t Linux—and it’s not BSD, either

What that brief intro doesn’t get across is the scale of the project. You might think that SerenityOS is just a Linux distro with an unusually ambitious vaporwave aesthetic, but it’s actually an entire operating system built from the ground up. That means custom-built kernel, display manager, shell… everything.


#bsd, #distro-review, #features, #freebsd, #linux, #netbsd, #serenityos, #tech

Valve’s upcoming Steam Deck will be based on Arch Linux—not Debian

SteamOS is rebasing from Debian to Arch Linux for the Steam Deck. As long as Valve puts in plenty of ongoing maintenance work, we think it’s a smart move. (credit: Valve / Arch Linux / Jim Salter)

As Ars Technica confirmed in May, two months ahead of its official reveal, Valve is about to re-enter the hardware space with its first portable PC, the Steam Deck. This custom x86 PC resembles an XL version of the Nintendo Switch and will begin shipping to buyers by the end of 2021, starting at $399.

Like other recent Valve hardware efforts, the Steam Deck will run a custom Linux distro by default. Today, we’re going to explore how Valve’s Linux approach will transform by the time Steam Deck launches—and what that will mean for gaming on Linux as a whole.

SteamOS vs. Windows

Although the Steam Deck is capable of running Windows—currently the premier PC gaming operating system—it won’t ship that way. Like Valve’s earlier Steam Machine effort, the Deck will ship with a custom Linux distribution instead.


#arch-linux, #debian, #gaming-culture, #linux, #linux-gaming, #proton, #steam, #tech

VCs are betting big on Kubernetes: Here are 5 reasons why

I worked at Google for six years. Internally, you have no choice — you must use Kubernetes if you are deploying microservices and containers (it’s actually not called Kubernetes inside of Google; it’s called Borg). But what was once solely an internal project at Google has since been open-sourced and has become one of the most talked about technologies in software development and operations.

For good reason. One person with a laptop can now accomplish what used to take a large team of engineers. At times, Kubernetes can feel like a superpower, but with all of the benefits of scalability and agility comes immense complexity. The truth is, very few software developers truly understand how Kubernetes works under the hood.
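
To be fair, the day-to-day surface really is that approachable. Assuming an existing cluster and a configured kubectl (a hedged sketch, not a production setup), deploying, scaling and exposing a containerized service takes three commands; it's everything underneath those commands that gets complicated.

    # Deploy a container image, scale it out, and expose it to the network
    kubectl create deployment hello --image=nginx
    kubectl scale deployment hello --replicas=5
    kubectl expose deployment hello --port=80 --type=LoadBalancer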

I like to use the analogy of a watch. From the user’s perspective, it’s very straightforward until it breaks. To actually fix a broken watch requires expertise most people simply do not have — and I promise you, Kubernetes is much more complex than your watch.

How are most teams solving this problem? The truth is, many of them aren’t. They often adopt Kubernetes as part of their digital transformation only to find out it’s much more complex than they expected. Then they have to hire more engineers and experts to manage it, which in a way defeats its purpose.

Where you see containers, you see Kubernetes to help with orchestration. According to Datadog’s most recent report about container adoption, nearly 90% of all containers are orchestrated.

All of this means there is a great opportunity for DevOps startups to come in and address the different pain points within the Kubernetes ecosystem. This technology isn’t going anywhere, so any platform or tooling that helps make it more secure, simple to use and easy to troubleshoot will be well appreciated by the software development community.

In that sense, there’s never been a better time for VCs to invest in this ecosystem. It’s my belief that Kubernetes is becoming the new Linux: 96.4% of the top million web servers’ operating systems are Linux. Similarly, Kubernetes is trending to become the de facto operating system for modern, cloud-native applications. It is already the most popular open-source project within the Cloud Native Computing Foundation (CNCF), with 91% of respondents using it — a steady increase from 78% in 2019 and 58% in 2018.

While the technology is proven and adoption is skyrocketing, there are still some fundamental challenges that will undoubtedly be solved by third-party solutions. Let’s go deeper and look at five reasons why we’ll see a surge of startups in this space.

Containers are the go-to method for building modern apps

Docker revolutionized how developers build and ship applications. Container technology has made it easier to move applications and workloads between clouds. It also provides as much resource isolation as a traditional hypervisor, but with considerable opportunities to improve agility, efficiency and speed.

#cloud, #cloud-computing, #cloud-infrastructure, #cloud-native-computing-foundation, #cloud-native-computing, #column, #databricks, #ec-cloud-and-enterprise-infrastructure, #ec-column, #ec-enterprise-applications, #enterprise, #google, #kubernetes, #linux, #microservices, #new-relic, #openshift, #rapid7, #red-hat, #startups, #ubuntu, #web-services

Paragon is working to get its ntfs3 filesystem into the Linux kernel

Your hard drives and SSDs aren’t any better than the filesystem you format them with. Paragon’s ntfs3 driver combines decent performance with a fully featured implementation—a combination that neither the in-kernel ntfs driver nor the FUSE-mounted ntfs-3g can claim. (credit: dublinmark / Getty Images)

In March of last year, proprietary filesystem vendor Paragon Software unleashed a stream of anti-open source FUD about a Samsung-derived exFAT implementation headed into the Linux kernel. Several months later, Paragon seemed to have seen the error of its ways and began the arduous process of getting its own implementation of Microsoft’s NTFS (the default filesystem for all Windows machines) into the kernel as well.

Although Paragon is still clearly struggling to get its processes and practices aligned to open source-friendly ones, Linux kernel BDFL Linus Torvalds seems to have taken a personal interest in the process. After nearly a year of effort by Paragon, Torvalds continues to gently nudge both it and skeptical Linux devs in order to keep the project moving forward.

Why Paragon?

To those familiar with daily Linux use, the utility of Paragon’s version of NTFS might not be immediately obvious. The Linux kernel already has one implementation of NTFS, and most distributions make it incredibly easy to install and use another, FUSE-based implementation (ntfs-3g) beyond that.
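
Concretely, reaching an NTFS volume already works either way on most systems today; the device name, mount point, and package name below are assumptions that vary by distribution:

    # Option 1: the long-standing in-kernel driver (limited write support)
    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs /dev/sdX1 /mnt/windows

    # Option 2: the FUSE-based ntfs-3g driver, with full read/write support
    sudo apt install ntfs-3g
    sudo mount -t ntfs-3g /dev/sdX1 /mnt/windows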


#biz-it, #filesystems, #linux, #ntfs, #ntfs-3g, #ntfs3, #paragon, #tech

Two-for-Tuesday vulnerabilities send Windows and Linux users scrambling

The world woke up on Tuesday to two new vulnerabilities—one in Windows and the other in Linux—that allow hackers with a toehold in a vulnerable system to bypass OS security restrictions and access sensitive resources.

As operating systems and applications become harder to hack, successful attacks typically require two or more vulnerabilities. One vulnerability allows the attacker access to low-privileged OS resources, where code can be executed or sensitive data can be read. A second vulnerability elevates that code execution or file access to OS resources reserved for password storage or other sensitive operations. The value of so-called local privilege escalation vulnerabilities, accordingly, has increased in recent years.

Breaking Windows

The Windows vulnerability came to light by accident on Monday when a researcher observed what he believed was a coding regression in a beta version of the upcoming Windows 11. The researcher found that the contents of the security account manager—the database that stores user accounts and security descriptors for users on the local computer—could be read by users with limited system privileges.


#biz-it, #exploits, #hacking, #linux, #tech, #vulnerabilities, #windows

The end of open source?

Several weeks ago, the Linux community was rocked by the disturbing news that University of Minnesota researchers had developed (but, as it turned out, not fully executed) a method for introducing what they called “hypocrite commits” to the Linux kernel — the idea being to distribute hard-to-detect behaviors, meaningless in themselves, that could later be aligned by attackers to manifest vulnerabilities.

This was quickly followed by the — in some senses, equally disturbing — announcement that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.

Though exploit development and disclosure is often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a little extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.

Equally certain, maintainers and project governance are duty bound to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least some of the point — that this was research rather than pure malice, and that it casts light on a kind of software (and organizational) vulnerability that begs for technical and systemic mitigation.

Projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models.

I think the “hypocrite commits” contretemps is symptomatic, on every side, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long wrestled with problems of scale, complexity and free and open-source software’s (FOSS) increasingly critical importance to every kind of human undertaking. Let’s look at that complex of problems:

  • The biggest open-source projects now present big targets.
  • Their complexity and pace have grown beyond the scale where traditional “commons” approaches or even more evolved governance models can cope.
  • They are evolving to commodify each other. For example, it’s becoming increasingly hard to state, categorically, whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note of this and have begun reorganizing around “full-stack” portfolios and narratives.
  • In so doing, some for-profit organizations have begun distorting traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, headcount commitments to FOSS and other metrics seem in decline.
  • OSS projects and ecosystems are adapting in diverse ways, sometimes making it difficult for for-profit organizations to feel at home or see benefit from participation.

Meanwhile, the threat landscape keeps evolving:

  • Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and so on.
  • Attacks are more financially, economically and politically profitable than ever.
  • Users are more vulnerable, exposed to more vectors than ever before.
  • The increasing use of public clouds creates new layers of technical and organizational monocultures that may enable and justify attacks.
  • Complex commercial off-the-shelf (COTS) solutions assembled partly or wholly from open-source software create elaborate attack surfaces whose components (and interactions) are accessible and well understood by bad actors.
  • Software componentization enables new kinds of supply-chain attacks.
  • Meanwhile, all this is happening as organizations seek to shed nonstrategic expertise, shift capital expenditures to operating expenses and evolve to depend on cloud vendors and other entities to do the hard work of security.

The net result is that projects of the scale and utter criticality of the Linux kernel aren’t prepared to contend with game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively low effort (using static analysis tools to assess units of code already identified as requiring contributor attention), propose “fixes” informally via email, and leverage many factors, including their own established reputation as reliable and frequent contributors, to bring exploit code to the verge of being committed.

This was a serious betrayal, effectively by “insiders” of a trust system that’s historically worked very well to produce robust and secure kernel releases. The abuse of trust itself changes the game, and the implied follow-on requirement — to bolster mutual human trust with systematic mitigations — looms large.

But how do you contend with threats like this? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project paces must be maintained (there are known bugs to fix, after all). And the threat is asymmetrical: As the classic line goes — blue team needs to protect against everything, red team only needs to succeed once.

I see a few opportunities for remediation:

  • Limit the spread of monocultures. Stuff like AlmaLinux and AWS’s Open Distro for Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
  • Reevaluate project governance, organization and funding with an eye toward mitigating complete reliance on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be happy to contribute to open source because of its openness, and not despite it, but within many communities, this may require a culture change for existing contributors.
  • Accelerate commodification by simplifying the stack and verifying the components. Push appropriate responsibility for security up into the application layers.

Basically, what I’m advocating here is that orchestrators like Kubernetes should matter less, and Linux should have less impact. Finally, we should proceed as fast as we can toward formalizing the use of things like unikernels.

Regardless, we need to ensure that both companies and individuals provide the resources open source needs to continue.

#column, #developer, #kernel, #kubernetes, #linux, #open-source-software, #operating-systems, #opinion, #university-of-minnesota

Breach simulation startup AttackIQ raises $44M to fuel expansion

AttackIQ, a cybersecurity startup that provides organizations with breach and attack simulation solutions, has raised $44 million in Series C funding as it looks to ramp up its international expansion.

The funding round was led by Atlantic Bridge, Saudi Aramco Energy Ventures (SAEV), and Gaingels, with existing investors — including Index Ventures, Khosla Ventures, Salesforce Ventures, and Telstra Ventures — also participating. The round brings the company’s total funding raised to date to $79 million.

AttackIQ was founded in 2013 and is based out of San Diego, California. It provides an automated validation platform that runs scenarios to detect any gaps in a company’s defenses, enabling organizations to test and measure the effectiveness of their security posture and receive guidance on how to fix what’s broken. Broadly, AttackIQ’s platform helps an organization’s security teams anticipate, prepare for, and hunt for threats that may impact their business before hackers get there first.

Its Security Optimization Platform, which supports Windows, Linux, and macOS across public, private, and on-premises cloud environments, is based on the MITRE ATT&CK framework, a curated knowledge base of known adversary threats, tactics, and techniques. The framework is also used by a number of other cybersecurity companies building continuous validation services, including FireEye, Palo Alto Networks, and Cymulate.

AttackIQ says this latest round of funding, which comes more than two years after its last, arrives at a “dynamic time” for the company. Not only has cybersecurity become more of a priority for organizations as a result of a major uptick in both ransomware and supply-chain attacks, but the company has also recently accelerated its international expansion efforts through a partnership with technology distributor Westcon.

The startup says it’s planning to use these new funds to further expand internationally through its newfound partnership with Atlantic Bridge, which will also see Kevin Dillon, Atlantic Bridge’s co-founder and managing director, join the AttackIQ board of directors.

“AttackIQ has established itself as a category leader with a formidable enterprise customer base that includes four of the Fortune 20,” said Dillon. “We believe deeply in the company’s vision and potential to become the next billion-dollar cybersecurity software company and look forward to helping the company turn early traction in Europe and the Middle East into robust, long-term expansion.”

Brett Galloway, CEO of AttackIQ, said the round “reaffirms the strength” of its platform.

As well as enabling organizations to review the robustness of their security defenses, the startup also runs the AttackIQ Academy, which provides free entry-level and advanced cybersecurity training. It has accumulated 17,200 registered students to date across 176 countries.

#atlantic-bridge, #california, #ceo, #computer-security, #computing, #cybersecurity-startup, #cymulate, #europe, #fireeye, #funding, #gaingels, #information-technology, #khosla-ventures, #linux, #microsoft-windows, #middle-east, #palo-alto-networks, #salesforce-ventures, #san-diego, #security, #simulation, #telstra-ventures

Here’s what you’ll need to upgrade to Windows 11

Since Microsoft’s announcement of Windows 11 yesterday, one concern has reverberated around the Web more loudly than any other—what’s this about a Trusted Platform Module requirement?

Windows 11 is the first Windows version to require a TPM, and most self-built PCs (and cheaper, home-targeted OEM PCs) don’t have a TPM module on board. Although this requirement is a bit of a mess, it’s not as onerous as millions of people have assumed. We’ll walk you through all of Windows 11’s announced requirements, including TPM—and note where each one is likely to be a problem.

General hardware requirements

Although Windows 11 does bump general hardware requirements up some from Windows 10’s extremely lenient minimums, it will still be challenging to find a PC that doesn’t meet most of these specifications. Here’s the list:

Read 29 remaining paragraphs | Comments

#features, #kvm, #linux, #microsoft, #tech, #tpm, #trusted-computing, #uefi, #virtualization, #windows, #windows-10, #windows-11

The ISRG wants to make the Linux kernel memory-safe with Rust

Rust coats a pipe in an industrial construction site.

Enlarge / No, not that kind of Rust. (credit: Heritage Images via Getty Images)

The Internet Security Research Group—parent organization of the better-known Let’s Encrypt project—has provided prominent developer Miguel Ojeda with a one-year contract to work on Rust in Linux and other security efforts on a full-time basis.

What’s a Rust for Linux?

As we covered in March, Rust is a low-level programming language offering most of the flexibility and performance of C—the language used for kernels in Unix and Unix-like operating systems since the 1970s—in a safer way.

Efforts to make Rust a viable language for Linux kernel development began at the 2020 Linux Plumbers conference, with acceptance for the idea coming from Linus Torvalds himself. Torvalds specifically requested Rust compiler availability in the default kernel build environment, to support such efforts—not to replace the entire source code of the Linux kernel with Rust-developed equivalents, but to make it possible for new development to work properly.

Read 5 remaining paragraphs | Comments

#isrg, #lets-encrypt, #linux, #rust, #tech

CentOS replacement distro Rocky Linux’s first general release is out

Rocky Linux 8.4 (Green Obsidian) is bug-for-bug compatible with RHEL 8.4 and should serve admirably as a CentOS Linux replacement.

Enlarge / Rocky Linux 8.4 (Green Obsidian) is bug-for-bug compatible with RHEL 8.4 and should serve admirably as a CentOS Linux replacement. (credit: RESF)

Rocky Linux—one of at least two new distributions created to fill the void left when CentOS Linux was discontinued by parent corporation Red Hat—announced general availability of Rocky Linux 8.4 today. Rocky Linux 8.4 is binary-compatible with Red Hat Enterprise Linux 8.4, making it possible to run apps designed and tested only for RHEL without RHEL itself.

Bug-for-bug, not just feature-for-feature

One of the questions we’ve gotten repeatedly since first covering CentOS Linux’s deprecation is “why not just use [my favorite distro]?” Linux and BSD users tend to be so accustomed to the same software working on multiple distributions, with similar package names and installation procedures, that they forget what using and installing proprietary software is frequently like.

Rocky Linux and competitor AlmaLinux (which released its own binary-compatible RHEL 8.4 clone in March) aren’t simply “Linux distros” or even “Linux distros which closely resemble RHEL.” They’re built from the same source code as RHEL 8.4, which guarantees that a wide array of proprietary software designed with nothing but RHEL 8.4 in mind will “just work,” regardless of how obscure a feature (or bug!) those packages depend upon in RHEL 8.4 might be.

Read 11 remaining paragraphs | Comments

#centos, #centos-stream, #linux, #linux-distributions, #rocky-linux, #tech

Microsoft’s Linux repositories were down for 18+ hours

Close-up photograph of a hand holding a toy penguin.

Enlarge / In 2017, Tux was sad that he had a Microsoft logo on his chest. In 2021, he’s mostly sad that Microsoft’s repositories were down for most of a day. (credit: Jim Salter)

Yesterday, packages.microsoft.com—the repository from which Microsoft serves software installers for Linux distributions including CentOS, Debian, Fedora, OpenSUSE, and more—went down hard, and it stayed down for around 18 hours. The outage impacted users trying to install .NET Core, Microsoft Teams, Microsoft SQL Server for Linux (yes, that’s a thing), and more—as well as Azure’s own DevOps pipelines.

We first became aware of the problem Wednesday evening when we saw 404 errors in the output of apt update on an Ubuntu workstation with Microsoft Teams installed. The outage is somewhat better documented in this .NET Core issue report on GitHub, where many users from all around the world shared their experiences and theories.

The short version is that the entire repository cluster that serves all Linux packages for Microsoft was completely down—issuing a range of HTTP 404 (content not found) and 500 (Internal Server Error) messages for any URL—for roughly 18 hours. Microsoft engineer Rahul Bhandari confirmed the outage roughly five hours after it was initially reported, with a cryptic comment about the infrastructure team “running into some space issues.”

Read 2 remaining paragraphs | Comments

#azure, #linux, #microsoft, #microsoft-loves-linux, #microsoft-azure, #teams, #tech

ZFS fans, rejoice—RAIDz expansion will be a thing very soon

OpenZFS supports many complex disk topologies, but "spiral stack sitting on a desk" still isn't one of them.

Enlarge / OpenZFS supports many complex disk topologies, but “spiral stack sitting on a desk” still isn’t one of them. (credit: Jim Salter)

OpenZFS founding developer Matthew Ahrens merged one of the most sought-after features in ZFS history—RAIDz expansion—into master last week. The new feature allows a ZFS user to expand the size of a single RAIDz vdev. For example, you can use it to turn a three-disk RAIDz1 into a four-, five-, or six-disk RAIDz1.

OpenZFS is a complex filesystem, and things are necessarily going to get a bit chewy explaining how the feature works. So if you’re a ZFS newbie, you may want to refer back to our comprehensive ZFS 101 introduction.

Expanding storage in ZFS

In addition to being a filesystem, ZFS is a storage array and volume manager, meaning that you can feed it a whole pile of disk devices, not just one. The heart of a ZFS storage system is the zpool—the most fundamental level of ZFS storage. The zpool in turn contains vdevs, and vdevs contain actual disks. Writes are split into units called records or blocks, which are then distributed semi-evenly among the vdevs.
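
As a rough mental model of that record distribution, here is a short Rust sketch. It is a toy only, not OpenZFS’s actual allocator (which weighs metaslab layout, allocation classes, and more), and the type names are invented for illustration: each record simply goes to whichever vdev currently reports the most free space, which keeps the pool roughly balanced.

```rust
// Toy model of a zpool spreading record-sized writes across its vdevs.
// Illustrative only; OpenZFS's real allocator is far more sophisticated.

struct Vdev {
    name: String,
    capacity: u64, // total bytes
    used: u64,     // bytes already allocated
}

impl Vdev {
    fn free(&self) -> u64 {
        self.capacity - self.used
    }
}

struct Zpool {
    vdevs: Vec<Vdev>,
}

impl Zpool {
    /// Split a write into fixed-size records and hand each record to the
    /// vdev with the most free space.
    fn write(&mut self, bytes: u64, record_size: u64) {
        let records = (bytes + record_size - 1) / record_size;
        for _ in 0..records {
            let target = self
                .vdevs
                .iter_mut()
                .max_by_key(|v| v.free())
                .expect("pool has no vdevs");
            target.used += record_size;
        }
    }
}

fn main() {
    let mut pool = Zpool {
        vdevs: vec![
            Vdev { name: "raidz1-0".into(), capacity: 12_000, used: 0 },
            Vdev { name: "raidz1-1".into(), capacity: 8_000, used: 0 },
        ],
    };
    pool.write(10_000, 128); // one 10,000-byte write, split into 128-byte records
    for v in &pool.vdevs {
        println!("{}: {} of {} bytes used", v.name, v.used, v.capacity);
    }
}
```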

Read 23 remaining paragraphs | Comments

#freebsd, #freenas, #ixsystems, #linux, #openzfs, #raidz-expansion, #tech, #truenas, #ubuntu, #zfs

Kai-Fu Lee’s Sinovation bets on Linux tablet maker Jingling in $10M round

Kai-Fu Lee’s Sinovation Ventures has its eyes on a niche market targeting software developers. In April, the venture capital fund led a $10 million angel round in Jingling, a Chinese startup developing Linux-based tablets and laptops, TechCrunch learned. Other investors in the round included private equity firm Trustbridge Partners.

Jingling was founded only in June 2020 but has quickly assembled a team of 80 employees hailing from the likes of Aliyun OS (Alibaba’s Linux distribution) and Thunder Software (a Chinese operating system solution provider), as well as active participants in China’s open source community.

The majority of the startup’s staff are working on its Linux-based operating system called JingOS in Beijing, with the rest developing hardware in Shenzhen, where its supply chain is located.

“Operating systems are a highly worthwhile field for investment,” Peter Fang, a partner at Sinovation Ventures, told TechCrunch. “We’ve seen the best product iteration for work and entertainment through the combination of iPad Pro and Magic Keyboard, but no tablet maker has delivered a superior user experience for the Android system so far, so we decided to back JingOS.”

“The investment is also in line with Sinovation’s recognition and prediction in ARM powering more mobile and desktop devices in the future,” the investor added.

Jingling’s first device, the JingPad A1 tablet based on the ARM architecture, has already shipped over 500 units in a pre-sale and is ramping up interest through a crowdfunding campaign. Jingling currently uses processors from Tsinghua Unigroup but is looking into Qualcomm and MediaTek chipsets for future production, according to Liu.

On the software end, JingOS, which is open sourced on GitHub, has accumulated over 50,000 installs from users around the world, most of whom are in the United States and Europe.

But how many people want a Linux tablet or laptop? Liu Chengcheng, who launched Jingling with Zhu Rui, said the demand is big enough from the developer community to sustain the startup’s early-phase growth. Liu is known for founding China’s leading startup news site 36Kr and Zhu is an operating system expert and a veteran of Motorola and Lenovo.

Targeting the Linux community is step one for Jingling, for “it’s difficult to gain a foothold by starting out in the [general] consumer market,” said Liu.

“The Linux market is too small for tech giants but too hard for small startups to tackle… Aside from Jingling, Huawei is the only other company in China building a mobile operating system, but HarmonyOS focuses more on IoTs.”

Linux laptops have been around for years, but Jingling wanted to stand out by offering both desktop and mobile experiences on one device. That’s why Jingling made JingOS compatible with both Linux desktop software like WPS Office and Terminal as well as the usual Android smartphone apps. The JingPad A1 tablet comes with a detachable keyboard that immediately turns it into a laptop, a setup similar to Apple’s Magic Keyboard for iPad.

“It’s a gift to programmers, who can use it to code in the Linux system but also use Android mobile apps on the run,” said Liu.

Jingling aspires to widen its user base and seize the Chromebook market about two years from now, Liu said. The success of Chromebooks, which comprised 10.8% of the PC market in 2020 and increasingly ate into Microsoft’s dominance, is indicative of the slowing demand for Windows personal computers, the founder observed.

The JingPad A1 is sold at a starting price of $549, compared to Chromebooks’ wide price range of roughly $200 to $550, depending on specs and hardware providers.

#android, #asia, #beijing, #china, #funding, #gadgets, #hardware, #ipad, #kai-fu-lee, #linus-torvalds, #linux, #mediatek, #operating-system, #operating-systems, #shenzhen, #software-developers, #tc, #trustbridge-partners

Nvidia and Valve are bringing DLSS to Linux gaming… sort of

Three different logos, including a cartoon penguin, have been photoshopped together.

Enlarge / Tux looks a lot more comfortable sitting on that logo than he probably should—Nvidia’s drivers are still proprietary, and DLSS support isn’t available for native Linux apps—only Windows apps running under Proton. (credit: Aurich Lawson / Jim Salter / Larry Ewing / Nvidia)

Linux gamers, rejoice—we’re getting Nvidia’s Deep Learning Super Sampling on our favorite platform! But don’t rejoice too hard; the new support only comes on a few games, and it’s only on Windows versions of those games played via Proton.

At Computex 2021, Nvidia announced a collaboration with Valve to bring DLSS support to Windows games played on Linux systems. This is good news, since DLSS can radically improve frame rates without perceptibly altering graphics quality. Unfortunately, as of this month, fewer than 60 games support DLSS in the first place; of those, roughly half work reasonably well in Proton, with or without DLSS.

What’s a DLSS, anyway?

Nvidia’s own benchmarking shows well over double the frame rate in Metro Exodus. Most third-party benchmarks “only” show an improvement of 50 to 75 percent. Note the DLSS image actually looks sharper and cleaner than the non-DLSS in this case! (credit: nvidia)

If you’re not up on all the gaming graphics jargon, DLSS is an acronym for Deep Learning Super Sampling. Effectively, DLSS takes a low-resolution image and uses deep learning to upsample it to a higher resolution on the fly. The impact of DLSS can be astonishing in games that support the tech—in some cases more than doubling non-DLSS frame rates, usually with little or no visual impact.
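
To make “upsample” concrete, here is the dumbest possible baseline: a nearest-neighbor 2x upscaler for a grayscale buffer, written in Rust purely for illustration (none of this is Nvidia’s code). DLSS replaces this kind of spatial-only interpolation with a trained neural network fed motion vectors and previous frames, which is why it can reconstruct detail that a naive upscaler simply cannot invent.

```rust
// Naive 2x nearest-neighbor upscale of a row-major grayscale image.
// Purely illustrative; DLSS uses a trained neural network instead of
// simple pixel duplication, which is how it recovers fine detail.

fn upscale_2x(src: &[u8], width: usize, height: usize) -> Vec<u8> {
    let mut dst = vec![0u8; width * 2 * height * 2];
    for y in 0..height * 2 {
        for x in 0..width * 2 {
            // Each output pixel copies its nearest source pixel.
            dst[y * width * 2 + x] = src[(y / 2) * width + (x / 2)];
        }
    }
    dst
}

fn main() {
    // A 2x2 "image" of four gray levels...
    let small = [10u8, 200, 60, 120];
    // ...becomes 4x4, with each source pixel duplicated into a 2x2 block.
    let big = upscale_2x(&small, 2, 2);
    for row in big.chunks(4) {
        println!("{:?}", row);
    }
}
```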

Read 10 remaining paragraphs | Comments

#dlss, #gaming-culture, #linux, #linux-gaming, #nvidia, #proton, #steam, #tech

Huawei officially launches Android alternative HarmonyOS for smartphones

Think you’re living in a hyper-connected world? Huawei’s proprietary HarmonyOS wants to eliminate delays and gaps in user experience when you move from one device onto another by adding interoperability to all devices, regardless of the system that powers them.

Two years after Huawei was added to the U.S. entity list that banned the Chinese telecom giant from accessing U.S. technologies, including core chipsets and Android developer services from Google, Huawei’s alternative smartphone operating system was unveiled.

On Wednesday, Huawei officially launched its proprietary operating system HarmonyOS for mobile phones. The firm began building the operating system in 2016 and made it open-source for tablets, electric vehicles and smartwatches last September. Its flagship devices such as Mate 40 could upgrade to HarmonyOS starting Wednesday, with the operating system gradually rolling out on lower-end models in the coming quarters.

HarmonyOS is not meant to replace Android or iOS, Huawei said. Rather, its application is more far-reaching, powering not just phones and tablets but an increasing number of smart devices. To that end, Huawei has been trying to attract hardware and home appliance manufacturers to join its ecosystem.

To date, more than 500,000 developers are building applications based on HarmonyOS. It’s unclear whether Google, Facebook and other mainstream apps in the West are working on HarmonyOS versions.

Some Chinese tech firms have answered Huawei’s call. Smartphone maker Meizu hinted on its Weibo account that its smart devices might adopt HarmonyOS. Oppo, Vivo and Xiaomi, who are much larger players than Meizu, are probably more reluctant to embrace a rival’s operating system.

Huawei’s goal is to collapse all HarmonyOS-powered devices into one single control panel, which can, say, remotely pair the Bluetooth connections of headphones and a TV. A game that is played on a phone can be continued seamlessly on a tablet. A smart soymilk blender can customize a drink based on the health data gleaned from a user’s smartwatch.

Devices that aren’t already on HarmonyOS can also communicate with Huawei devices with a simple plug-in. Photos from a Windows-powered laptop can be saved directly onto a Huawei phone if the computer has the HarmonyOS plug-in installed. That raises the question of whether Android, or even iOS, could, one day, talk to HarmonyOS through a common language.

The HarmonyOS launch arrived days before Apple’s annual developer event scheduled for next week. A recent job posting from Apple mentioned a seemingly new concept, homeOS, which may have to do with Apple’s smart home strategy, as noted by MacRumors.

Huawei denied speculations that HarmonyOS is a derivative of Android and said no single line of code is identical to that of Android. A spokesperson for Huawei declined to say whether the operating system is based on Linux, the kernel that powers Android.

Several tech giants have tried to introduce their own mobile operating systems to no avail. Alibaba built AliOS based on Linux but has long stopped updating it. Samsung flirted with its own Tizen, but that operating system is now limited to powering a few Internet of Things devices, like smart TVs.

Huawei may have a better shot at drumming up developer interest compared to its predecessors. It’s still one of China’s largest smartphone brands despite losing a chunk of its market after the U.S. government cut it off from critical chip suppliers, which could hamper its ability to make cutting-edge phones. HarmonyOS also has a chance to create an alternative for developers who are disgruntled with Android, if Huawei is able to capture their needs.

The U.S. sanctions do not block Huawei from using Android’s open-source software, which major Chinese smartphone makers use to build their third-party Android operating system. But the ban was like a death knell for Huawei’s consumer markets overseas as its phones abroad lost access to Google Play services.

#alibaba, #android, #apple, #asia, #bluetooth, #china, #facebook, #gadgets, #harmonyos, #huawei, #internet-of-things, #linux, #meizu, #microsoft-windows, #mobile, #mobile-linux, #mobile-operating-system, #mobile-phones, #open-source-software, #operating-system, #operating-systems, #smart-devices, #smartphone, #smartphones, #tc, #xiaomi

The open-source Contributor Covenant is now managed by the Organization for Ethical Source

Managing the technical side of open-source projects is often hard enough, but throw in the inevitable conflicts between contributors, who are often very passionate about their contributions, and things get even harder. One way to establish ground rules for open-source communities is the Contributor Covenant, created by Coraline Ada Ehmke back in 2014. Like so many projects in the open-source world, the Contributor Covenant was also a passion project for Ehmke. Over the years, its first two iterations have been adopted by organizations like the CNCF, Creative Commons, Apple, Google, Microsoft and the Linux project, in addition to hundreds of other projects.

Now, as work is starting on version 3.0, the Organization for Ethical Source (OES), of which Ehmke is a co-founder and executive director, will take over the stewardship of the project.

“Contributor Covenant was the first document of its kind as code of conduct for open-source projects — and it was incredibly controversial and actually remains pretty controversial to this day,” Ehmke told me. “But I come from the Ruby community, and the Ruby community really embraced the concept and also really embraced the document itself. And then it spread from there to lots of other open-source projects and other open-source communities.”

The core of the document is a pledge to “make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation,” and for contributors to act in ways that contribute to a diverse, open and welcoming community.

As Ehmke told me, one part that evolved over the course of the last few years is the addition of enforcement guidelines that are meant to help community leaders determine the consequences when members violate the code of conduct.

“One of the things that I try to do in this work is when people criticize the work, even if they’re not arguing in good faith, I try to see if there’s something in there that could be used as constructive feedback, something actionable,” Ehmke said. “A lot of the criticism for years for Contributor Covenant was people saying, ‘Oh, I’ll say one wrong thing and be permanently banned from our project, which is really grim and really unreasonable.’ What I took from that is that people are afraid of what consequences project leaders might impose on them for an infraction. Put that way, that’s kind of a reasonable concern.”

Ehmke described bringing the Covenant to the OES as an “exit to community,” similar to how companies will often bring their mature open-source projects under the umbrella of a foundation. She noted that the OES includes a lot of members with expertise in community management and project governance, which they will be able to bring to the project in a more formal way. “I’m still going to be involved with the evolution of Contributor Covenant, but it’s going to be developed under the working group model that the organization for ethical source has established,” she explained.

For version 3.0, Ehmke hopes to turn the Covenant into what she described as more of a “toolkit” that will allow different communities to tailor it a bit more to their own goals and values (though still within the core ethical principles outlined by the OES).

“Microsoft’s adoption of Contributor Covenant represents our commitment to building healthy, diverse and inclusive communities, as well as our intention to contribute and build together with others in the ecosystem,” said Emma Irwin, a program manager in Microsoft’s Open Source Program Office. “I am honored to bring this intention and my expertise to the OES’s Contributor Covenant 3.0 working group.”

#apple, #contributor-covenant, #creative-commons, #developer, #google, #intellectual-property-law, #linux, #microsoft, #open-source-software, #ruby, #tc

Google updates its cross-platform Flutter UI toolkit

Flutter, Google’s cross-platform UI toolkit for building mobile and desktop apps, is getting a small but important update at the company’s I/O conference today. Google also announced that Flutter now powers 200,000 apps in the Play Store alone, including popular apps from companies like WeChat, ByteDance, BMW, Grab and DiDi. Indeed, Google notes that 1 in 8 new apps in the Play Store are now Flutter apps.

The launch of Flutter 2.2 follows Google’s rollout of Flutter 2, which first added support for desktop and web apps in March, so it’s no surprise that this is a relatively minor release. In many ways, the update builds on the features the company introduced in version 2, adding reliability and performance improvements.

Version 2.2 makes null safety the default for new projects, for example, to add protections against null reference exceptions. As for performance, web apps can now use background caching using service workers, for example, while Android apps can use deferred components and iOS apps get support for precompiled shaders to make first runs smoother.

Google also worked on streamlining the overall process of bringing Flutter apps to desktop platforms (Windows, macOS and Linux).

But as Google notes, a lot of the work right now is happening in the ecosystem. Google itself is introducing a new payment plugin for Flutter built in partnership with the Google Pay team, and Google’s ads SDK for Flutter is getting support for adaptive banner formats. Meanwhile, Samsung is now porting Flutter to Tizen and Sony is leading an effort to bring it to embedded Linux. Adobe recently announced its XD to Flutter plugin for its design tool, and Microsoft today launched alpha support in Flutter for building Universal Windows Platform (UWP) apps for Windows 10.

#adobe, #alpha, #android, #bytedance, #caching, #chrome-os, #computing, #flutter, #google, #google-i-o-2021, #google-pay, #linux, #microsoft, #microsoft-windows, #operating-systems, #play-store, #samsung, #sony, #tc, #universal-windows-platform, #web-apps, #wechat, #windows-10

CentOS replacement distro AlmaLinux gets commercial support options

Today, CloudLinux Inc announced that it will offer commercial support for the AlmaLinux community distribution. The new support plans will be available next week and will include regular patches and updates for AlmaLinux’s kernel and core packages, patch delivery SLAs, and 24/7 incident support.

What’s AlmaLinux?

AlmaLinux is one of several Linux distributions jostling for position as “the new CentOS” in the wake of Red Hat’s December 2020 deprecation of its own free-as-in-beer Red Hat Enterprise Linux clone distribution.

AlmaLinux was initially sponsored by CloudLinux Inc. and is based on its own CloudLinux commercial distribution—but the company specifically set up the new distribution to be community owned and governed. Its qualifications as “the new CentOS” come from being built from the source code of Red Hat Enterprise Linux (RHEL).

Read 7 remaining paragraphs | Comments

#almalinux, #centos, #cloudlinux, #linux, #linux-distro, #red-hat, #rhel, #tech

Linux kernel team rejects University of Minnesota researchers’ apology

A penguin stares menacingly at us.

Enlarge / Do not anger the penguin, for it is long of memory and slow to forgive. (credit: DJRPhoto36 / Flickr)

Last week, senior Linux kernel developer Greg Kroah-Hartman announced that all Linux patches coming from the University of Minnesota would be summarily rejected by default.

This policy change came as a result of three University of Minnesota researchers—Qiushi Wu, Kangjie Lu, and Aditya Pakki—embarking on a program to test the Linux kernel dev community’s resistance to what the group called “Hypocrite Commits.”

Testing the Linux kernel community

The trio’s scheme involved first finding three easy-to-fix, low-priority bugs in the Linux kernel and then fixing them—but fixing them in such a way as to complete what the UMN researchers called an “immature vulnerability”:

Read 8 remaining paragraphs | Comments

#greg-k-h, #greg-kroah-hartman, #hypocrite-commits, #infosec, #linux, #linux-foundation, #linux-kernel, #security-patches, #tech

Graphical Linux apps are coming to Windows Subsystem for Linux

This week, Microsoft launched support for graphical and audio Linux apps under the Windows Subsystem for Linux—although the new feature is only available in the Dev channel of Insider builds, for now. The new feature is nicknamed WSLg, and it includes both X and PulseAudio servers. We gave WSLg some limited testing today, and it performed rather well.

After running apt install firefox in the WSL2/Ubuntu terminal, we ran an Ubuntu-flavored web browser and played several videos on YouTube. We don’t necessarily recommend you base your next HTPC on WSLg—but the videos were watchable, with decent frame rate and non-skipping audio. (We tested WSLg with a Ryzen 5 Pro 2500U-powered Minisforum UM250 Mini-PC.)

More importantly, virt-manager worked very well on the little Minisforum—in very short order, we set up a “virt-ception” by using virt-manager beneath WSL2/Ubuntu running on Windows 10 to access a Windows VM running under Ubuntu on a machine across the office. (You can also see a Hackintosh VM in the background.)

Read 4 remaining paragraphs | Comments

#linux, #tech, #ubuntu, #windows, #windows-subsystem-for-linux, #wsl, #wsl2

Apple M1 hardware support merged into Linux 5.13

We're still a long way away from a smooth, quick boot with a fancy Asahi logo centered on the screen and (presumably) a soothing startup noise.

Enlarge / We’re still a long way away from a smooth, quick boot with a fancy Asahi logo centered on the screen and (presumably) a soothing startup noise. (credit: Asahi Linux)

Asahi Linux—founded by Hector “marcan” Martin—has merged initial support for Apple M1 hardware into the Linux system-on-chip (SoC) tree, where it will hopefully make it into the Linux 5.13 kernel (which we can expect roughly in July).

What’s an Asahi?

Asahi is the Japanese name for what we know as the McIntosh Apple—the specific fruit cultivar that gave the Mac its name. Asahi Linux is a fledgling distribution founded with the specific goal of creating a workable daily-driver Linux experience on Apple M1 silicon.

This is a daunting task. Apple does not offer any community documentation for Apple Silicon, so Martin and cohorts must reverse-engineer the hardware as well as write drivers for it. And this is especially difficult considering the M1 GPU—without first-class graphics support, Asahi cannot possibly offer a first-class Linux experience on M1 hardware such as the 2020 M1 Mac Mini, Macbook Air, and Macbook Pro.

Read 8 remaining paragraphs | Comments

#apple, #apple-m1, #linux, #tech

Esri brings its flagship ArcGIS platform to Kubernetes

Esri, the geographic information system (GIS), mapping and spatial analytics company, is hosting its (virtual) developer summit today. Unsurprisingly, it is making a couple of major announcements at the event that range from a new design system and improved JavaScript APIs to support for running ArcGIS Enterprise in containers on Kubernetes.

The Kubernetes project was a major undertaking for the company, Esri Product Managers Trevor Seaton and Philip Heede told me. Traditionally, like so many similar products, ArcGIS was architected to be installed on physical boxes, virtual machines or cloud-hosted VMs. And while it doesn’t really matter to end-users where the software runs, containerizing the application means that it is far easier for businesses to scale their systems up or down as needed.

Esri ArcGIS Enterprise on Kubernetes deployment

“We have a lot of customers — especially some of the larger customers — that run very complex questions,” Seaton explained. “And sometimes it’s unpredictable. They might be responding to seasonal events or business events or economic events, and they need to understand not only what’s going on in the world, but also respond to their many users from outside the organization coming in and asking questions of the systems that they put in place using ArcGIS. And that unpredictable demand is one of the key benefits of Kubernetes.”

Deploying Esri ArcGIS Enterprise on Kubernetes

The team could have chosen to go the easy route and put a wrapper around its existing tools to containerize them and call it a day, but as Seaton noted, Esri used this opportunity to re-architect its tools and break them down into microservices.

“It’s taken us a while because we took three or four big applications that together make up [ArcGIS] Enterprise,” he said. “And we broke those apart into a much larger set of microservices. That allows us to containerize specific services and add a lot of high availability and resilience to the system without adding a lot of complexity for the administrators — in fact, we’re reducing the complexity as we do that and all of that gets installed in one single deployment script.”

While Kubernetes simplifies a lot of the management experience, a lot of companies that use ArcGIS aren’t yet familiar with it. And as Seaton and Heede noted, the company isn’t forcing anyone onto this platform. It will continue to support Windows and Linux just like before. Heede also stressed that it’s still unusual — especially in this industry — to see a complex, fully integrated system like ArcGIS being delivered in the form of microservices and multiple containers that its customers then run on their own infrastructure.

Image Credits: Esri

In addition to the Kubernetes announcement, Esri also today announced new JavaScript APIs that make it easier for developers to create applications that bring together Esri’s server-side technology and the scalability of doing much of the analysis on the client-side. Back in the day, Esri would support tools like Microsoft’s Silverlight and Adobe/Apache Flex for building rich web-based applications. “Now, we’re really focusing on a single web development technology and the toolset around that,” Esri product manager Julie Powell told me.

A bit later this month, Esri also plans to launch its new design system to make it easier and faster for developers to create clean and consistent user interfaces. This design system will launch April 22, but the company already provided a bit of a teaser today. As Powell noted, the challenge for Esri is that its design system has to help the company’s partners to put their own style and branding on top of the maps and data they get from the ArcGIS ecosystem.

#computing, #developer, #enterprise, #esri, #gis, #javascript, #kubernetes, #linux, #microsoft-windows, #software, #tc, #vms

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the ‘cloud financial management’ space to establish best practices and standards. As the term implies, ‘cloud financial management,’ is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze, and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, Vice President of Engineering and Product at Google Cloud. “More visibility, efficiency, and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, Executive Director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the 2nd of 3 dedicated Premier Member Technical Advisory Council seats.”

#cloud, #cloud-computing, #cloud-infrastructure, #cloudability, #computing, #densify, #enterprise, #google, #google-cloud, #linux, #linux-foundation, #vmware

Red Hat withdraws from the Free Software Foundation after Stallman’s return

sad penguin

Enlarge

Last week, Richard M. Stallman—father of the GNU General Public License that underpins Linux and a significant part of the user-facing software that initially accompanied the Linux kernel—returned to the board of the Free Software Foundation after a two-year hiatus due to his own highly controversial remarks about his perception of Jeffrey Epstein’s victims as “entirely willing.”

As a result of RMS’ reinstatement, Red Hat—the Raleigh, North Carolina-based open source software giant that produces Red Hat Enterprise Linux—has publicly withdrawn funding and support from the Free Software Foundation:

Red Hat was appalled to learn that [Stallman] had rejoined the FSF board of directors. As a result, we are immediately suspending all Red Hat funding of the FSF and any FSF-hosted events.

Red Hat’s relatively brief statement goes on to acknowledge an FSF statement on board governance that appeared on the same day:

Read 10 remaining paragraphs | Comments

#free-software-foundation, #fsf, #linux, #red-hat, #red-hat-enterprise-linux, #richard-m-stallman, #rms, #stallman, #tech

Linus Torvalds weighs in on Rust language in the Linux kernel

Rust coats a pipe in an industrial construction site.

Enlarge / No, not that kind of Rust. (credit: Heritage Images via Getty Images)

This week, ZDNet’s Steven J. Vaughan-Nichols asked Linus Torvalds and Greg Kroah-Hartman about the possibility of new Linux kernel code being written in Rust—a high-performance but memory-safe language originally sponsored by the Mozilla project.

C versus Rust

As of now, the Linux kernel is written in the C programming language—essentially, the same language used to write kernels for Unix and Unix-like operating systems since the 1970s. The great thing about C is that it’s not assembly language—it’s considerably easier to read and write, and it’s generally far more portable between hardware architectures. However, C still opens you up to nearly the entire range of catastrophic errors possible in assembly.

In particular, as a language without automatic memory management, C opens the programmer up to memory leaks and buffer overflows. When you’re done with memory you’ve allocated, you must explicitly free it—otherwise, orphaned allocations pile up until the system runs out of memory. Similarly, you must allocate a buffer large enough for the data you intend to store—if you attempt to put too much data into too small an area of RAM, you’ll end up overwriting locations you shouldn’t.
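
As a minimal illustration of those two failure modes, and of how a memory-safe language closes them off, here is a short Rust sketch. It is not kernel code and not drawn from the interview; it simply shows that heap memory is freed automatically when its owner goes out of scope, and that out-of-bounds writes are refused rather than silently corrupting neighboring memory.

```rust
// Illustrative only: how Rust handles the two C pitfalls described above.

fn main() {
    // 1) No manual free: an allocation is released when its owner goes out
    //    of scope, so "forgetting to free" is not possible in ordinary
    //    safe code.
    {
        let buffer = vec![0u8; 1024]; // heap allocation
        println!("buffer holds {} bytes", buffer.len());
    } // `buffer` is dropped here and its memory is freed automatically

    // 2) No silent overflow: indexing is bounds-checked, so writing past
    //    the end of a buffer fails loudly instead of overwriting whatever
    //    happens to live at the neighboring address.
    let mut data = vec![0u8; 4];
    let index = 4; // one past the end
    if let Some(slot) = data.get_mut(index) {
        *slot = 0xff;
    } else {
        println!("refused out-of-bounds write at index {index}");
    }
    // `data[index] = 0xff;` would panic at runtime rather than corrupt memory.
}
```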

Read 11 remaining paragraphs | Comments

#greg-k-h, #greg-kroah-hartman, #kernel, #linus, #linus-torvalds, #linux, #rust, #tech, #torvalds