No-code is code

Today, the release of OpenAI Codex, a new AI system that translates natural language to code, marks the beginning of a shift in how computer software is written.

Over the past few years, there’s been growing talk about “no code” platforms, but this is no new phenomenon. The reality is, ever since the first programmable devices, computer scientists have regularly developed breakthroughs in how we “code” computer software.

The first computers were programmed with switches or punch cards, until the keyboard was invented. Coding became a matter of typing numeric machine language, until Grace Hopper developed the first compiler and helped create COBOL, ushering in decades of innovation in programming languages and platforms. Languages like Fortran, Pascal, C, Java and Python evolved in a progression, where each new language (built using an older language) enabled programmers to “code” in increasingly human-readable language.

Alongside languages, we’ve seen the evolution of “no-code” platforms — including Microsoft Excel, the 1980s granddaddy of no-code — that empower people to program computers in a visual interface, whether in school or in the workplace. Anytime you write a formula in a spreadsheet, or when you drag a block of code on Code.org or Scratch, you’re programming, or “coding,” a computer. “No code” is code. Every decade, a breakthrough innovation makes it easier to write code so that the old way of coding is replaced by the new.

This brings us to today’s announcement. Today, OpenAI announced OpenAI Codex, an entirely new way to “write code” in natural English. A computer programmer can now describe in English what they want their software to do, and OpenAI’s generative AI model will automatically generate the corresponding computer code in the programming language of their choice. This is what we’ve always wanted — for computers to understand what we want them to do, and then do it, without having to go through a complex intermediary like a programming language.
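
To make that concrete, here is an illustrative sketch (not taken from OpenAI’s announcement) of the kind of exchange Codex enables: a plain-English request, followed by the sort of Python a model like Codex might generate in response.

```python
# Plain-English prompt: "Write a function that takes a list of prices and
# returns the average price rounded to two decimal places."

# The sort of Python a model like Codex might generate in response:
def average_price(prices):
    """Return the mean of `prices`, rounded to two decimal places."""
    if not prices:
        return 0.0
    return round(sum(prices) / len(prices), 2)

print(average_price([19.99, 5.49, 12.00]))  # 12.49
```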

But this is not an end; it is a beginning. With AI-generated code, one can imagine an evolution in every programming tool and every programming class, and a Cambrian explosion of new software. Does this mean coding is dead? No! It doesn’t replace the need for a programmer to understand code. It means coding just got much easier, higher impact and thus more important, just as when punch cards were replaced by keyboards, or when Grace Hopper developed the first compiler.

In fact, the demand for software today is greater than ever and will only continue to grow. As this technology evolves, AI will play a greater role in generating code, which will multiply the productivity and impact of computer scientists, and will make this field accessible to more and more computer programmers.

There are already tools that let you program using only drag-and-drop or write code using your voice. Improvements in these technologies, and new tools like OpenAI Codex, will increasingly democratize the ability to create software. As a result, the amount of code — and the number of coders — in the world will increase.

This also means that learning how to program — in a new way — is more important than ever. Learning to code can unlock doors to opportunity and also help solve global problems. As it becomes easier and more accessible to create software, we should give every student in every school the fundamental knowledge to not only be a user of technology but also a creator.

#code-org, #coding, #column, #computer-science, #computing, #developer, #openai, #science-and-technology, #software, #startups, #tc

Google turns AlphaFold loose on the entire human genome

Image: a diagram of ribbons and coils (credit: Sloan-Kettering)

Just one week after Google’s DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure—a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures.

In a press conference associated with the paper’s release, DeepMind’s Demis Hassabis made clear that the company isn’t stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.

What’s in a structure?

We just described DeepMind’s software last week, so we won’t go into much detail here. The effort is an AI-based system trained on the structure of existing proteins that had been determined (often laboriously) through laboratory experiments. The system uses that training, plus information it obtains from families of proteins related by evolution, to predict how a protein’s chain of amino acids folds up in three-dimensional space.

#ai, #biochemistry, #biology, #computer-science, #protein-folding, #science

PlasticARM is a 32-bit bendable processor

Image: the PlasticARM processor, showing its dimensions and components (credit: Biggs et al.)

Wearable electronics, like watches and fitness trackers, represent the next logical step in computing. They’ve sparked an interest in the development of flexible electronics, which could extend wearables to things like clothing and backpacks.

Flexible electronics, however, run into a problem: our processing hardware is anything but flexible. Most efforts at dealing with that limitation have involved splitting up processors into a collection of smaller units, linking them with flexible wiring, and then embedding all the components in a flexible polymer. To an extent, it’s a throwback to the early days of computing, when a floating point unit might reside on a separate chip.

But a group within the semiconductor company ARM has now managed to implement one of the company’s smaller embedded designs in flexible electronics. The design works and executes all the instructions you’d expect from it, but it also illustrates the compromises we have to make for truly flexible electronics.

#arm, #computer-science, #materials-science, #processor, #science

Google tries out error correction on its quantum processor

Image: Google’s Sycamore processor, a chip above iridescent wiring (credit: Google)

The current generation of quantum hardware has been termed “NISQ”: noisy, intermediate-scale quantum processors. “Intermediate-scale” refers to a qubit count that is typically in the dozens, while “noisy” references the fact that current qubits frequently produce errors. These errors can be caused by problems setting or reading the qubits or by the qubit losing its state during calculations.

Long-term, however, most experts expect that some form of error correction will be essential. Most of the error-correction schemes involve distributing a qubit’s logical information across several qubits and using additional qubits to track that information in order to identify and correct errors.
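
As a toy illustration of that principle (spread one logical bit over several physical bits, then use redundancy to catch errors), here is a classical three-bit repetition code in Python. Real quantum error-correction schemes are far more involved; this sketch only shows the redundancy-plus-majority-vote idea.

```python
import random

def encode(logical_bit):
    """Spread one logical bit across three physical copies."""
    return [logical_bit] * 3

def noisy_channel(bits, flip_prob=0.1):
    """Each physical bit flips independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one copy flipped."""
    return 1 if sum(bits) >= 2 else 0

logical = 1
received = noisy_channel(encode(logical))
print(received, "->", decode(received))
```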

Back when we visited the folks from Google’s quantum computing group, they mentioned that the layout of their processor was chosen because it simplifies implementing error correction. Now, the team is running two different error-correction schemes on the processor. The results show that error correction clearly works, but we’ll need a lot more qubits and a lower inherent error rate before correction is useful.

#computer-science, #error-correction, #quantum-computing, #quantum-mechanics, #qubits, #science

Quantum-computing startup Rigetti to offer modular processors

Image: a grey metallic rectangle; it may look nearly featureless, but it’s meant to contain 80 qubits (credit: Rigetti Computing)

A quantum-computing startup announced Tuesday that it will make a significant departure in its designs for future quantum processors. Rather than building a monolithic processor as everyone else has, Rigetti Computing will build smaller collections of qubits on chips that can be physically linked together into a single functional processor. This isn’t multiprocessing so much as a modular chip design.

The decision has several consequences, both for Rigetti processors and quantum computing more generally. We’ll discuss them below.

What’s holding things back

Rigetti’s computers rely on a technology called a “transmon,” based on a superconducting wire loop linked to a resonator. That’s the same qubit technology used by larger competitors like Google and IBM. Transmons are set up so that the state of one can influence that of its neighbors during calculations, an essential feature of quantum computing. To an extent, the topology of connections among transmon qubits is a key contributor to the machine’s computational power.

#computer-science, #quantum-computing, #rigetti, #science

Researchers build a metadata-based image database using DNA storage

Image: fluorescently tagged DNA is the key to a new storage system (credit: Gerald Barber, Virginia Tech)

DNA-based data storage appears to offer solutions to some of the problems created by humanity’s ever-growing capacity to create data we want to hang on to. Compared to most other media, DNA offers phenomenal data densities. If stored in the right conditions, it doesn’t require any energy to maintain the data for centuries. And due to DNA’s centrality to biology, we’re always likely to maintain the ability to read it.

But DNA is not without its downsides. Right now, there’s no standard method of encoding bits in the pattern of bases of a DNA strand. Synthesizing specific sequences remains expensive. And accessing the data using current methods is slow and depletes the DNA being used for storage. Try to access the data too many times, and you have to restore it in some way—a process that risks introducing errors.
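
To make “encoding bits in the pattern of bases” concrete, here is a naive illustrative mapping of two bits per nucleotide in Python. This is not the scheme the MIT and Broad team used (practical systems add error-correcting codes, addressing sequences and constraints on repeated bases); it only shows the basic idea.

```python
# Naive illustration: map every 2 bits to one DNA base (and back).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: s for s, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)           # "CAGACGGC" for the bytes 0x48 0x69
print(decode(strand))   # b"Hi"
```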

A team based at MIT and the Broad Institute has decided to tackle some of these issues. In the process, the researchers have created a DNA-based image-storage system that’s somewhere between a file system and a metadata-based database.

#biology, #computer-science, #dna, #science, #storage

Programming a robot to teach itself how to move

Image: the robotic train, three small pieces of hardware connected by tubes (credit: Oliveri et al.)

One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and StarCraft. Given that self-teaching capability, it’s tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we’re not there yet. But it should be much easier with a simpler system, right?

Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

Roving robots

The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.
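
The excerpt doesn’t spell out the learning algorithm itself, so the following is only a minimal sketch of the general learn-by-doing idea: nudge each unit’s pump timing at random and keep any change that moves the robot farther. The `measure_displacement` function stands in for reading the real robot’s motion sensors and is faked here with a toy objective so the sketch runs.

```python
import random

def measure_displacement(phases):
    """Stand-in for the robot's motion sensor: a toy objective that rewards
    evenly staggered pump phases (purely illustrative)."""
    n = len(phases)
    target = [i / n for i in range(n)]
    return -sum((p - t) ** 2 for p, t in zip(sorted(phases), target))

def learn_by_doing(n_units=3, trials=200, step=0.05):
    phases = [random.random() for _ in range(n_units)]   # pump timing per unit
    best = measure_displacement(phases)
    for _ in range(trials):
        candidate = [(p + random.uniform(-step, step)) % 1.0 for p in phases]
        score = measure_displacement(candidate)
        if score > best:                                  # keep what moved farther
            phases, best = candidate, score
    return phases, best

print(learn_by_doing())
```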

#computer-science, #machine-learning, #robotics, #science

Honeywell releases details of its ion trap quantum computer

Image: a small electronic device; the line down the middle is where the trapped ions reside (credit: Honeywell)

About a year ago, Honeywell announced that it had entered the quantum computing race with a technology that was different from anything else on the market. The company claimed that because the performance of its qubits was so superior to that of its competitors’ qubits, its computer could do better on a key quantum computing benchmark than quantum computers with far more qubits.

Now, roughly a year later, the company finally released a paper describing the feat in detail. But in the meantime, the competitive landscape has shifted considerably.

It’s a trap!

In contrast to companies like IBM and Google, Honeywell has decided against superconducting circuitry in favor of a technology called “trapped ions.” In general, trapped-ion machines use a single ion as each qubit and manipulate its state using lasers. There are different ways to build ion trap computers, however, and Honeywell’s version is distinct from another on the market, made by a competitor called IonQ (which we’ll come back to).

#computer-science, #honeywell, #ion-trap, #physics, #quantum-computing, #quantum-mechanics, #science

How does the brain interpret computer languages?

Image: a pillar covered in lit ones and zeroes (credit: Barcroft Media / Getty Images)

In the US, a 2016 Gallup poll found that the majority of schools want to start teaching code, with 66 percent of K-12 school principals thinking that computer science learning should be incorporated into other subjects. Most countries in Europe have added coding classes and computer science to their school curricula, with France and Spain introducing theirs in 2015. This new generation of coders is expected to boost the worldwide developer population from 23.9 million in 2019 to 28.7 million in 2024.

Despite all this effort, there’s still some confusion about how to teach coding. Is it more like a language, or more like math? Some new research may have settled this question by watching the brain’s activity while subjects read Python code.

Two schools on schooling

Right now, there are two schools of thought. The prevailing one is that coding is a type of language, with its own grammar rules and syntax that must be followed. After all, they’re called coding languages for a reason, right? This idea even has its own snazzy acronym: Coding as Another Language, or CAL.

#biology, #brain, #computer-science, #language, #math, #neuroscience, #programming-language, #science

D-Wave’s hardware outperforms a classic computer

Image credit: D-Wave

Early on in D-Wave’s history, the company made bold claims about its quantum annealer outperforming algorithms run on traditional CPUs. Those claims turned out to be premature, as improvements to these algorithms pulled the traditional hardware back in front. Since then, the company has been far more circumspect about its performance claims, even as it brought out newer generations of hardware.

But in the run-up to the latest hardware, the company apparently became a bit more interested in performance again. And it recently got together with Google scientists to demonstrate a significant boost in performance compared to a classical algorithm, with the gap growing as the problem became more complex—although the company’s scientists were very upfront about the prospects of finding a way to boost classical hardware further. Still, there are a lot of caveats even beyond that, so it’s worth taking a detailed look at what the company did.

Magnets, how do they flip?

D-Wave’s system is based on a large collection of quantum devices that are connected to some of their neighbors. Each device can have its state set separately, and the devices are then given the chance to influence their neighbors as the system moves through different states and individual devices change their behavior. These transitions are the equivalent of performing operations. And because of the quantum nature of these devices, the hardware seems to be able to “tunnel” to new states, even if the only route between them involves high-energy states that the system couldn’t otherwise reach.

#computer-science, #d-wave, #quantum-computing, #quantum-mechanics, #science, #xeon

SuperAnnotate, a computer vision platform, partners with open-source OpenCV to spread visual ML

SuperAnnotate, a NoCode computer vision platform, is partnering with OpenCV, a non-profit organization that has built a large collection of open-source computer vision algorithms. The move means startups and entrepreneurs will be able to build their own AI models and allow cameras to detect objects using machine learning. SuperAnnotate has raised $3M to date from investors including Point Nine Capital, Fathom Capital and Berkeley SkyDeck Fund.

The AI-powered computer vision platform for data scientists and annotation teams will provide OpenCV AI Kit (OAK) users with access to its platform, as well as launch a computer vision course on building AI models. SuperAnnotate will also set up the AI Kit’s camera to detect objects using machine learning, and OAK users will get $200 of credit to set up their systems on its platform.

The OAK is a multi-camera device that can run computer vision and 3D perception tasks such as identifying objects, counting people and measuring distances. Since launching, around 11,000 of these cameras have been distributed.

The AI Kit has so far been used to build drone and security applications, agricultural vision sensors and even COVID-related detection devices (for example, to identify whether someone is wearing a mask).

Tigran Petrosyan, co-founder and CEO at SuperAnnotate, said in a statement: “Computer vision and smart camera applications are gaining momentum, yet not many have the relevant AI expertise to implement those. With OAK Kit and SuperAnnotate, one can finally build their smart camera system, even without coding experience.”

Competitors to SuperAnnotate include Dataloop, Labelbox, Appen and Hive.

#articles, #artificial-intelligence, #computer-science, #computer-vision, #computing, #europe, #machine-learning, #opencv, #point-nine-capital, #tc

Classiq raises $10.5M Series A round for its quantum software development platform

Classiq, a Tel Aviv-based startup that aims to make it easier for computer scientists and developers to create quantum algorithms and applications, today announced that it has raised a $10.5 million Series A round led by Team8 Capital and Wing Capital. Entrée Capital, crowdfunding platform OurCrowd and Sumitomo Corporation (through IN Venture) also participated in this round, which follows the company’s recent $4 million seed round led by Entrée Capital.

Classiq, which currently has just under a dozen people on its team, was founded on the premise that developing quantum algorithms remains a major challenge.

“Today, quantum software development is almost an impossible task,” said Nir Minerbi, CEO and Co-founder of Classiq. “The programming is at the gate level, with almost no abstraction at all. And on the other hand, for many enterprises, that’s exactly what they want to do: come up with game-changing quantum algorithms. So we built the next layer of the quantum software stack, which is the layer of a computer-aided design, automation, synthesis. […] So you can design the quantum algorithm without being aware of the details and the gate level details are automated.”

Image Credits: Classiq

With Microsoft’s Q#, IBM’s Qiskit and their competitors, developers already have access to quantum-specific languages and frameworks. And as Amir Naveh, Classiq’s VP of R&D, told me, just like with those tools, developers will define their algorithms as code — in Classiq’s case a variant of Python. With those other languages, though, you write sequences of gates on the qubits to define your quantum circuit.
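
For contrast, here is roughly what that gate-level style looks like in Qiskit: a generic two-qubit Bell-state circuit where every gate on every qubit is written out by hand. It is a standard textbook example, not code from Classiq’s announcement.

```python
from qiskit import QuantumCircuit

# Gate-level programming: every gate on every qubit is spelled out by hand.
qc = QuantumCircuit(2, 2)
qc.h(0)          # Hadamard on qubit 0 puts it into superposition
qc.cx(0, 1)      # CNOT entangles qubit 0 with qubit 1 (a Bell state)
qc.measure([0, 1], [0, 1])

print(qc.draw())
```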

“What you’re writing down isn’t gates on qubits, it’s concepts, it’s constructs, it’s constraints — it’s always constraints on what you want the circuit to achieve,” Naveh explained. “And then the circuit is synthesized from the constraints. So in terms of the visual interface, it would look the same [as using other frameworks], but in terms of what’s going through your head, it’s a whole different level of abstraction; you’re describing the circuit at a much higher level.”

This, he said, gives Classiq’s users the ability to more easily describe what they are trying to do. For now, though, that also means that the platform’s users tend to be quantum teams and scientists and developers who are quantum experts and understand how to develop quantum circuits at a very deep level. The team argues, though, that as the technology gets better, developers will need to have less and less of an understanding of how the actual qubits behave.

As Minerbi stressed, the tool is agnostic to the hardware that will eventually run these algorithms. Classiq’s mission, after all, is to provide an additional abstraction layer on top of the hardware. At the same time, though, developers can optimize their algorithms for specific quantum computing hardware as well.

Classiq CTO Dr. Yehuda Naveh also noted that the company is already working with a number of larger companies. These include banks that have used its platform for portfolio optimization, for example, and a semiconductor firm that was looking into a material science problem related to chip manufacturing, an area that is a bit of a sweet spot for quantum computing — at least in its current state.

The team plans to use the new funding to expand its existing team, mostly on the engineering side. A lot of the work the company is doing, after all, is still in R&D. Finding the right software engineers with a background in physics — or quantum information experts who can program — will be of paramount importance for the company. Minerbi believes that is possible, though, and the plan is to soon expand the team to about 25 people.

“We are thrilled to be working with Classiq, assisting the team in achieving their goals of advancing the quantum computing industry,” said Sarit Firon, Managing Partner at Team8 Capital. “As the quantum era takes off, they have managed to solve the missing piece in the quantum computing puzzle, which will enable game-changing quantum algorithms. We look forward to seeing the industry grow, and witnessing how Classiq continues to mark its place as a leader in the industry.”

#computer-science, #emerging-technologies, #entree-capital, #funding, #ourcrowd, #quantum-computing, #recent-funding, #science, #science-and-technology, #startups, #team8, #tel-aviv

One piece of optical hardware performs massively parallel AI calculations

Image: the output of two optical frequency combs, showing light appearing at evenly spaced wavelengths (credit: ESO)

AI and machine-learning techniques have become a major focus of everything from cloud computing services to cell phone manufacturers. Unfortunately, our existing processors are a bad match for the sort of algorithms that many of these techniques are based on, in part because they require frequent round trips between the processor and memory. To deal with this bottleneck, researchers have figured out how to perform calculations in memory and designed chips where each processing unit has a bit of memory attached.

Now, two different teams of researchers have figured out ways of performing calculations with light in a way that both merges memory and calculations and allows for massive parallelism. Despite the differences in implementation, the hardware designed by these teams has a common feature: it allows the same piece of hardware to simultaneously perform different calculations using different frequencies of light. While they’re not yet at the level of performance of some dedicated processors, the approach can scale easily and can be implemented using on-chip hardware, raising the prospect of using it as a dedicated co-processor.

A fine-toothed comb

The new work relies on hardware called a frequency comb, a technology that won some of its creators the 2005 Nobel Prize in Physics. While a lot of interesting physics is behind how the combs work (which you can read more about here), what we care about is the outcome of that physics. While there are several ways to produce a frequency comb, they all produce the same thing: a beam of light that is composed of evenly spaced frequencies. So a frequency comb in visible wavelengths might be composed of light with a wavelength of 500 nanometers, 510nm, 520nm, and so on.
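
As a loose software analogy for that parallelism, the NumPy sketch below treats each comb wavelength as an independent channel that computes its own weighted sum of the same inputs in a single pass. The optics (modulators, photodetectors) are entirely absent, and the wavelengths, weights and inputs are made-up numbers chosen only to illustrate the idea.

```python
import numpy as np

wavelengths_nm = [500, 510, 520, 530]        # comb "teeth", one channel each
inputs = np.array([0.2, 0.7, 0.1])           # the same input signal for all channels

# Each wavelength channel carries its own set of weights (its own calculation).
weights = np.array([
    [0.5, 0.1, 0.4],    # channel at 500 nm
    [0.3, 0.3, 0.4],    # channel at 510 nm
    [0.9, 0.0, 0.1],    # channel at 520 nm
    [0.2, 0.6, 0.2],    # channel at 530 nm
])

# One "pass of light" computes all channels' multiply-accumulates at once.
outputs = weights @ inputs
for wl, out in zip(wavelengths_nm, outputs):
    print(f"{wl} nm channel -> {out:.3f}")
```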

#ai, #computer-science, #materials-science, #optical-computing, #science

Google develops an AI that can learn both chess and Pac-Man

The first major conquest of artificial intelligence was chess. The game has a dizzying number of possible combinations, but it was relatively tractable because it was structured by a set of clear rules. An algorithm could always have perfect knowledge of the state of the game and know every possible move that both it and its opponent could make. The state of the game could be evaluated just by looking at the board.

But many other games aren’t that simple. If you take something like Pac-Man, then figuring out the ideal move would involve considering the shape of the maze, the location of the ghosts, the location of any additional areas to clear, the availability of power-ups, etc., and the best plan can end up in disaster if Blinky or Clyde makes an unexpected move. We’ve developed AIs that can tackle these games, too, but they have had to take a very different approach from the ones that conquered chess and Go.

At least until now. Today, however, Google’s DeepMind division published a paper describing the structure of an AI that can tackle both chess and Atari classics.

#ai, #atari, #chess, #computer-science, #deepmind, #go, #google, #pac-man, #science

DeepMind AI handles protein folding, which humbled previous software

Image: proteins rapidly form complicated structures, which had proven difficult to predict (credit: Argonne National Lab / Flickr)

Today, DeepMind announced that it has seemingly solved one of biology’s outstanding problems: how the string of amino acids in a protein folds up into a three-dimensional shape that enables its complex functions. It’s a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware for these calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

The limitations of the system aren’t yet clear—DeepMind says it’s currently planning on a peer-reviewed paper and has only made a blog post and some press releases available. But the system clearly performs better than anything that’s come before it, after having more than doubled the performance of the best system in just four years. Even if it’s not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.

Between the folds

To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.
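
As a small illustration of how those side-chain properties can be read directly off a sequence, the Python snippet below tallies an approximate net side-chain charge for a short peptide, counting lysine and arginine as basic (+1) and aspartate and glutamate as acidic (-1). It is a toy calculation for illustration, not part of DeepMind’s method.

```python
# Toy example: estimate the net side-chain charge of a peptide at neutral pH.
POSITIVE = {"K", "R"}        # lysine, arginine (basic side chains)
NEGATIVE = {"D", "E"}        # aspartate, glutamate (acidic side chains)

def side_chain_charge(sequence: str) -> int:
    """Count +1 for each basic residue and -1 for each acidic residue."""
    return sum((aa in POSITIVE) - (aa in NEGATIVE) for aa in sequence.upper())

print(side_chain_charge("MKDERAKE"))   # 3 basic (K, R, K), 3 acidic (D, E, E): net 0
```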

#ai, #biochemistry, #biology, #computational-biology, #computer-science, #deepmind, #protein-folding, #science

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF themselves; the technology is essentially the next generation of Linux kernel modules.
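
For readers who do want a peek at eBPF, here is a minimal hand-written example using the BCC toolkit: a kernel probe that prints a line every time a process calls execve(). It needs root and the bcc package installed, and it is a generic illustration of eBPF, not how Cilium is built or used.

```python
from bcc import BPF  # BCC compiles the embedded C into an eBPF program

program = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve called\n");
    return 0;
}
"""

b = BPF(text=program)
# Attach the eBPF function to the execve syscall entry point.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("Tracing execve calls... Ctrl-C to quit")
b.trace_print()
```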

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and, given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.

#andreesen-horowitz, #ceo, #cloud, #computer-science, #computing, #cto, #datadog, #enterprise, #google, #kernel, #kubernetes, #linus-torvalds, #linux, #martin-casado, #nicira, #operating-systems, #recent-funding, #security, #startups, #vms, #vmware

D-Wave releases its next-generation quantum annealing chip

Image: a chip surrounded by complicated support hardware

Today, quantum computing company D-Wave is announcing the availability of its next-generation quantum annealer, a specialized processor that uses quantum effects to solve optimization and minimization problems. The hardware itself isn’t much of a surprise—D-Wave was discussing its details months ago—but D-Wave talked with Ars about the challenges of building a chip with over a million individual quantum devices. And the company is coupling the hardware’s release to the availability of a new software stack that functions a bit like middleware between the quantum hardware and classical computers.

Quantum annealing

Quantum computers being built by companies like Google and IBM are general purpose, gate-based machines. They can solve any problem and should show a vast acceleration for specific classes of problems. Or they will, as soon as the gate count gets high enough. Right now, these quantum computers are limited to a few dozen gates and have no error correction. Bringing them up to the scale needed presents a series of difficult technical challenges.

D-Wave’s machine is not general-purpose; it’s technically a quantum annealer, not a quantum computer. It performs calculations that find low-energy states for different configurations of the hardware’s quantum devices. As such, it will only work if a computing problem can be translated into an energy-minimization problem in one of the chip’s possible configurations. That’s not as limiting as it might sound, since many forms of optimization can be translated to an energy minimization problem, including things like complicated scheduling issues and protein structures.
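
To show what “translating a problem into energy minimization” can look like, here is a tiny QUBO (quadratic unconstrained binary optimization) for the toy constraint “choose exactly one of three options,” solved by brute force in plain Python. On an annealer the same kind of objective would be mapped onto qubits and couplers; the brute-force loop is only there to make the formulation concrete.

```python
from itertools import product

# QUBO for the toy constraint "choose exactly one of three options":
# E(x) = (x0 + x1 + x2 - 1)^2, expanded into linear and pairwise terms
# (using x*x = x for binary variables).
linear    = {0: -1, 1: -1, 2: -1}
quadratic = {(0, 1): 2, (0, 2): 2, (1, 2): 2}
offset    = 1

def energy(x):
    e = offset
    e += sum(coef * x[i] for i, coef in linear.items())
    e += sum(coef * x[i] * x[j] for (i, j), coef in quadratic.items())
    return e

# Brute force all 2^3 assignments; an annealer searches this landscape physically.
best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))   # (0, 0, 1) with energy 0
```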

#algorithms, #computer-science, #d-wave, #quantum-annealer, #quantum-mechanics, #science

Want to hire and retain high-quality developers? Give them stimulating work

Software developers are some of the most in-demand workers on the planet. Not only that, they’re complex creatures with unique demands in terms of how they define job fulfillment. With demand for developers on the rise (the number of jobs in the field is expected to grow by 22% over the next decade), companies are under pressure to do everything they can to attract and retain talent.

First and foremost — above salary — employers must ensure that product teams are made up of developers who feel creatively stimulated and intellectually challenged. Without work that they feel passionate about, high-quality programmers won’t just become bored and potentially seek opportunities elsewhere; the standard of their work will inevitably drop. In one survey, 68% of developers said learning new things is the most important element of a job.

The worst thing for a developer to discover about a new job is that they’re the most experienced person in the room and there’s little room for their own growth.

Yet with only 32% of developers feeling “very satisfied” with their jobs, there’s scope for you to position yourself as a company that prioritizes the development of its developers, and to attract and retain top talent. So, how exactly can you ensure that your team stays stimulated and creatively engaged?

Allow time for personal projects

78% of developers see coding as a hobby — and the best developers are the ones who have a true passion for software development, in and out of the workplace. This means they often have their own personal passions within the space, be it working with specific languages or platforms, or building certain kinds of applications.

Back in their 2004 IPO letter, Google founders Sergey Brin and Larry Page wrote:

We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. [This] empowers them to be more creative and innovative. Many of our significant advances have happened in this manner.

At DevSquad, we’ve adopted a similar approach. We have an “open Friday” policy where developers are able to learn and enhance their skills through personal projects. As long as the skills being gained contribute to work we are doing in other areas, the developers can devote that time to whatever they please, whether that’s contributing to open-source projects or building a personal product. In fact, 65% of professional developers on Stack Overflow contribute to open-source projects once a year or more, so it’s likely that this is a keen interest within your development team too.

Not only does this provide a creative outlet for developers, the company also gains from the continuously expanding skillset that comes as a result.

Provide opportunities to learn and teach

One of the most demotivating things for software developers is work that’s either too difficult or too easy. Too easy, and developers get bored; too hard, and morale can dip as a project seems insurmountable. Within our team, we remain hyperaware of the difficulty levels of the project or task at hand and the level of experience of the developers involved.

#column, #computer-science, #developer, #entrepreneurship, #hiring, #information-technology, #programmer, #recruiting, #software-development, #startups, #tc

Jesus, SaaS and digital tithing

There are more than 300,000 congregations in the U.S., and entrepreneurs are creating billion-dollar companies by building software to service them. Welcome to church tech.

The sector was growing prior to COVID-19, but the pandemic forced many congregations to go entirely online, which rapidly accelerated growth in this space. While many of these companies were bootstrapped, VC dollars are also increasingly flowing in. Unfortunately, it’s hard to come across a lot of resources covering this expanding, unique sector.

Market map

In broad terms, we can split church tech into six categories:

  • church management software (ChMS)
  • digital giving
  • member outreach/messaging
  • streaming/content
  • Bible study
  • website and app building

Horizontal integration is huge in this sector, and nearly all the companies operating in this space fall into several of these categories. Many have expanded through M&A.

The categories

Speech recognition algorithms may also have racial bias

Image: extreme closeup of a professional microphone; microphones are how our machines listen to us (credit: Teddy Mafia / Flickr)

We’re outsourcing ever more of our decision making to algorithms, partly as a matter of convenience, and partly because algorithms are ostensibly free of some of the biases that humans suffer from. Ostensibly. As it turns out, algorithms that are trained on data that’s already subject to human biases can readily recapitulate them, as we’ve seen in places like the banking and judicial systems. Other algorithms have just turned out to be not especially good.

Now, researchers at Stanford have identified another area with potential issues: the speech-recognition algorithms that do everything from basic transcription to letting our phones fulfill our requests. These algorithms seem to have more issues with the speech patterns used by African Americans, although there’s a chance that geography plays a part, too.

A non-comedy of errors

Voice-recognition systems have become so central to modern technology that most of the large companies in the space have developed their own. For the study, the research team tested systems from Amazon, Apple, Google, IBM, and Microsoft. While some of these systems are sold as services to other businesses, the ones from Apple and Google are as close as your phone. Their growing role in daily life makes their failures intensely frustrating, so the researchers decided to have a look at whether those failures display any sort of bias.

#computer-science, #science, #speech-recognition

How do you keep an AI’s behavior from becoming predictable?

Image: the Facebook app displayed on the screen of an iPhone (credit: Fabian Sommer | picture alliance | Getty Images)

A lot of neural networks are black boxes. We know they can successfully categorize things—images with cats, X-rays with cancer, and so on—but for many of them, we can’t understand what they use to reach that conclusion. But that doesn’t mean that people can’t infer the rules they use to fit things into different categories. And that creates a problem for companies like Facebook, which hopes to use AI to get rid of accounts that abuse its terms of service.

Most spammers and scammers create accounts in bulk, and they can easily look for differences between the ones that get banned and the ones that slip under the radar. Those differences can allow them to evade automated algorithms by structuring new accounts to avoid the features that trigger bans. The end result is an arms race between algorithms and spammers and scammers who try to guess their rules.

Facebook thinks it has found a way to avoid getting involved in this arms race while still using automated tools to police its users, and this week, it decided to tell the press about it. The result was an interesting window into how to keep AI-based moderation useful in the face of adversarial behavior, an approach that could be applicable well beyond Facebook.

#artificial-intelligence, #biz-it, #computer-science, #facebook, #fact-checking, #moderation, #science