Quantum computer succeeds where a classical algorithm fails

Google’s Sycamore processor. (credit: Google)

Mathematical proofs have long shown that a quantum computer should vastly outperform traditional computers on a number of algorithms. But the quantum computers we have now are error-prone and don’t have enough qubits to allow for error correction. The only demonstrations so far have involved quantum hardware evolving from a random configuration while traditional computers failed to simulate that behavior. Useful calculations remain an exercise for the future.

But a new paper from Google’s quantum computing group moves beyond these sorts of demonstrations and uses a quantum computer as part of a system that helps us understand quantum systems in general, rather than just the quantum computer itself. And the group shows that, even on today’s error-prone hardware, the system can outperform classical computers on the same problem.

Probing quantum systems

To understand what the new work involves, it helps to step back and think about how we normally characterize quantum systems. Since their behavior is probabilistic, we typically need to measure them repeatedly. The results of those measurements are then fed into a classical computer, which processes them to build a statistical picture of the system’s behavior. With a quantum computer, by contrast, it can be possible to mirror a quantum state in the qubits themselves, reproduce it as often as needed, and manipulate it as necessary. That offers a potentially more direct route to understanding the quantum system at issue.
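
As a minimal sketch of the classical route described above (not the paper’s method; the probability and the simulation are invented for illustration), the same prepared state is measured over and over and the outcomes averaged. The statistical error shrinks only as the square root of the number of shots, which is part of why working with copies of the state in quantum memory can be more efficient.

# Minimal sketch (not the paper's method) of the "classical" route: repeatedly
# measure the same prepared state and average the outcomes. The true
# probability below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
p_excited = 0.3                     # hypothetical chance of reading out |1>

def estimate(shots):
    """Simulate `shots` projective measurements of identically prepared states."""
    outcomes = rng.random(shots) < p_excited      # each shot yields 0 or 1
    return outcomes.mean(), outcomes.std(ddof=1) / np.sqrt(shots)

for shots in (100, 10_000, 1_000_000):
    p, err = estimate(shots)
    print(f"{shots:>9} shots -> p = {p:.4f} +/- {err:.4f}")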


#computer-science, #google, #physics, #quantum-computing, #quantum-mechanics, #science

Making blockchain stop wasting energy by getting it to manage energy

Managing a microgrid might be a case where blockchain is actually useful. (credit: Getty Images)

One of the worst features of blockchain technologies like cryptocurrency and NFTs is their horrific energy use. When we should be wringing every bit of efficiency out of our electricity use, most blockchains require computers to perform pointless calculations repeatedly.

The obvious solution is to base blockchains on useful calculations—something we might need to do anyway. Unfortunately, the math involved in a blockchain has to have a very specific property: The solution must be difficult to calculate but easy to verify. Nevertheless, a number of useful calculations have been identified as possible replacements for the ones currently being used in many systems.

A paper released this week adds another option to this list. Optimization problems are notoriously expensive computationally, but the quality of a proposed solution is relatively easy to evaluate. And in this case, the systems being optimized are small energy grids, meaning the approach could offset some of a blockchain’s horrific energy use.
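
As a toy illustration of that asymmetry (not the scheme in the paper, and every number below is made up), scoring one candidate grid schedule takes a single cheap pass, while finding the best schedule by exhaustive search grows exponentially with the number of time slots.

# Toy illustration of "cheap to verify, expensive to solve" for a made-up
# microgrid dispatch problem.
from itertools import product

demand = [5, 7, 6, 8]                                   # power needed per time slot
unit_cost = {"solar": 1, "battery": 2, "diesel": 5}     # cost per unit supplied

def score(schedule):
    """Cheap to verify: one linear pass over a proposed dispatch plan."""
    total = 0
    for need, dispatch in zip(demand, schedule):
        if sum(dispatch.values()) < need:
            return float("inf")                         # reject infeasible plans
        total += sum(unit_cost[src] * amt for src, amt in dispatch.items())
    return total

def brute_force():
    """Expensive to solve: try every whole-unit diesel top-up for every slot."""
    best_cost, best = float("inf"), None
    for topups in product(range(9), repeat=len(demand)):
        candidate = [{"solar": 4, "battery": 3, "diesel": t} for t in topups]
        c = score(candidate)
        if c < best_cost:
            best_cost, best = c, candidate
    return best_cost, best

print(brute_force()[0])          # best cost found; the search space was 9**4 plans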


#blockchain, #computer-science, #energy, #science

Manipulating photons for microseconds tops 9,000 years on a supercomputer

Given an actual beam of light, a beamsplitter divides it in two. Given individual photons, the behavior becomes more complicated. (credit: Wikipedia)

Ars Technica’s Chris Lee has spent a good portion of his adult life playing with lasers, so he’s a big fan of photon-based quantum computing. Even as hardware based on superconducting wires and trapped ions made progress, you could find him gushing about an optical quantum computer put together by a Canadian startup called Xanadu. But in the year since Xanadu described its hardware, companies using those other technologies have continued to make progress by cutting error rates, exploring new technologies, and upping qubit counts.

But the advantages of optical quantum computing didn’t go away, and now Xanadu is back with a reminder that it hasn’t gone anywhere either. Thanks to some tweaks to the design it described a year ago, Xanadu is now able to perform operations with more than 200 qubits, at least some of the time. And it has shown that simulating one of those operations on a supercomputer would take 9,000 years, while its optical quantum computer can perform it in just a few dozen milliseconds.

This is an entirely contrived benchmark: as with Google’s earlier demonstration, the quantum computer is simply being itself while the supercomputer tries to simulate it. The news here is more about the potential of Xanadu’s hardware to scale.


#computer-science, #optics, #physics, #quantum-computing, #science

Quantum memristor: A memory-dependent computational unit

An abstract image meant to evoke a complex electronic processor. (credit: Donald Jorgensen | Pacific Northwest National Laboratory)

Quantum computing has come on in leaps and bounds in the last few years. Indeed, once the big technology companies like IBM, Microsoft, and Google started showing an interest, I kind of stopped keeping track. Nevertheless, research on the basic elements of quantum computing continues and is, for me, more interesting than the engineering achievements of commercial labs (which are still absolutely necessary).

In line with those interests, a group of researchers recently demonstrated the first quantum memristor. This may be a critical step in bringing a type of highly efficient neural network to the world of quantum computing without an eye-wateringly large number of quantum connections.

Memristors and adding the quantum

The concept of the memristor dates back to the 1970s, but, for a long time, it sat like a sock under your washing machine: forgotten and unmissed. The essential idea is that the current flowing through a memristor doesn’t just depend on the voltage applied across its terminals but also on the history of applied voltage. Physical implementations of memristors hold great promise for low-energy computing because they can be used to make energy-efficient memory.
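
A minimal toy model, which assumes nothing about the physical devices in the paper, makes that history dependence concrete: the conductance is a piece of internal state that past voltages nudge up or down, so the same read voltage can produce different currents at different times.

# Toy memristor model: conductance is remembered state shaped by past voltages.
import numpy as np

class ToyMemristor:
    def __init__(self, g_min=0.1, g_max=1.0, rate=0.05):
        self.g_min, self.g_max, self.rate = g_min, g_max, rate
        self.g = g_min                       # conductance: the remembered state

    def step(self, voltage, dt=1.0):
        # History dependence: the applied voltage slowly changes the state.
        self.g = float(np.clip(self.g + self.rate * voltage * dt, self.g_min, self.g_max))
        return self.g * voltage              # Ohm-like readout: I = G(history) * V

m = ToyMemristor()
print("read at 0.5 V before training:", round(m.step(0.5), 3))
for _ in range(20):                          # apply a train of write pulses
    m.step(1.0)
print("read at 0.5 V after training: ", round(m.step(0.5), 3))   # larger current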


#computer-science, #quantum-computing, #quantum-memristor, #science

Software can design proteins that inhibit proteins on viruses

The three-dimensional structures of proteins provide many opportunities for specific interactions. (credit: Getty Images)

Thanks in part to the large range of shapes they can adopt and the chemical environments those shapes create, proteins can perform an amazing number of functions. But there are many proteins we wish didn’t function quite so well, like the proteins on the surfaces of viruses that let them latch on to new cells or the damaged proteins that cause cancer cells to grow uncontrollably.

Ideally, we’d like to block the key sites on these proteins, limiting their ability to do harm. We’ve seen some progress in this area with the introduction of a number of small-molecule drugs, including one that appears effective against COVID-19. But that sort of drug development often results in chemicals that, for one reason or another, don’t make effective drugs.

Now, researchers have announced that they have created software that can design a separate protein that will stick to a target protein and potentially block its activity. The software has been carefully designed to minimize the processing demands of a computationally complex process, and the whole thing benefits from our ability to do large-scale validation tests using molecular biology.


#biochemistry, #biology, #computer-science, #science, #software

Latest success from Google’s AI group: Controlling a fusion reactor

Plasma inside the tokamak at the EPFL. (credit: EPFL)

As the world waits for construction of the largest fusion reactor yet, called ITER, smaller reactors with similar designs are still running. These reactors, called tokamaks, help us test both hardware and software. The hardware testing helps us refine things like the materials used for container walls or the shape and location of control magnets.

But arguably, the software is the most important. To enable fusion, the control software of a tokamak has to monitor the state of the plasma it contains and respond to any changes by making real-time adjustments to the system’s magnets. Failure to do so can result in anything from a drop in energy (which leads to the failure of any fusion) to seeing the plasma spill out of containment (and scorch the walls of the container).
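
Stripped to its essentials, that loop looks something like the toy sketch below: read the plasma state, compare it to a target, and adjust a magnet current in response. The plasma "physics," the gain, and every number here are made up; real controllers (and DeepMind's learned one) juggle many coils and many plasma properties at once.

# Toy feedback loop: measure, compare to target, adjust the coil, repeat.
target = 0.0            # desired vertical plasma position, arbitrary units
position = 0.4          # starting offset
coil_current = 0.0
gain = 0.5              # feedback gain, chosen arbitrarily

for step in range(10):
    error = target - position
    coil_current += gain * error                 # real-time magnet adjustment
    position += 0.8 * (coil_current - position)  # made-up plasma response to the field
    print(f"step {step}: position={position:+.3f}  coil current={coil_current:+.3f}")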

Getting that control software right requires a detailed understanding of both the control magnets and the plasma they manipulate. Or, more accurately, it used to. Today, Google’s DeepMind AI team is announcing that its software has been successfully trained to control a tokamak.


#ai, #computer-science, #deepmind, #fusion, #physics, #science

Hydrogen-soaked crystal lets neural networks expand to match a problem

Image of a stylized circuit layout. (credit: Getty Images)

Training AIs remains very processor-intensive, in part because traditional processing architectures are poor matches for the sorts of neural networks that are widely used. This has led to the development of what has been termed neuromorphic computing hardware, which attempts to model the behavior of biological neurons in hardware.

But most neuromorphic hardware is implemented in silicon, which limits it to behaviors that are set at the hardware level. A group of US researchers is now reporting a type of non-silicon hardware that’s substantially more flexible. It works by controlling how much hydrogen is present in a nickel-based compound, with the precise amount of hydrogen switching a single device among four different behaviors, each of which is useful for performing neural-network operations.

Give it the gas

The material being used here is one of a class of compounds called perovskite nickelates. Perovskite is a general term for a specific arrangement of atoms in a crystalline structure; a wide variety of chemicals can form perovskites. In this case, the crystal is formed from a material that’s a mix of neodymium, nickel, and oxygen.


#ai, #computer-science, #materials-science, #neural-networks, #science

Tracking Facebook connections between parent groups and vaccine misinfo

(credit: Getty | Joe Amon)

Misinformation about the pandemic and the health measures that are effective against SARS-CoV-2 has been a significant problem in the US. It’s led to organized resistance against everything from mask use to vaccines and has undoubtedly ended up killing people.

Plenty of factors have contributed to this surge of misinformation, but social media clearly helps enable its spread. While the companies behind major networks have taken some actions to limit the spread of misinformation, internal documents indicate that a lot more could be done.

Taking more effective action, however, would benefit from more clearly identifying what the problems are. And, to that end, a recent analysis of the network of vaccine misinformation provides information that might be helpful. It finds that most of the worst misinformation sources are probably too small to stand out as being in need of moderation. The analysis also shows that the pandemic has brought mainstream parenting groups noticeably closer to groups devoted to conspiracy theories.


#computer-science, #medicine, #misinformation, #pandemic, #science, #vaccines

Rigetti announces 80 qubit processor, experiments with “qutrits”

The Aspen-M 40-qubit chip and its housing. (credit: Rigetti)

On Wednesday, quantum computing startup Rigetti announced a number of interesting hardware developments. To begin with, its users would now have access to its next-generation chip, called Aspen-M, with 40 qubits and improved performance. While that’s well below the qubit count achieved by IBM, the company also hints at a way it can stay competitive: private testers will now have access to an 80-qubit version created by linking two of these chips together.

Separately, the company says that it is now experimenting with allowing testers to access a third energy state in its superconducting hardware, converting its qubits into “qutrits.” If these qutrits show consistent behavior, then they would allow the manipulation of significantly more data in existing hardware.

New and improved

For traditional processors, advances are typically measured in clock speed, core count, and energy use. For quantum computers, one of the most critical measures is error rate, since the qubits lose track of their state in a way that digital hardware doesn’t. With Aspen-M, Rigetti is claiming that a specific type of error—the readout of the state of the qubit—has been cut in half.


#computer-science, #physics, #quantum-computing, #quantum-mechanics, #qutrits, #rigetti, #science

A potential hangup for quantum computing: Cosmic rays

Google’s Sycamore processor. (credit: Google)

Recently, when researchers were testing error correction on Google’s quantum processor, they noted an odd phenomenon where the whole error-correction scheme would sporadically fail badly. They chalked this up to background radiation, a combination of cosmic rays and the occasional decay of a naturally occurring radioactive isotope.

It seemed like a bit of an amusing aside at the time—Google having accidentally paid for an extremely expensive cosmic ray detector. But the people behind the processor took the problem very seriously and are back with a new paper that details exactly how the radiation affects the qubits. And they conclude that the problems caused by cosmic rays happen often enough to keep error-corrected quantum computations from working unless we figure out a way to limit the rays’ impact.

It’s a shame about the rays

Cosmic rays and radioactivity cause problems for classical computing hardware as well. That’s because classical computers rely on moving and storing charges, and cosmic rays can induce charges when they impact a material. Qubits, in contrast, store information in the form of the quantum state of an object—in the case of Google’s processor, a loop of superconducting wire linked to a resonator. Cosmic rays affect these, too, but the mechanism is completely different.


#computer-science, #cosmic-rays, #error-correction, #physics, #quantum-computing, #quantum-mechanics, #science

Quantum processor swapped in for a neural network

Given the right data, a neural network can infer what radar maps would have looked like, were they available. (credit: NOAA/CIMSS)

It’s become increasingly clear that quantum computers won’t have a single moment when they become decisively superior to classical hardware. Instead, we’re likely to see them become useful for a narrow set of problems and then gradually expand from there to an increasing range of computations. The obvious question is where that utility will show up first.

The quantum-computing startup Rigetti now has a white paper that identifies, at least theoretically, a case when quantum hardware should offer an advantage. And it is actually useful: replacing a neural network that’s used for analyzing weather data.

How’s the weather?

The problem the people at Rigetti looked at involves taking a partial set of weather data and inferring what the rest looks like. Plenty of areas of the planet lack good coverage, and so we only get partial information about local conditions. And, if we have things like commercial aircraft going through said remote areas, we’ll often want a more complete picture of the conditions there.


#computer-science, #quantum-computing, #science

Getting software to “hallucinate” reasonable protein structures

Top row: the hallucination and actual structure. Bottom row: the two structures superimposed. (credit: Anishchenko et al.)

Chemically, a protein is just a long string of amino acids. Its amazing properties come about because that chain can fold up into a complex, three-dimensional shape. So understanding the rules that govern this folding could not only give us insights into the proteins that life uses but also potentially help us design new proteins with novel chemical abilities.

There’s been remarkable progress on the first half of that problem recently: researchers have tuned AIs to sort through the evolutionary relationships among proteins and relate common features to structures. So far, however, those algorithms aren’t any help for designing new proteins from scratch. But that may change, thanks to the methods described in a paper released on Wednesday.

In it, a large team of researchers describes what it terms protein “hallucinations.” These are the products of a process that resembles a game of hotter/colder with an algorithm: start with a random sequence of amino acids, make a change, and ask, “Does this look more or less like a structured protein?” Several of the results were tested and do, in fact, fold up as predicted.
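
That hotter/colder loop is essentially hill climbing. The sketch below shows the shape of such a loop with a stand-in scoring function; the real work judges "does this look like a structured protein?" with a trained structure-prediction network, whereas the toy score here (rewarding alternation between hydrophobic and polar residues) is purely illustrative.

# Hotter/colder loop over a random amino-acid sequence, with a toy score.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AVILMFWY")

def score(seq):
    """Stand-in for the network's judgment of how protein-like a sequence is."""
    return sum((seq[i] in HYDROPHOBIC) != (seq[i + 1] in HYDROPHOBIC)
               for i in range(len(seq) - 1))

random.seed(1)
seq = [random.choice(AMINO_ACIDS) for _ in range(60)]   # start from a random sequence
best = score(seq)

for _ in range(5000):
    pos = random.randrange(len(seq))
    old = seq[pos]
    seq[pos] = random.choice(AMINO_ACIDS)               # make a single change
    new = score(seq)
    if new >= best:
        best = new                                      # "hotter": keep the change
    else:
        seq[pos] = old                                  # "colder": undo it

print(best, "".join(seq))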


#ai, #biochemistry, #computer-science, #protein-structure, #science

Interesting research, but no, we don’t have living, reproducing robots

The crescent-shaped balls of cells would travel in circles, piling up cells that could grow into mobile clusters. (credit: Sam Kriegman and Douglas Blackiston)

Scientists on Monday announced that they’d optimized a way of getting mobile clusters of cells to organize other cells into smaller clusters that, under the right conditions, could be mobile themselves. The researchers call this process “kinematic self-replication,” although that’s not entirely right—the copies need help from humans to start moving on their own, are smaller than the originals, and the copying process grinds to a halt after just a couple of cycles.

So, of course, CNN headlined its coverage “World’s first living robots can now reproduce.”

This is a case when something genuinely interesting is going on, but both the scientists and some of the coverage of the developments are promoting it as far more than it actually is. So, let’s take a look at what’s really been done.


#bioengineering, #biology, #computer-science, #science, #synthetic-biology

Math may have caught up with Google’s quantum-supremacy claims

Google’s Sycamore processor. (credit: Google)

In 2019, word filtered out that a quantum computer built by Google had performed calculations that the company claimed would be effectively impossible to replicate on supercomputing hardware. That turned out not to be entirely correct, since Google had neglected to consider the storage available to supercomputers; once that was taken into account, the quantum computer’s lead shrank to just a matter of days.

Adding just a handful of additional qubits, however, would re-establish the quantum computer’s vast lead. Recently, though, a draft manuscript placed on the arXiv pointed out a critical fact: Google’s claims relied on comparisons to a very specific approach to performing the calculation on standard computing hardware. There are other ways to perform the calculation, and the paper suggests one of them would allow a supercomputer to actually pull ahead of its quantum competitor.

More than one road to random

The calculation Google performed was specifically designed to be difficult to simulate on a normal computer. It set the 54 qubits of its Sycamore processor in a random state, then let quantum interference among neighboring qubits influence how the system evolves over time. After a short interval, the hardware started repeatedly measuring the state of the qubits. Each individual measurement produced a string of random bits, making Sycamore into a very expensive random-number generator. But if enough measurements are made, certain patterns generated by the quantum interference become apparent.
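
A toy classical stand-in for that sampling task (a few qubits instead of Sycamore's 54, and a random state vector standing in for the random circuit) shows its structure: every shot returns a random bitstring, yet the distribution behind the shots is far from uniform, and representing it classically requires a table with 2**n entries.

# Sample bitstrings from a random few-qubit state via the Born rule.
import numpy as np

rng = np.random.default_rng(42)
n_qubits = 3
dim = 2 ** n_qubits

# A random state vector standing in for the output of a random quantum circuit.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)
probs = np.abs(amps) ** 2                   # measurement probabilities

shots = rng.choice(dim, size=10_000, p=probs)
counts = np.bincount(shots, minlength=dim)
for idx, count in enumerate(counts):
    print(f"{idx:0{n_qubits}b}: {count}")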


#computer-science, #physics, #quantum-computing, #science

IBM clears the 100-qubit mark with its new processor

Image of a chip labeled IBM. (credit: IBM)

IBM has announced it has cleared a major hurdle in its effort to make quantum computing useful: it now has a quantum processor, called Eagle, with 127 functional qubits. This makes it the first company to clear the 100-qubit mark, a milestone that’s interesting because the interactions of that many qubits can’t be simulated using today’s classical computing hardware and algorithms.

But what may be more significant is that IBM now has a roadmap that would see it producing the first 1,000-qubit processor in two years. And, according to IBM Director of Research Darío Gil, that’s the point where calculations done with quantum hardware will start being useful.

What’s new

Gil told Ars that the new qubit count was a product of multiple developments that have been put together for the first time. One is that IBM switched to what it’s calling a “heavy hex” qubit layout, which it announced earlier this year. This layout connects qubits in a set of hexagons with shared sides. In this layout, qubits are connected to two, three, or a maximum of four neighbors—on average, that’s a lower level of connectivity than some competing designs. But Gil argued that the tradeoff is worth it, saying “it reduces the level of connectivity, but greatly improves crosstalk.”


#computer-science, #ibm, #physics, #quantum-computers, #quantum-mechanics, #qubits, #science

Open-sourcing of protein-structure software is already paying off

Image of different categories of protein complexes. (credit: Humphreys et al.)

It is now relatively trivial to determine the order of amino acids in a protein. Figuring out how that order translates to a complicated three-dimensional structure that performs a specific function, however, is extremely challenging. But after decades of slow progress, Google’s DeepMind AI group announced that it has made tremendous strides toward solving the problem. In July, the system, called AlphaFold, was made open source. At the same time, a group of academic researchers released its own protein-folding software, called RoseTTAFold, built in part using ideas derived from DeepMind’s work.

How effective are these tools? Even if they aren’t as good as some of the statistics suggested, it’s clear they’re far better than anything we’ve ever had. So how will scientists use them?

This week, a large research collaboration set the software loose on a related problem: how these individual three-dimensional structures come together to form the large, multi-protein complexes that perform some of the most important functions in biology.


#ai, #biology, #computer-science, #deepmind, #protein-folding, #proteins, #science

No-code is code

Today, the release of OpenAI Codex, a new AI system that translates natural language to code, marks the beginning of a shift in how computer software is written.

Over the past few years, there’s been growing talk about “no code” platforms, but this is no new phenomenon. The reality is, ever since the first programmable devices, computer scientists have regularly developed breakthroughs in how we “code” computer software.

The first computers were programmed with switches or punch cards, until the keyboard was invented. Coding became a matter of typing numbers or machine language, until Grace Hopper invented the modern compiler and the COBOL language, ushering in decades of innovation in programming languages and platforms. Languages like Fortran, Pascal, C, Java and Python evolved in a progression, with each new language (built using an older one) enabling programmers to “code” in increasingly human-friendly terms.

Alongside languages, we’ve seen the evolution of “no-code” platforms — including Microsoft Excel, the 1980s granddaddy of no-code — that empower people to program computers in a visual interface, whether in school or in the workplace. Anytime you write a formula in a spreadsheet, or when you drag a block of code on Code.org or Scratch, you’re programming, or “coding,” a computer. “No code” is code. Every decade, a breakthrough innovation makes it easier to write code so that the old way of coding is replaced by the new.


This brings us to today’s announcement. Today, OpenAI announced OpenAI Codex, an entirely new way to “write code” in natural English. A computer programmer can now use English to describe what they want their software to do, and OpenAI’s generative AI model will automatically generate the corresponding computer code in the programming language of their choice. This is what we’ve always wanted — for computers to understand what we want them to do, and then do it, without having to go through a complex intermediary like a programming language.
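
Purely as an illustration of that workflow, the exchange might look like the sketch below. The prompt and the "generated" function are invented for this example (they are not real Codex output), and no OpenAI API calls are shown.

PROMPT = "Write a function that totals a list of prices and adds 8% sales tax."

# A plausible completion a code-generating model might return:
def total_with_tax(prices, tax_rate=0.08):
    """Sum the prices and add sales tax."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

print(total_with_tax([19.99, 5.50, 3.25]))   # 31.04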

But this is not an end; it is a beginning. With AI-generated code, one can imagine an evolution in every programming tool, in every programming class, and a Cambrian explosion of new software. Does this mean coding is dead? No! It doesn’t replace the need for a programmer to understand code. It means coding just got much easier, higher impact and thus more important, just as when punch cards were replaced by keyboards, or when Grace Hopper invented the compiler.

In fact, the demand for software today is greater than ever and will only continue to grow. As this technology evolves, AI will play a greater role in generating code, which will multiply the productivity and impact of computer scientists, and will make this field accessible to more and more computer programmers.

There are already tools that let you program using only drag-and-drop or write code using your voice. Improvements in these technologies and new tools like OpenAI Codex will increasingly democratize the ability to create software. As a result, the amount of code — and the number of coders — in the world will increase.

This also means that learning how to program — in a new way — is more important than ever. Learning to code can unlock doors to opportunity and also help solve global problems. As it becomes easier and more accessible to create software, we should give every student in every school the fundamental knowledge to not only be a user of technology but also a creator.

#code-org, #coding, #column, #computer-science, #computing, #developer, #openai, #science-and-technology, #software, #startups, #tc

Google turns AlphaFold loose on the entire human genome

Image of a diagram of ribbons and coils. (credit: Sloan-Kettering)

Just one week after Google’s DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure—a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures.

In a press conference associated with the paper’s release, DeepMind’s Demis Hassabis made clear that the company isn’t stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.

What’s in a structure?

We just described DeepMind’s software last week, so we won’t go into much detail here. The effort is an AI-based system trained on the structure of existing proteins that had been determined (often laboriously) through laboratory experiments. The system uses that training, plus information it obtains from families of proteins related by evolution, to predict how a protein’s chain of amino acids folds up in three-dimensional space.


#ai, #biochemistry, #biology, #computer-science, #protein-folding, #science

PlasticARM is a 32-bit bendable processor

Image of the PlasticARM processor, showing its dimensions and components. (credit: Biggs et al.)

Wearable electronics, like watches and fitness trackers, represent the next logical step in computing. They’ve sparked an interest in the development of flexible electronics, which could enable wearables to include things like clothing and backpacks.

Flexible electronics, however, run into a problem: our processing hardware is anything but flexible. Most efforts at dealing with that limitation have involved splitting up processors into a collection of smaller units, linking them with flexible wiring, and then embedding all the components in a flexible polymer. To an extent, it’s a throwback to the early days of computing, when a floating point unit might reside on a separate chip.

But a group within the semiconductor company ARM has now managed to implement one of the company’s smaller embedded designs using flexible silicon. The design works and executes all the instructions you’d expect from it, but it also illustrates the compromises we have to make for truly flexible electronics.


#arm, #computer-science, #materials-science, #processor, #science

Google tries out error correction on its quantum processor

Google’s Sycamore processor. (credit: Google)

The current generation of quantum hardware has been termed “NISQ”: noisy, intermediate-scale quantum processors. “Intermediate-scale” refers to a qubit count that is typically in the dozens, while “noisy” references the fact that current qubits frequently produce errors. These errors can be caused by problems setting or reading the qubits or by the qubit losing its state during calculations.

Long-term, however, most experts expect that some form of error correction will be essential. Most of the error-correction schemes involve distributing a qubit’s logical information across several qubits and using additional qubits to track that information in order to identify and correct errors.

Back when we visited the folks from Google’s quantum computing group, they mentioned that the layout of their processor was chosen because it simplifies implementing error correction. Now, the team is running two different error-correction schemes on the processor. The results show that error correction clearly works, but we’ll need a lot more qubits and a lower inherent error rate before correction is useful.
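
A classical toy version of the scheme sketched above captures the flavor: spread one logical bit across three physical bits and use parity checks (the role played by the extra qubits) to find and fix single errors. Real quantum codes, such as the surface code Google is testing, must also handle phase errors and can never read the data qubits directly; the 5% error rate below is invented.

# Three-bit repetition code with parity-check correction, classical toy.
import random

def encode(bit):
    return [bit, bit, bit]                         # one logical bit -> three physical bits

def add_noise(bits, p=0.05):
    return [b ^ (random.random() < p) for b in bits]   # independent bit flips

def correct(bits):
    s1 = bits[0] ^ bits[1]                         # parity checks stand in for
    s2 = bits[1] ^ bits[2]                         # ancilla-qubit measurements
    if s1 and not s2:
        bits[0] ^= 1
    elif s1 and s2:
        bits[1] ^= 1
    elif s2:
        bits[2] ^= 1
    return bits

random.seed(3)
trials = 100_000
failures = sum(correct(add_noise(encode(1)))[0] != 1 for _ in range(trials))
print("logical error rate:", failures / trials)    # well below the 5% physical rate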


#computer-science, #error-correction, #quantum-computing, #quantum-mechanics, #qubits, #science

Quantum-computing startup Rigetti to offer modular processors

It may look nearly featureless, but it’s meant to contain 80 qubits. (credit: Rigetti Computing)

A quantum-computing startup announced Tuesday that it will make a significant departure in its designs for future quantum processors. Rather than building a monolithic processor as everyone else has, Rigetti Computing will build smaller collections of qubits on chips that can be physically linked together into a single functional processor. This isn’t multiprocessing so much as a modular chip design.

The decision has several consequences, both for Rigetti processors and quantum computing more generally. We’ll discuss them below.

What’s holding things back

Rigetti’s computers rely on a technology called a “transmon,” based on a superconducting wire loop linked to a resonator. That’s the same qubit technology used by larger competitors like Google and IBM. Transmons are set up so that the state of one can influence that of its neighbors during calculations, an essential feature of quantum computing. To an extent, the topology of connections among transmon qubits is a key contributor to the machine’s computational power.


#computer-science, #quantum-computing, #rigetti, #science

Researchers build a metadata-based image database using DNA storage

Fluorescently tagged DNA is the key to a new storage system. (credit: Gerald Barber, Virginia Tech)

DNA-based data storage appears to offer solutions to some of the problems created by humanity’s ever-growing capacity to create data we want to hang on to. Compared to most other media, DNA offers phenomenal data densities. If stored in the right conditions, it doesn’t require any energy to maintain the data for centuries. And due to DNA’s centrality to biology, we’re always likely to maintain the ability to read it.

But DNA is not without its downsides. Right now, there’s no standard method of encoding bits in the pattern of bases of a DNA strand. Synthesizing specific sequences remains expensive. And accessing the data using current methods is slow and depletes the DNA being used for storage. Try to access the data too many times, and you have to restore it in some way—a process that risks introducing errors.
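
As a toy illustration of what "encoding bits in the pattern of bases" can mean, the sketch below maps two bits to each of the four bases. This mapping is invented for the example (the article's point is that no standard exists), and real schemes also avoid long runs of a single base and add error-correcting redundancy.

# Two-bits-per-base toy encoder/decoder.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"ars")
print(strand)                        # CGACCTAGCTAT: four bases per byte
assert decode(strand) == b"ars"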

A team based at MIT and the Broad Institute has decided to tackle some of these issues. In the process, the researchers have created a DNA-based image-storage system that’s somewhere between a file system and a metadata-based database.


#biology, #computer-science, #dna, #science, #storage

Programming a robot to teach itself how to move

The robotic train. (credit: Oliveri et al.)

One of the most impressive developments in recent years has been the production of AI systems that can teach themselves to master the rules of a larger system. Notable successes have included experiments with chess and Starcraft. Given that self-teaching capability, it’s tempting to think that computer-controlled systems should be able to teach themselves everything they need to know to operate. Obviously, for a complex system like a self-driving car, we’re not there yet. But it should be much easier with a simpler system, right?

Maybe not. A group of researchers in Amsterdam attempted to take a very simple mobile robot and create a system that would learn to optimize its movement through a learn-by-doing process. While the system the researchers developed was flexible and could be effective, it ran into trouble due to some basic features of the real world, like friction.

Roving robots

The robots in the study were incredibly simple and were formed from a varying number of identical units. Each had an on-board controller, battery, and motion sensor. A pump controlled a piece of inflatable tubing that connected a unit to a neighboring unit. When inflated, the tubing generated a force that pushed the two units apart. When deflated, the tubing would pull the units back together.


#computer-science, #machine-learning, #robotics, #science

Honeywell releases details of its ion trap quantum computer

The line down the middle is where the trapped ions reside. (credit: Honeywell)

About a year ago, Honeywell announced that it had entered the quantum computing race with a technology that was different from anything else on the market. The company claimed that because the performance of its qubits was so superior to that of its competitors’, its computer could do better on a key quantum computing benchmark than quantum computers with far more qubits.

Now, roughly a year later, the company finally released a paper describing the feat in detail. But in the meantime, the competitive landscape has shifted considerably.

It’s a trap!

In contrast to companies like IBM and Google, Honeywell has decided against using superconducting circuitry and in favor of using a technology called “trapped ions.” In general, these use a single ion as a qubit and manipulate its state using lasers. There are different ways to create ion trap computers, however, and Honeywell’s version is distinct from another on the market, made by a competitor called IonQ (which we’ll come back to).


#computer-science, #honeywell, #ion-trap, #physics, #quantum-computing, #quantum-mechanics, #science

How does the brain interpret computer languages?

Image of a pillar covered in lit ones and zeroes. (credit: Barcroft Media / Getty Images)

In the US, a 2016 Gallup poll found that the majority of schools want to start teaching code, with 66 percent of K-12 school principals thinking that computer science learning should be incorporated into other subjects. Most countries in Europe have added coding classes and computer science to their school curricula, with France and Spain introducing theirs in 2015. This new generation of coders is expected to boost the worldwide developer population from 23.9 million in 2019 to 28.7 million in 2024.

Despite all this effort, there’s still some confusion on how to teach coding. Is it more like a language, or more like math? Some new research may have settled this question by watching the brain’s activity while subjects read Python code.

Two schools on schooling

Right now, there are two schools of thought. The prevailing one is that coding is a type of language, with its own grammar rules and syntax that must be followed. After all, they’re called coding languages for a reason, right? This idea even has its own snazzy acronym: Coding as Another Language, or CAL.


#biology, #brain, #computer-science, #language, #math, #neuroscience, #programming-language, #science

D-Wave’s hardware outperforms a classical computer

(credit: D-Wave)

Early on in D-Wave’s history, the company made bold claims about its quantum annealer outperforming algorithms run on traditional CPUs. Those claims turned out to be premature, as improvements to these algorithms pulled the traditional hardware back in front. Since then, the company has been far more circumspect about its performance claims, even as it brought out newer generations of hardware.

But in the run-up to its latest hardware, the company apparently became a bit more interested in performance again. It recently got together with Google scientists to demonstrate a significant boost in performance compared to a classical algorithm, with the gap growing as the problem became more complex—although the company’s scientists were very upfront about the prospects of finding ways to boost the classical hardware further. Still, there are a lot of caveats even beyond that, so it’s worth taking a detailed look at what the company did.

Magnets, how do they flip?

D-Wave’s system is based on a large collection of quantum devices, each connected to some of its neighbors. Each device can have its state set separately, and the devices are then given the chance to influence their neighbors as the system moves through different states and individual devices change their behavior. These transitions are the equivalent of performing operations. And because of the quantum nature of these devices, the hardware seems to be able to “tunnel” to new states, even if the only route between them passes through high-energy states that would otherwise be inaccessible.


#computer-science, #d-wave, #quantum-computing, #quantum-mechanics, #science, #xeon

SuperAnnotate, a computer vision platform, partners with open source to spread visual ML

SuperAnnotate, a NoCode computer vision platform, is partnering with OpenCV, a non-profit organization that has built a large collection of open-source computer vision algorithms. The move means startups and entrepreneurs will be able to build their own AI models and allow cameras to detect objects using machine learning. SuperAnnotate has so far raised $3M to date from investors including Point Nine Capital, Fathom Capital and Berkeley SkyDeck Fund.

The AI-powered computer vision platform for data scientists and annotation teams will give OpenCV AI Kit (OAK) users access to its platform and will launch a computer vision course on building AI models. SuperAnnotate will also set up the AI Kit’s camera to detect objects using machine learning, and OAK users will get $200 of credit to set up their systems on its platform.

The OAK is a multi-camera device that can run computer vision and 3D perception tasks such as identifying objects, counting people and measuring distances. Since launching, around 11,000 of these cameras have been distributed.

The AI Kit has so far been used to build drone and security applications, agricultural vision sensors, and even COVID-related detection devices (for example, to identify whether someone is wearing a mask).

Tigran Petrosyan, co-founder and CEO at SuperAnnotate, said in a statement: “Computer vision and smart camera applications are gaining momentum, yet not many have the relevant AI expertise to implement those. With OAK Kit and SuperAnnotate, one can finally build their smart camera system, even without coding experience.”

Competitors to SuperAnnotate include Dataloop, Labelbox, Appen and Hive.

#articles, #artificial-intelligence, #computer-science, #computer-vision, #computing, #europe, #machine-learning, #opencv, #point-nine-capital, #tc

Classiq raises $10.5M Series A round for its quantum software development platform

Classiq, a Tel Aviv-based startup that aims to make it easier for computer scientists and developers to create quantum algorithms and applications, today announced that it has raised a $10.5 million Series A round led by Team8 Capital and Wing Capital. Entrée Capital, crowdfunding platform OurCrowd and Sumitomo Corporation (through IN Venture) also participated in this round, which follows the company’s recent $4 million seed round led by Entrée Capital.

The premise behind Classiq, which currently has just under a dozen people on its team, is that developing quantum algorithms remains a major challenge.

“Today, quantum software development is almost an impossible task,” said Nir Minerbi, CEO and Co-founder of Classiq. “The programming is at the gate level, with almost no abstraction at all. And on the other hand, for many enterprises, that’s exactly what they want to do: come up with game-changing quantum algorithms. So we built the next layer of the quantum software stack, which is the layer of a computer-aided design, automation, synthesis. […] So you can design the quantum algorithm without being aware of the details and the gate level details are automated.”

Image Credits: Classiq

With Microsoft’s Q#, IBM’s Qiskit and their competitors, developers already have access to quantum-specific languages and frameworks. And as Amir Naveh, Classiq’s VP of R&D, told me, just like with those tools, developers will define their algorithms as code — in Classiq’s case, a variant of Python. With those other languages, though, you write sequences of gates on the qubits to define your quantum circuit.

“What you’re writing down isn’t gates on qubits, it’s concepts, it’s constructs, it’s constraints — it’s always constraints on what you want the circuit to achieve,” Naveh explained. “And then the circuit is synthesized from the constraints. So in terms of the visual interface, it would look the same [as using other frameworks], but in terms of what’s going through your head, it’s a whole different level of abstraction; you’re describing the circuit at a much higher level.”

This, he said, gives Classiq’s users the ability to more easily describe what they are trying to do. For now, though, that also means that the platform’s users tend to be quantum teams and scientists and developers who are quantum experts and understand how to develop quantum circuits at a very deep level. The team argues, though, that as the technology gets better, developers will need to have less and less of an understanding of how the actual qubits behave.

As Minerbi stressed, the tool is agnostic to the hardware that will eventually run these algorithms. Classiq’s mission, after all, is to provide an additional abstraction layer on top of the hardware. At the same time, though, developers can optimize their algorithms for specific quantum computing hardware as well.

Classiq CTO Dr. Yehuda Naveh also noted that the company is already working with a number of larger companies. These include banks that have used its platform for portfolio optimization, for example, and a semiconductor firm that was looking into a material science problem related to chip manufacturing, an area that is a bit of a sweet spot for quantum computing — at least in its current state.

The team plans to use the new funding to expand its existing team, mostly on the engineering side. A lot of the work the company is doing, after all, is still in R&D. Finding the right software engineers with a background in physics — or quantum information experts who can program — will be of paramount importance for the company. Minerbi believes that is possible, though, and the plan is to soon expand the team to about 25 people.

“We are thrilled to be working with Classiq, assisting the team in achieving their goals of advancing the quantum computing industry,” said Sarit Firon, Managing Partner at Team8 Capital. “As the quantum era takes off, they have managed to solve the missing piece in the quantum computing puzzle, which will enable game-changing quantum algorithms. We look forward to seeing the industry grow, and witnessing how Classiq continues to mark its place as a leader in the industry.”

#computer-science, #emerging-technologies, #entree-capital, #funding, #ourcrowd, #quantum-computing, #recent-funding, #science, #science-and-technology, #startups, #team8, #tel-aviv

One piece of optical hardware performs massively parallel AI calculations

The output of two optical frequency combs, showing the light appearing at evenly spaced wavelengths. (credit: ESO)

AI and machine-learning techniques have become a major focus of everything from cloud computing services to cell phone manufacturers. Unfortunately, our existing processors are a bad match for the sort of algorithms that many of these techniques are based on, in part because they require frequent round trips between the processor and memory. To deal with this bottleneck, researchers have figured out how to perform calculations in memory and designed chips where each processing unit has a bit of memory attached.

Now, two different teams of researchers have figured out ways of performing calculations with light that both merge memory and computation and allow for massive parallelism. Despite the differences in implementation, the two designs share a common feature: the same piece of hardware can simultaneously perform different calculations using different frequencies of light. While the new systems aren’t yet at the performance level of some dedicated processors, the approach scales easily and can be implemented using on-chip hardware, raising the prospect of using it as a dedicated co-processor.

A fine-toothed comb

The new work relies on hardware called a frequency comb, a technology that won some of its creators the 2005 Nobel Prize in Physics. While a lot of interesting physics is behind how the combs work (which you can read more about here), what we care about is the outcome of that physics. While there are several ways to produce a frequency comb, they all produce the same thing: a beam of light that is composed of evenly spaced frequencies. So a frequency comb in visible wavelengths might be composed of light with a wavelength of 500 nanometers, 510nm, 520nm, and so on.
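
A conceptual sketch of why the comb matters for computing (all numbers invented, with numpy standing in for the optics): each comb line carries one input value, a wavelength-dependent attenuator applies one weight per line, and a photodetector that sums all the light performs the accumulation, so a single element computes a whole dot product at once.

# Wavelength-parallel multiply-accumulate, illustrated with plain arrays.
import numpy as np

comb_lines_nm = np.arange(500, 560, 10)              # 500, 510, ... evenly spaced lines
inputs = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.5])    # data modulated onto each line
weights = np.array([0.6, 0.1, 0.8, 0.3, 0.9, 0.4])   # per-wavelength attenuation

per_line = weights * inputs                          # happens in parallel across wavelengths
dot_product = per_line.sum()                         # the detector integrates every line

for wl, value in zip(comb_lines_nm, per_line):
    print(f"{wl} nm: {value:.2f}")
print("accumulated output:", round(float(dot_product), 3))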


#ai, #computer-science, #materials-science, #optical-computing, #science

Google develops an AI that can learn both chess and Pac-Man

The first major conquest of artificial intelligence was chess. The game has a dizzying number of possible combinations, but it was relatively tractable because it was structured by a set of clear rules. An algorithm could always have perfect knowledge of the state of the game and know every possible move that both it and its opponent could make. The state of the game could be evaluated just by looking at the board.

But many other games aren’t that simple. If you take something like Pac-Man, then figuring out the ideal move would involve considering the shape of the maze, the location of the ghosts, the location of any additional areas to clear, the availability of power-ups, etc., and the best plan can end up in disaster if Blinky or Clyde makes an unexpected move. We’ve developed AIs that can tackle these games, too, but they have had to take a very different approach from the ones that conquered chess and Go.

At least until now. Today, however, Google’s DeepMind division published a paper describing the structure of an AI that can tackle both chess and Atari classics.


#ai, #atari, #chess, #computer-science, #deepmind, #go, #google, #pac-man, #science

DeepMind AI handles protein folding, which humbled previous software

Proteins rapidly form complicated structures which had proven difficult to predict. (credit: Argonne National Lab / Flickr)

Today, DeepMind announced that it has seemingly solved one of biology’s outstanding problems: how the string of amino acids in a protein folds up into the three-dimensional shape that enables the protein’s complex functions. It’s a computational challenge that has resisted the efforts of many very smart biologists for decades, despite the application of supercomputer-level hardware to the calculations. DeepMind instead trained its system using 128 specialized processors for a couple of weeks; it now returns potential structures within a couple of days.

The limitations of the system aren’t yet clear—DeepMind says it’s currently planning on a peer-reviewed paper and has only made a blog post and some press releases available. But the system clearly performs better than anything that’s come before it, after having more than doubled the performance of the best system in just four years. Even if it’s not useful in every circumstance, the advance likely means that the structure of many proteins can now be predicted from nothing more than the DNA sequence of the gene that encodes them, which would mark a major change for biology.

Between the folds

To make proteins, our cells (and those of every other organism) chemically link amino acids to form a chain. This works because every amino acid shares a backbone that can be chemically connected to form a polymer. But each of the 20 amino acids used by life has a distinct set of atoms attached to that backbone. These can be charged or neutral, acidic or basic, etc., and these properties determine how each amino acid interacts with its neighbors and the environment.


#ai, #biochemistry, #biology, #computational-biology, #computer-science, #deepmind, #protein-folding, #science

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, an added benefit of how it uses the eBPF Linux kernel technology is that the platform also gains deep insight into how data flows between services, which lets it add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF themselves; the technology is essentially the next generation of Linux kernel modules.

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.

#andreesen-horowitz, #ceo, #cloud, #computer-science, #computing, #cto, #datadog, #enterprise, #google, #kernel, #kubernetes, #linus-torvalds, #linux, #martin-casado, #nicira, #operating-systems, #recent-funding, #security, #startups, #vms, #vmware

D-Wave releases its next-generation quantum annealing chip

Image of a chip surrounded by complicated support hardware.

Today, quantum computing company D-Wave is announcing the availability of its next-generation quantum annealer, a specialized processor that uses quantum effects to solve optimization and minimization problems. The hardware itself isn’t much of a surprise—D-Wave was discussing its details months ago—but D-Wave talked with Ars about the challenges of building a chip with over a million individual quantum devices. And the company is coupling the hardware’s release to the availability of a new software stack that functions a bit like middleware between the quantum hardware and classical computers.

Quantum annealing

Quantum computers being built by companies like Google and IBM are general-purpose, gate-based machines. They can solve any problem and should show a vast acceleration for specific classes of problems. Or they will, as soon as the qubit count gets high enough. Right now, these quantum computers are limited to a few dozen qubits and have no error correction. Bringing them up to the scale needed presents a series of difficult technical challenges.

D-Wave’s machine is not general-purpose; it’s technically a quantum annealer, not a quantum computer. It performs calculations that find low-energy states for different configurations of the hardware’s quantum devices. As such, it will only work if a computing problem can be translated into an energy-minimization problem in one of the chip’s possible configurations. That’s not as limiting as it might sound, since many forms of optimization can be translated to an energy-minimization problem, including things like complicated scheduling problems and protein structure prediction.
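
To make “translating a problem into an energy-minimization problem” concrete, here is a minimal sketch in plain Python (not D-Wave’s software stack, and the toy constraint is my own) that writes a small rule, pick exactly two of four items, as a QUBO (quadratic unconstrained binary optimization) problem, the form an annealer accepts, and then brute-forces it classically.

```python
from itertools import product

# Minimal sketch, not D-Wave's software stack: a toy constraint
# ("pick exactly two of four items") written as a QUBO, i.e. minimize
# E(x) = (x0 + x1 + x2 + x3 - 2)^2 over binary variables x_i.
# Expanding the square (and using x_i^2 = x_i for 0/1 variables) gives
# linear coefficients of -3 and pairwise coefficients of +2, plus a
# constant offset that does not affect the minimum.
n = 4
Q = {}
for i in range(n):
    Q[(i, i)] = -3                 # linear term for x_i
    for j in range(i + 1, n):
        Q[(i, j)] = 2              # coupling term for x_i * x_j

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# An annealer samples low-energy states of this function physically;
# for four variables we can simply brute-force all 16 assignments.
best = min(product([0, 1], repeat=n), key=energy)
print(best, energy(best))          # e.g. (0, 0, 1, 1) with energy -4
```

Any assignment with exactly two ones ties for the minimum. On the real hardware, the same coefficients would be mapped onto the chip’s qubits and couplers and sampled physically; the classical brute force here is only feasible because the instance is tiny.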

Read 22 remaining paragraphs | Comments

#algorithms, #computer-science, #d-wave, #quantum-annealer, #quantum-mechanics, #science

Want to hire and retain high-quality developers? Give them stimulating work

Software developers are some of the most in-demand workers on the planet. Not only that, they’re complex creatures with unique demands in terms of how they define job fulfillment. With demand for developers on the rise (the number of jobs in the field is expected to grow by 22% over the next decade), companies are under pressure to do everything they can to attract and retain talent.

First and foremost — above salary — employers must ensure that product teams are made up of developers who feel creatively stimulated and intellectually challenged. Without work they feel passionate about, high-quality programmers won’t just become bored and potentially seek opportunities elsewhere; the standard of their work will inevitably drop. In one survey, 68% of developers said learning new things is the most important element of a job.

The worst thing for a developer to discover about a new job is that they’re the most experienced person in the room and there’s little room for their own growth.

Yet with only 32% of developers feeling “very satisfied” with their jobs, there’s scope for you to position yourself as a company that prioritizes the development of its developers, and to attract and retain top talent. So, how exactly can you ensure that your team stays stimulated and creatively engaged?

Allow time for personal projects

78% of developers see coding as a hobby — and the best developers are the ones who have a true passion for software development, in and out of the workplace. This means they often have their own personal passions within the space, be it working with specific languages or platforms, or building certain kinds of applications.

Back in their 2004 IPO letter, Google founders Sergey Brin and Larry Page wrote:

We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. [This] empowers them to be more creative and innovative. Many of our significant advances have happened in this manner.

At DevSquad, we’ve adopted a similar approach. We have an “open Friday” policy where developers are able to learn and enhance their skills through personal projects. As long as the skills being gained contribute to work we are doing in other areas, the developers can devote that time to whatever they please, whether that’s contributing to open-source projects or building a personal product. In fact, 65% of professional developers on Stack Overflow contribute to open-source projects once a year or more, so it’s likely that this is a keen interest within your development team too.

Not only does this provide a creative outlet for developers, but the company also gains from the continuously expanding skill set that comes as a result.

Provide opportunities to learn and teach

One of the most demotivating things for software developers is work that’s either too difficult or too easy. Too easy, and developers get bored; too hard, and morale can dip as a project seems insurmountable. Within our team, we remain hyperaware of the difficulty levels of the project or task at hand and the level of experience of the developers involved.

#column, #computer-science, #developer, #entrepreneurship, #hiring, #information-technology, #programmer, #recruiting, #software-development, #startups, #tc

Jesus, SaaS and digital tithing

There are more than 300,000 congregations in the U.S., and entrepreneurs are creating billion-dollar companies by building software to service them. Welcome to church tech.

The sector was growing prior to COVID-19, but the pandemic forced many congregations to go entirely online, which rapidly accelerated growth in this space. While many of these companies were bootstrapped, VC dollars are also increasingly flowing in. Unfortunately, it’s hard to come across a lot of resources covering this expanding, unique sector.

Market map

In broad terms, we can split church tech into six categories:

  • church management software (ChMS)
  • digital giving
  • member outreach/messaging
  • streaming/content
  • Bible study
  • website and app building

Horizontal integration is huge in this sector, and nearly all the companies operating in this space fall into several of these categories. Many have expanded through M&A.

The categories

Speech recognition algorithms may also have racial bias

Extreme closeup photograph of a professional microphone.

Enlarge / Microphones are how our machines listen to us. (credit: Teddy Mafia / Flickr)

We’re outsourcing ever more of our decision making to algorithms, partly as a matter of convenience, and partly because algorithms are ostensibly free of some of the biases that humans suffer from. Ostensibly. As it turns out, algorithms that are trained on data that’s already subject to human biases can readily recapitulate them, as we’ve seen in places like the banking and judicial systems. Other algorithms have just turned out to be not especially good.

Now, researchers at Stanford have identified another area with potential issues: the speech-recognition algorithms that do everything from basic transcription to letting our phones fulfill our requests. These algorithms seem to have more issues with the speech patterns used by African Americans, although there’s a chance that geography plays a part, too.

A non-comedy of errors

Voice-recognition systems have become so central to modern technology that most of the large companies in the space have developed their own. For the study, the research team tested systems from Amazon, Apple, Google, IBM, and Microsoft. While some of these systems are sold as services to other businesses, the ones from Apple and Google are as close as your phone. Their growing role in daily life makes their failures intensely frustrating, so the researchers decided to have a look at whether those failures display any sort of bias.

Read 10 remaining paragraphs | Comments

#computer-science, #science, #speech-recognition

How do you keep an AI’s behavior from becoming predictable?

The Facebook app displayed on the screen of an iPhone.

Enlarge / The Facebook app displayed on the screen of an iPhone. (credit: Fabian Sommer | picture alliance | Getty Images)

A lot of neural networks are black boxes. We know they can successfully categorize things—images with cats, X-rays with cancer, and so on—but for many of them, we can’t understand what they use to reach that conclusion. But that doesn’t mean that people can’t infer the rules they use to fit things into different categories. And that creates a problem for companies like Facebook, which hopes to use AI to get rid of accounts that abuse its terms of service.

Most spammers and scammers create accounts in bulk, and they can easily look for differences between the ones that get banned and the ones that slip under the radar. Those differences can allow them to evade automated algorithms by structuring new accounts to avoid the features that trigger bans. The end result is an arms race between algorithms and spammers and scammers who try to guess their rules.
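
To make that dynamic concrete, here is a toy sketch; the rule and the features are entirely hypothetical and have nothing to do with Facebook’s actual classifier. It shows how a fixed, legible rule can be reverse-engineered simply by comparing banned and surviving accounts and tweaking one feature at a time.

```python
# Toy sketch, entirely hypothetical: not Facebook's classifier or feature set.
# A fixed, legible rule is easy to probe and evade.
def ban_rule(account: dict) -> bool:
    # Hypothetical rule: flag accounts that post very fast and have no friends.
    return account["posts_per_hour"] > 50 and account["friend_count"] == 0

probe = {"posts_per_hour": 60, "friend_count": 0}
print(ban_rule(probe))    # True: this account gets banned

# A spammer compares banned and surviving accounts, changes one feature,
# and finds a configuration that slips under the radar.
evasive = {**probe, "friend_count": 5}
print(ban_rule(evasive))  # False: same behavior, no ban
```

The approach Facebook describes is aimed at making exactly this kind of probing unproductive.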

Facebook thinks it has found a way to avoid getting involved in this arms race while still using automated tools to police its users, and this week, it decided to tell the press about it. The result was an interesting window into how to keep AI-based moderation useful in the face of adversarial behavior, an approach that could be applicable well beyond Facebook.

Read 14 remaining paragraphs | Comments

#artificial-intelligence, #biz-it, #computer-science, #facebook, #fact-checking, #moderation, #science