Google’s powerful AI spotlights a human cognitive glitch

(credit: Getty Images)

When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural—but potentially misleading—to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.

Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing.

#ai, #google, #language, #science

How to get started with machine learning and AI

"It's a cookbook?!"

Enlarge / “It’s a cookbook?!” (credit: Aurich Lawson | Getty Images)

“Artificial Intelligence” as we know it today is, at best, a misnomer. AI is in no way intelligent, but it is artificial. It remains one of the hottest topics in industry and is enjoying a renewed interest in academia. This isn’t new—the world has been through a series of AI peaks and valleys over the past 50 years. But what makes the current flurry of AI successes different is that modern computing hardware is finally powerful enough to fully implement some wild ideas that have been hanging around for a long time.

Back in the 1950s, in the earliest days of what we now call artificial intelligence, there was a debate over what to name the field. Herbert Simon, co-developer of both the logic theory machine and the General Problem Solver, argued that the field should have the much more anodyne name of “complex information processing.” This certainly doesn’t inspire the awe that “artificial intelligence” does, nor does it convey the idea that machines can think like humans.

However, “complex information processing” is a much better description of what artificial intelligence actually is: parsing complicated data sets and attempting to make inferences from the pile. Some modern examples of AI include speech recognition (in the form of virtual assistants like Siri or Alexa) and systems that determine what’s in a photograph or recommend what to buy or watch next. None of these examples are comparable to human intelligence, but they show we can do remarkable things with enough information processing.

#ai, #ai-ml, #artificial-intelligence, #biz-it, #dall-e, #feature, #features, #machine-learning, #machine-learning-tools, #models, #notebooks, #open-ai

Google places engineer on leave after he claims group’s chatbot is “sentient”

(credit: Yuchiro Chino | Getty Images)

Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave who went public with his belief that the tech group’s chatbot has become “sentient.”

Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, did not receive much attention last week when he wrote a Medium post saying he “may be fired soon for doing AI ethics work.”

But a Saturday profile in the Washington Post characterizing Lemoine as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion on social media regarding the nature of artificial intelligence. Among the experts commenting, questioning or joking about the article were Nobel laureates, Tesla’s head of AI and multiple professors.

#ai, #google, #machine-learning, #science, #tech, #turing-test

How we learned to break down barriers to machine learning

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day’s talks for the benefit of those who weren’t able to travel to DC for our first conference. We’ll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today’s recap, we’re going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled “Breaking Barriers to Machine Learning.”

#ai, #ai-ml, #ars-frontiers, #ars-technica-videos, #biz-it, #feature, #features, #frontiers-recap, #machine-learning, #ml

Apple will add fifth US English Siri voice in iOS 15.4

The back of the iPhone 13 mini. (credit: Samuel Axon)

There are already four American-accented English voices for Siri, but Apple will add a fifth in iOS 15.4. The new voice aims to provide a gender-neutral option for the first time, as reported by Axios.

The voice is labeled “Voice 5” in the Settings panel in the current beta release, though developer Steve Moser noted on Twitter that the voice is named “Quinn” under the hood. Apple confirmed to Axios that the voice is built from recordings by a member of the LGBTQ+ community. Moser also tweeted an example of what the new voice sounds like:

For most of the time since Siri first became a core iPhone feature back in 2011, a female voice was the default. That changed last year when Apple changed the iPhone setup to prompt the user to pick a male or female voice when first starting the iPhone, with no default choice selected.

#ai, #apple, #digital-assistant, #ios, #ios-15, #ios-15-4, #siri, #tech, #voice-synthesis

Latest success from Google’s AI group: Controlling a fusion reactor

Plasma inside the tokamak at the EPFL. (credit: EPFL)

As the world waits for construction of the largest fusion reactor yet, called ITER, smaller reactors with similar designs are still running. These reactors, called tokamaks, help us test both hardware and software. The hardware testing helps us refine things like the materials used for container walls or the shape and location of control magnets.

But arguably, the software is the most important. To enable fusion, the control software of a tokamak has to monitor the state of the plasma it contains and respond to any changes by making real-time adjustments to the system’s magnets. Failure to do so can result in anything from a drop in energy (which leads to the failure of any fusion) to seeing the plasma spill out of containment (and scorch the walls of the container).

Getting that control software right requires a detailed understanding of both the control magnets and the plasma the magnets manipulate. Or, more accurately, it has required that understanding until now: today, Google’s DeepMind AI team is announcing that its software has been successfully trained to control a tokamak.
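
In control terms, that job is a fast feedback loop: read the plasma state from the diagnostics, compare it to the desired shape and position, and nudge the magnet currents to close the gap. The sketch below is a minimal proportional-feedback illustration, not DeepMind’s method (which replaces the hand-tuned controller with a learned neural-network policy); the function names and the three-variable state are assumptions:

```python
import numpy as np

def read_sensors():
    """Placeholder: a real tokamak reconstructs plasma shape and position
    from magnetic probes and other diagnostics."""
    return np.random.normal(size=3)  # e.g., [radial position, vertical position, elongation]

def apply_currents(currents):
    """Placeholder for commanding the control magnets' power supplies."""
    pass

def control_loop(target, gain=0.5, steps=1000):
    """Minimal proportional feedback: adjust magnet currents in proportion
    to the plasma's deviation from the target state. Real loops run at
    kilohertz rates; DeepMind trains an RL policy to do this step instead."""
    currents = np.zeros(3)
    for _ in range(steps):
        error = target - read_sensors()  # deviation from desired plasma state
        currents += gain * error         # proportional correction
        apply_currents(currents)

control_loop(target=np.array([0.0, 0.0, 1.5]))
```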

#ai, #computer-science, #deepmind, #fusion, #physics, #science

This AI beat the world’s best Gran Turismo players

Sony AI has trained a new AI called GT Sophy to be extremely good at Gran Turismo. (credit: Clive Rose – Gran Turismo/Gran Turismo via Getty Images)

A team of researchers at Sony AI has used deep reinforcement learning to teach an artificial intelligence to play Gran Turismo at a world-class level. While previous experiments have taught AI how to drive very fast, this is the first time that one has learned to actually race. And to prove it, the AI beat some of the world’s best GT players in head-to-head competition, as described in a new paper published in Nature this week.

Racing is not easy, and it involves more than just knowing how to drive a car really fast. Car control is obviously important, but so too are tactics, strategy, and the somewhat nebulous concept of etiquette.

Or, as the authors put it, “[a]utomobile racing is a domain that poses exactly these challenges; it requires real-time control of vehicles with complex, non-linear dynamics while operating within inches of opponents.” Some drivers might have limited success through aggression and going for every overtaking opportunity they see. But knowing where to pass and when to wait for a better opportunity—so you don’t get re-passed at the end of the next straight, for instance—is at least as important, as is knowing when to cede to a rival so you don’t end up in the wall or a gravel trap.
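
In deep reinforcement learning, those tactical and etiquette constraints have to be encoded in the reward signal that scores every action the agent takes. The toy reward below illustrates that idea only; the terms and weights are invented for illustration, not Sony’s published reward function:

```python
def race_reward(progress_m, off_track, collision, blocked_opponent):
    """Toy reward for a racing RL agent: reward forward progress and
    penalize leaving the track, contact, and unsporting blocking.
    All weights are made up for illustration."""
    reward = 1.0 * progress_m          # meters gained along the track this step
    if off_track:
        reward -= 5.0                  # discourage corner-cutting
    if collision:
        reward -= 10.0                 # contact with another car
    if blocked_opponent:
        reward -= 2.0                  # "etiquette": penalize aggressive blocking
    return reward
```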

#ai, #cars, #e-sports, #gaming-culture, #gran-turismo, #gran-turismo-sport, #machine-learning, #neural-net, #science, #sony, #sony-ai

Hydrogen-soaked crystal lets neural networks expand to match a problem

Image of a stylized circuit layout. (credit: Getty Images)

Training AIs remains very processor-intensive, in part because traditional processing architectures are poor matches for the sorts of neural networks that are widely used. This has led to the development of what has been termed neuromorphic computing hardware, which attempts to model the behavior of biological neurons in hardware.

But most neuromorphic hardware is implemented in silicon, which limits it to behaviors that are set at the hardware level. A group of US researchers is now reporting a type of non-silicon hardware that’s substantially more flexible. It works by controlling how much hydrogen is present in a nickel-based compound, with the precise amount of hydrogen switching a single device among four different behaviors, each of which is useful for performing neural-network operations.

Give it the gas

The material being used here is one of a class of compounds called perovskite nickelates. Perovskite is a general term for a specific arrangement of atoms in a crystalline structure; a wide variety of chemicals can form perovskites. In this case, the crystal is formed from a material that’s a mix of neodymium, nickel, and oxygen.

#ai, #computer-science, #materials-science, #neural-networks, #science

This AI mechanic scans your car or tires to diagnose defects

UVeye’s technology uses scanners and AI to diagnose defects. (credit: UVeye)

Can you train an AI to take a breath, wince, and remark, “Well, it’s going to cost you”?

That’s probably easier than teaching one to diagnose problems with your car after a visual scan of its undercarriage, and yet the latter is what an Israeli company called UVeye has done. The company has developed what you might think of as a car scanner that can diagnose problems in just a few seconds. Drive past it, and it will image your car’s panels, tires, or underbody, spotting dings, oil leaks, foreign objects, or other problems, flagging them for remedy.

It’s another intriguing example of the civilian spinoffs that have emerged from Israel’s national security sector over the last couple of decades as sensors and algorithms find new life on civvy streets.

#ai, #car-repair, #cars, #scanner, #uveye

Top 9 Free AI Tools That Make Your Life Easier


Photo: Copy.ai



First one on the list is Copy.ai, an AI-based copywriting tool. A copywriting tool gives you content that you can post on your blog or video when you give it a few descriptions of the topic you want content on. Copy.ai can help you write Instagram captions, and it gives you blog ideas, product descriptions, Facebook content, startup ideas, viral ideas; it can do a lot of things. You just make an account on the website, select a tool, and fill in the necessary description, and the AI will generate content on whatever you ask for.

For tutorials, go to their official YouTube channel. An awesome tool that is going to be really handy in the future.



Hotpot.ai offers a collection of AI tools for designers, as well as for anyone else. It has an AI picture restorer, which removes scratches and restores your old photos, making them look brand new.

An AI picture colorizer turns your black-and-white photos into color, and there’s also a background remover tool, a picture enlarger, and a lot more for designers. Check it out and explore all the tools.



Deep Nostalgia became very popular on the internet when people started making reaction videos of their parents reacting to animated pictures of their grandparents. Deep Nostalgia is a very cool app that will animate any photo of a person.

What makes it really cool is the fact that you can upload an old photo of your family and see them animated, living. Which is pretty cool and creepy at the same time if they have already passed away. A really amazing service from MyHeritage; I created a lot of cool animations with my old photos as well as with photos of my grandparents.

Having a nice-looking profile picture is really important if you want that professional feel in your socials. Whether on LinkedIn or Twitter, a distinct and catchy profile picture can make all the difference. That’s where PFPMaker comes in: it’s a free online tool to create amazing, professional profile pictures that fit you. It generates a lot of profile pictures, and you can also make small changes to already-created profile pictures if you want.



#ai

Attempt to compare different types of intelligence falls a bit short

(credit: MIT Press)

“What makes machines, animals, and people smart?” asks the subtitle of Paul Thagard’s new book. Not “Are computers smarter than humans?” or “Will computers ever be smarter than humans?” or even “Are computers and animals conscious, sentient, or self-aware (whatever any of that might mean)?” And that’s unfortunate, because most people are probably more concerned with questions like those.

Thagard is a philosopher and cognitive scientist, and he has written many books about the brain, the mind, and society. In this one, he defines what intelligence is and delineates the 12 features and 8 mechanisms that he thinks comprise it, which allows him to compare the intelligences of these three very different types of beings.

He starts with a riff on the Aristotelian conception of virtue ethics. Whereas in that case a good person is defined as someone who possesses certain virtues, in Thagard’s case a smart person is defined as someone who epitomizes certain ways of thinking. Confucius, Mahatma Gandhi, and Angela Merkel excelled at social innovation; Thomas Edison and George Washington Carver excelled at technological innovation; he lists Beethoven, Georgia O’Keeffe, Jane Austen, and Ray Charles as some of his favorite artistic geniuses; and Charles Darwin and Marie Curie serve as his paragons of scientific discovery. Each of these people epitomizes different aspects of human intelligence, including creativity, emotion, problem solving, and using analogies.

#ai, #animal-behavior, #behavioral-science, #intelligence, #science

To see proteins change in a quadrillionth of a second, use AI

(credit: Westend61 | Getty Images)

Have you ever had an otherwise perfect photo ruined by someone who moved too quickly and caused a blur? Scientists have the same issue while recording images of proteins that change their structure in response to light. This process is common in nature, so for years researchers have tried to capture its details. But they have long been thwarted by how incredibly fast it happens.

Now a team of researchers from the University of Wisconsin-Milwaukee and the Center for Free-Electron Laser Science at the Deutsches Elektronen-Synchrotron in Germany has combined machine learning and quantum mechanical calculations to get the most precise record yet of structural changes in a photoactive yellow protein (PYP) that has been excited by light. Their study, published in November in Nature, showed that they were able to make movies of processes that occur in quadrillionths of a second.

#ai, #protein-folding, #science

The movement to hold AI accountable gains more steam

(credit: MirageC | Getty Images)

Algorithms play a growing role in our lives, even as their flaws are becoming more apparent: a Michigan man wrongly accused of fraud had to file for bankruptcy; automated screening tools disproportionately harm people of color who want to buy a home or rent an apartment; Black Facebook users were subjected to more abuse than white users. Other automated systems have improperly rated teachers, graded students, and flagged people with dark skin more often for cheating on tests.

Now, efforts are underway to better understand how AI works and hold users accountable. New York’s City Council last month adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers also must tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted.

In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission; three of the FTC’s five members support stronger regulation of algorithms. An AI Bill of Rights proposed last month by the White House calls for disclosing when AI makes decisions that impact a person’s civil rights, and it says AI systems should be “carefully audited” for accuracy and bias, among other things.

#ai, #algorigthms, #policy

Getting software to “hallucinate” reasonable protein structures

Top row: the hallucination and actual structure. Bottom row: the two structures superimposed. (credit: Anishchenko et al.)

Chemically, proteins are just a long string of amino acids. Their amazing properties come about because that chain can fold up into a complex, three-dimensional shape. So understanding the rules that govern this folding can not only give us insights into the proteins that life uses but could potentially help us design new proteins with novel chemical abilities.

There’s been remarkable progress on the first half of that problem recently: researchers have tuned AIs to sort through the evolutionary relationships among proteins and relate common features to structures. As of yet, however, those algorithms aren’t any help for designing new proteins from scratch. But that may change, thanks to the methods described in a paper released on Wednesday.

In it, a large team of researchers describes what it terms protein “hallucinations.” These are the products of a process that resembles a game of hotter/colder with an algorithm, starting with a random sequence of amino acids, making a change, and asking, “Does this look more or less like a structured protein?” Several of the results were tested and do, in fact, fold up like they were predicted to.
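
In pseudocode, the hotter/colder game is a mutate-score-keep search loop driven by the structure-prediction network’s confidence. Here is a deliberately simplified greedy sketch: the real procedure also accepts occasional worse moves, simulated-annealing style, and scores sequences by how sharply the network predicts inter-residue geometry. The scoring stub below is a meaningless stand-in just so the code runs:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def structure_confidence(seq):
    """Stand-in for the trained network's 'does this look like a structured
    protein?' score. The real method measures how confidently the network
    predicts a definite fold; this dummy exists only to make the sketch run."""
    return -abs(seq.count("A") - len(seq) / 20)

def hallucinate(length=100, steps=20000):
    """Hotter/colder loop: mutate one residue at a time and keep changes
    that raise the network's confidence in a well-defined structure."""
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    best = structure_confidence(seq)
    for _ in range(steps):
        trial = list(seq)
        trial[random.randrange(length)] = random.choice(AMINO_ACIDS)  # one mutation
        score = structure_confidence(trial)
        if score > best:  # "hotter": keep the improvement
            seq, best = trial, score
    return "".join(seq)

print(hallucinate(length=60, steps=5000))
```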

#ai, #biochemistry, #computer-science, #protein-structure, #science

This intrepid robot is the Wall-E of the deep sea

With extra-wide tracks and a bunch of other clever features, the Benthic Rover II can roam the seafloor for years at a time. (credit: Madison Pobis | MBARI)

The Benthic Rover II is the size of a compact car, although it rocks fat treads, making it more like a scientific tank. That, along with the two googly-eye-like flotation devices on its front, gives it a sort of WALL-E vibe. Only instead of exploring a garbage-strewn landscape, BR-II roams the Pacific seafloor, 13,000 feet deep. The robot’s mission: to prowl the squishy terrain in search of clues about how the deep ocean processes carbon.

That mission begins with a wild ride, 180 miles off the coast of Southern California. Scientists at the Monterey Bay Aquarium Research Institute lower BR-II into the water and then … drop it. Completely untethered, the robot free-falls for two and a half hours, landing on the abyssal plains—great stretches of what you might generously call muck. “It’s mushy and dusty at the same time,” says MBARI electrical engineer Alana Sherman, coauthor on a new paper in Science Robotics describing findings from the robot’s adventures. “Which is part of the reason it’s a tracked vehicle, and it has these really wide treads.” That extra surface area distributes the robot’s weight so it doesn’t sink into the sand.
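
The wide treads come down to simple pressure arithmetic: ground pressure is weight divided by contact area, so spreading the same weight over more tread means less sinking. A toy calculation with invented numbers (not MBARI’s actual specs; underwater, buoyancy reduces the effective weight further):

```python
def ground_pressure_kpa(mass_kg, contact_area_m2):
    """Ground pressure = weight / contact area, in kilopascals."""
    g = 9.81  # m/s^2
    return mass_kg * g / contact_area_m2 / 1000

# Hypothetical numbers: a 1,000 kg vehicle on narrow vs. extra-wide treads.
print(ground_pressure_kpa(1000, 0.5))  # ~19.6 kPa on 0.5 m^2 of tread
print(ground_pressure_kpa(1000, 2.0))  # ~4.9 kPa on 2.0 m^2: far less sinking
```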

#ai, #deep-sea, #ocean, #robots, #science

Open-sourcing of protein-structure software is already paying off

Image of different categories of protein complexes. (credit: Humphreys et al.)

It is now relatively trivial to determine the order of amino acids in a protein. Figuring out how that order translates to a complicated three-dimensional structure that performs a specific function, however, is extremely challenging. But after decades of slow progress, Google’s DeepMind AI group announced that it has made tremendous strides toward solving the problem. In July, the system, called AlphaFold, was made open source. At the same time, a group of academic researchers released its own protein-folding software, called RoseTTAFold, built in part using ideas derived from DeepMind’s work.

How effective are these tools? Even if they aren’t as good as some of the statistics suggested, it’s clear they’re far better than anything we’ve ever had. So how will scientists use them?

This week, a large research collaboration set the software loose on a related problem: how these individual three-dimensional structures come together to form the large, multi-protein complexes that perform some of the most important functions in biology.

#ai, #biology, #computer-science, #deepmind, #protein-folding, #proteins, #science

Tagalong robots follow you to learn where you go

(credit: Piaggio Fast Forward)

When Amazon introduced its home robot Astro earlier this year, it first showcased the robot following behind a person. It’s a simple idea that has captured people’s imaginations with depictions in science fiction, like R2-D2 and BB-8 from Star Wars, and in reality, with research projects like DARPA’s robotic pack mule.

Follower robots have been tapped for senseless pursuits like carrying a single bottle of water, but robots can also carry tools in a warehouse or just-picked fruit from an orchard to a packing station. Artificially intelligent machines trained to follow people or other machines can transform how we think about everyday objects, like carry-on luggage or a set of golf clubs. Now the makers of follower robots want to coordinate movement around the modern workplace.

Follower robots have been under development since the late 1990s, beginning on the ground and extending underwater and into the sky. Initial forms relied on following the location of a tag in a person’s pocket, but advances in deep learning and computer vision now allow AI to navigate by “seeing” the world through cameras and other sensors.
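
With the camera-based approach, “following” reduces to a small control problem: detect the person in each frame, steer to keep them centered, and hold their apparent size (a stand-in for distance) constant. A minimal sketch, assuming a person detector that returns a normalized bounding box; the names and thresholds are illustrative, not any vendor’s actual code:

```python
def follow_step(frame, detector, target_height=0.4):
    """One iteration of a naive vision-based follower: find the person,
    steer toward the center of their bounding box, and move forward or
    back to keep their apparent size roughly constant.
    `detector` is any person detector returning a normalized box."""
    box = detector(frame)              # (x_center, y_center, width, height), all in [0, 1]
    if box is None:
        return 0.0, 0.0                # lost the person: stop
    x_center, _, _, height = box
    turn = 0.5 - x_center              # steer to re-center the person
    speed = target_height - height     # looks small/far -> advance; big/close -> back off
    return speed, turn
```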

#ai, #autonomy, #robots, #tech

Alphabet launches AI company to discover new drugs

Demis Hassabis, CEO of Google’s artificial intelligence (AI) startup DeepMind, speaks during a press conference on March 8, 2016 in Seoul, South Korea. (credit: Getty Images)

Google owner Alphabet has launched an artificial intelligence company to discover new drugs.

UK-registered Isomorphic Labs will use technology from its sister company DeepMind “to accelerate drug discovery, and ultimately, find cures for some of humanity’s most devastating diseases,” said Demis Hassabis, the head of DeepMind, in a blog post. He added that he would also become the chief executive of Isomorphic Labs.

Scientists around the world were awed in July when DeepMind unveiled how its AlphaFold2 technology could be used to predict the shape of every protein in the human body with almost perfect accuracy.

#ai, #alphabet, #drugs, #google, #pharmaceuticals, #science

Four revelations from the Facebook Papers

(credit: Aurich Lawson | Getty Images)

Facebook is battling its gravest crisis since the Cambridge Analytica scandal after a whistleblower accusing the company of placing “profit over safety” shed light on its inner workings through thousands of pages of leaked memos.

The documents were disclosed to US regulators and provided to Congress in redacted form by Frances Haugen’s legal counsel. A consortium of news organisations, including the Financial Times, has obtained the redacted versions received by Congress.

Earlier this month, Haugen testified in Congress that the social media company does not do enough to ensure the safety of its 2.9 billion users, plays down the harm it can cause to society and has repeatedly misled investors and the public. The Wall Street Journal also ran a series of articles called the Facebook Files.

#ai, #algorightms, #facebook, #facebook-papers, #policy, #social-media

Facebook AI moderator confused videos of mass shootings and car washes

Facebook CEO Mark Zuckerberg testifying before Congress in April 2018. It wasn’t his only appearance in DC this decade. (credit: Bloomberg | Getty Images)

Facebook CEO Mark Zuckerberg sounded an optimistic note three years ago when he wrote about the progress his company was making in automated moderation tools powered by artificial intelligence. “Through the end of 2019, we expect to have trained our systems to proactively detect the vast majority of problematic content,” he wrote in November 2018.

But as recently as March, internal Facebook documents reveal the company found its automated moderation tools were falling far short, removing posts that were responsible for only a small fraction of views of hate speech and violence and incitement on the platform. The posts removed by AI tools only accounted for 3–5 percent of views of hate speech and 0.6 percent of views of violence and incitement.

While that’s up from 2 percent of hate speech views two years ago, according to documents turned over to The Wall Street Journal by whistleblower Frances Haugen, it’s far from a vast majority. One of the company’s senior engineers wrote in 2019 that he felt the company could improve by an order of magnitude but that they might then hit a ceiling beyond which further advances would be difficult.

#ai, #artificial-intelligence, #automated-moderation, #content-moderation, #facebook, #policy

IBM says AI can help track carbon pollution across vast supply chains

A container ship sails off the coast of Thailand. (credit: iStock)

Finding sources of pollution across vast supply chains may be one of the largest barriers to eliminating carbon pollution. For some sources like electricity or transportation, it’s relatively easy. But for others like agriculture or consumer electronics, tracing and quantifying greenhouse gas emissions can be a time-consuming, laborious process. It generally takes an expert around three to six months—sometimes more—to come up with an estimate for a single product.

Typically, researchers have to probe vast supply chains, comb the scientific literature, digest reports, and even interview suppliers. They may have to dive into granular details, estimating the footprint of everything from gypsum in drywall to tin solder on circuit boards. Massive databases of reference values offer crude shortcuts, but they can also introduce uncertainty in the estimate because they don’t capture the idiosyncrasies of many companies’ supply chains.
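
Underneath all that labor, the footprint itself is an inventory sum: the quantity of each input multiplied by an emissions factor, totaled across the supply chain. The hard part is filling in accurate numbers for thousands of supplier-specific inputs. A toy version (all factors and quantities invented for illustration):

```python
# Emissions factors in kg CO2e per unit; values are invented for illustration.
EMISSION_FACTORS = {"gypsum_kg": 0.12, "tin_solder_kg": 7.9, "electricity_kwh": 0.4}

def product_footprint(bill_of_materials):
    """Sum each component quantity times its emissions factor. In practice,
    compiling this dictionary accurately is where the months of expert
    effort go."""
    return sum(qty * EMISSION_FACTORS[item] for item, qty in bill_of_materials.items())

print(product_footprint({"gypsum_kg": 10, "tin_solder_kg": 0.05, "electricity_kwh": 25}))
# -> 10*0.12 + 0.05*7.9 + 25*0.4 ≈ 11.6 kg CO2e
```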

Enter IBM, which has placed a massive bet on offering artificial intelligence services to businesses. Some services, like the company’s Watson health care effort, didn’t live up to the promise. But IBM has refocused its efforts in recent years, and today it announced a new suite of tools for businesses to tackle two significant challenges posed by climate change: emissions reduction and adaptation.

#ai, #artificial-intelligence, #carbon-footprint, #climate-change, #ibm, #life-cycle-analysis, #policy

These virtual obstacle courses help real robots learn to walk

A clip from the simulation where virtual robots learn to climb steps.

An army of more than 4,000 marching doglike robots is a vaguely menacing sight, even in a simulation. But it may point the way for machines to learn new tricks.

The virtual robot army was developed by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot.

In the simulation, the machines—called ANYmals—confront challenges like slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented a harder one, nudging the control algorithm to be more sophisticated.
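
That escalating-obstacle scheme is curriculum learning: train on easy terrain, and promote the policy to harder terrain only once it reliably succeeds. A schematic loop (the environment interface, threshold, and episode counts are assumptions, not the paper’s actual setup):

```python
def train_with_curriculum(policy, make_terrain, levels=10, success_threshold=0.8):
    """Sketch of curriculum learning for legged locomotion: only move to
    steeper slopes and taller steps once the policy clears most episodes.
    `policy.train_episode` is a stand-in that runs one RL episode on the
    environment and returns True on success."""
    for level in range(levels):
        env = make_terrain(difficulty=level)  # e.g., slope angle and step height grow with level
        success_rate = 0.0
        while success_rate < success_threshold:
            results = [policy.train_episode(env) for _ in range(100)]
            success_rate = sum(results) / len(results)  # fraction of episodes completed
        print(f"level {level} mastered at {success_rate:.0%}")
```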

#ai, #artificial-intelligence, #nvidia, #robotics, #science, #tech

A new formula may help Black patients’ access to kidney care

(credit: Getty Images)

For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.

On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.

The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.
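
For context, the widely used 2009 CKD-EPI creatinine equation included an explicit race multiplier; the 2021 refit endorsed by the task force removes it. A sketch of the 2009 equation’s structure (coefficients quoted from memory of the 2009 CKD-EPI paper; verify against the original, and never use this for clinical purposes):

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """2009 CKD-EPI estimated GFR (mL/min/1.73 m^2), shown to illustrate
    the race term the task force recommends removing. Coefficients are
    quoted from memory; the 2021 refit drops the `black` multiplier."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1) ** alpha
            * max(scr_mg_dl / kappa, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # the race correction at issue
    return egfr

print(egfr_ckd_epi_2009(1.0, 55, female=True, black=False))
```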

#ai, #algorithms, #bias, #dialysis, #health, #kidney-care, #medicine, #science, #transplants

Longtime VC, and happy Miami resident, David Blumberg has raised a new $225 million fund

Blumberg Capital, founded in 1991 by investor David Blumberg, has just closed its fifth early-stage venture fund with $225 million, a vehicle that Blumberg says was oversubscribed — he planned to raise $200 million — and that has already been used to invest in 16 startups around the world (the firm has small offices in San Francisco, New York, Tel Aviv, and Miami, where Blumberg moved his family last year).

We caught up with him earlier this week to talk shop and he sounded pretty ecstatic about the current market, which has evidently been good for returns, with Blumberg Capital’s biggest hits tied to Nutanix (it claims a 68x return), DoubleVerify (a 98x return at IPO in April, the firm says), Katapult (which went public via SPAC in July), Addepar (currently valued above $2 billion) and Braze (it submitted its S-1 in June).

We also talked a bit about his new life in Florida, which he was quick to note is “not a clone of Silicon Valley.” Not least, he told us why he thinks we’re in a “golden era of applying intelligence to every business,” from mining to the business of athletic performance.

More from our conversation, edited lightly for length and clarity, follows:

TC: What are you funding right now?

DB: Our last 30 to 40 deals have basically been about big data that’s been analyzed by artificial intelligence of some sort, then riding in a better wrapper of software process automation on rails of internet and mobility. Okay, that’s a lot of buzzwords.

TC: Yes.

DB: What I’m saying is that this ability to take raw information data that’s either been sitting around and not analyzed, or from new sources of data like sensors or social media or many other places, then analyze it and take it to the problem of all these businesses that have been there forever, is beginning to make incremental improvements that may sound small [but add up].

TC: What’s a very recent example?

DB: One of our [unannounced] companies applies AI to mining — lithium mining and gold and copper — so miners don’t waste their time before finding the richest vein of deposit. We partner with mining owners and we bring extra data that they don’t have access to — some is proprietary, some is public — and because we’re experts at the AI modeling of it, we can apply it to their geography and geology, and as part of the business model, we take part of the mine in return.

TC: So your fund now owns not just equity but part of a mine?

DB: This is evidently done a lot in what’s called E&P, exploration and production in the oil and gas industry, and we’re just following a time-tested model, where some of the service providers put in value and take out a share. So as we see it, it aligns our interests and the better we do for them, the better they do.

TC: This fund is around the same size as your fourth fund, which closed with $207 million in 2017. How do you think about check sizes in this market?

DB: We write checks of $1 million to $6 million generally. We could go down a little bit for something in a seed where we can’t get more of a slice, but we like to have large ownership up front. We found that to have a fund return at least three x — and our funds seem to be returning much more than that — [we need to be math-minded about things].

We have 36 companies in our portfolio typically, and 20% of them fail, 20% of them are our superstars, and 60% are kind of medium. Of those superstars, six of them have to return $100 million each in a $200 million fund to make it a $600 million return, and to get six companies to [produce a] $100 million [for us] they have to reach a billion dollars in value, where we own 10% at the end.
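
A quick check of that portfolio arithmetic as stated (an illustrative Python sketch, not from the interview; note that 20 percent of 36 is closer to seven superstars than the six he counts):

```python
fund_size = 200_000_000
portfolio = 36
superstars = round(portfolio * 0.20)   # ~7 companies; Blumberg counts 6 toward returns
per_star_return = 100_000_000          # each superstar must return $100M
exit_value = per_star_return / 0.10    # at 10% ownership, that's a $1B exit
fund_multiple = 6 * per_star_return / fund_size  # $600M back on $200M = 3x
print(superstars, exit_value, fund_multiple)     # 7, 1e9, 3.0
```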

TC: You’re buying 10% and maintaining your pro rata, or is this after being diluted over numerous rounds?

DB: It’s more like we want 15% to 20% of a company and it gets [diluted] down to 10%. And it’s been working. Some of our funds are way above that number.

TC: Are all four of your earlier funds in the black?

DB: Yes. I love to say this: We have never, ever lost money for our fund investors.

TC: You were among a handful of VCs who were cited quite a lot last year for hightailing it out of the Bay Area for Miami. One year into the move, how is it going?

DB: It is not a clone of Silicon Valley. They are different and add value each in their own way. But Florida is a great place for our family to be, and I find for our business, it’s going to be great as well. I can be on the phone to Israel and New York without any time zone-related problems. Some of our companies are moving here, including one from Israel recently, one from San Francisco, and one from Texas. A lot of our LPs are moving here or live here already. We can also go up and down to South America for distribution deals more easily.

If we need to get to California or New York, airplanes still work, too, so it hasn’t been a negative at all. I’m going to a JPMorgan event tonight for a bunch of tech founders where there should be 150 people.

TC: That sounds great, though how did you feel about summer in Miami?

DB: We were in France.

Pictured above, from left to right: Firm founder David Blumberg, managing director Yodfat Harel Buchris, COO Steve Gillan, and managing director Bruce Taragin.

#addepar, #ai, #artificial-intelligence, #blumberg-capital, #david-blumberg, #doubleverify, #israel, #miami, #nutanix, #tc, #venture-capital, #yotpo

The responsibilities of AI-first investors

Investors in AI-first technology companies serving the defense industry, such as Palantir, Primer and Anduril, are doing well. Anduril, for one, reached a valuation of over $4 billion in less than four years. Many other companies that build general-purpose, AI-first technologies — such as image labeling — receive large (undisclosed) portions of their revenue from the defense industry.

Investors in AI-first technology companies that aren’t even intended to serve the defense industry often find that these firms eventually (and sometimes inadvertently) help other powerful institutions, such as police forces, municipal agencies and media companies, prosecute their duties.

Most do a lot of good work, such as DataRobot helping agencies understand the spread of COVID, HASH running simulations of vaccine distribution or Lilt making school communications available to immigrant parents in a U.S. school district.

The first step in taking responsibility is knowing what on earth is going on. It’s easy for startup investors to shrug off the need to know what’s going on inside AI-based models.

However, there are also some less positive examples — technology made by Israeli cyber-intelligence firm NSO was used to hack 37 smartphones belonging to journalists, human-rights activists, business executives and the fiancée of murdered Saudi journalist Jamal Khashoggi, according to a report by The Washington Post and 16 media partners. The report claims the phones were on a list of over 50,000 numbers based in countries that surveil their citizens and are known to have hired the services of the Israeli firm.

Investors in these companies may now be asked challenging questions by other founders, limited partners and governments about whether the technology is too powerful, enables too much or is applied too broadly. These are questions of degree, but are sometimes not even asked upon making an investment.

I’ve had the privilege of talking to a lot of people with lots of perspectives — CEOs of big companies, founders of (currently!) small companies and politicians — since publishing “The AI-First Company” and investing in such firms for the better part of a decade. I’ve been getting one important question over and over again: How do investors ensure that the startups in which they invest responsibly apply AI?

Let’s be frank: It’s easy for startup investors to hand-wave away such an important question by saying something like, “It’s so hard to tell when we invest.” Startups are nascent forms of something to come. However, AI-first startups are working with something powerful from day one: Tools that allow leverage far beyond our physical, intellectual and temporal reach.

AI not only gives people the ability to put their hands around heavier objects (robots) or get their heads around more data (analytics), it also gives them the ability to bend their minds around time (predictions). When people can make predictions and learn as they play out, they can learn fast. When people can learn fast, they can act fast.

Like any tool, one can use these tools for good or for bad. You can use a rock to build a house or you can throw it at someone. You can use gunpowder for beautiful fireworks or firing bullets.

In much the same way, AI-based computer vision models can be used to figure out the moves of a dance group or a terrorist group. AI-powered drones can aim a camera at us while going off ski jumps, but they can also aim a gun at us.

This article covers the basics, metrics and politics of responsibly investing in AI-first companies.

The basics

Investors in and board members of AI-first companies must take at least partial responsibility for the decisions of the companies in which they invest.

Investors influence founders, whether they intend to or not. Founders constantly ask investors about what products to build, which customers to approach and which deals to execute. They do this to learn and improve their chances of winning. They also do this, in part, to keep investors engaged and informed because they may be a valuable source of capital.

#ai, #artificial-general-intelligence, #artificial-intelligence, #column, #cybernetics, #ec-column, #machine-learning, #nso, #palantir, #private-equity, #startup-company, #startups, #venture-capital

News aggregator SmartNews raises $230 million, valuing its business at $2 billion

SmartNews, a Tokyo-headquartered news aggregation website and app that’s grown in popularity despite hefty competition from built-in aggregators like Apple News, today announced it has closed on $230 million in Series F funding. The round brings SmartNews’ total raise to date to over $400 million and values the business at $2 billion — or as the company touts in its press release, a “double unicorn.” (Ha!)

The funding included new U.S. investors Princeville Capital and Woodline Partners, as well as JIC Venture Growth Investments, Green Co-Invest Investment, and Yamauchi-No.10 Family Office in Japan. Existing investors participating in this round included ACA Investments and SMBC Venture Capital.

Founded in 2012 in Japan, the company launched in the U.S. in 2014 and expanded its local news footprint early last year. While the app’s content team includes former journalists, machine learning is used to pick which articles are shown to readers to personalize their experience. However, one of the app’s key differentiators is how it works to pop users’ “filter bubbles” through its “News From All Sides” feature, which allows its users to access news from across a range of political perspectives.

It has also developed new products, like its Covid-19 vaccine dashboard and U.S. election dashboard, that provide critical information at a glance. With the additional funds, the company says it plans to develop more features for its U.S. audience — one of its largest, in addition to Japan —  that will focus on consumer health and safety. These will roll out in the next few months and will include features for tracking wildfires and crime and safety reports. It also recently launched a hurricane tracker.

The aggregator’s business model is largely focused on advertising, as the company has said before that 80-85% of Americans aren’t paying to subscribe to news. But SmartNews’ belief is that these news consumers still have a right to access quality information.

In total, SmartNews has relationships with over 3,000 global publishing partners whose content is available through its service on the web and mobile devices.

To generate revenue, the company sells inline ads and video ads, where revenue is shared with publishers. Over 75% of its publishing partners also take advantage of its “SmartView” feature. This is the app’s quick-reading mode, an alternative to something like Google AMP. Here, users can quickly load an article to read, even if they’re offline. The company promises publishers that these mobile-friendly stories, which are marked with a lightning bolt icon in the app, deliver higher engagement — and its algorithm rewards that type of content, bringing them more readers. Among SmartView partners are well-known brands like USA Today, ABC, HuffPost, and others. Currently, over 70% of all SmartNews’ pageviews come from SmartView first.

SmartNews’ app has proven to be very sticky in terms of attracting and keeping users’ attention. The company tells us, citing App Annie July 2021 data, that it sees an average time spent per user per month on U.S. mobile devices that’s higher than Google News and Apple News combined.

Image Credits: App Annie data provided by SmartNews

The company declined to share its monthly active users (MAUs), but had said in 2019 it had grown to 20 million in the U.S. and Japan. Today, it says its U.S. MAUs doubled over the last year.

According to data provided to us by Apptopia, the SmartNews app has seen around 85 million downloads since its October 2014 launch, and 14 million of those took place in the past 365 days. Japan is the largest market for installs, accounting for 59% of lifetime downloads, the firm noted.

“This latest round of funding further affirms the strength of our mission, and fuels our drive to expand our presence and launch features that specifically appeal to users and publishers in the United States,” said SmartNews co-founder and CEO Ken Suzuki. “Our investors both in the U.S. and globally acknowledge the tremendous growth potential and value of SmartNews’s efforts to democratize access to information and create an ecosystem that benefits consumers, publishers, and advertisers,” he added.

The company says the new funds will be used to invest in further U.S. growth and to expand its team. Since its last fundraise in 2019, when it became a unicorn, the company has more than doubled its headcount to approximately 500 people globally. It now plans to double its headcount of 100 in the U.S., with additions across engineering, product, and leadership roles.

The Wall Street Journal reports SmartNews is exploring an IPO, but the company declined to comment on this.

The SmartNews app is available on iOS and Android across more than 150 countries worldwide.

#aca-investments, #aggregation, #ai, #android, #apple-news, #apps, #funding, #google, #google-news, #japan, #machine-learning, #media, #mobile, #mobile-applications, #mobile-devices, #mobile-software, #new-aggregator, #news, #news-aggregation, #news-reading, #recent-funding, #smartnews, #software, #startups, #tokyo, #united-states

NVIDIA’s latest tech makes AI voices more expressive and realistic

The voices on Amazon’s Alexa, Google Assistant and other AI assistants are far ahead of old-school GPS devices, but they still lack the rhythms, intonation and other qualities that make speech sound, well, human. NVIDIA has unveiled new research and tools that can capture those natural speech qualities by letting you train the AI system with your own voice, the company announced at the Interspeech 2021 conference.

To improve its AI voice synthesis, NVIDIA’s text-to-speech research team developed a model called RAD-TTS, a winning entry at an NAB broadcast convention competition to develop the most realistic avatar. The system allows an individual to train a text-to-speech model with their own voice, including the pacing, tonality, timbre and more.

Another RAD-TTS feature is voice conversion, which lets a user deliver one speaker’s words using another person’s voice. That interface gives fine, frame-level control over a synthesized voice’s pitch, duration and energy.

Using this technology, NVIDIA’s researchers created more conversational-sounding voice narration for its own I Am AI video series using synthesized rather than human voices. The aim was to get the narration to match the tone and style of the videos, something that hasn’t been done well in many AI-narrated videos to date. The results are still a bit robotic, but better than any AI narration I’ve ever heard.

“With this interface, our video producer could record himself reading the video script, and then use the AI model to convert his speech into the female narrator’s voice. Using this baseline narration, the producer could then direct the AI like a voice actor — tweaking the synthesized speech to emphasize specific words, and modifying the pacing of the narration to better express the video’s tone,” NVIDIA wrote.

NVIDIA is distributing some of this research — optimized to run efficiently on NVIDIA GPUs, of course — to anyone who wants to try it via open source through the NVIDIA NeMo Python toolkit for GPU-accelerated conversational AI, available on the company’s NGC hub of containers and other software.

“Several of the models are trained with tens of thousands of hours of audio data on NVIDIA DGX systems. Developers can fine tune any model for their use cases, speeding up training using mixed-precision computing on NVIDIA Tensor Core GPUs,” the company wrote.
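
For a sense of what using the toolkit looks like, here is a minimal NeMo text-to-speech sketch pairing a pretrained FastPitch spectrogram generator with a HiFi-GAN vocoder. The checkpoint names and calls follow NVIDIA’s published NeMo examples as I recall them; treat them as assumptions and check the current NeMo docs, since APIs and checkpoint names change between releases:

```python
# pip install soundfile nemo_toolkit[tts]  (assumed install path; see NeMo docs)
import soundfile as sf
from nemo.collections.tts.models import FastPitchModel, HifiGanModel

# Pretrained checkpoints as named on NGC at the time of writing.
spec_generator = FastPitchModel.from_pretrained("tts_en_fastpitch")
vocoder = HifiGanModel.from_pretrained("tts_hifigan")

# Text -> tokens -> mel spectrogram -> waveform.
tokens = spec_generator.parse("I am AI, and this narration is synthesized.")
spectrogram = spec_generator.generate_spectrogram(tokens=tokens)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)

# FastPitch's default sample rate is 22.05 kHz.
sf.write("narration.wav", audio.to("cpu").detach().numpy()[0], samplerate=22050)
```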

Editor’s note: This post originally appeared on Engadget.

#ai, #artificial-intelligence, #column, #nvidia, #speech-synthesis, #tc, #tceng, #voice-assistant

Peak raises $75M for a platform that helps non-tech companies build AI applications

As artificial intelligence continues to weave its way into more enterprise applications, a startup that has built a platform to help businesses, especially non-tech organizations, build more customized AI decision making tools for themselves has picked up some significant growth funding. Peak AI, a startup out of Manchester, England, that has built a “decision intelligence” platform, has raised $75 million, money that it will be using to continue building out its platform as well as to expand into new markets, and hire some 200 new people in the coming quarters.

The Series C is bringing a very big name investor on board. It is being led by SoftBank Vision Fund 2, with previous backers Oxx, MMC Ventures, Praetura Ventures, and Arete also participating. That group participated in Peak’s Series B of $21 million, which only closed in February of this year. The company has now raised $118 million; it is not disclosing its valuation.

(This latest funding round was rumored last week, although it was not confirmed at the time and the total amount was not accurate.)

Richard Potter, Peak’s CEO, said the rapid follow-on in funding was based on inbound interest, in part because of how the company has been doing.

Peak’s so-called Decision Intelligence platform is used by retailers, brands, manufacturers and others to help monitor stock levels, build personalized customer experiences, as well as other processes that can stand to have some degree of automation to work more efficiently, but also require sophistication to be able to measure different factors against each other to provide more intelligent insights. Its current customer list includes the likes of Nike, Pepsico, KFC, Molson Coors, Marshalls, Asos, and Speedy, and in the last 12 months revenues have more than doubled.

The opportunity that Peak is addressing goes a little like this: AI has become a cornerstone of many of the most advanced IT applications and business processes of our time, but if you are an organization — and specifically one not built around technology — your access to AI and how you might use it will come by way of applications built by others, not necessarily tailored to you, and the costs of building more tailored solutions can often be prohibitively high. Peak claims that those using its tools have seen revenues on average rise 5%; return on ad spend double; supply chain costs reduce by 5%; and inventory holdings (a big cost for companies) reduce by 12%.

Peak’s platform, I should point out, is not exactly a “no-code” approach to solving that problem — not yet at least: it’s aimed at data scientists and engineers at those organizations so that they can easily identify different processes in their operations where they might benefit from AI tools, and to build those out with relatively little heavy lifting.

Different market factors have also played a role. Covid-19, for example, and the boost it has given both to “digital transformation” in businesses and to making e-commerce processes more efficient to cater to rising consumer demand and more strained supply chains, have led to businesses being more open to, and keen to invest in, tools to improve their automation intelligently.

This, combined with Peak AI’s growing revenues, is part of what interested SoftBank. The investor has been long on AI for a while, but it has been building out a section of its investment portfolio to provide strategic services to the kinds of businesses that it invests in. Those include e-commerce and other consumer-facing businesses, which make up one of the main segments of Peak’s customer base.

“In Peak we have a partner with a shared vision that the future enterprise will run on a centralized AI software platform capable of optimizing entire value chains,” Max Ohrstrand, senior investor for SoftBank Investment Advisers, said in a statement. “To realize this a new breed of platform is needed and we’re hugely impressed with what Richard and the excellent team have built at Peak. We’re delighted to be supporting them on their way to becoming the category-defining, global leader in Decision Intelligence.”

Longer term, it will be interesting to see how and if Peak evolves to extend its platform to a wider set of users at the organizations that are already its customers.

Potter said he believes that “those with technical predispositions” will be the most likely users of its products in the near and medium term. You might assume that would cut out, for example, marketing managers, although the general trend in a lot of software has been precisely to build versions of the tools used by data scientists for less technical people, so that they can engage in the process of building what it is that they want to use. “I do think it’s important to democratize the ability to stream data pipelines, and to be able to optimize those to work in applications,” he added.

#ai, #articles, #artificial-intelligence, #automation, #business-process-management, #ceo, #e-commerce, #enterprise, #europe, #funding, #kfc, #manchester, #mmc-ventures, #nike, #partner, #peak, #peak-ai, #pepsico, #science-and-technology, #series-b, #softbank-group, #softbank-vision-fund, #software-platform, #tc, #united-kingdom, #vodafone

Otter.ai expands automatic transcription assistant to Microsoft Teams, Google Meet and Cisco Webex

AI-powered voice transcription service Otter.ai is expanding its Otter Assistant feature for Microsoft Teams, Google Meet, and Cisco Webex. Otter.ai first released this feature for Zoom users earlier this year in May. With this new integration, Otter Assistant can now join and transcribe meetings on more platforms, even if the Otter user is not attending the meeting.

The Otter Assistant automatically joins calendared meetings, records them, takes notes, and shares transcriptions with meeting participants. If a user decides to skip a meeting altogether, they can catch up on the discussion through the recorded notes afterwards. The tool can also help in instances where you have overlapping meetings, or larger meetings where only a portion is relevant to you.

To use the new tool, users need to synchronize their calendars with the service. The assistant will then automatically join all future meetings, where it appears in the meeting as a separate participant, for transparency’s sake.

“With more companies adapting to a hybrid work model where professionals work and take meetings in-office, at home, and on mobile, many are looking to Otter as a tool to improve team communication and collaboration,” said Otter.ai co-founder and CEO Sam Liang in a statement. “We’re excited to make using Otter even easier and more accessible no matter where or how people conduct and participate in meetings.”

The new integration will be handy for those who attend meetings across several platforms, as the tool can keep all of your meeting notes in one place. The Otter Assistant is available to Otter.ai Business users. The business tier starts at $20 per month and includes features like two-factor authentication, advanced search, audio imports, custom vocabulary, shared speaker identification and more.

#ai, #otter-ai, #tc, #transcription

Kapacity.io is using AI to drive energy and emissions savings for real estate

Y Combinator-backed Kapacity.io is on a mission to accelerate the decarbonization of buildings by using AI-generated efficiency savings to encourage electrification of commercial real estate — wooing buildings away from reliance on fossil fuels to power their heating and cooling needs.

It does this by providing incentives to building owners/occupiers to shift to clean energy usage through a machine learning-powered software automation layer.

The startup’s cloud software integrates with buildings’ HVAC systems and electricity meters — drawing on local energy consumption data to calculate and deploy real-time adjustments to heating/cooling systems which not only yield energy and CO2 emissions savings but generate actual revenue for building owners/tenants — paying them to reduce consumption, such as at times of peak energy demand on the grid.

“We are controlling electricity consumption in buildings, focusing on heating and cooling devices — using AI machine learning to optimize and find the best ways to consume electricity,” explains CEO and co-founder Jaakko Rauhala, a former consultant in energy technology. “The actual method is known as ‘demand response’. Basically that is a way for electricity consumer to get paid for adjusting their energy consumption, based on a utility company’s demand.

“For example if there is a lot of wind power production and suddenly the wind drops or the weather changes and the utility company is running power grids they need to balance that reduction — and the way to do that is either you can fire up natural gas turbine or you can reduce power consumption… Our product estimates how much can we reduce electricity consumption at any given minute. We are [targeting] heating and cooling devices because they consume a lot of electricity.”

“The way we see this is this is a way we can help our customers electrify their building stocks faster because it makes their investments more lucrative and in addition we can then help them use more renewable electricity because we can shift the use from fossil fuels to other areas. And in that we hope to help push for a more greener power grid,” he adds.

Kapacity’s approach is applicable in deregulated energy markets, where third parties are able to play a role offering energy-saving services and where fluctuations in energy demand are managed by an auction process involving the trading of surplus energy — typically overseen by a transmission system operator — to ensure energy producers have the right power balance to meet customer needs.

Demand for energy can fluctuate regardless of the type of energy production feeding the grid, but renewable energy sources tend to increase the volatility of energy markets because their production is less predictable than legacy generation (like nuclear, or burning fossil fuels) — wind power, for example, depends on when and how strongly the wind is blowing (which both varies and isn’t perfectly predictable). So as economies around the world dial up efforts to tackle climate change and hit critical carbon emissions reduction targets, there’s growing pressure to shift away from fossil fuel-based power generation toward cleaner, renewable alternatives. And the real estate sector specifically remains a major generator of CO2, so it is squarely in the frame for ‘greening’.

Simultaneously, decarbonization and the green shift look likely to drive demand for smart solutions to help energy grids manage increasing complexity and volatility in the energy supply mix.

“Basically more wind power — and solar, to some extent — correlates with demand for balancing power grids and this is why there is a lot of talk usually about electricity storage when it comes to renewables,” says Rauhala. “Demand response, in the way that we do it, is an alternative for electricity storage units. Basically we’re saying that we already have a lot of electricity consuming devices — and we will have more and more with electrification. We need to adjust their consumption before we invest billions of dollars into other systems.”

“We will need a lot of electricity storage units — but we try to push the overall system efficiency to the maximum by utilising what we already have in the grid,” he adds.

There are of course limits to how much ‘adjustment’ (read: switching off) can be done to a heating or cooling system by even the cleverest AI without building occupants becoming uncomfortable.

But Kapacity’s premise is that small adjustments — say, turning off the boilers/coolers for five, 15 or 30 minutes — can go essentially unnoticed by building occupants if done right, allowing the startup to tout a range of efficiency services to its customers, such as a peak-shaving offering which automatically reduces energy usage to avoid peaks in consumption and generates significant energy cost savings.

“Our goal — which is a very ambitious goal — is that the customers and occupants in the buildings wouldn’t notice the adjustments. And that they would fall into the normal range of temperature fluctuations in a building,” says Rauhala.
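
In control terms, the peak-shaving idea is simple: when metered load approaches a peak threshold, briefly pause the heaviest HVAC loads, but only within a comfort window. Below is a minimal, hypothetical sketch of that logic; the device names, thresholds and 15-minute comfort window are illustrative assumptions, not Kapacity’s actual system.

```python
# A hypothetical peak-shaving controller -- an illustrative sketch only.
# Device names, thresholds and the comfort window are assumptions,
# not Kapacity's implementation.
from dataclasses import dataclass
from typing import List

MAX_PAUSE_MINUTES = 15  # keep pauses short so occupants don't notice

@dataclass
class HvacUnit:
    name: str
    draw_kw: float        # metered electricity draw right now
    paused_minutes: int   # how long this unit has already been held off

def peak_shave(units: List[HvacUnit], total_load_kw: float,
               peak_limit_kw: float) -> List[str]:
    """Pick units to pause this control cycle to stay under the peak."""
    to_pause = []
    excess = total_load_kw - peak_limit_kw
    # Shed the biggest consumers first, never beyond the comfort window.
    for unit in sorted(units, key=lambda u: u.draw_kw, reverse=True):
        if excess <= 0:
            break
        if unit.paused_minutes < MAX_PAUSE_MINUTES:
            to_pause.append(unit.name)
            excess -= unit.draw_kw
    return to_pause

# Example: a 480 kW building load against a 450 kW peak target.
units = [HvacUnit("roof_hp_1", 40.0, 0),
         HvacUnit("roof_hp_2", 35.0, 10),
         HvacUnit("ahu_3", 12.0, 0)]
print(peak_shave(units, total_load_kw=480.0, peak_limit_kw=450.0))  # ['roof_hp_1']
```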

Kapacity’s algorithms are designed to understand how to make dynamic adjustments to buildings’ heating/cooling without compromising “thermal comfort”, as Rauhala puts it — noting that co-founder (and COO) Sonja Salo has both a PhD in demand response and researched thermal comfort during a stint as a visiting researcher at UC Berkeley — making the area a specialist focus for the engineer-led founding team.

At the same time, the carrots it’s dangling in front of the commercial real estate sector to sign up for a little algorithmic HVAC tweaking look substantial: Kapacity says its system has been able to achieve a 25% reduction in electricity costs and a 10% reduction in CO2 emissions in early pilots, although those tests have been limited to its home market, Finland, for now.

Its other co-founder, Rami El Geneidy, researched smart algorithms for demand response involving heat pumps for his PhD dissertation — and heat pumps are another key focus for the team’s tech, per Rauhala.

Heat pumps are a low-carbon technology that’s fairly commonly used in the Nordics for heating buildings, but whose use is starting to spread as countries around the world look for greener ways to keep buildings warm.

In the UK, for example, the government announced a plan last year to install hundreds of thousands of heat pumps per year by 2028 as it seeks to move the country away from widespread use of gas boilers to heat homes. And Rauhala names the UK as one of the startup’s early target markets — along with the European Union and the US where they also envisage plenty of demand for their services.

While the initial focus is the commercial real estate sector, he says they are also interested in residential buildings — noting that from a “tech core point of view we can do any type of building”.

“We have been focusing on larger buildings — multi-family buildings, larger office buildings, certain type of industrial or commercial buildings so we don’t do single family detached homes at the moment,” he goes on, adding: “We have been looking at that and it’s an interesting avenue but our current pilots are in larger buildings.”

The Finnish startup was only founded last year — taking in a pre-seed round of funding from Nordic Makers prior to getting backing from YC — where it will be presenting at the accelerator’s demo day next week. (But Rauhala won’t comment on any additional fundraising plans at this stage.)

He says it’s spun up five pilot projects over the last seven months involving commercial landlords, utilities, real estate developers and engineering companies (all in Finland for now), although — again — full customer details are not yet being disclosed. But Rauhala tells us they expect to move to their first full commercial deals with pilot customers this year.

“The reason why our customers are interested in using our products is that this is a way to make electrification cheaper because they are being paid for adjusting their consumption and that makes their operating cost lower and it makes investments more lucrative if — for example — you need to switch from natural gas boilers to heat pumps so that you can decarbonize your building,” he also tells us. “If you connect the new heat pump running on electricity — if you connect that to our service we can reduce the operating cost and that will make it more lucrative for everybody to electrify their buildings and run their systems.

“We can also then make their electricity consumed more sustainable because we are shifting consumption away from hours with most CO2 emissions on the grid. So we try to avoid the hours when there’s a lot of fossil fuel-based production in the grid and try to divert that into times when we have more renewable electricity.

“So basically the big question we are asking is how do we increase the use of renewables and the way to achieve that is asking when should we consume? Well we should consume electricity when we have more renewable in the grid. And that is the emission reduction method that we are applying here.”

In terms of limitations, Kapacity’s software-focused approach can’t work in every type of building — requiring that real estate customers have some ability to gather energy consumption (and potentially temperature) data from their buildings remotely, such as via IoT devices.

“The typical data that we need is basic information on the heating system — is it running at 100% or 50% or what’s the situation? That gets us pretty far,” says Rauhala. “Then we would like to know indoor temperatures. But that is not mandatory in the sense that we can still do some basic adjustments without that.”

It also of course can’t offer much in the way of savings to buildings that are running 100% on natural gas (or oil) — i.e. with electricity only used for lighting (turning lights off when people are inside buildings obviously wouldn’t fly); there must be some kind of air conditioning, cooling or heat pump systems already installed (or the use of electric hot water boilers).

“An old building that runs on oil or natural gas — that’s a target for decarbonization,” he continues. “That’s a target where you could consider installing heat pumps and that is where we could help some of our customers or potential customers to say ok we need to estimate how much would it cost to install a heat pump system here and that’s where our product can come in and we can say you can reduce the operating cost with demand response. So maybe we should do something together here.”

Rauhala also confirms that Kapacity’s approach does not require invasive levels of building occupant surveillance, telling TechCrunch: “We don’t collect information that is under GDPR [General Data Protection Regulation], I’ll put it that way. We don’t take personal data for this demand response.”

So any guesstimates its algorithms are making about building occupants’ tolerance for temperature changes are, therefore, not going to be based on specific individuals — but may, presumably, factor in aggregated information related to specific industry/commercial profiles.

The Helsinki-based startup is not the only one looking at applying AI to drive energy cost and emissions savings in the commercial buildings sector — another we spoke to recently is Düsseldorf-based Dabbel, for example. And plenty more are likely to take an interest in the space as governments start to pump more money into accelerating decarbonization.

Asked about competitive differentiation, Rauhala points to a focus on real-time adjustments and heat pump technologies.

“One of our key things is we’re developing a system so that we can do close to real time control — very very short term control. That is a valuable service to the power grid so we can then quickly adjust,” he says. “And the other one is we are focusing on heat pump technologies to get started — heat pumps here in the Nordics are a very common and extremely good way to decarbonize and understanding how we can combine these to demand response with new heat pumps that is where we see a lot of advantages to our approach.”

“Heat pumps are a bit more technically complex than your basic natural gas boiler so there are certain things that have to be taken into account and that is where we have been focusing our efforts,” he goes on, adding: “We see heat pumps as an excellent way to decarbonize the global building stock and we want to be there and help make that happen.”

Per capita, the Nordics have the most heat pump installations, according to Rauhala — including a lot of ground source heat pump installations, which can replace fossil fuel consumption entirely.

“You can run your building with a ground source heat pump system entirely — you don’t need any supporting systems for it. And that is the area where we here in Europe are more far ahead than in the US,” he says on that.

“The UK government is pushing for a lot of heat pump installations and there are incentives in place for people to replace their existing natural gas systems or whatever they have. So that is very interesting from our point of view. The UK also there is a lot of wind power coming online and there have been days when the UK has been running 100% with renewable electricity which is great. So that actually is a really good thing for us. But then in the longer term in the US — Seattle, for example, has banned the use of fossil fuels in new buildings so I’m very confident that the market in the US will open up more and quickly. There’s a lot of opportunities in that space as well.

“And of course from a cooling perspective air conditioning in general in the US is very widespread — especially in commercial buildings so that is already an existing opportunity for us.”

“My estimate on how valuable electricity use for heating and cooling is it’s tens of billions of dollars annually in the US and EU,” he adds. “There’s a lot of electricity being used already for this and we expect the market to grow significantly.”

On the business model front, the startup’s cloud software looks set to follow a SaaS model but the plan is also to take a commission of the savings and/or generated income from customers. “We also have the option to provide the service with a fixed fee, which might be easier for some customers, but we expect the majority to be under a commission,” adds Rauhala.

Looking ahead, were the sought-for global shift away from fossil fuels to be wildly successful — and all commercial buildings’ gas/oil boilers replaced with 100% renewable power systems in short order — there would still be a role for Kapacity’s control software to play, generating energy cost savings for its customers, even though our (current) parallel pressing need to shrink carbon emissions would evaporate in this theoretical future.

“We’d be very happy,” says Rauhala. “The way we see emission reductions with demand response now is it’s based on the fact that we do still have fossil fuels power system — so if we were to have a 100% renewable power system then the electricity does nothing to reduce emissions from the electricity consumption because it’s all renewable. So, ironically, in the future we see this as a way to push for a renewable energy system and makes that transition happen even faster. But if we have a 100% renewable system then there’s nothing [in terms of CO2 emissions] we can reduce but that is a great goal to achieve.”

#ai, #decarbonization, #energy-savings, #hvac-control-automation, #kapacity-io, #machine-learning, #nordic-makers, #tc, #y-combinator

Now that machines can learn, can they unlearn?

Now that machines can learn, can they unlearn?

Enlarge (credit: Andriy Onufriyenko | Getty Images)

Companies of all kinds use machine learning to analyze people’s desires, dislikes, or faces. Some researchers are now asking a different question: How can we make machines forget?

A nascent area of computer science dubbed machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. The goal is to remove all trace of a particular person or data point from a machine learning system, without affecting its performance.

If made practical, the concept could give people more control over their data and the value derived from it. Although users can already ask some companies to delete personal data, they are generally in the dark about what algorithms their information helped tune or train. Machine unlearning could make it possible for a person to withdraw both their data and a company’s ability to profit from it.
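
To make “selective amnesia” concrete, here is a toy sketch of one approach from the research literature, sometimes called exact unlearning via sharding: each model in an ensemble trains on only one shard of the data, so forgetting a record means retraining just the shard that held it. Everything here (the class, the logistic-regression stand-in, the majority vote) is an illustrative assumption, not a description of any production system.

```python
# A toy sketch of "exact" unlearning via sharding, in the spirit of
# approaches explored in the research literature (e.g. SISA-style training).
# Each model sees only one shard of the data, so forgetting a record only
# requires retraining that record's shard.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedEnsemble:
    def __init__(self, n_shards: int = 4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]  # lists of (id, x, y)
        self.models = [None] * n_shards

    def add(self, record_id: int, x, y):
        # Deterministic assignment: a record always lands in the same shard.
        self.shards[record_id % self.n_shards].append((record_id, x, y))

    def _fit_shard(self, s: int):
        data = self.shards[s]
        X = np.array([x for _, x, _ in data])
        labels = np.array([y for _, _, y in data])
        self.models[s] = LogisticRegression().fit(X, labels)

    def fit(self):
        for s in range(self.n_shards):
            self._fit_shard(s)

    def forget(self, record_id: int):
        # Drop the record, then retrain only the shard that held it --
        # far cheaper than retraining on the full dataset.
        s = record_id % self.n_shards
        self.shards[s] = [r for r in self.shards[s] if r[0] != record_id]
        self._fit_shard(s)

    def predict(self, x):
        votes = [int(m.predict([x])[0]) for m in self.models]
        return max(set(votes), key=votes.count)  # simple majority vote
```

The trade-off is that each member sees less data, so ensembles like this typically pay some accuracy for their fast forgetting.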

Read 13 remaining paragraphs | Comments

#ai, #algorithms, #bias, #biz-it, #privacy, #science

Cardiomatics bags $3.2M for its ECG-reading AI

Poland-based healthtech AI startup Cardiomatics has announced a $3.2M seed raise to expand use of its electrocardiogram (ECG) reading automation technology.

The round is led by Central and Eastern European VC Kaya, with Nina Capital, Nova Capital and Innovation Nest also participating.

The seed raise also includes a $1M non-equity grant from the Polish National Centre of Research and Development.

The 2017-founded startup sells a cloud tool that speeds up diagnosis and drives efficiency for cardiologists, clinicians and other healthcare professionals interpreting ECGs — automating the detection and analysis of some 20 heart abnormalities and disorders, with the software generating reports on scans in minutes, faster than a trained human specialist could work.

Cardiomatics touts its tech as helping to democratize access to healthcare — saying the tool enables cardiologists to optimise their workflow so they can see and treat more patients. It also says it allows GPs and smaller practices to offer ECG analysis to patients without needing to refer them to specialist hospitals.

The AI tool has analyzed more than 3 million hours of ECG signals commercially to date, per the startup, which says its software is being used by more than 700 customers in 10+ countries, including Switzerland, Denmark, Germany and Poland.

The software is able to integrate with more than 25 ECG monitoring devices at this stage, and the startup touts its modern cloud software interface as a differentiator vs legacy medical software.

Asked how the accuracy of its AI’s ECG readings has been validated, the startup told us: “The data set that we use to develop algorithms contains more than 10 billion heartbeats from approximately 100,000 patients and is systematically growing. The majority of the data-sets we have built ourselves, the rest are publicly available databases.

“Ninety percent of the data is used as a training set, and 10% for algorithm validation and testing. In line with the data-centric AI approach, we attach great importance to the test sets to be sure that they contain the best possible representation of signals from our clients. We check the accuracy of the algorithms in experimental work during the continuous development of both algorithms and data with a frequency of once a month. Our clients check it every day in clinical practice.”
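
For context on what a sound validation split looks like in this setting: because many heartbeats come from each patient, splits are usually made at the patient level, so that one person’s beats cannot leak into both the training and test sets. A hypothetical sketch, with random arrays standing in for real ECG features and labels:

```python
# A hypothetical sketch of a patient-level train/test split for an ECG
# classifier; the arrays are random stand-ins for real beat features.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))          # stand-in per-beat feature vectors
y = rng.integers(0, 2, size=10_000)        # stand-in abnormality labels
patient_ids = rng.integers(0, 1_000, size=10_000)

# Hold out 10% of *patients*, not 10% of beats, to avoid leakage.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print(f"{len(train_idx)} training beats, {len(test_idx)} held-out beats")
```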

Cardiomatics said it will use the seed funding to invest in product development, expand its business activities in existing markets and gear up to launch into new markets.

“Proceeds from the round will be used to support fast-paced expansion plans across Europe, including scaling up our market-leading AI technology and ensuring physicians have the best experience. We prepare the product to launch into new markets too. Our future plans include obtaining FDA certification and entering the US market,” it added.

The AI tool received European medical device certification in 2018 — although it’s worth noting that the European Union’s regulatory regime for medical devices and AI is continuing to evolve, with an update to the bloc’s Medical Devices Directive (now known as the EU Medical Device Regulation) coming into application earlier this year (May).

A new risk-based framework for applications of AI — aka the Artificial Intelligence Act — is also incoming and will likely expand compliance demands on AI healthtech tools like Cardiomatics, introducing requirements such as demonstrating safety, reliability and a lack of bias in automated results.

Asked about the regulatory landscape it said: “When we launched in 2018 we were one of the first AI-based solutions approved as medical device in Europe. To stay in front of the pace we carefully observe the situation in Europe and the process of legislating a risk-based framework for regulating applications of AI. We also monitor draft regulations and requirements that may be introduced soon. In case of introducing new standards and requirements for artificial intelligence, we will immediately undertake their implementation in the company’s and product operations, as well as extending the documentation and algorithms validation with the necessary evidence for the reliability and safety of our product.”

However it also conceded that objectively measuring efficacy of ECG reading algorithms is a challenge.

“An objective assessment of the effectiveness of algorithms can be very challenging,” it told TechCrunch. “Most often it is performed on a narrow set of data from a specific group of patients, registered with only one device. We receive signals from various groups of patients, coming from different recorders. We are working on a method of assessing the effectiveness of our algorithms which would allow us to reliably evaluate their performance regardless of various factors accompanying the study, including the recording device or the social group on which it would be tested.”

“When analysis is performed by a physician, ECG interpretation is a function of experience, rules and art. When a human interprets an ECG, they see a curve. It works on a visual layer. An algorithm sees a stream of numbers instead of a picture, so the task becomes a mathematical problem. But, ultimately, you cannot build effective algorithms without knowledge of the domain,” it added. “This knowledge and the experience of our medical team are a piece of art in Cardiomatics. We shouldn’t forget that algorithms are also trained on the data generated by cardiologists. There is a strong correlation between the experience of medical professionals and machine learning.”

#ai, #artificial-intelligence, #cardiomatics, #ecg, #europe, #fundings-exits, #health, #healthtech, #kaya, #startups, #tc

Samsung has its own AI-designed chip. Soon, others will too

Samsung has its own AI-designed chip. Soon, others will too

Enlarge (credit: Getty Images)

Samsung is using artificial intelligence to automate the insanely complex and subtle process of designing cutting-edge computer chips.

The South Korean giant is one of the first chipmakers to use AI to create its chips. Samsung is using AI features in new software from Synopsys, a leading chip design software firm used by many companies. “What you’re seeing here is the first of a real commercial processor design with AI,” says Aart de Geus, the chairman and co-CEO of Synopsys.

Others, including Google and Nvidia, have talked about designing chips with AI. But Synopsys’ tool, called DSO.ai, may prove the most far-reaching because Synopsys works with dozens of companies. The tool has the potential to accelerate semiconductor development and unlock novel chip designs, according to industry watchers.

Read 17 remaining paragraphs | Comments

#ai, #android, #biz-it, #chip-design, #computers, #cpu, #ics, #laptops, #samsung, #smartphones, #tech

Robotic AI firm Covariant raises another $80 million

In May of last year, Covariant announced that it had raised a $40 million Series B. It was a healthy sum of money for the young company, bringing its total funding up to $67 million. Just a little over a year later, the Berkeley-based AI startup is adding another $80 million to its coffers, riding on a wave that dramatically accelerated interest in robotics and AI during the pandemic.

“Companies across multiple industries had already been looking to realize significant gains with AI robotics and with COVID-19, market demands then increased by an order of magnitude,” president, chief scientist and co-founder Pieter Abbeel tells TechCrunch. “Combining this with our last year of successes, our investors are keen to double down. We’ll use the funding to significantly accelerate our global expansion and grow our current lead in a competitive industry.”

The Series C was led by existing investor Index Ventures and features Amplify Partners, Radical Ventures, CPPIB and Temasek. It brings the firm’s total funding up to $147 million for what it calls universal AI for robotic manipulation. “Universal” is really the key word for the Covariant Brain, and the company has already proven how versatile its tech can be in the two years since it came out of stealth.

The company currently employs just under 80 people. Part of the funding will go toward increasing its headcount “substantially.” Today’s news also includes the addition of some high-profile team members, including Raghavendra Prabhu as head of Engineering and Research, Ally Lynch as head of Marketing and Sam Cauthen as head of People.

Image Credits: Covariant

Covariant has deployed its technology in a number of markets in North America, Europe and Asia, across a broad range of different sectors requiring pick and place, from grocery to fashion to pharmaceuticals.

“As of today, the Covariant Brain is powering a wide range of industrial robots to manage order picking, putwall, sorter induction — all for companies in various industries with drastically different types of products to manipulate,” CEO Peter Chen said in a release. “The breadth of use demonstrates the Covariant Brain can help robots of different types to manipulate new objects they’ve never seen before in environments where they’ve never operated.”

Existing customers include Obeta, Knapp, ABB and Bastian.

“Forward-looking customers value our platform approach since it allows them to future-proof their long-term modernization strategy,” Abbeel says. “The Covariant Brain has unlimited learning potential to act on multiple applications across the warehouse. Our current deployments are just the tip of the iceberg on everything that AI Robotics can do for the supply chain and beyond.”

#ai, #artificial-intelligence, #covariant, #funding, #index-ventures, #pieter-abbeel, #recent-funding, #robotics, #startups

Sean Gallagher and an AI expert break down our crazy machine-learning adventure

Sean Gallagher and an AI expert break down our crazy machine-learning adventure

Enlarge

We’ve spent the past few weeks burning copious amounts of AWS compute time trying to invent an algorithm to parse Ars’ front-page story headlines to predict which ones will win an A/B test—and we learned a lot. One of the lessons is that we—and by “we,” I mainly mean “me,” since this odyssey was more or less my idea—should probably have picked a less, shall we say, ambitious project for our initial outing into the machine-learning wilderness. Now, a little older and a little wiser, it’s time to reflect on the project and discuss what went right, what went somewhat less than right, and how we’d do this differently next time.

Our readers had tons of incredibly useful comments, too, especially as we got into the meaty part of the project—comments that we’d love to get into as we discuss the way things shook out. The vagaries of the edit cycle meant that the stories were being posted quite a bit after they were written, so we didn’t have a chance to incorporate a lot of reader feedback as we went, but it’s pretty clear that Ars has some top-shelf AI/ML experts reading our stories (and probably groaning out loud every time we went down a bit of a blind alley). This is a great opportunity for you to jump into the conversation and help us understand how we can improve for next time—or, even better, to help us pick smarter projects if we do an experiment like this again!

Our chat kicks off on Wednesday, July 28, at 1:00 pm Eastern Time (that’s 10:00 am Pacific Time and 17:00 UTC). Our three-person panel will consist of Ars Infosec Editor Emeritus Sean Gallagher and me, along with Amazon Senior Principal Technical Evangelist (and AWS expert) Julien Simon. If you’d like to register so that you can ask questions, use this link here; if you just want to watch, the discussion will be streamed on the Ars Twitter account and archived as an embedded video on this story’s page. Register and join in or check back here after the event to watch!

Read on Ars Technica | Comments

#ai, #ai-ml, #amazon, #artificial-intelligence, #aws, #biz-it, #headlines, #livechat, #machine-learning, #ml, #natural-language-processing, #nlp

Researchers demonstrate that malware can be hidden inside AI models

This photo has a job application for Boston University hidden within it. The technique introduced by Wang, Liu, and Cui could hide data inside an image classifier rather than just an image.

Enlarge / This photo has a job application for Boston University hidden within it. The technique introduced by Wang, Liu, and Cui could hide data inside an image classifier rather than just an image. (credit: Keith McDuffy CC-BY 2.0)

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools—in this case, by hiding it inside a neural network.

The three embedded 36.9MiB of malware into a 178MiB AlexNet model without significantly altering the function of the model itself. The malware-embedded model classified images with near-identical accuracy, within 1% of the malware-free model. (This is possible because the number of layers and total neurons in a convolutional neural network is fixed prior to training—which means that, much like in human brains, many of the neurons in a trained model end up being either largely or entirely dormant.)

Just as importantly, squirreling the malware away into the model broke it up in ways that prevented detection by standard antivirus engines. VirusTotal, a service that “inspects items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a myriad of tools to extract signals from the studied content,” did not raise any suspicions about the malware-embedded model.
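
To see why such an embedding is possible at all, here is a minimal sketch of the general weight-steganography idea: stuffing payload bytes into the least-significant mantissa byte of float32 weights, which perturbs each weight by a relative error below roughly 0.003%. This illustrates the broad technique rather than the paper’s exact embedding scheme.

```python
# A minimal sketch of hiding bytes in neural network weights -- the
# general idea only, not the paper's exact embedding scheme. Overwriting
# the least-significant mantissa byte of a float32 changes its value by
# a relative error below ~0.003%, typically too small to dent accuracy.
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    flat = np.ascontiguousarray(weights, dtype=np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload larger than the number of weights")
    raw = flat.view(np.uint8).reshape(-1, 4)  # 4 bytes per float32
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # low byte (little-endian)
    return raw.reshape(-1).view(np.float32).reshape(weights.shape)

def extract(weights: np.ndarray, length: int) -> bytes:
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
    return raw.reshape(-1, 4)[:length, 0].tobytes()

w = np.random.randn(1000).astype(np.float32)
secret = b"not actually malware"
stego = embed(w, secret)
assert extract(stego, len(secret)) == secret
print("max absolute weight perturbation:", np.abs(stego - w).max())
```

It also shows why the VirusTotal result above isn’t surprising: sliced one byte per weight across millions of parameters, the payload no longer resembles any recognizable signature until it is deliberately re-extracted.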

Read 4 remaining paragraphs | Comments

#ai, #deep-learning, #machine-learning, #malware, #neural-networks, #steganography, #tech

Shares of protein discovery platform Absci pop in market debut

Absci Corp., a Vancouver, Washington-based company behind a multi-faceted drug development platform, went public on Thursday. It’s another sign of snowballing interest in new approaches to drug development – a traditionally risky business.

Absci focuses on speeding drug development in the preclinical stages. The company has developed and acquired a handful of tools that can predict drug candidates, identify potential therapeutic targets, and test therapeutic proteins on billions of cells and identify which ones are worth pursuing. 

“We are offering a fully-integrated end-to-end solution for pharmaceutical drug development,” Absci founder Sean McClain tells TechCrunch. “Think of this as the Google index search for protein drug discovery and biomanufacturing.” 

The IPO was initially priced at $16 per share, with a pre-money valuation of about $1.5 billion, per S-1 filings. The company is offering 12.5 million shares of common stock, with plans to raise $200 million. However, Absci stock has already ballooned to $21 per share as of writing. Common stock is trading under the ticker “ABSI.” 

The company has elected to go public now, McClain says, to increase the company’s ability to attract and retain new talent. “As we continue to rapidly grow and scale, we need access to the best talent, and the IPO gives us amazing visibility for talent acquisition and retention,” says McClain.

Absci was founded in 2011 with a focus on manufacturing proteins in E. coli. By 2018, the company had launched its first commercial product, SoluPro – a bioengineered E. coli system that can build complex proteins. In 2019, the company scaled this process up by implementing a “protein printing” platform.

Since its founding Absci has grown to 170 employees and raised $230 million – the most recent influx was a $125 million crossover financing round closed in June 2020 led by Casdin Capital and Redmile Group. But this year, two major acquisitions have rounded out Absci’s offerings from protein manufacturing and testing to AI-enabled drug development. 

In January 2021, Absci acquired Denovium, a company using deep learning AI to categorize and predict the behavior of proteins. Denovium’s “engine” had been trained on more than 100 million proteins. In June, the company also acquired Totient, a biotech company that analyzes the immune system’s response to certain diseases. At the time of Totient’s acquisition, the company had already reconstructed 4,500 antibodies gleaned from immune system data from 50,000 patients. 

Absci already had protein manufacturing, evaluation and screening capabilities, but the Totient acquisition allowed it to identify potential targets for new drugs. The Denovium acquisition added an AI-based engine to aid in protein discovery. 

“What we’re doing is now feeding [our own data] into deep learning models and so that is why we acquired Denovium. Prior to Totient we were doing drug discovery and cell line development. This [acquisition] allows us to go fully integrated where we can now do target discovery as well,” McClain says. 

These two acquisitions place Absci into a particularly active niche in the drug development world. 

To start with, there’s been some noteworthy financial interest in developing new approaches to drug development, even after decades of low returns on drug R&D. In the first half of 2021, Evaluate reported that new drug developers raised about $9 billion in IPOs on Western exchanges. This is despite the fact that drug development is traditionally high risk. R&D returns for biopharmaceuticals hit a record low of 1.6 percent in 2019 and have rebounded to only about 2.5 percent, a Deloitte 2021 report notes.

Within the world of drug development, we’ve seen AI play an increasingly large role. That same Deloitte report notes that “most biopharma companies are attempting to integrate AI into drug discovery, and development processes.” And drug discovery projects received the greatest amount of AI investment dollars in 2020, according to Stanford University’s Artificial Intelligence Index annual report.

More recently, the outlook on the use of AI in drug development has been bolstered by companies that have moved a candidate through the stages of pre-clinical development. 

In June, Insilico Medicine, a Hong Kong-based startup, announced that it had brought an AI-identified drug candidate for idiopathic pulmonary fibrosis through the preclinical testing stages – a feat that helped close a $255 million Series C round. Founder Alexander Zhavoronkov told TechCrunch the IPF drug candidate would begin clinical trials late this year or early next year.

With a hand in AI and in protein manufacturing, Absci has already positioned itself in a crowded, but hype-filled space. But going forward, the company will still have to work out the details of its business model.  

Absci is pursuing a partnership business model with drug manufacturers. This means that the company doesn’t have plans to run clinical trials of its own. Rather, it expects to earn revenue through “milestone payments” (conditional upon reaching certain stages of the drug development process) or, if drugs are approved, royalties on sales. 

This does offer some advantages, says McClain. The company is able to sidestep the risk of drug candidates failing after millions in R&D cash have been poured into testing, and it can invest in developing “hundreds” of drug candidates at once.

At this point, Absci does have nine currently “active programs” with drugmakers. The company’s cell line manufacturing platforms are in use in drug testing programs at eight biopharma companies, including Merck, Astellas and Alpha Cancer Technologies (the rest are undisclosed). Five of these projects are in the preclinical stage, one is in Phase 1 clinical trials, one is in a Phase 3 clinical trial, and the last is focused on animal health, per the company’s S-1 filing.

One company, Astellas, is currently using Absci’s discovery platforms. But McClain notes that Absci has only just rolled out its drug discovery capabilities this year. 

However, none of these partners have formally licensed any of Absci’s platforms for clinical or commercial use. McClain notes that the nine active programs have milestones and royalty “potentials” associated with them. 

The company does have some ground to make up when it comes to profitability. So far this year, Absci has generated about $4.8 million in total revenue – up from about $2.1 million in 2019. Still, costs have remained high, and S-1 filings note that the company has incurred net losses in the past two years: $6.6 million in 2019 and $14.4 million in 2020.

The company’s S-1 chalks up these losses to expenditures related to cost of research and development, establishing an intellectual property portfolio, hiring personnel, raising capital and providing support for these activities. 

Absci recently completed construction of a 77,000-square-foot facility, notes McClain, so going forward the company foresees the potential to increase the scale of its operations.

In the immediate future, the company plans to use money raised from the IPO to grow the number of programs using Absci’s technology, invest in R&D and continue to refine the company’s new AI-based products. 

 

#ai, #artificial-intelligence, #biotech, #drug-development, #drug-discovery, #tc, #therapeutics

Ars AI headline experiment finale—we came, we saw, we used a lot of compute time

Ars AI headline experiment finale—we came, we saw, we used a lot of compute time

Enlarge (credit: Aurich Lawson | Getty Images)

We may have bitten off more than we could chew, folks.

An Amazon engineer told me that when he heard what I was trying to do with Ars headlines, the first thing he thought was that we had chosen a deceptively hard problem. He warned that I needed to be careful about properly setting my expectations. If this was a real business problem… well, the best thing he could do was suggest reframing the problem from “good or bad headline” to something less concrete.

That statement was the most family-friendly and concise way of framing the outcome of my four-week, part-time crash course in machine learning. As of this moment, my PyTorch kernels aren’t so much torches as they are dumpster fires. The accuracy has improved slightly, thanks to professional intervention, but I am nowhere near deploying a working solution. Today, as I am allegedly on vacation visiting my parents for the first time in over a year, I sat on a couch in their living room working on this project and accidentally launched a model training job locally on the Dell laptop I brought—with a 2.4 GHz Intel Core i3 7100U CPU—instead of in the SageMaker copy of the same Jupyter notebook. The Dell locked up so hard I had to pull the battery out to reboot it.

Read 27 remaining paragraphs | Comments

#ai, #ai-ml, #artificial-intelligence, #aws, #biz-it, #features, #is-our-machine-learning, #machine-learning, #ml, #natural-language-processing, #nlp, #sagemaker

Google turns AlphaFold loose on the entire human genome

Image of a diagram of ribbons and coils.

Enlarge (credit: Sloan-Kettering)

Just one week after Google’s DeepMind AI group finally described its biology efforts in detail, the company is releasing a paper that explains how it analyzed nearly every protein encoded in the human genome and predicted its likely three-dimensional structure—a structure that can be critical for understanding disease and designing treatments. In the very near future, all of these structures will be released under a Creative Commons license via the European Bioinformatics Institute, which already hosts a major database of protein structures.

In a press conference associated with the paper’s release, DeepMind’s Demis Hassabis made clear that the company isn’t stopping there. In addition to the work described in the paper, the company will release structural predictions for the genomes of 20 major research organisms, from yeast to fruit flies to mice. In total, the database launch will include roughly 350,000 protein structures.
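
For researchers, grabbing one of those predictions should amount to a single HTTP request once the database is live. A minimal sketch follows; the URL pattern and version suffix track the AlphaFold database’s published file layout but may differ between releases, so treat them as assumptions. Each file is a standard PDB structure that any molecular viewer can open.

```python
# A minimal sketch of fetching one predicted structure from the public
# AlphaFold database at EBI. The URL pattern and version suffix are
# assumptions based on the database's published file layout and may
# differ between releases.
import requests

uniprot_id = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v1.pdb"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
with open(f"{uniprot_id}.pdb", "w") as f:
    f.write(resp.text)
print(f"Saved predicted structure for {uniprot_id} ({len(resp.text)} bytes)")
```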

What’s in a structure?

We just described DeepMind’s software last week, so we won’t go into much detail here. The effort is an AI-based system trained on the structure of existing proteins that had been determined (often laboriously) through laboratory experiments. The system uses that training, plus information it obtains from families of proteins related by evolution, to predict how a protein’s chain of amino acids folds up in three-dimensional space.

Read 14 remaining paragraphs | Comments

#ai, #biochemistry, #biology, #computer-science, #protein-folding, #science

How we built an AI unicorn in 6 years

Today, Tractable is worth $1 billion. Our AI is used by millions of people across the world to recover faster from road accidents, and it also helps recycle as many cars as Tesla puts on the road.

And yet six years ago, Tractable was just me and Raz (Razvan Ranca, CTO), two college grads coding in a basement. Here’s how we did it, and what we learned along the way.

Build upon a fresh technological breakthrough

In 2013, I was fortunate to get into artificial intelligence (more specifically, deep learning) six months before it blew up internationally. It started when I took a course on Coursera called “Machine learning with neural networks” by Geoffrey Hinton. It was like being lovestruck. Back then, to me, AI was science fiction, like “The Terminator.”

But an article in the tech press said the academic field was amid a resurgence. As a result of 100x larger training data sets and 100x higher compute power becoming available by reprogramming GPUs (graphics cards), a huge leap in predictive performance had been attained in image classification a year earlier. This meant computers were starting to be able to understand what’s in an image — like humans do.

The next step was getting this technology into the real world. While at university — Imperial College London — I teamed up with much more skilled people, and we built a plant recognition app with deep learning. We walked our professor through Hyde Park, watching him take photos of flowers with the app and laughing with joy as the AI recognized the right plant species. This had previously been impossible.

I started spending every spare moment on image classification with deep learning. Still, no one was talking about it in the news — even Imperial’s computer vision lab wasn’t yet on it! I felt like I was in on a revolutionary secret.

Looking back, narrowly focusing on a branch of applied science undergoing a breakthrough paradigm shift that hadn’t yet reached the business world changed everything.

Search for complementary co-founders who will become your best friends

I’d previously been rejected from Entrepreneur First (EF), one of the world’s best incubators, for not knowing anything about tech. Having changed that, I applied again.

The last interview was a hackathon, where I met Raz. He was doing machine learning research at Cambridge, had topped EF’s technical test, and had published papers on reconstructing shredded documents and on poker bots that could detect bluffs. His bare-bones webpage read: “I seek data-driven solutions to currently intractable problems.” Now that had a ring to it (and it’s where we’d get the name Tractable).

That hackathon, we coded all night. The morning after, he and I knew something special was happening between us. We moved in together and would spend years side by side, 24/7, from waking up to Pantera in the morning to coding marathons at night.

But we also wouldn’t have got where we are without Adrien (Cohen, president), who joined as our third co-founder right after our seed round. Adrien had previously co-founded Lazada, an online supermarket in South East Asia like Amazon and Alibaba, which sold to Alibaba for $1.5 billion. Adrien would teach us how to build a business, inspire trust and hire world-class talent.

Find potential customers early so you can work out market fit

Tractable started at EF with a head start — a paying customer. Our first use case was … plastic pipe welds.

It was as glamorous as it sounds. Pipes that carry water and natural gas to your home are made of plastic. They’re connected by welds (melt the two plastic ends, connect them, let them cool down and solidify again as one). Image classification AI could visually check people’s weld setups to ensure good quality. Most of all, it was real-world value for breakthrough AI.
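
For a sense of what such a visual checker involves, here is a hypothetical sketch of a binary image classifier of the kind that could flag bad weld setups. It is purely illustrative (Tractable’s actual models and training data are not public), using a pretrained backbone fine-tuned on two classes.

```python
# A hypothetical sketch of the kind of binary image classifier that could
# flag bad weld setups -- purely illustrative; Tractable's actual models
# and training data are not public.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: good weld, bad weld

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (N, 3, 224, 224) weld photos."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real photos.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))))
```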

And yet in the end, they — our only paying customer — stopped working with us, just as we were raising our first round of funding. That was rough. Luckily, the number of pipe weld inspections was too small a market to interest investors, so we explored other use cases — utilities, geology, dermatology and medical imaging.

#ai, #artificial-intelligence, #column, #cybernetics, #ec-column, #ec-enterprise-applications, #ec-fintech, #ec-how-to, #enterprise, #insurance, #insurtech, #machine-learning, #startups