No single question bedevils American energy and environmental policy more than nuclear waste. No, not even a changing climate, which may be a wicked problem but nonetheless receives a great deal of counter-bedeviling attention.
It’s difficult to tell this story with a straight face. Let’s start with its three main elements.
First, nuclear power plants in the United States generate about 2,000 metric tons of nuclear waste (or “spent fuel”) per year. Due to its inherent radioactivity, it is carefully stored at various sites around the country.
Second, the federal government is in charge of figuring out what to do with it. In fact, power plant operators have paid over $40 billion into the Nuclear Waste Fund so that the government can handle it. The idea was to bury it in the “deep geological repository” embodied by Yucca Mountain, Nevada, but this has proved politically impossible. Nevertheless, $15 billion was spent on the scoping.
Third, due to the Energy Department’s inability to manage this waste, it simply accumulates. According to that agency’s most recent data release, some 80,000 metric tons of spent fuel—hundreds of thousands of fuel assemblies containing millions of fuel rods—is waiting for a final destination.
And here’s the twist ending: those nuclear plant operators sued the government for breach of contract and, in 2013, they won. Several hundred million dollars is now paid out to them each year by the U.S. Treasury, as part of a series of settlements and judgments. The running total is over $8 billion.
I realize this story sounds a little crazy. Am I really saying that the U.S. government collected billions of dollars to manage nuclear waste, then spent billions of dollars on a feasibility study only to stick it on the shelf, and now is paying even more billions of dollars for this failure? Yes, I am.
Fortunately, all of the aggregated waste occupies a relatively small area and temporary storage exists. Without an urgent reason to act, policymakers generally will not.
While attempts to find long-term storage will continue, policymakers should look towards recycling some of this “waste” into usable fuel. This is actually an old idea. Only a small fraction of nuclear fuel is consumed to generate electricity.
Proponents of recycling envision reactors that use “reprocessed” spent fuel, extracting energy from the 90% of it left over after burn-up. Even its critics admit that the underlying chemistry, physics, and engineering of recycling are technically feasible, and instead assail its disputed economics and perceived security risks.
So-called Generation IV reactors come in all shapes and sizes. The designs have been around for years—in some respects, all the way back to the dawn of nuclear energy—but light-water reactors have dominated the field for a variety of political, economic, and strategic reasons. For example, Southern Company’s twin conventional pressurized water reactors under construction in Georgia each boast a capacity of just over 1,000 megawatts (1 gigawatt), standard for Westinghouse’s AP1000 design.
In contrast, next-generation plant designs are a fraction of the size and capacity, and also may use different cooling systems: Oregon-based NuScale Power’s 77-megawatt small modular reactor, San Diego-based General Atomics’ 50-megawatt helium-cooled fast modular reactor, Alameda-based Kairos Power’s 140-megawatt molten fluoride salt reactor, and so on all have different configurations that can fit different business and policy objectives.
Many Gen-IV designs can either explicitly recycle used fuel or be configured to do so. On June 3, TerraPower (backed by Bill Gates), GE Hitachi, and the State of Wyoming announced an agreement to build a demonstration of the 345-megawatt Natrium design, a sodium-cooled fast reactor.
The Natrium design is technically capable of running on recycled fuel. California-based Oklo has already reached an agreement with Idaho National Laboratory to run its 1.5-megawatt “microreactor” on used-fuel supplies. In fact, the self-professed “preferred fuel” for New York-based Elysium Industries’ molten salt reactor design is spent nuclear fuel, and Alabama-based Flibe Energy advertises the waste-burning capability of its thorium reactor design.
Whether advanced reactors rise or fall does not depend on resolving the nuclear waste deadlock. Though such reactors may be able to consume spent fuel, they don’t necessarily have to. Nonetheless, incentivizing waste recycling would improve their economics.
“Incentivize” here is code for “pay.” Policymakers should consider ways that Washington can make it more profitable for a power plant to recycle fuel than to import it—from Canada, Kazakhstan, Australia, Russia, and other countries.
Political support for advanced nuclear technology, including recycling, is deeper than might be expected. In 2019, the Senate confirmed Dr. Rita Baranwal as the Assistant Secretary for Nuclear Energy at the Department of Energy (DOE). A materials scientist by training, she emerged as a champion of recycling.
The new Biden administration has continued broadly bipartisan support for advanced nuclear reactors, proposing in its Fiscal Year 2022 Budget Request to increase funding for the DOE’s Office of Nuclear Energy by nearly $350 million. The proposal includes specific funding increases for researching and developing reactor concepts (plus $32 million), fuel cycle R&D (plus $59 million), and advanced reactor demonstration (plus $120 million), and more than triples funding for the Versatile Test Reactor (from $45 million to $145 million, year over year).
In May, the DOE’s Advanced Research Projects Agency-Energy (ARPA-E) announced a new $40 million program to support research in “optimizing” waste and disposal from advanced reactors, including through waste recycling. Importantly, the announcement explicitly states that the lack of a solution to nuclear waste today “poses a challenge” to the future of Gen-IV reactors.
The debate is a reminder that recycling in general is a very messy process. It is chemical-, machine-, and energy-intensive. Recycling of all kinds, from critical minerals to plastic bottles, produces new waste, too. Today, federal and state governments are quite active in recycling these other waste streams, and they should be equally involved in nuclear waste.
On Earth Day, April 22, SOSV published the SOSV Climate Tech 100, a list of the best startups that we’ve supported from their earliest stages to address climate change. There are always valuable insights embedded in a list like the 100. A TechCrunch story captured the investment perspective, and an SOSV post went deeper into the companies’ category breakdown and founder profiles.
But what can founders learn from the list about climate tech investors? In other words, who invested in the Climate Tech 100? We dug into the “who’s who” of the list, which had more than 500 investors, and here’s what we found.
If you think 500 investors in 100 companies is a lot, you’re right. Clearly, a great many investors are interested in climate tech, and most are generalists just testing the waters. For the Climate Tech 100, about 10% of investors put their money in more than one startup, and only seven (less than 2%) wrote a check to four or more. These included Blue Horizon, CPT Capital, EF, Fifty Years, Hemisphere Ventures and Horizons Ventures.
That pattern tracks well with data from PwC’s State of Climate Tech 2020 report, which found that 2,700 unique investors backed 1,200 startups over the 2013-2019 period. Of those 2,700 firms, only 10 made four or more climate tech deals per year, on average. The most active firms are listed in the table below.
There is reason to believe that the fragmentation will diminish with the launch of more funds focused on climate tech. Four funds that fit that description, each worth more than a billion dollars, have launched since 2020 (see chart below).
It’s also encouraging to see that capital deployed in climate tech grew at five times the venture capital overall growth rate over the 2013-2019 period.
Even so, climate tech still only represented 6% of total venture capital deployed in 2019, so there is plenty of room to grow.
Internet of Things devices are proliferating, making daily tasks more convenient for many people—but that comes at a cost. The United Nations expects the amount of e-waste created globally to reach 52.2 million metric tons this year, and a sizable portion of that is dead batteries.
Dracula Technologies, a French startup currently exhibiting virtually at Computex, wants to help with its inkjet-printed organic photovoltaic (OPV, or organic solar cell) technology. Called LAYER (Light As Your Energetic Response), Dracula Technologies’ OPV modules run indoors on natural or artificial ambient light and can be used to power low-consumption indoor devices. Because they are printed rather than made of silicon, the modules’ shape is more customizable and, unlike many batteries, they contain no rare earths or heavy metals; instead, they are made from carbon-based material.
In addition to being better for the environment, LAYER is also more economical—the company claims it can cut total cost of ownership to a quarter of that of batteries.
Dracula Technologies is currently working with manufacturers, including a partnership with Japanese semiconductor company Renesas Electronics and AND Technology Research (ANDtr) to create a self-powering, battery-less IoT device that can send messages via Bluetooth Low Energy (BLE) to a mobile app.
Dracula Technologies was founded in 2011, after a project in collaboration with the CEA (Commissariat à l’énergie atomique et aux énergies alternatives, or the French Alternative Energies and Atomic Energy Commission), a public research organization. Chief executive officer Brice Cruchon saw the tech’s commercial potential, and after six years of research and development, LAYER was launched through the Hello Tomorrow program for deep tech startups.
So far, Dracula Technologies has raised a total of 4.4 million euros (about $5.4 million USD), including a 2 million euro round in 2016 from angel investors for a pilot line and 2.4 million euros raised last year from MGI Digital and ISRA Cards, which Dracula Technologies is using to increase production of its photovoltaic modules during its pre-industrialization stage. The company plans to move to its industrial phase in 2024, with the goal of producing millions of modules per year.
MGI Digital, a digital printing and finishing tech company, and ISRA Cards, which makes high-value electronic cards (like licenses or gift and loyalty cards), are Dracula Technologies’ industrial partners. It is also part of the Solar Impulse Foundation’s #1000 Solutions, a guide to green energy solutions that can be implemented on a large scale.
The grassroots Democratic organization Indivisible is launching its own team of stealth fact-checkers to push back against misinformation — an experiment in what it might look like to train up a political messaging infantry and send them out into the information trenches.
Called the “Truth Brigade,” the corps of volunteers will learn best practices for countering popular misleading narratives on the right. They’ll coordinate with the organization on a biweekly basis to unleash a wave of progressive messaging that aims to drown out political misinformation and boost Biden’s legislative agenda in the process.
Considering the scope of the misinformation that remains even after social media’s big January 6 cleanup, the project will certainly have its work cut out for it.
“This is an effort to empower volunteers to step into a gap that is being created by very irresponsible behavior by the social media platforms,” Indivisible co-founder and co-executive director Leah Greenberg told TechCrunch. “It is absolutely frustrating that we’re in this position of trying to combat something that they ultimately have a responsibility to address.”
Greenberg co-founded Indivisible with her husband following the 2016 election. The organization grew out of the viral success the pair had when they and two other former House staffers published a handbook to Congressional activism. The guide took off in the flurry of “resist”-era activism on the left calling on Americans to push back on Trump and his agenda.
Indivisible’s Truth Brigade project blossomed out of a pilot program in Colorado spearheaded by Jody Rein, a senior organizer concerned about what she was seeing in her state. Since that pilot began last fall, the program has grown into 2,500 volunteers across 45 states.
The messaging will largely center on Biden’s ambitious legislative packages: the American Rescue Plan, the voting rights bill HR1 and the forthcoming infrastructure package. Rather than debunking political misinformation about those bills directly, the volunteer team will push back with personalized messages promoting the legislation and dispelling false claims within their existing social spheres on Facebook and Twitter.
The coordinated networks at Indivisible will cross-promote those pieces of semi-organic content using tactics parallel to what many disinformation campaigns do to send their own content soaring. (In the case of groups that make overt efforts to conceal their origins, Facebook calls this “coordinated inauthentic behavior.”) Since the posts are part of a volunteer push and not targeted advertising, they won’t be labeled, though some might contain hashtags that connect them back to the Truth Brigade campaign.
Volunteers are trained to serve up progressive narratives in a “truth sandwich” that’s careful to not amplify the misinformation it’s meant to push back against. For Indivisible, training volunteers to avoid giving political misinformation even more oxygen is a big part of the effort.
“What we know is that actually spreads disinformation and does the work of some of these bad actors for them,” Greenberg said. “We are trying to get folks to respond not by engaging in that fight — that’s really doing their work for them — but by trying to advance the kind of narrative that we actually want people to buy into.”
She cites the social media outrage cycle perpetuated by Georgia Rep. Marjorie Taylor Greene as a harbinger of what Democrats will again be up against in 2022. Greene is best known for endorsing QAnon, getting yanked off her Congressional committee assignments and comparing mask requirements to the Holocaust — comments that inspired some Republicans to call for her ouster from the party.
Political figures like Greene regularly rile up the online left with outlandish claims and easily debunked conspiracies. Greenberg believes they suck up a lot of energy that could be better spent spreading progressive political messages rather than rage-retweeting.
“It’s not enough to just fact check [and] it’s not enough to just respond, because then fundamentally we’re operating from a defensive place,” Greenberg said.
“We want to be proactively spreading positive messages that people can really believe in and grab onto and that will inoculate them from some of this.”
For Indivisible, the project is a long-term experiment that could pave the way for a new kind of online grassroots political campaign beyond targeted advertising — one that hopes to boost the signal in a sea of noise.
Advanced driver assistance systems (ADAS) hold immense promise. At times, the headlines about the autonomous vehicle (AV) industry seem ominous, with a focus on accidents, regulation or company valuations that some find undeserved. None of this is unreasonable, but it obscures the amazing possibilities of a world of AVs.
One of the universally accepted upsides of AVs is the potential positive impact on the environment, as most AVs will also be electric vehicles (EVs).
Industry analyst reports project that by 2023, 7.3 million vehicles (7% of the total market) will have autonomous driving capabilities, requiring $1.5 billion of autonomous-driving-dedicated processors. This is expected to grow to $14 billion in 2030, when upward of 50% of all vehicles sold will be classified as SAE Level 3 or higher, per the SAE levels of driving automation referenced by the National Highway Traffic Safety Administration (NHTSA).
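Those projections imply a rough per-vehicle figure. A back-of-envelope sketch, using only the numbers cited above:

```python
# Back-of-envelope implications of the analyst projections cited above.
vehicles_2023 = 7.3e6            # vehicles with autonomous capability by 2023
processor_market_2023 = 1.5e9    # dollars of dedicated autonomous-driving processors
processor_market_2030 = 14e9     # projected processor market in 2030

# Dedicated compute hardware spend per equipped vehicle in 2023:
per_vehicle_2023 = processor_market_2023 / vehicles_2023   # ~$205

# Implied growth of the processor market from 2023 to 2030:
growth = processor_market_2030 / processor_market_2023     # ~9.3x

print(f"~${per_vehicle_2023:.0f} of dedicated processors per vehicle in 2023")
print(f"~{growth:.1f}x processor-market growth, 2023 to 2030")
```

In other words, the projections amount to roughly $205 of dedicated compute per equipped vehicle in 2023 and a better-than-ninefold market expansion by 2030.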
Because photonic chips are faster and more energy efficient, fewer chips will be needed to reach SAE Level 3; moreover, this increased compute performance can be expected to accelerate the development and availability of fully autonomous SAE Level 5 vehicles. In that case, the market for autonomous-driving photonic processors will likely far surpass the $14 billion projection for 2030.
When you consider all of the broad-based potential uses of autonomous electric vehicles (AEVs) — including taxis and service vehicles in major cities, or the clean transport of goods on our highways — we begin to see how this technology can rapidly begin to significantly impact our environment: by helping to bring clean air to some of the most populated and polluted cities.
The trouble is that AEVs currently have a sustainability problem.
To operate efficiently and safely, AEVs must leverage a dizzying array of sensors: cameras, lidar, radar and ultrasonic sensors, to name just a few. These work together, gathering data to detect, react and predict in real time, essentially becoming the “eyes” for the vehicle.
While there’s some debate surrounding the specific number of sensors required to ensure effective and safe autonomous driving, one thing is unanimously agreed upon: These cars will create massive amounts of data.
Reacting to the data generated by these sensors, even in a simplistic way, requires tremendous computational power — not to mention the battery power required to operate the sensors themselves. Processing and analyzing the data involves deep learning algorithms, a branch of AI notorious for its outsized carbon footprint.
To be a viable alternative, both in energy efficiency and economics, AEVs need to get close to matching gas-powered vehicles in range. However, the more sensors and algorithms an AEV has running over the course of a journey, the lower the battery range — and the driving range — of the vehicle.
Today, EVs are barely capable of reaching 300 miles before they need to be recharged, while a traditional combustion engine averages 412 miles on a single tank of gas, according to the U.S. Department of Energy. Adding autonomous driving into the mix widens this gap even further and potentially accelerates battery degradation.
Recent work published in the journal Nature Energy claims that the range of an automated electric vehicle is reduced by 10%-15% during city driving.
At the 2019 Tesla Autonomy Day event, it was revealed that driving range could be reduced by up to 25% when Tesla’s driver-assist system is enabled during city driving. That takes the typical EV range from 300 miles down to 225 — below a perceived threshold of attractiveness for consumers.
A first-principle analysis takes this a step further. NVIDIA’s AI compute solution for robotaxis, DRIVE, has a power consumption of 800 watts, while a Tesla Model 3 has an energy consumption rate of about 11.9 kWh/100 km. At the typical city speed limit of 50 km/hour (about 30 mph), the Model 3 is consuming approximately 6 kW — meaning power solely dedicated to AI compute is consuming approximately 13% of total battery power intended for driving.
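The arithmetic behind that first-principle analysis can be checked directly. A short sketch, using the figures given above:

```python
# Check of the compute-power overhead estimate in the paragraph above.
ai_compute_w = 800                  # NVIDIA DRIVE robotaxi compute, watts
consumption_kwh_per_100km = 11.9    # Tesla Model 3 energy consumption rate
speed_kmh = 50                      # typical city speed limit (~30 mph)

# Power drawn for driving at a constant city speed:
# (kWh per 100 km) * (km per hour) / 100 = kW
driving_power_kw = consumption_kwh_per_100km * speed_kmh / 100  # ~5.95 kW

# AI compute as a fraction of the power spent on driving:
overhead = (ai_compute_w / 1000) / driving_power_kw

print(f"driving power: {driving_power_kw:.2f} kW")
print(f"AI compute overhead: {overhead:.0%}")   # ~13%
```

The 800 watts of dedicated compute really does amount to roughly an eighth of the power the car spends on driving itself.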
This illustrates how the power-hungry compute engines used for automated EVs pose a significant problem for battery life, vehicle range and consumer adoption.
This problem is further compounded by the power overhead of cooling the current generation of power-hungry chips used for advanced AI algorithms. When processing heavy AI workloads, these chip architectures generate massive amounts of heat; their temperature rises and, as a consequence, performance declines. More effort and energy is then spent on heat sinks, fans and other cooling methods to dissipate the heat, further draining the battery and ultimately reducing EV range. As the AV industry continues to evolve, new solutions to this compute-heat problem are urgently needed.
For decades, we have relied on Moore’s law, and its lesser-known cousin Dennard scaling, to deliver more compute power per footprint repeatedly year after year. Today, it’s well known that electronic computers are no longer significantly improving in performance per watt, resulting in overheating data centers all over the world.
The largest gains to be had in computing are at the chip architecture level, specifically in custom chips, each for specific applications. However, architectural breakthroughs are a one-off trick — they can only be made at singular points in time in computing history.
Currently, the compute power required to train artificial intelligence algorithms and perform inference with the resulting models is growing exponentially — five times faster than the rate of progress under Moore’s law. One consequence of that is a huge gap between the amount of computing needed to deliver on the massive economic promise of autonomous vehicles and the current state of computing.
Autonomous EVs find themselves in a tug of war between maintaining battery range and the real-time compute power required to deliver autonomy.
Fundamental innovation in computing and battery technology may be required to fully deliver on the promise of AEVs with the range, safety and performance demanded by consumers. While quantum computers are an unlikely short- or even medium-term solution to this AEV conundrum, there’s another, more available solution making a breakthrough right now: photonic computing.
Photonic computers use laser light, instead of electrical signals, to compute and transport data. This results in a dramatic reduction in power consumption and an improvement in critical, performance-related processor parameters, including clock speed and latency.
Photonic computers also enable inputs from a multitude of sensors to run inference tasks concurrently on a single processor core (each input encoded in a unique color), while a traditional processor can only accommodate one job at a time.
The advantage that hybrid photonic semiconductors have over conventional architectures lies within the special properties of light itself. Each data input is encoded in a different wavelength, i.e., color, while each runs on the same neural network model. This means that photonic processors not only produce more throughput compared to their electronic counterparts, but are significantly more energy efficient.
Photonic computers excel in applications that require extreme throughput with low latency and relatively low power consumption — applications like cloud computing and, potentially, autonomous driving, where the real-time processing of vast amounts of data is required.
Photonic computing technology is on the brink of becoming commercially available and has the potential to supercharge the current roadmap of autonomous driving while also reducing its carbon footprint. It’s clear that interest in the benefits of self-driving vehicles is increasing and consumer demand is imminent.
So it is crucial for us not only to consider the industries this technology will transform and the safety it can bring to our roads, but also to ensure the sustainability of its impact on our planet. In other words, it’s time to shine a little light on autonomous EVs.
In the wake of the news that UK-based AI startup Faculty has raised $42.5 million in a growth funding round, I teased out more from CEO and co-founder Marc Warner on what his plans are for the company.
Faculty seems to have an uncanny knack for winning UK government contracts, after working on the Vote Leave campaign fronted by Boris Johnson, who went on to become Prime Minister. It’s even helping sort out the mess that Brexit has subsequently made of the fishing industry, tackling problems in the NHS, and telling global corporates like Red Bull and Virgin Media what to suggest to their customers. Meanwhile, it continues to hoover up Ph.D. graduates at a rate of knots to work on its AI platform.
But, speaking to me over a call, Warner said the company has no plans to enter the political sphere again: “Never again. It’s very controversial. I don’t want to make out that I think politics is unethical. Trying to make the world better, in whatever dimension you can, is a good thing … But from our perspective, it was, you know, ‘noisy,’ and our goal as an organization, despite current appearances to the contrary, is not to spend tonnes of time talking about this stuff. We do believe this is an important technology that should be out there and should be in a broader set of hands than just the tech giants, who are already very good at it.”
On the investment, he said: “Fundamentally, the money is about doubling down on the UK first and then international expansion. Over the last seven years or so we have learned what it takes to do important AI, impactful AI, at scale. And we just don’t think that there’s actually much of it out there. Customers are rightly sometimes a bit skeptical, as there’s been hype around this stuff for years and years. We figured out a bunch of the real-world applications that go into making this work so that it actually delivers the value. And so, ultimately, the money is really just about being able to build out all of the pieces to do that incredibly well for our customers.”
He said Faculty would be staying firmly HQ’d in the UK to take advantage of the UK’s talent pool: “The UK is a wonderful place to do AI. It’s got brilliant universities, a very dynamic startup scene. It’s actually more diverse than San Francisco. There’s government, there’s finance, there are corporates, there’s less competition from the tech giants. There’s a bit more of a heterogeneous ecosystem. There’s no sense in which we’re thinking, ‘Right, that’s it, we’re up and out!’. We love working here, we want to make things better. We’ve put an enormous amount of effort into trying to help organizations like the government and the NHS, but also a bunch of UK corporates in trying to embrace this technology, so that’s still going to be a terrifically important part of our business.”
That said, Faculty plans to expand abroad: “We’re going to start looking further afield as well, and take all of the lessons we’ve learned to the US, and then later Europe.”
But does he think this funding round will help it get ahead of other potential rivals in the space? “We tend not to think too much in terms of rivals,” he says. “The next 20 years are going to be about building intelligence into the software that already exists. If you look at the global market cap of the software businesses out there, that’s enormous. If you start adding intelligence to that, the scale of the market is so large that it’s much more important to us that we can take this incredibly important technology and deploy it safely in ways that actually improve people’s lives. It could be making products cheaper or helping organizations make their services more efficient.”
If that’s the case, does Faculty have any kind of ethics panel overseeing its work? “We have an internal ethics panel. We have a set of principles, and if we think a project might violate those principles, it gets referred to that ethics panel, which is randomly selected from across Faculty. So we’re quite careful about the projects that we work on and don’t. But to be honest, the vast majority of stuff that’s going on is very vanilla. They are just clearly ‘good for the world’ projects. The vast majority of our work is doing good work for corporate clients to help them make their businesses that bit more efficient.”
I pressed him to expand on this issue of ethics and the potential for bias. He says Faculty “builds safety in from the start. Oddly enough, the reason I first got interested in AI was reading Nick Bostrom’s work about superintelligence and the importance of AI safety. And so from the very, very first fellowship [Faculty AI researchers are called Fellows] all the way back in 2014, we’ve taught the fellows about AI safety. Over time, as soon as we were able, we started contributing to the research field. So, we’ve published papers at all of the biggest computer science conferences (NeurIPS, ICML, ICLR) on the topic of AI safety: how to make algorithms fair, private, robust and explainable. So these are a set of problems that we care a great deal about. And, I think, are generally ‘underdone’ in the wider ecosystem. Ultimately, there shouldn’t be a separation between performance and safety. There is a bit of a tendency in other companies to say, ‘Well, you can either have performance, or you can have safety.’ But of course, we know that’s not true. The cars today are faster and safer than the Model T Ford. So it’s a sort of a false dichotomy. We’ve invested a bunch of effort in both those capabilities, so we obviously want to be able to create a wonderful performance for the task at hand, but also to ensure that the algorithms are fair, private, robust and explainable wherever required.”
That also means, he says, that AI might not always be the ‘bogeyman’ the phrase implies: “In some cases, it’s probably not a huge deal if you’re deciding whether to put a red jumper or a blue jumper at the top of your website. There are probably not huge ethical implications in that. But in other circumstances, of course, it’s critically important that the algorithms are safe and are known to be safe and are trusted by both the users and anyone else who encounters them. In a medical context, obviously, they need to be trusted by the doctors and the patients need to make sure they actually work. So we’re really at the forefront of deploying that stuff.”
Last year the Guardian reported that Faculty had won seven government contracts in 18 months. To what does he attribute this success? “Well, I mean, we lost an enormous number more! We are a tiny supplier to government. We do our best to do work that is valuable to them. We’ve worked for many, many years with people at the Home Office,” he tells me.
“Without wanting to go into too much detail, that 18 months stretches over multiple Prime Ministers. I was appointed to the AI Council under Theresa May. Any sort of insinuations on this are just obviously nonsense. But, at least historically, most of our work was in the private sector and that continues to be critically important for us as an organization. Over the last year, we’ve tried to step up and do our bit wherever we could for the public sector. It’s facing such a big, difficult situation around COVID, and we’re very proud of the things we’ve managed to accomplish with the NHS and the impact that we had on the decisions that senior people were able to undertake.”
Returning to the issue of politics, I asked him whether – in the wake of events such as Brexit and the election of Donald Trump, which were both affected by AI-driven political campaigning – AI is too dangerous to be applied to that arena. He laughed: “It’s a funny old question… a really odd way to phrase it. AI is just a technology. Fundamentally, AI is just maths.”
I asked him if he thought the application of AI in politics had had an outsized or undue influence on the way political parties have operated in the last few years: “I’m afraid that is beyond my knowledge,” he says. But does Faculty have regrets about working in the political sphere?
“I think we’re just focused on our work. It’s not that we have strong feelings, either way, it’s just that from our perspective, it’s much, much more interesting to be able to do the things that we care about, which is deploying AI in the real world. It’s a bit of a boring answer! But it is truly how we feel. It’s much more about doing the things we think are important, rather than judging what everyone else is doing.”
Lastly, we touched on the data science capabilities of the UK and what the new fund-raising will allow the company to do.
He said: “We started an education program. We have roughly 10% of the UK’s PhDs in physics, maths, engineering, applying to the program. Roughly 400 or so people have been through that program and we plan to expand that further so that more and more people get the opportunity to start a career in data science. And then inside Faculty specifically, we think we’ll be able to create 400 new jobs in areas like software engineering, data science, product management. These are very exciting new possibilities for people to really become part of the technology revolution. I think there’s going to be a wonderful like new energy in Faculty, and hopefully a positive small part in increasing the UK tech ecosystem.”
Warner comes across as sincere in his thoughts about the future of AI and is clearly enthusiastic about where Faculty can take the whole field next, both philosophically and practically. Will Faculty soon be challenging that other AI leviathan, DeepMind, for access to all those Ph.D.s? There’s no doubt it will.
BuffaloGrid, a startup that provides phone charging and digital content to people in off-grid environments, is teaming up with the refugee non-profit Techfugees to provide free educational content and device charging to displaced people across East Africa and the Middle East.
The initial service will see solar-powered ‘BuffaloGrid Hubs’ deployed in refugee camps across Kenya and Uganda, providing unlimited free access to education and health content, as well as other streaming services and mobile power charging.
The “Knowledge is Freedom” joint campaign has a goal of raising $3 million over the course of the next two years.
Daniel Becerra, CEO of BuffaloGrid, said: “Our mission is to remove barriers for internet adoption and provide the next billion with information, energy, and digital skills. I hope this campaign will raise awareness of the plight of displaced people and how collectively we have the power to change things. The entire team is excited to work with Techfugees. I believe together we have the technical expertise, experience, and connections to make a real difference.”
Raj Burman, Techfugees CEO, said: “In an increasingly digital and climate change stricken world, our mission is to make sure forcibly displaced people don’t get left behind. Around 400,000 marginalized refugees reside in the Rwamwanja and Kakuma-Kalobeyei settlement camps in Uganda and Kenya respectively. Our collaboration with BuffaloGrid presents a unique opportunity for an innovative, responsible digital solution to empower displaced communities with the support of our Chapters in Kenya and Uganda to overcome the access barriers to education and health content to better their livelihoods.”
Techfugees says 80 million people (roughly one percent of humanity) have been displaced because of climate change, war, conflict, economic challenges, and persecution. This figure is expected to grow to over 1 billion displaced people by 2050.
Belfast-headquartered BuffaloGrid has raised $6.4 million to date and counts Tiny VC, ADV, Seedcamp, Kima Ventures and LocalGlobe among its investors.
(Disclosure: Mike Butcher is Chairman of Techfugees)
Workrise, which has built a workforce management platform for the skilled trades, announced today that it has raised $300 million in a Series E round led by UK-based Baillie Gifford that values the company at $2.9 billion.
New investor Franklin Templeton joined existing backers including Founders Fund, Bedrock Capital, Andreessen Horowitz (a16z), Moore Strategic Ventures, 137 Ventures and Brookfield Growth Partners in putting money in the round. With this latest financing, Workrise has now raised over $750 million.
You may know Austin-based Workrise better by its former name, RigUp. The company changed its name earlier this year to reflect a new emphasis on industries beyond oil and gas, after that sector took a beating in recent years.
In 2020, Workrise laid off one-quarter of its corporate employees as the industry took an even bigger hit from the COVID-19 pandemic. It currently has over 600 employees in 25 offices.
Despite the rocky start to the year, Workrise apparently ended up rebounding. Its gross revenue has tripled since 2018, going from just under $300 million to about $900 million to close out 2020.
Workrise was founded in 2014 as a marketplace for on-demand services and skilled labor in the energy industry. In October 2019, it raised a $300 million Series D round led by Andreessen Horowitz (a16z) that valued the company at $1.9 billion.
Since then, Workrise has broadened its reach to include wind, solar, commercial construction and defense industries. In a nutshell, it connects skilled laborers with infrastructure and energy companies looking to staff and manage projects efficiently. Workrise’s online platform matches workers with over 500 companies in its network, manages payroll and benefits and provides access to training.
The company plans to use its new capital to continue to expand into new markets.
“The shift to clean energy and a redoubling of investment in infrastructure are opening up jobs that are desperately in need of filling,” said Workrise co-founder and CEO (and former energy investor) Xuan Yong in a statement. “Our platform makes it easier for skilled workers to find work and for companies to hire in-demand workers.”
Dave Bujnowski, investment manager at Baillie Gifford, points out that Workrise’s online management platform is “disrupting a sector that’s so far been slow to adopt new technologies.”
Workrise now serves more than 70 metro areas in the U.S., including Atlanta, where the company is matching trade workers with commercial construction companies, and Broomfield, Colorado, where the company trains and matches workers to jobs across the U.S. wind industry.
The company also offers trade workers access to training that equips them for energy and infrastructure jobs that are on the rise. Last year, Workrise placed more than 4,500 workers, or nearly a third of all its workers placed in 2020, in renewable-energy jobs.
Specifically, the company says it placed 8,000 unique workers in jobs in 2019, with 13% in renewables; that number jumped to 15,000 workers in 2020.
Düsseldorf-based proptech startup Dabbel is using AI to drive energy efficiency savings in commercial buildings.
It’s developed cloud-based self-learning building management software that plugs into the existing building management systems (BMS) — taking over control of heating and cooling systems in a way that’s more dynamic than legacy systems based on fixed set-point resets.
Dabbel says its AI considers factors such as building orientation and thermal insulation, and reviews calibration decisions every five minutes — meaning it can respond dynamically to changes in outdoor and indoor conditions.
The 2018-founded startup claims this approach of layering AI-powered predictive modelling atop legacy BMS to power next-gen building automation is able to generate substantial energy savings — touting reductions in energy consumption of up to 40%.
“Every five minutes Dabbel reviews its decisions based on all available data,” explains CEO and co-founder, Abel Samaniego. “With each iteration, Dabbel improves or adapts and changes its decisions based on the current circumstances inside and outside the building. It does this by using cognitive artificial intelligence to drive a Model-Based Predictive Control (MPC) System… which can dynamically adjust all HVAC setpoints based on current/future conditions.”
In essence, the self-learning system predicts ahead of time the tweaks that are needed to adapt for future conditions — saving energy vs a pre-set BMS that would keep firing the boilers for longer.
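The review-and-adjust cycle Dabbel describes can be sketched as a simple receding-horizon loop. Everything below is illustrative — the thermal model, comfort band, and function names are assumptions for the sketch, not details of Dabbel's actual system:

```python
# Minimal sketch of an MPC-style setpoint review loop, run every five minutes.
# The toy thermal model and all parameters are illustrative assumptions.

def predict_indoor_temp(current, setpoint, outdoor, steps):
    """Toy model: indoor temperature drifts toward a blend of the
    HVAC setpoint and the outdoor temperature at each step."""
    temp = current
    for _ in range(steps):
        temp += 0.3 * (setpoint - temp) + 0.05 * (outdoor - temp)
    return temp

def choose_setpoint(current, outdoor, comfort=(20.0, 23.0), horizon=6):
    """Pick the lowest-effort setpoint whose predicted indoor
    temperature stays inside the comfort band over the horizon."""
    best, best_cost = None, float("inf")
    for sp in [18 + 0.5 * i for i in range(13)]:   # candidates: 18.0–24.0 °C
        predicted = predict_indoor_temp(current, sp, outdoor, horizon)
        if comfort[0] <= predicted <= comfort[1]:
            cost = abs(sp - outdoor)               # crude proxy for HVAC effort
            if cost < best_cost:
                best, best_cost = sp, cost
    return best

# On a mild day, only a modest setpoint is needed to stay in band.
print(choose_setpoint(current=21.0, outdoor=15.0))  # → 21.0
```

A real MPC controller would replace the toy model with a learned building model and re-run this optimization each cycle with fresh sensor data, which is what makes the approach dynamic compared to fixed set-point resets.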
The added carrot for commercial building owners (or tenants) is that Dabbel squeezes these energy savings without the need to rip and replace legacy systems — nor, indeed, to install lots of IoT devices or sensor hardware to create a ‘smart’ interior environment; the AI integrates with (and automatically calibrates) the existing heating, ventilation, and air conditioning (HVAC) systems.
All that’s needed is Dabbel’s SaaS — and less than a week for the system to be implemented (it also says installation can be done remotely).
“There are no limitations in terms of Heating and Cooling systems,” confirms Samaniego, who has a background in industrial engineering and several years’ experience automating high tech plants in Germany. “We need a building with a Building Management System in place and ideally a BACnet communication protocol.”
Average reductions achieved so far across the circa 250,000m² of space where its AI is in charge of building management systems are a little more modest, but still impressive, at 27%. (He says the maximum saving seen at some “peak times” is 42%.)
The touted savings aren’t limited to a single location or type of building/client, according to Dabbel, which says they’ve been “validated across different use cases and geographies spanning Europe, the U.S., China, and Australia”.
Early clients are facility managers of large commercial buildings — Commerzbank clearly sees potential, having incubated the startup via its early-stage investment arm — and several schools.
A further 1,000,000m² is in the contract or offer phase — slated to be installed “in the next six months”.
Dabbel envisages its tech being useful to other types of education institutions and even other use-cases. (It’s also toying with adding a predictive maintenance functionality to expand its software’s utility by offering the ability to alert building owners to potential malfunctions ahead of time.)
And as policymakers around the globe turn their attention to how to achieve the very major reductions in carbon emissions needed to meet ambitious climate goals, the energy efficiency of buildings certainly can’t be overlooked.
“The time for passive responses to addressing the critical issue of carbon emission reduction is over,” said Samaniego in a statement. “That is why we decided to take matters into our own hands and develop a solution that actively replaces a flawed human-based decision-making process with an autonomous one that acts with surgical precision and thanks to artificial intelligence, will only improve with each iteration.”
If the idea of hooking your building’s heating/cooling up to a cloud-based AI sounds a tad risky for Internet security reasons, Dabbel points out it’s connecting to the BMS network — not the (separate) IT network of the company/building.
It also notes that it uses one-way communication via a VPN tunnel — “creating an end-to-end encrypted connection under high market standards”, as Samaniego puts it.
The startup has just closed a €3.6 million (~$4.4M) pre-Series A funding round led by Target Global, alongside main incubator (Commerzbank’s early-stage investment arm), SeedX, plus some strategic angel investors.
Commenting in a statement, Dr. Ricardo Schaefer, partner at Target Global, added: “We are enthusiastic to work with the team at Dabbel as they offer their clients a tangible and frictionless way to significantly reduce their carbon footprint, helping to close the gap between passive measurement and active remediation.”
As vice president of Innovation at National Grid Partners, I’m responsible for developing initiatives that not only benefit National Grid’s current business but also have the potential to become stand-alone businesses. So I obviously have strong views about the future of the energy industry.
But I don’t have a crystal ball; no one does. To be a good steward of our innovation portfolio, my job isn’t to guess what the right “basket” is for our “eggs.” It’s to optimally allocate our finite eggs across multiple baskets with the greatest collective upside.
Put another way, global and regional trends make it clear that the Next Big Thing isn’t any single thing at all. Instead, the future is about open innovation and integration of elements across the entire energy supply chain. Only with such an open energy ecosystem can we adapt to the highly volatile — some might even say unpredictable — market conditions we face in the energy industry.
I like to think of this open, innovation-enabling approach as the “energy internet,” and I believe it represents the most important opportunity in the energy sector today.
Here’s why I find the concept of the energy internet helpful. Before the digital internet (a term I’m using here to encompass all the hardware, software and standards that comprise it), we had multiple silos of technology such as mainframes, PCs, databases, desktop applications and private networks.
As the digital internet evolved, however, the walls between these silos disappeared. You can now utilize any platform on the back end of your digital services, including mainframes, commodity server hardware and virtual machines in the cloud.
You can transport digital payloads across networks that connect to any customer, supplier or partner on the planet with whatever combination of speed, security, capacity and cost you deem most appropriate. That payload can be data, sound or video, and your endpoint can be a desktop browser, smartphone, IoT sensor, security camera or retail kiosk.
This mix-and-match internet created an open digital supply chain that has driven an epochal boom in online innovation. Entrepreneurs and inventors can focus on specific value propositions anywhere across that supply chain rather than having to continually reinvent the supply chain itself.
The energy sector must move in the same direction. We need to be able to treat our various generation modalities like server platforms. We need our transmission grids to be as accessible as our data networks, and we need to be able to deliver energy to any consumption endpoint just as flexibly. We need to encourage innovation at those endpoints, too — just as the tech sector did.
Just as the digital internet rewards innovation wherever it serves the market — whether you build a better app or design a cooler smartphone — so too will the energy internet offer greater opportunities across the energy supply chain.
So what is the energy internet? As a foundation, let’s start with a model that takes the existing industry talk of digitalization, decentralization and decarbonization a few steps further:
Digitalization: Innovation depends on information about demand, supply, efficiency, trends and events. That data must be accurate, complete, timely and sharable. Digitalization efforts such as IoE, open energy, and what many refer to as the “smart grid” are instrumental because they ensure innovators have the insights they need to continuously improve the physics, logistics and economics of energy delivery.
Decentralization: The internet changed the world in part because it took the power of computing out of a few centralized data centers and distributed it wherever it made sense. The energy internet will do likewise. Digitalization supports decentralization by letting assets be integrated into an open energy supply chain. But decentralization is much more than just the integration of existing assets — it’s the proliferation of new assets wherever they’re needed.
Decarbonization: Decarbonization is, of course, the whole point of the exercise. We must move to greener supply chains built on decentralized infrastructure that leverage energy supply everywhere to meet energy demand anywhere. The market is demanding it and regulators are requiring it. The energy internet is therefore more than just an investment opportunity — it’s an existential imperative.
Democratization: Much of the innovation associated with the internet arose from the fact that, in addition to decentralizing technology physically, it also democratized technology demographically. Democratization is about putting power (literally, in this case) into the hands of the people. Vastly increasing the number of minds and hands tackling the energy industry’s challenges will also accelerate innovation and enhance our ability to respond to market dynamics.
Diversity: As I asserted above, no one has a crystal ball. So anyone investing in innovation at scale should diversify — not just to mitigate risk and optimize returns, but as an enablement strategy. After all, if we truly believe the energy internet (or Grid 2.0, if you prefer that term) will require that all the elements of the energy supply chain work together, we must diversify our innovation initiatives across those elements to promote interoperability and integration.
That’s how the digital internet was built. Standards bodies played an important role, but those standards and their implementations were driven by industry players like Microsoft and Cisco — as well as top VCs — who ensured the ecosystem’s success by driving integration across the supply chain.
We must take the same approach with the energy internet. Those with the power and influence to do so must help ensure we aggressively advance integration across the energy supply chain as a whole, even as we improve the individual elements. To this end, National Grid last year kicked off a new industry group called the NextGrid Alliance, which includes senior executives from more than 60 utilities across the world.
Finally, we believe it’s essential to diversify thinking within the energy ecosystem as well. National Grid has sounded alarms about the serious underrepresentation of women in the energy industry and of female undergraduates in STEM programs. On the flip side, research by Deloitte has found diverse teams are 20% more innovative. More than 60% of my own team at NGP are women, and that breadth of perspective has helped National Grid capture powerful insights into companywide innovation efforts.
The concept of the energy internet isn’t some abstract future ideal. We’re already seeing specific examples of how it will transform the market:
Green transnationalism: The energy internet is on its way to becoming as global as the digital internet. The U.K., for instance, is now receiving wind-generated power from Norway and Denmark. This ability to leverage decentralized energy supply across borders will have significant benefits for national economies and create new opportunities for energy arbitrage.
EV charging models: Pumping electricity isn’t like pumping gas, nor should it be. With the right combination of innovation in smart metering and fast-charging end-point design, the energy internet will create new opportunities at office buildings, residential complexes and other places where cars plus convenience can equal cash.
Disaster mitigation: Recent events in Texas have highlighted the negative consequences of not having an energy internet. Responsible utilities and government agencies must embrace digitalization and interoperability to more effectively troubleshoot infrastructure and better safeguard communities.
These are just a few of the myriad ways in which an open, any-to-any energy internet will promote innovation, stimulate competition and generate big wins. No one can predict exactly what those big wins will be, but there will surely be many, and they will accrue to the benefit of all.
That’s why even without a crystal ball, we should all commit ourselves to digitalization, decentralization, decarbonization, democratization and diversity. In so doing, we’ll build the energy internet together, and enable a fair, affordable and clean energy future.
After years of delays, the federal government has approved what will be the third offshore wind project in the U.S.—and the largest by far. Vineyard Wind, situated off the coast of Massachusetts, will have a generating capacity of 800 megawatts, dwarfing Block Island Wind’s 30 MW and the output from two test turbines installed in Virginia.
Vineyard Wind has been approved a number of times but continued to experience delays during the Trump administration, which was openly hostile to renewable energy. But the Biden administration wrapped up an environmental review shortly before announcing a major push to accelerate offshore wind development.
The final hurdle, passed late Monday, was getting the Bureau of Ocean Energy Management to issue an approval for Vineyard Wind’s construction and operating plan. With that complete, the Departments of Commerce and Interior announced what they term the “final federal approval” to install 84 offshore turbines. Vineyard Wind will still have to submit paperwork showing that its construction and operation will be consistent with the approved plan; assuming that the operators can manage that, construction can begin.
Sweden’s Exeger, which for over a decade has been developing flexible solar cell technology (called Powerfoyle) that it touts as efficient enough to power gadgets solely with light, has taken in another tranche of funding to expand its manufacturing capabilities by opening a second factory in the country.
The $38 million raise comprises $20M in debt financing and an $18M directed share issue to Ilija Batljan Invest AB. The debt portion consists of a $12M loan from Swedbank, partly underwritten by the Swedish Export Credit Agency (EKN) under its guarantee of investment credits for companies with innovations, and an $8M loan from the Swedish Export Credit Corporation (SEK), partly underwritten by the pan-EU European Investment Fund (EIF).
The share issue of 937,500 shares was priced at $19.20 per share, which corresponds to a pre-money valuation of $860M for the solar cell maker.
Back in 2019, SoftBank put $20M into Exeger in two $10M investments, entering a strategic partnership to accelerate the global rollout of its tech and further extending its various investments in solar energy.
The Swedish company has also previously received a loan from the Swedish Energy Agency, in 2014, to develop its solar cell tech. But this latest debt financing round is its first on commercial terms (albeit partly underwritten by EKN and EIF).
Exeger says its solar cell tech is the only one that can be printed in free-form and different colors, meaning it can “seamlessly enhance any product with endless power”, as its PR puts it.
So far two devices have integrated the Powerfoyle tech: a bike helmet with an integrated safety taillight (by POC) and a pair of wireless headphones (by Urbanista). Neither has yet been commercially launched, but both are slated to go on sale next month.
Exeger says its planned second factory in Stockholm will allow it to increase its manufacturing capacity tenfold by 2023, helping it target a broader array of markets sooner and accelerating its goal of mass adoption of its tech.
Its main target markets for the novel solar cell technology currently include consumer electronics, smart home, smart workplace, and IoT.
More device partnerships are slated as coming this year.
“We don’t label our rounds but take a more pragmatic view on fundraising,” said Giovanni Fili, founder and CEO. “Developing a new technology, a new energy source, as well as laying the foundation for a new industry takes time. Thus, a company like ours requires long-term strategic investors that all buy into the vision as well as the overall strategy. We have spent a lot of time and energy on this, and it has paid off. It has given the company the resources required, both time and money, to bring an invention to a commercial launch, which is where we are today.”
Fili added that it’s chosen to raise debt financing now “because we can”.
“The same answer as when asked why we build a new factory in Stockholm, Sweden, rather than abroad. We have always said that once commercial, we will start leveraging the balance sheet when securing funds for the next factory. Thanks to our long-standing relationship with Swedbank and SEK, as well as the great support of the Swedish government through EKN underwriting part of the loans, we were able to move this forward,” he said.
Discussing the forthcoming two debut gizmos, the POC Omne Eternal helmet and the Urbanista Los Angeles headphones — which will both go on sale in June — Fili says interest in the self-powered products has “surpassed all our expectations”.
“Any product which integrates Powerfoyle is able to charge under all forms of light, whether from indoor lamps or natural outdoor light. The stronger the light, the faster it charges. The POC helmet, for example, doesn’t have a USB port to power the safety light because the ambient light will keep it charging, cycling or not,” he tells TechCrunch.
“The Urbanista Los Angeles wireless headphones have already garnered tremendous interest online. Users can spend one hour outdoors with the headphones and gain three hours of battery time. This means most users will never need to worry about charging. As long as you have our product in light, any light, it will constantly charge. That’s one of the key aspects of our technology, we have designed and engineered the solar cell to work wherever people need it to work.”
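The quoted ratio implies the cell harvests roughly three times the headphones' average playback draw. A back-of-envelope sketch (the daily-usage figures here are assumptions for illustration, not Urbanista specs):

```python
# Toy check of the "one hour of light buys three hours of playback" claim.
# The 3x ratio is the figure quoted; the usage numbers are assumptions.
GAIN_RATIO = 3.0  # hours of playback banked per hour spent in light

def net_battery_change(light_hours, playback_hours):
    """Hours of battery gained (positive) or spent (negative) in a day."""
    return light_hours * GAIN_RATIO - playback_hours

# An assumed commuter: 1h outdoors and 2.5h of listening per day,
# so the battery trends upward and the charger stays in the drawer.
print(net_battery_change(1.0, 2.5))  # → 0.5
```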
“This is the year of our commercial breakthrough,” he added in a statement. “The phenomenal response from the product releases with POC and Urbanista are clear indicators this is the perfect time to introduce self-powered products to the world. We need mass scale production to realize our vision, which is to touch the lives of a billion people by 2030, and that’s why the factory is being built now.”
On the same day that Fortnite maker Epic Games goes to trial in one of the biggest legal challenges to the App Store’s business model to date, it has announced the acquisition of the artist portfolio community ArtStation — and immediately lowered the commissions on sales. Standard creators on ArtStation will now pay the same 12% commission rate found in Epic’s own Games Store for PCs, instead of the 30% charged before. The reduced rate is meant to serve as an example to the wider community of what a “reasonable” commission should look like, and it could become a point of comparison with the Apple App Store’s 30% commission for larger developers like Epic as the court case proceeds.
ArtStation today offers a place for creators across gaming, media, and entertainment to showcase their work and find new jobs. The company has had a long relationship with Epic Games, as many ArtStation creators work with Epic’s Unreal Engine. However, ArtStation has also been a home to 2D and 3D creators across verticals, including those who don’t work with Unreal Engine.
The acquisition won’t change that, the team says in its announcement. Instead, the deal will expand the opportunities for creators to monetize their work. Most notably, that involves the commission drop. For standard creators, the fees will drop from 30% to 12%. For Pro members (who pay $9.95/mo for a subscription), the commission goes even lower — from 20% to 8%. And for self-promoted sales, the fees will be just 5%. ArtStation’s streaming video service, ArtStation Learning, will also be free for the rest of 2021, the company notes.
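In concrete terms, here is what those rate changes mean for a creator's take-home on a sale, using the figures from the announcement (the helper function and tier labels are just for illustration):

```python
# Back-of-envelope: a creator's share of a $100 sale under ArtStation's
# old vs. new commission rates, per the announced figures.
RATES = {
    "standard": {"old": 0.30, "new": 0.12},
    "pro":      {"old": 0.20, "new": 0.08},
}

def creator_take(sale, tier, schedule):
    """Creator's share of a sale after the marketplace commission."""
    return round(sale * (1 - RATES[tier][schedule]), 2)

for tier in RATES:
    old = creator_take(100, tier, "old")
    new = creator_take(100, tier, "new")
    print(f"{tier}: ${old:.0f} -> ${new:.0f}")
# standard: $70 -> $88
# pro: $80 -> $92
```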
The slashed commission, however, is perhaps the most important change Epic is making to ArtStation, because it gives Epic a concrete example of how it treats its own creator communities. It will likely reference the acquisition and the commission changes during its trial with Apple, along with its own Epic Games Store and its similarly low rate. Epic’s move has already prompted Microsoft to lower its cut on game sales, too, with a recently announced drop from 30% to 12%.
In the trial, Epic Games will try to argue that Apple has a monopoly on the iOS app ecosystem and it abuses its market power to force developers to use Apple’s payment systems and pay it commissions on the sales and in-app purchases that flow through those systems. Epic Games, like several other larger app makers, would rather use its own payment systems to avoid the commission — or at the very least, be able to point users to a website where they can pay directly. But Apple doesn’t allow this, per its App Store guidelines.
Last year, Epic Games triggered Fortnite’s App Store expulsion by introducing a new direct way to pay on mobile devices which offered a steep discount. It was a calculated move. Both Apple and Google immediately banned the game for violating their respective app store policies, as a result. And then Epic sued.
While Epic’s fight is technically with both Apple and Google, it has focused more of its energy on the former because Android devices allow sideloading of apps (a means of installing apps directly), and Apple does not.
Meanwhile, Apple’s argument is that Epic Games agreed to Apple’s terms and guidelines and then purposefully violated them in an effort to get a special deal. But Apple says the guidelines apply to all developers equally, and Epic doesn’t get an exception here.
However, throughout the course of the U.S. antitrust investigations into big tech, it was discovered that Apple did, in fact, make special deals in the past. Emails shared by the House Judiciary Committee as part of an investigation revealed that Apple had agreed to a 15% commission for Amazon’s Prime Video app at the start, when subscription video apps typically pay 30% in year one, then 15% in year two and beyond. (Apple says Amazon simply qualified for a new program.) Plus, other older emails revealed Apple had several discussions about raising commissions even higher than 30%, indicating that Apple believed its commission rate had some flex.
Ahead of today’s acquisition by Epic Games, ArtStation received a “Megagrant” from Epic during the height of the pandemic to help it through an uncertain period. This may have pushed the two companies to further discuss deeper ties going forward.
“Over the last seven years, we’ve worked hard to enable creators to showcase their work, connect with opportunities and make a living doing what they love,” said Leonard Teo, CEO and co-founder of ArtStation, in a statement. “As part of Epic, we will be able to advance this mission and give back to the community in ways that we weren’t able to on our own, while retaining the ArtStation name and spirit.”
Tapping the geothermal energy stored beneath the Earth’s surface as a way to generate renewable power is one of the new visions for the future that’s captured the attention of environmentalists and oil and gas engineers alike.
That’s because it not only offers a way to generate power without relying on greenhouse-gas-emitting hydrocarbons, but also draws on the same skillsets and expertise that the oil and gas industry has been honing and refining for years.
At least that’s what drew the former completion engineer (it’s not what it sounds like) Tim Latimer to the industry and to launch Fervo Energy, the Houston-based geothermal tech developer that’s picked up funding from none other than Bill Gates’ Breakthrough Energy Ventures (that fund… is so busy) and former eBay executive Jeff Skoll’s Capricorn Investment Group.
With the new $28 million in hand, Fervo is planning to ramp up its projects, which Latimer said would “bring on hundreds of megawatts of power in the next few years.”
Latimer got his first exposure to the environmental impact of power generation as a kid growing up in a small town outside of Waco, Texas near the Sandy Creek coal power plant, one of the last coal-powered plants to be built in the U.S.
Like many Texas kids, Latimer came from an oil family and got his first jobs in the oil and gas industry before realizing that the world was going to be switching to renewables and the oil industry — along with the friends and family he knew — could be left high and dry.
It’s one reason why he started working on Fervo, the entrepreneur said.
“What’s most important, from my perspective, since I started my career in the oil and gas industry is providing folks that are part of the energy transition on the fossil fuel side to work in the clean energy future,” Latimer said. “I’ve been able to go in and hire contractors and support folks that have been out of work or challenged because of the oil price crash… And I put them to work on our rigs.”
When the Biden administration talks about finding jobs for employees in the hydrocarbon industry as part of the energy transition, this is exactly what they’re talking about.
And geothermal power is no longer as constrained by geography, so there are abundant resources to tap and the potential for high-paying jobs in areas that are already dependent on geological services work, Latimer said (late last year, Vox published a good overview of the history and opportunity presented by the technology).
“A large percentage of the world’s population actually lives next to good geothermal resources,” Latimer said. “25 countries today that have geothermal installed and producing and another 25 where geothermal is going to grow.”
Geothermal power production actually has a long history in the Western U.S. and in parts of Africa where naturally occurring geysers and steam jets pouring from the earth have been obvious indicators of good geothermal resources, Latimer said.
“Fervo’s technology unlocks a new class of geothermal resource that is ready for large-scale deployment. Fervo’s geothermal systems use novel techniques, including horizontal drilling, distributed fiber optic sensing, and advanced computational modelling, to deliver more repeatable and cost effective geothermal electricity,” Latimer wrote in an email. “Fervo’s technology combines with the latest advancements in Organic Rankine Cycle generation systems to deliver flexible, 24/7 carbon-free electricity.”
Initially developed with a grant from the TomKat Center at Stanford University and a fellowship funded by Activate.org at the Lawrence Berkeley National Lab’s Cyclotron Road division, Fervo has gone on to score funding from the DOE’s Geothermal Technology Office and ARPA-E to continue work with partners like Schlumberger, Rice University and the Berkeley Lab.
The combination of new and old technology is opening vast geographies to the company to potentially develop new projects.
Other companies are also looking to tap geothermal power to drive a renewable power generation development business. Those are startups like Eavor, which has the backing of energy majors like bp Ventures, Chevron Technology Ventures, Temasek, BDC Capital, Eversource and Vickers Venture Partners; and other players including GreenFire Energy, and Sage Geosystems.
Demand for geothermal projects is skyrocketing, opening up big markets for startups that can nail the cost issue for geothermal development. As Latimer noted, from 2016 to 2019 there was only one major geothermal contract, but in 2020 there were ten new major power purchase agreements signed by the industry.
For all of these projects, cost remains a factor. Contracts being signed for geothermal power are in the $65 to $75 per megawatt-hour range, according to Latimer. By comparison, solar plants are now coming in somewhere between $35 and $55 per megawatt-hour, as The Verge reported last year.
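A quick back-of-envelope calculation puts a number on the premium buyers pay for geothermal. This sketch simply takes the midpoint of each quoted price range (the midpoints are my simplification, not figures from Latimer or The Verge):

```python
# Midpoints of the price ranges quoted above (per megawatt-hour).
geothermal_price = (65 + 75) / 2  # $70
solar_price = (35 + 55) / 2       # $45

# Premium a buyer pays for always-on geothermal over intermittent solar.
premium = geothermal_price / solar_price - 1
print(f"Geothermal premium over solar: {premium:.0%}")  # → 56%
```

A premium in the neighborhood of 50-60% is what utilities are, in effect, paying for firm, round-the-clock power.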
But Latimer said the stability and predictability of geothermal power makes the cost differential palatable for utilities and businesses that need the assurance of uninterruptible power supplies. As a current Houston resident, Latimer has intimate experience with the issue from this year’s winter freeze, which left him without power for five days.
Indeed, geothermal’s ability to provide always-on clean power makes it an incredibly attractive option. A recent Department of Energy study found that geothermal could meet as much as 16% of U.S. electricity demand, and other estimates put geothermal’s contribution at nearly 20% of a fully decarbonized grid.
“We’ve long been believers in geothermal energy but have waited until we’ve seen the right technology and team to drive innovation in the sector,” said Ion Yadigaroglu of Capricorn Investment Group, in a statement. “Fervo’s technology capabilities and the partnerships they’ve created with leading research organizations make them the clear leader in the new wave of geothermal.”
SunRoof is a European startup that has come up with a clever idea. It has its own roof-tile technology which generates solar power. It then links up those houses, creating a sort of virtual power plant, allowing homeowners to sell surplus energy back to the grid.
It’s now closed a €4.5 million round (Seed extension) led by Inovo Venture Partners, with participation from SMOK Ventures (€2m of which came in the form of convertible notes). Other investors include LT Capital, EIT InnoEnergy, FD Growth Capital and KnowledgeHub.
Sweden-based SunRoof’s approach is reminiscent of Tesla Energy, with its solar roof tiles, but whereas Tesla runs a closed energy ecosystem, SunRoof plans to work with multiple energy partners.
To build this virtual power plant, SunRoof CEO and serial entrepreneur Lech Kaniuk (formerly of Delivery Hero, PizzaPortal and iTaxi) acquired the renewable energy system Redlogger in 2020.
SunRoof’s platform consists of 2-in-1 solar roofs and façades that generate electricity without needing traditional photovoltaic modules. Instead, they use monocrystalline solar cells sandwiched between two large sheets of glass which measure 1.7 sq meters. Because the surface area is large and the connections fewer, the roofs are cheaper and faster to build.
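For a rough sense of scale, here is what one of those 1.7-square-meter panels might produce at peak. Both the 1,000 W/m² irradiance (standard test conditions) and the ~20% monocrystalline cell efficiency are my assumptions, not SunRoof’s published specs:

```python
area_m2 = 1.7           # panel size cited by SunRoof
irradiance_w_m2 = 1000  # standard test condition irradiance (assumed)
efficiency = 0.20       # typical monocrystalline cell efficiency (assumed)

peak_watts = area_m2 * irradiance_w_m2 * efficiency
print(f"Approximate peak output per panel: {peak_watts:.0f} W")  # → 340 W
```

That is in the same ballpark as a conventional rooftop photovoltaic module, which is the point: the roof itself does the panel’s job.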
SunRoof gives homeowners an energy app, built on Redlogger’s infrastructure, to manage their solar output.
Tesla’s Autobidder is a trading platform that manages the energy from roofs but is a closed ecosystem. SunRoof, by contrast, works with multiple partners.
Kaniuk said: “SunRoof was founded to make the move to renewable energy not only easy, but highly cost-effective without ever having to sacrifice on features or design. We’ve already grown more than 500% year-on-year and will use the latest funding to double down on growth.”
Michal Rokosz, Partner at Inovo Venture Partners, commented: “The market of solar energy is booming, estimated to reach $334 billion by 2026. Technology of integrated solar roofs is past the inflection point. It is an economical no-brainer for consumers to build new homes using solar solutions. With a more elegant and efficient substitute to a traditional hybrid of rooftops and solar panels, SunRoof clearly stands out and has a chance to be the brand for solar roofs, making clean-tech more appealing to a wider customer-base.”
The team includes co-founder Marek Zmysłowski (ex-Jumia Travel and HotelOnline.co), former Google executive Rafal Plutecki, and former Tesla Channel Sales Manager Robert Bruchner.
There are rollout plans for Sweden, Germany, Poland, Switzerland, Italy, Spain, and the US.
The two founders of Crusoe Energy think they may have a solution to two of the largest problems facing the planet today — the increasing energy footprint of the tech industry and the greenhouse gas emissions associated with the natural gas industry.
Crusoe, which uses excess natural gas from energy operations to power data centers and cryptocurrency mining operations, has just raised $128 million in new financing from some of the top names in the venture capital industry to build out its operations — and the timing couldn’t be better.
Methane emissions are emerging as a new area of focus for researchers and policymakers focused on reducing greenhouse gas emissions and keeping global warming within the 1.5 degree targets set under the Paris Agreement. And those emissions are just what Crusoe Energy is capturing to power its data centers and bitcoin mining operations.
The reason why addressing methane emissions is so critical in the short term is because these greenhouse gases trap more heat than their carbon dioxide counterparts and also dissipate more quickly. So dramatic reductions in methane emissions can do more in the short term to alleviate the global warming pressures that human industry is putting on the environment.
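To make that trade-off concrete, the sketch below uses commonly cited IPCC global warming potential (GWP) values for methane, roughly 84 times CO2 over a 20-year horizon versus roughly 28 times over 100 years (the exact figures vary slightly between assessment reports):

```python
# Approximate IPCC global warming potentials for methane relative to CO2.
GWP_20YR = 84
GWP_100YR = 28

tonnes_methane = 1.0
co2e_short = tonnes_methane * GWP_20YR   # 84 tonnes CO2-equivalent
co2e_long = tonnes_methane * GWP_100YR   # 28 tonnes CO2-equivalent

# The near-term impact is 3x the century-scale one, which is why cutting
# methane pays off so quickly for climate targets.
print(co2e_short / co2e_long)  # → 3.0
```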
And the biggest source of methane emissions is the oil and gas industry. In the U.S. alone roughly 1.4 billion cubic feet of natural gas is flared daily, said Chase Lochmiller, a co-founder of Crusoe Energy. About two thirds of that is flared in Texas with another 500 million cubic feet flared in North Dakota, where Crusoe has focused its operations to date.
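For a sense of how much energy that is, here is a back-of-envelope conversion of 1.4 billion cubic feet per day into continuous thermal power. The heating value of roughly 1,030 BTU per cubic foot is a typical figure for natural gas, my assumption rather than a number from Crusoe:

```python
CF_PER_DAY = 1.4e9       # flared gas volume cited above, cubic feet/day
BTU_PER_CF = 1030        # typical heating value of natural gas (assumed)
JOULES_PER_BTU = 1055.06
SECONDS_PER_DAY = 86400

joules_per_day = CF_PER_DAY * BTU_PER_CF * JOULES_PER_BTU
thermal_watts = joules_per_day / SECONDS_PER_DAY
print(f"~{thermal_watts / 1e9:.1f} GW of continuous thermal power")  # → ~17.6 GW
```

Even at modest generator efficiencies, that is several gigawatts of electricity being burned off into the sky, which is the resource Crusoe is chasing.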
For Lochmiller, a former quant trader at some of the top American financial services institutions, and Cully Cavness, a third-generation oil and gas scion, the ability to capture natural gas and harness it for computing operations is a natural combination of the two men’s interests in financial engineering and environmental preservation.
The two Denver natives met in prep school and remained friends. When Lochmiller left for MIT and Cavness headed off to Middlebury, they didn’t know that they’d eventually be launching a business together. But through Lochmiller’s exposure to large-scale computing and the financial services industry, and Cavness’s assumption of the family business, they came to the conclusion that there had to be a better way to address the massive waste associated with natural gas.
Conversation around Crusoe Energy began in 2018 when Lochmiller and Cavness went climbing in the Rockies to talk about Lochmiller’s trip to Mt. Everest.
When the two men started building their business, the initial focus was on finding an environmentally friendly way to deal with the energy footprint of bitcoin mining operations. It was this pitch that brought the company to the attention of investors at Polychain, the investment firm started by Olaf Carlson-Wee (and Lochmiller’s former employer), and investors like Bain Capital Ventures and new investor Valor Equity Partners.
(This was also the pitch that Lochmiller made to me to cover the company’s seed round. At the time I was skeptical of the company’s premise and was worried that the business would just be another way to prolong the use of hydrocarbons while propping up a cryptocurrency that had limited actual utility beyond a speculative hedge against governmental collapse. I was wrong on at least one of those assessments.)
“Regarding questions about sustainability, Crusoe has a clear standard of only pursuing projects that are net reducers of emissions. Generally the wells that Crusoe works with are already flaring and would continue to do so in the absence of Crusoe’s solution. The company has turned down numerous projects where they would be a buyer of low cost gas from a traditional pipeline because they explicitly do not want to be net adders of demand and emissions,” wrote a spokesman for Valor Equity in an email. “In addition, mining is increasingly moving to renewables and Crusoe’s approach to stranded energy can enable better economics for stranded or marginalized renewables, ultimately bringing more renewables into the mix. Mining can provide an interruptible base load demand that can be cut back when grid demand increases, so overall the effect to incentivize the addition of more renewable energy sources to the grid.”
Other investors have since piled on including: Lowercarbon Capital, DRW Ventures, Founders Fund, Coinbase Ventures, KCK Group, Upper90, Winklevoss Capital, Zigg Capital and Tesla co-founder JB Straubel.
The company now operates 40 modular data centers powered by otherwise wasted and flared natural gas throughout North Dakota, Montana, Wyoming and Colorado. Next year that number should expand to 100 units as Crusoe enters new markets such as Texas and New Mexico. Since launching in 2018, Crusoe has emerged as a scalable solution to reduce flaring through energy-intensive computing such as bitcoin mining, graphical rendering, artificial intelligence model training and even protein folding simulations for COVID-19 therapeutic research.
Crusoe boasts 99.9% combustion efficiency for its methane, and is also bringing additional benefits in the form of new networking buildout at its data center and mining sites. Eventually, this networking capacity could lead to increased connectivity for rural communities surrounding the Crusoe sites.
Currently, 80% of the company’s operations are being used for bitcoin mining, but there’s increasing demand for use in data center operations, and some universities, including Lochmiller’s alma mater MIT, are looking at the company’s offerings for their own computing needs.
“That’s very much in an incubated phase right now,” said Lochmiller. “A private alpha where we have a few test customers… we’ll make that available for public use later this year.”
Crusoe Energy Systems should have the lowest data center operating costs in the world, Lochmiller said, and while the company will spend money on the infrastructure buildout necessary to get data to customers, those costs are negligible compared to energy consumption.
The same holds true for bitcoin mining, where the company can offer an alternative to coal powered mining operations in China and the construction of new renewable capacity that wouldn’t be used to service the grid. As cryptocurrencies look for a way to blunt criticism about the energy usage involved in their creation and distribution, Crusoe becomes an elegant solution.
Institutional and regulatory tailwinds are also propelling the company forward. New Mexico recently passed laws limiting flaring and venting to no more than 2 percent of an operator’s production by April of next year; North Dakota is pushing for incentives to support on-site flare capture systems; and Wyoming signed a law creating incentives for flare gas reduction applied to bitcoin mining. The world’s largest financial services firms are also taking a stand against flare gas, with BlackRock calling for an end to routine flaring by 2025.
“Where we view our power consumption, we draw a very clear line in our project evaluation stage where we’re reducing emissions for an oil and gas projects,” Lochmiller said.
Texas’ deep freeze didn’t just disrupt natural gas supplies throughout Lone Star country—its effects rippled across the country, extending as far north as Minnesota. There, gas utilities had to pay $800 million more than they anticipated during the event, and Minnesota regulators are furious.
“The ineptness and disregard for common-sense utility regulation in Texas makes my blood boil and keeps me up at night,” Katie Sieben, chairwoman of the Minnesota Public Utility Commission, told The Washington Post. “It is maddening and outrageous and completely inexcusable that Texas’s lack of sound utility regulation is having this impact on the rest of the country.”
The gas and electric markets in Texas are lightly regulated and highly competitive, which has pushed companies to deliver energy at the lowest possible cost. But it also means that many companies were ill-prepared when the mercury dropped. To save money, they had skimped on winterizing their equipment. As a result, gas lines across the state—which has about 23 percent of the country’s reserves—quite literally froze. The spot price of natural gas soared to 70 times what it would normally be in Minnesota, and gas utilities paid a hefty premium when they used the daily market to match demand.
The black curtain pulls aside and a character straight out of the movies waves hello. This is not an uncommon occurrence when I’m around Imagineers, but this time is special. The character isn’t a costume, it’s a robot. And, unlike the many animatronic figures you’ve seen in the parks, it’s not stuck in one place. No, this character is walking towards me, attached only by a thin cable used for programming.
The gait is smooth, the arms swing in a lifelike manner and the feet plant realistically. The body sways exactly as you’d expect it to. There’s no other way to say it, it’s ambling. This is Project Kiwi, a small-scale, free-roaming robotic actor — the first of its kind for Disney and a real robotics milestone.
The holy grail of themed entertainment has been established for decades now: a fully mobile, bipedal character that matches the appearance, personality and scale of the original. Various non-mobile levels of this vision have been achieved at parks around the world, including the incredibly lifelike Na’Vi Shaman, The A1000 figure that powers characters like Star Wars: Galaxy’s Edge’s Hondo Ohnaka and the smoothly expressive Belle from Beauty and the Beast at Tokyo Disneyland. There have also been some cool mobile experiments like the self-piloting droid “Jake”.
The pint sized character has accurately rendered textures on its face, hands and feet. It’s dressed in a distressed red flight suit that you may remember from the films. And its eyes are expressive as it looks at me and waves. This is the moment, the one that Disney Imagineers and park goers alike have been waiting decades to realize. This is a real, walk around character that is at the proper scale, kid scale.
A couple of weeks ago at Walt Disney Imagineering in Southern California, I saw just how close they finally are to making that dream come true: a bipedal platform, developed completely in house over the past three years by WDI researchers and roboticists, dressed up to look like a roughly two-and-a-half-foot-tall Groot.
Even though the version of Kiwi that I’m looking at is Groot-flavored, it’s important to stress that this is a platform first and foremost, which means it could take this form when it gets to the parks, or another form entirely. While developing a platform, it’s important to have a target character that can tell you whether or not you’re hitting an established mark of believability.
Kiwi is also still very much a work in progress. I wouldn’t expect to see this in the wild soon; there is still a lot of work to be done on the way Kiwi works and interacts with people, and WDI does not have immediate plans to put it in the parks.
But even at this stage it’s an incredible feat of engineering that genuinely radiates that elusive characteristic that Disney always searches for with its figures: presence.
How did we get here?
I was able to speak to the lead on Project Kiwi, R&D Imagineer Principal Scott LaValley as well as Advanced Development Studio Executive SVP Jon Snoddy about how the platform came together over the past few years.
“Project KIWI started about three years ago to figure out how we can bring our smaller characters to life at their actual scale in authentic ways,” LaValley says. “It’s an exciting time for bipedal robotics and with an incredible team and our combination of technology, artistry, and magic, we are bringing characters to life that could not have happened anywhere but Disney.”
I’ve talked a bit about the unique Imagineering process in my previous pieces on how Disney builds reactive robotics, autonomous stuntbots and even entire lands. Imagineering works a lot like a startup in the way that it comes up with a problem to solve and then goes about pulling in other departments to help it get a solution. There is a remarkably ego-free nature to much of the way that WDI actually finds those solutions, too. They are as likely to find a key component off the shelf as they are to design, develop and patent it in house.
The interconnected nature of Imagineering departments like ride design, show systems, special effects, animatronics, Tech Studio R&D and Disney Research means that they share solutions across the stack as well.
The guiding thread to all of it, of course, is storytelling. This guiding force exists at all levels of the process, keeping the project moving in the right direction — towards a better way to tell stories and transport guests.
With Kiwi, the end goal was clear: a character that could walk on its own and interact with park guests. Unfortunately, due to the scale and complexity of the figure and the requirements for interaction and walking, no off-the-shelf platform would do. There are only a handful of truly viable bipedal robotics platforms anywhere in the world, and the vast majority of them are being created for industrial applications, with a handful of ‘human-scale’ solutions that are designed as marketing set pieces rather than truly autonomous systems.
So to hit that goal, Imagineering turned to R&D and LaValley’s team. LaValley had come to Disney from Boston Dynamics, where he worked on the first version of its biped robot Atlas.
The project scope called for a biped robot that was battery powered and could be programmed to handle autonomous interactions with park guests as well as scripted gestures and emotes. The team would take the next three years to build what they needed — much of which was custom for reasons we’ll get into shortly.
It’s clear at a glance that Kiwi has no operator inside. The human brain is pretty good at instinctually understanding whether a space is just too small to have a person in it. In order to achieve the small size, the team had to first build a custom skeleton that had room for every motor and actuator Kiwi would need to achieve 50 degrees of freedom, all while keeping it humanoid in shape so that it could be ‘dressed up’ as any number of characters.
First came the frame. Prototypes were built from custom printed polymer and then eventually custom metal parts using industrial printers. The armatures and segments that they needed to house the critical components were just too complex to mill or cast. The cleverly printed metal skeleton is hollow throughout, allowing a ‘marrow conduit’ for air which rushes through the body cooling the motors and actuators. In the current Kiwi prototype the air comes in through the collar area of the suit, rushes throughout the body, propelled by fans embedded in the skeleton and exhausts near the bottom of the unit. Eventually it will use the clothing as a shroud to help air flow out near the feet.
Though there is some audible noise, even in this early state it is very low, allowing audio playing out of a speaker to enable conversation.
As you can see in the exclusive progress video embedded above, the lower sections were built first. Early testing around the office shows the legs and torso sneaking, bouncing, shuffling and strutting through Imagineering. This is probably the only workplace in the world where the bottom half of a torso can tiptoe past your office while you’re eating lunch and it doesn’t even merit a pause between bites.
An enormous amount of completely custom robotics work went into the Kiwi platform. In the demonstration I saw, young Groot had a safety tether and control cable for live programming but nothing on the rig itself needed support, it was free roaming with on board battery power that LaValley says hits around the 45 minute mark currently with more longevity hoped for in the final version. In fact, a next-generation skeleton is already under development that is lighter and more efficient.
The legs use a system that offers a kinetic counter-balance, allowing the force of having to move and plant a foot to be off-set, making motions more power efficient and quicker. Think of a spring loaded heavy gate that makes it easier for you to swing open — but no springs, and a robotic limb instead of a gate.
The feet plant realistically for the very simple reason that they must actually support the figure. This gives it an additional layer of believability that just doesn’t happen with externally supported characters that “fake” a foot plant. LaValley demonstrated that the figure could easily stay upright even if it was shoved gently or if a hand was rested on its shoulder. This kind of self-balancing is something that humans do unconsciously and continuously, but it must be built and programmed into an ambulatory robot.
Many patentable inventions went into this creation. One of them is a clever system of gears that translates energy across joints, allowing them to share motors with one another even across a joint like a knee or wrist. This means fewer components and the ability to keep motor and actuator packages small and compact enough to fit underneath theming.
In order to minimize the amount of wiring throughout Kiwi — since wires are always the biggest points of failure — the team created a set of origami-like circuit boards joined by integrated flex cabling. Think of your standard computer circuit board but sliced into segments and mounted to the exterior of the hollow ‘bones’. They wrap around the limbs and other body parts, binding the control systems and motors being controlled together into a local group that reduces the amount of harnessing that needs to be spread across joints and throughout the structure.
No existing actuators — the components that physically move a limb — had the capabilities the team needed, so they built them from scratch. At one point, LaValley handed me a ring holding iteration after iteration of a dozen actuator elements. I was holding years’ worth of engineering, experimentation, failure and progress on a simple bit of wire twisted together at the end.
Up next for Project Kiwi is a new set of actuators that can dynamically apply torque, plus added sensing capabilities for more stability and reaction to uneven ground or interactions. You can imagine that, as a free-roaming character, it will attract people wanting to take pictures with it, and I doubt kids would be able to resist running up for a hug. The skeleton must be able to sense and react quickly and smoothly to those sudden external inputs in order to stay upright and keep looking natural.
Moving from a pure IK system to a fully torque sensing system will allow for the platform to make on the fly adjustments that compensate for terrain or interaction with other performers or guests.
All of the work the team put into custom gearing, motors and actuation has paid off in spades with the ridiculously smooth and natural looking movements of Kiwi’s arms and legs. Quick waves, shrugs, dance moves and even boxing jabs all look like a real — if slightly gentle — creature is performing them.
The team also demonstrated the custom-built performance software they designed, which allows Kiwi to have different kinds of gaits with personality layered on top. The bottom layer is an IK-style gait system that keeps Kiwi upright and walking; layering the personalities on top adds character to the walk while still maintaining stability. Bouncy, jaunty walks, limps, sad or downhearted walks, all with the other motions of arms and head contributing to a constantly shifting center of mass and momentum. The duck’s feet paddling under the water is that gait system, which takes the external inputs and integrates them into the walk naturally.
The current prototype software has a series of set behaviors, with a timeline that allows the team to program new behaviors and actions by toggling or adjusting curves that control movement. With a few tweaks in the software, the changes take effect immediately, Groot’s “mood” becoming evident in his walk.
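The layering idea can be sketched in a few lines. This is purely illustrative on my part, not Disney’s actual software: a stabilizing base gait produces joint targets, and a "personality" layer scales and offsets them without touching the balance-critical signal underneath. All of the names and numbers here are invented for the example:

```python
import math

def base_gait(t, step_hz=1.5):
    """Baseline joint targets that keep the figure walking (illustrative)."""
    phase = 2 * math.pi * step_hz * t
    return {
        "arm_swing": 20 * math.sin(phase),          # degrees
        "head_pitch": 0.0,                          # degrees
        "torso_bounce": 1.0 * math.sin(2 * phase),  # cm
    }

# Hypothetical personality layers: multipliers for swing/bounce,
# an additive offset for head pitch.
PERSONALITIES = {
    "happy": {"arm_swing": 1.4, "head_pitch": +5.0, "torso_bounce": 1.5},
    "sad":   {"arm_swing": 0.4, "head_pitch": -15.0, "torso_bounce": 0.5},
}

def layered_gait(t, mood):
    """Blend a personality over the base gait, leaving the underlying
    stabilizing walk intact."""
    pose = base_gait(t)
    p = PERSONALITIES[mood]
    return {
        "arm_swing": pose["arm_swing"] * p["arm_swing"],
        "head_pitch": pose["head_pitch"] + p["head_pitch"],
        "torso_bounce": pose["torso_bounce"] * p["torso_bounce"],
    }
```

The design point is that "sad" never re-plans the walk; it just damps the arm swing and drops the head, so stability and character stay decoupled.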
One moment he is bouncing along swinging his arms jauntily, clearly happy to be there. Then the next moment his arms are slumped, his head is hung and he is slowly plodding — clearly sad to be leaving the fun behind. It’s a remarkable bit of performance software.
And even though the expressive eyes are already impressive, the team is not done. Up next on the agenda is a sensory package that allows Kiwi to more fully understand the world around it and to identify people and their faces. This becomes important because eye contact is such an emotive and powerful tool to use in transporting a participant.
Even without the sensing software I can tell you that the experience of this 2.5ft Groot locking eyes with me, smiling and waving was just incredibly transportive. Multiple times throughout my interaction I completely forgot that it was a robot at all.
As I mentioned at the top, the Project Kiwi platform still has a lot of work left to do before it makes any appearances in the parks. But it’s already well on the road to being viable for things like stage performances, photo ops and eventually free roaming deployment in the parks.
That is really the vision. Snoddy says the goal is to move the characters we love from across Disney’s pantheon into the spaces of the guests, elevating the entirety of the park to a live, transportive experience, rather than a single ride or dark room — and to do it at the proper scale to make them genuine and capable of making guests believe. With these kinds of platforms, the possibilities are there to make the parks themselves a living, breathing home for the characters, rather than the tightly controlled environments of the rides.
The arc of this Imagineering journey is drawn in robots: from Great Moments with Mr. Lincoln, to incredibly expressive characters like the Na’Vi Shaman anchored inside a dark ride, to characters that hold up in bright, well-lit spaces. Project Kiwi is the next frontier, allowing them to step off the pedestal and right into the world of the guest.
One of the most fascinating fields in robotics currently is HRI, or human-robot interaction. This multidisciplinary effort to help humans and robots communicate better is often focused on safety and awareness in industrial settings. But I’ve long felt that the most interesting work in this space is being done in Imagineering R&D. Over 100 million people pass through Disney’s parks per year, and the number of opportunities they have to react to and interact with robotic characters grows yearly. And with projects like Kiwi on the horizon, this field is going to explode with new kinds of data and learnings.
And, of course, we’ll get to meet some of our favorite characters looking and acting as real as we’ve ever seen them in our world.
Congruent Ventures, the early stage investment firm focused on technologies and services designed to avert the current climate emergency, has raised $175 million in financing for its latest fund.
The firm, which now has $300 million in assets under management, is focused on investing in pre-seed, seed and Series A rounds and was founded by Abe Yokell and Joshua Posamentier, two longtime investors in the climate space with over twenty years of experience investing in the sector.
“With the dawn of a new administration dedicated to infrastructure [and] climate and a long-overdue influx of capital towards pressing global issues surrounding climate change, we cover the gamut with a portfolio of over three dozen companies working in transportation, energy transition, food and agriculture to sustainable production and consumption,” Yokell wrote in an email.
Companies in the portfolio include the mycelium meat producer Meati; Milk Run, a farm to table food marketplace; PicoMES, which develops software for efficient manufacturing; Parallel Systems, a developer of electrified, autonomous rail cars; Alloy Enterprises, which is an additive manufacturer for aluminum; Hippo Harvest, which provides autonomous greenhouse growing systems; and Amp Robotics, a provider of recycling robots to improve efficiencies among hard-pressed waste recycling organizations.
The firm counts some high-profile limited partners, including Microsoft’s Climate Innovation Fund; affiliates of Prelude Ventures; the Jeremy and Hannelore Grantham Environmental Trust; the Surdna Foundation; and UC Investments.
“Until very recently, there was a total dearth of early-stage capital focused on climate and sustainability,” said Joshua Posamentier, managing partner and co-founder, Congruent Ventures. “We invest at the earliest stages where we can help entrepreneurs avoid a myriad of pitfalls, help them build strong companies, raise additional capital, and as a result, tackle the world’s most pressing environmental problems in some of the world’s largest sectors.”
One third of Congruent’s companies are working directly with energy or civil infrastructure and could stand to gain tremendously from any infrastructure spending bill that could come from Washington. Beyond that, the firm’s limited partners include infrastructure investors with over $700 billion of assets under management that are all potential customers for the technologies developed by the firm’s portfolio companies, the firm said in a statement.
General Motors is joining the list of big automakers picking their horses in the race to develop better batteries for electric vehicles, leading a $139 million investment in the lithium-metal battery developer SES.
Volkswagen has QuantumScape; Ford has invested in SolidPower (along with Hyundai and BMW); and now with SES’ big backing from General Motors most of the big American and European automakers have placed their bets.
“We are beyond R&D development,” said SES chief executive Hu Qichao in an interview with TechCrunch. “The main purposes of this funding are to, one, improve the key material, this lithium metal electrolyte on the anode side and the cathode side, and, two, to improve the scale of the current cell from the iPhone battery size to the size that can be used in cars.”
There’s a third component to the financing as well, Hu said, which is to increase the company’s algorithmic capabilities to monitor and manage cell performance. “It’s something that we and our OEM partners care about,” said Hu.
The investment from GM is the culmination of nearly six years of work with the big automaker, said Hu. “We started working with them in 2015. For the next three years we will go through the standard automotive approval processes. Going from ‘A’ sample to ‘B’ sample all the way through ‘D’ sample,” which is the final testing phase before commercial availability of SES’ batteries in cars.
While Tesla, the current leader in electric vehicle sales in America, is looking to improve the form factors of its batteries to make them more powerful and more efficient, Hu said that the chemistry isn’t that different. Solid state batteries represent a step change in battery technology that makes batteries more powerful, easier to recycle, and potentially more stable.
As Mark Harris wrote in TechCrunch earlier this year:
There are many different kinds of SSB but they all lack a liquid electrolyte for moving ions (charge) between the battery’s positive (cathode) and negative (anode) electrodes. The liquid electrolytes in lithium-ion batteries limit the materials the electrodes can be made from, and the shape and size of the battery. Because liquid electrolytes are usually flammable, lithium-ion batteries are also prone to runaway heating and even explosion. SSBs are much less flammable and can use metal electrodes or complex internal designs to store more energy and move it faster — giving higher power and faster charging.
SES’ work has brought the company attention not just from General Motors but from previous investors, including the battery giant SK Innovation; the Singapore-based, government-backed investment firm Temasek; Applied Ventures, the venture capital arm of semiconductor equipment manufacturer Applied Materials; the Chinese automaking giant Shanghai Auto; and the investment firm Vertex.
“GM has been rapidly driving down battery cell costs and improving energy density, and our work with SES technology has incredible potential to deliver even better EV performance for customers who want more range at a lower cost,” said Matt Tsien, GM executive vice president and chief technology officer and president, GM Ventures. “This investment by GM and others will allow SES to accelerate their work and scale up their business.”
The race among mobility startups to become profitable by controlling market share has produced a string of bad results for cities and the people living in them.
City officials and agencies learned from those early deployments of ride-hailing and shared scooter services and have since pushed back with new rules and tighter control over which companies can operate. This correction has prompted established companies to change how they do business and fueled a new crop of startups, all promising a different approach.
But can mobility be accessible, equitable and profitable? And how?
TC Sessions: Mobility 2021, a virtual event scheduled for June 9, aims to dig into those questions. Luckily, we have three guests who are at the center of cities, equity and shared mobility: community organizer, transportation consultant and lawyer Tamika L. Butler, Remix co-founder and CEO Tiffany Chu and Revel co-founder and CEO Frank Reig.
Butler, a lawyer and founder and principal of her own consulting company, is well known for work in diversity and inclusion, equity, the built environment, community organizing and leading nonprofits. She was most recently the director of planning in California and the director of equity and inclusion at Toole Design. She previously served as the executive director of the Los Angeles Neighborhood Land Trust and was the executive director of the Los Angeles County Bicycle Coalition. Butler also sits on the board of Lacuna Technologies.
Chu is the CEO and co-founder of Remix, a startup that developed mapping software used by cities for transportation planning and street design. Remix was recently acquired by Via for $100 million and will continue to operate as a subsidiary of the company. Remix, which was backed by Sequoia Capital, Energy Impact Partners, Y Combinator, and Elemental Excelerator has been recognized as both a 2020 World Economic Forum Tech Pioneer and BloombergNEF Pioneer for its work in empowering cities to make transportation decisions with sustainability and equity at the forefront. Chu currently serves as Commissioner of the San Francisco Department of the Environment, and sits on the city’s Congestion Pricing Policy Advisory Committee. Previously, Tiffany was a Fellow at Code for America, the first UX hire at Zipcar and is an alum of Y Combinator. Tiffany has a background in architecture and urban planning from MIT.
Reig is the co-founder and CEO of Revel, a transportation company that got its start launching a shared electric moped service in Brooklyn. The company, which launched in 2018, has since expanded its moped service to Queens, Manhattan, the Bronx, Washington, D.C., Miami, Oakland, Berkeley, and San Francisco. The company has since expanded its focus beyond moped and has started to build fast-charging EV Superhubs across New York City and launched an eBike subscription service in four NYC boroughs. Prior to Revel, Reig held senior roles in the energy and corporate sustainability sectors.
The trio will join other speakers TechCrunch has announced, a list that so far includes Joby Aviation founder and CEO JoeBen Bevirt; investor and LinkedIn founder Reid Hoffman, whose special purpose acquisition company just merged with Joby; investors Clara Brenner of Urban Innovation Fund, Quin Garcia of Autotech Ventures and Rachel Holt of Construct Capital; and Starship Technologies co-founder and CEO/CTO Ahti Heinla. Stay tuned for more announcements in the weeks leading up to the event.