The office was dedicated to the long-term safety of vaccines. Experts say plans to track coronavirus vaccines are fragmented and “behind the eight ball.”
The hotly contested strategy of deliberate exposure, known as a human challenge trial, could speed up the process of identifying effective coronavirus vaccines.
After months of caving to pressures from the White House, Commissioner Stephen Hahn and a band of agency scientists have eked out a few victories.
Downplaying the dangers of the pandemic and politicizing public health measures was grossly negligent and cost untold lives.
Edgify, which builds AI for edge computing, has secured a $6.5m seed funding round backed by Octopus Ventures, Mangrove Capital Partners and an unnamed semiconductor giant. The name was not released, but TechCrunch understands it may be Intel Corp. or Qualcomm Inc.
Edgify’s technology allows ‘edge devices’ (devices at the edge of the internet) to interpret vast amounts of data, train an AI model locally, and then share that learning across a network of similar devices. This in turn trains all the other devices, whether for computer vision, NLP, voice recognition or any other form of AI.
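The pattern Edgify describes (train locally, share only the learned parameters) is essentially federated learning. Below is a minimal sketch of federated averaging across two hypothetical devices; the model, data and function names are illustrative, not Edgify's actual stack.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device trains a tiny linear model on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, device_data):
    """Each device trains locally; only the weights are shared and averaged.
    Raw data never leaves a device."""
    updates = [local_update(global_w, X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

# Two hypothetical edge devices, each holding private samples of y = 2x
rng = np.random.default_rng(0)
devices = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.01, size=50)
    devices.append((X, y))

w = np.zeros(1)
for _ in range(20):  # communication rounds
    w = federated_average(w, devices)
print(round(float(w[0]), 2))  # converges toward the true slope, ~2.0
```

The design choice worth noting is that only `w` crosses the network, which is what makes the approach attractive for devices that generate more data than they could ever upload.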
The technology can be applied to anything with a CPU, GPU or NPU, from MRI machines and connected cars to checkout lanes and mobile devices. Edgify’s technology is already being used in supermarkets, for instance.
Ofri Ben-Porat, CEO and co-founder of Edgify, commented in a statement: “Edgify allows companies, from any industry, to train complete deep learning and machine learning models, directly on their own edge devices. This mitigates the need for any data transfer to the Cloud and also grants them close to perfect accuracy every time, and without the need to retrain centrally.”
Mangrove partner Hans-Jürgen Schmitz who will join Edgify’s Board comments: “We expect a surge in AI adoption across multiple industries with significant long-term potential for Edgify in medical and manufacturing, just to name a few.”
Simon King, Partner and Deep Tech Investor at Octopus Ventures added: “As the interconnected world we live in produces more and more data, AI at the edge is becoming increasingly important to process large volumes of information.”
So-called ‘edge computing’ is seen as one of the frontiers of deep tech right now.
“Any school district now that affords football can afford spaceflight.”
Blue Origin’s New Shepard rocket hasn’t flown space tourists yet, but it has found a business niche with NASA and private science experiments.
Virgin Hyperloop announced a key step in its long-term goal of making hyperloop transportation a reality in the U.S. on Thursday. The company revealed it will be doing its certification testing at a new West Virginia facility. This will be crucial to the creation of a national safety certification framework for the U.S., which will involve working directly with the U.S. Department of Transportation – a process already underway thanks to the DOT’s issuance of guidance documentation in advance of a framework this past July.
Before now, Virgin Hyperloop has been developing and testing its hyperloop technology at its full-scale proving ground in North Las Vegas. The company created a 500-meter-long ‘development loop’ for running its tests, and performed its first full-scale system test in 2017. The new facility will be used specifically for certification, but will host similar large-scale systems testing and create ‘thousands’ of new jobs, according to the company.
Virgin Hyperloop hopes to fully safety-certify its system by 2025, and then enter commercial operation with a real system by 2030, if all goes well.
In “The Knowledge Machine,” the philosopher Michael Strevens says that there is something fundamentally irrational and even “inhuman” about the scientific method.
The scientific institute leading the trip denied that its policy had been applied to a specific sex, saying, “Women and men participate in our polar expeditions as equals.”
A week of talks, panels and discussions seeks to counter an impression “that this talent pool just does not exist.”
“People are basically good” was eBay’s founding principle. But in the deranged summer of 2019, prosecutors say, a campaign to terrorize a blogger crawled out of a dark place in the corporate soul.
Amazon-owned Ring is expanding from home and neighborhood security to the automotive world, with three new products it debuted today at Amazon’s expansive devices and services extravaganza. These include Ring Car Alarm, Ring Car Cam and Ring Car Connect – two new devices and one API/hardware combo aimed at automotive manufacturers, respectively. Each will be available beginning sometime next year.
“Truly since we started Ring, and even back in Doorbot days, people were asking for automotive security,” explained Ring CEO and founder Jamie Siminoff in an interview. “It was something that we always kind of had top of mind, but obviously we had to get a lot of other things done first – it does take time to build a product, and to do them right. So while it did take us some time to get into it, our mission is making neighborhoods safer, and a lot of the stuff that happens to cars happens in the neighborhood.”
Siminoff said that he’s especially pleased to be able to launch not just one, but a full suite of car security products that he feels covers the needs of just about any customer out there. Ring Car Alarm is an OBD-II wireless device that can detect bumps while the car’s unoccupied, as well as break-ins or when the car is being towed. Ring Car Cam is a security camera that works either via wifi or LTE (available via an add-on plan), and can check for incidents while parked, or offer emergency crash detection and traffic stop recording on the road. Finally, Ring Car Connect is an API and aftermarket device for carmakers that allows them to integrate a vehicle’s built-in cameras and lock/unlock state.
I asked Siminoff why start right out the gate with three separate products, especially in a new market that Ring’s entering for the first time.
“As we started looking into it more, we realized that really, it wasn’t a one-size-fits all kind of product line, even to start,” he said. “We realized that it really was about trying to build more of a suite of products around the car. At Ring. we try to – and I won’t say we hit this 100% of time – but we’ve certainly tried to only launch something when it’s truly inventive, differentiated for the market, fits our mission and can really make a customer’s life better.”
The products definitely span a range of price points – Ring Car Alarm will retail for $59.99, while Car Cam and Car Connect will both be $199.99. Ring Car Alarm is obviously aimed at the broadest swath of customers, and provides a fundamental feature set that works in concert with the Ring app to provide deterrents to potential criminal activity around a user’s vehicle. The device sends alerts to the Ring app, and users can then trigger a siren if they want. Car Alarm can also be linked up to other Ring devices or Amazon Alexa hardware, and Alexa will provide audible alerts of any bumps, break-ins or other events. Ring Car Alarm will require connectivity via Amazon Sidewalk, the low-bandwidth, free wireless network protocol that Ring’s parent company is set to take live sometime later this year.
Ring Car Cam goes the extra mile of actually letting a user check in on their vehicle via video – provided they’re either within range of a wifi network, or connected via the optional built-in LTE with a companion plan. It also provides additional security features when the car in which it’s installed is actually in use. Ring’s Emergency Crash Assist feature will alert first responders to the car’s location whenever it detects what it determines to be a serious crash. You can also use the voice command “Alexa, I’m being pulled over” to trigger an automatic recording in case of a traffic stop, which is automatically uploaded to the cloud (again, provided you’ve got active connectivity). On the privacy side, there’s a physical shutter on the camera itself for when you don’t want it in use, which also stops the mic from recording.
Finally, Ring Car Connect consists of an API that car manufacturers use to provide Ring customers access to mobile alerts for any detected events around their vehicle, or to watch footage recorded from their onboard cameras. This also allows access to information that wouldn’t be available with a strictly aftermarket setup – like whether the car is locked or unlocked, for instance. Ring’s first automaker partner for this is Tesla, which is enabling Ring Car Connect across the 3, X, S and Y models. Users will install an aftermarket device coming in 2021 for $199.99, but then they’ll be able to watch Tesla Sentry Mode footage, as well as video recorded while driving, directly in the Ring app.
Ring’s security ecosystem has grown from the humble doorbell to whole-home and exterior security, to a full-fledged alarm service, and now to the car. It’s definitely not resting on its laurels. And it’s also releasing a $29.99 mailbox sensor, which will quite literally tell you when “You’ve got mail,” which is like a delightful little cherry on top.
PFAS, industrial chemicals used to waterproof jackets and grease-proof fast-food containers, may disrupt pregnancy with lasting effects.
WhyLabs, a new machine learning startup that was spun out of the Allen Institute, is coming out of stealth today. Founded by a group of former Amazon machine learning engineers, Alessya Visnjic, Sam Gracie and Andy Dang, together with Madrona Venture Group principal Maria Karaivanova, WhyLabs’ focus is on ML operations after models have been trained — not on building those models from the ground up.
“The team was all research scientists, and I was the only engineer who had kind of tier-one operating experience,” she told me. “So it was like, ‘Okay, how bad could it be? I carried the pager for the retail website before.’ But it was one of the first AI deployments that we’d done at Amazon at scale. The pager duty was extra fun because there were no real tools. So when things would go wrong — like we’d order way too many black socks out of the blue — it was a lot of manual effort to figure out why this was happening.”
But while large companies like Amazon have built their own internal tools to help their data scientists and AI practitioners operate their AI systems, most enterprises continue to struggle with this — and a lot of AI projects simply fail and never make it into production. “We believe that one of the big reasons that happens is because of the operating process that remains super manual,” Visnjic said. “So at WhyLabs, we’re building the tools to address that — specifically to monitor and track data quality and alert — you can think of it as Datadog for AI applications.”
The team has broad ambitions, but to get started, it is focusing on observability. The team is building — and open-sourcing — a new tool for continuously logging what’s happening in the AI system, using a low-overhead agent. That platform-agnostic system, dubbed WhyLogs, is meant to help practitioners understand the data that moves through the AI/ML pipeline.
For a lot of businesses, Visnjic noted, the amount of data that flows through these systems is so large that it doesn’t make sense for them to keep “lots of big haystacks with possibly some needles in there for some investigation to come in the future.” So what they do instead is just discard all of this. With its data logging solution, WhyLabs aims to give these companies the tools to investigate their data and find issues right at the start of the pipeline.
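The core idea of logging data without keeping the haystack can be sketched as a running statistical profile per column: one pass, constant memory, no raw records retained. The toy class below only illustrates the approach; WhyLogs’ actual API and the statistics it collects are richer and differ in the details.

```python
import math

class ColumnProfile:
    """Running summary of one numeric column: count, nulls, mean, std.
    Uses Welford's online algorithm so no raw values are stored.
    (An illustrative stand-in, not the real WhyLogs API.)"""

    def __init__(self):
        self.count = 0
        self.nulls = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def track(self, value):
        if value is None:
            self.nulls += 1
            return
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)

    def summary(self):
        var = self.m2 / self.count if self.count > 1 else 0.0
        return {"count": self.count, "nulls": self.nulls,
                "mean": self.mean, "std": math.sqrt(var)}

# Profile a small stream; a sudden shift in these numbers across runs
# is the kind of signal a monitoring tool would alert on
profile = ColumnProfile()
for v in [10.0, 12.0, None, 11.0, 13.0]:
    profile.track(v)
print(profile.summary())
```

Comparing yesterday's summary against today's is how such a profile turns into monitoring: the alert fires on drift in the statistics, not on inspecting stored raw data.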
According to Karaivanova, the company doesn’t have paying customers yet, but it is working on a number of proofs of concept. Among those users is Zulily, which is also a design partner. The company is going after mid-size enterprises for the time being, but as Karaivanova noted, to hit the sweet spot, a customer needs to have an established data science team with 10 to 15 ML practitioners. While the team is still figuring out its pricing model, it’ll likely be a volume-based approach, Karaivanova said.
“We love to invest in great founding teams who have built solutions at scale inside cutting-edge companies, who can then bring products to the broader market at the right time. The WhyLabs team are practitioners building for practitioners. They have intimate, first-hand knowledge of the challenges facing AI builders from their years at Amazon and are putting that experience and insight to work for their customers,” said Tim Porter, managing director at Madrona. “We couldn’t be more excited to invest in WhyLabs and partner with them to bring cross-platform model reliability and observability to this exploding category of MLOps.”
In-space manufacturing company Made In Space is pushing the envelope on what can, well, be made in space with its next mission – set to launch aboard a Northrop Grumman International Space Station (ISS) resupply flight next Tuesday. Aboard that launch will be Made In Space’s Turbine Ceramic Manufacturing Module (aka CMM), a commercial ceramic turbine blisk manufacturing device that uses 3D-printing technology to produce detailed parts that require a high degree of production accuracy.
A turbine blisk is a combo rotor disk/blade array that is used primarily in engines used in the aerospace industry. Making them involves using additive manufacturing to craft them as a single component, and the purpose of this mission is to provide a proof-of-concept about the viability of doing that in a microgravity environment. Gravity can actually introduce defects into ceramic blisks manufactured on Earth, because of the way that material can settle, leading to sedimentation, for instance. Producing them in microgravity could mean lower error rates overall, and a higher possible degree of precision for making finely detailed designs.
Made In Space, which was acquired earlier this year by new commercial space supply parent co. Redwire, has been at the forefront of creating and deploying 3D printing technologies in space, particularly through its partnership with the International Space Station. The goal of the company is to demonstrate the commercial benefits of in-space manufacturing, and to commercialize the technology in order to create tangible benefits for a number of industries right here on Earth.
Tavares Strachan is known for his ambitious projects and intensive research, which have included expeditions to the North Pole and training as a cosmonaut in Russia.
Engineers at MIT, in partnership with the University of Massachusetts at Lowell, have devised a way to build a camera lens that avoids the typical spherical curve of ultra-wide-angle glass, while still providing true optical fisheye distortion. The fisheye lens is a relatively specialized piece of glass, producing images that can cover an area as wide as 180 degrees or more, but such lenses can be very costly to produce, and are typically heavy, large lenses that aren’t ideal for use on small cameras like those found on smartphones.
This is the first time that a flat lens has been able to produce clear, 180-degree images that cover a true panoramic spread. The engineers made it work by patterning a thin wafer of glass on one side with microscopic, three-dimensional structures, positioned very precisely so that they scatter inbound light in exactly the same way that a curved piece of glass would.
The version created by the researchers in this case is actually designed to work specifically with the infrared portion of the light spectrum, but they could also adapt the design to work with visible light, they say. Whether IR or visible light, there are a range of potential uses of this technology, since capturing a 180-degree panorama is useful not only in some types of photography, but also for practical applications like medical imaging, and in computer vision applications where range is important to interpreting imaging data.
This design is just one example of what’s called a ‘Metalens’ – lenses that make use of microscopic features to change their optical characteristics in ways that would traditionally have been accomplished through macro design changes – like building a lens with an outward curve, for instance, or stacking multiple pieces of glass with different curvatures to achieve a desired field of view.
What’s unusual here is that the ability to accomplish a clear, detailed and accurate 180-degree panoramic image with a perfectly flat metalens design came as a surprise even to the engineers who worked on the project. It’s definitely an advancement of the science that goes beyond what many assumed was the state of the art.
They said the administration’s policies were driving away technology talent and could do long-term damage to their industry.
A giant new vessel, OceanXplorer, seeks to unveil the secrets of the abyss for a global audience.
Democrats rolled out sprawling legislation that proposes substantial new funding over a decade in a bid to reinvest in the nation’s economy and challenge Beijing.
Over a decade’s worth of work by scientists at Monash University in Melbourne, Australia has produced a first-of-its-kind device that can restore vision to the blind, using a combination of smartphone-style electronics and brain-implanted microelectrodes. The system has already been shown to work in preclinical studies and non-human trials on sheep, and researchers are now preparing for a first human clinical trial to take place in Melbourne.
This new technology would be able to bypass the damaged optic nerves that are often responsible for what’s defined as clinical blindness. It works by translating information gathered by a camera and interpreted by a vision processor unit and custom software, then transmitting it wirelessly to a set of tiles implanted directly within the brain. These tiles convert the image data to electrical impulses, which are then transmitted to neurons in the brain via microelectrodes thinner than a human hair.
There are still a number of steps required before this becomes something that can actually be produced and used commercially – not least of which is the extensive human clinical trial process. The team behind the technology is also looking to secure additional funding to support the eventual ramp-up of manufacturing and distribution of its devices as a commercial venture. But in early studies, in which 10 of these arrays were implanted in sheep, no adverse health effects were observed over the course of a cumulative total of more than 2,700 hours of stimulation.
Animal studies are a very different thing from human studies, but the research team believes their technology has promise well beyond vision. They anticipate the same approach could provide benefits and treatment options for patients with other conditions that have a neurological root cause, including paralysis.
If that sounds familiar, it might be because Elon Musk recently revealed ambitions to use his company Neuralink’s similar brain implant technology to achieve these kinds of results as well. Musk’s project is hardly the first to imagine how devices paired with modern software and technology could overcome biological limitations, and this effort from Monash has a much longer history of working towards turning this kind of science into something that could impact the lives of everyday people.
Michael R. Caputo told a Facebook audience without evidence that left-wing hit squads were being trained for insurrection and accused C.D.C. scientists of “sedition.”
A president who has mocked climate change and pushed policies that accelerate it is set to be briefed on the scorched earth and ash-filled skies that experts say are the predictable result.
A new podcast about power: who has it, who’s been denied it and who dares to defy it.
Research papers come out at far too rapid a rate for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.
This week in Deep Science spans the stars all the way down to human anatomy, with research concerning exoplanets and Mars exploration, as well as understanding the subtlest habits and most hidden parts of the body.
Let’s proceed in order of distance from Earth. First is the confirmation of 50 new exoplanets by researchers at the University of Warwick. It’s important to distinguish this process from discovering exoplanets among the huge volumes of data collected by various satellites. These planets were flagged as candidates, but no one had had the chance to say whether the data was conclusive. The team built on previous work that ranked planet candidates from least to most likely, creating a machine learning agent that could make precise statistical assessments and say with conviction: here is a planet.
“A prime example when the additional computational complexity of probabilistic methods pays off significantly,” said the university’s Theo Damoulas. It’s an excellent example of a field where marquee announcements, like the Google-powered discovery of Kepler-90i, represent only the earliest results rather than a final destination, emphasizing the need for further study.
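The "precise statistical assessments" at the heart of planet validation amount to computing a posterior probability that a candidate signal is a real planet rather than a false positive, then validating candidates that clear a high bar (conventionally 99 percent). The toy Bayes-rule calculation below uses made-up numbers, not the Warwick team's actual model.

```python
def validation_probability(prior_planet, p_signal_given_planet,
                           p_signal_given_false_positive):
    """Bayes' rule: posterior probability that a transit-like signal
    is a real planet rather than a false positive (eclipsing binary,
    instrument artifact, etc.). All inputs here are illustrative."""
    p_planet = prior_planet * p_signal_given_planet
    p_fp = (1 - prior_planet) * p_signal_given_false_positive
    return p_planet / (p_planet + p_fp)

# Hypothetical candidate: planets are not the prior favourite, but the
# observed signal is far more consistent with a planet than with any
# known false-positive mode
posterior = validation_probability(prior_planet=0.3,
                                   p_signal_given_planet=0.9,
                                   p_signal_given_false_positive=0.002)
print(posterior > 0.99)  # prints True: this candidate clears the bar
```

The point of the probabilistic framing is exactly what the quote praises: instead of a ranked list that still needs human judgment, the pipeline emits a calibrated number that can be compared against a validation threshold.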
In our own solar system, we are getting to know our neighbor Mars quite well, though even the Perseverance rover, currently hurtling through the void in the direction of the red planet, is like its predecessors a very resource-limited platform. With a small power budget and years-old radiation-hardened CPUs, there’s only so much in the way of image analysis and other AI-type work it can do locally. But scientists are preparing for when a new generation of more powerful, efficient chips makes it to Mars.
Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.
The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.
“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
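A per-frame confidence score of this kind can be sketched as a loop that scores each frame against its neighbours. The heuristic below (mean-intensity deviation from a neighbourhood baseline, with an arbitrary squashing constant) is a deliberately simple stand-in for the trained detector Microsoft describes, which looks at blending boundaries and greyscale artifacts.

```python
def frame_scores(frames, window=3):
    """Return a per-frame manipulation 'confidence score' in [0, 1].
    Toy heuristic, not Microsoft's method: score each frame by how far
    its mean pixel intensity deviates from its neighbours' average."""
    means = [sum(f) / len(f) for f in frames]
    scores = []
    for i, m in enumerate(means):
        lo = max(0, i - window)
        neighbours = means[lo:i] + means[i + 1:i + 1 + window]
        baseline = sum(neighbours) / len(neighbours)
        deviation = abs(m - baseline)
        scores.append(min(1.0, deviation / 50.0))  # squash into [0, 1]
    return scores

# Five synthetic greyscale "frames" (flat lists of pixel values);
# frame 2 has been brightened, standing in for a tampered frame
frames = [[100] * 4, [100] * 4, [180] * 4, [100] * 4, [100] * 4]
print([round(s, 2) for s in frame_scores(frames)])  # frame 2 scores highest
```

A real detector would replace the intensity heuristic with a trained classifier, but the shape of the output is the same: one score per frame, emitted as the video plays.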
If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real — perhaps with a malicious intent to misinform people.
And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.
While AI tech is used to generate realistic deepfakes, identifying visual disinformation using technology is still a hard problem — and a critically thinking mind remains the best tool for spotting high tech BS.
Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.
Although its blog post warns the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”
This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just in the case of a data-set the researchers hadn’t had prior access to.
Microsoft, meanwhile, says its Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.
It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.
“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.
The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and an internal advisory body on AI, Ethics and Effects in Engineering and Research Committee — as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.
“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”
On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.
The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.
The certification will also provide the viewer with details about who produced the media.
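The hash-and-certificate scheme described above can be sketched in two halves: the producer hashes the content and signs the hash together with its identity; the reader recomputes the hash and checks both. The HMAC with a shared secret below is a toy stand-in for the public-key certificates a real system would use, and all names are illustrative.

```python
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # hypothetical; real systems use PKI certs

def certify(media_bytes, producer):
    """Producer side: hash the content, then sign hash + identity so the
    certificate travels with the media in its metadata."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(SECRET, (digest + producer).encode(), "sha256").hexdigest()
    return {"producer": producer, "sha256": digest, "signature": tag}

def verify(media_bytes, cert):
    """Reader side (e.g. a browser extension): recompute the hash and
    check it against the signed certificate."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SECRET, (digest + cert["producer"]).encode(),
                        "sha256").hexdigest()
    return digest == cert["sha256"] and hmac.compare_digest(expected,
                                                            cert["signature"])

original = b"raw video bytes..."
cert = certify(original, "Example Newsroom")
print(verify(original, cert))               # prints True: untouched
print(verify(original + b" edited", cert))  # prints False: content altered
```

The hash catches any alteration to the bytes, while the signature ties that hash to a named producer, which is the "who produced the media" detail the reader tool surfaces.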
Microsoft is hoping this digital watermarking authenticity system will end up underpinning a Trusted News Initiative announced last year by UK publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.
It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.
“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.
While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.
This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.
The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.
The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.
“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.
Impatient for a coronavirus vaccine, dozens of scientists around the world are giving themselves — and sometimes, friends and family — their own unproven versions.
Amazon has been granted approval by the U.S. Federal Aviation Administration (FAA) that will allow it to start trialling commercial deliveries via drone, Bloomberg reports. This certification is the same one granted to UPS and a handful of other companies, and while it doesn’t mean that Amazon can immediately start operating a consumer drone delivery service for everyone, it does allow the company to make progress towards that goal.
Amazon has said it’ll kick off its own delivery tests, though it hasn’t shared any details on when and where exactly those will begin. The FAA clearance for these trials is adapted from the safety rules and regulations it imposes for companies operating a commercial airline service, with special exceptions allowing for companies to bypass the requirements that specifically deal with onboard crew and staff working the aircraft, since the drones don’t have any.
These guidelines are at best a patchwork solution designed by the agency and its commercial partners to let companies get underway with crucial systems development and safety testing, but the FAA is working towards a more fit-for-purpose set of regulations to govern drone airline operation later this year. Those will mostly relate to authorizing flights over crowds – but any drone flights will still require constant human observation.
Ultimately, any actual viable and practical system of drone delivery will require fully autonomous operation, without direct line-of-sight observation. Amazon has plans for its MK27 drones, which have a maximum 5 lb carrying capacity, to do just that, but it’ll still likely be many years before the regulatory and air traffic control infrastructure is updated to the point where that can happen regularly.
Elon Musk is set to deliver a progress update for Neuralink, the company and technology he founded that aims to create a direct, ultra-low latency connection between our brains and our computers. The update will kick off at 3 PM PT (6 PM ET), and will be streamed live above.
Based on Musk’s tweets, what we should see is an actual product demo of a Neuralink device in action. The multi-CEO has said that it will be a version 2 of the robot that Neuralink revealed last year during a similar update. That robot is a surgical automated platform that’s designed to perform the highly precise brain surgery that implants the internal part of Neuralink’s tech, which will ultimately communicate wirelessly with a receiver on the outside of the skull that can then transmit thought-based input to computers — if development reaches its lofty goals.
Musk has tempered expectations somewhat — what we’ll see is still very much an “experimental medical device for use only in patients with extreme medical problems,” and not the ultimate vision of an interface designed for general consumer use that he hopes to someday achieve. But expectations are still high, given that last year the company had embarked on animal testing, and was talking about potentially entering human testing within the next 12 months.
One of the areas of autonomous driving technology with the most potential to have a near-term and dramatic impact remains trucking: There’s a growing lack of drivers for long-haul routes, and highway trucking remains a relatively uncomplicated (though still very challenging) type of driving for AV systems to tackle.
Many companies are pursuing the challenge of autonomous trucking, but TuSimple and Waymo are leading the pack. TuSimple CTO Dr. Xiaodi Hou, who co-founded the company in 2015, and Waymo’s Boris Sofman, who leads the company’s autonomous trucking engineering efforts, will both join us at TC Sessions: Mobility on our virtual stage. The event takes place October 6-7, and we’re excited to hear from these two technology leaders working at the forefront of the industry.
TuSimple has accomplished a lot since its debut five years ago, including recently laying the groundwork for a U.S.-wide network of shipping routes in partnership with UPS, Xpress, food service supply company McLane and Penske Truck Leasing. The company is also seeking a sizable new funding round to help it scale, while actively testing with regular routes between Arizona and Texas.
Waymo, which originated at Google as that company’s self-driving car project before spinning out under parent entity Alphabet, added self-driving trucks to the list of technologies it’s developing in 2017. Sofman joined in 2019, when Waymo hired on much of the engineering talent from his prior company, smart toy robotics maker Anki. Sofman’s resume also includes developing off-road autonomous vehicles, which likely comes in handy as Waymo seeks to roll out testing of its autonomous long-haul trucks across Texas and New Mexico.
In case you’re wondering, this won’t just be one long webinar. We have some technical tricks up our sleeves that will bring all of what you’d expect from our in-person events, from the informative panels and provocative one-on-one interviews to the networking and even a pitch-off session. While virtual isn’t the same as our events in the past, it has provided one massive benefit: democratizing access.
If you’re a startup or investor based in Europe, Africa, Australia, South America or another region in the U.S., you can listen in, network and connect with other participants here in Silicon Valley.
Get your tickets for TC Sessions: Mobility to hear from Bryan Salesky, along with several other fantastic speakers from Porsche, Waymo, Lyft and more. Tickets are just $145 for a limited time, with discounts for groups, students and exhibiting startups. We hope to see you there!