Sony accelerates push into car sector in diversification drive

(credit: Kazuhiro Nogi | Getty)

Sony expects to supply imaging sensors to 15 of the world’s top 20 global automakers by 2025, underscoring the company’s ambitions for electric vehicles and autonomous driving as it tries to diversify beyond mobile phones.

The Japanese conglomerate flagged its intention to accelerate a push into the auto industry in 2020 when it unveiled a prototype EV called the Vision-S. This year, it has launched an EV division and announced a joint venture with Honda to make cars.

Sony has now said it aims to provide the sensors crucial to EVs and autonomous vehicles, as it diversifies beyond making smartphone camera parts for Apple, Google, and Samsung.

#cars, #imaging, #self-driving, #sensors, #sony, #tech

The iPhone 13 Pro goes to Disneyland

This year’s iPhone review goes back to Disneyland for the first time in a couple of years for, uh, obvious reasons. I’m happy to report that the iPhone 13 Pro and iPhone 13 performed extremely well, and the limited testing I was able to do on the iPhone 13 mini and iPhone 13 Pro Max showed that, for the first time, you can make a pretty easy choice based on size once you’ve decided you’re OK without a telephoto.

One of the major reasons I keep bringing these iPhones back to Disneyland is that it’s pretty much the perfect place to test the improvements Apple claims it is making in an intense real-world setting. It’s typically hot, the network environment is atrocious, you have to use your phone for almost everything these days, from pictures to ticket scanning to food ordering, and you’re usually there as long as you can be to get the most out of your buck. It’s the ideal stress test that doesn’t involve artificial battery rundowns or controlled photo environments.

In my testing, most of Apple’s improvements had a visible impact on the quality of my trip, though in some cases not a massive one. Screen brightness, the longer telephoto and battery life were all bright spots.

Performance and battery

The battery of the iPhone 13 Pro hit just over the 13-hour mark in the parks for me, running it right to the dregs. Since there was so much video testing this year, the camera app stayed on screen longer than usual, at just over an hour of active ‘on screen’ usage, which puts a bit of a strain on the system. I’d say that in real-world standard use you’ll probably get a bit more than that out of it, so I’m comfortable saying that Apple’s estimate of an hour or more of additional video playback time over the iPhone 12 Pro is probably pretty accurate.

Though it was hard to get the same level of stress on the iPhone 13 Pro Max during my tests, I’d say you can expect even more battery life out of it, given the surplus it still had when my iPhone 13 Pro needed charging. Bigger battery, more battery life, not a big shock.

If you’re using it in the parks and doing the rope drop, I’d plan on taking it off the charger at 6 am or so and having a charger handy by about 4 pm so you don’t go dead. That’s not a bad run overall for an iPhone in challenging conditions and with heavy camera use.

Apple’s new ProMotion display was a nice upgrade as well, and I did notice the increased screen brightness. Typically the bump in brightness was only truly noticeable side by side with an iPhone 12 Pro displaying high-key content. Popping open the Disneyland app for the barcode meant a bit better consistency in scanning (though that’s pretty hard to say for sure) and a visible increase in overall brightness in direct sun. Out of the sun, I’d say you’d be hard pressed to tell.

The variable refresh rate of the ProMotion screen cranking all the way up to 120Hz while scrolling Safari is a really nice quality-of-life improvement. I’m unfortunately a bit jaded in this department because I’ve done a ton of my computing on the iPad Pro for the past couple of years, but it’s going to be an amazing bump for iPhone users who haven’t experienced it. Because Apple’s system is not locked at 120Hz, it can conserve battery life by slowing down the screen’s refresh rate when you’re viewing static content like photos or text and not scrolling. I’m happy to say that I did not see any significant ramping while scrolling, so it’s really responsive and seamless in its handling of this variability.

The new A15 chip is, yes, more powerful than last year’s. Here are some numbers, if that’s your sort of thing:

Impressive as hell, especially for more battery life, not less. The performance-per-watt of Apple’s devices continues to be the (relatively) unsung victory of its chips department. It’s not just that this year’s iPhones or the M1 laptops are crazy fast, it’s that they’re also actually usable for enormous amounts of time while not connected to a charger. For those curious, the iPhone 13 Pro appears to have 6GB of RAM.

Design

The design of the iPhone continues to be driven by the camera and the radio. Whatever is necessary to support the sensors and lenses of the camera package, and whatever is necessary to ensure that the antennas can accommodate 5G, are at the wheel at this point in the iPhone’s life, and that’s pretty natural.

The camera array on the back of the iPhone 13 Pro is bigger and taller in order to accommodate the three new cameras Apple has installed here. And I do mean bigger, like 40% bigger overall with taller arrays. Apple’s new cases now have a very noticeable raised ridge that exists to protect the lenses when you’re setting the case down on a surface. 

Everything else is sort of built around the camera and the need for wireless charging and radio performance. But Apple’s frosted glass and steel rim retain their jewel-like quality this year, and these are still really good-looking phones. I doubt the vast majority of people will see them uncased for long, but while you do, they’re nice to look at.

The front notch has been pared down slightly due to improvements in camera packaging, which leaves a tiny bit more screen real-estate for things like videos, but we’ll have to wait to see if developers find clever ways to use the extra pixels. 

Now, on to the cameras.

Cameras

It seems impossible that Apple keeps making year-over-year camera improvements that expand your options and improve image quality enough to matter. And yet it does. The camera quality and features are a very real jump from the iPhone 11 Pro across the board, and still a noticeable improvement from the iPhone 12 Pro for you early adopters. Anything older and you’re going to get a blast of quality right to the face that you’re going to love.

The camera packaging and feature set are also more uniform across the lineup than ever before, with Apple’s in-body sensor-shift stabilization (IBIS) system appearing in every model — even the iPhone 13 mini, which is a crazy achievement given the overall package size of this sensor array.

In my experience in the parks this year, Apple’s improvements to cameras made for a material difference no matter which lens I chose. From low light to long zoom, there’s something to love here for every avid photographer. Oh, and that Cinematic Mode, we’ll talk about that too. 

Telephoto

Of all of the lenses I expected improvement from, the telephoto was actually not that high on my list. But I was pleasantly surprised by the increased range and utility of this lens. I am an admitted telephoto addict, with some 60% of my photos on iPhone 12 Pro taken with the tele lens over the wide. I just prefer the ability to pick and choose my framing more closely without having to crop after the fact. 

Having Night Mode on the telephoto now means that it doesn’t fall back to a cropped shot from the wide lens in dark conditions as it used to. Now you get the native telephoto optics plus the Night Mode magic. This means much better black points and great overall exposure, even handheld at zoom — something that felt completely out of reach a couple of years ago.

With the higher zoom level, portraits are cropped tighter, with better organic non-portrait-mode bokeh which is lovely. With this new lens you’re going to be able to shoot better looking images of people, period.

If you’re a camera person, the 3x reminds me a lot of my favorite 105mm fixed portrait lens. It’s got the crop, it’s got the nice background separation and the optical quality is very, very good on this lens package. Apple knocked it out of the park on the tele this time around. 

The longer optical range was also very handy in a Disneyland world where performers are often kept separate from guests — sometimes for effect but mostly because of pandemic precautions. Being able to reach out and get that shot of Kylo Ren hyping up the crowd was a fun thing to be enabled to do.

Wide

Apple’s wide lens gets the biggest overall jump in sensor technology. A larger ƒ/1.5 aperture and new 1.9µm pixels roughly double the light gathering — and it shows. Images at night and inside ride buildings showed a marked improvement in overall quality thanks to deeper blacks and better dynamic range.

With Night Mode enabled, the deeper light-gathering range and improved Smart HDR 4 make for deeper blacks and a less washed-out appearance. If I had to characterize it, it would be ‘more natural’ overall — a theme I’ve seen play out across the iPhone cameras this time around.

Without Night Mode enabled, the raw improvement in image quality from more light being captured is immediately evident. Though I think there are few situations where you need to turn off Night Mode anymore, subjects in motion in low light are one of them, and you’ll get a bit of extra wiggle room with this new sensor and lens combo in those instances.

Having sensor-shift stabilization come to the wide camera across the iPhone 13 range is a godsend for both still shots and video. Though I’m spoiled, having been able to play with the iPhone 12 Pro Max’s stabilization, if you haven’t shot with it before you’re going to be incredibly happy with the additional sharpness it brings.

Ultra Wide

Apple’s ultra-wide camera has been in need of some love for a while. Though it offered a nice additional perspective, it has suffered from a lack of autofocus and subpar light-gathering ability since its release. This time around it gets both a larger ƒ/1.8 aperture and autofocus. Apple claims 92% more light gathering, and my testing in pretty rough lighting conditions shows a massive improvement across the board.

Typically at Disneyland I like to shoot the ultra-wide in one of two ways: up close to create a fisheye-type perspective for portraits, or to snag a vista when the lighting or scene setting is especially good. Having autofocus available improves the first a ton, and the wider aperture gives the second a big boost too.

Check out these shots of a moonlit Trader Sam’s, a snap that you might grab because the lighting and scenery are just right. The iPhone 12 Pro isn’t bad at all here but there is an actually quite clear difference between the two in exposure. Both of these were taken with Night Mode disabled in order to compare the raw improvement in aperture.

The delta is clear, and I’m pretty impressed in general with how much Apple keeps improving this ultra wide camera, though it seems clear at this point that we’re hitting the upper limits of what a 12MP sensor at this size can bring to a lens with such a wide POV. 

The new ISP also improves Night Mode shooting here too — and with a bit more raw range to work with given the wider aperture, your night mode shots lose even more of that bright candy-like look and get a deeper and more organic feeling. 

Macro photos and video

Another new shooting possibility presented by the iPhone 13 Pro is a pretty impressive macro mode that can shoot as close as 2cm. It’s really, really well done given that it’s being implemented in a super wide lens on a smartphone. 

I was able to shoot incredibly detailed snaps very, very close-up. We’re talking ‘the surface texture of objects’ close; ‘pollen hanging off a bee’s thorax’ close; dew…well you get the idea. It’s close, and it’s a nice tool to have without having to carry a macro attachment with you. 

I found the sharpness and clarity of the macro images I captured to be excellent within the roughly 40% of the frame at the center of the capture area. Because the macro mode lives on the ultra-wide, there is a significant amount of chromatic aberration around the edges of the image. Basically, the lens is so curved that you get a bit of separation between wavelengths of light coming in at oblique angles, leading to a rainbow effect. This is only truly visible at very close distances, at the minimum of the focal range. If you’re a few centimeters away you’ll notice it, and you’ll probably crop it out or live with it. If you’re further away, getting a ‘medium macro’ at 10cm or whatever, you’ll likely not notice it much.

This is a separate factor from the extremely shallow field of focus that is absolutely standard with all macro lenses. You’re going to have to be precise at maximum macro, basically, but that’s nothing new.

Given how large scale Disneyland is I actually had to actively seek out ways to use the macro, though I’d imagine it would be useful in more ways in other venues. But I still got cool shots of textures in the bottles in Radiator Springs and some faux fungi at Galaxy’s Edge. 

Macro video is similarly fun, but it requires extremely stable hands or a tripod to really take advantage of, given that the slightest movement of your hands moves the camera a huge distance relative to the tiny focal area. Basically: tiny hand moves, big camera moves in this mode. But it’s a super fun tool to add to your arsenal, and I had fun chasing bugs around some flower petals in the garden of the Grand Californian hotel with it.

As a way to go from world scale down to fine detail it’s a great way to mix up your shots.

One interesting quirk of the ultra wide camera being the home of macro on iPhone 13 Pro is that there is a noticeable transition between the wide and ultra-wide cameras as you move into macro range. This presents as a quick-shift image transition where you can see one camera clicking off and the other one turning on — something that was pretty much never obvious in other scenarios even though the cameras switch all the time depending on lighting conditions and imaging judgement calls made by the iPhone’s camera stack. 

Users typically never notice that switching at all, but now that there’s an official macro camera, when you swoop in close to an object while you’re on 1x, the phone flips over to the 0.5x mode in order to let you shoot super close. This is all totally fine, by the way, but it can result in a bit of flutter if you’re moving in and out of range, with the cameras continuously switching as you enter and exit ‘macro distance’ (around 10-15cm).

When I asked about this camera-switching behavior, Apple said that “a new setting will be added in a software update this fall to turn off automatic camera switching when shooting at close distances for macro photography and video.”

That should resolve this relatively small quirk for people who want to work specifically at macro range.

Photographic Styles and Smart HDR 4

One of the constant tensions in Apple’s approach to computational photography has been its generally conservative stance on heavily processed images. Simply put, Apple likes its images to look ‘natural’, where similar systems from competitors like Google and Samsung have made different choices in order to differentiate, creating ‘punchier’ and sometimes just generally brighter images.

I did some comparisons of these approaches back when Apple introduced Night Mode two years ago.  

The general idea hasn’t changed much even with Apple’s new launches this year; the company is still hewing to nature as a guiding principle. But now it has introduced Photographic Styles to give you the option of cranking two controls it calls Tone and Warmth. These are, roughly speaking, vibrance and color temperature. You can choose from five presets, including one that applies no adjustments, or you can tweak the two settings on any of the presets on a scale of -100 to +100.

I would assume that, long term, people will play with these and recommendations will get passed around on how to get a certain look. My general favorite is Vibrant, because I like the open shadows and mid-tone pop, though I’d assume a lot of folks will gravitate towards Rich Contrast, because more contrast is generally more pleasing to the human eye.

In this shot of some kid-sized speeders, you can see the effects on the shadows and midtones as well as the overall color temperature. Rather than being a situational filter, I view this as a deep ‘camera setting’ feature, much like choosing the type of film that you wanted to roll with in a film camera. For more contrast you might choose a Kodak Ektachrome, for cooler-to-neutral colors perhaps a Fuji, for warm skin tones perhaps a Kodak Portra and for boosted color maybe an Ultramax. 

This setting gives you the option to set up your camera the way you want the color to sit, in a similar way. The setting is then retained when you close the Camera app, so when you open it again, it’s set to shoot the way you want it to. This goes for the vast majority of camera settings now under iOS 15, which is a nice quality-of-life improvement over the old days, when the iPhone camera reset itself every time you opened it.
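For a concrete sense of the shape of this setting, here’s a minimal, purely hypothetical sketch. This is not Apple’s API; the field names, the clamping helper and the “Standard” preset name are illustrative assumptions, with only Vibrant and Rich Contrast taken from the review above.

```python
# Hypothetical sketch only -- not Apple's API. It just models the structure
# described above: a persistent preset plus Tone and Warmth dials that each
# run from -100 to +100.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhotographicStyle:
    preset: str = "Standard"  # assumed name for the no-adjustments preset;
                              # "Vibrant" and "Rich Contrast" are named in the review
    tone: int = 0             # roughly a vibrance-like control
    warmth: int = 0           # roughly a color-temperature-like control

    def adjust(self, tone: Optional[int] = None, warmth: Optional[int] = None) -> None:
        """Clamp any adjustment to the -100..+100 range the camera exposes."""
        clamp = lambda v: max(-100, min(100, v))
        if tone is not None:
            self.tone = clamp(tone)
        if warmth is not None:
            self.warmth = clamp(warmth)

# The chosen style persists between launches of the Camera app, so something
# like this only needs to be set once:
style = PhotographicStyle(preset="Vibrant")
style.adjust(tone=25, warmth=-10)
```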

It’s worth noting that these color settings are ‘embedded’ in the image, which means they are not adjustable afterwards the way Portrait Mode’s lighting scenarios are. They are also not applied when shooting RAW — which makes sense.

Smart HDR 4 also deserves a mention here because it now does an additional bit of smart segmentation based on subjects in the frame. In a situation with a backlit group of people, for instance, the new ISP will segment out each of those subjects individually and apply color profiles, exposure, white balance and other adjustments to them — all in real time. This makes for a marked improvement in dark-to-light scenarios like shooting out of windows and shooting into the sun.

I would not expect much improvement out of the selfie camera this year; it’s much the same as before. Though you can use Cinematic Mode on it, which is fun, if not that useful, in selfie mode.

Cinematic Mode

This is an experimental mode that has been shipped live to the public. That’s the best way to set the scene for those folks looking to dive into it. Contrary to Apple’s general marketing, this won’t yet replace any real camera rack focus setup on a film set, but it does open up a huge toolset for budding filmmakers and casual users that was previously locked behind a lot of doors made up of cameras, lenses and equipment. 

Cinematic Mode uses the camera’s depth information, the accelerometer and other signals to craft a video that injects synthetic bokeh (blur) and tracks subjects in the frame to intelligently ‘rack’ focus between them depending on what it thinks you want. There are also some impressive focus-tracking features built in that allow you to lock onto a subject and follow them in a ‘tracking shot’, keeping them in focus through obstacles like crowds, railings and water. I found all of these depth-leveraging tracking features incredibly impressive in my early testing, but they were often let down a bit by the segmentation masking, which struggled to define crisp, clear borders around subjects to separate them from the background. It turns out that doing what Portrait Mode does with a still image is just insanely hard to do 30 times a second with complex, confusing backgrounds.

The feature is locked to 1080p/30fps which says a lot about its intended use. This is for family shots presented on the device, AirPlayed to your TV or posted on the web. I’d imagine that this will actually get huge uptake with the TikTok filmmaker crowd who will do cool stuff with the new storytelling tools of selective focus.

I did some test shooting with my kids walking through crowds and riding on carousels that was genuinely, shockingly good. It really does provide a filmic, dreamy quality to the video that I was previously only able to get with quick and continuous focus adjustments on an SLR shooting video with a manually focused lens. 

That, I think, is the major key to understanding Cinematic Mode. Despite the marketing, this mode is intended to unlock new creative possibilities for the vast majority of iPhone users who have no idea how to set focal distances, bend their knees to stabilize and crouch-walk-rack-focus their way to these kinds of tracking shots. It really does open up a big bucket that was just inaccessible before. And in many cases I think that those willing to experiment and deal with its near-term foibles will be rewarded with some great looking shots to add to their iPhone memories widget.

I’ll be writing more about this feature later this week so stay tuned. For now, what you need to know is that an average person can whip this out in bright light and get some pretty fun and impressive results, but it is not a serious professional tool, yet. And even if you miss focus on a particular subject you are able to adjust that in post with a quick tap of the edit button and a tap on a subject — as long as it’s within the focal range of the lens.

As a filmmaking tool for the run and gun generation it’s a pretty compelling concept. The fact is that it allows people to spend less time and less technical energy on the mechanics of filmmaking and more time on the storytelling part. Moviemaking has always been an art that is intertwined with technology — and one of the true exemplars of the ideal that artists are always the first to adopt new technology and push it to its early limits.

Just as Apple’s portrait mode has improved massively over the past 6 years, I expect Cinematic Mode to keep growing and improving. The relatively sketchy performance in low light and the locked zoom are high on my list to see bumps next year, as is improved segmentation. It’s an impressive technical feat that Apple is able to deliver this kind of slicing and adjustment not only in real-time preview but also in post-shooting editing modes, and I’m looking forward to seeing it evolve. 

Assessment

This is a great update that improves the user experience in every way, even during an intense day-long Disneyland outing. The improved brightness and screen refresh mean easier navigation of park systems and better visibility in daylight for directions, wait times and more. The better cameras mean you’re getting improved shots in dark-to-light situations like waiting in lines or shooting from under overhangs. The nice new telephoto lets you shoot close-up shots of cast members who are now often separated from the crowds by large distances, which is cool — and as a bonus it acts as a really lovely portrait lens even when not in Portrait Mode.

Overall this was one of the best experiences I’ve had testing a phone at the parks, with a continuous series of ‘wow’ moments with the cameras that made me question my confirmation bias. I ended up with a lot of shots, like the Night Mode wide-angle and telephoto ones I shared above, that impressed me so much that I did a lot of gut checking, asking other people in blind tests what they thought of the two images. Each time I did so, the clear winner was the iPhone 13 — it really is just a clear-cut improvement in image making across the board.

The rest of the package is pretty well turned out here too, with massive performance gains in the A15 Bionic that come not only with no discernible impact on battery life but with a good extra hour to boot. The performance chart above may provide the wow factor, but that performance plotted against the chip’s power usage across a day is what continues to be the most impressive feat of Apple’s chip teams.

The iPhone 13 lineup is an impressive field this year, providing a solid moat of image quality, battery life and now, thankfully, screen improvements that should serve Apple well over the next 12 months.

#apple, #apple-inc, #computing, #disneyland, #food, #google, #imaging, #ios, #ios-11, #ipad, #iphone, #iphone-7, #isp, #kodak, #mobile-phones, #ram, #sam, #samsung, #smartphone, #steel, #tc

Data scientists: don’t be afraid to explore new avenues

I’m a native French data scientist who cut his teeth as a research engineer in computer vision in Japan and later back home in France. Yet I’m writing from an unlikely computer vision hub: Stuttgart, Germany.

But I’m not working on German car technology, as one might expect. Instead, I found an incredible opportunity mid-pandemic in one of the most unexpected places: an e-commerce-focused, AI-driven image-editing startup in Stuttgart that is automating the digital imaging process across all retail products.

My experience in Japan taught me the difficulty of moving to a foreign country for work. In Japan, having a point of entry with a professional network can often be necessary. However, Europe has an advantage here thanks to its many accessible cities. Cities like Paris, London, and Berlin often offer diverse job opportunities while being known as hubs for some specialties.

While there has been an uptick in fully remote jobs thanks to the pandemic, extending the scope of your job search will provide more opportunities that match your interest.

Search for value in unlikely places, like retail

I’m working at the technology spin-off of a luxury retailer, applying my expertise to product images. Approaching it from a data scientist’s point of view, I immediately recognized the value of a novel application for a very large and established industry like retail.

Europe has some of the most storied retail brands in the world — especially for apparel and footwear. That rich heritage provides an opportunity to apply imaging technology to billions of products and trillions of dollars in revenue. The advantage of retail companies is a constant flow of images to process, which provides a proving ground to generate revenue and possibly make an AI company profitable.

Another potential avenue to explore is independent divisions, typically within an R&D department. I found a significant number of AI startups working on segments that aren’t profitable, simply because of the cost of research and the limited revenue from very niche clients.

Companies with data are companies with revenue potential

I was particularly attracted to this startup because of the potential access to data. Data by itself is quite expensive, and a number of companies end up working with a finite set. Look for companies that directly engage at the B2B or B2C level, especially retail or digital platforms that touch the front-end user interface.

Leveraging such customer engagement data benefits everyone. You can apply it towards further research and development on other solutions within the category, and your company can then work with other verticals on solving their pain points.

It also means there’s massive potential for revenue gains the more cross-segments of an audience the brand affects. My advice is to look for companies with data already stored in a manageable system for easy access. Such a system will be beneficial for research and development.

The challenge is that many companies haven’t yet introduced such a system, or they don’t have someone with the skills to properly utilize it. If you find that a company isn’t willing to share deep insights during the courtship process, or that it hasn’t implemented such a system, look at it as an opportunity to introduce data-focused offerings yourself.

In Europe, the best bets involve creating automation processes

I have a sweet spot for early-stage companies that give you the opportunity to create processes and core systems. The company I work for was still in its early days when I started, and it was working toward creating scalable technology for a specific industry. The questions the team was tasked with solving were already being answered, but numerous processes still had to be put in place to address a myriad of other issues.

Our year-long efforts to automate bulk image editing taught me that as long as the AI you’re building learns to run independently across multiple variables simultaneously (multiple images and workflows), you’re developing a technology that does what established brands haven’t been able to do. In Europe, there are very few companies doing this and they are hungry for talent who can.

So don’t be afraid of a little culture shock and take the leap.

#artificial-intelligence, #berlin, #column, #data-scientist, #digital-imaging, #e-commerce, #europe, #germany, #imaging, #job-search, #london, #paris, #startups, #tc

Y Combinator-backed Adra wants to turn all dentists into cavity-finding ‘super dentists’

Like other areas of healthcare, the dental industry is steadily embracing technology. But while much of it is in the orthodontic realm, other startups, like Adra, are bringing artificial intelligence into a dentist’s day-to-day workflow, particularly in finding cavities, in what will be a $435.08 billion global dental services market this year.

The Singapore-based company was founded in 2021, but the idea behind it started last year. Co-founder Hamed Fesharaki has been a dentist for over a decade and owns two clinics in Singapore.

He said dentists learn to read X-rays in dental school, but it can take a few years to get good at it. Dentists also often have just minutes to read them as they hop between patients.

As a result, dentists end up misdiagnosing cavities up to 40% of the time, co-founder Yasaman Nemat said. Her background is in imaging, where she developed an artificial intelligence system for identifying hard-to-see cancers, something Fesharaki thought could also be applied to dental medicine.

By providing the perspective of a more experienced dentist, Adra intends to make every dentist “a super dentist,” Fesharaki told TechCrunch. Its software detects cavities and other dental problems on dental X-rays faster and 25% more accurately, so that clinics can use that time to better serve patients and increase revenue.

Example of Adra’s software. Image Credits: Adra

“We are coming from the eye of an experienced dentist to help illustrate the problems by turning the X-rays into images to better understand what to look for,” he added. “Ultimately, the dentist has the final say, but we bring the experience element to help them compare and give them suggestions.”

With the software quickly pointing out the problem and its extent, dentists can decide how they want to treat it — for example, with a filling, a fluoride treatment or a wait-and-see approach.

Along with third co-founder Shifeng Chen, the company is finishing up its time in Y Combinator’s summer cohort and has raised $250,000 so far. Fesharaki intends to do more formalized seed fundraising and wants to bring on more engineers to tackle user experience and add more features.

The company has a few clinics doing pilots and wants to attract more as it moves toward a U.S. Food and Drug Administration clearance. Fesharaki expects it to take six to nine months to receive the clearance, and then Adra will be able to hit the market in late 2022 or early 2023.

#adra, #artificial-intelligence, #dentist, #dentistry, #enterprise, #funding, #hamed-fasharaki, #health, #healthcare, #imaging, #recent-funding, #saas, #shifeng-chen, #singapore, #startups, #tc, #y-combinator, #yasaman-nemat

Satellite imagery startup Satellogic to go public via SPAC valuing the company at $850M

The space SPAC frenzy might’ve died down, but it isn’t over: Earth observation startup Satellogic is the latest to go public via a merger with CF Acquisition Corp. V, a special purpose acquisition company set up by Cantor Fitzgerald. Satellogic already has 17 satellites in orbit and aims to scale its constellation to over 300 satellites to provide sub-meter-resolution imaging of the Earth, updated daily.

The SPAC deal values the company at $850 million, and includes a PIPE worth $100 million with funds contributed by SoftBank’s SBLA Advisers Group and Cantor Fitzgerald. It assumes revenue of around $800 million for the combined company by 2025, and Satellogic expects to have a cash balance of around $274 million resulting from the deal at close.

Satellogic has raised a total of just under $124 million since its founding in 2010, from investors including Tencent, Pitanga Fund and others. The company claims its satellites are the only ones that can provide imaging at the resolution it offers with a price tag that remains relatively affordable for commercial clients.

#commercial-spaceflight, #companies, #finance, #imaging, #satellite, #satellogic, #softbank, #softbank-group, #spac, #special-purpose-acquisition-company, #tc, #tencent

As clinical guidelines shift, heart disease screening startup pulls in $43M Series B

Cleerly Coronary, a company that uses A.I.-powered imaging to analyze heart scans, announced a $43 million Series B funding round this week. The funding comes at a moment when a new way of screening for heart disease appears to be on its way.

Cleerly was started in 2017 by James K. Min, a cardiologist and the director of the Dalio Institute for Cardiac Imaging at New York Presbyterian Hospital/Weill Cornell Medical College. The company, which uses A.I. to analyze detailed CT scans of the heart, has 60 employees and has raised $54 million in total funding.

The Series B round was led by Vensana Capital, but also included LVR Health, New Leaf Venture Partners, DigiTx Partners, and Cigna Ventures. 

The startup’s aim is to provide analysis of detailed pictures of the human heart that have been examined by artificial intelligence. This analysis is based on images taken via cardiac computed tomography angiography (CTA), a newer but rapidly growing way of scanning for plaques.

“We focus on the entire heart, so every artery, and its branches, and then atherosclerosis characterization and quantification,” says Min. “We look at all of the plaque buildup in the artery, [and] the walls of the artery, which historical and traditional methods that we’ve used in cardiology have never been able to do.”

Cleerly is a web application, and it requires that a CTA image specifically, the kind its A.I. is trained to analyze, actually be taken when patients go in for a checkup.

When a patient goes in for a heart exam after experiencing a symptom like chest pain, there are a few ways they can be screened. They might undergo a stress test, an electrocardiogram (ECG), or a coronary angiogram – a catheter- and X-ray-based test. CTA is a newer form of imaging in which a scanner takes detailed images of the heart, which is illuminated with an injected dye.

Cleerly’s platform is designed to analyze those CTA images in detail, but CTA has only recently become a first-line test (a go-to, in essence) when patients come in with suspected heart problems. The European Society of Cardiology has updated its guidelines to make CTA a first-line test in evaluating patients with chronic coronary disease. In the UK, it became a first-line test in the evaluation of patients with chest pain in 2016.

CTA is already used in the US, but guidelines may expand how often it’s actually used. A review on CTA published on the American College of Cardiology website notes that it shows “extraordinary potential.” 

There’s movement on the insurance side, too. In 2020, United Healthcare announced that it would reimburse for CTA scans when they’re ordered to examine low- to medium-risk patients with chest pain. Reimbursement qualification is obviously a huge boon to broader adoption.

CTA imaging might not be great for people who already have stents in their hearts, or, says Min, those who are just in for a routine checkup (there is low-dose radiation associated with a CTA scan). Rather, Cleerly will focus on patients who have shown symptoms or are already at high risk for heart disease. 

The CDC estimates that 18.2 million adults currently have coronary artery disease (the most common kind of heart disease), and that 47 percent of Americans have at least one of the three most prominent risk factors for the disease: high blood pressure, high cholesterol or a smoking habit.

These shifts (and anticipated shifts) in guidelines suggest that a lot more of these high-risk patients may be getting CTA scans in the future, and Cleerly has been working on mining additional information from them in several large-scale clinical trials.

There are plenty of different risk factors that contribute to heart disease, but the most basic understanding is that heart attacks happen when plaques build up in the arteries, narrowing them and constricting the flow of blood. Clinical trials have suggested that the types of plaques inside the body may contain information about how risky certain blockages are compared to others, beyond just how much of the artery they block.

A trial on 25,251 patients found that, indeed, the percentage of constriction in the arteries increases the risk of heart attack. But the type of plaque in those arteries identified high-risk patients better than other measures. Patients who went on to have sudden heart attacks, for example, tended to have higher levels of fibrofatty or necrotic core plaque in their hearts.

These results do suggest that it’s worth knowing a bit more detail about plaque in the heart. Note that Min is an author of this study, but it was also conducted at 13 different medical centers. 

As with all A.I.-based diagnostic tools, the big question is: How well does it actually recognize features within a scan?

At the moment FDA documents emphasize that it is not meant to supplant a trained medical professional who can interpret the results of a scan. But tests have suggested it fares pretty well. 

A June 2021 study compared Cleerly’s A.I analysis of CTA scans to that of three expert readers, and found that the A.I had a diagnostic accuracy of about 99.7 percent when evaluating patients who had severe narrowing in their arteries. Three of nine study authors hold equity in Cleerly. 

With this most recent round of funding, Min says he aims to pursue more commercial partnerships and scale up to meet the existing demand. “We have sort of stayed under the radar, but we came above the radar because now I think we’re prepared to fulfill demand,” he says. 

Still, the product itself will continue to be tested and refined. Cleerly is in the midst of seven performance indication studies that will evaluate just how well the software can spot the litany of plaques that can build up in the heart.

#a-i, #artificial-intelligence, #disease, #fda, #heart-attack, #high-blood-pressure, #imaging, #medical-imaging, #medicine, #radiation, #tc

C2i, a genomics SaaS product to detect traces of cancer, raises $100M Series B

If you or a loved one has ever undergone a tumor removal as part of cancer treatment, you’re likely familiar with the period of uncertainty and fear that follows. Will the cancer return, and if so, will the doctors catch it at an early enough stage? C2i Genomics has developed software that’s 100x more sensitive in detecting residual disease, and investors are pouncing on the potential. Today, C2i announced a $100 million Series B led by Casdin Capital. 

“The biggest question in cancer treatment is, ‘Is it working?’ Some patients are getting treatment they don’t benefit from and they are suffering the side effects while other patients are not getting the treatment they need,” said Asaf Zviran, co-founder and CEO of C2i Genomics in an interview.

Historically, the main approach to cancer detection post-surgery has been through the use of MRI or X-ray, but neither of those methods gets super accurate until the cancer progresses to a certain point. As a result, a patient’s cancer may return, but it may be a while before doctors are able to catch it.

Using C2i’s technology, doctors can order a liquid biopsy, which is essentially a blood draw that looks for DNA. From there they can sequence the entire genome and upload it to the C2i platform. The software then looks at the sequence and identifies faint patterns that indicate the presence of cancer, and can inform if it’s growing or shrinking.

“C2i is basically providing the software that allows the detection and monitoring of cancer to a global scale. Every lab with a sequencing machine can process samples, upload to the C2i platform and provide detection and monitoring to the patient,” Zviran told TechCrunch.

C2i Genomics’ solution is based on research performed at the New York Genome Center (NYGC) and Weill Cornell Medicine (WCM) by Dr. Zviran, along with Dr. Dan Landau, faculty member at the NYGC and assistant professor of medicine at WCM, who serves as scientific co-founder and member of C2i’s scientific advisory board. The research and findings have been published in the medical journal, Nature Medicine.

While the product is not FDA-approved yet, it’s already being used in clinical research and drug development research at NYU Langone Health, the National Cancer Center of Singapore, Aarhus University Hospital and Lausanne University Hospital.

When and if approved, New York-based C2i has the potential to drastically change cancer treatment, including in the areas of organ preservation. For example, some people have functional organs, such as the bladder or rectum, removed to prevent cancer from returning, leaving them disabled. But what if the unnecessary surgeries could be avoided? That’s one goal that Zviran and his team have their minds set on achieving.

For Zviran, this story is personal. 

“I started my career very far from cancer and biology, and at the age of 28 I was diagnosed with cancer and I went for surgery and radiation. My father and then both of my in-laws were also diagnosed, and they didn’t survive,” he said.

Zviran, who today has a PhD in molecular biology, was previously an engineer with the Israeli Defense Force and some private companies. “As an engineer, looking into this experience, it was very alarming to me about the uncertainty on both the patients’ and physicians’ side,” he said.

This round of funding will be used to accelerate clinical development and commercialization of the company’s C2-Intelligence Platform. Other investors that participated in the round include NFX, Duquesne Family Office, Section 32 (Singapore), iGlobe Partners and Driehaus Capital.

#artificial-intelligence, #biotech, #blood-test, #c2i-genomics, #cancer, #cancer-screening, #cancer-treatment, #casdin-capital, #cloud, #cornell, #drug-development, #fda, #funding, #health, #imaging, #mri, #new-york-university, #radiation, #recent-funding, #saas, #startups, #surgery, #tc, #tumor, #x-ray

Pixxel closes $7.3M seed round and unveils commercial hyperspectral imaging product

LA- and Bangalore-based space startup Pixxel has closed a $7.3 million seed round, including newly committed capital from Techstars, Omnivore VC and more. The company has also announced a new product focus: hyperspectral imaging. It aims to provide that imaging at the highest resolution commercially available, via a small satellite constellation that will provide 24-hour, global coverage once it’s fully operational.

Pixxel’s funding today is an extension of the $5 million it announced it had raised back in August of last year. At the time, the startup had only revealed that it was focusing on Earth imaging, and it’s unveiling its specific pursuit of hyperspectral imaging for the first time today. Hyperspectral imaging uses far more light frequencies than the much more commonly used multispectral imaging employed in satellite observation today, allowing for unprecedented insight and the detection of previously invisible issues, including the migration of pest insect populations in agriculture, as well as gas leaks and other ecological threats.

Standard multispectral imaging (left) vs. hyperspectral imaging (right) Credit: EPFL
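To make that multispectral/hyperspectral difference concrete, here’s a tiny illustrative sketch of the data involved. The band counts and tile size are ballpark figures for the two approaches in general, not Pixxel’s published specs.

```python
# Illustrative only: the practical difference between multispectral and
# hyperspectral imagery is the number (and narrowness) of spectral bands
# captured per pixel. Band counts here are typical orders of magnitude,
# not the specs of any particular satellite.
import numpy as np

height, width = 128, 128  # one small image tile

multispectral = np.zeros((height, width, 8), dtype=np.float32)     # a handful of broad bands
hyperspectral = np.zeros((height, width, 200), dtype=np.float32)   # hundreds of narrow bands

# Each pixel in the hyperspectral cube is a near-continuous spectrum, which is
# what lets analysts pick out fine signatures (crop stress, gas plumes, minerals)
# that a few broad bands simply average away.
print(multispectral.shape, hyperspectral.shape)  # (128, 128, 8) (128, 128, 200)
```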

“We started with analyzing existing satellite images, and what we could do with this immediately,” explained Pixxel co-founder and CEO Awais Ahmed in an interview. “We realized that in most cases, it was not able to even see certain problems or issues that we wanted to solve – for example, we wanted to be able to look at air pollution and water pollution levels. But to be able to do that there were no commercial satellites that would enable us to do that, or even open source satellite data at the resolution that would enable us to do that.”

The potential of hyperspectral imaging on Earth, across a range of sectors, is huge, according to Ahmed, but Pixxel’s long-term vision is all about empowering a future commercial space sector to make the most of in-space resources.

“We started looking at space as a sector for us to be able to work in, and we realized that what we wanted to do was to be able to enable people to take resources from space to use in space,” Ahmed said. “That included asteroid mining, for example, and when we investigated that, we found hyperspectral imaging was the imaging tech that would enable us to map these asteroids as to whether they contain these metal or these minerals. So that knowledge sort of transferred to this more short-term problem that we were looking at solving.”

Part of the reason that Pixxel’s founders couldn’t find existing available hyperspectral imaging at the resolutions they needed was that as a technology, it has previously been restricted to internal governmental use through regulation. The U.S. recently opened up the ability for commercial entities to pursue very high-resolution hyperspectral imaging for use on the private market, effectively because they realized that these technical capabilities were becoming available in other international markets anyway. Ahmed told me that the main blocker was still technical, however.

Pixxel's Hyperspectral imaging satellite at its production facility in Bangalore

Image Credits: Pixxel

“If we were to build a camera like this even two or three years ago, it would not have been possible because of the miniaturized sensors, the optics, etc.,” he said. “The advances that have happened only happened very recently, so it’s also the fact that this is the right time to take it from the scientific domain to the commercial domain.”

Pixxel now aims to have its first hyperspectral imaging satellite launched and operating on orbit within the next few months, and it will then continue to launch additional satellites after that once it’s able to test and evaluate the performance of its first spacecraft in an actual operating environment.

#aerospace, #asteroid-mining, #awais-ahmed, #bangalore, #imaging, #louisiana, #metal, #optics, #recent-funding, #satellite-constellation, #space, #spectroscopy, #startups, #tc, #techstars, #united-states

Adobe delivers native Photoshop for Apple Silicon Macs and a way to enlarge images without losing detail

Adobe has been moving quickly to update its imaging software to work natively on Apple’s new in-house processors for Macs, starting with the M1-based MacBook Pro and MacBook Air released late last year. After shipping native versions of Lightroom and Camera Raw, it’s now releasing an Apple Silicon-optimized version of Photoshop, which delivers big performance gains vs. the Intel version running on Apple’s Rosetta 2 software emulation layer.

How much better? Per internal testing, Adobe says that users should see improvements of up to 1.5x faster performance on a number of different features offered by Photoshop, vs. the same tasks being done on the emulated version. That’s just the start, however, since Adobe says it’s going to continue to coax additional performance improvements out of the software on Apple Silicon in collaboration with Apple over time. Some features are also still missing from the M1-friendly edition, including the ‘Invite to Edit Cloud Documents’ and ‘Preset Syncing’ options, but those will be ported over in future iterations as well.

In addition to the Apple Silicon version of Photoshop, Adobe is also releasing a new Super Resolution feature in the Camera Raw plugin (to be released for Lightroom later) that ships with the software. This is an image enlarging feature that uses machine learning trained on a massive image dataset to blow up pictures to larger sizes while still preserving details. Adobe has previously offered a super resolution option that combined multiple exposures to boost resolution, but this works from a single photo.

It’s the classic ‘Computer, enhance’ sci-fi feature made real, and it builds on work that Photoshop previously did to introduce its ‘Enhance details’ feature. If you’re not a strict Adobe loyalist, you might also be familiar with Pixelmator Pro’s ‘ML Super Resolution’ feature, which works in much the same way – albeit using a different ML model and training data set.

Adobe’s Super Resolution in action

The bottom line is that Adobe’s Super Resolution will output an image with twice the horizontal and twice the vertical resolution – meaning in total, it has 4x the number of pixels. It’ll do that while preserving detail and sharpness, which adds up to allowing you to make larger prints from images that previously wouldn’t stand up to that kind of enlargement. It’s also great for cropping in on photos in your collection to capture tighter shots of elements that previously would’ve been rendered blurry and disappointing as a result.
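As a quick sanity check on that arithmetic, here’s a minimal sketch with hypothetical example dimensions; it is not Adobe’s code, just the math of a 2x-per-axis upscale.

```python
# Minimal sketch of the pixel arithmetic behind a 2x-per-axis upscale:
# doubling both dimensions quadruples the pixel count. The example numbers
# are hypothetical; this is not Adobe's implementation.
def super_resolution_dimensions(width: int, height: int, factor: int = 2):
    out_w, out_h = width * factor, height * factor
    pixel_multiplier = (out_w * out_h) / (width * height)  # 2 * 2 = 4
    return out_w, out_h, pixel_multiplier

# A 24MP, 6000x4000 frame becomes 12000x8000 -- four times the pixels.
print(super_resolution_dimensions(6000, 4000))  # (12000, 8000, 4.0)
```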

This feature benefits greatly from GPUs that are optimized for machine learning jobs via frameworks like CoreML and Windows ML. That means that Apple’s M1 chip is a perfect fit, since it includes a dedicated ML processing region called the Neural Engine. Likewise, Nvidia’s RTX series of GPUs and their Tensor Cores are well suited to the task.

Adobe also released some major updates for Photoshop for iPad, including version history for its Cloud Documents non-local storage. You can also now store versions of Cloud Documents offline and edit them locally on your device.

#adobe-creative-cloud, #adobe-lightroom, #adobe-photoshop, #apple, #apple-inc, #apps, #artificial-intelligence, #imaging, #intel, #m1, #machine-learning, #macintosh, #ml, #photoshop, #pixelmator, #software, #steve-jobs, #tc

Ibex Medical Analytics raises $38M for its AI-powered cancer diagnostic platform

Israel-based Ibex Medical Analytics, which has developed AI-driven imaging technology to detect cancer cells in biopsies more efficiently, has raised a $38 million Series B financing round led by Octopus Ventures and 83North. Also participating in the round were aMoon, Planven Entrepreneur Ventures and Dell Technologies Capital, the corporate venture arm of Dell Technologies. The company has now raised a total of $52 million since its launch in 2016. Ibex plans to use the investment to sell further into diagnostic labs in North America and Europe.

Originally incubated out of the Kamet Ventures incubator, Ibex’s “Galen” platform mimics the work of a pathologist, allowing them to diagnose cancer more accurately and faster and derive new insights from a biopsy specimen.

Because rates of cancer are on the rise and the medical procedures have become more complex, pathologists have a higher workload. Plus, says Ibex, there is a global shortage of pathologists, which can mean delays to the whole diagnostic process. The company claims pathologists can be 40% more productive using its solution.

Speaking to TechCrunch, Joseph Mossel, Ibex CEO and Co-founder said: “You can think of it as a pathologist’s assistant, so it kind of prepares the case in advance, marks the regions of interest, and allows the pathologist to achieve the efficiency gains.”

He said the company has secured the largest pathology network in France, as well as LD path, a group of five pathology labs that services 24 NHS trusts in the UK, among others.

Michael Niddam, of Kamet Ventures said Ibex was an “excellent example of how Kamet works with founders very early on.” Ibex founders Joseph Mossel and Dr. Chaim Linhart had previously joined Kamet as Entrepreneurs in Residence before developing their idea.

#assistant, #cancer, #dell-technologies-capital, #europe, #france, #imaging, #kamet-ventures, #nhs, #north-america, #octopus-ventures, #outer-space, #pathology, #spacecraft, #spaceflight, #tc, #united-kingdom

SpaceX sets new record for most satellites on a single launch with latest Falcon 9 mission

SpaceX has set a new all-time record for the most satellites launched and deployed on a single mission with its Transporter-1 flight on Sunday. The launch was the first of SpaceX’s dedicated rideshare missions, in which it splits up the payload capacity of its rocket among multiple customers, resulting in a reduced cost for each but still providing SpaceX with a full launch and all the revenue it requires to justify launching one of its vehicles.

The launch today included 143 satellites, 133 of which were from other companies that booked rides. SpaceX also launched 10 of its own Starlink satellites, adding to the more than 1,000 already sent to orbit to power SpaceX’s own broadband communication network. During a launch broadcast last week, SpaceX revealed that it has begun serving beta customers in Canada and is expanding to the UK with its private pre-launch test of that service.

Customers on today’s launch included Planet Labs, which sent up 48 SuperDove Earth-imaging satellites; Swarm, which sent up 36 of its own tiny IoT communications satellites; and Kepler, which added eight more of its own communication spacecraft to its constellation. The rideshare model that SpaceX now has in place should help smaller new space companies and startups like these build out their operational on-orbit constellations faster, complementing other small-payload launchers like Rocket Lab and new entrant Virgin Orbit, to name a few.

This SpaceX launch was also the first to deliver Starlink satellites to a polar orbit, which is a key part of the company’s continued expansion of its broadband service. The mission also included a successful landing and recovery of the Falcon 9 rocket’s first-stage booster, the fifth for this particular booster, and a dual recovery of the fairing halves used to protect the cargo during launch, which were fished out of the Atlantic ocean using its recovery vessels and will be refurbished and reused.

#aerospace, #broadband, #canada, #communications-satellites, #elon-musk, #falcon, #falcon-9, #hyperloop, #imaging, #outer-space, #planet-labs, #rocket-lab, #satellite, #space, #spaceflight, #spacex, #starlink, #startups, #tc, #united-kingdom

Watch SpaceX’s first dedicated rideshare rocket launch live, carrying a record-breaking payload of satellites

SpaceX is set to launch the very first of its dedicated rideshare missions – an offering it introduced in 2019 that allows small satellite operators to book a portion of a payload on a Falcon 9 launch. SpaceX’s rocket has a relatively high payload capacity compared to the size of many of the small satellites produced today, so a rideshare mission like this offers smaller companies and startups a chance to get their spacecraft in orbit without breaking the bank. Today’s attempt is scheduled for 10 AM EST (7 AM PST) after a first try yesterday was cancelled due to weather. So far, weather looks much better for today.

The cargo capsule atop the Falcon 9 flying today holds a total of 133 satellites according to SpaceX, which is a new record for the highest number of satellites being launched on a single rocket – beating out a payload of 104 spacecraft delivered by Indian Space Research Organization’s PSLV-C37 launch back in February 2017. It’ll be a key demonstration not only of SpaceX’s rideshare capabilities, but also of the complex coordination involved in a launch that includes deployment of multiple payloads into different target orbits in relatively quick succession.

This launch will be closely watched in particular for its handling of orbital traffic management, since it offers a preview of what the future of private space launches could look like in terms of sheer volume of activity. Some of the satellites flying on this mission are not much larger than an iPad, so industry experts will be paying close attention to how they’re deployed and tracked to avoid any potential conflicts.

The payloads being launched today include large batches of startup-built spacecraft, among them 36 of Swarm’s tiny IoT network satellites and eight of Kepler’s GEN-1 communications satellites. There are also 10 of SpaceX’s own Starlink satellites on board, along with 48 of Planet Labs’ Earth-imaging spacecraft.

The launch stream above should begin around 15 minutes prior to the mission start, which is set for 10 AM EST (7 AM PST) today.

#aerospace, #bank, #communications-satellites, #falcon-9, #imaging, #indian-space-research-organization, #ipad, #outer-space, #planet-labs, #satellite, #small-satellite, #space, #spacecraft, #spaceflight, #spacex, #starlink, #tc

Watch SpaceX launch its first dedicated rideshare mission live, carrying a record-breaking number of satellites

[UPDATE: Today’s attempt was scrubbed due to weather conditions. Another launch window is available tomorrow at 10 AM ET]

SpaceX is set to launch the very first of its dedicated rideshare missions – an offering it introduced in 2019 that allows small satellite operators to book a portion of a Falcon 9’s payload capacity. SpaceX’s rocket has a relatively high payload capacity compared to the size of many of the small satellites produced today, so a rideshare mission like this offers smaller companies and startups a chance to get their spacecraft into orbit without breaking the bank.

The Falcon 9 flying today carries a total of 133 satellites according to SpaceX – a new record for the most satellites launched on a single rocket, beating out the 104 spacecraft delivered by the Indian Space Research Organization’s PSLV-C37 launch back in February 2017. It’ll be a key demonstration not only of SpaceX’s rideshare capabilities, but also of the complex coordination involved in a launch that deploys multiple payloads into different target orbits in relatively quick succession.

This launch will be closely watched in particular for its handling of orbital traffic management, since it offers a preview of what the future of private space launches could look like in terms of sheer volume of activity. Some of the satellites flying on this mission are not much larger than an iPad, so industry experts will be paying close attention to how they’re deployed and tracked to avoid any potential conflicts.

The payloads being launched today include large batches of startup-built spacecraft, among them 36 of Swarm’s tiny IoT network satellites and eight of Kepler’s GEN-1 communications satellites. There are also 10 of SpaceX’s own Starlink satellites on board, along with 48 of Planet Labs’ Earth-imaging spacecraft.

The launch stream above should begin around 15 minutes prior to the mission start, which is set for 9:40 AM EST (6:40 AM PST) today.

#aerospace, #bank, #communications-satellites, #falcon-9, #imaging, #indian-space-research-organization, #ipad, #outer-space, #planet-labs, #satellite, #small-satellite, #space, #spacecraft, #spaceflight, #spacex, #starlink, #tc

Teledyne to acquire FLIR in $8 billion cash and stock deal

Industrial sensor giant Teledyne is set to acquire sensing company FLIR in a deal valued at around $8 billion in a mix of stock and cash, pending approvals, with an expected closing date around the middle of this year. While both companies make sensors aimed primarily at industrial and commercial customers, they focus on different specialties, which Teledyne said in a press release makes FLIR’s business complementary to, rather than competitive with, its existing offerings.

FLIR’s technology has appeared in the consumer market via add-on thermal cameras designed for mobile devices, including the iPhone. Those are useful for things like identifying the source of drafts and potential plumbing leaks, but the company’s main business, which includes not only thermal imaging but also visible-light imaging, video analytics and threat detection technology, serves deep-pocketed customers in the aerospace and defense industries.

Teledyne also serves aerospace and defense customers, including NASA, as well as healthcare, marine and climate-monitoring agencies. The company’s suite of offerings includes seismic sensors, oscilloscopes and other instrumentation, as well as digital imaging, but FLIR’s products cover some areas not currently addressed by Teledyne, and in more depth.

#aerospace, #california, #companies, #digital-imaging, #flir, #healthcare, #imaging, #iphone, #mobile-devices, #surveillance, #tc, #thermal-imaging

Iris Automation raises $13 million for visual drone object avoidance tech

It’s only a matter of time now before drones become a key component of everyday logistics infrastructure, but there are still significant barriers between where we are today and that future – particularly when it comes to regulation. Iris Automation is developing computer vision products that can help simplify the regulatory challenges involved in setting standards for pilotless flight, thanks to its detect-and-avoid technology that can run using a wide range of camera hardware. The company has raised a $13 million Series B funding round to improve and extend its tech, and to help provide demonstrations of its efficacy in partnership with regulators.

I spoke to Iris Automation CEO Jon Damush, and Iris Automation investor Tess Hatch, VP at Bessemer Venture Partners, about the round and the startup’s progress and goals. Damush, who took over as CEO earlier this year, talked about his experience at Boeing, his personal experience as a pilot, and the impact on aviation of the advent of small, cheap and readily accessible electric motors, batteries and powerful computing modules, which have set the stage for an explosion in the commercial UAV industry.

“You’ve now shattered some of the barriers that have been in aerospace for the past 50 years, because you’re starting to really democratize the tools of production that allow people to make things that fly much easier than they could before,” Damush told me. “So with that, and the ability to take a human out of the cockpit, comes some interesting challenges – none more so than the regulatory environment.”

The U.S. Federal Aviation Administration (FAA), and most airspace regulators around the world, essentially break regulations around commercial flight down into two spheres, Damush explains. The first is around operations – what you are going to do while in flight, and whether you are doing it the right way. The second, however, is about the pilot, and that’s a much trickier thing to adapt to pilotless aircraft.

“One of the biggest challenges is the part of the regulations called 91.113b, and what that part of the regs states is that, given weather conditions that permit, it’s the pilot on the airplane that has the ultimate responsibility to see and avoid other aircraft,” Damush said. “That’s not a separation standard that says you’ve got to be three miles away, or five miles away, or a mile away – that is a last line of defense, that is a safety net, so that when all the other mitigations that lead to a safe flight from A to B fail, the pilot is there to make sure you don’t collide into somebody.”

Iris comes in here, with an optical camera-based obstacle avoidance system that uses computer vision to effectively stand in for that last line of defense when there isn’t a pilot on board. What this unlocks is flight beyond visual line of sight, a key limiting factor in today’s commercial drone regulatory environment – meaning drones can operate without having to guarantee that an operator has eyes on them at all times. When you first hear that, you might imagine it mostly matters for long-distance flight, but Damush points out that it’s actually more about volume: removing the constraint of keeping a drone within visual line of sight at all times means you can go from one operator per drone to one operator managing a fleet of drones, which is when the economies of scale of commercial drone transportation really start to make sense.
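
To make that scaling argument concrete, here is a minimal back-of-the-envelope sketch in Python; the operator cost and fleet sizes are illustrative assumptions, not figures from Iris Automation or Bessemer.

```python
# Rough sketch of the fleet-scaling argument above (all numbers are hypothetical):
# the operator's hourly cost gets spread across every drone they can supervise,
# so per-flight-hour labor cost drops as BVLOS rules allow larger fleets.

OPERATOR_HOURLY_COST_USD = 50.0   # assumed fully loaded cost of one remote operator
FLIGHT_HOURS_PER_DRONE = 1.0      # normalize to one flight hour per drone

def operator_cost_per_flight_hour(drones_per_operator: int) -> float:
    """Operator cost attributed to each drone flight hour."""
    total_flight_hours = drones_per_operator * FLIGHT_HOURS_PER_DRONE
    return OPERATOR_HOURLY_COST_USD / total_flight_hours

for fleet_size in (1, 5, 20):
    cost = operator_cost_per_flight_hour(fleet_size)
    print(f"{fleet_size:>2} drone(s) per operator -> ${cost:.2f} of operator cost per flight hour")
```

Under those assumed numbers, labor cost per flight hour falls from $50 to $2.50 as an operator goes from a single drone to a fleet of 20, which is the economy of scale Damush describes.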

Iris has made progress toward making this a reality, working with the FAA this year as part of its Integration Pilot Program to demonstrate the system in two different use cases. It also released the second version of its Casia system, which can handle significantly longer-range object detection. Hatch pointed out that these were key reasons why Bessemer upped its stake with this follow-on investment, and when I asked whether COVID-19 has had any impact on industry appetite or confidence in the commercial drone market, she said it has been a significant factor, and that it’s also changing the nature of the industry.

“The two largest industries [right now] are agriculture and public safety enforcement,” Hatch told me. “And public safety enforcement was not one of those last year, it was agriculture, construction and energy. That’s definitely become a really important vertical for the drone industry – one could imagine someone having a heart attack or an allergic reaction, an ambulance takes on average 14 minutes to get to that person, when a drone can be dispatched and deliver an AED or an epi pen within minutes, saving that person’s life. So I really hope that tailwind continues post COVID.”

This Series B round includes investment from Bee Partners, OCA Ventures, and new strategic investors Sony Innovation Fund and Verizon Ventures (disclosure: TechCrunch is owned by Verizon Media Group, though we have no involvement, direct or otherwise, with their venture arm). Damush pointed out that Sony provides great potential strategic value because it develops so much of the imaging sensor stack used in the drone industry, and Sony also develops drones itself. For its part, Verizon offers key partner potential on the connectivity front, which is invaluable for managing large-scale drone operations.

#aerospace, #articles, #bee-partners, #bessemer-venture-partners, #boeing, #ceo, #computing, #drone, #embedded-systems, #emerging-technologies, #energy, #federal-aviation-administration, #funding, #imaging, #iris-automation, #recent-funding, #robotics, #science-and-technology, #sony-innovation-fund, #startups, #tc, #technology, #tess-hatch, #unmanned-aerial-vehicles, #verizon-media-group, #verizon-ventures, #vp

Rocket Lab successfully launches satellite for Japanese startup Synspective

Rocket Lab has completed its 17th mission, putting a synthetic aperture radar (SAR) satellite on orbit for client Synspective, a Tokyo-based space startup that has raised over $100 million in funding to date. Synspective aims to operate a 30-satellite constellation that can provide global imaging coverage of Earth, with SAR’s benefit of being able to see through clouds and inclement weather, and in all lighting conditions.

This is Synspective’s first satellite on orbit, and it took off from Rocket Lab’s launch facility on the Mahia Peninsula in New Zealand. It will operate in a sun-synchronous orbit roughly 300 miles above Earth, and will act as a demonstrator of the startup’s technology to pave the way for the full constellation, which will provide commercially available SAR data both raw and processed via the company’s in-development AI technology to deliver analytics and insights.

For Rocket Lab, this marks the conclusion of a successful year in launch operations, which also saw the company take its first key steps toward making its Electron launch system partially reusable. The company did have one significant setback as well, with a mission in July that failed to deliver its payloads to orbit, but it quickly bounced back from that failure with improvements designed to prevent a similar incident in the future.

In 2021, Rocket Lab will aim to launch its first mission from the U.S., using its new launch facility at Wallops Island, in Virginia. That initial U.S. flight was supposed to happen in 2020, but the COVID-19 pandemic, followed by a NASA certification process for one of its systems, pushed the launch to next year.

#aerospace, #electron, #imaging, #new-zealand, #outer-space, #rocket-lab, #satellite, #science, #space, #spaceflight, #synspective, #tc, #tokyo, #united-states, #virginia

LA-based A-Frame, a developer of celebrity-led personal care brands, raises cash for its brand incubator

A-Frame, a Los Angeles-based developer of personal care brands supported by celebrities, has raised $2 million in a new round of funding led by Initialized Capital.

Joining Initialized in the round is the serial entrepreneur Moise Emquies, whose previous clothing lines, Ella Moss and Splendid, were acquired by the fashion holding company VFC in 2017.

A-Frame previously raised a seed round backed by cannabis dispensary Columbia Care. The company’s first product is a hand soap, Keeper. Other brands in suncare and skincare, children’s and baby care, and bath and body will follow, the company said.

“We partner with the investment groups at the agencies,” said company founder and chief executive, Ari Bloom. “We start interviewing different talent, speaking with their agents and their managers. We create an entity that we spin out. I wouldn’t say that we compete with the agencies.”

So far, the company has worked with CAA, UTA and WME on all of the brands in development, Bloom said. Two new brands should launch in the next couple of weeks.

As part of the round, actor, activist, and author Hill Harper has joined the company as a co-founder and as the company’s chief strategy officer. Emquies is also joining the company as its chief brand officer.

“Hill is my co-founder. He and I have worked together for a number of years. He’s with me at the holding company level. Identifying the opportunities,” said Bloom. “He’s bridging the gap between business and talent. He’s a part of the conversations when we talk to the agencies, managers and the talent. He’s a great guy that I think has a lot of respect in the agency and talent world.”

Initialized General Partner Alda Leu Dennis took point on the investment for Initialized and will take a seat on the company’s board of directors alongside Emquies. Other directors include Columbia Care chief executive, Nicholas Vita, and John D. Howard, the chief executive of Irving Place Capital.

“For us the calculus was to look at personal care and see what categories need to be reinvented because of sustainability,” said Bloom. “It was important to us once we get to a category what is the demographic opportunity. Even if categories were somewhat evolved they’re not all the way there… everything is in non-ingestible personal care. When you have a celebrity focused brand you want to focus on franchise items.”

The Keeper product is a subscription-based model for soap concentrates and cleansing hand sprays.

Bloom is a serial entrepreneur; his last business was the AR imaging company Avametric, which was backed by Khosla Ventures and Y Combinator and wound up getting acquired by Gerber Technology in 2018. Bloom is also a founder of the Wise Sons Delicatessen in San Francisco.

“We first invested in Avametric at Initialized in 2013 and he had experience prior to that as well. From a venture perspective I think of these all around real defensibility of brand building,” said Dennis.

The investors believe that between Bloom’s software for determining market preferences, A-Frame’s roster of celebrities and the company’s structure as a brand incubator, all of the ingredients are in place for a successful direct to consumer business.

However, venture capitalists have been down this road before. The Honest Co. was an early attempt to build a consumer brand around sustainable personal care products. Bloom said Honest provided several lessons for his young startup, one of them being knowing when a company has reached the peak of its growth trajectory and become an opportunity for other, larger companies to take the business to the next level.

“Our goal is a three-to-seven year horizon that is big enough at a national scale that a global player can come in and internationally scale it,” said Bloom.

#alda-leu-dennis, #ceo, #co-founder, #imaging, #initialized-capital, #khosla-ventures, #los-angeles, #san-francisco, #serial-entrepreneur, #tc, #y-combinator

Intel is providing the smarts for the first satellite with local AI processing on board

Intel today detailed its contribution to PhiSat-1, a tiny new satellite that was launched into sun-synchronous orbit on September 2. PhiSat-1 carries a new kind of hyperspectral-thermal camera, and also includes a Movidius Myriad 2 Vision Processing Unit (VPU). That VPU is found in a number of consumer devices on Earth, but this is its first trip to space – and the first time it’ll be handling large amounts of data locally, saving researchers back on Earth precious time and satellite downlink bandwidth.

Specifically, the AI on board PhiSat-1 will handle automatic identification of cloud cover – images in which clouds obscure the parts of Earth that the scientists studying the data actually want to see. Discarding these images before they’re even transmitted means the satellite can realize bandwidth savings of up to 30%, so more useful data reaches Earth during the windows when the satellite is in range of ground stations.
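
As a rough illustration of that idea (not Ubotica’s actual pipeline), the on-board logic amounts to scoring each captured tile for cloud cover and only queuing sufficiently clear tiles for downlink; the threshold and data below are made-up stand-ins.

```python
# Illustrative sketch of on-board cloud filtering before downlink (hypothetical
# numbers): score each captured tile for cloud cover, keep only mostly clear
# tiles, and the bandwidth saving is roughly the fraction of tiles discarded.

from dataclasses import dataclass
from typing import List

@dataclass
class Tile:
    tile_id: str
    cloud_fraction: float  # in a real system this comes from the on-board model

def select_for_downlink(tiles: List[Tile], max_cloud_fraction: float = 0.7) -> List[Tile]:
    """Keep only tiles whose estimated cloud cover is below the threshold."""
    return [t for t in tiles if t.cloud_fraction < max_cloud_fraction]

# Hypothetical pass: 10 tiles, 3 of them mostly cloud-covered.
captured = [Tile(f"tile-{i}", cf) for i, cf in
            enumerate([0.1, 0.9, 0.3, 0.8, 0.2, 0.0, 0.95, 0.4, 0.1, 0.5])]
kept = select_for_downlink(captured)
saved = 1 - len(kept) / len(captured)
print(f"Downlinking {len(kept)}/{len(captured)} tiles, roughly {saved:.0%} bandwidth saved")
```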

The AI software that runs on the Intel Myriad 2 aboard PhiSat-1 was created by startup Ubotica, which worked with the hardware maker behind the hyperspectral camera. It also had to be tuned to compensate for the excess exposure to radiation, though, somewhat surprisingly, testing at CERN found that the hardware itself didn’t have to be modified in order to perform within the standards required for its mission.

Computing at the edge takes on a whole new meaning when applied to satellites on orbit, but it’s definitely a place where local AI makes a ton of sense. All the same reasons that companies seek to handle data processing and analytics at the site of sensors here on Earth also apply in space – magnified exponentially by things like network inaccessibility and connection quality – so expect to see a lot more of this.

PhiSat-1 was launched in September as part of Arianespace’s first rideshare demonstration mission, which the company aims to use to show off its ability to offer launch services for smaller payloads from smaller startups at lower cost.

#aerospace, #artificial-intelligence, #data-processing, #imaging, #intel, #movidius, #prisma, #radiation, #satellite, #science, #space, #spectroscopy, #tc, #vision

Satellite radar startup ICEYE raises $87 million to continue to grow its operational constellation

Finnish startup ICEYE, which has been building out and operating a constellation of Synthetic-Aperture Radar (SAR) small satellites, has raised an $87 million Series C round of financing. This round of funding was led by existing investor True Ventures, and includes participation by OTB Ventures, and it brings the total funding for ICEYE to $152 million since its founding in 2014.

ICEYE has already launched a total of five SAR satellites, and will be launching an additional four later this year, with a plan to add eight more throughout 2021. Its spacecraft, which it designed and built in-house, were the first small satellites ever to offer SAR imaging capabilities. SAR imaging is innovative because it uses a relatively small physical antenna, combined with fast motion across a targeted imaging area, to create a synthetic aperture much larger than the physical aperture of the radar antenna itself – which in turn means it’s capable of producing very high-resolution, two- and three-dimensional images of areas and objects.
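
A standard textbook relation gives a feel for why the trick works (this is the generic strip-map case, not ICEYE’s specific design): an antenna of physical length L, operating at wavelength λ and range R, illuminates a ground footprint whose width sets the synthetic aperture the satellite sweeps out as it flies past, and the achievable along-track (azimuth) resolution collapses to

```latex
L_{\mathrm{sa}} \;\approx\; \frac{\lambda R}{L},
\qquad
\delta_{\mathrm{az}} \;\approx\; \frac{\lambda R}{2\,L_{\mathrm{sa}}} \;=\; \frac{L}{2}
```

So an antenna just a few meters long yields meter-class azimuth resolution regardless of range, and spotlight modes that dwell on a single scene extend the synthetic aperture further still, which is how sub-meter figures like the 0.25-meter result mentioned below become reachable.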

ICEYE has racked up a number of achievements, including a record-setting 0.25-meter resolution for a small SAR satellite, and record turnaround time for capture data delivery: just five minutes from when data begins its downlink to ground stations to having processed images available for customers to use on their own systems.

The purpose of this funding is to continue and speed up the growth of the ICEYE satellite constellation, as well as providing round-the-clock customer service operations across the world. ICEYE also hopes to set up U.S.-based manufacturing operations for future spacecraft.

SAR, along with other types of Earth imaging, has actually grown in demand during the ongoing COVID-19 crisis – especially when provided by companies focused on delivering it via lower-cost, small-satellite operations. That’s in part due to its ability to supplement inspection and monitoring work that previously would have been done in person, or handled via expensive operations such as aircraft observation or tasked geosynchronous satellites.

#aerospace, #capella-space, #iceye, #imaging, #recent-funding, #satellite-constellation, #satellites, #science, #spacecraft, #spaceflight, #startups, #tc, #true-ventures

MIT engineers develop a totally flat fisheye lens that could make wide-angle cameras easier to produce

Engineers at MIT, in partnership with the University of Massachusetts at Lowell, have devised a way to build a camera lens that avoids the typical spherical curve of ultra-wide-angle glass while still providing true optical fisheye distortion. Fisheye lenses are relatively specialized, producing images that can cover an area of 180 degrees or more, but they can be very costly to produce and are typically large, heavy lenses that aren’t ideal for use on small cameras like those found in smartphones.

This is the first time a flat lens has been able to produce clear, 180-degree images that cover a true panoramic spread. The engineers made it work by patterning one side of a thin wafer of glass with microscopic, three-dimensional structures that are positioned very precisely to scatter inbound light in the same way a curved piece of glass would.

The version created by the researchers in this case is actually designed to work specifically with the infrared portion of the light spectrum, but they could also adapt the design to work with visible light, they say. Whether IR or visible light, there are a range of potential uses of this technology, since capturing a 180-degree panorama is useful not only in some types of photography, but also for practical applications like medical imaging, and in computer vision applications where range is important to interpreting imaging data.

This design is just one example of what’s called a ‘metalens’ – a lens that uses microscopic features to change its optical characteristics in ways that would traditionally have been accomplished through macro design changes, like building a lens with an outward curve or stacking multiple pieces of glass with different curvatures to achieve a desired field of view.
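
For the basic focusing case, the recipe behind a metalens can be written down compactly; this is the generic hyperbolic phase profile from the metasurface literature, not the MIT team’s fisheye-specific design, with r the radial distance from the lens center, λ the design wavelength and f the focal length:

```latex
\varphi(r) \;=\; \frac{2\pi}{\lambda}\left(f - \sqrt{r^{2} + f^{2}}\right)
```

The microscopic structures are sized and spaced so that each point on the flat wafer imparts this local phase delay, doing the job that thickness variation does in a conventional curved lens; a wide-angle design like the one described here uses a more involved profile, but the principle is the same.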

What’s unusual here is that the ability to accomplish a clear, detailed and accurate 180-degree panoramic image with a perfectly flat metalens design came as a surprise even to the engineers who worked on the project. It’s an advance that goes beyond what many assumed was the state of the art.

#fisheye-lens, #gadgets, #glass, #hardware, #imaging, #lenses, #massachusetts, #medical-imaging, #mit, #optics, #science, #science-and-technology, #smartphones, #tc

Join us Wednesday, September 9 to watch Techstars Starburst Space Accelerator Demo Day live

The 2020 class of Techstars’ Starburst Space Accelerator is graduating with an official Demo Day on Wednesday at 10 AM PT (1 PM ET), and you can watch all the teams present their startups live via the stream above. This year’s class includes 10 companies building innovative new solutions to challenges either directly or indirectly related to commercial space.

Techstars Starburst is a program with a lot of heavyweight backing from both private industry and public agencies, including NASA’s JPL, the U.S. Air Force, Lockheed Martin, Maxar Technologies, SAIC, Israel Aerospace Industries North America and The Aerospace Corporation. The program, led by Managing Director Matt Kozlov, is usually based in LA, where much of the space industry has a significant presence, but this year the Demo Day is going online due to the ongoing COVID-19 situation.

Few, if any, programs out there can claim such a broad representation of big-name stakeholders from across commercial, military and civil space, which is the main reason it manages to attract a range of interesting startups. This is the second class of graduating startups from the Starburst Space Accelerator; last year’s batch included some exceptional standouts like on-orbit refueling company Orbit Fab (also a TechCrunch Battlefield participant), imaging microsatellite company Pixxel and satellite propulsion company Morpheus.

As for this year’s class, you can check out a full list of all 10 participating companies below. The Demo Day presentations begin tomorrow, September 9 at 10 AM PT / 1 PM ET, so you can check back here then to watch live as they provide more details about what they do.

Bifrost

A synthetic data API that allows AI teams to generate their own custom datasets up to 99% faster – no tedious collection, curation or labelling required.
founders@bifrost.ai

Holos Inc.

A virtual reality content management system that makes it super easy for curriculum designers to create and deploy immersive learning experiences.
founders@holos.io

Infinite Composites Technologies

The most efficient gas storage systems in the universe.
founders@infinitecomposites.com

Lux Semiconductors

Lux is developing next generation System-on-Foil electronics.
founders@luxsemiconductors.com

Natural Intelligence Systems, Inc.

Developer of next generation pattern based AI/ML systems.
leadership@naturalintelligence.ai

Prewitt Ridge

Engineering collaboration software for teams building challenging deep tech projects.
founders@prewittridge.com

SATIM

Providing satellite radar-based intelligence for decision makers.
founders@satim.pl

Urban Sky

Developing stratospheric Microballoons to capture the freshest, high-res earth observation data.
founders@urbansky.space

vRotors

Real-time remote robotic controls.
founders@vrotors.com

WeavAir

Proactive air insights.
founders@weavair.com

#aerospace, #artificial-intelligence, #astronomy, #collaboration-software, #content-management-system, #demo-day, #electronics, #imaging, #israel-aerospace-industries, #lockheed-martin, #louisiana, #matt-kozlov, #maxar-technologies, #ml, #orbit-fab, #outer-space, #robotics, #saic, #satellite, #science, #space, #spaceflight, #startups, #tc, #techstars, #u-s-air-force, #virtual-reality

Adobe Spark adds support for animations to its social media graphics tool

Spark is one of those products in Adobe’s Creative Suite that doesn’t always get a lot of attention. But the company’s tool for creating social media posts (which you can try for free) has plenty of fans, and maybe that’s no surprise, given that its mission is to help small business owners and agencies create engaging social media posts without having to learn a lot about design. Today, Adobe added one of the most requested features to Spark on mobile and the web: animations.

“At Adobe, we have this rich history with After Effects,” Spark product manager Lisa Boghosian told me. “We wanted to bring professional motion design to non-professionals, because what solopreneur or small business owner knows what keyframes are, or how to build pre-comps and five layers? It’s just not where they’re spending their time, and they shouldn’t have to. That’s really what Spark is for: you focus on your business and building that. We’ll help guide you in expressing and building that base.”


Guiding users is what Spark does across its features, be that designing the flow of your text, adding imagery or, now, animations. It does that by providing a vast number of templates – including a set of animated templates – as well as easy access to free images, Adobe Stock and icons from the Noun Project (on top of your own imagery, of course).

The team also decided to do away with a lot of the accouterments of movie editors, including timelines. Instead, the team pre-built the templates and the logic behind how new designs display those animations based on best practices. “Instead of exposing a timeline to a user and asking them to put things on a timeline and adjusting the speed — and guessing — we’ve taken on that role because we want to guide you to that best experience.”
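
A hypothetical illustration (not Adobe’s actual implementation) of what that template-first approach looks like in code: the template bundles pre-tuned animation choices so the user only supplies content, and no timeline ever needs to be exposed.

```python
# Hypothetical sketch of a template-driven animation spec (not Adobe's code):
# the template designer pre-selects the motion parameters; the end user only
# supplies content, so no timeline is ever exposed.

from dataclasses import dataclass

@dataclass(frozen=True)
class AnimatedTemplate:
    name: str
    entrance: str      # e.g. "fade" or "slide-up", chosen by the template designer
    duration_s: float  # pre-tuned duration, not user-adjustable
    loop: bool

    def render(self, text: str) -> str:
        # Stand-in for the real rendering pipeline: just describe the output.
        mode = "looping" if self.loop else "one-shot"
        return (f"'{text}' animates with a {self.entrance} entrance over "
                f"{self.duration_s:.1f}s ({mode}), per the '{self.name}' template")

post = AnimatedTemplate(name="Promo Pulse", entrance="fade", duration_s=1.5, loop=True)
print(post.render("Summer sale starts Friday"))
```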


In addition to the new animations feature, Spark is also getting improved tools for sharing assets across the Creative Cloud suite thanks to support for Creative Cloud Libraries. That makes it far easier for somebody to move images from Lightroom or Photoshop to Spark, but since Spark is also getting quite popular with agencies, it’ll make collaborating easier as well. The service already has tools for organizing assets today, but this makes it far easier to work across the various Creative Cloud tools.

Boghosian tells me the team had long had animations on its roadmap, but it took a while to bring it to market, in part because Adobe wanted to get the performance right. “We had to make sure that performance was up to par with what we wanted to deliver,” she said. “And so the experience of exporting a project — we didn’t want it to take a significant amount of time because we really didn’t want the user sitting there waiting for it. So we had to bring up the backend to really support the experience we wanted.” She also noted that the team wanted to have the Creative Cloud Libraries integration ready before launching animations. 

Once you’ve created your animation, Spark lets you export it as an MP4 video file or as a static image; it won’t, however, let you download GIFs.

#adobe, #adobe-creative-cloud, #adobe-photoshop, #creative-suite, #imaging, #software, #spark, #tc

Rocket Lab returns to flight with a successful launch of a Capella Space satellite

Rocket Lab is back to active launch status after an issue with its last mission resulted in the loss of its payload. In just over a month, Rocket Lab was able to identify what went wrong with the Electron launch vehicle used on that mission and correct the issue. On Sunday, it successfully launched the Sequoia satellite on behalf of client Capella Space from its New Zealand launch facility.

The “I Can’t Believe It’s Not Optical” mission is Rocket Lab’s 14th Electron launch, and it lifted off from the company’s private pad at 11:05 PM EDT (8:05 PM PDT). The Sequoia satellite is the first in startup Capella Space’s constellation of Synthetic Aperture Radar (SAR) satellites to be available to general customers. When complete, the constellation will provide hourly high-quality imaging of Earth, using radar rather than optical sensors in order to provide accurate imaging regardless of cloud cover and available light.

This launch seems to have gone off exactly as planned, with the Electron successfully lifting off and delivering the Capella Space satellite to its target orbit. Capella had been intending to launch this spacecraft aboard a SpaceX Falcon 9 rocket via a rideshare mission, but after delays to that flight, it changed tack and opted for a dedicated launch with Rocket Lab.

Rocket Lab’s issue with its July 4 launch was a relatively minor one – an electrical system failure that caused the vehicle to simply shut down, as a safety measure. The team’s investigation revealed a component of the system that was not stress-tested as strenuously as it should’ve been, and Rocket Lab immediately instituted a fix for both future and existing in-stock Electron vehicles in order to get back to active flight in as little time as possible.

While Rocket Lab has also been working on a recovery system that will allow it to reuse the booster stage of its Electron for multiple missions, this launch didn’t involve any tests related to that system development. The company still hopes to test recovery of a booster sometime before the end of this year on an upcoming launch.

#aerospace, #capella-space, #electron, #flight, #imaging, #new-zealand, #outer-space, #rocket-lab, #satellite, #science, #space, #spaceflight, #spacex, #tc

India’s first Earth-imaging satellite startup raises $5 million, first launch planned for later this year

Bengaluru-based Pixxel is getting ready to launch its first Earth-imaging satellite later this year, with a scheduled mission aboard a Soyuz rocket. The roughly one-and-a-half-year-old company is moving quickly, and today it’s announcing a $5 million seed funding round to help it accelerate even more. The round is led by Blume Ventures, Lightspeed India Partners and growX ventures, with participation from a number of angel investors.

This isn’t Pixxel’s first outside funding: It raised $700,000 in pre-seed money from Techstars and others last year. But this is significantly more capital to invest in the business, and the startup plans to use it to grow its team, and to continue to fund the development of its Earth observation constellation.

The goal is to fully deploy said constellation, which will be made up of 30 satellites, by 2022. Once all of the company’s small satellites are on orbit, the Pixxel network will be able to provide globe-spanning imaging on a daily basis. The startup claims its technology will be able to provide much higher-quality data than today’s existing Earth-imaging satellites, along with analysis driven by Pixxel’s own deep learning models, which are designed to help identify and even potentially predict large problems and phenomena that have an impact on a global scale.

Pixxel’s technology also relies on very small satellites (basically the size of a beer fridge) that nonetheless provide very high-quality images at a cadence that even existing large imaging satellite networks would have trouble delivering. The startup’s founders, Awais Ahmed and Kshitij Khandelwal, created the company while finishing the last year of their undergraduate studies. The founding team took part in Techstars’ Starburst Space Accelerator in LA last year.

#aerospace, #artificial-intelligence, #bengaluru, #blume-ventures, #earth, #google, #imaging, #learning, #lightspeed-india-partners, #louisiana, #mentorships, #outer-space, #private-spaceflight, #recent-funding, #satellite, #small-satellite, #space, #spaceflight, #startups, #tc, #techstars

Ford to use Boston Dynamics’ dog-like robots to map their manufacturing facilities

Ford is going to employ two of Boston Dynamics’ ‘Spot’ robots – four-legged, dog-like walking robots that weigh roughly 70 lbs each – to help it update the original engineering plans for one of its transmission manufacturing plants. The plants, Ford explains, have undergone any number of changes since their original construction, and it’s difficult to know whether the plans on file match the reality of the facilities as they exist today. The Spot robots, with their laser scanning and imaging capabilities, will be able to produce highly detailed and accurate maps that Ford engineers can then use to modernize and retool the facility.

There are a few benefits Ford hopes to realize by employing the Spot robots in place of humans to map the facility. First, they should save a considerable amount of time, since they replace a time-intensive process of setting up a tripod-mounted laser scanner at various points throughout the facility and spending a while at each location manually capturing the environment. The Spot robots rove and scan continuously, cutting the time needed to complete a facility scan by up to 50%.

The robot dogs are also equipped with five cameras as well as laser scanners, and can operate for up to two hours travelling at around 3 mph continuously. The data they collect can then be synthesized for a more complete overall picture, and because of their small size and nimble navigation capabilities, they can map areas of the plant that aren’t necessarily reachable by people attempting to do the same job.

This is a pilot program that Ford is conducting using two Spot robots leased from Boston Dynamics. But if it works out the way the company seems to think it will, you can imagine the automaker expanding the program to cover other efforts at more of its manufacturing facilities.

#automotive, #boston-dynamics, #companies, #ford, #imaging, #laser, #optics, #robot, #robotics, #tc, #transportation

NASA to fly a football stadium-sized high-altitude balloon to study light from newborn stars

NASA’s latest mission won’t actually reach space – but it will come very close, with a massive observation craft made up of a football stadium-sized high-altitude balloon and a special stratospheric telescope instrument that can observe wavelengths of light cast from newly formed stars that are blocked by Earth’s atmosphere.

The mission is called the ‘Astrophysics Stratospheric Telescope for High Spectral Resolution Observations at Submillimeter-wavelengths,’ shortened to ASTHROS since that’s a mouthful. It is currently set to take off from Antarctica in December 2023, and the main payload is an 8.4-foot telescope that will point itself at four primary targets, including two regions in the Milky Way where scientists have observed star-formation activity.

That telescope, the largest ever to be flown in this way, will be held aloft by a balloon measuring roughly 400 feet wide when fully inflated, with scientists on the ground able to precisely direct the orientation of the business end of the observation instrument. Its mission will include two or three full loops around the South Pole over a period of three to four weeks as it drifts along high-altitude stratospheric winds. After that, the telescope will separate from the balloon and return to Earth slowed by a parachute, so that it can potentially be recovered and reflown in the future to perform similar experiments.

While floating a balloon up to the edge of Earth’s atmosphere might sound like more of a relaxed affair than launching a satellite using an explosion-propelled rocket, NASA’s Jet Propulsion Lab engineer Jose Siles said in an agency release that balloon science missions are actually higher-risk than space missions, in part because many elements of them are novel. At the same time, however, they have the potential to provide significant rewards at a reduced cost relative to satellite launches on rockets.

The end goal is for ASTHROS to create “the first detailed 3D maps of the density, speed and motion of gas” in these regions around newborn stars, in order to help better understand how this gas can impede the development of other stars, or encourage the birth of some. This will be helpful in refining existing simulations of the formation and evolution of galaxies, the agency says.

#aerospace, #astronomy, #balloon, #engineer, #imaging, #science, #space, #stratosphere, #tc

Autonomous driving startup turns its AI expertise to space for automated satellite operation

Hungarian autonomous driving startup AImotive is leveraging its technology to address a different industry and growing need: autonomous satellite operation. AImotive is teaming up with C3S, a supplier of satellite and space-based technologies, to develop a hardware platform for performing AI operations onboard satellites. AImotive’s aiWare neural network accelerator will be optimized by C3S for use on satellites, which have a set of operating conditions that in many ways resembles those onboard cars on the road – but with more stringent requirements in terms of power management, and environmental operating hazards.

The goal of the team-up is to have AImotive’s technology working on satellites that are actually operational on orbit by the second half of next year. The projected applications of onboard neural network acceleration extend to a number of different functions according to the companies, including telecommunications, Earth imaging and observation, autonomously docking satellites with other spacecraft, deep space mining and more.

While it’s true that most satellites already operate essentially in an automated fashion – meaning they’re not generally manually flown at every given moment – true neural network-based onboard AI would give them much more autonomy when it comes to performing tasks, like imaging a specific area or looking for specific markers in ground- or space-based targets. AImotive and C3S also believe that local processing of data has the potential to be a significant game-changer for the satellite business.

Currently, most of the processing of data collected by satellites is done after the raw information is transmitted to ground stations. That can result in a lot of lag time between data collection and delivery of processed data to customers, particularly when the satellite operator or another go-between is acting as the processor on behalf of the client rather than just delivering raw info (and doing this analysis is also a more lucrative proposition for the data provider, of course).

AImotive’s tech could mean that processing happens locally, on the satellite where the information is captured. There’s been a big shift towards this kind of ‘computing at the edge’ in the ground-based IoT world, and it only makes sense to replicate that in space, for many of the same reasons – including that it reduces time to delivery, meaning more responsive service for paying customers.
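
A rough, hypothetical comparison shows why that matters for time-to-delivery; the link rate, data volumes and processing times below are illustrative assumptions, not AImotive or C3S figures.

```python
# Back-of-the-envelope comparison (hypothetical numbers) of two paths from
# capture to customer: downlink raw data and process on the ground, versus
# process on board and downlink only the much smaller results.

RAW_PASS_DATA_GB = 8.0          # raw imagery collected during one pass
PRODUCT_DATA_GB = 0.2           # size of processed results (detections, chips)
DOWNLINK_RATE_MBPS = 200.0      # assumed downlink rate in megabits per second
GROUND_PROCESSING_MIN = 30.0    # batch processing after receipt on the ground
ONBOARD_PROCESSING_MIN = 10.0   # slower on-board compute, but runs before contact

def downlink_minutes(gigabytes: float, rate_mbps: float) -> float:
    """Time to transmit a payload of the given size at the given link rate."""
    return (gigabytes * 8_000) / rate_mbps / 60  # GB -> megabits -> minutes

raw_path = downlink_minutes(RAW_PASS_DATA_GB, DOWNLINK_RATE_MBPS) + GROUND_PROCESSING_MIN
edge_path = ONBOARD_PROCESSING_MIN + downlink_minutes(PRODUCT_DATA_GB, DOWNLINK_RATE_MBPS)

print(f"Downlink raw, then process on the ground: ~{raw_path:.0f} min")
print(f"Process on board, downlink products only: ~{edge_path:.0f} min "
      "(and the processing can run before the ground pass even starts)")
```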

#aerospace, #aimotive, #artificial-intelligence, #computing, #imaging, #neural-network, #satellite, #satellite-imagery, #satellites, #science, #space, #spacecraft, #spaceflight, #tc, #telecommunications

High Earth Orbit Robotics uses imaging satellites to provide on-demand check-ups for other satellites

Maintaining satellites on orbit and ensuring they make full use of their operational lifespan has never been more important, given concerns around sustainable operations in an increasingly crowded orbital environment. As companies tighten their belts financially to deal with the ongoing economic impact of COVID-19, too, it’s more important than ever for in-space assets to live up to their max potential. A startup called High Earth Orbit (HEO) Robotics has a very clever solution that makes use of existing satellites to provide monitoring services for others, generating revenue from unused Earth imaging satellite time and providing a valuable maintenance service all at the same time.

HEO’s model employs cameras already on orbit mounted on Earth observation satellites operated by partner companies, and tasks them with collecting images of the satellites of its customers, who are looking to ensure their spacecraft are in good working order, oriented in the correct way, and with all their payloads properly deployed. Onboard instrumentation can provide satellite operators with a lot of diagnostic information, but sometimes there are problems only external photography can properly identify, or that require confirmation or further detail to resolve.

The beauty of HEO’s model is that it’s truly a win for all involved: Earth observation satellites generally aren’t in use at all times – they have considerable downtime, particularly when they’re over open water, as HEO founder and CEO William Crowe tells me.

“We try to use the satellites at otherwise low-value times, like when they are over the ocean (which of course is most of the time),” Crowe said via email. “We also task our partners just like we would as a regular Earth-imaging business, specifying an area on Earth’s surface to image, the exception being that there is always a spacecraft in the field-of-view.”

The company is early on in its trajectory, but it has just released a proof-of-concept capture of the International Space Station, as seen in the slides provided by HEO below. The image was captured by a satellite owned by the Korean Aerospace Research Institute, which is operated by commercial satellite operator SI Imaging Services. HEO’s software compensated for the relative velocity of the satellite to the ISS, which was a very fast 10 km/s (around 6.2 miles per second). The company says it’s working towards getting even higher-resolution images.
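
Some rough numbers show why that compensation is hard (the stand-off range and pixel scale here are illustrative assumptions; only the 10 km/s relative velocity comes from HEO): at a range R the target sweeps an angular rate ω = v_rel / R, and keeping motion blur under one pixel of instantaneous field of view (IFOV) bounds the effective exposure time.

```latex
\omega \;=\; \frac{v_{\mathrm{rel}}}{R} \;=\; \frac{10\ \mathrm{km/s}}{100\ \mathrm{km}} \;=\; 0.1\ \mathrm{rad/s},
\qquad
t_{\mathrm{exp}} \;\lesssim\; \frac{\mathrm{IFOV}}{\omega} \;=\; \frac{10\ \mu\mathrm{rad}}{0.1\ \mathrm{rad/s}} \;=\; 100\ \mu\mathrm{s}
```

Hence the imaging satellite needs either to track the target precisely or to use very short exposures, which is the kind of pointing and timing problem HEO’s software is built to handle.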

Another advantage of HEO’s model is that it requires no capital expenditure for the satellites used: Crowe explained that the company currently pays per use, which means it only spends when it has a client request, so the revenue covers the cost of tasking the partner satellite. HEO does plan to launch its own satellites in the “medium-term,” however, Crowe said, in order to cover gaps that currently exist in coverage, and in anticipation of an explosion in the low Earth orbit satellite population, which is expected to grow from around 2,000 spacecraft today to as many as 100,000 or more over roughly the next decade.

HEO could ultimately provide imaging of not only other satellites, but also space debris to help with removal efforts, and even asteroids that could prove potential targets for mining and resource gathering. It’s a remarkably well-considered idea that stands to benefit from the explosion of growth in the orbital satellite industry, and also stands out among space startups because it has a near-term path to revenue that doesn’t require a massive outlay of capital up front.

#aerospace, #ceo, #imaging, #international-space-station, #robotics, #satellite, #satellite-imagery, #satellites, #space, #spacecraft, #spaceflight, #starlink, #tc

Rocket Lab launch fails during rocket’s second stage burn, causing a loss of vehicle and payloads

Rocket Lab’s ‘Pic or it didn’t happen’ launch on Saturday ended in failure, with a total loss of the Electron launch vehicle and all seven payloads on board. The launch vehicle experienced a failure during the second stage burn post-launch, after a lift-off from the Rocket Lab Launch Complex 1 on Mahia Peninsula in New Zealand.

The mission appeared to be progressing as intended, but the launch vehicle seemed to experience unexpected stress during the ‘Max Q’ phase of launch – the period during which the Electron rocket encounters the most significant atmospheric pressure prior to entering space.

Launch video cut off around six minutes after liftoff, and the rocket was subsequently shown to be losing altitude before the webstream ended. Rocket Lab then revealed via Twitter that the Electron vehicle was lost during the second-stage burn, and committed to sharing more information when it becomes available.

This is an unexpected development for Rocket Lab, which has flown 11 uneventful consecutive Electron missions since the beginning of its program.

Rocket Lab CEO and founder Peter Beck posted an apology to Twitter, noting that all satellites were lost and that he’s “incredibly sorry” to all customers who suffered a loss of payload today. That includes Canon, which was flying a new Earth-imaging satellite with demonstration imaging tech on board, as well as Planet, which had five satellites for its newest and most advanced Earth-imaging constellation on the vehicle.

We’ll update with more info about the cause and next steps from Rocket Lab when available.

#aerospace, #ceo, #electron, #flight, #imaging, #new-zealand, #outer-space, #peter-beck, #rocket, #rocket-lab, #satellite, #spaceflight, #spaceport, #tc