C2i, a genomics SaaS product to detect traces of cancer, raises $100M Series B

If you or a loved one has ever undergone a tumor removal as part of cancer treatment, you’re likely familiar with the period of uncertainty and fear that follows. Will the cancer return, and if so, will the doctors catch it at an early enough stage? C2i Genomics has developed software that’s 100x more sensitive in detecting residual disease, and investors are pouncing on the potential. Today, C2i announced a $100 million Series B led by Casdin Capital. 

“The biggest question in cancer treatment is, ‘Is it working?’ Some patients are getting treatment they don’t benefit from and they are suffering the side effects while other patients are not getting the treatment they need,” said Asaf Zviran, co-founder and CEO of C2i Genomics, in an interview.

Historically, the main approach to detecting cancer after surgery has been imaging such as MRI or X-ray, but neither method is very accurate until the cancer has progressed to a certain point. As a result, a patient’s cancer may return, but it may be a while before doctors are able to catch it.

Using C2i’s technology, doctors can order a liquid biopsy, essentially a blood draw that captures circulating DNA. From there, the lab sequences the entire genome and uploads it to the C2i platform. The software then scans the sequence for faint patterns that signal the presence of cancer, and can show whether it’s growing or shrinking.

“C2i is basically providing the software that allows the detection and monitoring of cancer to a global scale. Every lab with a sequencing machine can process samples, upload to the C2i platform and provide detection and monitoring to the patient,” Zviran told TechCrunch.

C2i Genomics’ solution is based on research performed at the New York Genome Center (NYGC) and Weill Cornell Medicine (WCM) by Dr. Zviran, along with Dr. Dan Landau, faculty member at the NYGC and assistant professor of medicine at WCM, who serves as scientific co-founder and member of C2i’s scientific advisory board. The research and findings have been published in the medical journal, Nature Medicine.

While the product is not FDA-approved yet, it’s already being used in clinical research and drug development research at NYU Langone Health, the National Cancer Center of Singapore, Aarhus University Hospital and Lausanne University Hospital.

If and when approved, New York-based C2i has the potential to drastically change cancer treatment, including in the area of organ preservation. For example, some people have functional organs, such as the bladder or rectum, removed to prevent cancer from returning, leaving them disabled. But what if the unnecessary surgeries could be avoided? That’s one goal that Zviran and his team have their minds set on achieving.

For Zviran, this story is personal. 

“I started my career very far from cancer and biology, and at the age of 28 I was diagnosed with cancer and I went for surgery and radiation. My father and then both of my in-laws were also diagnosed, and they didn’t survive,” he said.

Zviran, who today has a PhD in molecular biology, was previously an engineer with the Israeli Defense Force and some private companies. “As an engineer, looking into this experience, it was very alarming to me about the uncertainty on both the patients’ and physicians’ side,” he said.

This round of funding will be used to accelerate clinical development and commercialization of the company’s C2-Intelligence Platform. Other investors that participated in the round include NFX, Duquesne Family Office, Section 32 (Singapore), iGlobe Partners and Driehaus Capital.

#artificial-intelligence, #biotech, #blood-test, #c2i-genomics, #cancer, #cancer-screening, #cancer-treatment, #casdin-capital, #cloud, #cornell, #drug-development, #fda, #funding, #health, #imaging, #mri, #new-york-university, #radiation, #recent-funding, #saas, #startups, #surgery, #tc, #tumor, #x-ray


Pixxel closes $7.3M seed round and unveils commercial hyperspectral imaging product

LA and Bangalore-based space startup Pixxel has closed a $7.3 million seed round, including newly committed capital from Techstars, Omnivore VC and more. The company has also announced a new product focus: hyperspectral imaging. It aims to provide that imaging at the highest resolution commercially available, via a small satellite constellation that will provide 24-hour, global coverage once it’s fully operational.

Pixxel’s funding today is an extension of the $5 million it announced it had raised back in August of last year. At the time, the startup had only revealed that it was focusing on Earth imaging, and it’s unveiling its specific pursuit of hyperspectral imaging for the first time today. Hyperspectral imaging uses far more light frequencies than the much more commonly used multispectral imaging employed in satellite observation today, allowing for unprecedented insight and detection of previously invisible issues, including migration of pest insect populations in agriculture, or observing gas leaks and other ecological threats.

Standard multispectral imaging (left) vs. hyperspectral imaging (right) Credit: EPFL

“We started with analyzing existing satellite images, and what we could do with this immediately,” explained Pixxel co-founder and CEO Awais Ahmed in an interview. “We realized that in most cases, it was not able to even see certain problems or issues that we wanted to solve – for example, we wanted to be able to look at air pollution and water pollution levels. But to be able to do that there were no commercial satellites that would enable us to do that, or even open source satellite data at the resolution that would enable us to do that.”

The potential of hyperspectral imaging on Earth, across a range of sectors, is huge, according to Ahmed, but Pixxel’s long-term vision is all about empowering a future commercial space sector to make the most of in-space resources.

“We started looking at space as a sector for us to be able to work in, and we realized that what we wanted to do was to be able to enable people to take resources from space to use in space,” Ahmed said. “That included asteroid mining, for example, and when we investigated that, we found hyperspectral imaging was the imaging tech that would enable us to map these asteroids as to whether they contain these metals or these minerals. So that knowledge sort of transferred to this more short-term problem that we were looking at solving.”

Part of the reason that Pixxel’s founders couldn’t find existing available hyperspectral imaging at the resolutions they needed was that as a technology, it has previously been restricted to internal governmental use through regulation. The U.S. recently opened up the ability for commercial entities to pursue very high-resolution hyperspectral imaging for use on the private market, effectively because they realized that these technical capabilities were becoming available in other international markets anyway. Ahmed told me that the main blocker was still technical, however.

Pixxel’s hyperspectral imaging satellite at its production facility in Bangalore

Image Credits: Pixxel

“If we were to build a camera like this even two or three years ago, it would not have been possible because of the miniaturized sensors, the optics, etc.,” he said. “The advances that have happened only happened very recently, so it’s also the fact that this is the right time to take it from the scientific domain to the commercial domain.”

Pixxel now aims to have its first hyperspectral imaging satellite launched and operating on orbit within the next few months, and it will then continue to launch additional satellites after that once it’s able to test and evaluate the performance of its first spacecraft in an actual operating environment.



#aerospace, #asteroid-mining, #awais-ahmed, #bangalore, #imaging, #louisiana, #metal, #optics, #recent-funding, #satellite-constellation, #space, #spectroscopy, #startups, #tc, #techstars, #united-states


Adobe delivers native Photoshop for Apple Silicon Macs and a way to enlarge images without losing detail

Adobe has been moving quickly to update its imaging software to work natively on Apple’s new in-house processors for Macs, starting with the M1-based MacBook Pro and MacBook Air released late last year. After shipping native versions of Lightroom and Camera Raw, it’s now releasing an Apple Silicon-optimized version of Photoshop, which delivers big performance gains vs. the Intel version running on Apple’s Rosetta 2 software emulation layer.

How much better? Per internal testing, Adobe says that users should see performance gains of up to 1.5x across a number of different features offered by Photoshop, vs. the same tasks being done on the emulated version. That’s just the start, however, since Adobe says it’s going to continue to coax additional performance improvements out of the software on Apple Silicon in collaboration with Apple over time. Some features are also still missing from the M1-friendly edition, including the ‘Invite to Edit Cloud Documents’ and ‘Preset Syncing’ options, but those will be ported over in future iterations as well.

In addition to the Apple Silicon version of Photoshop, Adobe is also releasing a new Super Resolution feature in the Camera Raw plugin (to be released for Lightroom later) that ships with the software. This is an image enlarging feature that uses machine learning trained on a massive image dataset to blow up pictures to larger sizes while still preserving details. Adobe has previously offered a super resolution option that combined multiple exposures to boost resolution, but this works from a single photo.

It’s the classic ‘Computer, enhance’ sci-fi feature made real, and it builds on work that Photoshop previously did to introduce its ‘Enhance details’ feature. If you’re not a strict Adobe loyalist, you might also be familiar with Pixelmator Pro’s ‘ML Super Resolution’ feature, which works in much the same way – albeit using a different ML model and training data set.

Adobe's Super Resolution comparison photo

Adobe’s Super Resolution in action

The bottom line is that Adobe’s Super Resolution will output an image with twice the horizontal and twice the vertical resolution – meaning in total, it has 4x the number of pixels. It’ll do that while preserving detail and sharpness, which adds up to allowing you to make larger prints from images that previously wouldn’t stand up to that kind of enlargement. It’s also great for cropping in on photos in your collection to capture tighter shots of elements that previously would’ve been rendered blurry and disappointing as a result.

This feature benefits greatly from GPUs optimized for machine learning workloads, accessed through frameworks like Core ML and Windows ML. That means that Apple’s M1 chip is a perfect fit, since it includes a dedicated ML processing region called the Neural Engine. Likewise, Nvidia’s RTX series of GPUs and their TensorCores are well-suited to the task.

Adobe also released some major updates for Photoshop for iPad, including version history for its Cloud Documents non-local storage. You can also now store versions of Cloud Documents offline and edit them locally on your device.

#adobe-creative-cloud, #adobe-lightroom, #adobe-photoshop, #apple, #apple-inc, #apps, #artificial-intelligence, #imaging, #intel, #m1, #machine-learning, #macintosh, #ml, #photoshop, #pixelmator, #software, #steve-jobs, #tc


Ibex Medical Analytics raises $38M for its AI-powered cancer diagnostic platform

Israel-based Ibex Medical Analytics, which has an AI-driven imaging technology to detect cancer cells in biopsies more efficiently, has raised a $38 million Series B financing round led by Octopus Ventures and 83North. Also participating in the round were aMoon, Planven Entrepreneur Ventures and Dell Technologies Capital, the corporate venture arm of Dell Technologies. The company has now raised a total of $52 million since its launch in 2016. Ibex plans to use the investment to further sell into diagnostic labs in North America and Europe.

Originally incubated out of the Kamet Ventures incubator, Ibex’s “Galen” platform mimics the work of a pathologist, allowing pathologists to diagnose cancer more accurately and faster, and to derive new insights from a biopsy specimen.

Because cancer rates are on the rise and medical procedures have become more complex, pathologists face a higher workload. Plus, says Ibex, there is a global shortage of pathologists, which can mean delays across the whole diagnostic process. The company claims pathologists can be 40% more productive using its solution.

Speaking to TechCrunch, Joseph Mossel, Ibex CEO and Co-founder said: “You can think of it as a pathologist’s assistant, so it kind of prepares the case in advance, marks the regions of interest, and allows the pathologist to achieve the efficiency gains.”

He said the company has signed the largest pathology network in France, and LDPath, a group of five pathology labs that services 24 NHS trusts in the UK, among others.

Michael Niddam of Kamet Ventures said Ibex was an “excellent example of how Kamet works with founders very early on.” Ibex founders Joseph Mossel and Dr. Chaim Linhart had previously joined Kamet as Entrepreneurs in Residence before developing their idea.

#assistant, #cancer, #dell-technologies-capital, #europe, #france, #imaging, #kamet-ventures, #nhs, #north-america, #octopus-ventures, #outer-space, #pathology, #spacecraft, #spaceflight, #tc, #united-kingdom


SpaceX sets new record for most satellites on a single launch with latest Falcon 9 mission

SpaceX has set a new all-time record for the most satellites launched and deployed on a single mission, with its Transporter-1 flight on Sunday. The launch was the first of SpaceX’s dedicated rideshare missions, in which it splits up the payload capacity of its rocket among multiple customers, resulting in a reduced cost for each but still providing SpaceX with a full launch and all the revenue it requires to justify launching one of its vehicles.

The launch today included 143 satellites, 133 of which were from other companies who booked rides. SpaceX also launched 10 of its own Starlink satellites, adding to the more than 1,000 already sent to orbit to power SpaceX’s own broadband communication network. During a launch broadcast last week, SpaceX revealed that it has begun serving beta customers in Canada and is expanding to the UK with its private pre-launch test of that service.

Customers on today’s launch included Planet Labs, which sent up 48 SuperDove Earth imaging satellites; Swarm, which sent up 36 of its own tiny IoT communications satellites, and Kepler, which added to its constellation with eight more of its own communication spacecraft. The rideshare model that SpaceX now has in place should help smaller new space companies and startups like these build out their operational on-orbit constellations faster, complementing other small payload launchers like Rocket Lab, and new entrant Virgin Orbit, to name a few.

This SpaceX launch was also the first to deliver Starlink satellites to a polar orbit, which is a key part of the company’s continued expansion of its broadband service. The mission also included a successful landing and recovery of the Falcon 9 rocket’s first-stage booster, the fifth for this particular booster, and a dual recovery of the fairing halves used to protect the cargo during launch, which were fished out of the Atlantic Ocean using its recovery vessels and will be refurbished and reused.

#aerospace, #broadband, #canada, #communications-satellites, #elon-musk, #falcon, #falcon-9, #hyperloop, #imaging, #outer-space, #planet-labs, #rocket-lab, #satellite, #space, #spaceflight, #spacex, #starlink, #startups, #tc, #united-kingdom


Watch SpaceX’s first dedicated rideshare rocket launch live, carrying a record-breaking payload of satellites

 

SpaceX is set to launch the very first of its dedicated rideshare missions – an offering it introduced in 2019 that allows small satellite operators to book a portion of a payload on a Falcon 9 launch. SpaceX’s rocket has a relatively high payload capacity compared to the size of many of the small satellites produced today, so a rideshare mission like this offers smaller companies and startups a chance to get their spacecraft in orbit without breaking the bank. Today’s attempt is scheduled for 10 AM EST (7 AM PST) after a first try yesterday was cancelled due to weather. So far, weather looks much better for today.

The cargo capsule atop the Falcon 9 flying today holds a total of 133 satellites according to SpaceX, which is a new record for the highest number of satellites being launched on a single rocket – beating out a payload of 104 spacecraft delivered by Indian Space Research Organization’s PSLV-C37 launch back in February 2017. It’ll be a key demonstration not only of SpaceX’s rideshare capabilities, but also of the complex coordination involved in a launch that includes deployment of multiple payloads into different target orbits in relatively quick succession.

This launch will be closely watched in particular for its handling of orbital traffic management, since it definitely heralds what the future of private space launches could look like in terms of volume of activity. Some of the satellites flying on this mission are not much larger than an iPad, so industry experts will be paying close attention to how they’re deployed and tracked to avoid any potential conflicts.

Some of the payloads being launched today include significant volumes of startup spacecraft, including 36 of Swarm’s tiny IoT network satellites, and eight of Kepler’s GEN-1 communications satellites. There are also 10 of SpaceX’s own Starlink satellites on board, and 48 of Planet Labs’ Earth-imaging spacecraft.

The launch stream above should begin around 15 minutes prior to the mission start, which is set for 10 AM EST (7 AM PST) today.

#aerospace, #bank, #communications-satellites, #falcon-9, #imaging, #indian-space-research-organization, #ipad, #outer-space, #planet-labs, #satellite, #small-satellite, #space, #spacecraft, #spaceflight, #spacex, #starlink, #tc


Watch SpaceX launch its first dedicated rideshare mission live, carrying a record-breaking number of satellites

[UPDATE: Today’s attempt was scrubbed due to weather conditions. Another launch window is available tomorrow at 10 AM ET]

SpaceX is set to launch the very first of its dedicated rideshare missions – an offering it introduced in 2019 that allows small satellite operators to book a portion of a payload on a Falcon 9 launch. SpaceX’s rocket has a relatively high payload capacity compared to the size of many of the small satellites produced today, so a rideshare mission like this offers smaller companies and startups a chance to get their spacecraft in orbit without breaking the bank.

The cargo capsule atop the Falcon 9 flying today holds a total of 133 satellites according to SpaceX, which is a new record for the highest number of satellites being launched on a single rocket – beating out a payload of 104 spacecraft delivered by Indian Space Research Organization’s PSLV-C37 launch back in February 2017. It’ll be a key demonstration not only of SpaceX’s rideshare capabilities, but also of the complex coordination involved in a launch that includes deployment of multiple payloads into different target orbits in relatively quick succession.

This launch will be closely watched in particular for its handling of orbital traffic management, since it definitely heralds what the future of private space launches could look like in terms of volume of activity. Some of the satellites flying on this mission are not much larger than an iPad, so industry experts will be paying close attention to how they’re deployed and tracked to avoid any potential conflicts.

Some of the payloads being launched today include significant volumes of startup spacecraft, including 36 of Swarm’s tiny IoT network satellites, and eight of Kepler’s GEN-1 communications satellites. There are also 10 of SpaceX’s own Starlink satellites on board, and 48 of Planet Labs’ Earth-imaging spacecraft.

The launch stream above should begin around 15 minutes prior to the mission start, which is set for 9:40 AM EST (6:40 AM PST) today.

#aerospace, #bank, #communications-satellites, #falcon-9, #imaging, #indian-space-research-organization, #ipad, #outer-space, #planet-labs, #satellite, #small-satellite, #space, #spacecraft, #spaceflight, #spacex, #starlink, #tc


Teledyne to acquire FLIR in $8 billion cash and stock deal

Industrial sensor giant Teledyne is set to acquire sensing company FLIR in a deal valued at around $8 billion in a mix of stock and cash, pending approvals, with an expected closing date sometime in the middle of this year. While both companies make sensors, aimed primarily at industrial and commercial customers, they actually focus on different specialties, which Teledyne said in a press release makes FLIR’s business complementary to, rather than competitive with, its existing offerings.

FLIR’s technology has appeared in the consumer market via add-on thermal cameras designed for mobile devices, including the iPhone. These are useful for things like identifying the source of drafts and potential plumbing leaks, but the company’s main business, which includes not only thermal imaging, but also visible light imaging, video analytics and threat detection technology, serves deep-pocketed customers including the aerospace and defense industries.

Teledyne also serves aerospace and defense customers, including NASA, as well as healthcare, marine and climate monitoring agencies. The company’s suite of offerings includes seismic sensors, oscilloscopes and other instrumentation, as well as digital imaging, but FLIR’s products cover some areas not currently addressed by Teledyne, and in more depth.

#aerospace, #california, #companies, #digital-imaging, #flir, #healthcare, #imaging, #iphone, #mobile-devices, #surveillance, #tc, #thermal-imaging


Iris Automation raises $13 million for visual drone object avoidance tech

It’s only a matter of time now before drones become a key component of everyday logistics infrastructure, but there are still significant barriers between where we are today and that future – particularly when it comes to regulation. Iris Automation is developing computer vision products that can help simplify the regulatory challenges involved in setting standards for pilotless flight, thanks to its detect-and-avoid technology that can run using a wide range of camera hardware. The company has raised a $13 million Series B funding round to improve and extend its tech, and to help provide demonstrations of its efficacy in partnership with regulators.

I spoke to Iris Automation CEO Jon Damush, and Iris Automation investor Tess Hatch, VP at Bessemer Venture Partners, about the round and the startup’s progress and goals. Damush, who took over as CEO earlier this year, talked about his experience at Boeing, his personal experience as a pilot, and the impact on aviation of the advent of small, cheap and readily accessible electric motors, batteries and powerful computing modules, which have set the stage for an explosion in the commercial UAV industry.

“You’ve now shattered some of the barriers that have been in aerospace for the past 50 years, because you’re starting to really democratize the tools of production that allow people to make things that fly much easier than they could before,” Damush told me. “So with that, and the ability to take a human out of the cockpit, comes some interesting challenges – none more so than the regulatory environment.”

The U.S. Federal Aviation Administration (FAA), and most airspace regulators around the world, essentially break regulations around commercial flight down into two spheres, Damush explains. The first is around operations – what are you going to do while in flight, and are you doing that the right way. The second, however, is about the pilot, and that’s a much trickier thing to adapt to pilotless aircraft.

“One of the biggest challenges is the part of the regulations called 91.113b, and what that part of the regs states is that given weather conditions that permit, it’s the pilot on the airplane that has the ultimate responsibility to see and avoid other aircraft,” Damush said. “That’s not a separation standard that says you’ve got to be three miles away, or five miles away or a mile away – that is a last line of defense, that is a safety net, so that when all the other mitigations that lead to a safe flight from A to B fail, the pilot is there to make sure you don’t collide into somebody.”

Iris comes in here, with an optical camera-based obstacle avoidance system that uses computer vision to effectively replace this last line of defense when there isn’t a pilot to do so. And that unlocks a capability that is a key limiting factor in today’s commercial drone regulatory environment: flying aircraft beyond visual line of sight. All that means is that drones can operate without having to guarantee that an operator has eyes on them at all times. When you first hear that, you imagine that this factors in mostly to long-distance flight, but Damush points out that it’s actually more about volume – removing the constraint of keeping a drone within visual line of sight at all times means you can go from having one operator per drone to one operator managing a fleet of drones, which is when the economies of scale of commercial drone transportation really start to make sense.

Iris has made progress towards making this a reality, working with the FAA this year as part of its integrated pilot program to demonstrate the system in two different use cases. It also released the second version of its Casia system, which can handle significantly longer range object detection. Hatch pointed out that these were key reasons why Bessemer upped its stake with this follow-on investment, and when I asked if COVID-19 has had any impact on industry appetite or confidence in the commercial drone market, she said that has been a significant factor, and it’s also changing the nature of the industry.

“The two largest industries [right now] are agriculture and public safety enforcement,” Hatch told me. “And public safety enforcement was not one of those last year, it was agriculture, construction and energy. That’s definitely become a really important vertical for the drone industry – one could imagine someone having a heart attack or an allergic reaction, an ambulance takes on average 14 minutes to get to that person, when a drone can be dispatched and deliver an AED or an epi pen within minutes, saving that person’s life. So I really hope that tailwind continues post COVID.”

This Series B round includes investment from Bee Partners, OCA Ventures, and new strategic investors Sony Innovation Fund and Verizon Ventures (disclosure: TechCrunch is owned by Verizon Media Group, though we have no involvement, direct or otherwise, with their venture arm). Damush pointed out that Sony provides great potential strategic value because it develops so much of the imaging sensor stack used in the drone industry, and Sony also develops drones itself. For its part, Verizon offers key partner potential on the connectivity front, which is invaluable for managing large-scale drone operations.

#aerospace, #articles, #bee-partners, #bessemer-venture-partners, #boeing, #ceo, #computing, #drone, #embedded-systems, #emerging-technologies, #energy, #federal-aviation-administration, #funding, #imaging, #iris-automation, #recent-funding, #robotics, #science-and-technology, #sony-innovation-fund, #startups, #tc, #technology, #tess-hatch, #unmanned-aerial-vehicles, #verizon-media-group, #verizon-ventures, #vp


Rocket Lab successfully launches satellite for Japanese startup Synspective

Rocket Lab has completed its 17th mission, putting a synthetic aperture radar (SAR) satellite on orbit for client Synspective, a Tokyo-based space startup that has raised over $100 million in funding to date. Synspective aims to operate a 30-satellite constellation that can provide global imaging coverage of Earth, with SAR’s benefits of being able to see through clouds and inclement weather, as well as in all lighting conditions.

This is Synspective’s first satellite on orbit, and it took off from Rocket Lab’s launch facility on the Mahia Peninsula in New Zealand. It will operate in a sun-synchronous orbit approximately 500 kilometers from Earth, and will act as a demonstrator of the startup’s technology to pave the way for the full constellation, which will provide commercially available SAR data both raw and processed via the company’s in-development AI technology to provide analytics and insights.

For Rocket Lab, this marks the conclusion of a successful year in launch operations, which also saw the company take its key first steps towards making its Electron launch system partially reusable. The company did have one significant setback as well, with a mission that failed to deliver its payloads to orbit in July, but the company quickly bounced back from that failure with improvements to prevent a similar incident in the future.

In 2021, Rocket Lab will aim to launch its first mission from the U.S., using its new launch facility at Wallops Island, in Virginia. That initial U.S. flight was supposed to happen in 2020, but the COVID-19 pandemic, followed by a NASA certification process for one of its systems, pushed the launch to next year.

#aerospace, #electron, #imaging, #new-zealand, #outer-space, #rocket-lab, #satellite, #science, #space, #spaceflight, #synspective, #tc, #tokyo, #united-states, #virginia


LA-based A-Frame, a developer of celebrity-led personal care brands, raises cash for its brand incubator

A-Frame, a Los Angeles-based developer of personal care brands supported by celebrities, has raised $2 million in a new round of funding led by Initialized Capital.

Joining Initialized in the round is the serial entrepreneur Moise Emquies, whose previous clothing lines, Ella Moss and Splendid, were acquired by the fashion holding company VFC in 2017.

A-Frame previously raised a seed round backed by cannabis dispensary Columbia Care. The company’s first product is a hand soap, Keeper. Other brands in suncare and skincare, children and babycare, and bath and body will follow, the company said.

“We partner with the investment groups at the agencies,” said company founder and chief executive, Ari Bloom. “We start interviewing different talent, speaking with their agents and their managers. We create an entity that we spin out. I wouldn’t say that we compete with the agencies.”

So far, the company has worked with CAA, UTA and WME on all of the brands in development, Bloom said. Two new brands should launch in the next couple of weeks.

As part of the round, actor, activist, and author Hill Harper has joined the company as a co-founder and as the company’s chief strategy officer. Emquies is also joining the company as its chief brand officer.

“Hill is my co-founder. He and I have worked together for a number of years. He’s with me at the holding company level. Identifying the opportunities,” said Bloom. “He’s bridging the gap between business and talent. He’s a part of the conversations when we talk to the agencies, managers and the talent. He’s a great guy that I think has a lot of respect in the agency and talent world.”

Initialized General Partner Alda Leu Dennis took point on the investment for Initialized and will take a seat on the company’s board of directors alongside Emquies. Other directors include Columbia Care chief executive, Nicholas Vita, and John D. Howard, the chief executive of Irving Place Capital.

“For us the calculus was to look at personal care and see what categories need to be reinvented because of sustainability,” said Bloom. “It was important to us once we get to a category what is the demographic opportunity. Even if categories were somewhat evolved they’re not all the way there… everything is in non-ingestible personal care. When you have a celebrity focused brand you want to focus on franchise items.”

The Keeper product is sold on a subscription-based model for soap concentrates and cleansing hand sprays.

A serial entrepreneur, Bloom previously ran the AR imaging company Avametric, which was backed by Khosla Ventures and Y Combinator and wound up getting acquired by Gerber Technology in 2018. Bloom is also a founder of Wise Sons Delicatessen in San Francisco.

“We first invested in Avametric at Initialized in 2013 and he had experience prior to that as well. From a venture perspective I think of these all around real defensibility of brand building,” said Dennis.

The investors believe that between Bloom’s software for determining market preferences, A-Frame’s roster of celebrities and the company’s structure as a brand incubator, all of the ingredients are in place for a successful direct-to-consumer business.

However, venture capitalists have been down this road before. The Honest Co. was an early attempt to build a brand around sustainable personal care products. Bloom said Honest provided several lessons for his young startup, one of them being knowing when a company has reached the peak of its growth trajectory and created an opportunity for other, larger companies to take the business to the next level.

“Our goal is a three-to-seven year horizon that is big enough at a national scale that a global player can come in and internationally scale it,” said Bloom.

#alda-leu-dennis, #ceo, #co-founder, #imaging, #initialized-capital, #khosla-ventures, #los-angeles, #san-francisco, #serial-entrepreneur, #tc, #y-combinator

Intel is providing the smarts for the first satellite with local AI processing on board

Intel detailed today its contribution to PhiSat-1, a new small satellite that was launched into sun-synchronous orbit on September 2. PhiSat-1 carries a new kind of hyperspectral-thermal camera, and also includes a Movidius Myriad 2 Vision Processing Unit. That VPU is found in a number of consumer devices on Earth, but this is its first trip to space – and the first time it’ll be handling large amounts of data locally, saving researchers back on Earth precious time and satellite downlink bandwidth.

Specifically, the AI on board PhiSat-1 will handle automatic identification of cloud cover – images where clouds obscure the parts of Earth that the scientists studying the data actually want to see. Discarding these images before they’re even transmitted means the satellite can realize bandwidth savings of up to 30%, so more useful data reaches Earth during the limited windows when the satellite is in range of ground stations.
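The screening step described above can be sketched in a few lines. This is a purely illustrative mock-up, not Ubotica’s actual pipeline; the function name, threshold and per-frame cloud scores are all assumptions for the sake of the example:

```python
# Illustrative only: drop cloudy frames on board before downlink, keeping
# usable imagery and tracking how much bandwidth was avoided. The threshold
# and scores here are made-up values, not PhiSat-1's real parameters.

def filter_cloudy(frames, cloud_scores, threshold=0.7):
    """Keep frames whose estimated cloud fraction is below `threshold`,
    and report the fraction of frames (and thus downlink) saved."""
    kept = [f for f, score in zip(frames, cloud_scores) if score < threshold]
    saved = 1 - len(kept) / len(frames) if frames else 0.0
    return kept, saved

frames = ["img0", "img1", "img2", "img3", "img4"]
scores = [0.1, 0.9, 0.4, 0.8, 0.2]  # hypothetical cloud fractions per frame
kept, saved = filter_cloudy(frames, scores)
print(kept)   # frames clear enough to transmit
print(saved)  # fraction of transmissions avoided
```

With two of five frames discarded, 40% of the downlink is skipped – the same order of savings as the up-to-30% figure the PhiSat-1 team cites.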

The AI software that runs on the Intel Myriad 2 on PhiSat-1 was created by startup Ubotica, which worked with the hardware maker behind the hyperspectral camera. The software also had to be tuned to compensate for the excess radiation exposure of space, though somewhat surprisingly, testing at CERN found that the hardware itself didn’t have to be modified in order to perform within the standards required for its mission.

Computing at the edge takes on a whole new meaning when applied to satellites on orbit, but it’s definitely a place where local AI makes a ton of sense. All the same reasons that companies seek to handle data processing and analytics at the site of sensors here on Earth also apply in space – magnified exponentially by network inaccessibility and connection quality – so expect to see a lot more of this.

PhiSat-1 was launched in September as part of Arianespace’s first rideshare demonstration mission, which the company aims to use to show off its ability to offer launch services to smaller startups with smaller payloads at lower costs.

#aerospace, #artificial-intelligence, #data-processing, #imaging, #intel, #movidius, #prisma, #radiation, #satellite, #science, #space, #spectroscopy, #tc, #vision

Satellite radar startup ICEYE raises $87 million to continue to grow its operational constellation

Finnish startup ICEYE, which has been building out and operating a constellation of Synthetic-Aperture Radar (SAR) small satellites, has raised an $87 million Series C round of financing. This round of funding was led by existing investor True Ventures, and includes participation by OTB Ventures, and it brings the total funding for ICEYE to $152 million since its founding in 2014.

ICEYE has already launched a total of five SAR satellites, and will be launching an additional four later this year, with a plan to add eight more throughout 2021. Its SAR satellites were the first small satellites ever with SAR imaging capabilities, and the company designed and built the spacecraft in-house. SAR imaging is innovative because it uses a relatively small physical antenna, combined with fast motion across a targeted imaging area, to create a synthetic aperture much larger than the physical aperture of the radar antenna itself – which in turn means it’s capable of producing very high-resolution, two- and three-dimensional images of areas and objects.
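The textbook approximations make clear why the synthetic aperture matters: a real aperture’s azimuth resolution degrades with range (roughly λR/L), while a synthetic aperture’s is about L/2, independent of range. The numbers below (wavelength, slant range, antenna length) are generic assumed values for illustration, not ICEYE’s actual specifications:

```python
# Back-of-the-envelope comparison using standard textbook formulas;
# all three input values are assumptions, not ICEYE's real parameters.

wavelength_m = 0.031   # X-band radar wavelength, ~31 mm (assumed)
slant_range_m = 500_000.0  # distance to target from low Earth orbit (assumed)
antenna_len_m = 3.2    # physical antenna length (assumed)

# Real aperture: azimuth resolution ~ lambda * R / L (worsens with range).
real_res_m = wavelength_m * slant_range_m / antenna_len_m

# Synthetic aperture: azimuth resolution ~ L / 2 (independent of range).
sar_res_m = antenna_len_m / 2

print(real_res_m)  # thousands of meters from a real aperture this small
print(sar_res_m)   # a couple of meters from the synthetic aperture
```

Under these assumed values the real aperture resolves only kilometer-scale features from orbit, while the synthetic aperture gets to meter scale – the gap a small SAR satellite exploits.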

ICEYE has been able to rack up a number of achievements, including a record-setting 0.25-meter resolution for a small SAR satellite, and record turnaround time for captured data: just five minutes from when data begins its downlink to ground stations to processed images being available for customers to use on their own systems.

The purpose of this funding is to continue and speed up the growth of the ICEYE satellite constellation, as well as to provide round-the-clock customer service operations across the world. ICEYE also hopes to set up U.S.-based manufacturing operations for future spacecraft.

SAR, along with other types of Earth imaging, has actually grown in demand during the ongoing COVID-19 crisis – especially when provided by companies focused on delivering it via lower-cost, small satellite operations. That’s in part due to its ability to supplement inspection and monitoring work that would previously have been done in person, or handled via expensive operations such as aircraft observation or tasked geosynchronous satellites.

#aerospace, #capella-space, #iceye, #imaging, #recent-funding, #satellite-constellation, #satellites, #science, #spacecraft, #spaceflight, #startups, #tc, #true-ventures

MIT engineers develop a totally flat fisheye lens that could make wide-angle cameras easier to produce

Engineers at MIT, in partnership with the University of Massachusetts at Lowell, have devised a way to build a camera lens that avoids the typical spherical curve of ultra-wide-angle glass, while still providing true optical fisheye distortion. Fisheye lenses are relatively specialized, producing images that can cover an area as wide as 180 degrees or more, but they can be very costly to produce, and are typically large, heavy lenses that aren’t ideal for small cameras like those found on smartphones.

This is the first time that a flat lens has been able to produce clear, 180-degree images that cover a true panoramic spread. The engineers made it work by patterning one side of a thin wafer of glass with microscopic, three-dimensional structures positioned very precisely in order to scatter inbound light in precisely the same way that a curved piece of glass would.

The version created by the researchers in this case is actually designed to work specifically with the infrared portion of the light spectrum, but they could also adapt the design to work with visible light, they say. Whether IR or visible light, there are a range of potential uses of this technology, since capturing a 180-degree panorama is useful not only in some types of photography, but also for practical applications like medical imaging, and in computer vision applications where range is important to interpreting imaging data.

This design is just one example of what’s called a ‘Metalens’ – lenses that make use of microscopic features to change their optical characteristics in ways that would traditionally have been accomplished through macro design changes – like building a lens with an outward curve, for instance, or stacking multiple pieces of glass with different curvatures to achieve a desired field of view.

What’s unusual here is that the ability to capture a clear, detailed and accurate 180-degree panoramic image with a perfectly flat metalens design came as a surprise even to the engineers who worked on the project. It’s definitely an advancement of the science beyond what many assumed was the state of the art.

#fisheye-lens, #gadgets, #glass, #hardware, #imaging, #lenses, #massachusetts, #medical-imaging, #mit, #optics, #science, #science-and-technology, #smartphones, #tc

Join us Wednesday, September 9 to watch Techstars Starburst Space Accelerator Demo Day live

The 2020 class of Techstars’ Starburst Space Accelerator is graduating with an official Demo Day on Wednesday at 10 AM PT (1 PM ET), and you can watch all the teams present their startups live via the stream above. This year’s class includes 10 companies building innovative new solutions to challenges either directly or indirectly related to commercial space.

Techstars Starburst is a program with a lot of heavyweight backing from both private industry and public agencies, including from NASA’s JPL, the U.S. Air Force, Lockheed Martin, Maxar Technologies, SAIC, Israel Aerospace Industries North America, and The Aerospace Corporation. The program, led by Managing Director Matt Kozlov, is usually based locally in LA, where much of the space industry has significant presence, but this year the Demo Day is going online due to the ongoing COVID-19 situation.

Few, if any, programs out there can claim such a broad representation of big-name partners from across commercial, military and general civil space, which is the main reason it manages to attract such a range of interesting startups. This is the second class of graduating startups from the Starburst Space Accelerator; last year’s batch included some exceptional standouts like on-orbit refueling company Orbit Fab (also a TechCrunch Battlefield participant), imaging micro-satellite company Pixxel and satellite propulsion company Morpheus.

As for this year’s class, you can check out the full list of all ten participating companies below. The Demo Day presentations begin tomorrow, September 9, at 10 AM PT/1 PM ET, so you can check back in here then to watch live as they provide more details about what it is they do.

Bifrost

A synthetic data API that allows AI teams to generate their own custom datasets up to 99% faster – no tedious collection, curation or labelling required.
founders@bifrost.ai

Holos Inc.

A virtual reality content management system that makes it super easy for curriculum designers to create and deploy immersive learning experiences.
founders@holos.io

Infinite Composites Technologies

The most efficient gas storage systems in the universe.
founders@infinitecomposites.com

Lux Semiconductors

Lux is developing next generation System-on-Foil electronics.
founders@luxsemiconductors.com

Natural Intelligence Systems, Inc.

Developer of next generation pattern based AI/ML systems.
leadership@naturalintelligence.ai

Prewitt Ridge

Engineering collaboration software for teams building challenging deep tech projects.
founders@prewittridge.com

SATIM

Providing satellite radar-based intelligence for decision makers.
founders@satim.pl

Urban Sky

Developing stratospheric Microballoons to capture the freshest, high-res earth observation data.
founders@urbansky.space

vRotors

Real-time remote robotic controls.
founders@vrotors.com

WeavAir

Proactive air insights.
founders@weavair.com

#aerospace, #artificial-intelligence, #astronomy, #collaboration-software, #content-management-system, #demo-day, #electronics, #imaging, #israel-aerospace-industries, #lockheed-martin, #louisiana, #matt-kozlov, #maxar-technologies, #ml, #orbit-fab, #outer-space, #robotics, #saic, #satellite, #science, #space, #spaceflight, #startups, #tc, #techstars, #u-s-air-force, #virtual-reality

Adobe Spark adds support for animations to its social media graphics tool

Spark is one of those products in Adobe’s Creative Suite that doesn’t always get a lot of attention. But the company’s tool for creating social media posts (which you can try for free) has plenty of fans, and maybe that’s no surprise, given that its mission is to help small business owners and agencies create engaging social media posts without having to learn a lot about design. Today, Adobe added one of the most requested features to Spark on mobile and the web: animations.

“At Adobe, we have this rich history with After Effects,” Spark product manager Lisa Boghosian told me. “We wanted to bring professional motion design to non-professionals, because what solopreneur or small business owner knows what keyframes are, or knows how to build pre-comps and have five layers? It’s just not where they’re spending their time, and they shouldn’t have to. That’s really what Spark is for: you focus on your business and building that. We’ll help guide you into expressing and building that base.”

Image Credits: Fernando Trabanco Fotografía / Getty Images

Guiding users is what Spark does across its features, be that designing the flow of your text, adding imagery or, now, animations. It does that by providing a vast number of templates – including a set of animated templates – as well as easy access to free images, Adobe Stock and icons from the Noun Project (on top of your own imagery, of course).

The team also decided to do away with a lot of the accouterments of movie editors, including timelines. Instead, the team pre-built the templates and the logic behind how new designs display those animations based on best practices. “Instead of exposing a timeline to a user and asking them to put things on a timeline and adjusting the speed — and guessing — we’ve taken on that role because we want to guide you to that best experience.”

In addition to the new animations feature, Spark is also getting improved tools for sharing assets across the Creative Cloud suite thanks to support for Creative Cloud Libraries. That makes it far easier for somebody to move images from Lightroom or Photoshop to Spark, but since Spark is also getting quite popular with agencies, it’ll make collaborating easier as well. The service already has tools for organizing assets today, but this makes it far easier to work across the various Creative Cloud tools.

Boghosian tells me the team had long had animations on its roadmap, but it took a while to bring it to market, in part because Adobe wanted to get the performance right. “We had to make sure that performance was up to par with what we wanted to deliver,” she said. “And so the experience of exporting a project — we didn’t want it to take a significant amount of time because we really didn’t want the user sitting there waiting for it. So we had to bring up the backend to really support the experience we wanted.” She also noted that the team wanted to have the Creative Cloud Libraries integration ready before launching animations. 

Once you’ve created your animation, Spark lets you export it as an MP4 video file or as a static image. Spark will not let you download GIFs.

#adobe, #adobe-creative-cloud, #adobe-photoshop, #creative-suite, #imaging, #software, #spark, #tc

Rocket Lab returns to flight with a successful launch of a Capella Space satellite

Rocket Lab is back to active launch status after encountering an issue with its last mission that resulted in a loss of the payload. In just over a month, Rocket Lab was able to identify what went wrong with the Electron launch vehicle used on that mission and correct the issue. On Sunday, it successfully launched a Sequoia satellite on behalf of client Capella Space from its New Zealand launch facility.

The “I Can’t Believe It’s Not Optical” mission is Rocket Lab’s 14th Electron launch, and it lifted off from the company’s private pad at 11:05 PM EDT (8:05 PM PDT). The Sequoia satellite is the first in startup Capella Space’s constellation of Synthetic Aperture Radar (SAR) satellites to be available to general customers. When complete, the constellation will provide hourly high-quality imaging of Earth, using radar rather than optical sensors in order to provide accurate imaging regardless of cloud cover and available light.

This launch seems to have gone off exactly as planned, with the Electron successfully lifting off and delivering the Capella Space satellite to its target orbit. Capella had been intending to launch this spacecraft aboard a SpaceX Falcon 9 rocket via a rideshare mission, but after delays to that flight, it changed tack and opted for a dedicated launch with Rocket Lab.

Rocket Lab’s issue with its July 4 launch was a relatively minor one – an electrical system failure that caused the vehicle to simply shut down, as a safety measure. The team’s investigation revealed a component of the system that was not stress-tested as strenuously as it should’ve been, and Rocket Lab immediately instituted a fix for both future and existing in-stock Electron vehicles in order to get back to active flight in as little time as possible.

While Rocket Lab has also been working on a recovery system that will allow it to reuse the booster stage of its Electron for multiple missions, this launch didn’t involve any tests related to that system development. The company still hopes to test recovery of a booster sometime before the end of this year on an upcoming launch.

#aerospace, #capella-space, #electron, #flight, #imaging, #new-zealand, #outer-space, #rocket-lab, #satellite, #science, #space, #spaceflight, #spacex, #tc

India’s first Earth-imaging satellite startup raises $5 million, first launch planned for later this year

Bengaluru-based Pixxel is getting ready to launch its first Earth imaging satellite later this year, with a scheduled mission aboard a Soyuz rocket. The roughly one-and-a-half-year-old company is moving quickly, and today it’s announcing a $5 million seed funding round to help it accelerate even more. The round is led by Blume Ventures, Lightspeed India Partners and growX ventures, with participation from a number of angel investors.

This isn’t Pixxel’s first outside funding: It raised $700,000 in pre-seed money from Techstars and others last year. But this is significantly more capital to invest in the business, and the startup plans to use it to grow its team, and to continue to fund the development of its Earth observation constellation.

The goal is to fully deploy said constellation, which will be made up of 30 satellites, by 2022. Once all of the company’s small satellites are on orbit, the Pixxel network will be able to provide globe-spanning imaging capabilities on a daily basis. The startup claims that its technology will provide much higher-quality data than today’s existing Earth imaging satellites, along with analysis driven by Pixxel’s own deep learning models, which are designed to help identify and even potentially predict large problems and phenomena that can have an impact on a global scale.

Pixxel’s technology also relies on very small satellites (basically the size of a beer fridge) that nonetheless provide very high-quality images at a cadence that even existing large imaging satellite networks would have trouble delivering. The startup’s founders, Awais Ahmed and Kshitij Khandelwal, created the company while finishing the last year of their undergraduate studies. The founding team took part in Techstars’ Starburst Space Accelerator last year in LA.

#aerospace, #artificial-intelligence, #bengaluru, #blume-ventures, #earth, #google, #imaging, #learning, #lightspeed-india-partners, #louisiana, #mentorships, #outer-space, #private-spaceflight, #recent-funding, #satellite, #small-satellite, #space, #spaceflight, #startups, #tc, #techstars

Ford to use Boston Dynamics’ dog-like robots to map their manufacturing facilities

Ford is going to employ two of Boston Dynamics’ ‘Spot’ robots – four-legged, dog-like walking robots that weigh roughly 70 lbs each – to help update the original engineering plans for one of its transmission manufacturing plants. The plants, Ford explains, have undergone any number of changes since their original construction, and it’s difficult to know whether the plans on file match the reality of the plants as they exist today. The Spot robots, with their laser scanning and imaging capabilities, will be able to produce highly detailed and accurate maps that Ford engineers can then use to modernize and retool the facility.

There are a few benefits that Ford hopes to realize by employing the Spot robots in place of humans to map the facility: First, they should save a considerable amount of time, since they replace a time-intensive process of setting up a tripod with a laser scanner at various points throughout the facility and spending a while at each location manually capturing the environment. The Spot dogs are roving and scanning continuously, providing a reduction of up to 50% in terms of actual time to complete the facility scan.

The robot dogs are also equipped with five cameras as well as laser scanners, and can operate for up to two hours travelling at around 3 mph continuously. The data they collect can then be synthesized for a more complete overall picture, and because of their small size and nimble navigation capabilities, they can map areas of the plant that aren’t necessarily reachable by people attempting to do the same job.

This is a pilot program that Ford is conducting, using two Spot robots leased from Boston Dynamics. But if it works out the way they seem to think it will, you can imagine that the automaker might seek to expand the program to cover other efforts at more of its manufacturing facilities.

#automotive, #boston-dynamics, #companies, #ford, #imaging, #laser, #optics, #robot, #robotics, #tc, #transportation

NASA to fly a football stadium-sized high-altitude balloon to study light from newborn stars

NASA’s latest mission won’t actually reach space – but it will come very close, with a massive observation craft made up of a football stadium-sized high-altitude balloon, along with a special stratospheric telescope instrument that can observe wavelengths of light blocked by Earth’s atmosphere, cast from newly-formed stars.

The mission is called the ‘Astrophysics Stratospheric Telescope for High Spectral Resolution Observations at Submillimeter-wavelengths,’ shortened to ASTHROS since that’s a mouthful. It is currently set to take off from Antarctica in December 2023, and the main payload is an 8.4-foot telescope that will point itself at four primary targets, including two regions in the Milky Way where scientists have observed star formation activity.

That telescope, the largest ever to be flown in this way, will be held aloft by a balloon measuring roughly 400 feet wide when fully inflated, with scientists on the ground able to precisely direct the orientation of the business end of the observation instrument. Its mission will include two or three full loops around the South Pole over a period of three to four weeks as it drifts along high-altitude stratospheric winds. After that, the telescope will separate from the balloon and return to Earth, slowed by a parachute, so that it can potentially be recovered and reflown in the future to perform similar experiments.

While floating a balloon up to the edge of Earth’s atmosphere might sound like more of a relaxed affair than launching a satellite using an explosion-propelled rocket, NASA’s Jet Propulsion Lab engineer Jose Siles said in an agency release that balloon science missions are actually higher-risk than space missions, in part because many elements of them are novel. At the same time, however, they have the potential to provide significant rewards at a reduced cost relative to satellite launches on rockets.

The end goal is for ASTHROS to create “the first detailed 3D maps of the density, speed and motion of gas” in these regions around newborn stars, in order to help better understand how such gas can impede the development of other stars, or encourage the birth of some. This will be helpful in refining existing simulations of the formation and evolution of galaxies, the agency says.

#aerospace, #astronomy, #balloon, #engineer, #imaging, #science, #space, #stratosphere, #tc

Autonomous driving startup turns its AI expertise to space for automated satellite operation

Hungarian autonomous driving startup AImotive is leveraging its technology to address a different industry and growing need: autonomous satellite operation. AImotive is teaming up with C3S, a supplier of satellite and space-based technologies, to develop a hardware platform for performing AI operations onboard satellites. AImotive’s aiWare neural network accelerator will be optimized by C3S for use on satellites, which have a set of operating conditions that in many ways resembles those onboard cars on the road – but with more stringent requirements in terms of power management, and environmental operating hazards.

The goal of the team-up is to have AImotive’s technology working on satellites that are actually operational on orbit by the second half of next year. The projected applications of onboard neural network acceleration extend to a number of different functions according to the companies, including telecommunications, Earth imaging and observation, autonomously docking satellites with other spacecraft, deep space mining and more.

While it’s true that most satellites already operate essentially in an automated fashion – meaning they’re not generally manually flown at every given moment – true neural network-based onboard AI would give them much more autonomy in performing tasks, like imaging a specific area or looking for specific markers in ground- or space-based targets. AImotive and C3S also believe that local processing of data has the potential to be a significant game-changer for the satellite business.

Currently, most of the processing of data collected by satellites happens after the raw information is transmitted to ground stations. That can mean a lot of lag between data collection and delivery of processed data to customers, particularly when the satellite operator or another go-between is doing the processing on behalf of the client rather than just delivering raw info (and doing this analysis is also a more lucrative proposition for the data provider, of course).

AImotive’s tech could mean that processing happens locally, on the satellite where the information is captured. There’s been a big shift towards this kind of ‘computing at the edge’ in the ground-based IoT world, and it only makes sense to replicate that in space, for many of the same reasons – including that it reduces time to delivery, meaning more responsive service for paying customers.

#aerospace, #aimotive, #artificial-intelligence, #computing, #imaging, #neural-network, #satellite, #satellite-imagery, #satellites, #science, #space, #spacecraft, #spaceflight, #tc, #telecommunications

High Earth Orbit Robotics uses imaging satellites to provide on-demand check-ups for other satellites

Maintaining satellites on orbit and ensuring they make full use of their operational lifespan has never been more important, given concerns around sustainable operations in an increasingly crowded orbital environment. As companies tighten their belts financially to deal with the ongoing economic impact of COVID-19, too, it’s more important than ever for in-space assets to live up to their max potential. A startup called High Earth Orbit (HEO) Robotics has a very clever solution that makes use of existing satellites to provide monitoring services for others, generating revenue from unused Earth imaging satellite time and providing a valuable maintenance service all at the same time.

HEO’s model employs cameras already on orbit mounted on Earth observation satellites operated by partner companies, and tasks them with collecting images of the satellites of its customers, who are looking to ensure their spacecraft are in good working order, oriented in the correct way, and with all their payloads properly deployed. Onboard instrumentation can provide satellite operators with a lot of diagnostic information, but sometimes there are problems only external photography can properly identify, or that require confirmation or further detail to resolve.

The beauty of HEO’s model is that it’s truly a win for all involved: Earth observation satellites generally aren’t in use at all times, with considerable down time when they’re over open water, for instance, HEO founder and CEO William Crowe tells me.

“We try to use the satellites at otherwise low-value times, like when they are over the ocean (which of course is most of the time),” Crowe said via email. “We also task our partners just like we would as a regular Earth-imaging business, specifying an area on Earth’s surface to image, the exception being that there is always a spacecraft in the field-of-view.”

The company is early on in its trajectory, but it has just released a proof-of-concept capture of the International Space Station, as seen in the slides provided by HEO below. The image was captured by a satellite owned by the Korean Aerospace Research Institute, which is operated by commercial satellite operator SI Imaging Services. HEO’s software compensated for the relative velocity of the satellite to the ISS, which was a very fast 10 km/s (around 6.2 miles per second). The company says it’s working towards getting even higher-resolution images.
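To get a sense of what compensating for a 10 km/s relative velocity involves, a simple geometric estimate helps: for motion perpendicular to the line of sight, the camera must track at an angular rate of roughly v/r. The imaging distance below is an assumed value for illustration, not a figure HEO has published:

```python
# Rough illustration of the pointing problem in satellite-to-satellite
# imaging. The 10 km/s figure comes from the article; the 100 km range
# is an assumption chosen only to make the arithmetic concrete.
import math

v_rel_m_s = 10_000.0   # relative velocity between imager and target (article)
range_m = 100_000.0    # assumed imaging distance

# Angular tracking rate for cross-line-of-sight motion: omega = v / r.
omega_rad_s = v_rel_m_s / range_m
omega_deg_s = math.degrees(omega_rad_s)
print(omega_deg_s)  # degrees per second the camera must slew
```

At the assumed 100 km range, that works out to nearly 6 degrees per second of slew – far faster than the slow, steady pointing used for ordinary Earth imaging, which is why dedicated motion-compensation software is needed at all.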

Another advantage of HEO’s model is that it requires no capital expenditure for the satellites used: Crowe explained that the company currently pays per use, which means it only spends when it has a client request, so the revenue covers the cost of tasking the partner satellite. HEO does plan to launch its own satellites in the “medium-term,” however, Crowe said, in order to fill the gaps that currently exist in coverage, and in anticipation of an explosion in the low Earth orbit satellite population, which is expected to grow from around 2,000 spacecraft today to as many as 100,000 or more over roughly the next decade.

HEO could ultimately provide imaging of not only other satellites, but also space debris to help with removal efforts, and even asteroids that could prove potential targets for mining and resource gathering. It’s a remarkably well-considered idea that stands to benefit from the explosion of growth in the orbital satellite industry, and also stands out among space startups because it has a near-term path to revenue that doesn’t require a massive outlay of capital up front.

#aerospace, #ceo, #imaging, #international-space-station, #robotics, #satellite, #satellite-imagery, #satellites, #space, #spacecraft, #spaceflight, #starlink, #tc


Rocket Lab launch fails during rocket’s second stage burn, causing a loss of vehicle and payloads

Rocket Lab’s ‘Pics or It Didn’t Happen’ launch on Saturday ended in failure, with a total loss of the Electron launch vehicle and all seven payloads on board. The launch vehicle experienced a failure during its second stage burn, after lifting off from Rocket Lab’s Launch Complex 1 on the Mahia Peninsula in New Zealand.

The mission appeared to be progressing as intended, though the launch vehicle seemed to encounter unexpected stress during the ‘Max Q’ phase of launch, the period during which the Electron rocket experiences maximum dynamic pressure on its way to space.

The live stream cut off around six minutes after liftoff, with the rocket shown losing altitude before the webcast ended. Rocket Lab then revealed via Twitter that the Electron vehicle was lost during the second stage burn, and committed to sharing more information as it becomes available.

This is an unexpected setback for Rocket Lab, which had flown 11 consecutive successful Electron missions prior to this one.

Rocket Lab CEO and founder Peter Beck posted an apology to Twitter, noting that all satellites were lost and that he’s “incredibly sorry” to all customers who lost payloads today. That includes Canon, which was flying a new Earth imaging satellite with demonstration imaging tech on board, as well as Planet, which had five satellites for its newest and most advanced Earth imaging constellation on the vehicle.

We’ll update with more info about the cause and next steps from Rocket Lab when available.

#aerospace, #ceo, #electron, #flight, #imaging, #new-zealand, #outer-space, #peter-beck, #rocket, #rocket-lab, #satellite, #spaceflight, #spaceport, #tc


How to watch Rocket Lab launch satellites for Canon, Planet and more live

Rocket Lab is launching a rideshare mission today that includes seven small satellites from a number of different companies, including primary payload provider Canon, which is flying a satellite equipped with the camera-maker’s Earth imaging technology, including high-res photo capture equipment. The Electron rocket that Rocket Lab is flying today will also carry five Planet SuperDove Earth-observation satellites, as well as a CubeSat from In-Space Missions.

The launch, which is named ‘Pics or It Didn’t Happen’, is set to take place during a window that opens at 5:19 PM EDT (2:19 PM PDT) and extends until 6:03 PM EDT (3:03 PM PDT), lifting off from Rocket Lab’s Launch Complex 1 on the Mahia Peninsula in New Zealand. To check it out live, tune in directly via Rocket Lab’s website here – the live stream should begin around 15 minutes prior to the opening of the launch window.

This is Rocket Lab’s third flight this year, and while the company is still in the process of developing and testing its rocket booster recovery program, this mission won’t include any booster recovery attempt. This is the company’s 13th Electron flight, and the next planned test in that system’s development is set for flight 17.

#aerospace, #booster, #canon, #cubesat, #electron, #imaging, #new-zealand, #outer-space, #rocket-lab, #satellite, #space, #spacecraft, #spaceflight, #tc


Astroscale expands into geostationary satellite life extension with new acquisition

Orbital spacecraft sustainability startup Astroscale has acquired the IP, most assets and staff of an Israeli company called Effective Space Solutions in order to broaden its service offering to include servicing geostationary (GEO) satellites, as well as low Earth orbit (LEO) debris removal. Astroscale, founded in Japan in 2013 with a mission of addressing the growing problem of orbital debris and sustainable space operations, is also setting up an office in Israel as part of this deal.

Already, Astroscale has offices in the U.K., the U.S. and Singapore, and this new arrangement will make it even more of a global company. The operation in Israel will focus on the GEO satellite life extension aspect of the business, which is what ESS was working on previously. Satellite life extension is actually something that a number of companies are looking to develop and bring to market, including orbital ‘gas station’ company Orbit Fab, as well as larger legacy industry companies like Maxar.

Extending the life of GEO satellites with on-orbit servicing is potentially a very lucrative industry, since it would mean that companies can get much more usable life, and revenue, out of their considerable investment in spacecraft that are large, expensive to build and pricey to launch.

GEO satellites provide crucial communications infrastructure, including navigation augmentation for systems like GPS, as well as satellite internet networks and long-distance Earth imaging and observation capabilities. On-orbit satellite servicing could mean that these investments, which can range into the billions, can operate long beyond their intended lifespan, and could even eventually be updated with new hardware, sensors or other capabilities as more modern equipment than they launched with becomes available.

Launch costs are often the most expensive part of deploying any orbital spacecraft, so the potential of repurposing existing on-orbit assets through life extension efforts could change the fundamental economics of doing business in space.

Astroscale will be taking on and continuing to develop ESS’ Space Drone program, which is not yet at the point of actually launching orbital servicing missions, but the Israeli company’s work will definitely give Astroscale a leg up in building out its own orbital servicing ambitions.

#aerospace, #astroscale, #flight, #gas-station, #gps, #imaging, #israel, #japan, #ma, #orbit, #outer-space, #satellite, #singapore, #space, #space-debris, #spaceflight, #startups, #tc, #united-kingdom, #united-states


Momentus set to deploy satellites for a free 4K space-based Earth live-streaming service

Space bus company Momentus has signed a new contract that will see it provide in-space transportation and deployment for Sen, the UK company that’s building a 4K real-time video streaming service providing live, high-quality views of Earth, both free for individuals and via an open source data platform for developers and service creators.

Santa Clara-based Momentus is an in-space transportation startup that provides services to satellite companies looking to move payloads after launch. They can do things like alter the orbits of satellites, and can provide that last-mile transportation leg for payloads going up on other rockets, like the SpaceX Falcon 9, which is providing the ride for the Sen satellites to their drop-off points.

From there, Momentus will use its Vigoride orbital transfer vehicles to take the Sen satellites the rest of the way. The Vigoride is a water plasma–based propulsion vehicle that will get its first test flight later this year, and the goal is to get it to operational status by 2021. The mission on behalf of Sen is set to take place in 2022.

Sen’s technology will provide imaging from small satellites equipped with multiple cameras, and ultimately it’ll operate an entire constellation built on the foundation of the first five to be launched by Momentus. The video will be available for individuals to view via web and smartphone app for free, and Sen plans to offer premium services to businesses as its go-to-market plan.

Once it has Vigoride up and running in an operational capacity, Momentus plans to develop a new version called Ardoride that will follow in 2022 or 2023, providing more capacity for bigger payloads and transportation to higher orbits – as well as trips as far as the Moon.

#aerospace, #elon-musk, #falcon-9, #hyperloop, #imaging, #momentus, #outer-space, #satellite, #science, #smartphone, #space, #spaceflight, #spacex, #tc, #united-kingdom


R&D Roundup: Sweat power, Earth imaging, testing ‘ghostdrivers’

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups.

This week: one step closer to self-powered on-skin electronics; people dressed as car seats; how to make a search engine for 3D data; and a trio of Earth imaging projects that take on three different types of disasters.

Sweat as biofuel

Monitoring vital signs is a crucial part of healthcare and is a big business across fitness, remote medicine and other industries. Unfortunately, powering low-profile, long-lasting devices without a bulky battery or frequent charging is a fundamental challenge. Wearables powered by body movement or other bio-derived sources are an area of much research, and this sweat-powered wireless patch is a major advance.

A figure from the paper showing the device and interactions happening inside it.

The device, described in Science Robotics, uses perspiration as both fuel and sampling material; sweat contains chemical signals that can indicate stress, medication uptake, and so on, as well as lactic acid, which can be used in power-generating reactions.

The patch performs this work on a flexible substrate and uses the generated power to transmit its data wirelessly. It’s reliable enough that it was used to control a prosthesis, albeit in limited fashion. The market for devices like this will be enormous and this platform demonstrates a new and interesting direction for researchers to take.

#artificial-intelligence, #autonomous-systems, #coronavirus, #covid-19, #cybernetics, #esa, #extra-crunch, #gadgets, #health, #imaging, #lidar, #machine-learning, #mit, #national-science-foundation, #plastics, #satellite-imagery, #science, #self-driving-car, #space, #tc, #technology, #telemedicine
