Everyone you know is a Disney princess, which means AR is queen

This weekend, all of your friends morphed one by one into animated, Pixar-inspired characters. This isn’t a fever dream, and you’re not alone.

On Thursday, Snapchat released a Cartoon 3D Style Lens, which uses AR to make you look like a background character from “Frozen.” Naturally, even though TikTok’s own AR cartoon effects aren’t quite as convincing as Snapchat’s, people are turning to TikTok to share videos of themselves as Disney princesses, because of course they are.

This isn’t the first time that a Disney-esque AR trend has gone viral. In August 2020, Snapchat had 28.5 million new installs, which was its biggest month since May 2019, when it got 41.2 million new installs. It might not be a coincidence that in early August 2020, Snapchat released the Cartoon Face lens, which users realized could be used to “Disneyfy” their pets – the tag #disneydog racked up 40.9 million views on TikTok. Then, Snapchat struck viral gold again in December, when it released the Cartoon lens, which rendered more realistic results for human faces than the previous iteration.

According to Sensor Tower, Snapchat’s global installs continued to climb month-over-month through the rest of 2020, though they dipped slightly in December; Snapchat still got 36 million downloads that month. Now, after the newest Cartoon 3D Style Lens went viral, Snapchat hit number 6 on the App Store’s free apps chart, compared to TikTok’s number 2 slot. Even so, Snapchat downloads in May were 32 million, down from 34 million in April, while TikTok saw 80.3 million installs in May, up from 59.3 million in April.

Image Credits: Snapchat, screenshots by TechCrunch

But there’s a new app in the number 1 slot that also made an impact on this weekend’s cartoon explosion. Released in March, Voilà AI Artist is yet another platform that turns us into cartoon versions of ourselves. Unlike the AR-powered effects on Snapchat or TikTok, Voilà is a photo editor. Users upload a selfie, and after watching an ad (the ad-free version costs $3 per week), it reveals what you would look like as a cartoon.

Voilà AI Artist was only downloaded 400 times globally in March 2021. By May, the app surpassed 1 million downloads, and during the first two weeks of this month alone, it was downloaded over 10.5 million times.

Again, like the repetitive iterations on the “Disneyfy” trend, apps like Voilà aren’t new. FaceApp went viral in 2019, showing people what they’ll look like when they’re old, graying and wrinkled. The app became the center of a privacy controversy, since it uploaded users’ photos to the cloud to edit their selfies with AI. FaceApp made a statement that it “might store updated photos in the cloud” for “performance and traffic reasons,” but that “most images” are deleted “within 48 hours.” Still, this ambiguous language set off alarm bells, pushing us to think about the potentially nefarious implications of seeing what we’ll look like in sixty years. Two years earlier, FaceApp had put out a “hotness” filter that made users’ skin lighter – the company apologized for its racist AI. Voilà, which is owned by Wemagine.AI LLP in Canada, has also been criticized for its AI’s Eurocentrism. As these apps grow in popularity, they can also uphold some of our culture’s most harmful biases.

Image Credits: Voilà

Like FaceApp, Voilà requires an internet connection to use the app. Additionally, its terms outline that users grant the company “a non-exclusive, worldwide, royalty-free, sublicensable, and transferable license to host, store, use in any way, display, reproduce, modify, adapt, edit, publish, and distribute Uploaded and Generated content.” Basically, that means that if you upload an image to the platform, Voilà has the right to use it, but they don’t own it. This isn’t abnormal for these apps – when we upload photos to Instagram, for example, we also grant the platform the right to use our images.

Still, it’s a good thing that apps like Voilà force us to consider what we give up in exchange for the knowledge that we’d make a good Disney princess. Earlier this month, TikTok updated its U.S. privacy policy to dictate that the app “may collect biometric identifiers and biometric information” from users’ content. This includes “faceprints and voiceprints,” terms that TikTok left undefined. When TechCrunch reached TikTok for comment, the company couldn’t confirm why the terms had changed to allow for the automatic collection of biometric data, which refers to any features, measurements or characteristics of our bodies that distinguish us, including fingerprints.

It’s no wonder that as Voilà climbed to the number one slot on the App Store, Snapchat re-upped their Pixar-inspired AR lens. Facebook’s own Spark AR platform is rolling out new features, and last week at WWDC, Apple announced a major update to RealityKit, its AR software. But these trends reveal more about our growing comfort with face-altering AR than they do about our nostalgia for Disney.

This Week in Apps: WWDC 21 highlights, Instagram Creator Week recap, Android 12 beta 2 arrives

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV: the average American now spends four hours per day on their mobile device, compared with 3.7 hours watching live TV.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This week, our series will take a dive into the key announcements impacting app developers from WWDC 21.

This Week in Apps will soon be a newsletter! Sign up here: techcrunch.com/newsletters

WWDC 21 Wrap-Up

Image Credits: Apple

Apple’s WWDC went virtual again this year, but it didn’t slow down the pace of announcements. This week, Apple introduced a slate of new developer tools and frameworks, changes to iOS that will impact how consumers use their devices and new rules for publishing on its App Store, among other things. We don’t have the bandwidth to dig into every dev update — and truly, there are better places to learn about, say, the new concurrency capabilities of Swift 5.5 or what’s new with SwiftUI.

But after a few days of processing everything new, here’s what’s jumping out as the bigger takeaways and updates.

Xcode Cloud

Apple’s development IDE, Xcode 13, now includes Xcode Cloud, a built-in continuous integration and delivery service hosted on Apple’s cloud infrastructure. Apple says the service, birthed out of its 2018 Buddybuild acquisition, will help to speed up the pace of development by combining cloud-based tools for building apps along with tools to run automated tests in parallel, deliver apps to testers via TestFlight and view tester feedback through the web-based App Store Connect dashboard. Beyond the immediate improvements to the development process (which developers are incredibly excited about based on #WWDC21 tweets) Xcode Cloud represents a big step by Apple further into the cloud services space, where Amazon (AWS), Google and Microsoft have dominated. While Xcode Cloud may not replace solutions designed for larger teams with more diverse needs, it’s poised to make app development easier — and deliver a new revenue stream to Apple. If only Apple had announced the pricing! 

Swift Playgrounds 4

Image Credits: Apple

Swift Playgrounds got a notable update in iPadOS 15, as it will now allow developers to build iPhone and iPad apps right on their iPad and submit them to the App Store. In Swift Playgrounds 4, coming later this year, Apple says developers will be able to create the visual design of an app using SwiftUI, see a live preview of their app’s code while building, and run their apps full-screen to test them out. App projects can also be opened and edited with either Swift Playgrounds or Xcode.

While it’s not the Xcode-on-iPad system some developers have been requesting, it will make app building more accessible because of iPad’s lower price point compared with Mac. It could also encourage more people to try app development, as Swift Playgrounds helps student coders learn the basics and then move up to more challenging lessons over time. Now, they can actually build real apps and hit the publish button, too.
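To give a sense of scale, here’s roughly the kind of single-file SwiftUI app a student might assemble and preview in Swift Playgrounds 4. This is a generic sketch rather than Apple sample code, and the app and view names are invented:

```swift
import SwiftUI

// A hypothetical, minimal SwiftUI app of the sort Swift Playgrounds 4
// should be able to build, preview live and eventually submit.
@main
struct TapCounterApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var taps = 0   // the live preview updates as this changes

    var body: some View {
        VStack(spacing: 16) {
            Text("Taps: \(taps)")
                .font(.largeTitle)
            Button("Tap me") { taps += 1 }
                .buttonStyle(.borderedProminent)   // iOS 15-era button style
        }
        .padding()
    }
}
```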

App Store

Antitrust pressure swirling around Apple has contributed to a growing sentiment among some developers that Apple doesn’t do enough to help them grow their businesses — and therefore, is undeserving of a 15%-30% cut of the revenues the developers themselves worked to gain. The new App Store updates may start to chip away at that perception.

Soon, developers will be able to create up to 35 custom product pages targeted toward different users, each with its own URL for sharing and its own analytics for measuring performance. The pages can include different preview videos, screenshots and text.

Image Credits: Apple

Apple will also allow developers to split traffic between three treatments of the app’s default page to measure which ones convert best, then choose the percentage of the App Store audience that will see one of the three treatments.

Meanwhile, the App Store will begin to show to customers in-app events taking place inside developers’ apps — like game competitions, fitness challenges, film premieres and more — effectively driving traffic to apps and re-engaging users. Combined, Apple is making the case that its App Store can drive discovery beyond just offering an app listing page.

Beyond the App Store product itself, Apple overhauled its App Store policies to address the growing problem of scam apps. The changes give Apple permission to crack down on scammers by removing offenders from its Developer Program. The new guidelines also allow developers to report spam directly to Apple, instead of, you know, relying on tweets and press.

Apple has historically downplayed the scam problem. It noted how the App Store stopped over $1.5 billion in fraudulent transactions in 2020, for example. Even if they make up a small percentage of the App Store, scam apps with fake ratings not only can cheat users out of millions of dollars, they reduce consumer trust in the App Store and in Apple itself, which has longer-term consequences for the health of the ecosystem. What’s unclear, however, is why Apple is seemingly trying to solve App Review issues with forms (one to report fraud and now one to appeal rulings, too) when it’s becoming apparent that Apple needs a more systematic way of keeping tabs on the app ecosystem beyond the initial review process.

Notifications overhaul

The App Store discovery updates mentioned above also matter more because developers may need to reduce their reliance on notifications to send users back into their apps. Indeed, iOS 15 users will be able to choose which apps they don’t need to hear from right away — these will be rounded up into a new Notification Summary that arrives on a schedule they configure, where Siri intelligence helps determine which apps get a top spot. If an app was already struggling to re-engage users through push notifications, getting relegated to the end of a summary is not going to help matters.

And users can “Send to Summary” right from the Lock Screen notification itself, in addition to the existing options to “Deliver Quietly” or turn an app’s notifications off entirely. That means any ill-timed push could be an app developer’s last.

Image Credits: Apple

Meanwhile, the clever new “Focus” modes let iOS users configure different quiet modes for work, play, sleeping and more, each with their own set of rules and even their own home screens. But making this work across the app ecosystem will require developer adoption of four “interruption levels,” ranging from passive to critical. A new episode of a fav show should be a “passive” notification, for example. “Active” is the default setting — which doesn’t get to break into Focus. “Time sensitive” notifications should be reserved for alerting to more urgent matters, like a delivery that’s arrived on your doorstep or an account security update. These may be able to break through Focus, if allowed.

Image Credits: Apple

“Critical” notifications would be reserved for emergencies, like severe weather alerts or local safety updates. While there is a chance developers may abuse the new system to get their alert through, they risk users silencing their notifications entirely or deleting the app. Focus mode users will be power users and more technically savvy, so they’ll understand that an errant notification here was a choice and not a mistake on the developer’s part.
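For developers, adopting the system is mostly a matter of tagging each notification with one of those four levels. Below is a minimal sketch using the iOS 15 UserNotifications API; the content and identifier are placeholders, time-sensitive delivery also requires the corresponding app capability, and critical alerts need a special entitlement from Apple:

```swift
import UserNotifications

// Hypothetical example: a delivery alert tagged as time sensitive so it
// can break through a Focus mode, if the user allows it.
func scheduleDeliveryAlert() {
    let content = UNMutableNotificationContent()
    content.title = "Package delivered"
    content.body = "Your order is at the front door."
    // Options: .passive, .active (the default), .timeSensitive, .critical
    content.interruptionLevel = .timeSensitive

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}
```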

Image Credits: Apple

Augmented Reality

Apple has been steadily pushing out more tools for building augmented reality apps, but at this WWDC it introduced a huge update that will make it easier for developers to get started with AR. With the launch of RealityKit 2, Apple’s new Object Capture API will allow developers to create 3D models in minutes using only an iPhone or iPad (or a DSLR or drone if they choose).

Apple explains that this addresses one of the most difficult parts of making great AR apps: the process of creating 3D models. Before, this could take hours and cost thousands of dollars — now, developers with just an iPhone and a Mac can participate. The impacts of this update will be seen in the months and years ahead, as developers adopt the new tools for things like AR shopping, games and other AR experiences — including ones we may not have seen yet, but that are enabled by more accessible AR technology tools and frameworks.

SharePlay

This update is unexpected and interesting, despite missing what would have been an ideal launch window: mid-pandemic back in 2020. With SharePlay, developers can bring their apps into what Apple is calling “Group Activities” — or shared experiences that take place right inside FaceTime.

If you were co-watching Hulu with friends during the pandemic, you get the idea. But Apple isn’t tacking on some co-viewing system here. Instead, it’s introducing new APIs that let users listen to music, stream video or screen share with friends in a way that feels organic to FaceTime. There was a hint of serving the locked-down COVID-19 pandemic crowd with this update, as Apple talks about making people feel as if they’re “in the same room” — a nod to those many months when that was not possible. And that may have inspired the changes, to be sure. Similarly, FaceTime’s support for Android and scheduled calls — a clear case of Zoom envy — feels like Apple playing catch-up.
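To give a rough sense of the API’s shape, a SharePlay integration starts with a GroupActivity definition that an app can activate from a FaceTime call. The sketch below is a hedged, hypothetical example built on the iOS 15 GroupActivities framework; the activity name and metadata are invented, not Apple sample code:

```swift
import GroupActivities

// Hypothetical co-watching activity; GroupActivities ships alongside SharePlay in iOS 15.
struct WatchTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Watch Together"
        meta.type = .watchTogether
        return meta
    }
}

// Called when the user taps a hypothetical "SharePlay" button in the app.
func startSharedViewing() async {
    let activity = WatchTogether()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // Activating shares the activity with everyone on the FaceTime call.
        _ = try? await activity.activate()
    case .activationDisabled, .cancelled:
        break   // no call in progress, or the user backed out
    @unknown default:
        break
    }
}
```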

Image Credits: Apple

The immediate demand for these sorts of experiences may be dulled by a population that’s starting to recover from the pandemic — people are now going out and seeing others in person again thanks to vaccines. But the ability to use apps while FaceTime’ing has a lifespan that extends beyond the COVID era, particularly among iPhone’s youngest users. Kids growing up with smartphones at ever-younger ages don’t place phone calls — they text and FaceTime. Some argue Gen Z even prefers the latter.

Image Credits: Apple

With its immediate support for Apple services like Apple Music and Apple TV+, SharePlay will hit the ground running — but it will only fully realize its vision with developer adoption. Such a system seems possible only because of Apple’s tight control over its platform, and it also gives a default iOS app a big advantage over third parties.

More

There were, of course, hundreds of updates announced this week, like Spatial Audio, Focus modes, AirPods updates, iPadOS improvements (widgets! multitasking!), Health updates, iCloud+ with Private Relay, watchOS improvements, Spotlight’s upgrade, macOS 12 Monterey (with Universal Control for Continuity), HomePod updates, StoreKit 2, Screen Time APIs, ShazamKit, App Clips improvements, Photos improvements and others.

Many, however, were iterative updates — like a better version of Apple Maps, for example, or Siri support for third-party devices. Others are Apple’s attempt to catch up with competitors, like the Google Lens-like “Live Text” update for taking action on things snapped in your photos. The more significant changes aren’t yet here, though — like the plan to add driver’s licenses to Wallet and the plan to shift to passwordless authentication systems. These will change how we use devices for years to come.

Weekly News

Platforms: Google

✨ Not to be outdone by WWDC (ha), Google this week launched Android 12, beta 2. This release brings users more of the new features and design changes that weren’t yet available in the first beta, which debuted at Google I/O. This includes the new privacy dashboard; the addition of the mic and camera indicators that show when an app is using those features; an indication when an app is reading from the clipboard; and a new panel that makes it easier to switch between internet providers or Wi-Fi networks.

Google also this week released its next Pixel feature drop which brought new camera and photo features, privacy features, Google Assistant improvements and more. Highlights included a way to create stargazing videos, a car crash detection feature and a way to answer or reject calls hands-free.

E-commerce

Pinterest wants to get more users clicking “buy.” The company this week added a new Shopping List feature which automatically organizes your saved Product Pins for easier access.

Augmented Reality

Google discontinued its AR-based app Measure, which had allowed users to measure things in the real world using the phone’s camera. The app had seen some stability and accuracy issues in the past.

Fintech

Facebook’s Messenger app added Venmo-like QR codes for person-to-person payments inside its app in the U.S. Users can scan the codes to send or request a payment, even if they’re not Facebook friends with the other party. Payments are sent over Facebook Pay, which is backed by a user’s credit card, debit card or PayPal account.

Downloads of fintech apps are up 132% globally YoY according to an AppsFlyer marketing report.

Twitter and Square CEO Jack Dorsey said Square is thinking about adding a bitcoin hardware wallet to its product lineup. The exec detailed some of the thinking behind the plan in a Twitter thread.

✨ Social: Creator Week recap

Instagram head Adam Mosseri said Facebook will help creators get around Apple’s 30% cut. While any transactions that take place in iOS will follow Apple’s rules, Mosseri said Facebook will look for other ways to help creators make a living where they don’t have to give up a portion of their revenue — like by connecting brands and creators offline or through affiliate deals.

Related to this, Instagram announced during its Creator Week event it will start testing a native affiliate tool that will allow creators to recommend products and then earn commissions on those sales. Creators can also now link their merch shops to personal profiles instead of just business profiles, and by year-end, will be able to partner on merch and drops with companies like Bravado/UMG, Fanjoy, Represent and Spring.

Image Credits: Instagram

Instagram also rolled out a new “badge” for live videos which lets viewers tip creators, similar to Facebook’s Stars. Facebook also said paid online events, fan subscriptions, badges and its upcoming news products will remain free through 2023. And it rolled out new features and challenges to help creators earn additional payouts for hitting certain milestones.

Finally, Instagram in a blog post explained how its algorithm works. The post details how the app decides what to show users first, why some posts get more views than others, how Explore works and other topics.

Messaging

Giphy’s Clips (GIFs with sound) are now available in the Giphy iMessage app, instead of only on the web and in its iOS app. That means you can send the…uh, videos (??)…right from your keyboard.

Dating

Image Credits: Tinder

Match-owned dating app Tinder added a way for users to block contacts. The feature requires users to grant the app permission to access the phone’s contacts database, which is a bit privacy-invasive. But then users can go through their contacts and check those they want to block on Tinder. The benefit is that this allows people to block exes and abusers. On the downside, it can enable cheating, as users can block partners and anyone who might spot them and report back.

Streaming & Entertainment

YouTube will allow creators to repurpose audio from existing YouTube videos as its “Shorts” product — basically, its TikTok competitor — rolls out to more global markets.

Gaming

Google’s cross-platform cloud gaming service Google Stadia is coming to Chromecast with Google TV and Android TV starting on June 23.

Roblox is generating estimated revenue of $3.01 million daily on iPhone, according to data from Finbold. Clash of Clans, Candy Crush Saga, Pokémon GO and others follow. A good thing, if it ends up having to pay up over that music usage lawsuit.

Image Credits: Finbold

Utilities

Apple-owned weather app Dark Sky, whose technology just powered a big iOS 15 revamp of Apple’s stock weather app, is not shutting down just yet. The company announced its iOS app, web app and API will remain online through the end of 2022, instead of 2021 as planned.

Productivity

Microsoft’s Outlook email app for iOS now lets you use your voice to write emails and schedule meetings. The feature leverages Cortana, and follows the launch of a Play My Emails feature inside Outlook Mobile.

Government & Policy

President Biden revoked and replaced Trump’s actions which had targeted Chinese apps, like TikTok and WeChat. The president signed a new executive order that requires the Commerce Dept. to review apps with ties to “foreign adversaries” that may pose national security risks. Trump had previously tried to ban the apps outright, but his order was blocked by federal courts.

Google has agreed to show more mobile search apps for users to choose from on new Android phones, following feedback from the European Commission. The company had been showing a choice screen where app providers bid against each other for a slot, paying only when users downloaded their apps. DuckDuckGo and others complained that the approach wasn’t working.

Security & Privacy

Security flaws were found in Samsung’s stock mobile apps impacting some Galaxy devices. One could have allowed for data theft through the Secure Folder app. Samsung Knox security software could have been used to install malicious apps. And a bug in Samsung Dex could have scraped data from notifications. There are no indications users were impacted, and the flaws have been fixed.

An App Store analysis published by The Washington Post claims nearly 2% of the top-grossing apps on one day were scam apps, which have cost people $48 million. They included several VPN apps that told users their iPhones were infected with viruses, a QR code reader that tricked customers into a subscription for functionality that comes with an iPhone, and apps that pretended to be from big-name brands, like Amazon and Samsung.

Multiple apps were removed from the Chinese app store for violating data collection rules, Reuters reported. The apps hailed from Sogou, iFlytek and others, and included virtual keyboards.

Funding and M&A

💰Mexican payments app Clip raised $250 million from SoftBank’s Latin American Fund and Viking Global Investors, valuing the business at $2 billion. The app offers a Square-like credit card reader and other devices, and has begun to offer cash advances to clients.

🤝 Shopify acqui-hires the team from the augmented reality home design app Primer. The app, which will be shut down, had allowed users to visualize what tile, wallpaper or paint will look like on surfaces inside their home.

💰 Singapore-based corporate services “super app” Osome raised $16 million in Series A funding. The app offers online accounting and other business services for SMBs. Investors include Target Global, AltaIR Capital, Phystech Ventures, S16VC and VC Peng T. Ong.

📈  Chinese grocery delivery app Dingdong Maicai, backed by Sequoia and Tiger Global, has filed for a U.S. IPO. To date, the company has raised $1 billion.

💰San Francisco-based MaintainX raised $39 million in Series B funding led by Bessemer Venture Partners for its mobile-first platform for industrial and frontline workers to help track maintenance, safety and operations.

💰Berlin’s Ada Health raised $90 million in Series B funding in a round led by Leaps by Bayer, the impact investment arm of Bayer AG. The app lets users monitor their symptoms and track their health and clinical data.

💰Photo app Dispo confirmed its previously leaked Series A funding, which earlier reports had pegged as being around $20 million. The app had been rebranded from David’s Disposable and dropped its association with YouTuber David Dobrik, following sexual assault allegations regarding a member of the Vlog Squad. Spark Capital severed ties with Dispo as a result. Seven Seven Six and Unshackled Ventures remained listed as investors, per Dispo’s press release, but the company didn’t confirm the size of the round.

💰Brazilian fintech Nubank raised a $750 million extension to its Series G (which was $400 million last year) led by Berkshire Hathaway. The company offers a digital bank account accessible from an app, debit card, payments, loans, insurance and more. The funding brings the total Series G round to $1.15 billion.

💰Seattle-based tutoring app Kadama raised $1.7 million in seed funding led by Grishin Robotics. The app, which offers an online tutoring marketplace aimed at Gen Z, rode the remote learning wave to No. 2 in the Education category on the App Store.

📈  Mark Cuban-backed banking app Dave, which helps Americans build financial stability, is planning to go public via a SPAC launched by Chicago-based Victory Park Capital called VPC Impact Acquisition Holdings III. It also includes a $210 million private investment from Tiger Global Management.

🤝  Mobile game publisher Voodoo acquired Tel Aviv-based marketing automation platform Bidshake for an undisclosed sum. Launched in January 2020, Bidshake combines data aggregation and analytics with campaign and creative management. It will continue to operate independently.

Downloads

Turntable — tt.fm

Image Credits: tt.fm on iPhone/Brian Heater

Newly launched music social network tt.fm is a Turntable.fm rival that lets you virtually hang out with friends while listening to music. To be clear, the app is not the same as Turntable.fm, which shut down in 2013 but then returned during the pandemic as people looked to connect online. While that Turntable was rebirthed by its founder Billy Chasen, Turntable – tt.fm hails from early Turntable.fm employee, now tt.fm CEO, Joseph Perla. But as live events come back, the question now may not be which Turntable app to choose, but whether the Turntable.fm experience has missed the right launch window…again.

SketchAR

The art app SketchAR previously offered artists tools to draw with AR, turn photos into AR, create AR masks for Snapchat, play games and more. With its latest update, artists can now turn their work into NFTs directly inside the app and sell it. The app, now used by nearly 500,000 people, will select a “Creator of the Week” whose work it will mint as an NFT on OpenSea. Others can create and auction their art as NFTs on demand.

Tweets

Shopify acquires augmented reality home design app Primer

In Friday acquisition news, Shopify shared today that they’ve acquired augmented reality startup Primer, which makes an app that lets users visualize what tile, wallpaper or paint will look like on surfaces inside their home.

In a blog post, co-founders Adam Debreczeni and Russ Maschmeyer write that Primer’s app and services will be shutting down next month as part of the deal. Debreczeni tells TechCrunch that Primer’s team of eight employees will all be joining Shopify following the acquisition.

Primer had partnered with dozens of tile and textile design brands to allow users to directly visualize what their designs would look like using their iPhone or iPad and Apple’s augmented reality platform ARKit. The app has been highlighted by Apple several times, including in a nice write-up by the App Store’s internal editorial team.

Terms of the deal weren’t disclosed. Primer’s backers included Slow Ventures, Abstract Ventures, Foundation Capital and Expa.

There’s been a lot of big talk about how augmented reality will impact online shopping, but aside from some of the integrations made in home design, there hasn’t been an awful lot that’s found its way into real consumer use. Shopify has worked on some of their own integrations — allowing sellers to embed 3D models into their storefronts that users can drop into physical space — but it’s clear that there’s much more room left to experiment.

Apple’s RealityKit 2 allows developers to create 3D models for AR using iPhone photos

At its Worldwide Developer Conference, Apple announced a significant update to RealityKit, its suite of technologies that allow developers to get started building AR (augmented reality) experiences. With the launch of RealityKit 2, Apple says developers will have more visual, audio, and animation control when working on their AR experiences. But the most notable part of the update is how Apple’s new Object Capture technology will allow developers to create 3D models in minutes using only an iPhone.

Apple noted during its developer address that one of the most difficult parts of making great AR apps was the process of creating 3D models. These could take hours and cost thousands of dollars.

With Apple’s new tools, developers will be able to take a series of pictures using just an iPhone (or iPad or DSLR, if they prefer) to capture 2D images of an object from all angles, including the bottom.

Then, using the Object Capture API on macOS Monterey, it only takes a few lines of code to generate the 3D model, Apple explained.

Image Credits: Apple

To begin, developers would start a new photogrammetry session in RealityKit that points to the folder where they’ve captured the images. Then, they would call the process function to generate the 3D model at the desired level of detail. Object Capture allows developers to generate the USDZ files optimized for AR Quick Look — the system that lets developers add virtual, 3D objects in apps or websites on iPhone and iPad. The 3D models can also be added to AR scenes in Reality Composer in Xcode.
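As a rough illustration of that flow (not Apple’s exact sample code; the file paths and detail level here are placeholders), the macOS Monterey side of Object Capture might look something like this in a small command-line tool:

```swift
import RealityKit

// Folder of 2D captures taken on an iPhone, and the desired USDZ output.
let inputFolder = URL(fileURLWithPath: "/path/to/captured-images")
let outputFile = URL(fileURLWithPath: "/path/to/chair.usdz")

// Start a photogrammetry session pointed at the captured images.
let session = try PhotogrammetrySession(input: inputFolder)

// Listen for progress and results as the model is generated.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Failed: \(error)")
        default:
            break
        }
    }
}

// Ask for a reduced-detail model suitable for AR Quick Look.
try session.process(requests: [
    .modelFile(url: outputFile, detail: .reduced)
])
```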

Apple said developers like Wayfair, Etsy and others are using Object Capture to create 3D models of real-world objects — an indication that online shopping is about to get a big AR upgrade.

Wayfair, for example, is using Object Capture to develop tools for its manufacturers so they can create a virtual representation of their merchandise. This will allow Wayfair customers to preview more products in AR than they can today.

Image Credits: Apple (screenshot of Wayfair tool)

In addition, Apple noted developers including Maxon and Unity are using Object Capture for creating 3D content within 3D content creation apps, such as Cinema 4D and Unity MARS.

Other updates in RealityKit 2 include custom shaders that give developers more control over the rendering pipeline to fine tune the look and feel of AR objects; dynamic loading for assets; the ability to build your own Entity Component System to organize the assets in your AR scene; and the ability to create player-controlled characters so users can jump, scale and explore AR worlds in RealityKit-based games.
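As one concrete example of those new Entity Component System hooks, here is a rough sketch of a custom component and system that spins tagged entities each frame. The component, system and behavior are invented for illustration, under the assumption that the RealityKit 2 registration APIs work as described at WWDC; this isn’t Apple sample code:

```swift
import RealityKit

// Hypothetical component: tag an entity with a spin speed.
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi
}

// Hypothetical system: rotate every entity that carries a SpinComponent.
class SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] as? SpinComponent else { continue }
            let delta = simd_quatf(angle: spin.radiansPerSecond * Float(context.deltaTime),
                                   axis: [0, 1, 0])
            entity.transform.rotation = delta * entity.transform.rotation
        }
    }
}

// Register both once at launch, e.g. during ARView setup.
func registerSpinSystem() {
    SpinComponent.registerComponent()
    SpinSystem.registerSystem()
}
```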

One developer, Mikko Haapoja of Shopify, has been trying out the new technology (see below) and shared on Twitter some real-world tests where he shot objects using an iPhone 12 Max.

Developers who want to test it for themselves can leverage Apple’s sample app and install macOS Monterey on their Mac.

Facebook’s Spark AR platform expands to video calling with Multipeer API

At today’s F8 developer conference, Facebook announced new capabilities for Spark AR, its flagship AR creation software. Since Spark AR was announced at F8 2017, more than 600,000 creators from 190 countries have published over 2 million AR effects on Facebook and Instagram, making it the largest mobile AR platform, according to Facebook. If you’ve ever posted a selfie on your Instagram story with an effect that gave you green hair, or let you control a dog’s facial expression by moving your own face, then you’ve used Spark AR.

Soon, these AR effects will be available for video calling on Messenger, Instagram, and Portal with the introduction of a Multipeer API. Creators can develop effects that bring call participants together by using a shared AR effect. As an example, Spark AR shared a promo video of a birthday party held over a video call, in which an AR party hat appears on each of the participants’ heads. 

Creators can also develop games for users to play during their video calls. This already exists on Facebook video calls – think of the game where you compete to see who can catch the most flying AR hamburgers in their mouth in a minute. But when the ability to make new, lightweight games opens to developers, we’ll see some new games to challenge our friends with on video calls. 

These video call effects and multipeer AR games will be bolstered by Spark’s platform-exclusive multi-class segmentation capability. This lets developers augment multiple segments of a user’s body (like hair or skin) at once within a single effect.

Facebook also discussed its ongoing ambition to build AR glasses. Chris Barber, Director of Partnerships for Spark AR, said that this goal is still “years away” – but Barber did tease some potential features for the innovative wearable tech.

“Imagine being able to teleport to a friend’s sofa to watch a show together, or being able to share a photo of something awesome you see on a hike,” Barber said. Maybe this won’t sound so dystopian by the time the product launches, years down the road. 

Last October, Spark AR launched the AR Partner Network, a program for the platform’s most advanced creators, and this year, Spark launched an AR curriculum through Facebook’s Blueprint platform to help creators learn how to improve their AR effects. Applications for the Spark Partner Network will open again this summer. For now, creators and developers can apply to start building effects for video calling through the Spark AR Video Calling Beta.

Augmented reality NFT platform Anima gets backing from Coinbase

Augmented reality and non-fungible tokens, need I say more? Yes? Oh, well, NFTs have certainly had their moment in 2021, but the question of what they do or what can be done with them has been getting voiced more frequently as the speculative gold rush begins to cool off and people start to think more about how digital goods can evolve in the future.

Anima, a small creative crypto startup built by the founders of photo/video app Ultravisual, which Flipboard acquired back in 2014, is looking to use AR to shift how NFT art and collectibles can be viewed and shared. Their latest venture is an effort to help artists bring their digital creations to a bigger digital stage and help find what the future of NFTs looks like in augmented reality.

The startup has put together a small $500k pre-seed round from Coinbase Ventures, Divergence Ventures, Flamingo DAO, Lyle Owerko and Andrew Unger.

“As NFTs move away from being a more speculative market where it’s all about returns on your purchases, I think that’s healthy and it’s good for us specifically because we want to make things that are more approachable,” co-founder Alex Herrity says.

Their broader vision is finding ways for digital objects to interact with the real world, something that’s been a pretty top-of-mind concern for the AR world over the last few years, though augmented reality development has cooled more recently as creators have sunk into a wait-and-see attitude towards new releases from Apple and Facebook. Both the AR and NFT spaces are incredibly early, something Anima’s co-founders were quick to admit, but they think both spaces have matured enough that the gimmicks are out in the open.

“There’s a context shift that happens when you see AR as a vehicle to have a tactile relationship with something that you collected or that you see is a lifestyle accessory versus the common thing now where it’s a little bit more of an experiential gimmick,” co-founder Neil Voss tells TechCrunch.

The team has worked with a couple of artists already as they’ve made early experiments in bringing digital art objects into AR, and they’re launching a marketplace late next month, based on ConsenSys’s Palm platform, where they hope to showcase more of their future partnerships.

 

Snap acquires AR startup WaveOptics, which provides tech for Spectacles, for over $500M

Snap yesterday announced the latest iteration of its Spectacles augmented reality glasses, and today the company revealed a bit more news: it is also acquiring the startup that supplied the technology that helps power them. The Snapchat parent is snapping up WaveOptics, an AR startup that makes the waveguides and projectors used in AR glasses. These overlay virtual images on top of the views of the real world someone wearing the glasses can see, and Snap worked with WaveOptics to build its latest version of Spectacles.

The deal was first reported by The Verge, and a spokesperson for Snap directly confirmed the details to TechCrunch. Snap is paying over $500 million for the startup, in a cash-and-stock deal. The first half of that will be coming in the form of stock when the deal officially closes, and the remainder will be payable in cash or stock in two years.

This is a big leap for WaveOptics, which had raised around $65 million in funding from investors that included Bosch, Octopus Ventures and a host of individuals, from Stan Boland (a veteran UK entrepreneur, most recently at FiveAI) to Ambarish Mitra (the co-founder of early AR startup Blippar). PitchBook estimates that its most recent valuation was only around $105 million.

WaveOptics was founded in Oxford, and it’s not clear where the team will be based after the deal is closed — we have asked.

We have been covering the company since its earliest days, when it displayed some very interesting, early and ahead-of-its-time technology: waveguides based on hologram physics and photonic crystals. The key thing is that its tech drastically compresses the size and load of the hardware needed to process and display images, meaning a much wider and more flexible range of form factors for AR hardware based on WaveOptics tech.

It’s not clear whether WaveOptics will continue to work with other parties post-deal, but it seems that one obvious advantage for Snap would be making the startup’s technology exclusive to itself.

Snap has been on something of an acquisition march in recent times — it’s made at least three other purchases of startups since January, including Fit Analytics for an AR-fuelled move into e-commerce, as well as Pixel8Earth and StreetCred for its mapping tools.

This deal, however, marks Snap’s biggest acquisition to date in terms of valuation. That is not only a mark of the premium price that foundational AR tech continues to command — in addition to the team of scientists that built WaveOptics, the company has 12 filed and in-progress patents — but also of Snap’s financial and, frankly, existential commitment to having a seat at the table when it comes not just to social apps that use AR but to hardware, and to being at the centre of not just using the tech but setting the pace and agenda for how and where it will play out.

That’s been a tenacious and not always rewarding place for it to be, but the company — which has long described itself as a “camera company” — has kept hardware in the mix as an essential component for its future strategy.

 

Snap debuts true AR glasses that show the potential (and limitations) of AR

Snap Inc., the company best known for the popular Snapchat social camera app, has announced its first pair of augmented reality glasses that most people would agree actually qualify as real AR glasses. Like previous glasses the company has produced, they are called Spectacles.

Spectacles will not be available to buy as a mass-market consumer product—at least not in the immediately foreseeable future. Instead, Snap is seeding units to developers and content creators so the glasses can be used to create new experiences and filters. They will build these with Lens Studio, a Snapchat-specific tool that is already widely in use.

Spectacles enable new ways to view and create Snapchat Lenses, which are generally simple augmented reality filters that Snapchat users apply to the videos they send each other.

Snap emphasizes commerce in updates to its camera and AR platforms

At Snap’s Partner Summit, the company announced a number of updates to the company’s developer tools and AR-focused Lens Studio including several focused on bringing shopping deeper into the Snapchat experience.

One of the cooler updates involved the company’s computer vision Scan product, which analyzes content in a user’s camera feed to quickly bring up relevant information. Snap says the feature is used by around 170 million users per month. Scan, which has now been given more prominent placement inside the camera section of the app, has been upgraded with commerce capabilities via a feature called Screenshop.

Users can now use their Snap Camera to scan a friend’s outfit, after which they’ll quickly be served shopping recommendations from hundreds of brands. The company is using the same technology for another upcoming feature that will allow users to snap pictures of ingredients in their kitchen and get served recipes from Allrecipes that integrate them.

The features are part of a broader effort to intelligently suggest lenses to users based on what their camera is currently focused on.

Businesses will now be able to establish public profiles inside Snapchat where users can see all of their different offerings, including Lenses, Highlights, Stories and items for sale through Shop functionality.

On the augmented reality side, Snap is continuing to emphasize business solutions with API integrations that make lenses smarter. Retailers will be able to use the Business Manager to integrate their product catalogs so that users can only access try-on lenses for products that are currently in stock.

Partnerships with luxury fashion platform Farfetch and Prada will tap into further updates to the AR platform including technical 3D mesh advances that make trying on clothing virtually appear more realistic. Users will also be able to use voice commands and visual gestures to cycle between items they’re trying on in the new experiences.

“We’re excited about the power of our camera platform to bring Snapchatters together with the businesses they care about in meaningful ways,” said Snap’s global AR product lead Carolina Arguelles Navas. “And, now more than ever, our community is eager to experience and try on, engage with, and learn about new products, from home.”

Google Maps to add more detailed maps, crowd indicators, better routing and more

Google has announced a series of updates soon coming to Google Maps, as part of the company’s larger goal of delivering over 100 A.I.-powered improvements to the platform by year-end. Among the new improvements, detailed at the Google I/O developer conference this week, are new routing updates, Live View enhancements, an expansion of detailed street maps, a new “area busyness” feature, and a more personalized Maps experience.

The new routing updates will involve the use of machine learning and navigation information to help reduce “hard-braking moments” — meaning, those times when traffic suddenly slows, and you have to slam on your brakes.

Today, when you get your directions in Maps, Google calculates multiple route options based on a variety of factors, like how many lanes a road has or how direct the route is. With the update, it will add one more: which routes are least likely to cause a “hard-braking moment.” Google will recommend the route with the lowest likelihood of those sorts of moments if the ETA is the same as, or only minimally different from, another route’s. The company says it expects this change could potentially eliminate 100 million hard-braking events in routes driven with Google Maps every year.

Live View, Google Maps’ augmented reality feature launched in 2019, will soon become available directly from the map interface so you can instantly explore the neighborhood and view details about nearby shops and restaurants, including how busy they are, recent reviews and photos. It will also be updated to include street signs for complex intersections, and will tell you where you are in relation to places like your hotel, so you can make your way back more easily, when in unfamiliar territory.

Image Credits: Google

Google will also expand the more detailed maps it first rolled out last year to New York, San Francisco, and London. These maps offer more granularity, including both natural features and street info like the location of sidewalks, crosswalks and pedestrian islands, for example. The information can be particularly useful for those who navigate a city by foot, scooter, bike, or in a wheelchair.

By the end of 2021, these detailed maps will be available in 50 more cities, including  Berlin, São Paulo, Seattle, and Singapore.

Image Credits: Google

Another new feature expands on the “busyness” information Google already provided for businesses, based on anonymized location data collected by Maps users. During the pandemic, that feature became a useful way to avoid crowds at local stores and other businesses, for health and safety. Now, Google Maps will display “busyness” info for parts of town or neighborhoods, to help you either avoid (or perhaps locate) crowded areas — like a street festival, farmers’ market, or nightlife spot, among other things.

Image Credits: Google

Finally, Google Maps will begin customizing its interface to the individual in new ways.

For starters, it may show relevant information based on the time of day where you are.

For instance, when you open the map at 8 AM on a weekday, you may see coffee shops more prominently highlighted, but at night, you may see dinner spots. If you’ve traveled out of town, Google Maps may instead show you landmarks and tourist attractions. And if you want to see more of the same, you can tap on any place to see similar places nearby.

Image Credits: Google

 

Google says these features will roll out globally across iOS and Android in the coming months, but did not provide an exact timeframe for each specific feature. The more detailed maps will arrive by year-end, however.

Super raises $50M to cover home repairs and maintenance via a subscription model

The real estate sales market has been in an upswing this year, and today a startup that’s addressing one of homeowners’ biggest needs — repair and maintenance services, and specifically the stress of sorting these out when things break down — is announcing some funding on the heels of strong growth.

Super — which has built a business providing repair and maintenance for electrical and mechanical systems, appliances, and plumbing by way of a monthly subscription — has closed a growth round of $50 million.

The startup plans to use the funding to expand into new markets, to hire more people, and to continue adding more maintenance/repair services and partnerships into its wider home-warranty-by-subscription proposition.

CEO Jorey Ramer, who co-founded the company with Ryan Donnelly (VP of engineering), also said that another part of the investment will be used to enhance the AI tech that underpins Super’s service and pricing plans. More on that below.

The San Francisco-based company is currently active in some of the fastest-growing housing markets in the U.S.: Austin, Chicago, Dallas, Houston, Phoenix, San Antonio and Washington, D.C. (ironically, not SF itself). It has grown revenue 7x since April 2019, when it previously raised money, a $20 million Series B. It’s not disclosing actual revenue numbers, nor user numbers.

This latest round, a Series C, has a number of strategic backers, a list that speaks to the bigger ecosystem of interlinked financial and insurance services used by the average person in the course of home ownership. (Indeed, Super these days seems to refer to itself as an “insuretech.”)

Led by Wells Fargo Strategic Capital, the venture arm of the banking giant, others in the round included home construction giant Asahi Kasei, AAA – Auto Club Group (which also sells insurance), Gaingels, and REACH. The last of these is a scale-up service from Second Century Ventures, which is the investment fund of the National Association of Realtors. Aquiline Technology Growth, Liberty Mutual Strategic Ventures, Moderne Ventures and the HSB Fund of Munich Re Ventures — which all invested in Super’s previous $20 million round back in April 2019 — also participated.

The company has now raised $80 million in total, and it’s not disclosing its valuation.

As we have noted before, Ramer came up with the idea for Super when he himself moved to San Francisco after he sold his previous startup, Jumptap — an advertising network acquired by Millennial Media (which is now part of Verizon by way of its acquisition of AOL, just like TechCrunch). He’d been an apartment renter for all of his adult life, but when he moved to the Bay Area, he found himself buying property, and it came with more than a little reluctance because of the headache of taking care of his new home.

“I liked being a renter,” he said in an interview. “You pay a fee, and you know what to expect.” (“Super” is a reference to the superintendents that handle maintenance and repair in an apartment building, and to what Super hopes customers will think about its service.)

The route that Ramer decided to take to fill that gap, interestingly, is not unlike the approach Jumptap took to the challenges of ad tech: instead of trying to build a services business from the ground up, he opted to build an integrated network that taps into a number of small services enterprises already working in the business of maintaining homes. (The correlation here is that, rather than building a first-party behemoth, the approach is to knit together a number of online properties so that people looking to advertise can do so across a wide range of places in a network.)

Super has created a kind of marketplace: the services businesses and individuals that Super engages with to carry out maintenance and repairs are all licensed and use its platform for free, essentially, and Super handles remuneration based on call-outs. For users, the call-outs come as part of their monthly plans, and they include different options based on which level of service they pay for.

The funding it’s announcing today will be used in part to enhance how those monthly plans work.

Not only are there algorithms that Super has built to determine how to price its services based on location, size of home and other factors; there are also features in the app that subscribers can use to interact with Super to report issues, call out maintenance people, and provide more detail about problems to speed up, and in some cases automate, the adjudication of issues.

Better tech for more responsive home services has been an interesting area of the market, but one that’s largely been ignored up to now; as AR and other computer vision breakthroughs have matured, they have definitely helped to advance that game. (And a number of others are also tapping into that, including Hover, Nana, Jobber and more.)

The way that the service has been built to scale — working with contractors means adding in more kinds of coverage is easier than building from the ground up — also means that Super over time may well add more services into the mix.

“The things we would do are things your super would do,” Ramer said. “So that might include fixing plumbing, but might also potentially include cleaning carpets, which you could think of as maintenance. Painting is another interesting area. It seems like it might be a cosmetic thing, but if you do not paint, you risk dry rot. It’s also preventative care. So if we, say, cover 100% maintenance you could imagine that included, too.”

One area where it’s unlikely to move is general contract work, say rebuilding a bathroom or kitchen, or adding in a new room in your loft: the focus it seems will remain on the essentials of keeping your home working.

But aside from expanding the services directly on its own platform, there are also potential opportunities for how Super might work with partners. AAA, for example, has a notable business not just in roadside assistance but also insurance coverage. Ramer describes Super as “roadside assistance for your home,” and he points out that it’s a natural partnership to sell those alongside each other.

Similarly, Wells Fargo, as a mortgage lender, is a natural complement, providing a route to its customers to help maintain the properties that they’re in the process of paying off to the bank. This in turn also becomes a kind of insurance policy to the bank itself, as it keeps the homes it is financing in better shape.

“Wells Fargo embraces innovation, and we’re excited to support a tech-forward platform like Super which brings further advancement to the home services market,” said Matthew Raubacher, managing director for WFSC’s Principal Technology Investments Group, in a statement. “The challenges of ongoing repairs and maintenance resonates with every homeowner, and Super provides an experience that is convenient for the customer, while boosting job visibility for local contractors and businesses. We look forward to seeing them continue to widen their geographic footprint and expand their product offering.”

#apps, #artificial-intelligence, #augmented-reality, #ecommerce, #funding, #home-services, #real-estate, #super

Everything Google announced at I/O today

This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.

Android 12 gets a fresh new look and some quality of life features

We’ve known Android 12 was on its way for months, but today was our first real look at the next big change for the world’s most popular operating system. A new look, called Material You (yes), focuses on users, apps, and things like time of day or weather to change the UI’s colors and other aspects dynamically. Some security features like new camera and microphone use indicators are coming, as well as some “private compute core” features that use AI processes on your phone to customize replies and notifications. There’s a beta out today for the adventurous!

Wow, Android powers 3 billion devices now

Subhed says it all (but read more here). Up from 2 billion in 2017.

Smart Canvas smushes Docs, productivity, and video calls together

Millions of people and businesses use Google’s suite of productivity and collaboration tools, but the company felt it would be better if they weren’t so isolated. Now with Smart Canvas you can have a video call as you work on a shared doc together and bring in information and content from your Drive and elsewhere. Looks complicated, but potentially convenient.

AI conversations get more conversational with LaMDA

It’s a little too easy to stump AIs if you go off script, asking something in a way that to you seems normal but to the language model is totally incomprehensible. Google’s LaMDA is a new natural language processing technique that makes conversations with AI models more resilient to unusual or unexpected queries, making it more like a real person and less like a voice interface for a search function. They demonstrated it by showing conversations with anthropomorphized versions of Pluto and a paper airplane. And yes, it was exactly as weird as it sounds.

Google built a futuristic 3D video calling booth

One of the most surprising things at the keynote had to be Project Starline, a high-tech 3D video call setup that uses Google’s previous research and Lytro DNA to show realistic 3D avatars of people on both sides of the system. It’s still experimental but looks very promising.

Wear OS gets a revamp and lots of health-focused apps

Image Credits: Google

Few people want to watch a movie on their smartwatch, but lots of people like to use it to track their steps, meditation, and other health-related practices. Wear OS is getting a bunch of Fitbit DNA infused, with integrated health tracking stuff and a lot of third party apps like Calm and Flo.

Samsung and Google announce a unified smartwatch platform

These two mobile giants have been fast friends in the phone world for years, but when it comes to wearables, they’ve remained rivals. In the face of Apple’s utter dominance in the smartwatch space, however, the two have put aside their differences and announced they’ll work on a “unified platform” so developers can make apps that work on both Tizen and Wear OS.

And they’re working together on foldables too

Apparently Google and Samsung realized that no one is going to buy foldable devices unless they do some really cool things, and that collaboration is the best way forward there. So the two companies will also be working together to improve how folding screens interact with Android.

Android TV hits 80 million devices and adds phone remote

Image Credits: Google

The smart TV space is a competitive one, and after a few false starts Google has really made it happen with Android TV, which the company announced had reached 80 million monthly active devices — putting it, Roku, and Amazon (the latter two with around 50 million monthly active accounts) all in the same league. The company also showed off a powerful new phone-based remote app that will (among other things) make putting in passwords way better than using the d-pad on the clicker. Developers will be glad to hear there’s a new Google TV emulator and Firebase Test Lab will have Android TV support.

Your Android phone is now (also) your car key

Well, assuming you have a really new Android device with a UWB chip in it. Google is working with BMW first, and other automakers soon most likely, to make a new method for unlocking the car when you get near it, or exchanging basic commands without the use of a fob or Bluetooth. Why not Bluetooth you ask? Well, Bluetooth is old. UWB is new.

Vertex collects machine learning development tools in one place

Google and its sibling companies are both leaders in AI research and popular platforms for others to do their own AI work. But its machine learning development tools have been a bit scattershot — useful but disconnected. Vertex is a new development platform for enterprise AI that puts many of these tools in one place and integrates closely with optional services and standards.

There’s a new generation of Google’s custom AI chips

Google does a lot of machine learning stuff. Like, a LOT a lot. So they are constantly working to make better, more efficient computing hardware to handle the massive processing load these AI systems create. TPUv4 is the latest, twice as fast as the old ones, and will soon be packaged into 4,096-strong pods. Why 4,096 and not an even 4,000? The same reason any other number exists in computing: powers of 2 (4,096 is 2¹²).

And they’re powering some new Photos features including one that’s horrifying

A “cinematic” Google Photo in motion.

NO THANK YOU

Google Photos is a great service, and the company is trying to leverage the huge library of shots most users have to find patterns like “selfies with the family on the couch” and “traveling with my lucky hat” as fun ways to dive back into the archives. Great! But they’re also taking two photos taken a second apart and having an AI hallucinate what comes between them, leading to a truly weird looking form of motion that shoots deep, deep into the uncanny valley, from which hopefully it shall never emerge.

Forget your password? Googlebot to the rescue

Google’s “AI makes a hair appointment for you” service Duplex didn’t exactly set the world on fire, but the company has found a new way to apply it. If you forget your password, Duplex will automatically fill in your old password, pick a new one and let you copy it before submitting it to the site, all by interacting with the website’s normal reset interface. It’s only going to work on Twitter and a handful of other sites via Chrome for now, but hey, if it happens to you a lot, maybe it’ll save you some trouble.

Enter the Shopping Graph

Image Credits: Google I/O 2021

The aged among our readers may remember Froogle, Google’s ill-fated shopping interface. Well, it’s back… kind of. The plan is to include lots of product information, from price to star rating, availability and other info, right in the Google interface when you search for something. It sucks up this information from retail sites, including whether you have something in your cart there. How all this benefits anyone more than Google is hard to imagine, but naturally they’re positioning it as wins all around. Especially for new partner Shopify. (Me, I use DuckDuckGo.)

Flutter cross-platform devkit gets an update

A lot of developers have embraced Google’s Flutter cross-platform UI toolkit. The latest version, announced today, adds some safety settings, performance improvements, and workflow updates. There’s lots more coming, too.

Firebase gets an update too

Popular developer platform Firebase got a bunch of new and updated features as well. Remote Config gets a nice update allowing developers to customize the app experience to individual user types, and App Check provides a basic level of security against external threats. There’s plenty here for devs to chew on.

The next version of Android Studio is Arctic Fox

Image Credits: Google

The beta for the next version of Google’s Android Studio environment is coming soon, and it’s called Arctic Fox. It’s got a brand new UI building toolkit called Jetpack Compose, and a bunch of accessibility testing built in to help developers make their apps more accessible to people with disabilities. Connecting to devices to test on them should be way easier now too. Oh, and there’s going to be a version of Android Studio for Apple Silicon.

#artificial-intelligence, #augmented-reality, #automotive, #finance, #gadgets, #google, #google-i-o, #google-i-o-2021, #google-io-2021, #hardware, #media, #mobile, #privacy, #tc, #transportation, #wearables

Google is making a 3D, life-size video calling booth

Google is working on a video calling booth that uses 3D imagery on a 3D display to create a lifelike image of the people on both sides. While it’s still experimental, “Project Starline” builds on years of research and acquisitions, and could be the core of a more personal-feeling video meeting in the near future.

The system was only shown via video of unsuspecting participants, who were asked to enter a room with a heavily obscured screen and camera setup. Then the screen lit up with a video feed of a loved one, but in a way none of them expected:

“I could feel her and see her, it was like this 3D experience. It was like she was here.”

“I felt like I could really touch him!”

“It really, really felt like she and I were in the same room.”

CEO Sundar Pichai explained that this “experience” was made possible with high-resolution cameras and custom depth sensors, almost certainly related to Google’s published research on converting videos of people and locations into interactive 3D scenes.

The cameras and sensors — probably a dozen or more hidden around the display — capture the person from multiple angles and figure out their exact shape, creating a live 3D model of them. This model and all the color and lighting information is then (after a lot of compression and processing) sent to the other person’s setup, which shows it in convincing 3D. It even tracks their heads and bodies to adjust the image to their perspective. (There’s a bit more on an early version of the technique here.)
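As a rough mental model of that pipeline (capture from several viewpoints, fuse into a 3D model, compress, transmit, then re-render for the viewer’s head pose), here is a schematic sketch. All of the names and data structures are invented for illustration; this is not Google’s implementation.

```python
# Schematic sketch of a capture -> fuse -> compress -> transmit -> render loop,
# loosely mirroring the pipeline described above. All names are hypothetical;
# this is not Google's code, just an illustration of the data flow.

from dataclasses import dataclass

@dataclass
class Frame:
    rgb: bytes        # color image from one camera
    depth: bytes      # depth map from the paired depth sensor
    camera_id: int

def fuse_to_mesh(frames: list[Frame]) -> dict:
    """Combine multi-view RGB-D frames into a single textured 3D model (stub)."""
    return {"vertices": [], "texture": b"", "sources": [f.camera_id for f in frames]}

def compress(model: dict) -> bytes:
    """Stand-in for the heavy compression step before sending over the network."""
    return repr(model).encode()

def render_for_viewer(model_bytes: bytes, head_pose: tuple) -> None:
    """On the far end: decode and re-render the model for the viewer's head pose,
    which is what makes the display perspective-correct."""
    print(f"rendering {len(model_bytes)} bytes for head pose {head_pose}")

# One tick of the loop, with fake data standing in for real sensor input.
frames = [Frame(rgb=b"...", depth=b"...", camera_id=i) for i in range(12)]
render_for_viewer(compress(fuse_to_mesh(frames)), head_pose=(0.1, -0.05, 0.0))
```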

But 3D TVs have more or less fallen by the wayside; turns out no one wants to wear special glasses for hours at a time, and the quality on glasses-free 3D was generally pretty bad. So what’s making this special 3D image?

Pichai said “we have developed a breakthrough light field display,” probably with the help of the people and IP it scooped up from Lytro, the light field camera company that didn’t manage to get its own tech off the ground and dissolved in 2018.

Light field cameras and displays create and show 3D imagery using a variety of techniques that are very difficult to explain or show in 2D. The startup Looking Glass has made several that are extremely arresting to view in person, showing 3D models and photographic scenes that truly look like tiny holograms.

Whether Google’s approach is similar or different, the effect appears to be equally impressive, as the participants indicate. They’ve been testing this internally and are getting ready to send out units to partners in various industries (such as medicine) where the feeling of a person’s presence makes a big difference.

At this point Project Starline is still very much a prototype, and probably a ridiculously expensive one — so don’t expect to get one in your home any time soon. But it’s not wild to think that a consumer version of this light field setup may be available down the line. Google promises to share more later this year.

#augmented-reality, #gadgets, #google, #google-i-o-2021, #google-io-2021, #hardware, #light-field, #science, #tc

Artificial raises $21M led by Microsoft’s M12 for a lab automation platform aimed at life sciences R&D

Automation is extending into every aspect of how organizations get work done, and today comes news of a startup that is building tools for one industry in particular: life sciences. Artificial, which has built a software platform for laboratories to assist with, or in some cases fully automate, research and development work, has raised $21.5 million.

It plans to use the funding to continue building out its software and its capabilities, to hire more people, and for business development, according to Artificial’s CEO and co-founder David Fuller. The company already has a number of customers including Thermo Fisher and Beam Therapeutics using its software directly and in partnership for their own customers. Sold as aLab Suite, Artificial’s technology can orchestrate and manage the robotic machines that labs might be using to handle some of the work, and assist scientists when they carry out the work themselves.

“The basic premise of what we’re trying to do is accelerate the rate of discovery in labs,” Fuller said in an interview. He believes the process of bringing in more AI into labs to improve how they work is long overdue. “We need to have a digital revolution to change the way that labs have been operating for the last 20 years.”

The Series A is being led by Microsoft’s venture fund M12 — a financial and strategic investor — with Playground Global and AME Ventures also participating. Playground Global, the VC firm co-founded by ex-Google exec and Android co-creator Andy Rubin (who is no longer with the firm), has been focusing on robotics and life sciences and it led Artificial’s first and only other round. Artificial is not disclosing its valuation with this round.

Fuller hails from a background in robotics, specifically industrial robots and automation. Before founding Artificial in 2018, he was at Kuka, the German robotics maker, for a number of years, culminating in the role of CTO; prior to that, Fuller spent 20 years at National Instruments, the instrumentation, test equipment and industrial software giant. Meanwhile, Artificial’s co-founder, Nikhita Singh, has insight into how to bring the advances of robotics into environments that are quite analogue in culture. She previously worked on human-robot interaction research at the MIT Media Lab, and before that spent years at Palantir and working on robotics at Berkeley.

As Fuller describes it, he saw an interesting gap (and opportunity) in the market to apply automation, which he had seen help advance work in industrial settings, to the world of life sciences, both to help scientists track what they are doing better, and help them carry out some of the more repetitive work that they have to do day in, day out.

This gap is perhaps more in the spotlight today than ever before, given the fact that we are in the middle of a global health pandemic. This has hindered a lot of labs from being able to operate full in-person teams, and increased the reliance on systems that can crunch numbers and carry out work without as many people present. And, of course, the need for that work (whether it’s related directly to Covid-19 or not) has perhaps never appeared as urgent as it does right now.

There have been a lot of advances in robotics — specifically around hardware like robotic arms — to manage some of the precision needed to carry out this work. But up to now there have been no real efforts to build platforms that bring all of the work done by that hardware together (or, in the words of automation specialists, “orchestrate” that work and data), nor to link the data from those robot-led efforts with the work that human scientists still carry out. Artificial estimates that some $10 billion is spent annually on lab informatics and automation software, yet data models to unify that work, and platforms to reach across it all, remain absent. That has, in effect, served as a barrier to labs modernizing as much as they could.

A lab, as he describes it, is essentially composed of high-end instrumentation for analytics alongside robotic systems for liquid handling. “You can really think of a lab, frankly, as a kitchen,” he said, “and the primary operation in that lab is mixing liquids.”

But it is also not unlike a factory. As those liquids are mixed, a robotic system typically moves pipettes and liquids around, in and out of plates and mixes. “There’s a key aspect of material flow through the lab, and the material flow part of it is much more like classic robotics,” he said. In other words, there is, as he says, “a combination of bespoke scientific equipment that includes automation, and then classic material flow, which is much more standard robotics,” and that combination is what makes the lab ripe as an applied environment for automation software.

To note: the idea is not to remove humans altogether, but to provide assistance so that they can do their jobs better. He points out that even the automotive industry, which has been automated for 50 years, still has about 6% of all work done by humans. If that is the high-water mark, it sounds like there is a lot of movement left in labs: Fuller estimates that some 60% of all work in the lab is done by humans. And part of the reason for that is simply that it’s too complex to replace scientists — whom he described as “artists” — altogether (for now at least).

“Our solution augments the human activity and automates the standard activity,” he said. “We view that as a central thesis that differentiates us from classic automation.”

There have been a number of other startups emerging that are applying some of the learnings of artificial intelligence and big data analytics for enterprises to the world of science. They include the likes of Turing, which is applying this to helping automate lab work for CPG companies; and Paige, which is focusing on AI to help better understand cancer and other pathology.

The Microsoft connection is one that could well play out in how Artificial’s platform develops going forward, not just in how data is perhaps handled in the cloud, but also on the ground, specifically with augmented reality.

“We see massive technical synergy,” Fuller said. “When you are in a lab you already have to wear glasses… and we think this has the earmarks of a long-term use case.”

Fuller mentioned that one area it’s looking at would involve equipping scientists and other technicians with Microsoft’s HoloLens to help direct them around the labs, and to make sure people are carrying out work consistently by comparing what is happening in the physical world to a “digital twin” of a lab containing data about supplies, where they are located, and what needs to happen next.
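A digital twin in this sense is mostly a structured, queryable record of the lab that software (or a HoloLens overlay) can check observations against. The toy sketch below shows the kind of lookup that comparison implies; the field names and the check_action function are hypothetical, not Artificial’s actual data model.

```python
# Illustrative-only sketch of checking an observed action against a "digital twin"
# of the lab (supplies, locations, next steps). All field names are hypothetical.

lab_twin = {
    "supplies": {"reagent_a": {"location": "fridge_2", "qty_ml": 120}},
    "next_step": {"protocol": "PCR_setup", "expects": "reagent_a", "station": "bench_3"},
}

def check_action(observed_item: str, observed_station: str) -> str:
    """Compare what a technician is seen doing with what the twin says should happen."""
    step = lab_twin["next_step"]
    if observed_item != step["expects"]:
        return f"Warning: expected {step['expects']}, saw {observed_item}"
    if observed_station != step["station"]:
        return f"Warning: this step belongs at {step['station']}"
    return "OK: action matches the planned protocol step"

print(check_action("reagent_a", "bench_3"))  # OK: action matches the planned protocol step
```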

It’s this and all of the other areas that have yet to be brought into our very AI-led enterprise future that interested Microsoft.

“Biology labs today are light- to semi-automated—the same state they were in when I started my academic research and biopharmaceutical career over 20 years ago. Most labs operate more like test kitchens rather than factories,” said Dr. Kouki Harasaki, an investor at M12, in a statement. “Artificial’s aLab Suite is especially exciting to us because it is uniquely positioned to automate the masses: it’s accessible, low code, easy to use, highly configurable, and interoperable with common lab hardware and software. Most importantly, it enables Biopharma and SynBio labs to achieve the crowning glory of workflow automation: flexibility at scale.”

Harasaki is joining Peter Barrett, a founder and general partner at Playground Global, on Artificial’s board with this round.

“It’s become even more clear as we continue to battle the pandemic that we need to take a scalable, reproducible approach to running our labs, rather than the artisanal, error-prone methods we employ today,” Barrett said in a statement. “The aLab Suite that Artificial has pioneered will allow us to accelerate the breakthrough treatments of tomorrow and ensure our best and brightest scientists are working on challenging problems, not manual labor.”

#artificial-intelligence, #augmented-reality, #automation, #biotech, #enterprise, #health, #life-sciences, #rd, #robotics, #science, #startups, #tc

The Last Gameboard raises $4M to ship its digital tabletop gaming platform

The tabletop gaming industry has exploded over the last few years as millions discovered or rediscovered its joys, but it too is evolving — and The Last Gameboard hopes to be the venue for that evolution. The digital tabletop platform has progressed from crowdfunding to a $4M seed round, and having partnered with some of the biggest names in the industry, plans to ship by the end of the year.

As the company’s CEO and co-founder Shail Mehta explained in a TC Early Stage pitch-off earlier this year, The Last Gameboard is a 16-inch square touchscreen device with a custom OS and a sophisticated method of tracking game pieces and hand movements. The idea is to provide a digital alternative to physical games where that’s practical, and do so with the maximum benefit and minimum compromise.

If the pitch sounds familiar… it’s been attempted once or twice before. I distinctly remember being impressed by the possibilities of D&D on an original Microsoft Surface… back in 2009. And I played with another at PAX many years ago. Mehta said that until very recently, the technology simply wasn’t there and the market wasn’t ready.

“People tried this before, but it was either way too expensive or they didn’t have the audience. And the tech just wasn’t there; they were missing that interaction piece,” she explained, and certainly any player will recognize that the, say, iPad version of a game definitely lacks physicality. The advance her company has achieved is in making the touchscreen able to detect not just taps and drags, but game pieces, gestures and movements above the screen, and more.

“What Gameboard does, no other existing touchscreen or tablet on the market can do — it’s not even close,” Mehta said. “We have unlimited touch, game pieces, passive and active… you can use your chess set at home, lift up and put down the pieces, we track it the whole time. We can do unique identifiers with tags and custom shapes. It’s the next step in how interactive surfaces can be.”

It’s accomplished via a not particularly exotic method, which saves the Gameboard from the fate of the Surface and its successors, devices that cost several thousand dollars due to their unique and expensive construction. Mehta explained that they work strictly with ordinary capacitive touch data, albeit at a higher framerate than is commonly used, and then use machine learning to characterize and track object outlines. “We haven’t created a completely new mechanism, we’re just optimizing what’s available today,” she said.
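Put another way, each high-framerate capacitive frame can be treated like a small grayscale image whose bright blobs are whatever is touching the glass, and a model classifies those footprints. The toy sketch below illustrates the general idea; the thresholds, labels and crude area-based “classifier” are invented stand-ins for the machine learning the company describes.

```python
# Toy illustration of classifying object footprints from a capacitive touch frame,
# in the spirit of the approach described above. Thresholds, labels and the tiny
# "model" here are invented; the real system reportedly applies machine learning
# to higher-framerate capacitance data.

import numpy as np

def extract_footprint(frame: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Binarize a capacitance frame into the contact footprint of whatever is on the glass."""
    return (frame > threshold).astype(np.uint8)

def classify_footprint(footprint: np.ndarray) -> str:
    """Crude stand-in for an ML classifier: decide fingertip vs. game piece by contact area."""
    area = int(footprint.sum())
    if area == 0:
        return "nothing"
    if area < 6:
        return "fingertip tap"
    return "game piece base"

rng = np.random.default_rng(0)
frame = rng.random((16, 16)) * 0.3        # background noise
frame[4:9, 4:9] = 0.9                     # a 5x5 high-capacitance blob: a piece on the glass
print(classify_footprint(extract_footprint(frame)))   # -> "game piece base"
```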

The Last Gameboard's interface, showing games available to play on the tablet's surface.

Image Credits: The Last Gameboard

At $699 for the Gameboard it’s not exactly an impulse buy, either, but the fact of the matter is people spend a lot of money on gaming, with some titles running into multiple hundreds of dollars for all the expansions and pieces. Tabletop is now a more than $20 billion industry. If the experience is as good as they hope to make it, this is an investment many a player will not hesitate (much, anyway) to make.

Of course, the most robust set of gestures and features won’t matter if all they had on the platform were bargain-bin titles and grandpa’s-parlor favorites like Parcheesi. Fortunately The Last Gameboard has managed to stack up some of the most popular tabletop companies out there, and aims to have the definitive digital edition for their games.

Asmodee Digital is probably the biggest catch, having adapted many of today’s biggest hits, from modern classics Catan and Carcassonne to crowdfunded breakout hit Scythe and immense dungeon-crawler Gloomhaven. The full list of partners right now includes Dire Wolf Digital, Nomad Games, Auroch Digital, Restoration Games, Steve Jackson Games, Knights of Unity, Skyship Studios, EncounterPlus, PlannarAlly, and Sugar Gamers, as well as individual creators and developers.

Animation of two players grabbing dots on a screen and moving them around.

Image Credits: The Last Gameboard

These games may be best played in person, but have successfully transitioned to digital versions, and one imagines that a larger screen and inclusion of real pieces could make for an improved hybrid experience. There will be options both to purchase games individually, like you might on mobile or Steam, or to subscribe to an unlimited access model (pricing to be determined on both).

It would also be something that the many gaming shops and playing venues might want to have a couple of on hand. Testing out a game in-store and then buying a few to stock, or convincing consumers to do the same, could be a great sales tactic for all involved.

In addition to providing a unique and superior digital version of a game, the device can connect with others to trade moves, send game invites, and all that sort of thing. The whole OS, Mehta said, “is alive and real. If we didn’t own it and create it, this wouldn’t work.” This is more than a skin on top of Android with a built-in store, but there’s enough shared that Android-based ports will be able to be brought over with little fuss.

Head of content Lee Allentuck suggested that the last couple years (including the pandemic) have started to change game developers’ and publishers’ minds about the readiness of the industry for what’s next. “They see the digital crossover is going to happen — people are playing online board games now. If you can be part of that new trend at the very beginning, it gives you a big opportunity,” he said.

CEO Shail Mehta (center) plays Stop Thief on the Gameboard with others on the team.

Allentuck, who previously worked at Hasbro, said there’s widespread interest in the toy and tabletop industry to be more tech-forward, but there’s been a “chicken and egg scenario,” where there’s no market because no one innovates, and no one innovates because there’s no market. Fortunately things have progressed to the point where a company like The Last Gameboard can raise a $4M seed round to help cover the cost of creating that market.

The round was led by TheVentureCity, with participation from SOSV, Riot Games, Conscience VC, Corner3 VC, and others. While the company didn’t go through HAX, SOSV’s involvement has that HAX-y air, and partner Garrett Winther gives a glowing recommendation of its approach: “They are the first to effectively tie collaborative physical and digital gameplay together while not losing the community, storytelling or competitive foundations that we all look for in gaming.”

Mehta noted that the pandemic nearly cooked the company by derailing their funding, which was originally supposed to come through around this time last year when everything went pear-shaped. “We had our functioning prototype, we had filed for a patent, we got the traction, we were gonna raise, everything was great… and then COVID hit,” she recalled. “But we got a lot of time to do R&D, which was actually kind of a blessing. Our team was super small so we didn’t have to lay anyone off — we just went into survival mode for like six months and optimized, developed the platform. 2020 was rough for everyone, but we were able to focus on the core product.”

Now the company is poised to start its beta program over the summer and (following feedback from that) ship its first production units before the holiday season when purchases like this one seem to make a lot of sense.

(This article originally referred to this raise as The Last Gameboard’s round A — it’s actually the seed. This has been updated.)

#artificial-intelligence, #augmented-reality, #funding, #fundings-exits, #gadgets, #gaming, #hardware, #tabletop, #tabletop-gaming, #tc

SightCall raises $42M for its AR-based visual assistance platform

Long before Covid-19 precipitated “digital transformation” across the world of work, customer services and support was built to run online and virtually. Yet it too is undergoing an evolution supercharged by technology.

Today, a startup called SightCall, which has built an augmented reality platform to help field service teams, the companies they work for, and their customers carry out technical and mechanical maintenance or repairs more effectively, is announcing $42 million in funding. It plans to use the money to invest in its tech stack, adding more artificial intelligence tools, and to expand its client base.

The core of its service, explained CEO and co-founder Thomas Cottereau, is AR technology (which comes embedded either in SightCall’s own apps or in the service apps its customers use, with integrations into other standard software used in customer service environments, including Microsoft, SAP, Salesforce and ServiceNow). The augmented reality experience overlays additional information, pointers and other tools over the video stream.

This is used by, say, field service engineers coordinating with central offices when servicing equipment; or by manufacturers to provide better assistance to customers in emergencies or situations where something is not working but might be repaired quicker by the customers themselves rather than engineers that have to be called out; or indeed by call centers, aided by AI, to diagnose whatever the problem might be. It’s a big leap ahead for scenarios that previously relied on work orders, hastily drawn diagrams, instruction manuals, and voice-based descriptions to progress the work in question.

“We like to say that we break the barriers that exist between a field service organization and its customer,” Cottereau said.

The tech, meanwhile, is unique to SightCall, built over years and designed to be used by way of a basic smartphone, and over even a basic mobile network — essential in cases where reception is bad or the locations are remote. (More on how it works below.)

Originally founded in Paris, France before relocating to San Francisco, SightCall has already built up a sizable business across a pretty wide range of verticals, including insurance, telecoms, transportation, telehealth, manufacturing, utilities, and life sciences/medical devices.

SightCall has some 200 big-name enterprise customers on its books, including the likes of Kraft-Heinz, Allianz, GE Healthcare and Lincoln Motor Company, providing services on a B2B basis as well as for teams that are out in the field working with consumer customers. After 100% year-over-year growth in annual recurring revenue in 2019 and 2020, SightCall’s CEO says the company looks set to hit that rate again this year, with a goal of $100 million in annual recurring revenue.

The funding is being led by InfraVia, a European private equity firm, with Bpifrance also participating. The valuation of this round is not being disclosed, but I should point out that an investor told me that PitchBook’s estimate of $122 million post-money is not accurate (we’re still digging on this and will update as and when we learn more).

For some further context on this investment, InfraVia invests in a number of industrial businesses, alongside investments in tech companies building services related to them such as recent investments in Jobandtalent, so this is in part a strategic investment. SightCall has raised $67 million to date.

There has been an interesting wave of startups emerging in recent years building out the tech stack used by people working in the front lines and in the field, a shift after years of knowledge workers getting most of the attention from startups building a new generation of apps.

Workiz and Jobber are building platforms for small business tradespeople to book jobs and manage them once they’re on the books; BigChange helps manage bigger fleets; and Hover has built a platform for builders to be able to assess and estimate costs for work by using AI to analyze images captured by their or their would-be customers’ smartphone cameras.

And there is Streem, which I discovered is a close enough competitor to SightCall that it has bought AdWords ads against SightCall searches on Google. Just ahead of the Covid-19 pandemic breaking wide open, General Catalyst-backed Streem was acquired by Frontdoor to help with the latter’s efforts to build out its home services business, another sign of how all of this is leaping ahead.

What’s interesting in part about SightCall and sets it apart is its technology. Co-founded in 2007 by Cottereau and Antoine Vervoort (currently SVP of product and engineering), the two are both long-time telecoms industry vets who had both worked on the technical side of building next-generation networks.

SightCall first started life as a company called Weemo that built video chat services that could run on WebRTC-based frameworks, which emerged at a time when we were seeing a wider effort to bring more rich media services into mobile web and SMS apps. For consumers and to a large extent businesses, mobile phone apps that work ‘over the top’ (distributed not by your mobile network carrier but the companies that run your phone’s operating system, and thus partly controlled by them) really took the lead and continue to dominate the market for messaging and innovations in messaging.

After a time, Weemo pivoted and renamed itself as SightCall, focusing on packaging the tech that it built into whichever app (native or mobile web) where one of its enterprise customers wanted the tech to live.

The key to how it works comes by way of how SightCall was built, Cottereau explained. The company has spent ten years building and optimizing a network across data centers close to where its customers are, which interconnects with Tier 1 telecoms carriers and has a lot of redundancy built in to ensure uptime. “We work with companies where this connectivity is mission critical,” he said. “The video solution has to work.”

As he describes it, the hybrid system SightCall has built incorporates its own IP that works both with telecoms hardware and software, resulting in a video service that offers 10 different ways of streaming video and a system that automatically chooses the best one for a particular environment, based on where you are, so that even if mobile data or broadband reception is poor, video streaming still works. “Telecoms and software are still very separate worlds,” Cottereau said. “They still don’t speak the same language, and so that is part of our secret sauce, a global roaming mechanism.”
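Conceptually, a “global roaming mechanism” like the one described amounts to measuring several candidate delivery paths and picking whichever currently looks healthiest. The sketch below is a generic illustration of that selection logic; the path names, metrics and scoring weights are invented and do not reflect SightCall’s proprietary stack.

```python
# Generic illustration of choosing the best of several video delivery paths based
# on measured network conditions, in the spirit of the mechanism described above.
# Path names, metrics and scoring weights are all invented for this sketch.

candidate_paths = [
    {"name": "webrtc_p2p",       "rtt_ms": 180, "loss_pct": 4.0, "bandwidth_kbps": 900},
    {"name": "relay_datacenter", "rtt_ms": 95,  "loss_pct": 0.5, "bandwidth_kbps": 1400},
    {"name": "low_bitrate_cell", "rtt_ms": 240, "loss_pct": 2.0, "bandwidth_kbps": 300},
]

def score(path: dict) -> float:
    """Higher is better: reward bandwidth, penalize latency and packet loss."""
    return path["bandwidth_kbps"] / 10 - path["rtt_ms"] * 0.2 - path["loss_pct"] * 15

def pick_path(paths: list[dict]) -> dict:
    """Select the currently healthiest path."""
    return max(paths, key=score)

print(pick_path(candidate_paths)["name"])   # -> "relay_datacenter" given these made-up measurements
```

In practice such a system would keep re-measuring and could switch paths mid-call, but the core decision is the same kind of scoring.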

The tech that the startup has built to date not only has given it a firm grounding against others who might be looking to build in this space, but has led to strong traction with customers. The next steps will be to continue building out that technology to tap deeper into the automation that is being adopted across the industries that already use SightCall’s technology.

“SightCall pioneered the market for AR-powered visual assistance, and they’re in the best position to drive the digital transformation of remote service,” said Alban Wyniecki, partner at InfraVia Capital Partners, in a statement. “As a global leader, they can now expand their capabilities, making their interactions more intelligent and also bringing more automation to help humans work at their best.”

“SightCall’s $42M Series B marks the largest funding round yet in this sector, and SightCall emerges as the undisputed leader in capital, R&D resources and partnerships with leading technology companies enabling its solutions to be embedded into complex enterprise IT,” added Antoine Izsak of Bpifrance. “Businesses are looking for solutions like SightCall to enable customer-centricity at a greater scale while augmenting technicians with knowledge and expertise that unlocks efficiencies and drives continuous performance and profit.”

Cottereau said that the company has had a number of acquisition offers over the years — not a surprise when you consider the foundational technology it has built for how to architect video networks across different carriers and data centers that work even in the most unreliable of network environments.

“We want to stay independent, though,” he said. “I see a huge market here, and I want us to continue the story and lead it. Plus, I can see a way where we can stay independent and continue to work with everyone.”

#ai, #ar, #artificial-intelligence, #augmented-reality, #customer-service, #enterprise, #europe, #field-service, #funding, #industrial, #manufacturing, #service-engineers, #sightcall, #tc, #weemo

Snap to launch a new Creator Marketplace this month, initially focused on Lens Creators

Snap on Wednesday announced its plan to soon launch a Creator Marketplace, which will make it easier for businesses to find and partner with Snapchat creators, including lens creators, AR creators and later, prominent Snapchat creators known as Snap Stars. At launch, the marketplace will focus on connecting brands and AR creators for AR ads. It will then expand to support all Snap Creators by 2022.

The company had previously helped connect its creator community with advertisers through its Snapchat Storytellers program, which first launched into pilot testing in 2018 — already a late arrival to the space. However, that program’s focus was similar to Facebook’s Brand Collabs Manager, as it focused on helping businesses find Snap creators who could produce video content.

Snap’s new marketplace, meanwhile, has a broader focus in terms of connecting all sorts of creators with the Snap advertising ecosystem. This includes Lens Creators, Developers and Partners, and then later, Snap’s popular creators with public profiles.

Snap says the Creator Marketplace will open to businesses later this month to help them partner with a select group of AR Creators in Snap’s Lens Network. These creators can help businesses build AR experiences without the need for extensive creative resources, which makes Snap’s AR ads more accessible to businesses, including smaller businesses without in-house developer talent.

Lens creators have already found opportunity working for businesses that want to grow their Snapchat presence — even allowing some creators to quit their day jobs and just build lenses for a living. Snap has been further investing in this area of its business, having announced in December a $3.5 million fund directed towards AR Lens creation. The company said at the time there were tens of thousands of Lens creators who had collectively made over 1.5 million Lenses to date.

Using Lenses has grown more popular, too, the company had noted, saying that over 180 million people interact with a Snapchat Lens every day — up from 70 million daily active users of Lenses when the Lens Explorer section first launched in the app in 2018.

Now, Snap says that over 200 million Snapchat users interact with augmented reality on a daily basis, on average, out of its 280 million daily users. The majority (over 90%) of these users are 13-25 year olds. In total, users are posting over 5 billion Snaps per day.

Snap says the Creator Marketplace will remain focused on connecting businesses with AR Lens Creators throughout 2021.

The following year, it will expand to include the community of professional creators and storytellers who understand the current trends and interests of the Snap user base and can help businesses with their ad campaigns. The company will not take a cut of the deals facilitated through the Marketplace, it says.

This would include the creators making content for Snap’s new TikTok rival, Spotlight, which launched in November 2020. Snap encouraged adoption of the feature by shelling out $1 million per day to creators of top videos. In March 2021, over 125 million Snapchat users watched Spotlight, it says.

Image Credits: Snapchat

Spotlight isn’t the only way Snap is challenging TikTok.

The company also on Wednesday announced it’s snagging two of TikTok’s biggest stars for its upcoming Snap Originals lineup: Charli and Dixie D’Amelio. The siblings, who have gained over 20 million follows on Snapchat this past year, will star in the series “Charli vs. Dixie.” Other new Originals will feature names like artist Megan Thee Stallion, actor Ryan Reynolds, twins and influencers Niki and Gabi DeMartino, and YouTube beauty vlogger Manny Mua, among others.

Snap’s shows were watched by over 400 million people in 2020, including 93% of the Gen Z population in the U.S., it noted.

#advertising, #advertising-tech, #apps, #ar-ads, #augmented-reality, #brands, #creators, #lenses, #marketplace, #media, #mobile, #mobile-applications, #snap, #snap-inc, #snapchat, #social, #spotlight

Filing: Snap paid $124M for Fit Analytics as it gears up for a bigger e-commerce push

Earlier this year we reported on how Snap had acquired Berlin-based Fit Analytics, an AI-based fitting technology startup, as part of a wider push into e-commerce services, specifically to gain technology that can help prospective online shoppers get a better sense of how a particular item or size would fit them. A 10-Q filing from Snap today has now put a price tag on that deal.

Snap paid a total of $124.4 million, covering technology, IP, customer relationships and payouts to the team. The filing also noted that Snap spent a total of $204.5 million on acquisitions in 2020, but did not break them out.

The news comes ahead of Snap — whose flagship app Snapchat now has 280 million daily active users — preparing for its Snap Partner Conference in May. Sources say the company plans to announce, among other news, deeper commerce features for Snapchat — specifically tools to make it easier for Snapchat users to interact with and buy items that appear in the app, either in ads or more organically in content shared by other users.

While the exact details of those commerce tools, and the timing of when they might come online, are not yet known, Snap has hardly kept its interest in commerce a secret.

Snap has been hiring for roles to support its commerce efforts. Currently it’s advertising a variety of engineering, marketing and product roles in commerce, including a Product Manager listing that calls for the hire to “develop and launch shopping experiences and services that make shopping fun for Snapchatters and drive results for brands.” The listings also include a role specifically to work on Snapchat-based e-commerce efforts for direct-to-consumer (D2C) businesses.

And it has been making other recent acquisitions in addition to Fit Analytics that also line up with that.

They have included Screenshop, an app that describes itself as “the first AI-backed style lens,” which can identify shoppable items in photos and then build a custom catalog of similar products that you can buy (akin to the “shop the look” features you will have come across in fashion media). It has also acquired Ariel AI, which has built technology to quickly render people in 3D, technology that can be used in a diverse set of applications, from games to virtual try-ons of clothing, makeup or accessories.

Snap confirmed the Ariel acquisition to CNBC in January. And while the Screenshop deal was first reported earlier this month by The Information, Snap has declined to comment on it, although we have found people who worked at the startup now working at Snap.

Both acquisitions closed in 2020, according to reports, meaning that they came out of that year’s $204.5 million acquisition run. (Snap also noted a smaller acquisition, for $7.6 million, in the most recent quarter, but it did not disclose any further details.)

Even before all this, Snap had been making smaller efforts and tests in commerce going back years, although none of them have tipped into mainstream efforts.

Among them, in 2018 it launched a Snap Store — but that so far has not progressed beyond selling merchandise based on Bitmoji characters. And work on a Gucci shoe campaign last year, where Snapchat users could try shoes on in AR and then buy them, was seen by some as its big step into commerce — “we’ve moved from pure entertainment and expanded the use-case. And so with brands, it’s a really exciting time, especially in fashion and beauty. The Snapchat camera is connecting brands to their audiences in new ways,” a Snapchat AR executive said at the time — but that also didn’t develop into much beyond a one-off effort.

But with the pandemic leading to a surge of shopping online, and technology continuing to improve, the iron may finally be hot here.

As we said around the Fit Analytics acquisition, the idea of diversifying Snapchat’s revenue streams by building in more commerce experiences makes a lot of sense.

It gives the company another revenue stream at a time when Apple is introducing changes that might well affect how advertising can run and be monetized in the future. (The company most recently posted average revenues per user of $2.74, a figure Wall Street will be hoping will grow, not shrink.) It also plays into the demographics that Snapchat targets, where younger consumers are using social media apps to discover, share and shop for goods.

And specifically in the case of fashion, building experiences to shop for items on Snapchat leans into the augmented reality, image-altering, hyper-visual technology that has become a well-known and much-used hallmark of Snapchat and its owner, self-titled “camera company” Snap.

#artificial-intelligence, #augmented-reality, #fit-analytics, #ma, #mergers-and-acquisitions, #snap, #social, #tc

Audi spinoff holoride collects $12m in Series A led by Terranet AB

Holoride, the company that’s building an immersive XR in-vehicle media platform, today announced that it raised €10 million (approximately $12 million) in its Series A funding round, valuing the company at €30 million ($36 million).

The Swedish ADAS software development company Terranet led the round with €3.2 million (~$3.9 million), followed by a group of Chinese financial and automotive technology investors, organized by investment professional Jingjing Xu, and educational and entertainment game development company Schell Games, which has partnered with holoride in the past to create content. 

Holoride will use the fresh funds to search for new developers and other talent both as it prepares to expand into global markets like Europe, the United States and Asia, and in advance of its summer 2022 launch for private passenger cars. 

“This goes hand-in-hand with putting more emphasis on the content creator community, and as of summer this year, releasing a lot of tools to help them build content for cars on our platform,” Nils Wollny, holoride’s CEO and founder, told TechCrunch. 

The Munich-based company launched at CES in 2019. TechCrunch got to test out its in-car virtual reality system. Our team was surprised, and delighted, to find that holoride had figured out how to quell the motion sickness caused both by being a passenger in a vehicle, and by using a VR headset. The key? Matching the experience users have within the headset to the movement of the vehicle. Once holoride launches, users will be able to download the holoride app to their phones or other personal devices like VR headsets, which will connect wirelessly to the car itself, and extend their reality.  

“Our technology has two sides,” said Wollny. “One is the localization, or positioning software, that takes data points from the car and performs real time synchronization. The other part is what we call our Elastic Software Development Kit. Content creators can build elastic content, which adapts to your travel time and routes. The collaboration with Terranet means their sensors and software stack that allow for a more precise capture and interpretation of the environment at an even faster speed with higher accuracy will enable us in the future for even more possibilities.”
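The heart of that synchronization is feeding the car’s motion into the virtual camera so that what riders feel matches what they see. Below is a minimal sketch of that idea; the class names and the sample sensor values are hypothetical and are not holoride’s SDK.

```python
# Minimal sketch of syncing a virtual camera to vehicle motion, which is the basic
# idea behind matching the in-headset experience to the car's movement. Names and
# the sample data are hypothetical; this is not holoride's SDK.

from dataclasses import dataclass

@dataclass
class VehicleSample:
    yaw_rate_deg_s: float     # turning rate reported by the car's sensors
    accel_m_s2: float         # longitudinal acceleration

@dataclass
class VirtualCamera:
    yaw_deg: float = 0.0
    speed_m_s: float = 0.0

    def apply(self, sample: VehicleSample, dt: float) -> None:
        """Advance the virtual camera by the same rotation and speed change the car reports."""
        self.yaw_deg += sample.yaw_rate_deg_s * dt
        self.speed_m_s = max(self.speed_m_s + sample.accel_m_s2 * dt, 0.0)

cam = VirtualCamera()
for sample in [VehicleSample(10.0, 0.8), VehicleSample(10.0, 0.8), VehicleSample(0.0, -0.4)]:
    cam.apply(sample, dt=0.1)

print(round(cam.yaw_deg, 2), round(cam.speed_m_s, 2))   # 2.0 degrees of turn, 0.12 m/s
```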

Terranet’s VoxelFlow™ software, which was originally designed for ADAS applications, will help holoride advance its real-time, in-vehicle XR entertainment. Terranet’s CEO Pär-Olof Johannesson describes VoxelFlow™ as a new paradigm in computer vision and object identification, wherein a combination of sensors, event cameras and a laser scanner are integrated into a car’s windshield and headlamps in order to calculate the distance, direction and speed of an object.

Holoride, which is manufacturer-agnostic, will be able to use the data points calculated by VoxelFlow™ in real time whenever it is running in a vehicle built with Terranet’s software integrated. More important, though, is holoride’s ability to reuse that 3D event data for XR applications, handing it to creators so they can build more interactive experiences. Terranet, for its part, is also looking forward to opening up a new vertical for VoxelFlow™.

“We are of course very eager to access holoride’s wide pipeline, as well,” said Johannesson. “This deal is very much about expanding the addressable market and tapping into the heart of the automotive industry, where lead times and turnaround times are usually pretty long.”

Holoride is on a mission to revolutionize the passenger experience by turning dead car time into interactive experiences that can run the gamut of gaming, education, productivity, mindfulness and more. For example, around Halloween 2019, holoride teamed up with Ford and Universal Pictures to immerse riders into the frightening world of the Bride of Frankenstein, replete with monsters jumping out and tasks for riders to perform. 

Wollny said holoride always has an eye towards the next step, even though its first product hasn’t gone to market yet. He understands that the future is in autonomous vehicles, and wants to build an essential element of the tech stack of future cars, cars in which everyone is a passenger.

“Car manufacturers always focus on the buyer of the car or the driver, but not so much on the passenger,” said Wollny. “The passenger is who holoride really focuses on. We want to turn every vehicle into a moving theme park.”

#augmented-reality, #entertainment, #holoride, #tc, #transportation, #virtual-reality

Leo AR, user-facing marketplace for 3D objects, raises $3 million seed round

Apple’s introduction of ARKit changed the game for entrepreneurs, not unlike the App Store did on a much, much larger scale back in 2008.

One entrepreneur, Dana Loberg, has capitalized on the launch of ARKit with her startup Leo AR.

Leo is the result of a few pivots. The company first started out as MojiLala, which launched out of betaworks. It was a hassle-free sticker marketplace that allowed artists to upload their stickers and sell them through the platform for end-users to use in a number of locations.

In 2017, MojiLala released a new app called Surreal, which allowed artists to sell virtual objects to end users and lay them over their camera to record fun content. Now as Leo AR, the company is focused on 3D augmented reality objects without losing focus on giving artists an easy-to-use outlet for their virtual wares.

Today, Leo is announcing a $3 million seed round led by Great Oaks Ventures, with participation from Dennis Phelps of IVP, betaworks, Deutsche Telekom, Quake Capital, and other angel investors.

Image Credits: Leo AR

The app operates on a freemium basis, letting end users subscribe to certain artists they like on the platform. Leo takes a 30 percent cut on those purchases, but Loberg said that her main priority beyond generating revenue is ensuring that artists get paid well and are incentivized to create and sell through her platform.

Loberg also shared that the app has exploded in popularity among children, who enjoy creating videos with dinosaurs or dragons in them.

In fact, Leo users have created more than 8 million videos on the platform, and active users add more than 85 3D objects to their scenes and average 10+ minutes in the app when they use it.

Leo not only lets users distribute their content out to other platforms like Instagram, but it also has a feed of the best videos created in Leo for others to check out.

#apps, #augmented-reality, #betaworks, #dana-loberg, #entertainment, #recent-funding, #startups

Amazon is opening a London hair salon to test AR and other retail technologies

Amazon announced this morning it’s opening Amazon Salon, the retailer’s first hair salon and a place where Amazon aims to test new technologies with the general public. The salon will occupy over 1,500 sq. ft on Brushfield Street in London’s Spitalfields, where Amazon says it will initially be trialing the use of augmented reality (AR) and “point-and-learn” technology — the latter being a system that allows customers to point to products on a display shelf in order to learn more through videos and other content that then appears on a display screen.

To order a product, customers scan the QR code on the shelf, which takes them to the Amazon.co.uk shopping page for the item, where they can add it to their cart and check out.

Image Credits: Amazon

The salon’s AR technology, meanwhile, will be used to allow customers to experiment by virtually trying on different hair colors before making a commitment to a new shade.

Amazon has already entered the convenience store market, grocery business and other physical retail, where it’s innovating with new technologies like cashierless checkout, smart grocery carts, and biometric systems. But it’s not clear that Amazon actually has ambitions to be in the salon business itself. Instead, it seems the salon will largely serve as a testing ground for new technologies that Amazon will likely want to sell to other retail clients in the future, or perhaps implement in its own stores. And in the case of AR, Amazon may want to gather data on customers’ experiences it can use on its own shopping site, too.

Hinting that its goals are not about the salon business itself, Amazon today describes the salon as an “experiential venue where we showcase new products and technology,” and notes that it has no other plans to open more salons at this time.

The company has also recruited an existing salon owner, Elena Lavagni of Neville Hair & Beauty Salon, to help with this project, instead of hiring a new staff to run it long-term. Lavagni and her team have previously provided hairdressing services for other events, like Paris Fashion Week and the Cannes Film Festival.

Image Credits: Amazon

Amazon has not detailed what sort of data it will collect from customers who use the salon, but it’s clearly there to learn about how new retail technologies would work in a real-world environment. Still, the fact that Amazon is capturing customer images for its hair color virtual try-on should raise questions about what it plans to do with the data it collects from the new salon. Will it only be used to learn about the specific technology being tested, or will it be put to other uses, too?

As many recall, Amazon has a complicated history with its use of technologies like facial recognition and biometrics, having sold biometric facial recognition services to law enforcement in the U.S., while its facial recognition technology was the subject of a data privacy lawsuit. And its Ring camera company continues to work in partnership with police. Customers should be told if they’re participating in an Amazon research project, not just having fun with new tech products.

Like other Amazon physical stores, the salon will first be open to Amazon employees only before offering bookings to the wider public in the weeks to come.

#amazon, #amazon-co-uk, #ar, #augmented-reality, #ecommerce, #london, #online-shopping, #salon, #shopping, #technology
