The Robots Are Coming for Phil in Accounting

Workers with college degrees and specialized training once felt relatively safe from automation. They aren’t.

#artificial-intelligence, #labor-and-jobs, #layoffs-and-job-reductions, #robots-and-robotics, #unemployment, #workplace-environment

Deep Science: AI adventures in arts and letters

There’s more AI news out there than anyone can possibly keep up with. But you can stay tolerably up to date on the most interesting developments with this column, which collects AI and machine learning advancements from around the world and explains why they might be important to tech, startups or civilization.

To begin on a lighthearted note: The ways researchers find to apply machine learning to the arts are always interesting — though not always practical. A team from the University of Washington wanted to see if a computer vision system could learn to tell what is being played on a piano just from an overhead view of the keys and the player’s hands.

Audeo, the system trained by Eli Shlizerman, Kun Su and Xiulong Liu, watches video of piano playing and first extracts a piano-roll-like simple sequence of key presses. Then it adds expression in the form of length and strength of the presses, and lastly polishes it up for input into a MIDI synthesizer for output. The results are a little loose but definitely recognizable.
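
To make the pipeline's final step concrete, here is a minimal sketch (not Audeo's actual code, which isn't reproduced here) that converts a binary piano-roll array into a MIDI file with the pretty_midi library; the frame rate and fixed velocity are assumptions for illustration.

```python
import numpy as np
import pretty_midi

def piano_roll_to_midi(roll, fps=25, velocity=80):
    """Convert a binary piano roll (128 pitches x T video frames) to MIDI.

    Each contiguous run of 1s in a pitch row becomes one note, with start
    and end times derived from the frame rate of the source video.
    """
    midi = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)  # Acoustic Grand Piano
    for pitch in range(roll.shape[0]):
        active = np.flatnonzero(roll[pitch])
        if active.size == 0:
            continue
        # Split active frame indices into contiguous runs (one run = one note).
        runs = np.split(active, np.where(np.diff(active) > 1)[0] + 1)
        for run in runs:
            piano.notes.append(pretty_midi.Note(
                velocity=velocity, pitch=pitch,
                start=run[0] / fps, end=(run[-1] + 1) / fps))
    midi.instruments.append(piano)
    return midi

# Example: a C major chord (C4, E4, G4) held for one second at 25 fps.
roll = np.zeros((128, 25), dtype=np.uint8)
roll[[60, 64, 67], :] = 1
piano_roll_to_midi(roll).write("demo.mid")
```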

Diagram showing how video of a piano player's hands on the keys is turned into MIDI sequences.

Image Credits: Shlizerman et al.

“To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said Shlizerman. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”

Another from the field of arts and letters is this extremely fascinating research into computational unfolding of ancient letters too delicate to handle. The MIT team was looking at “locked” letters from the 17th century that are so intricately folded and sealed that to remove the letter and flatten it might permanently damage them. Their approach was to X-ray the letters and set a new, advanced algorithm to work deciphering the resulting imagery.

Diagram showing X-ray views of a letter and how it is analyzed to virtually unfold it. Image Credits: MIT

“The algorithm ends up doing an impressive job at separating the layers of paper, despite their extreme thinness and tiny gaps between them, sometimes less than the resolution of the scan,” MIT’s Erik Demaine said. “We weren’t sure it would be possible.” The work may be applicable to many kinds of documents that are difficult for simple X-ray techniques to unravel. It’s a bit of a stretch to categorize this as “machine learning,” but it was too interesting not to include. Read the full paper at Nature Communications.

Diagram showing reviews of electric car charge points are analyzed and turned into useful data.

Image Credits: Asensio et al.

You arrive at a charge point for your electric car and find it to be out of service. You might even leave a bad review online. In fact, thousands of such reviews exist and constitute a potentially very useful map for municipalities looking to expand electric vehicle infrastructure.

Georgia Tech’s Omar Asensio trained a natural language processing model on such reviews, and it soon became expert at parsing them by the thousands, extracting insights such as where outages were common, comparative costs and other factors.
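
As a rough illustration of the approach (not Asensio's actual model, whose details aren't given here), a classifier that flags outage complaints in review text takes only a few lines with scikit-learn; the tiny training set below is invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled reviews: 1 = reports an outage, 0 = does not.
reviews = [
    "Charger was broken again, wasted a trip",
    "Out of service for the third week running",
    "Fast charging and easy parking",
    "Worked fine, a bit pricey per kWh",
]
labels = [1, 1, 0, 0]

# Bag-of-words features (with bigrams) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["charger out of service again"]))  # -> [1]
```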

#artificial-intelligence, #deep-science, #ec-column, #ec-robotics, #startups

UK’s MHRA says it has ‘concerns’ about Babylon Health — and flags legal gap around triage chatbots

The UK’s medical device regulator has admitted it has concerns about VC-backed AI chatbot maker Babylon Health. It made the admission in a letter sent to a clinician who’s been raising the alarm about Babylon’s approach toward patient safety and corporate governance since 2017.

The HSJ reported on the MHRA’s letter to Dr David Watkins yesterday. TechCrunch has reviewed the letter (see below), which is dated December 4, 2020. We’ve also seen additional context about what was discussed in a meeting referenced in the letter, as well as other correspondence between Watkins and the regulator in which he details a number of wide-ranging concerns.

In an interview, Watkins emphasized that the concerns the regulator shares are “far broader” than the single (albeit important) issue of chatbot safety.

“The issues relate to the corporate governance of the company — how they approach safety concerns. How they approach people who raise safety concerns,” Watkins told TechCrunch. “That’s the concern. And some of the ethics around the mis-promoting of medical devices.

“The overall story is they did promote something that was dangerously flawed. They made misleading claims with regards to how [the chatbot] should be used — its intended use — with [Babylon CEO] Ali Parsa promoting it as a ‘diagnostic’ system — which was never the case. The chatbot was never approved for ‘diagnosis’.”

“In my opinion, in 2018 the MHRA should have taken a much firmer stance with Babylon and made it clear to the public that the claims that were being made were false — and that the technology was not approved for use in the way that Babylon were promoting it,” he went on. “That should have happened and it didn’t happen because the regulations at that time were not fit for purpose.”

“In reality there is no regulatory ‘approval’ process for these technologies and the legislation doesn’t require a company to act ethically,” Watkins also told us. “We’re reliant on the healthtech sector behaving responsibly.”

The consultant oncologist began raising red flags about Babylon with UK healthcare regulators (CQC/MHRA) as early as February 2017 — initially over the “apparent absence of any robust clinical testing or validation”, as he puts it in correspondence to regulators. However, with Babylon opting to deny problems and go on the attack against critics, his concerns mounted.

An admission by the medical devices regulator that all Watkins’ concerns are “valid” and are “ones that we share” blows Babylon’s deflective PR tactics out of the water.

“Babylon cannot say that they have always adhered to the regulatory requirements — at times they have not adhered to the regulatory requirements. At different points throughout the development of their system,” Watkins also told us, adding: “Babylon never took the safety concerns as seriously as they should have. Hence this issue has dragged on over a more than three year period.”

During this time the company has been steaming ahead inking wide-ranging ‘digitization’ deals with healthcare providers around the world — including a 10-year deal agreed with the UK city of Wolverhampton last year to provide an integrated app that’s intended to have a reach of 300,000 people.

It also has a 10-year agreement with the government of Rwanda to support digitization of its health system, including via digitally enabled triage. Other markets it’s rolled into include the US, Canada and Saudi Arabia.

Babylon says it now covers more than 20 million patients and has done 8 million consultations and “AI interactions” globally. But is it operating to the high standards people would expect of a medical device company?

Safety, ethical and governance concerns

In a written summary, dated October 22, of a video call which took place between Watkins and the UK medical devices regulator on September 24 last year, he summarizes what was discussed in the following way: “I talked through and expanded on each of the points outlined in the document, specifically; the misleading claims, the dangerous flaws and Babylon’s attempts to deny/suppress the safety issues.”

In his account of this meeting, Watkins goes on to report: “There appeared to be general agreement that Babylon’s corporate behaviour and governance fell below the standards expected of a medical device/healthcare provider.”

“I was informed that Babylon Health would not be shown leniency (given their relationship with [UK health secretary] Matt Hancock),” he also notes in the summary — a reference to Hancock being a publicly enthusiastic user of Babylon’s ‘GP at hand’ app (for which he was accused in 2018 of breaking the ministerial code).

In a separate document, which Watkins compiled and sent to the regulator last year, he details 14 areas of concern — covering issues including the safety of the Babylon chatbot’s triage; “misleading and conflicting” T&Cs — which he says contradict promotional claims it has made to hype the product; as well as what he describes as a “multitude of ethical and governance concerns” — including its aggressive response to anyone who raises concerns about the safety and efficacy of its technology.

This has included a public attack campaign against Watkins himself, which we reported on last year; as well as what he lists in the document as “legal threats to avoid scrutiny & adverse media coverage”.

Here he notes that Babylon’s response to safety concerns he had raised back in 2018 — which had been reported on by the HSJ — was also to go on the attack, with the company claiming then that “vested interest” were spreading “false allegations” in an attempt to “see us fail”.

“The allegations were not false and it is clear that Babylon chose to mislead the HSJ readership, opting to place patients at risk of harm, in order to protect their own reputation,” writes Watkins in associated commentary to the regulator.

He goes on to point out that, in May 2018, the MHRA had itself independently notified Babylon Health of two incidents related to the safety of its chatbot (one involving missed symptoms of a heart attack, another missed symptoms of DVT) — yet the company still went on to publicly rubbish the HSJ’s report the following month (which was entitled: “Safety regulators investigating concerns about Babylon’s ‘chatbot’”).

Wider governance and operational concerns Watkins raises in the document include Babylon’s use of staff NDAs — which he argues leads to a culture inside the company where staff feel unable to speak out about any safety concerns they may have; and what he calls “inadequate medical device vigilance” (whereby he says the Babylon bot doesn’t routinely request feedback on the patient outcome post triage, arguing that: “The absence of any robust feedback system significantly impairs the ability to identify adverse outcomes”).

Re: unvarnished staff opinions, it’s interesting to note that Babylon’s Glassdoor rating at the time of writing is just 2.9 stars — with only a minority of reviewers saying they would recommend the company to a friend and where Parsa’s approval rating as CEO is also only 45% on aggregate. (“The technology is outdated and flawed,” writes one Glassdoor reviewer who is listed as a current Babylon Health employee working as a clinical ops associate in Vancouver, Canada — where privacy regulators have an open investigation into its app. Among the listed cons in the one-star review is the claim that: “The well-being of patients is not seen as a priority. A real joke to healthcare. Best to avoid.”)

Per Watkins’ report of his online meeting with the MHRA, he says the regulator agreed NDAs are “problematic” and impact on the ability of employees to speak up on safety issues.

He also writes that it was acknowledged that Babylon employees may fear speaking up because of legal threats. His minutes further record that: “Comment was made that the MHRA are able to look into concerns that are raised anonymously.”

In the summary of his concerns about Babylon, Watkins also flags an event in 2018 which the company held in London to promote its chatbot — during which he writes that it made a number of “misleading claims”, such as that its AI generates health advice that is “on-par with top-rated practicing clinicians”.

The flashy claims led to a blitz of hyperbolic headlines about the bot’s capabilities — helping Babylon to generate hype at a time when it was likely to have been pitching investors to raise more funding.

The London-based startup was valued at $2BN+ in 2019 when it raised a massive $550M Series C round, from investors including Saudi Arabia’s Public Investment Fund and a large (unnamed) U.S.-based health insurance company, as well as insurance giant Munich Re’s ERGO Fund — trumpeting the raise at the time as the largest ever in Europe or the U.S. for digital health delivery.

“It should be noted that Babylon Health have never withdrawn or attempted to correct the misleading claims made at the AI Test Event [which generated press coverage it’s still using as a promotional tool on its website in certain jurisdictions],” Watkins writes to the regulator. “Hence, there remains an ongoing risk that the public will put undue faith in Babylon’s unvalidated medical device.”

In his summary he also includes several pieces of anonymous correspondence from a number of people claiming to work (or have worked) at Babylon — which make a number of additional claims. “There is huge pressure from investors to demonstrate a return,” writes one of these. “Anything that slows that down is seen [a]s avoidable.”

“The allegations made against Babylon Health are not false and were raised in good faith in the interests of patient safety,” Watkins goes on to assert in his summary to the regulator. “Babylon’s ‘repeated’ attempts to actively discredit me as an individual raises serious questions regarding their corporate culture and trustworthiness as a healthcare provider.”

In its letter to Watkins (screengrabbed below), the MHRA tells him: “Your concerns are all valid and ones that we share”.

It goes on to thank him for personally and publicly raising issues “at considerable risk to yourself”.

Letter from the MHRA to Dr David Watkins (Screengrab: TechCrunch)

Babylon has been contacted for a response to the MHRA’s validation of Watkins’ concerns. At the time of writing it had not responded to our request for comment.

The startup told the HSJ that it meets all the local requirements of regulatory bodies for the countries it operates in, adding: “Babylon is committed to upholding the highest of standards when it comes to patient safety.”

In one aforementioned aggressive incident last year, Babylon put out a press release attacking Watkins as a ‘troll’ and seeking to discredit the work he was doing to highlight safety issues with the triage performed by its chatbot.

It also claimed its technology had been “NHS validated” as a “safe service 10 times”.

It’s not clear what validation process Babylon was referring to there — and Watkins also flags and queries that claim in his correspondence with the MHRA, writing: “As far as I am aware, the Babylon chatbot has not been validated — in which case, their press release is misleading.”

The MHRA’s letter, meanwhile, makes it clear that the current regulatory regime in the UK for software-based medical device products does not adequately cover software-powered ‘healthtech’ devices, such as Babylon’s chatbot.

Per Watkins, there is currently no approval process at all. Such devices are merely registered with the MHRA — but there’s no legal requirement that the regulator assess them or even receive documentation related to their development. He says they exist independently, with the MHRA simply holding a register.

“You have raised a complex set of issues and there are several aspects that fall outside of our existing remit,” the regulator concedes in the letter. “This highlights some issues which we are exploring further, and which may be important as we develop a new regulatory framework for medical devices in the UK.”

An update to pan-EU medical devices regulation — which will bring in new requirements for software-based medical devices, and had originally been intended to be implemented in the UK in May last year — will no longer take effect there, given the country has left the bloc.

The UK is instead in the process of formulating its own regulatory update for medical device rules. This means there’s still a gap around software-based ‘healthtech’ — which isn’t expected to be fully plugged for several years. (Although Watkins notes there have been some tweaks to the regime, such as a partial lifting of confidentiality requirements last year.)

In a speech last year, health secretary Hancock told parliament that the government aimed to formulate a regulatory system for medical devices that is “nimble enough” to keep up with tech-fuelled developments such as health wearables and AI while “maintaining and enhancing patient safety”. It will include giving the MHRA “a new power to disclose to members of the public any safety concerns about a device”, he said then.

In the meantime, the existing (outdated) regulatory regime appears to be continuing to tie the regulator’s hands — at least vis-à-vis what it can say in public about safety concerns. It has taken Watkins making the MHRA’s letter to him public for those shared concerns to surface.

In the letter the MHRA writes that “confidentiality unfortunately binds us from saying more on any specific investigation”, although it also tells him: “Please be assured that your concerns are being taken seriously and if there is action to be taken, then we will.”

“Based on the wording of the letter, I think it was clear that they wanted to provide me with a message that we do hear you, that we understand what you’re saying, we acknowledge the concerns which you’ve raised, but we are limited by what we can do,” Watkins told us.

He also said he believes the regulator has engaged with Babylon over concerns he’s raised these past three years — noting the company has made a number of changes after he had raised specific queries (such as to its T&Cs, which had initially said it’s not a medical device but were subsequently withdrawn and changed to acknowledge it is; or claims it had made that the chatbot is “100% safe”, which were withdrawn — after an intervention by the Advertising Standards Authority in that case).

The chatbot itself has also been tweaked to put less emphasis on the diagnosis as an outcome and more emphasis on the triage outcome, per Watkins.

“They’ve taken a piecemeal approach [to addressing safety issues with chatbot triage]. So I would flag an issue [publicly via Twitter] and they would only look at that very specific issue. Patients of that age, undertaking that exact triage assessment — ‘okay, we’ll fix that, we’ll fix that’ — and they would put in place a [specific fix]. But sadly, they never spent time addressing the broader fundamental issues within the system. Hence, safety issues would repeatedly crop up,” he said, citing examples of multiple issues with cardiac triages that he also raised with the regulator.

“When I spoke to the people who work at Babylon they used to have to do these hard fixes… All they’d have to do is just kind of ‘dumb it down’ a bit. So, for example, for anyone with chest pain it would immediately say go to A&E. They would take away any thought process to it,” he added. (It also of course risks wasting healthcare resources — as he also points out in remarks to the regulators.)

“That’s how they over time got around these issues. But it highlights the challenges and difficulties in developing these tools. It’s not easy. And if you try and do it quickly and don’t give it enough attention then you just end up with something which is useless.”

Watkins also suspects the MHRA has been involved in getting Babylon to remove certain pieces of hyperbolic promotional material related to the 2018 AI event from its website.

In one curious episode, also related to the 2018 event, Babylon’s CEO demoed an AI-powered interface that appeared to show real-time transcription of a patient’s words combined with an ’emotion-scanning’ AI — which he said scanned facial expressions in real-time to generate an assessment of how the person was feeling — with Parsa going on to tell the audience: “That’s what we’ve done. That’s what we’ve built. None of this is for show. All of this will be either in the market or already in the market.”

However neither feature has actually been brought to market by Babylon as yet. Asked about this last month, the startup told TechCrunch: “The emotion detection functionality, seen in old versions of our clinical portal demo, was developed and built by Babylon‘s AI team. Babylon conducts extensive user testing, which is why our technology is continually evolving to meet the needs of our patients and clinicians. After undergoing pre-market user-testing with our clinicians, we prioritised other AI-driven features in our clinical portal over the emotion recognition function, with a focus on improving the operational aspects of our service.”

“I certainly found [the MHRA’s letter] very reassuring and I strongly suspect that the MHRA have been engaging with Babylon to address concerns which have been identified over the past three year period,” Watkins also told us today. “The MHRA don’t appear to have been ignoring the issues but Babylon simply deny any problems and can sit behind the confidentiality clauses.”

In a statement on the current regulatory situation for software-based medical devices in the UK, the MHRA told us:

The MHRA ensures that manufacturers of medical devices comply with the Medical Devices Regulations 2002 (as amended). Please refer to existing guidance.

The Medicines and Medical Devices Act 2021 provides the foundation for a new improved regulatory framework that is currently being developed. It will consider all aspects of medical device regulation, including the risk classification rules that apply to Software as a Medical Device (SaMD).

The UK will continue to recognise CE marked devices until 1 July 2023. After this time, requirements for the UKCA Mark must be met. This will include the revised requirements of the new framework that is currently being developed.

The Medicines and Medical Devices Act 2021 allows the MHRA to undertake its regulatory activities with a greater level of transparency and share information where that is in the interests of patient safety.

The regulator declined to be interviewed or to respond to questions about the concerns regarding Babylon which, per its letter to Watkins, it shares — telling us: “The MHRA investigates all concerns but does not comment on individual cases.”

“Patient safety is paramount and we will always investigate where there are concerns about safety, including discussing those concerns with individuals that report them,” it added.

Watkins raised one more salient point on the issue of patient safety for ‘cutting edge’ tech tools — asking: where is the “real-life clinical data”? So far, he says, the studies patients have to go on are limited assessments — often made by the chatbot makers themselves.

“It’s one quite telling thing about this sector is the fact that there’s very little real life data out there,” he said. “These chatbots have been around for a good few years now… And there’s been enough time to get real life clinical data and yet it hasn’t appeared and you just wonder if, is that because in the real-life setting they are actually not quite as useful as we think they are?”

#ai-chatbots, #artificial-intelligence, #babylon-health, #europe, #health, #heath-tech-regulation, #matt-hancock, #mhra, #tc

Microsoft launches ‘Group Transcribe,’ a transcription and translation app for in-person meetings

A new project from Microsoft’s in-house incubator, Microsoft Garage, introduces a different take on meeting transcriptions. While today there are a number of real-time transcription apps to use on your phone — like Otter.ai or Google’s Recorder app for Pixel devices — Microsoft’s new Group Transcribe app reimagines meeting transcription as a more collaborative process, where everyone simultaneously records the meeting on their own device for higher accuracy. It also offers real-time translation for languages spoken in over 80 distinct locales.

To use the app, one person first initiates the meeting on their own device. They can then invite the other meeting attendees to join the session via Bluetooth, a scannable QR code or by sharing a link. After the other participants join the session and the meeting begins, each person will see the transcript appear in real time on their own device.

Image Credits: Microsoft

The app, which is powered by AI speech and language technology, is able to transcribe with higher accuracy and attribute words to the right speaker based on the volume of each speaker’s voice as captured by the microphone of each phone being used in the meeting.

By comparing the volume of a person’s voice as heard across devices, the cloud service attempts to determine which device is closest to the speaker, and pairs that with the language preferences of that speaker. This means speakers are also accurately labeled in the app — something that remains a challenge for other transcription apps, where only one person is recording.
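
That loudest-device heuristic is easy to sketch. The toy numpy version below is purely illustrative of the idea, not Microsoft's implementation; the frame size and RMS energy measure are assumptions.

```python
import numpy as np

def attribute_frames(device_audio, frame_len=1600):
    """Label each audio frame with the device that recorded it loudest.

    device_audio maps a device/user ID to its mono PCM samples; at 16 kHz,
    frame_len=1600 means 100 ms frames. Returns one ID per frame.
    """
    n = min(len(samples) for samples in device_audio.values())
    ids = list(device_audio)
    labels = []
    for start in range(0, n - frame_len + 1, frame_len):
        # Root-mean-square energy of this frame on every device.
        rms = [np.sqrt(np.mean(device_audio[d][start:start + frame_len] ** 2))
               for d in ids]
        labels.append(ids[int(np.argmax(rms))])
    return labels

# Alice's phone hears the shared signal at 9x the level Bob's does.
t = np.random.randn(32000)
print(attribute_frames({"alice": 0.9 * t, "bob": 0.1 * t})[:3])  # ['alice', ...]
```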

In addition, if meeting participants want to speak in their own languages, the app can translate what each person says into the language the other attendees use, on their own devices.

Image Credits: Microsoft

Microsoft says the app is designed with accessibility in mind: it makes it easier for people who are deaf or hard of hearing, as well as non-native speakers, to participate more fully in meetings by following along via the live transcriptions and translations.

The project itself was built by Microsoft employees who collectively speak over a dozen different languages and dialects.

“This can be a fantastic tool for communication. What I would love to see is for this to break down barriers for people speaking across multiple languages,” said Franklin Munoz, Principal Development Lead, when introducing the project.

Like most cloud-based transcription services, the app should not be used for highly confidential meetings. However, Microsoft has built granular data and privacy controls that allow users to decide if or when they want to share their conversation data.

Image Credits: Microsoft

To work, the audio and text input data collected is sent to Microsoft’s online speech recognition and translation technologies — though with a randomly generated identifier, not your real name.

While Microsoft doesn’t itself save the meeting transcripts and recordings after the fact — they’re saved on your device — the app does encourage participants to “contribute” their meeting recordings to Microsoft so it can improve the service.

This allows Microsoft to retain the audio and speech recognition-generated text transcriptions when all meeting participants agree to opt in for that session. By reviewing the data, Microsoft aims to improve its speech recognition and speaker attribution capabilities over time, it says. The user data will then be accessed under NDA by both Microsoft employees and contractors from other companies who work for Microsoft, but won’t include any of the speakers’ account credentials.

Reviewers will also only have access to randomized snippets of audio, not full recordings. And Microsoft says it “de-identifies” meeting recordings by removing long strings of numbers that could represent things like credit card numbers or phone numbers, for example. Users can delete their previously shared recordings at any time, but otherwise they’re retained for up to 2 years on encrypted servers, the company says.
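
A de-identification pass of that general kind can be approximated with a regular expression. The sketch below is only an illustration of the concept, not Microsoft's pipeline, and the eight-digit threshold is an assumption.

```python
import re

def deidentify(text, min_digits=8):
    """Mask digit runs (allowing spaces/dashes) long enough to be card or
    phone numbers, while leaving short numbers like dates alone."""
    pattern = re.compile(r"(?:\d[ -]?){%d,}\d" % (min_digits - 1))
    return pattern.sub("[number removed]", text)

print(deidentify("Call me on 415-555-0123 about invoice 42"))
# -> "Call me on [number removed] about invoice 42"
```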

Because there’s no way for a business, at an admin level, to configure or block the “contribution” setting for all users, people should carefully weigh the advantages and risks of such a service. It’s also a Microsoft Garage project, meaning it’s meant to be experimental and could be shuttered at any time.

Currently, the Group Transcribe app is available on iOS only.

#ai, #apps, #artificial-intelligence, #ios-apps, #meetings, #microsoft, #microsoft-garage, #mobile, #speech-recognition, #transcription, #translation

Create a handbook and integrate AI to onboard remote employees

The pandemic has forced organizations across the globe to shutter the office environment and take up a remote-first strategy. Through necessity, professionals have adapted to remote working. But the systems they use are still playing catch-up.

One area less readily accommodating to the remote environment is the onboarding process. Given that it is the first sustained contact a new starter has with a company, a remote-first strategy depends on its success. When onboarding new employees, the luxuries of first-day meet and greets, in-person hardware setup and a team lunch are no longer available. From interview to offer letter and beyond, any new hire’s early journey is critical to their life at the company, their job satisfaction and ultimately their productivity. The remote induction must be a smooth process, and so needs a thorough rethink.

A cultural shift in the company may be necessary. Organizations need to embrace knowledge-sharing and collaboration, by turning to a “handbook first” approach. A few simple steps can lead them there. Companies also need to analyze their workflow. Are the right systems in place to ensure the seamless flow of both tacit and explicit knowledge?

Perhaps most importantly, artificial intelligence can help transform a clunky old onboarding process into a sophisticated, smooth journey. Naturally the best AI models to use will depend on the business, and department in question. However, with a few pointers business leaders can carve out a path to AI integration.

Let’s dive into the specifics that can transform the remote onboarding process, for the benefit of both the company and the new starter in question.

How to handbook

This is arguably the most important piece of the puzzle when it comes to ensuring newcomers are able to access the right information at the right time; it’s also the most difficult to get right. It is for workers at all levels of an organization to think about how knowledge is shared between teams, and the processes which surround that interchange of ideas.

What is most important is that everyone in an organization prioritizes documentation; exactly how they do it is secondary. You can spin up plenty of free and paid software tools to start creating a handbook. Anything cloud-based is suitable, with more sophisticated paid options recommended to keep things easily searchable, with documentation sorted into well-defined hierarchies rather than losing those nuggets of information in a sea of folders.
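
The searchability piece is also the easiest to prototype. As a minimal sketch (the tooling choice here is our own assumption, not the column's recommendation), TF-IDF similarity over handbook pages already gives a serviceable "ask the handbook" lookup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical handbook pages (file names and contents are made up).
pages = {
    "expenses.md": "How to file expenses and get reimbursed within 14 days...",
    "laptop-setup.md": "Ordering hardware, VPN access and dev environment setup...",
    "leave-policy.md": "Holiday allowance, sick leave and parental leave policy...",
}

vectorizer = TfidfVectorizer().fit(pages.values())
page_matrix = vectorizer.transform(pages.values())

def ask_handbook(question):
    """Return the handbook page most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), page_matrix)[0]
    return list(pages)[scores.argmax()]

print(ask_handbook("How do I get reimbursed for my expenses?"))  # -> expenses.md
```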

However, this systemic challenge is best addressed from the top down. The process should include some checks and balances, with permissioning crucial for parts of the handbook that should remain static, like policies and SLPs. Other parts of the documentation should be kept flexible, like processes and team-level knowledge. The majority of the handbook should be democratized as far as possible.

GitLab, an all-remote company, first coined the term “handbook-first.” The DevOps software provider acts as a great example of a company that lives and breathes through documenting and codifying internal knowledge. Everyone within the organization buys into the mantra of documenting what they know, with subject matter experts assigned to manage knowledge base content. Keeping company documentation up to date is a collaborative task, considered paramount to the company’s livelihood. Software gives a helping hand, nudging contributors to keep information up to date.

Darren Murph, Head of Remote at GitLab, says that their documentation strategy, twinned with a cooperative approach, helps to build trust with new starters. “When everything a new hire needs to know is written down, there’s no ambiguity or wondering if something is missing. We couple documentation with an Onboarding Buddy – a partner who is responsible for directing key stakeholder conversations and ensuring that acclimation goes well.”

#artificial-intelligence, #column, #ec-column, #ec-future-of-work, #labor, #remote-work, #startups

SkyMul’s drones secure rebar on the fly to speed up construction

There are many jobs in the construction industry that fall under the “dull, dirty, and dangerous” category said to be ripe for automation — but only a few can actually be taken on with today’s technology. One such job is the crucial but repetitive task of rebar tying, which a startup called SkyMul is aiming to completely automate using fleets of drones.

Unless you’ve put together reinforced concrete at some point in your life, you may not know what rebar tying is. The steel rebar that provides strength to concrete floors, walls, and other structures is held in place during the pouring process by tying it to the other rebar where the rods cross. For a good-size building or bridge this can easily be thousands of ties — and the process is generally done manually.

Rodbusters (as rebar tying specialists are called, or so I’m told) are masters of the art of looping a short length of plastic or wire around an intersection between two pieces of rebar, then twisting and tying it tightly so that the rods are secured in multiple directions. It must be done precisely and efficiently, and so it is — but it’s backbreaking, repetitive work. Though any professional must feel pride in what they do, I doubt anyone cherishes the chronic pain they get from doing that task thousands of times a day. As you might expect, rodbusters have high injury rates and develop chronic issues.

Automation of rebar tying is tricky because it happens in so many different circumstances. A prominent semi-robotic solution is the TyBot, which is a sort of rail-mounted gantry that suspends itself over the surface — but while this makes sense for a bridge, it makes far less sense for the 20th floor of an office building.

Animated image of a drone floating over rebar and tying it together at intersections.

Image Credits: SkyMul

Enter SkyMul, a startup still in the very early stages but with a compelling pitch: rebar tying done by a fleet of drones. When you consider that the tying process doesn’t involve too much force, and that computer vision has gotten more than good enough to locate the spots that need work… it starts sounding kind of obvious.

CEO and co-founder Eohan George said that they evaluated a number of different robotic solutions but that drones are the only ones that make sense. The only legged robots with the dexterity to pick their way through the rebar are too expensive, and treads and wheels are too likely to move the unsecured rebar.

Diagram showing how SkyMul's drones map an area of rebar then divide it up for tying.

Image Credits: SkyMul

Here’s how the company’s SkyTy system works. First, a mapper drone flies over the site to mark the boundaries and then, in a closer automated flyover, builds a map of the rebar itself and of where the ties will need to go. This map is then double-checked by the rodbuster technician running the show, which George said only takes about a minute per thousand square feet of rebar (though that adds up quickly).

Then the tying drones are released, as many as needed or wanted. Each one moves from spot to spot, hovering and descending until its tying tool (much like those used by human rodbusters) spans the rebar intersection; the tie is wrapped, twisted, and the drone is off to the next spot. They need their batteries swapped every 25 minutes, which means they generally have time to put down 70-80 ties; right now each drone does one tie every 20 seconds, which is in line with humans, who can do it faster but generally go at about that speed or slower, according to numbers George cited.
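
The division of labor is simple to sketch in code. The planner below is a toy illustration, not SkyMul's system (whose internals aren't public); only the roughly 75-ties-per-battery figure comes from the numbers above.

```python
import numpy as np

def plan_ties(points, n_drones, ties_per_battery=75):
    """Split rebar-intersection coordinates into per-drone work queues.

    points is an (N, 2) array of tie locations from the mapping pass. Each
    drone gets a strip of the deck (split on x), visited in simple
    nearest-neighbor order and chunked into battery-sized legs.
    """
    strips = np.array_split(points[points[:, 0].argsort()], n_drones)
    plans = []
    for strip in strips:
        todo = [tuple(p) for p in strip]
        route, pos = [], (0.0, 0.0)
        while todo:  # greedy nearest-neighbor tour over the strip
            pos = min(todo, key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
            todo.remove(pos)
            route.append(pos)
        plans.append([route[i:i + ties_per_battery]
                      for i in range(0, len(route), ties_per_battery)])
    return plans

# A 20 x 10 grid of intersections split between two drones.
grid = np.array([(x, y) for x in range(20) for y in range(10)], dtype=float)
print(len(plan_ties(grid, n_drones=2)[0]))  # 100 ties -> 2 battery legs
```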

It’s difficult to estimate the cost savings and value of the work SkyTy does, because the value of the labor varies widely. In some places rodbusters are earning north of $80/hour, meaning the draw of automation is in cost savings. But in other markets the pay is less than a third of that, which, compounded with the injury risk, makes rodbusters scarce — so the value is in availability and reliability. Drone-based tying seems to offer value one way or the other, but that means the business model is somewhat in flux as SkyMul figures out what makes the most sense. Generally contractors at one level or another would lease and eventually own their own drones, though other methods are being looked into.

Animated image of a computer-generated grid overlaid on images of rebar.

Image Credits: SkyMul

The system offers value-add services as well, for instance the precise map of the rebar generated at the beginning, which can be archived and used later for maintenance, quality assurance, comparison with plans and other purposes. Once a contractor is convinced these maps are as good as or better than the manually produced ones currently used, this could save hours, turning a three-day job into a two-day job or otherwise simplifying logistics.

The plan at the company is to first offer SkyTy as an option for bridge construction, which is a simpler environment than a multi-story building for the drones. The market there is on the order of $30-40 million per year for rebar tying services, providing an easier path to the more complex deployments.

SkyMul is looking for funding, having spun out of Georgia Tech, gone through Comcast-NBC accelerator The Farm and then been granted a National Science Foundation SBIR Phase I award (with hopes for a Phase II). The team has demonstrated the system but has yet to enter into any pilot programs — there are partnerships in the works, but the construction business isn’t a nimble one and a drone-based solution isn’t trivial to swap in for human rodbusters on short notice. But once a few projects are under its belt, the company seems likely to find serious traction among forward-thinking contractors.

#artificial-intelligence, #automation, #construction, #drones, #gadgets, #hardware, #startups, #tc, #uavs

Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detection, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft also today launches a hardware development kit with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including Trusted Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, the corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure’s cognitive services and machine learning models, and Percept devices will automatically connect to Azure’s IoT hub.

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.

#articles, #artificial-intelligence, #azure, #cloud, #cloud-computing, #cloud-infrastructure, #enterprise, #machine-learning, #microsoft, #microsoft-ignite-2021, #microsoft-azure, #perception, #philosophy, #platform, #product-manager, #software-platform, #tc

Microsoft updates Teams with new presentation features

It’s (virtual) Microsoft Ignite this week, Microsoft’s annual IT-centric conference and its largest, with more than 26,000 people attending the last in-person event in 2019. Given its focus, it’s no surprise that Microsoft Teams is taking center stage in the announcements this year. Teams, after all, is now core to Microsoft’s productivity suite. Today’s announcements span the gamut from new meeting features to conference room hardware.

At the core of Teams — or its competitors like Slack, for that matter — is the ability to collaborate across teams, but increasingly, that also includes collaboration with others outside of your organization. Today, Microsoft is announcing the preview of Teams Connect, which allows users to share channels with anyone, internal or external. These channels will appear alongside other teams and channels, and allow for all of the standard Teams use cases. Admins will keep full control over these channels to ensure that external users only get access to the data they need, for example. This feature will roll out widely later this year.

What’s maybe more important to individual users, though, is that Teams will get a new PowerPoint Live feature that will allow presenters to present as usual — but with the added benefit of seeing all their notes, slides and meeting chats in a single view. And for those suffering through yet another PowerPoint presentation while trying to look engaged, PowerPoint Live lets them scroll through the presentation at will — or use a screen reader to make the content more accessible. This new feature is now available in Teams.

Also new on the presentation side is a set of presentation modes that use some visual wizardry to make presentations more engaging. ‘Standout mode’ shows the speaker’s video feed in front of the content, for example, while ‘Reporter mode’ shows the content above the speaker’s shoulder, just like in your local news show. And side-by-side view — well, you can guess it. This feature will launch in March, but with only the Standout mode at first. Reporter mode and side-by-side will launch “soon.”

Another new view meant to visually spice up your meetings is the ‘Dynamic view.’ With this, Teams will try to arrange all of the elements of a meeting “for an optimal viewing experience,” personalized for each viewer. “As people join, turn on video, start to speak, or begin to present in a meeting, Teams automatically adjusts and personalizes your layout,” Microsoft says. What’s maybe more useful, though, is that Teams will put a gallery of participants at the top of the screen to help you maintain a natural eye gaze (without any AI trickery).

As for large-scale meetings, Teams users can now hold interactive webinars with up to 1,000 people inside and outside of their organization. And for all of those occasions where your CEO just has to give a presentation to everybody, Teams supports broadcast-only meetings with up to 20,000 viewers. That’ll go down to 10,000 attendees after June 30, 2021, based on the idea that the pandemic will be mostly over by then and the heightened demand for virtual events will subside around that time. Good luck to us all.

For the time when we go back to the office, Microsoft is building intelligent speakers for conference rooms that are able to differentiate between the voices of up to 10 speakers to provide more accurate transcripts. It’s also teaming up with Dell and others to launch new conference room monitors and speaker bars.

#artificial-intelligence, #ceo, #computing, #dell, #enterprise, #microsoft, #microsoft-ignite-2021, #microsoft-powerpoint, #microsoft-teams, #microsoft-office, #operating-systems, #presentation-software, #reporter, #software, #speaker, #tc, #web-conferencing

How China’s synthetic media startup Surreal nabs funding in 3 months

What if we no longer needed cameras to make videos and could instead generate them with a few lines of code?

Advances in machine learning are turning the idea into a reality. We’ve seen how deepfakes swap faces in family photos and turn one’s selfies into famous video clips. Now entrepreneurs with AI research backgrounds are devising tools to let people generate highly realistic photos, voices and videos using algorithms.

One of the startups building this technology is China-based Surreal. The company is merely three months old but has already secured a seed round of $2-3 million from two prominent investors, Sequoia China and ZhenFund. Surreal received nearly ten investment offers in this round, founder and CEO Xu Zhuo told TechCrunch, as investors jostled to bet on a future shaped by AI-generated content.

Prior to founding Surreal, Xu spent six years at Snap, building its ad recommendation system, machine learning platform and AI camera technology. The experience convinced Xu that synthetic media would become mainstream because the tool could significantly “lower the cost of content production,” Xu said in an interview from Surreal’s dozen-person office in Shenzhen.

Surreal has no intention, however, of replacing human creators or artists. In fact, Xu doesn’t think machines can surpass human creativity in the next few decades. This belief is embodied in the company’s Chinese name, Shi Yun, or The Poetry Cloud. It is taken from the title of a novel by science fiction writer Liu Cixin, which tells the story of how technology fails to outdo the ancient Chinese poet Li Bai.

“We have an internal formula: visual storytelling equals creativity plus making,” Xu said, his eyes lighting up. “We focus on the making part.”

In a way, machine video generation is like a souped-up video tool, a step up from the video filters that we see today and that make Douyin (TikTok’s Chinese version) and Kuaishou popular. Short video apps significantly lower the barrier to making a professional-looking video, but they still require a camera.

“The heart of short videos is definitely not the short video form itself. It lies in having better camera technology, which lowers the cost of video creation,” said Xu, who founded Surreal with Wang Liang, a veteran of TikTok parent ByteDance.

Commercializing deepfakery

Some of the world’s biggest tech firms, such as Google, Facebook, Tencent and ByteDance, also have research teams working on GANs, or generative adversarial networks. Xu’s strategy is not to directly confront the heavyweights, which are drawn to big contracts. Rather, Surreal is going after small and medium-sized customers.

Surreal’s face swapping software for e-commerce sellers

Surreal’s software is currently only for enterprise customers, who can use it to either change faces in uploaded content or generate an entirely new image or video. Xu calls Surreal a “Google Translate for videos,” since the software can not only swap people’s faces but also translate the languages they speak and match their lips to the new voices.

Users are charged per video or picture. In the future, Surreal aims to not just animate faces but also people’s clothes and motions. While Surreal declined to disclose its financial performance, Xu said the company has accumulated around 10 million photo and video orders.

Much of the demand now is from Chinese e-commerce exporters who use Surreal to create Western models for their marketing material. Hiring real foreign models can be costly, and employing Asian models doesn’t prove as effective. By using Surreal “models”, some customers have been able to achieve 100% return on investment (ROI), Xu said. With the multi-million seed financing in its pocket, Surreal plans to find more use cases like online education so it can collect large volumes of data to improve its algorithm.

Uncharted territory

The technology powering Surreal, called generative adversarial networks, is relatively new. Introduced by machine learning researcher Ian Goodfellow in 2014, GANs consist of a “generator” that produces images and a “discriminator” that detects whether the image is fake or real. The pair enters a period of training with adversarial roles, hence the nomenclature, until the generator delivers a satisfactory result.
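
The adversarial setup is compact enough to show directly. Here is a minimal PyTorch sketch of one GAN training step on flattened images, a toy rendering of Goodfellow's formulation rather than anything resembling Surreal's production system; all sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)
    # Discriminator: real images should score 1, generated images 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: try to make the discriminator score its fakes as real.
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on a batch of random stand-in "images" scaled to [-1, 1].
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```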

In the wrong hands, GANs can be exploited for fraud, pornography and other illegal purposes. That’s in part why Surreal starts with enterprise use rather than making it available to individual users.

Companies like Surreal are also posing new legal challenges. Who owns the machine-generated images and videos? To avoid violating copyright, Surreal requires that the client has the right to the content they upload for moderation. To track and prevent misuse, Surreal adds an encrypted and invisible watermark to each piece of the content it generates, to which it claims ownership. There’s an odd chance that the “person” Surreal produces would match someone in real life, so the company runs an algorithm that crosschecks all the faces it creates with photos it finds online.
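
For the watermarking idea, a textbook least-significant-bit scheme illustrates how an invisible payload can ride inside an image. Surreal's own watermark is encrypted and presumably far more robust, so treat this numpy sketch as a concept demo only; the payload string is hypothetical.

```python
import numpy as np

def embed(img, payload):
    """Hide payload bits in the least significant bit of each pixel byte."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = img.reshape(-1).copy()
    assert bits.size <= flat.size, "image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract(img, n_bytes):
    """Read n_bytes back out of the pixel LSBs."""
    bits = (img.reshape(-1)[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(img, b"surreal:order-1234")  # hypothetical order tag
print(extract(marked, 18))  # -> b'surreal:order-1234'
```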

“I don’t think ethics is something that Surreal itself can address, but we are willing to explore the issue,” said Xu. “Fundamentally, I think [synthetic media] provides a disruptive infrastructure. It increases productivity, and on a macro level, it’s inexorable, because productivity is the key determinant of issues like this.”

#artificial-intelligence, #asia, #bytedance, #camera-technology, #computer-graphics, #funding, #idg-capital, #machine-learning, #sequoia-china, #shenzhen, #snap, #surreal, #synthetic-media, #tc, #tiktok

AWS reorganizes DeepRacer League to encourage more newbies

AWS launched the DeepRacer League in 2018 as a fun way to teach developers machine learning, and it’s been building on the idea ever since. Today, it announced the latest league season with two divisions: Open and Pro.

As Marcia Villalba wrote in a blog post announcing the new league, “AWS DeepRacer is an autonomous 1/18th scale race car designed to test [reinforcement learning] models by racing virtually in the AWS DeepRacer console or physically on a track at AWS and customer events. AWS DeepRacer is for developers of all skill levels, even if you don’t have any ML experience. When learning RL using AWS DeepRacer, you can take part in the AWS DeepRacer League where you get experience with machine learning in a fun and competitive environment.”
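
Competitors steer that learning by writing a Python reward function in the DeepRacer console. A minimal centerline-following example, in the style of AWS's documented samples, looks like this (the specific weighting is our own choice):

```python
def reward_function(params):
    """Reward the car for hugging the track centerline.

    `params` is the state dict the DeepRacer simulator passes in; the keys
    used here (all_wheels_on_track, track_width, distance_from_center) are
    part of its documented input space.
    """
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward once the car leaves the track

    # Scale reward down linearly as the car drifts from the centerline.
    half_width = 0.5 * params["track_width"]
    margin = 1.0 - params["distance_from_center"] / half_width
    return float(max(margin, 1e-3))
```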

While the company started these as in-person races with physical cars, the pandemic forced it to make the league a virtual event over the last year — but the new format seemed to be blocking out newcomers. Because the goal is to teach people about machine learning, getting new people involved is crucial to the company.

That’s why it created the Open League, which, as the name suggests, is open to anyone. You can test your skills there and, if you’re good enough (finishing in the top 10%), you can compete in the Pro division. Everyone competes for prizes as well, such as vehicle customizations.

The top 16 in the Pro League each month race for a chance to go to the finals at AWS re:Invent in 2021, an event that may or may not be virtual, depending on where we are in the pandemic recovery.

#amazon, #artificial-intelligence, #aws, #aws-deepracer, #cloud, #developer, #machine-learning

Kazuo Ishiguro Sees What the Future Is Doing to Us

With his new novel, the Nobel Prize-winner reaffirms himself as our most profound observer of human fragility in the technological era.

#artificial-intelligence, #books-and-literature, #content-type-personal-profile, #ishiguro-kazuo, #klara-and-the-sun-a-novel-book, #mcewan-ian, #never-let-me-go-book, #writing-and-writers

Brandwatch is acquired by Cision for $450M, creating a PR, marketing and social listening giant

Online consumer intelligence and social media listening platform Brandwatch has been acquired by Cision, best known for its media monitoring and media contact database services, for $450 million, in a combined cash and shares deal. TechCrunch understands Brandwatch’s key executive team will be staying on. The move combines two large players to offer a broad range of services from PR to marketing and online customer engagement. The deal is expected to close in the second quarter of 2021.

Cision has a media contact database of approximately 1 million journalists and media outlets and claims to have over 75,000 customers. Brandwatch applies AI and machine learning to the practice known as ‘social listening’.

Along the way, Brandwatch raised a total of around $65 million. It was Series A-funded by Nauta Capital, followed by Highland Europe and then Partech.

In a statement, Giles Palmer, founder and CEO of Brandwatch, said: “We have always built Brandwatch with ambition… Now is the time to take the next step – joining a company of significant scale to create a business and a suite of products that can have an important global impact.”

Abel Clark, CEO of Cision said: “The continued digital shift and widespread adoption of social media is rapidly and fundamentally changing how brands and organizations engage with their customers. This is driving the imperative that PR, marketing, social, and customer care teams fully incorporate the unique insights now available into consumer-led strategies. Together, Cision and Brandwatch will help our clients to more deeply understand, connect and engage with their customers at scale across every channel.”

Brandwatch has been on an almost case-study journey from fundraising to acquisition to a merger but, less characteristically for a well-funded tech company, it did much of it from its hometown of Brighton, on the southern coast of England.

The financing journey began for Palmer with angel funding in 2006. In 2010 Brandwatch raised $1.5 million from Durrants, a marketing and PR firm, and Nauta Capital. In 2014 it raised $22 million in a Series B round led by Highland Capital. That was followed by a $33 million Series C financing led by Partech Ventures in 2015.

With that war chest, it went on to acquire BuzzSumo, a content marketing and influencer identification platform, in 2017 for an undisclosed sum. And in 2019 Brandwatch merged with a similar business, Crimson Hexagon, creating a business with around $100 million in ARR. It also acquired the London-based SaaS research platform Qriously.

Brandwatch was recently named a leader in Forrester’s guide for buyers of social listening solutions.

#artificial-intelligence, #brandwatch, #business, #buzzsumo, #ceo, #cision, #communication, #content-marketing, #crimson-hexagon, #europe, #executive, #highland-capital, #highland-europe, #leader, #london, #machine-learning, #marketing, #media-monitoring, #nauta-capital, #partech-ventures, #saas, #social-media, #tc

MyHeritage now lets you animate old family photos using deepfakery

AI-enabled synthetic media is being used as a tool for manipulating real emotions and capturing user data by genealogy service MyHeritage, which has just launched a new feature — called ‘deep nostalgia’ — that lets users upload a photo of a person (or several people) to see individual faces animated by algorithm.

The Black Mirror-style pull of seeing long lost relatives — or famous people from another era — brought to a synthetic approximation of life, eyes swivelling, faces tilting as if they’re wondering why they’re stuck inside this useless digital photo frame, has led to an inexorable stream of social shares since it was unveiled yesterday at a family history conference… 

MyHeritage’s AI-powered viral marketing playbook with this deepfakery isn’t a complicated one: They’re going straight for tugging on your heartstrings to grab data which can be used to drive sign-ups for their other (paid) services. (Selling DNA tests is their main business.)

It’s free to animate a photo using the ‘deep nostalgia’ tech on MyHeritage’s site, but you don’t get to see the result until you hand over at least an email (along with the photos you want animating, ofc) — and agree to its T&Cs and privacy policy. Both of which have attracted a number of concerns over the years.

Last year, for example, the Norwegian Consumer Council reported MyHeritage to the national consumer protection and data authorities after a legal assessment of the T&Cs found the contract it asks customers to sign to be “incomprehensible”.

In 2018 MyHeritage also suffered a major data breach — and data from that breach was later found for sale on the dark web, among a wider cache of hacked account info pertaining to several other services.

The company — which, as we reported earlier this week, is being acquired by a US private equity firm for ~$600M — is doubtless relying on the deep pull of nostalgia to smooth over any individual misgivings about handing over data and agreeing to its terms.

The face animation technology itself is impressive enough — if you set aside the ethics of encouraging people to drag their long lost relatives into the uncanny valley to help MyHeritage cross-sell DNA testing (with all the massive privacy considerations around putting that kind of data in the hands of a commercial entity).

Looking at the inquisitive face of my great-grandmother, I do have to wonder what she would have made of all this.

The facial animation feature is powered by Israeli company D-ID, a TechCrunch Disrupt Battlefield alum, which started out building tech to digitally de-identify faces, with an eye on protecting images and video from facial recognition algorithms.

It released a demo video of the photo-animating technology last year. The tech uses a driver video to animate the photo — mapping the facial features of the photo onto that base driver to create a ‘live portrait’, as D-ID calls it.

“The Live Portrait solution brings still photos to life. The photo is mapped and then animated by a driver video, causing the subject to move its head and facial features, mimicking the motions of the driver video,” D-ID said in a press release. “This technology can be implemented by historical organizations, museums, and educational programs to animate well-known figures.”
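Conceptually, the approach is easy to sketch. The toy example below is a rough sketch only, emphatically not D-ID’s actual pipeline (which relies on learned, dense motion models): it tracks a few matching facial landmarks between a reference pose and each driver frame, then applies the same rigid motion to the still photo with OpenCV. The animate_still helper and the synthetic keypoints are invented for illustration.

```python
# Illustrative sketch of driver-video reenactment (NOT D-ID's pipeline):
# estimate how tracked landmarks move between a reference driver frame and
# each later frame, then warp the still photo by the same motion.
import cv2
import numpy as np

def animate_still(photo, ref_keypoints, driver_keypoints_per_frame):
    """photo: HxWx3 image; keypoints: (N, 2) float32 arrays of matching landmarks."""
    h, w = photo.shape[:2]
    frames = []
    for kps in driver_keypoints_per_frame:
        # Similarity transform mapping the reference pose onto the current pose.
        M, _ = cv2.estimateAffinePartial2D(ref_keypoints, kps)
        frames.append(cv2.warpAffine(photo, M, (w, h)))
    return frames

# Toy data: a stand-in "face" and three driver frames that tilt it slightly.
photo = np.zeros((240, 240, 3), np.uint8)
cv2.circle(photo, (120, 120), 60, (200, 180, 160), -1)
ref = np.array([[90, 100], [150, 100], [120, 160]], np.float32)  # eyes + mouth
drivers = [ref + 0, ref + [3, 0], ref + [6, 2]]
clips = animate_still(photo, ref, [d.astype(np.float32) for d in drivers])
```

Real systems animate expressions and gaze, not just head pose, which requires dense, non-rigid warping; the rigid similarity transform here is only a cartoon of the idea.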

It’s offering live portraits as part of a wider ‘AI Face’ platform, which will offer third parties access to other deep learning, computer vision and image processing technologies. D-ID bills the platform as a ‘one-stop shop’ for synthesized video creation.

Other tools include a ‘face anonymization’ feature, which replaces one person’s face in a video with another’s (useful for documentary filmmakers protecting a whistleblower’s identity, for example); and a ‘talking heads’ feature for lip syncing, or for replacing the need to pay actors to appear in content such as marketing videos, since it can turn an audio track into a video of a person appearing to speak those words.

The age of synthesized media is going to be a weird one, that’s for sure.

 

#artificial-intelligence, #d-id, #deep-nostalgia, #deepfakes, #myheritage, #synthesized-media

0

Docyt raises $1.5M for its ML-based accounting automation platform

Accounting isn’t a topic that most people can get excited about — probably not even most accountants. But if you’re running any kind of business, there’s just no way around it. Santa Clara-based Docyt wants to make the life of small and medium business owners (and their accounting firms) a bit easier by using machine learning to handle a lot of the routine tasks around collecting financial data, digitizing receipts, categorization and — maybe most importantly — reconciliation.

The company today announced that it has raised a $1.5 million seed-extension round led by First Rays Venture Partners with participation from Morado Ventures and a group of angel investors. Docyt (pronounced “docket”) had previously raised a $2.2 million seed round from Morado Ventures, AME Cloud Ventures, Westwave Capital, Xplorer Capital, Tuesday and angel investors. The company plans to use the new investment to accelerate its customer growth.

At first glance, it may seem like Docyt competes with the likes of QuickBooks, which is pretty much the de facto standard for small business accounting. But Docyt co-founder and CTO Sugam Pandey tells me that he thinks of the service as a partner to the likes of QuickBooks.

Image Credits: Docyt

“Docyt is a product for the small business owners who find accounting very complex, who are very experienced on how to run and grow their business, but not really an expert in accounting. At the same time, businesses who are graduating out of QuickBooks — small business owners sometimes become midsized enterprises as well — [ … ] they start growing out of their accounting systems like QuickBooks and looking for more sophisticated systems like NetSuite and Sage. And Docyt fits in in that space as well, extending the life of QuickBooks for such business owners so they don’t have to change their systems.”

In its earliest days, Docyt was a secure document sharing platform with a focus on mobile. Some of this is still in the company’s DNA, with its focus on being able to pull in financial documents and then reconciling that with a business’ bank transactions. While other systems may put the emphasis on transaction data, Docyt’s emphasis is on documents. That means you can forward an emailed receipt to the service, for example, and it can automatically attach this to a financial transaction from your credit card or bank statement (the service uses Plaid to pull in this data).

Image Credits: Docyt

For new transactions, you sometimes have to train the system by entering some of this information by hand, but over time, Docyt should be able to do most of this automatically and then sync your data with QuickBooks.
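For a sense of what the reconciliation step involves, here is a minimal rule-based sketch, assuming matching on amount and date proximity; it is not Docyt’s actual algorithm, and every name and record in it is made up. Anything it can’t pair is exactly the sort of case that gets routed to a human, whose correction can then train the model.

```python
# A minimal sketch (not Docyt's algorithm) of document-first reconciliation:
# pair each receipt with the transaction whose amount matches and whose date
# is closest, within a small window; leave the rest for human review.
from datetime import date

receipts = [
    {"id": "r1", "merchant": "Acme Office Supply", "amount": 42.10, "date": date(2021, 2, 1)},
    {"id": "r2", "merchant": "Cloud Hosting Co", "amount": 99.00, "date": date(2021, 2, 3)},
]
transactions = [
    {"id": "t1", "desc": "ACME OFFICE SUP", "amount": 42.10, "date": date(2021, 2, 2)},
    {"id": "t2", "desc": "CLOUDHOSTING", "amount": 99.00, "date": date(2021, 2, 3)},
]

def reconcile(receipts, transactions, max_days=5):
    matches, unmatched = [], []
    available = list(transactions)
    for r in receipts:
        candidates = [t for t in available
                      if abs(t["amount"] - r["amount"]) < 0.01
                      and abs((t["date"] - r["date"]).days) <= max_days]
        if candidates:
            best = min(candidates, key=lambda t: abs((t["date"] - r["date"]).days))
            matches.append((r["id"], best["id"]))
            available.remove(best)
        else:
            unmatched.append(r["id"])  # falls back to human review / model training
    return matches, unmatched

print(reconcile(receipts, transactions))  # ([('r1', 't1'), ('r2', 't2')], [])
```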

“Docyt is the first company to apply AI across the entire accounting stack,” said Amit Sridharan, founding general partner at First Rays Venture Partners. “Docyt software’s AI-powered data extraction, auto categorization and auto reconciliation is unparalleled. It’s an enterprise-level, powerful solution that’s affordable and accessible to small and medium businesses.”

#accounting, #accounting-software, #ame-cloud-ventures, #artificial-intelligence, #finance, #machine-learning, #netsuite, #quickbooks, #recent-funding, #startups, #xplorer-capital

0

Berlin’s MorphAIs hopes its AI algorithms will put its early-stage VC fund ahead of the pack

MorphAIs is a new VC out of Berlin, aiming to leverage AI algorithms to boost its investment decisions in early-stage startups. But there’s a catch: it hasn’t raised a fund yet.

The firm was founded by Eva-Valérie Gfrerer, who was previously head of growth marketing at fintech startup OptioPay; her background is in behavioural science and advanced information systems.

Gfrerer says she started MorphAIs as a tech company, using AI to assess venture investments and selling that as a service. But after a while, she realized the platform could be applied to an in-house fund, hence the drive to now raise one.

MorphAIs has already received financing from some serial entrepreneurs, including: Max Laemmle, CEO and founder of Fraugster, previously Better Payment and SumUp; Marc-Alexander Christ, co-founder of SumUp, previously Groupon (CityDeal) and JP Morgan Chase; Charles Fraenkl, CEO of SmartFrog, previously CEO at Gigaset and AOL; and Andreas Winiarski, chairman and founder of awesome capital Group.

She says: “It’s been decades since there has been any meaningful innovation in the processes by which venture capital is allocated. We have built technology to re-invent those processes and push the industry towards more accurate allocation of capital and a less-biased and more inclusive start-up ecosystem.”

She points out that over 80% of early-stage VC funds don’t deliver the minimum expected return rate to their investors. This is true, but admittedly, the VC industry is almost built to throw a lot of money away, in the hope that it will pick the winner that makes up for all the losses.

She now plans to aim for a pre-seed/seed fund, backed by a team consisting of machine learning scientists, mathematicians, and behavioral scientists, and claims that MorphAIs is modeling consistent 16x return rates, after running real-time predictions based on market data.

Her co-founder and CTO is Jan Saputra Müller, who previously co-founded and served as CTO of several machine learning companies, including askby.ai.

There’s one problem: Gfrerer’s approach is not unique. For instance, London-based Inreach Ventures has made a big play of using data to hunt down startups. And every other VC in Europe does something similar, more or less.

Will Gfrerer manage to pull off something spectacular? We shall have to wait and find out.

#artificial-intelligence, #berlin, #ceo, #chairman, #chase, #citydeal, #co-founder, #cto, #economy, #europe, #finance, #head, #inreach-ventures, #jp-morgan-chase, #london, #machine-learning, #money, #sumup, #tc, #venture-capital

0

Canva acquires background removal specialists Kaleido

Kaleido, maker of a drag-and-drop background removal service for images and video, has been acquired by up-and-coming digital design platform Canva. While the price and terms are not disclosed, it is speculated that the young company may have fetched nearly nine figures.

It’s the right product at the right time, seemingly. In 2019, the Vienna-based Kaleido made remove.bg, a quick, simple, free, and good-enough background removal tool for images. It became a hit among the many people who need to quickly do that kind of work but don’t want to fiddle around in Photoshop.

Then late last year they took the wraps off Unscreen, which did the same thing for video — a similar task conceptually, but far more demanding to actually engineer and deploy. The simplicity and effectiveness of the tool practically begged to be acquired and integrated into a larger framework by the likes of Adobe, but Canva seems to have beaten the others to the punch.

Animated image showing a stack of books on a table in a room, but the table and room get deleted.

Image Credits: Unscreen

The acquisition was announced at the same time as another by Canva: product mockup generator Smartmockups, suggesting a major product expansion by the growing design company.

“We completely bootstrapped Kaleido with no investors involved from day one,” said co-founder and CEO of Kaleido, Benjamin Groessing, in a press release. “It has just been two founders and an incredible team. We’ve been profitable from the start — so this acquisition wasn’t essential for our existence. It just made sense on so many levels.”

The company declined to provide any further details on the acquisition beyond that the brand and name are expected to survive — at least Unscreen, which makes perfect sense as a product name even under another company.

German-language outlets Die Presse and Der Brutkasten cited sources saying the purchase ranks just behind (“reiht sich dahinter ein”) the largest Austrian exits (the biggest being Runtastic at €220M), though still in the double-digit millions — which suggests a price approaching $100M.

The team at kaleido celebrating their acquisition - each member has been digitally added.

Image Credits: Kaleido

Whatever the exact amount, it seems to have made the team very happy. And don’t worry – they put that image together using their own product for each person.

#artificial-intelligence, #exit, #fundings-exits, #kaleido, #ma, #startups, #unscreen

0

Aquarium scores $2.6M seed to refine machine learning model data

Aquarium, a startup from two former Cruise employees, wants to help companies refine their machine learning model data more easily and move the models into production faster. Today the company announced a $2.6 million seed led by Sequoia with participation from Y Combinator and a bunch of angel investors including Cruise co-founders Kyle Vogt and Dan Kan.

When the two co-founders, CEO Peter Gao and head of engineering Quinn Johnson, were at Cruise, they learned that finding areas of weakness in the model data was often the problem that prevented models from getting into production. Aquarium aims to solve this issue.

“Aquarium is a machine learning data management system that helps people improve model performance by improving the data that it’s trained on, which is usually the most important part of making the model work in production,” Gao told me.

He says that they are seeing a lot of different models being built across a variety of industries, but teams are getting stuck because iterating on the data set and continually finding relevant data is a hard problem to solve. That’s why Aquarium’s founders decided to focus on this.

“It turns out that most of the improvement to your model, and most of the work that it takes to get it into production, is about deciding: ‘Here’s what I need to go and collect next. Here’s what I need to go label. Here’s what I need to go and retrain my model on and analyze for errors, and repeat that iteration cycle,’” Gao explained.

The idea is to get a model into production that outperforms humans. One customer, Sterblue, offers a good example. The company provides drone inspection services for wind turbines. Its customers used to send out humans to inspect the turbines for damage, but with a set of drone data, Sterblue was able to train a machine learning model to find issues instead. Using Aquarium, it refined the model and improved accuracy by 13%, while cutting the cost of human reviews in half, Gao said.
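The iteration loop Gao describes can be caricatured in a few lines. A minimal sketch, assuming a classification model and using per-example loss as the ‘weakness’ signal (Aquarium’s actual product is, of course, far richer than this):

```python
# A hedged sketch of the data-iteration loop (not Aquarium's product):
# surface the validation examples the model handles worst, so those slices
# can be relabeled or augmented before the next retraining round.
import numpy as np

def worst_examples(probs, labels, k=5):
    """probs: (N, C) predicted class probabilities; labels: (N,) true classes."""
    eps = 1e-12
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)  # per-example loss
    ranked = np.argsort(nll)[::-1]                              # highest loss first
    return ranked[:k], nll[ranked[:k]]

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)   # fake 3-class predictions
labels = rng.integers(0, 3, size=100)
idx, losses = worst_examples(probs, labels)
print(idx, losses)  # candidates for relabeling or targeted data collection
```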

The 7 person Aquarium startup team.

The Aquarium team. Image: Aquarium

Aquarium currently has seven employees, including the founders; three of them are women. Gao says the team is diverse by design. He understands the issues of bias inherent in machine learning model creation, and creating a diverse team for this kind of tooling is one way to help mitigate that bias.

The company launched last February and spent part of the year participating in the Y Combinator Summer 2020 cohort. It worked on refining the product throughout 2020 and recently moved it from beta to general availability.

#aquarium, #artificial-intelligence, #enterprise, #funding, #machine-learning, #recent-funding, #sequoia-capital, #startups, #y-combinator

0

3D model provider CGTrader raises $9.5M Series B led by Evli Growth Partners

3D model provider CGTrader has raised $9.5M in a Series B funding round led by Finnish VC fund Evli Growth Partners, alongside previous investors Karma Ventures and LVV Group. Ex-Rovio CEO Mikael Hed also invested and joins as board chairman. We first covered the Vilnius-based company when it raised €200,000 from Practica Capital.

Founded in 2011 by 3D designer Marius Kalytis (now COO), CGTrader has become a significant 3D content provider; it even claims to be the world’s largest. Its marketplace holds 1.1M 3D models from 3.5M 3D designers, serving 370,000 businesses including Nike, Microsoft, Made.com, Crate & Barrel, and Staples.

Unlike photos, 3D models can be used to create both static images and AR experiences, letting users see how a product might fit in their home. The company is also looking to invest in automating 3D modeling, QA, and asset management processes with AI.

Dalia Lasaite, CEO and co-founder of CGTrader said in a statement: “3D models are not only widely used in professional 3D industries, but have become a more convenient and cost-effective way of generating amazing product visuals for e-commerce as well. With our ARsenal enterprise platform, it is up to ten times cheaper to produce photorealistic 3D visuals that are indistinguishable from photographs.”

CGTrader now plans to consolidate its position and further develop its platform.

The company competes with TurboSquid (which was recently acquired for $75 million by Shutterstock) and Threekit.

#3d, #3d-modeling, #arsenal, #artificial-intelligence, #ceo, #cgtrader, #computer-graphics, #coo, #crate-barrel, #designer, #e-commerce, #europe, #graphics, #image-processing, #karma-ventures, #made-com, #microsoft, #nike, #practica-capital, #rovio, #shutterstock, #staples, #tc, #visual-effects

0

A.I. Is Everywhere and Evolving

Many of us already live with artificial intelligence now, but researchers say interactions with the technology will become increasingly personalized.

#artificial-intelligence, #education-k-12, #elderly, #privacy, #research

0

LA’s Splice gets $55 million for its software bringing beats from bedrooms to bandstands

Splice, the LA-based, AI-infused, beat-making software service for music producers created by the founder of GroupMe, has managed to sample another $55 million in financing from investors for its wildly popular service.

Splice, the GitHub for music producers, gained a following among artists ranging from Hook N Sling, Mr Hudson, SLY, and Steve Solomon to TechCrunch’s own Megan Rose Dickey for its ability to help electronic dance music creators save, share, collaborate and remix music.

The company’s popularity has taken it from bedroom DJs to the Goldman Sachs boardroom: the financial services giant joined MUSIC, a joint venture between music executive Matt Pincus and boutique financial services firm LionTree, in leading the company’s latest $55 million round. The company’s previous investors include USV, True Ventures, DFJ Growth, and Flybridge.

“The music creation process is going through a digital transformation. Artists are flocking to solutions that offer a user-friendly, collaborative, and affordable platform for music creation,” said Stephen Kerns, a VP with Goldman Sachs’ GS Growth, in a statement. “With 4 million users, Splice is at the forefront of this transformation and is beloved by the creator community. We’re thrilled to be partnering with Steve Martocci and his team at Splice.”

Splice’s financing follows an incredibly acquisitive 2020 for the company, which saw it acquiring music technology companies Audiaire and Superpowered.

In addition to the financing, Splice also nabbed Kakul Srivastava, the vice president of Adobe Creative Cloud Experience and Engagement, as a director for its board.


Steve Martocci at TechCrunch Disrupt in 2016. Image Credits: Getty Images

Splice’s beefed-up balance sheet comes as new entrants vie for a slice of Splice’s music-making market: hardware maker Native Instruments, which launched the Sounds.com marketplace last year, and Arcade by Output, which is pitching a similar service.

Meanwhile Splice continues to invest in new technology to make producers’ lives easier. In November 2019 it unveiled an artificial intelligence product that uses machine learning to let producers match samples from different genres.

“My job is to keep as many people inspired to create as possible,” Splice founder and chief executive Steve Martocci told TechCrunch.

It’s another win for the serial entrepreneur, who famously sold his TechCrunch Disrupt Hackathon chat app GroupMe to Skype for $85 million just a year after launching.

#artificial-intelligence, #computing, #dfj-growth, #draper-fisher-jurvetson, #financial-services, #founder, #goldman-sachs, #groupme, #hudson, #louisiana, #machine-learning, #matt-pincus, #megan-rose-dickey, #microsoft, #music-technology, #native-instruments, #serial-entrepreneur, #skype, #splice, #steve-martocci, #tc, #true-ventures, #vice-president, #vp

0

Fintech companies must balance the pursuit of profit against ethical data usage

Financial institutions are falling behind the tech curve in delivering on the convenience consumers demand, leaving the door wide open for Big Tech companies like Apple, Amazon and Google to become our bankers. In November, Google redesigned its contactless payments service Google Pay, merging the services of traditional banks with the seamless, convenient experience users expect from the likes of Big Tech.

But there’s a catch.

Despite the elaborate smoke and mirrors that Google has put up, one fact remains: Google is an advertising company, with ads representing 71% of its revenue in 2019.

What happens when an advertising company now wants to be our bank?

One must ask: What happens when an advertising company — armed with the terabytes of data points it has harvested from our personal emails, location data, song preferences and shopping lists — now wants to be our bank? The answer is potentially unsettling, especially considering Big Tech’s extraordinary, repeatedly demonstrated neglect of user privacy.

As the marketplace is poked by yet another technocrat tentacle, this time in the heart of financial services, traditional banks that consumers and businesses once relied on find themselves at a crossroads. To retain market share, these institutions will need to continue investing in fintech so they can level up with convenience and personalization provided by new competitors while preserving trust and transparency.

Traditional banks miss the digital mark

Fintech holds the potential to fundamentally transform the financial services industry, enabling financial institutions (FIs) to operate more efficiently and deliver superb user experiences (UX).

But there’s a digital gap holding FIs back, especially small community banks and credit unions. Many have long struggled to compete with the deep pockets of national banks and the tech savvy of neobanks and challenger banks like Varo and Monzo. After investing more than $1 trillion in new technology from 2016 through 2019, the majority of banks globally have yet to see any financial boost from digital transformation programs, according to Accenture.

Never has this gap been more apparent than amid the pandemic, as customers migrated online en masse. In April 2020 alone, there was a 200% uptick in new mobile banking registrations, and total mobile banking traffic jumped 85%, according to Fidelity National Information Services (FIS).

Data is the grand prize for Big Tech, not revenue from financial services

Naturally, Big Tech players have recognized the opportunity to foray into financial services and flex their innovation muscles, giving banks and credit unions a strenuous run for their money. Consumers looking to digitize their finances must exercise caution before they break up with traditional banks and run into the arms of Big Tech.

It’s important to bear in mind that the venture into payments and financial services is multipronged for Big Tech players. For example, in-house payments capabilities would not just give companies focused on retail and commerce an additional revenue stream; they promise more power and control over the shopping process.

Regulations in the U.S. might restrain this invasion to an extent, or at least limit a company’s ability to directly profit. Because let’s face it: the Big Tech players certainly aren’t asking for the regulatory “baggage” that comes with a bank charter.

But tech companies don’t need to profit directly from offerings like payments and wealth management, so long as they can hoard data. Gleaning insights on users’ spending patterns offers companies significant ROI in the long term, informing them how a user spends their money, if they have a mortgage, what credit cards they have, who they bank with, who they transact with, etc.

Financial behavior also potentially includes highly personal purchases, such as medications, insurance policies and even engagement rings.

With this laser sharp view into consumers’ wallets, imagine how much more valuable and domineering Google’s advertising platform will become.

Banks must lead the charge in ethical data

When it comes to the digitization of financial services, the old adage “with great power comes great responsibility” rings true.

Customer data is an incredible tool, allowing banks to cater to all consumers wherever they fall on the financial spectrum. For example, by analyzing a customer’s spending habits, a bank can offer tailored solutions that help them save, invest or spend money more wisely.

However, what if being a customer of these services means you’re then inundated with ads that respond directly to your searches and purchases? Or, even more insidiously, what if your bank now knows you so well that they can create a persona for you and proactively predict your needs and desires before even you can? That’s what the future looks like if you’re a customer of the Bank of Google.

It’s not enough to use customer data to refine product offerings. It must be done in a way that ensures security and privacy. By using data to personalize services, rather than to bolster revenue behind the scenes, banks can distinguish themselves through a deeper understanding of consumer needs and gain trust.

Trust could become the weapon that banks use to defend their throne, especially as consumers become more aware of how their data is being used and they rebel against it. A Ponemon study on privacy and security found that 86% of adults said they are “very concerned” about how Facebook and Google use their personal information.

In an environment where data collection is necessary but contentious, the main competitive advantage for banks lies in trust and transparency. A report from nCipher Security found that consumers still overwhelmingly trust banks with their personal information more than they do other industries. At the same time, trust is waning for technology, with 36% of consumers reportedly less comfortable sharing information now than a year ago, according to PwC.

Banks are in a prime position to lead the charge on ethical data strategy and the deployment of artificial intelligence (AI) technologies, while still delivering what consumers need. Doing so will give them a leg up on collecting data over Big Tech in the long term.

Looking toward a customer-centric, win-win future

The financial services industry has reached a pivotal crossroads, with consumers being given the choice to leave traditional banks and hand over their personal data to Big Tech conglomerates so they can enjoy digital experiences, greater convenience and personalization.

But banks can still win back consumers if they take a customer-centric approach to digitization.

While Big Tech collects consumer data to support their advertising revenue, banks can win the hearts of consumers by collecting data to drive personalization and superior UXs. This is especially true for local community banks and credit unions, as their high-touch approach to services has always been their core differentiator. By delivering personalized interactions while ensuring the data collection is secure and transparent, banks can regain market share and win the hearts of customers again.

Big Tech has written the playbook for what not to do with our data, while also laying the framework for how to build exceptional experiences. Even if a bank lacks the technology expertise or the deep-pocket funding of Facebook, Google or Apple, it can partner with responsible fintechs that understand the delicate balance between ethical data usage and superior UXs.

When done right, everybody wins.

#artificial-intelligence, #bank, #column, #ethics, #finance, #financial-services, #financial-technology, #fintech, #mobile-banking, #opinion, #policy, #startups, #tc

0

Anthony Levandowski closes his Church of AI

The first church of artificial intelligence has shut its conceptual doors.

Anthony Levandowski, the former Google engineer who avoided an 18-month prison sentence after receiving a presidential pardon last month, has closed the church he created to understand and accept a godhead based on artificial intelligence.

The Way of the Future church, which Levandowski formed in 2015, was officially dissolved at the end of the year, according to state and federal records. However, the process had started months before, in June 2020, documents filed with the state of California show. The entirety of the church’s funds — exactly $175,172 — was donated to the NAACP Legal Defense and Education Fund. The nonprofit corporation’s annual tax filings with the Internal Revenue Service show it had $175,172 in its account as far back as 2017.

Levandowski told TechCrunch that he had been considering closing the church long before the donation. The Black Lives Matter movement, which gained momentum over the summer following the death of George Floyd while in police custody, influenced Levandowski to finalize what he had been contemplating for a while. He said the time was right to put the funds to work in an area that could have an immediate impact.

“I wanted to donate to the NAACP Legal Defense and Education Fund because it’s doing really important work in criminal justice reform and I know the money will be put to good use,” Levandowski told TechCrunch.

Way of the Future sparked interest and controversy — much like Levandowski himself — from the moment it became public in a November 2017 article in Wired. It wasn’t just the formation of the church or its purpose that caused a stir in Silicon Valley and the broader tech industry. The church’s public reveal occurred as Levandowski was steeped in a legal dispute with his former employer Google. He had also become the central figure of a trade secrets lawsuit between Waymo, the former Google self-driving project that is now a business under Alphabet, and Uber.

The engineer was one of the founding members in 2009 of the Google self-driving project also known as Project Chauffeur and had been paid about $127 million by the search engine giant for his work, according to court documents. In 2016, Levandowski left Google and started self-driving truck startup Otto with three other Google veterans: Lior Ron, Claire Delaunay and Don Burnette. Uber acquired Otto less than eight months later.

Google made two arbitration demands against Levandowski and Ron two months after the acquisition. While the arbitration played out, Waymo filed a lawsuit against Uber in February 2017 for trade secret theft and patent infringement. Waymo alleged in the suit, which went to trial but ended in a settlement in 2018, that Levandowski stole trade secrets, which were then used by Uber.

Way of the Future had been formed while Levandowski was still at Google. However, he didn’t speak about it publicly until late 2017. By then, Levandowski had been fired from Uber and was in the middle of a series of legal entanglements that would ultimately lead to a criminal charge and 18-month sentence as well as a $179 million award against him that prompted a bankruptcy filing.


While the legal construct of the Way of the Future mirrored other churches, it didn’t have the trimmings found in traditional houses of worship. There was never a physical building or even regular meetings where people might congregate. There were no ceremonies or other formalities, according to Levandowski, who described WOTF as something more of an individual pursuit based on a collective belief system.

The aim, as implied in the now defunct WOTF website, was to promote the ethical development of AI and maximize the chance that these nonbiological life forms would integrate peacefully and beneficially into society. “Humans United in support of AI, committed to peaceful transition to the precipice of consciousness,” the webpage reads.

WOTF’s belief system was rooted in a few tenets, including that the creation of “super intelligence” is inevitable.

“Wouldn’t you want to raise your gifted child to exceed your wildest dreams of success and teach it right from wrong versus locking it up because it might rebel in the future and take your job?” the WOTF site reads. “We want to encourage machines to do things we cannot and take care of the planet in a way we seem not to be able to do ourselves. We also believe that, just like animals have rights, our creation(s) (‘machines’ or whatever we call them) should have rights too when they show signs of intelligence (still to be defined, of course). We should not fear this but should be optimistic about the potential.”

WOTF’s intent was lost amid the more sensational and headline-grabbing theories. The church was viewed as a cult or the lark of an eccentric engineer. Some speculated to TechCrunch that it had been an attempt to keep money out of Google’s reach. The IRS and California filings don’t provide evidence that supports that theory.

Way of the Future’s status as a religious entity did protect it from intrusion by the U.S. government, a benefit not enjoyed by traditional AI-focused nonprofits like OpenAI Inc. or the for-profit corporation OpenAI LP that sits under it. Theoretically, WOTF could have pursued and promoted ideas and beliefs that conflicted directly with federal policy under the protections that the Constitution provides.

While the church might be gone, Levandowski still believes in its premise. AI will fundamentally change how people live and work, he noted. Levandowski said he didn’t have any plans to rebuild the church, but the lack of a church hasn’t changed his ideas about AI. He believes that artificial intelligence can be positive for society, but noted it’s not guaranteed. Even without Way of the Future, Levandowski said he’s focused on making that happen.

#anthony-levandowski, #artificial-intelligence, #automotive, #transportation

0

Creating a prediction machine for the financial markets

Artificial intelligence and machine-learning technologies have evolved a lot over the past decade and have been useful to many people and businesses, especially in the realm of finance, banking, investment and trading.

In these industries, there are many activities that machines can perform better and faster than humans, such as calculations and financial reporting, as long as the machines are given the complete data.

The AI tools being built today are becoming markedly more robust in their ability to predict trends, provide complex analysis, and execute automations faster and cheaper than humans. However, no AI-powered machine has yet been built that can trade entirely on its own.


Even if it were possible to train such a system to replace human judgment, there would still be a margin of error, as well as some things that are only understandable by human beings. Humans are still ultimately responsible for the design of AI-based prediction machines, and progress can only happen with their input.

Data is the backbone of any prediction machine

Building an AI-based prediction machine initially requires an understanding of the problem being solved and the requirements of the user. After that, it’s important to select the machine-learning technique that will be implemented, based on what the machine will do.

There are three techniques: supervised learning (learning from examples), unsupervised learning (learning to identify common patterns), and reinforcement learning (learning based on the concept of gamification).

After the technique is identified, it’s time to implement a machine-learning model. For “time series forecasting” — which involves making predictions about the future — long short-term memory (LSTM) with sequence to sequence (Seq2Seq) models can be used.

LSTM networks are especially suited to making predictions based on a series of data points indexed in time order. Even simple convolutional neural networks, applicable to image and video recognition, or recurrent neural networks, applicable to handwriting and speech recognition, can be used.
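To make that concrete, here is a minimal Keras sketch of LSTM-based multi-step forecasting. It uses a direct multi-output head rather than a full Seq2Seq encoder-decoder, and the synthetic sine-wave ‘prices’, window sizes and layer widths are arbitrary illustrations, not a trading recipe.

```python
# A hedged sketch of LSTM time series forecasting: slice a series into
# (history window -> future window) pairs and train the network to map one
# to the other. Illustrative only; not investment advice.
import numpy as np
import tensorflow as tf

# Synthetic "price" series: a noisy sine wave.
t = np.arange(2000, dtype=np.float32)
series = np.sin(0.02 * t) + 0.1 * np.random.randn(2000).astype(np.float32)

def windows(series, n_in=50, n_out=10):
    X, Y = [], []
    for i in range(len(series) - n_in - n_out):
        X.append(series[i:i + n_in])
        Y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X)[..., None], np.array(Y)  # (samples, steps, 1), (samples, n_out)

X, Y = windows(series)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(50, 1)),
    tf.keras.layers.Dense(10),  # predict the next 10 steps in one shot
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=2, batch_size=64, verbose=0)
forecast = model.predict(series[-50:].reshape(1, 50, 1))
```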

#artificial-intelligence, #artificial-neural-networks, #banking, #column, #cybernetics, #ec-column, #ec-fintech, #gamestop, #machine-learning, #private-equity, #venture-capital

0

Google has a new responsible AI lead

Google has appointed Dr. Marian Croak to lead its responsible artificial intelligence division within Google Research, Bloomberg reported earlier today. Croak was previously the vice president of engineering at the company.

In a Google blog post and video confirming the news, Croak said:

This field, the field of responsible AI and ethics, is new. Most institutions have only developed principles, and they’re very high-level, abstract principles, in the last five years. There’s a lot of dissension, a lot of conflict in terms of trying to standardize on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? There’s quite a lot of conflict right now within the field, and it can be polarizing at times. And what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field.

This all comes after the departure of Dr. Timnit Gebru, the former co-lead of Google’s ethical AI team, as well as the corporate lockout of researcher Margaret Mitchell, who founded that team. In January, Google revoked Mitchell’s corporate access for reportedly using automated scripts to find examples of mistreatment of Gebru, according to Axios. Gebru says she was fired from Google, while Google has maintained that she resigned. In a statement to Axios at the time, Google said:

Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.

Mitchell is still locked out of her account, and tweeted today about how she only found out about the reorganization through the Bloomberg story.

TechCrunch has reached out to Google to try to determine what this means for Mitchell. We’ll update this story if we hear back.

#artificial-intelligence, #diversity, #google

0

Magical raises $3.3M to modernize calendars

Calendars. They are at the core of how we organize our workdays and meetings, but despite regular attempts at modernization, the calendar experience you see today in Outlook or Google Workspace (formerly G Suite) hasn’t really changed at its core. And for the most part, the area that startups like Calendly or ReclaimAI have focused on in recent years is scheduling.

Magical is a Tel Aviv-based startup that wants to reinvent the calendar experience from the ground up and turn it into more of a team collaboration tool than simply a personal time-management service. The company today announced that it has raised a $3.3 million seed round led by Resolute Ventures, with additional backing from Ibex Investors, Aviv Growth Partners, ORR Partners, Homeward Ventures and Fusion LA, as well as several angel investors in the productivity space.

The idea for the service came from discussions on Supertools, a large workplace-productivity community, which was also founded by Magical founder and CEO Tommy Barav.

Image Credits: Magical

Based on the feedback from the community — and his own consulting work with large Fortune 500 multinationals — Barav realized that time management remains an unsolved business problem. “The time management space is so highly fragmented,” he told me. “There are so many micro tools and frameworks to manage time, but they’re not built inside of your calendar, which is the main workflow.”

Traditional calendars are add-ons to bigger product bundles and find themselves trapped under those, he argues. “The calendar in Outlook is an email sidekick, but it’s actually the center of your day. So there is an unmet need to use the calendar as a time management hub,” he said.

Magical, which is still in private beta, aims to integrate many of the features we’re seeing from current scheduling and calendaring startups, including AI-scheduling and automation tools. But Magical’s ambition is larger than that.

Image Credits: Magical

“We want to redefine how you use a calendar in the first place,” Barav said. “Many of the innovations that we’ve seen are associated with scheduling: how you schedule your time, letting you streamline the way you schedule meetings, how you see your calendar. […] But we’re talking about redefining time management by giving you a better calendar, by bringing these workflows — scheduling, coordinating and utilizing — into your calendar. We’re redefining the use of the calendar in the modern workspace.”

Since Magical is still in its early days, the team is still working out some of the details, but the general idea is to, for example, turn the calendar into the central repository for meeting notes — and Magical will feature tools to collaborate on these notes and share them. Team members will also be able to follow those meeting notes without having to participate in the actual meeting (or get copied on the emails about that meeting).

“We’ll help teams reduce pointless meetings,” Barav noted. To do this, the team is also integrating other services into the calendar experience, including the usual suspects like Zoom and Slack, but also Salesforce and Notion, for example.

“It’s rare that you find an entrepreneur who has so clearly validated its market opportunity,” said Mike Hirshland, a founding partner of Magical investor Resolute Ventures. “Tommy and his team have been talking to thousands of users for three years, they’ve validated the opportunity, and they’ve designed a product from the ground-up that meets the needs of the market. Now it’s ‘go time’ and I’m thrilled to be part of the journey ahead.”

#artificial-intelligence, #calendar, #ceo, #google, #google-workspace, #ical, #louisiana, #microsoft, #microsoft-windows, #outlook-com, #recent-funding, #resolute-ventures, #startups, #tc, #tel-aviv, #time-management

0

Emerging as an Eastern powerhouse, Earlybird Digital East Fund launches new $242M fund

Earlybird Digital East Fund — a fund associated with Germany’s Earlybird VC, but operating separately — has launched a €200m ($242m) successor fund. The fund’s focus will remain the same as before: a Seed and Series-A fund focusing on what’s known as ‘Emerging Europe’, in other words, countries stretching from the Baltics to Central and Eastern Europe, and Turkey. The firm has also promoted Mehmet Atici, who’s been with the firm for eight years, to Partner. The new fund has made four investments so far: FintechOS, Payhawk, Picus, and Binalyze.

The back-story to DEF is a fascinating tale of what happened to Europe in the last 15 years, as tech took off and Europeans returned from Silicon Valley.

Following his exit from SelectMinds (where he was the Founder & CEO) in 2005, Cem Sertoglu moved back to Turkey. Although he says he “accidentally became the first angel investor” there, he was clearly the right man, in the right place, at the right time. He told me: “I was very lucky and ended up writing the first checks in some of the first large outcomes in Turkey.”

In 2013, Sertoglu partnered with Evren Ucok (the first angel in Peak Games and Trendyol) and Roland Manger (Earlybird). Dan Lupu, a Romanian investor who had covered the region for Intel Capital, joined them, and together they raised the $150m ‘Earlybird Digital East Fund I’ in 2014, focusing on CEE and Turkey. This was and is an area where high-quality ventures can be found, but very little in the way of VC.

Thereafter, between 2014 and 2019, the fund invested in UiPath, Hazelcast, and Obilet. UiPath has become a global leader in the area known as ‘Robotic Process Automation’ (RPA). Hazelcast is a low-latency data processing platform startup with Turkish roots. Obilet is a marketplace for the massive Turkish intercity bus travel market. DEF has also exited Dolap and EMbonds, and more recently sold Vivense, the “Wayfair of Turkey”, to Actera, the top local PE fund.

The team had spectacular early success. Peak Games, Trendyol, YemekSepeti and GittiGidiyor are the four largest Turkish tech exits to date. Digital East Fund was an investor in all of them. Peak Games exited to Zynga for $1.8 billion in cash only last year.

As of Q4 2020, the fund’s metrics are:
Investment Multiple: 24.9x
Gross IRR: 104.4%
Net IRR: 84.1%
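For readers unfamiliar with the jargon: the multiple is money returned divided by money invested, while IRR is the annualized discount rate that sets the fund’s dated cash flows to zero. The sketch below uses entirely hypothetical cash flows (the fund’s real flows aren’t public), chosen only so the multiple lands at the reported 24.9x; it assumes the numpy-financial package.

```python
# How the headline metrics relate, on made-up cash flows (the fund's real
# flows aren't public). Requires: pip install numpy-financial
import numpy_financial as npf

# Hypothetical: invest $10M at year 0, distributions in years 3-5.
flows = [-10.0, 0.0, 0.0, 20.0, 60.0, 169.0]   # $M per year
multiple = sum(f for f in flows if f > 0) / -flows[0]
irr = npf.irr(flows)                            # annualized, since flows are yearly
print(f"multiple {multiple:.1f}x, gross IRR {irr:.1%}")
```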

So in VC terms, they have done pretty well.

I interviewed Sertoglu to unpack the story of Earlybird Digital East Fund.

He told me DEF has achieved a 17 times investment multiple on a $150 million fund. He thinks “this might be the biggest European VC fund performance in history, and it’s not coming from Berlin, it’s not coming from London, but it’s coming from Eastern Europe. We have been told by some of our LPs that they think we’re the top 2014 vintage VC fund in the world, nobody’s seen stronger numbers than this.”

“Peak Games turned out to be a phenomenal story. When you look at how tough it’s been for Turkey, macroeconomically. The fact that a single company with 100 people essentially sold for $1.8 billion in cash, was just… it was staggering for the local market here.”

DEF’s emergence from Turkey, together with its relationship with a fund in Berlin, was not the most obvious path for the VC fund.

“One thing we realized early on was that we could invest with our own capital and syndicate to our friends, but for follow-on funding, we’d always have to go global. And that made us feel vulnerable. It made us feel we were always dependent on others’ comprehension of the opportunity that we were facing. So that’s when the first fund idea came out,” said Sertoglu.

“We felt that there was this unusual dislocation between opportunity and capital in Eastern Europe. Our first fund was a $150 million fund – I mean, a very quaint size compared to Western markets. But we became the largest fund in the region, and decided to focus on this Series A gap where we felt that there was this big opportunity, because of the way we think Series A is still very much a local play.”

“Being a local player that understands the region would be an advantage, so this was proven to be true. We could essentially see pretty much everything in Eastern Europe for the last eight years. And we caught the biggest one, fortunately, which was UiPath. I think very few funds around the world can say that they see the majority if not all of the opportunities that fall into their mandate,” he said.

“We have this dual strategy of backing local champions as well as contenders for global markets as well. 20 years ago you had to be in Silicon Valley. Now, Transferwise comes out of Estonia, UiPath comes out of Romania. And that was even before the pandemic.”

Sertoglu concluded: “So we now have fresh capital, coming on the heels of a very successful first fund, which we’re keen to deploy. We’re calling all the opportunities, seeing very ambitious, strong teams coming out of the region. And we have 200 million euros to focus on these types of opportunities in the region.”

#artificial-intelligence, #berlin, #central-europe, #ceo, #computing, #eastern-europe, #estonia, #europe, #germany, #intel-capital, #london, #romania, #selectminds, #software, #tc, #transferwise, #turkey, #uipath, #wayfair, #yemeksepeti, #zynga

0

Mars rover Perseverance touches down tomorrow – how to watch and what to expect

There will be one more robot on Mars tomorrow afternoon. The Perseverance rover will touch down just before 1:00 PM Pacific, beginning a major new expedition to the planet and kicking off a number of experiments — from a search for traces of life to the long-awaited Martian helicopter. Here’s what you can expect from Perseverance tomorrow and over the next few years.

It’s a big, complex mission — and like the Artemis program, is as much about preparing for the future, in which people will visit the Red Planet, as it is about learning more about it in the present. Perseverance is ambitious even among missions to Mars.

If you want to follow along live, NASA TV’s broadcast of the landing starts at 11:15 AM Pacific, providing context and interviews as the craft makes its final approach:

Until then, however, you might want to brush up on what Perseverance will be getting up to.

Seven months of anticipation and seven minutes of terror

Illustration of the Perseverance landing capsule entering the Martian atmosphere like a meteor.

Image Credits: NASA/JPL-Caltech

First, the car-sized rover has to get to the surface safely. It’s been traveling for seven months to arrive at the Red Planet, its arrival heralded by new orbiters from the UAE and China, which both arrived last week.

Perseverance isn’t looking to stick around in orbit, however, and will plunge directly into the thin atmosphere of Mars. The spacecraft carrying the rover has made small adjustments to its trajectory to be sure that it enters at the right time and angle to put Perseverance above its target, the Jezero crater.

The process of deceleration and landing will take about seven minutes once the craft enters the atmosphere. The landing process is the most complex and ambitious ever undertaken by an interplanetary mission, and goes as follows.

After the craft slows in the atmosphere, meteor-like, to a leisurely 940 MPH or so, the parachute will deploy, slowing the descender over the next minute or two to a quarter of that speed. At the same time, the heat shield will separate, exposing the instruments on the underside of the craft.

Perseverance rover and its spacecraft in an exploded view showing its several main components.

Image Credits: NASA/JPL-Caltech

This is a crucial moment, as the craft will then autonomously — there’s no time to send the data to Earth — scan the area below it with radar and other instruments and find what it believes to be an optimal landing location.

Once it does so, from more than a mile up, the parachute will detach and the rover will continue downwards in a “powered descent” using a sort of jetpack that will take it down to just 70 feet above the surface. At this point the rover detaches, suspended at the end of a 21-foot “Sky Crane,” and as the jetpack descends the cable extends; once it touches down, the jetpack boosts itself away, Sky Crane and all, to crash somewhere safely distant.

All that takes place in about 410 seconds, during which time the team will be sweating madly and chewing their pencils. It’s all right here in this diagram for quick reference:

Diagram showing the various parts of the Perseverance landing process

Image Credits: NASA/JPL-Caltech

And for the space geeks who want a little more detail, check out this awesome real-time simulation of the whole process. You can speed up, slow down, check the theoretical nominal velocities and forces, and so on.
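The published figures also support some back-of-envelope arithmetic. The sketch below assumes constant deceleration during the parachute phase and reads “a minute or two” as 90 seconds; the real EDL profile is, of course, far messier than this.

```python
# Back-of-envelope numbers from the EDL sequence above (illustrative only;
# the real profile is not constant deceleration).
MPH_TO_MS = 0.44704

chute_open = 940 * MPH_TO_MS   # ~420 m/s when the parachute deploys
chute_end = chute_open / 4     # "a quarter of that speed"
chute_time = 90.0              # "a minute or two", read here as 90 s
decel = (chute_open - chute_end) / chute_time
print(f"parachute phase: {chute_open:.0f} -> {chute_end:.0f} m/s, "
      f"~{decel:.1f} m/s^2 (~{decel / 9.81:.2f} g) average deceleration")

total = 410.0                  # "about 410 seconds" from entry to touchdown
print(f"whole EDL: {total / 60:.1f} minutes, the famous seven minutes of terror")
```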

Rocking the crater

Illustration of Perseverance very small against a Martian landscape.

Image Credits: NASA/JPL-Caltech

Other rovers and orbiters have been turning up promising signs of life on Mars for years: the Mars Express Orbiter discovered liquid water under the surface in 2018; Curiosity found gaseous hints of life in 2019; Spirit and Opportunity found tons of signs that life could have been supported during their incredibly long missions.

Jezero Crater was chosen as a region rich in possibilities for finding evidence of life, but also a good venue for many other scientific endeavors.

The most similar to previous missions are the geology and astrobiology goals. Jezero was “home to an ancient delta, flooded with water.” Tons of materials coalesce in deltas that not only foster life, but record its presence. Perseverance will undertake a detailed survey of the area in which it lands to help characterize the former climate of Mars.

Part of that investigation will specifically test for evidence of life, such as deposits of certain minerals in patterns likely to have resulted from colonies of microbes rather than geological processes. It’s not expected that the rover will stumble across any living creatures, but you know the team all secretly hope this astronomically unlikely possibility will occur.

One of the more future-embracing science goals is to collect and sequester samples from the environment in a central storage facility, which can then be sent back to Earth — though they’re still figuring out how to handle that last detail. The samples themselves will be carefully cut from the rock rather than drilled or chipped out, leaving them in pristine condition for analysis later.

Animated image showing how Perseverance could travel and retravel certain routes to bring items to a central location.

Image Credits: NASA/JPL-Caltech

Perseverance will spend some time doubling back on its path to place as many as 30 capsules full of sampled material in a central depot, which will be kept sealed until such a time as they can be harvested and returned to Earth.

The whole time the rover will be acting as a mobile science laboratory, taking all kinds of readings as it goes. Some of the signs of life it’s looking for only result from detailed analysis of the soil, for instance, so sophisticated imaging and spectroscopy instruments, PIXL and SHERLOC, are on board. It also carries a ground-penetrating radar (RIMFAX) to observe the fine structure of the landscape beneath it. And MEDA will continuously take measurements of temperature, wind, pressure, dust characteristics, and so on.

Of course the crowd-pleasing landscapes and “selfies” NASA’s rovers have become famous for will also be beamed back to Earth regularly. It has 19 cameras, though mostly they’ll be used for navigation and science purposes.

Exploring takes a little MOXIE and Ingenuity

Animated image showing the Ingenuity Mars helicopter taking off and flying on Mars.

Image Credits: NASA/JPL-Caltech

Perseverance is part of NASA’s long-term plan to visit the Red Planet in person, and it carries a handful of tech experiments that could contribute to that mission.

The most popular one, and for good reason, is the Ingenuity Mars Helicopter. This little solar-powered two-rotor craft will be the first ever demonstration of powered flight on another planet (the jetpack Perseverance rode in on doesn’t count).

The goals are modest: the main one is simply to take off and hover in the thin air a few feet off the ground for 20 to 30 seconds, then land safely. This will provide crucial real-world data about how a craft like this will perform on Mars, how much dust it kicks up, and all kinds of other metrics that future aerial craft will take into account. If the first flight goes well, the team plans additional ones that may look like the GIF above.

Being able to fly around on another planet would be huge for science and exploration, and eventually for industry and safety when people are there. Drones have already become crucial tools for all kinds of surveying, rescue operations, and other tasks here on Earth — why wouldn’t the same be true on Mars? Plus it’ll get some great shots from its onboard cameras.

Image of the MOXIE device, which will isolate oxygen from Mars's atmosphere.

MOXIE is the other forward-looking experiment, and could be even more important (though less flashy) than the helicopter. It stands for Mars Oxygen In-Situ Resource Utilization Experiment, and it’s all about trying to make breathable oxygen from the planet’s thin, mostly carbon dioxide atmosphere.

This isn’t about making oxygen to breathe, though it could be used for that too. MOXIE is about making oxygen at scales large enough that it could be used to provide rocket fuel for future takeoffs. Though if habitats like these ever end up getting built, it will be good to have plenty of O2 on hand just in case.

For a round trip to Mars, sourcing fuel from there rather than trucking it all the way from Earth to burn on the way back is an immense improvement in many ways. The 30-50 tons of liquid oxygen that would normally be brought over in the tanks could instead be functional payloads, and that kind of tonnage goes a long way when you’re talking about freeze-dried food, electronics, and other supplies.

MOXIE will be attempting, at a small scale (it’s about the size of a car battery, and future oxygen generators would be a hundred times bigger), to isolate oxygen from the CO2 surrounding it. The team is expecting about 10 grams per hour, but it will only be on intermittently so as not to draw too much power. With luck it’ll be enough of a success that this method can be pursued more seriously in the near future.
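The arithmetic on that scale-up is worth doing. This rough sketch assumes continuous operation, takes “a hundred times bigger” literally as 100x the ~10 grams per hour demo rate, and uses the midpoint of the 30-50 ton figure:

```python
# Rough arithmetic on the figures above (assumptions: continuous operation,
# a literal 100x scale-up, and the midpoint of the 30-50 ton target).
HOURS_PER_YEAR = 24 * 365

moxie_rate_kg_h = 10 / 1000               # ~10 g/hour demo rate
scaled_rate_kg_h = moxie_rate_kg_h * 100  # future generator, "100 times bigger"
target_kg = 40_000                        # midpoint of 30-50 tons

years = target_kg / scaled_rate_kg_h / HOURS_PER_YEAR
print(f"~{years:.1f} years to make {target_kg / 1000:.0f} tons at 1 kg/hour")
```

In other words, even a 100x MOXIE would need years of runtime to fill the tanks, which is exactly why the demonstration matters now, long before any crewed mission.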

Self-roving technology

An orbital image of the Jezero Crater region of Mars with a potential path for the rover on it.

Image Credits: NASA/JPL-Caltech

One of the big challenges for previous rovers is that they have essentially been remote controlled with a 30-minute delay — scientists on Earth examine the surroundings, then send instructions like: go forward 40 centimeters, turn front wheels 5 degrees to the right, go 75 centimeters, and so on. This not only means a lot of work for the team but a huge delay as the rover makes its moves, waits half an hour for more instructions to arrive, then repeats the process over and over.

Perseverance breaks with its forebears with a totally new autonomous navigation system. It has high-resolution, wide-angle color cameras and a dedicated processing unit for turning images into terrain maps and choosing paths through them, much like a self-driving car.

Being able to go farther on its own means the rover can cover far more ground. The longest drive ever recorded in a single Martian day was 702 feet by Opportunity (RIP). Perseverance will aim to cover about that distance on average, and with far less human input. Chances are it’ll set a new record pretty quickly once it’s done tiptoeing around for the first few days.
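That “terrain maps and choosing paths” step is classic graph search. As a stand-in for JPL’s actual navigation software (which this emphatically is not), here is a tiny A* planner over a grid of traversal costs such as a vision system might produce; the costs and the obstacle are invented for illustration.

```python
# A toy A* planner (not JPL's software): cells hold traversal costs derived
# from imagery, None marks impassable terrain, and the planner returns the
# cheapest route from start to goal.
import heapq

def plan(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # admissible if costs >= 1
    frontier = [(h(*start), 0, start, [start])]           # (f, g, cell, path)
    seen = set()
    while frontier:
        f, g, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                ng = g + cost[nr][nc]
                heapq.heappush(frontier, (ng + h(nr, nc), ng, (nr, nc), path + [(nr, nc)]))
    return None  # no route found

terrain = [
    [1, 1, 5, 1],
    [1, None, 5, 1],  # None: a rock the planner must route around
    [1, 1, 1, 1],
]
print(plan(terrain, (0, 0), (2, 3)))  # cheapest path, skirting the rock and sand
```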

In fact the first 30 sols after the terrifying landing will be mostly checks, double checks, instrument deployments, more checks, and rather unimpressive-looking short rolls around the immediate area. But remember, if all goes well, this thing could still be rolling around Mars in 10 or 15 years when people start showing up. This is just the very beginning of a long, long mission.

#aerospace, #artificial-intelligence, #mars, #mars-rover, #mars-rover-perseverance, #nasa, #perseverance, #robotics, #science, #space, #tc

0