Productivity startup Time is Ltd raises $5.6M to be the ‘Google Analytics for company time’

Productivity analytics startup Time is Ltd. wants to be the Google Analytics for company time. Or perhaps a sort of “Apple Screen Time” for companies. Whatever the case, the founders reckon that if you can map how time is spent in a company, enormous productivity gains can be unlocked and money better spent.

It’s now raised a $5.6 million late seed funding round led by Mike Chalfen, of London-based Chalfen Ventures, with participation from Illuminate Financial Management and existing investor Accel. Acequia Capital and former Seal Software chairman Paul Sallaberry are also contributing to the new round, as is former Seal board member Clark Golestani. Furthermore, Ulf Zetterberg, founder and former CEO of contract discovery and analytics company Seal Software, is joining as President and co-founder.

The venture is the latest from serial entrepreneur Jan Rezab, better known for founding SocialBakers, which was acquired last year.

We are all familiar with inefficient meetings, pestering chat notifications, video conferencing tools and the deluge of emails. Time is Ltd. says it plans to address this by acquiring insights and data from platforms such as Microsoft 365, Google Workspace, Zoom, Webex, MS Teams, Slack and more. The data and insights gathered would then help managers understand and take a new approach to measuring productivity, engagement and collaboration, the startup says.

The startup says it has now gathered 400 indicators that companies can choose from. For example, in an exercise The Wall Street Journal set for Time is Ltd., the startup found that the average response time on Slack was 16.3 minutes, compared with 72 minutes for email.

Chalfen commented: “Measuring hybrid and distributed work patterns is critical for every business. Time Is Ltd.’s platform makes such measurement easily available and actionable for so many different types of organizations that I believe it could make work better for every business in the world.”

Rezab said: “The opportunity to analyze these kinds of collaboration and communication data in a privacy-compliant way alongside existing business metrics is the future of understanding the heartbeat of every company – I believe in 10 years’ time we will be looking at how we could have ignored insights from these platforms.”

Tomas Cupr, founder and group CEO of Rohlik Group, the European e-grocery leader, said: “Alongside our traditional BI approaches using performance data, we use Time is Ltd. to help improve the way we collaborate in our teams and the way we work both internally and with our vendors – the data that Time is Ltd. provides is a must-have for business leaders.”

#accel, #analytics, #apple, #articles, #board-member, #business-intelligence, #ceo, #chairman, #computing, #digital-marketing, #e-grocery, #europe, #google, #leader, #london, #microsoft, #mike-chalfen, #seal-software, #serial-entrepreneur, #slack, #socialbakers, #software, #tc, #the-wall-street-journal, #time-is-ltd, #video-conferencing, #webex


June makes product analytics more accessible

Meet June, a new startup that wants to make it easier to create analytics dashboards and generate reports, even if you’re not a product analytics expert. June is built on top of your Segment data. Like many no-code startups, it uses templates and a graphical interface so that non-technical users can get started.

“What we do today is instant analytics and that’s why we’re building it on top of Segment,” co-founder and CEO Enzo Avigo told me. “It lets you access data much more quickly.”

Segment acts as the data collection and data repository for your analytics. After that, you can start playing with your data in June. Eventually, June plans to diversify its data sources.
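As a sketch of the pipeline described above: product events typically reach Segment through its analytics.js `track` call, and a tool like June then reads them from there. The event name, properties and the inline stub below are hypothetical, added only so the example is self-contained; on a real page the Segment snippet defines `analytics` and forwards events to your Segment source.

```javascript
// Hypothetical stand-in for Segment's analytics.js object so this
// runs standalone; in the browser, Segment's snippet provides it.
const events = [];
const analytics = {
  track: (name, properties) => events.push({ name, properties }),
};

// Instrument a product event the way a downstream analytics tool
// (June, in this article) would later query it:
analytics.track('Report Exported', { format: 'csv', plan: 'free' });
```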

“Our long-term vision is to become the Airtable of analytics,” Avigo said.

If you’re familiar with Airtable, June may look familiar. The company has built a template library to help you get started. For instance, June helps you track user retention, active users, your acquisition funnel, engagement, feature usage, etc.

Image Credits: June

Once you pick a template, you can start building a report by matching data sources with templates. June automatically generates charts, sorts your user base into cohorts and shows you important metrics. You can create goals so that you receive alerts in Slack whenever something good or bad is happening.

Advanced users can adopt June too, so that everyone on the team works from the same tool: they can create custom SQL queries and build templates based on those queries.

The company raised a seed round of $1.85 million led by Point Nine. Y Combinator, Speedinvest, Kima Ventures, eFounders and Base Case also participated, as well as several business angels.

Prior to June, the startup’s two co-founders worked at Intercom. They noticed that the company’s analytics tooling was too hard for many people to use, and as a result those people didn’t rely on analytics to make informed decisions.

There are hundreds of companies using June every week and that number is growing by 10% per week. Right now, the product is free but the company plans to charge based on usage.

Image Credits: June

#analytics, #developer, #europe, #france, #france-newsletter, #fundings-exits, #june, #product-analytics, #segment, #startups


New Instagram insights make its TikTok competitor Reels more appealing

Over the last year, Instagram has added a slew of features to help independent creators make a living, like Instagram Shop and Shopping in Reels. Today, Instagram launched new Insights for Reels and Live on its Professional Dashboard, giving businesses and creators essential data about the reach of their content. These tools will help Reels catch up with its competitor TikTok, which already offers users detailed analytics. As Instagram and TikTok continue trying to keep up with one another, it can only be a good thing for influencers and small businesses that use these platforms to bolster their income. 

Previously, Instagram creators could only view publicly available metrics, like the views, likes, or comments on a Reel. Now, they will be able to access data like Accounts Reached, Saves, and Shares for their Reels. Instagram will also share the number of Peak Concurrent Viewers that tune in to watch their Live videos. Plus, in the Account Insights section of the app, Instagram will add breakdowns that show users what kinds of accounts they are reaching, and which content formats are generating their strongest engagement. 

For entrepreneurs and content creators whose businesses run on social commerce, these analytics might not change the game, but they certainly make it easier to play. Shopping in Reels makes in-app sales more convenient, but until now, scant data was available to help businesses tailor their Reels to reach potential customers. On the other hand, TikTok’s analytics have long provided creators with data on their videos’ average watch time, types of traffic sources, and performance by geographic location. The viral video app announced earlier this month that it would work with specific brands, like the streetwear label Hype, to test in-app sales. This would deepen its competition with Instagram, but it’s still unclear when the feature will be widely available. So, Instagram’s Insights, combined with established in-app shopping, can create a perfect storm for content creators to better reach and monetize their target audiences.

“I always thought it was weird that there were no Insights for Reels. Sometimes it feels like shooting in the dark,” Quinn Jones told TechCrunch. Jones is one of the owners of KIKAY, a handmade jewelry business based in Los Angeles. With over 90,000 followers across Instagram and TikTok, the Gen-Z creators rely on social media to expand their audience and increase their sales. Though KIKAY has gone viral on TikTok, Jones said that Instagram has been the best way for the small business to gain followers.

“Insights are definitely going to be useful going forward,” said Jones. “It’s currently hard to tell the actual effective reach your videos have, and seeing Insights means more feedback to help improve content.”

For influencers, these analytics are also helpful for collaborating with brands on sponsored content. 

“I’ve been wanting Insights for Reels for the longest time. All we know now is views, likes, and comments,” said Cara Cochran, an LGBTQ+ content creator and microinfluencer. She notes that brands have already been pushing creators to make videos on Reels ever since Instagram redesigned its interface to place the short videos front-and-center. 

“Now that they are rolling out analytics, I think we will see a lot of brands push for more and more Reels instead of just static posts,” she says. “I think it brings their products to life in a whole new way, and it almost works like a commercial for them instead of just a static ad.” 

Instagram will begin rolling out Insights today. The company also says that over the coming months, it will add tools to help creators measure engagement over a preset time frame and begin to support Insights on desktop. 

#analytics, #apps, #creators, #insights, #instagram, #social-media, #social-media-marketing, #tiktok


This crypto surveillance startup — ‘We’re bomb sniffing dogs’ — just raised Series A funding

Solidus Labs, a company that says its surveillance and risk-monitoring software can detect manipulation across cryptocurrency trading platforms, is today announcing $20 million in Series A funding led by Evolution Equity Partners, with participation from Hanaco Ventures, Avon Ventures, 645 Ventures, the cryptocurrency derivatives exchange FTX, and a sprinkling of former government officials, including former CFTC commissioner Chris Giancarlo and former SEC commissioner Troy Paredes.

It’s pretty great timing, given the various signals coming from the U.S. government just last week that it’s intent on improving its crypto monitoring efforts — such as the U.S. Treasury’s call for stricter cryptocurrency compliance with the IRS.

Of course, Solidus didn’t spring into existence last week. Rather, it was founded in 2017 by several former Goldman Sachs employees who worked on the firm’s electronic trading desk for equities. At the time, Bitcoin was only becoming buzzier, but while the engineers anticipated different use cases for the cryptocurrency, they also recognized that a lack of compliance tools would be a barrier to its adoption by bigger financial institutions, so they left to build them.

Fast-forward to today: Solidus employs 30 people, has raised $23.75 million altogether and is in the process of doubling its head count to address growing demand. We talked with Solidus’s New York-based co-founder and CEO Asaf Meir — who was himself one of those former Goldman engineers — about the company late last week. Excerpts from that chat follow, edited lightly for length.

TC: Who are your customers?

AM: We work with exchanges, broker-dealers, OTC desks, liquidity providers and regulators — anyone who is exposed to the risk of buying and selling cryptocurrencies, crypto assets or digital assets, whatever you want to call them.

TC: What are you promising to uncover for them?

AM: What we detect, largely speaking, is volume and price manipulation, and that has to do with wash trading, spoofing, layering, pump and dumps, and an additional growing library of crypto native alerts that truly only exist in our unique market.

We had a 400% increase in inbound demand over 2020, driven largely by two factors, I think. One is regulatory scrutiny. Globally, regulators have gone out to market participants, letting them know that they have to ask for permission, not forgiveness. The second reason — which I like better — is the drastic increase in institutional appetite for exposure to this asset class. Every institution, the first question they ask any executing platform is: ‘What are your risk mitigation tools? How do you make sure there is market integrity?’

TC: We talked a couple of months ago, and you mentioned having a growing pipeline of customers, like the trading platform Bittrex in Seattle. Is demand coming primarily from the U.S.?

AM: We have demand in Asia and in Europe as well, so we will be opening offices there, too.

TC: Is your former employer Goldman a customer?

AM: I can’t comment on that, but I would say there isn’t a bank right now that isn’t thinking about how they’re going to get exposure to crypto assets, and in order to do that in a safe, compliant and robust way, they have to employ crypto-specific solutions.

Right now, there’s the new frontier — the clients we’re currently working with, which are these crypto-pure exchanges, broker-dealers, liquidity providers, and even traditional financial institutions that are coming into crypto and opening a crypto operation or a crypto desk. Then there’s the new new frontier: your NFTs, stablecoins, indexes, lending platforms, decentralized protocols and God knows what [else], all of a sudden reaching out to us, telling us they want to do the right thing, to ensure the users on their platform are well protected and that trading activities are audited, and [to enlist us] to prevent any manipulation.

TC: How does your subscription service work and who is building the tech?

AM: We consume private data from our clients — all their trading data — and we then put it into our detection models, whose output we ultimately surface through insights and alerts on our dashboard, which they have access to.

As for who is building it, we have a lot of fintech engineers who are coming from Goldman and Morgan Stanley and Citi and bringing that traditional knowledge of large trading systems at scale; we also have incredible data scientists out of Israel whose expertise is in anomaly detection, which they are applying to financial crime, working with us.

TC: What do these crimes look like?

AM: When we started out, there was much more wholesale manipulation happening, whether through wash trading or pump-and-dumps — things that are easier to perform. What we’re seeing today are extremely sophisticated manipulation schemes where bad actors are able to exploit different executing platforms. We’re quite literally surfacing new alerts that, if you were to use a legacy, rule-based system, you wouldn’t be able to [surface] because you’re not really sure what you’re looking for. We oftentimes have an alert that we haven’t named yet; we just know that this type of behavior is considered manipulative in nature and that our client should be looking into it.

TC: Can you elaborate a bit more about these new anomalies?

AM: I’m conflicted about how much of our clients’ private data we can share. But one thing we’re seeing is [a surge in] account extraction attacks, which is when, through different means, bad actors are able to gain access to an account’s funds and, in a sophisticated way, trade out of the exchange or broker-dealer or custodian. That’s happening in different social engineering-related ways, but we’re able, through account deviation and account profiling, to alert the exchange or broker-dealer or financial institution we’re working with to avoid that.

We’re about detection and prevention, not about tracing [what went wrong and where] after the fact. And we can do that regardless of knowing even personal identifiable information about that account. It’s not about the name or the IP address; it’s all about the attributes of trading. In fact, if we have an exchange in Hong Kong that’s experiencing a pump-and-dump on a certain coin pair, we can preemptively warn the rest of our client base so they can take steps to prepare and protect themselves.

TC: On the prevention front, could you also stop that activity on the Hong Kong exchange? Are you empowered by your clients to step in if you detect something anomalous?

AM: We’re bomb sniffing dogs, so we’re not coming to disable the bot. We know how to take the data and point out manipulation, but it’s then up to the financial institution to handle the case.

Pictured above, seated left to right: CTO Praveen Kumar and CEO Asaf Meir. Standing: COO Chen Arad.

#645-ventures, #analytics, #asaf-meir, #blockchain, #chainalysis, #crypto, #elementus, #evolution-equity-partners, #ftx, #hanaco-ventures, #recent-funding, #solidus-labs, #startups, #surveillance, #tc, #venture-capital


Piano raises $88M for analytics, subscription and personalization tools for publishers, adds LinkedIn as investor

As publishers face up to whatever might be their next existential crisis — there are so many options from which to choose, including Substack stealing all their writers; or Clubhouse pulling in people through audio-based conversations where news and analysis intermingle seamlessly with networking — a startup that’s helping them build more tools to keep their businesses and audiences intact, and hopefully grow, is announcing a growth round of its own.

Piano, which provides analytics and subscription services to publishers, has closed a round of $88 million, funding that it will be using to continue building out the technology that it provides to its customers, as well as forge into newer areas where it can better connect audiences online.

The funding comes on the heels of a strong period for Piano. The company works with around 1,000 customers — they include CNBC, Wall Street Journal, NBC Sports, Insider, The Economist, Gannett, Le Parisien, Nielsen, MIT Technology Review, The Telegraph, South China Morning Post and (disclosure) TechCrunch — and it has seen revenues grow 400% since 2019.

Piano has an interesting new backer in this round that might point to what form those newer areas of development might take. LinkedIn, the Microsoft-owned social networking site aimed at the working world, is participating in this Series B, which is being led by previous backer Updata Partners. Rittenhouse Ventures, which is based in Piano’s hometown of Philadelphia, is also participating.

(Piano is not disclosing valuation with this round but I understand it’s operating on a $75 million annual run rate currently. It has now raised just over $241 million.)

Trevor Kaufman, Piano’s CEO, would not be drawn on how specifically it will be working with LinkedIn, but it’s notable that the latter has long held back from leveraging the profiles it holds on its 740 million users much outside of the core LinkedIn experience.

That data could be applied in a number of ways: for example, login buttons for third-party sites similar to Facebook’s and Google’s; an identity layer for commenting on stories; or a way to manage logins via a LinkedIn profile, which could then potentially be used to help people manage and read all the content they subscribe to. Or something totally different: LinkedIn has a lot of unrealised potential that Piano could help tap.

“Members are increasingly turning to LinkedIn to stay informed on the news and views that shape their respective industries – critical to this is the work we do with trusted publishers and journalists,” said Scott Roberts, VP and Head of Business Development at LinkedIn, in a statement. “The opportunity to collaborate with Piano to help unlock more value for publisher content on LinkedIn makes it a natural strategic investment opportunity.”

The last year has seen many of us spending significantly more time indoors, and for a number of us that has also meant more reading, especially of smaller and more digestible formats such as periodicals. In a way, it’s no surprise that models like Substack’s have emerged and apparently thrived in this period, where writers are looking for different approaches and ways of connecting with readers while publishers by and large are conserving costs and strategies to weather out the storm.

Piano’s rise in that context is especially interesting, as in many cases it’s not reinventing the wheel for publishers but providing them with the tools to better leverage the content production that they already have in place. What’s notable is that in the process, it’s been able to capitalize on changing sentiments in the publishing industry. Whereas paywalls and subscriptions have in the past been seen as a drag on traffic (and the ads that get sold against it) and only useful for those in the world of B2B, now they are increasingly becoming more commonplace in a much wider range of settings, Kaufman said.

Piano’s tools are notable not just as basic levers to manage subscriptions (free and paid) but as a more sophisticated set of analytics that provide insight into how content is being read, which can in turn be used to develop subscription tiers and determine the likelihood of people subscribing (Nieman Lab has a good article on how that works here).

Another area where Piano is likely to develop more products is newsletters. No, not the Substack kind, but tools for publishers to help them build out newsletter businesses that they can monetize if they choose. Indeed, the other kind of newsletter venture is far from Piano’s agenda.

“I can’t imagine a more damaging entity for journalism than Substack,” Kaufman told me. “I think it’s gotten a tremendous amount of attention from writers because it is a fantasy come true for journalists, this idea that you can make $500k a year for writing on occasion. But nothing could be further from the truth.” He believes the model is so pumped up by venture funding that it’s not a viable one for the long haul.

That remains to be seen, I suppose, and of course Piano has a strong vested interest in supporting its publisher customers. What it mainly says to me is that there are still some innings left in this game, and maybe some more games in a longer series.

The company may also be dipping into more M&A, given how fragmented the audience development, analytics and measurement space is. In March of this year, the company acquired AT Internet, a French company, to better manage and crunch analytics from across a number of silos, including traffic, advertising, subscriptions, engagement and more.

“Piano’s recent growth has been outstanding, and we continue to be impressed by the expanding set of capabilities they bring to both media companies and brands looking to drive more revenue from their audiences,” said Jon Seeber, general partner at Updata Partners and a member of Piano’s board, in a statement. “They now have a true end-to-end platform that can power all aspects of the customer journey, allowing their clients to incorporate only the highest-quality data from across touchpoints to create the best experiences for users.”

#analytics, #funding, #media, #newsletters, #paywalls, #piano, #publishers, #publishing, #subscriptions


Google Analytics prepares for life after cookies

As consumer behavior and expectations around privacy have shifted — and operating systems and browsers have adapted to this — the age of cookies as a means of tracking user behavior is coming to an end. Few people will bemoan this, but advertisers and marketers rely on having insights into how their efforts translate into sales (and publishers like to know how their content performs as well). Google is obviously aware of this and it is now looking to machine learning to ready its tools like Google Analytics for this post-cookie future.


Vidhya Srinivasan, VP/GM, Advertising at Google

The company already brought several machine learning tools to Google Analytics last year. At the time, the focus was on alerting users to significant changes in their campaign performance, for example. Now, it is taking this a step further by using its machine learning systems to model user behavior when cookies are not available.

It’s hard to overestimate the importance of this shift, but according to Vidhya Srinivasan, Google’s VP and GM for Ads Buying, Analytics and Measurement, who joined the company two years ago after a long stint at Amazon (and IBM before that), it’s also the only way to go.

“The principles we outlined to drive our measurement roadmap are based on shifting consumer expectations and ecosystem paradigms. Bottom line: the future is consented. It’s modeled. It’s first-party. So that’s what we’re using as our guide for the next gen of our products and solutions,” she said in her first media interview after joining Google.

It’s still early days and a lot of users may yet consent and opt in to tracking and sharing their data in some form or another. But the early indications are that this will be a minority of users. Unsurprisingly, first-party data and the data Google can gather from users who consent become increasingly valuable in this context.

Because of this, Google is now also making it easier to work with this so-called ‘consented data’ and to create better first-party data through improved integrations with tools like the Google Tag Manager.

Last year, Google launched Consent Mode, which helps advertisers manage cookie behavior based on local data-protection laws and user preferences. For advertisers in the EU and in the U.K., Consent Mode allows them to adjust their Google tags based on a user’s choices and soon, Google will launch a direct integration with Tag Manager to make it easier to modify and customize these tags.
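For context, Consent Mode is configured through the standard `gtag` API: a page sets default consent states before any Google tags fire, then updates them once the user makes a choice in a consent banner. This is a minimal sketch; the inline `dataLayer` stub stands in for the bootstrap Google's snippet normally provides, so the example is self-contained.

```javascript
// Stub of the gtag.js bootstrap so this runs standalone; on a real
// page, Google's snippet defines dataLayer and gtag.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Default everything to denied before any Google tags load:
gtag('consent', 'default', {
  ad_storage: 'denied',
  analytics_storage: 'denied',
});

// Later, when the user accepts tracking in the site's consent banner:
gtag('consent', 'update', {
  ad_storage: 'granted',
  analytics_storage: 'granted',
});
```

With defaults denied, tags adjust their behavior (no cookies written) until an update grants consent, which is the mechanism the Tag Manager integration mentioned above makes easier to customize.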

How Consent Mode works today.

What’s maybe more important, though, is that Consent Mode will now use conversion modeling for users who don’t consent to cookies. Google says this can recover about 70% of ad-click-to-conversion journeys that would otherwise be lost to advertisers.

In addition, Google is also making it easier to bring first-party data (in a privacy-forward way) into Google Analytics to improve measurements and its models.

“Revamping a popular product with a long history is something people are going to have opinions about – we know that. But we felt strongly that we needed Google Analytics to be relevant to changing consumer behavior and ready for a cookie-less world – so that’s what we’re building,” Srinivasan said. “The machine learning that Google has invested in for years — that experience is what we’re putting in action to drive the modeling underlying this tech. We take having credible insights and reporting in the market seriously. We know that doing the work on measurement is critical to market trust. We don’t take the progress we’ve made for granted and we’re looking to continue iterating to ensure scale, but above all we’re prioritizing user trust.”


#advertising-tech, #amazon, #analytics, #articles, #computing, #european-union, #gm, #google, #google-analytics, #ibm, #machine-learning, #operating-systems, #tc, #tracking, #united-kingdom, #vp, #web-analytics, #world-wide-web


Analytics as a service: Why more enterprises should consider outsourcing

With an increasing number of enterprise systems, growing teams, a proliferating web presence and multiple digital initiatives, companies of all sizes are creating loads of data every day. This data contains excellent business insights and immense opportunities, but its sheer volume has made it impossible for companies to derive actionable insights from it consistently.

According to Verified Market Research, the analytics-as-a-service (AaaS) market is expected to grow to $101.29 billion by 2026. Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights. Through AaaS, managed services providers (MSPs) can help organizations get started on their analytics journey immediately without extravagant capital investment.

MSPs can take ownership of the company’s immediate data analytics needs, resolve ongoing challenges and integrate new data sources to manage dashboard visualizations, reporting and predictive modeling — enabling companies to make data-driven decisions every day.

AaaS could come bundled with multiple business-intelligence-related services. Primarily, the service includes (1) services for data warehouses; (2) services for visualizations and reports; and (3) services for predictive analytics, artificial intelligence (AI) and machine learning (ML). When a company partners with an MSP for analytics as a service, it is able to tap into business intelligence easily, instantly and at a lower cost of ownership than doing it in-house. This empowers the enterprise to focus on delivering better customer experiences, make decisions unencumbered and build data-driven strategies.


In today’s world, where customers value experiences over transactions, AaaS helps businesses dig deeper into their customers’ psyche and tap insights to build long-term winning strategies. It also enables enterprises to forecast and predict business trends by looking at their data and allows employees at every level to make informed decisions.

#analytics, #analytics-as-a-service, #artificial-intelligence, #big-data, #business-intelligence, #column, #data-management, #data-mining, #ec-column, #ec-enterprise-applications, #enterprise, #machine-learning, #predictive-analytics


Hacking my way into analytics: A creative’s journey to design with data

Growing up, did you ever wonder how many chairs you’d have to stack to reach the sky?

No? I guess that’s just me then.

As a child, I always asked a lot of “how many/much” questions. Some were legitimate (“How much is 1 USD in VND?”); some were absurd (“How tall is the sky and can it be measured in chairs?”). So far, I’ve managed to maintain my obnoxious statistical probing habit without making any mortal enemies in my 20s. As it turns out, that habit comes with its perks when working in product.


My first job as a product designer was at a small but energetic fintech startup whose engineers also dabbled in pulling data. I constantly bothered them with questions like, “How many exports did we have from that last feature launched?” and “How many admins created at least one rule on this page?” I was curious about quantitative analysis but did not know where to start.

I knew I wasn’t the only one. Even then, there was a growing need for basic data literacy in the tech industry, and it’s only getting more taxing by the year. Words like “data-driven,” “data-informed” and “data-powered” increasingly litter every tech organization’s product briefs. But where does this data come from? Who has access to it? How might I start digging into it myself? How might I leverage this data in my day-to-day design once I get my hands on it?

Data discovery for all: What’s in the way?

“Curiosity is our compass” is one of Kickstarter’s guiding principles. Powered by a desire for knowledge and information, curiosity is the enemy of many larger, older and more structured organizations — whether they admit it or not — because it hinders the production flow. Curiosity makes you pause and take time to explore and validate the “ask.” Asking as many what’s, how’s, why’s, who’s and how many’s as possible is important to help you learn if the work is worth your time.

#analytics, #business-intelligence, #column, #data-analysis, #data-management, #data-tools, #database, #developer, #ec-column, #startups


With the right tools, predicting startup revenue is possible

For a long time, “revenue” seemed to be a taboo word in the startup world. Fortunately, things have changed with the rise of SaaS and alternative funding sources such as revenue-based investing VCs. Still, revenue modeling remains a challenge for founders. How do you predict earnings when you are still figuring it out?

The answer is twofold: You need to make your revenue predictable, repeatable and scalable in the first place, plus make use of tools that will help you create projections based on your data. Here, we’ll suggest some ways you can get more visibility into your revenue, find the data that really matter and figure out how to put a process in place to make forecasts about it.
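As a minimal illustration of the kind of projection a tool like this produces, consider a simple recurring-revenue model that compounds new bookings against churn. All numbers here are hypothetical, and a real forecast would use your actual cohort data:

```python
def project_mrr(start_mrr, new_mrr_per_month, monthly_churn_rate, months):
    """Project monthly recurring revenue under constant bookings and churn.

    A deliberately simple model: each month, a fixed amount of new MRR is
    added and a fixed percentage of the existing base churns away.
    """
    mrr = start_mrr
    projections = []
    for _ in range(months):
        mrr = mrr * (1 - monthly_churn_rate) + new_mrr_per_month
        projections.append(round(mrr, 2))
    return projections

# Hypothetical inputs: $10k starting MRR, $2k new MRR/month, 3% monthly churn.
print(project_mrr(10_000, 2_000, 0.03, 6))
```

The point of even a toy model like this is that the inputs (bookings, churn) must themselves be repeatable before the output means anything, which is exactly the argument that follows.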

You need to make your revenue predictable, repeatable and scalable in the first place, plus make use of tools that will help you create projections based on your data.

Base projections on repeatable, scalable results

Aaron Ross is a co-author of “Predictable Revenue,” a book based on his experience of creating a process and team that helped grow Salesforce’s revenue by more than $100 million. “Predictable” is the key word here: “You want growth that doesn’t require guessing, hope and frantic last-minute deal-hustling every quarter- and year-end,” he says.

This makes recurring revenue particularly desirable, though it is by no means the be-all and end-all of predictable revenue. On one hand, there is always the risk that recurring revenue won’t last, as customers may churn and organic growth runs out of gas. On the other, there is a broader picture for predictable revenue that goes beyond subscription-based models.

Ross and his co-author, Marylou Tyler, outline three steps to predictable revenue: predictable lead generation, a dedicated sales development team and consistent sales systems. They wrote an entire book about it, so it would be hard to sum up here. The takeaway? You shouldn’t base your projections on processes and results that aren’t repeatable and scalable.

Cross the hot coals

In their early days, startups usually grow via word of mouth, luck and sheer hustle. The problem is that it likely won’t lead to sustainable growth; as the saying goes, what got you here won’t get you there. In between, there is typically a phase of uncertainty and missed results that Ross refers to as “the hot coals.”

Before the hot coals, predicting revenue is futile at best and oftentimes impossible. I, for one, remember being at a loss when an old-school investor asked me for five-year profit-and-loss projections when my now-defunct startup was nowhere near a stable money-making path. Not all seed investors expect this, so there was obviously a mismatch here, but the challenge is still the same for most founders: How do you bridge the gap between traditional projections and the reality of a startup?

#analytics, #business-intelligence, #chargebee, #chartmogul, #finance, #fishtown-analytics, #fivetran, #forecasting, #saas, #salesforce, #segment, #startups, #stitch, #tc, #y-combinator-alumni, #zuora


Orca Security raises $210M Series C at a unicorn valuation

Orca Security, an Israeli cybersecurity startup that offers an agent-less security platform for protecting cloud-based assets, today announced that it has raised a $210 million Series C round at a $1.2 billion valuation. The round was led by Alphabet’s independent growth fund CapitalG and Redpoint Ventures. Existing investors GGV Capital, ICONIQ Growth and angel syndicate Silicon Valley CISO Investment also participated. YL Ventures, which led Orca’s seed round and participated in previous rounds, is not participating in this round — and it’s worth noting that the firm recently sold its stake in Axonius after that company reached unicorn status.

If all of this sounds familiar, that may be because Orca only raised its $55 million Series B round in December, after it announced its $20.5 million Series A round in May. That’s a lot of funding rounds in a short amount of time, but something we’ve been seeing more often in the last year or so.

Orca Security co-founders Gil Geron (left) and Avi Shua (right). Image Credits: Orca Security

As Orca co-founder and CEO Avi Shua told me, the company is seeing impressive growth and it — and its investors — want to capitalize on this. The company ended last year beating its own forecast from a few months before, which he noted was already aggressive, by more than 50%. Its current slate of customers includes Robinhood, Databricks, Unity, Live Oak Bank, Lemonade and BeyondTrust.

“We are growing at an unprecedented speed,” Shua said. “We were 20-something people last year. We are now closer to a hundred and we are going to double that by the end of the year. And yes, we’re using this funding to accelerate on every front, from dramatically increasing the product organization to add more capabilities to our platform, for post-breach capabilities, for identity access management and many other areas. And, of course, to increase our go-to-market activities.”

Shua argues that most current cloud security tools don’t really work in this new environment. Many, because they are driven by metadata, can only detect a small fraction of the risks, and agent-based solutions may take months to deploy and still not cover a business’ entire cloud estate. The promise of Orca Security is that it can not only cover a company’s entire range of cloud assets but that it is also able to help security teams prioritize the risks they need to focus on. It does so by using what the company calls its “SideScanning” technology, which allows it to map out a company’s entire cloud environment and file systems.

“Almost all tools are essentially just looking at discrete risk trees and not the forest. The risk is not just about how pickable the lock is, it’s also where the lock resides and what’s inside the box. But most tools just look at the issues themselves and prioritize the most pickable lock, ignoring the business impact and exposure — and we change that.”

It’s no secret that there isn’t a lot of love lost between Orca and some of its competitors. Last year, Palo Alto Networks sent Orca Security a sternly worded letter (PDF) to stop it from comparing the two services. Shua was not amused at the time and decided to fight it. “I completely believe there is space in the markets for many vendors, and they’ve created a lot of great products. But I think the thing that simply cannot be overlooked, is a large company that simply tries to silence competition. This is something that I believe is counterproductive to the industry. It tries to harm competition, it’s illegal, it’s unconstitutional. You can’t use lawyers to take your competitors out of the media.”

Currently, though, it doesn’t look like Orca needs to worry too much about the competition. As GGV Capital managing partner Glenn Solomon told me, as the company continues to grow and bring in new customers — and learn from the data it pulls in from them — it is also able to improve its technology.

“Because of the novel technology that Avi and [Orca Security co-founder and CPO] Gil [Geron] have developed — and that Orca is now based on — they see so much. They’re just discovering more and more ways and have more and more plans to continue to expand the value that Orca is going to provide to customers. They sit in a very good spot to be able to continue to leverage information that they have and help DevOps teams and security teams really execute on good hygiene in every imaginable way going forward. I’m super excited about that future.”

As for this funding round, Shua noted that he found CapitalG to be a “huge believer” in this space and an investor that is looking to invest into the company for the long run (and not just trying to make a quick buck). The fact that CapitalG is associated with Alphabet was obviously also a draw.

“Being associated with Alphabet, which is one of the three major cloud providers, allowed us to strengthen the relationship, which is definitely a benefit for Orca,” he said. “During the evaluation, they essentially put Orca in front of the security leadership at Google. Definitely, they’ve done their own very deep due diligence as part of that.”

#alphabet, #analytics, #axonius, #capitalg, #cloud, #cybersecurity-startup, #enterprise, #ggv-capital, #identity-access-management, #orca-security, #recent-funding, #redpoint-ventures, #security, #startups, #tc, #yl-ventures


No-code business intelligence service y42 raises $2.9M seed round

Berlin-based y42 (formerly known as Datos Intelligence), a data warehouse-centric business intelligence service that promises to give businesses access to an enterprise-level data stack that’s as simple to use as a spreadsheet, today announced that it has raised a $2.9 million seed funding round led by La Famiglia VC. Additional investors include the co-founders of Foodspring, Personio and Petlab.

The service, which was founded in 2020, integrates with over 100 data sources, covering all the standard B2B SaaS tools from Airtable to Shopify and Zendesk, as well as database services like Google’s BigQuery. Users can then transform and visualize this data, orchestrate their data pipelines and trigger automated workflows based on this data (think sending Slack notifications when revenue drops or emailing customers based on your own custom criteria).
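The trigger mechanics described above can be sketched roughly like this. The metric names, thresholds and rule format are all hypothetical illustrations, not y42’s actual configuration:

```python
def check_triggers(metrics, rules):
    """Evaluate alert rules against fresh warehouse metrics and return the
    notifications that should fire.

    Hypothetical sketch of a "notify Slack when revenue drops" workflow:
    each rule is a (metric, threshold, message) tuple that fires when the
    metric's current value falls below the threshold.
    """
    notifications = []
    for metric, threshold, message in rules:
        value = metrics.get(metric)
        if value is not None and value < threshold:
            notifications.append(f"{message} ({metric}={value})")
    return notifications

rules = [
    ("daily_revenue", 50_000, "Revenue dropped below target"),
    ("active_users", 1_000, "Active users below target"),
]
metrics = {"daily_revenue": 42_000, "active_users": 1_500}
print(check_triggers(metrics, rules))
```

In a real deployment, the notification list would be handed off to a Slack webhook or an email sender rather than printed.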

Like similar startups, y42 extends the idea of the data warehouse, which was traditionally used for analytics, and helps businesses operationalize this data. At the core of the service is a lot of open-source software; the company, for example, contributes to GitLab’s Meltano platform for building data pipelines.

y42 founder and CEO Hung Dang.

“We’re taking the best of breed open-source software. What we really want to accomplish is to create a tool that is so easy to understand and that enables everyone to work with their data effectively,” y42 founder and CEO Hung Dang told me. “We’re extremely UX obsessed and I would describe us as a no-code/low-code BI tool — but with the power of an enterprise-level data stack and the simplicity of Google Sheets.”

Before y42, Vietnam-born Dang co-founded a major events company that operated in over 10 countries and made millions in revenue (but with very thin margins), all while finishing up his studies with a focus on business analytics. And that in turn led him to also found a second company that focused on B2B data analytics.

Image Credits: y42

Even while building his events company, he noted, he was always very product- and data-driven. “I was implementing data pipelines to collect customer feedback and merge it with operational data — and it was really a big pain at that time,” he said. “I was using tools like Tableau and Alteryx, and it was really hard to glue them together — and they were quite expensive. So out of that frustration, I decided to develop an internal tool that was actually quite usable and in 2016, I decided to turn it into an actual company.”

He then sold this company to a major publicly listed German company. An NDA prevents him from talking about the details of this transaction, but maybe you can draw some conclusions from the fact that he spent time at Eventim before founding y42.

Given his background, it’s maybe no surprise that y42’s focus is on making life easier for data engineers and, at the same time, putting the power of these platforms in the hands of business analysts. Dang noted that y42 typically provides some consulting work when it onboards new clients, but that’s mostly to give them a head start. Given the no-code/low-code nature of the product, most analysts are able to get started pretty quickly — and for more complex queries, customers can opt to drop down from the graphical interface to y42’s low-code level and write queries in the service’s SQL dialect.

The service itself runs on Google Cloud, and the 25-person team manages about 50,000 jobs per day for its clients. The company’s customers include the likes of LifeMD, Petlab and Everdrop.

Until raising this round, Dang self-funded the company and had also raised some money from angel investors. But La Famiglia felt like the right fit for y42, especially due to its focus on connecting startups with more traditional enterprise companies.

“When we first saw the product demo, it struck us how on top of analytical excellence, a lot of product development has gone into the y42 platform,” said Judith Dada, General Partner at LaFamiglia VC. “More and more work with data today means that data silos within organizations multiply, resulting in chaos or incorrect data. y42 is a powerful single source of truth for data experts and non-data experts alike. As former data scientists and analysts, we wish that we had y42 capabilities back then.”

Dang tells me he could have raised more but decided that he didn’t want to dilute the team’s stake too much at this point. “It’s a small round, but this round forces us to set up the right structure. For the Series A, which we plan to be towards the end of this year, we’re talking about a dimension which is 10x,” he told me.

#alteryx, #analytics, #berlin, #big-data, #business-intelligence, #business-software, #ceo, #cloud, #data, #data-analysis, #data-management, #data-warehouse, #enterprise, #general-partner, #information-technology, #judith-dada, #recent-funding, #shopify, #sql, #startups, #vietnam


It’s time to abandon business intelligence tools

Organizations spend ungodly amounts of money — millions of dollars — on business intelligence (BI) tools. Yet, adoption rates are still below 30%. Why is this the case? Because BI has failed businesses.

Logi Analytics’ 2021 State of Analytics: Why Users Demand Better survey showed that knowledge workers spend more than five hours a day in analytics, and more than 99% consider analytics very to extremely valuable when making critical decisions. Unfortunately, many are dissatisfied with their current tools due to the loss of productivity, multiple “sources of truth,” and the lack of integration with their current tools and systems.

A gap exists between the functionalities provided by current BI and data discovery tools and what users want and need.

Throughout my career, I’ve spoken with many executives who wonder why BI continues to fail them, especially when data discovery tools like Qlik and Tableau have gained such momentum. The reality is, these tools are great for a very limited set of use cases among a limited audience of users — and the adoption rates reflect that reality.

Data discovery applications allow analysts to link with data sources and perform self-service analysis, but they still come with major pitfalls. Lack of self-service customization, the inability to integrate into workflows with other applications, and an overall lack of flexibility seriously impact the ability of most users (who aren’t data analysts) to derive meaningful information from these tools.

BI platforms and data discovery applications are supposed to turn insight into action, informing decisions at every level of the organization. But many organizations are instead left with costly investments that actually create inefficiencies, hinder workflows and exclude the vast majority of employees who could benefit from those operational insights. Now that’s what I like to call a lack of ROI.

Business leaders across a variety of industries — including “legacy” sectors like manufacturing, healthcare and financial services — are demanding better and, in my opinion, they should have gotten it long ago.

It’s time to abandon BI — at least as we currently know it.

Here’s what I’ve learned over the years about why traditional BI platforms and newer tools like data discovery applications fail and what I’ve gathered from companies that moved away from them.

The inefficiency breakdown is killing your company

Traditional BI platforms and data discovery applications require users to exit their workflow to attempt data collection. And, as you can guess, stalling teams in the middle of their workflow creates massive inefficiencies. Instead of having the data you need to make a decision readily available, you have to exit the application, enter another application, secure the data and then reenter the original application.

According to the 2021 State of Analytics report, 99% of knowledge workers had to spend additional time searching for information they couldn’t easily locate in their analytics solution.

#analytics, #business-intelligence, #column, #ec-enterprise-applications, #ec-future-of-work, #enterprise, #information-technology, #real-time-data, #saas, #startups


Noogata raises $12M seed round for its no-code enterprise AI platform

Noogata, a startup that offers a no-code AI solution for enterprises, today announced that it has raised a $12 million seed round led by Team8, with participation from Skylake Capital. The company, which was founded in 2019 and counts Colgate and PepsiCo among its customers, currently focuses on e-commerce, retail and financial services, but it notes that it will use the new funding to power its product development and expand into new industries.

The company’s platform offers a collection of what are essentially pre-built AI building blocks that enterprises can then connect to third-party tools like their data warehouse, Salesforce, Stripe and other data sources. An e-commerce retailer could use this to optimize its pricing, for example, thanks to recommendations from the Noogata platform, while a brick-and-mortar retailer could use it to plan which assortment to allocate to a given location.

Image Credits: Noogata

“We believe data teams are at the epicenter of digital transformation and that to drive impact, they need to be able to unlock the value of data. They need access to relevant, continuous and explainable insights and predictions that are reliable and up-to-date,” said Noogata co-founder and CEO Assaf Egozi. “Noogata unlocks the value of data by providing contextual, business-focused blocks that integrate seamlessly into enterprise data environments to generate actionable insights, predictions and recommendations. This empowers users to go far beyond traditional business intelligence by leveraging AI in their self-serve analytics as well as in their data solutions.”

Image Credits: Noogata

We’ve obviously seen a plethora of startups in this space lately. The proliferation of data — and the advent of data warehousing — means that most businesses now have the fuel to create machine learning-based predictions. What’s often lacking, though, is the talent. There’s still a shortage of data scientists and developers who can build these models from scratch, so it’s no surprise that we’re seeing more startups that are creating no-code/low-code services in this space. The well-funded Abacus.ai, for example, targets about the same market as Noogata.

“Noogata is perfectly positioned to address the significant market need for a best-in-class, no-code data analytics platform to drive decision-making,” writes Team8 managing partner Yuval Shachar. “The innovative platform replaces the need for internal build, which is complex and costly, or the use of out-of-the-box vendor solutions which are limited. The company’s ability to unlock the value of data through AI is a game-changer. Add to that a stellar founding team, and there is no doubt in my mind that Noogata will be enormously successful.”

#analytics, #artificial-intelligence, #big-data, #business, #business-intelligence, #computing, #data-warehouse, #e-commerce, #enterprise, #machine-learning, #noogata, #recent-funding, #salesforce, #startups, #stripe, #team8


TigerGraph raises $105M Series C for its enterprise graph database

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. Using a SQL-like query language (GSQL), these customers can use the company’s services to store and quickly query their graph databases. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted either on AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI, and machine learning.

Like so many other companies in this space, TigerGraph is enjoying some tailwinds thanks to the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”

#amgen, #analytics, #articles, #artificial-intelligence, #aws, #business-intelligence, #ceo, #citrix, #citrix-systems, #computing, #data, #database, #enterprise, #graph-database, #intuit, #jaguar-land-rover, #machine-learning, #tiger-global


Twinco Capital scores €3M for its supply chain finance solution

Twinco Capital, a Madrid and Amsterdam-based startup making it easier to access supply chain finance, has raised €3 million in funding.

Leading the round is Spanish VC fund Mundi Ventures, with participation from previous backer Finch Capital and several unnamed angels. Twinco Capital also has a debt facility with the Spanish investment bank EBN Banco de Negocios, which is common for any type of lending company.

Founded in 2016 by Sandra Nolasco and Carmen Marin Romano, Twinco Capital offers a supply chain finance solution that includes purchase order funding. To do this, it integrates with large corporates on the purchase side and then funds suppliers by paying up to 60% of the purchase order value upfront and the remainder immediately upon delivery.

The entire process is digital, promising a quick decision and fast deployment of funds, and is powered by Twinco’s supply chain analytics and the data it is able to access by partnering with both sides of the supply chain.

“The financing of global supply chains is expensive and inefficient, the burden of the cost is mostly borne by the suppliers and in particular by those that are SMEs in emerging markets,” explains Twinco Capital co-founder and CEO Sandra Nolasco.

“Take any global supply chain, such as apparel, automotive, electronics etc. Exporters in countries like Bangladesh, China or Vietnam that have been supplying European companies for years, with stable commercial relationships. However, their creditworthiness is still measured only on the basis of annual financials, making access to competitive liquidity a major obstacle for growth”.

By having visibility on both sides, including upcoming orders, Twinco provides liquidity to the suppliers “from purchase order to final invoice payment”.

“We do that by analyzing supply chain data – the performance of the suppliers, the network effects between common suppliers and buyers (and many more data points I am not allowed to mention!),” says the Twinco CEO. “In short, using advanced data analytics we can better assess, price and significantly mitigate risk. The good news is that the more transactions we fund, the more suppliers and buyers we add, the more robust is our risk assessment. We believe there is a strong network effect”.

To that end, Twinco makes money by charging a “discount fee” for each purchase order it funds. “Since default rates are a fraction of that fee, we can unlock significant value,” says Nolasco.
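Working through the numbers in the model described above: the up-to-60% advance comes from the article, while the discount fee rate here is purely illustrative, not Twinco’s actual pricing:

```python
def fund_purchase_order(po_value, advance_rate=0.60, discount_fee_rate=0.02):
    """Split purchase-order financing into an upfront advance, a payment on
    delivery and the funder's discount fee.

    Per the article, the funder pays up to 60% of the PO value upfront and
    the remainder on delivery; the fee rate is an illustrative assumption.
    """
    fee = po_value * discount_fee_rate
    upfront = po_value * advance_rate
    on_delivery = po_value - upfront - fee
    return {"upfront": upfront, "on_delivery": on_delivery, "fee": fee}

# A hypothetical $100,000 purchase order.
print(fund_purchase_order(100_000))
```

The economics Nolasco describes follow directly: as long as defaults stay well below the discount fee, each funded order is profitable.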

Meanwhile, the fintech is also unlocking an asset class for investors and competes with local banks that are much more manual and don’t benefit from increased visibility via network effects. Nolasco says that to ensure interests are aligned, the company uses a portion of equity to also invest in the purchase orders it funds.

#analytics, #europe, #finch-capital, #fundings-exits, #mundi-ventures, #startups, #supply-chain, #tc, #twinco-capital


Iteratively raises $5.4M to help companies build data pipelines they can trust

As companies gather more data, ensuring that they can trust the quality of that data is becoming increasingly important. An analytics pipeline is only as good as the data it collects, after all, and messy data — or outright bugs — can easily lead to issues further down the line.

Seattle-based Iteratively wants to help businesses build data pipelines they can trust. The company today announced a $5.4 million seed funding round led by Google’s AI-centric Gradient Ventures fund. Fika Ventures and early Iteratively investor PSL Ventures also participated, with Gradient Ventures partner Zach Bratun-Glennon joining the company’s board.

Patrick Thompson, Iteratively’s co-founder and CEO, started working on Iteratively about two years ago. Before that, he worked at Atlassian and at Syncplicity, where he met his co-founder Ondrej Hrebicek. After getting started, the team spent six months doing customer discovery, and the theme they picked up on was that companies weren’t trusting the data they captured.

“We interviewed a ton of companies who built internal solutions, trying to solve this particular problem. We actually built one at Atlassian, as well, so I was very much intimately familiar with this pain. And so we decided to bring a product to market that really helps alleviate the pain,” he told me.

Image Credits: Iteratively

In a lot of companies, the data producers and data consumers don’t really talk to each other — and if they do, it’s often only through a spreadsheet or wiki. Iteratively aims to provide a collaborative environment to bring these different groups together and create a single source of truth for all stakeholders. “Typically, there’s a handoff process, either on a JIRA ticket or a Confluence page or spreadsheet, where they try to hand over these requirements — and generally, it’s never really implemented correctly, which then causes a lot of pain points down down the line,” Thompson explained.

Currently, Iteratively focuses on event streaming data for product and marketing analytics — the kind of data that typically flows into a Mixpanel, Amplitude or Segment. Iteratively itself sits at the origin of the data, say an app, and then validates the data and routes it to whatever third-party solution a company may use. That means the tool sits right where the data is generated, but this setup also means that none of the data ever flows through Iteratively’s own servers.

Image Credits: Iteratively

“We don’t actually see the data,” Thompson stressed. “We’re not a data set processor. We’re a wrapper over the top of your own analytics pipeline or your own third party SaaS tools, but we verify the payloads as they flow through our SDK on the client.”
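Client-side payload validation of the kind Thompson describes can be sketched as follows. The schema format and field names here are hypothetical illustrations, not Iteratively’s actual SDK:

```python
def validate_event(event, schema):
    """Check an analytics event against a tracking-plan schema before it is
    forwarded to a downstream tool. Returns a list of problems (empty = valid)."""
    errors = []
    for field, expected_type in schema.get("required", {}).items():
        if field not in event:
            errors.append(f"missing required field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return errors

schema = {"required": {"event_name": str, "user_id": str, "timestamp": int}}
good = {"event_name": "signup", "user_id": "u1", "timestamp": 1700000000}
bad = {"event_name": "signup", "timestamp": "not-an-int"}
print(validate_event(good, schema))
print(validate_event(bad, schema))
```

Because the check runs where the event is produced, invalid payloads can be caught before they ever reach Mixpanel, Amplitude or Segment, which is the crux of the "we don’t see the data" design.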

Over time, though, that may change, he acknowledged, and Iteratively may do some data processing as well, but likely with a focus on metadata and observability.

Since the company doesn’t actually process any of the data itself, it’s charging customers by seat and not based on how many events move through their pipelines, for example. That may obviously change over time as the company looks into doing some data processing on its side as well.

Currently, Iteratively has about 10 employees and plans to grow that to 20 by the end of the year. The company plans to hire across R&D, sales and marketing.

“Iteratively’s software has a unique approach to enabling company-wide collaboration and enforcing data quality,” said Gradient’s Bratun-Glennon. “Going forward, we believe that intelligent analytics and data-driven business decision making will differentiate successful companies and best-in-class products. Iteratively’s mission, product and team are poised to give each of their customers these capabilities.”

#analytics, #articles, #artificial-intelligence, #atlassian, #business-intelligence, #co-founder, #data, #data-processing, #developer, #fika-ventures, #gradient-ventures, #information-technology, #mixpanel, #seattle


Hightouch raises $2.1M to help businesses get more value from their data warehouses

Hightouch, a SaaS service that helps businesses sync their customer data across sales and marketing tools, is coming out of stealth and announcing a $2.1 million seed round. The round was led by Afore Capital and Slack Fund, with a number of angel investors also participating.

At its core, Hightouch, which participated in Y Combinator’s Summer 2019 batch, aims to solve the customer data integration problems that many businesses today face.

During their time at Segment, Hightouch co-founders Tejas Manohar and Josh Curl witnessed the rise of data warehouses like Snowflake, Google’s BigQuery and Amazon Redshift — that’s where a lot of Segment data ends up, after all. As businesses adopt data warehouses, they now have a central repository for all of their customer data. Typically, though, this information is then only used for analytics purposes. Together with former Bessemer Ventures investor Kashish Gupta, the team decided to see how they could innovate on top of this trend and help businesses activate all of this information.

Hightouch co-founders Kashish Gupta, Josh Curl and Tejas Manohar.

“What we found is that, with all the customer data inside of the data warehouse, it doesn’t make sense for it to just be used for analytics purposes — it also makes sense for these operational purposes like serving different business teams with the data they need to run things like marketing campaigns — or in product personalization,” Manohar told me. “That’s the angle that we’ve taken with Hightouch. It stems from us seeing the explosive growth of the data warehouse space, both in terms of technology advancements as well as like accessibility and adoption. […] Our goal is to be seen as the company that makes the warehouse not just for analytics but for these operational use cases.”

It helps that all of the big data warehousing platforms have standardized on SQL as their query language — and because the warehousing services have already solved the problem of ingesting all of this data, Hightouch doesn’t have to worry about this part of the tech stack either. And as Curl added, Snowflake and its competitors never quite went beyond serving the analytics use case either.

Image Credits: Hightouch

As for the product itself, Hightouch lets users create SQL queries and then send that data to different destinations — maybe a CRM system like Salesforce or a marketing platform like Marketo — after transforming it to the format that the destination platform expects.

Expert users can write their own SQL queries for this, but the team also built a graphical interface to help non-developers create their own queries. The core audience, though, is data teams — and they, too, will likely see value in the graphical user interface because it will speed up their workflows as well. “We want to empower the business user to access whatever models and aggregation the data user has done in the warehouse,” Gupta explained.
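The query-then-sync flow described here can be sketched roughly like this. The destination callback, field mapping and in-memory SQLite stand-in are hypothetical; a real setup would query Snowflake or BigQuery and call a CRM API:

```python
import sqlite3

def sync_query_to_destination(conn, sql, field_mapping, send):
    """Run a SQL query against the warehouse, rename columns to what the
    destination expects, and hand each row to a sender callback."""
    cur = conn.execute(sql)
    cols = [d[0] for d in cur.description]
    sent = 0
    for row in cur:
        record = {field_mapping.get(c, c): v for c, v in zip(cols, row)}
        send(record)
        sent += 1
    return sent

# In-memory stand-in for a warehouse table of customer data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, ltv REAL)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 120.0), ('b@example.com', 80.0)")

outbox = []  # stand-in for a CRM API client
n = sync_query_to_destination(
    conn,
    "SELECT email, ltv FROM users WHERE ltv > 100",
    {"email": "Email", "ltv": "LifetimeValue"},  # CRM-style field names
    outbox.append,
)
print(n, outbox)
```

The warehouse does the heavy lifting (the query), and the sync layer only reshapes and routes rows, which is why standardizing on SQL matters so much here.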

The company is agnostic to how and where its users want to operationalize their data, but the most common use cases right now focus on B2C companies, where marketing teams often use the data, as well as sales teams at B2B companies.

Image Credits: Hightouch

“It feels like there’s an emerging category here of tooling that’s being built on top of a data warehouse natively, rather than being a standard SaaS tool where it is its own data store and then you manage a secondary data store,” Curl said. “We have a class of things here that connect to a data warehouse and make use of that data for operational purposes. There’s no industry term for that yet, but we really believe that that’s the future of where data engineering is going. It’s about building off this centralized platform like Snowflake, BigQuery and things like that.”

“Warehouse-native,” Manohar suggested as a potential name here. We’ll see if it sticks.

Hightouch originally raised this round after participating in the Y Combinator demo day, but decided not to disclose it until it felt it had found the right product/market fit. Current customers include the likes of Retool, Proof, Stream and Abacus, in addition to a number of significantly larger companies the team isn't able to name publicly.

#afore-capital, #analytics, #articles, #big-data, #business-intelligence, #cloud, #computing, #data-management, #data-warehouse, #developer, #enterprise, #information, #personalization, #recent-funding, #slack-fund, #software-as-a-service, #startups, #tc

Microsoft launches Azure Purview, its new data governance service

As businesses gather, store and analyze an ever-increasing amount of data, tools for helping them discover, catalog, track and manage how that data is shared are also becoming increasingly important. With Azure Purview, Microsoft is launching a new data governance service into public preview today that brings together all of these capabilities in a new data catalog with discovery and data governance features.

As Rohan Kumar, Microsoft's corporate VP for Azure Data, told me, this has become a major pain point for enterprises. While they may be very excited about getting started with data-heavy technologies like predictive analytics, their data- and privacy-focused executives want to make sure that the data is used in a compliant way and that the company has obtained the right permissions to use its customers' data, for example.

In addition, companies also want to make sure that they can trust their data and know who has access to it and who made changes to it.

“[Purview] is a unified data governance platform which automates the discovery of data, cataloging of data, mapping of data, lineage tracking — with the intention of giving our customers a very good understanding of the breadth of the data estate that exists to begin with, and also to ensure that all these regulations that are there for compliance, like GDPR, CCPA, etc, are managed across an entire data estate in ways which enable you to make sure that they don’t violate any regulation,” Kumar explained.

At the core of Purview is its catalog that can pull in data from the usual suspects like Azure’s various data and storage services but also third-party data stores including Amazon’s S3 storage service and on-premises SQL Server. Over time, the company will add support for more data sources.

Kumar described this process as a ‘multi-semester investment,’ so the capabilities the company is rolling out today are only a small part of what’s on the overall roadmap already. With this first release today, the focus is on mapping a company’s data estate.

Image Credits: Microsoft

“Next [on the roadmap] is more of the governance policies,” Kumar said. “Imagine if you want to set things like ‘if there’s any PII data across any of my data stores, only this group of users has access to it.’ Today, setting up something like that is extremely complex and most likely you’ll get it wrong. That’ll be as simple as setting a policy inside of Purview.”
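
As a rough conceptual illustration of what such a policy could look like (a toy evaluator, not Purview's actual policy engine, with all names invented):

```python
# A matching policy denies access unless the user is in the allowed group.
POLICIES = [
    # (classification, allowed_group)
    ("PII", "privacy-team"),
]

def is_allowed(user_groups, asset_classifications):
    for classification, allowed_group in POLICIES:
        if classification in asset_classifications and allowed_group not in user_groups:
            return False
    return True

# A catalog entry that automated discovery tagged as containing PII:
asset = {"name": "sales.customers", "classifications": {"PII"}}
print(is_allowed({"analysts"}, asset["classifications"]))      # False
print(is_allowed({"privacy-team"}, asset["classifications"]))  # True
```

The hard part Purview aims to solve is not the rule itself but discovering and classifying the data estate so that a rule like this can be applied everywhere.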

In addition to launching Purview, the Azure team also today launched Azure Synapse, Microsoft’s next-generation data warehousing and analytics service, into general availability. The idea behind Synapse is to give enterprises — and their engineers and data scientists — a single platform that brings together data integration, warehousing and big data analytics.

“With Synapse, we have this one product that gives a completely no code experience for data engineers, as an example, to build out these [data] pipelines and collaborate very seamlessly with the data scientists who are building out machine learning models, or the business analysts who build out reports for things like Power BI.”

Among Microsoft’s marquee customers for the service, which Kumar described as one of the fastest-growing Azure services right now, are FedEx, Walgreens, Myntra and P&G.

“The insights we gain from continuous analysis help us optimize our network,” said Sriram Krishnasamy, senior vice president, strategic programs at FedEx Services. “So as FedEx moves critical high value shipments across the globe, we can often predict whether that delivery will be disrupted by weather or traffic and remediate that disruption by routing the delivery from another location.”

Image Credits: Microsoft

#analytics, #big-data, #business-intelligence, #cloud, #computing, #data, #data-management, #data-protection, #developer, #enterprise, #general-data-protection-regulation, #information, #microsoft, #rohan-kumar, #tc

Amazon S3 Storage Lens gives IT visibility into complex S3 usage

As your S3 storage requirements grow, it gets harder to understand exactly what you have, and this is especially true when your storage spans multiple regions. That has broad implications for administrators, who have been forced to build their own solutions to get that missing visibility. AWS changed that this week when it announced Amazon S3 Storage Lens, a new product for understanding highly complex S3 storage environments.

The tool provides analytics that help you understand what's happening across your S3 object storage installations and take action when needed. As the company described the new service in a blog post: “This is the first cloud storage analytics solution to give you organization-wide visibility into object storage, with point-in-time metrics and trend lines as well as actionable recommendations.”

Amazon S3 Storage Lens Console

Image Credits: Amazon

The idea is to present a set of 29 metrics in a dashboard that help you “discover anomalies, identify cost efficiencies and apply data protection best practices,” according to the company. IT administrators can get a view of their storage landscape and can drill down into specific instances when necessary, such as if there is a problem that requires attention. The product comes out of the box with a default dashboard, but admins can also create their own customized dashboards, and even export S3 Lens data to other Amazon tools.
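
The kind of organization-wide rollup Storage Lens automates can be sketched in miniature: aggregating per-bucket metrics across accounts and regions from an inventory. The inventory rows below are invented, and the real service derives its 29 metrics very differently:

```python
from collections import defaultdict

inventory = [
    {"account": "111", "region": "us-east-1", "bucket": "logs",    "bytes": 5_000, "objects": 50},
    {"account": "111", "region": "eu-west-1", "bucket": "backups", "bytes": 9_000, "objects": 3},
    {"account": "222", "region": "us-east-1", "bucket": "assets",  "bytes": 1_000, "objects": 200},
]

# Roll up per (account, region), the kind of org-wide view admins lacked.
totals = defaultdict(lambda: {"bytes": 0, "objects": 0})
for row in inventory:
    key = (row["account"], row["region"])
    totals[key]["bytes"] += row["bytes"]
    totals[key]["objects"] += row["objects"]

# Drill down: average object size can flag tiny-object overhead or cost issues.
for (account, region), m in sorted(totals.items()):
    avg = m["bytes"] / m["objects"]
    print(account, region, m["bytes"], round(avg, 1))
```

Before Storage Lens, producing even this simple cross-account table meant scripting against every bucket yourself.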

For companies with complex storage requirements, spanning thousands or even tens of thousands of S3 storage instances, that have had to kludge together their own ways of understanding what's happening across those systems, this provides a single view across it all.

S3 Storage Lens is now available in all AWS regions, according to the company.

#amazon-s3, #analytics, #aws, #cloud, #dashboards, #enterprise, #storage

Databricks launches SQL Analytics

AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build, the one that brings its concept of a ‘lake house’ to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with a data lake, where you store all of your data in its raw form. The data in the data lake, a concept that Databricks co-founder and CEO Ali Ghodsi has long championed, is typically only transformed when it gets used. That makes data lakes cheaper, but also a bit harder for users to handle.
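
The schema-on-read idea behind the lake half of the ‘lake house’ can be shown with a toy sketch (plain Python and invented records; Databricks does this with Spark and Delta Lake at vastly larger scale):

```python
import json

# Raw records land in the lake as-is, dirty rows included, which keeps
# storage cheap. Cleaning happens only at query time (schema-on-read).
raw_lake = [
    '{"user": "a", "amount": "12.50", "ts": "2020-11-01"}',
    '{"user": "b", "amount": "3.99",  "ts": "2020-11-02"}',
    '{"user": "a", "amount": "oops",  "ts": "2020-11-03"}',  # dirty record
]

def query_total_per_user(lake):
    """Transform at read time: parse, coerce types, drop bad rows."""
    totals = {}
    for line in lake:
        rec = json.loads(line)
        try:
            amount = float(rec["amount"])
        except ValueError:
            continue  # the raw data stays untouched; bad rows are skipped per query
        totals[rec["user"]] = totals.get(rec["user"], 0.0) + amount
    return totals

print(query_total_per_user(raw_lake))  # {'a': 12.5, 'b': 3.99}
```

A warehouse would instead reject or repair the "oops" row before it was ever stored; the lake house pitch is getting warehouse-grade query performance without that upfront cleanup step.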

Image Credits: Databricks

“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But ‘lake house’ caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That has always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer to enable the service to quickly complete queries. In addition, Delta Lake also provides auto-scaling endpoints to keep the query latency consistent, even under high loads.

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and Thoughtspot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.

Image Credits: Databricks

“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, Chief Product Officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It's essentially a stripped-down version of the code-heavy experience that Databricks users are familiar with. Unsurprisingly, SQL Analytics provides a more graphical experience, one that focuses on visualizations rather than Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

#ali-ghodsi, #analytics, #apache-spark, #artificial-intelligence, #business-intelligence, #cloud, #data-analysis, #data-lake, #data-management, #data-processing, #data-science, #data-warehouse, #databricks, #democrats, #enterprise, #fishtown-analytics, #fivetran, #information, #looker, #machine-learning, #python, #sql, #tableau, #talend

Rockset announces $40M Series B as data analytics solution gains momentum

Rockset, a cloud-native analytics company, announced a $40 million Series B investment today led by Sequoia with help from Greylock, the same two firms that financed its Series A. The startup has now raised a total of $61.5 million, according to the company.

As co-founder and CEO Venkat Venkataramani told me at the time of the Series A in 2018, there is a lot of manual work involved in getting data ready to use and it acts as a roadblock to getting to real insight. He hoped to change that with Rockset.

“We’re building out our service with innovative architecture and unique capabilities that allows full-featured fast SQL directly on raw data. And we’re offering this as a service. So developers and data scientists can go from useful data in any shape, any form to useful applications in a matter of minutes. And it would take months today,” he told me in 2018.

In fact, “Rockset automatically builds a converged index on any data — including structured, semi-structured, geographical and time series data — for high-performance search and analytics at scale,” the company explained.
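
To make the converged-index idea concrete, here is a toy sketch that builds both a search-style inverted index and an analytics-style columnar layout from the same raw documents. The data and structures are invented for illustration; Rockset's actual indexing is far more sophisticated:

```python
from collections import defaultdict

docs = [
    {"id": 1, "city": "Oslo",   "temp": 4},
    {"id": 2, "city": "Madrid", "temp": 21},
    {"id": 3, "city": "Oslo",   "temp": 6},
]

inverted = defaultdict(set)   # (field, value) -> doc ids, for point lookups
columns = defaultdict(list)   # field -> values, for scans and aggregates

# One pass over the raw documents populates both representations.
for doc in docs:
    for field, value in doc.items():
        inverted[(field, value)].add(doc["id"])
        columns[field].append(value)

print(sorted(inverted[("city", "Oslo")]))           # search: docs 1 and 3
print(sum(columns["temp"]) / len(columns["temp"]))  # analytics: average temp
```

Indexing every field both ways costs extra storage but means neither search queries nor aggregations require a pre-defined schema, which is the trade-off behind "fast SQL directly on raw data."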

It seems to be resonating with investors and customers alike, as the company raised a healthy B round and business is booming. Rockset supplied a few metrics to illustrate this. For starters, revenue grew 290% in the last quarter. While it didn't provide any baseline numbers for that percentage growth, it is obviously substantial.

In addition, the startup reports adding hundreds of new users, again not nailing down any specific numbers, and queries on the platform are up 313%. Without specifics, it’s hard to know what that means, but that seems like healthy growth for an early stage startup, especially in this economy.

Mike Vernal, a partner at Sequoia, sees a company helping to get data to work faster than other solutions, which require a lot of handling first. “Rockset, with its innovative new approach to indexing data, has quickly emerged as a true leader for real-time analytics in the cloud. I’m thrilled to partner with the company through its next phase of growth,” Vernal said in a statement.

The company was founded in 2016 by the creators of RocksDB. The startup had previously raised a $3 million seed round when they launched the company and the $18.5 million A round in 2018.

#analytics, #cloud, #cloud-native, #data, #enterprise, #fundings-exits, #greylock, #recent-funding, #rockset, #sequoia, #startups, #tc

SimilarWeb raises $120M for its AI-based market intelligence platform for sites and apps

Israeli startup SimilarWeb has made a name for itself with an AI-based platform that lets sites and apps track and understand traffic not just on their own sites, but those of its competitors. Now, it’s taking the next step in its growth. The startup has raised $120 million, funding it will use to continue expanding its platform both through acquisitions and investing in its own R&D, with a focus on providing more analytics services to larger enterprises alongside its current base of individuals and companies of all sizes that do business on the web.

Co-led by ION Crossover Partners and Viola Growth, the round doubles the total amount that the startup has raised to date to $240 million. Or Offer, SimilarWeb’s founder and CEO, said in an interview that it was not disclosing its valuation this time around except to say that his company is now “playing in the big pool.” It counts more than half of the Fortune 100 as customers, with Walmart, P&G, Adidas and Google, among them.

For some context, it hit an $800 million valuation in its last equity round, in 2017.

SimilarWeb’s technology competes with other analytics and market intelligence providers ranging from the likes of Nielsen and ComScore through to the Apptopias of the world in that, at its most basic level, it provides a dashboard to users that provides insights into where people are going on desktop and mobile. Where it differs, Offer said, is in how it gets to its information, and what else it’s doing in the process.

For starters, it focuses not just on how many people are visiting, but also on what is triggering the activity — the “why,” as it were. Using a host of AI tech such as machine learning algorithms and deep learning — like a lot of tech out of Israel, it's being built by people with deep expertise in this area — Offer says that SimilarWeb is crunching data from a number of different sources to extrapolate its insights.

He declined to give much detail on those sources, but told me that he cheered the arrival of privacy gates and cookie lists for helping ferret out, expose and sometimes eradicate some of the more nefarious “analytics” services out there. He said that SimilarWeb has not been affected at all by that swing toward more data protection, since it's not an analytics service, strictly speaking, and doesn't sniff data on sites in the same way. It's also exploring widening its data pool, he added:

“We are always thinking about what new signals we could use,” he said. “Maybe they will include CDNs. But it’s like Google with its rankings in search. It’s a never ending story to try to get the highest accuracy in the world.”

The global health pandemic has driven a huge amount of activity on the web this year, with people turning to sites and apps not just for leisure — something to do while staying indoors, to offset all the usual activities that have been cancelled — but for business, whether it be consumers using e-commerce services for shopping, or workers taking everything online and to the cloud to continue operating.

That has also seen a boost of business for all the various companies that help the wheels turn on that machine, SimilarWeb included.

“Consumer behavior is changing dramatically, and all companies need better visibility,” said Offer. “It started with toilet paper and hand sanitizer, then moved to desks and office chairs, but now it’s not just e-commerce but everything. Think about big banks, whose business was 70% offline and is now 70-80% online. Companies are building and undergoing a digital transformation.”

That in turn is driving more people to understand how well their web presence is working, he said, with the basic big question being: “What is my marketshare, and how does that compare to my competition? Everything is about digital visibility, especially in times of change.”

Like many other companies, SimilarWeb did see an initial dip in business, Offer said, and to that end the company has taken on some debt as part of Israel's Paycheck Protection Program, to help safeguard some jobs that would otherwise have been furloughed. But he added that most of its customers from before the pandemic are now back, along with customers from categories that hadn't been active much before, like automotive portals.

That change in customer composition is also opening some doors of opportunity for the company. Offer noted that in recent months, a lot of large enterprises — which might have previously used SimilarWeb’s technology indirectly, via a consultancy, for example — have been coming to the company direct.

“We’ve started a new advisory service [where] our own expert works with a big customer that might have more deep and complex questions about the behaviour we are observing. They are questions all big businesses have right now.” The service sounds like a partly educational effort, teaching companies that are not necessarily digital-first to be more proactive, and partly consulting.

New customer segments, and new priorities in the world of business, are two of the things that drove this round, say investors.

“SimilarWeb was always an incredible tool for any digital professional,” said Gili Iohan of ION Crossover Partners, in a statement. “But over the last few months it has become apparent that traffic intelligence — the unparalleled data and digital insight that SimilarWeb offers — is an absolute essential for any company that wants to win in the digital world.”

As for acquisitions, SimilarWeb has historically made these to accelerate its technical march. For example, in 2015 it acquired Quettra to move deeper into mobile analytics, and it acquired Swayy to move into content discovery insights (key for e-commerce intelligence). Offer would not go into much detail about what it has identified as a further target, but given how many companies are currently building tech in this area, there may be a case for some consolidation around bigger platforms that combine some of these features and functionality. Offer said the company is looking at “companies with great data and digital intelligence, with a good product. There are a lot of opportunities right now on the table.”

The company will also be doing some hiring, with the plan to be to add 200 more people globally by January (it has around 600 employees today).

“Since we joined the company three years ago, SimilarWeb has executed a strategic transformation from a general-purpose measurement platform to vertical-based solutions, which has significantly expanded its market opportunity and generated immense customer value,” said Harel Beit-On, Founder and General Partner at Viola Growth, in a statement. “With a stellar management team of accomplished executives, we believe this round positions the company to own the digital intelligence category, and capitalize on the acceleration of the digital era.”

#analytics, #artificial-intelligence, #ecommerce, #enterprise, #europe, #market-intelligence, #recent-funding, #similarweb, #startups, #web-traffic

Google Meet and other Google services go down (Updated)

Google's engineers aren't having a good day today. This afternoon, a number of Google services went offline or became barely reachable. These services include Google Meet, Drive, Docs, Analytics, Classroom and Calendar, for example.

While Google’s own status dashboards don’t show any issues, we’re seeing reports from around the world from people who aren’t able to reach any of these services. Best we can tell, these issues started around 6pm PT.

It’s unusual for this number of Google services to go down at once. Usually, it’s only a single service that is affected. This time around, however, it’s clearly a far broader issue.

We’ve reached out to Google and will update this post once we hear more about what happened.

Update (6:30pm PT): and we’re back. It looks like most Google services are now recovering.

#analytics, #calendar, #classroom, #companies, #docs, #drive, #gmail, #google, #google-meet, #san-francisco-bay-area, #tc, #websites, #x

Mailchimp launches new AI tools as it continues its transformation to marketing platform

Mailchimp may have started out as an easy-to-use newsletter tool, but that was almost 20 years ago. The company still does email, but at its core it is now a marketing automation platform for small businesses, one that also offers a website builder, basic online stores, digital ad support and analytics to make sense of it all. As before, though, the company's main goal is to make all these features easy for small business users to use.

Image Credits: Mailchimp

Today, Mailchimp, which has never taken outside funding, is taking the next step in its own transformation with the launch of a set of AI-based tools that give small businesses easy access to the same kind of capabilities that their larger competitors now use. That includes personalized product recommendations for shoppers and forecasting tools for behavioral targeting to see which users are most likely to buy something, for example. But there’s now also a new AI-backed tool to help business owners design their own visual asset (based in part on its acquisition of Sawa), as well as a tool to help them write better email subject lines.

There’s also a new tool that helps businesses choose the next best action. It looks at all of the data the service aggregates and gives users actionable recommendations for how to improve their email campaign performance.

Image Credits: Mailchimp

“The journey to get here started about four years ago,” Mailchimp’s founding CEO Ben Chestnut told me. “We were riding high. Email was doing amazing for us. And things look so good. And I had a choice, I felt I could sell the business and make a lot of money. I had some offers. Or I could just coast, honestly. I could just be a hero in email and keep it simple and just keep raking in the money. Or I could take on another really tough challenge, which would be act two of  Mailchimp. And I honestly didn’t know what that would be. To be honest with you, that was four years ago, it could have been anything really.”

But after talking to the team, including John Foreman, the head of data analytics at the time and now Mailchimp’s CPO, Chestnut put the company on this new path to go after the marketing automation space. In part, he told me, he did so because he noted that the email space was getting increasingly crowded. “You know how that ends. I mean, you can’t stay there forever with this many competitors. So I knew that we had to up our game,” he said.

And that meant going well beyond email and building numerous new products.

Image Credits: Mailchimp

“It was a huge transformation for us,” Chestnut acknowledged. “We had to get good at building for other customer segments at the time, like e-commerce customers and others. And that was new for us, too. It’s all kinds of new disciplines for us. To inflict that kind of change on your employees is very, very rough. I just can’t help but look back with gratitude that my employees were willing to go on this journey with me. And they actually had faith in me and this release — this fall release — is really the culmination of everything we’ve been working on for four years to me.”

One thing that helped was that Mailchimp already had e-commerce customers — and as Chestnut noted, they were pushing the system to its limits. Only a few years ago, the culture at Mailchimp regarded them as somewhat annoying, Chestnut admitted, because they were quite demanding, and they didn't make the company a lot of money, either. At the time, non-profits were Mailchimp's best customers, but they weren't pushing the technology to its limits.

Despite this transformation, Mailchimp hasn't made many acquisitions to accelerate the process. Chestnut argues that a lot of what it is doing — say, adding direct mail — is more or less an extension of what it was already good at. But it did make some small AI and ML acquisitions to bring the right expertise in-house, as well as two e-commerce acquisitions, including Lemonstand. Most recently, Mailchimp acquired Courier, a British magazine, newsletter and podcast, marking its first move into the print business.

With this new set of products and services, Mailchimp is now aiming to give small businesses access to the same capabilities the larger e-commerce players have long had, but without the complexity.

To build tools based on machine learning, one needs data — and that’s something Mailchimp already had.

“We’ve been doing marketing for decades,” Mailchimp CPO Foreman said. “And we have millions of small businesses on the platform. And so not only do we build all these tools ourselves, which allows us to integrate them from a visual design perspective — they’re not necessarily acquisitions — but we have this common data set from years and years of doing marketing across millions of businesses, billions of customers we’re talking to, and so we thought, how can we use intelligence — artificial intelligence, machine learning, etc. — to also sand down how all of these tools connect.”

Chestnut says he isn’t likely to put the company on a similar transformation anytime soon. “I really believe you can only take on one major transformation per decade,” he said. “And so you better pick the right one and you better invest it. We’re all in on this all-in-one marketing platform that’s e-commerce enabled. That is unique enough. And now what I’m trying to get my company to do is go deep.”

#advertising-tech, #analytics, #articles, #artificial-intelligence, #automation, #ben-chestnut, #business, #cloud-applications, #computing, #email, #mailchimp, #marketing-automation, #startups, #tc, #website-builder

New Zendesk dashboard delivers customer service data in real time

Zendesk has been offering customers the ability to track customer service statistics for some time, but it has always been a look back. Today, the company announced a new product called Explore Enterprise that lets customers capture that valuable info in real time, and share it with anyone in the organization, whether they have a Zendesk license or not.

While it has had Explore in place for a couple of years now, Jon Aniano, senior VP of product at Zendesk, says the new enterprise product is a response to growing customer data requirements. “We now have a way to deliver what we call Live Team Dashboards, which delivers real time analytics directly to Zendesk users,” Aniano told TechCrunch.

In the days before COVID that meant displaying these on big monitors throughout the customer service center. Today, as we deal with the pandemic, and customer service reps are just as likely to be working from home, it means giving management the tools they need to understand what’s happening in real time, a growing requirement for Zendesk customers as they scale, regardless of the pandemic.

“What we’ve found over the last few years is that our customers’ appetite for operational analytics is insatiable, and as customers grow, as customer service needs get more complex, the demands on a contact center operator or customer service team are higher and higher, and teams really need new sets of tools and new types of capabilities to meet what they’re trying to do in delivering customer service at scale in the world,” Aniano told TechCrunch.

One of the reasons for this is the shift from phone and email as the primary ways of accessing customer service to messaging tools like WhatsApp. “With the shift to messaging, there are new demands on contact centers to be able to handle real-time interactions at scale with their customers,” he said.

Meeting that kind of demand requires the real-time analytics Zendesk is providing with this announcement, arming managers with the data they need to put their customer service resources where they are needed most, in real time.

But Zendesk is also giving customers the ability to share these statistics with anyone in the company. “Users can share a dashboard or historical report with anybody in the company regardless of whether they have access to Zendesk. They can share it in Slack, or they can embed a dashboard anywhere where other people in the company would like to have access to those metrics,” Aniano explained.

The new service will be available starting on August 31st for $29 per user per month.

#analytics, #cloud, #customer-experience, #customer-service, #dashboards, #enterprise, #saas, #zendesk

Mode raises $33M to supercharge its analytics platform for data scientists

Data science is the name of the game these days for companies that want to improve their decision making by tapping the information they are already amassing in their apps and other systems. And today, a startup called Mode Analytics, which has built a platform incorporating machine learning, business intelligence and big data analytics to help data scientists fulfil that task, is announcing $33 million in funding to continue making its platform ever more sophisticated.

Most recently, for example, the company has started to introduce tools (including SQL and Python tutorials) for less technical users, specifically those in product teams, so that they can structure queries that data scientists can subsequently execute faster and with more complete responses — important for the many follow-up questions that arise when a business intelligence process has been run. Mode claims that its tools can help produce answers to data queries in minutes.
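
The handoff Mode describes, in which a data scientist writes a reusable query and a product teammate only supplies the inputs, might look something like this in miniature (sqlite as a stand-in warehouse; the table and parameter names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (day TEXT, plan TEXT, n INTEGER)")
conn.executemany("INSERT INTO signups VALUES (?, ?, ?)", [
    ("2020-09-01", "free", 40), ("2020-09-01", "pro", 7),
    ("2020-09-02", "free", 55), ("2020-09-02", "pro", 9),
])

# The data scientist's reusable, parameterized query:
QUERY = "SELECT SUM(n) FROM signups WHERE plan = ? AND day >= ?"

# A product teammate only fills in the parameters; the SQL runs unchanged.
(total,) = conn.execute(QUERY, ("pro", "2020-09-01")).fetchone()
print(total)  # 16
```

Parameterizing the query rather than copy-editing it is what keeps the less technical user's question answerable without the data scientist in the loop for every variation.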

This Series D is being led by SaaS specialist investor H.I.G. Growth Partners, with previous investors Valor Equity Partners, Foundation Capital, REV Venture Partners, and Switch Ventures all participating. Valor led Mode’s Series C in February 2019, while Foundation and REV respectively led its A and B rounds.

Mode is not disclosing its valuation, but co-founder and CEO Derek Steer confirmed in an interview that it was “absolutely” an up-round.

For some context, PitchBook notes that last year its valuation was $106 million. The company now has a customer list that it says covers 52% of the Forbes 500, including Anheuser Busch, Zillow, Lyft, Bloomberg, Capital One, VMware, and Conde Nast. It says that to date it has processed 830 million query runs and 170 million notebook cell runs for 300,000 users. (Pricing is based on a freemium model, with a free “Studio” tier and Business and Enterprise tiers priced based on size and use.)

Mode has been around since 2013, when it was co-founded by Steer, Benn Stancil (Mode’s current president) and Josh Ferguson (initially the CTO and now chief architect).

Steer said the impetus for the startup came out of gaps in the market that the three had found through years of experience at other companies.

Specifically, when all three were working together at Yammer (they were early employees and stayed on after the Microsoft acquisition), they were part of a larger team building custom data analytics tools for Yammer. At the time, Steer said, Yammer was paying $1 million per year to subscribe to Vertica (acquired by HP in 2011) to run that tooling.

They saw an opportunity to build a platform that could provide similar kinds of tools — encompassing things like SQL Editors, Notebooks, and reporting tools and dashboards — to a wider set of users.

“We and other companies like Facebook and Google were building analytics internally,” Steer recalled, “and we knew that the world wanted to work more like these tech companies. That’s why we started Mode.”

All the same, he added, “people were not exactly clear about what a data scientist even was.”

Indeed, Mode’s growth so far has mirrored the rise of data science overall — both the discipline itself and the business case for employing data scientists to figure out what is “going on” beyond the day to day, getting answers by tapping all the data that’s being amassed in the process of just doing business. That means Mode’s addressable market has also been growing.

But even as the trove of potential buyers of Mode’s products has been growing, so has the competition. There has been a big swing toward data science and big data analytics in the last several years, with a number of tech companies building tools to help those who are less technical “become data scientists” by introducing more intuitive interfaces like drag-and-drop features and natural language queries.

They include the likes of Sisense (which has been growing its analytics power with acquisitions like Periscope Data), Eigen (focusing on specific verticals like financial and legal queries), Looker (acquired by Google) and Tableau (acquired by Salesforce).

Mode’s approach up to now has been closer to that of another competitor, Alteryx, focusing on building tools that are still aimed primarily at helping data scientists themselves. You have any number of database tools on the market today, Steer noted, “Snowflake, Redshift, BigQuery, Databricks, take your pick.” The key now is in providing tools to those using those databases to do their work faster and better.

That pitch, and how well Mode executes on it, is what has won the company traction with both customers and investors.

“Mode goes beyond traditional Business Intelligence by making data faster, more flexible and more customized,” said Scott Hilleboe, MD, H.I.G. Growth Partners, in a statement. “The Mode data platform speeds up answers to complex business problems and makes the process more collaborative, so that everyone can build on the work of data analysts. We believe the company’s innovations in data analytics uniquely position it to take the lead in the Decision Science marketplace.”

Steer said that fundraising had been planned, well before the coronavirus outbreak, to start in February, which meant it was timed as badly as it could have been. Mode still raised what it wanted to in a couple of months — “a good raise by any standard,” he noted — even if it’s likely that the valuation suffered a bit in the process. “Pitching while the stock market is tanking was terrifying and not something I would repeat,” he added.

Given how many acquisitions there have been in this space, Steer confirmed that Mode too has been approached a number of times, but it’s staying put for now. (And no, he wouldn’t tell me who has been knocking, except to say that it’s large companies for whom analytics is an “adjacency” to bigger businesses, which is to say, the very large tech companies have approached Mode.)

“The reason we haven’t considered any acquisition offers is because there is just so much room,” Steer said. “I feel like this market is just getting started, and I would only consider an exit if I felt like we were handicapped by being on our own. But I think we have a lot more growing to do.”

#analytics, #apps, #big-data, #data-science, #enterprise, #mode-analytics, #recent-funding, #startups, #tc


Messenger tools can help you recover millions in lost revenue

We’ve all had annoyingly memorable experiences with websites — websites that invite you to subscribe to browser notifications or bombard you with pop-ups that ask for your email before you’ve even had a chance to look around. That’s no way to do customer service. Yet many brands still use these lead capture tactics, ones that often permanently turn off would-be customers.

The principle that underlies these tactics makes sense; brands want the chance to communicate with those visitors more personally on a channel like email. But a gap most brands never bridge is the one between how personal they want to get with a website visitor and how personal they are in their initial interaction with that visitor.

In my experience as a marketer, there are few better ways to bridge that gap than a thoughtful implementation of messenger tools, those chat bubbles many big brands use to offer real-time customer support.

Implementing this strategy alone has allowed me to help my clients recover millions of dollars in what would have been lost revenue — more than $5 million for a local dental practice I’ve worked with. Here’s how it works, starting with where to deploy it.

Picking candidate pages by observing user flow and bounce rates

When picking pages for where to deploy messenger tools, the one principle to keep in mind is that you don’t want to offer customer support to those who don’t need it. So every time I implement messenger tools, I think about four key customer segments:

  1. A recurring website visitor — potentially an existing customer.
  2. Website visitors who have no interest in the product or service.
  3. Website visitors who have feature-related questions.
  4. Website visitors who are on the fence about buying a product or service offering.

Sometimes it’s obvious which category a website visitor falls into; for instance, someone who clicks on your client login portal is obviously already a customer and someone who rapidly clicks off your site is obviously not interested in your offering. But for most other users, it’s a lot less clear. That’s where heat map software used in tandem with Google Analytics could be tremendously helpful in mapping user behavior to a profile.
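To make the segmentation concrete, here is a minimal sketch in Python of how a few basic analytics signals might be mapped to the four segments above. The signal names and thresholds are invented for illustration — in practice the cutoffs would come from your own heat-map and Google Analytics data.

```python
def classify_visitor(visit_count: int, seconds_on_site: float,
                     viewed_pricing: bool, viewed_docs: bool) -> int:
    """Map simple behavioral signals to one of the four segments above.

    All thresholds here are illustrative assumptions, not recommendations;
    tune them against your own analytics. Returns the segment number (1-4)
    matching the numbered list in the article.
    """
    if visit_count > 1:
        return 1  # recurring visitor -- potentially an existing customer
    if seconds_on_site < 10 and not (viewed_pricing or viewed_docs):
        return 2  # bounced almost immediately -- likely no interest
    if viewed_docs:
        return 3  # digging into feature pages -- has feature questions
    return 4      # lingering (e.g. on pricing) -- on the fence

# Example: a first-time visitor who spent a while on the docs pages
segment = classify_visitor(1, 95.0, viewed_pricing=False, viewed_docs=True)
```

A classifier like this is only as good as the signals feeding it, which is why the heat-map data matters: it tells you which pages those `viewed_*` flags should actually track.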

#analytics, #column, #customer-service, #ecommerce, #extra-crunch, #google-analytics, #growth-and-monetization, #growth-marketing, #live-chat, #messenger, #startups, #web-analytics, #windows-live-messenger


Google Cloud’s new BigQuery Omni will let developers query data in GCP, AWS and Azure

At its virtual Cloud Next ’20 event, Google today announced a number of updates to its cloud portfolio, but the public alpha launch of BigQuery Omni is probably the highlight of this year’s event. Powered by Google Cloud’s Anthos hybrid-cloud platform, BigQuery Omni allows developers to use the BigQuery engine to analyze data that sits in multiple clouds, including those of Google Cloud competitors like AWS and Microsoft Azure — though for now, the service only supports AWS, with Azure support coming later.

Using a unified interface, developers can analyze this data locally without having to move data sets between platforms.

“Our customers store petabytes of information in BigQuery, with the knowledge that it is safe and that it’s protected,” said Debanjan Saha, the GM and VP of Engineering for Data Analytics at Google Cloud, in a press conference ahead of today’s announcement. “A lot of our customers do many different types of analytics in BigQuery. For example, they use the built-in machine learning capabilities to run real-time analytics and predictive analytics. […] A lot of our customers who are very excited about using BigQuery in GCP are also asking, ‘how can they extend the use of BigQuery to other clouds?’ ”
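The “unified interface” point is that the SQL itself doesn’t change when the underlying storage sits in another cloud. The sketch below composes such a standard-SQL statement; the project, dataset, and table names are invented for illustration, and in practice you would submit the string through the BigQuery client or console rather than build it by hand.

```python
def build_omni_query(project: str, dataset: str, table: str) -> str:
    """Compose a standard-SQL query against a BigQuery dataset.

    With BigQuery Omni the dataset's underlying storage may live in
    another cloud (e.g. S3 on AWS), but the query is written exactly as
    it would be for GCP-resident data. All names are hypothetical.
    """
    return (
        f"SELECT region, COUNT(*) AS events\n"
        f"FROM `{project}.{dataset}.{table}`\n"
        f"GROUP BY region\n"
        f"ORDER BY events DESC"
    )

# The engine runs near the data, so nothing is copied between clouds.
sql = build_omni_query("my-project", "aws_logs", "clickstream")
```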

Image Credits: Google

Google has long said that it believes that multi-cloud is the future — something that most of its competitors would probably agree with, though they all would obviously like you to use their tools, even if the data sits in other clouds or is generated off-platform. After all, it’s in the tools and services that help businesses make use of all of this data that the different vendors can differentiate themselves from each other. Maybe it’s no surprise then, given Google Cloud’s expertise in data analytics, that BigQuery is now joining the multi-cloud fray.

“With BigQuery Omni customers get what they wanted,” Saha said. “They wanted to analyze their data no matter where the data sits and they get it today with BigQuery Omni.”

Image Credits: Google

He noted that Google Cloud believes that this will help enterprises break down