Spotify opens a second personalized playlist to sponsors, after ‘Discover Weekly’ in 2019

Spotify is opening up its personalized playlist, “On Repeat,” to advertising sponsorship. This playlist, launched in 2019 and featuring users’ favorite songs, is only the second personalized playlist on the music streaming service that’s being made available for sponsorship. Spotify’s flagship playlist, “Discover Weekly,” became the first in 2019.

The sponsorship is made possible through the company’s Sponsored Playlist ad product, which gives brands the ability to market to Spotify’s free users with audio, video and display ad messages across breaks, allowing the advertiser to own the experience “end-to-end,” the company says.

It also gives brands an opportunity to reach Spotify’s most engaged users.

When Spotify opened up “Discover Weekly” to sponsorship, for example, it noted that users who listened to this playlist streamed more than twice as much as those who didn’t. Similarly, “On Repeat” caters to Spotify’s more frequent users because of its focus on the tracks users have played most often.

Since the launch of “On Repeat” in September 2019, Spotify says the playlist has reached 12 billion streams globally. Fans have also spent over 750 million hours listening to the playlist, where artists like Bad Bunny, The Weeknd, and Ariana Grande have topped the list for “most repeated” listens.

Though Spotify today offers many of its owned-and-operated playlists for sponsorship, its personalized playlists have largely been off-limits, with the exception of “Discover Weekly.” These are highly valued properties, as Spotify directs users to stream collections powered by its algorithms, organized in the ever-expanding “Made for You” hub in its app. There, users can jump between “Discover Weekly” and other collections built around genre, artist, decade and theme, such as new releases, favorites and suggestions.

With the launch of sponsorship for “On Repeat,” brands across 30 global markets, including North America, Europe, Latin America and Asia-Pacific, will be able to own another of Spotify’s largest personalized properties for a time.

The first U.S. advertiser to take advantage of the sponsorship is TurboTax, which cited the personalization elements and user engagement with the playlist among the reasons the ad product made sense for the brand.

“Like music, taxes are not one size fits all. Every tax situation is unique and every individual’s needs are different,” said Cathleen Ryan, VP of Marketing for TurboTax, in a statement about the launch. “We’re using Spotify’s deep connection to its engaged listeners to get in front of consumers and show them that with TurboTax you can get the expertise you need on your terms. With Spotify, we’re able to get both reach and unique targeting that ensures the right audiences know about the tools, guidance and expertise that TurboTax offers,” she added.

#ad-technology, #adtech, #advertising, #advertising-tech, #brands, #media, #personalization, #playlist, #spotify, #streaming-music, #turbotax


Spotify adds three new types of personalized playlists with launch of ‘Spotify Mixes’

Spotify this morning announced it’s significantly expanding its selection of personalized playlists with the addition of three new categories of playlists under the heading of “Spotify Mixes.” This collection will include artist mixes, genre mixes, and decade mixes — meaning you’ll gain access to a sizable number of new mixes with easy-to-understand titles, like 2010s Mix, R&B Mix, Pop Mix, Drake Mix, Selena Gomez Mix, and so on — or whatever reflects your own tastes and interests.

The company says the idea for the Spotify Mixes was inspired by its Daily Mixes, launched in fall 2016.

The Daily Mixes had been one of the company’s first big expansions in personalization beyond its flagship playlist, Discover Weekly, as they introduced a large set of playlists that reflected users’ listening history. Today, Daily Mixes bring together your recent listens with other tracks to keep you engaged — and the new Spotify Mixes essentially do the same, as they’re populated with music you like plus “fresh tracks.” The difference is that the new mixes have clearer names and a more specific focus, in some cases.

The Spotify Mixes will be available to all users globally, including both Free users and Premium subscribers. At launch, you can find them within Search in the “Made for You” hub.

You’ll easily spot them, too, as Spotify has already populated its app with a selection of mixes in the top three rows of the “Made for You” hub. Here, you’ll find “Your Genre Mixes,” “Your Artist Mixes,” and “Your Decade Mixes” — each with a horizontally scrollable selection of mixes to get you started. Spotify says each mix category will be updated frequently and will always have several playlists available.

The new feature somewhat competes with a similar offering on Pandora, launched three years ago. The SiriusXM-owned music app had used its Music Genome technology to create personalized playlists across a number of attributes, including genre and mood. While not necessarily an apples-to-apples comparison, Pandora’s launch instantly expanded its users’ access to personalized playlists by the dozens. It’s actually a bit surprising that it took Spotify this long to offer a competitive response.

Spotify says the new playlists are rolling out today to global users.


#media, #music, #personalization, #playlists, #spotify


Voice recognition features return to TiVo through a partnership with Atlanta-based Pindrop

TiVo devices are getting new voice recognition capabilities thanks to a partnership with the Atlanta-based startup Pindrop, which is now offering its voice recognition and personalization technologies for consumer devices.

The new capabilities replace TiVo’s integration with Amazon’s Alexa voice service, which the company quietly discontinued last year.

TiVo made a big push with its Alexa integration a little over two years ago, but the switch to Pindrop’s services shows that there’s a robust market for voice-enabled services and that providers from other markets are moving in to compete on Amazon and Google’s home turf.

Through the integration with Pindrop’s services, TiVo owners will now be able to search for shows and control their devices using their voice. But Pindrop’s tech, which was initially developed as an anti-fraud technology for financial services firms and other large enterprise customers, goes beyond basic voice recognition.

Pindrop’s tech can tell the difference between different speakers, setting up opportunities for the personalization of programming with each user being able to call up their individual account for Netflix, Amazon or other services with simple voice commands.

“Beyond just understanding what was said, we want to understand the context of the situation to drive intelligent system behavior in the moment,” said Jon Heim, Senior Director of Product & Conversation Services at TiVo. “The ability to distinguish between different members of a household based on their voice is an example of this contextual awareness, enabling us to provide an unprecedented level of personalization through an experience tailored to that specific person.”

When different users say the “What should I watch?” prompt, TiVo devices can now pull up the personalized content each is most likely to want to watch. If another member of the household says the same command, the device will display different results.

The technology requires user opt-in, and while Pindrop’s tech can differentiate between speakers, the identity of the speaker is anonymized. 

It’s a service that Pindrop has already rolled out to eight of the ten largest banks in the U.S., according to Pindrop co-founder and chief executive Vijay Balasubramaniyan. And the foray into consumer devices through the TiVo partnership is just the beginning.

The company has also integrated with devices from SEI Robotics, a white-label manufacturer of Android devices.

Pindrop has plenty of cash in the bank to finance its push into the world of consumer devices. The company is profitable and is looking at an annual run rate just shy of $100 million, according to Balasubramaniyan.

For its next trick, the company intends to roll out its voice recognition service in cars and other networked consumer devices, according to Balasubramaniyan.

“[We’re] working with OEMs for auto… they’re in the proof-of-concept phase,” he said.

#amazon, #android, #atlanta, #bank, #computing, #digital-video-recorders, #director, #netflix, #operating-systems, #personalization, #pindrop, #pindrop-security, #speaker, #tc, #tivo, #united-states, #voice-recognition


Hightouch raises $2.1M to help businesses get more value from their data warehouses

Hightouch, a SaaS service that helps businesses sync their customer data across sales and marketing tools, is coming out of stealth and announcing a $2.1 million seed round. The round was led by Afore Capital and Slack Fund, with a number of angel investors also participating.

At its core, Hightouch, which participated in Y Combinator’s Summer 2019 batch, aims to solve the customer data integration problems that many businesses today face.

During their time at Segment, Hightouch co-founders Tejas Manohar and Josh Curl witnessed the rise of data warehouses like Snowflake, Google’s BigQuery and Amazon Redshift — that’s where a lot of Segment data ends up, after all. As businesses adopt data warehouses, they now have a central repository for all of their customer data. Typically, though, this information is then only used for analytics purposes. Together with former Bessemer Venture Partners investor Kashish Gupta, the team decided to see how they could innovate on top of this trend and help businesses activate all of this information.


Hightouch co-founders Kashish Gupta, Josh Curl and Tejas Manohar.

“What we found is that, with all the customer data inside of the data warehouse, it doesn’t make sense for it to just be used for analytics purposes — it also makes sense for these operational purposes like serving different business teams with the data they need to run things like marketing campaigns — or in product personalization,” Manohar told me. “That’s the angle that we’ve taken with Hightouch. It stems from us seeing the explosive growth of the data warehouse space, both in terms of technology advancements as well as accessibility and adoption. […] Our goal is to be seen as the company that makes the warehouse not just for analytics but for these operational use cases.”

It helps that all of the big data warehousing platforms have standardized on SQL as their query language — and because the warehousing services have already solved the problem of ingesting all of this data, Hightouch doesn’t have to worry about this part of the tech stack either. And as Curl added, Snowflake and its competitors never quite went beyond serving the analytics use case either.

Image Credits: Hightouch

As for the product itself, Hightouch lets users create SQL queries and then send that data to different destinations — maybe a CRM system like Salesforce or a marketing platform like Marketo — after transforming it to the format that the destination platform expects.

Expert users can write their own SQL queries for this, but the team also built a graphical interface to help non-developers create their own queries. The core audience, though, is data teams — and they, too, will likely see value in the graphical user interface because it will speed up their workflows as well. “We want to empower the business user to access whatever models and aggregation the data user has done in the warehouse,” Gupta explained.
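The flow described above, running a SQL query against the warehouse and reshaping each row for a downstream tool, can be sketched in a few lines. This is a minimal illustration of the pattern, not Hightouch’s actual implementation: the table, query and field names are all hypothetical, with an in-memory SQLite database standing in for a warehouse like Snowflake or BigQuery.

```python
import sqlite3

# Toy "warehouse": an in-memory SQLite table standing in for Snowflake,
# BigQuery or Redshift. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, plan TEXT, mrr REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("a@example.com", "pro", 99.0),
    ("b@example.com", "free", 0.0),
])

# Step 1: the user (or a graphical query builder) defines the audience as SQL.
query = "SELECT email, plan, mrr FROM users WHERE plan != 'free'"

def to_crm_payload(row):
    """Transform a warehouse row into the shape a CRM-style API expects."""
    email, plan, mrr = row
    return {"Email": email, "Plan__c": plan, "MRR__c": mrr}

# Step 2: run the query. Step 3: map each row to the destination's format.
payloads = [to_crm_payload(row) for row in conn.execute(query)]
print(payloads)
# In a real sync, each payload would then be upserted via the destination's API.
```

The key design point is that the warehouse stays the single source of truth; the sync layer only reads, reshapes and pushes.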

The company is agnostic to how and where its users want to operationalize their data, but the most common use cases right now focus on B2C companies, where marketing teams often use the data, as well as sales teams at B2B companies.

Image Credits: Hightouch

“It feels like there’s an emerging category here of tooling that’s being built on top of a data warehouse natively, rather than being a standard SaaS tool where it is its own data store and then you manage a secondary data store,” Curl said. “We have a class of things here that connect to a data warehouse and make use of that data for operational purposes. There’s no industry term for that yet, but we really believe that that’s the future of where data engineering is going. It’s about building off this centralized platform like Snowflake, BigQuery and things like that.”

“Warehouse-native,” Manohar suggested as a potential name here. We’ll see if it sticks.

Hightouch originally raised its round after its participation in the Y Combinator demo day but decided not to disclose it until it felt like it had found the right product/market fit. Current customers include the likes of Retool, Proof, Stream and Abacus, in addition to a number of significantly larger companies the team isn’t able to name publicly.

#afore-capital, #analytics, #articles, #big-data, #business-intelligence, #cloud, #computing, #data-management, #data-warehouse, #developer, #enterprise, #information, #personalization, #recent-funding, #slack-fund, #software-as-a-service, #startups, #tc


Boost ROI with intent data and personalized multichannel marketing campaigns

Coronavirus is causing large and small businesses to drastically cut marketing budgets. In Forrester’s self-described “most optimistic scenario,” the analysts project a 28% drop in U.S. marketing spend by the end of 2021. Even Google is cutting its marketing budget in half. As marketers move forward, Forrester predicts marketing automation platforms will grow despite an overall decline in marketing technology investment.

Automation platforms help marketers scale their communications. However, scaling communications is not a substitute for intimacy, which all humans crave. Because of the pandemic, it is harder than ever to get attention, let alone make a connection. More mass email blasts from your marketing automation platform are not going to get you the connections with prospects you crave. So how should marketers proceed? Direct mail captures 100% of your audience’s attention. It provides a sensory experience for your prospects and customers, and that helps establish an emotional connection.

Winning marketers are strategically merging automation and digital data with the more intimate channel of direct mail. We call this tactile marketing automation (TMA).

TMA is the integration of direct mail or personalized swag with a marketing automation platform. With TMA, a marketer doesn’t have to think about creating direct mail campaigns outside of digital campaigns. Rather, direct mail experiences are already fully integrated into the pre-built customer journey.

TMA uses intent data to inform content, messaging and the timing of direct mail touchpoints that maximize relevancy and scalability. Multichannel campaigns including direct mail report an ROI 18 percentage points higher than those without direct mail. Plus, 84% of marketers state direct mail improves multichannel campaign performance.

Read on to see how you can merge digital communications and direct mail to deliver remarkable experiences that spark a connection.

Incorporate intent data

Personalization is a key ingredient of a remarkable experience. Many marketers automate processes by introducing marketing software and then call it personalization, but oftentimes it’s just quicker batching and blasting. Brands can’t just change the first name on a piece of content and call it “personalized.” Real personalization is necessary for real results, and consumers expect more. The best way to introduce it within a marketing mix is to use intent data and trigger-driven campaigns.

#digital-marketing, #direct-marketing, #ecommerce, #email-marketing, #marketing, #marketing-automation, #online-advertising, #personalization, #startups


Google’s personalized audio news feature, Your News Update, comes to Google Podcasts

Last year, Google launched a personalized playlist of audio news called “Your News Update” for Google Assistant. The feature leverages machine learning techniques to understand the news content and how it relates to the listener’s own likes and interests. Today, the company says it’s publishing this personalized audio experience on Google Podcasts, allowing it to reach millions of podcast listeners in the U.S.

To subscribe to Your News Update, users can launch the Google Podcasts app, navigate to the Explore tab, then subscribe to listen to a mix of stories reflecting their interests, location, user history and other preferences.


Your News Update was designed to be a smarter alternative to Alexa’s popular Flash Briefing. Today, Alexa users can customize their own Flash Briefing by adding sources and other content from any of the more than 10,000 Flash Briefing skills now available. However, this takes work on the end user’s part.

Google’s Your News Update instead relies on algorithms to do the heavy lifting, based on Google’s understanding of your interests.

This personalization takes into account data you’ve explicitly provided Google — like the topics, sources, and locations you’ve said you’re interested in following. In addition, it allows Google to use the data the company has gleaned from your use of other Google products to further personalize the news to your own interests, unless you’ve gone to your account settings page to turn personalization off.

Via a link in Google Assistant underneath the Your News Update feature, Google directs you to a website where it further explains how its news algorithms work. Here, Google explains the news algorithms don’t “try” to personalize results based on your political beliefs or other demographic factors.

However, Google will know and learn from activity like if you follow or hide specific publishers on Google News or Discover, if you follow particular topics, or if you directed Google to show you similar articles more or less frequently.

In other words, Google combines the wealth of information it knows about you with its knowledge of other efforts you’ve made to customize your news to your liking in order to craft Your News Update.

In some cases, this can be useful. You can get updates on your favorite sports team or your hometown news, for instance. There may not be a stated intention of directing someone toward left- or right-leaning sources, but it could certainly end up that way based on how this personalization technology works and Google’s publisher lineup, which includes both left-of-center and right-of-center sources.

Image Credits: Google

In addition, Google said at launch that the personalization will get better the more you use the feature on Google Assistant, as it will learn from how you engage with the product.

Alongside the podcast update, Google is also making it easier to listen to local stories via Google Assistant. Users can say, “Hey Google, play local news” or “Hey Google, play news about [your city],” to hear a mix of native audio and text-to-speech local news stories. How you engage with this product can inform the choices made by Your News Update, as well.


#google, #media, #news, #personalization


Five ways to bring a UX lens to your AI project

As AI and machine-learning tools become more pervasive and accessible, product and engineering teams across all types of organizations are developing innovative, AI-powered products and features. AI is particularly well-suited for pattern recognition, prediction and forecasting, and the personalization of user experience, all of which are common in organizations that deal with data.

A precursor to applying AI is data — lots and lots of it! Large data sets are generally required to train an AI model, and any organization that has large data sets will no doubt face challenges that AI can help solve. Alternatively, data collection may be “phase one” of AI product development if data sets don’t yet exist.

Whatever data sets you’re planning to use, it’s highly likely that people were involved in either the capture of that data or will be engaging with your AI feature in some way. Principles for UX design and data visualization should be an early consideration at data capture, and/or in the presentation of data to users.

1. Consider the user experience early

Understanding how users will engage with your AI product at the start of model development can help to put useful guardrails on your AI project and ensure the team is focused on a shared end goal.

If we take the “Recommended for You” section of a movie streaming service, for example, outlining what the user will see in this feature before kicking off data analysis will allow the team to focus only on model outputs that will add value. So if your user research determined that the movie title, image, actors and length will be valuable information for the user to see in the recommendation, the engineering team would have important context when deciding which data sets should train the model. Actor and movie-length data would be key to ensuring recommendations are accurate.

The user experience can be broken down into three parts:

  • Before — What is the user trying to achieve? How does the user arrive at this experience? Where do they go? What should they expect?
  • During — What should they see to orient themselves? Is it clear what to do next? How are they guided through errors?
  • After — Did the user achieve their goal? Is there a clear “end” to the experience? What are the follow-up steps (if any)?

Knowing what a user should see before, during and after interacting with your model will ensure the engineering team is training the AI model on accurate data from the start, as well as providing an output that is most useful to users.

2. Be transparent about how you’re using data

Will your users know what is happening to the data you’re collecting from them, and why you need it? Would your users need to read pages of your T&Cs to get a hint? Think about adding the rationale into the product itself. A simple “this data will allow us to recommend better content” could remove friction points from the user experience, and add a layer of transparency to the experience.

When users reach out for support from a counselor at The Trevor Project, we make it clear that the information we ask for before connecting them with a counselor will be used to give them better support.

If your model presents outputs to users, go a step further and explain how your model came to its conclusion. Google’s “Why this ad?” option gives you insight into what drives the search results you see. It also lets you disable ad personalization completely, allowing the user to control how their personal information is used. Explaining how your model works or its level of accuracy can increase trust in your user base, and empower users to decide on their own terms whether to engage with the result. Low accuracy levels could also be used as a prompt to collect additional insights from users to improve your model.

3. Collect user insights on how your model performs

Prompting users to give feedback on their experience allows the product team to make ongoing improvements to the user experience over time. When thinking about feedback collection, consider how the AI engineering team could benefit from ongoing user feedback, too. Sometimes humans can spot obvious errors that AI wouldn’t, and your user base is made up exclusively of humans!

One example of user feedback collection in action is when Google identifies an email as dangerous, but allows the user to use their own logic to flag the email as “Safe.” This ongoing, manual user correction allows the model to continuously learn what dangerous messaging looks like over time.

Image Credits: Google

If your user base also has the contextual knowledge to explain why the AI is incorrect, this context could be crucial to improving the model. If a user notices an anomaly in the results returned by the AI, think of how you could include a way for the user to easily report the anomaly. What question(s) could you ask a user to garner key insights for the engineering team, and to provide useful signals to improve the model? Engineering teams and UX designers can work together during model development to plan for feedback collection early on and set the model up for ongoing iterative improvement.
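The correction loop described in this section can be sketched as a toy example. This is not Gmail’s actual system; the keyword heuristic, the sender allowlist and the sample message are all illustrative assumptions, showing only the general shape of a model whose output is overridden, and improved, by explicit user feedback.

```python
# Toy feedback loop: a naive "dangerous email" flagger that learns from
# user corrections, in the spirit of the flag-as-"Safe" example above.
# The scoring rule and word list are illustrative, not any real system.

SUSPICIOUS_WORDS = {"wire", "urgent", "password"}

def classify(email_text, safe_senders, sender):
    """Return 'safe' or 'dangerous' for one message."""
    if sender in safe_senders:
        return "safe"  # an explicit user correction overrides the heuristic
    words = set(email_text.lower().split())
    return "dangerous" if words & SUSPICIOUS_WORDS else "safe"

safe_senders = set()
sender, body = "billing@example.com", "urgent password reset required"

first = classify(body, safe_senders, sender)   # heuristic flags the message
safe_senders.add(sender)                       # user clicks "Safe"
second = classify(body, safe_senders, sender)  # correction now applied
print(first, second)  # dangerous safe
```

In a production model the corrections would feed back as training labels rather than a hard allowlist, but the feedback-capture point is the same.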

4. Evaluate accessibility when collecting user data

Accessibility issues result in skewed data collection, and AI that is trained on exclusionary data sets can create AI bias. For instance, facial recognition algorithms that were trained on a data set consisting mostly of white male faces will perform poorly for anyone who is not white or male. For organizations like The Trevor Project that directly support LGBTQ youth, including considerations for sexual orientation and gender identity is extremely important. Looking for inclusive data sets externally is just as important as ensuring the data you bring to the table, or intend to collect, is inclusive.

When collecting user data, consider the platform your users will leverage to interact with your AI, and how you could make it more accessible. If your platform requires payment, does not meet accessibility guidelines or has a particularly cumbersome user experience, you will receive fewer signals from those who cannot afford the subscription, have accessibility needs or are less tech-savvy.

Every product leader and AI engineer has the ability to ensure marginalized and underrepresented groups in society can access the products they’re building. Understanding who you are unconsciously excluding from your data set is the first step in building more inclusive AI products.

5. Consider how you will measure fairness at the start of model development

Fairness goes hand-in-hand with ensuring your training data is inclusive. Measuring fairness in a model requires you to understand how your model may be less fair in certain use cases. For models using people data, looking at how the model performs across different demographics can be a good start. However, if your data set does not include demographic information, this type of fairness analysis could be impossible.

When designing your model, think about how the output could be skewed by your data, or how it could underserve certain people. Ensure the data sets you use to train, and the data you’re collecting from users, are rich enough to measure fairness. Consider how you will monitor fairness as part of regular model maintenance. Set a fairness threshold, and create a plan for how you would adjust or retrain the model if it becomes less fair over time.
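One minimal way to operationalize such a check, assuming labeled outcomes and a demographic attribute are available, is to compute accuracy per group and compare the gap against a chosen threshold. The records and the 0.10 threshold below are illustrative assumptions, not a recommended standard, and real fairness audits use richer metrics than a single accuracy gap.

```python
# Sketch of a simple group-fairness check: compare model accuracy across
# demographic groups and flag when the gap exceeds a chosen threshold.

def accuracy_by_group(records):
    """records: (group, predicted, actual) tuples -> {group: accuracy}."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical evaluation results for two demographic groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

acc = accuracy_by_group(records)
gap = max(acc.values()) - min(acc.values())
FAIRNESS_THRESHOLD = 0.10  # illustrative; set per product and risk level
print(acc, gap > FAIRNESS_THRESHOLD)
```

Running a check like this as part of regular model maintenance turns "monitor fairness" from an intention into a concrete, automatable step.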

As a new or seasoned technology worker developing AI-powered tools, it’s never too early or too late to consider how your tools are perceived by and impact your users. AI technology has the potential to reach millions of users at scale and can be applied in high-stakes use cases. Considering the user experience holistically — including how the AI output will impact people — is not only best practice but can be an ethical necessity.

#artificial-intelligence, #column, #cybernetics, #design, #developer, #machine-learning, #personalization, #startups, #tc, #user-experience, #user-interfaces, #ux
