Apple Music is using Shazam to solve the streaming industry’s problem with DJ mixes

Apple Music announced today that it’s created a process to properly identify and compensate all of the individual creators involved in making a DJ mix. Using technology from the audio-recognition app Shazam, which Apple acquired in 2018 for $400 million, Apple Music is working with major and independent labels to devise a fair way to divide streaming royalties among DJs, labels, and artists who appear in the mixes. This is intended to help DJ mixes retain long-term monetary value for all creators involved, making sure that musicians get paid for their work even when other artists iterate on it. And, as one of Apple’s first major integrations of Shazam’s technology, it suggests the company saw value in the acquisition beyond a standalone song-identification app.

Historically, it’s been difficult for DJs to stream mixes online, since live streaming platforms like YouTube or Twitch might flag the use of other artists’ songs as copyright infringement. Artists are entitled to royalties when their song is played by a DJ during a live set, but dance music further complicates this, since small samples from various songs can be edited and mixed together into something unrecognizable.

Apple Music already hosts thousands of mixes, including sets from Tomorrowland’s digital festivals from 2020 and 2021, but only now is it formally announcing the tech that enables it to do this, even though Billboard noted it in June. As part of this announcement, Studio !K7’s DJ-Kicks archive of mixes will begin to roll out on the service, giving fans access to mixes that haven’t been on the market in over 15 years.

“Apple Music is the first platform that offers continuous mixes where there’s a fair fee involved for the artists whose tracks are included in the mixes and for the artist making those mixes. It’s a step in the right direction where everyone gets treated fairly,” DJ Charlotte de Witte said in a statement shared by Apple. “I’m beyond excited to have the chance to provide online mixes again.”


For dance music fans, the ability to stream DJ mixes is groundbreaking, and it can help Apple Music compete with Spotify, which leads the industry in paid subscribers and is challenging Apple’s hold on podcasting. Even as Apple Music has introduced lossless audio, spatial audio, and classical music acquisitions, the company hasn’t yet outpaced Spotify, though DJ mixes give it yet another distinctive music feature.

Still, Apple Music’s dive into the DJ royalties conundrum doesn’t necessarily address the broader crisis facing live musicians and DJs trying to survive a pandemic.

Though platforms like Mixcloud allow DJs to stream sets and monetize using pre-licensed music, Apple Music’s DJ mixes will not include user-generated content. MIDiA Research, in partnership with Audible Magic, found that user-generated content (UGC) — online content that uses music, whether it’s a lip-sync TikTok or a SoundCloud DJ mix — could be a music industry goldmine worth over $6 billion in the next two years. But Apple is not yet investing in UGC, as individuals cannot yet upload their personal mixes to stream on the platform like they might on SoundCloud. According to a Billboard report from June, Apple Music will only host mixes after the streamer has identified 70% of the combined tracks.
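Billboard’s reported 70 percent figure suggests a simple eligibility-and-split model. The sketch below is purely illustrative: applying the threshold to identified runtime and dividing royalties by airtime share are both assumptions, since Apple hasn’t published how it actually measures identification or calculates payouts.

```python
# Illustrative sketch only: Apple hasn't published its identification metric
# or royalty formula. Here the 70% threshold is applied to identified runtime
# and the payout is split by each matched track's share of airtime.
from dataclasses import dataclass

@dataclass
class MatchedSegment:
    track_id: str    # identifier returned by Shazam-style audio matching
    seconds: float   # how long the track is audible in the mix

def mix_is_eligible(matched: list[MatchedSegment], mix_seconds: float,
                    threshold: float = 0.70) -> bool:
    """True if at least `threshold` of the mix's runtime has been identified."""
    identified = sum(seg.seconds for seg in matched)
    return identified / mix_seconds >= threshold

def split_royalties(matched: list[MatchedSegment], payout: float) -> dict[str, float]:
    """Divide a payout across identified tracks in proportion to airtime."""
    total = sum(seg.seconds for seg in matched)
    return {seg.track_id: payout * seg.seconds / total for seg in matched}

# A 60-minute mix with 45 minutes identified clears the 70% bar.
segments = [MatchedSegment("track_a", 1500), MatchedSegment("track_b", 1200)]
print(mix_is_eligible(segments, mix_seconds=3600))  # True (75% identified)
print(split_royalties(segments, payout=0.01))       # airtime-weighted shares
```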

Apple Music didn’t respond to questions about how exactly royalties will be divided, but this is only a small step in reimagining how musicians will make a living in a digital landscape.

While these innovations help get artists compensated, streaming royalties only account for a small percentage of how musicians make money — Apple pays musicians one cent per stream, while competitors like Spotify pay only fractions of cents. This led the Union of Musicians and Allied Workers (UMAW) to launch a campaign in March called Justice at Spotify, which demands a one-cent-per-stream payout that matches Apple’s. But live events remain a musician’s bread and butter, especially given platforms’ paltry streaming payouts — of course, the pandemic hasn’t been conducive to touring. To add insult to injury, the Association for Electronic Music estimated in 2016 that dance music producers missed out on $120 million in royalties from their work being used without attribution in live performances.
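A quick back-of-the-envelope comparison shows why the per-stream rate matters. Apple’s one-cent figure is from the article; the Spotify rate used below is an assumed illustrative “fraction of a cent,” not a reported number.

```python
# Back-of-the-envelope payout comparison. Apple's $0.01/stream is from the
# article; the Spotify figure is an assumed "fraction of a cent" for illustration.
APPLE_PER_STREAM = 0.01
SPOTIFY_PER_STREAM = 0.004  # assumption, not a reported figure

streams = 1_000_000
print(f"Apple Music:        ${streams * APPLE_PER_STREAM:>8,.0f}")    # $10,000
print(f"Spotify (assumed):  ${streams * SPOTIFY_PER_STREAM:>8,.0f}")  # $4,000
```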

#apple, #apple-inc, #apple-music, #apps, #artist, #audible, #audible-magic, #billboard, #computing, #disc-jockey, #entertainment, #media, #mixcloud, #music-industry, #online-content, #operating-systems, #shazam, #soundcloud, #spotify, #streaming, #streaming-media, #technology, #twitch

Tech giants still aren’t coming clean about COVID-19 disinformation, says EU

European Union lawmakers have asked tech giants to continue reporting on efforts to combat the spread of vaccine disinformation on their platforms for a further six months.

“The continuation of the monitoring programme is necessary as the vaccination campaigns throughout the EU is proceeding with a steady and increasing pace, and the upcoming months will be decisive to reach a high level of vaccination in Member States. It is key that in this important period vaccine hesitancy is not fuelled by harmful disinformation,” the Commission writes today.

Facebook, Google, Microsoft, TikTok and Twitter are signed up to make monthly reports as a result of being participants in the bloc’s (non-legally binding) Code of Practice on Disinformation — although, going forward, they’ll be switching to bi-monthly reporting.

Publishing the latest batch of platform reports for April, the Commission said the tech giants have shown they’re unable to police “dangerous lies” by themselves — while continuing to express dissatisfaction with the quality and granularity of the data that is being (voluntarily) provided by platforms vis-à-vis how they’re combating online disinformation generally.

“These reports show how important it is to be able to effectively monitor the measures put in place by the platforms to reduce disinformation,” said Věra Jourová, the EU’s VP for values and transparency, in a statement. “We decided to extend this programme, because the amount of dangerous lies continues to flood our information space and because it will inform the creation of the new generation Code against disinformation. We need a robust monitoring programme, and clearer indicators to measure impact of actions taken by platforms. They simply cannot police themselves alone.”

Last month the Commission announced a plan to beef up the voluntary Code, saying also that it wants more players — especially from the adtech ecosystem — to sign up to help demonetize harmful nonsense.

The Code of Practice initiative pre-dates the pandemic, kicking off in 2018 when concerns about the impact of ‘fake news’ on democratic processes and public debate were riding high in the wake of major political disinformation scandals. But the COVID-19 public health crisis accelerated concern over the issue of dangerous nonsense being amplified online, bringing it into sharper focus for lawmakers.

In the EU, lawmakers are still not planning to put regional regulation of online disinformation on a legal footing, preferring to continue with a voluntary — and what the Commission refers to as ‘co-regulatory’ — approach. That approach encourages action and engagement from platforms vis-à-vis potentially harmful (but not illegal) content, such as offering tools for users to report problems and appeal takedowns, but carries no threat of direct legal sanctions if platforms fail to live up to their promises.

It will have a new lever to ratchet up pressure on platforms too, though, in the form of the Digital Services Act (DSA). The regulation — which was proposed at the end of last year  — will set rules for how platforms must handle illegal content. But commissioners have suggested that those platforms which engage positively with the EU’s disinformation Code are likely to be looked upon more favorably by the regulators that will be overseeing DSA compliance.

In another statement today, Thierry Breton, the commissioner for the EU’s Internal Market, suggested the combination of the DSA and the beefed up Code will open up “a new chapter in countering disinformation in the EU”.

“At this crucial phase of the vaccination campaign, I expect platforms to step up their efforts and deliver the strengthened Code of Practice as soon as possible, in line with our Guidance,” he added.

Disinformation remains a tricky topic for regulators, given that the value of online content can be highly subjective and any centralized order to remove information — no matter how stupid or ridiculous the content in question might be — risks a charge of censorship.

Removal of COVID-19-related disinformation is certainly less controversial, given clear risks to public health (such as from anti-vaccination messaging or the sale of defective PPE). But even here the Commission seems most keen to promote pro-speech measures being taken by platforms — such as to promote vaccine positive messaging and surface authoritative sources of information — noting in its press release how Facebook, for example, launched vaccine profile picture frames to encourage people to get vaccinated, and that Twitter introduced prompts appearing on users’ home timeline during World Immunisation Week in 16 countries, and held conversations on vaccines that received 5 million impressions.

In the April reports by the two companies there is more detail on actual removals carried out too.

Facebook, for example, says it removed 47,000 pieces of content in the EU for violating COVID-19 and vaccine misinformation policies, which the Commission notes is a slight decrease from the previous month.

While Twitter reported challenging 2,779 accounts, suspending 260 and removing 5,091 pieces of content globally on the COVID-19 disinformation topic in the month of April.

Google, meanwhile, reported taking action against 10,549 URLs on AdSense, which the Commission notes is a “significant increase” vs March (up 1,378 URLs).

But is that increase good news or bad? Increased removals of dodgy COVID-19 ads might signify better enforcement by Google — or major growth of the COVID-19 disinformation problem on its ad network.

The ongoing problem for the regulators who are trying to tread a fuzzy line on online disinformation is how to quantify any of these tech giants’ actions — and truly understand their efficacy or impact — without having standardized reporting requirements and full access to platform data.

For that, regulation would be needed, not selective self-reporting.

 

#code-of-practice-on-disinformation, #covid-19, #deception, #digital-services-act, #disinformation, #europe, #european-union, #facebook, #fake-news, #google, #microsoft, #online-content, #online-disinformation, #policy, #social, #thierry-breton, #twitter, #vera-jourova

UK publishes draft Online Safety Bill

The UK government has published its long-trailed (child) ‘safety-focused’ plan to regulate online content and speech.

The Online Safety Bill has been in the works for years — during which time a prior plan to require age verification for accessing online porn in the UK, also with the goal of protecting kids from being exposed to inappropriate content online but which was widely criticized as unworkable, got quietly dropped.

At the time the government said it would focus on introducing comprehensive legislation to regulate a range of online harms. It can now say it’s done that.

The 145-page Online Safety Bill can be found here on the gov.uk website — along with 123 pages of explanatory notes and a 146-page impact assessment.

The draft legislation imposes a duty of care on digital service providers to moderate user-generated content in a way that prevents users from being exposed to illegal and/or harmful stuff online.

The government dubs the plan globally “groundbreaking” and claims it will usher in “a new age of accountability for tech and bring fairness and accountability to the online world”.

Critics warn the proposals will harm freedom of expression by encouraging platforms to over-censor, while also creating major legal and operational headaches for digital businesses that will discourage tech innovation.

The debate starts now in earnest.

The bill will be scrutinised by a joint committee of MPs — before a final version is formally introduced to Parliament for debate later this year.

How long it might take to hit the statute books isn’t clear but the government has a large majority in parliament so, failing major public uproar and/or mass opposition within its own ranks, the Online Safety Bill has a clear road to becoming law.

Commenting in a statement, digital secretary Oliver Dowden said: “Today the UK shows global leadership with our groundbreaking laws to usher in a new age of accountability for tech and bring fairness and accountability to the online world.

“We will protect children on the internet, crack down on racist abuse on social media and through new measures to safeguard our liberties, create a truly democratic digital age.”

The length of time it’s taken for the government to draft the Online Safety Bill underscores the legislative challenge involved in trying to ‘regulate the Internet’.

In a bit of a Freudian slip, the DCMS’ own PR talks about “the government’s fight to make the internet safe”. And there are certainly question-marks over who the future winners and losers of the UK’s Online Safety laws will be.

Safety and democracy?

In a press release about the plan, the Department for Digital, Culture, Media and Sport (DCMS) claimed the “landmark laws” will “keep children safe, stop racial hate and protect democracy online”.

But as that grab-bag of headline goals implies there’s an awful lot going on here — and huge potential for things to go wrong if the end result is an incoherent mess of contradictory rules that make it harder for digital businesses to operate and for Internet users to access the content they need.

The laws are set to apply widely — not just to tech giants or social media sites but to a broad swathe of websites, apps and services that host user-generated content or just allow people to talk to others online.

In-scope services will face a legal requirement to remove and/or limit the spread of illegal and (in the case of larger services) harmful content, with the risk of major penalties for failing in this new duty of care toward users. There will also be requirements for reporting child sexual exploitation content to law enforcement.

Ofcom, the UK’s comms regulator — which is responsible for regulating the broadcast media and telecoms sectors — is set to become the UK Internet’s content watchdog too, under the plan.

It will have powers to sanction companies that fail in the new duty of care toward users by hitting them with fines of up to £18M or ten per cent of annual global turnover (whichever is higher).
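In other words, the cap is whichever of the two figures is greater. A minimal sketch of that calculation, using illustrative turnover values:

```python
# Ofcom's maximum penalty under the draft bill: £18M or 10% of annual global
# turnover, whichever is higher. Turnover figures below are illustrative.
def max_fine(annual_global_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

print(max_fine(500_000_000))  # 50,000,000.0 -> the 10% arm applies
print(max_fine(100_000_000))  # 18,000,000   -> the £18M floor applies
```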

The regulator will also get the power to block access to sites — so the potential for censoring entire platforms is baked in.

Some campaigners backing tough new Internet rules have been pressing the government to include the threat of criminal sanctions for CEOs to concentrate C-suite minds on anti-harms compliance. And while ministers haven’t gone that far, DCMS says a new criminal offence for senior managers has been included as a deferred power — adding: “This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.”

Despite there being widespread public support in the UK for tougher rules for Internet platforms, the devil is in the detail of how exactly you propose to do that.

Civil rights campaigners and tech policy experts have warned from the get-go that the government’s plan risks having a chilling effect on online expression by forcing private companies to be speech police.

Legal experts are also warning over how workable the framework will be, given hard-to-define concepts like “harms” — and, in a new addition, content that’s defined as “democratically important” (which the government wants certain platforms to have a special duty to protect).

The clear risk is massive legal uncertainty wrapping digital businesses — with knock-on impacts on startup innovation and availability of services in the UK.

The bill’s earlier incarnation — a 2019 White Paper — had the word “harms” in the title. That’s been swapped for a more anodyne reference to “safety” but the legal uncertainty hasn’t been swapped out.

The emphasis remains on trying to rein in an amorphous conglomerate of ‘harms’ — some illegal, others just unpleasant — that have been variously linked to or associated with online activity. (Often off the back of high profile media reporting, such as into children’s exposure to suicide content on platforms like Instagram.)

This can range from bullying and abuse (online trolling), to the spread of illegal content (child sexual exploitation), to content that’s merely inappropriate for children to see (legal pornography).

Certain types of online scams (romance fraud) are another harm the government wants the legislation to address, per latest additions.

The umbrella ‘harms’ framing makes the UK approach distinct from the European Union’s Digital Services Act — a parallel legislative proposal to update the EU’s digital rules that’s more tightly focused on things that are illegal, with the bloc setting out rules to standardize reporting procedures for illegal content; and combating the risk of dangerous products being sold on ecommerce marketplaces with ‘know your customer’ requirements.

In response to criticism of the UK Bill’s potential impact on online expression, the government has added measures which it said today are aimed at strengthening people’s rights to express themselves freely online.

It also says it’s added in safeguards for journalism and to protect democratic political debate in the UK.

However its approach is already raising questions — including over what look like some pretty contradictory stipulations.

For example, the DCMS’ discussion of how the bill will handle journalistic content confirms that content on news publishers’ own websites won’t be in scope of the law (reader comments on those sites are also not in scope) and that articles by “recognised news publishers” shared on in-scope services (such as social media sites) will be exempted from legal requirements that may otherwise apply to non journalistic content.

Indeed, platforms will have a legal requirement to safeguard access to journalism content. (“This means [digital platforms] will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists’ removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content,” DCMS notes.)

However the government also specifies that “citizen journalists’ content will have the same protections as professional journalists’ content” — so exactly where (or how) the line gets drawn between “recognized” news publishers (out of scope), citizen journalists (also out of scope), and just any old person blogging or posting stuff on the Internet (in scope… maybe?) is going to make for compelling viewing.

Carve outs to protect political speech also complicate the content moderation picture for digital services — given, for example, how extremist groups that hold racist opinions can seek to launder their hate speech and abuse as ‘political opinion’. (Some notoriously racist activists also like to claim to be ‘journalists’…)

DCMS writes that companies will be “forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation”.

“Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom,” it goes on, adding: “When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.”

Platforms will face responsibility for balancing all these conflicting requirements — drawing on Codes of Practice on content moderation that respects freedom of expression which will be set out by Ofcom — but also under threat of major penalties being slapped on them by Ofcom if they get it wrong.

Interestingly, the government appears to be looking favorably on the Facebook-devised ‘Oversight Board’ model, where a panel of humans sit in judgement on ‘complex’ content moderation cases — and also discouraging too much use of AI filters which it warns risk missing speech nuance and over-removing content. (Especially interesting given the UK government’s prior pressure on platforms to adopt AI tools to speed up terrorism content takedowns.)

“The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate,” writes DCMS. “All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.”

“People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom’s horizon-scanning, research and enforcement activity,” it goes on.

“Category 1 services [the largest, most popular services] will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects. These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.”

Another confusing-looking component of the plan is that while the bill includes measures to tackle what it calls “user-generated fraud” — such as posts on social media for fake investment opportunities or romance scams on dating apps — fraud that’s conducted online via advertising, emails or cloned websites will not be in scope, per DCMS, as it says “the Bill focuses on harm committed through user-generated content”.

Yet since Internet users can easily and cheaply create and run online ads — as platforms like Facebook essentially offer their ad targeting tools to anyone who’s willing to pay — then why carve out fraud by ads as exempt?

It seems a meaningless place to draw the line. Fraud where someone paid a few dollars to amplify their scam doesn’t seem a less harmful class of fraud than a free Facebook post linking to the self-same crypto investment scam.

In short, there’s a risk of arbitrary, ill-thought-through distinctions creating incoherent and confusing rules that are prone to loopholes. Which doesn’t sound good for anyone’s online safety.

In parallel, meanwhile, the government is devising an ambitious pro-competition ex ante regime to regulate tech giants specifically. Ensuring coherence and avoiding conflicting or overlapping requirements between that framework for platform giants and these wider digital harms rules is a further challenge.

#europe, #oliver-dowden, #online-content, #online-harms, #online-safety-bill, #oversight-board, #policy, #social, #social-media, #tc, #uk-government, #united-kingdom

Talking tech’s exodus, Twitter’s labels, and Medium’s next moves with founder Ev Williams

Earlier today, we had the chance to talk with Twitter and Medium cofounder Ev Williams, along with operator-turned-investor James Joaquin, who helps oversee the day-to-day of the mission-focused venture firm they separately cofounded six years ago, Obvious Ventures.

We collectively discussed a lot of venture-y things, some of which we’ll publish next week, so stay tuned. In the meantime, we spent some time talking specifically with Williams about both Twitter and Medium and some of the day’s biggest headlines. Following are some excerpts from that chat, lightly edited for length and clarity.

TC: A lot of tech CEOs have been saying goodbye to San Francisco in 2020. Do you think the trend is attracting too much attention or perhaps not enough?

EW: I moved away from the Bay Area a little over a year ago, with my family to New York. I’d lived in San Francisco for 20 years, and I had never lived in New York, and thought, ‘Why not go? Now seems like a good time.’ Turns out I was wrong. [Laughs.] It was a very bad time to move to New York. So I was there for six months, and quickly came back to California, which is a great place to be in a world where you’re not going into bars and restaurants and seeing people.

TC: You moved when COVID took hold?

EW: Yes. In March, Manhattan suddenly seemed not ideal. So now I’m on the peninsula.

I’m from San Francisco. It was really, for me, just honestly looking for a change. But an enabling factor that could be common in many of these cases is the fact that I no longer have to be in the office in San Francisco every day, [whereas] for most of 20 years [beforehand], all my work life was in an office in San Francisco, generally with a company I had started, so I thought it was important to be there.

This was pre COVID and remote work. But remote work was becoming more common. And I noticed in 2018 or so, with this massive number of companies that were in San Francisco —  startups and large public companies and pre IPO companies — the competition for talent had gotten more extreme than it had ever been. So it got me —  along with a lot of founders and CEOs — thinking about maybe the advantage of hiring locally and having everybody in the same office [was a pro] that was starting to get outweighed by the cons. . . And, of course, the tools and technology that make remote work possible were getting better all the time.

TC: Given that you cofounded Twitter, I have to ask about this presidential transition that is maybe, finally happening. In January, Donald Trump will lose the privileges he enjoyed as president. Given the amount of disinformation he has published routinely, do you think Twitter should have cracked down on him sooner? How would you rate its handling of a president who really tested its boundaries in every way?

EW: I think what Twitter has done especially recently is a pretty good solution. I mean, I don’t agree with the notion that he should have been removed altogether a long time ago. Having the visibility, literally seeing what the President is thinking at any given moment, as ludicrous as it is, is helpful.

What he would be doing if he didn’t have Twitter is unclear, but he’d be doing something to get his message out there. And what the company has done most recently with the warnings on his tweets or blocking them is great. It’s providing more information. It’s kind of ‘buyer beware’ about this information. And it’s a bolder step than any platform had done previously. It’s a good version of an in between where previously [people would] talk about just kicking people off, [and] allowing freedom of speech.

TC: You started Blogger, then Twitter, then Medium. As someone who has spent much of your career  focused on content and distribution, do you have any other thoughts about what more Twitter or other platforms could be doing [to tackle disinformation]? Because there is going to be somebody who comes along again with the same autocratic tendencies.

EW: I think all of society gets more information savvy — that’s one hope over the long term. It wasn’t that long ago that if something was in “media,” it was accepted as true. And now I think everyone’s skeptical. We’ve learned that that’s not necessarily the case and certainly not online.

Unfortunately, we’re now at the point where a lot of people have lost faith in everything published or shared anywhere. But I think that’s a step along the evolution of just getting more media savvy and knowing that sources really matter, and as we build better tools, things will get better.

TC: Speaking of content platforms, Medium charges $50 per year for users to access an unlimited number of articles from individual writers and poets. Have you said how many subscribers the platform now has?

EW: We haven’t given a precise number, but I can tell you it’s in the high hundreds of thousands. It’s been a couple of years now, and I’m a very firm believer in the model — not only that people will pay for quality information, but that it’s just a much healthier model for publishers, be they individuals or companies, because it creates that feedback loop of ‘quality gets rewarded.’

If people aren’t getting value, they unsubscribe, and that isn’t the case with an advertising model. If people click, you keep making money, and you can kind of keep tricking people or keep appealing to lowest-common-denominator impulses. There were a couple of decades where the mantra was ‘No one will pay for content on the internet,’ which obviously seems silly now. But that was the established belief for such a long time.

TC: Do you ever think you should have charged from the outset? I  sometimes wonder if it’s harder to throw on the switch afterward.

EW: Yes, and no. When we first switched to this model in 2017, we created a subscription, but the vast majority of content was — and actually still is — outside of the paywall. And our model is different than most because it’s a platform, and we don’t own the content, and we have an agreement with our creators that they can publish behind the paywall if they want, and we will pay them if they do that. But they can also publish outside the paywall if they’re not interested in making money and want maximum reach. And those models are actually very complementary because the scale of the platform brings a lot of people in through the top of the funnel.

Scale is really important for most businesses, but for a paywall, it’s especially important because people have to be visiting with enough frequency to actually hit the paywall and be motivated to pay.

TC: Out of curiosity, what do you make of Substack, a startup that invites writers to create their own newsletters using a subscription model and then takes a cut of their revenue in exchange for a host of back-end services?

EW: There’s a bit of a creator renaissance going on right now that is part of a bigger wave of people being willing to pay for quality information, and independent writers and thinkers actually breaking out on their own and building brands and followings. And I think we’re going to see more of that.

TC: Medium has raised $132 million over the years. Will you raise more? Where do you want to take the platform in the next 12 to 24 months?

EW: We’re not yet profitable, so I anticipate that we will raise more money.

There’s a very big business to be built here. While more and more people are willing to pay for content, I don’t think that means that most people will subscribe to dozens of sources, whether they’re websites with paywalls or newsletters. If you look at how basically every media category has evolved, a lot of them have gone through this shift from free to paid, at least at the higher end of the market. That includes music, television, and even games. And at the high end, there tend to be players who own a large part of the market, and I think that comes down to offering the best consumer value proposition — one that gives people lots of optionality, lots of personalization, and lots of value for one price.

I think that the same thing is going to play out in this area, and for the subscription that’s able to reach critical mass, that’s a multi-billion dollar business. And that’s what we’re aiming to build.

#collaborative-consumption, #ev-williams, #james-joaquin, #medium, #obvious-ventures, #online-content, #publishing, #tc, #twitter, #venture-capital

Microsoft launches a deepfake detector tool ahead of US election

Microsoft has added to the slowly growing pile of technologies aimed at spotting synthetic media (aka deepfakes) with the launch of a tool for analyzing videos and still photos to generate a manipulation score.

The tool, called Video Authenticator, provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated.

“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
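Microsoft hasn’t published an API for Video Authenticator, but its description implies a per-frame scoring loop along these lines. The model object and its predict() call below are hypothetical placeholders, not Microsoft’s actual interface.

```python
# Conceptual sketch of the per-frame scoring Microsoft describes: every frame
# gets a confidence score that it has been manipulated. The `model` object and
# its predict() call are hypothetical placeholders, not Microsoft's API.
import cv2  # OpenCV, used here only to pull frames from the video

def score_video(path: str, model) -> list[float]:
    """Return a manipulation-confidence score (0-1) for each frame of a video."""
    scores = []
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # A real detector looks for blending boundaries and subtle fading or
        # greyscale artifacts around manipulated regions; that work is hidden
        # behind the placeholder model here.
        scores.append(float(model.predict(frame)))
    capture.release()
    return scores
```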

If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real — perhaps with malicious intent to misinform people.

And while plenty of deepfakes are created with a very different intent — to be funny or entertaining — taken out of context such synthetic media can still take on a life of its own as it spreads, meaning it can also end up tricking unsuspecting viewers.

While AI tech is used to generate realistic deepfakes, identifying visual disinformation using technology is still a hard problem — and a critically thinking mind remains the best tool for spotting high tech BS.

Nonetheless, technologists continue to work on deepfake spotters — including this latest offering from Microsoft.

Although its blog post warns the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”

This summer a competition kicked off by Facebook to develop a deepfake detector served up results that were better than guessing — but only just in the case of a dataset the researchers hadn’t had prior access to.

Microsoft, meanwhile, says its Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, which it notes are “both leading models for training and testing deepfake detection technologies”.

It’s partnering with the San Francisco-based AI Foundation to make the tool available to organizations involved in the democratic process this year — including news outlets and political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

The tool has been developed by its R&D division, Microsoft Research, in coordination with its Responsible AI team and its internal advisory body, the AI, Ethics and Effects in Engineering and Research (Aether) Committee — as part of a wider program Microsoft is running aimed at defending democracy from threats posed by disinformation.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

On the latter front, Microsoft has also announced a system that will enable content producers to add digital hashes and certificates to media that remain in their metadata as the content travels online — providing a reference point for authenticity.

The second component of the system is a reader tool, which can be deployed as a browser extension, for checking certificates and matching the hashes to offer the viewer what Microsoft calls “a high degree of accuracy” that a particular piece of content is authentic/hasn’t been changed.

The certification will also provide the viewer with details about who produced the media.
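Microsoft hasn’t detailed the manifest format, but the producer/reader flow it describes boils down to hashing the media, signing the hash, and re-verifying on the reader’s side. A minimal sketch of that flow, with deliberately simplified key and certificate handling, might look like this:

```python
# Simplified sketch of the producer/reader flow described above: hash the
# media, sign the hash, and let a reader (e.g. a browser extension) re-hash
# the file and verify the signature. Key handling and the certificate/manifest
# format are simplified assumptions, not Microsoft's implementation.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

def content_hash(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def sign_media(path: str, private_key) -> bytes:
    # Producer side: sign the content hash and ship it in the media's metadata.
    return private_key.sign(content_hash(path),
                            ec.ECDSA(utils.Prehashed(hashes.SHA256())))

def verify_media(path: str, signature: bytes, public_key) -> bool:
    # Reader side: re-hash the file and check the signature against it.
    try:
        public_key.verify(signature, content_hash(path),
                          ec.ECDSA(utils.Prehashed(hashes.SHA256())))
        return True
    except InvalidSignature:
        return False

# Usage: key = ec.generate_private_key(ec.SECP256R1())
#        sig = sign_media("clip.mp4", key)
#        verify_media("clip.mp4", sig, key.public_key())
```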

Microsoft is hoping this digital watermarking authenticity system will end up underpinning a Trusted News Initiative announced last year by UK publicly funded broadcaster, the BBC — specifically for a verification component, called Project Origin, which is led by a coalition of the BBC, CBC/Radio-Canada, Microsoft and The New York Times.

It says the digital watermarking tech will be tested by Project Origin with the aim of developing it into a standard that can be adopted broadly.

“The Trusted News Initiative, which includes a range of publishers and social media companies, has also agreed to engage with this technology. In the months ahead, we hope to broaden work in this area to even more technology companies, news publishers and social media companies,” Microsoft adds.

While work on technologies to identify deepfakes continues, its blog post also emphasizes the importance of media literacy — flagging a partnership with the University of Washington, Sensity and USA Today aimed at boosting critical thinking ahead of the US election.

This partnership has launched a Spot the Deepfake Quiz for voters in the US to “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”, as it puts it.

The interactive quiz will be distributed across web and social media properties owned by USA Today, Microsoft and the University of Washington and through social media advertising, per the blog post.

The tech giant also notes that it’s supporting a public service announcement (PSA) campaign in the US encouraging people to take a “reflective pause” and check to make sure information comes from a reputable news organization before they share or promote it on social media ahead of the upcoming election.

“The PSA campaign will help people better understand the harm misinformation and disinformation have on our democracy and the importance of taking the time to identify, share and consume reliable information. The ads will run across radio stations in the United States in September and October,” it adds.

#artificial-intelligence, #canada, #computer-graphics, #deep-learning, #deepfakes, #disinformation, #election-interference, #facebook, #media, #media-literacy, #microsoft-research, #online-content, #san-francisco, #science-and-technology, #social-media, #special-effects, #synthetic-media, #the-new-york-times, #united-kingdom, #united-states, #university-of-washington, #usa-today

Tencent Music bets on China’s crowded podcasting space

Listeners of podcasts, audiobooks and other audio shows are estimated to number 542 million in China this year, according to a third-party survey by marketing firm iiMedia. It’s a healthy jump from the 489 million users recorded in 2019, and it no doubt has attracted new players to the game.

That includes Tencent Music Entertainment (TME), the Tencent spin-off that is sometimes regarded as the Spotify of China but differs on many fronts in practice. The group’s main business goes beyond music streaming to encompass virtual karaoke, live streaming and audio content, a category that has recently seen a big push from the firm.

In its newly released quarterly report, TME said it has made “significant progress in expanding” its audio library by adding thousands of new adaptations from popular IP pieces and works from independent producers. This intensifies competition in what is already a crowded space.

Like Spotify, TME is late to voice-based content, an umbrella term that can include everything from podcasts, audiobooks and radio stations to more innovative listening experiences like audio live streaming. This sector in China has for years been occupied by leading companies Ximalaya, the main investor in San Francisco-based podcasting firm Himalaya, and Nasdaq-listed Lizhi.

TME’s thrust into audio content holds no immediate promise, for there is still no obvious path to profitability. Chinese users are known to be reluctant to pay for digital content, and when they do, say, for educational and self-improvement podcasts, the enthusiasm tends to fade quickly. Deep-pocketed platforms often resort to offering content for free to gain market share, relentlessly forcing out smaller contestants. The result is that everyone needs to find more indirect ways to monetize.

Lizhi, for instance, primarily generates revenues by selling virtual items through its live, interactive audio sessions, while the contribution from user subscriptions and advertising remains paltry. The seven-year-old company hasn’t turned a profit, recording a net loss of 133 million yuan or $19.1 million last year.

Indirect monetization is nothing new in China’s internet industry. Tencent, most famous for its WeChat messenger, notably relies on gaming revenues that its social networking products help drive. TME, similarly, gets the bulk of its money by selling virtual items in music-themed live streams, while only 6% of its 657 million monthly active users on music streaming apps are paying. The MAU growth has also come to a standstill as China’s online music market saturates; from 2017 to 2020, TME added only 50 million new users to its music streaming services. The question is whether the music titan can breathe new life into the adjacent audio sector.

#asia, #china, #lizhi, #media, #music-services, #online-content, #online-karaoke, #podcast, #spotify, #streaming-media, #tencent, #tencent-music-entertainment, #wechat

Google and Facebook must pay media for content reuse, says Australia

The Australian government has said it will adopt a mandatory code to require tech giants such as Google and Facebook to pay local media for reusing their content. The requirement for them to share ad revenue with domestic publishers was reported earlier by Reuters.

Treasurer Josh Frydenberg published an opinion article in The Australian on Friday, writing that an earlier plan to create a voluntary code by November this year — one meant to govern the relationship between digital platforms and media businesses in order to “protect consumers, improve transparency and address the power imbalance between the parties” — had failed owing to “insufficient progress”.

“On the fundamental issue of payment for content, which the code was seeking to resolve, there was no meaningful progress and, in the words of the ACCC [Australia’s competition commission], “no expectation of any even being made”,” he wrote.

The ACCC has been tasked with devising the code which Frydenberg said will include provisions related to value exchange and revenue sharing; transparency of ranking algorithms; access to user data; presentation of news content; and penalties and sanctions for non-compliance.

“The intention is to have a draft code of conduct released for comment by the end of July and legislated shortly thereafter,” he added. “It is only fair that the search engines and social media giants pay for the original news content that they use to drive traffic to their sites.”

The debate around compensation for tech giants’ reuse of (and indirect monetization of) others’ editorial content — by displaying snippets of news stories on their platforms and aggregation services — is not a new one, though the coronavirus crisis has likely dialled up publisher pressure on policymakers as advertiser marketing budgets nose-dive globally and media companies stare down the barrel of a revenue crunch.

Earlier this month France’s competition watchdog ordered Google to negotiate in good faith with local media firms to pay for reusing their content.

The move followed a national law last year to transpose a pan-EU copyright reform that’s intended to extend rights to news snippets. However instead of paying French publishers for reusing their content Google stopped displaying content that’s covered by the law in local search and Google News.

France’s competition watchdog said it believes the unilateral move constitutes an abuse of a dominant market position — taking the step of applying an interim order to force Google to the negotiating table while it continues to investigate.

Frydenberg’s article references the French move, as well as pointing back to a 2014 attempt by Spain, which also created legislation seeking to make Google pay for snippets of news reused in its News aggregator product. In the latter case Google simply pulled the plug on its News service in the market — which remains closed in Spain to this day.

Google’s message to desktop users in Spain if they try to navigate to its News product

“We are under no illusions as to the difficulty and complexity of implementing a mandatory code to govern the relationship between the digital platforms and the news media businesses. However, there is a need to take this issue head-on,” Frydenberg goes on. “We are not seeking to protect traditional media companies from the rigour of competition or technological disruption.

“Rather, to create a level playing field where market power is not misused, companies get a fair go and there is appropriate compensation for the production of original news content.”

Reached for comment on the Australian government’s plan, a Google spokesperson sent us this statement:

We’ve worked for many years to be a collaborative partner to the news industry, helping them grow their businesses through ads and subscription services and increase audiences by driving valuable traffic. Since February, we have engaged with more than 25 Australian publishers to get their input on a voluntary code and worked to the timetable and process set out by the ACCC. We have sought to work constructively with industry, the ACCC and Government to develop a Code of Conduct, and we will continue to do so in the revised process set out by the Government today.

Google continues to argue that it provides ample value to news publishers by directing traffic to their websites, where they can monetize it via ads and/or subscription conversions, saying that in 2018 alone it sent in excess of 2BN clicks to Australian news publishers from Australian users.

It also points out publishers can choose whether or not they wish their content to appear in Google search results. Though, in France, it’s worth noting the competition watchdog took the view that Google declaring it won’t pay to display any news could put some publishers at a disadvantage vs others.

The dominance of Google’s search engine certainly looks to be a key component for such interventions, along with Facebook’s grip on digital attention spans.

On this, Frydenberg’s article cites a report by the country’s competition commission which found more than 98 per cent of online searches on mobile devices in Australia are with Google, while Facebook was found to have some 17M local users who connect to its platform for at least half an hour a day. (Australia’s total population is around 25M.)

“For every $100 spent by advertisers in Australia on online advertising, excluding classifieds, $47 goes to Google, $24 to Facebook and $29 to other participants,” Frydenberg also wrote, noting that the local online ad market is worth around $9BN per year — growing more than 8x since 2005.
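Applying that split to the roughly $9BN annual market gives a rough dollar breakdown. This is a simplification, since the quoted split excludes classifieds while the headline market figure may not:

```python
# Rough dollar breakdown implied by the $47/$24/$29 split applied to a ~$9BN
# annual market. A simplification: the split excludes classifieds, while the
# headline market figure may not.
market = 9_000_000_000
for name, share in [("Google", 0.47), ("Facebook", 0.24), ("Others", 0.29)]:
    print(f"{name:<9} ${market * share / 1e9:.2f}B")
# Google    $4.23B / Facebook  $2.16B / Others    $2.61B
```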

Reached for comment on the government plan for a mandatory code for reuse of news content, Facebook sent us the following statement — attributed to Will Easton, MD, Facebook Australia and New Zealand:

We’re disappointed by the Government’s announcement, especially as we’ve worked hard to meet their agreed deadline. COVID-19 has impacted every business and industry across the country, including publishers, which is why we announced a new, global investment to support news organisations at a time when advertising revenue is declining. We believe that strong innovation and more transparency around the distribution of news content is critical to building a sustainable news ecosystem. We’ve invested millions of dollars locally to support Australian publishers through content arrangements, partnerships and training for the industry and hope the code will protect the interests of millions of Australians and small businesses that use our services every day.

If enough countries pursue a competition-flavored legislative fix against Google and Facebook to try to extract rents for media publishers it may be more difficult for them to dodge some form of payment for reusing news content. Though the adtech giants still hold other levers they could pull to increase their charges on publishers.

Indeed, their dual role — involved in the distribution, discovery and monetization of online content and ads, controlling massive ad networks as well as applying algorithms to create content hierarchies to service ads alongside — has attracted additional antitrust scrutiny in certain markets.

After launching a market study of Google and Facebook’s ad platforms last July, the UK’s Competition and Markets Authority (CMA) raised concerns in an interim report in December — kicking off a consultation on a range of potential interventions from breaking up the platform giants to limiting their ability to set self-serving defaults and enforcing data sharing and/or feature interoperability to help rivals compete.

Per its initial findings, the CMA said there were “reasonable grounds” for suspecting serious impediments to competition in the online platforms and digital advertising market. However the regulator has so far favored making recommendations to government, to feed a planned “comprehensive regulatory framework” to govern the behaviour of online platforms, rather than taking it upon itself to intervene directly.

#advertising-tech, #australia, #competition-and-markets-authority, #european-union, #facebook, #france, #google, #local-search, #media, #new-zealand, #online-advertising, #online-content, #online-platforms, #search-engine, #search-engines, #spain, #subscription-services, #united-kingdom