Email client K-9 Mail will become Thunderbird for Android

K-9 Mail will become Thunderbird for Android in time. (credit: Mozilla)

The open source Mozilla Thunderbird email client has a long and storied history, but until now, that history has been limited to the desktop. That’s about to change, according to a post on the Thunderbird blog. Thunderbird will be coming to Android through the popular open source mobile email client K-9 Mail.

According to Thunderbird’s Jason Evangelho, Mozilla has acquired the source code and naming rights to K-9 Mail. K-9 Mail project maintainer Christian Ketterer (who goes by “cketti” in the OSS community) will join the Thunderbird team, and over time, K-9 Mail will become Thunderbird for Android.

Mozilla will invest money and development time in K-9 to add several features and quality-of-life enhancements before that happens, though. The blog post lays out a road map of the planned features.


#android, #email-client, #k-9-mail, #mozilla, #tech, #thunderbird

Mozilla releases Firefox version 100 this week

A special 100th-version splash page appears on the first launch of a new Firefox installation. (credit: Samuel Axon)

Mozilla has released the 100th version of Firefox, and some fanfare accompanied the release on the company’s blog. Firefox 100 is available this week on both desktop and mobile.

To celebrate, Mozilla says it will be regularly sharing fan art inspired by Firefox throughout May. But while that 100 number carries some symbolic weight, the update itself isn’t particularly monumental.

On the desktop, subtitles and captions are now supported in Firefox’s picture-in-picture mode for videos. Three key websites officially support subtitles and captions in PIP: YouTube, Netflix, and Amazon Prime Video. Plus, the feature works on websites that support the WebVTT standard, like Twitter.


#android, #browser, #firefox, #firefox-100, #https, #ios, #mozilla, #tech, #web-browser, #webvtt

Mozilla apparently makes and is discontinuing a VR version of Firefox

(credit: Mozilla)

If you didn’t know that Mozilla made a VR-specific version of Firefox called Firefox Reality, then it’s OK for you to continue not knowing, because Mozilla announced today that it would be discontinuing support for the browser a little over three years after introducing it.

The Spanish co-op Igalia will pick up the pieces next week with a “somewhat beta” browser called Wolvic, which will be based on Firefox Reality’s source code. Firefox Reality will be removed from all the app stores in which it is available “in the coming weeks.” Like Firefox Reality, Wolvic will use the WebXR standard to enable VR and AR experiences on websites, rather than requiring a download of a standalone app from a curated app store.

This may simply be a case of a company discontinuing a niche project intended for a niche market that wasn’t generating sufficient user interest; it’s rare for companies not just to cancel but to willingly hand off overwhelmingly successful products. But Mozilla has been open about its need to carefully manage its resources as it has downsized over the years. The company endured multiple rounds of layoffs in 2020, both before and during the pandemic, citing a need to “refocus.”


#firefox, #mozilla, #tech, #wolvic

Firefox 95 for Windows and Mac introduces RLBox, a new sandboxing tech

A minimalist view of the Firefox web browser. (credit: Firefox)

Mozilla has released the latest version of Firefox, Firefox 95, for Windows and macOS. It’s available now for all users on both platforms.

The Firefox team says the new macOS version reduces CPU usage during event processing and that power usage is reduced while streaming video from sites like Netflix, “especially in fullscreen.” macOS users will also get a faster content process startup and will enjoy memory allocator improvements for better overall performance.

On both macOS and Windows, Mozilla has “improved page load performance by speculatively compiling JavaScript ahead of time.” There’s also a way to move the picture-in-picture toggle button to the opposite side of the video on both platforms, plus a handful of fixes.


#firefox, #firefox-95, #macos, #mozilla, #tech, #web-browser, #windows, #windows-10, #windows-11

Firefox 94 for iOS and Android adds new features for bookmarks and tabs

Mozilla’s current logo for Firefox. (credit: Mozilla)

Today, Mozilla updated the mobile versions of its Firefox web browser on iOS and Android with an overhauled home page and a new tab management feature.

Mozilla wrote in a blog post today announcing the update that the mobile version of the browser is specifically designed for “on-the-go, short bursts of online interactions that are constantly interrupted by life.”

To that end, the new update seeks to make it easier to jump back into previously abandoned or interrupted content. There’s a new “jump back in” feature that lets you go directly to your last opened tab.


#android, #firefox, #firefox-94, #ios, #mobile, #mozilla, #pocket, #tech, #web-browser

Today’s Firefox 91 release adds new site-wide cookie-clearing action

This menacing firefox seems to be on the prowl for unwanted third-party cookies. (credit: Hung Chung Chih via Getty Images)

Mozilla’s Firefox 91, released this morning, includes a new privacy management feature called Enhanced Cookie Clearing. The new feature allows users to manage all cookies and locally stored data generated by a particular website—regardless of whether they’re cookies tagged to that site’s domain or cookies placed from that site but belonging to a third-party domain, e.g., Facebook or Google.

Building on Total Cookie Protection

The new feature builds on and depends upon Total Cookie Protection, introduced in February with Firefox 86. Total Cookie Protection partitions cookies by the site that placed them, rather than the domain that owns them—which means that if a hypothetical third party we’ll call “Forkbook” places tracking (or authentication) cookies on both momscookies.com and grandmascookies.com, it can’t reliably tie the two together.

Without cookie partitioning, a single Forkbook cookie would contain the site data for both momscookies.com and grandmascookies.com. With cookie partitioning, Forkbook must set two separate cookies—one for each site—and can’t necessarily relate one to the other.
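
To make the partitioning idea concrete, here is a minimal, purely illustrative sketch in Python (not Firefox’s actual implementation): an unpartitioned cookie jar is keyed only by the cookie’s owner, while a partitioned jar is keyed by the owner plus the top-level site being visited. The “Forkbook” and cookie-shop domains are the hypothetical names from the example above.

```python
# Illustrative sketch only: a toy model of cookie storage, not Firefox's code.

# Unpartitioned storage: one jar per cookie owner, shared across every site you visit.
unpartitioned = {}

def set_cookie_unpartitioned(owner, name, value):
    unpartitioned.setdefault(owner, {})[name] = value

# Partitioned storage (Total Cookie Protection): the jar is keyed by
# (top-level site, cookie owner), so the same third party gets a separate
# cookie for every site that embeds it.
partitioned = {}

def set_cookie_partitioned(top_level_site, owner, name, value):
    partitioned.setdefault((top_level_site, owner), {})[name] = value

# The hypothetical "Forkbook" widget sets a tracking cookie on two sites.
set_cookie_unpartitioned("forkbook.example", "uid", "abc123")  # one shared jar
set_cookie_partitioned("momscookies.com", "forkbook.example", "uid", "abc123")
set_cookie_partitioned("grandmascookies.com", "forkbook.example", "uid", "xyz789")

# Unpartitioned: a single identifier follows the user across both sites.
print(unpartitioned["forkbook.example"])
# {'uid': 'abc123'}

# Partitioned: two unrelated identifiers that Forkbook can't trivially join.
print(partitioned[("momscookies.com", "forkbook.example")])
# {'uid': 'abc123'}
print(partitioned[("grandmascookies.com", "forkbook.example")])
# {'uid': 'xyz789'}
```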


#browsers, #cookies, #firefox, #mozilla, #privacy, #tech

Firefox 89 delivered more speed—today’s Firefox 90 adds SmartBlock 2.0

The red panda of the Internet just keeps getting better. (credit: Kiszon Pascal via Getty Images)

Today, Mozilla launched Firefox 90. The newest version of Mozilla’s increasingly privacy-focused browser adds improved print-to-PDF functionality, individual exceptions to HTTPS-only mode, an about:third-party page to help identify compatibility issues introduced by third-party applications, and a new SmartBlock feature that cranks up protection from cross-site tracking while making sure site logins still function.

There’s also a new background updater for Windows, which allows a small background application to check for, download, and install Firefox updates while the browser is not running.

SmartBlock 2.0

The newest version of Mozilla’s built-in SmartBlock privacy feature makes it easier for users to keep their tracking protection settings cranked up, without breaking individual websites. The updated version seems to especially target Facebook login, which is increasingly used around the web as a third-party authentication and login tool.


#firefox, #mozilla, #mozilla-firefox, #tech

YouTube’s recommender AI still a horrorshow, finds major crowdsourced study

For years YouTube’s video-recommending algorithm has stood accused of fuelling a grab-bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory.

And while YouTube’s tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm’s antisocial recommendations — announcing a few policy tweaks or limiting/purging the odd hateful account — it’s not clear how far the platform’s penchant for promoting horribly unhealthy clickbait has actually been rebooted.

The suspicion remains that it’s nowhere near far enough.

New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by triggering people’s sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation — which in turn implies that YouTube’s problem with recommending terrible stuff is indeed systemic; a side-effect of the platform’s rapacious appetite to harvest views to serve ads.

That YouTube’s AI is still — per Mozilla’s study — behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform.

The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine’s algorithmic workings (and associated data) hidden from public view and external oversight — via the convenient shield of ‘commercial secrecy’.

But regulation that could help crack open proprietary AI blackboxes is now on the cards — at least in Europe.

To fix YouTube’s algorithm, Mozilla is calling for “common sense transparency laws, better oversight, and consumer pressure” — suggesting a combination of laws that mandate transparency into AI systems; protect independent researchers so they can interrogate algorithmic impacts; and empower platform users with robust controls (such as the ability to opt out of “personalized” recommendations) are what’s needed to rein in the worst excesses of the YouTube AI.

Regrets, YouTube users have had a few…

To gather data on specific recommendations being made to YouTube users — information that Google does not routinely make available to external researchers — Mozilla took a crowdsourced approach, via a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching.

The tool can generate a report which includes details of the videos the user had been recommended, as well as earlier video views, to help build up a picture of how YouTube’s recommender system was functioning. (Or, well, ‘dysfunctioning’ as the case may be.)

The crowdsourced volunteers whose data fed Mozilla’s research reported a wide variety of ‘regrets’, including videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons, per the report — with the most frequently reported content categories being misinformation, violent/graphic content, hate speech and spam/scams.

A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk into people’s eyeballs.

The research also found that recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves.

Mozilla even found “several” instances when the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched. So a clear fail.

A very notable finding was that regrettable content appears to be a greater problem for YouTube users in non-English speaking countries: Mozilla found YouTube regrets were 60% higher in countries without English as a primary language — with Brazil, Germany and France generating what the report said were “particularly high” levels of regretful YouTubing. (And none of the three can be classed as minor international markets.)

Pandemic-related regrets were also especially prevalent in non-English speaking countries, per the report — a worrying detail to read in the middle of an ongoing global health crisis.

The crowdsourced study — which Mozilla bills as the largest-ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed the extension, although it was a subset of 1,162 volunteers — from 91 countries — who submitted reports that flagged 3,362 regrettable videos which the report draws on directly.

These reports were generated between July 2020 and May 2021.

What exactly does Mozilla mean by a YouTube “regret”? It says this is a crowdsourced concept based on users self-reporting bad experiences on YouTube, so it’s a subjective measure. But Mozilla argues that taking this “people-powered” approach centres the lived experiences of Internet users and is therefore helpful in foregrounding the experiences of marginalised and/or vulnerable people and communities (vs, for example, applying only a narrower, legal definition of ‘harm’).

“We wanted to interrogate and explore further [people’s experiences of falling down the YouTube ‘rabbit hole’] and frankly confirm some of these stories — but then also just understand further what are some of the trends that emerged in that,” explained Brandi Geurkink, Mozilla’s senior manager of advocacy and the lead researcher for the project, discussing the aims of the research.

“My main feeling in doing this work was being — I guess — shocked that some of what we had expected to be the case was confirmed… It’s still a limited study in terms of the number of people involved and the methodology that we used but — even with that — it was quite simple; the data just showed that some of what we thought was confirmed.

“Things like the algorithm recommending content essentially accidentally, that it later is like ‘oops, this actually violates our policies; we shouldn’t have actively suggested that to people’… And things like the non-English-speaking user base having worse experiences — these are things you hear discussed a lot anecdotally and activists have raised these issues. But I was just like — oh wow, it’s actually coming out really clearly in our data.”

Mozilla says the crowdsourced research uncovered “numerous examples” of reported content that would likely or actually breach YouTube’s community guidelines — such as hate speech or debunked political and scientific misinformation.

But it also says the reports flagged a lot of what YouTube “may” consider ‘borderline content’. Aka, stuff that’s harder to categorize — junk/low quality videos that perhaps toe the acceptability line and may therefore be trickier for the platform’s algorithmic moderation systems to respond to (and thus content that may also survive the risk of a take down for longer).

However, a related issue the report flags is that YouTube doesn’t provide a definition for borderline content, despite discussing the category in its own guidelines. That, says Mozilla, makes it impossible to verify the researchers’ assumption that much of what the volunteers were reporting as ‘regretful’ would likely fall into YouTube’s own ‘borderline content’ category.

The challenge of independently studying the societal effects of Google’s tech and processes is a running theme underlying the research. But Mozilla’s report also accuses the tech giant of meeting YouTube criticism with “inertia and opacity”.

It’s not alone there either. Critics have long accused YouTube’s ad giant parent of profiting off of engagement generated by hateful outrage and harmful disinformation — allowing “AI-generated bubbles of hate” to surface ever more baleful (and thus stickily engaging) stuff, exposing unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google gets to shield its low grade content business under a user-generated content umbrella.

Indeed, ‘falling down the YouTube rabbit hole’ has become a well-trodden metaphor for discussing the process of unsuspecting Internet users being dragged into the darkest and nastiest corners of the web. This user reprogramming takes place in broad daylight, via AI-generated suggestions that yell at people to follow the conspiracy breadcrumb trail right from inside a mainstream web platform.

Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of exactly this: Automating radicalization.

However it’s remained difficult to get hard data to back up anecdotal reports of individual YouTube users being ‘radicalized’ after viewing hours of extremist content or conspiracy theory junk on Google’s platform.

Ex-YouTube insider — Guillaume Chaslot — is one notable critic who’s sought to pull back the curtain shielding the proprietary tech from deeper scrutiny, via his algotransparency project.

Mozilla’s crowdsourced research adds to those efforts by sketching a broad — and broadly problematic — picture of the YouTube AI by collating reports of bad experiences from users themselves.

Of course externally sampling platform-level data that only Google holds in full (at its true depth and dimension) can’t be the whole picture — and self-reporting, in particular, may introduce its own set of biases into Mozilla’s data-set. But the problem of effectively studying big tech’s blackboxes is a key point accompanying the research, as Mozilla advocates for proper oversight of platform power.

In a series of recommendations the report calls for “robust transparency, scrutiny, and giving people control of recommendation algorithms” — arguing that without proper oversight of the platform, YouTube will continue to be harmful by mindlessly exposing people to damaging and braindead content.

The problematic lack of transparency around so much of how YouTube functions can be picked up from other details in the report. For example, Mozilla found that around 9% of recommended regrets (or almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines).

Collectively, just this subset of videos had had a total of 160M views prior to being removed for whatever reason.

In other findings, the research showed that videos reported as regrettable tend to perform well on the platform.

A particularly stark metric is that reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform — lending weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering/misinforming content over quality (thoughtful/informing) stuff simply because it brings in the clicks.

While that might be great for Google’s ad business, it’s clearly a net negative for democratic societies which value truthful information over nonsense; genuine public debate over artificial/amplified binaries; and constructive civic cohesion over divisive tribalism.

But without legally-enforced transparency requirements on ad platforms — and, most likely, regulatory oversight and enforcement that features audit powers — these tech giants are going to continue to be incentivized to turn a blind eye and cash in at society’s expense.

Mozilla’s report also underlines instances where YouTube’s algorithms are clearly driven by a logic that’s unrelated to the content itself — with a finding that, in 43.6% of the cases where the researchers had data about the videos a participant had watched before a reported regret, the recommendation was completely unrelated to the previous video.

The report gives examples of some of these logic-defying AI content pivots/leaps/pitfalls — such as a person watching videos about the U.S. military and then being recommended a misogynistic video entitled ‘Man humiliates feminist in viral video.’

In another instance, a person watched a video about software rights and was then recommended a video about gun rights. So two rights make yet another wrong YouTube recommendation right there.

In a third example, a person watched an Art Garfunkel music video and was then recommended a political video entitled ‘Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.’

To which the only sane response is, umm what???

YouTube’s output in such instances seems — at best — some sort of ‘AI brain fart’.

A generous interpretation might be that the algorithm got stupidly confused. Albeit, in a number of the examples cited in the report, the confusion is leading YouTube users toward content with a right-leaning political bias. Which seems, well, curious.

Asked what she views as the most concerning findings, Mozilla’s Geurkink told TechCrunch: “One is how clearly misinformation emerged as a dominant problem on the platform. I think that’s something, based on our work talking to Mozilla supporters and people from all around the world, that is a really obvious thing that people are concerned about online. So to see that that is what is emerging as the biggest problem with the YouTube algorithm is really concerning to me.”

She also highlighted the problem of the recommendations being worse for non-English-speaking users as another major concern, suggesting that global inequalities in users’ experiences of platform impacts “doesn’t get enough attention” — even when such issues do get discussed.

Responding to Mozilla’s report, a Google spokesperson sent us this statement:

“The goal of our recommendation system is to connect viewers with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch. We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%.”

Google also claimed it welcomes research into YouTube — and suggested it’s exploring options to bring in external researchers to study the platform, without offering anything concrete on that front.

At the same time, its response queried how Mozilla’s study defines ‘regrettable’ content — and went on to claim that its own user surveys generally show users are satisfied with the content that YouTube recommends.

In further non-quotable remarks, Google noted that earlier this year it started disclosing a ‘violative view rate’ (VVR) metric for YouTube — revealing for the first time the percentage of views on YouTube that come from content that violates its policies.

The most recent VVR stands at 0.16-0.18% — which Google says means that out of every 10,000 views on YouTube, 16-18 come from violative content. It said that figure is down by more than 70% when compared to the same quarter of 2017 — crediting its investments in machine learning as largely being responsible for the drop.

However, as Geurkink noted, the VVR is of limited use without Google releasing more data to contextualize and quantify how far its AI was involved in accelerating views of content its own rules state shouldn’t be viewed on its platform. Without that key data the suspicion must be that the VVR is a nice bit of misdirection.

“What would be going further than [VVR] — and what would be really, really helpful — is understanding what’s the role that the recommendation algorithm plays in this?” Geurkink told us on that, adding: “That’s what is a complete blackbox still. In the absence of greater transparency [Google’s] claims of progress have to be taken with a grain of salt.”

Google also flagged a 2019 change it made to how YouTube’s recommender algorithm handles ‘borderline content’ — aka, content that doesn’t violate policies but falls into a problematic grey area — saying that that tweak had also resulted in a 70% drop in watchtime for this type of content.

The company confirmed, though, that this borderline category is a moveable feast — saying it factors in changing trends as well as context and also works with experts to determine what gets classed as borderline — which makes the aforementioned percentage drop pretty meaningless, since there’s no fixed baseline to measure against.

It’s notable that Google’s response to Mozilla’s report makes no mention of the poor experience reported by survey participants in non-English-speaking markets. And Geurkink suggested that, in general, many of the claimed mitigating measures YouTube applies are geographically limited — i.e. to English-speaking markets like the US and UK. (Or at least arrive in those markets first, before a slower rollout to other places.) 

A January 2019 tweak to reduce amplification of conspiracy theory content in the US was only expanded to the UK market months later — in August — for example.

“YouTube, for the past few years, have only been reporting on their progress of recommendations of harmful or borderline content in the US and in English-speaking markets,” she also said. “And there are very few people questioning that — what about the rest of the world? To me that is something that really deserves more attention and more scrutiny.”

We asked Google to confirm whether it had since applied the 2019 conspiracy theory related changes globally — and a spokeswoman told us that it had. But the much higher rate of reports made to Mozilla of ‘regrettable’ content (a broader measure, yes) in non-English-speaking markets remains notable.

And while there could be other factors at play, which might explain some of the disproportionately higher reporting, the finding may also suggest that, where YouTube’s negative impacts are concerned, Google directs greatest resource at markets and languages where its reputational risk and the capacity of its machine learning tech to automate content categorization are strongest.

Yet any such unequal response to AI risk obviously means leaving some users at greater risk of harm than others — adding another harmful dimension and layer of unfairness to what is already a multi-faceted, many-headed-hydra of a problem.

It’s yet another reason why leaving it up to powerful platforms to rate their own AIs, mark their own homework and counter genuine concerns with self-serving PR is for the birds.

(In additional filler background remarks it sent us, Google described itself as the first company in the industry to incorporate “authoritativeness” into its search and discovery algorithms — without explaining when exactly it claims to have done that or how it imagined it would be able to deliver on its stated mission of ‘organizing the world’s information and making it universally accessible and useful’ without considering the relative value of information sources… So color us baffled at that claim. Most likely it’s a clumsy attempt to throw disinformation shade at rivals.)

Returning to the regulation point, an EU proposal — the Digital Services Act — is set to introduce some transparency requirements on large digital platforms, as part of a wider package of accountability measures. And asked about this Geurkink described the DSA as “a promising avenue for greater transparency”.

But she suggested the legislation needs to go further to tackle recommender systems like the YouTube AI.

“I think that transparency around recommender systems specifically and also people having control over the input of their own data and then the output of recommendations is really important — and is a place where the DSA is currently a bit sparse, so I think that’s where we really need to dig in,” she told us.

One idea she voiced support for is having a “data access framework” baked into the law — to enable vetted researchers to get more of the information they need to study powerful AI technologies — i.e. rather than the law trying to come up with “a laundry list of all of the different pieces of transparency and information that should be applicable”, as she put it.

The EU also now has a draft AI regulation on the table. The legislative plan takes a risk-based approach to regulating certain applications of artificial intelligence. However it’s not clear whether YouTube’s recommender system would fall under one of the more closely regulated categories — or, as seems more likely (at least with the initial Commission proposal), fall entirely outside the scope of the planned law.

“An earlier draft of the proposal talked about systems that manipulate human behavior which is essentially what recommender systems are. And one could also argue that’s the goal of advertising at large, in some sense. So it was sort of difficult to understand exactly where recommender systems would fall into that,” noted Geurkink.

“There might be a nice harmony between some of the robust data access provisions in the DSA and the new AI regulation,” she added. “I think transparency is what it comes down to, so anything that can provide that kind of greater transparency is a good thing.

“YouTube could also just provide a lot of this… We’ve been working on this for years now and we haven’t seen them take any meaningful action on this front but it’s also, I think, something that we want to keep in mind — legislation can obviously take years. So even if a few of our recommendations were taken up [by Google] that would be a really big step in the right direction.”

#advertising-tech, #artificial-intelligence, #content-moderation, #disinformation, #european-union, #google, #hate-speech, #machine-learning, #mozilla, #policy, #recommender-systems, #social, #social-media, #tc, #youtube

TikTok called out for lack of ads transparency and for failing to police political payola

TikTok announced a ban on political advertising all the way back in 2019. So you’d be forgiven for thinking the ugly problem of democracy-denting political disinformation doesn’t apply inside its walled garden of dancing Gen Zers. But you’d be wrong.

New research by Mozilla suggests that policy loopholes and lax oversight, especially around influencer marketing, coupled with an ongoing lack of ads transparency by TikTok — which offers no publicly searchable ad archive — are making its video-sharing platform vulnerable to passing off political ads as organic content.

Mozilla says it found over a dozen instances of TikTok influencers across the political spectrum who were being paid (or otherwise compensated) by a variety of political organizations to promote partisan messages without disclosing that these posts were sponsored.

“Our research found that TikTok influencers across the political spectrum had undisclosed paid relationships with various political organizations in the U.S.,” it writes. “Several right-wing TikTok influencers appear to be funded by conservative organizations like Turning Point USA, a tax-exempt nonprofit which has a dedicated influencer program specifically targeted at funding young conservative content creators on social media.”

Examples of TikTok influencers spreading political messaging (Image credits: Mozilla)

It similarly found evidence of left-leaning sponsored political messaging being spread without proper disclosures by TikTok influencers, noting that: “We found some evidence that progressive influencers supported by left-leaning political organizations were posting pro-Biden messages prior to the U.S. presidential election. For instance, The 99 Problems created and funded the Hype House account House of US, where influencers post political messaging.”

In the report, Th€se Are Not Po£itical Ad$: How Partisan Influencers Are Evading TikTok’s Weak Political Ad Policies, Mozilla calls out the platform for not offering adequate tools for ‘influencers’ — aka users who have amassed enough followers to become attractive to advertisers for paid postings — to report sponsorships, pointing out that other major social media platforms (like Facebook/Instagram) do offer such tools and can flag influencer content if creators are found failing to properly report ads.

“Of course, it’s hard to know exactly how self-disclosure ad policies are being enforced across platforms but TikTok is significantly far behind Instagram and YouTube when it comes to providing tools and enacting clear, strict, and transparent policies,” Mozilla writes in the report.

Per TikTok’s rules, content creators are supposed to self-identify any paid content (typically by using the hashtag #ad or #sponsored), in keeping with U.S. Federal Trade Commission guidelines for the disclosure of paid influence.

But, as Mozilla points out, if TikTok isn’t actively monitoring or scrutinizing influencer ads (as the report suggests) it raises an obvious concern over how the platform can claim to be enforcing its “trust and safety” protocols.

Mozilla’s report also points to rumours that TikTok is testing features that will allow influencers to pay to further promote specific posts — which could dial up the ‘dark money’ political disinformation problem further, i.e. if not combined with active policing and enforcement of sponsorship disclosures.

“There do not appear to be any safeguards preventing creators from using this feature to promote paid political messages,” it warns. “It is unclear how TikTok is monitoring this content to ensure that it complies with their political ad policy.”

Another major criticism in the report is the general lack of ads transparency by TikTok vs other social platforms — with Mozilla’s report pointing out that it does not offer public, searchable ad databases as others (including Facebook/Instagram, Snap, and Google/YouTube) do. Twitter has also had a searchable ads archive since 2018.

“Mozilla believes Facebook and Google are doing a poor job on ad transparency, so the fact that TikTok can’t match even them is troubling,” the report notes.

In recommendations to TikTok (or to policymakers shaping laws aimed at preventing abuse of such platforms) Mozilla suggests that it needs to develop specific mechanisms for content creators to disclose partnerships; invest in comprehensive advertising transparency, including launching an ad database which includes paid partnerships (not just native platform ads); and update its policies and enforcement processes to cover all the ways that paid political influence can happen on its platform.

TikTok was contacted with questions on its approach to ads transparency and sponsored content. It sent this statement:

“Political advertising is not allowed on TikTok, and we continue to invest in people and technology to consistently enforce this policy and build tools for creators on our platform. As we evolve our approach we appreciate feedback from experts, including researchers at the Mozilla Foundation, and we look forward to a continuing dialogue as we work to develop equitable policies and tools that promote transparency, accountability, and creativity.”

There are signs that TikTok is trying to get ahead of criticisms in the report — as Mozilla researcher Becca Ricks notes, the company has very recently (“within the past week”) created a branded content policy.

“It includes mention of a ‘branded content toggle’ to help influencers disclose paid partnerships,” she went on, adding: “We’re currently analyzing the feature to learn more. But we’re cautiously optimistic that this could be a (small) step in the right direction, especially after we raised these issues directly with TikTok two weeks ago in the course of our research.

That said, Mozilla’s other recommendations — and the entirety of the problems we uncovered in the research — remain. So TikTok has a long road to being truly transparent.”

Mozilla’s report is just the latest black cloud to fall over TikTok’s platform, which is under pressure on a variety of fronts related to its content and wider policies, including around ad disclosures.

Last week, EU regulators kicked off what they couched as a formal “dialogue” with TikTok following a number of complaints by consumer protection groups which have accused the platform of hidden marketing, aggressive advertising techniques targeted at children and misleading and confusing contractual terms.

Other regional complaints have called out TikTok’s approach to privacy and user data. And it’s being sued in the UK over its handling of children’s data.

Concerns over weak age verification also led to an intervention by Italy’s data protection regulator earlier this year — acting on concerns for the safety of underage users. In that case TikTok was forced to remove over half a million accounts which were suspected of being used by children younger than 13.

In recent months TikTok has been trying to burnish its image with policymakers, announcing what it bills as a ‘Transparency Center’ in the U.S. last year — and another for Europe this April — saying these centers would provide a space for outside experts to access information about its content moderation and security policies.

However Mozilla said the centers suffer from a lack of transparency vis-a-vis ads, writing in the report that they “do not provide detailed transparency regarding advertisements”, and specifying that TikTok does not disclose specific data about “how many or which ads were rejected under TikTok’s ban on political advertisements”, for example.

TikTok’s opacity around ads looks to be on borrowed time as the issue of online political ads transparency is coming into sharper focus around the world.

In the U.S. a bipartisan bill to try to regulate online platforms that sell ads was introduced in 2017 — although progress stalled and the bill failed to pass ahead of the 2020 US presidential election.

In Europe lawmakers are expected to put forward a regulatory proposal this fall that will tighten ad disclosure and reporting requirements on platforms, as part of a wider package of digital reforms that aim to drive safety, transparency and accountability.

#ads-transparency, #advertising-tech, #influencer-marketing, #mozilla, #native-advertising, #platform-regulation, #policy, #political-advertising, #social, #social-media, #social-media-platforms, #tiktok

Mozilla beefs up anti-cross-site tracking in Firefox, as Chrome still lags on privacy

Mozilla has further beefed up anti-tracking measures in its Firefox browser. In a blog post yesterday it announced that Firefox 86 has an extra layer of anti-cookie tracking built into the enhanced tracking protection (ETP) strict mode — which it’s calling ‘Total Cookie Protection’.

This “major privacy advance”, as it bills it, prevents cross-site tracking by siloing third party cookies per website.

Mozilla likens this to having a separate cookie jar for each site — so, for example, Facebook cookies aren’t stored in the same tub as cookies for that sneaker website where you bought your latest kicks, and so on.

The new layer of privacy wrapping “provides comprehensive partitioning of cookies and other site data between websites in Firefox”, explains Mozilla.

Along with another anti-tracking feature it announced last month — targeting so-called ‘supercookies’ — aka sneaky trackers that store user IDs in “increasingly obscure” parts of the browser (like Flash storage, ETags, and HSTS flags), i.e. where it’s difficult for users to delete or block them — the features combine to “prevent websites from being able to ‘tag’ your browser, thereby eliminating the most pervasive cross-site tracking technique”, per Mozilla.

There’s a “limited exception” for cross-site cookies when they are needed for non-tracking purposes — Mozilla gives the example of popular third-party login providers.

“Only when Total Cookie Protection detects that you intend to use a provider, will it give that provider permission to use a cross-site cookie specifically for the site you’re currently visiting. Such momentary exceptions allow for strong privacy protection without affecting your browsing experience,” it adds.

Tracker blocking has long been an arms race against the adtech industry’s determination to keep surveilling web users — and thumbing its nose at the notion of consent to spy on people’s online business — pouring resource into devising fiendish new techniques to try to keep watching what Internet users are doing. But this battle has stepped up in recent years as browser makers have been taking a tougher pro-privacy/anti-tracker stance.

Mozilla, for example, started making tracker blocking the default back in 2018 — going on to make ETP the default in Firefox in 2019, blocking cookies from companies identified as trackers by its partner, Disconnect.

Apple’s Safari browser, meanwhile, added an ‘Intelligent Tracking Prevention’ (ITP) feature in 2017 — applying machine learning to identify trackers and segregate cross-site tracking data to protect users’ browsing history from third party eyes.

Google has also put the cat among the adtech pigeons by announcing a planned phasing out of support for third party cookies in Chrome — which it said would be coming within two years back in January 2020 — although it’s still working on this ‘privacy sandbox’ project, as it calls it (now under the watchful eye of UK antitrust regulators).

Google has been making privacy strengthening noises since 2019, in response to the rest of the browser market responding to concern about online privacy.

In April last year it rolled back a change that had made it harder for sites to access third-party cookies, citing a need to ensure sites could perform essential functions during the pandemic — though the rollout was resumed in July. But it’s fair to say that the adtech giant remains the laggard when it comes to executing on its claimed plan to beef up privacy.

Given Chrome’s marketshare, that leaves most of the world’s web users exposed to more tracking than they otherwise would be by using a different, more privacy-pro-active browser.

And as Mozilla’s latest anti-cookie tracking feature shows the race to outwit adtech’s allergy to privacy (and consent) also isn’t the sort that has a finish line. So being slow to do privacy protection arguably isn’t very different to not offering much privacy protection at all.

To wit: One worrying development — on the non-cookie based tracking front — is detailed in this new paper by a group of privacy researchers who conducted an analysis of CNAME tracking (aka a DNS-based anti-tracking evasion technique) and found that use of the sneaky anti-tracking evasion method had grown by around a fifth in just under two years.

The technique has been raising mainstream concerns about ‘unblockable’ web tracking since around 2019 — when developers spotted the technique being used in the wild by a French newspaper website. Since then use has been rising, per the research.

In a nutshell the CNAME tracking technique cloaks the tracker by injecting it into the first-party context of the visited website — via the content being embedded through a subdomain of the site which is actually an alias for the tracker domain.

“This scheme works thanks to a DNS delegation. Most often it is a DNS CNAME record,” writes one of the paper authors, privacy and security researcher Lukasz Olejnik, in a blog post about the research. “The tracker technically is hosted in a subdomain of the visited website.

“Employment of such a scheme has certain consequences. It kind of fools the fundamental web security and privacy protections — to think that the user is wilfully browsing the tracker website. When a web browser sees such a scheme, some security and privacy protections are relaxed.”

Don’t be fooled by the use of the word ‘relaxed’ — as Olejnik goes on to emphasize that the CNAME tracking technique has “substantial implications for web security and privacy”. Such as browsers being tricked into treating a tracker as legitimate first-party content of the visited website (which, in turn, unlocks “many benefits”, such as access to first-party cookies — which can then be sent on to remote, third-party servers controlled by the trackers so the surveilling entity can have its wicked way with the personal data).
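
To illustrate why that first-party disguise matters, here is a simplified, hypothetical sketch in Python of the cookie-leak consequence: because the cloaked tracker lives on a subdomain of the visited site, cookies scoped to the site’s own domain get attached to requests that are actually answered by the tracker’s servers. The domain names are made up, and real cookie rules (paths, Secure, SameSite, the public suffix list) are omitted.

```python
# Simplified model of browser cookie attachment, showing how CNAME cloaking
# leaks first-party data. All domains are hypothetical.

# Cookies the first-party site has set, scoped to its own registrable domain.
cookies = [
    {"name": "session", "value": "secret-session-token", "domain": "news-site.example"},
    {"name": "analytics_id", "value": "user-42", "domain": "news-site.example"},
]

# DNS view of the world: the "first-party" subdomain is really an alias
# for the tracker's infrastructure.
cname_records = {"track.news-site.example": "collector.tracker.example"}

def cookies_sent_to(host):
    """Cookies a browser would attach to a request to `host`
    (domain-suffix matching, as with Domain=.news-site.example)."""
    return [c for c in cookies
            if host == c["domain"] or host.endswith("." + c["domain"])]

request_host = "track.news-site.example"
served_by = cname_records.get(request_host, request_host)

# The browser matches cookies against the name it sees (a first-party
# subdomain), but the bytes end up wherever the CNAME points.
print("request host:", request_host)
print("actually served by:", served_by)
print("cookies attached:", [c["name"] for c in cookies_sent_to(request_host)])
# -> both 'session' and 'analytics_id' reach collector.tracker.example
```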

So the risk is that a chunk of the clever engineering work being done to protect privacy by blocking trackers can be sidelined by getting under the anti-trackers’ radar.

The researchers found one (infamous) tracker provider, Criteo, reverting its tracking scripts to the custom CNAME cloak scheme when it detected the Safari web browser in use — as, presumably, a way to circumvent Apple’s ITP.

There are further concerns over CNAME tracking too: The paper details how, as a consequence of current web architecture, the scheme “unlocks a way for broad cookie leaks”, as Olejnik puts it — explaining how the upshot of the technique being deployed can be “many unrelated, legitimate cookies” being sent to the tracker subdomain.

Olejnik documented this concern in a study back in 2014 — but he writes that the problem has now exploded: “As the tip of the iceberg, we found broad data leaks on 7,377 websites. Some data leaks happen on almost every website using the CNAME scheme (analytics cookies commonly leak). This suggests that this scheme is actively dangerous. It is harmful to web security and privacy.”

The researchers found cookies leaking on 95% of the studied websites.

They also report finding leaks of cookies set by other third-party scripts, suggesting leaked cookies would in those instances allow the CNAME tracker to track users across websites.

In some instances they found that leaked information contained private or sensitive information — such as a user’s full name, location, email address and (in an additional security concern) authentication cookie.

The paper goes on to raise a number of web security concerns, such as when CNAME trackers are served over HTTP not HTTPS, which they found happened often, and could facilitate man-in-the-middle attacks.

Defending against the CNAME cloaking scheme will require some major browsers to adopt new tricks, per the researchers — who note that while Firefox (global marketshare circa 4%) does offer a defence against the technique, Chrome does not.

Engineers on the WebKit engine that underpins Apple’s Safari browser have also been working on making enhancements to ITP aimed at counteracting CNAME tracking.

In a blog post last November, ITP engineer John Wilander wrote that as defence against the sneaky technique “ITP now detects third-party CNAME cloaking requests and caps the expiry of any cookies set in the HTTP response to 7 days. This cap is aligned with ITP’s expiry cap on all cookies created through JavaScript.”

The Brave browser also announced changes last fall aimed at combating CNAME cloaking.

“In version 1.25.0, uBlock Origin gained the ability to detect and block CNAME-cloaked requests using Mozilla’s terrific browser.dns API. However, this solution only works in Firefox, as Chromium does not provide the browser.dns API. To some extent, these requests can be blocked using custom DNS servers. However, no browsers have shipped with CNAME-based adblocking protection capabilities available and on by default,” it wrote.

“In Brave 1.17, Brave Shields will now recursively check the canonical name records for any network request that isn’t otherwise blocked using an embedded DNS resolver. If the request has a CNAME record, and the same request under the canonical domain would be blocked, then the request is blocked. This solution is on by default, bringing enhanced privacy protections to millions of users.”
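
As a rough sketch of the kind of recursive check Brave describes (and that uBlock Origin performs on Firefox via the browser.dns API), the hypothetical Python below assumes the CNAME chain for each hostname has already been resolved; a real blocker uses an embedded resolver and full filter-list matching rather than the toy blocklist shown here.

```python
# Rough sketch of CNAME-uncloaking request blocking, in the spirit of the
# approach Brave describes above. The CNAME chains and blocklist are
# hypothetical stand-ins for a resolver's answers and a real filter list.

tracker_blocklist = {"tracker.example", "eulerian.example"}

# Pre-resolved CNAME chain: hostname -> canonical name it aliases to.
cname_chain = {
    "metrics.shop.example": "cdn.eulerian.example",
    "cdn.eulerian.example": "ingest.eulerian.example",
}

def is_blocked_domain(host):
    """True if `host` or any parent domain is on the blocklist."""
    parts = host.split(".")
    return any(".".join(parts[i:]) in tracker_blocklist for i in range(len(parts)))

def should_block(host, max_hops=10):
    """Block if the hostname, or anything it ultimately aliases to, is a known tracker."""
    seen = set()
    while host not in seen and max_hops > 0:
        if is_blocked_domain(host):
            return True
        seen.add(host)
        host = cname_chain.get(host, host)   # follow the alias, if any
        max_hops -= 1
    return is_blocked_domain(host)

print(should_block("metrics.shop.example"))   # True: aliases to a listed tracker
print(should_block("images.shop.example"))    # False: no cloaked tracker behind it
```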

But the browser with the largest marketshare, Chrome, has work to do, per the researchers, who write:

Because Chrome does not support a DNS resolution API for extensions, the [uBlock version 1.25 under Firefox] defense could not be applied to this browser. Consequently, we find that four of the CNAME-based trackers (Oracle Eloqua, Eulerian, Criteo, and Keyade) are blocked by uBlock Origin on Firefox but not on the Chrome version.

#anti-tracking, #chrome, #cookies, #firefox, #mozilla, #privacy, #tracker-blockers

The Rust programming language finds a new home in a non-profit foundation

Rust, the programming language — not the survival game — now has a new home: the Rust Foundation. AWS, Huawei, Google, Microsoft and Mozilla banded together to launch this new foundation today and put a two-year commitment to a million-dollar budget behind it. This budget will allow the project to “develop services, programs, and events that will support the Rust project maintainers in building the best possible Rust.”

Rust started out as a side project inside of Mozilla to develop an alternative to C/C++. Designed by Mozilla Research’s Graydon Hoare, with contributions from the likes of JavaScript creator Brendan Eich, Rust became the core language for some of the fundamental features of the Firefox browser and its Gecko engine, as well as Mozilla’s Servo engine. Today, Rust is the most-loved language among developers. But with Mozilla’s layoffs in recent months, a lot of the Rust team lost their jobs and the future of the language became unclear without a main sponsor, though the project itself has thousands of contributors and a lot of corporate users, so the language itself wasn’t going anywhere.

A large open-source project often needs some kind of guidance and the new foundation will provide this — and it takes a legal entity to manage various aspects of the community, including the trademark, for example. The new Rust board will feature 5 board directors from the 5 founding members, as well as 5 directors from project leadership.

“Mozilla incubated Rust to build a better Firefox and contribute to a better Internet,” writes Bobby Holley, Mozilla and Rust Foundation Board member, in a statement. “In its new home with the Rust Foundation, Rust will have the room to grow into its own success, while continuing to amplify some of the core values that Mozilla shares with the Rust community.”

All of the corporate sponsors have a vested interest in Rust and are using it to build (and re-build) core aspects of some of their stacks. Google recently said that it will fund a Rust-based project that aims to make the Apache webserver safer, for example, while Microsoft recently formed a Rust team, too, and is using the language to rewrite some core Windows APIs. AWS recently launched Bottlerocket, a new Linux distribution for containers that, for example, features a build system that was largely written in Rust.

 

#aws, #brendan-eich, #firefox, #free-software, #gecko, #google, #huawei, #javascript, #microsoft, #mozilla, #mozilla-foundation, #programming-languages, #rust, #servo, #software, #tc

Automattic, Mozilla, Twitter and Vimeo urge EU to beef up user controls to help tackle ‘legal-but-harmful’ content

Automattic, Mozilla, Twitter and Vimeo have penned an open letter to EU lawmakers urging them to ensure that a major reboot of the bloc’s digital regulations doesn’t end up bludgeoning freedom of expression online.

The draft Digital Services Act and Digital Markets Act are due to be unveiled by the Commission next week, though the EU lawmaking process means it’ll likely be years before either becomes law.

The Commission has said the legislative proposals will set clear responsibilities for how platforms must handle illegal and harmful content, as well as applying a set of additional responsibilities on the most powerful players which are intended to foster competition in digital markets.

In their joint letter, entitled ‘Crossroads for the open Internet’, the four tech firms argue that: “The Digital Services Act and the Democracy Action Plan will either renew the promise of the Open Internet or compound a problematic status quo – by limiting our online environment to a few dominant gatekeepers, while failing to meaningfully address the challenges preventing the Internet from realising its potential.”

On the challenge of regulating digital content without damaging vibrant online expression they advocate for a more nuanced approach to “legal-but-harmful” content — pressing a ‘freedom of speech is not freedom of reach’ position by urging EU lawmakers not to limit their policy options to binary takedowns (which they suggest would benefit the most powerful platforms).

Instead they suggest tackling problem (but legal) speech by focusing on content visibility as key and ensuring consumers have genuine choice in what they see — implying support for regulation to require that users have meaningful controls over algorithmic feeds (such as the ability to switch off AI curation entirely).

“Unfortunately, the present conversation is too often framed through the prism of content removal alone, where success is judged solely in terms of ever-more content removal in ever-shorter periods of time. Without question, illegal content — including terrorist content and child sexual abuse material — must be removed expeditiously. Indeed, many creative self-regulatory initiatives proposed by the European Commission have demonstrated the effectiveness of an EU-wide approach,” they write.

“Yet by limiting policy options to a solely stay up-come down binary, we forgo promising alternatives that could better address the spread and impact of problematic content while safeguarding rights and the potential for smaller companies to compete. Indeed, removing content cannot be the sole paradigm of Internet policy, particularly when concerned with the phenomenon of ‘legal-but-harmful’ content. Such an approach would benefit only the very largest companies in our industry.

“We therefore encourage a content moderation discussion that emphasises the difference between illegal and harmful content and highlights the potential of interventions that address how content is surfaced and discovered. Included in this is how consumers are offered real choice in the curation of their online environment.”

Twitter does already let users switch between a chronological content view or ‘top tweets’ (aka, its algorithmically curated feed) — so arguably it already offers users “real choice” on that front. That said, its platform can also inject some (non-advertising) content into a user’s feed regardless of whether a person has elected to see it — if its algorithms believe it’ll be of interest. So not quite 100% real choice then.

Another example is Facebook — which does offer a switch to turn off algorithmic curation of its News Feed. But it’s so buried in settings most normal users are unlikely to discover it. (Underlining the importance of default settings in this context; algorithmic defaults with buried user choice do already exist on mainstream platforms — and don’t sum to meaningful user control over what they’re exposed to.)

In the letter, the companies go on to write that they support “measures towards algorithmic transparency and control, setting limits to the discoverability of harmful content, further exploring community moderation, and providing meaningful user choice”.

“We believe that it’s both more sustainable and more holistically effective to focus on limiting the number of people who encounter harmful content. This can be achieved by placing a technological emphasis on visibility over prevalence,” they suggest, adding: “The tactics will vary from service to service but the underlying approach will be familiar.”

The Commission has signalled that algorithmic transparency will be a key plank of the policy package — saying in October that the proposals will include requirements for the biggest platforms to provide information on the way their algorithms work when regulators ask for it.

Commissioner Margrethe Vestager said then that the aim is to “give more power to users — so algorithms don’t have the last word about what we get to see, and what we don’t get to see” — suggesting requirements to offer a certain level of user control could be coming down the pipe for the tech industry’s dark patterns.

In their letter, the four companies also express support for harmonizing notice-and-action rules for responding to illegal content, to clarify obligations and provide legal certainty, and they call for such mechanisms to “include measures proportionate to the nature and impact of the illegal content in question”.

The four are also keen for EU lawmakers to avoid a one-size-fits-all approach to regulating digital players and markets. Given the DSA/DMA split, though, that looks unlikely: there will be at least two sizes involved in Europe’s rebooted rules, and most likely a lot more nuance.

“We recommend a tech-neutral and human rights-based approach to ensure legislation transcends individual companies and technological cycles,” they go on, adding a little dig at the controversial EU Copyright directive, which they describe as a reminder that there are “major drawbacks in prescribing generalised compliance solutions”.

“Our rules must be sufficiently flexible to accommodate and allow for the harnessing of sectoral shifts, such as the rise of decentralised hosting of content and data,” they go on, arguing a “far-sighted approach” can be ensured by developing regulatory proposals that “optimise for effective collaboration and meaningful transparency between three core groups: companies, regulators and civil society”.

Here the call is for “co-regulatory oversight grounded in regional and global norms”, as they put it, to ensure Europe’s rebooted digital rules are “effective, durable, and protective of individuals’ rights”.  

The joint push for collaboration that includes civil society contrasts with Google’s public response to the Commission’s DSA/DMA consultation, which mostly focused on lobbying against ex ante rules for gatekeepers (a designation Google is sure to receive).

Though on the question of liability for illegal content, the tech giant also lobbied for clear dividing lines between how illegal material must be handled and what’s “lawful-but-harmful.”

The full official details of the DSA and DMA proposals are expected next week.

A Commission spokesperson declined to comment on the specific positions set out by Twitter et al today, adding that the regulatory proposals will be unveiled “soon”. (December 15 is the slated date.)

Last week, setting out the bloc’s strategy for handling politically charged information and disinformation online, values and transparency commissioner Vera Jourova confirmed the forthcoming DSA will not set specific rules for the removal of “disputed content”.

Instead, she said there will be a beefed up code of practice for tackling disinformation — extending the current voluntary arrangement with additional requirements. She said these will include algorithmic accountability and better standards for platforms to cooperate with third-party fact-checkers. Tackling bots and fake accounts and clear rules for researchers to access data are also on the (non-legally-binding) cards.

“We do not want to create a ministry of truth. Freedom of speech is essential and I will not support any solution that undermines it,” said Jourova. “But we also cannot have our societies manipulated if there are organized structures aimed at sowing mistrust, undermining democratic stability and so we would be naive to let this happen. And we need to respond with resolve.”

#automattic, #digital-markets-act, #digital-regulation, #digital-services-act, #eu, #europe, #mozilla, #policy, #social, #twitter, #vimeo

Gifting a gadget? Check its creep factor on Mozilla’s ‘Privacy not included’ list of shame

Buying someone a gadget is a time-honored tradition, but these days it can be particularly fraught, considering you may buy them a fitness tracker that also monitors emotions, or a doorbell that snitches to the cops. Mozilla has put together a helpful list of popular gadgets with ratings on just how creepy they are.

“Privacy not included” has become an annual tradition for the internet rights advocate, and this year has an especially solid crop of creepy devices, given the uptick in smart speakers, smart security cameras, and smart litterboxes.

On the “creepy” end of the spectrum is… pretty much everything by Amazon except the Kindle. The devices in question send tons of data to Amazon by design, of course, but Mozilla feels the company hasn’t yet earned the trust to make that sort of thing acceptable. Facebook’s Portal earns a creepy spot for a similar reason.

Image Credits: Mozilla

Some random gadgets like a smart coffee maker and Moleskine smart notebook get creepy ratings because they don’t give the kinds of assurances about data and security that any company collecting that information should give. That sort of thing is common in smart gadgets: they may not be fundamentally creepy, but the company that makes them reserves the right to make them creepy at any time.

On the other end of the spectrum, Withings earns points for smart devices with reasonable privacy policies and security. Non-Ring smart doorbells get good marks, as do Garmin’s smartwatches.

These are informal rankings based on the potential for abuse or exposure of your data, and a good rating doesn’t mean a device is perfectly safe or private. If you’re buying one of these things, it’s best to immediately go through the settings and preferences and disable anything that smells invasive or creepy. You can always enable features again, but once you’ve put your data out there, it’s hard to get it back.

Check out the rest of the list here.

#gadgets, #hardware, #mozilla, #privacy, #security

Google calls DOJ’s antitrust lawsuit “deeply flawed” in GIF-laden blog response

Google was clearly anticipating today’s U.S. Department of Justice antitrust complaint filing: the company posted an extensive rebuttal of the lawsuit to its Keyword company blog. The post, penned by SVP of Global Affairs and Google Chief Legal Officer Kent Walker, suggests that the DOJ’s case is “deeply flawed” and “would do nothing to help consumers,” before going into a platform-by-platform description of why the company thinks its position in the market doesn’t amount to the kind of unfair dominance an antitrust case requires.

Google’s blog post is even sprinkled with GIFs – something that’s pretty common for the search giant when it comes to its consumer product launches. These GIFs include step-by-step screen recordings of setting search engines other than Google as your default in Chrome on both mobile and desktop. These processes are both described as “trivially easy” by Walker in the post, but they do look like a bit of an own-goal when you notice just how many steps it takes to get the job done on desktop in particular, including what looks like a momentary hesitation in where to click to drill down further for the “Make Default” command.

Image Credits: Google

Google also reportedly makes reference to companies choosing its search engine as their default because of the quality of its service, including both Apple and Mozilla (with a link drop for our own Frederic Lardinois). Ultimately, Google is making the argument that its search engine isn’t dominant because of a lack of viable options fostered by anti-competitive practices, but that its position is instead the result of building a quality product that consumers opt in to using from among a field of choices.

The DOJ’s full suit dropped this morning, and an initial analysis suggests the scrutiny may be too close to the election to have any significant teeth. There is some indication that a broader, bipartisan investigation with support from state-level attorneys general on both sides of the aisle could follow later, however, so it’s not necessarily all going to go away regardless of the election’s outcome.

#apple, #chrome-os, #doj, #freeware, #gif, #google, #google-search, #google-chrome, #kent-walker, #mozilla, #operating-systems, #search-engine, #search-engines, #software, #tc, #web-browsers

Mozilla shutters Firefox Send and Notes

Mozilla today announced that it will shutter two products: Firefox Send, the free file transfer service it already put on hiatus in July, and Firefox Notes, its note-taking extension and mobile app.

Firefox Send launched in March 2019. At the time, Mozilla described it as a file-sharing tool with a focus on privacy. That privacy is also what is now doing it in. When it paused the service earlier this year, the company said it was investigating reports of abuse, especially from malware groups. At the time, Mozilla said it was looking into how it could improve its abuse reporting capabilities and that it would add a requirement that users have a Firefox Account.

But instead of relaunching it, the organization decided to shutter the service.

“Firefox Send was a promising tool for encrypted file sharing,” the organization writes in today’s update. “Send garnered good reach, a loyal audience and real signs of value throughout its life. Unfortunately, some abusive users were beginning to use Send to distribute malware and as part of spear phishing attacks. This summer we took Firefox Send offline to address this challenge. In the intervening period, as we weighed the cost of our overall portfolio and strategic focus, we made the decision not to relaunch the service.”

Mozilla says that Firefox Notes was initially meant to be an experiment for testing new ways to sync encrypted data. “Having served that purpose, we kept the product as a little utility tool for Firefox and Android users,” Mozilla says, but it is now decommissioning it and shutting it down completely in early November.

It’s hard not to look at today’s announcement in the context of the overall challenges Mozilla is going through. If the organization were in a better financial position (and hadn’t laid off around 25% of its staff this year), it might have kept Notes alive and perhaps tried to rework Send. Now, however, it has fewer options to experiment, especially with free services, as it tries to refocus on Firefox and a few other core projects.

#deadpool, #firefox, #mozilla, #tc

Mozilla cuts 250 jobs, says Firefox development will be affected

The Firefox logo.

Enlarge (credit: Getty Images | Anadolu Agency)

Mozilla Corporation is laying off 250 people, about a quarter of its workforce, explaining that the COVID-19 pandemic has significantly lowered revenue. Mozilla previously had about 1,000 employees.

The Firefox maker’s CEO, Mitchell Baker, announced the job cuts yesterday, writing that “economic conditions resulting from the global pandemic have significantly impacted our revenue. As a result, our pre-COVID plan was no longer workable.”

In a memo sent to employees, Baker said the 250 job cuts include “closing our current operations in Taipei, Taiwan.” The layoffs will reduce Mozilla’s workforce in the United States, Canada, Europe, Australia, and New Zealand. Another 60 people will be reassigned to different teams.

Read 10 remaining paragraphs | Comments

#biz-it, #firefox, #mozilla

Mozilla lays off 250

Mozilla today announced a major restructuring of its commercial arm, the Mozilla Corporation, that will see about 250 employees lose their jobs and the shuttering of the organization’s operations in Taipei, Taiwan. The move comes after the organization laid off about 70 employees earlier this year. The most recent numbers, from 2018, put Mozilla at about 1,000 employees worldwide.

Citing falling revenues because of the global pandemic, Mozilla’s executive chairwoman and CEO Mitchell Baker said in an internal message that the company’s pre-COVID plans were no longer feasible.

“Pre-COVID, our plan for 2020 was a year of change: building a better internet by accelerating product value in Firefox, increasing innovation, and adjusting our finances to ensure financial stability over the long term,” Baker writes. “We started with immediate cost-saving measures such as pausing our hiring, reducing our wellness stipend and cancelling our All-Hands. But COVID-19 has accelerated the need and magnified the depth for these changes. Our pre-COVID plan is no longer workable. We have talked about the need for change — including the likelihood of layoffs — since the spring. Today these changes become real.”

Laid-off employees will receive severance that is at least equivalent to their full base pay through December 31, and they will still receive their individual performance bonuses for the first half of the year, as well as part of their company bonus and the standard COBRA health insurance benefits.

Mozilla promises that its smaller organization will be able to act more “quickly and nimbly” and that it will work more closely with partners that share its goal of an open web ecosystem. At the same time, Baker wants Mozilla to remain a “technical powerhouse of the internet activist movement,” yet she also acknowledges that the organization as a whole must also focus on economics and work on creating sustainable business models that still stay true to its mission.

“We are also restructuring to put a crisper focus on new product development and go-to-market activities,” writes Baker. “In the long run, I am confident that the new organizational structure will serve our product and market impact goals well, but we will talk in detail about this in a bit.”

On the product side, Mozilla will continue to focus on Firefox, as well as Pocket, its Hubs virtual reality project, its new VPN service, WebAssembly, and other privacy and security products. But it is also launching a new Design and UX team, as well as a new applied machine learning team to help bring machine learning to its products.

#covid, #firefox, #mitchell-baker, #mozilla, #open-source, #personnel, #taipei, #taiwan, #tc, #web-browsers

We test Mozilla’s new Wireguard-based $5/mo VPN service

Mozilla's new Wireguard-based service offers a very simple, attractive, and cleanly functional VPN user interface.

Enlarge / Mozilla’s new Wireguard-based service offers a very simple, attractive, and cleanly functional VPN user interface. (credit: Jim Salter)

Mozilla, the open source company best known for the Firefox Web browser, made its VPN service generally available in the United States this month. The cross-platform VPN is based on Wireguard and delivered in partnership with well-known and especially techie-friendly VPN provider Mullvad. Mullvad itself was, to the best of our knowledge, the first publicly available VPN provider to offer Wireguard support back in 2017.

The Mozilla VPN service costs $4.95 per month and offers server endpoints in 30-plus countries. It currently has VPN clients available for Windows 10, Android, and iOS—but users of other operating systems, such as MacOS and Linux, are going to have to wait. Mozilla says that support for MacOS and Linux is coming soon—but unfortunately, even if you’re an advanced user who understands Wireguard configs, you can’t just roll your own connection now.
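
For context, a hand-rolled WireGuard connection normally comes down to a short client config like the sketch below. The keys, addresses, and endpoint here are placeholders for illustration only, not values Mozilla’s service actually exposes.

    [Interface]
    PrivateKey = <client-private-key>    # placeholder, not a real key
    Address = 10.64.0.2/32
    DNS = 10.64.0.1

    [Peer]
    PublicKey = <server-public-key>      # placeholder, not a real key
    AllowedIPs = 0.0.0.0/0, ::/0
    Endpoint = vpn.example.com:51820     # hypothetical endpoint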

The service authenticates via a Firefox account. When you sign up for a Mozilla VPN subscription, you’ll be asked to create a Firefox account if you don’t already have one. The Firefox account is an SSO (Single Sign On) service that uses OAuth2, much like a Google account, but it’s not tied to a Google account, so even if you sign up using a Gmail address tied to an Android device, that device won’t be automatically logged in.

Read 18 remaining paragraphs | Comments

#mozilla, #tech, #uncategorized, #vpn, #wireguard

Investors are browsing for Chromium startups

A few months ago, we declared that “browsers are interesting again,” thanks to increased competition among the major players. Now, as more startups are getting onboard, things are getting downright exciting.

A small but growing number of projects are building web browsers with a more specific type of user in mind. Whether that perceived user is prioritizing improved speed, organization or toolsets aligned with their workflow, entrepreneurs are building these projects with the assumption that Google’s one-size-fits-all approach with Chrome leaves plenty of users with a suboptimal experience.

Building a modern web browser from scratch isn’t the most feasible challenge for a small startup. Luckily, open-source projects have enabled developers to build their evolved web browsers on the bones of the apps they aim to compete with. For browsers that are not Safari, Firefox, Chrome or a handful of others, Google’s Chromium open-source project has proven to be an invaluable asset.

Since Google first released Chrome in late 2008, the company has also been updating Chromium. The source code powers the Microsoft Edge and Opera web browsers, but also allows smaller developer teams to harness the power of Chrome when building their own apps.

These upstart browsers have generally sought to compete with the dominant powers on the privacy front, but as Chrome and Safari have begun shipping more features to help users manage how they are tracked online, entrepreneurs are widening their product ambitions to tackle usability upgrades.

Aiding these heightened ambitions is increased attention on custom browsers from investors. Mozilla co-founder Brendan Eich’s Brave has continued to scale, announcing last month that it had 5 million daily active users of its privacy-centric browser.

Today, Thrive Capital’s Josh Miller spoke with TechCrunch about his project The Browser Company, which has raised $5 million from some notable Silicon Valley operators. Other hot upstart efforts include Mighty, a subscription-based, remote-streamed Chrome startup from Mixpanel founder Suhail Doshi, and Blue Link Labs, a recent entrant that’s building a decentralized peer-to-peer browser called Beaker.

Mighty

As front-end developers have gotten more ambitious and web applications have gotten more complex, Chrome has earned the reputation of being quite the RAM hog.

#brave, #browsers, #chromium, #ev-williams, #founders-fund, #freeware, #github, #google, #google-chrome, #mighty, #mixpanel, #mozilla, #opera, #slack-fund, #tc, #thrive-capital, #web-browsers, #y-combinator

Comcast, Mozilla strike privacy deal to encrypt DNS lookups in Firefox

The Firefox logo.

Enlarge (credit: Getty Images | Anadolu Agency)

Comcast is partnering with Mozilla to deploy encrypted DNS lookups on the Firefox browser, the companies announced today. Comcast’s version of DNS over HTTPS (DoH) will be turned on by default for Firefox users on Comcast’s broadband network, but people will be able to switch to other options like Cloudflare and NextDNS. No availability date was announced.

Comcast is the first ISP to join Firefox’s Trusted Recursive Resolver (TRR) program, Mozilla said in today’s announcement. Cloudflare and NextDNS were already in Mozilla’s program, which requires encrypted-DNS providers to meet privacy and transparency criteria and pledge not to block or filter domains by default “unless specifically required by law in the jurisdiction in which the resolver operates.”

“Adding ISPs in the TRR program paves the way for providing customers with the security of trusted DNS resolution, while also offering the benefits of a resolver provided by their ISP such as parental control services and better optimized, localized results,” the announcement said. “Mozilla and Comcast will be jointly running tests to inform how Firefox can assign the best available TRR to each user.”
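
For a sense of what DNS over HTTPS actually does, the minimal sketch below resolves a hostname through Cloudflare’s public DoH JSON endpoint, one of the resolvers mentioned above. It illustrates the protocol in general terms only; it is not how Firefox or Comcast’s resolver is configured internally.

    import json
    import urllib.request

    def doh_lookup(hostname, record_type="A"):
        # Query the resolver over HTTPS instead of plaintext DNS on UDP port 53.
        url = ("https://cloudflare-dns.com/dns-query"
               f"?name={hostname}&type={record_type}")
        req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)
        # Each "Answer" entry carries the resolved data (an IP address for A records).
        return [record["data"] for record in answer.get("Answer", [])]

    print(doh_lookup("example.com"))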

Read 10 remaining paragraphs | Comments

#biz-it, #comcast, #dns-over-https, #firefox, #mozilla, #policy

Ameelio wants to take on for-profit prison calling rackets after starting with free letters to inmates

Among the many problems with the prison system are enormous fees for things like video calls, which a handful of companies provide at grossly inflated rates. Ameelio hopes to step in and provide free communication options to inmates. Its first product, sending paper letters, is being welcomed with open arms by those with incarcerated loved ones.

Born from the minds of Yale Law students, Ameelio is their attempt to make a difference in the short term while pushing for reform in the long term, said co-founder and CEO Uzoma Orchingwa.

“I was studying mass incarceration, and the policy solutions I was writing about were going to take a long time to happen,” Orchingwa said. “It’s going to be a long battle before we can make even little inroads. So I was thinking, what can I do in the interim while I work on the longer term project of prison reform?”

He saw reports that inmates with regular communication with loved ones have better outcomes when released, but also that in many prisons, that communication was increasingly expensive and restricted. Some prisons have banned in-person meetings altogether — not surprising during a pandemic — leaving video calling at extortionate rates the only option for speaking face to face with a loved one.

Sometimes costing a dollar a minute, these fees add up quickly and, naturally, this impacts already vulnerable populations the most. Former FCC Commissioner Mignon Clyburn, for whom this was an issue of particular interest during her term, called the prison communication system “the clearest, most glaring type of market failure I’ve ever seen as a regulator.”

It’s worth noting that these private, expensive calling services weren’t always the norm, but were born fairly recently as the private prison industry has expanded and multiplied the ways it makes money off inmates. Some states ban the practice, but others have established relationships with the companies that provide these services — and a healthy kickback to the state and prison, of course.

This billion-dollar industry is dominated by two companies: Securus and Global Tel Link. The service they provide is fairly rudimentary compared with those we on the outside take for granted. Video and audio calls are scheduled, recorded, skimmed for keywords, and kept available to authorities for a few months in case they’re needed.

At a time when video calls are being provided for free to billions around the world who have also been temporarily restricted from meeting in person, charging at all for it seems wrong — and charging a dollar a minute seems monstrous.

Ameelio’s crew of do-gooder law students and developers don’t think they can budge the private prison system overnight, so they’re starting with a different product, but one that also presents difficulties to families trying to communicate with inmates: letters.

Written mail is a common way to keep in contact with someone in prison, but there are a few obstacles that may prevent the less savvy from doing so. Ameelio facilitates this by providing an up-to-date list of correct addresses and conventions for writing to any of the thousands of criminal justice facilities around the country, as well as the correct way to look up and identify the inmate you’re trying to contact — rarely as simple as just putting their name at the top.

“The way prison addresses work, the inmate address is different from the physical address. So we scraped addresses and built a database for that, and built a way to find the different idiosyncrasies, like how many lines are necessary, what to put on each line, etc,” said co-founder Gabe Saruhashi.
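
The kind of per-facility template that description implies might look roughly like the sketch below; the field names and formatting rules are hypothetical illustrations, not Ameelio’s actual schema.

    from dataclasses import dataclass

    @dataclass
    class FacilityAddressTemplate:
        facility_name: str
        mailing_address: str      # often a PO box rather than the street address
        city: str
        state: str
        zip_code: str
        inmate_line_format: str   # e.g. "{name}, #{inmate_id}" -- varies by facility

        def envelope_lines(self, name, inmate_id):
            # Each facility dictates how many lines are needed and what goes on each one.
            return [
                self.inmate_line_format.format(name=name, inmate_id=inmate_id),
                self.facility_name,
                self.mailing_address,
                f"{self.city}, {self.state} {self.zip_code}",
            ]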

Once that’s sorted, you write your letter, attach a photo if you want, and it’s printed out and sent (via direct-mail-as-a-service startup Lob). It’s easy to see how removing the friction and cost of printing, addressing and so on would lead to more frequent communication.

Since starting a couple of months ago and spreading word of the service through Facebook groups and other informal means, they’ve already sent more than 4,000 letters. But while it’s nice for people to be able to send letters, Ameelio plans to cater to larger organizations that use mail at much greater scale.

“The communications challenges that families have are the same challenges that criminal justice organizations and lawyers have when communicating with their clients,” explained Orchingwa. They have to manage the addresses, letter-writing and sending, and a network of people to check on recipients and other follow-up actions. “We’re talking to them, and a lot were very interested in the service we’re offering, so we’re going to roll out a version for organizations. We’re creating a business model in which these organizations, and some of them are well funded, can pay us back but also pay it forward and help keep it free for others.”

How an organization might use and track letter-writing campaigns.

Sending letters is just the opening play for Ameelio, though; it’s also a way to make the contacts the team needs and to research the market. Outcry against the private calling systems has been constant, but the heterogeneous nature of prisons run under state policies means “we don’t have one system, we have 51 separate systems,” as Orchingwa put it. That, and the fact that it makes a fair amount of money.

“There’s a lot of movement around getting Securus and Global Tel out,” he said, “But it would shift from families to the state paying, so they need to make back the money they were making from kickbacks.”

Some states have banned paid calls or never allowed them, but others are only changing their policies now in response to external pressure. It’s with these that Ameelio hopes to succeed first.

“We can start in states where there’s no strong relationship to these companies,” said Orchingwa. “You’re going to have state and county officials being asked by their constituents, ‘why are we using them when there’s a free alternative?’ ”

You may wonder whether it’s possible for a fresh young startup to build a video calling platform ready for deployment in such a short time. The team was quick to explain that the actual video call part of the product is something that, like sending letters, can be accomplished through a third party.

“The barrier right now is not at all the video infrastructure – enterprise and APIs will provide that. We already have an MVP of how that will look,” said Saruhashi. Even the hardware is pretty standard — just regular Android tablets stuck to the wall.

“The hard part is the dashboard for the [Department of Corrections],” Saruhashi continued. “They need a way to manage connections that are coming in, schedule conversations, get logs and review them when they’re done.”

But they’re also well into the development of that part, which ultimately is also only a medium-grade engineering challenge, already solved in many other contexts.

Currently the team is evaluating participation in a number of accelerators, and is already part of Mozilla’s Spring MVP Lab, the precursor to a larger incubator effort announced earlier today. “We love them,” said Mozilla’s Bart Decrem.

Right now the company is definitely early stage, with more plans than accomplishments, and they’re well aware that this is just the start — just as establishing better communications options is just the start for more comprehensive reform of the prison and justice system.

#ameelio, #jail, #mozilla, #prison, #startups, #tc, #video-calling, #video-conferencing

Mozilla goes full incubator with ‘Fix The Internet’ startup lab and early stage investments

After testing the waters this spring with its incubator-esque MVP Lab, Mozilla is doubling down on the effort with a formal program dangling $75,000 investments in front of early stage companies. The focus on “a better society” and the company’s open-source clout should help differentiate it from the other options out there.

Spurred on by the success of a college hackathon using a whole four Apple Watches in February, Mozilla decided to try a more structured program in the spring. The first test batch of companies is underway, having started in April with an eight-week program offering $2,500 per team member and $40,000 in prizes to give away at the end. Developers in a variety of domains were invited to apply, as long as they fit the themes of empowerment, privacy, decentralization, community, and so on.

It drew the interest of some 1,500 people in 520 projects, and 25 were chosen to receive the full package and stipend during the development of their MVP. The rest were invited to an “Open Lab” with access to some of Mozilla’s resources.

One example of what they were looking for is Ameelio, a startup whose members are hoping to render paid video calls in prisons obsolete with a free system, and provide free letter delivery to inmates as well. I wrote about the company here.

“The mission of this incubator is to catalyze a new generation of internet products and services where the people are in control of how the internet is used to shape society,” said Bart Decrem, a Mozilla veteran (think Firefox 1.0) and one of the principals at the Builders Studio. “And where business models should be sustainable and valuable, but do not need to squeeze every last dollar (or ounce of attention) from the user.”

“We think we are tapping into the energy in the student and professional ‘builder communities’ around wanting to work on ideas that matter. That clarion call really resonates,” he said. Not only that, but students with canceled internships are showing up in droves, it seems — mostly computer science, but design and other disciplines as well. There are no restrictions on applicants, like country of origin, previous funding, or anything like that.

The new incubator will be divided into three tiers.

First is the “Startup Studio,” which involves a $75,000 investment, “a post-money SAFE for 3.5% of the company when the SAFE converts (or we will participate in an already active funding round),” Decrem clarified.
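
As rough context (our back-of-the-envelope arithmetic, not terms Mozilla has spelled out): if the full $75,000 converts for 3.5% on a post-money SAFE, the implied post-money valuation cap is about $2.14 million.

    investment = 75_000
    ownership = 0.035
    print(f"${investment / ownership:,.0f}")  # implied post-money cap: ~$2,142,857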

Below that, as far as pecuniary commitment goes, is the “MVP Lab,” similar to the spring program but offering a total of $16,000 per team. And below that is the Open Lab again, this time with ten $10,000 prizes rather than awards for just the top three.

There are no hard numbers on how many teams will make up the two subsidized tiers, but think 20-30 total as opposed to 50 or 100. Meanwhile, collaboration, cross-pollination, and open source code is encouraged, as you might expect in a Mozilla project. And the social good aspect is strong as well, as a sampling of the companies in the spring batch shows.

Neutral is a browser plugin that shows the carbon footprint of your Amazon purchases, adding some crucial guilt to transactions we forget are powered by footsore humans and gas-guzzling long-distance goods transport. Meething, Cabal, and Oasis are taking on video conferencing, team chat, and social feeds from a decentralized standpoint, using the miracles of modern internet architecture to accomplish with distributed systems what once took centralized servers.

This summer will see the program inaugurated, but it’s only “the beginning of a multiyear effort,” Decrem said.

#accelerator, #funding, #incubator, #incubators, #mozilla, #open-source, #privacy, #startups, #venture-capital

Firefox gets a better password manager

Mozilla today launched version 76 of its Firefox browser, and the release brings a couple of new features that you’ll likely notice if you’re already using the open-source browser.

The highlight of today’s release is the enhanced password manager. Firefox Lockwise, as it is called these days, will now ask you for your device password when you try to copy and paste credentials from your “Logins and Passwords” page in the browser. After you’ve confirmed your device password, you can see and copy your credentials for five minutes. This should make it a bit harder for others to access password-protected sites on your machine, especially if you’re on a computer you regularly share with others.

Also new to Lockwise are alerts for vulnerable passwords that are identical to those that have been stolen in a known breach (but you would never reuse a password, right?), as well as warnings when a website you use has been breached and your logins and passwords were likely stolen.

In addition, Lockwise’s password generator now works with more sites and will generate 12 random letters, numbers and symbols for you to use as your password.
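
For the curious, producing that kind of password yourself takes only a few lines; this is a generic sketch using Python’s secrets module, not Lockwise’s actual implementation.

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length=12):
        # Draw each character with a cryptographically secure random choice.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())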

With version 76, Firefox now also includes an improved picture-in-picture mode for video sites like YouTube. With this, you can keep watching a video in the corner of your screen while you continue with other tasks (though you can’t browse away from YouTube, for example, while you’re watching in the pop-out window). I wish I could have more control over the size of that picture-in-picture window because it’s pretty large, but that’s just how it is for now. New in version 76 is the ability to double-click on the popped out video to make it fullscreen. A small but welcome new feature.

Update: we clarified that PiP mode itself is not new. Only the double-click to fullscreen is.

If you’re an avid Zoom user, you’ll be happy to hear that Mozilla has made a few changes that allow you to use the service in Firefox without the need for any additional downloads. And WebRender, which uses the GPU to render websites faster, is now enabled on even more machines.

#firefox, #mozilla, #security, #tc

Firefox 75 overhauls the browser’s address bar

Today, Mozilla rolled out Firefox 75, its latest update for the open source Web browser. The big change is a redesign of the address bar, which comes with some tweaks to how searches work when you’re using it.

When you begin using the new search field, you’ll notice that it looks a little different; it’s larger, and it has a larger font to match.

The drop-down that appears when you click in the search bar will show you multiple options for where to search, like Google or Amazon. That same view will show additional keyword suggestions as you type, with the goal being exposing “additional popular keywords that you might not have thought of to narrow your search even further,” according to the blog post announcing the redesign.

Read 5 remaining paragraphs | Comments

#firefox, #firefox-75, #mozilla, #tech, #web-browser