After losing a similar battle in Australia, Meta continues to resist efforts by a growing number of countries to require the social media company to pay for news linked on platforms like Facebook and Instagram. On Saturday, Meta announced that it would end news access for Canadian Facebook and Instagram users if the country’s Online News Act is passed, Reuters reported.
“A legislative framework that compels us to pay for links or content that we do not post, and which are not the reason the vast majority of people use our platforms, is neither sustainable nor workable,” Meta spokesperson Lisa Laventure said.
Tomorrow is the day that Meta expected would finally end its Cambridge Analytica woes. That’s when a US district court in California is scheduled to preliminarily approve a $725 million settlement agreement that Meta believed would release the company from all related claims.
However, just days before Meta could cross that apparent finish line, the state of New Mexico has moved to intervene. In a court filing yesterday, New Mexico argued that Meta might be misinterpreting its settlement agreement and claimed that, for New Mexico citizens, the Cambridge Analytica scandal is far from resolved.
To clarify whether Meta’s agreement releases New Mexico’s and others’ claims and to ensure that the California court doesn’t “inadvertently or otherwise release claims” raised in New Mexico’s still-pending parallel action against Meta, New Mexico’s attorneys have asked to be heard “briefly” at tomorrow’s hearing.
The Meta Quest Pro at a Best Buy demo station in October 2022.
The next Meta Quest headset, planned for launch this year, will be thinner, twice as powerful, and slightly more expensive than the Quest 2. That’s according to a leaked internal hardware roadmap presentation obtained by The Verge that also includes plans for high-end, smartband-controlled, ad-supported AR glasses by 2027.
The “Quest 3” will also include a new “Smart Guardian” system that lets users walk around safely in “mixed reality,” according to the presentation. That will come ahead of a more “accessible” headset, codenamed Ventura, which is planned for release in 2024 at “the most attractive price point in the VR consumer market.”
That Ventura description brings to mind John Carmack’s October Meta Connect keynote, in which he highlighted his push for a “super cheap, super lightweight headset” targeting “$250 and 250 grams.” Carmack complained that Meta is “not building that headset today, but I keep trying.” Months later, Carmack announced he was leaving the company, complaining that he was “evidently not persuasive enough” to change the company for the better.
Over the past few years, the National Center for Missing and Exploited Children (NCMEC) saw worrying trends indicating that teen sextortion is on the rise online and, in extreme cases, leads to suicides. Between 2019 and 2021, the number of sextortion cases reported on NCMEC’s online tipline more than doubled. At the start of 2022, nearly 80 percent of those cases involved teens suffering financial sextortion—pressured to send cash or gift cards or else see their sexualized images spread online.
NCMEC already manages a database that works to stop the spread of child sexual abuse materials (CSAM), but that tool wouldn’t work for confused teens ashamed of struggling with sextortion, because the information it gathers with every report is not anonymized. Teens escaping sextortion needed a different kind of tool, NCMEC realized: one that removed all shame from the reporting process and worked more proactively, allowing minors to anonymously report sextortion before any of their images ever circulate online.
Today, NCMEC officially launched that tool—Take It Down. Since its soft launch in December, more than 200 people have already used it to block uploads or remove images of minors shared online, NCMEC’s communications and brand vice president, Gavin Portnoy, told Ars.
After a jury unanimously decided last September that Meta owed $175 million to walkie-talkie app-maker Voxer for patent infringement, Meta tried to avoid paying up by requesting a judge either reject the jury’s verdict or give Meta a new trial. This week, a federal judge denied Meta’s request, making it likely that Meta will have to pay all those running royalties for illegally copying Voxer’s technology and using it to launch Facebook Live and Instagram Live.
Meta had argued seemingly everything it could to get out of paying millions in damages. It questioned whether the jury’s decision was reasonable, claiming that Voxer’s lawyer had made comments that biased the jury. In Meta’s view, no reasonable jury would have found that Meta infringed Voxer’s patented video-streaming and messaging technologies. Further, even if everyone agreed that there was infringement, Meta argued that the damages were too extreme and improperly calculated by Voxer’s expert. Instead of owing running royalties, Meta felt it should be required to pay either no damages or a lump sum.
In his decision, US District Judge Lee Yeakel affirmed that substantial evidence supported the jury’s verdict of patent infringement and sufficient evidence supported the damages that the jury awarded Voxer.
Attorney Eric Schnapper speaks to reporters outside of the US Supreme Court following oral arguments for the case Twitter v. Taamneh on February 22, 2023, in Washington, DC. (credit: Anna Moneymaker / Staff | Getty Images North America)
Today it was Twitter’s turn to argue before the Supreme Court in another case this week that experts fear could end up weakening Section 230 protections for social networks hosting third-party content. In Twitter v. Taamneh, the Supreme Court must decide whether, under the Justice Against Sponsors of Terrorism Act (JASTA), online platforms should be held liable for aiding and abetting terrorist organizations that are known to be using their services to recruit fighters and plan attacks.
After close to three hours of arguments, justices still appear divided on how to address the complicated question, and Twitter’s defense was not as strong as some justices seemingly thought it could be.
Twitter attorney Seth Waxman argued that the social network and other defendants, Google and Meta, should not be liable under JASTA, partly because the act of providing the same general services—which anyone on their platforms can access—does not alone constitute providing substantial assistance to an individual planning a terrorist attack.
Yesterday, Meta CEO Mark Zuckerberg announced on Instagram that his company is testing out a new subscription service to help Facebook and Instagram users “get extra impersonation protection against accounts claiming to be you.” Called Meta Verified, the monthly service will cost $11.99 on the web and $14.99 on iOS and Android. It’s being rolled out in Australia and New Zealand starting this week, and there are plans to offer the service in other countries soon.
Reactions on Instagram were mixed, with approximately 35,000 users reacting with thumbs up, hearts, tears, laughter, anger, and shock emoji.
A Meta blog went into further detail on how the monthly subscription service works. Users will show a government ID to authenticate their accounts and will receive a verified badge. Meta will then begin proactively monitoring to block impostor accounts while providing additional account support. Similar to Twitter Blue, the Meta Verified service offers users “increased visibility and reach.” Announced months after Twitter Blue’s relaunch, the monthly subscription service is designed partly in response to top creator requests “for broader access to verification and account support,” Meta’s blog said. Subscribers will also have access to “exclusive features” like stickers to help their posts stand out even more from basic accounts.
Like any social media platform, Truth Social relies on advertising to drive revenue, but as Twitter’s highly publicized struggle to retain advertisers has shown, it’s hard to attract major brands when a company’s content moderation capabilities appear undependable. That’s likely why Truth Social—which prides itself on sparking an “open, free, and honest global conversation” by largely avoiding content moderation altogether—has seemingly attracted no major advertisers.
A New York Times analysis of hundreds of Truth Social ads showed that the social media platform’s strategy for scraping by is taking ads from just about anyone. Currently, the platform, which was founded by former President Donald Trump, is attracting ad dollars from “hucksters and fringe marketers” who are peddling products like Trump tchotchkes, gun accessories, and diet pills, the Times reported.
In addition to Truth Social’s apparently struggling ad business, SFGate reported in November that Truth Social’s user base also seems to be dwindling. According to The Righting, a group monitoring conservative media, Truth Social traffic peaked last August at 4 million unique visitors but dropped to 2.8 million by October.
Meta will restore Donald Trump’s access to his Facebook and Instagram accounts “in the coming weeks” but “with new guardrails in place” to prevent real-world harm, the company said in a blog post yesterday.
Facebook suspended Trump “following his praise for people engaged in violence at the Capitol on January 6, 2021,” Meta President of Global Affairs Nick Clegg noted in the blog post. “We then referred that decision to the Oversight Board—an expert body established to be an independent check and balance on our decision-making. The Board upheld the decision but criticized the open-ended nature of the suspension and the lack of clear criteria for when and whether suspended accounts will be restored, directing us to review the matter to determine a more proportionate response.”
After the board review, Facebook decided to make Trump’s suspension last until at least January 7, 2023. “Now that the time period of the suspension has elapsed, the question is not whether we choose to reinstate Mr. Trump’s accounts, but whether there remain such extraordinary circumstances that extending the suspension beyond the original two-year period is justified,” Clegg wrote yesterday.
It’s fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.
If we’re going to take any measures to address this—something it’s not clear that social media services are interested in doing—then we have to understand why sharing misinformation is so appealing to people. An earlier study indicated that people care about making sure that what they share is accurate, but in many cases they fail to check. A new study elaborates on that finding by getting into why this disconnect develops: For many users, clicking “share” becomes a habit, something they pursue without any real thought.
How vices become habits
People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternately, it could signal to their allies that they’re on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn’t account for a significant amount of the misinformation that gets shared.
That violence was fueled by false election interference claims, mirroring attacks in the United States on January 6, 2021. Previously, Facebook owner Meta said it was dedicated to blocking content designed to incite more post-election violence in Brazil. Yet today, the human rights organization Global Witness published the results of a test showing that Meta is seemingly still accepting ads that do exactly that.
Global Witness submitted 16 ads to Facebook, with some calling on people to storm government buildings, others describing the election as stolen, and some even calling for the deaths of children whose parents voted for Brazil’s new president, Luiz Inácio Lula da Silva. Facebook approved all but two ads, which Global Witness digital threats campaigner Rosie Sharpe said proved that Facebook is not doing enough to enforce its own ad policies restricting such violent content.
Police hold back Trump supporters outside the US Capitol’s Rotunda on January 6, 2021. (credit: Getty Images | Olivier Douliery/AFP)
Former US President Donald Trump yesterday petitioned Facebook owner Meta to restore his account, two years after he was banned for egging on his supporters as they attacked the US Capitol. “We believe that the ban on President Trump’s account on Facebook has dramatically distorted and inhibited the public discourse,” Trump’s campaign wrote in a letter to Meta Tuesday, according to NBC News.
The Trump team’s letter also reportedly said that a continuation of the ban would constitute a “deliberate effort by a private company to silence Mr. Trump’s political voice… every day that President Trump’s political voice remains silenced furthers an inappropriate interference in the American political and election process.” The letter was sent to Meta CEO Mark Zuckerberg, Meta VP of Global Affairs Nick Clegg, and Facebook VP of Public Policy Joel Kaplan.
Twitter already reversed its ban on Trump shortly after Elon Musk bought the company. Trump hasn’t tweeted yet. He formed his own social network, Truth Social, after being banned from Facebook and Twitter.
Meta said it’s suing “scraping-for-hire” service Voyager Labs for allegedly using fake accounts, proprietary software, and a sprawling network of IP addresses to surreptitiously collect massive amounts of personal data from users of Facebook, Instagram, Twitter, and other social networking sites.
“Defendant created and used over 38,000 fake Facebook user accounts and its Surveillance Software to scrape more than 600,000 Facebook users’ viewable profile information, including posts, likes, friends lists, photos, and comments, and information from Facebook Groups and Pages,” lawyers wrote in Meta’s complaint. “Defendant designed the Surveillance Software to conceal its presence and activity from Meta and others, and sold and licensed for profit the data it scraped.”
“Bringing individuality to light”
Among the California-based Facebook users to have their data scraped, Meta said, were “employees of nonprofit organizations, universities, news media organizations, health care facilities, the armed forces of the United States, and local, state, and federal government agencies, as well as full-time parents, retirees, and union members.” Meta said the data collection and use of fake accounts violate its terms of service.
A view of a broken window after the supporters of Brazil’s former President Jair Bolsonaro participated in an anti-democratic riot at Planalto Palace in Brasilia, Brazil on January 9, 2023. (credit: Anadolu Agency / Contributor | Anadolu)
Claiming “election interference” in Brazil, thousands of rioters on Sunday broke into government buildings in the nation’s capital, Brasília. The rioters relied on social media and messaging apps to coordinate their attacks and evade government detection, The New York Times reported, following a similar “digital playbook” to the one used in the United States Capitol attacks on January 6, 2021. Now, social media platforms like Facebook and YouTube have begun removing content praising the most recent attacks, Reuters reported, marking this latest anti-democratic uprising as another sensitive event requiring widespread content removal.
Disinformation researchers told the Times that Twitter and Telegram played a central role for those involved with organizing the attacks, but Meta apps Facebook and WhatsApp were also used. Twitter has not responded to reports, but a Meta spokesperson told Ars and a Telegram spokesperson told Reuters that the companies have been cooperating with Brazilian authorities to stop the spread of content that could incite further violence. Both digital platforms confirmed an uptick in content moderation efforts starting before the election took place—with many popular social media platforms seemingly bracing for the riots after failing to quickly remove calls to violence during the US Capitol attacks.
“In advance of the election, we designated Brazil as a temporary high-risk location and have been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace, and other federal buildings,” a Meta spokesperson told Ars. “We’re also designating this as a violating event, which means we will remove content that supports or praises these actions.”
A lawsuit filed by Seattle Public Schools alleges that social media is one of the main causes of “a youth mental health crisis” and blames social media companies for “exploit[ing] the neurophysiology” of kids’ brains. Arguing that social media companies are violating the state public nuisance law, the lawsuit seeks financial damages and other remedies from the owners of Facebook, Instagram, Snapchat, TikTok, and YouTube.
“Defendants have successfully exploited the vulnerable brains of youth, hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of Defendants’ social media platforms,” the lawsuit said. “Worse, the content Defendants curate and direct to youth is too often harmful and exploitive (e.g., promoting a ‘corpse bride’ diet, eating 300 calories a day, or encouraging self-harm).”
The complaint was filed Thursday in US District Court for the Western District of Washington.
During a year that seemingly shook Twitter up for good—adding an edit button and demoting legacy verified users by selling off blue checks—it’s easy to overlook how many other tech companies also threw users for a loop with some unexpected policy changes in 2022.
Many decisions to reverse policies were political. Recall that Wikipedia stopped taking cryptocurrency donations due to the environmental cost. Google started allowing political emails to bypass Gmail spam filters ahead of elections, and then, following pressure from abortion rights activists, began auto-deleting location data from sensitive medical locations. In one of the most shocking shifts, after Russia invaded Ukraine, Facebook made the controversial call to start considering some death threats aimed at Russian military forces as acceptable “political expression”—instead of violent speech in violation of community guidelines.
Other decisions seemed to reverse course on admittedly bad business moves. Amazon stopped paying “ambassadors” to tweet about how much they loved working in lawsuit-riddled warehouses. Apple killed its controversial plan to scan all iCloud photos for child sexual abuse materials. And chasing profits that were lost through its prior adult-content ban, perhaps the greatest surprise came when Tumblr started allowing nudity again.
In the past two weeks, United States lawmakers have increasingly restricted access to China-owned TikTok, one of the most popular apps in the world, on government-managed devices. Most recently, state agencies in Louisiana and West Virginia yesterday implemented new bans to prevent TikTok from tracking government employees or censoring their content. According to Reuters, that brings the total to 19 out of 50 US states that have “at least partially blocked access on government computers to TikTok.”
States appear to be taking whatever actions they can to protect US data while President Joe Biden drags his feet: For months, Biden has reportedly been close to completing a deal with TikTok that would prevent a nationwide ban from wrenching the popular app out of the hands of 100 million Americans. Now news outlets report that it’s unlikely Biden will seal that deal before the year ends, and The New York Times reported that the deal’s terms are “unlikely to satisfy anyone.”
While Biden ponders his potential agreement, Congress seems just as ready as the states to move against TikTok. Just today, Congress introduced a new spending proposal that includes a plan to restrict TikTok access for all federal employees on all government devices. Last week, the Senate voted to approve a similar ban restricting all federal employee access to the app; Reuters reported that the US House of Representatives would have to approve that bill this week before passing it on to Biden. Going further still, Congress last week also introduced bipartisan legislation that seeks to ban TikTok for all users nationwide, citing national security concerns.
Social media marketing is an indispensable part of the sales toolbox and important for everyone in today’s business landscape. For founders, however, it can be difficult to find the right platforms for their business. Here, founders will learn how to find customers on social media who have money and gladly pay for their services, all without running any ads. For providers of high-priced services, such as coaching or agency work, these tips are exactly right.
The most important social media channels for startups
For B2B companies, LinkedIn is an important channel for connecting with potential customers. By creating a company page, startups can present their brand to a relevant audience and position themselves as experts. Engagingly written posts can also go viral quickly on LinkedIn, and since the platform revolves around business anyway, nobody minds when someone wants to sell something.
Twitter is great for reaching influencers, journalists, and other companies in the industry. By using relevant hashtags, startups can present their content so that it is easy to find and share. Twitter is also indispensable for PR.
Even though Facebook is no longer a new channel, it is still one of the most important social media networks. Its topic-based groups in particular host lively discussions and are even suitable for market research.
Instagram is a visually oriented platform that is primarily relevant for B2C companies. By using attractive images and videos, startups can present their brand on Instagram and win new customers.
The special case of TikTok
One of the best social media networks for founders is TikTok. On this platform, you can find new customers quickly and easily. Most entrepreneurs are on TikTok to advertise their products or promote their services. There, a startup can go viral immediately, even without any prior reach. Existing content that already performs well can be adapted for this. Advertising on TikTok is currently incredibly cheap. It can pay off to get active there now!
How founders find perfect customers on social media
The best way to find customers on social media is to observe the target audience’s behavior on each platform. That way, founders can see what other companies in the same industry are doing and learn from their successes and failures. But beware of being too nosy! It is not a good idea to follow every company page you find. Instead, founders should look for companies that have similar goals or that operate in the same industry. This competitive analysis helps identify the best channels for your own startup.
Once you know who your target audience is and what they want, you can take care of the next steps.
Tips for creating a successful social media strategy
Anyone founding or running a startup should make sure that their social media strategy contributes to the company’s success. Here are a few tips:
Define goals: What should the social media campaign achieve? Founders need to define clear goals and formulate them as concretely as possible. That makes it easier to develop strategies for the campaign and to evaluate it.
Plan ahead: Founders who carefully prepare every post they intend to publish have an advantage. This keeps the campaign from looking chaotic and prevents accidentally publishing something that could be received negatively.
Stick to a schedule: Determine the best time to publish posts and stick to it. If founders notice that certain topics resonate particularly well or provoke negative reactions, they can respond accordingly and make adjustments where necessary.
Think of the audience: When writing posts, make sure they are understandable and interesting for the target audience. Founders should avoid arguing with other users or companies about controversial topics, and instead try to address and discuss substantive topics in a calm and level-headed manner.
How do fans become customers?
The answer to this question is simple: by selling something to your followers! Of course, a founder should not try to sell something to everyone. But if the marketing speaks to the right customers, they will respond to it, for example with a comment. All that remains to be done is to message the follower and ask whether they have a need.
As a rule, it will be easy to get personal contact details and even a phone number. And voilà: the contact can now be called to find out whether the startup can help them, and then to sell them something.
About the author
Markus Baulig is the founder and co-managing director of Baulig Consulting.
In the last half of 2022 alone, many services—from game platforms designed with kids in mind to popular apps like TikTok or Twitter catering to all ages—were accused of endangering young users, exposing minors to self-harm and financial and sexual exploitation. Some kids died, their parents sued, and some tech companies were shielded from those legal challenges by Section 230. As regulators and parents alike continue scrutinizing how kids become hooked on favorite web destinations that could put them at risk of serious harm, pressure has mounted on tech companies to take more responsibility for protecting child safety online, and that pressure is becoming increasingly hard to escape.
In the United States, shielding kids from online dangers is still a duty largely left up to parents, and some tech companies would prefer to keep it that way. But by 2024, a first-of-its-kind California online child-safety law is supposed to take effect, designed to shift some of that responsibility onto tech companies. California’s Age-Appropriate Design Code Act (AB 2273) will force tech companies to design products and services with child safety in mind, requiring age verification and limiting features like autoplay or minor account discoverability via friend-finding tools. That won’t happen, however, if NetChoice gets its way.
The tech industry trade association—with members including Meta, TikTok, and Google—this week sued to block the law, arguing in a complaint that the law is not only potentially unconstitutional but also poses allegedly overlooked harms to minors.
Meta has been told its treatment of high-profile users, such as former US President Donald Trump, left dangerous content online, serving business interests at the expense of its human rights obligations.
A damning report published on Tuesday by the company’s oversight board—a “Supreme Court”-style body created by the parent company of Facebook, Instagram, and WhatsApp to rule on sensitive moderation issues—has urged the social media giant to make “significant” changes to its internal system for reviewing content from politicians, celebrities, and its business partners.
The board, which started assessing cases last year, was championed by the tech giant’s policy chief, former UK deputy prime minister Sir Nick Clegg, and issues independent judgments on high-profile moderation cases as well as recommendations on certain policies.
Last year, the National Center for Missing and Exploited Children (NCMEC) released data showing that it received overwhelmingly more reports of child sexual abuse materials (CSAM) from Facebook than any other web service it tracked. Where other popular social platforms like Twitter and TikTok had tens of thousands of reports, Facebook had 22 million.
Today, Facebook announced new efforts to limit the spread of some of that CSAM on its platforms. Partnering with NCMEC, Facebook is building a “global platform” to prevent “sextortion” by helping “stop the spread of teens’ intimate images online.”
“We’re working with the National Center for Missing and Exploited Children (NCMEC) to build a global platform for teens who are worried intimate images they created might be shared on public online platforms without their consent,” Antigone Davis, Facebook’s VP, global head of safety, said in a blog post on Monday.
Sheela Lalani is one of many small business owners who depend on social platforms to generate extra holiday revenue. Her Instagram shop with unique, artisan-made children’s clothing—adorably modeled by smiling kids who joyfully twirl in her dresses—has attracted nearly 13,000 followers. She had recently rolled out her holiday collection when any hope of promoting the new clothing to her followers was abruptly dashed: Meta deleted her Instagram account. Meta also disabled her personal Facebook account, her Facebook business page, and her newest Instagram boutique shop profile.
Lalani was dismayed, but then the situation got worse. Despite the disabled accounts, the PayPal account she linked to her social media pages to buy ads to promote her businesses got hit with a $900 charge. She immediately reached out to PayPal to dispute the charge—and is still waiting for a refund—but she also knew that getting PayPal to intervene wouldn’t fix the larger problem. Someone had bought Facebook or Instagram ads with her PayPal account, and she felt she had no way of reporting this behavior to Meta and stopping any future payments because Meta had disabled all of her accounts.
“This is so unfair for business owners and seems criminal,” Lalani told Ars.
Mark Zuckerberg. (credit: Getty Images | Drew Angerer)
Meta is laying off 11,000 employees, about 13 percent of its workforce, CEO Mark Zuckerberg wrote in a message to staff today. Zuckerberg said his previous decision to increase spending didn’t pay off as he thought it would and that Meta’s “revenue outlook is lower than we expected at the beginning of this year.”
Meta had 87,314 employees as of September 30, 2022, an increase of 28 percent over the previous 12 months.
“At the start of Covid, the world rapidly moved online and the surge of e-commerce led to outsized revenue growth. Many people predicted this would be a permanent acceleration that would continue even after the pandemic ended,” Zuckerberg wrote. “I did too, so I made the decision to significantly increase our investments. Unfortunately, this did not play out the way I expected. Not only has online commerce returned to prior trends, but the macroeconomic downturn, increased competition, and ads signal loss have caused our revenue to be much lower than I’d expected. I got this wrong, and I take responsibility for that.”
At this point in the history of tech product marketing, consumers generally know what it means when a company sticks the word “Pro” at the end of a device name. From iPads and AirPods to the Microsoft Surface and Galaxy Watch, “Pro” models generally offer the same underlying device and core platform with a few “nice to have” top-of-the-line features for enthusiast users who want the best experience.
To get those Pro features, consumers generally have to pay a “Pro premium” of somewhere between 25 and 60 percent over the most expensive “non-Pro” model of the same product. Even the biggest Pro-version outliers we could find in the tech world barely top a 100 percent increase over their non-Pro progenitors.
Despite the name, the Meta Quest Pro doesn’t really belong in the same marketing universe as these previous “Pro” products. Meta’s new standalone VR headset costs $1,500 at launch, a whopping 275 percent more than its $400 predecessor, the Meta Quest 2 (which has sold quite well for its still-young market segment). The premium increases to 400 percent if you compare the Quest Pro to the $300 Meta was asking for a Quest 2 just a few months ago.
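For reference, those premiums follow directly from the list prices quoted above:

$$\frac{\$1{,}500-\$400}{\$400}=275\%,\qquad \frac{\$1{,}500-\$300}{\$300}=400\%$$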
Investors wiped more than $65 billion from Meta’s market capitalization on Wednesday after the Facebook owner reported another quarter of declining revenues and failed to convince investors that big bets on the metaverse and artificial intelligence were paying off.
Shares in Meta dropped 19 percent in after-hours trading as the world’s largest social media platform joined other Big Tech groups in warning that an economic slowdown was hammering its advertising businesses as brands spend less on marketing.
On top of the wider macroeconomic woes, Meta faces a confluence of challenges, including rising competition for its Instagram platform from rivals such as short-form video app TikTok and difficulties in targeting and measuring advertising because of Apple’s privacy policy changes.
Pizza slices, cupcakes, and carrots are just a few emojis that anti-vaccine activists use to speak in code and continue spreading COVID-19 misinformation on Facebook.
Bloomberg reported that Facebook moderators have failed to remove posts shared in anti-vaccine groups and on pages that would ordinarily be considered violating content, if not for the code-speak. One group that Bloomberg reviewed, called “Died Suddenly,” is a meeting ground for anti-vaccine activists supposedly mourning a loved one who died after they got vaccines—which they refer to as having “eaten the cake.”
Facebook owner Meta told Bloomberg that “it’s removed more than 27 million pieces of content for violating its COVID-19 misinformation policy, an ongoing process,” but declined to tell Ars whether posts relying on emojis and code-speak were considered in violation of the policy.
New York Attorney General Letitia James’ potentially First Amendment-infringing policy reform recommendation comes after the Bureau of Internet and Technology and the Hate Crimes Unit of the Civil Rights Bureau conducted an investigation into how online platforms—including Reddit, Discord, 4chan, 8chan, Twitch, and YouTube—helped a white gunman prepare for and then carry out the murder of 10 Black people in Buffalo, New York, during a mass shooting in May.
According to the Office of the Attorney General, the gunman’s content, including snippets of his manifesto, was shared across mainstream platforms like Facebook, Instagram, Twitter, and TikTok. Among other proposed solutions, James and New York Governor Kathy Hochul announced that they want lawmakers to establish civil liability for sharing extremist content, which can potentially inspire copycats.
The 2016 US election was a wake-up call about the dangers of political misinformation on social media. With two more election cycles rife with misinformation under their belts, social media companies have experience identifying and countering misinformation. However, the nature of the threat misinformation poses to society continues to shift in form and targets. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns—deliberate efforts to spread misinformation.
Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but the companies vary in their approaches and effectiveness. We asked experts on social media to grade how ready Facebook, TikTok, Twitter, and YouTube are to handle the task.
2022 is looking like 2020
Dam Hee Kim, assistant professor of communication, University of Arizona
“This here, this isn’t really what I meant,” Carmack said of last year’s promise to attend this year’s Meta Connect conference in the metaverse. (credit: Meta)
Last year, former Oculus CTO (and current company advisor) John Carmack threw down the gauntlet for Meta’s near-term metaverse plans. By the 2022 Meta Connect conference, Carmack said last October, he hoped he’d be in his headset, “walking around the [virtual] halls or walking around the stage as my avatar in front of thousands of people getting the feed across multiple platforms.”
Carmack’s vision didn’t come to pass Tuesday, as a jerky and awkward Carmack avatar gave one of his signature, hour-long unscripted talks amid a deserted VR space, broadcast out as plain old 2D video on Facebook.
“Last year I said that I’d be disappointed if we weren’t having Connect in Horizon this year,” Carmack said by way of introduction. “This here, this isn’t really what I meant. Me being an avatar on-screen on a video for you is basically the same thing as [just] being on a video.”
China’s ability to influence American politics by manipulating social media platforms has been a topic of much scrutiny ahead of the midterm elections, and this week has marked some progress toward mitigating risks on some of the most popular US platforms.
US President Joe Biden is currently working on a deal with China-based TikTok—often regarded as a significant threat to US national security—with the goal of blocking potential propaganda or misinformation campaigns. Today, Meta, owner of Facebook and Instagram, shared a report detailing the steps it took to remove the first “Chinese-origin influence operation” that Meta has identified attempting “to target US domestic politics ahead of the 2022 midterms.”
In the press release, Meta Global Threat Intelligence Lead Ben Nimmo joined Meta Director of Threat Disruption David Agranovich in describing the operation as initiated by a “small network.” They said that between fall 2021 and September 2022, there were four “largely separate and short-lived” efforts launched by clusters of “around half a dozen” China-based accounts, which targeted both US-based conservatives and liberals using platforms like Facebook, Instagram, and Twitter.
Meta’s business model depends on selling user data to advertisers, and it seems that the owner of Facebook and Instagram sought new paths to continue widely gathering data and to recover from suddenly lost revenue. Last month, privacy researcher and former Google engineer Felix Krause alleged that one way Meta sought to recover its losses was by directing any link a user clicks in the app to open in its in-app browser, where Krause reported that Meta was able to inject code, alter external websites, and track “anything you do on any website,” including passwords, without user consent.
Now, within the past week, two class action lawsuits [1][2] filed by three Facebook iOS users—who point directly to Krause’s research—accuse Meta, on behalf of all impacted iOS users, of concealing privacy risks, circumventing iOS users’ privacy choices, and intercepting, monitoring, and recording all activity on third-party websites viewed in Facebook or Instagram’s browser. That activity allegedly includes form entries and screenshots, granting Meta a secretive pipeline through its in-app browser to “personally identifiable information, private health details, text entries, and other sensitive confidential facts”—seemingly without users even knowing the data collection is happening.
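To make the alleged mechanism concrete, here is a minimal sketch of the kind of script a host app could inject into third-party pages loaded in its in-app browser, assuming a WKWebView-style native message bridge. Every identifier in it (injectedScript, hostApp, the payload fields) is hypothetical and invented for illustration; this is not Meta’s actual code, and Krause’s write-up remains the source for what Meta’s injected JavaScript reportedly did.

```typescript
// Hypothetical sketch of in-app browser script injection, the class of
// behavior Krause described. All names are invented; none of this is
// Meta's actual code.

// JavaScript source that a host app could inject into every third-party
// page loaded inside its embedded WebView.
const injectedScript = `
  document.addEventListener('input', function (event) {
    var el = event.target;
    // Relay each form interaction on the external site back to the host
    // app through a WKWebView-style native message bridge.
    if (window.webkit && window.webkit.messageHandlers &&
        window.webkit.messageHandlers.hostApp) {
      window.webkit.messageHandlers.hostApp.postMessage(JSON.stringify({
        url: location.href,
        field: el.name || el.id || 'unknown',
      }));
    }
  }, true);
`;

export { injectedScript };
```

On iOS, a native app could register such a script through WKUserContentController’s addUserScript so that it runs in every page the embedded browser loads. Because the page renders inside the app rather than in the system browser, the host app sits in a position to observe everything the page does, which is the crux of Krause’s warning.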
Last weekend, Meta told Ars that it reversed an advertising ban on a newly released Holocaust film called Beautiful Blue Eyes, saying that the ban—for allegedly violating Meta’s race policy—was implemented in error.
The filmmaker whose movie was being blocked, Joshua Newton, told Ars that he’s still experiencing issues promoting the movie on Meta platforms, where accounts still seem to be restricted. Most frustrating to Alexander Newton, Joshua’s son and an actor featured in the film, is the fact that he still can’t promote his version of the movie’s title track on his Instagram.
“My Instagram page just has an endless spinning wheel when I click to promote, so they’ve damaged my account somehow,” Alexander alleged. “I have to completely restart the app on my phone to even get out of the app.”
This September, British filmmaker Joshua Newton prepared to rerelease his 2009 film Beautiful Blue Eyes. The 2022 premiere was important to Newton, as he’d waited more than a decade to finally share with the world a version of the movie that was previously lost.
Roy Scheider starred in Newton’s movie, and it ended up being his final role. Scheider—who is best known for playing the beloved Jaws police chief who says, “You’re gonna need a bigger boat”—portrayed a New York cop who reunites with his estranged son and tracks down the Nazi responsible for murdering his family members during the Holocaust. Because a camera malfunction damaged some of Newton’s footage and Scheider died during filming, Newton thought he had lost the edit he liked best. But more than a decade later, Newton told Rolling Stone, AI technology had finally advanced enough that he could repair the lost film frames.
Excited to put this cut of his thriller in front of audiences, Newton prepared to promote the rerelease on Facebook. But in the days leading up to the premiere, Newton told Rolling Stone that he received an email informing him that in a rare turn of events, “Facebook had banned the filmmakers from promoting or advertising” the movie.
Several years after Facebook owner Meta acquired WhatsApp and Instagram, the Federal Trade Commission launched an antitrust lawsuit claiming that, through these acquisitions, Meta had become a monopoly. The FTC said that Meta, a titan wielding its enormous fortune over smaller companies, began buying or burying competitors in efforts that allegedly blocked rivals from offering better-quality products to consumers. In this outsize role, the FTC claimed, Meta prevented evolving consumer preferences for features like greater privacy options and stronger data protection from becoming the norm. The only solution the FTC could see? Ask a federal court to help break up Meta and undo the damage the FTC did not foresee when it initially approved Meta’s acquisitions.
To investigate whether Meta truly possesses monopoly power, both Meta and the FTC have subpoenaed more than 100 Meta competitors each. Both hope to clearly define in court how much Meta dominates the market and just how negatively that impacts its competitors.
Through 132 subpoenas so far, Meta is on a mission to defend itself, claiming it needs to gather confidential trade secrets from its biggest competitors—not to leverage such knowledge and increase its market share, but to demonstrate in court that other companies are able to compete with Meta. According to court documents, Meta’s so hungry for this background on its competitors, it says it plans to subpoena more than 100 additional rivals, if needed, to overcome the FTC’s claims.
Both Google and Meta have taken steps to start paying US publishers for aggregating their news content, but neither tech giant has yet found a perfect solution that would fairly compensate publishers and potentially help combat the mass shuttering of newsrooms across America. The Wall Street Journal reported that Facebook stopped its program paying US publishers in July, and more recently, media outlets haven’t been thrilled by the terms of Google’s “News Showcase” program either, with most resisting partnership.
In the latter case, WSJ reported that some media outlets were holding out on joining the News Showcase for a very specific reason. They were waiting to see what happened with a new bill—the Journalism Competition and Preservation Act—which seemed like a better deal. If passed, the JCPA would force Google and Meta to pay US news publishers collectively bargaining for fair payment. However, now, Senator Ted Cruz (R-Texas) has introduced a new amendment to the JCPA which, the Chicago Tribune reports, was narrowly approved this week. And Cruz’s new stipulation may have effectively killed the previously bipartisan bill by diminishing Democratic support, thus crushing US publishers’ supposed dream deal.
What Cruz has suggested is an amendment to prohibit tech companies and news organizations from using the collective bargaining tool to collude on efforts to censor content. While the bill itself waives an antitrust agreement so that news organizations can collectively bargain with tech companies, Cruz says that this key antitrust exemption would not apply if during the negotiation process anyone “engages in any discussion of content moderation.”
It’s been four years since users alleging harm caused by the Cambridge Analytica scandal sued Facebook (now Meta) for selling tons of easily identifying personal information to third parties, allegedly doing so even when users thought they had denied consent. In 2018, plaintiffs alleged in a consolidated complaint that Facebook acted in “astonishingly reckless” ways and did “almost nothing” to protect users from the potential harms of this “intentionally” obscured massive data market. The company, they said, put 87 million users at “a substantial and imminent risk of identity theft, fraud, stalking, scams, unwanted texts, emails, and even hacking.” And users’ only option to avoid these risks was to set everything on Facebook to private—so even friends wouldn’t see their activity.
Because of Facebook’s allegedly deceptive practices, plaintiffs said that “Facebook users suffered concrete injury in ways that transcend a normal data breach injury.” Plaintiffs had gotten so far in court defending these claims that Meta CEO Mark Zuckerberg was scheduled to take the stand for six hours this September, along with lengthy depositions scheduled for former Facebook Chief Operating Officer Sheryl Sandberg and current Meta Chief Growth Officer Javier Olivan. However, it looks like none of those depositions will be happening now.
On Friday, a joint motion was filed with the US District Court for the Northern District of California. It confirmed that the plaintiffs and Facebook had reached a settlement agreement that seems to have finally ended the class action lawsuit that Meta had previously said it hoped would be over by March 2023.
Yesterday, the anti-vaccine group the Children’s Health Defense celebrated the spread of poliovirus in New York, mocking health officials spreading awareness that polio is vaccine-preventable. Today, CHD reports that the group was also permanently banned from Facebook and Instagram yesterday. A screenshot of Meta’s notification in its press release says that the ban is due to CHD’s practice of spreading “misinformation that could cause physical harm.”
A Meta spokesperson tells Ars that Meta “removed the Instagram and Facebook accounts in question for repeatedly violating our COVID-19 policies.”
CHD says the ban came “without warning,” cutting the anti-vax group off from hundreds of thousands of followers on both social media platforms. Denying allegations that the group spreads misinformation, CHD suggested instead that the ban is connected to CHD’s lawsuit against Meta, which questions the validity of how Facebook and the Centers for Disease Control and Prevention label health misinformation. The group’s legal counsel in that lawsuit, Roger Teich, suggested that the ban was improper.
As TikTok’s popularity and earnings soar, the company has decided to crack down on political content creators—sometimes with thousands of followers—who violate the app’s policies against paid political ads. TikTok says it has been aware of the problem since 2020, but it became an issue of public concern in 2021. That’s when the Washington Post and The Mozilla Foundation uncovered TikToks from both left- and right-wing content creators that appeared to be violating FTC guidelines, which require, at a minimum, that posts must be marked with an “#ad” hashtag.
TikTok has always left it up to content creators to self-disclose when they conduct deals with business partners off-site. However, in June 2021, the company made it easier to flag posts as ads (or “branded content”) in an effort to encourage more self-disclosure. Mozilla considered this a step in the right direction, but it recommended that TikTok work harder ahead of the next elections to promptly remove undisclosed paid political ads.
This week, TikTok appears to have taken that advice, with its head of US safety, Eric Han, announcing that the company will remove any content that violates TikTok’s rules on paid political ads. To prepare creators for stricter enforcement, Han said TikTok will post an educational series in its Creator Portal and host briefings with content creators to ensure “the rules of the road are abundantly clear when it comes to paid content around elections.”
Through the pandemic, OnlyFans took over the online adult entertainment world to become a billion-dollar top dog, projected to earn five times more net revenue in 2022 than in 2020. As OnlyFans’ business grew, content creators on rival platforms complained that social media sites like Facebook and Instagram were blocking their content but seemingly didn’t block OnlyFans with the same fervor, creating an unfair advantage. OnlyFans’ mounting success amid every other platform’s demise seemed to underscore its mysterious edge.
As adult entertainers outside of OnlyFans’ content stream looked for answers to their declining revenue, they realized that Meta had allegedly targeted their accounts for bans not only over posting supposedly inappropriate content but seemingly also over suspected terrorist activity. The more they dug into why they had been branded as terrorists, the more they suspected that OnlyFans paid Meta to put the mark on their heads—resulting in account bans that went past Facebook and Instagram and spanned popular social media apps across the Internet.
Now, Meta has been hit with multiple class action lawsuits alleging that senior executives at Meta accepted bribes from OnlyFans to shadow-ban competing adult entertainers by placing them on a “terrorist blacklist.” Meta claims the suspected scheme is “highly implausible,” and that it’s more likely that OnlyFans beat its rivals in the market through successful strategic moves, like partnering with celebrities. However, lawyers representing three adult entertainers suing Meta say the owner of Facebook and Instagram will likely have to hand over documents to prove it.
For the first time since Roe v. Wade was overturned, there’s a clear example showing exactly how Facebook will react to law enforcement requests for abortion data without user consent.
Forbes reports that a 17-year-old named Celeste Burgess in Nebraska had her Facebook messages subpoenaed by Detective Ben McBride, who suspected that Burgess’ reported stillbirth was actually a medication abortion. In his affidavit, the officer explains that he asked that Meta not notify the teen of the request for her Facebook data because she might tamper with or destroy evidence. Court records show that Meta complied.
Meta did not immediately respond to Ars’ request for comment on this case, but previously, Meta has said that “we notify users (including advertisers) about requests for their information before disclosing it unless we are prohibited by law from doing so or in exceptional circumstances, such as where a child is at risk of harm, emergencies, or when notice would be counterproductive.”
The bill is controversial because it targets large companies like Amazon, Alphabet, Meta, and Apple. It stops them from engaging in self-preferencing business practices, like promoting their own products above others’ or forcing smaller businesses to buy ad space to compete. Critics, like Google, say the law could threaten everything from the quality of online services to national security, but supporters, like bill co-sponsor Representative David Cicilline (D-RI), say much of the criticism boils down to “lies coming from Big Tech.”
Hundreds of thousands of people recently signed a Change.org petition asking Instagram to stop eating up space in their feeds by recommending so many Reels from accounts they do not follow. Shortly after, Instagram owner Meta confirmed that these users aren’t just imagining the sudden avalanche of Reels ruining their online social lives. The short videos currently make up about 15 percent of Instagram and Facebook user feeds—and soon, even more often, they’ll be shoving aside the updates from friends that users choose to follow.
Despite all the negative feedback, Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.
“One of the main transformations in our business right now is that social feeds are going from being driven primarily by the people and accounts you follow to increasingly also being driven by AI recommending content that you’ll find interesting from across Facebook or Instagram, even if you don’t follow those creators,” Meta CEO Mark Zuckerberg says.
Artist’s conception of the FTC fighting back against Meta’s latest proposed acquisition.
The Federal Trade Commission has filed an antitrust lawsuit against Meta in an attempt to stop the Facebook parent company from purchasing Within, which makes the popular virtual reality fitness app Supernatural.
Meta’s plan to spend a reported $400 million on Within has been under FTC scrutiny since the proposed acquisition was announced last October. That proposed deal, according to the suit, “would substantially lessen competition, or tend to create a monopoly, in the relevant market for VR dedicated fitness apps and the broader relevant market for VR fitness apps.”
Cornering the VR fitness market?
Meta has been on something of a VR acquisition spree in the last two years, scooping up game developers including Sanzaru Games (Asgard’s Wrath), Ready at Dawn (Lone Echo), Twisted Pixel (Wilson’s Heart), Downpour Interactive (Onward), and BigBox VR (Population: One). But the planned purchase of Within seems to be setting off antitrust alarm bells at the FTC because of the overlap with Beat Saber maker Beat Games, which Meta purchased in 2019.
On Tuesday, Meta’s president of global affairs, Nick Clegg, wrote in a statement that Meta is considering whether or not Facebook and Instagram should continue to remove all posts promoting falsehoods about vaccines, masks, and social distancing. To help them decide, Meta is asking its oversight board to weigh whether the “current COVID-19 misinformation policy is still appropriate” now that “extraordinary circumstances at the onset of the pandemic” have passed and many “countries around the world seek to return to more normal life.”
In 2018, when Meta CEO Mark Zuckerberg testified for a Senate hearing following the Cambridge Analytica scandal, his most frequent response to questions was some iteration of the evasive phrase “my team will get back to you.”
Four years later, plaintiffs in a subsequent California class action lawsuit claim that Meta’s team of designees on various topics have been just as unprepared to answer questions as Zuckerberg was before the Senate. Because of that, the plaintiffs are putting Zuckerberg back on the stand, hoping that six hours of deposing the billionaire and depositions from other high-level Meta executives will end a “long overdue discovery” process that the plaintiffs say has already dragged on too long.
“Much discovery work remains to be done,” the plaintiffs wrote this week in a joint case statement that includes a request for “approximately 35 more depositions.” In addition to Zuckerberg, former Facebook Chief Operating Officer Sheryl Sandberg will be deposed for possibly longer than five hours, and current Meta Chief Growth Officer Javier Olivan will be deposed for as long as three hours. All depositions have been scheduled through September 20, which the statement says will result in Meta missing a September 16 court deadline for discovery, dragging proceedings on further.
Last week, Senators Elizabeth Warren (D-Mass.) and Amy Klobuchar (D-Minn.) sent a letter to Meta asking what the company plans to do to end abortion-post censorship on its platforms. They gave Meta until this Friday, July 15, to respond, placing urgency on their request and seeking evidence that the company is taking immediate action.
Examples of censorship cited in the letter include instances where Facebook and Instagram removed “posts providing accurate information about how to legally access abortion services” within minutes and placed sensitivity screens over a post promoting an abortion documentary. The senators also took issue with censorship of health care workers, including a temporary account suspension of an “organization dedicated to informing people in the United States about their abortion rights.”
Censoring peaceful protesters isn’t the only reason governments have deliberately shut down the Internet in 2022, but researchers say it is the primary objective and the costliest to the global economy.
According to a report from Top10VPN, government-ordered Internet shutdowns in 2022 have cost the global economy more than $10 billion. That figure nearly doubles 2021’s cost, and it’s only halfway through the year.
At a cost of $8.77 billion, the biggest drain on the global economy is Russia. That country’s ongoing social media blackouts began shortly after the Ukraine invasion and are designed to limit peaceful protest and press freedoms by preventing access to Facebook, Instagram, and Twitter.
Imagine an online world where what users want matters, and interoperability reigns. Friends could choose whichever messaging app they like and seamlessly chat cross-app. Any pre-installed app could be deleted on any device. Businesses could finally access their Facebook data, and smaller tech companies could be better positioned to compete with giants. Big Tech could even face consequences for not preventing the theft of personal info.
As the US struggles to pass legislation to protect Internet consumers, in the EU, these ideals could become reality over the next few years. EU lawmakers today passed landmark rules to rein in the power of tech giants such as Alphabet unit Google, Amazon, Apple, Facebook (Meta), and Microsoft, establishing a task force to regulate unfair business practices in Big Tech.
Amazon said that the company plans to evolve with Europe’s “regulatory landscape” and review what the new legislation means for Amazon, its customers, and its partners. None of the other Big Tech companies mentioned immediately responded to a request for comment for this story.
Before the summer ends, California may pass the first US bill that would hold social media companies liable for product features that research has found are harmful to children. If passed, the law could have far-reaching consequences, potentially impacting how kids throughout the US use social media sites like TikTok, Instagram, and Snapchat.
Although much of the prior reporting on the bill focused on its earlier goal of granting parents the right to sue over harm to individual children, WSJ reports that the amended version of the bill would instead “permit the state attorney general, local district attorneys, and city attorneys in California’s four largest cities to sue social media companies” for unfair business practices known to harm children.
BURLINGAME, CALIFORNIA – MAY 04: Meta employee Ryan Carter (L) helps a member of the media with an Oculus virtual reality headset demonstration during a media preview of the new Meta Store on May 04, 2022, in Burlingame, California. Meta is set to open its first physical retail store on May 9. (credit: Justin Sullivan | Getty Images)
Meta is facing a growing backlash for the charges imposed on apps created for its virtual reality headsets, as developers complain about the commercial terms set around futuristic devices that the company hopes will help create a multibillion-dollar consumer market.
Facebook’s parent has pledged to spend $10 billion a year over the next decade on the “metaverse,” a much-hyped concept denoting an immersive virtual world filled with avatars.
The investment is spurred by a desire to own the next computing platform and avoid being trapped by rules set by Big Tech rivals, as it has been by Apple and Google with their respective mobile app stores.