More turmoil ahead for Twitter as EU Digital Services Act tests Musk’s vision

Elon Musk’s promise to European regulators that Twitter would follow the rules set out in the European Union’s new Digital Services Act (DSA) will finally be put to the test. The first few weeks of Musk’s reign at the company suggest he could be embroiled in a fight.

Within hours of Musk’s takeover, use of racist language previously blocked on the platform surged. The billionaire praised targeted ads, which are the subject of a European crackdown aimed at better protecting users from pervasive online surveillance. He personally retweeted misinformation about the assault and attempted kidnapping that seriously injured the husband of US House Speaker Nancy Pelosi. And he has now cut Twitter’s workforce in half and jettisoned the majority of the company’s thousands of contractors, who have traditionally handled content moderation.

The DSA, which came into force on November 16, is a sweeping set of regulations on online content governance and accountability for digital services that subjects Twitter, Facebook and other platforms in many ways to the oversight of the European Commission and national authorities. These bodies will oversee compliance and enforcement, and they seem determined to make the internet a fairer place by limiting the power of Big Tech. Many expect this landmark EU legislation to set a new benchmark for regulators around the world.

Concerns for safety and fundamental rights

The DSA’s content moderation rules could be a real challenge for Musk. While the DSA does not tell social media platforms what speech they can and cannot host, it sets new standards for terms of service and regulates the content moderation process to make it more transparent to users and regulators. The DSA is driven as much by safety concerns and respect for fundamental rights as by the interests of the market. It draws attention to the impact of algorithmic decision-making (and there’s a fine line between Twitter’s new paid subscription model and algorithmic discrimination), and it reinforces EU codes of conduct, such as the Code of Practice on Disinformation, to which Twitter has subscribed.

The DSA also requires platforms to remove illegal content. EU regulators say the DSA will ensure that anything illegal offline will also be illegal online. But what is considered illegal varies between EU member states. For example, an offensive statement flagged as illegal in Austria could be perfectly legal in France. The DSA has taken some steps to avoid imposing one country’s law on all the others, but this fragmentation has required, and will continue to require, significant compliance efforts on the part of platforms. There are also new due diligence obligations, some of which could increase the risk of legal speech being swept aside as platforms err on the side of removal to avoid fines.

Perhaps working in Musk’s favor, the DSA does not require platforms to remove all harmful content, such as “awful but legal” hate speech. In fact, the DSA sets some limits on what can be removed, which means Musk may find it easier to remove as little as possible. But platforms are required to act responsibly and work with trusted flaggers and public authorities to ensure their services are not misused for illegal activity. And while platform operators have wide latitude to set their own speech standards, those standards must be clear and unambiguous, which Musk pledged to do anyway.

Sometimes it’s hard to take Musk at his word. He said in late October that no major content or account reinstatement decisions would occur until a content moderation council comprised of “widely diverse viewpoints” was formed. He has yet to announce such a council, but on November 20 several accounts previously banned from the platform for hate speech and misinformation were reinstated, including that of former President Donald Trump, who was allowed to return to Twitter after Musk polled users, and Kanye West.

Twitter’s trust and safety and content moderation teams have suffered huge losses since Musk took over as “Chief Twit,” as he calls himself. Half of Twitter’s 7,500 employees have been laid off, with trust and safety teams among the hardest hit. Twitter’s top trust and safety, human rights and compliance officers have left the company, as have some EU policy staff. Musk’s firing of the entire human rights team could backfire: that team was tasked with ensuring that Twitter adhered to the United Nations Guiding Principles on Business and Human Rights, which set out corporations’ responsibility to respect human rights.

Media outlets reported that the company’s content moderation staff no longer had access to their enforcement tools. Meanwhile, a second wave of global job cuts last week hit 4,400 of Twitter’s 5,500 external contractors, many of whom had worked as content moderators tackling misinformation on the platform in the United States and abroad.

Extended obligations for large platforms

All of this raises questions about whether Twitter has the technical and political muscle to comply with and implement DSA obligations, particularly if Twitter is designated a “very large online platform,” or VLOP (one with more than 45 million EU users), under the DSA. If designated as such, the company will have to comply with extensive obligations and responsibly address systemic risks and abuses on its platform. It will be held accountable through independent annual compliance audits and public scrutiny, and it must provide a public repository of the online advertisements it has displayed over the past year. Twitter is unlikely to be able to meet these commitments if it does not have enough qualified personnel to understand the impact of its operations on human rights.

Additionally, Musk is a self-proclaimed “free speech absolutist” who has said he acquired Twitter because civilization needs “a common digital public square.” He has criticized Twitter’s content moderation policies and says he opposes “censorship” that “goes far beyond the law.” Yet, after much criticism, he also said that Twitter cannot become a “free-for-all hellscape” where anything can be said without consequences.

Unfortunately, Twitter descended into pretty much that kind of hellscape soon after Musk took over. A flurry of racist slurs appeared on the platform in the first few days, while a revamped paid subscription scheme handed out the blue check mark, which once indicated that an account holder’s identity had been verified, without any verification step. Anyone paying $7.99 could buy a blue check, and many did, creating fake accounts posing as people and businesses and tweeting out false information, from an Eli Lilly account announcing that its insulin was now free to multiple accounts impersonating and parodying Musk himself.

Musk halted the deployment, but the damage was done. Margrethe Vestager, executive vice president of the European Commission, told CNBC that such a practice indicates that “your business model is fundamentally flawed.”

If Twitter is deemed a “very large online platform,” it will need to assess all kinds of risks arising from the use of its service, including misinformation, discrimination and harm to civic discourse, and take mitigation measures to reduce societal harm. If Musk succeeds in his plans to grow Twitter’s user base over the next few years, Twitter will certainly face greater scrutiny under the DSA and could be forced to backtrack and staff up its content moderation operation.

Thierry Breton, the European commissioner for the internal market, shares that view. Commenting on the reduction in the number of moderators, he warned that Musk “will have to increase them in Europe.”

“He will have to open his algorithms. We will have control, we will have access, people will no longer be able to talk nonsense,” Breton said.

Restrictions on user data for ads

Targeted advertising is another area where Musk’s plans could conflict with DSA obligations. Advertising is Twitter’s main source of income, and with the company losing money, Musk wants to increase its ad revenue. The DSA prohibits platforms from using sensitive user information, such as ethnicity or sexual orientation, for advertising purposes; ads can no longer be targeted based on this data.

More broadly, the DSA increases transparency about the ads users see on their feeds: platforms must clearly differentiate content from advertising; ads should be labeled accordingly. It’s hard to see how all of these requirements will mesh neatly with Musk’s plans. He told advertisers that what Twitter needs are ads that are as relevant to users’ needs as possible. Highly relevant ads, he says, will serve as “real content.”

One concerning gap is that the DSA does not fully protect anonymous speech. Its provisions give government authorities alarming powers to flag controversial content and uncover data on anonymous speakers — and everyone else — without adequate procedural safeguards. Pseudonymity and anonymity are essential to protect users whose opinions, identities or interests do not match those of people in power.

Marginalized groups and human rights defenders can be in grave danger if those in power manage to discover their true identities. Musk has pledged to “authenticate all real humans” on Twitter; unfortunately, the DSA does nothing to protect them if he keeps that promise.

The DSA is an important tool for making the internet a fairer place, and it’s going to cause some turbulence for Twitter as Musk seeks to realize his vision for the platform. Much will depend on how social media platforms interpret their obligations under the DSA and how EU authorities enforce the regulation. Breton, the internal market commissioner, swore that Twitter “will fly by our rules.” For Musk, the seatbelt sign is on. It’s going to be a bumpy ride.

IMAGE: Elon Musk’s Twitter account is seen displayed on a smartphone with a Twitter logo in the background on November 21, 2022. (Photo by Nathan Stirk/Getty Images)
