We must stand firm as Big Tech pushes back against safety regulation.
Following Mark Zuckerberg's announcement that Meta will massively reduce content moderation in the USA, we need to ensure that the UK safety standards in the Online Safety Act are enforced.
It was always something of a joke amongst parliamentarians that when you asked a Meta executive what their company was doing to combat serious issues like race hate and suicide content on Facebook and Instagram, you would get the same answer. They would tell us that this was an issue they took ‘seriously’, that it was against their ‘platform policies’, and that they had ‘systems in place’ to deal with it, without ever being able to explain how effective these ‘systems’ were. Well, the good news for Meta execs in the USA is that they no longer have to pretend their policies work, as they’ve just scrapped them.
In his announcement on 7 January, Mark Zuckerberg confirmed that in America, Meta will no longer use fact checkers to moderate content on the platform, but will instead rely on a system of community notes, similar to that on Elon Musk’s X platform, warning people that the information contained in certain posts is disputed. However, this major change, which reverses seven years of assurances from Zuckerberg about improving the trust and safety standards on his platforms, is not about fact checking political speech. It is a commercial decision to get rid of expensive fact checking and walk away from any responsibility for the content that is shared on their platforms. With the exception of the most serious illegal activity, like child abuse images, drug dealing, fraud and the promotion of terrorism, Meta will no longer scan for and mitigate harmful content. This means that posts, for example, that incite racial hatred, promote self-harm and suicide to teenagers, and seek to dehumanise people because of their religious beliefs will be left to spread unchecked. Four years after Donald Trump was removed from Meta’s platforms for his part in inciting a riot on Capitol Hill in Washington DC, which led to a violent break-in of the Congress building, Zuckerberg has confirmed that if the same thing happened again, he would take no action.
In addition to getting rid of the fact checkers, Meta have announced that they will change their systems so that posts in ‘civic groups’ will be given a higher ranking in the feed of content that is promoted to users, to the detriment of trusted and verifiable news sources. ‘Civic groups’ is the term used by Meta to describe the friends, family and community groups that people sign up to. However, these are also the main spaces where disinformation and conspiracy theories are shared. If you think platforms like Facebook are feeding division and intolerance in society today, and pushing medical disinformation that is a danger to public health, this is about to get a lot worse.
The prediction made before the US Presidential election in November by Marc Andreessen, a Trump donor and one of America’s leading tech entrepreneurs and investors, has come true. Elon Musk is leading a major pushback from the tech sector against safety regulation on social media. Mark Zuckerberg has also confirmed his hope that President Trump’s new administration will pressure other governments, particularly in Europe, to adopt the carve-outs from responsibility and regulation for tech platforms that are currently fixed in American law. In the UK, the Online Safety Act, which was passed in 2023 and becomes operational this year, prevents companies like Meta from walking away from their content moderation responsibilities. The legislation was drafted so that the requirement to scan for and remove priority illegal content was written onto the face of the Bill. This means that platforms have a legal duty to act against a wide range of offences by users, such as inciting racial hatred, promoting suicide and self-harm, and threatening violence against an individual. However, Mark Zuckerberg’s announcement is a clear signal that a major challenge will come from the Trump administration against tech laws like the Online Safety Act. This will most likely be made through trade negotiations, where pressure will be brought against the UK to accept American standards for digital regulation as part of a wider set of proposals. Such provisions were included in the draft trade agreement between the UK and the USA that was discussed when Donald Trump was previously President. They are also part of the trade agreements between the USA, Canada and Mexico which were agreed during the first Trump Presidency. At the time, UK ministers said they would not allow such a trade agreement to compromise legislation on online safety.
That will most likely be put to the test again, and we must stand firm against such proposals, which would remove any chance we have to hold tech executives to account and require them to enforce the safety standards on their platforms that are set in our laws. Why should big tech firms get such special treatment? Other media companies have legal liability for the content and adverts that they share with their viewers and readers. Platforms like Facebook, Instagram and X are not free speech platforms where all content is treated equally. These are advertising businesses that promote the content to users that is most likely to engage them, so that they can monetise that interest. Sadly, they have discovered that extremist content and disinformation can be good for business, but that doesn’t mean that our society should be shaped by their profit motive.
Damian Collins was UK Minister for Tech and the Digital Economy in 2022, and is a former Chair of the House of Commons Select Committee for Digital, Culture, Media and Sport. In 2021 he chaired the Joint Committee of the UK Parliament which led the pre-legislative scrutiny of the Online Safety Bill.
Thanks for sharing this insightful article.
As someone with an interest in digital misinformation and AI ethics, I'm disturbed by these developments. Research has consistently shown that fact-checks work, and that community notes should complement, not substitute for, them.
I'm also concerned by suggestions in some corners that these developments will affect US content moderation only. Zuckerberg made a direct threat against global regulation of online harm in his announcement, including, as you point out, in Europe. I'm afraid that if you decide to do business in these regions, you have a duty to respect local laws and operate responsibly.
I'm personally grateful for our progressive regulation in the UK, and also for upcoming European Union regulations that will target pernicious forms of online misinformation such as deepfakes. As a society, we must push back on social media platforms facilitating division and descending into a free-for-all of chaos and disorder.