Meta to crack down on plagiarism, low-quality AI content on Facebook
Meta, the American multinational technology company, has announced a sweeping set of measures aimed at curbing the spread of unoriginal and unverified content across its platforms, including Facebook.
In an effort to enhance content quality and integrity, the company will begin penalising accounts that repeatedly post plagiarised material — including copied text, photos, or videos — without adding meaningful value, Caliber.Az reports, citing foreign media.
The move mirrors steps already taken by YouTube.
According to a report by 3DNews, Meta has already taken action this year by removing approximately 10 million fake profiles that impersonated popular creators. Additionally, over 500,000 accounts have been stripped of monetisation privileges due to spam and fraudulent behaviour.
As reported by TechCrunch, such accounts will now face reduced visibility, and their comments may be excluded from recommendations. In cases of duplicate videos, users will be redirected to the original content.
However, Meta has clarified that its policies are not aimed at users who remix or comment on existing content, engage with trends, or offer original interpretations.
The company is also responding to the growing influx of so-called “AI slop” — low-quality content generated by artificial intelligence tools. Meta is advising publishers to watermark such materials and prioritise original storytelling. Automatically generated subtitles, if left unedited, may be deemed unacceptable under the new standards.
The changes come amid criticism over Meta's increasing reliance on automated moderation systems. A petition signed by nearly 30,000 users has called on the company to address erroneous account suspensions and the lack of access to human moderators — an issue that has disproportionately affected small businesses. Meta has yet to issue a public response to the petition.
To help creators adjust, the updates will be rolled out gradually. Publishers will be able to monitor reach reductions and flagged content through the Facebook Professional Dashboard and support centre. As part of its commitment to transparency, Meta will continue publishing enforcement data in its quarterly reports.
The most recent figures show that an estimated 3% of Facebook's monthly active users were fake accounts. Between January and March 2025 alone, the company took action against over one billion such profiles.
In parallel, Meta is shifting away from its in-house fact-checking model in the United States in favour of a system similar to X's (formerly Twitter's) Community Notes, allowing users and moderators to collaboratively assess the accuracy of posts.
By Aghakazim Guliyev