Thousands of Facebook groups around the world have been suspended in recent days due to a “technical error” officially acknowledged by Meta. The exact cause is not known, although many suspect a fault in the platform’s AI-based automatic moderation system. And behind this bug, a question lingers: how much control do we actually have over our digital lives?
On Reddit, the r/facebook community abounds with messages from administrators who saw their groups deleted overnight, for no apparent reason. Some lost several communities simultaneously, receiving notifications they consider absurd. For example, a group with almost one million members, dedicated to birds, was deleted for allegedly promoting “nudity content,” according to TechCrunch. Another group, focused on interior design and with millions of members, was accused of promoting “dangerous organizations.” And the list goes on.
Meta cites a technical error but provides no details
Although Meta did not specify whether the closure of the groups is related to the recent bans applied to individual accounts, signs across the digital environment suggest a broader phenomenon. Platforms such as Pinterest and Tumblr have faced similar waves of mass account suspensions in recent weeks. Pinterest blamed an internal error but ruled out AI involvement, while Tumblr said it was testing a new content filtering system, without clarifying whether it is automated.
When multiple platforms block accounts or groups without clear explanations, specialists warn, the suspicion arises that automated systems are spinning out of control, and this erodes the trust users and content creators place in these digital spaces.
Meta has so far offered no clear explanation for these incidents. Meanwhile, an online petition has collected over 12,000 signatures, and some affected users, especially those who built their businesses around these groups, are considering legal action, TechCrunch reports.
According to the same source, Meta spokesman Andy Stone confirmed that the company is aware of the problem and is working to fix it: “We are aware of a technical error that has affected some Facebook groups. We are working to remedy the situation.”
Why algorithms are not “neutral”
For sociologist Marius Wamstedel of Duke Kunshan University, such incidents cannot be reduced to simple technical mistakes.
If we treat the recent incident as a simple technical error, he believes, we lose sight of the fact that such problems are neither isolated nor reducible to the malfunctioning of a few algorithms. “The automation of content moderation on digital platforms has obvious advantages: it allows an impressive volume of information to be evaluated in a short time, at low cost, and without the risk that decisions are influenced by the values or idiosyncrasies of human moderators. In other words, automation promises an honest, objective evaluation,” he explains.
Artificial intelligence models are built on theoretical principles and trained on data that reproduce a particular vision of social acceptability. “In addition, there is the problem of their calibration. Here there is a trade-off between sensitivity (detecting all real cases of inappropriate content) and specificity (avoiding wrong identifications). The handiest analogy is with medical tests: a test with very high sensitivity can give false positive results, identifying infections that do not exist, while a test tuned for very high specificity risks failing to diagnose all existing cases (false negative results). In this case, the moderation algorithm seems to have been configured to prioritize the elimination of any potential risk, which led to harmless content being identified as dangerous,” adds Marius Wamstedel.
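To make that trade-off concrete, here is a minimal Python sketch with entirely synthetic risk scores and invented thresholds; nothing in it reflects Meta’s actual system. It shows how lowering a moderation classifier’s decision threshold catches more genuinely harmful content (higher sensitivity) while wrongly removing more harmless content (lower specificity):

```python
# Illustrative sketch only: a toy moderation classifier with made-up data,
# not any platform's real system.
import random

random.seed(0)

# Synthetic "risk scores" in [0, 1]: harmful content tends to score higher,
# but the two distributions overlap, as with any imperfect classifier.
harmful  = [random.betavariate(5, 2) for _ in range(1000)]
harmless = [random.betavariate(2, 5) for _ in range(1000)]

def evaluate(threshold):
    """Sensitivity = share of harmful content caught;
    specificity = share of harmless content left alone."""
    caught = sum(score >= threshold for score in harmful)
    spared = sum(score < threshold for score in harmless)
    return caught / len(harmful), spared / len(harmless)

for t in (0.2, 0.5, 0.8):
    sens, spec = evaluate(t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")

# A low threshold "eliminates any potential risk" (high sensitivity) but
# flags many harmless posts (low specificity) -- the calibration trade-off
# the sociologist describes.
```

Run as-is, the low threshold removes nearly all harmful content but also a large share of the harmless posts, which is the pattern a risk-averse calibration would produce.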
The present case raises a fundamental question about the transparency of algorithms. We all understand, in broad terms, what an artificial intelligence model is, but no one knows exactly the logic by which it actually operates, the specialist says.
“In the language of the social sciences, algorithms are black boxes: processes visible through the results they produce, but whose inner mechanisms are opaque. This creates a deep asymmetry between platforms and their users, who understand very little about how they are evaluated, classified and sanctioned, yet bear the consequences of the algorithm. Returning to the calibration of the model discussed earlier, we may suspect that the platform deliberately chose a design that avoids potential media scandals and reputational risks at the expense of honest users, who are unfairly harmed,” the professor concludes.
When digital identity becomes fragile
From the perspective of entrepreneur Laura Ioana Sardescu, a specialist in digital marketing strategies, the problem has another stake: the loss of control over one’s own identity. “When you post content on a platform such as Facebook, it enters, legally and technically, that platform’s territory. It does not belong entirely to you. What remains yours, however, is your brand image: the way you communicate, your positioning, the value you build over time,” she says.
That is why, she believes, we are going through one of the most complicated periods for digital identity. “We are encouraged to invest time, budgets and energy in a single channel, without actually having real control over it. And this is precisely where the vulnerability lies: if a bug or an error deletes your account, group or page, your work disappears. The platform does not guarantee stability. Therefore, the goal should not be mere visibility or likes; those are ephemeral and rarely connected to your real audience. The goal is to build authority in a clear area of expertise, across several channels, in a strategic way. When you communicate relevantly and coherently over time, your brand becomes more than a presence on a platform. It becomes a voice. And an authentic voice cannot be deleted by an algorithm,” says the entrepreneur.