DMR News


Bluesky Publishes First Transparency Report As User Growth And Moderation Activity Rise

By Jolyen

Feb 2, 2026


Bluesky has released its first full transparency report, outlining how its Trust and Safety team handled moderation, legal requests, influence operations, and labeling in 2025, as the social network recorded rapid user growth and a sharp increase in platform activity.

User Growth And Platform Activity

The social media startup, which positions itself as an alternative to X and Threads, reported that its user base grew nearly 60% in 2025, increasing from 25.9 million users to 41.2 million. That figure includes accounts hosted on Bluesky’s own infrastructure as well as accounts running independently on the decentralized network built on Bluesky’s AT Protocol.

During the year, users published 1.41 billion posts, representing 61% of all posts ever made on the platform. Media posts accounted for 235 million of those entries, or 62% of all media shared on Bluesky to date.

Increase In Legal Requests

The company reported a fivefold rise in legal requests during 2025. Requests from law enforcement agencies, regulators, and legal representatives reached 1,470, compared with 238 in 2024.

While Bluesky released moderation-focused reports in 2023 and 2024, the latest publication is its first comprehensive transparency report. It expands beyond moderation to cover regulatory compliance, age-assurance efforts, account verification, automated labeling, and monitoring of influence operations.

Moderation Reports And User Participation

Bluesky received 9.97 million user reports in 2025, a 54% increase from 6.48 million in 2024. The company said the growth in reports closely matched its 57% increase in users over the same period.

Around 3% of the user base, or 1.24 million users, submitted reports during the year. The largest category was “misleading” content, including spam, which made up 43.73% of reports. Harassment accounted for 19.93%, and sexual content represented 13.54%. A broad “other” category made up 22.14% of reports, covering issues that did not fall into predefined areas such as violence, child safety, self-harm, or rule violations.

Within the misleading category, spam alone accounted for 2.49 million of the 4.36 million reports. In the harassment category, hate speech made up about 55,400 reports, followed by targeted harassment at roughly 42,520, trolling at 29,500, and doxxing at about 3,170.

Bluesky said most harassment reports fell into gray areas of antisocial behaviour, including rude or disruptive interactions that did not meet thresholds for more specific categories.

Sexual Content And Violence Reports

Of the 1.52 million reports related to sexual content, most involved mislabeling. Bluesky said adult material was often not properly tagged with metadata, which is used by the platform’s moderation tools to let users control what they see.

Smaller numbers of reports concerned nonconsensual intimate imagery (about 7,520), abuse content (around 6,120), and deepfakes (just over 2,000).

Violence-related reports totaled 24,670. These included approximately 10,170 reports for threats or incitement, 6,630 for glorification of violence, and 3,230 for extremist content.

Automated Detection And Behavioral Changes

In addition to user reports, Bluesky’s automated systems flagged 2.54 million potential violations. The company said it saw a significant decline in daily reports of antisocial behaviour after introducing a feature that detects toxic replies and reduces their visibility by placing them behind an extra click.

Daily reports of antisocial behaviour dropped 79% after the system was introduced. The company also reported a 50.9% decline in reports per 1,000 monthly active users between January and December.

Enforcement Actions And Influence Operations

Outside of moderation reports, Bluesky said it removed 3,619 accounts linked to suspected influence operations, most of which it believes originated in Russia.

In 2025, the platform took down 2.44 million items, including accounts and individual pieces of content. By comparison, Bluesky removed 66,308 accounts in 2024, with automated systems accounting for 35,842 of those removals.

Moderators removed 6,334 records manually, while automated systems removed 282. The company also issued 3,192 temporary suspensions and carried out 14,659 permanent removals for ban evasion, primarily targeting inauthentic behaviour, spam networks, and impersonation.

Preference For Labeling Over Removal

The report shows that Bluesky relies more heavily on labeling than account removal. In 2025, the platform applied 16.49 million labels to content, a 200% increase year over year. Account takedowns rose 104%, from 1.02 million to 2.08 million.

Most labels were applied to adult or suggestive content and nudity, reflecting the company’s approach of providing users with controls rather than removing content outright where possible.


Featured image credits: Heute.at

