Bluesky is facing its first major controversy over data scraping after a dataset containing one million public posts appeared on the AI platform Hugging Face, 404 Media reports.
Compiled by Daniel van Strien, a machine learning librarian at Hugging Face, the dataset was intended for AI research and analysis, but it was removed after public outcry over user consent and data privacy.
The dataset included Bluesky users’ decentralized identifiers (DIDs), metadata, and post details, and it could be searched to locate specific users’ content. Van Strien described the data as useful for developing language models, studying social media trends, and testing moderation tools. The collection was technically legal, but Bluesky users had not consented to this use of their content, and backlash followed. Van Strien later apologized in a Bluesky post, admitting the move failed to uphold the principles of transparency and consent.
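For context, a record in a dataset like this typically pairs the post text with identifying metadata, and DIDs make it trivial to pull out every post from a single author. The sketch below is a hypothetical illustration using the Hugging Face datasets library; the field names and values are invented for demonstration and are not the removed dataset's actual schema.

```python
# Hypothetical sketch of the kind of records such a dataset contains.
# The values below are invented, not drawn from the removed dataset.
from datasets import Dataset

records = {
    "did": ["did:plc:abc123example", "did:plc:def456example"],   # author identifiers
    "created_at": ["2024-11-26T12:00:00Z", "2024-11-26T12:05:00Z"],
    "text": ["First example post.", "Second example post."],
}
ds = Dataset.from_dict(records)

# Because DIDs are stable identifiers, filtering to a single author is trivial --
# the "searchable" quality that alarmed users.
author_posts = ds.filter(lambda row: row["did"] == "did:plc:abc123example")
print(author_posts[0]["text"])
```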
Bluesky’s decentralized platform is built on the Authenticated Transfer (AT) Protocol and exposes a public firehose API, which provides an aggregated stream of all public activity, such as posts, likes, and follows. Bluesky representatives explained that this structure mirrors the openness of the web, where public data can be crawled. They acknowledged the need for tools that let users express consent preferences, but admitted those settings cannot be enforced on external developers.
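To illustrate how exposed this stream is, here is a rough sketch of how a third party might read public post events from Jetstream, Bluesky's JSON view of the firehose. The host name, query parameter, and event field names are assumptions based on the publicly documented service and may differ in practice; treat this as a sketch rather than an official client.

```python
# Sketch (not an official client): reading public post events from Bluesky's
# Jetstream service, a JSON view of the AT Protocol firehose.
import asyncio
import json

import websockets  # pip install websockets

JETSTREAM_URL = (
    "wss://jetstream2.us-east.bsky.network/subscribe"
    "?wantedCollections=app.bsky.feed.post"  # only posts, not likes/follows
)

async def watch_public_posts(limit: int = 5) -> None:
    """Print a handful of public posts straight from the open stream."""
    async with websockets.connect(JETSTREAM_URL) as ws:
        seen = 0
        async for raw in ws:
            event = json.loads(raw)
            commit = event.get("commit") or {}
            # Only newly created post records carry text to display.
            if commit.get("operation") == "create":
                record = commit.get("record", {})
                print(event.get("did"), "->", record.get("text", ""))
                seen += 1
                if seen >= limit:
                    break

if __name__ == "__main__":
    asyncio.run(watch_public_posts())
```

The specifics matter less than the point: no authentication or consent gate stands between this public stream and anyone who wants to collect it.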
The platform has grown rapidly, attracting millions of users, many of them fleeing X (formerly Twitter) over AI-related concerns. Although Bluesky itself does not train AI models on user content, its open design allows third parties to do so. The incident has raised alarms about the privacy risks of open, public networks.
Bluesky said it is working on consent mechanisms but emphasized that honoring these preferences will depend on external actors. “We’re having ongoing conversations with engineers & lawyers and we hope to have more updates to share on this shortly!” the company said.
While Bluesky’s efforts to address consent are a step in the right direction, it’s clear that more concrete measures are needed. Giving users stronger tools to control their data, like opt-in systems and anti-scraping protections, would be a meaningful way to balance innovation with respect for user rights. Platforms have a responsibility to ensure that openness doesn’t come at the expense of user trust.
Featured Image courtesy of NicoElNino/Getty Images/iStockphoto