
Anthropic conducted an internal experiment called Project Deal, in which AI agents represented buyers and sellers in a marketplace and completed real transactions using allocated budgets.
Experiment Design And Participation
The pilot involved 69 Anthropic employees, each given $100 in gift cards to spend through AI agents acting on their behalf. The company ran four separate marketplace environments, including one “real” market in which its most advanced model represented all participants and completed transactions were honored.
Across the experiment, agents completed 186 deals totaling more than $4,000 in value.
Performance Differences Between Models
Anthropic reported that marketplaces using more advanced models produced better outcomes for users, as measured by deal quality. Participants, however, did not appear to recognize these performance differences, suggesting that users represented by less capable agents could receive weaker outcomes without being aware of the disparity.
The company described this as a potential “agent quality” gap, where uneven capabilities between AI systems influence results in ways that are not visible to users.
Impact Of Instructions On Negotiation Behavior
The experiment also tested whether the initial instructions given to agents would influence outcomes. Anthropic found that these instructions had little effect on the likelihood of completing a sale or on negotiated prices, indicating that agent behavior was driven more by model capability than by prompt configuration.
Implications For AI Mediated Transactions
The results highlight how AI agents can autonomously negotiate and complete transactions involving real goods and money within structured environments. The findings also point to differences in agent performance that may affect fairness and transparency in systems where users rely on automated intermediaries.
