Microsoft Engineer’s Warning on Copilot Designer

By Huey Yee Ong

Mar 9, 2024

Shane Jones, a Microsoft engineer of six years, has voiced serious concerns about the company's artificial intelligence tool, Copilot Designer. Testing the tool on his own time, Jones found disturbing patterns in its output, including violent and sexually explicit images. These findings stand in stark contrast to Microsoft's advertised commitment to responsible AI development and use.

How Did the AI Go Wrong?

Copilot Designer, launched in March 2023 and powered by OpenAI technology, is designed to turn text prompts into images, encouraging user creativity. However, Jones' red-teaming efforts, the practice of probing a system for vulnerabilities by simulating attacks or misuse, revealed that the tool can produce content that egregiously violates Microsoft's principles of ethical AI use.

Jones highlighted several issues with the AI’s content generation, including:

  • Demonized Depictions of Social Issues: Producing images of demons and monsters in scenarios involving abortion rights.
  • Violent Imagery: Generating visuals of teenagers with assault rifles and sexualized portrayals of women in violent tableaus.
  • Underage Illegal Activities: Creating scenes of underage drinking and drug use.

These outcomes were not one-off incidents but consistently reproducible results, as CNBC confirmed through independent testing of the same tool, further corroborating Jones' warnings.

What Steps Did Jones Take to Alert Microsoft?

Jones' role at Microsoft is principal software engineering manager; his work on Copilot Designer grew out of personal initiative rather than professional obligation. Such off-hours testing is common practice in the tech community, where employees and outside researchers alike probe new technologies to unearth potential issues. After his internal warnings were overlooked, Jones took several steps:

  • Internal Reports: Beginning in December, he filed a series of internal reports urging Microsoft to withdraw the tool until robust safeguards could be established.
  • Escalation to OpenAI: When Microsoft directed him to OpenAI, Jones contacted the company directly but was dissatisfied with the lack of proactive measures.
  • Public Disclosure: He took his concerns public in a LinkedIn post, which he removed after instructions from Microsoft's legal department.
  • Contacting U.S. Senators: He reached out to U.S. senators and met with staff from the Senate Committee on Commerce, Science, and Transportation.
  • Letters to the FTC and Microsoft's Board: On March 6, he sent letters to FTC Chair Lina Khan and Microsoft's board of directors detailing his findings and the company's apathetic response.

His actions reflect a deep-seated worry about the technology's potential misuse and the broader implications of generative AI for spreading misinformation and inappropriate content, especially ahead of major elections around the world.

Public debate over the safety and ethics of generative AI is intensifying, and Jones' situation highlights substantial gaps in content moderation and ethical AI deployment at leading technology firms. His experience underscores the need for a more vigilant and responsive approach to AI development, one that prioritizes safety, ethical standards, and legal compliance above technological advancement or market competitiveness.

What Is Microsoft Doing in Response?

A Microsoft spokesperson said the company is committed to addressing employee concerns in accordance with company policies and pointed to robust internal channels for reporting and resolving issues that affect its services. However, Jones' ordeal points to a pressing need for more effective mechanisms to address the risks of AI-generated content, particularly content that violates ethical standards or potentially harms users.

Jones’ endeavor to spotlight the deficiencies of Copilot Designer and push for necessary changes is a poignant reminder of the challenges facing AI development. As technology continues to evolve, the importance of maintaining ethical integrity, safeguarding user interests, and fostering transparency in incident reporting and resolution becomes ever more critical.


Featured Image courtesy of SOPA Images/LightRocket via Getty Images

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed through an applied psychology lens, to make tech news digestible. In other words, I deliver tech news that is easy to read.