
OpenAI has reorganized several internal teams to accelerate development of new audio models, as the company prepares for the launch of an audio-first personal device expected in about a year, according to reporting from The Information.
Internal Restructuring Focused On Audio
According to the report, OpenAI has unified multiple engineering, product, and research groups over the past two months to rebuild its audio technology stack. The effort is intended to support a new generation of audio models and hardware designed around voice as the primary interface, rather than screens.
The audio overhaul is tied to plans for a personal device that prioritizes spoken interaction. The company is reportedly working toward an early 2026 release for its next audio model, which is expected to sound more natural, manage interruptions, and allow overlapping speech during conversations.
Industry Shift Toward Voice Interfaces
The changes at OpenAI mirror broader developments across the technology sector, where audio is becoming a central mode of interaction. Smart speakers have already brought voice assistants into more than a third of US households.
Meta recently introduced a feature for its Ray-Ban smart glasses that uses a five-microphone array to isolate conversations in noisy environments. Google began testing Audio Overviews in June, a feature that converts search results into conversational summaries. Tesla has integrated xAI’s chatbot Grok into its vehicles, enabling drivers to interact with systems such as navigation and climate controls through natural speech.
Startups Experiment With Audio-First Hardware
Smaller companies have also been pursuing audio-led devices, with mixed results. The Humane AI Pin, a screenless wearable, consumed hundreds of millions of dollars in funding before failing to gain traction. The Friend AI pendant, marketed as a wearable companion that records daily life, has drawn scrutiny over privacy concerns.
At least two other companies are developing audio-based wearables. Sandbar and a separate venture led by Eric Migicovsky are working on AI-powered rings expected to launch in 2026, allowing users to interact with AI through spoken commands directed at the device.
Audio Models And Companion Devices
OpenAI’s upcoming audio model is expected to improve conversational flow by responding more fluidly and handling interruptions in real time. The company is also reported to be considering a broader lineup of audio-centric devices, potentially including smart glasses or screenless speakers, designed to function as ongoing conversational companions rather than task-based tools.
Design Influence From Former Apple Leadership
The Information noted that Jony Ive, who joined OpenAI’s hardware efforts following the company’s $6.5 billion acquisition of his firm io in May, has been involved in shaping the approach. Ive has previously argued that audio-first devices could reduce dependence on screens and address design decisions that contributed to excessive device use in earlier consumer electronics.
