Meta has unveiled Meta Motivo, an artificial intelligence model designed to bring more human-like motion to digital agents in the Metaverse. Announced on Thursday, the behavioral foundation model controls the full-body movements of a virtual humanoid agent, addressing key challenges in avatar body control and enabling lifelike movements that could transform how users experience virtual worlds.
The company emphasized the potential of Meta Motivo to enhance digital interactions. “We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences,” Meta stated. This initiative is part of the tech giant’s broader investment in artificial intelligence, augmented reality, and other Metaverse technologies, with capital expenditures for 2024 projected to reach a record $37 billion to $40 billion.
In line with its commitment to open innovation, Meta continues to release many of its AI models for free, fostering development that could ultimately benefit its services. Alongside Meta Motivo, the company announced the Large Concept Model (LCM), a novel approach to language modeling. Unlike traditional large language models (LLMs), which generate text one token at a time, the LCM predicts entire concepts, each corresponding to a full sentence represented in an embedding space, within a multilingual and multimodal framework. This method aims to “decouple reasoning from language representation,” according to Meta.
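To make the distinction concrete, here is a minimal, purely illustrative Python sketch of concept-level prediction. It is not Meta’s LCM or its SONAR encoder: the toy sentence encoder, the linear “predictor,” and the nearest-neighbour decoding are all assumptions made for illustration. What it does show is the core idea of predicting the next sentence embedding rather than the next token.

```python
import numpy as np

# Purely illustrative concept-level prediction -- NOT Meta's LCM/SONAR code.
# Each sentence maps to one embedding vector (a "concept"), and the model
# predicts the next sentence embedding instead of the next token.

DIM = 8  # toy embedding size; real sentence encoders use hundreds of dims

def embed_sentence(sentence: str) -> np.ndarray:
    """Toy stand-in for a learned sentence encoder: bucket words into a
    fixed-size vector and normalize it."""
    vec = np.zeros(DIM)
    for word in sentence.lower().split():
        vec[sum(map(ord, word)) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Toy "concept predictor": a fixed linear map standing in for the learned
# transformer that, in a concept model, operates entirely in embedding space.
rng = np.random.default_rng(0)
W = np.eye(DIM) + 0.1 * rng.normal(size=(DIM, DIM))

def predict_next_concept(context: list) -> np.ndarray:
    """Predict the next sentence embedding from the mean of the context."""
    pred = W @ np.mean(context, axis=0)
    return pred / np.linalg.norm(pred)

# Decode the predicted embedding back to text by nearest neighbour over
# candidate sentences (a stand-in for a learned embedding-to-text decoder).
candidates = [
    "The agent walks across the room.",
    "The weather is sunny today.",
    "It waves to the user.",
]
context = [embed_sentence(s) for s in
           ["The avatar enters the room.", "The agent walks across the room."]]
pred = predict_next_concept(context)
best = max(candidates, key=lambda s: float(embed_sentence(s) @ pred))
print("Predicted next sentence:", best)
```

Because every step here operates on whole-sentence vectors, swapping in encoders for other languages or modalities would leave the predictor untouched, which is the intuition behind decoupling reasoning from language representation.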
Another addition to Meta’s suite of AI tools is Video Seal, which embeds an invisible, traceable watermark into video frames to enhance security without disrupting the viewing experience. The watermark is imperceptible to viewers but can later be detected to verify a video’s origin, and it is designed to remain detectable even after common edits.
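For intuition only, the sketch below shows a classic spread-spectrum watermark in Python: a low-amplitude pseudorandom pattern, keyed by a secret seed, is added to each frame and later detected by correlation. This is a conceptual stand-in, not Video Seal’s method; Meta’s tool uses trained neural embedder and extractor models, and every name and parameter here is assumed for illustration.

```python
import numpy as np

# Classic spread-spectrum watermarking, shown only as a conceptual stand-in:
# Video Seal's real watermark comes from trained neural models.
# All names and parameters here are assumptions for illustration.

SEED = 1234       # secret watermark key (hypothetical)
STRENGTH = 3.0    # amplitude, ~1% of the 0-255 pixel range, hard to see

def watermark_pattern(shape, seed=SEED):
    """Pseudorandom +/-1 pattern derived from the secret key."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def embed(frame):
    """Add the keyed pattern to a grayscale frame (float array)."""
    return frame + STRENGTH * watermark_pattern(frame.shape)

def detect(frame, threshold=STRENGTH / 2):
    """Correlate the (mean-removed) frame against the keyed pattern."""
    pattern = watermark_pattern(frame.shape)
    score = float(np.mean((frame - frame.mean()) * pattern))
    return score > threshold

# Apply to a stand-in "video": a few random grayscale frames.
rng = np.random.default_rng(0)
video = [rng.uniform(0, 255, size=(256, 256)) for _ in range(3)]
marked = [embed(f) for f in video]

print("plain frames detected: ", [detect(f) for f in video])    # all False
print("marked frames detected:", [detect(f) for f in marked])   # all True
```

Without the secret seed, the pattern cannot be regenerated, so only the watermark’s owner can reliably check for it; learned approaches like Video Seal pursue the same goal while also surviving heavier edits.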
As Meta positions itself at the forefront of AI and Metaverse technologies, the launch of these tools underscores the company’s ambition to shape the future of digital interaction and virtual reality.