
Meta Brings Llama AI to U.S. National Security

By Hilary Ong

Nov 6, 2024


Meta has announced that it is making its Llama AI models available to U.S. government agencies and contractors involved in national security. The decision aims to address concerns about open-source AI models potentially benefiting foreign adversaries and to help the United States compete with countries like China in the global AI race.

Meta said it is partnering with more than a dozen companies, including Amazon Web Services (AWS), Oracle, Microsoft, Palantir, and Lockheed Martin, all of which work closely with the U.S. government on national security projects.

  • Oracle: Using Llama to speed up aircraft repairs by helping technicians process maintenance documents more quickly and accurately.
  • AWS and Microsoft: Supporting government operations by hosting Llama models on secure cloud platforms to handle sensitive data.

This announcement expands on comments made by Meta CEO Mark Zuckerberg during an earnings call, where he mentioned that Meta was collaborating with public sector entities to use Llama AI across government agencies. Meta is also extending access to allied nations, such as the United Kingdom, Canada, Australia, and New Zealand.

In a blog post, Meta’s President of Global Affairs, Nick Clegg, emphasized that the partnerships are intended to support democratic countries in developing AI technology. He explained that promoting American open-source AI models over those from China is crucial for national and allied interests.

Meta is making an exception to its usual policy, which prohibits the use of Llama models for military, warfare, or espionage purposes. The company confirmed to Bloomberg that this change applies to specific government agencies and contractors in the U.S. and allied countries.

However, the use of AI in defense applications remains a controversial topic. Last week, Reuters reported that Chinese researchers linked to the People’s Liberation Army (PLA) used an older version of Llama, called Llama 2, for military projects. These researchers, including some connected to a PLA R&D group, developed a chatbot designed for intelligence gathering and decision-making support. Meta responded that the use was unauthorized and violated its policies, noting that Llama 2 is an outdated model.

Adding to the debate, a study by the nonprofit AI Now Institute highlighted risks associated with AI in military settings. The report warned that AI systems used for intelligence and surveillance are vulnerable, relying on personal data that adversaries could exploit and weaponize. The study also noted AI models’ limitations, such as biases and hallucinations, which remain unresolved and could pose threats if used improperly. The co-authors of the study recommend developing AI specifically tailored for defense, separate from commercial systems.

There is also resistance within the tech industry. Employees at major companies, such as Google and Microsoft, have protested their employers’ involvement in AI projects for the U.S. military. Although Meta argues that open AI models can accelerate defense research and benefit national security and the economy, adoption has been slow. The U.S. military remains cautious: the Army is so far the only branch to have deployed generative AI, and skepticism about its effectiveness persists.


Featured Image courtesy of Shutterstock


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
