The continuous flow of updates and revelations in the world of generative AI shows no signs of slowing, even as we approach the end of 2023 and the customary hush of the winter holiday season.
Consider Microsoft Research, the visionary branch of the software giant, which has just unveiled its latest creation: Phi-2, a small language model (SLM) designed for text-to-text AI tasks. Notably, Phi-2 is tailored to be “small enough to run on a laptop or mobile device,” as detailed in a post on X.
Impressively, Phi-2 boasts 2.7 billion parameters (the connections between artificial neurons), putting its performance in the same league as much larger models like Meta’s Llama 2-7B and Mistral-7B, both wielding 7 billion parameters.
In their blog post about Phi-2’s release, Microsoft researchers highlighted its superiority over Google’s freshly minted Gemini Nano 2 model, despite the latter having half a billion more parameters. Notably, Phi-2 also exhibits reduced “toxicity” and bias in its responses compared to Llama 2.
Microsoft couldn’t resist a playful jab at Google’s somewhat controversial staged demonstration video for Gemini, in which Google showcased the problem-solving prowess of its upcoming behemoth, Gemini Ultra. Despite Phi-2 likely being a fraction of Gemini Ultra’s size, it performed admirably on the same prompts, answering the questions and correcting a student’s errors.
However, there’s a significant limitation to Phi-2, at least for now: it is exclusively licensed for “research purposes only,” with no allowance for commercial use, as per a customized Microsoft Research License. This restriction means that businesses aspiring to build products based on Phi-2 will need to explore other avenues.