Like Google and many other companies, Microsoft is investing heavily in AI. Its multiyear, multibillion-dollar investment in OpenAI, the maker of ChatGPT, is just one example of the vision driven by CEO Satya Nadella. While Large Language Models (LLMs) such as ChatGPT and Google Bard have vast capabilities, their sheer size demands large computing resources, which imposes practical limitations. To counter this, Microsoft has introduced Orca, a 13-billion-parameter model that learns to imitate the reasoning process of Large Foundation Models (LFMs).
Unlike ChatGPT, Microsoft Orca is a smaller AI model, developed and tailored for specific use cases. According to a Microsoft research paper, Orca learns from rich signals generated by GPT-4, a model estimated to have roughly one trillion parameters, including explanation traces, intricate instructions, and detailed step-by-step thought processes, while sidestepping the formidable challenges posed by large-scale data handling and task variety. Because of its smaller size, Orca does not require large, dedicated computing resources. As a result, it can be optimized and tailored for specific applications without a large-scale data center.
One of the most notable features of this AI model is its open-source architecture. Unlike the proprietary ChatGPT and Google Bard, Orca supports an open-source framework, meaning the public can contribute to the development and improvement of the small LFM. By harnessing the efforts of the community, it can take on the private models built by large tech companies.
While it is built on the foundations of Vicuna, another instruction-tuned model, Orca surpasses Vicuna's capabilities by more than 100 percent on complex zero-shot reasoning benchmarks.
Read more on tech.hindustantimes.com