The limited ability of current Large Language Models (LLMs) to comprehend ever-larger amounts of context remains one of the biggest impediments to achieving AI singularity - the threshold at which artificial intelligence demonstrably exceeds human intelligence. At first glance, the 200K-token context window of Anthropic's Claude 2.1 LLM appears impressive. However, its context recall leaves much to be desired, especially when compared with the relatively robust recall of OpenAI's GPT-4.
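Context recall of this kind is commonly measured with so-called needle-in-a-haystack probes: a short fact is buried at varying depths inside a long stretch of filler text, and the model is asked to retrieve it. Below is a minimal sketch of such a probe in Python, assuming the anthropic SDK's legacy completions endpoint and an ANTHROPIC_API_KEY environment variable; the filler text, needle sentence, and depths are illustrative placeholders, not the actual benchmark behind the comparison above.

```python
# Minimal needle-in-a-haystack recall probe (illustrative sketch).
# Assumes: anthropic Python SDK installed, ANTHROPIC_API_KEY set.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()

needle = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
filler = "The quick brown fox jumps over the lazy dog. " * 2000  # padding to inflate the context

# Bury the needle near the start, middle, and end of the padding.
for depth in (0.0, 0.5, 1.0):
    cut = int(len(filler) * depth)
    document = filler[:cut] + needle + filler[cut:]
    completion = client.completions.create(
        model="claude-2.1",
        max_tokens_to_sample=100,
        prompt=f"{HUMAN_PROMPT} {document}\n\n"
               f"What is the best thing to do in San Francisco?{AI_PROMPT}",
    )
    print(f"depth={depth:.0%}: {completion.completion.strip()[:80]}")
```

A model with strong recall answers correctly regardless of where the needle sits; degraded answers at certain depths are what critics of long-context models point to.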
Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing.
Claude 2.1 is available over API in our Console, and is powering our https://t.co/uLbS2JNczH chat experience. pic.twitter.com/T1XdQreluH
— Anthropic (@AnthropicAI) November 21, 2023
Anthropic announced yesterday that its latest Claude 2.1 LLM now supports an "industry-leading" context window of 200K tokens while delivering a 2x decrease in model hallucinations - a situation where a generative AI model perceives non-existent patterns or objects, often as a result of unclear or contradictory input, and delivers an inaccurate or nonsensical output.
For the benefit of those who might not be aware, a token is a basic unit of text or code that LLMs use to process and generate language. Depending on the tokenization method employed, a token might be a character, a word, a subword, or an entire segment of text or code. Claude 2.1's enlarged context window allows the LLM to understand and process a nearly 500-page document - roughly 150,000 words - in a single prompt.
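As a rough illustration of how text maps to tokens, the snippet below uses OpenAI's open-source tiktoken library with its cl100k_base encoding. Anthropic's tokenizer is not public, so Claude's actual token counts will differ; this only demonstrates subword tokenization in general.

```python
# Illustrative subword tokenization using tiktoken (pip install tiktoken).
# Note: this is the GPT-4 encoding, not Claude's (which is not public).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Claude 2.1 supports a 200K-token context window."
token_ids = enc.encode(text)

print(len(token_ids), "tokens")              # e.g. ~13 tokens for this sentence
print([enc.decode([t]) for t in token_ids])  # one string per token
# Common English words map to a single token; rarer strings split into subwords.
```

At roughly this granularity, a few characters per token on average, a 200K-token window works out to the approximately 150,000 English words mentioned above.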