Long context prompting for Claude 2.1

February 21, 2019

Anthropic’s Claude is one of the major foundation LLMs alongside OpenAI’s GPT4. Gemini from Google, when it is publicly released, will be another, and Llama2 is the major player in the open source world. The largest-context model from OpenAI is GPT4-128K; for Anthropic it is now Claude 2.1-200K. Both models have a “lost in the middle” issue: they recall information near the beginning and end of a long prompt much better than information buried in the middle. There are different ways to “solve” it (it is not really solved so much as worked around, based on what we are learning about how these models handle long contexts). This guidance from Anthropic for their top-end model is fascinating: adding the sentence “Here is the most relevant sentence in the context:” to the start of Claude’s response was enough to raise Claude 2.1’s score from 27% to 98% on the original evaluation.
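In practice, the trick Anthropic describes is to prefill the beginning of the assistant’s turn so the model continues from that sentence. Here is a minimal sketch of what that looks like, assuming the Anthropic Python SDK’s Messages API; the model name, document, and question strings below are illustrative placeholders, not part of Anthropic’s post.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_document = "...the full long-context document goes here..."  # placeholder
question = "What was the best thing to do in San Francisco?"      # placeholder

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[
        {
            # The long context plus the question go in the user turn.
            "role": "user",
            "content": f"{long_document}\n\n{question}",
        },
        {
            # Prefill the start of Claude's reply; the model continues from this
            # text, which is the sentence Anthropic reports lifting the score
            # from 27% to 98% on their retrieval evaluation.
            "role": "assistant",
            "content": "Here is the most relevant sentence in the context:",
        },
    ],
)

print(response.content[0].text)
```

Because Claude continues from the prefilled text, it is nudged to quote a relevant sentence from the context before answering, which is what drives the jump in recall reported above.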

View Anthropic's full post
