Example use cases where a 100K-token context window is needed

Some tasks that benefit significantly from a large language model with a 100K-token context window (for example, Claude 100K), compared with models limited to a few thousand tokens, include:
Complex text summarization - Summarizing lengthy articles, research papers, legal documents, etc. in a single pass. The large context window lets the model ingest the entire document at once and produce an accurate, coherent summary (a single-pass approach is sketched after this list).
Advanced code generation - Generating code for complex tasks in languages like Python, Java, etc. A 100K-token window can hold several large source files or much of a codebase, so generated code can stay consistent with the project's existing APIs and conventions.
Answering complex multi-fact questions - Answering questions that require pulling facts from multiple sources and combining them into a coherent response. With a large window, all of the source documents can be supplied in the prompt, so the model can draw on every relevant fact in one pass.
Generating realistic dialogue - Creating realistic multi-turn conversations between characters. The 100K-token window lets the model keep the entire conversation history in context, so it can stay consistent over long spans of dialogue.
Open-domain question answering - Answering complex questions whose supporting material spans multiple domains. A large window means reference documents from several domains can be provided alongside the question, rather than relying only on what the model memorized during training.
Advanced text editing tasks - Rewriting text to change the tone or style, or to simplify the language, while preserving the core meaning. With a 100K-token window, an entire document can be revised in one pass, keeping terminology and voice consistent throughout.
Document-level translation - Accurately translating lengthy documents between languages. The large context window helps preserve cross-sentence references, terminology, and the full meaning of the original text.
So in general, tasks that require integrating information from a wide context, reasoning over long spans of text, and generating coherent multi-paragraph output benefit most from a 100K-token context window. The sketches below make the scale and the workflow difference concrete.
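To make the scale concrete, here is a minimal sketch of checking whether a document fits in a given window before sending it. It uses the tiktoken library; the cl100k_base encoding is an assumption made for rough, model-agnostic counts, and the file name and output budget are illustrative:

```python
import tiktoken  # pip install tiktoken

# cl100k_base is an assumption here; it gives rough, model-agnostic counts.
encoding = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text: str, context_window: int, output_budget: int = 1000) -> bool:
    """Return True if the prompt plus a reserved output budget fits in the window."""
    prompt_tokens = len(encoding.encode(text))
    return prompt_tokens + output_budget <= context_window

document = open("annual_report.txt").read()  # hypothetical ~80,000-word document
print(fits_in_context(document, context_window=4_096))    # likely False: must be chunked
print(fits_in_context(document, context_window=100_000))  # likely True: single pass works
```

At roughly 0.75 words per token for English text, 100K tokens corresponds to about 75,000 words, i.e. a short novel or several long reports in a single prompt.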
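The summarization item above shows the workflow difference most clearly. The sketch below contrasts the map-reduce workaround a small window forces with the single pass a 100K window allows; call_llm is a hypothetical stand-in for whatever completion API is in use, and the chunk size is illustrative:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError

def split_into_chunks(text: str, max_words: int) -> list[str]:
    """Naive whitespace chunker (roughly 0.75 words per token)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_small_window(document: str, max_words: int = 2500) -> str:
    # Workaround for a ~4K-token window: summarize each chunk independently,
    # then summarize the partial summaries. Facts that span chunk boundaries
    # can be lost at the first step, since no single call ever sees the whole text.
    chunks = split_into_chunks(document, max_words)
    partials = [call_llm("Summarize:\n" + chunk) for chunk in chunks]
    return call_llm("Combine these partial summaries into one:\n" + "\n".join(partials))

def summarize_large_window(document: str) -> str:
    # With a 100K-token window the whole document fits in one prompt,
    # so the model has the full context available while summarizing.
    return call_llm("Summarize the following document:\n" + document)
```

The same single-pass pattern applies to the editing, translation, and multi-document question-answering cases above.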