Large language models (LLMs) are trained to predict the next word in a text sequence. How they actually generate text, however, depends on more than those probability estimates: it is governed by algorithms known as decoding strategies. According to AssemblyAI, these strategies are crucial in determining how an LLM selects the next word.
Next word predictor vs. text generator
LLMs are often described in popular writing as "next-word predictors", but this is misleading. In the decoding stage, an LLM does more than repeatedly output the most likely next word: it applies one of several strategies, known as decoding strategies, and the choice of strategy fundamentally shapes the text it produces.
Decoding strategies
Decoding strategies can be divided into deterministic and probabilistic methods. Deterministic methods produce the same output for the same input, while probabilistic methods introduce randomness to produce different outputs even for the same input.
Deterministic methods
Greedy search
Greedy search is the simplest decoding strategy, where at each step the most likely next token is chosen. Although efficient, it often produces repetitive and tedious text.
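A minimal sketch of greedy decoding in Python, assuming a hypothetical logits_fn that returns next-token logits for a given token sequence (a real LLM forward pass would play that role); the toy_logits stub below is purely illustrative:

import numpy as np

def greedy_decode(logits_fn, context, max_new_tokens=10, eos_id=None):
    # Repeatedly append the single most probable next token (argmax of the logits).
    tokens = list(context)
    for _ in range(max_new_tokens):
        logits = logits_fn(tokens)
        next_id = int(np.argmax(logits))  # deterministic: always pick the top token
        tokens.append(next_id)
        if eos_id is not None and next_id == eos_id:
            break
    return tokens

# Toy stand-in for a real model's next-token logits (vocabulary of 5 tokens).
toy_logits = lambda toks: np.array([0.1, 2.0, 0.3, 0.5, 1.2])
print(greedy_decode(toy_logits, context=[0], max_new_tokens=3))  # -> [0, 1, 1, 1]

The toy output already hints at greedy decoding's weakness: because the same token keeps winning, the text repeats itself.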
Beam search
Beam search generalizes greedy search by keeping the K most probable candidate sequences at each step. It improves text quality but can still produce repetitive and unnatural output.
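A rough sketch of beam search under the same assumption of a hypothetical logits_fn. For brevity it expands every token of a small vocabulary at each step; real implementations only expand the most promising candidates:

import numpy as np

def beam_search(logits_fn, context, k=3, max_new_tokens=5):
    # Keep the k sequences with the highest cumulative log-probability at every step.
    beams = [(list(context), 0.0)]  # (tokens, cumulative log-prob)
    for _ in range(max_new_tokens):
        candidates = []
        for tokens, score in beams:
            logits = logits_fn(tokens)
            shifted = logits - logits.max()
            log_probs = shifted - np.log(np.exp(shifted).sum())  # log-softmax
            for tok_id, lp in enumerate(log_probs):
                candidates.append((tokens + [tok_id], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]  # the highest-scoring sequence found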
Probabilistic methods
Top-k sampling
Top-k sampling introduces randomness by sampling the next token from the top k most likely choices. However, choosing the optimal value of k can be difficult.
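One possible top-k implementation, operating directly on a raw logits vector (how those logits are produced is left to the model):

import numpy as np

def top_k_sample(logits, k=50, rng=None):
    # Restrict sampling to the k most likely tokens, renormalise, then draw.
    rng = rng or np.random.default_rng()
    top_ids = np.argsort(logits)[-k:]          # indices of the k largest logits
    top_logits = logits[top_ids]
    probs = np.exp(top_logits - top_logits.max())
    probs /= probs.sum()                       # softmax over the k candidates only
    return int(rng.choice(top_ids, p=probs))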
Top-p sampling (nucleus sampling)
Top-p sampling dynamically selects tokens based on a cumulative probability threshold, adapting to the distribution shape at each step and maintaining the diversity of the generated text.
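A corresponding sketch of top-p (nucleus) sampling, again taking a raw logits vector as input; the threshold value is an illustrative default, not a fixed specification:

import numpy as np

def top_p_sample(logits, p=0.9, rng=None):
    # Sample from the smallest set of tokens whose cumulative probability exceeds p.
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                    # most likely tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1   # size of the nucleus
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))

Unlike top-k, the number of candidate tokens here grows or shrinks with the shape of the distribution at each step.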
Temperature sampling
Temperature sampling uses the temperature parameter to adjust the sharpness of the probability distribution. Lower temperatures produce more deterministic text, while higher temperatures increase randomness.
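A minimal temperature-sampling sketch; note that the temperature simply divides the logits before the softmax:

import numpy as np

def temperature_sample(logits, temperature=1.0, rng=None):
    # T < 1 sharpens the distribution (more deterministic); T > 1 flattens it (more random).
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))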
Information-content optimization through typical sampling
Typical sampling applies ideas from information theory to balance predictability and surprise in generated text. It favors tokens whose information content is close to the expected information content (the entropy) of the next-token distribution, aiming for text that stays coherent while remaining engaging.
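A sketch of this idea: keep the tokens whose information content is closest to the distribution's entropy until they cover a chosen probability mass, then sample among them. The mass parameter below is an illustrative choice:

import numpy as np

def typical_sample(logits, mass=0.9, rng=None):
    # Prefer tokens whose information content (-log p) is close to the entropy,
    # keeping just enough of them to cover `mass` of the probability.
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    info = -np.log(probs)                        # per-token information content
    entropy = np.sum(probs * info)               # expected information content
    order = np.argsort(np.abs(info - entropy))   # most "typical" tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, mass)) + 1
    keep = order[:cutoff]
    keep_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=keep_probs))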
Speeding up inference through speculative sampling
Speculative sampling, recently proposed by researchers at Google Research and DeepMind, improves inference speed by producing several tokens per pass of the large target model. A small draft model proposes a short run of tokens, and the target model verifies them, accepting or correcting them, which yields significant speedups.
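A simplified sketch of the draft-then-verify loop, with hypothetical draft_logits_fn and target_logits_fn standing in for the two models. The published algorithm verifies all drafted positions in one batched target pass and uses a probabilistic accept/reject rule; this version checks greedy agreement token by token, just to show the control flow:

import numpy as np

def speculative_step(target_logits_fn, draft_logits_fn, tokens, n_draft=4):
    # 1. The cheap draft model proposes n_draft tokens, one at a time.
    draft = list(tokens)
    for _ in range(n_draft):
        draft.append(int(np.argmax(draft_logits_fn(draft))))
    proposed = draft[len(tokens):]

    # 2. The target model checks each drafted position (a real implementation
    #    scores all positions in a single batched forward pass).
    accepted = []
    for tok in proposed:
        target_choice = int(np.argmax(target_logits_fn(tokens + accepted)))
        if target_choice == tok:
            accepted.append(tok)             # target agrees: keep the drafted token
        else:
            accepted.append(target_choice)   # disagreement: take the target's token and stop
            break
    return tokens + accepted

Each call to speculative_step can thus emit several tokens while invoking the expensive target model far less often than plain autoregressive decoding.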
Conclusion
Understanding decoding strategies is crucial to optimizing the performance of LLMs in text generation tasks. Deterministic methods such as greedy search and beam search provide efficiency, while probabilistic methods such as top-k, top-p, and temperature sampling introduce the randomness needed for more natural output. Newer approaches such as typical sampling and speculative sampling further improve text quality and inference speed, respectively.