Tag: sequence length curriculum

Training duration and token counts alone don't determine how well an LLM generalizes. How sequence lengths are structured during training matters more: variable-length curricula outperform fixed-length approaches, reduce training costs, and unlock stronger reasoning ability.
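As a rough illustration of what a variable-length (sequence length) curriculum can look like in practice, the sketch below ramps the maximum training sequence length from a short warm-up value to the full context size over early steps. The function name, step counts, and lengths are illustrative assumptions, not taken from any of the posts listed here.

```python
# Minimal sketch of a sequence-length curriculum (assumed schedule, not
# from the posts below): grow the max training sequence length linearly
# during a warm-up phase, then hold it at the full context size.
def max_seq_len(step: int,
                warmup_steps: int = 10_000,
                start_len: int = 512,
                final_len: int = 4096) -> int:
    """Return the maximum sequence length to use at a given training step."""
    if step >= warmup_steps:
        return final_len
    frac = step / warmup_steps
    return int(start_len + frac * (final_len - start_len))

# Example: early batches are packed/truncated to shorter lengths, which
# cuts attention cost; later batches use the full context window.
for step in (0, 5_000, 10_000, 20_000):
    print(step, max_seq_len(step))
```

A linear ramp is only one option; step-wise or data-driven schedules are common variants of the same idea.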

Recent posts

Why Tokenization Still Matters in the Age of Large Language Models

Sep 21, 2025

Secure Prompting for Vibe Coding: How to Ask for Safer Code

Oct 2, 2025

How Training Duration and Token Counts Affect LLM Generalization

Dec 17, 2025

Private Prompt Templates: How to Prevent Inference-Time Data Leakage in AI Systems

Aug 10, 2025

Interoperability Patterns to Abstract Large Language Model Providers

Jul 22, 2025