From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem

4 points | news.future-shock.ai | future-shock-ai | 2 days ago