Hacker News

Compaction is just what Claude Code has done forever, right?




I think the point here is not that it does compaction (which Codex already does), but that the model was trained on examples of Codex compaction, so it should perform better after compaction has taken place (a common source of performance drops for earlier models).

Codex previously did only manual compaction, but yeah, maybe some extra training for compaction, too?

I am also trying to understand the difference between compaction, and what IDEs like Cursor do when they "summarize" context over long-running conversations.

Is this saying that said summarization now happens at the model level? Or are there other differences?


Afaik, there's no difference besides how aggressive it is.

But it's the same concept: taking tokens in context and removing irrelevant ones by summarizing, etc.
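The shared idea behind both compaction and Cursor-style summarization can be sketched as a threshold check: once the conversation exceeds a token budget, older messages are replaced by a summary. A minimal sketch (all names here are hypothetical; `summarize()` is a stand-in for an actual model call):

```python
# Hypothetical sketch of threshold-triggered compaction: when the running
# token count exceeds a budget, older messages are collapsed into a single
# summary message. summarize() stands in for an LLM summarization call.

def count_tokens(messages):
    # Crude proxy: whitespace-split word count. Real systems use a tokenizer.
    return sum(len(m["content"].split()) for m in messages)

def summarize(messages):
    # Stand-in for a model call that condenses the older conversation.
    return "Summary of %d earlier messages." % len(messages)

def compact(messages, budget=50, keep_recent=2):
    """Replace everything but the most recent messages with one summary
    message once the context exceeds the token budget."""
    if count_tokens(messages) <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent

history = [{"role": "user", "content": "word " * 20} for _ in range(5)]
compacted = compact(history)
print(len(compacted))  # 3: one summary plus the two most recent messages
```

How "aggressive" a given tool is then comes down to the budget, how many recent messages it keeps verbatim, and whether it summarizes or simply drops the rest.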


Codex couldn't do what Claude did before when reaching the full context window.

My understanding is that they trained it to explicitly use a self-prune/self-edit tool that trims or summarizes portions of its message history (e.g. tool results from file explorations, or messages that are no longer relevant) during the session, rather than "panic-compacting" at the end. In any case, it would be good if it does something like this.
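A minimal sketch of that idea, the model pruning entries from its own history mid-session via a tool call rather than waiting for a wholesale compaction at the limit, might look like this (the tool name and message shape are my own assumptions, not Codex's actual interface):

```python
# Hypothetical self-prune tool: the model calls prune(ids) during the
# session to drop messages it no longer needs (e.g. stale tool results),
# instead of one "panic compaction" at the context limit.

class History:
    def __init__(self):
        self._messages = {}   # id -> message dict, in insertion order
        self._next_id = 0

    def add(self, role, content):
        mid = self._next_id
        self._next_id += 1
        self._messages[mid] = {"id": mid, "role": role, "content": content}
        return mid

    def prune(self, ids, placeholder="[pruned]"):
        """Tool the model can call: replace listed messages with a stub,
        so the transcript stays coherent but the tokens are reclaimed."""
        for mid in ids:
            if mid in self._messages:
                self._messages[mid]["content"] = placeholder

    def render(self):
        # What would be sent back to the model as context.
        return list(self._messages.values())

h = History()
h.add("user", "find the bug in parser.py")
tool_id = h.add("tool", "<5000 lines of file contents>")
h.add("assistant", "The bug is on line 12.")
h.prune([tool_id])  # model decides the raw file dump is no longer needed
print(h.render()[1]["content"])  # "[pruned]"
```

The key difference from end-of-window compaction is that the decision is incremental and targeted: individual stale entries are reclaimed as the session goes, rather than everything being summarized at once.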

Yes. It was missing in Codex until now.



