the 80/20 of memory management
i've been writing Odin for a while now — Pegasus (my PEG parser generator) and Wayu (a web framework) are both written in it. and if there's one thing i've learned about manual memory management, it's that arenas handle about 80% of your allocation patterns with maybe 20% of the complexity you'd expect.
the pitch is simple: instead of individually allocating and freeing every little thing, you allocate from a big chunk of memory (the arena), and when you're done with all of it, you destroy the whole chunk at once. no individual free() calls. no tracking who owns what. no use-after-free bugs. no memory leaks. just... nuke it.
coming from Go where the garbage collector handled everything, this felt too simple to actually work. but it does. and once you see the pattern, you start noticing that almost every allocation in your program falls into one of two buckets.
two lifecycles, that's it
here's the insight that made arenas click for me: most allocations have one of two lifecycles.
long-lived singletons — things that get created once and live for the entire program. grammar definitions, compiled bytecode, configuration, lookup tables. you load them at startup and they stick around until exit. there's no point in freeing them individually because they all die together when the process ends.
short-lived per-operation scratch space — things that exist for the duration of one operation and are garbage afterward. per-parse AST nodes, per-request response buffers, per-frame render state. they get created in a burst, used, and then the whole batch is useless.
each lifecycle gets its own arena. that's the whole strategy.
```odin
import "core:mem/virtual"

// Long-lived: program lifetime
grammar_arena: virtual.Arena
grammar_allocator := virtual.arena_allocator(&grammar_arena)
grammar := load_grammar(grammar_allocator) // lives forever

// Short-lived: per-parse
parse :: proc(input: []u8) -> AST {
	arena: virtual.Arena
	defer virtual.arena_destroy(&arena)
	temp := virtual.arena_allocator(&arena)

	// all allocations during parsing use temp
	tokens := tokenize(input, temp)
	ast := build_ast(tokens, temp)

	// clone result to caller's allocator; arena freed on return
	return clone_ast(ast, context.allocator)
}
```

the long-lived arena just... exists. you never free it because the stuff in it is needed until the program exits. the OS reclaims it when the process dies. done.
the short-lived arena is where the magic really shows. every call to parse creates a fresh arena, does a ton of allocations into it (tokenizing, building AST nodes, intermediate results), extracts the final result, and then defer destroys the entire arena on return. all those intermediate allocations — gone in one shot.
why this beats individual free()
in Pegasus, the parser creates dozens of AST nodes during a single parse. in the old Go-brained approach, every node was individually allocated with new(), and i had to track and free each one. that's where bugs live — you forget one free() and you've got a leak, you free something too early and you've got a use-after-free, you free something twice and you've got a crash.
with arenas, none of that is possible. you can't leak because destroying the arena frees everything. you can't use-after-free because everything in the arena has the same lifetime. you can't double-free because you're not freeing individual things. the one boundary you still have to respect is the arena itself: anything that needs to escape, like the parse result above, gets cloned out to a longer-lived allocator first. that's exactly what clone_ast is doing.
in Wayu it's the same pattern but for HTTP requests. each request gets its own arena. all the parsing, routing, middleware allocations happen in that arena. when the response is sent, the arena gets destroyed. clean slate for the next request.
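the shape of that is small enough to sketch. something like the following is the whole idea (handle_request, Connection, and the helper procs are hypothetical names for illustration, not Wayu's actual API):

```odin
import "core:mem/virtual"

// sketch of a per-request arena; handler and helper names are hypothetical
handle_request :: proc(conn: ^Connection) {
	arena: virtual.Arena
	defer virtual.arena_destroy(&arena) // clean slate once the response is sent
	request_allocator := virtual.arena_allocator(&arena)

	// parsing, routing, and middleware all allocate from the request arena
	req := parse_request(conn, request_allocator)
	resp := run_middleware_and_route(req, request_allocator)
	send_response(conn, resp)
}
```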
odin makes this stupid easy
the thing that makes arenas practical in Odin is that allocation is pluggable everywhere. new() and make() take an optional allocator parameter, and dynamic arrays and maps remember the allocator they were made with, so append() and resize() keep using it. you don't need special arena-aware types or wrapper functions. you just pass the allocator:
```odin
// default allocator (general purpose)
node := new(AST_Node)

// arena allocator (bulk lifetime)
node := new(AST_Node, arena_allocator)
```

same code, same types, different allocation strategy. you can even swap allocators for testing — use a tracking allocator in debug builds to verify zero leaks, use arenas in production for speed.
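that debug-build trick is worth showing. here's the standard tracking-allocator setup from core:mem (run is a hypothetical stand-in for your actual program). build with -debug and it prints every allocation that was never freed:

```odin
import "core:fmt"
import "core:mem"

main :: proc() {
	when ODIN_DEBUG {
		// wrap the default allocator in a tracking allocator
		track: mem.Tracking_Allocator
		mem.tracking_allocator_init(&track, context.allocator)
		context.allocator = mem.tracking_allocator(&track)
		defer {
			// anything still in the map at exit was leaked
			for _, entry in track.allocation_map {
				fmt.eprintf("%v leaked %v bytes\n", entry.location, entry.size)
			}
			mem.tracking_allocator_destroy(&track)
		}
	}

	run() // the actual program
}
```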
and arenas are fast. allocation is just bumping a pointer. no free lists, no coalescing, no bookkeeping. for short-lived arenas that get destroyed in bulk, you're basically getting allocation performance for free.
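to make "bumping a pointer" concrete, the fast path of an arena allocation is roughly this. a simplified fixed-buffer sketch, not the actual core:mem/virtual implementation:

```odin
// simplified bump allocator; real arenas also handle growth and failure modes
Bump_Arena :: struct {
	data: []byte, // one big backing buffer
	used: int,    // high-water mark
}

bump_alloc :: proc(a: ^Bump_Arena, size, align: int) -> rawptr {
	offset := (a.used + align - 1) & ~(align - 1) // round up to alignment
	if offset + size > len(a.data) {
		return nil // out of space; a growing arena would chain a new block here
	}
	ptr := &a.data[offset]
	a.used = offset + size // the "bump": one add, no per-allocation bookkeeping
	return ptr
}
```

and bulk deallocation is the same trick in reverse: reset used to zero (or unmap the whole buffer) and every allocation dies at once.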
the remaining 20%
arenas don't solve everything. if you have an object that needs to outlive its arena but not live forever — like a cache entry with a TTL — you need something else. and if you're building a long-running data structure that grows and shrinks over time, a pool allocator or explicit ownership might be better.
but honestly? in both Pegasus and Wayu, those cases are rare. the vast majority of allocations are either "lives forever" or "lives for this operation." two arenas cover it.
the real lesson from working with arenas is that most memory management complexity comes from not thinking about lifetimes upfront. once you ask "when does this thing die?" the answer is almost always "when the program exits" or "when this operation finishes." and arenas map directly to those answers.