Compiler Optimization Through Verbs: Why Intent Matters
What if the compiler knew exactly what your function was supposed to do — before it even saw the implementation? That’s the core idea behind Prove’s verb system. And it changes everything for optimization.
The Problem with Traditional Compilers
Most compilers work from the inside out. They analyze your code, apply optimizations, and hope to infer what each function is supposed to accomplish. But they can’t be certain. A function might mutate global state, read files, throw exceptions, or launch threads — the compiler can’t know without running it. This is the side effect problem — and it’s one of the oldest challenges in compiler design.
This uncertainty forces compilers to be conservative. They can’t eliminate dead code, specialize implementations, or inline aggressively because they can’t prove those transformations are safe. LLVM’s alias analysis and GCC’s -fipa-pure-const pass spend significant effort trying to infer function purity after the fact — with limited success across module boundaries.

Verbs as Intent Declarations
In Prove, every function begins with a verb that declares its purpose:
transforms parse_email(email String) Email!
derives user_count() Integer
outputs log_message(String)
inputs fetch_user(id Integer) User!
These aren’t comments or documentation. They’re compiler-enforced contracts. The compiler verifies that your implementation matches what the verb promises. A transforms function cannot do IO. A derives function cannot allocate new values. The compiler knows.
What the Compiler Knows
When it sees a transforms verb, the compiler knows:
- No side effects — the function is pure
- No IO — no file access, network calls, or console output
- Deterministic — same inputs always produce same outputs
- Optionally failable — may be declared with ! to return an error, but doesn’t have to
This is a lot of information. And it enables optimizations that traditional compilers can only dream about.
Exact Outlines, Minimal Runtime
When you define a function’s boundaries through its verb, the compiler can generate an exact outline. Consider this:
transforms calculate_total(items List<Price>) Decimal
from
reduce(items, 0, |acc, item| acc + item.cents / 100)
The compiler knows this function:
- Will never allocate file handles
- Will never spawn threads
- Will never perform IO
- Has no observable side effects
So it can compile this down to bare metal. No safety nets for “what if this does IO” — because the verb guarantees it won’t. The runtime footprint shrinks to exactly what’s needed.
Concrete Optimization Examples
1. Zero-Cost Error Handling
Traditional languages carry error-handling overhead everywhere — even for functions that never fail. C++ compilers must generate exception handling tables for any function that might throw, and even “zero-cost exceptions” add binary size overhead. But transforms is the only pure verb that can fail, and only when explicitly marked:
transforms double(n Integer) Integer from n * 2
This cannot fail — there’s no ! in its signature. The compiler can omit all error-handling code paths. No Option wrapping, no Result types, no null checks — just the math.
2. Aggressive Inlining
Pure functions (those using transforms, validates, derives, creates, matches) are safe to inline. Traditional compilers use heuristic-based inlining that must conservatively account for possible side effects — LLVM’s inliner weighs call-site cost against potential side effects before committing. Prove’s compiler targets small pure functions: single-expression bodies for most pure verbs, up to three statements for non-allocating verbs like derives and validates. It can:
- Inline at call sites
- Constant-propagate through the body
- Evaluate pure calls at compile time
- Eliminate the function entirely if its result is unused
transforms square(n Integer) Integer
from
n * n
transforms area(r Integer) Integer
from
square(r) * 314159
The compiler can collapse this to r * r * 314159 — no function calls at all.
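To make that collapse concrete, here is the same before/after sketched in Python. This is a hand-written analogy to what inlining plus constant propagation produces, not actual compiler output, and the function names simply mirror the Prove example above:

```python
def square(n: int) -> int:
    return n * n

def area(r: int) -> int:
    # What we wrote: a call into square
    return square(r) * 314159

def area_optimized(r: int) -> int:
    # What the optimizer can emit: square inlined,
    # no function call left in the body
    return r * r * 314159

# Observably identical for every input
assert all(area(r) == area_optimized(r) for r in range(100))
```

Because `square` is pure, substituting its body at the call site can never change observable behavior, which is exactly the license the verb grants.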
3. Iterator Fusion
Since transforms functions are pure, the compiler can fuse chained iterator operations — combining multiple passes over data into a single pass. This is similar to stream fusion in Haskell’s GHC, but where GHC must prove purity through its type system, Prove gets it directly from the verb declaration:
transforms calculate(prices List<Price>) Decimal
from
reduce(map(prices, |p| p.discount), 0, |acc, d| acc + d)
The compiler knows this is safe to fuse because there’s no shared mutable state. No intermediate allocations — just a single traversal.
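Fusion is easy to picture in plain Python. The unfused version below materializes an intermediate list between the map and the reduce; the fused version makes a single pass. This is a hand-written analogy to the transformation, with prices modeled as plain dicts:

```python
def calculate_unfused(prices):
    # Two passes: map builds an intermediate list, reduce walks it
    discounts = [p["discount"] for p in prices]
    total = 0
    for d in discounts:
        total += d
    return total

def calculate_fused(prices):
    # One pass, no intermediate list: what fusion produces
    total = 0
    for p in prices:
        total += p["discount"]
    return total

prices = [{"discount": 5}, {"discount": 10}, {"discount": 3}]
assert calculate_unfused(prices) == calculate_fused(prices) == 18
```

The rewrite is only sound because neither the mapping nor the reducing function can observe or mutate shared state, which is what the transforms verb guarantees.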
4. Escape Analysis and Copy Elision
When the compiler knows a function creates a new value (creates) versus derives from existing data (derives), it can:
- Elide unnecessary copies when values don’t escape — similar to C++’s copy elision, but informed by verb semantics
- Use region allocation for non-escaping values — related to the escape analysis used in Go and Java’s HotSpot JVM
- Eliminate intermediate allocations in chains
creates build_report(users List) Report from Report(name: "Summary", count: length(users), items: map(users, |u| u.name))
The compiler’s escape analysis tracks whether the Report leaves the calling scope, enabling allocation optimizations.
IO Verbs: Controlled Boundaries
IO verbs (inputs, outputs, streams, dispatches) define clear boundaries between pure and impure code. The compiler knows exactly which functions can perform IO and which cannot:
inputs fetch_users() List! from query(db, "SELECT * FROM users")!
Because verbs classify entire functions, the compiler can freely optimize any pure function in the call graph without worrying about hidden IO. The boundary is explicit and enforced.
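Purity is, roughly, the same license a cache needs: if a function always returns the same output for the same input and touches nothing else, its results can be reused freely. Here is that idea in Python, where a decorator plays the compiler's role; `normalize` and `fetch_users` are hypothetical names for illustration:

```python
from functools import lru_cache

# A pure, transforms-style function: caching (or any result reuse)
# is safe because the output depends only on the input
@lru_cache(maxsize=None)
def normalize(name: str) -> str:
    return name.strip().lower()

# An inputs-style function touches the outside world; reusing a
# stale result would be unsound, so it stays outside the
# optimizable region
def fetch_users(db):
    return db.query("SELECT * FROM users")

assert normalize("  Alice ") == "alice"
assert normalize("  Alice ") == "alice"  # served from the cache
```

In Python the programmer must decide where caching is safe; with verbs, the pure/impure split is already in the signature.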
Async Verbs: Structured Concurrency
With async verbs, the compiler understands concurrency boundaries:
- detached — fire and forget, cannot declare a return type
- attached — callback handler, must declare a return type
- listens — event listener within an event loop
- renders — coordinates the event loop, manages state and listeners
This lets the compiler generate minimal async infrastructure. Compare with Rust’s async fn, which always generates a state machine regardless of how the coroutine is used — Prove only emits what the verb requires.
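The detached/attached split maps loosely onto a familiar distinction in Python's asyncio: a task that is scheduled and never awaited versus a coroutine whose result the caller waits for. This is only an analogy, with hypothetical function names, not Prove's actual codegen:

```python
import asyncio

async def log_event(msg: str) -> None:
    # detached-style: fire and forget, no result to deliver
    await asyncio.sleep(0)

async def compute(n: int) -> int:
    # attached-style: the caller awaits a declared result
    await asyncio.sleep(0)
    return n * 2

async def main() -> int:
    asyncio.ensure_future(log_event("start"))  # scheduled, not awaited
    return await compute(21)                   # awaited: result flows back

assert asyncio.run(main()) == 42
```

A compiler that knows a call is detached never needs machinery for delivering a result; one that knows a call is attached can generate exactly the completion path and nothing more.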
The Bottom Line
Traditional compilers work with uncertainty. They analyze code and make educated guesses about what functions do. Sometimes they’re right. Often they must be conservative, because a transformation that can’t be proven safe can’t be applied at all.
Prove’s verb system removes the guesswork. When you declare intent upfront, the compiler doesn’t have to infer boundaries — it knows them. And that knowledge translates directly to:
- Smaller binaries — no code for “what if”
- Faster execution — no safety overhead for guaranteed-safe operations
- Better inlining — pure functions are safe to inline
- Compile-time evaluation — pure calls can be evaluated before runtime
The code you write is the code you get — but the compiler makes it lean. We’re still building out these optimizations and more are on the way, but we’re confident this approach will only get stronger.
Ready to try it? Check the tutorial to get started, see performance benchmarks against C, Rust, and Zig, or read the full verb reference and compiler documentation. More posts on the blog.


