How Programming Languages Actually Work
From the text you type to the binary that runs - the full compiler pipeline, type safety ladder, and patterns every developer should know.
How many programming languages do you know? You probably counted three or four - Python, JavaScript, maybe Go or Rust. The actual number is closer to a dozen.
Written a SQL query? That’s a programming language. Used a regex? Language. Written a Dockerfile? Language. Configured Terraform? Also a language.
These are Domain-Specific Languages - DSLs. Each one is built for a narrow job instead of general-purpose programming. SQL handles data queries. Regex handles pattern matching. Dockerfile handles container builds. They all have grammars, parsers, and compilers. You’ve been a polyglot without knowing it.
DSLs come in two flavors.
External DSLs have their own grammar and their own parser. SQL, regex, HCL, and Dockerfile are external DSLs. You can’t freely mix them with your main language because they speak different syntax entirely.
Internal DSLs piggyback on a host language’s syntax. If you’ve used Rails routing (get '/users', to: 'users#index'), Jest’s expect(value).toBe(42), or a query builder like GORM in Go - you’ve used an internal DSL. They look like normal code but form a mini-language of their own.
| Artifact | Medium | Purpose |
|---|---|---|
| SQL SELECT | External DSL | Query |
| Dockerfile | External DSL | Build instructions |
| Regex | External DSL | Pattern matching |
| OpenAPI YAML | Generic config | Specification |
| TLA+ | External DSL | Formal specification |
| Rails routes | Internal DSL (Ruby) | Configuration |
Once you notice this, languages show up everywhere. Your .gitignore has a grammar. Your CI config follows parsing rules. Markdown has a specification. And every single one of these goes through the same basic transformation pipeline to become something a machine can execute.
The Seven Stages
A compiler is a program that transforms source code into something a machine can run. Every compiled language does this through roughly the same sequence - a pipeline where each stage transforms the output of the one before it. Once you see this assembly line, each machine on it makes sense on its own.
The pipeline splits into two halves. The frontend (Stages 1-5) is language-specific - it understands your source language’s syntax, scoping, and type rules. The backend (Stages 6-7) is target-specific - it takes the compiler’s internal representation and generates output for a particular platform.
A single expression, traced through the full pipeline:
total = price * quantity + tax
Stage 1: The Lexer
The lexer reads raw characters one at a time and groups them into tokens - the smallest meaningful units of the language.
It scans t, o, t, a, l and keeps going until it hits a character that doesn’t belong - a space, an operator, a bracket. Those five characters become a single token: IDENT "total". Then it sees = and produces EQUALS. Then p, r, i, c, e become IDENT "price". One character at a time in, whole meaningful units out.
Input:   t o t a l   =   p r i c e   *   q u a n t i t y   +   t a x
         ─────────   ─   ─────────   ─   ───────────────   ─   ─────
Output:  IDENT       =   IDENT       *   IDENT             +   IDENT
         "total"         "price"         "quantity"            "tax"
From 30+ individual characters, the lexer produces seven tokens. Each token carries three things: a kind, a value, and a location (line and column number). That location metadata travels through the entire pipeline. That’s how the compiler can point to the exact character that caused an error five stages later.
The lexer knows nothing about grammar. It doesn’t care whether price * quantity makes sense. Its only job is grouping characters into meaningful chunks and passing them forward.
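The idea fits in a few lines. Here is a minimal lexer sketch in Python - illustrative only, with made-up token names; production lexers in Go or Rust are hand-rolled state machines, not regex loops:

```python
import re

# Regex per token kind, tried in order at the current position.
# SKIP consumes whitespace without producing a token.
TOKEN_SPEC = [
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("EQUALS", r"="),
    ("STAR",   r"\*"),
    ("PLUS",   r"\+"),
    ("SKIP",   r"\s+"),
]

def lex(source):
    tokens, pos, col = [], 0, 1
    while pos < len(source):
        for kind, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if kind != "SKIP":
                    # each token carries a kind, a value, and a location
                    tokens.append((kind, m.group(), col))
                col += len(m.group())
                pos += len(m.group())
                break
        else:
            raise SyntaxError(f"unexpected character at column {col}")
    return tokens

print(lex("total = price * quantity + tax"))  # 7 tokens, each (kind, value, column)
```

Note that the column recorded with each token is exactly the location metadata that later stages use to point error messages at the offending character.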
Stage 2: The Parser
The parser groups that flat token stream into a tree based on the language’s grammar. Two rules govern how the tree gets shaped.
Precedence determines which operator wins when two compete for the same operand. In price * quantity + tax, both * and + want quantity. Multiplication has higher precedence, so it claims quantity first:
        =
       / \
  total   +
         / \
        *   tax
       / \
  price   quantity
If addition had won instead, the tree would mean price * (quantity + tax) - a different calculation entirely. The parser doesn’t evaluate anything. It shapes the tree to reflect the precedence rules, and the tree determines the result.
Associativity breaks ties when operators have equal precedence. a - b - c could group as (a - b) - c or a - (b - c). With a = 10, b = 3, c = 2, the first gives 5, the second gives 9. Subtraction is left-associative - it groups left to right - so (a - b) - c wins. Assignment goes the other direction: a = b = 5 means a = (b = 5), right to left. b gets 5 first, then a gets the result. Same precedence level, opposite grouping.
Every expression you write depends on these two rules. Parentheses exist to override them when the default isn’t what you want.
Write something grammatically invalid - like total = * price - and the parser rejects it at this stage. It doesn’t know if your formula is correct, but it knows if your structure is valid.
Some compilers hand-write their parsers (Go and Rust both do). Others generate them from grammar definitions using tools like ANTLR or tree-sitter.
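A hand-written recursive-descent parser is small enough to sketch. In this Python toy (hypothetical - not any production compiler's parser), precedence falls out of the call structure: parse_add asks parse_mul for its operands, so * binds tighter than +, and the while loops make both operators left-associative.

```python
# Tokens are (kind, value) pairs, e.g. ("IDENT", "price").
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def parse_atom():
        nonlocal pos
        kind, value = tokens[pos]
        assert kind == "IDENT", f"expected identifier, got {kind}"
        pos += 1
        return value

    def parse_mul():                 # higher precedence: consulted first
        nonlocal pos
        node = parse_atom()
        while peek() and peek()[0] == "STAR":
            pos += 1
            node = ("Multiply", node, parse_atom())   # left-associative
        return node

    def parse_add():                 # lower precedence: operands come from parse_mul
        nonlocal pos
        node = parse_mul()
        while peek() and peek()[0] == "PLUS":
            pos += 1
            node = ("Add", node, parse_mul())
        return node

    def parse_assign():
        nonlocal pos
        target = parse_atom()
        assert tokens[pos][0] == "EQUALS", "expected ="
        pos += 1
        return ("Assignment", target, parse_add())

    return parse_assign()

tokens = [("IDENT", "total"), ("EQUALS", "="), ("IDENT", "price"),
          ("STAR", "*"), ("IDENT", "quantity"), ("PLUS", "+"), ("IDENT", "tax")]
print(parse(tokens))
# → ('Assignment', 'total', ('Add', ('Multiply', 'price', 'quantity'), 'tax'))
```

Multiplication claims quantity first, exactly as in the tree above, because parse_add never sees a token until parse_mul has consumed everything it can.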
Stage 3: The AST
The raw parse tree preserves everything - parentheses, whitespace, comments, redundant groupings. The Abstract Syntax Tree strips that noise away, keeping only the semantic structure.
Our expression becomes:
Assignment
  target: total
  value: Add
    left: Multiply
      left: price
      right: quantity
    right: tax
No = sign, no * or + symbols. Just named operations in a clean hierarchy. “Abstract” means abstracted from surface syntax. The tree still says the same thing, but in a form the compiler can work with directly. From this point forward, every stage operates on the AST, never the source text.
Stage 4: Name Resolution
Every identifier needs to connect to its declaration. The compiler walks the AST and binds each name to where it was defined:
Assignment
  target: total → local variable, declared on line 1
  value: Add
    left: Multiply
      left: price → function parameter
      right: quantity → function parameter
    right: tax → local variable, declared on line 3
When the compiler sees price, it searches outward through scopes - local block, enclosing function, module, globals - until it finds the declaration. If nothing matches, compilation fails here. This is the stage that catches your typos: write prce instead of price, and the compiler tells you exactly which name it couldn’t find.
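That outward search is easy to sketch. In this Python toy (the scope contents are made up for illustration), each scope is a dict and resolution walks from the innermost scope toward the globals:

```python
def resolve(name, scopes):
    # scopes is ordered outermost-first; search innermost-first
    for scope in reversed(scopes):
        if name in scope:
            return scope[name]
    raise NameError(f"undefined name: {name!r}")

scopes = [
    {"PI": "global constant"},                        # module/global scope
    {"price": "parameter", "quantity": "parameter"},  # enclosing function
    {"total": "local", "tax": "local"},               # local block
]

print(resolve("price", scopes))   # found in the enclosing function scope
# resolve("prce", scopes) raises NameError - this is the typo-catching stage
```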
Stage 5: The Type Checker
The type checker walks the AST and attaches a type to every expression:
total: float = price: float * quantity: int + tax: float
               └───────── float ──────────┘
               └──────────────── float ────────────────┘
If price were a string and you tried to multiply it by quantity, the type checker rejects the program. The output is a typed AST - same tree, but every node now carries type information.
This is where the compiler’s semantic personality lives. The type checker is usually the largest single component. Go’s is around 15,000 lines. Rust’s is over 100,000. Every decision a language makes about what it allows, what it rejects, and what conversions happen implicitly gets encoded here.
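The core of any type checker is a recursive walk that computes a type for every node and rejects mismatched operands. A toy sketch, with hypothetical rules - it permits exactly one implicit conversion, int widening to float, which the expression above relies on:

```python
def check(node, env):
    if isinstance(node, str):              # identifier: look up its declared type
        return env[node]
    op = node[0]
    if op == "Assignment":
        _, target, value = node
        env[target] = check(value, env)    # infer the target's type from the value
        return env[target]
    _, left, right = node
    lt, rt = check(left, env), check(right, env)
    numeric = {"int", "float"}
    if lt in numeric and rt in numeric:
        # the one implicit conversion: int widens to float
        return "float" if "float" in (lt, rt) else "int"
    raise TypeError(f"cannot {op} {lt} and {rt}")

env = {"price": "float", "quantity": "int", "tax": "float"}
ast = ("Assignment", "total",
       ("Add", ("Multiply", "price", "quantity"), "tax"))
print(check(ast, env))  # → float
```

Make price a "string" in env and the same call raises TypeError - the rejection the prose describes.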
Stage 6: The IR
The Intermediate Representation is a simplified, target-independent version of your program.
t1 = load price
t2 = load quantity
t3 = mul t1, t2
t4 = load tax
t5 = add t3, t4
store total, t5
Flat. Simple. Every instruction has at most one operation and up to three operands. No nested expressions, no language-specific syntax. This is called three-address code.
The IR is the pivot point. Everything before it deals with the source language - its syntax, scoping rules, and type system. Everything after it deals with the target platform - its instruction set, memory model, and calling conventions.
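Lowering the AST into this form is a small recursive walk: emit code for the children first, then allocate a fresh temporary for the result. A Python sketch (hypothetical, but it reproduces the six instructions above):

```python
from itertools import count

def lower(ast):
    code, temps = [], count(1)

    def emit(node):
        if isinstance(node, str):          # identifier -> load into a temporary
            t = f"t{next(temps)}"
            code.append(f"{t} = load {node}")
            return t
        op, left, right = node
        if op == "Assignment":
            code.append(f"store {left}, {emit(right)}")
            return left
        l, r = emit(left), emit(right)     # children first (post-order)
        t = f"t{next(temps)}"
        code.append(f"{t} = {'mul' if op == 'Multiply' else 'add'} {l}, {r}")
        return t

    emit(ast)
    return code

ir = lower(("Assignment", "total",
            ("Add", ("Multiply", "price", "quantity"), "tax")))
print("\n".join(ir))   # prints the same six instructions as the listing above
```

The nesting of the tree becomes sequencing in the IR: post-order traversal guarantees every operand exists before the instruction that uses it.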
Stage 7: Code Generation
The final stage walks the IR and emits actual output for a specific target:
// Go
total := price * float64(quantity) + tax
-- SQL
SELECT price * quantity + tax AS total FROM orders;
;; WebAssembly
local.get $price
local.get $quantity
f64.mul
local.get $tax
f64.add
local.set $total
Same IR, three different outputs. That’s not a coincidence.
The Pivot Point
The IR does something subtle and powerful: it decouples frontends from backends.
Without an IR, supporting M source languages on N target platforms requires M x N compiler implementations. Every language needs a separate backend for every target. With an IR in the middle, you need M frontends and N backends. The work drops from multiplicative to additive.
This is the core idea behind every major compiler infrastructure:
| IR | Used By | Style |
|---|---|---|
| LLVM IR | Clang (C/C++), Rust, Swift, Julia, Zig | SSA |
| WASM | The web platform | Stack-based |
| JVM bytecode | Java, Kotlin, Scala, Clojure | Stack-based |
| .NET CIL | C#, F#, VB.NET | Stack-based |
| GCC GIMPLE | GCC (C, C++, Fortran, Ada) | Three-address |
The most important style in that table is SSA - Static Single Assignment. In SSA, every variable is assigned exactly once. If a value needs to change, the compiler creates a new version:
x_1 = 5
x_2 = x_1 + 3
x_3 = x_2 * 2
Odd constraint, huge payoff. When every variable has exactly one definition, the compiler can trace exactly where every value comes from and where it goes. Optimization becomes dramatically simpler because there’s no ambiguity. LLVM IR, Rust’s MIR, Go’s compiler internals, V8’s Turbofan, and HotSpot all use SSA. It is arguably the single most impactful idea in modern compiler design.
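For straight-line code, the renaming is mechanical: keep a version counter per variable and rewrite every use to the latest version. A toy sketch (hypothetical - real SSA construction also inserts phi nodes where branches rejoin, which this deliberately ignores):

```python
def to_ssa(instructions):
    version, out = {}, []
    for target, expr in instructions:
        # naive textual rewrite of uses - fine for single-letter names in a sketch
        for var, ver in version.items():
            expr = expr.replace(var, f"{var}_{ver}")
        version[target] = version.get(target, 0) + 1
        out.append(f"{target}_{version[target]} = {expr}")
    return out

program = [("x", "5"), ("x", "x + 3"), ("x", "x * 2")]
for line in to_ssa(program):
    print(line)
# x_1 = 5
# x_2 = x_1 + 3
# x_3 = x_2 * 2
```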
Three caveats about multi-target compilation
Can an IR target any language? In theory, yes. In practice, three things bite.
Semantic gap. Every target has its own execution model. Haskell’s laziness doesn’t translate cleanly to Python. Rust’s ownership has no equivalent in JavaScript. The IR must capture source intent, and each backend must approximate that intent in the target’s idioms.
Standard library mapping. Every backend needs to translate primitives to target idioms. Getting the current time becomes time.Now() in Go, new Date() in TypeScript, DateTime.now() in Dart, and NOW() in SQL. Straightforward work, but tedious.
Runtime differences. Go has goroutines. JavaScript has a single event loop. WASM has manual memory management. SQL has no loops. Pure computation - arithmetic, validation rules, business logic - translates cleanly across server, client, and database targets. Concurrency, I/O, and interaction patterns do not.
In practice, an IR supports 2-6 primary targets comfortably. Haxe compiles a single typed AST to eight targets, generating both server-side C++ and client-side JavaScript from the same source. Kotlin Multiplatform does the same across JVM, browser, and native. Beyond that range, the semantic gaps compound.
How the IR Becomes Real Code
The architectural argument for an IR is clear. But for the 2-6 targets within reach, what does the IR-to-code translation actually look like?
The IR is target-independent. It describes what your program does without committing to how any specific language expresses it. A backend’s job is to walk the IR and emit equivalent code in a target language - a translator between a universal internal representation and a specific external one.
Given the same IR instruction, different backends produce different output:
IR:            t1 = load price
               t2 = load quantity
               t3 = mul t1, t2

Go backend:    t3 := price * quantity
Rust backend:  let t3 = price * quantity;
JavaScript:    const t3 = price * quantity;
WASM:          local.get $price
               local.get $quantity
               f64.mul
               local.set $t3
SQL:           price * quantity
Each backend knows three things:
The target’s syntax. Every language has its own way to declare variables, call functions, and control flow. The backend has a template for each IR construct mapped to the target’s syntax.
The target’s type system. The IR might have Dollars as a refined type. The Go backend emits type Dollars float64. TypeScript emits a branded type. Rust emits a newtype wrapper. Same semantic concept, different encoding per target.
The target’s standard library. Getting the current time in IR might be @time.now(). Go emits time.Now(). JavaScript emits new Date(). Dart emits DateTime.now(). SQL emits NOW(). The backend maintains a mapping from IR primitives to target idioms.
Most backends are a few thousand lines of code. They implement a visitor pattern: walk the IR tree, match each node type, emit the corresponding target syntax. The complexity isn’t in the walking - it’s in handling the impedance mismatches between IR and target. If the IR has pattern matching and the target is Go, the backend has to lower pattern matching into if/else chains. That translation is the backend’s job.
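The templates-plus-walk shape can be sketched in a few lines. In this Python toy (hypothetical IR and templates, not any real backend), each backend is just a table mapping IR instruction kinds to the target's syntax:

```python
# A tiny IR: (kind, destination, operand...) tuples.
IR = [("load", "t1", "price"),
      ("load", "t2", "quantity"),
      ("mul",  "t3", "t1", "t2")]

# One template per IR construct, per target.
BACKENDS = {
    "go":         {"load": "{0} := {1}",       "mul": "{0} := {1} * {2}"},
    "javascript": {"load": "const {0} = {1};", "mul": "const {0} = {1} * {2};"},
}

def emit(ir, target):
    templates = BACKENDS[target]
    return [templates[inst[0]].format(*inst[1:]) for inst in ir]

print(emit(IR, "go"))
print(emit(IR, "javascript"))
```

A real backend would also clean up after itself: copy propagation folds the loads into their uses, which is how the Go output earlier reads t3 := price * quantity rather than three separate lines.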
Not everything translates to every target
When the target is “the browser,” you’re dealing with two possible targets: WASM and JavaScript. They have different capabilities, and not every part of your IR can go to either.
WASM is restrictive
WASM speaks a narrow language:
- Typed integer and float operations (i32, i64, f32, f64)
- Control flow - loops, branches, function calls
- A linear memory buffer for storing bytes
- Typed function signatures
What WASM does NOT natively support:
- Strings (they live in linear memory as raw bytes)
- Objects, maps, closures (simulated via memory layouts)
- DOM, HTTP, events, user input
- Anything asynchronous
JavaScript is permissive
JavaScript supports everything WASM does, plus:
- First-class objects, arrays, closures
- DOM manipulation
- Event handlers - click, input, scroll
- HTTP calls, WebSocket, storage
- Async/await, promises
- Anything the browser exposes
The natural split
Given what each target can do, your IR divides along a clean boundary.
Goes to WASM (pure computation):
- Math calculations: tax rates, interest, totals
- Validation predicates: amount > 0, rate <= 100
- State machines: order lifecycle transitions
- Data transformations: sorting, filtering, aggregations
- Pure business rules: eligibility checks, risk scoring
Goes to JavaScript (everything involving the browser or I/O):
- UI components: rendering HTML, updating the DOM
- Event handlers: click, input, form submission
- HTTP calls: fetching data from the server’s API
- Browser storage: cookies, LocalStorage
- Async orchestration: coordinating calls between WASM and the UI
A concrete example
Say your source language defines an order processing module:
entity Order {
  amount: Dollars where n >= 100
  items: List<Item>
  status: Pending | Confirmed | Shipped
}
function calculateTotal(items: List<Item>) -> Dollars
function validateOrder(order: Order) -> Result<Order, Error>
function renderOrderList(orders: List<Order>) -> UIComponent
function handleSubmit(order: Order) -> Promise<Result>
After compilation:
- calculateTotal and validateOrder compile to WASM (pure math and validation)
- renderOrderList and handleSubmit compile to JavaScript (DOM, async, HTTP)
- The Order and Item types are defined in both, kept consistent through the shared IR
- The server binary gets all of it compiled to Go
The JavaScript UI calls the WASM module for validation before hitting the network:
// User submits form
const order = readFormData();
// Fast validation in WASM, no network round-trip
const result = wasm.exports.validateOrder(order);
if (result.ok) {
  // Call the server's API, which runs the same validation
  await fetch('/api/orders', { method: 'POST', body: JSON.stringify(order) });
} else {
  showError(result.error);
}
The same validation rule runs in three places: Go on the server, WASM in the browser, and indirectly in the UI’s form handlers. One truth, three enforcements.
From Compiled Output to Running Program
The compiler has done its job. You have a binary, a JavaScript file, or a WASM module on disk. How does it actually run?
Three environments cover almost everything.
Native binary on an OS
A Go or Rust binary is machine code wrapped in an executable format - ELF on Linux, Mach-O on macOS, PE on Windows. When you run it, the OS loader reads the binary, maps it into memory, sets up the process (stack, heap, environment variables), and jumps to the entry point. The CPU executes the machine instructions directly.
No interpretation, no translation layer. The compiler already did the work.
JavaScript in a browser engine
A JavaScript bundle runs inside the browser’s JS engine - V8 in Chrome, SpiderMonkey in Firefox, JavaScriptCore in Safari. The engine is actually a compiler itself:
- Parser reads your JS into an AST (the same seven-stage pattern, compressed)
- An interpreter starts executing bytecode immediately
- A profiler watches for hot paths - functions called repeatedly
- A JIT compiler takes those hot paths and compiles them to machine code on the fly
- Subsequent calls skip the interpreter and run native
JavaScript execution is layered compilation. Your source becomes machine code eventually - just at runtime instead of ahead of time.
WASM in a browser engine
WebAssembly is a binary format pre-compiled from Rust, C++, Go, or similar. It skips the interpretation step:
- The browser’s WASM engine loads and validates the module
- Compiles the bytecode straight to machine code
- Executes inside the browser’s sandbox at near-native speed
WASM typically runs 2-10x faster than equivalent JavaScript for compute-heavy work. Same engine as JavaScript (V8 runs both), different execution path.
JavaScript and WASM coexist, they don’t compete
A common misconception: WASM will replace JavaScript. It won’t.
Compilation decides where each piece of your code goes. Runtime is about how those pieces cooperate once loaded. The typical pattern:
const response = await fetch('/math-engine.wasm');
const { instance } = await WebAssembly.instantiateStreaming(response);
// JS calls WASM for heavy computation
const riskScore = instance.exports.calculateRisk(transaction);
// JS updates the UI with the result
document.getElementById('score').textContent = riskScore;
JavaScript is the orchestrator. WASM is the specialist. They run in the same engine, share a block of linear memory, and call each other through a narrow typed interface. When WASM needs browser access - DOM, fetch, storage - it calls into JavaScript. When JavaScript hits expensive computation, it calls into WASM.
The complete journey
From source code to a running app in someone’s browser:
- You write source code
- The compiler runs its seven stages
- Three backends produce three outputs: server binary, WASM module, JavaScript bundle
- The server binary runs on AWS, waiting for requests
- A user opens your app in their browser
- Browser fetches the HTML shell from your server
- HTML references the JavaScript bundle and WASM module - browser fetches those too
- V8 parses and runs the JavaScript, then loads the WASM module
- The running JavaScript makes HTTP calls to your server binary’s API
- Server responds with JSON
- JavaScript uses the WASM module to validate and transform data
- JavaScript updates the DOM, the user sees the result
From characters on a screen to a running app spanning server and browser, every step has been traced. The same compiler, the same type system, the same validation rules - running in three completely different execution environments.
The Type Safety Ladder
The compiler pipeline shows you what happens to your code. The type system determines what gets rejected before it runs.
Type systems exist on a spectrum, and each level catches strictly more bug classes than the one below.
| Level | Name | What It Catches | Example |
|---|---|---|---|
| 0 | Untyped | Nothing at authoring time | Python dict, schema-less YAML |
| 1 | Structural | Shape, required fields, enum values | JSON Schema, OpenAPI |
| 2 | Nominal | Mixing same-shaped but differently-named types | Go’s type CustomerID int64 vs type OrderID int64 |
| 3 | Algebraic | Forgotten cases after adding variants | Rust enum with exhaustive match |
| 4 | Dimensional | Unit and magnitude confusion | F# units of measure |
| 5 | Refinement | Divide-by-zero, out-of-bounds, negative quantities | Refinement predicates in Liquid Haskell |
| 6 | Dependent | Nearly everything (at the cost of writing proofs) | Idris, Agda, Lean |
A single variable - an account balance - shows what each level buys you.
Level 0: balance = 500. A Python variable. Could be an int, a float, a string. You won’t know until runtime.
Level 1: balance: number. JSON Schema says it must be a number. You can still accidentally add a temperature reading to it.
Level 2: type Dollars float64. Go now distinguishes Dollars from Celsius even though both are float64 underneath. var balance Dollars = 500.0 can’t be accidentally passed where a Celsius is expected.
Level 3: Rust’s enum Currency with variants USD(f64), EUR(f64), INR(f64). Rust forces you to handle every variant in a match. Add a new currency, and every match that doesn’t handle it refuses to compile.
Level 4: let balance: float<dollar> = 500.0<dollar>. F#'s units of measure prevent balance + temperature at compile time. The units are erased at runtime - zero performance cost.
Level 5: A refinement type like PositiveBalance constrains Dollars to values where n >= 0. Negative balances become unrepresentable. Not “caught by tests.” Impossible to construct.
Level 6: Dependent types let you encode almost any property as a type. A vector whose type includes its length. A sorted list whose type proves it’s sorted. Powerful, but requires writing mathematical proofs alongside your code. Research territory for now.
Most production codebases sit at Level 1 or 2. Moving to Level 3 gives you exhaustiveness checking. Moving to Level 4 prevents unit confusion. Each step up catches a real class of bugs that the level below misses entirely.
The same bug, five layers deep
A divide-by-zero bug, caught at five different points:
| Layer | Coverage | When |
|---|---|---|
| Production monitoring | Only inputs that hit prod | After the crash |
| Runtime assertion | Only inputs the code encounters | Runtime panic |
| Unit test | Only inputs you wrote tests for | Pre-deploy |
| Property test | Hundreds of random inputs | Pre-deploy |
| Refinement type | All possible inputs, ever | Compile time |
Each layer is cheaper and catches more than the one above. Types eliminate categorical bugs - wrong shape, wrong unit, missing case, out-of-bounds. Tests eliminate semantic bugs - wrong formula, wrong business logic. Both are necessary. Neither replaces the other.
Parse, Don’t Validate
This is the practical pattern that ties the type ladder to real code you can write on Monday.
Types can’t read runtime values. If a user types 50 into a form, the compiler has no way to know what that number is at compile time. So how do refinement types help with data that arrives at runtime?
The answer: force the check to happen at a boundary, exactly once, and then carry the proof as a type through the rest of the code.
Your application splits into two zones:
Untrusted input (HTTP request, database row, config file)
                 |
                 v
            [BOUNDARY]   <- runtime check here, once
             /       \
            v         v
     Refined type    Error
            |           |
            v           v
     Trusted core    Reject early
     (no defensive
      checks needed)
A concrete example. Say you need to ensure a payment amount is always positive:
type PaymentAmount struct{ v float64 }

// The ONLY way to produce a PaymentAmount
func NewPaymentAmount(raw float64) (PaymentAmount, error) {
    if raw <= 0 {
        return PaymentAmount{}, errors.New("payment must be positive")
    }
    return PaymentAmount{v: raw}, nil
}
Once you have a PaymentAmount, every function downstream can trust it. There is no code path that produces an unchecked PaymentAmount. No defensive if amount <= 0 checks sprinkled across twenty files. The constructor is the single enforcer.
The distinction between validating and parsing is subtle but it matters:
- Validate: isValid(x) returns true or false, keeps x as the original weak type. The information from the check is lost immediately.
- Parse: parse(x) returns a strong type or an error. The information is preserved in the type system.
This splits your code into a thin boundary layer (where raw data becomes refined data) and a trusted core (where everything is already guaranteed). Defensive checks that were scattered across the codebase collapse into a single point at the edge.
You can apply this pattern today, in any language. Even dynamically typed ones. Validate at the boundary, wrap in a named type or object, trust it downstream. Compiler support makes it airtight, but the pattern works without it too.
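In Python, for instance, the pattern is a constructor that refuses to produce an invalid value - a sketch mirroring the Go example above, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentAmount:
    value: float

    def __post_init__(self):
        # the boundary check runs exactly once, at construction
        if self.value <= 0:
            raise ValueError("payment must be positive")

def charge(amount: PaymentAmount) -> str:
    # no defensive check needed - the type carries the proof
    return f"charging {amount.value:.2f}"

print(charge(PaymentAmount(49.99)))   # ok
# PaymentAmount(-5) raises ValueError at the boundary, not deep in the core
```

Python won't stop someone from constructing the dataclass with bad intent the way a private Go field does, but in practice the convention holds: one constructor, one check, trusted everywhere downstream.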
Dimensional Types
Parse-don’t-validate protects individual values. Dimensional types protect how values combine.
A dimensional type tags a number with its unit. The compiler refuses to mix incompatible dimensions even when both are plain floats underneath.
The most expensive unit confusion bug in history: NASA’s Mars Climate Orbiter. One team sent thrust data in pound-force seconds. The receiving team expected newton-seconds. The spacecraft entered Mars’ atmosphere at the wrong angle and burned up. A $327.6 million loss - from a bug that a dimensional type system would have caught at compile time.
Every codebase with numeric quantities carries the same class of risk:
- Financial applications mix currencies, percentages, and absolute amounts
- E-commerce mixes prices, quantities, weights, and tax rates
- Healthcare mixes dosages in milligrams, durations in days, and costs in dollars
- Any application with time mixes seconds, minutes, hours, and days
F# has the cleanest implementation of dimensional types in a production language:
[<Measure>] type dollar
[<Measure>] type euro
[<Measure>] type kg
let price : float<dollar> = 29.99<dollar>
let weight : float<kg> = 2.5<kg>
let rate : float<dollar/kg> = 12.0<dollar/kg>
// Compiles - units cancel correctly
let cost : float<dollar> = rate * weight // (dollar/kg) * kg = dollar
// Does NOT compile
let nonsense = price + weight // error: dollar and kg don't match
The dimensions are erased at runtime. Zero performance cost. They exist purely for the compiler to verify your math.
Bugs dimensional types would catch:
- Applying a monthly interest rate over a yearly period without converting. 12x error.
- Adding a price to a tax rate. Dollars plus percent is meaningless.
- Using gross weight where net weight was expected. Silent overcharge.
- Mixing currencies without explicit conversion.
Most mainstream languages don’t have built-in dimensional types. F# is the notable exception. But you can approximate the idea using nominal types - Go’s type Dollars float64, TypeScript’s branded types, Rust’s newtype pattern. You lose the automatic unit algebra (dollar/kg * kg = dollar), but you gain the basic safety of keeping Dollars and Kilograms as separate types that can’t be accidentally mixed.
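A Python sketch of that nominal approximation (hypothetical types - "can't mix" safety only, no automatic unit algebra):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dollars:
    value: float

    def __add__(self, other):
        # refuse to combine with anything that isn't also Dollars
        if not isinstance(other, Dollars):
            raise TypeError("can only add Dollars to Dollars")
        return Dollars(self.value + other.value)

@dataclass(frozen=True)
class Kilograms:
    value: float

price = Dollars(29.99)
weight = Kilograms(2.5)
print(price + Dollars(5.00))   # fine: Dollars + Dollars
# price + weight raises TypeError - what F# would reject at compile time
```

The check happens at runtime rather than compile time, which is the trade-off of approximating dimensional types without compiler support; a static checker like mypy pushes some of it back to authoring time.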
Think about how many programs treat dollars, kilograms, percentages, and days as the same float64. Each one is a Mars Climate Orbiter waiting to happen at a smaller scale.
What Types Prove and What They Can’t
Types and tests catch different classes of bugs. Using one without the other leaves entire categories unchecked.
What types can prove:
- Input and output shapes are correct
- Units are coherent
- No divide-by-zero, no null dereference, no out-of-bounds
- Every enum case is handled
- Refinement constraints hold (amount > 0, rate <= 100)
What types cannot prove:
- That your formula is the right formula
- That 18% is the correct tax rate for electronics
- That the output matches what a domain expert expects
- That external services will behave as documented
Two implementations of a simple interest calculation:
# Implementation A (correct)
def interest(principal, rate, days):
    return principal * (rate / 365) * days

# Implementation B (wrong - off by a factor of 2)
def interest(principal, rate, days):
    return principal * (rate / 182.5) * days
Both type-check. Both accept the same inputs and produce the same output type. Only test vectors with known-correct answers can tell them apart:
input: principal=10000, rate=0.05, days=365
expected: 500.00
A returns: 500.00 (correct)
B returns: 1000.00 (wrong)
In a type-safe codebase, tests stop checking for nulls, empty strings, negative values, and shape mismatches. The type system handles all of that. Tests refocus on semantic correctness: is the math right? Does the business logic match what domain experts expect?
That is the real payoff of investing in type safety. Not fewer tests - better tests. The suite shrinks and refocuses on questions that actually matter.
The full pipeline - characters to tokens to trees to types to IR to binary - is one of the most elegant assembly lines in computer science. Each stage does one job. Each transformation feeds the next. The whole thing composes.
And the practical takeaway fits in a paragraph. Pick a language with a strong type system. Use nominal types to distinguish things that look alike but aren’t. Parse at the boundary, trust the types downstream. Write tests for semantics, not for shape.
The type system is already in your language. Use it.
Co-written with AI. Credit the prose, blame the opinions.