Coding

7 lines of code, 3 minutes: Implement a programming language (2010)

A 7-line interpreter written in about 3 minutes can serve as the foundation for a custom programming language. Rather than the traditional pipeline of lexer, parser, and code generator, the approach reuses Scheme's built-in read procedure to parse s-expressions and interprets them directly, prioritizing clarity and ease of extension over performance. The result is a remarkably concise yet functional language implementation.

A 7-line interpreter in Scheme can implement a Turing-equivalent functional programming language based on the lambda calculus, demonstrating that language implementation can be both simple and educational. The interpreter, which takes about 3 minutes to write, uses the eval/apply design pattern from Structure and Interpretation of Computer Programs and operates on s-expressions parsed by Scheme’s built-in read function.

Overview

The core language is the untyped lambda calculus, which consists of only three constructs: variable references, anonymous functions (written as (λ v . e)), and function application (written as (f e)). Despite its minimalism, the lambda calculus is Turing-equivalent through Church encodings (for data types like booleans and numbers) and the Y combinator (for recursion). A non-terminating program known as Omega—((λ f . (f f)) (λ f . (f f)))—illustrates that the system can express infinite computation.
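Church encodings can be made concrete with host-language lambdas. The sketch below is a hypothetical Python illustration (not code from the article): booleans become two-argument selectors, and numerals become iterated function application.

```python
# Hypothetical sketch: Church encodings expressed with Python lambdas.
# A Church boolean is a function that selects one of its two arguments.
TRUE = lambda t: lambda f: t     # (λ t . (λ f . t))
FALSE = lambda t: lambda f: f    # (λ t . (λ f . f))

# IF applies a Church boolean to the two branches.
IF = lambda b: lambda then: lambda els: b(then)(els)

# A Church numeral n applies a function n times to an argument.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral into a native integer."""
    return n(lambda k: k + 1)(0)

print(IF(TRUE)("yes")("no"))      # -> yes
print(to_int(SUCC(SUCC(ZERO))))   # -> 2
```

Omega itself would be `(lambda f: f(f))(lambda f: f(f))` in this notation; evaluating it loops forever (in Python it overflows the stack), which is exactly the point of the example.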

What it does

The interpreter defines two central functions:

  1. eval: Takes an expression and an environment, returning a value.
  2. apply: Takes a function (as a closure) and an argument, then evaluates the function body in an extended environment.

Environments are represented as association lists mapping variables to values. Closures pair a lambda expression with its defining environment, enabling proper lexical scoping.

The full interpreter in R5RS Scheme is:

(define (eval e env)
  (cond
    ((symbol? e) (cadr (assq e env)))     ; variable: look it up in the environment
    ((eq? (car e) 'λ) (cons e env))       ; lambda: pair it with its environment to form a closure
    (else (apply (eval (car e) env)       ; application: evaluate function and argument,
                 (eval (cadr e) env)))))  ; then apply

(define (apply f x)
  (eval (cddr (car f))                          ; evaluate the closure's body
        (cons (list (cadr (car f)) x)           ; in its saved environment,
              (cdr f))))                        ; extended with the bound argument

(display (eval (read) '()))
(newline)

A cleaner version using Racket’s match construct improves readability while preserving the same logic.
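To make the eval/apply mechanics concrete outside Scheme, here is a hypothetical Python transliteration of the seven lines above (the names `eval_` and `apply_` are assumptions, not the article's code). Terms are tuples, closures are (lambda-term, environment) pairs, and environments are association lists, mirroring the original.

```python
# Hypothetical Python transliteration of the 7-line Scheme interpreter.
# Terms: 'x' (variable), ('λ', 'x', body) (lambda), or (f, arg) (application).

def eval_(e, env):
    if isinstance(e, str):                             # variable reference:
        return next(v for (k, v) in env if k == e)     # assq-style lookup
    if e[0] == 'λ':                                    # lambda term:
        return (e, env)                                # pair it with env -> closure
    f = eval_(e[0], env)                               # application: evaluate function
    x = eval_(e[1], env)                               # and argument,
    return apply_(f, x)                                # then apply

def apply_(f, x):
    (lam, env) = f                                     # unpack the closure
    (_, var, body) = lam
    return eval_(body, [(var, x)] + env)               # extend env with the argument

# Applying the identity function to (λ y . y) yields that term's closure:
identity = ('λ', 'x', 'x')
arg = ('λ', 'y', 'y')
result = eval_((identity, arg), [])
# result is the closure (('λ', 'y', 'y'), [])
```

As in the Scheme version, a closure remembers the environment in which its lambda was evaluated, which is what gives the language lexical scope.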

A bigger language

The same eval/apply architecture scales to a 100-line interpreter in Racket that supports:

  • Numeric and boolean literals
  • Primitive operations (+, -, <=, etc.)
  • Conditionals, sequencing, and variable mutation via set!
  • Local bindings with let and recursive bindings with letrec
  • Top-level definitions using define

This extended interpreter includes a test harness and transforms top-level define forms into a single letrec expression for uniform evaluation. It uses hash tables with mutable cells to model environments, allowing mutation and recursion.
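The mutable-cell trick that makes letrec-style recursion work can be sketched as follows. This is a hypothetical Python illustration of the idea, not the Racket interpreter's actual code; the `Cell` class is an assumption standing in for Racket's mutable boxes or hash-table slots.

```python
# Hypothetical sketch of the mutable-cell environment trick behind letrec:
# bind the name to an empty cell first, then fill the cell once the
# right-hand side has been evaluated, so the body can refer to itself.

class Cell:
    """A one-slot mutable box, standing in for a mutable environment entry."""
    def __init__(self):
        self.value = None

def make_recursive_fact():
    env = {}
    cell = Cell()
    env['fact'] = cell          # the name is visible before its value exists

    def fact(n):
        # the recursive call goes through the cell, not a direct reference
        return 1 if n == 0 else n * cell.value(n - 1)

    cell.value = fact           # back-patch the cell with the finished value
    return env['fact'].value

fact = make_recursive_fact()
print(fact(5))  # -> 120
```

The same indirection also supports set!: mutating a variable just overwrites its cell's contents, leaving the environment structure untouched.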

When to use it

This approach is ideal for learning language design, experimenting with semantics, or prototyping new language features. By separating syntax (via external parsing to s-expressions) from semantics, developers can explore alternative grammars without rewriting the core interpreter. The minimal foundation makes it suitable for educational use or embedding domain-specific languages.

Bottom line: Implementing a programming language doesn’t require complex tooling. With just a few lines of code, developers can build and extend functional languages grounded in formal computation theory.

Similar Articles


Coding 1 min

Visual Studio Code 1.120

Visual Studio Code’s 1.120 update slashes debugging friction with native Data Breakpoints, letting engineers pause execution when specific object properties change—not just memory addresses. The release also bakes in GitHub Copilot-powered inline code completions for Python, JavaScript, and TypeScript, cutting keystrokes by up to 40% in early benchmarks, while a revamped terminal shell integration finally bridges the gap between local and remote workflows.

Coding 2 min

Show HN: adamsreview – better multi-agent PR reviews for Claude Code

I built adamsreview, a Claude Code plugin that runs deeper, multi-stage PR reviews using parallel sub-agents, validation passes, persistent JSON state, and optional ensemble review via Codex CLI and PR bot comments. On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives. adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully. You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts for keeping it updated. The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings or items needing human review one by one.

Coding 2 min

Make America AI Ready: Strengths, Weaknesses, and Recommendations

America’s AI lead is slipping—not from lack of models, but from a brittle compute supply chain and a 40% shortfall in H100-class GPUs by 2027, per federal projections. While the CHIPS Act funnels $52B into domestic fabs, the report warns that TSMC’s Arizona plant won’t hit 3 nm until 2028, leaving cloud providers dependent on Taiwan for next-gen training runs. The fix: a national AI reserve of 500,000 GPUs and a federally chartered “compute passport” to prioritize critical workloads.

Coding 2 min

AI Productivity Fails

"Despite Promising Early Gains, AI-Driven Productivity Tools Stagnate at 12% Adoption Rate, Leaving Millions of Users Stranded in Manual Workflows, as Research Reveals Critical Bottlenecks in Integration and Data Quality."

Coding 1 min

You Need AI That Reduces Maintenance Costs

Maintenance costs for large-scale AI systems are skyrocketing, driven by the exponential growth of complex model sizes and the labor-intensive process of fine-tuning and debugging. A new wave of AI frameworks is emerging that leverages techniques like model distillation and knowledge graph pruning to reduce the computational overhead and human effort required to maintain these systems. By shrinking the "model footprint," these innovations promise to cut costs by up to 70% and unlock AI adoption in resource-constrained industries.

Coding 1 min

PS3 Emulator Devs Politely Ask That People Stop Flooding It with AI PRs

A surge of AI-generated pull requests overwhelms a PlayStation 3 emulator project, prompting developers to politely request that contributors verify the authenticity of their submissions, citing concerns over malicious code and the emulator's stability. The influx of automated contributions, often submitted in bulk, has strained the project's review process and raised questions about the role of AI in open-source development.