Sakla
Notes on data, code, and the outdoors

Lisp for Graph Algorithms: Academic Nostalgia or Practical Choice?

February 05, 2024 J. Stover

Cleaning out an old hard drive, I found my Lisp code for computing minimum spanning trees and k-nearest-neighbor graphs. Written around 2003 for a research project on graph-theoretic association measures, the code still runs on SBCL without modification. That alone says something about Lisp.

The Original Project

The research involved testing whether word usage in U.S. Supreme Court opinions showed temporal patterns. I needed to compute interpoint distances on concordance data and build proximity graphs. Lisp seemed natural for the recursive tree operations and symbolic manipulation involved.

Prim's algorithm for minimum spanning trees translates cleanly into Lisp. The priority queue is a sorted association list. The edge relaxation step is a recursive function. The code reads like the textbook pseudocode, which was the point. Clarity over performance.
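For readers who have not seen the style, here is a minimal sketch of what that looks like. This is illustrative code in the spirit described above, not the original 2003 implementation: the graph is an association list mapping each node to its weighted neighbors, and the priority queue is simply an alist kept sorted by edge weight.

```lisp
(defun add-edges (queue node graph visited)
  "Merge NODE's edges to unvisited neighbors into QUEUE,
keeping QUEUE sorted by edge weight."
  (let ((new (loop for (neighbor . weight) in (cdr (assoc node graph))
                   unless (member neighbor visited)
                     collect (list weight node neighbor))))
    (sort (append queue new) #'< :key #'first)))

(defun prim (graph start)
  "Return a minimum spanning tree of GRAPH, an alist of the form
 (NODE . ((NEIGHBOR . WEIGHT) ...)), as a list of (WEIGHT FROM TO)
edges, grown outward from START."
  (loop with visited = (list start)
        with queue = (add-edges '() start graph visited)
        with tree = '()
        while queue
        do (destructuring-bind (weight from to) (pop queue)
             (unless (member to visited)
               (push to visited)
               (push (list weight from to) tree)
               (setf queue (add-edges queue to graph visited))))
        finally (return (nreverse tree))))

;; A three-node example: the MST keeps edges A-B (1) and B-C (2),
;; dropping the heavier A-C (4).
(defparameter *example-graph*
  '((a . ((b . 1) (c . 4)))
    (b . ((a . 1) (c . 2)))
    (c . ((a . 4) (b . 2)))))

(prim *example-graph* 'a)  ; => ((1 A B) (2 B C))
```

Re-sorting the whole queue on every insertion is O(n log n) where a heap would be O(log n), which is exactly the clarity-over-performance trade-off the paragraph above describes.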

Would I Use Lisp Today?

For graph algorithms specifically, probably not. NetworkX in Python and igraph in R are comprehensive and well-tested. Writing your own Prim's implementation makes sense pedagogically but not practically. The libraries handle edge cases and optimizations that a research prototype ignores.

For symbolic computation and language processing, Lisp still has appeal. The macro system allows domain-specific abstractions that other languages cannot match. If you need to manipulate abstract syntax trees or implement a custom rule engine, Lisp is worth considering.
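As a toy illustration of what "domain-specific abstraction" means here (the names are hypothetical, not from the original project): a macro can give you an iteration construct over a graph's edges that hides the adjacency-list representation entirely.

```lisp
(defmacro with-edges ((from to weight) graph &body body)
  "Evaluate BODY once per edge in GRAPH, an alist of the form
 (NODE . ((NEIGHBOR . WEIGHT) ...)), with FROM, TO, and WEIGHT
bound to the edge's endpoints and weight."
  (let ((entry (gensym "ENTRY")) (pair (gensym "PAIR")))
    `(dolist (,entry ,graph)
       (let ((,from (car ,entry)))
         (dolist (,pair (cdr ,entry))
           (let ((,to (car ,pair))
                 (,weight (cdr ,pair)))
             ,@body))))))

;; Collect every edge of a one-edge graph as (FROM TO WEIGHT) triples.
(defun edge-list (graph)
  (let ((edges '()))
    (with-edges (u v w) graph
      (push (list u v w) edges))
    (nreverse edges)))

(edge-list '((a . ((b . 3)))))  ; => ((A B 3))
```

Callers of `with-edges` never see the alist structure, so the representation could later be swapped for a hash table by changing one macro rather than every call site. That is the kind of abstraction that is awkward to build without macros.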

What Survived

The interesting part of revisiting old code is seeing what held up. The data structures were fine. The algorithm implementations were correct. The file I/O was terrible. Lisp's standard library for file handling was always its weakest point, and my workarounds were ugly.

The research itself produced a modest result: word frequency patterns in Supreme Court opinions do shift over time, but the association with specific legal outcomes was weak. The Friedman-Rafsky statistic showed significance in some concordance pairs but not consistently. A minor contribution to a minor subfield.

Preserving Academic Code

I uploaded the code to a public repository with a GPL license, as it was originally released. Whether anyone will use it is doubtful. But the principle of making research code available matters more than the probability of reuse. Reproducibility requires access to methods, not just results.

If you find yourself writing academic code in an unusual language, document it thoroughly and release it. The next person who needs a Lisp implementation of edge-disjoint spanning trees might be grateful. Or they might just use NetworkX. Either way, the code should exist.

filed under: lisp, algorithms, graph-theory, academic, programming
© 2026 Sakla
Statistician, open-source contributor, and occasional hiker. Writing about data, animals, and places worth visiting.