NEAT-JAX

An attempt to combine NEAT (Neuroevolution of Augmenting Topologies) with JAX's hardware acceleration. The fundamental tension: NEAT evolves dynamic topology; JAX requires static tensor shapes for JIT compilation. This project is an honest exploration of whether that tension can be resolved.

The Collision

NEAT's power is topology evolution — networks grow by adding nodes and connections during evolution. JAX's power is vectorized, JIT-compiled tensor operations over fixed shapes. These are architecturally incompatible.
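The mismatch shows up even in plain NumPy (standing in here for jax.numpy): genomes with different node counts have different tensor shapes, so they cannot be batched into one array for vectorized evaluation. This is a sketch of the shape problem, not the project's code.

```python
import numpy as np

# Two genomes with different node counts: their weight matrices
# have different shapes, so they cannot be stacked into one batch
# tensor for vectorized evaluation.
genome_a = np.zeros((3, 3))  # 3-node network
genome_b = np.zeros((5, 5))  # 5-node network after a node-add mutation

try:
    batch = np.stack([genome_a, genome_b])  # ragged shapes -> ValueError
    stacked = True
except ValueError:
    stacked = False
# In JAX the same mismatch is worse: every distinct shape triggers a
# fresh trace and JIT compile, and vmap refuses ragged inputs outright.
```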

Three attempted workarounds:

1. Max-topology padding — fix an upper bound on network size up front and pad every genome to that size with connection masks. The current implementation uses this approach.

2. Python genomes + JAX evaluation — keep genome mutation in Python lists and convert to arrays only for the forward pass. This works, but crossing the Python/JAX boundary on every evaluation is expensive.

3. Fixed-topology masks — give up node addition entirely and evolve only edge patterns. This loses the core NEAT insight.
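A minimal NumPy sketch of the padding idea in option 1 (the names, the bound MAX_NODES, and the single-step forward function are all hypothetical; a real implementation would use jax.numpy and jax.jit):

```python
import numpy as np

MAX_NODES = 8  # hypothetical upper bound, fixed before evolution starts

def forward(x, weights, mask):
    """One masked matrix-vector step: disabled connections are zeroed
    out, so every genome shares the same (MAX_NODES, MAX_NODES) shape
    regardless of how many nodes it actually uses."""
    return np.tanh((weights * mask) @ x)

# A genome that uses only a few of the 8 node slots.
weights = np.zeros((MAX_NODES, MAX_NODES))
mask = np.zeros((MAX_NODES, MAX_NODES), dtype=bool)
weights[0, 1] = 0.5
mask[0, 1] = True  # enable one connection

# "Adding a node" later just flips mask entries; the tensor shape never
# changes, so a jit-compiled forward would never need to recompile.
x = np.ones(MAX_NODES)
y = forward(x, weights, mask)
```

The cost of this design is paying for MAX_NODES worth of compute on every genome, however small its live topology is.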

Known Issue

There is a known bug in the crossover step that prevents the full topology evolution loop from running. The basic forward pass works; the deeper problem remains unsolved.

The Open Question

Does the genome *representation* itself need to change? A real solution probably isn't a patch — it's a different representation: learnable topology masks, continuous graph relaxations, or something else entirely.
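One way to picture the "continuous graph relaxation" option (a sketch under assumed names, not the project's code): replace the hard boolean mask with a real-valued logit per edge, so topology itself becomes a fixed-shape tensor that can be adjusted smoothly.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each potential edge gets a real-valued logit. The effective adjacency
# is sigmoid(logits): near 0 the edge is "off", near 1 it is "on", and
# everything in between is a soft topology -- a fixed-shape tensor that
# evolution (or even gradients) can adjust without ever changing shape.
MAX_NODES = 4
logits = np.full((MAX_NODES, MAX_NODES), -10.0)  # all edges ~off
logits[0, 1] = 10.0                              # one edge ~on
soft_mask = sigmoid(logits)
```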

Why It Was Built

Built after finishing Karpathy's micrograd, to understand the collision between evolutionary algorithms (inherently dynamic and population-based) and hardware-accelerated ML (which requires fixed, vectorizable computation). The honest conclusion: the collision is real and unresolved.

Sources

- GitHub Repo: NEAT-JAX
- Website: blog / neat_jax_blog