I've attempted AI coding for 2-3 years now, and my mind is increasingly blown.
Two or three LLMs later I think we'll start swapping out coding (in formal language) with prompting (in natural language)
even more than we already do.
I have some insights about how this might look.
The model doesn't think in English. Run the same prompt in Norwegian and English and you get similar behavior. The internal
state isn't words but a shared feature space: text maps into it and back out again. If that space drives behavior, reproducibility
depends on the inputs that shape it. Prompts and context are part of the build recipe.
In open codebases, compiler flags and build scripts are public, versioned, and replayable. Treat prompts the same way so anyone
can rebuild the state that produced a change - GCC-style self-hosting for model-assisted code.
Formal languages map cleanly. They are precise, testable, and often mutually translatable, so they fit the internal representation.
Hence model -> code -> compiler -> machine code starts to look like a bootstrap loop: models generate code that generates
more code. Like RepRap and compilers, it self-replicates and can grow geometrically (or exponentially, or just
"explode"). As usual with these tech developments, we must join the unstoppable wave, prepare for the explosion, and try
to steer it towards ethical and humane outcomes.
To keep that loop trustworthy, record prompt, context, model version, and decoding settings alongside the emitted code, just
like pinning toolchains and build flags. If the code is open, the prompt recipe should be open too, for the same reasons we
publish Makefiles and CI specs.
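As a sketch, such a recipe bundle could be a small JSON file pinned next to the emitted code. All field names below are made up for illustration, not an existing standard:

```python
import json

# A hypothetical "prompt recipe" stored alongside the emitted code,
# analogous to pinning toolchain versions and build flags.
# Every field name here is illustrative, not a real standard.
recipe = {
    "model": "example-model-2026-01",  # exact model version used
    "decoding": {"temperature": 0.2, "top_p": 0.9, "seed": 42},
    "prompt": "Refactor foo() to avoid the O(n^2) loop.",
    "context_files": ["src/foo.py", "tests/test_foo.py"],
}

# Serialize deterministically so the recipe diffs cleanly in git.
text = json.dumps(recipe, indent=2, sort_keys=True)
print(text)
```

Sorting keys and fixing the indentation keeps the file stable across tools, so the recipe history stays readable in version control.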
What the loop actually looks like. Day to day: human or model -> prompt -> model -> code or text -> compiler/interpreter ->
machine code -> processor -> result. I judge the result, adjust prompt or context, and try again.
Models don't retain a whole program across calls, so source code remains the durable representation we share. To make that
code self-reproducible, store the prompt/context bundle next to each commit, along with the exact model and decoding settings.
Do it via a git hook or a hook in your LLM framework.
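Concretely, such a git hook might look like the sketch below. The `.llm/last_session.json` path and the `prompt-recipes/` directory are assumptions about your LLM tooling, not an existing interface:

```python
#!/usr/bin/env python3
"""Sketch of a git pre-commit hook (save as .git/hooks/pre-commit).

Copies a prompt/context bundle into the repo so it lands in the same
commit as the code it produced. Assumes your LLM tool writes the
latest session to .llm/last_session.json; both paths are illustrative.
"""
import shutil
import subprocess
from pathlib import Path

SESSION = Path(".llm/last_session.json")
BUNDLE_DIR = Path("prompt-recipes")


def main() -> None:
    if not SESSION.exists():
        return  # nothing to record for this commit
    BUNDLE_DIR.mkdir(exist_ok=True)
    dest = BUNDLE_DIR / SESSION.name
    shutil.copy(SESSION, dest)
    # Stage the bundle so it rides along in the same commit.
    subprocess.run(["git", "add", str(dest)], check=True)


if __name__ == "__main__":
    main()
```

The same logic fits equally well as a post-generation hook inside an LLM framework; the point is only that the bundle and the code share one commit.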
This doesn't replace code; it records how this version came to be and makes the repo rebuildable end-to-end, not just at the
compiler boundary. We should bake an AI into every repo via the knowledge and intent we keep with it.
- tobben
Torbjørn Ludvigsen
Simulation and HP5 Work
24-9-2025
So I made a Slideprinter simulation, check it out at hangprinter.org.
It's really cool because it simulates many aspects of a real machine:
I wrote a whole library to simulate the lines. Check out hp-sim5. This has been a grueling multi-month effort. Good cable/line simulation is not widely available anywhere else.
The simulation runs on a state-of-the-art XPBD time loop that stays close to physical reality.
The move commands come from real Klipper, with a real Hangprinter config.
Klipper can stream commands to the simulation too (only recommended if a real MCU is attached, or you run klipper_mcu on a
realtime Linux kernel).
The simulated steppers can lose steps; their torque varies with speed and load.
Spools can be clicked, grabbed, and pulled with your mouse or touch screen.
Simulation speed can be cranked up 100s of times beyond realtime for faster experiments.
The logo print simulation tracks millions of individual step pulses and micrograms of deposited material.
It's implemented both in JavaScript and in Python.
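The XPBD substep loop mentioned above can be sketched roughly like this. It's a toy: one particle hanging from a single distance constraint, not the actual hp-sim5 solver, and all parameter values are made up:

```python
import math

# Minimal XPBD-style substep loop: one unit-mass particle on a
# single distance constraint (a rigid-ish line to an anchor).
# A sketch of the scheme only, not the hp-sim5 solver.

def xpbd_step(pos, vel, anchor, rest_len, dt, compliance=1e-8, substeps=20):
    h = dt / substeps
    g = (0.0, -9.81)
    for _ in range(substeps):
        # Predict position from current velocity plus gravity.
        vel = (vel[0] + g[0] * h, vel[1] + g[1] * h)
        prev = pos
        pos = (pos[0] + vel[0] * h, pos[1] + vel[1] * h)
        # Solve the constraint C = |pos - anchor| - rest_len.
        dx, dy = pos[0] - anchor[0], pos[1] - anchor[1]
        dist = math.hypot(dx, dy)
        if dist > 1e-12:
            c = dist - rest_len
            alpha = compliance / (h * h)          # XPBD compliance term
            dlam = -c / (1.0 + alpha)             # unit mass, |grad C| = 1
            pos = (pos[0] + dlam * dx / dist, pos[1] + dlam * dy / dist)
        # Recover velocity from positions (the XPBD/Verlet trick).
        vel = ((pos[0] - prev[0]) / h, (pos[1] - prev[1]) / h)
    return pos, vel

# Let the particle swing for one simulated second; the line length
# should stay very close to rest_len throughout.
p, v = (1.0, 0.0), (0.0, 0.0)
for _ in range(100):
    p, v = xpbd_step(p, v, anchor=(0.0, 0.0), rest_len=1.0, dt=0.01)
print(math.hypot(p[0], p[1]))
```

The real solver adds slack, stretch, wrap, and friction on top of this same substep skeleton.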
The whole machine, its simulation scene, and its config are described in a file format called USD, see slideprinter.usda. This makes us data-driven, and also compatible with various 3D modelling, reinforcement learning, and simulation software such as Blender and Isaac Sim.
The cable solver models slack, stretch, wrap, and friction straight from that USD geometry, keeping the kinematics honest.
The Python build can hand off the same solver to NVIDIA Warp, so long jobs can use the GPU.
I think simulations are the most important part of the Hangprinter Project.
I'd like to automate the hardware design and fw configuration.
We have just now become able to quantify comparisons between Hangprinter hardware+config setups: measure how close the
tool head stays to an optimal route at all times, compared to the exact positions in the gcode file.
Write a simple cost function around that measure and optimize over CAD parameters and config values.
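The simple cost function mentioned above could look something like this. It's a sketch: the position lists stand in for hp-sim5 output and gcode targets, and the toy data is made up:

```python
import math

# Sketch of the proposed cost: root-mean-square distance between
# where the simulated tool head actually was and where the gcode
# said it should be, sampled at matching times. `simulated` and
# `target` are illustrative stand-ins for hp-sim5 output and
# positions parsed from the gcode file.

def route_cost(simulated, target):
    """RMS deviation between matched (x, y, z) samples."""
    assert len(simulated) == len(target)
    sq = [
        (sx - tx) ** 2 + (sy - ty) ** 2 + (sz - tz) ** 2
        for (sx, sy, sz), (tx, ty, tz) in zip(simulated, target)
    ]
    return math.sqrt(sum(sq) / len(sq))

# An optimizer would minimize route_cost over CAD parameters and
# config values, re-running the simulation for each candidate setup.
target = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
simulated = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
print(route_cost(simulated, target))  # ~0.1 mm for this toy data
```

Since every candidate evaluation is just a simulation run, the speed-up beyond realtime is what makes this kind of optimization loop practical.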
So I'll keep focusing on simulation, not hardware.
But it's good to have ideas of cost regardless, especially since we have to integrate tightly with a firmware in order
to simulate.
The firmware implies certain hardware, which implies cost.
Cost Estimates
I computed some BOM cost estimates for various alternatives for the HP5 today. It includes three versions I've been
thinking about.
All cost estimates assume Swedish taxes and shipping, plus 10% inflation on all prices since 2021.
The full spreadsheet is available here.
Baseline v5 (Roughly HP Performance)
This is the machine that the HP4 prototype was always meant to be: ODrive S1 boards, motion tracking with Arducam, a Duet3
6XD board, etc.
Cost: 2500 USD
Software work required: We'd need to implement the Duet CAN Protocol in the ODrive firmware, which is hard, and the machine will be useless until
it's done.
We could dodge this hard sw work with 5 x 1XD boards, but that brings total cost up by 384 USD, to 2840 USD, and defeats
some of the purpose of the HP5/HP Performance.
Duet HCL Nema v5 (Roughly HP Convenience)
The smallest change to get away from BLDC and ODrive, dodging protocol implementation work and cost, sacrificing performance.
The 5 x ODrive S1s are swapped with Duet's closed loop stepper drivers (Duet3 Expansion 1HCL), and the 5 x BLDC motors are swapped for a standard set of Nema17 motors.
Cost: 1750 USD
If we skip the encoders for this option, cost is down to ca 1600 USD.
Software work required: We would need to implement torque mode in the expansion board, plus the Duet CAN Protocol. That shouldn't be too hard, and the
machine can be useful without it as well.
Klipper Driven HP5 (Roughly HP Core)
No closed loop control, no torque mode.
The Duet3, the 5 x ODrive S1s, and the 5 x BLDC motors are swapped with 5 microcontroller units and 5 Nema17 motors.
The 5 microcontroller units will be driven by Klipper running on the Raspberry Pi 5 via CAN and a USB to CAN bridge.
One load sensor per anchor is also included in this Klipper driven version.
Cost: 1050 USD
Adding closed loop to the Klipper Driven HP5 could come as cheap as 5x20 = 100 USD (source) bringing total cost up to 1150 USD.
Software work required: A lot. Klipper's Hangprinter support is very basic. But although it's a lot of work, none of it is very hard, since
it's mainly re-implementation into Python,
and we have the simulator to quickly test everything. The machine is potentially useful even without software upgrades,
since the basic Hangprinter support in Klipper already works.
Thoughts
The Duet options might still make sense for those who need performance and want minimal changes compared to something
that has been built and proven to work (the HP4 prototype).
The Klipper option makes the most sense for people who want to go cheap and develop, while still keeping the possibility
of adding performance later.
- tobben
Torbjørn Ludvigsen
Everything on this homepage, except the videos published via Vimeo or YouTube, is licensed under the GNU Free Documentation License.
The videos published via Vimeo or YouTube are licensed under the terms of those platforms.