Model Train-related Notes Blog -- these are personal notes and musings on the subject of model train control, automation, electronics, or whatever I find interesting. I also have more posts in a blog dedicated to the maintenance of the Randall Museum Model Railroad.
2025-11-18 - HO Live Camera Car v2: Arducam
Category Video
I’m building an HO-Size Car Camera project (see first post here).
This is the camera I’ve selected: the Arducam Module 3 with the 75-degree FoV (https://amzn.to/43sCfHv).
Raspberry Pi Zero 2 W w/ Arducam Module 3.
The paper “doc” points to this:
https://docs.arducam.com/Raspberry-Pi-Camera/Native-camera/12MP-IMX708/
More info at https://github.com/raspberrypi/rpicam-apps/
and the official doc is at https://www.raspberrypi.com/documentation/computers/camera_software.html#building-libcamera-and-rpicam-apps
Minimal Setup
$ sudo vim /boot/firmware/config.txt
Change 1 to 0:
camera_auto_detect=0
In section [all], add:
dtoverlay=imx708
$ sudo reboot
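Once the Pi is back up, a quick sanity check is a minimal Picamera2 capture -- this is just a sketch assuming the stock picamera2 package that ships with Raspberry Pi OS (rpicam-hello from rpicam-apps works just as well):
# Minimal check that the IMX708 is detected after the reboot.
# Assumes the stock picamera2 package on Raspberry Pi OS.
import time
from picamera2 import Picamera2

picam2 = Picamera2()
print(picam2.camera_properties)   # should report the sensor model (imx708)

picam2.configure(picam2.create_still_configuration())
picam2.start()
time.sleep(2)                     # let auto-exposure / white balance settle
picam2.capture_file("test.jpg")
picam2.stop()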
Click here to continue reading...
2025-11-15 - HO Live Camera Car, Version 2
Category Video
A few years ago I explored creating an HO-size live camera car using an old pin-point IP security camera. It had terrible quality, and I dropped the project. I also explored using an ESP32-CAM and abandoned it, as the image quality was not impressive. This time I want to start over, using a Raspberry Pi camera as the basis.
A quick prototype test, displaying the camera video on a tablet running TCM.
The problem with the ESP32-CAM is that the default camera that ships with it is an OV2640 (datasheet here), an image sensor from OmniVision released in 2005. It’s not a bad image sensor per se, except that it is seriously outdated and its out-of-the-box image quality is poor -- there’s been a ton of progress on image sensors since then.
Hardware Choice
Instead, the new project is going to be based on a Raspberry Pi. After looking at the various options, I opted for a Raspberry Pi Zero 2 W:
- This is a fairly compact board that fits nicely on an HO-size car.
- The processor is a 1 GHz quad-core 64-bit Arm Cortex-A53 CPU, similar to the one in an RPi 3.
- 512 MB SDRAM.
There is no reason to use the original Raspberry Pi Zero -- it's only marginally cheaper, yet has the single-core CPU of an RPi 1. From direct experience, that is extremely limited and slow. The Raspberry Pi Zero 2 W is a much better choice.
The Raspberry Pi Zero 2 W neatly fits in an HO-size gondola car!
Now we need to select an adequate camera. There are a lot of options available. We’ll start by looking at the “official” “RPi Camera” page at https://www.raspberrypi.com/documentation/accessories/camera.html
Click here to continue reading...
2025-09-08 - Thoughts on the Car Ride Video Plugin
Category Video
I have detailed in a previous post how I create my “car/cab ride” videos: a Mobius Maxi 4K is placed on a flat car and either pulled or pushed by the train using a custom 3D-printed rod as a draw-bar connector; I then use a DaVinci “Fuse” script that I wrote to remove that gray rod from the image.
The plugin transforms this image:
into this:
That DaVinci plugin has a lot of idiosyncrasies though. It’s based on a line-per-line contrast analysis, so it has no semantic understanding of where the rod is versus where the rails are. When the rod gets very close to the rails in a curve, that analysis totally fails. And the backfill is extremely basic -- it’s a simple horizontal interpolation between both sides of the detected rod, line per line. That’s why it creates these horizontal bands in the middle: there’s no pattern to it.
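To make that concrete, the backfill amounts to nothing more than this kind of per-line linear interpolation. This is a simplified numpy sketch, not the actual Fuse script, and it assumes the contrast analysis has already produced the left/right rod boundary columns for each line:
import numpy as np

def backfill_rod(frame: np.ndarray, bounds: list[tuple[int, int]]) -> np.ndarray:
    """Replace the detected rod with a per-line horizontal interpolation.
    frame:  H x W x 3 image array.
    bounds: for each row, the (left, right) columns delimiting the detected rod.
    """
    out = frame.copy()
    for y, (left, right) in enumerate(bounds):
        if right - left < 2:
            continue
        # Interpolate each color channel linearly between the pixels at the
        # edges of the detected rod span -- hence the horizontal banding.
        for c in range(3):
            a = float(frame[y, left, c])
            b = float(frame[y, right, c])
            out[y, left:right + 1, c] = np.linspace(a, b, right - left + 1)
    return out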
So I’m always on the lookout for alternatives. Obviously, AI is all the rage these days, so let’s have a look at what we can do with a basic prompt in ChatGPT vs Gemini:
Original image (direct footage from camera):
Prompt:
ChatGPT version:
Gemini version:
OK, that was quite interesting. First, Gemini produced the resulting image in a couple of seconds, whilst it took ChatGPT almost a minute to give me back an image. Comparing both images:
- Gemini: The result is pretty much exactly what’s expected. The gray rod is gone, and the track in between has not only been smoothed, its pattern actually looks pretty impressive. We can also see that the rod’s shadow has been removed, something my plugin can’t do.
- ChatGPT: For some reason, the image is zoomed in and the aspect ratio has changed. The gray rod is gone, and the track in between the rails looks really good. The rod’s shadow is also gone. But… there’s more. Other parts of the image have changed. The ceiling on the left side no longer has ceiling lights, and the entire lighting of the image has consequently darkened. The spotlight on the top left has changed shape. All the text has become some kind of gibberish, and the engine itself has changed somewhat -- it’s more vertical. The stairs on the platform have entirely vanished! Finally… and it took me a few seconds to realize, the entire image is now super crisp. The camera’s depth of field is gone, and everything, including the baggage car on the left and the track, is in focus. Image details have literally been added that did not exist before.
Click here to continue reading...
2025-09-04 - Conductor 2: Startup Time
Category Rtac
One of the things I get from the new version of Wazz, the dashboard keeping track of the automation, is a set of timings for when the automation computer starts in the morning:
So here are the events listed above:
- The Automation Computer is powered on by the museum staff… Even though I run a pretty barebones version of Debian on it, it takes some time for Linux to boot and go through the systemd init. I don’t have that timing in the events above; measuring it with a stopwatch a while ago, I believe it’s in the 10-20 second range.
- 9:47:25am: The “computer consist” event is sent as soon as we reach runlevel 5, the multi-user GUI. That starts a script which runs a git update on the JMRI roster, and then starts the JMRI software.
- 9:48:46am: 81 seconds later, the “conductor running” event indicates that the Conductor add-on is loaded in JMRI. These 81 seconds correspond to the loading time of JMRI, which invokes a Jython add-on trampoline that loads the Kotlin Conductor program into the JVM. It’s all a game of classloaders and such, and they essentially all run in the same JVM. But still, we have little control over that 81-second timing. It’s simply what JMRI takes to load.
- 9:49:15am: 29 seconds later, the “conductor script” event indicates that Conductor is loading the Kotlin Script for the actual automation script. That includes Conductor opening its UI and loading the SVG map; compiling the automation script with the Kotlin Scripting Engine takes about 20 seconds just by itself.
- 9:49:15am: Less than a second later, the “toggle” events are emitted by the automation script as soon as it starts executing. At that point, the automation is “live”.
In total, it takes about 2 minutes from cold “computer off” to the automation being active.
And to be clear, that's one minute too much.
Click here to continue reading...
“Wazz” is my own web-based dashboard to get an instant overview of the automation at the Randall Museum Model Railroad. Last month, I started revamping the web site with a more modern implementation, and after about a month of work, I’ve just finished this major rework of the status dashboard with the following architecture:
This now results in a web page giving a dashboard like this:
This page gives me an overview of which computers are on, whether the automated lines are active (the “toggles”), which train ran last, and whether it completed its run properly.
The major visible part is this new “performance” tab that lets me see how the trains behave on their respective route:
Click here to continue reading...
2025-07-07 - New Wazz Web Dashboard
Category Rtac
“Wazz” is my own web-based dashboard to get an instant overview of the automation at the Randall Museum Model Railroad. It used to be a crummy JavaScript single-page site that I had hacked together quickly over the years. I decided to entirely rebuild it using React, TypeScript, and Vite.
The source is available here: https://github.com/model-railroad/conductor/tree/main/web/wazz
And the web app is deployed here: https://www.alfray.com/trains/randall/wazz/
I used JetBrains’ WebStorm as the IDE; that was a nice step up from my usual VSCode setup. Not much to discuss on the implementation side -- it’s really your typical no-frills React-TypeScript web app.
The Conductor automation software exports a JSON status, which this web app reads and displays. There’s an automatic refresh every 10 minutes. Note that this isn’t hosted in any cloud -- it totally relies on the automation computer at Randall having wifi access to the internet. The uptime for that connection is around 95%. Since it’s mostly a remote-view dashboard, I don’t need perfect uptime, nor do I expect it to have a high traffic load.
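The data flow really is that simple; here is a rough polling sketch, written in Python purely for illustration (the real client is the React/TypeScript app above, and the status URL below is a placeholder, not the actual endpoint):
import json
import time
import urllib.request

STATUS_URL = "https://example.com/conductor/status.json"   # placeholder, not the real endpoint
REFRESH_SECONDS = 10 * 60                                   # the dashboard refreshes every 10 minutes

def fetch_status() -> dict:
    # Fetch and decode the JSON status exported by Conductor.
    with urllib.request.urlopen(STATUS_URL, timeout=30) as resp:
        return json.load(resp)

while True:
    try:
        print(json.dumps(fetch_status(), indent=2))
    except OSError as err:
        # The wifi link at Randall is only ~95% reliable, so failures are expected.
        print("fetch failed:", err)
    time.sleep(REFRESH_SECONDS)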
I’m essentially the sole user of that dashboard, which also explains why the display may look cryptic -- it displays exactly what I want, the way I want it, with no effort to be legible to people unfamiliar with the Conductor software.
This is the display I use for Distant Signal:
- https://www.adafruit.com/product/2278 $40, 64x32 with 4 mm pitch.
- https://www.aliexpress.us/item/3256808335479840.html, $18, 64x32 with 4 mm pitch.
Here’s the AliExpress one in use:
Using the AdaFruit version, here’s what the back of the panel looks like, annotated:
Source: AdaFruit.
Notice the little vertical chips that are highlighted in Red, Green, Blue above. There are 4 x 3 x 2 = 24 of them.
The HUB75 connector is an ad-hoc industry connector. As far as I can tell, there is no solid specification anywhere to be found. Instead, it seems to have evolved over the years, and it is used in a more or less compatible way across panels.
Click here to continue reading...
A bit more progress on the Distant Signal project: the initial configuration script consisted purely of graphics primitives, and the automation would select which of the predetermined states to display.
That’s good, but since I’m going to have several of these displays for several turnouts, I realize there’s a lot of repetition because each state represents the entire screen -- thus each state needs to repeat the title or the block numbers, for example. Instead, the new direction is to have “layers” to avoid that repetition (see the sketch after this list):
- A title layer defines the display… title. That’s all.
- A “states” layer defines multiple track states for the given turnout (typically 2).
- A “blocks” layer defines the block numbers to draw next to the track, and that means we can now have active vs inactive blocks and thus render them accordingly.
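As a rough illustration, a layered configuration could be organized along these lines -- the key names and coordinates below are made up for the sake of the example and are not the actual Distant Signal syntax:
# Hypothetical layered configuration, expressed as a Python dict for
# illustration only; the real Distant Signal script format differs.
config = {
    "title": {                      # drawn once, shared by every state
        "text": "T330",
        "pos": (2, 2),
    },
    "states": {                     # one entry per turnout position
        "normal":  {"polygon": [(0, 28), (40, 28), (63, 28)]},
        "reverse": {"polygon": [(0, 28), (40, 28), (63, 12)]},
    },
    "blocks": {                     # block labels, rendered active or inactive independently
        "B320": {"pos": (44, 20)},
        "B321": {"pos": (44, 4)},
    },
}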
With that approach, we can change the display to look like this:
[Edit: And future me even added a view of this panel with a further style update, seen here in-situ:]
Click here to continue reading...
2025-03-26 - Distant Signal: Matrix Display for Turnout T330
Category Arduino
Here’s a new project, Distant Signal: https://github.com/model-railroad/distant-signal
The goal of this project is to display the state of a remote model-railroad turnout on a LED Matrix Display. That’s obviously the “phase 2” of the single-LED ESP32 display I toyed with last week.
The hardware for this project is an AdaFruit MatrixPortal ESP32-S3 driving an AdaFruit 64x32 RGB LED Matrix -- or more exactly, some clone/equivalent of it.
Overall, the project works exactly the same as the single-LED version did:
Here’s the first iteration of the display:
This version uses a basic text-based configuration script to define the content of the screen:
That’s the configuration script that the Conductor automation program would send to the display to initialize it. The configuration script defines several “states”, for example “T330 normal” vs “T330 reverse”. Each state describes the entire content of the screen using a set of graphic primitives -- line, rect, text, and polygon.
Then MQTT is used to select which state to display -- in this case turnout states, as the automation dictates which state should be shown based on turnout sensor feedback. From the ESP32’s point of view, the behavior is totally agnostic -- all it does is display a full screen of “something”.
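For example, selecting a state then boils down to publishing a retained MQTT message on the display’s topic. Here is a sketch using paho-mqtt; the broker hostname and topic name are assumptions for illustration, not the actual Distant Signal topics:
# Hypothetical example of selecting a display state over MQTT using paho-mqtt.
import paho.mqtt.publish as publish

publish.single(
    topic="distantsignal/t330/state",      # assumed topic name
    payload="T330 normal",                 # one of the states defined in the config script
    retain=True,                           # so the display recovers the state after a reboot
    hostname="automation-computer.local",  # assumed broker address
)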
2025-03-20 - Indicator for Turnout T330
Category Arduino
Here’s a new experiment: we now have a new visual indicator of the position of the Sonora turnout T330, designed to be visible to the Saturday operators when standing at the Stockton Yard:
Sonora has the two mainline tracks that merge together at turnout T330, and there’s a signal bridge with signals that clearly indicate the position of the turnout. The problem is that the signal bridge is not visible from across the layout, where the operators are typically standing.
Thus this new experimental signal is located on the pillar -- it’s facing the operators, and it’s high behind the window, hopefully high enough to be visible even when the public is present in front of the layout.
I kept the new signal as simple as possible: green indicates the turnout is aligned straight for the “inner” track (block B320) and red indicates it is thrown for the “outer” track (block B321). Behind the signal, I placed a short explanation to hopefully make it clear what the color represents:
Click here to continue reading...