The place where random ideas get written down and lost in time.
2023-05-21 - Lua Optimization Tips
Category DEV
https://www.lua.org/gems/sample.pdf
- Locals are faster to access than globals. Copy globals into locals, especially for loop access.
- Tables are composed of an array part (storing integer keys from 1..n) and a hash part for any other keys.
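A quick sketch of both tips in plain Lua:

```lua
-- Hoist globals into locals before a hot loop: a local lives in a VM register,
-- while math.sin costs a global lookup plus a field lookup on every access.
local sin = math.sin
local sum = 0
for i = 1, 1000000 do
  sum = sum + sin(i)
end

-- Array part vs hash part: integer keys 1..n land in the array part,
-- everything else goes to the hash part.
local t = { 10, 20, 30 }   -- array part: t[1], t[2], t[3]
t.label = "speed"          -- hash part
t[100] = 42                -- sparse integer key: typically also in the hash part
```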
2023-05-17 - DaVinci Fusion Plugin
Category DEV
How to write a Fusion video effect plugin?
This SO answer leads to “we suck less” (aka WSL), and “eyeon”.
That last name, “eyeon”, is the company which originally created Fusion before being acquired by Blackmagic Design to integrate it into DaVinci Resolve.
- https://www.steakunderwater.com/wesuckless/ is the place to find a community behind plugins.
- API should be available via Help > Documentation > Developer. There’s a PDF IIRC.
- An older version (Fusion 8, 2016) can be found here: https://documents.blackmagicdesign.com/UserManuals/Fusion8_Scripting_Guide.pdf
- Scripting:
- “FusionScript”, to be used either in Lua or Python.
- Uses LuaJIT for performance. Lua is the preferred choice.
- Choice of Python 2 or 3.
- Fuses:
- OpenCL for tools & filters
- Scripts:
- Composition scripts (a “composition” is a full Fusion document)
- Tool scripts (for a “single tool” aka a single node?)
- Bin scripts (as in “media bin”)
- Utility scripts (act on Fusion itself)
- Script libraries (scriptlib, used by other scripts)
- External command-line script which can act on a composition.
- Event scripts.
- Composition callbacks (load, save, render, etc)
- Button callbacks (for UI)
- InTool scripts (executed when evaluating each frame)
Fuses have their own SDK documentation:
- https://documents.blackmagicdesign.com/UserManuals/Fusion_Fuse_SDK.pdf?_v=1658361162000
- Locally as “Fusion Fuse Manual.pdf” in C:\ProgramData\Blackmagic Design\DaVinci Resolve\Support\Developer\Fusion Fuse
- Plugin types:
- Image Processing, with inspector and onscreen crosshair.
- Metadata processing.
- Modifier plugins (affect number inputs)
- View LUT plugins.
- Using Lua with LuaJIT.
- What I’d use is an Image Processing “Tool” plugin.
- Plugin has code to register its name.
- Code to register inputs (UI controls) but also Image input/output.
- Callback NotifyChanged when UI controls change (e.g. to adjust UI).
- Callback Process to process an image and compute an output.
- Native methods that apply to the entire image:
- ColorMatrixFull (RGBA offset, scale, etc)
- RGBA Color Gain, Gamma, Saturate.
- Color Space conversion (RGB, HSV, etc)
- Clear/Fill
- Channel operations on 1 image with + - * / and RGB coefficients.
- Channel operations on 2 images with RGB fg vs bg: Copy, Add, Multiply, Subtract, Divide, Threshold, And, Or, Xor, Negative, Difference.
- Transform or Merge (center, pivot, size, angle, edge wrapping)
- Crop, Resize
- Blur (also for Glow)
- Pixel Processing: create functions that take pixels from 2 images and return the resulting pixel.
- Functions are “pre declared” and stored in an array, then used in Process.
- Processing has 8 channels: 4x RGBA and 4x Bg-RGBA (I guess for the background?)
- Shapes: created using MoveTo/LineTo + matrix operations then filled and merged with image.
- Text overlays: font management, then draw as shapes.
- Per-pixel processing using for y / for x / getPixel / setPixel (see the Fuse sketch after this list).
- DCTL (DaVinci Color Transform Language) is a shader-style processing.
- C-like syntax, operates on individual (x, y) pixels.
- Converted to GPU as needed.
- Transform DCTL to change color data in one image.
- Transition DCTL to compute transition between two images.
- OpenFX
- C/C++ shader with a Visual Studio .sln project.
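Putting the Fuse pieces together, here is a hedged sketch of a minimal image-processing Fuse, reconstructed from memory of the SDK examples -- every name (FuRegisterClass, AddInput, Pixel, GetPixel/SetPixel) should be verified against the Fuse SDK PDF above:

```lua
-- Hedged sketch of a minimal image-processing Fuse; verify all names
-- against the Fuse SDK documentation before relying on this.
FuRegisterClass("SimpleInvert", CT_Tool, {
    REGS_Name     = "Simple Invert",
    REGS_Category = "Fuses",
})

function Create()
    -- Register the image input/output; UI controls would also be added here.
    InImage  = self:AddInput("Input", "Input",
        { LINKID_DataType = "Image", LINK_Main = 1 })
    OutImage = self:AddOutput("Output", "Output",
        { LINKID_DataType = "Image", LINK_Main = 1 })
end

function Process(req)
    local img = InImage:GetValue(req)
    local out = Image({ IMG_Like = img })  -- same size/depth as the input
    local p = Pixel()
    -- Naive per-pixel loop (the "for y / for x / GetPixel / SetPixel" style);
    -- the native whole-image methods listed above are much faster when they apply.
    for y = 0, img.Height - 1 do
        for x = 0, img.Width - 1 do
            img:GetPixel(x, y, p)
            p.R, p.G, p.B = 1.0 - p.R, 1.0 - p.G, 1.0 - p.B  -- e.g. invert
            out:SetPixel(x, y, p)
        end
    end
    OutImage:Set(req, out)
end
```

The "pre-declared function array" pattern from the Pixel Processing notes would slot in here too: pick the per-pixel operator once from a table of functions outside the hot loop, then call it inside Process.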
Some examples of Fuses here:
Filming trains with the video car pulled by my “extended” coupler leaves that coupler visible on screen. It’s a tad distracting.
One question I had is: Can I write a Fusion plugin that would “erase” this object?
Let’s look at the obvious part: how would I erase this object?
We can split the problem in two: how to detect the object, and how to erase it.
First, the “coupler” appears as a gray-ish rectangle. It always touches the bottom edge of the screen and extends upwards. It’s roughly centered horizontally, but can move left and right as the train goes through curves. If we had to modify a movie frame by frame, one useful property is that on a given frame the coupler will be very close to its position in the previous frame.
The coupler is more or less a vertical rectangle.
If we were operating as a plugin in Fusion, ideally we could use a 2D UI crosshair to define its initial position, but we would also have to deal with the fact that it moves slowly from frame to frame, so tracking would be ideal.
If we were operating as command-line script without UI, the initial position could be given as parameters.
Its width is constant. It tapers as it gets farther from the camera, but overall we could parametrize the max/min width we are looking for -- e.g. what looks like a vertical gray box W pixels wide and at least H pixels high, starting from the bottom of the screen, roughly centered?
If we have a UI giving us a starting pixel position, we can use a basic flood-fill algorithm to detect the boundaries (sketched below).
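A hedged sketch of that detection in plain Lua, with the is_coupler(x, y) predicate left abstract (a candidate predicate follows below). Nothing here is Fusion API; it is a standalone illustration:

```lua
-- 4-connected flood fill from a seed pixel; returns the list of pixels that
-- satisfy is_coupler(x, y), a hypothetical predicate ("is this pixel gray?").
local function flood_fill(seed_x, seed_y, width, height, is_coupler)
  local visited = {}
  local stack = { { seed_x, seed_y } }
  local region = {}
  while #stack > 0 do
    local pt = table.remove(stack)        -- pop the last pushed pixel
    local x, y = pt[1], pt[2]
    local key = y * width + x             -- unique key for 0-based coords
    if x >= 0 and x < width and y >= 0 and y < height
        and not visited[key] and is_coupler(x, y) then
      visited[key] = true
      region[#region + 1] = { x = x, y = y }
      stack[#stack + 1] = { x + 1, y }
      stack[#stack + 1] = { x - 1, y }
      stack[#stack + 1] = { x, y + 1 }
      stack[#stack + 1] = { x, y - 1 }
    end
  end
  return region
end
```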
When looking at a still image capture, the coupler edges are naturally fuzzy. We cannot expect a stark transition between the coupler and its background. However we can parametrize this, as typically the fuzzy border will have a predictable width.
Although the coupler is mostly of the same gray color, that color can vary depending on illumination, not to mention tunnels, and there’s bound to be shade variation across the length of the object.
One option is to look at the image in HSL -- the coupler being gray should have (nearly) zero saturation, i.e. no meaningful hue.
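A possible predicate, assuming RGB values in [0, 1]; the threshold is an arbitrary starting point to be tuned on real footage:

```lua
-- A pixel is "coupler-ish gray" if its HSV-style saturation is low,
-- regardless of hue. max_sat is a tunable threshold.
local function is_grayish(r, g, b, max_sat)
  local mx = math.max(r, g, b)
  local mn = math.min(r, g, b)
  local sat = (mx > 0) and ((mx - mn) / mx) or 0
  return sat <= (max_sat or 0.15)
end
```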
Assuming we can detect the coupler, how do we “erase” it?
One suggestion is to treat the coupler as a mask of width W (sketched after this list):
- Shift the image horizontally left and right by W pixels.
- Merge this over the coupler using an averaging algorithm.
- Use the luminance as the merge coefficient.
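A hedged sketch of that fill, operating on the region list from the flood-fill sketch above; img:get and img:set are hypothetical per-pixel accessors, not Fusion API:

```lua
-- For each masked pixel, sample the background W pixels to the left and right
-- of the coupler and average the two samples per channel.
-- Hypothetical accessors: img:get(x, y) returns {r, g, b}; img:set(x, y, rgb).
local function erase_region(img, region, w)
  for _, px in ipairs(region) do
    local L = img:get(px.x - w, px.y)     -- sample just outside the coupler...
    local R = img:get(px.x + w, px.y)     -- ...on both sides
    img:set(px.x, px.y, {
      (L[1] + R[1]) / 2,
      (L[2] + R[2]) / 2,
      (L[3] + R[3]) / 2,
    })
    -- Variant: weight L vs R by their luminance instead of a plain average.
  end
end
```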
Assuming this works for one image, the obvious concern is how it will look once applied to a 30 fps movie. We may need to do some temporal averaging -- that is, apply a percentage of the last frame to smooth the output.
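That temporal averaging is just a per-pixel (or per-channel) exponential blend, e.g.:

```lua
-- alpha = 0 means no smoothing; higher alpha carries more of the previous
-- frame's fill into the current one. The value would be tuned on real footage.
local function temporal_blend(prev_value, cur_value, alpha)
  return alpha * prev_value + (1 - alpha) * cur_value
end
```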
Implementation-wise, what kind of framework would be used?
One option is to make a standalone processing app, either in Java/Kotlin/C++, with its own movie decoding + UI rendering. The input movie decoding could take advantage of ffmpeg or VLC as libraries, or avcodec on Linux. Similarly, it could write the resulting movie on the fly.
In this mode, we would convert the raw footage upfront -- cam files are 10 minutes long.
The other option is to use a Fusion plugin as the framework -- in this case Fusion provides the input and the output. Debugging seems potentially more challenging this way, although the Fuse SDK doc indicates the Lua code can be reloaded “live” to ease debugging. This has the side benefits that we can judge performance immediately, and that we can take advantage of the Fuse UI widgets, e.g. to select the initial position to look at or to set some parameters (width, blending, etc).
In this mode, we could apply the tool to only the segments being rendered in the final production rather than on the entire raw footage.
One thing I don’t see in the Fuse SDK is a way for a “tool” to incrementally operate across frames in an animation. We want to do some basic tracking of the coupler position, and it would be nice if it could be automated. The other option is to simply animate the initial position, and/or to use a planar tracker to define it.
2023-02-24 - MQTT
Category DEV
Just leaving this here to explore it later: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/protocols/mqtt.html
Guides:
- https://learn.adafruit.com/alltheiot-protocols/mqtt
- https://docs.arduino.cc/tutorials/uno-wifi-rev2/uno-wifi-r2-mqtt-device-to-device
Spec for MQTT v5:
This would be useful for SDB: make it an MQTT client.
But potentially that can be useful for other projects.
For example, my current layout stats from Randall are sent via a very hackish REST POST (hackish in the sense that it’s received by a Bash CGI :-p ⇒ not anymore, it’s a Python CGI now), and they may be better served by this protocol: e.g. Conductor sending events home via brokers/gateways. I’d want to better understand the possibilities.
From the adafruit guide above:
- An MQTT broker is a “stable” node to which (potentially ephemeral) MQTT clients connect. It’s basically the “local server” of a bunch of clients.
- Brokers typically have their own local database but they do not have to.
- An MQTT client is any end-point sending or receiving messages.
- Publish: A client sends a message to a broker.
- Subscribe: A client receives messages from a broker.
- A client keeps an open connection to a broker. The connection open/close is lightweight enough that it can be beneficial to open/close it when needed.
- MQTT messages have a fairly lightweight header, designed for lightweight payloads.
- The protocol is mostly transport-agnostic.
- The spec above recommends TCP, TLS, or WebSockets, and recommends avoiding UDP due to lack of ordering guarantee.
- QOS: 0=at most once delivery, 1=at least once delivery, 2=exactly once delivery.
- Messages are sent to “topics”: a hierarchical string e.g. /root/part1/…/partn.
- Concrete example: /building1/sensors/floor1/division2/temperature.
- Subscriptions can use wildcards: + for any single level, and # for “anything till the end” (see the matching sketch after this list).
- Topics should be carefully defined upfront, as changing the schema later is not trivial especially with regards to wildcard subscriptions.
- An MQTT gateway is a client/bridge translating data from sensors to MQTT messages (example: sensor ⇔ RF/BLE ⇔ gateway ⇔ MQTT messages ⇔ broker).
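To make the wildcard rules concrete, here is a hedged sketch of topic-filter matching in plain Lua (not from any MQTT library; it also ignores the empty level that a leading “/” technically creates):

```lua
-- Split "a/b/c" into { "a", "b", "c" }.
local function split(s)
  local parts = {}
  for part in string.gmatch(s, "[^/]+") do
    parts[#parts + 1] = part
  end
  return parts
end

-- "+" matches exactly one level; "#" must be last and matches this level
-- and everything below it (so "sport/#" also matches "sport").
local function matches(filter, topic)
  local f, t = split(filter), split(topic)
  for i = 1, #f do
    if f[i] == "#" then
      return true                          -- rest of the topic is covered
    elseif t[i] == nil then
      return false                         -- topic is shorter than the filter
    elseif f[i] ~= "+" and f[i] ~= t[i] then
      return false                         -- literal level mismatch
    end
  end
  return #f == #t                          -- no unmatched trailing topic levels
end

print(matches("building1/sensors/+/division2/#",
              "building1/sensors/floor1/division2/temperature"))  --> true
```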
MQTT Libraries
- Comprehensive list of brokers/clients here https://mqtt.org/software/.
- ⇒ Note that most of the time I want a “client” and not a “broker” (server).
- ESP-IDF has ESP-MQTT.
- ArduinoMqttClient.
- Eclipse Paho MQTT Client Java library.
- https://mvnrepository.com/artifact/org.eclipse.paho/org.eclipse.paho.mqttv5.client/1.2.5 ⇒ Last updated in 2020, seems dead in Java?
- Eclipse Mosquitto open source broker: https://mosquitto.org/
- Part of Debian or RPi packages. https://mosquitto.org/download/
- Mosquitto public “test” broker: https://test.mosquitto.org/
- Moquette: Java MQTT “lightweight” broker https://github.com/moquette-io/moquette
- Note that most MQTT brokers try to be “comprehensive”, with persistence support built on top of some database; there are very few lightweight Java broker libraries.
- Apache ActiveMQ: a broker and a client.
- https://activemq.apache.org/version-5-getting-started.html
- https://mvnrepository.com/artifact/org.apache.activemq/activemq-client
- HiveMQ MQTT Client - https://github.com/hivemq/hivemq-mqtt-client
- That looks fairly popular, recent, and documented right in the README with no need to go fishing around.
- https://mvnrepository.com/artifact/com.hivemq/hivemq-mqtt-client
Some interesting things here:
https://forum.mrhmag.com/post/an-operating-steam-throttle-you-can-customize-12548356?&trail=25
I don’t care about the “steam throttle” part. What I do care about is that this build uses:
- A 3d printed case.
- An ESP32 and other Arduino-like accessories, such as OLED screens, buttons, etc.
I realize I can use a principle like that for my own contraptions. For example for the ToF sensor for the Software Defined Blocks [research] project, I was wondering whether I should solder pin headers on the sensors and the ESP32 to use. The headers make it easier to prototype, but then they expose contacts that I may not want exposed during the real application -- and once soldered, the headers are impossible to remove neatly.
So here’s what I could be doing:
- 3d print my own case / support.
- In there, have holes for the pin headers.
- This would support the part in the direction I want.
- Just solder connections on the back on the appropriate pins.
- Or use Dupont connectors.
- The soldering can also help hold the part in place.
- It’s still possible to unsolder and remove the part.
The other option I had used before (for the motion sensor at Randall) was to use Dupont connectors for all the usable pins. That can also be worked into the 3d print, making room for only what I need and helping connect to the right pins.
Finally, for something like Software Defined Blocks [research], I’d want the sensor encased in a little box mimicking a railroad “trackside equipment house”, whatever that thing is called. It can just be a 3d printed rectangular box with a slanted roof, painted in gray. Example 1. These are called “electrical cabinets”, “equipment houses”, “relay houses”, etc.
VSCode > Terminal > use “v” icon > Terminal Settings
or
VSCode > File > Preferences > Settings (Ctrl + ,)
Scroll to Features > Terminal > Integrated > Automation Profile: Windows (or Linux)
“Edit in settings.json” → creates C:\Users\%USER%\AppData\Roaming\Code\User\settings.json
In the JSON section:

```json
"terminal.integrated.shell.windows": "C:\\Windows\\Sysnative\\WindowsPowerShell\\v1.0\\powershell.exe",
```

replace by:

```json
"terminal.integrated.shell.windows": "C:\\cygwin64\\bin\\bash.exe",
"terminal.integrated.shellArgs.windows": [
    "--login",
    "-i"
],
"terminal.integrated.env.windows": {
    "CHERE_INVOKING": "1"
},
```
Try it with Ctrl-` (or Terminal > New)
Also copy that into the workspace profile rather than the user-wide profile.
Note that this method is deprecated as now there are “terminal profiles” but it still works.
2023-01-29 - ESP32 Variants Available
Category DEV
I have 3 types of ESP32 hardware available around:
- The Heltec WIFI_Kit_32, an ESP32 with Wifi/BT and an OLED screen (I2C).
- IDF config: CPU freq 240 MHz, XTAL 26 MHz, SPI flash 4 MB (no RAM), internal SRAM 520 kB.
- Features: USB serial CP2101, Wifi, BT, OLED (cannot be removed), battery plug.
- Does NOT have: sdcard. No SPIRAM.
- The ESP32-CAM, an ESP32 with Wifi/BT, sdcard, camera.
- IDF config: CPU freq 240 MHz, XTAL 40 MHz, SPIRAM 4 MB (on SPI bus), SPI flash 4 MB (no RAM), internal SRAM 520 kB.
- Features: Wifi, BT, OV2640, sdcard (shared with onboard LED!).
- Does NOT have: USB serial, no OLED. Requires an FTDI adapter for access/programming.
- The ESP-32S … should have similar specs to the Heltec WiFi Kit 32, without the OLED, sdcard, or camera. It has a micro-USB port and a CP2102 UART. The ESP-WROOM-32 is listed as having a 40 MHz XTAL in the Espressif docs.
- Features: USB serial (CP2102), Wifi, BT.
- Does NOT have: No sdcard. No OLED. No SPIRAM.
TinyGo is not ready, and Rust is a crappy language.
TinyGo
So first let's have a look at TinyGo.
This seems promising: https://tinygo.org/docs/concepts/faq/what-about-esp8266-esp32/
“As of September 2020, we now have support for the ESP32 and ESP8266 in TinyGo!”
OK but below we find they support 2 boards: a “mini32” and an ESP8266 NodeMCU.
They also explain that they get their ESP32 device definitions from the Rust esp-rs project, which recreates them from the ESP-IDF source.
No idea what a “mini32” is but it’s based on an ESP32 so it may work for us?
We can find this: https://github.com/LilyGO/ESP32-MINI-32-V1.3
It’s not clear which ESP32 chip that board uses, but it may work with my modules.
But we have a bigger problem:
https://tinygo.org/docs/reference/microcontrollers/esp32-mini32/
- SPI ⇒ TinyGo Support = Yes.
- I2C ⇒ TinyGo Support = Not Yet.
- Wifi ⇒ TinyGo Support = Not Yet.
That makes it… pointless. At least for now.
https://github.com/tinygo-org/tinygo/blob/release/src/machine/i2c.go is the implementation of the i2c interface for machines. I note the file is prefixed by “//go:build atmega || nrf || sam || stm32 || fe310 || k210 || rp2040”. Clearly esp32 is not in the list.
From what I can see, the doc is up-to-date and I2C is really not supported on their ESP32 port yet. It’s worth noting that none of their targets support an embedded wifi driver like the one the ESP32 contains. They have the usual “Arduino Wifi via UART with AT commands” support, which is not at all the same thing, as it does not imply a “network stack”, even a limited one.
So right now TinyGo can be skipped. It’s only useful for projects requiring neither wifi nor I2C.
Rust
2 main issues with Rust in this project:
- The language is insufferable.
- The ownership rules are inscrutable, and the data types are impossible to understand clearly.
- The libraries are not helping much.
- Sure, esp_idf_hal seems to add an “object-oriented” layer to the ESP IDF C functions, but overall it’s just the same API with lipstick on it, if and when I can find it.
- The small project with 2 blinking LEDs builds a 230 kB binary.
- There are just so many libraries injected in the build… Very similar in nature to a Node.js build.
Obviously the initial part is a problem of familiarity with the Rust language. One could claim it can be fixed by learning the language more to understand the complex ownership rules, the insane trait types, and the box/ref count thing. But that’s also the worry -- generating write-only code that will be inscrutable when I pick up a side project years later.
So that’s going to be the end of this doc: TinyGo is a no-go, and Rust ESP-RS is a no-go.
For the SDB project, there are 2 possible options:
- Regular C/C++.
- MicroPython was fairly reasonable and worth looking at again.
There are projects that rebuild OpenCV as static *.a for ESP32: https://github.com/joachimBurket/esp32-opencv
2023-01-28 - ESP32: MicroPython
Category DEV
In the same vein as trying Rust & TinyGo using the “Software Defined Blocks” project as an excuse, we’re going to restart all over again, but this time with MicroPython.
Links for MicroPython:
- https://docs.micropython.org/en/latest/esp32/quickref.html
- esp32 module: https://docs.micropython.org/en/latest/library/esp32.html#module-esp32
- There’s access to the NVM and a few other hardware centric properties.
- threads: https://docs.micropython.org/en/latest/library/_thread.html [tutorial]
However:
- FreeRTOS is used and pinned to core 0.
- MicroPython is pinned to core 1, including all its threads.
For SDB, it is expected that some of the camera/vision part will have to be written in C, and then made available to MicroPython as a module. The goal is to never do image processing in Python; instead MicroPython will be the glue, e.g. getting images from a driver/module and passing them to an analyzer module.
One thing I tried in the past is this customized version of MicroPython with the OV driver. https://www.google.com/search?q=micropython+esp32-cam for more links.
https://github.com/lemariva/micropython-camera-driver specifically of interest.
This rebuilds uPy with a dedicated camera driver.
It can be a good example of how to add C level code to a forked MicroPython.
2023-01-21 - Django with NGinx?
Category DEV
Since the kids are doing Python, I think it’s time to show them how to build a self-hosted web site. Django comes to mind since it’s Python, and as an exercise we would expose it on my NGinx server.
Django tutorial: https://docs.djangoproject.com/en/4.1/intro/tutorial01/
Django / NGinx tutorial: https://realpython.com/django-nginx-gunicorn/ -- focuses on serving from a VM (in this case amazon), and using WSGI Http Server via GUnicorn.
Another obvious way to run this would be to run Django’s built-in Python HTTP server and then proxy from NGinx to that server.
The advantage of that is that the Python server can be hosted on another machine on the local network; that shows a real example of a distributed environment, and is likely easier for the tutorial aspect of this exercise.