Building a video soundboard in OBS

Despite not streaming as much anymore, I continue to tinker with OBS on a daily basis. For a while now I have wanted what I would call a “video soundboard”: a simple mechanism that allows me to quickly play short clips during my stream.

Now, this doesn’t really sound too hard. Create many scenes, add media sources, slap a StreamDeck button on top of that – boom, you are done. But this is a crappy solution, because it requires a ton of additional work whenever you want to add new videos into the mix.

I wanted to have a mechanism that relies entirely on one single scene with one single media source for all the content. Control over what gets played is wholly set on the StreamDeck and the StreamDeck only, meaning that adding new content is as easy as adding a single new command to a new button on the StreamDeck.

Sound interesting? Here is how it works.


Before we start, you will need the following things ready:

- OBS with the Advanced Scene Switcher plugin installed
- An Elgato StreamDeck with the Text File Tools plugin
- The VideoFileFromText.lua script

The Basic Setup

Create a new button on your StreamDeck with the Text File Tools plugin.

Specify a target filename and set the Input Text to the location of your desired media file (e.g. C:/Temp/heheboi.mp4). Leave the “Append” checkbox unchecked so the entire file gets rewritten.

This is all you will need to do to add new media to the scene. We can now set up OBS.

Within OBS, create a new scene (e.g. “Scene – Memes”) and add a media source to that scene (e.g. “Media Source – Memes”). Make sure this is a media source and not a VLC source!

Open the properties of the media source and be sure to check the “Local File” checkbox, “Restart playback when source becomes active”, “Use hardware decoding when available” and “Show nothing when playback ends”.

Hide the media source by clicking the “eye” icon in the Sources list.

Setting up Advanced Scene Switcher macros

Now open the Advanced Scene Switcher plugin via Tools – Advanced Scene Switcher, navigate to the Macro tab and add a new Macro called “Meme Scene (Start)”.

Check the “Perform actions only on condition change” checkbox and add the following condition:

[ If ] [ File ]
Content of [ local file ] [ <PATH ON STREAMDECK BUTTON> ] matches:
.*

(Yes, that is dot asterisk – no space or anything else before, after or in between)
Check the “use regular expressions” checkbox and the “if modification date changed” checkbox, and leave “if content changed” unchecked.

Now add the following actions:

[ Switch scene ]
Switch to scene [ Scene - Memes ]
Check the "Wait until transition to target scene is complete" checkbox.

[ Scene item visibility ]
On [ Scene - Memes ] [ Show ] [ Source ] [ Media Source - Memes ]

This takes care of actually playing the video when a change in the file is detected. But we also want to switch back to the previous scene when playback has finished, so we must add another macro.

Add the second macro “Meme Scene (End)”, check the “Perform actions only on condition change” checkbox and add the following conditions:

[ If ] [ Scene ]
[ Current scene is ] [ Scene - Memes ]

[ And ] [ Scene item visibility ] (Click the clock) [ For at least ] [ 1.00 ] [ seconds ]
On [ Scene - Memes ] [ Media Source - Memes ] is [ Shown ]

[ And ] [ Media ]
[ Media Source - Memes ] state is [ Ended ]

Add the following actions to the second macro:

[ Switch scene ]
Switch to scene [ Previous Scene ]
(Check the "Wait until transition to target scene is complete" checkbox)

[ Scene item visibility ]
On [ Scene - Memes ] [ Hide ] [ Source ] [ Media Source - Memes ]

Now we should be good, right? Well, almost. While we react to changes in the file thanks to the macro and switch between the scenes, we still do not set the media file on the source. This is handled by the Lua script which we must set up as a final step.

Setting up the Lua script

Open the Scripts window via Tools – Scripts and add the VideoFileFromText.lua script.

You should see some options on the right side of the window.

Set the interval to 50 ms. For the Video File List, browse to the same text file you used on the Elgato StreamDeck button. Select “Scene – Memes” as the Scene and “Media Source – Memes” as the Media Source. Finally, check the “Enable script” checkbox and you are done.
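Under the hood, the script has a simple job: poll the text file and, whenever it changes, push the new path onto the media source. A rough sketch of that logic (in Python rather than Lua, with illustrative names – the actual VideoFileFromText.lua may differ in detail):

```python
import os
import time

def watch_playlist_file(path, set_media_file, interval=0.05, iterations=None):
    """Poll `path`; whenever its modification time changes, read the
    media file location it contains and hand it to `set_media_file`.
    `iterations` limits the loop for demonstration purposes."""
    last_mtime = None
    count = 0
    while iterations is None or count < iterations:
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            mtime = None  # file missing or unreadable; keep polling
        if mtime is not None and mtime != last_mtime:
            last_mtime = mtime
            with open(path, encoding="utf-8") as f:
                media_file = f.read().strip()
            if media_file:
                # In OBS, this would update the media source's
                # "local_file" setting instead of calling back.
                set_media_file(media_file)
        time.sleep(interval)
        count += 1
```

With the 50 ms interval, the media source is updated almost immediately after the StreamDeck button rewrites the file, which is what triggers the macro chain above.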

Tying it all together

Be sure that the Advanced Scene Switcher is active and press the button on the StreamDeck. The scene should switch to your Meme scene, play the video and then switch back. Add another button on the StreamDeck that writes a different video file path to the same text file.

Now press the new button, and the second video file should play.

This makes adding short clips really simple and pain-free. No need to manually create multiple scenes or deal with multi-action steps on the StreamDeck. Adding a new video is as quick as adding a new button, setting the path to the desired media file and giving it a nice button image.

Of course, this is just the solution that I came up with, so your mileage may vary.

However, I do think that the inherent simplicity makes it an ideal solution. What do you think?

Friendship ended with WebDrive

Now RaiDrive is my best friend.

After more than a decade I have finally migrated away from WebDrive. It is not that I am particularly unhappy with the product, so South River should not feel bad here. My use case for the software simply changed over the years, and WebDrive did not cater to it.

Back in the 2000s I primarily used WebDrive to keep a connection to an FTP or SFTP system to easily manage and edit files. A perfect fit, a very reliable tool. Especially in a landscape where many applications only knew how to work with local (as in: on a local drive) files. With the advent of rich media content online, however, WebDrive’s approach to file access no longer fits my requirements.

These days, I manage pools of video and audio on remote systems. I do not want to download an entire file to “peek” into it. It is problematic for collaboration. And more importantly: It is slow and inefficient.

Enter RaiDrive, a Korean-made piece of software that does this very well (on supported protocols).

I read some buzz online about RaiDrive not being reliable; however, I cannot echo that sentiment. The software has been reliable for me during two months of daily use.

Being the old fogey that I am, I make no use of any of the hip “cloud” integrations both products offer, so I cannot speak to the quality of those. However, RaiDrive uses EldoS/Callback’s reliable components – the same ones also used in my favourite sync tool SyncBack Pro.

Content-based file search with PowerShell and FileLocator

I love PowerShell. Unfortunately, as soon as we cross into the realm of trying to grep for a specific string in gigabytes worth of large files, PowerShell becomes a bit of a slowpoke.

Thankfully I also use the incredible FileLocator Pro, a highly optimized tool for searching file contents – no matter the size. The search is blazingly fast – and you can easily utilize FileLocator’s magic within Powershell!

For the sake of clarity: I will be using PowerShell 7.1.3 for the following example.

# Add the required assembly
Add-Type -Path "C:\Program Files\Mythicsoft\FileLocator Pro\Mythicsoft.Search.Core.dll"

# Prepare the base search engine and criteria
$searchEngine                      = New-Object Mythicsoft.Search.Core.SearchEngine
$searchCriteria                    = New-Object Mythicsoft.Search.Core.SearchFileSystemCriteria

$searchCriteria.FileName           = "*.log"
$searchCriteria.FileNameExprType   = [Mythicsoft.Search.Core.ExpressionType]::Boolean

$searchCriteria.LookIn             = "C:\Temp\LogData"
$searchCriteria.LookInExprType     = [Mythicsoft.Search.Core.ExpressionType]::Boolean

$searchCriteria.SearchSubDirectory = $true

$searchCriteria.ContainingText     = ".*The device cannot perform the requested procedure.*"
$searchCriteria.ContentsExprType   = [Mythicsoft.Search.Core.ExpressionType]::RegExp

# Actually perform the search, $false executes it on the same thread as the Powershell session (as in: it's blocking)
$searchEngine.Start($searchCriteria, $false)

foreach ($result in $searchEngine.SearchResultItems) {
   # SearchResultItems are on a per-file basis.
   foreach ($line in $result.FoundLines) {
      "Match in $($result.FileName) on line $($line.LineNumber): $($line.Value)"
   }
}

Wowzers, that’s pretty easy! In fact, a lot easier (and quicker, to boot!) than playing around with Get-Content, StreamReaders and the like.

One thing of note here: rather than running this in a loop for every file in a directory, it is quicker to let FileLocator process an entire tree of folders/files in a single search. The larger the dataset, the larger the gains from invoking FileLocator.

And yeah, you can use FileLocator on the command line through flpsearch.exe – however, the results are not as easily digestible as the IEnumerables you get through the assembly.

The SemWare Editor is now available for free

There are two things you cannot ever have enough of: Good text editors and good file managers.

One of the arguably best commercial console-based editors for Windows with a history going back all the way to the 1980s is now available for free: The SemWare Editor.

If you have never heard of or tried TSE Pro, imagine a mix of the simple, intuitive CUI of EDIT.COM with the rich feature set of vi, allowing you to extend and alter how the editor works by adding or modifying the included macros. The editor comes in two flavours: a true console application and a Windows-only pseudo console that has a few more bells and whistles. Of course, the purebred console version works great via SSH/telnet.

Now is a great time to give TSE a try, as the following announcement came on the mailing list:

Yes, this and future versions will be free.
The good Lord Willing, (ref: James 4:13-15),
I plan to continue working on TSE.

Sammy Mitchell

I cannot praise the editor enough and will vouch that it is worth every penny of its previous license cost.

You can grab the setup on Carlo’s TSE page.

Corsair K95 Lockups

I am still rocking my beloved Corsair K95 RGB, the original one with the 18 G-keys. I still think there is no keyboard to date that is as great for multiboxing as this one.

Since migrating to my new machine a few months ago, the keyboard would occasionally do quirky things. Letting iCue run for a while would cause gamepad detection to “lock up” and take quite a while. What is worse: no up- or downgrading of iCue made a difference here.

Things like the joy.cpl, GeForce Now, Inner Space or games supporting gamepads would frequently take minutes to load. Disconnecting and reconnecting the keyboard fixed the issue temporarily until something went bonkers again.

The solution to this was to force a reinstallation of the keyboard’s firmware through iCue. I have no idea why I would need to do this, but flashing the firmware again solved the issue permanently.

Windows 11!!one

I have been playing around with Windows 11 in a virtual machine. My thoughts can best be summed up with “a bouquet of unremarkable things nobody wanted”. Windows 11 already made the rounds on the internet over its strict “no old hardware allowed” policy and the back-and-forth over Direct Storage which seemed like nothing more than marketing bullshit.

Personally, I have an entirely different pet peeve with Windows 11: It looks revolting. It looks ugly. It looks disgusting. Windows 11 looks more and more like a failed attempt at skinning Wine to make it hip, fresh and cool. Or like the aftermath of a broken UXThemePatcher run. Or what happens when WindowBlinds crashes.

“What is this?” – “…Unique”

Please remember: People, (presumably) actual living people, got paid to do this.

People who know – or should know – that a majority of old applications will look butt-ugly with a half-assed mix of design elements from Windows 2000 (console contents and some colours), Windows 8/10 (window controls that were meant for rectangular themes) and the lunacy that is Windows 11.

The colours do not match. The icon language does not match. The margins do not match. Nothing matches.

Synergy 1 Pro on Linux clients – Automatically start before logon with LightDM

This is just a very quick and dirty how-to for getting Synergy 1 Pro to run on LightDM before logging in. All the other instructions I have found haven’t really worked out for me, so let this be my best try…

Step 1: Setting up

Before we can set up LightDM’s configuration, we first need to create a PEM certificate and configuration as the root user, since that is what my LightDM process runs as.

Log into a normal interactive X session. Start the graphical Synergy 1 Pro client via “sudo synergy”, generate a certificate and set the client up so that it can actually connect and is approved by the server.

Step 2: Adjust LightDM’s configuration

I am on Arch, so my configuration sits in /etc/lightdm/lightdm.conf. Open the configuration and add the following line to the [Seat:*] section:

greeter-setup-script=/usr/bin/synergyc --daemon --name <CLIENT_NAME> --enable-crypto --tls-cert /root/.synergy/SSL/Synergy.pem <SERVER_IP/HOST>:24800

Step 3

There is no step 3.

Whenever a session gets terminated, the Synergy client will also briefly be killed and respawned for the LightDM greeter. I have found no reason to set up anything other than the greeter-setup-script.

VMware Workstation – Containers with vctl

I never understood why people think Docker is a big thing. To me, it always seemed to solve a problem that does not exist by adding layers of complexity which inevitably always introduce new problems and bugs.

If you wanted to isolate processes, why not use jails or zones? “But Tsukasa”, people sneered at me with mild amusement in the past, “you don’t understand. It’s about the ease of replacing software!”. Yeah, you can do that without Docker, it’s called package management.

Somewhere along the line, the OCI was founded and at least there was some kind of standardized way of handling containers.

Enter VMware Workstation in the middle of 2020. Coming to us courtesy of a technical preview, VMware shipped the new vctl container CLI it plucked from VMware Fusion. And I really wanted to love it, because the idea behind it is good – but…

A Promising Disappointment

I am a VMware guy. After more than a decade with VMware Workstation, I really dig the features. Yes, you can probably achieve similar results with other virtualization solutions – but none make it as easy as VMware. Yes, call me indolent and a fanboy, if you must. So imagine my joy when VMware announced their container CLI.

No more need to install the Hyper-V role, no more fiddling with some wonky plugins – just a clean, supported product that does what Docker does, but with VMware’s hypervisor in the back. One product to be the definitive all-out solution for my desktop (x86) virtualization needs.

VMware creates a new virtual machine in the background that acts as a host for the containers. This machine does not show up on your usual list of running VMs. Instead, it will show you the active containers. Don’t click on them though, the Workstation UI does not really know what to do with containers and you will end up with a botched tab of nothingness.

Since vctl is using VMware’s hypervisor, all the good stuff is already in place and familiar to me. Network configuration is dead simple and I have all the tools to explore/manage the container VM.

The performance is also top-notch, so what could I possibly complain about?

The integration and the polish. vctl creates an alias to docker, so you can issue both a vctl ps or docker ps and get the same result. Unfortunately, vctl does not shim all the commands and parameters Docker has, meaning that a lot of tooling and cool integration simply does not work. Want to use VSCode’s Remote Container extension with VMware? Bad luck, the command does not reply in the expected fashion because it does not understand all the parameters.

This is incredibly disappointing because the container feature in Workstation is so close to being a fantastic proposition in a time where VMware sunsets some long-standing features (cough, Shared VMs, cough, Unity on Linux, cough).

It does what it says – and nothing more… yet!

Please don’t misunderstand: The feature does what it advertises to do. I can easily author a Dockerfile and build it with vctl – without having to install Docker. This by itself is already a godsend because it reduces the amount of software I need to install on my workstation.

But I cannot help but wonder how cool it would be to have a (parameter-compatible) drop-in replacement for Docker from VMware as part of the software I use for full virtualization anyway. And give me a docker-compose, while you are at it. Thanks.

Synology Diskstation – Two Things

I do not get to write neat posts nearly as often as I would like to. But this one does not violate any NDAs and is relevant to an OG post on this blog.

So today I want to talk about two things regarding my beloved DS2413+ that other people might find useful in some capacity. Or at least entertaining.

Be Cool, Be Quiet – Live the Noctua Lifestyle!

I replaced the two Y.S. Tech stock fans in my DS2413+ with two Noctua NF-P12 redux-1300. Technically you can pop in every 3-pin 120mm fan you want, however, due to the way Synology drives the fans, they might not drive enough airflow, stop working (as in stop spinning) or DSM complains about fan failure.

I originally intended to replace the fans with the official replacement parts, however, it seems that I got stiffed so procuring the parts on short notice was no option. After a bit of research, I settled on the NF-P12 because other folks around the internet had positive experiences with the swap-out. I used this rare chance to clean the interior of the NAS, routed the cables nicely and thought I was done – I was wrong. I learned that lesson when the unit started beeping in the middle of the night.

You do want to set the fan setting to “Cool Mode” in your power settings; otherwise, one of the fans will randomly stop spinning after a few hours and DSM will complain about fan failure with alarm beeps. “Cool Mode” prevents both.

There are some other hacky ways to edit the fan profiles manually via the console, however, this operation apparently needs to be repeated after each DSM update. I’m way too lazy for that.

As for my cool Noctua lifestyle: The temperatures are virtually identical and the fans are quiet (as you would expect from the mighty Austrian owl!).

If you want to live the dream, please be sure to check the web for other people’s reports of your specific unit. Depending on the model the fan size, pin type and compatible fans will vary.

The big question, though: Is it worth the hassle?

Honestly speaking there is very little difference between the Y.S. Tech and Noctua fan in terms of cooling performance and noise level – at least when used on “Cool mode”. But you want that Noctua lifestyle, don’t you?

Addendum 2021-09-13: After upgrading to DSM 7, something about the way the fans are being addressed seems to have changed. I ran into several instances where DSM would report the fan as “faulty” and turned it off completely. Changing the fan settings around does not seem to make a difference here. I have popped in a new set of Y.S. Tech fans (original Synology replacement parts) for the time being…

Data Scrubbing – Or: Dude, Where is my Data?

I run my data scrubbing tasks regularly. Due to a recent power outage, the system complained about possible write cache issues; it then successfully completed a scrub and asked whether I wanted to reboot now or later. It also asked whether I wanted to remap the disk.

“Sure”, I thought to myself, “I like maps!”. I toggled the option and hit “Reboot now”. DSM rebooted… and that was about it.

Blinking status LEDs but no DSM web interface, no SMB and no NFS shares. Slightly nervous I tried to connect to the NAS via SSH. dmesg and the system messages did not show anything of particular interest, so I started poking around the internet.

Google spewed pages and pages of horror stories at me that made my skin crawl: bad superblocks, broken filesystems, complete loss of data, cats and dogs living together – the whole nine yards to make me break into a cold sweat and fear the worst.

In this case, though, a simple “top” explained the situation: DSM was performing an e2fsck check of my filesystem.

This obviously caused the logical device to be busy or unavailable, and explains why lvs, pvs and vgs all reported everything to be in order while mdadm reported proper operation. It also explains why the shares were not available: the logical volume was not mounted.

Personally, I find the design decision to not initialize the web interface a bit weird, as it is truly unsettling to see all your data in limbo, with your only indication that something is or could be happening being the blinking lights on the front of the unit (not the drive indicators).

I hope that DSM 7 improves on that end. It would be much more transparent if the web interface came up and indicated that a volume is currently unavailable due to a running filesystem check.

Closing Thoughts

The DS2413+ is still an awesome unit and I very much appreciate the stability and ease of use of it. Synology is doing a great job at being very user friendly, so it really hits hard when something like the e2fsck situation comes up.


Good news, everyone! This blog is now also available via Gopher.

I will be working on making the blog look better (as in: remove all the pesky HTML and replace it with proper plaintext) over the coming weeks.

It is honestly great to see what is still available through Gopher, and I hope to join those elitist ranks with a proper and deserving presentation. But until then… please excuse the HTML.

Streamlining your OBS workflow

Building a stream layout is a lot of work. Design elements like colours, fonts and layouts have to be consistent. In the past, I used to design things in Photoshop or Affinity Photo, cut the assets up into smaller pieces and then either use them in OBS directly or run them through DaVinci Resolve for some basic animation. This approach works fine on a rather static layout.

Now I’ve been toying around with the idea of what I call “After Dark” streams that have their own, slightly different style. The fonts and layouts stay the same, however, all the colours change. With my old workflow I would either need to re-export and edit all the assets… or find another way.

For a while now, I have been building my layouts as HTML documents. Using CSS animations and jQuery as a base for dynamic data processing, I can easily switch things around.

Since I am on Windows, reading/writing the contents of a JSON file is really easy with PowerShell. So I can map some Stream Deck keys to perform value toggles in the JSON, causing my layout to dynamically adjust.
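In essence, each Stream Deck key just flips one value in that JSON file and the browser source picks the change up. A minimal sketch of such a toggle (shown in Python for brevity; the file name and key are made up for illustration):

```python
import json
from pathlib import Path

def toggle_layout_flag(config_path, key):
    """Flip a boolean flag in the layout's JSON config file.
    The HTML layout polls this file and adjusts its CSS accordingly."""
    path = Path(config_path)
    config = json.loads(path.read_text(encoding="utf-8"))
    config[key] = not config.get(key, False)
    path.write_text(json.dumps(config, indent=2), encoding="utf-8")
    return config[key]
```

Bind something equivalent to a Stream Deck key (e.g. toggling an "afterDark" flag) and the layout restyles itself without touching a single asset.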

Same for the “Now Playing on Pretzel” widget. It processes the JSON file generated by Pretzel’s desktop client, dynamically resizes the widget and even fades out once music stops playing.

HTML stream layout comparison

The overall advantage is obvious: If I ever choose to edit the colour scheme, it is one edit within one CSS file. New font? A couple of changes. Changing the stream title, metadata et al is also just a simple set of nodes in a JSON file – the rest of the layout dynamically adjusts. And it is all easily accessible through one press on my Stream Deck.

Additionally, this approach reduces the number of required scenes/elements drastically. Whereas you would either need to toggle the visibility of sources or duplicate scenes on a more traditional setup, everything runs in proper code here. I have no dedicated intermission scene… the title card simply transforms into it, keeping all elements coherent within the scene.

“But Tsukasa, the performance impact”, people will yell. I dare say that any blur effect on a fullscreen video in OBS has probably a heavier impact on the performance than a reusable browser source. The entire title card sits at around 10% CPU usage, with a good portion of that going towards the VLC video source.

Dynamic changes to the layout

So I feel it is high time people stop using video-based layouts and migrate to proper HTML-based ones.

How Droidcam OBS gets it wrong

Given the current state of the world, you might be in need of a webcam to participate in meetings and prove that you actually wear clothes and/or pay attention. Given the current state of the world you might also have noticed that webcams have shot up in price.

However, fear not. You can use your smartphone as a webcam. Elgato is currently shilling EpocCam for iPhones, which is what led me to take a look at these applications in the first place. One of the more popular solutions for Android seems to be Droidcam. There is an early access version specifically tailored for use with OBS, called Droidcam OBS. However, for a solution aimed at streamers, this software gets it wrong so very, very badly.

So, what is wrong with the software? Well, it comes with its own OBS plugin to feed the data into OBS, yet it misses out on the most basic things any OBS user would expect: a way to actually change white balance, exposure and focus from within the plugin. In its current state, the video transmission works beautifully, with a stable framerate at great performance. However, there are no remote controls that allow you to change the camera settings.

An app that is designed specifically so you can use the back camera of your phone as a webcam expects you to fiddle with a touchscreen on the front – which you cannot possibly use once the phone is in its intended capture position. All while the image within the smartphone app is only visible after you have already connected to OBS.

Now I can already hear you typing away “but Tsukasa, if you connect a camera to a Camlink you also have to set the parameters on the camera, you dummy”. This is true. But this is not a Camlink. This is a two-way communication that only works if OBS instructs it to. In other words: There is a channel that could potentially be used for these tasks.

But hey, the app is still in early access, so perhaps this will come at a later date. And surely other solutions offer remote adjustment of the camera parameters, right? Wrong. All the solutions I tested either expect you to fiddle with the touchscreen within the app on your phone or simply do not allow any adjustments at all.

So I suppose my criticism of Droidcam OBS is a bit harsh since every other app I tested is just as bad or even worse in this regard. I merely think that a ton of potential is being wasted due to one design decision here because the rest of the app is top-notch.

Improve your OpenSSH experience on Windows 10

Since Windows 10 1709 Microsoft offers an optional SSH client feature. This is pretty sweet and much needed. Unfortunately, the current [as of writing this post] version that you can install via the Features panel or the command line lacks some neat features like VT mouse support.

I can already hear the sneering from some people. Hey, sometimes I love me a simple click instead of remembering some keybindings across different applications. I am getting old and forgetful! So let’s give this old man some mouse support for his Windows OpenSSH.

Thankfully the OpenSSH version Microsoft offers via the optional feature is nothing special. You can simply go to Microsoft’s GitHub project for OpenSSH on Windows and download a newer release.

In fact, you do not even need to explicitly uninstall the existing feature. Since its home directory (C:\Windows\system32\OpenSSH) is added to the PATH environment variable, you can simply extract a freshly downloaded zip archive with a newer version of OpenSSH to a location of your convenience and add that location higher up in the PATH hierarchy.

And just like that you can have your mouse support, so you can pull things around in tmux.

OpenEdge 10.2b – Invalid version Error 2888

We service different versions of OpenEdge for a software project. The entire build has been automated through Jenkins, Ant and PCT to produce artefacts for different OpenEdge version targets. So far, so good. Let’s not comment on the fact that OpenEdge 10.2b’s lifespan has ended, and focus on the more interesting part of the post.

I was recently asked to analyze a somewhat odd issue that has cropped up in the last few weeks. The application ran as expected; however, one r-code caused the error “Invalid version, 1005 (expected 1006) in object file myApplication.myClass (2888)” at runtime. Quite odd, to say the least.

According to the Progress Knowledge Base, we somehow managed to produce r-code for OpenEdge 10.0a. Impossible – we always compile every file in a strictly regulated environment during our pipeline runs, and I have never even seen a pre-10.2b release on our build slaves. There was just no way for this message to be accurate. Or was there a way…?

Suspecting that PCT perhaps contained old r-code which would cause us trouble during the compilation process, I set the PCT-SRC property to true to force JIT compilation of the required tool procedures. No success.

The solution came in form of the xref directives within the PCTCompile task. Setting the xmlXref property to false fixed the issue. This makes sense, considering the functionality is only available starting with OpenEdge 11.
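For reference, the relevant bit of the build file ends up looking roughly like this (layout from memory – attribute names and nesting may differ between PCT versions, so check the PCT documentation for yours):

```xml
<PCTCompile destDir="build" dlcHome="${DLC}" xmlXref="false">
  <fileset dir="src" includes="**/*.p,**/*.cls"/>
</PCTCompile>
```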

It is, however, sort of spooky that there were no compilation problems and most of the r-code worked flawlessly… except that one, cursed class.

ZNC Playback script for mIRC/AdiIRC

The main reason I use an IRC bouncer is so I can detach from the bouncer and get the messages I missed the next time I attach to it again. ZNC provides support for this feature by default, however, there is a third-party module called Playback that has some additional bells and whistles.

To properly utilize the Playback module, you need to adjust two settings on your bouncer and your IRC client needs to do some minor lifting. After searching the internet far and wide, I have not come across a premade AdiIRC script that worked the way I wanted it to, so I figured it was high time to improve the situation.

So what do we actually need to teach our IRC client? Essentially, the client needs to keep track of when it received the network’s last message, so that upon reconnect it can request all messages newer than this timestamp from the bouncer. Sounds easy enough, especially since there were some example scripts for other clients linked on the wiki page for Playback.

I wired up a basic mIRC/AdiIRC script that will retain timestamps of ZNC connections on a per-network basis. Instead of merely updating the timestamp when a PRIVMSG comes in, the script also updates the timestamp on JOIN/PART events to cover “quiet” channels/networks.

To avoid the odd timezone problems, the script will read the timestamp from IRCv3 enabled timestamp parts within events/messages. I still have some odd timezone issues between my own IRCd, bouncer and client, but this is likely due to a configuration problem on my end. On the major networks, the script operates as intended. The data is held in a small hashtable that gets serialized/deserialized to an INI file on exit/startup.
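For illustration, the bookkeeping the script performs boils down to something like this (a Python sketch, not the actual mIRC/AdiIRC code; the exact *playback request syntax is documented on the module’s wiki page):

```python
import time
from configparser import ConfigParser

class PlaybackTracker:
    """Track the newest message timestamp per network, so a client can
    ask ZNC's Playback module for everything after that point on reconnect."""

    def __init__(self):
        self.last_seen = {}

    def note_message(self, network, server_time=None):
        # Prefer the IRCv3 server-time tag over the local clock
        # to sidestep timezone mismatches between client and bouncer.
        self.last_seen[network] = server_time if server_time is not None else time.time()

    def playback_request(self, network):
        # On reconnect, the client would message *playback with this
        # timestamp (see the Playback module docs for the exact command).
        return self.last_seen.get(network, 0)

    def save(self, path):
        # Serialize to an INI file on exit (configparser lowercases keys).
        ini = ConfigParser()
        ini["timestamps"] = {net: str(ts) for net, ts in self.last_seen.items()}
        with open(path, "w", encoding="utf-8") as f:
            ini.write(f)

    def load(self, path):
        # Restore the per-network timestamps on startup.
        ini = ConfigParser()
        ini.read(path)
        if "timestamps" in ini:
            self.last_seen = {net: float(ts) for net, ts in ini["timestamps"].items()}
```

The actual script additionally refreshes the timestamp on JOIN/PART events, as described above, so quiet channels do not replay hours of scrollback.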