Corsair K95 Lockups

I am still rocking my beloved Corsair K95 RGB, the original one with the 18 G-keys. I still think there is no keyboard to date that is as great for multiboxing as this one.

Since I migrated to my new machine a few months ago, the keyboard would occasionally do quirky things. After iCue had been running for a while, gamepad detection would “lock up” and take quite a while. What is worse: no up- or downgrading of iCue made a difference here.

Things like the joy.cpl, GeForce Now, Inner Space or games supporting gamepads would frequently take minutes to load. Disconnecting and reconnecting the keyboard fixed the issue temporarily until something went bonkers again.

The solution to this was to force a reinstallation of the keyboard’s firmware through iCue. I have no idea why I would need to do this, but flashing the firmware again solved the issue permanently.

Windows 11!!one

I have been playing around with Windows 11 in a virtual machine. My thoughts can best be summed up with “a bouquet of unremarkable things nobody wanted”. Windows 11 already made the rounds on the internet over its strict “no old hardware allowed” policy and the back-and-forth over DirectStorage, which seemed like nothing more than marketing bullshit.

Personally, I have an entirely different pet peeve with Windows 11: It looks revolting. It looks ugly. It looks disgusting. Windows 11 looks more and more like a failed attempt at skinning Wine to make it hip, fresh and cool. Or like the aftermath of a broken UXThemePatcher run. Or what happens when WindowBlinds crashes.

“What is this?” – “…Unique”

Please remember: People, (presumably) actual living people, got paid to do this.

People that know – or should know – that a majority of old applications will look butt-ugly with a half-assed mix of design elements from Windows 2000 (console contents and some colours), Windows 8/10 (the window controls that were meant for rectangular themes) and the lunacy that is Windows 11.

The colours do not match. The icon language does not match. The margins do not match. Nothing matches.

Synergy 1 Pro on Linux clients – Automatically start before logon with LightDM

This is just a very quick and dirty how-to for getting Synergy 1 Pro to run on LightDM before logging in. All the other instructions I have found haven’t really worked out for me, so let this be my best try…

Step 1: Setting up

Before we can touch LightDM’s configuration, we first need to create a PEM certificate and a client configuration as the root user, since that is the user my LightDM process runs as.

Log into a normal interactive X session. Start the graphical Synergy 1 Pro client via “sudo synergy”, generate a certificate and set the client up so that it can actually connect and is approved by the server.

Step 2: Adjust LightDM’s configuration

I am on Arch, so my configuration sits in /etc/lightdm/lightdm.conf. Open the configuration and add the following block:

[SeatDefaults]
greeter-setup-script=/usr/bin/synergyc --daemon --name <CLIENT_NAME> --enable-crypto --tls-cert /root/.synergy/SSL/Synergy.pem <SERVER_IP/HOST>:24800
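
Before relying on the greeter script, it is a good idea to run the same command manually as root in a terminal and watch its output. This is simply the line from above with daemon mode switched off, so substitute your own client name and server address:

sudo /usr/bin/synergyc --no-daemon --name <CLIENT_NAME> --enable-crypto --tls-cert /root/.synergy/SSL/Synergy.pem <SERVER_IP/HOST>:24800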

Step 3

There is no step 3.

Whenever a session gets terminated, the Synergy client will also briefly be killed and respawned for the LightDM greeter. I have found no reason to set up anything other than the greeter-setup-script.

VMware Workstation – Containers with vctl

I never understood why people think Docker is a big thing. To me, it always seemed to solve a problem that does not exist by adding layers of complexity which inevitably always introduce new problems and bugs.

If you wanted to isolate processes, why not use jails or zones? “But Tsukasa”, people sneered at me with mild amusement in the past, “you don’t understand. It’s about the ease of replacing software!”. Yeah, you can do that without Docker; it’s called package management.

Somewhere along the line, the OCI was founded and at least there was some kind of standardized way of handling containers.

Enter VMware Workstation in the middle of 2020. Coming to us courtesy of a technical preview, VMware shipped the new vctl container CLI it plucked from VMware Fusion. And I really wanted to love it, because the idea behind it is good – but…

A Promising Disappointment

I am a VMware guy. After more than a decade with VMware Workstation, I really dig the features. Yes, you can probably achieve similar results with other virtualization solutions – but none make it as easy as VMware. Yes, call me indolent and a fanboy, if you must. So imagine my joy when VMware announced their container CLI.

No more need to install the Hyper-V role, no more fiddling with some wonky plugins – just a clean, supported product that does what Docker does, but with VMware’s hypervisor in the back. One product to be the definitive all-out solution for my desktop (x86) virtualization needs.

VMware creates a new virtual machine in the background that acts as a host for the containers. This machine does not show up on your usual list of running VMs. Instead, it will show you the active containers. Don’t click on them though, the Workstation UI does not really know what to do with containers and you will end up with a botched tab of nothingness.

Since vctl is using VMware’s hypervisor, all the good stuff is already in place and familiar to me. Network configuration is dead simple and I have all the tools to explore/manage the container VM.

The performance is also top-notch, so what could I possibly complain about?

The integration and the polish. vctl creates an alias to docker, so you can issue either a vctl ps or a docker ps and get the same result. Unfortunately, vctl does not shim all the commands and parameters Docker has, meaning that a lot of tooling and cool integration simply does not work. Want to use VS Code’s Remote Containers extension with VMware? Bad luck, the command does not reply in the expected fashion because it does not understand all the parameters.

This is incredibly disappointing because the container feature in Workstation is so close to being a fantastic proposition at a time when VMware is sunsetting some long-standing features (cough, Shared VMs, cough, Unity on Linux, cough).

It does what it says – and nothing more… yet!

Please don’t misunderstand: The feature does what it advertises to do. I can easily author a Dockerfile and build it with vctl – without having to install Docker. This by itself is already a godsend because it reduces the amount of software I need to install on my workstation.
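
To give you an idea of how unspectacular (in a good way) the workflow is, here is roughly what a build-and-run round trip looks like. This is from memory of the tech preview, so treat it as a sketch: image and container names are placeholders and the exact flags may differ slightly between releases.

vctl system start                        # boots the lightweight VM that hosts the containers
vctl build -t my-image .                 # builds the Dockerfile in the current directory
vctl run --name my-container my-image    # starts a container from the freshly built image
vctl ps                                  # the same listing a "docker ps" would give you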

But I cannot help but wonder how cool it would be to have a (parameter-compatible) drop-in replacement for Docker from VMware as part of the software I use for full virtualization anyway. And give me a docker-compose, while you are at it. Thanks.

Synology Diskstation – Two Things

I do not get to write neat posts nearly as often as I would like to. But this one does not violate any NDAs and is relevant to an OG post on this blog.

So today I want to talk about two things regarding my beloved DS2413+ that other people might find useful in some capacity. Or at least entertaining.

Be Cool, Be Quiet – Live the Noctua Lifestyle!

I replaced the two Y.S. Tech stock fans in my DS2413+ with two Noctua NF-P12 redux-1300. Technically you can pop in any 3-pin 120mm fan you want; however, due to the way Synology drives the fans, they might not push enough airflow, might stop spinning entirely, or DSM might complain about fan failure.

I originally intended to replace the fans with the official replacement parts; however, it seems that I got stiffed, so procuring the parts on short notice was not an option. After a bit of research, I settled on the NF-P12 because other folks around the internet had positive experiences with the swap-out. I used this rare chance to clean the interior of the NAS, routed the cables nicely and thought I was done – I was wrong. I learned that lesson when the unit started beeping in the middle of the night.

You do want to set the fan mode to “Cool Mode” in your power settings; otherwise, one of the fans will randomly stop spinning after a few hours and DSM will start issuing alarm beeps.

There are some other hacky ways to edit the fan profiles manually via the console, however, this operation apparently needs to be repeated after each DSM update. I’m way too lazy for that.

As for my cool Noctua lifestyle: The temperatures are virtually identical and the fans are quiet (as you would expect from the mighty Austrian owl!).

If you want to live the dream, please be sure to check the web for other people’s reports of your specific unit. Depending on the model, the fan size, pin type and compatible fans will vary.

The big question, though: Is it worth the hassle?

Honestly speaking there is very little difference between the Y.S. Tech and Noctua fan in terms of cooling performance and noise level – at least when used on “Cool mode”. But you want that Noctua lifestyle, don’t you?

Addendum 2021-09-13: After upgrading to DSM 7, something about the way the fans are being addressed seems to have changed. I ran into several instances where DSM would report the fan as “faulty” and turn it off completely. Changing the fan settings around does not seem to make a difference here. I have popped in a new set of Y.S. Tech fans (original Synology replacement parts) for the time being…

Data Scrubbing – Or: Dude, Where is my Data?

I run my data scrubbing tasks regularly. After a recent power outage, the system complained about possible write cache issues, successfully completed a scrub and prompted whether I wanted to reboot now or later. It also asked whether I wanted to remap the disk.

“Sure”, I thought to myself, “I like maps!”. I toggled the option and hit “Reboot now”. DSM rebooted… and that was about it.

Blinking status LEDs but no DSM web interface, no SMB and no NFS shares. Slightly nervous, I connected to the NAS via SSH. dmesg and the system messages did not show anything of particular interest, so I started poking around the internet.

Google spewed upon me pages and pages of horror stories that made my skin crawl: Bad superblocks, broken filesystems, complete loss of data, cats and dogs living together – the whole nine yards to make me break into a cold sweat and fear for the worst.

In this case, though, a simple “top” explained the situation: DSM was performing an e2fsck check of my filesystem.

The running check obviously kept the logical device busy and unmounted, which explains why the shares were not available. It also explains why all lvs, pvs and vgs commands listed everything as in order and mdadm reported proper operation: the RAID and LVM layers were perfectly healthy, the filesystem on top of them was simply being checked.
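
In case you find yourself in the same situation, nothing exotic is needed over SSH to see what is going on; something along these lines was enough for me:

top                   # e2fsck was sitting right at the top of the process list
cat /proc/mdstat      # the md arrays themselves reported clean, active operation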

Personally, I find the design decision to not initialize the web interface a bit weird, as it is truly unsettling to see all your data in limbo, with your only indication that something is or could be happening being the blinking lights on the front of the unit (not the drive indicators).

I hope that DSM 7 improves on that end. It would be cool (and much more transparent) if the web interface came up and indicated that a volume is currently unavailable because filesystem checks are running.

Closing Thoughts

The DS2413+ is still an awesome unit and I very much appreciate the stability and ease of use of it. Synology is doing a great job at being very user friendly, so it really hits hard when something like the e2fsck situation comes up.

Gopher

Good news, everyone! This blog is now also available via Gopher.

I will be working on making the blog look better (as in: remove all the pesky HTML and replace it with proper plaintext) over the coming weeks.

It is honestly great to see that taz.de is still available through Gopher and I hope to join those elitist ranks with a proper and deserving presentation. But until then… please excuse the HTML.

Streamlining your OBS workflow

Building a stream layout is a lot of work. Design elements like colours, fonts and layouts have to be consistent. In the past, I used to design things in Photoshop or Affinity Photo, cut the assets up into smaller pieces and then either use them in OBS directly or run them through DaVinci Resolve for some basic animation. This approach works fine on a rather static layout.

Now I’ve been toying around with the idea of what I call “After Dark” streams that have their own, slightly different style. The fonts and layouts stay the same, however, all the colours change. With my old workflow I would either need to re-export and edit all the assets… or find another way.

For a while now, I have been doing my layouts as HTML documents. Using CSS animations and jQuery as a base for dynamic data processing, I can easily switch things around.

Since I am on Windows, reading/writing the contents of a JSON file is really easy with PowerShell. So I can map some Stream Deck keys to perform value toggles in the JSON, causing my layout to dynamically adjust.
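
As a rough illustration, a Stream Deck key can simply launch a tiny script along these lines. The file name and the property are made up for the example; the real layout reads whatever keys it expects:

# toggle-afterdark.ps1 (hypothetical name): flips a boolean in the layout's state file.
# The browser source polls this JSON and switches the CSS colour scheme accordingly.
$path  = "C:\Stream\layout-state.json"
$state = Get-Content -Path $path -Raw | ConvertFrom-Json
$state.afterDark = -not $state.afterDark
$state | ConvertTo-Json -Depth 5 | Set-Content -Path $path -Encoding UTF8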

Same for the “Now Playing on Pretzel” widget. It processes the JSON file generated by Pretzel’s desktop client, dynamically resizes the widget and even fades out once music stops playing.

HTML stream layout comparison

The overall advantage is obvious: If I ever choose to edit the colour scheme, it is one edit within one CSS file. New font? A couple of changes. Changing the stream title, metadata and so on is also just a matter of editing a few nodes in a JSON file – the rest of the layout dynamically adjusts. And it is all easily accessible through one press on my Stream Deck.

Additionally, this approach reduces the number of required scenes/elements drastically. Whereas you would either need to toggle the visibility of sources or duplicate scenes on a more traditional setup, everything runs in proper code here. I have no dedicated intermission scene… the title card simply transforms into it, keeping all elements coherent within the scene.

“But Tsukasa, the performance impact”, people will yell. I dare say that any blur effect on a fullscreen video in OBS has probably a heavier impact on the performance than a reusable browser source. The entire title card sits at around 10% CPU usage, with a good portion of that going towards the VLC video source.

Dynamic changes to the layout

So I feel it is high time people stop using video-based layouts and migrate to proper HTML-based ones.

How Droidcam OBS gets it wrong

Given the current state of the world, you might be in need of a webcam to participate in meetings and prove that you actually wear clothes and/or pay attention. Given the current state of the world you might also have noticed that webcams have shot up in price.

However, fear not. You can use your smartphone as a webcam. Elgato is currently shilling EpocCam for iPhones, which is what led me to take a look at these applications in the first place. One of the more popular solutions for Android seems to be Droidcam. There is an early access version that is specifically tailored for use with OBS called Droidcam OBS. However, for a solution aimed at streamers, this software gets it wrong so very, very badly.

So, what is wrong with the software? Well, it comes with its own OBS plugin to feed the data into OBS, but it misses out on the most basic of the basic things any OBS user would expect: a way to actually change white balance, exposure and focus from within the plugin. In its current state, the video transmission works beautifully with a stable framerate at great performance. However, there are no remote controls that allow you to change the camera settings.

An app that is designed specifically so you can use the back camera of your phone as a webcam expects you to fiddle with a touchscreen on the front, which you cannot possibly use when the phone sits in its intended capture position. All while the image within the smartphone app is only visible after you have already connected to OBS.

Now I can already hear you typing away “but Tsukasa, if you connect a camera to a Camlink you also have to set the parameters on the camera, you dummy”. This is true. But this is not a Camlink. This is a two-way connection that only works because OBS talks to the app. In other words: there is a channel that could potentially be used for these tasks.

But hey, the app is still in early access, so perhaps this will come at a later date. And surely other solutions offer remote adjustment of the camera parameters, right? Wrong. All the solutions I tested either expect you to fiddle with the touchscreen within the app on your phone or simply do not allow any adjustments at all.

So I suppose my criticism of Droidcam OBS is a bit harsh since every other app I tested is just as bad or even worse in this regard. I merely think that a ton of potential is being wasted due to one design decision here because the rest of the app is top-notch.

Improve your OpenSSH experience on Windows 10

Since Windows 10 1709, Microsoft has offered an optional SSH client feature. This is pretty sweet and much needed. Unfortunately, the current [as of writing this post] version 0.0.1.0 that you can install via the Features panel or the command line lacks some neat features like VT mouse support.

I can already hear the sneering from some people. Hey, sometimes I love me a simple click instead of remembering some keybindings across different applications. I am getting old and forgetful! So let’s give this old man some mouse support for his Windows OpenSSH.

Thankfully the OpenSSH version Microsoft offers via the optional feature is nothing special. You can simply go to Microsoft’s GitHub project for OpenSSH on Windows and download a newer release.

In fact, you do not even need to explicitly uninstall the existing feature: its home directory (C:\Windows\System32\OpenSSH) is simply an entry in the PATH environment variable. So you can unpack a freshly downloaded zip archive with a newer version of OpenSSH, extract it to a location of your convenience and add that location higher up in the PATH hierarchy.
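
The PATH shuffle itself is a one-off job in an elevated PowerShell prompt. The target directory below is just an example; adjust it to wherever you extracted the archive:

# Prepend the directory containing the newer OpenSSH binaries to the machine-wide PATH
$newSsh      = "C:\Tools\OpenSSH-Win64"
$machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$newSsh;$machinePath", "Machine")
# Open a new terminal afterwards and check which binary wins:
# (Get-Command ssh).Source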

And just like that you can have your mouse support, so you can pull things around in tmux.

OpenEdge 10.2b – Invalid version Error 2888

We service different versions of OpenEdge for a software project. The entire build has been automated through Jenkins, Ant and PCT to produce artefacts for different OpenEdge version targets. So far, so good. Let’s not comment on the fact that OpenEdge 10.2b’s lifespan has ended, and focus on the more interesting part of the post.

I was recently asked to analyze a somewhat odd issue that had cropped up in the last few weeks. The application ran as expected, however, one r-code caused the error “Invalid version, 1005 (expected 1006) in object file myApplication.myClass (2888)” at runtime. Quite odd, to say the least.

According to the Progress Knowledge Base, we somehow managed to produce r-code for OpenEdge 10.0a. Impossible – we always compile every file in a strictly regulated environment during our pipeline runs and I have never even seen a release pre 10.2b on our build slaves. There was just no way for this message to be accurate. Or was there a way…?

Suspecting that PCT perhaps contained old r-code which would cause us trouble during the compilation process, I set the PCT-SRC property to true to force JIT compilation of the required tool procedures. No success.

The solution came in the form of the xref directives within the PCTCompile task. Setting the xmlXref property to false fixed the issue. This makes sense, considering the functionality is only available starting with OpenEdge 11.
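
For reference, the relevant bit of the build file now looks roughly like this. Paths, property names and attributes are trimmed down to placeholders for the example, so treat it as a sketch rather than a copy-and-paste target:

<PCTCompile destDir="build/rcode" dlcHome="${dlc.home}" xmlXref="false">
    <fileset dir="src" includes="**/*.p,**/*.cls"/>
    <propath>
        <pathelement path="src"/>
    </propath>
</PCTCompile>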

It is, however, sort of spooky that there were no compilation problems and most of the r-code worked flawlessly… except that one, cursed class.

ZNC Playback script for mIRC/AdiIRC

The main reason I use an IRC bouncer is so I can detach from the bouncer and get the messages I missed the next time I attach to it again. ZNC provides support for this feature by default, however, there is a third-party module called Playback that has some additional bells and whistles.

To properly utilize the Playback module, you need to adjust two settings on your bouncer and your IRC client needs to do some minor lifting. After searching the internet far and wide, I have not come across a premade AdiIRC script that worked the way I wanted it to, so I figured it was high time to improve the situation.

So what do we actually need to teach our IRC client? Essentially, the client needs to keep track of when it received the network’s last message, so it can request all messages newer than that timestamp from the bouncer upon reconnect. Sounds easy enough, especially since there were some example scripts for other clients linked on the wiki page for Playback.

I wired up a basic mIRC/AdiIRC script that will retain timestamps of ZNC connections on a per-network basis. Instead of merely updating the timestamp when a PRIVMSG comes in, the script also updates the timestamp on JOIN/PART events to cover “quiet” channels/networks.

To avoid the odd timezone problems, the script will read the timestamp from the IRCv3 server-time tags attached to events/messages. I still have some odd timezone issues between my own IRCd, bouncer and client, but this is likely due to a configuration problem on my end. On the major networks, the script operates as intended. The data is held in a small hashtable that gets serialized/deserialized to an INI file on exit/startup.
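
To give you an idea of the overall shape, here is a heavily simplified sketch of the mechanism. Unlike the actual script it uses the local clock instead of parsing the server-time tags, and the table/file names are arbitrary:

; remember the last time we saw activity on this network
on *:TEXT:*:#: hadd -m zncplayback $network $ctime
on *:JOIN:#: hadd -m zncplayback $network $ctime
on *:PART:#: hadd -m zncplayback $network $ctime

; ask the Playback module for everything newer than that timestamp on (re)connect
on *:CONNECT: if ($hget(zncplayback, $network)) .msg *playback PLAY * $hget(zncplayback, $network)

; persist the hashtable across client restarts
on *:START: { hmake zncplayback 10 | if ($isfile(zncplayback.ini)) hload zncplayback zncplayback.ini }
on *:EXIT: hsave zncplayback zncplayback.ini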

Ownership

Today I restored this blog from an old backup. The entire process took about an hour with an additional hour of trying to clean up old datasets and get rid of some encoding errors that the blog has had since I last migrated things around in the early 2010s.

This demonstrates not only the usefulness of backups (do your backups!) but also illustrates a point I have wanted to make for a while.

The data on my server is mine. I own it. If I feel like it, I can replace every occurrence of the word “blog” with “benis”. There is no moderation team that will judge my now-benised posts to be inappropriate, racist or immoral and delete them.

I can take every last bit of my data to modify and/or move it. Something your preferred platform might not allow you to do.

I am a sloth and proud of it

Yes, I have been a terrible, terrible sloth. I neglected this blog (although not the server it is running on) and have not provided any interesting content in quite a while.

In my defense I must say that there are very few problems I need to solve these days. Long gone are the days when I pulled my hair out over IR remote controls that would work with LIRC. No more stunts with automatically mirroring certain websites, filtering the local content and presenting it. Building a failsafe fileserver from scratch is no longer required.

Simply put: Things finally work.

Why would I still bother with crawling through hardware compatibility lists to find a cheap USB IR receiver on eBay when I can use my smartphone to control applications? Why mirror and process websites when there are easier ways? Rolling your own solution gives you all the control – but also means that you are pretty much on your own. And that is fine if you have the necessary time to solve the problems you might run into.

Call it “growing up” or “becoming lazy” – but I like my solutions to be time-efficient and often pre-packaged these days. Yes, it bites me in the buttocks sometimes; Yahoo! Pipes closing put me in a rough spot for about 2 months because parts of my automation infrastructure depended on Pipes doing data transformation. I had to build my own custom solution to replace Pipes. But the fact that I had used Pipes for years and therefore knew my exact requirements helped a lot: I knew where I could cut corners without hurting the end result, and my own data transformation tool went online 2 weeks before Pipes finally shut down and has been working great ever since. Yes, I no longer run a Solaris fileserver and rely on a Synology NAS instead. And yes, I run Windows instead of Linux these days.

This does not mean that I have lost my passion for tinkering. It means I am more careful with what I spend my time on.

Unlocator + Akamai + The Rest of the Web

If you are using Unlocator, a DNS-based service to bypass region locks on popular streaming services, you might have run into some small troubles as of late. I know a few of the people I assist with their IT troubles have.

Most prominently, the digital storefront Steam presents itself in a somewhat broken fashion and Nvidia’s download sites won’t work.

Why is that? Unlocator tries to inject its own SSL certificate into Akamai’s connections, causing all sensible software to abandon the connection.

The only workaround currently present is to not use Unlocator. Simple as that.

Bitcasa Drive Link Generator

With Bitcasa now using its new backend, the consumer API being dead and all tools (including reallistic’s BitcasaFileLister) being unusable, one certainly has a hard time getting their data off Bitcasa.

The client seems to omit certain files (despite the fact that a download from the web works fine and the checksums for the files match) and even when a download commences, it is still painfully slow, clocking in at about 200-400 kb/s.

Paying Bitcasa for their larger plan for an entire year to download my files is not a valid option for me, especially considering their current financial state. The only way to achieve acceptable download speed is to utilize the web-frontend and a download manager of my choice. For that to work I needed a way to generate lists of download links for the web-frontend, so I came up with the Bitcasa Drive Link Generator, a new Chrome extension that will do just that: Allow you to browse your Bitcasa Drive and grab all the download links as you walk through your folders.


The extension is not beautiful but works flawlessly and has already helped me to get some of my problematic files/folders off Bitcasa.

Download:

How to install:

Download and extract the extension to a convenient place. Go to your Chrome settings, choose the “Extensions” tab, tick the checkbox for “Developer mode”, click “Load unpacked extension” and select the previously extracted directory. You should see a new icon in your Chrome toolbar.

If not already done, go to drive.bitcasa.com and log in. Now you can use the extension.