Streamlining your OBS workflow

Building a stream layout is a lot of work. Design elements like colours, fonts and layouts have to be consistent. I used to design everything in Photoshop or Affinity Photo, cut the assets up into smaller pieces and then either use them in OBS directly or run them through DaVinci Resolve for some basic animation. This approach works fine for a rather static layout.

Now I’ve been toying around with the idea of what I call “After Dark” streams, which have their own, slightly different style. The fonts and layouts stay the same, but all the colours change. With my old workflow I would either need to re-export and edit all the assets… or find another way.

For a while now, I have been building my layouts as HTML documents. Using CSS animations and jQuery as a base for dynamic data processing, I can easily switch things around.

Since I am on Windows, reading and writing the contents of a JSON file is really easy with PowerShell. So I can map some Stream Deck keys to toggle values in the JSON, causing my layout to adjust dynamically.
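A rough sketch of what one of those Stream Deck-triggered toggles can look like in PowerShell – the file path and the afterDark property are placeholders for illustration, not the actual schema my layout uses:

    # Toggle the "afterDark" flag in the layout's settings file (path/property are examples).
    $settingsPath = "C:\Stream\layout\settings.json"

    $settings = Get-Content $settingsPath -Raw | ConvertFrom-Json
    $settings.afterDark = -not $settings.afterDark
    $settings | ConvertTo-Json -Depth 10 | Set-Content $settingsPath

The HTML layout can then pick up the changed value – for example by polling the file – and swap its styling accordingly.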

The same goes for the “Now Playing on Pretzel” widget. It processes the JSON file generated by Pretzel’s desktop client, resizes itself dynamically and even fades out once music stops playing.

HTML stream layout comparison

The overall advantage is obvious: If I ever choose to edit the colour scheme, it is one edit in one CSS file. New font? A couple of changes. Changing the stream title, metadata and so on is also just a matter of editing a few nodes in a JSON file – the rest of the layout adjusts dynamically. And it is all accessible through one press on my Stream Deck.

Additionally, this approach drastically reduces the number of required scenes and elements. Whereas a more traditional setup would require you to toggle the visibility of sources or duplicate scenes, everything here runs in proper code. I have no dedicated intermission scene… the title card simply transforms into it, keeping all elements coherent within the scene.

“But Tsukasa, the performance impact”, people will yell. I dare say that any blur effect on a fullscreen video in OBS probably has a heavier performance impact than a reusable browser source. The entire title card sits at around 10% CPU usage, with a good portion of that going towards the VLC video source.

Dynamic changes to the layout

So I feel it is high time people stopped using video-based layouts and migrated to proper HTML-based ones.

How Droidcam OBS gets it wrong

Given the current state of the world, you might be in need of a webcam to participate in meetings and prove that you actually wear clothes and/or pay attention. Given the current state of the world, you might also have noticed that webcams have shot up in price.

However, fear not: You can use your smartphone as a webcam. Elgato is currently shilling EpocCam for iPhones, which is what led me to take a look at these applications in the first place. One of the more popular solutions for Android seems to be Droidcam. There is an early access version specifically tailored for use with OBS, called Droidcam OBS. However, for a solution aimed at streamers, this software gets it wrong so very, very badly.

So, what is wrong with the software? Well, it comes with its own OBS plugin to feed the data into OBS, but misses out on the most basic of basics any OBS user would expect: a way to actually change white balance, exposure and focus from within the plugin. In its current state, the video transmission works beautifully, with a stable framerate and great performance. However, there are no remote controls that let you change the camera settings.

An app designed specifically so you can use the back camera of your phone as a webcam expects you to fiddle with the touchscreen on the front – which you cannot possibly reach once the phone is in its intended capture position. All while the image within the smartphone app is only visible after you have already connected to OBS.

Now I can already hear you typing away: “but Tsukasa, if you connect a camera to a Camlink, you also have to set the parameters on the camera, you dummy”. This is true. But this is not a Camlink. This is two-way communication that only works if OBS instructs it to. In other words: there is a channel that could potentially be used for these tasks.

But hey, the app is still in early access, so perhaps this will come at a later date. And surely other solutions offer remote adjustment of the camera parameters, right? Wrong. All the solutions I tested either expect you to fiddle with the touchscreen within the app on your phone or simply do not allow any adjustments at all.

So I suppose my criticism of Droidcam OBS is a bit harsh, since every other app I tested is just as bad or even worse in this regard. I merely think that a ton of potential is being wasted on one design decision, because the rest of the app is top-notch.

Improve your OpenSSH experience on Windows 10

Since Windows 10 1709, Microsoft has offered an optional SSH client feature. This is pretty sweet and much needed. Unfortunately, the current [as of writing this post] version 0.0.1.0 that you can install via the Features panel or the command line lacks some neat features like VT mouse support.

I can already hear the sneering from some people. Hey, sometimes I love me a simple click instead of remembering some keybindings across different applications. I am getting old and forgetful! So let’s give this old man some mouse support for his Windows OpenSSH.

Thankfully the OpenSSH version Microsoft offers via the optional feature is nothing special. You can simply go to Microsoft’s GitHub project for OpenSSH on Windows and download a newer release.

In fact, you do not even need to uninstall the existing feature: its home directory is added to the PATH environment variable (C:\Windows\system32\OpenSSH), so you can simply extract a freshly downloaded zip archive with a newer version of OpenSSH to a location of your convenience and add that location higher up in the PATH hierarchy.
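For reference, a minimal PowerShell sketch of that – the zip and target paths are examples, and it assumes the release archive contains an OpenSSH-Win64 folder (check where ssh.exe actually ends up and adjust accordingly):

    # Extract the downloaded OpenSSH release...
    $zip    = "$env:USERPROFILE\Downloads\OpenSSH-Win64.zip"
    $target = "C:\Tools\OpenSSH"
    Expand-Archive -Path $zip -DestinationPath $target -Force

    # ...and prepend it to the system PATH so it takes precedence over
    # C:\Windows\system32\OpenSSH (run this from an elevated PowerShell).
    $machinePath = [Environment]::GetEnvironmentVariable("Path", "Machine")
    [Environment]::SetEnvironmentVariable("Path", "$target\OpenSSH-Win64;$machinePath", "Machine")

Open a new terminal afterwards; Get-Command ssh should then point at the new location.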

And just like that, you have mouse support and can drag things around in tmux.

OpenEdge 10.2b – Invalid version Error 2888

We support several OpenEdge versions for a software project. The entire build has been automated through Jenkins, Ant and PCT to produce artefacts for the different OpenEdge targets. So far, so good. Let’s not dwell on the fact that OpenEdge 10.2b has reached the end of its life, and focus on the more interesting part of the post.

I was recently asked to analyze a somewhat odd issue that had cropped up in the last few weeks. The application ran as expected, but one piece of r-code caused the error “Invalid version, 1005 (expected 1006) in object file myApplication.myClass (2888)” at runtime. Quite odd, to say the least.

According to the Progress Knowledge Base, we had somehow managed to produce r-code for OpenEdge 10.0a. Impossible – we compile every file in a strictly regulated environment during our pipeline runs, and I have never even seen a release prior to 10.2b on our build slaves. There was just no way for this message to be accurate. Or was there…?

Suspecting that PCT perhaps contained old r-code that would cause trouble during the compilation process, I set the PCT-SRC property to true to force JIT compilation of the required tool procedures. No success.

The solution came in the form of the xref directives within the PCTCompile task: setting the xmlXref property to false fixed the issue. This makes sense, considering the functionality is only available starting with OpenEdge 11.

It is, however, sort of spooky that there were no compilation problems and most of the r-code worked flawlessly… except that one, cursed class.

ZNC Playback script for mIRC/AdiIRC

The main reason I use an IRC bouncer is so I can detach from it and get the messages I missed the next time I attach again. ZNC supports this out of the box, but there is a third-party module called Playback that adds some additional bells and whistles.

To properly utilize the Playback module, you need to adjust two settings on your bouncer, and your IRC client needs to do some minor lifting. After searching the internet far and wide, I did not come across a premade AdiIRC script that worked the way I wanted it to, so I figured it was high time to improve the situation.

So what do we actually need to teach our IRC client? Essentially, the client needs to keep track of when it received the network’s last message, so it can request all messages newer than this timestamp from the bouncer upon reconnect. Sounds easy enough, especially since there were some example scripts for other clients linked on the wiki page for Playback.

I wired up a basic mIRC/AdiIRC script that retains the timestamp of the last message per network. Instead of merely updating the timestamp when a PRIVMSG comes in, the script also updates it on JOIN/PART events to cover “quiet” channels/networks.

To avoid the usual timezone problems, the script reads the timestamp from the IRCv3 server-time tags attached to events/messages. I still have some odd timezone issues between my own IRCd, bouncer and client, but this is likely due to a configuration problem on my end. On the major networks, the script operates as intended. The data is held in a small hash table that gets serialized/deserialized to an INI file on exit/startup.

Ownership

Today I restored this blog from an old backup. The entire process took about an hour with an additional hour of trying to clean up old datasets and get rid of some encoding errors that the blog has had since I last migrated things around in the early 2010s.

This demonstrates not only the usefulness of backups (do your backups!) but also illustrates a point I have wanted to make for a while.

The data on my server is mine. I own it. If I feel like it, I can replace every occurrence of the word “blog” with “benis”. There is no moderation team that will judge my now-benised posts to be inappropriate, racist or immoral and delete them.

I can take every last bit of my data to modify and/or move it. Something your preferred platform might not allow you to do.

I am a sloth and proud of it

Yes, I have been a terrible, terrible sloth. I neglected this blog (although not the server it is running on) and have not provided any interesting content in quite a while.

In my defense, I must say that there are very few problems I need to solve these days. Long gone are the days when I pulled my hair out over getting IR remote controls to work with LIRC. No more stunts with automatically mirroring certain websites, filtering the local content and presenting it. Building a failsafe fileserver from scratch is no longer required.

Simply put: Things finally work.

Why would I still bother crawling through hardware compatibility lists to find a cheap USB IR receiver on eBay when I can use my smartphone to control applications? Why mirror and process websites when there are easier ways? Rolling your own solution gives you all the control – but it also means that you are pretty much on your own. And that is fine if you have the necessary time to solve the problems you might run into.

Call it “growing up” or “becoming lazy” – but I like my solutions to be time-efficient and often pre-packaged these days. Yes, it bites me in the buttocks sometimes; Yahoo! Pipes closing put me in a rough spot for about 2 months, because parts of my automation infrastructure depended on Pipes doing data transformation. I had to build my own custom solution to replace Pipes. But having used Pipes for years, I knew my exact requirements, which helped me a lot: I knew where I could cut corners without hurting the end result. My own data transformation tool went online 2 weeks before Pipes finally shut down and has been working great ever since. Yes, I no longer run a Solaris fileserver and rely on a Synology NAS instead. And yes, I run Windows instead of Linux these days.

This does not mean that I have lost my passion for tinkering. It means I am more careful with what I spend my time on.

Unlocator + Akamai + The Rest of the Web

If you are using Unlocator, a DNS-based service to bypass region locks on popular streaming services, you might have run into some trouble as of late. I know a few of the people I assist with their IT troubles have.

Most prominently, the digital storefront Steam presents itself in a somewhat broken fashion and Nvidia’s download sites won’t work.

Why is that? Unlocator tries to inject its own SSL certificate into Akamai’s connections, causing all sensible software to abandon the connection.

The only workaround currently present is to not use Unlocator. Simple as that.

Bitcasa Drive Link Generator

With Bitcasa now using its new backend, the consumer API being dead and all tools (including reallistic’s BitcasaFileLister) being unusable, one certainly has a hard time getting their data off Bitcasa.

The client seems to omit certain files (despite the fact that a download from the web works fine and the checksums for the files match), and even when a download commences, it is still painfully slow, clocking in at about 200-400 kb/s.

Paying Bitcasa for their larger plan for an entire year just to download my files is not a valid option for me, especially considering their current financial state. The only way to achieve acceptable download speeds is to use the web frontend together with a download manager of my choice. For that to work, I needed a way to generate lists of download links from the web frontend, so I came up with the Bitcasa Drive Link Generator, a new Chrome extension that does just that: it allows you to browse your Bitcasa Drive and grab all the download links as you walk through your folders.

The extension is not beautiful but works flawlessly and has already helped me to get some of my problematic files/folders off Bitcasa.

Download:

How to install:

Download and extract the extension to a convenient place. Go to your Chrome settings, choose the “Extensions” tab, tick the checkbox for “Developer mode”, click “Load unpacked extension” and select the previously extracted directory. You should see a new icon in your Chrome toolbar.

If you have not already done so, go to drive.bitcasa.com and log in. Now you can use the extension.

Enabling linked collections in Garry’s Mod dedicated server

Ah Garry’s Mod, a game that is both ingenious and infuriating. And it supports the Steam Workshop to add content.

The Workshop has a nice feature called “linked collections”, which basically allows you to put all the maps into one collection and all the models into another. You can then add a third collection to link both together. In theory.

In practice, this feature does not work in Garry’s Mod: the game treats every item returned from the Workshop API as downloadable content, regardless of whether that is true or not (hint: if the filetype is 2, it’s a collection, not an addon!).

Being sick of waiting for Facepunch to fix this trivial problem, I figured I could simply rewire the request to another host that does the pre-processing of the collection data for me. The basic idea is that the contents of all collections linked to the primary collection get “pushed” into the primary collection, so Garry’s Mod is fooled into downloading the contents of three collections “as one”.

Here’s how I did it (warning: Windows only!); I’m sure there are plenty of better ways to go about this:

  1. Install Fiddler, enable traffic capture and customize the rules for OnBeforeRequest by adding code like this:
    // Rewrite the Steam Workshop request for getting collection contents to target our emulator.
    if (oSession.HostnameIs("api.steampowered.com") && (oSession.PathAndQuery=="/ISteamRemoteStorage/GetCollectionDetails/v0001/")) {
      oSession.hostname="example.com";
      oSession.PathAndQuery="/steam_collections.php";
    }
  2. Create a new PHP script named steam_collections.php on your webserver example.com and edit $process_collection to fit your needs:
    <?php

    // Prepare the output header!
    header('Content-Type: text/json');

    // Only this collection will be processed, all other collections are passed through.
    $process_collection = '123456789';

    // file_get_contents() needs an explicit scheme, so no protocol-relative URL here.
    $api_url = "https://api.steampowered.com/ISteamRemoteStorage/GetCollectionDetails/v0001/";

    // These values will be delivered by srcds's POST request.
    $api_key = $_POST['key'];
    $primary_collection = $_POST['publishedfileids'][0];
    $collectioncount = $_POST['collectioncount'];
    $format = $_POST['format'];

    // Must be global so every collection can access it.
    $sortorder = 1;

    function AddToPrimaryCollection(&$target_collection, $keys_to_add)
    {
        foreach ($keys_to_add as $key)
        {
            $target_collection[] = $key;
        }
    }

    function GetCollectionDetails($collection_id, $is_primary_collection = false, $process_children = false)
    {
        global $api_url;
        global $api_key;
        global $collectioncount;
        global $format;
        global $sortorder;

        $final_data = array();

        $post_fields = array(
            'collectioncount' => $collectioncount,
            'publishedfileids[0]' => $collection_id,
            'key' => $api_key,
            'format' => $format
        );

        $post_options = array(
            'http' => array(
                'header' => "Content-type: application/x-www-form-urlencoded\r\n",
                'method' => 'POST',
                'content' => http_build_query($post_fields),
                'timeout' => 120
            ),
        );

        $request_context = stream_context_create($post_options);
        $request_result = file_get_contents($api_url, false, $request_context);
        $json_data = json_decode($request_result, true);

        if ($process_children && $is_primary_collection)
        {
            foreach ($json_data['response']['collectiondetails'][0]['children'] as $key => &$collection_item)
            {
                if ($collection_item['filetype'] == '2')
                {
                    // Grab the subcollection contents and add them to the mix list
                    $sub_collection = GetCollectionDetails($collection_item['publishedfileid'], false, false);
                    AddToPrimaryCollection($final_data, $sub_collection['response']['collectiondetails'][0]['children']);

                    // Get rid of the collection reference
                    unset($json_data['response']['collectiondetails'][0]['children'][$key]);
                }
            }

            // Now mix the aggregated list of all subcollections with the primary collection
            AddToPrimaryCollection($final_data, $json_data['response']['collectiondetails'][0]['children']);
            $json_data['response']['collectiondetails'][0]['children'] = $final_data;

            // Renumber the merged children so srcds gets a consistent sort order.
            foreach ($json_data['response']['collectiondetails'][0]['children'] as $key => &$collection_item)
            {
                $collection_item['sortorder'] = $sortorder;
                $sortorder += 1;
            }
        }

        return $json_data;
    }

    if ($primary_collection == $process_collection)
    {
        // It's our target collection with subcollections, process it!
        $result = GetCollectionDetails($primary_collection, true, true);
    }
    else
    {
        // It's something else... don't bother!
        $result = GetCollectionDetails($primary_collection, true, false);
    }

    // Now encode the data back to json and let srcds do its thing...
    echo json_encode($result);

    ?>
  3. Launch srcds with the +host_workshop_collection 123456789 parameter and watch the magic happen. The start might take a little longer than usual.

It would be really nice if this finally got fixed; it was reported ages ago.

Relaying/Forwarding ports from one Windows server to another

Yesterday I migrated one of my services from one server to another. Since the protocol used by the service does not support an HTTP-esque redirect and the Windows Server version in use did not have the RRAS role available, I had to get a little creative.

Enter Komodia Relay, a great (and free, to boot!) tool to forward a TCP/UDP port to a different system. The basic idea is that it works like a proxy: clients connecting to the old server are transparently proxied to the new one through Komodia Relay.

Usage is outstandingly easy and even under loads of several hundred connections the application still performs beautifully.

If you are more the GUI-oriented type and do not mind paying for your ride, Network Activ’s AUTAPF might be worth checking out.

Howto: Titanfall with Steam Overlay

I am not a big fan of EA’s Origin. The software lacks a simple list view and looks like it loves to tell me how little it thinks of me. One of the prominent examples of this behaviour is that I cannot choose between the Steam overlay and the Origin overlay. It’s either Origin or nothing. Until now.

With Titanfall being released, I have one more game in my Origin library that I am probably going to play quite a bit. So while looking around, I found the usual subpar solution of adding Origin.exe as a non-Steam game. Unacceptable.

Thankfully, I came across NoFate’s wonderful homepage and his PAR remover. Simply navigate to your Titanfall game directory, make a copy of the original Titanfall.par and upload the original PAR file to NoFate’s PAR remover site. You will receive a new PAR file that allows you to start Titanfall directly – and use the Steam overlay.

But what about your friends on Origin? They will still see you playing the game, they will still be able to join you – but you cannot use the Origin overlay anymore. Well, that’s a shame but does not bother me as much because most of my friends are on Steam.

WebDrive: Increasing the “Total Space” value for a drive

South River’s WebDrive is one of the most important tools for me. It connects to several servers and mounts them as drives on my Windows machine.

If you work with a WebDAV or FTP connection and do not have quotas enabled, WebDrive will, by default, assume a total capacity of 100GB for the drive/connection. Especially if you are moving tons of files, 100GB is nothing and the limit gets in your way.

Thankfully, you can set the limit per connection via the Windows registry:

  • Start regedit
  • Navigate to HKEY_CURRENT_USER\Software\South River Technologies\WebDrive\Connections\YOUR_CONNECTION_NAME
  • Set the QuotaMB value from 102400 to something else, e.g. 1024000

After disconnecting/reconnecting, the new limit should show up. Cool stuff.
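If you would rather script the change than click through regedit, a quick PowerShell equivalent – the connection name is a placeholder and 1024000 is simply the example value from above:

    # Raise WebDrive's assumed drive capacity for one connection (value is in MB).
    $key = "HKCU:\Software\South River Technologies\WebDrive\Connections\YOUR_CONNECTION_NAME"
    Set-ItemProperty -Path $key -Name "QuotaMB" -Value 1024000

Reconnect the drive afterwards, just like with the manual edit.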

Synology DS2413+ Review

A colleague once told me that building your own storage-server is way too much work. “Just order one,” he used to say, “it’s not worth the time and the trouble. Just unbox, pop in the disks, install and you are good to go”. That was seven years ago and I remember arguing about SOHO use-cases where a small NAS would have been too little and a rack-mounted storage would have been too much. “Just get two smaller units,” he laughed at me.

As it turns out, he was right. While I was busy replacing obscure hardware, sifting through HCLs and tinkering with different OpenSolaris upgrade paths (side note to myself: never again upgrade to Solaris Express, go with OpenIndiana!), he called the manufacturer’s tech support and was good to go.

Almost a decade later, I am older and (arguably) a little wiser. To replace my patchwork Solaris file-server, I decided to go with something pre-made: the Synology DiskStation DS2413+.

On paper it does everything I need:

  • Comes with 12 hot-swappable 3.5″ SATA disk bays
  • Small, non-rackmounted form factor suitable for storage in offices
  • Supports growing the total volume by replacing a number of disks (combination of lvm/md)
  • Supports encryption (Note: Only via SMB, no support for encryption via NFS!)
  • 2x 1 Gbit/s Ethernet ports (LACP supported)
  • Support for Infiniband-based expansion with a second unit, giving me a [theoretical] total of 24 bays
  • Intel x86 architecture system with 2GB of memory (can be upgraded to 4GB)

The base unit without any disks set me back 1200 EUR. Instead of continuing the tragic history of buying the largest consumer hard-disks I could find, I opted for longevity by choosing 12x Seagate Constellation CS 2TB drives, giving me roughly 18TB of usable storage in an SHR-2 (RAID6) configuration. The disks set me back another 1200 EUR, an investment well worth it (I hope?).

So the first conclusion we can draw here: If you want to fully use the DS2413+, it’s not a very cheap device.

The build quality of the device is pretty nice, with no cheap plastic parts on the exterior. The disk trays are well made, with no splinters, rough edges or deformations, so disks slide right in and sit on a nicely padded base.

Synology ships the DS2413+ with a handful of accessories; the only noteworthy one being the included Ethernet cable: a 2m CAT5e cable – I have not seen one of those in years.

The disk bay can be locked with one of the two included keys. There is no way to lock individual disks, only the entire bay.

After starting the DS2413+ for the first time, it needs to install its operating system, Synology’s Linux-based “DSM”. Installation is simple: browse to the DS2413+’s IP address and follow the web-based wizard, which downloads the newest DSM automatically. About 10 minutes later the device was online.

You can configure the entire device through a nice-looking web interface. DSM takes some strong cues from OSX in terms of its UI design. If you have ever used a Macintosh with OSX, you should have no problems finding the options you want.

Synology gives you the option to install additional packages to extend the functionality of your NAS. Unfortunately, all packages get installed onto your storage pool, so when you swap out all the disks, the packages are gone. This is a major problem for me, as the DS2413+ does not have a dedicated system drive.

The packages range from useless stuff like cloud backup or media streaming to Python, Perl or Java. You can install a LAMP stack on your NAS if you wish to do so. Honestly, this looks more like a gimmick than a really useful feature, especially considering the Linux flavour on the DS runs a bare busybox with a few additional binaries.

The volume management is where things get interesting. Since this is a Linux system, there is no ZFS. Surprisingly, the only file-system supported by DSM is ext4. There are some HFS tools installed as well but they are useless for my use-case and I did not spot any option to create HFS+ volumes.

The DS2413+ supports all common RAID levels and sports its own lvm/md-based “SHR” RAID level, which allows for dynamic growing of volumes.

I hope that the introduction of DSM 5 in January 2014 will bring the option to migrate to btrfs. I enjoyed being able to snapshot file-system states, and it has come in handy several times before.

Network performance is okay. LACP works, though the setup is a little weird: it throws away the first port’s configuration instead of using it as the aggregated adapter’s configuration. It may just be a Linux thing.

SMB2 performance seems to suffer quite a bit when the device is busy; FTP and/or WebDAV work fine in these cases. NFS works – except on encrypted folders. There are no SMB-to-NFS permission problems.

When changing SMB or NFS options, the DSM will restart all sharing services, meaning that if you change an SMB option and have a client connected via NFS, the client will be disconnected as well. Meh.

So, am I happy with this device, or would I recommend rolling your own build? Simply put: I am happy. There is much to see and tinker with; I have not even mentioned the energy-saving options or the noise levels of the device. Both are great.

There are a few nitpicks, but the overall build quality and software are fantastic, making the device easily usable for all target groups. The option to extend the DS2413+ with another unit via Infiniband is a great idea, and hopefully the expansion unit will still be for sale in a few years.

Whether you are a passionate home user with a hunger for storage or a small business unwilling to get a rack, the DS2413+ is worth your attention. Otherwise, there are plenty of great rack-mounted options for the same price that do the same.