Enabling linked collections in Garry’s Mod dedicated server

Ah, Garry’s Mod, a game that is both ingenious and infuriating. It supports the Steam Workshop for adding content.

The Workshop has a nice feature called “linked collections” which basically allows you to put all the maps into one collection and all the models into another. You can then add a third collection that links both together. In theory.

In practice this feature does not work in Garry’s Mod: the game processes every item returned from the Workshop API as downloadable content, regardless of whether that’s true or not (hint: if the filetype is 2, it’s a collection, not an addon!).
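
For reference, a GetCollectionDetails response from the Workshop API looks roughly like this (heavily abridged, IDs made up); the filetype field on the children is what gives a linked collection away:

    {
      "response": {
        "collectiondetails": [{
          "publishedfileid": "123456789",
          "children": [
            { "publishedfileid": "111111111", "sortorder": 1, "filetype": 0 },
            { "publishedfileid": "222222222", "sortorder": 2, "filetype": 2 }
          ]
        }]
      }
    }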

Being sick of waiting for Facepunch to fix this trivial problem, I figured I could simply rewire the request to another host that does the pre-processing of the collection data for me. The basic idea is that the contents of all collections linked to the primary collection get “pushed” into the primary collection, so Garry’s Mod is fooled into downloading the contents of three collections “as one”.

Here’s how I did it (warning: Windows only!); I’m sure there are plenty of better ways to go about this:

  1. Install Fiddler, enable traffic capture and customize the rules for OnBeforeRequest by adding code like this:
    // Rewrite the Steam Workshop request for getting collection contents to target our emulator.
    if (oSession.HostnameIs("api.steampowered.com") && (oSession.PathAndQuery=="/ISteamRemoteStorage/GetCollectionDetails/v0001/")) {
      oSession.hostname="example.com";
      oSession.PathAndQuery="/steam_collections.php";
    }
  2. Create a new PHP script named steam_collections.php on your webserver example.com and edit $process_collection to fit your needs:
    <?php
    
    // Prepare the output header.
    header('Content-Type: application/json');
    
    // Only this collection will be processed, all other collections are passed through.
    $process_collection = '123456789';
    
    // file_get_contents() needs a full URL including the scheme.
    $api_url = "https://api.steampowered.com/ISteamRemoteStorage/GetCollectionDetails/v0001/";
    
    // These values are delivered by srcds's POST request.
    $api_key            = $_POST['key'];
    $primary_collection = $_POST['publishedfileids'][0];
    $collectioncount    = $_POST['collectioncount'];
    $format             = $_POST['format'];
    
    // Must be global so every collection can access it.
    $sortorder = 1;
    
    function AddToPrimaryCollection(&$target_collection, $keys_to_add)
    {
        foreach ($keys_to_add as $key)
        {
            $target_collection[] = $key;
        }
    }
    
    function GetCollectionDetails($collection_id, $is_primary_collection = false, $process_children = false)
    {
        global $api_url;
        global $api_key;
        global $collectioncount;
        global $format;
        global $sortorder;
    
        $final_data = array();
    
        $post_fields = array(
            'collectioncount'     => $collectioncount,
            'publishedfileids[0]' => $collection_id,
            'key'                 => $api_key,
            'format'              => $format
        );
    
        $post_options = array(
            'http' => array(
                'header'  => "Content-type: application/x-www-form-urlencoded\r\n",
                'method'  => 'POST',
                'content' => http_build_query($post_fields),
                'timeout' => 120
            ),
        );
    
        $request_context = stream_context_create($post_options);
        $request_result  = file_get_contents($api_url, false, $request_context);
        $json_data       = json_decode($request_result, true);
    
        if ($process_children)
        {
            if ($is_primary_collection)
            {
                foreach ($json_data['response']['collectiondetails'][0]['children'] as $key => $collection_item)
                {
                    if ($collection_item['filetype'] == '2')
                    {
                        // Grab the subcollection's contents and add them to the mix list.
                        $sub_collection = GetCollectionDetails($collection_item['publishedfileid'], false, false);
                        AddToPrimaryCollection($final_data, $sub_collection['response']['collectiondetails'][0]['children']);
    
                        // Get rid of the collection reference itself.
                        unset($json_data['response']['collectiondetails'][0]['children'][$key]);
                    }
                }
    
                // Now mix the aggregated list of all subcollections into the primary collection.
                AddToPrimaryCollection($final_data, $json_data['response']['collectiondetails'][0]['children']);
                $json_data['response']['collectiondetails'][0]['children'] = $final_data;
            }
    
            // When in the primary collection, renumber the merged children sequentially.
            if ($is_primary_collection)
            {
                foreach ($json_data['response']['collectiondetails'][0]['children'] as $key => &$collection_item)
                {
                    $collection_item['sortorder'] = $sortorder;
                    $sortorder += 1;
                }
                unset($collection_item);
            }
        }
    
        return $json_data;
    }
    
    if ($primary_collection == $process_collection)
    {
        // It's our target collection with subcollections, process it!
        $result = GetCollectionDetails($primary_collection, true, true);
    }
    else
    {
        // It's something else... don't bother!
        $result = GetCollectionDetails($primary_collection, true, false);
    }
    
    // Now encode the data back to JSON and let srcds do its thing...
    echo json_encode($result);
    
    ?>
  3. Launch srcds with the +host_workshop_collection 123456789 parameter and watch the magic happen. The start might take a little longer than usual.

It would be really nice if this finally got fixed; it was reported ages ago.

Relaying/Forwarding ports from one Windows server to another

Yesterday I migrated one of my services from one server to another. Since the protocol used by the service does not support an HTTP-esque redirect and the Windows Server version used did not have the RRAS roles available, I had to get a little creative.

Enter Komodia Relay, a great (and free, to boot!) tool to forward a TCP/UDP port to a different system. The basic idea is that it works like a proxy: clients connecting to the old server are transparently proxied to the new one through Komodia Relay.
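
To illustrate the concept (and only the concept; this is not how Komodia Relay is implemented), here is a minimal, single-connection TCP relay sketch in PHP. Host name and port are placeholders:

    <?php
    // Minimal single-connection TCP relay sketch. Listens locally and pipes
    // all traffic to the new server. Host names and ports are placeholders.
    $listen = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
    if (!$listen) die("Listen failed: $errstr ($errno)\n");
    
    while ($client = stream_socket_accept($listen, -1)) {
        // For every incoming client, open a connection to the new server.
        $target = stream_socket_client('tcp://new-server.example.com:9000', $errno, $errstr, 30);
        if (!$target) { fclose($client); continue; }
    
        // Shovel bytes in both directions until either side closes.
        while (true) {
            $read = array($client, $target);
            $write = $except = null;
            if (stream_select($read, $write, $except, null) === false) break;
    
            foreach ($read as $socket) {
                $data = fread($socket, 8192);
                if ($data === false || $data === '') break 2;
                fwrite($socket === $client ? $target : $client, $data);
            }
        }
        fclose($client);
        fclose($target);
    }

A real-world relay obviously has to juggle many concurrent connections (and UDP on top), which is exactly what Komodia Relay takes care of for you.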

Usage is outstandingly easy and even under loads of several hundred connections the application still performs beautifully.

If you are more the GUI-oriented type and do not mind paying for your ride, NetworkActiv’s AUTAPF might be worth checking out.

Howto: Titanfall with Steam Overlay

I am not a big fan of EA’s Origin. The software itself lacks a simple list view and seems to love telling me how little it thinks of me. One of the prominent examples of this behaviour is that I cannot use the Steam overlay together with the Origin overlay. It’s either Origin or nothing. Until now.

With Titanfall being released I have one more game in my Origin library that I am probably going to play quite a bit. So while looking around I found the usual subpar solution of adding Origin.exe as a non-Steam game. Unacceptable.

Thankfully I came across NoFate’s wonderful homepage and his PAR remover. Simply navigate to your Titanfall game directory, make a copy of the original Titanfall.par and upload the original PAR file to NoFate’s PAR remover site. You will receive a new PAR file that will allow you to directly start Titanfall – and use the Steam overlay.

But what about your friends on Origin? They will still see you playing the game, they will still be able to join you – but you cannot use the Origin overlay anymore. Well, that’s a shame but does not bother me as much because most of my friends are on Steam.

WebDrive: Increasing the “Total Space” value for a drive

South River’s WebDrive is one of the most important tools for me. It connects to several servers and mounts them as drives on my Windows machine.

If you work with a WebDAV or FTP connection and do not have quotas enabled, WebDrive will, by default, assume a total capacity of 100GB for the drive/connection. Especially if you are moving tons of files, 100GB is nothing and the limit gets in your way.

Thankfully you can set the limit per connection via the Windows registry:

  • Start regedit
  • Navigate to HKEY_CURRENT_USER\Software\South River Technologies\WebDrive\Connections\YOUR_CONNECTION_NAME
  • Set the QuotaMB key from 102400 to something else, e.g. 1024000

After disconnecting/reconnecting, the new limit should show up. Cool stuff.

Synology DS2413+ Review

A colleague once told me that building your own storage-server is way too much work. “Just order one,” he used to say, “it’s not worth the time and the trouble. Just unbox, pop in the disks, install and you are good to go”. That was seven years ago and I remember arguing about SOHO use-cases where a small NAS would have been too little and a rack-mounted storage would have been too much. “Just get two smaller units,” he laughed at me.

As it turns out he was right. While I was busy replacing obscure hardware, sniffing through HCLs and tinkering with different OpenSolaris upgrade paths (side note to myself: never again upgrade to Solaris Express, go with OpenIndiana!), he called the manufacturer’s tech support and was good to go.

Almost a decade later I am older and (arguably) a little wiser now. To replace my patchwork Solaris file-server I decided to go with something pre-made: the Synology DiskStation DS2413+.

On paper it does everything I need:

  • Comes with 12 hot-swappable 3.5″ SATA disk bays
  • Small, non-rackmounted form factor suitable for storage in offices
  • Supports growing the total volume by replacing a number of disks (combination of lvm/md)
  • Supports encryption (Note: Only via SMB, no support for encryption via NFS!)
  • 2x 1Gbit Ethernet ports (LACP supported)
  • Support for Infiniband-based expansion with a second unit, giving me a [theoretical] total of 24 bays
  • Intel x86 architecture system with 2GB of memory (can be upgraded to 4GB)

The base unit without any disks set me back 1200 EUR. Instead of continuing the tragic history of buying the largest consumer hard-disk I could find, I opted for longevity by choosing 12x Seagate Constellation CS 2TB drives, giving me 18TB of usable storage in an SHR-2 (RAID6) configuration. The disks set me back another 1200 EUR, an investment well worth it (I hope?).

So the first conclusion we can draw here: If you want to fully use the DS2413+, it’s not a very cheap device.

The build-quality of the device is pretty nice with no cheapo plastic parts on the exterior. The disk trays are well made, have no splinters, rough edges or deformations so disks slide right in and sit on a nicely padded base.

Synology ships the DS2413+ with a number of accessories, the only noteworthy one being the included Ethernet cable: a 2m CAT5e cable – haven’t seen one of those in years.

The disk bay can be locked with one of the two included keys. There is no way to lock individual disks, only the entire bay.

After starting the DS2413+ for the first time it needs to install its operating system, Synology’s Linux-based “DSM”. Installation is simple: browse to the DS2413+’s IP address and follow the web-based wizard, which downloads the newest DSM automatically. About 10 minutes later the device was online.

You can configure the entire device through a nice-looking web interface. DSM takes some strong cues from OSX in terms of its UI design. If you have ever used a Macintosh with OSX you should have no problem finding the options you want.

Synology gives you the option to install additional packages to extend the functionality of your NAS. Unfortunately all packages get installed onto your storage pool, so when you swap all disks, the packages will be gone. This is a major problem for me because the DS2413+ does not have a dedicated system drive.

The packages range from useless stuff like cloud backup or media streaming to Python, Perl or Java. You can install a LAMP stack on your NAS if you wish to do so. Honestly, this looks more like a gimmick than a really useful feature, especially considering the Linux flavour on the DS runs a bare BusyBox with a few additional binaries.

The volume management is where things get interesting. Since this is a Linux system, there is no ZFS. Surprisingly, the only file-system supported by DSM is ext4. There are some HFS tools installed as well but they are useless for my use-case and I did not spot any option to create HFS+ volumes.

The DS2413+ supports all common RAID levels and sports its own lvm/md-based “SHR” RAID level, which allows for dynamic growing of volumes.

I hope that the introduction of DSM5 in January 2014 will bring the option to migrate to btrfs. I enjoyed the option to snapshot file-system states and it has come in handy several times before.

Network performance is okay. LACP works, though the setup is a little weird: it throws away the first port’s configuration instead of using it as the aggregated adapter’s configuration. It may just be a Linux thing.

SMB2 performance seems to suffer quite a bit when the device is busy; FTP and WebDAV work fine in these cases. NFS works – except on encrypted folders. There are no SMB-to-NFS permission problems.

When changing SMB or NFS options, the DSM will restart all sharing services, meaning that if you change an SMB option and have a client connected via NFS, the client will be disconnected as well. Meh.

So, am I happy to have this device or would I recommend rolling your own build? Simply put: I am happy. There is much to see and tinker with; I have not even mentioned the energy-saving options or the noise levels of the device. Both are great.

There are a few nitpicks but the overall build-quality and software is fantastic, making the device easily usable for all target-groups. The option to extend the DS2413+ with another unit via Infiniband is a great idea and hopefully the extension unit will still be for sale in a few years.

Whether you are a passionate home-user with hunger for storage or a small business unwilling to get a rack, the DS2413+ is worth your attention. Otherwise there are plenty of great rack-mounted options for the same price that do the same.

Change the locale in Battlefield 4

I won’t humor my esteemed readers with my personal opinion on Battlefield 4, but there is one thing I must get out the door: localization in games usually sucks. And it sucks even more when developers, publishers and digital distribution channels alike will not give you the option to change the language of the game.

Thankfully Battlefield 4 can be reset to English in numerous different ways. You have probably read about deleting the unnecessary extra locales (everything under Data\Win32\Loc that does not start with en*) but unfortunately these files will be restored on the next patch.

A much better and less intrusive way is available by altering your registry:

  1. Start regedit
  2. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\EA Games\Battlefield 4
  3. Set the “Locale” value to “en_EN”

This should work on every localized version and teach the game some manners.

SyncBackSE: Schedule a Move Operation on Windows

I have several file-system operations I cannot perform during the day; the machine’s performance would suffer and I would get angry e-mails. So I have to schedule simple move operations.

Now I could do this with Windows’ own task scheduler, but I would have to write either a VBScript or a batch file to specify the details. Performing a dry run also sucks. Apparently there’s no dedicated software that adds a “Schedule Move” or “Schedule Copy” context-menu operation (hint: I’ll develop one once I have beaten Grand Theft Auto V) for quick use, so I started experimenting.

It seems the amazing SyncBackSE fits the bill. I already own a license for this great piece of wizardry to perform sync operations between multiple machines and back up my files. Turns out you can configure a new, one-time job to be your scheduled file mover:

  1. Create a new backup profile and choose the directory above the one you want to move.
  2. Choose “Select Subdirectories and Files” to specify the directory/directories you want to move.
  3. Now select your target directory.
  4. Add a schedule.
  5. As a condition, set “Move file to target”.

SyncBackSE will automatically move your files, produce a nice log for you to review and even allow for a dry run.

Component is not an option for me.

I’m a quality whore. Give me quality. What, it costs 5 additional bucks? I think you did not hear me. I said. Give. Me. Quality.

When I got my Hauppauge PVR2 GE Plus it only came with component multi-AV cables for the PlayStation 3. “Big deal”, people say, “you don’t see the difference on YouTube anyway.”

That may be true if you play Battlefield or Call of Duty all day. However, I almost wanted to cry when I saw my glorious PlayStation outputting mushy pictures…


(It’s Jena-san from Planetarian, I’m sure!)

As this small video should demonstrate, there are quite a few differences, from the foggy picture and blurry outlines to the washed-out whites and strange color artifacts. Simply put: PlayStation 2 era quality.

My advice: grab a simple HDMI-to-DVI adapter cable plus one of those DVI+Toslink-to-HDMI converters and output beautiful, sharp material. Even if you think the difference won’t be visible after your post-processing, it is at least still visible on your own television set during recording.

My streaming setup

As you have probably noticed if you have followed my projects for an extended amount of time, I do love streaming. The idea of personal media has become an incredible creative force in today’s web culture. Think of the great podcasts, Let’s Plays and weekly shows you enjoy.

Since I’ve changed my workflows and my software stack around a bit in the past few months, here is a small look at how I work. I am not suggesting this setup is generally awesome (because it is clearly not) but at least it’s a solid, mostly software-based foundation.

I’m not the average player who simply streams his progress. I am too lazy to produce a continuous series of videos. Another problem is that gameplay is – in my opinion – only interesting at 720p or higher resolutions. Unfortunately, due to the poor cut-throat politics of the German Telekom, it is impossible to get proper broadband internet access here. I have to put up with ~100 kilobytes/s upload and a mere 10 Megabit/s downstream. Since 100 kilobytes/s works out to roughly 800 kbps, the best I can manage is 480p with about 700-750 kbps of video data plus 96/128 kbps AAC audio, which pretty much saturates the uplink.

However, I do want to be able to record in high definition anyway. Ideally I record in 720p@30 and stream in 480p@30 – in realtime, that is. Technically this should not be an issue: my computer supports Intel Quick Sync, so I could (in theory) encode my local high-definition copy of a video without suffering any performance penalty. I specifically mention “in theory” because reality leaves me in despair.

In the past I have used Dxtory and Xsplit to stream. Dxtory can output data to both a file and a pseudo-camera. The camera output can then be used in Xsplit to stream in 480p. Unfortunately Dxtory does not pass any resolution details to its camera output, so the content is always 4:3 and blurry as hell in Xsplit. That may be an acceptable short-term solution for 480p crap quality but not a keeper. Another bummer is that Dxtory does not make use of Quick Sync. The same goes for Xsplit (except when doing local recording – which renders the entire feature moot).

I also want to mix several input sources (like multiple webcams, microphones, my Hauppauge PVR2 plus local media files [avi, mkv] etc.), so my choices are rather limited. Again, I use Xsplit as my weapon of choice here. I have tried Open Broadcaster Software and while the software did perform well, the user experience and some kinks with capturing DirectX and OpenGL surfaces once again left me in despair. Capturing an exclusive madVR surface is impossible with OBS in its current state; there is flickering all over the place.

So yeah, Xsplit it is for preparing and switching stages. Starting with Xsplit 1.3 it has also become a useful tool for local recording due to Quick Sync support. Again, I could use OBS here or even Mirillis Action! but I already own an Xsplit license and there’s too little difference in the output to warrant extra software setup.

As mentioned before I use a Hauppauge PVR2 Gaming Edition Plus device to capture HDMI and component material from my Xbox 360, PlayStation 3, Nintendo Wii and PlayStation Portable. It works fine and the quality is acceptable, even if the blurry PlayStation 3 component output makes me cry. One little thing has to be noted: the PVR2 has a streaming lag of about 3750ms.

It would be rather ugly to have the commentary arrive 3-4 seconds early, so I keep my microphone input in a 3750ms Virtual Audio Cable repeater buffer, which also allows real-time local playback from my line-in while adding latency to the audio data for use in Xsplit. It’s a great piece of software; I’ve fiddled around with VB-Cable before, but VAC is just a much better experience for me. Your mileage may vary, especially since my requirement here is inducing latency while most people want to reduce it.

So, what’s left to do? Well, I still need to get a proper microphone that does not sound like I’m trapped in Buffalo Bill’s basement. I also need an additional, dedicated SSD for dumping the video data. And yeah… proper upstream – the one thing I will never get.

In short:

– Dxtory for capturing “strange” sources
– Hauppauge PVR2 for capturing consoles
– Virtual Audio Cable for mixing, splitting and postprocessing live incoming audio data
– Xsplit for bringing all sources and media together

I am not saying that the software listed above is perfect or the best there is. God knows Xsplit is far from perfect and OBS shows SplitMedia Labs who is boss in some departments (and no – “having more features” is not a good excuse for having the world’s slowest UI or not implementing features supported by libx264 [like OpenCL]). But my workflow could be a lot more miserable, so I guess this could pass as a recommendation.

Bring order to your chaos with File Juggler

I’m a sucker for sweet file-management tools. My ever-growing and ever-changing list of essentials has a new addition: the fabulous File Juggler. File Juggler is a rather simple yet powerful tool that allows you to define rules based on file names, modification dates and other criteria, and perform operations on matching files.

The reason I decided to shell out the $25 for the tool is that it just works. No bells and whistles, no stupid, overloaded crap UI. Select a few sources to monitor, define your rules, done. File Juggler will automatically keep watch over the files and move, delete, rename or extract them when the rules apply.

In the current version 1.3 you cannot move entire folders around, unfortunately. So if I wanted to move .\a\b to .\c\b the files from .\a\b would end up in .\c\. Fortunately the developer behind the application is already working on folder operations for version 1.4, so I have high expectations 🙂 .

Windows web stack woes

For quite a while I was not satisfied with the performance of one of my Windows 2008 servers. While the machine had reasonable processing power, a fair amount of RAM and almost no disk IO, the rendering performance of PHP pages on IIS 7 was simply atrocious.

Different PHP versions, lots of TCP and Wincache tweaking – no cigar. What could possibly cause the server to wait for about 8 seconds to render a simple WordPress front page?

The answer puzzled me: localhost.

Due to the IPv6 address of the machine, name resolution preferred the IPv6 address over the IPv4 one, causing significant slowdowns on each and every request to MySQL.

After simply replacing “localhost” with “127.0.0.1” in all configuration parameters I got the kind of snappy performance I expected. Crazy stuff.
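
In the WordPress case this boils down to a single line in wp-config.php (excerpt; assuming a standard install with the database on the same machine):

    <?php
    // wp-config.php excerpt: point WordPress at the IPv4 loopback directly
    // instead of letting "localhost" resolve to the IPv6 address first.
    define('DB_HOST', '127.0.0.1'); // was: define('DB_HOST', 'localhost');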

Transitioned

After a period of transition I finally decided to fully go with blog.tsukasa.eu and redirect requests from tsukasa.jidder.de to here.

That way links won’t be broken and I can finally utilize all the modern shenanigans I’ve installed.

Future ahoy!

Edit 2013-06-15: The missing comments from the transition period are also on board now, me hearties!

The 80s called, they want their Hulk-Hogan muscle shirt back.

Bitcasa Everywhere Chrome modification for infinite queue

Addendum 2014-05-21: I received a mail from Bitcasa informing me that this modification polls Bitcasa’s services so much that it has undesired side effects. Contrary to what you might believe it was not a threat or any sort of lesson in legal issues but a simple request backed by very reasonable, technical arguments.

If you are not familiar with how the modification worked, here is the short version: BCE Mod created a background timer that would poll Bitcasa’s endpoint every x seconds to update its internal status, log you in, trigger new downloads and so on. One person using this method is not a problem. Add an undefined number of people and the trouble starts. Every user with this mod increases the stress on Bitcasa’s web interface considerably due to the unending stream of requests.

Now here is where my dilemma starts: I was out to improve the user experience and show that it does not take much to do so, not to harm the service I want to see prosper for years to come. Unfortunately, that seems to be the case now, making the life of the good folks at Bitcasa harder – not cool.

So please understand that I will not offer or work on this modification anymore. If you still use the modification, I recommend you uninstall it immediately because it will not work as intended anymore; all it will do at this point is lock you out of My Bitcasa for a few minutes due to the number of requests. If you are interested in…

  • An infinite queue for your Bitcasa Everywhere downloads
  • Automatic login to My Bitcasa
  • Tighter integration with 3rd party services
  • A more up-to-date Bitcasa client update check (possibly an official announcement for each new release via Twitter?)

…please vote for these features on the official feature request section! The more votes a feature gets, the better! If a feature is not feasible you will receive official word on why it will not make the cut. Also consider voting for the addition of some kind of file-download extension to Bitcasa’s API, giving third-party developers more freedom to interact with Bitcasa without having to play the “middle man” for file caching. Again, sorry to everyone at Bitcasa for the inconvenience caused and sorry to everyone who came here expecting a turbocharger for their Bitcasa Everywhere!

Quick note: Bitcasa + prepaid credit-cards

I’m not a fan of credit cards. Personally speaking, I think PayPal, despite all its flaws, is the slightly lesser evil.

PayPal gets the one thing right about online payment: don’t allow charges without user authorization. That’s where credit cards fall short, in my opinion.

Needless to say I was quite disheartened to learn Bitcasa only allows credit-cards as their method of payment (although the legal page hinted strongly towards that during beta). Luckily enough, services like Kalixa, Wirecard or Neteller seem to work fine with Bitcasa. While not a perfect solution, this does at least postpone the problem a year for me.

Once again, Wuala gets it right while others seem to fail miserably with the same tools at their disposal: PayPal recurring payments, PayPal subscriptions (usable without a credit card) and even Bitcoin are offered as payment options.