mount: /run/miso/sfs/livefs: can't read superblock on /dev/loop0. dmesg(1) may have more information after failed mount system call.
I have been struggling with this and similar issues for a while now. After checking the affected ISOs, the thumb drive and all, I finally chose the nuclear option: completely re-installing Ventoy.
Turns out this was a good call. My Ventoy thumb drive had only ever been updated in place for roughly two years, and somewhere down the line, something went sideways. After completely re-installing the entire drive (partitioning and all), all ISO files finally work properly again.
I recently did a livestream where I was unearthing the (probably) forgotten mIRC script VirusScript by Virus. It is the mIRC script I used back in 2000 and it always held a very special place in my heart, in good part due to nostalgia.
I figured that I could at least turn this into a nice blog post since a lot of the heavy lifting was already done during the stream and the story has, for all intents and purposes, a happy ending. Plus, this way we can all browse history together.
If you are in the mood for reminiscing about an obscure Turkish bit of mIRC scripting from 23 years ago, today is your lucky day. Otherwise, you might want to stop reading here.
The Year: 2000
Ah, the year 2000. Back then, people actually had a sense of taste and played games like Unreal Tournament, Team Fortress Classic and Counter-Strike. Usenet was still drawing some measurable amount of breath. We were chatting on ICQ, IRC and Roger Wilco. And imagine that, the hot smartphone of the day was the venerable Nokia Communicator. Fancy that.
Back then, I used mIRC as my IRC client of choice. IRC clients are a somewhat strange breed of application, as most of them have some kind of scripting language or plugin interface to extend their functionality. My choice of mIRC script was the aforementioned VirusScript by Turkish programmer Virus.
The Year: 2022
Fast forward to the present day. A lot of the “old web” has been lost to either time, page rank irrelevance or data rot. With many services ceasing operation in the mid-2000s, we lost a ton of historic data.
So, how do we find the website of a script that has been defunct for more than 20 years? One with a name that brings up all sorts of irrelevant search results?
Thankfully, some remnants still remain – provided you feed search engines the correct term.
Obviously, searching for “VirusScript” does not really help much. “VirusScript ME” is a better search term because that is the full name of the script.
Depending on your search engine, you might get a relevant result… or nothing at all.
Bing, StartPage and Kagi fail, with no relevant search results. Google gives us what we need, though:
Bingo, an Angelfire hit that lists “mIRC Scripts -Just The Best-”. But do we get a link or a download?
Luckily, we do! A link to the homepage on a cjb.net subdomain. The story of CJB Management and mIRC-X is a great and interesting tale for another day, so let’s just press on for now.
Unfortunately, CJB.net stopped the entire subdomain and hosting thing back in the mid-to-late 2000s. So we need to take a slight detour through archive.org’s Wayback Machine to proceed. Since VirusScript is old enough to drink, there is a fair chance we can unearth a fully working copy of the website.
After checking the Wayback Machine’s records for vscript.cjb.net, we come up with these possible snapshots:
After taking a stab at a 2008 snapshot, we can see a barely working landing page with nothing behind it. Let’s try a 2004 snapshot instead… same result. Perhaps 2002 is a lucky year – and indeed, it is!
With shaking hands, we try the download. A white page. A moment of silence. And then…
Yes, this is a working download for VirusScript ME by Virus. A script not only forgotten by most people but also lost to the internet’s constantly shifting sands.
Unfortunately, though, VirusScript is true to its name and contains a few nasties. So I recommend some caution if you are interested in this.
So, let’s fire up a virtual machine, install the script and see what’s up. It turns out VirusScript comes with a few “hacking tools” that were the talk of the town back in the late 1990s – all of which VirusTotal still has a grudge against. Windows Defender is a lot more lenient and only complains about one tool.
There are also some false positives due to the age of the mIRC executable and the fact that this specific mIRC version was also used for malicious deeds back in the day.
I suppose these tools are considered harmful due to being “hacking tools” or “attack tools”. So how does VirusScript look? Well…
Launching the script is no issue. It does try to do some funky stuff with vmm_32.vxd – but to no avail, thanks to NTFS and the fact that this is not Windows 98 anymore.
The script comes with mIRC v5.8. Surprisingly, this is not a pre-cracked copy of mIRC or anything. So, let’s take a look at the script editor…
The script editor’s font has been changed. Combine that with some strange default settings for DCC (auto accept and auto run) and we’ve got ourselves a bit of fun. Even back in 1999/2000, automatically running received files was bad practice. So it is reasonable to assume this setting was chosen deliberately to possibly infect users.
There are some shenanigans going on with the script, so I do not recommend running it. If you must, at least do so in an isolated environment that you can discard later on.
It is still a cool script, all things considered. Especially all the ASCII art fun stuff is as neat as it was 20 years ago. And it is great that archive.org preserves these otherwise forgotten parts of internet history.
So there you have it. From a memory to a broken link to a full download. All the emotional highs and lows you want from a good story.
After roughly 44,000 hours in service, I decided to slowly replace my trusty Western Digital Red 4TB drives with some spanking-new Western Digital Red Pro 8TB ones. Partly because I am running out of storage and partly because some of the older drives started giving me elusive but terrifying dmesg outputs.
With Synology, this is usually a pretty pain-free process. You remove one old drive at a time, run an extended SMART check on the new drive and then repair/restore the storage pool with the replaced drive.
This causes three scrubs of your drives/pools, one for each md software RAID device (md2, md3 and md4):
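Each pass can be followed via /proc/mdstat over SSH. The snippet below is a purely illustrative sketch – the device names, sizes and percentages are invented, not taken from my unit:

```
$ cat /proc/mdstat
Personalities : [raid1] [raid5] [raid6]
md2 : active raid5 sda5[0] sdb5[1] sdc5[2] sdd5[3]
      11706562368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [==>..................]  resync = 12.4% (485071360/3902187456) finish=612.8min
```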
With the number of drives in my ageing DS2413+, this takes about 14-ish hours per run. Except when it does not.
Sunday is SMART day. All drives run an automated extended SMART test at precisely midnight. This also takes about 14 hours. Except when it does not.
Turns out that running the SMART test during a pool restore is a mighty bad idea and slows both processes down to a crawl. I am not talking about a few performance hits, I am talking full-blown “this takes 3 times as long as it should” madness.
After about 38 hours, I cancelled the SMART tests and the restore process finally reached acceptable speeds again.
Kagi, unrelated to the prior (and now defunct) shareware payment provider of the same name and domain, is a new search engine that has received a bit of attention over the past few weeks.
The company promises to respect the user’s privacy while still delivering a set of compelling features and high-quality, relevant, user-tunable search results. This sounds awesome, especially since other sites like Brave, DuckDuckGo, Ecosia and Startpage had their share of negative press over the last few years. SearX is a nice idea – but often does not deliver relevant results. So one could say the market is ripe for a new competitor.
Kagi offers two tiers of service: A free tier limited to 50 search queries per month and a paid 10$/month tier for unlimited queries.
The company is US-based but seemingly employs an international team of people worldwide via remote work.
First off, Kagi ticks all the right boxes for me. It integrates relevant additional data as well as quick access to archived copies of a site on archive.org. This does not sound like much of a feature, but I do a lot of research and this saves me some clicks.
The ability to rate the relevance of certain domains is also absolutely stellar.
As for the quality of the search results, I have no complaints. The ability to use specific “lenses” to skew results to a certain set (i.e. programming related or PDFs) is great.
There is no way to sugarcoat this: 10$/month for a search engine is too much of an ask. I’d happily pay 5$/month for a service like this.
However, even that would not work because Kagi uses Stripe and only accepts credit cards. No Google Pay, no Paypal, no nothing – only credit cards. This is a typical issue with US-based services that do not realize credit cards are not the primary payment method in the rest of the world.
Kagi states on Hackernews that they anticipate a low search volume for their regular users (citation/link missing). I heavily disagree. When I am using a search engine, I do not send just one query. I usually start my research with one term, run some variations on that term and issue new queries based on the information I have learned from previous results.
I use my search engine of choice more than 1.7 times a day – which is all the free tier’s 50 monthly queries average out to – so the free tier of Kagi would be unusable for me. And if I cannot use the search engine, there is no way I will fully commit to switching to it.
Unfortunately, the long-term sustainability and growth of the service are murky topics and something that warrants further analysis in 2-3 years – assuming we will get any kind of published data from Kagi. Will the company be able to convert enough users into customers to be sustainable and/or profitable?
The company is US-based. For many, this might seem like a great selling point; from a privacy perspective, however, the US is a terrible haven. The fact that the government can order the silent exfiltration of data via gag orders is worrisome. Kagi assures us they do not log or collect data – but the same was claimed by several VPN providers over the past decade that were logging after all and handed data over to the feds. An independent audit of the infrastructure, configuration and software – similar to how Mullvad operates – would go a long way towards verifying these claims and building trust.
Lastly, Kagi has a worrying amount of products in the pipeline. Their Orion browser is in beta and they have already announced an e-mail service on their FAQ. On the one hand, it is a good strategy to branch out and offer many different products in various categories. On the other hand, you might be spreading yourself a little thin here, Kagi.
The Bottom Line
Despite my sounding pretty negative, I do like what Kagi offers. However, the price and the available payment methods (and I am not alone in this) are a big turn-off right now. A price of 10$/month is just too high for me when Newsblur takes 36$/year (which comes down to 3$/month). If Kagi magically manages to knock the price down to 5-6$/month, I’d immediately subscribe.
The free tier is virtually useless for me and acts as a nice gimmick to show off how Kagi works, what features are present and what kind of results you can expect.
Ironically, this is similar to the methods employed by the shareware processor Kagi (fully functional but limited to x uses). We will see if Kagi search will last as long as the company whose name and domain it is using – or whether the party will come to a sobering end much earlier.
And even if this bitter end should come to pass, I think that having a service like Kagi is important. It shows that an increasing number of users are growing sick of being the product. And Kagi might be able to more easily innovate/refine in a similar fashion as XenForo managed to one-up vBulletin back in the day.
If you want a simple, quotable takeaway from this post, here you go: While I currently would not pay for Kagi, I highly recommend you try it out yourself. It is an elegant search engine that has not failed me on my queries yet.
Despite the fact that I do not stream as much anymore, I continue to tinker with OBS on a daily basis. For a while now I wanted to have what I would call a “video soundboard”, a simple mechanism that allows me to quickly play short clips during my stream.
Now, this doesn’t really sound too hard. Create many scenes, add media sources, slap a StreamDeck button on top of that – boom, you are done. However, this is a crappy solution because it requires a ton of additional work every time you want to add new videos into the mix.
I wanted to have a mechanism that relies entirely on one single scene with one single media source for all the content. Control over what gets played is wholly set on the StreamDeck and the StreamDeck only, meaning that adding new content is as easy as adding a single new command to a new button on the StreamDeck.
Sounds interesting? Here is how it works.
Before we start, you will need the following things ready: an Elgato StreamDeck with the Text File Tools plugin, OBS with the Advanced Scene Switcher plugin, and the VideoFileFromText.lua script.
Create a new button on your StreamDeck with the Text File Tools plugin.
Specify a target filename and set the Input Text to the location of your desired media file (i.e. C:/Temp/heheboi.mp4). Leave the “Append” checkbox unchecked so the entire file gets rewritten.
This is all you will need to do to add new media to the scene. We can now set up OBS.
Within OBS, create a new scene (i.e. “Scene – Memes”) and add a media source to that scene (i.e. “Media Source – Memes”). Make sure this is a media source and not a VLC source!
Open the properties of the media source and be sure to check the “Local File” checkbox, “Restart playback when source becomes active”, “Use hardware decoding when available” and “Show nothing when playback ends”.
Hide the media source by clicking on the “eye” icon in the Sources list.
Setting up Advanced Scene Switcher macros
Now open the Advanced Scene Switcher plugin via Tools – Advanced Scene Switcher, navigate to the Macro tab and add a new Macro called “Meme Scene (Start)”.
Check the “Perform actions only on condition change” checkbox and add the following condition:
[ If ] [ File ]
Content of [ local file ] [ <PATH ON STREAMDECK BUTTON> ] matches:
.*
(Yes, that is a dot followed by an asterisk – no space or anything else before, after or in between.) Check the “use regular expressions” checkbox and the “if modification date changed” checkbox, and leave “if content changed” unchecked.
Now add the following actions:
[ Switch scene ] Switch to scene [ Scene - Memes ] Check the "Wait until transition to target scene is complete" checkbox.
[ Scene item visibility ] On [ Scene - Memes ] [ Show ] [ Source ] [ Media Source - Memes ]
This takes care of actually playing the video when a change in the file is detected. But we also want to switch back to the previous scene when playback has finished, so we must add another macro.
Add the second macro “Meme Scene (End)”, check the “Perform actions only on condition change” checkbox and add the following conditions:
[ If ] [ Scene ]
[ Current scene is ] [ Scene - Memes ]
[ And ] [ Scene item visibility ] (Click the clock) [ For at least] [ 1.00 ] [ seconds ]
On [ Scene - Memes ] [ Media Source - Memes ] is [ Shown ]
[ And ] [ Media ]
[ Media Source - Memes ] state is [ Ended ]
Add the following actions to the second macro:
[ Switch scene ]
Switch to scene [ Previous Scene ]
(Check the "Wait until transition to target scene is complete" checkbox)
[ Scene item visibility ]
On [ Scene - Memes ] [ Hide ] [ Source ] [ Media Source - Memes ]
Now we should be good, right? Well, almost. While the macros react to changes in the file and switch between the scenes, we still do not set the media file on the source. This is handled by the Lua script, which we must set up as a final step.
Setting up the Lua script
Open the Scripts window via Tools – Scripts and add the VideoFileFromText.lua script.
You should see some options on the right side of the window.
Set the interval to 50ms, browse to the same text file you used on the Elgato StreamDeck button for the Video File List, and select the “Scene – Memes” scene for the Scene as well as the “Media Source – Memes” for the Media Source. Finally, check the “Enable script” checkbox and you are done.
Tying it all together
Be sure that the Advanced Scene Switcher is active and press the button on the StreamDeck. The scene should switch to your Meme scene, play the video and then switch back. Add another button on the StreamDeck that writes a different video file path to the same text file.
Now press the new button, and the second video file should play.
This makes adding short clips really simple and pain-free. No need to manually create multiple scenes or deal with multi-action steps on the StreamDeck. Adding a new video is as quick as adding a new button, setting the path to the desired media file and giving it a nice button image.
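If you want to test the mechanism without a StreamDeck, any way of rewriting the trigger file works just as well. A quick sketch from a shell, reusing the example clip path from above – the trigger file name is an assumption, use whatever file you configured on the button and in the script:

```shell
# Overwrite the trigger file with the path of the clip to play;
# the Lua script notices the modification and updates the media source.
echo "C:/Temp/heheboi.mp4" > obs-trigger.txt
cat obs-trigger.txt
```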
Of course, this is just the solution that I came up with, so your mileage may vary.
However, I do think that the inherent simplicity makes it an ideal solution. What do you think?
After more than a decade, I have finally migrated away from WebDrive. It is not that I am particularly unhappy with the product – South River should not feel bad here. My use case for the software simply changed over the years and WebDrive did not cater to that.
Back in the 2000s I primarily used WebDrive to keep a connection to an FTP or SFTP system to easily manage and edit files. A perfect fit, a very reliable tool. Especially in a landscape where many applications only knew how to work with local (as in: on a local drive) files. With the advent of rich media content online, however, WebDrive’s approach to file access no longer fits my requirements.
These days, I manage pools of video and audio on remote systems. I do not want to download an entire file to “peek” into it. It is problematic for collaboration. And more importantly: It is slow and inefficient.
Enter RaiDrive, software from a Korean company that does this very well (on supported protocols).
I read some buzz online about RaiDrive not being reliable; however, I cannot mirror that sentiment. The software has been reliable for me during two months of daily use.
Being the old fogey that I am, I make no use of any of the hip “cloud” integrations both products offer, so I cannot speak to the quality of those. However, RaiDrive uses EldoS/Callback’s reliable components – the same ones also used in my favourite sync tool SyncBackPro.
I love Powershell. Unfortunately, as soon as we cross into the realm of trying to grep for a specific string in gigabytes worth of large files, Powershell becomes a bit of a slowpoke.
Thankfully I also use the incredible FileLocator Pro, a highly optimized tool for searching file contents – no matter the size. The search is blazingly fast – and you can easily utilize FileLocator’s magic within Powershell!
For the sake of clarity: I will be using Powershell 7.1.3 for the following example.
# Add the required assembly
Add-Type -Path "C:\Program Files\Mythicsoft\FileLocator Pro\Mythicsoft.Search.Core.dll"

# Prepare the base search engine and criteria
$searchEngine = New-Object Mythicsoft.Search.Core.SearchEngine
$searchCriteria = New-Object Mythicsoft.Search.Core.SearchFileSystemCriteria
$searchCriteria.FileName = "*.log"
$searchCriteria.FileNameExprType = [Mythicsoft.Search.Core.ExpressionType]::Boolean
$searchCriteria.LookIn = "C:\Temp\LogData"
$searchCriteria.LookInExprType = [Mythicsoft.Search.Core.ExpressionType]::Boolean
$searchCriteria.SearchSubDirectory = $true
$searchCriteria.ContainingText = ".*The device cannot perform the requested procedure.*"
$searchCriteria.ContentsExprType = [Mythicsoft.Search.Core.ExpressionType]::RegExp

# Actually perform the search, $false executes it on the same thread as the Powershell session (as in: it's blocking)
$searchEngine.Search($searchCriteria, $false)

# SearchResultItems are on a per-file basis.
foreach ($result in $searchEngine.SearchResultItems) {
    foreach ($line in $result.FoundLines) {
        "Match in $($result.FileName) on line $($line.LineNumber): $($line.Value)"
    }
}
Wowzers, that’s pretty easy! In fact, a lot easier (and quicker, to boot!) than playing around with Get-Content, StreamReaders and the like.
One thing of note here: Instead of running this in a loop for every file in a directory, it is actually quicker to let FileLocator process an entire tree of folders/files in one go. The larger the dataset, the larger the gains through invoking FileLocator.
And yeah, you can use FileLocator on the command line through flpsearch.exe – however, the results are not as easily digestible as the IEnumerables you get through the assembly.
There are two things you cannot ever have enough of: Good text editors and good file managers.
One of the arguably best commercial console-based editors for Windows with a history going back all the way to the 1980s is now available for free: The SemWare Editor.
If you have never heard of or tried TSE Pro, imagine a mix between the simple and intuitive CUI of EDIT.COM and the rich feature set of vi, allowing you to extend and alter how the editor works by adding or modifying the included macros. The editor comes in two flavours: a true console application and a Windows-only pseudo console that has a few more bells and whistles. Of course, the purebred console version works great via SSH/telnet.
Now is a great time to give TSE a try, as the following announcement came on the mailing list:
Yes, this and future versions will be free. The good Lord Willing, (ref: James 4:13-15), I plan to continue working on TSE.
I cannot praise the editor enough and will vouch that it is worth every penny of its previous license cost.
I am still rocking my beloved Corsair K95 RGB, the original one with the 18 G-keys. I still think there is no keyboard to date that is as great for multiboxing as this one.
Since migrating to my new machine a few months ago, the keyboard would occasionally do quirky things. Letting iCue run for a while caused gamepad detection to lock up and take quite a while. What is worse: no up- or downgrading of iCue made a difference here.
Things like the joy.cpl, GeForce Now, Inner Space or games supporting gamepads would frequently take minutes to load. Disconnecting and reconnecting the keyboard fixed the issue temporarily until something went bonkers again.
The solution to this was to force a reinstallation of the keyboard’s firmware through iCue. I have no idea why I would need to do this, but flashing the firmware again solved the issue permanently.
I have been playing around with Windows 11 in a virtual machine. My thoughts can best be summed up as “a bouquet of unremarkable things nobody wanted”. Windows 11 already made the rounds on the internet over its strict “no old hardware allowed” policy and the back-and-forth over DirectStorage, which seemed like little more than marketing bullshit.
Personally, I have an entirely different pet peeve with Windows 11: It looks revolting. It looks ugly. It looks disgusting. Windows 11 looks more and more like a failed attempt of skinning Wine to make it hip, fresh and cool. Or like the aftermath of a broken UXThemePatcher run. Or what happens when Window Blinds crashes.
Please remember: People, (presumably) actual living people, got paid to do this.
People who know – or should know – that a majority of old applications will look butt-ugly with a half-assed mix of design elements from Windows 2000 (console contents and some colours), Windows 8/10 (window controls that were meant for rectangular themes) and the lunacy that is Windows 11.
The colours do not match. The icon language does not match. The margins do not match. Nothing matches.
This is just a very quick and dirty how-to for getting Synergy 1 Pro to run on LightDM before logging in. All the other instructions I have found haven’t really worked out for me, so let this be my best try…
Step 1: Setting up
Before we can set up LightDM’s configuration, we first need to create a PEM cert and configuration with the root user, as that is what my LightDM process is running as.
Log into a normal interactive X session, start the graphical Synergy 1 Pro client via “sudo synergy”, generate a certificate and set the client up in such a way that it can actually connect to the server and is approved by it.
Step 2: Adjust LightDM’s configuration
I am on Arch, so my configuration sits in /etc/lightdm/lightdm.conf. Open the configuration and add the following block:
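As a rough sketch of what such a block can look like – the synergyc path and the server address below are assumptions you will need to adapt to your own setup:

```
[Seat:*]
# Start the Synergy client during greeter setup so the login screen
# can already be controlled from the Synergy server.
greeter-setup-script=/usr/bin/synergyc --enable-crypto 192.168.1.10
```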
I never understood why people think Docker is a big thing. To me, it always seemed to solve a problem that does not exist by adding layers of complexity which inevitably always introduce new problems and bugs.
If you wanted to isolate processes, why not use jails or zones? “But Tsukasa”, people sneered at me with mild amusement in the past, “you don’t understand. It’s about the ease of replacing software!”. Yeah, you can do that without Docker, it’s called package management.
Somewhere along the line, the OCI was founded and at least there was some kind of standardized way of handling containers.
Enter VMware Workstation in the middle of 2020. Coming to us courtesy of a technical preview, VMware shipped the new vctl container CLI it plucked from VMware Fusion. And I really wanted to love it, because the idea behind it is good – but…
A Promising Disappointment
I am a VMware guy. After more than a decade with VMware Workstation, I really dig the features. Yes, you can probably achieve similar results with other virtualization solutions – but none make it as easy as VMware. Yes, call me indolent and a fanboy, if you must. So imagine my joy when VMware announced their container CLI.
No more need to install the Hyper-V role, no more fiddling with some wonky plugins – just a clean, supported product that does what Docker does, but with VMware’s hypervisor in the back. One product to be the definitive all-out solution for my desktop (x86) virtualization needs.
VMware creates a new virtual machine in the background that acts as a host for the containers. This machine does not show up on your usual list of running VMs. Instead, it will show you the active containers. Don’t click on them though, the Workstation UI does not really know what to do with containers and you will end up with a botched tab of nothingness.
Since vctl is using VMware’s hypervisor, all the good stuff is already in place and familiar to me. Network configuration is dead simple and I have all the tools to explore/manage the container VM.
The performance is also top-notch, so what could I possibly complain about?
The integration and the polish. vctl creates an alias to docker, so you can issue both a vctl ps or docker ps and get the same result. Unfortunately, vctl does not shim all the commands and parameters Docker has, meaning that a lot of tooling and cool integration simply does not work. Want to use VSCode’s Remote Container extension with VMware? Bad luck, the command does not reply in the expected fashion because it does not understand all the parameters.
This is incredibly disappointing because the container feature in Workstation is so close to being a fantastic proposition in a time where VMware sunsets some long-standing features (cough, Shared VMs, cough, Unity on Linux, cough).
It does what it says – and nothing more… yet!
Please don’t misunderstand: The feature does what it advertises to do. I can easily author a Dockerfile and build it with vctl – without having to install Docker. This by itself is already a godsend because it reduces the amount of software I need to install on my workstation.
But I cannot help but wonder how cool it would be to have a (parameter-compatible) drop-in replacement for Docker from VMware as part of the software I use for full virtualization anyway. And give me a docker-compose, while you are at it. Thanks.
I do not get to write neat posts nearly as often as I would like to. But this one does not violate any NDAs and is relevant to an OG post on this blog.
So today I want to talk about two things regarding my beloved DS2413+ that other people might find useful in some capacity. Or at least entertaining.
Be Cool, Be Quiet – Live the Noctua Lifestyle!
I replaced the two Y.S. Tech stock fans in my DS2413+ with two Noctua NF-P12 redux-1300. Technically, you can pop in any 3-pin 120mm fan you want; however, due to the way Synology drives the fans, they might not move enough air, stop spinning altogether, or cause DSM to complain about fan failure.
I originally intended to replace the fans with the official replacement parts, however, it seems that I got stiffed so procuring the parts on short notice was no option. After a bit of research, I settled on the NF-P12 because other folks around the internet had positive experiences with the swap-out. I used this rare chance to clean the interior of the NAS, routed the cables nicely and thought I was done – I was wrong. I learned that lesson when the unit started beeping in the middle of the night.
You do want to set the fan profile to “Cool mode” in your power settings – otherwise, one of the fans will randomly stop spinning after a few hours and DSM will start issuing alarm beeps.
There are some other hacky ways to edit the fan profiles manually via the console, however, this operation apparently needs to be repeated after each DSM update. I’m way too lazy for that.
As for my cool Noctua lifestyle: The temperatures are virtually identical and the fans are quiet (as you would expect from the mighty Austrian owl!).
If you want to live the dream, please be sure to check the web for other people’s reports of your specific unit. Depending on the model the fan size, pin type and compatible fans will vary.
The big question, though: Is it worth the hassle?
Honestly speaking there is very little difference between the Y.S. Tech and Noctua fan in terms of cooling performance and noise level – at least when used on “Cool mode”. But you want that Noctua lifestyle, don’t you?
Addendum 2021-09-13: After upgrading to DSM 7, something about the way the fans are being addressed seems to have changed. I ran into several instances where DSM would report the fan as “faulty” and turned it off completely. Changing the fan settings around does not seem to make a difference here. I have popped in a new set of Y.S. Tech fans (original Synology replacement parts) for the time being…
Data Scrubbing – Or: Dude, Where is my Data?
I run my data scrubbing tasks regularly. Due to a recent power outage, the system complained about possible write cache issues, successfully completed a scrub and prompted me to reboot now or later. It also asked whether I wanted to remap the disk.
“Sure”, I thought to myself, “I like maps!”. I toggled the option and hit “Reboot now”. DSM rebooted… and that was about it.
Blinking status LEDs but no DSM web interface, no SMB and no NFS shares. Slightly nervous I tried to connect to the NAS via SSH. dmesg and the system messages did not show anything of particular interest, so I started poking around the internet.
Google spewed pages upon pages of horror stories at me that made my skin crawl: bad superblocks, broken filesystems, complete loss of data, cats and dogs living together – the whole nine yards to make me break into a cold sweat and fear the worst.
In this case, though, a simple “top” explained the situation: DSM was performing an e2fsck check of my filesystem.
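If you ever find yourself in the same limbo, checking for the process from an SSH session is quick. A generic sketch, not DSM-specific:

```shell
# Look for a running filesystem check; the [e] bracket trick keeps
# grep from matching its own entry in the process list.
ps aux | grep "[e]2fsck" || echo "no e2fsck running"
```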
This obviously caused the logical device to be busy or unavailable, and explains why all the lvs, pvs and vgs commands listed everything as being in order and why mdadm reported proper operation. It also explains why the shares were not available, as the logical volume was not mounted.
Personally, I find the design decision not to initialize the web interface a bit weird: it is truly unsettling to see all your data in limbo, with the blinking lights on the front of the unit (not the drive indicators) as your only indication that something is – or could be – happening.
I hope that DSM 7 improves on that end. It would be much more transparent if the web interface came up and indicated that a volume is currently unavailable due to a running filesystem check.
The DS2413+ is still an awesome unit and I very much appreciate the stability and ease of use of it. Synology is doing a great job at being very user friendly, so it really hits hard when something like the e2fsck situation comes up.