### DEVELOPMENT ON LINUX ###

6 July 2019 - 18 minute read
A bit over a month ago, I was scheming about converting my work machine over to
Linux. I suggest you read my original plan here first so you have context. 2
weeks ago, I sent a polite reminder to the CTO about my email, as I expected it
had got lost in his inbox and slipped his mind. He read through and sent a
response back: it was totally fine. The only real requirements were that I
should always have a Windows environment available for doing work, and that I
talk to the head of security, since all the work machines use full-disk
encryption and I should know what the expectations are around that.

Naturally, I was over the fucking moon about it. I'd just been given permission
to transition my work machine over to Linux, stick Windows in a VM, and overall
be a lot comfier and happier using my machine. I'd also picked the time to send
the reminder pretty well as I was already booked off for holiday the following
week, so I'd have 9 days to turn my laptop into a paperweight and back.

### INSTALLATION ###

So, the first step was to get Linux installed. I was torn between Ubuntu and
Arch, because on the Ubuntu hand, I knew it would work immediately and without
any hardware problems, which is exactly what I'd want for a work machine, but on
the Arch hand, it's so much better for development due to the newer software,
smaller updates, minimal install, and broader software availability. Given that
I had 9 days rather than just a weekend (which is what the original plan was), I
thought I'd try Arch; if that didn't work, go to Ubuntu; and if all else failed,
I could always put Windows back on there as an emergency last resort.

Backup made, I booted my Arch USB, went through the steps with wpa_supplicant
to get connected to the internet, and discovered that my USB couldn't actually
see the SSD in the laptop. A bit of reading on the Arch forums later and I
found that Dells put the SSD behind some weird RAID mode by default. I booted
into the BIOS, went to the SATA settings, and found 2 radio button selectors.
One said "RAID", the other said "AHCI". Swapping it over to "AHCI" brought up a
warning saying "oh no look
out your partitions might not be bootable any more", but that's fine because
those partitions are going to get wiped in a few minutes.

Booted the USB again, reconnected to the internet and the SSD is visible! Great,
we can start. Essentially just working through the installation guide, the
first real new thing I ran into was disk encryption - that and LVM being 2
things I'd never worked with at all. I ended up using LUKS with no LVM,
encrypting a single root partition protected by a password. I could've looked
into using the TPM, but I didn't want to do anything fancy the first time I was
working with this. I just wanted something that works.
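
For anyone curious, the no-LVM route really is pleasantly short. The shape of it
is roughly this - device names are examples, not necessarily what's in this
laptop:

    # LUKS-format the root partition, open it, and put a filesystem straight on
    # top - no LVM in between.
    cryptsetup luksFormat /dev/nvme0n1p2
    cryptsetup open /dev/nvme0n1p2 cryptroot
    mkfs.ext4 /dev/mapper/cryptroot
    mount /dev/mapper/cryptroot /mnt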

You'll notice that in the first post, I talked about dual-booting so I'd always
have a Windows environment. A combination of the disk encryption, the weird SSD
settings, and the extra time threw that plan out the window. Going straight to
single-boot made the whole process dramatically simpler.

So, finished install; it doesn't boot. I made the rookie mistake of storing my
bootloader on the encrypted partition. One thing you find out quite quickly is
that it's very hard for a bootloader to decrypt itself, as you can't start it up
while it's encrypted. So, whole process again, this time with a bigger boot
partition to store the initramfs image, and then it all boots fine. Due to me
starting late in the evening on the Friday and doing the installation twice, it
was Saturday evening at this point. Aside from the stress of installing an OS
that didn't boot and rendering my work machine a paperweight for nearly a whole
day, it was a pretty entertaining process.
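
The fix, for the record, is the standard one from the wiki: keep the kernel and
initramfs on an unencrypted boot partition, give the initramfs the encrypt hook,
and tell the bootloader where the encrypted root lives. Roughly this, taking
GRUB as the example and with a made-up UUID:

    # /etc/mkinitcpio.conf - the encrypt hook has to come before filesystems.
    HOOKS=(base udev autodetect keyboard keymap modconf block encrypt filesystems fsck)

    # /etc/default/grub - point the kernel at the LUKS container.
    GRUB_CMDLINE_LINUX="cryptdevice=UUID=<uuid-of-luks-partition>:cryptroot root=/dev/mapper/cryptroot"

    # Then regenerate the initramfs and the GRUB config.
    mkinitcpio -P
    grub-mkconfig -o /boot/grub/grub.cfg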

### INITIAL SETUP ###

Now Sunday, I woke up pretty excited to get a nice rice going. I was going to
pretty much just copy the rice on my laptop, as it looks nice, works pretty
well, and I already have all the work done for it, so my week wouldn't suddenly
disappear into ricing instead of actually getting anything done. I copied my
fish and i3 configs across, installed i3, dunst, lemonbar, compton et al., ran
startx aaaaaaaaand... the TTY hangs. "Shit".

Bunch of reading later and I find that this laptop uses Nvidia Optimus, which
apparently is mandatory. Searching around a bit more and I find bumblebee. The
Arch Wiki page says that it doesn't have terrific performance and xyz other
things, but I'm only after CPU performance on this machine as I'm doing
development, not gaming. Went through the quick install process, and now X
starts and my cardiovascular system can come back on.
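
The quick install really is quick - roughly this, going off the wiki, with the
package list and username being the usual suspects rather than an exact record:

    # Bumblebee plus the Intel and Nvidia drivers, then group membership and
    # the daemon. Replace "oliver" with the actual user.
    pacman -S bumblebee mesa xf86-video-intel nvidia
    gpasswd -a oliver bumblebee
    systemctl enable bumblebeed.service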

So, with i3 now starting fine, I add the line to my fish config to autostart X
on TTY1 at login, get my terminals and bar set up, then I get Firefox on there
and do a quick config from my privacy guide. I also get Thunderbird on there and
hook it up to my work email.
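
For completeness, "the line" in question is really a few lines of fish - mine is
something along these lines:

    # ~/.config/fish/config.fish - start X automatically, but only on TTY1 and
    # only for a login shell.
    if status --is-login; and test -z "$DISPLAY"; and test (tty) = "/dev/tty1"
        exec startx
    end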

### WINDOWS VM ###

This was strangely one of the more interesting parts. I hadn't really touched
QEMU before - let alone KVM - so I was interested to see how well it worked. I
decided to use virt-manager to make the setup process easier, and it certainly
did exactly that. I had a VM up and running in a couple of minutes - or 20 if
you include Windows' excruciatingly tedious installation process. Bearing in
mind that the host OS (i.e. excluding the VM image size) took up a mere 6GiB of
storage at this point and even my desktop with big DEs and games on it is less
than 40GiB, I gave this VM a 40GiB drive. It was full in 2 days. I genuinely did
not expect Windows to be so morbidly obese. I expected it to be fat of course,
but not that fat.
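
For reference, what virt-manager set up for me is roughly the equivalent of
something like this - the name, sizes, and ISO path are examples rather than the
exact settings:

    # A KVM guest with a 40GiB qcow2 disk, installing from a Windows ISO.
    virt-install \
        --name win10 \
        --memory 8192 \
        --vcpus 4 \
        --disk size=40,format=qcow2 \
        --cdrom ~/isos/win10.iso \
        --os-variant win10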

So what I ended up having to do was clone the image - which in the process
merged all my snapshots into a single image - at which point qemu-img would let
me increase the image size. I tripled it to be sure I wouldn't need to go
through this process another time.
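
Done by hand, the clone-then-grow comes down to a couple of qemu-img commands -
something like this, with made-up filenames:

    # Converting flattens everything down to the current state (the internal
    # snapshots don't come along for the ride), then the new image can be grown.
    qemu-img convert -O qcow2 win10.qcow2 win10-big.qcow2
    qemu-img resize win10-big.qcow2 120G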

Granted, this was after I'd installed stuff like Teams, Visual Studio, and
Firefox on the VM, but even just post-install it was eating up a good 15-20GiB,
which is just ridiculous.

### SANDBOXING ###

The next thing to do was to sandbox this thing. Now that I had most of it set
up, I could take a last snapshot and then restore to it each time I booted up
the VM. This was on Monday, and I sent a little update to my boss just to assure
him that all was going well and I hadn't set fire to it or anything.
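
With libvirt doing the heavy lifting, that routine is only a couple of
commands - something along these lines, with made-up domain and snapshot names:

    # Take the "known good" snapshot once, while the VM is shut down...
    virsh snapshot-create-as win10 clean-base "Windows plus tools, pre-chaos"
    # ...then before each session, roll back to it and boot.
    virsh snapshot-revert win10 clean-base
    virsh start win10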

There were a number of extra things I'd need to do, but they'd have to wait
until I got back to work. I could only download the installer for the work VPN
from the local network, and I couldn't connect to TFS or anything until I was
there either. I didn't really touch the laptop much, if at all, for the rest of
the week and spent it doing the other things I'd planned for that week.

### FIRST WEEK BACK ###

I'll be honest, it does feel pretty nice to have the coolest computer in the
building. The people with Macbooks get thermal issues and have to deal with
Apple's joke of a walled garden. The people with Surface Books, well... have
Surface Books, which is enough to deal with. That doesn't stop them from having
serious performance and stability issues as well though! The people with Dells
running Windows have to put up with Windows, and the Dell Thunderbolt docks
absolutely love causing BSODs at all the wrong times. Me though? I've got all
the stability of a real *nix OS, none of the resource-hogging GUIs and
interfaces, and all the flashy novelty terminal programs like cmatrix and sl.

On the Monday back though, I was quite nervous. Because I don't have a dock at
home, I hadn't tested Linux with the dock or the external monitors yet. So, I
put the laptop on the stand, plugged in the dock, and rather cautiously pressed
the power button. Laptop boots as normal - with Dell's UEFI taking far longer
than it should need to as normal - I put in the password (using the built-in
keyboard just to be sure) for LUKS, and then I get dropped on the login screen.
I try typing in my username on my keyboard connected through the dock... it's
alive. I excitedly type in my password and watch the few X log messages to the
TTY before the laptop screen goes grey (because my "wallpaper" is #4E4E4E),
then the right monitor, then the middle monitor. I open a terminal and hop it
left and right and all 3 monitors are there and recognised and connected. I
check acpi and the dock is charging the laptop, and I'm connected to ethernet
through the dock too. It all works! It was a deeply satisfying moment
seeing it all just come to life and work seamlessly - especially as I was
expecting I'd need to install some obscure drivers or something.

So, after a little bit of manic laughter and celebratory hammering of fists on
my desk, I needed to actually get the last few things set up. I grab the VPN
installer and stash it so I know I'll have it later, and I go to connect Visual
Studio to TFS. Can't reach it. When I was making the VM, I first tried a
shared network as you might normally do, but the VM wasn't having any of that
and refused to connect to the internet, so I ended up just using a bridge
adapter so the VM could reach the internet and the host, but not the LAN. On the
ethernet adapter though, it seemed perfectly happy. My final solution then was
to have the VM connected to both the ethernet and the bridge, so it had full
access when on ethernet, but could still access the host and the internet when
on WiFi.
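
In libvirt terms, that final setup is just the guest having two NICs - something
like this, assuming the wired port is bridged as br0 and the stock "default" NAT
network is what provides the host-and-internet path (both names are
assumptions):

    # Full LAN access when docked, via a bridge on the ethernet port...
    virsh attach-interface win10 bridge br0 --config
    # ...and host + internet access regardless, via the NAT'd virtual network.
    virsh attach-interface win10 network default --config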

Having remembered that a few months ago, Windows caught up with 1998 and got
SSH, I decided to make use of that to get file sharing between the host and VM.
I set up SSHFS on the VM and was then able to mount the host's home directory.
Personally, I'd've preferred to limit access to a specific
subdirectory, but it'll do fine. So, I now have a file path that I can use to
read and write directly between host and VM, so the first thing I wanted to do
was store all my source code on the host so the VM can stay sandboxed. I connect
VS to TFS, set the workspace mapping to the UNC path, and so begins the chaos.
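
(As an aside, the Linux side of that sharing setup is nothing more than having
sshd running; an SSHFS client on the Windows side - SSHFS-Win or similar - is
what turns the host's home directory into a UNC path.)

    # On the host: make sure an SSH server is up for the VM to connect to.
    sudo systemctl enable --now sshd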

See, it shouldn't matter where a file is. If you can access the file, then it's
just that: a file. Files are files are files, and there's not really anything
more to them. VS seems to disagree though - and not in a remotely sane manner.
You'd think that if it was designed not to allow certain things, then attempting
said things would make an error window pop up and forbid you from doing the
thing, right? Not here. The geniuses at Microsoft thought that of course the way
to do it is to have VS give no warning at all, and then proceed to freeze and
crash the instant you try and access anything ever. Just having the reference to
the network share somewhere meant that even just accessing local files broke it.
This got drawn out into 2 days lost to terrible software design making it
virtually impossible to work with or fix the program, because the instant you
try and change any setting, it crashes and you have to start all over again. I
got hit with a two-digit number of BSODs this week, and almost all of them
accompanied VS killing itself. Microsoft programs and Microsoft operating
systems really do not mix well, it seems.

I needed a solution though - some means by which I can save my uncommitted
changes to persist between VM reboots when it gets reset back to the last
snapshot.

### PROBABLY THE STUPIDEST THING I'VE DONE TO FIX A PROBLEM ###

Because of the nature of TFS as opposed to git, branching isn't really a thing.
There are what are called "shelve sets", but they're like a branch with only a
single commit, and if you want more commits, you need to just spam loads more
shelve sets, which becomes a pain. I can't just commit my changes, because then
I'd break the build, since it's all on master. It's a terrible system. Were we
to be
using git (which thank god most of the other devs are pushing for a migration
to), I could create a new branch for each thing I work on, then commit to and
push that branch all I want without breaking anything, then it's fine if
everything is lost from my VM when I reboot because I can just clone and rebase
to my branch and I'm back where I left off. My changes are stored on the server,
rather than precariously perched on my local machine.

Having concluded that if I want VS to work for more than 7.3 seconds, I need to
store code locally on the VM, I needed a way to back it up somewhere. Here's
what I did:
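
In essence: one tiny script to copy my working directory out to the host over
the share before the VM gets reset, and one to copy it back in once it's booted
again. Give or take, it's the shape of this sketch - robocopy, the share path,
and the directory names are all stand-ins rather than the real thing:

    REM save.bat - run inside the VM before resetting it: mirror the working
    REM copy out to the host over the SSHFS share.
    robocopy C:\src\work \\sshfs\oliver@192.168.122.1\vm-backup\work /MIR

    REM restore.bat - run after the VM boots from the clean snapshot: pull the
    REM working copy back in.
    robocopy \\sshfs\oliver@192.168.122.1\vm-backup\work C:\src\work /MIR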

Now, the task being done here would actually be done pretty well if I were to
run a tiny git server on the Linux host, because then I can just commit and push
as I go, then clone when I boot the VM. What you might notice then is that I
would then be using a VCS as patchwork for the ridiculous short-comings of
another VCS - using a VCS to manage code being managed by another VCS. While it
would've been absolutely hilarious to run such a setup, I thought it best to
stick to my 2 tiny scripts and desperately hope that we move to git soon - at
which point all my troubles kind of fizzle out and things are nice again.
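
For what it's worth, such a "git server" would be barely anything: a bare repo
on the host, reachable over the SSH setup that's already there (host address and
paths made up):

    # On the host: an empty bare repo is the whole "server".
    git init --bare ~/scratch/work.git

    # From the VM: push work in progress to it as you go, then clone it again
    # after a reset.
    git clone ssh://oliver@192.168.122.1/home/oliver/scratch/work.git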

### DEVELOPMENT ON LINUX ###

This flaming Rube-Goldberg machine of a system is ok for now, but of course what
I really want is to not need the VM at all and be able to just do everything
from Linux. Sadly, the SDK for G (as introduced in the previous post) still
isn't .NET Core, so all our programs are still locked to Windows. One thing I
might experiment with when I have more time is to set the VM up more like a
build server, because if I'm able to work with the code fine and use something
like
YouCompleteMe to get all the nice completion for NuGet packages, then I'd be
able to work on Linux, then build and test on Windows. That would immediately be
a tremendous improvement, but not all the way there.

The C# completer for YouCompleteMe isn't quite there yet as of writing this. It
spent a horrifically long time being based on the ancient language server,
rendering it essentially useless for all that time. It was fairly recently
upgraded to the Roslyn server, which has been a thing for several years now mind
you, and we're so very close to being able to use it. All that's left is for the
installer for the Vim plugin to switch to using the new completer, because then
it'll no longer need all the old msbuild and mono packages and I'll finally be
able to start really tinkering with this dev setup in Vim and tweaking it for
maximum comfort.

The completion of the YouCompleteMe update, our hopeful upgrade to git, and the
eventual SDK upgrade to .NET Core are the last 3 obstacles standing in the way
of the liberation of development. The first will be done on a scale of
days I expect, the second ideally in weeks, and the third more likely in months,
but patience is key, and if we're lucky, then these things might come sooner
than we expect.

Here's to liberation of development from Windows and escape into the wonderful
world of Linux. It's only up from here!

### CATEGORIES ###

Linux - Work
Copyright Oliver Ayre 2019. Site licensed under the GNU Affero General Public
Licence version 3 (AGPLv3).