Coding with Jesse

Setting up a new computer

I love getting a new computer. I don't copy over all my files from my old computer anymore. Instead, I like to use it as a chance for a fresh start.

I have a vision, but so far it's been only a dream. My vision is that I could get access to any new computer, and within a few minutes be totally up and running with my full developer work environment, all my photos and videos, my documents, and everything else I have and need. The reality is nothing like this, of course. But I'm getting closer to it. Here's how I did it with my new laptop this past month.

Software

The first thing I need to do is set up my operating system (arch btw) and download all the software I need and use on a regular basis.

For me, this includes installing i3, fish, VS Code, git, rsync, rclone, mariadb, node, keepassxc, aws-cli, terminator, chromium, libreoffice, spotify, syncthing, workrave, and a few other things.
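
On Arch, most of those are a single pacman command away. Something like this covers the bulk of it (package names are approximate, and a few things like spotify come from the AUR rather than the official repos):

sudo pacman -S --needed i3-wm fish code git rsync rclone mariadb nodejs npm keepassxc aws-cli terminator chromium libreoffice-fresh syncthing workrave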

I could probably automate this and install everything that I had on my old computer, but I actually love the process of starting from scratch here and only installing the software I actually need and use. Arch Linux starts with a very minimalist environment, so I know that there's really nothing on this computer that I haven't explicitly installed.

Starting from scratch also gives a chance to try out some new internal services. For example, I'm now trying out using iwctl to manage my wifi connections instead of wpa_supplicant.
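
If you're curious about iwctl, connecting to a network looks roughly like this (assuming your wireless interface is named wlan0, and "MyNetwork" is whatever your SSID is):

iwctl station wlan0 scan
iwctl station wlan0 get-networks
iwctl station wlan0 connect MyNetwork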

SSH keys

Once I have my software installed, the only thing I have to copy over from my old computer on a USB stick is my SSH keys, i.e. the contents of ~/.ssh/.

These keys give me access to everything else. Once I have these keys, I'm already starting to feel at home.
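
Restoring them is just a copy and a permissions fix. Something like this, assuming the USB stick is mounted at /run/media/jesse/usb:

mkdir -p ~/.ssh
cp -r /run/media/jesse/usb/.ssh/. ~/.ssh/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_*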

Git

The SSH keys give me access to servers. On one of those servers live my private Git repositories. I like to keep all these git clones in a /code directory on my computer. I clone all my active projects via ssh:

sudo mkdir /code
sudo chown jesse:jesse /code
cd /code
git clone ssh://[email protected]/~/git/codingwithjesse
git clone ssh://[email protected]/~/git/joyofsvelte
git clone ssh://[email protected]/~/git/dotfiles
# etc..

dotfiles

One of the Git repos I cloned is a private dotfiles repository that has all the configuration I care about. I make sure to push changes to this repo from my old computer one last time before cloning here.

I use symlinks in my home directory so that the files live in the repo. I have an install.sh in my dotfiles repo that sets it all up:

#!/bin/bash

# Move any existing config out of the way into a timestamped backup directory
BACKUP=backup-$(date +%s)

mkdir "$BACKUP"
mv ~/.bashrc "$BACKUP"
mv ~/.bash_prompt "$BACKUP"
mv ~/.bash_profile "$BACKUP"
mv ~/.aws "$BACKUP"
mv ~/.gitconfig "$BACKUP"
mv ~/.config/i3 "$BACKUP"
mv ~/.config/i3status "$BACKUP"
mv ~/.config/fish "$BACKUP"
mv ~/.config/rclone "$BACKUP"
mv ~/.local/share/fish/fish_history "$BACKUP"

# Symlink everything from the repo into place, so the real files live in the repo
DIR=$(pwd)

ln -s "$DIR/.bashrc" ~/.bashrc
ln -s "$DIR/.bash_profile" ~/.bash_profile
ln -s "$DIR/.bash_prompt" ~/.bash_prompt
ln -s "$DIR/.aws" ~/.aws
ln -s "$DIR/.gitconfig" ~/.gitconfig
ln -s "$DIR/.config/i3" ~/.config/i3
ln -s "$DIR/.config/i3status" ~/.config/i3status
ln -s "$DIR/.config/fish" ~/.config/fish
ln -s "$DIR/.config/rclone" ~/.config/rclone
ln -s "$DIR/.local/share/fish/fish_history" ~/.local/share/fish/fish_history

Of course, the set of dotfiles you care about will probably be different.

rsync

I also keep a backup of all my important documents (taxes, contracts, PDFs and spreadsheets) and my passwords (keepass database) on my server. I use rsync to backup these files, and I also use it to restore my backups:

rsync -avz [email protected]:~/docs/ ~/docs/
rsync -avz [email protected]:~/passwords/ ~/passwords/
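
The backup direction is just the same commands with source and destination swapped, run from the old computer (or a cron job) before switching over:

rsync -avz ~/docs/ [email protected]:~/docs/
rsync -avz ~/passwords/ [email protected]:~/passwords/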

Perhaps I could keep these in Git repos as well, for simplicity. It might be nice to have versioning on my tax documents and contracts, even though they don't change much.

rclone

For larger files, like photos and videos, I use rclone to manage an encrypted backup in object storage. I really enjoy using rclone. I love how it provides a really easy command-line interface, abstracting away a wide variety of cloud storage systems. I've switched between these services based on price a few times, and it was really easy to do.
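
For reference, the encrypted setup is just a crypt remote layered on top of a regular remote in ~/.config/rclone/rclone.conf. Yours will look different, but the shape is something like this (the remote names, bucket and passwords here are placeholders; the passwords are the obscured values that rclone config generates):

[storage]
type = s3
provider = AWS
env_auth = true
region = us-east-1

[media]
type = crypt
remote = storage:my-backup-bucket/media
password = OBSCURED_PASSWORD
password2 = OBSCURED_SALT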

rclone also has a useful ability to mount a backup to a directory. For the first time on this new computer, I have this set up in /mnt/media, with directories like /mnt/media/photos and /mnt/media/videos so I can easily browse and view all my content without copying anything to my computer.

I have this set up as a user-based systemd service. It's user-based so that it has access to my credentials in ~/.config/rclone.
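
Depending on your setup, there may be a bit of one-time prep before a user-level mount with --allow-other will work: the mount point has to exist, and FUSE has to be told that non-root users can use --allow-other. Roughly:

sudo mkdir -p /mnt/media
sudo chown jesse:jesse /mnt/media
echo "user_allow_other" | sudo tee -a /etc/fuse.conf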

I created a file in ~/.config/systemd/user/rclone.service:

[Unit]
Description=rclone
AssertPathIsDirectory=/mnt
# Make sure we have network enabled
After=network.target

[Service]
Type=simple

ExecStart=/usr/bin/rclone mount --allow-other --vfs-cache-mode full media: /mnt/media

# Perform lazy unmount
ExecStop=/usr/bin/fusermount -zu /mnt/media

# Restart the service whenever rclone exits with a non-zero exit code
Restart=on-failure
RestartSec=15

[Install]
# Autostart after reboot
WantedBy=default.target

I enabled and started it with systemctl:

systemctl --user daemon-reload
systemctl --user enable rclone
systemctl --user start rclone
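
If everything is working, the mount shows up right away:

systemctl --user status rclone
ls /mnt/media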

This was my first time creating a systemd service manually, and the first time I added a user-based service, and I found it really cool. I would like to learn more about systemd. It seems like a really simple and powerful system, so I can see why so many people have strong feelings about it.

Home! Sweet home!

From here I'm all set up and ready to go. I immediately feel at home, and quickly forget that this isn't the same computer I've always used.

All my code lives in Git repos that I push to a remote server. All my important configuration files live in a Git repo. All my important documents and passwords get backed up to a remote server. All my photos and videos live in a remote bucket storage. As long as I have access to my SSH keys, I'll be able to get up and running from scratch on a new computer within a few hours.

There's really nothing that lives only on this computer, and that makes me feel great.

Published on October 30th, 2024. © Jesse Skinner

Svelte 5 is here!

In case you missed it, Svelte 5 was finally released!

It's been here for a while as a pre-release, but I held off on using it until it was finalized. (I didn't want to learn how to use the new syntax and then have to un-learn and re-learn as the team reworked things and added or removed functionality. But now these decisions have been finalized, the syntax is cemented, and I'm excited to start trying out these new ways of doing things.)

Svelte 5 marks a significant change in the language itself. "Runes" are a new way of writing reactive code with Svelte. This allows a major simplification of the API and surface area of Svelte, making Svelte easier for beginners to learn and use. I think it was a smart move, as it surely makes it easier for people to switch to Svelte.

For those of us who are already deeply invested in Svelte, the new syntax will take some getting used to. Fortunately, the old syntax will continue to be available until Svelte 6 or 7 is released, so we have some time to adapt and learn. Also, there is a migration script. I haven't tried it yet, but it should automatically switch your code over to the new syntax.

Some of the more "magic" Svelte features, like the reactive $: statement, are now deprecated. So are some features I really loved, such as being able to add modifiers to event handlers like on:click|preventDefault. Stores will continue to be supported, but there is some pressure to switch over to using runes instead. I haven't made up my mind yet as to whether I'll rewrite all my stores with runes, or whether I still prefer how stores work in some cases.

I'm also excited that this marks the beginning of a new chapter in the Svelte story. I think it's similar to when React introduced hooks, and changed the way we write React code. I can't imagine hooks will ever go away, and similarly I think runes are here to stay.

If you still haven't tried out Svelte, I think this is the perfect time to learn it. It's already a very mature platform, and this recent change means things will likely be quite stable from here on out. I started using Svelte in production in 2019, and I haven't looked back.

To read more about migrating to Svelte 5, including all the new syntax, check out the well-written Svelte 5 migration guide. Be sure to click on the "Why we did this" sections for more background and context on the reasoning behind the changes; I found these very helpful.

Published on October 29th, 2024. © Jesse Skinner

Does your web server scale down?

[Photo: a laptop computer sleeping in the moonlight]

Are you paying for servers sitting idle in the middle of the night?

When we talk about scaling a web server, we often focus on scaling up. Can your server handle a spike in traffic? As your business grows, can your database handle the growth?

There's less focus on scaling down. It makes sense, because most businesses are focused on growth. Not too many are looking to shrink. But if you're not careful, your server costs might go up and never come back down.

No web traffic is completely consistent. It grows during the day when people are awake. It shrinks at night when people sleep. It spikes with a popular marketing campaign. It retracts after a marketing campaign winds down.

A simple approach to scaling is to turn up the dial when a server gets overwhelmed. Upgrade to a server with a more powerful CPU. Increase the memory available. Unfortunately, this approach only moves in one direction.

A better solution is to have a dial that can turn both up and down. The way to achieve this is through a pool of servers and a load balancer. When traffic increases, start up new servers. When traffic decreases, terminate the excess capacity. Keep all your servers as busy as possible.

For lower volume sites, serverless deployments handle this beautifully. When nobody is using the server, you don't pay anything. When there's a spike, it can scale up to handle it.

At some point, it becomes cheaper and faster to run your own servers. If you do, you'll want an autoscaling pool and a load balancer. It might only have a small server in it most of the time. You'll need to define some rules so that it scales up when it gets overwhelmed. When things calm down, make sure it scales back down to one server.
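
With AWS, for example, that boils down to an Auto Scaling group behind a load balancer with a target tracking policy, so capacity follows demand in both directions. A rough sketch with the AWS CLI (the names, subnets and target value here are made up):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-pool \
  --launch-template LaunchTemplateName=web-server \
  --min-size 1 --max-size 6 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaaaaaa,subnet-bbbbbbbb" \
  --target-group-arns "$TARGET_GROUP_ARN"

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-pool \
  --policy-name track-cpu \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'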

You'll sleep better at night knowing that your servers and costs are resting too.

Published on June 12th, 2024. © Jesse Skinner

Goldilocks and the Three Developers

Goldilocks was the lead of a software development team. She needed to review pull requests from three of her team members.

The first developer's code was a mess. It relied on some deprecated features of an outdated library. Its few modules were long and complex, trying to do too many different things. There were no tests, so it was impossible to be sure the code was bug-free. The architecture needed to run on a single server, so it could never scale up. There was no way to know whether it did what it was supposed to do.

The second developer's code was also a mess. The system was built using some brand new libraries and coding paradigms. It was composed of a dozen different interconnected microservices. There was a very thorough test suite, testing every implementation detail. The system included infrastructure as code, but couldn't run on a single computer. There was no way to know whether it did what it was supposed to do.

The third developer's code was just right. It used the latest versions of libraries the team was familiar with. The system was split up into a dozen simple modules. It was obvious what each module did, and how it fit within the business requirements. There were a few tests for the core functionality, so she knew that it was working. The system was easy to get running, but was simple enough to scale up infinitely. It was very easy to understand, and to know that it did what it was supposed to do.

Goldilocks then had a meeting with the three developers.

She told the first developer their code was under-engineered. She said they should take some time to simplify it and make it easier for other developers to understand and work with it.

She told the second developer their code was over-engineered. She said they should take some time to simplify it and make it easier for other developers to understand and work with it.

She said well done to the third developer and approved the pull request.

Published on June 5th, 2024. © Jesse Skinner

Unable to locate credentials in AWS

The Problem

If you have servers in AWS doing a high volume of AWS service requests, you may come across some rare but frustrating sporadic credential errors like these:

"Unable to locate credentials"

or if you're using aws-sdk in Node.js:

"CredentialsProviderError: Could not load credentials from any providers"

I'm not totally sure why these errors happen, but typically I see them happen across multiple services, accounts and regions around the same time, which leads me to believe that there can be some sporadic flakiness in the metadata service used for fetching IAM credentials.

I tried using metadata retries and other configuration parameters to prevent this, but they didn't seem to make any difference.
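
(If you want to experiment with those settings yourself: for the AWS CLI and the Python SDK, the metadata retry behaviour is controlled by environment variables, and the Node.js SDK has equivalent options on its instance metadata credential provider.)

# Tell the CLI / botocore to retry the instance metadata service before giving up
export AWS_METADATA_SERVICE_NUM_ATTEMPTS=5
export AWS_METADATA_SERVICE_TIMEOUT=5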

The Solution

Looking for a solution, I found this buried in the AWS documentation for instance metadata retrieval:

"If you're using the IMDS to retrieve AWS security credentials, avoid querying for credentials during every transaction or concurrently from a high number of threads or processes, as this might lead to throttling. Instead, we recommend that you cache the credentials until they start approaching their expiry time."

Now, I don't think this throttling was the source of all the errors I was seeing, but it may be playing a role. Maybe the metadata service tolerance for throttling changes over time as demand changes, I don't know.

Either way, this gave me an idea to write a bash script to cache the IAM credentials in ~/.aws/credentials so they could be used by both the AWS CLI, and also any Node.js or Python clients accessing the AWS services:

#!/bin/bash

IMDS_URL="http://169.254.169.254/latest/meta-data/iam/security-credentials/"
AWS_CREDENTIALS_PATH="$HOME/.aws/credentials" # ~ wouldn't expand inside quotes, so use $HOME
PROFILE_NAME="default"

# 4.5 minutes, because new credentials appear 5 minutes before expiry
EXPIRY_BUFFER=270

get_aws_credentials() {
    local role_name=$(curl -s $IMDS_URL)
    local credentials_url="${IMDS_URL}${role_name}"
    local response=$(curl -s $credentials_url)

    local access_key_id=$(echo $response | jq -r '.AccessKeyId')
    local secret_access_key=$(echo $response | jq -r '.SecretAccessKey')
    local token=$(echo $response | jq -r '.Token')
    local expiration=$(echo $response | jq -r '.Expiration')
    local expiration_time=$(date -d "$expiration" +%s)

    echo "[$PROFILE_NAME]" > $AWS_CREDENTIALS_PATH
    echo "aws_access_key_id = $access_key_id" >> $AWS_CREDENTIALS_PATH
    echo "aws_secret_access_key = $secret_access_key" >> $AWS_CREDENTIALS_PATH
    echo "aws_session_token = $token" >> $AWS_CREDENTIALS_PATH
    echo "expiration = $expiration_time" >> $AWS_CREDENTIALS_PATH
}

should_fetch_credentials() {
    if [[ ! -f $AWS_CREDENTIALS_PATH ]]; then
        return 0
    fi

    local expiration_time=$(grep 'expiration' $AWS_CREDENTIALS_PATH | cut -d ' ' -f 3)
    local current_time=$(date +%s)

    if (( $current_time + $EXPIRY_BUFFER > $expiration_time )); then
        return 0
    fi

    return 1
}

if should_fetch_credentials; then
    get_aws_credentials
fi

Since the credentials have to be refreshed every few hours, I set it up to run in a cron job every minute, to check if the expiration time has come:

* * * * * /home/ec2-user/credentials.sh > /dev/null 2>&1

Voila! No more credential errors! I hope that helps. Let me know if you've run into the same errors, and if you found this approach useful.

Published on May 30th, 2024. © Jesse Skinner