EBS Volume Attachment Issues (Linux)

On the rare occasion that I cannot find documentation explaining an issue I see in AWS, I do open a support ticket! Thankfully, I work for a company that pays for support on its Production accounts.

Recently, I came across an oddity under some special circumstances. Here’s how it reared its head and how I solved it – with a stamp of approval from AWS support.

Let’s assume the following:

  1. You need a Linux-based server; in this case it will be Debian-based – specifically Ubuntu 18.04. This solution should work on 16.04 as well, but I have not tested it.
  2. You also need to mount one or more EBS volumes to this server, and they can vary in flavor. What I mean by that is, one can be io1, another gp2, and yet another one of the HDD options.
  3. The last requirement is that these EBS volumes get attached and mounted to a folder of your choice through user-data. Forward thinking here: the reason for doing this through automation instead of just logging in and doing it manually is the ability to use Auto Scaling Groups. Imagine that you need to scale this out, OR an instance goes bad and you want it replaced automatically without human intervention. Doing it by hand at 5am on a Saturday morning after a night out might be challenging.

Now that we have the assumptions down, you are probably thinking, “This should be no problem, right? Just make a new folder, use the device name that comes after your root device, and mount it to the new folder.” In theory, this is great. In reality, it does not work.

As per AWS (Linux) support, the volumes get attached in random order, and this is completely expected behavior. So on one EC2 instance your root volume may get the device name “nvme1n1” and on another it may get “nvme0n1”. This means you just can’t go sequentially. I tried guessing, outguessing, predicting, and forcing how they attached until I finally gave in.
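To make that concrete, here is the kind of thing you can see across two instances launched from the exact same template (hypothetical output from lsblk, trimmed to the interesting columns and annotated by me):

# Instance A
$ lsblk -o NAME,SIZE,TYPE
NAME     SIZE TYPE
nvme0n1    8G disk   <- root
nvme1n1  110G disk
nvme2n1  100G disk

# Instance B
$ lsblk -o NAME,SIZE,TYPE
NAME     SIZE TYPE
nvme0n1  100G disk
nvme1n1    8G disk   <- root
nvme2n1  110G disk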

Now that I knew this was expected behavior, AWS support said I had to “introduce some logic for this to be successful through user-data automation”.

This was tricky for two reasons. First, I was attaching two additional drives that were the same size. Second, they were different EBS volume types: one was io1 and the other gp2.

To get around this, I just made one 100GB and the other 110GB so I could tell them apart through some nifty data manipulation.
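For anyone wondering what that looks like at launch time, here is a rough AWS CLI sketch of the two block device mappings. The AMI, instance type, device names, and IOPS value are placeholders, and which size goes with which volume type is arbitrary here – the point is just that the two sizes differ:

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type m5.large \
  --block-device-mappings '[
    {"DeviceName": "/dev/sdf", "Ebs": {"VolumeType": "io1", "Iops": 3000, "VolumeSize": 100}},
    {"DeviceName": "/dev/sdg", "Ebs": {"VolumeType": "gp2", "VolumeSize": 110}}
  ]'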

And now, the magic…

My user-data (bash) code had to go from this:

...
"sudo mkdir -p /mnt/urfolder",
"\n",
"(echo n; echo p; echo ; echo ; echo ; echo w) | sudo fdisk /dev/nvme1n1 # MANUAL: n, p, enter, enter, enter, w",
"\n",
"sudo mkfs -t ext4 /dev/nvme1n1p1,
"\n",
"sudo mount -o defaults,noatime,_netdev,nofail /dev/nvme1n1p1 /mnt/urfolder",
"\n",
"sudo sh -c 'echo \"/dev/nvme1n1p1 /mnt/urfolder ext4 defaults,noatime,_netdev,nofail 0 2\" >> /etc/fstab'",
"\n",
...

To this:

...
"sudo mkdir -p /mnt/urfolder",
"\n",
"VERVOL=$(lsblk | grep \"110G\" | grep -v \"p\" | awk '{print $1}' )",
"\n",
"(echo n; echo p; echo ; echo ; echo ; echo w) | sudo fdisk /dev/\"$VERVOL\" # MANUAL: n, p, enter, enter, enter, w",
"\n",
"sudo mkfs -t ext4 /dev/\"$VERVOL\"p1",
"\n",
"sudo mount -o defaults,noatime,_netdev,nofail /dev/\"$VERVOL\"p1 /mnt/urfolder",
"\n",
"sudo sh -c 'echo \"/dev/\"$VERVOL\"p1 /mnt/urfolder ext4 defaults,noatime,_netdev,nofail 0 2\" >> /etc/fstab'",
"\n",
...
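A quick aside on the quoting: the snippets above are written as the individual strings of a templated user-data block (CloudFormation’s Fn::Join style is my assumption here), which is why every line is wrapped in quotes with a "\n" after it. Stripped of that wrapping, the logic boils down to roughly this plain user-data shell script (user-data runs as root, so the sudo calls become optional):

#!/bin/bash
# Find the device name the 110GB volume landed on; grep -v "p" drops any partition lines
VERVOL=$(lsblk | grep "110G" | grep -v "p" | awk '{print $1}')

# Partition the disk non-interactively (n, p, enter, enter, enter, w)
(echo n; echo p; echo ; echo ; echo ; echo w) | fdisk /dev/"$VERVOL"

# Format the new partition, mount it, and persist the mount across reboots
mkfs -t ext4 /dev/"${VERVOL}"p1
mkdir -p /mnt/urfolder
mount -o defaults,noatime,_netdev,nofail /dev/"${VERVOL}"p1 /mnt/urfolder
echo "/dev/${VERVOL}p1 /mnt/urfolder ext4 defaults,noatime,_netdev,nofail 0 2" >> /etc/fstab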

If you have some experience with user-data and bash, you can see what’s going on. I am creating a variable called “VERVOL” that figures out which device name AWS assigned to my 110GB drive. The line that builds the variable breaks down like this:

lsblk | grep "110G" | grep -v "p" | awk '{print $1}'

lsblk lists the block devices and passes the output to grep, which singles out any line mentioning the 110GB drive. If the disk already has a partition on it, that can be two lines: the device itself and the partition under it. We want the device, not the partition, so the second grep with the -v switch throws away anything containing a “p” (a partition name like nvme1n1p1 has one, the bare device name nvme1n1 does not). awk then takes what is left and prints the first word by using ‘{print $1}’.
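Or, stage by stage, with the “110G” string being whatever size you gave the volume:

# lsblk            - list every block device on the instance
# grep "110G"      - keep only the line(s) for the 110GB volume
# grep -v "p"      - drop any partition lines (their names, like nvme1n1p1, contain a "p")
# awk '{print $1}' - print the first column, which is the bare device name
VERVOL=$(lsblk | grep "110G" | grep -v "p" | awk '{print $1}')
echo "$VERVOL"   # something like nvme1n1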

Essentially you go from this output (I apologize for the formatting, WordPress isn’t being nice):

NAME          MAJ:MIN  RM   SIZE  RO  ....
...
nvme1n1         8:0     0   110G   0  ....
└─nvme1n1p1     8:1     0   110G   0  ....
...

To:

nvme1n1

That is the first word of the device line, and it becomes your variable VERVOL. The later commands tack “p1” onto it to get the partition (for example nvme1n1p1), which gets created, formatted, and mounted to “urfolder”, and the fstab entry makes it permanent through reboots. You just rinse and repeat for all the other attached EBS volumes, as in the sketch below. In my experience, you can make each volume 1GB different from the rest and it will work, although your mileage may vary based on Linux distro.
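To make the rinse and repeat concrete, here is what the same treatment looks like for the second (100GB) volume. The variable name OTHERVOL and the mount point /mnt/otherfolder are placeholders I am making up for illustration:

OTHERVOL=$(lsblk | grep "100G" | grep -v "p" | awk '{print $1}')
(echo n; echo p; echo ; echo ; echo ; echo w) | sudo fdisk /dev/"$OTHERVOL"
sudo mkfs -t ext4 /dev/"$OTHERVOL"p1
sudo mkdir -p /mnt/otherfolder
sudo mount -o defaults,noatime,_netdev,nofail /dev/"$OTHERVOL"p1 /mnt/otherfolder
echo "/dev/${OTHERVOL}p1 /mnt/otherfolder ext4 defaults,noatime,_netdev,nofail 0 2" | sudo tee -a /etc/fstab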

I ended up testing this out 3-4 times and slapping that code right into my response on the ticket to AWS support. The support rep said it was “spot on” and claimed he was happy I had figured out such an “elegant” and quick solution.

There you have it. Hopefully, if you came here needing this answer, it helped you. Otherwise, I hope you at least got a glimpse of how to manipulate your data so you can use it to your advantage. As always, I am sure it is not the only solution, but it works!

For reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html

Happy Building!

1 Reply to “EBS Volume Attachment Issues (Linux)”

  1. Unless you’re using a drastically different version of grep than I am, I think you don’t want to use the ‘-v’ switch to grep in your command, that would filter out the line with the ‘p’, which is what you’re looking for, no?
