I have a tiny EC2 instance for small experiments and needed to resize the root EBS volume from 8G to 10G. There is a good official guide for this: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html, but it didn’t quite work for me. Below is what I tried and how I solved it.
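For the record, the resize request itself is a single aws CLI call along these lines (the volume ID below is a placeholder, substitute your own):

```shell
# Request the resize (vol-0abc1234 is a placeholder volume ID)
aws ec2 modify-volume --volume-id vol-0abc1234 --size 10

# The modification is asynchronous; poll until it reports
# "optimizing" or "completed"
aws ec2 describe-volumes-modifications --volume-ids vol-0abc1234
```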
growpart error NOCHANGE
The aws CLI tool is great and I was able to request the resize. The next step was to SSH into the VPS, run lsblk, and grow the first partition. Here’s what I got:
$ sudo growpart /dev/nvme0n1 1
NOCHANGE: partition 1 could only be grown by -33 [fudge=2048]
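The -33 makes sense if you redo growpart’s arithmetic (spoiler: the kernel still saw an 8 GiB disk at this point). growpart reserves 33 sectors at the end of the disk for a possible GPT backup header, and the existing partition already extended past that limit. A sketch of the calculation, using the sector counts that show up in the sfdisk dump later in this post:

```shell
disk_sectors=16777216   # 8 GiB in 512-byte sectors, as the kernel still saw it
part_start=2048         # from the sfdisk dump
part_size=16775168      # from the sfdisk dump
gpt_pad=33              # growpart reserves 33 sectors for a GPT backup header

part_end=$(( part_start + part_size ))   # 16777216
max_end=$(( disk_sectors - gpt_pad ))    # 16777183
echo "can grow by $(( max_end - part_end )) sectors"
# → can grow by -33 sectors
```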
All the posts I found online mostly reiterated the steps from the guide; there was nothing about this error anywhere. The closest I could find is this thread on the AWS forum https://forums.aws.amazon.com/thread.jspa?threadID=293496 saying:
When I used the same instance type (c5.2xlarge) and the same OS (Debian Linux 8 (jessie)) as you, I faced the same issue as you. However, when I repeated the same test using Amazon Linux AMI and I wasn’t able to reproduce the issue. Hence the issue is related to the OS and its configuration, rather than the C5 instance type per-se.
In my case, I’m using an Arch Linux image from https://www.uplinklabs.net/projects/arch-linux-on-ec2/. According to the guides, I should be able to resize the volume without detaching it, but that didn’t happen here; maybe the kernel in this image is missing some necessary support. I finally noticed that lsblk still reported the old size:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 8G 0 disk
└─nvme0n1p1 259:1 0 8G 0 part /
And that’s obviously the cause of the error above.
growpart “failed to resize”
When I restarted the instance, the nvme0n1 device was indeed 10 GB, whereas the first partition was still 8 GB. Here we go:
$ sudo growpart -v -v -v /dev/nvme0n1 1
update-partition set to true
resizing 1 on /dev/nvme0n1 using resize_sfdisk_dos
running[sfd_list][erronly] sfdisk --list --unit=S /dev/nvme0n1
20971520 sectors of 512. total size=10737418240 bytes
running[sfd_dump][erronly] sfdisk --unit=S --dump /dev/nvme0n1
## sfdisk --unit=S --dump /dev/nvme0n1
label: dos
label-id: 0xba136a53
device: /dev/nvme0n1
unit: sectors
sector-size: 512

/dev/nvme0n1p1 : start=2048, size=16775168, type=83
padding 33 sectors for gpt secondary header
max_end=20971487 tot=20971520 pt_end=16777216 pt_start=2048 pt_size=16775168
attempt to resize /dev/nvme0n1 failed. sfdisk output below:
| Backup files:
|  MBR (offset 0, size 512): /tmp/growpart.VTEND5/orig.save-nvme0n1-0x00000000.bak
|
| Disk /dev/nvme0n1: 10 GiB, 10737418240 bytes, 20971520 sectors
| Disk model: Amazon Elastic Block Store
| Units: sectors of 1 * 512 = 512 bytes
| Sector size (logical/physical): 512 bytes / 512 bytes
| I/O size (minimum/optimal): 512 bytes / 512 bytes
| Disklabel type: dos
| Disk identifier: 0xba136a53
|
| Old situation:
|
| Device         Boot Start      End  Sectors Size Id Type
| /dev/nvme0n1p1       2048 16777215 16775168   8G 83 Linux
|
| >>> Script header accepted.
| >>> Script header accepted.
| >>> Script header accepted.
| >>> Script header accepted.
| >>> line 5: unsupported command
|
| New situation:
| Disklabel type: dos
| Disk identifier: 0xba136a53
|
| Device         Boot Start      End  Sectors Size Id Type
| /dev/nvme0n1p1       2048 16777215 16775168   8G 83 Linux
| Leaving.
|
FAILED: failed to resize
***** WARNING: Resize failed, attempting to revert ******
512+0 records in
512+0 records out
512 bytes copied, 0.000578528 s, 885 kB/s
***** Restore appears to have gone OK ****
Uh-oh, it didn’t work. growpart is mentioned in all the guides, so I had thought it was AWS-specific, but it actually comes from the cloud-utils project, as the mkinitcpio-growrootfs package notes here: https://github.com/GregSutcliffe/aur-projects/tree/master/mkinitcpio-growrootfs. This package was installed in the AMI by default, and its README also mentions the grow hook for mkinitcpio, which should resize the root FS at boot. The hook was already set up, but it clearly didn’t work.
I was going to resort to the hassle of starting a new VPS, attaching the volume and resizing it there, when I launched sudo cfdisk /dev/nvme0n1 and found it had an option to resize the partition. I resized it to 10G, wrote the partition table, and done!
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 10G 0 disk
└─nvme0n1p1 259:1 0 10G 0 part /
$ sudo resize2fs /dev/nvme0n1p1
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/nvme0n1p1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/nvme0n1p1 is now 2621184 (4k) blocks long.
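As a back-of-the-envelope check (not anything resize2fs prints), that block count matches the partition geometry: 20971520 total sectors minus the 2048-sector gap before the first partition, converted from 512-byte sectors to 4k filesystem blocks:

```shell
total_sectors=20971520   # 10 GiB disk in 512-byte sectors
part_start=2048          # offset of the first partition
echo $(( (total_sectors - part_start) * 512 / 4096 ))
# → 2621184, exactly the block count resize2fs reported
```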
I also tried fdisk later, but apparently it has no command to resize a partition in place.