How to move from a linear disk to an LVM disk and join the two disks into an LVM-like RAID-0

I recently needed to add a disk to an existing installation of Ubuntu, to make the / filesystem bigger. In such a case, there are two possibilities: move the whole system to a new, bigger disk (and e.g. dispose of the original one), or convert the disk to an LVM volume and add a second disk so the volume can grow. The first case was the subject of a previous post, but this time I learned…

How to move from a linear disk to an LVM disk and join the two disks into an LVM-like RAID-0

The starting point is simple:

  • I have one 14 GB disk (/dev/vda) with a single partition that is mounted in / (the disk has a GPT table and boots via UEFI, so it has extra partitions that we’ll keep as they are).
  • I have a brand new 80 GB disk (/dev/vdb).
  • I want to have one 94 GB volume built from the two disks.
root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 14G 0 disk
├─vda1 252:1 0 13.9G 0 part /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 80G 0 disk /mnt
vdc 252:32 0 4G 0 disk [SWAP]

The steps are the following:

  1. Create a boot partition in /dev/vdb (this is needed because GRUB cannot boot from LVM and needs an ext or VFAT partition)
  2. Format the boot partition and copy the contents of the current /boot folder into it
  3. Create an LVM volume using the extra space in /dev/vdb and initialize it using an ext4 filesystem
  4. Put the contents of the current / folder into the new partition
  5. Update grub to boot from the new disk
  6. Update the mount point for our system
  7. Reboot (and check)
  8. Add the previous disk to the LVM volume.

Let’s start…

Separate the /boot partition

When installing an LVM system, you need a /boot partition in a common format (e.g. ext2 or ext4), because GRUB cannot read from LVM. GRUB reads the contents of that partition and loads the proper modules to read the LVM volumes.

So we need to create the /boot partition. In our case, we are using the ext2 format, because it has no journal (we do not need one for the contents of /boot) and it is faster. We are using 1 GB for the /boot partition, but 512 MB would probably be enough:
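If you want to rehearse the formatting step before touching a real disk, mkfs can operate on a plain image file. A minimal sketch (the file name is arbitrary, and -F is needed to skip the "not a block device" prompt):

```shell
# Rehearsal on an image file: mkfs.ext2 works on regular files, no root needed.
truncate -s 64M boot.img            # sparse 64 MiB image standing in for /dev/vdb1
mkfs.ext2 -F -q boot.img            # -F: allow a regular file; -q: quiet
dumpe2fs -h boot.img 2>/dev/null | head -n 3    # confirm the filesystem exists
```

The real command on the partition is the same minus the -F, as shown below.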

root@somove:~# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-167772159, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-167772159, default 167772159): +1G

Created a new partition 1 of type 'Linux' and of size 1 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@somove:~# mkfs.ext2 /dev/vdb1
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 24618637-d2d4-45fe-bf83-d69d37f769d0
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

Now we’ll make a mount point for this partition, mount the partition and copy the contents of the current /boot folder to that partition:

root@somove:~# mkdir /mnt/boot
root@somove:~# mount /dev/vdb1 /mnt/boot/
root@somove:~# cp -ax /boot/* /mnt/boot/
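Before relying on the new partition, it is worth checking that the copy is complete. A sketch of such a check, rehearsed on temporary directories instead of the real /boot (note that copying src/. instead of src/* also picks up dotfiles):

```shell
# Verify that a cp -ax copy is complete by diffing the two trees.
src=$(mktemp -d)                 # stands in for /boot
dst=$(mktemp -d)                 # stands in for /mnt/boot
echo kernel > "$src/vmlinuz"     # dummy content
cp -ax "$src/." "$dst/"          # the /. form includes hidden files too
diff -r "$src" "$dst" && echo "copies match"
```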

Create an LVM volume in the extra space of /dev/vdb

First, we will create a new partition for our LVM system, taking all the remaining free space:

root@somove:~# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (2-4, default 2):
First sector (2099200-167772159, default 2099200):
Last sector, +sectors or +size{K,M,G,T,P} (2099200-167772159, default 167772159):

Created a new partition 2 of type 'Linux' and of size 79 GiB.

Command (m for help): w
The partition table has been altered.
Syncing disks.

Now we will create a Physical Volume, a Volume Group and the Logical Volume for our root filesystem, using the new partition:

root@somove:~# pvcreate /dev/vdb2
Physical volume "/dev/vdb2" successfully created.
root@somove:~# vgcreate rootvg /dev/vdb2
Volume group "rootvg" successfully created
root@somove:~# lvcreate -l +100%free -n rootfs rootvg
Logical volume "rootfs" created.

If you want to learn about LVM to better understand what we are doing, you can read my previous post.

Now we will initialize the filesystem of the new /dev/rootvg/rootfs volume using ext4, and then we’ll copy the existing filesystem except for the special folders and the /boot folder (which we have separated into the other partition):

root@somove:~# mkfs.ext4 /dev/rootvg/rootfs
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 20708352 4k blocks and 5177344 inodes
Filesystem UUID: 47b4b698-4b63-4933-98d9-f8904ad36b2e
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

root@somove:~# mkdir /mnt/rootfs
root@somove:~# mount /dev/rootvg/rootfs /mnt/rootfs/
root@somove:~# rsync -aHAXx --delete --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/boot/*,/lost+found} / /mnt/rootfs/
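One subtlety in the rsync command above: the --exclude={…} form is not rsync syntax but bash brace expansion; the shell turns it into one --exclude option per pattern before rsync ever runs. A quick way to see what rsync actually receives:

```shell
# bash expands the braces into separate --exclude arguments:
bash -c 'echo rsync -aHAXx --delete --exclude={/dev/*,/proc/*,/sys/*} / /mnt/rootfs/'
# Under a plain POSIX sh the braces would be passed through literally,
# so run the command from bash (or spell out each --exclude by hand).
```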

Update the system to boot from the new /boot partition and the LVM volume

At this point we have our /boot partition (/dev/vdb1) and the / filesystem (/dev/rootvg/rootfs). Now we need to prepare GRUB to boot using these new resources. And here comes the magic…

root@somove:~# mount --bind /dev /mnt/rootfs/dev/
root@somove:~# mount --bind /sys /mnt/rootfs/sys/
root@somove:~# mount -t proc /proc /mnt/rootfs/proc/
root@somove:~# chroot /mnt/rootfs/

We are binding the special mount points /dev and /sys to the same folders in the new filesystem which is mounted in /mnt/rootfs. We are also creating the /proc mount point which holds the information about the processes. You can find some more information about why this is needed in my previous post on chroot and containers.

Intuitively, we are somehow “in the new filesystem” and now we can update things as if we had already booted into it.

At this point, we need to update the mount points in /etc/fstab so that the proper devices are mounted once the system boots. So we get the UUIDs of our partitions:

root@somove:/# blkid
/dev/vda1: LABEL="cloudimg-rootfs" UUID="135ecb53-0b91-4a6d-8068-899705b8e046" TYPE="ext4" PARTUUID="b27490c5-04b3-4475-a92b-53807f0e1431"
/dev/vda14: PARTUUID="14ad2c62-0a5e-4026-a37f-0e958da56fd1"
/dev/vda15: LABEL="UEFI" UUID="BF99-DB4C" TYPE="vfat" PARTUUID="9c37d9c9-69de-4613-9966-609073fba1d3"
/dev/vdb1: UUID="24618637-d2d4-45fe-bf83-d69d37f769d0" TYPE="ext2"
/dev/vdb2: UUID="Uzt1px-ANds-tXYj-Xwyp-gLYj-SDU3-pRz3ed" TYPE="LVM2_member"
/dev/mapper/rootvg-rootfs: UUID="47b4b698-4b63-4933-98d9-f8904ad36b2e" TYPE="ext4"
/dev/vdc: UUID="3377ec47-a0c9-4544-b01b-7267ea48577d" TYPE="swap"
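Only the UUID values matter for the next step. blkid can print just that tag with blkid -s UUID -o value <device>; the same extraction can be sketched with sed on a hard-coded sample line, so it runs without a real device:

```shell
# Extract the UUID field from a blkid-style line (sample line, not a live device).
line='/dev/vdb1: UUID="24618637-d2d4-45fe-bf83-d69d37f769d0" TYPE="ext2"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"    # 24618637-d2d4-45fe-bf83-d69d37f769d0
```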

Now we update /etc/fstab to mount /dev/mapper/rootvg-rootfs as the / folder, and partition /dev/vdb1 in /boot. Using our example, the /etc/fstab file will be this one:

UUID="47b4b698-4b63-4933-98d9-f8904ad36b2e" / ext4 defaults 0 0
UUID="24618637-d2d4-45fe-bf83-d69d37f769d0" /boot ext2 defaults 0 0
LABEL=UEFI /boot/efi vfat defaults 0 0
UUID="3377ec47-a0c9-4544-b01b-7267ea48577d" none swap sw,comment=cloudconfig 0 0

We are using the UUIDs to mount the / and /boot folders because the devices may change their names or locations, and that could break our system.

And now we are ready to mount our /boot partition, update GRUB, and install it on the /dev/vda disk (because we are keeping both disks):

root@somove:/# mount /boot
root@somove:/# update-grub
Generating grub configuration file ...
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Found linux image: /boot/vmlinuz-4.15.0-43-generic
Found initrd image: /boot/initrd.img-4.15.0-43-generic
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Found Ubuntu 18.04.1 LTS (18.04) on /dev/vda1
done
root@somove:/# grub-install /dev/vda
Installing for i386-pc platform.
Installation finished. No error reported.

Reboot and check

We are almost done; now we exit the chroot and reboot:

root@somove:/# exit
root@somove:~# reboot

And the result should be the following:

root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 14G 0 disk
├─vda1 252:1 0 13.9G 0 part
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 80G 0 disk
├─vdb1 252:17 0 1G 0 part /boot
└─vdb2 252:18 0 79G 0 part
└─rootvg-rootfs 253:0 0 79G 0 lvm /
vdc 252:32 0 4G 0 disk [SWAP]

root@somove:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 676K 394M 1% /run
/dev/mapper/rootvg-rootfs 78G 993M 73G 2% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdb1 1008M 43M 915M 5% /boot
/dev/vda15 105M 3.6M 101M 4% /boot/efi
tmpfs 395M 0 395M 0% /run/user/1000

We have our / system mounted from the new LVM Logical Volume /dev/rootvg/rootfs, the /boot partition from /dev/vdb1, and /boot/efi from the existing partition (just in case we need it).

Add the previous disk to the LVM volume

Here we face the easiest part: integrating the original /dev/vda1 partition into the LVM volume.

Once we have double-checked that we have copied every file from the original / folder in /dev/vda1, we can initialize it for use in LVM:

WARNING: This step wipes the content of /dev/vda1.

root@somove:~# pvcreate /dev/vda1
WARNING: ext4 signature detected on /dev/vda1 at offset 1080. Wipe it? [y/n]: y
Wiping ext4 signature on /dev/vda1.
Physical volume "/dev/vda1" successfully created.

Finally, we can integrate the new partition into our volume group and extend the logical volume to use the free space:

root@somove:~# vgextend rootvg /dev/vda1
Volume group "rootvg" successfully extended
root@somove:~# lvextend -l +100%free /dev/rootvg/rootfs
Size of logical volume rootvg/rootfs changed from <79.00 GiB (20223 extents) to 92.88 GiB (23778 extents).
Logical volume rootvg/rootfs successfully resized.
root@somove:~# resize2fs /dev/rootvg/rootfs
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/rootvg/rootfs is mounted on /; on-line resizing required
old_desc_blocks = 10, new_desc_blocks = 12
The filesystem on /dev/rootvg/rootfs is now 24348672 (4k) blocks long.
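The numbers reported by lvextend and resize2fs can be cross-checked with a little shell arithmetic (LVM extents are 4 MiB by default, and the filesystem uses 4 KiB blocks):

```shell
# 23778 extents of 4 MiB each:
echo "$((23778 * 4)) MiB"              # 95112 MiB, i.e. ~92.88 GiB
# 24348672 filesystem blocks of 4 KiB each, expressed in MiB:
echo "$((24348672 * 4 / 1024)) MiB"    # 95112 MiB again: both views agree
```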

And now we have the new 94 GB / filesystem, which is made from /dev/vda1 and /dev/vdb2:

root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 14G 0 disk
├─vda1 252:1 0 13.9G 0 part
│ └─rootvg-rootfs 253:0 0 92.9G 0 lvm /
├─vda14 252:14 0 4M 0 part
└─vda15 252:15 0 106M 0 part /boot/efi
vdb 252:16 0 80G 0 disk
├─vdb1 252:17 0 1G 0 part /boot
└─vdb2 252:18 0 79G 0 part
└─rootvg-rootfs 253:0 0 92.9G 0 lvm /
vdc 252:32 0 4G 0 disk [SWAP]
root@somove:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 676K 394M 1% /run
/dev/mapper/rootvg-rootfs 91G 997M 86G 2% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdb1 1008M 43M 915M 5% /boot
/dev/vda15 105M 3.6M 101M 4% /boot/efi
tmpfs 395M 0 395M 0% /run/user/1000

(optional) Having the /boot partition in /dev/vda

If we want to have the /boot partition in /dev/vda instead, the procedure is a bit different:

  1. Instead of creating the LVM volume in /dev/vdb, I would create a single ext4 partition /dev/vdb1, which avoids separating /boot and /.
  2. Once /dev/vdb1 is created, copy the filesystem in /dev/vda1 to /dev/vdb1 and prepare to boot from /dev/vdb1 (chroot, adjust mount points, update-grub, grub-install…).
  3. Boot from the new partition and wipe the original /dev/vda1 partition.
  4. Create a partition /dev/vda1 for the new /boot, initialize it using ext2, and copy the contents of /boot according to the instructions in this post.
  5. Create a partition /dev/vda2, create the LVM volume, initialize it and copy the contents of /dev/vdb1 except for /boot.
  6. Prepare to boot from /dev/vda (chroot, adjust mount points, mount /boot, update-grub, grub-install…).
  7. Boot from the new /boot + LVM / and decide whether you want to add /dev/vdb to the LVM volume or not.

Using this procedure, you get from linear to LVM with a single disk. Then you can decide whether to grow the LVM volume or not. Moreover, you may decide whether to create an LVM RAID (1, 5, …) with the new disk or other disks.

How to move an existing installation of Ubuntu to another disk

Under some circumstances, we may need to move a working installation of Ubuntu to another disk. The most common case is when the current disk runs out of space and you want to move to a bigger one. But you could also want to move to an SSD, or to create an LVM RAID…

So this time I learned…

How to move an existing installation of Ubuntu to another disk

I have a 14 GB disk that contains my / partition (vda), and I want to move to a new 80 GB disk (vdb).

root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 14G 0 disk
└─vda1 252:1 0 13.9G 0 part /
vdb 252:16 0 80G 0 disk
vdc 252:32 0 4G 0 disk [SWAP]

First of all, I will create a partition for my / system in /dev/vdb.

root@somove:~# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-167772159, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-167772159, default 167772159):

Created a new partition 1 of type 'Linux' and of size 80 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

NOTE: The inputs from the user are: n for a new partition, the defaults (i.e. Enter) for every setting to get the whole disk, and then w to write the partition table.

Now that we have the new partition, we’ll create the filesystem (ext4):

root@somove:~# mkfs.ext4 /dev/vdb1
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 20971264 4k blocks and 5242880 inodes
Filesystem UUID: ea7ee2f5-749e-4e74-bcc3-2785297291a4
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

We have to transfer the contents of the running filesystem to the new disk. But first, we’ll make sure that every mount point except for / is unmounted (to avoid copying files that live on other disks):

root@somove:~# umount -a
umount: /run/user/1000: target is busy.
umount: /sys/fs/cgroup/unified: target is busy.
umount: /sys/fs/cgroup: target is busy.
umount: /: target is busy.
umount: /run: target is busy.
umount: /dev: target is busy.
root@somove:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=2006900k,nr_inodes=501725,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=403912k,mode=755)
/dev/vda1 on / type ext4 (rw,relatime,data=ordered)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=403908k,mode=700,uid=1000,gid=1000)
root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 14G 0 disk
└─vda1 252:1 0 13.9G 0 part /
vdb 252:16 0 80G 0 disk
└─vdb1 252:17 0 80G 0 part
vdc 252:32 0 4G 0 disk [SWAP]

Now we will create a mount point for the new filesystem and we’ll copy everything from / to it, except for the contents of the special folders (i.e. /tmp, /sys, /dev, etc.); the excluded folders themselves are still created, only empty:

root@somove:~# mkdir /mnt/vdb1
root@somove:~# mount /dev/vdb1 /mnt/vdb1/
root@somove:~# rsync -aHAXx --delete --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found} / /mnt/vdb1/

Instead of using rsync, we could use cp -ax /bin /etc /home /lib /lib64 …, but then you need to make sure that all folders and files are copied. You also need to make sure that the special folders are created, by running mkdir /mnt/vdb1/{boot,mnt,proc,run,tmp,dev,sys}. The rsync version is easier to control and to understand.
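Recreating the special folders after a cp -ax copy can be sketched like this, using a temporary directory in place of the real mount point (/tmp also needs its permissions restored):

```shell
# Recreate the skeleton of special directories in the new root.
newroot=$(mktemp -d)     # stands in for /mnt/vdb1
for d in boot dev media mnt proc run sys tmp; do
    mkdir -p "$newroot/$d"
done
chmod 1777 "$newroot/tmp"    # /tmp must be world-writable with the sticky bit
ls "$newroot"
```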

Now that we have the same directory tree, we just need to make the magic of chroot to prepare the new disk:

root@somove:~# mount --bind /dev /mnt/vdb1/dev
root@somove:~# mount --bind /sys /mnt/vdb1/sys
root@somove:~# mount -t proc /proc /mnt/vdb1/proc
root@somove:~# chroot /mnt/vdb1/

We need to make sure that the new system will try to mount the new partition (i.e. /dev/vdb1) in /, but we cannot use the /dev/vdb1 device name, because if we remove the other disk, its name will change to /dev/vda1. So we use the UUID of the partition instead. To get it, we can use blkid:

root@somove:/# blkid
/dev/vda1: UUID="135ecb53-0b91-4a6d-8068-899705b8e046" TYPE="ext4"
/dev/vdb1: UUID="eb8d215e-d186-46b8-bd37-4b244cbb8768" TYPE="ext4"

And now we have to update the file /etc/fstab to mount the proper UUID in the / folder. The new /etc/fstab for our example is the following:

UUID="eb8d215e-d186-46b8-bd37-4b244cbb8768" / ext4 defaults 0 0

At this point, we need to update GRUB to match our disks (it will pick up the UUIDs or labels), and install it on the new disk:

root@somove:/# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-43-generic
Found initrd image: /boot/initrd.img-4.15.0-43-generic
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Found Ubuntu 18.04.1 LTS (18.04) on /dev/vda1
done
root@somove:/# grub-install /dev/vdb
Installing for i386-pc platform.
Installation finished. No error reported.

WARNING: In case we get the error “error: will not proceed with blocklists.”, please go to the last part of this post.

WARNING: If you plan to keep the original disk in its place (e.g. a Virtual Machine in Amazon or OpenStack), you must install grub in /dev/vda. Otherwise, it will boot the previous system.

Finally, you can exit from the chroot, power off, remove the old disk, and boot using the new one. The result will be the following:

root@somove:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:16 0 80G 0 disk
└─vda1 252:17 0 80G 0 part /
vdc 252:32 0 4G 0 disk [SWAP]

What if we get error “error: will not proceed with blocklists.”

If we get this error (and only if we get this error), we’ll need to wipe the gap between the start of the disk and the first partition, and then we’ll be able to install GRUB on the disk.

WARNING: Make sure that you know what you are doing, or that the disk is new, because this can potentially erase the data of /dev/vdb.

$ grub-install /dev/vdb
Installing for i386-pc platform.
grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..
grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
grub-install: error: will not proceed with blocklists.

In this case, we need to check the partition table of /dev/vdb:

root@somove:/# fdisk -l /dev/vdb
Disk /dev/vdb: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device Boot Start End Sectors Size Id Type
/dev/vdb1 2048 167772159 167770112 80G 83 Linux

And now we will write zeros to /dev/vdb (skipping the first sector, where the partition table is stored), up to the sector where our partition starts (in our case, partition /dev/vdb1 starts at sector 2048, so we will zero 2047 sectors):

root@somove:/# dd if=/dev/zero of=/dev/vdb seek=1 count=2047
2047+0 records in
2047+0 records out
1048064 bytes (1.0 MB, 1.0 MiB) copied, 0.0245413 s, 42.7 MB/s
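The seek/count arithmetic can be rehearsed safely on a plain file first (dd's default block size is 512 bytes, which is the sector size here):

```shell
# Same dd invocation as above, but against a temporary file instead of /dev/vdb.
img=$(mktemp)
dd if=/dev/zero of="$img" seek=1 count=2047 status=none
stat -c %s "$img"   # 2048 * 512 = 1048576 bytes: sector 0 skipped, 2047 zeroed
```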

If this was the problem, now you should be able to install grub:

root@somove:/# grub-install /dev/vdb
Installing for i386-pc platform.
Installation finished. No error reported.

How to Recover Partitions from LVM Volume

Yesterday I had a problem with a disk… While trying to increase the size of an LVM volume, I lost the disk. What I did was: add the new disk to the LVM volume, mirror the volume and remove the original disk. But the problem is that I added the disk back again, and things started to go wrong. The volume did not boot, etc.

The original system was a Scientific Linux 6.7 and it had different partitions: one ext4 /boot partition and an LVM volume in which we had two logical volumes: lv-root and lv-swap.

At the end of the LVM problem, I had the original volume with bad LVM metadata that did not allow me to use the original information. Luckily, I had not written any other information to the disks… so the information had to be there.

Once the disaster was there…

I learned how to recover the partitions from an LVM volume.

I had to recover the partitions and create a new disk with them.

Recovering partitions

After some failed tests, I got to the situation in which I had /dev/sda with a single GPT partition on it. I remembered TestDisk, which is a tool that helps in forensics, and I started to play with it.


The first thing that I did was to try to figure out what I could do with my partition. So I started my system with an Ubuntu 14.04 desktop LiveCD and /dev/sda attached, downloaded TestDisk and tried:

$ ./testdisk_static /dev/sda

Then I used the GPT option and analysed the disk. There I found two partitions: my boot partition and an LVM partition. I wrote the partition table and went back to the initial page, where I entered the advanced options to dump the partition (Image Creation).


Then I had the boot partition dumped, and it could be used as a raw partition image, like one dumped with dd.

Then I exited from the app and started it again, because the LVM partition was now accessible as /dev/sda2. Now I tried:

$ ./testdisk_static /dev/sda2

Now I selected the disk and chose the Intel partition option. TestDisk found the two partitions: the Linux one and the Linux swap.


And now I dumped the Linux partition.

Disclaimer

This is my specific disk layout, but the key is that TestDisk helped me to figure out where the partitions were and to get their raw images.

Creating the new disk

Now that I have the partition images, image.dd (for the boot partition) and sda1.dd, I have to create a new disk. So I booted the Ubuntu desktop again, with a new disk (/dev/sdb).

The first thing is to get the size of the partitions; we will use the fdisk utility on the dumped files:

$ fdisk -l image.dd
$ fdisk -l sda1.dd

Using these commands I will get the size in sectors of each image. Using those sizes, I can create the partitions in /dev/sdb. In my case, I created one partition for the boot and another for the filesystem. My inputs were the following (please check that the sector counts of the new partitions match the sizes of the dumped partitions):

$ fdisk /dev/sdb
n
<enter>
<enter>
<enter>
+1024000
n
<enter>
<enter>
<enter>
+15810560
w


The key is to pay attention to the size (in sectors) of the partitions obtained with the fdisk -l commands issued before.
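The arithmetic behind those inputs can be checked in the shell: each new partition must start right after the previous one ends. The sector numbers below are from my case and are only illustrative:

```shell
start1=2048          # default first sector offered by fdisk
size1=1024000        # sectors of the boot image (from fdisk -l image.dd)
start2=$((start1 + size1))
echo "second partition starts at sector $start2"   # 1026048
```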

And now you are ready to dump the images in the disk:

$ dd if=image.dd of=/dev/sdb1
...
$ dd if=sda1.dd of=/dev/sdb2
...

Check this cool tip!

A dd run can take a long time. If you want to see its progress, you can open another command line and issue the command:

$ killall -USR1 dd

The dd command will print the amount of data it has copied so far to its console. If you want to see the progress periodically, you can issue a command like this one:

$ watch -n 10 killall -USR1 dd

This will make dd output the amount copied every 10 seconds.
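With a modern GNU dd (coreutils 8.24 or later) you can skip the killall trick entirely and ask dd itself to report progress:

```shell
# status=progress makes dd print periodic transfer statistics on stderr.
# Harmless demo: copy 64 MiB of zeros to /dev/null.
dd if=/dev/zero of=/dev/null bs=1M count=64 status=progress
```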

More on this

Once I had the partitions dumped, I used GParted to resize the second partition (as it was almost full). My disk was far bigger than the original one, but if you only want to get the data, or you already have free space, this step probably won’t be needed (so I am skipping it).