Utilize the Added Disk on the Lighthouse VM

After you have added additional disk space to Lighthouse using your preferred platform, you can expand the logical volume to use the new disk space using one of the following methods:

Note: The preferred method is to increase the lh_data logical volume.

Increase the lh_data Logical Volume

Note: This is the preferred method to use added disks on the Lighthouse VM.

  1. Add the new disk to the LH VM (platform dependent, see Add Disk Space to Lighthouse).

  2. Log in to the shell on Lighthouse. You should see the new "unused" disk listed in the welcome message; this is the case for any non-system disks that are not currently in use by the LVM system.

  3. Create a partition on the new disk:

    fdisk /dev/sdb

    Note:  Be sure to specify the correct disk; devices are named /dev/sdX or /dev/xvdX, and on AWS the disk might be /dev/xvdb.

  4. Type 'n' and ENTER to create a new partition.

  5. Type 'p' and ENTER to create a primary partition.

  6. Continue hitting ENTER to accept the defaults to use the whole disk.

  7. Type 'w' and ENTER to write the changes and exit fdisk.
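The interactive fdisk steps above can also be rehearsed safely against a scratch image file before touching a real disk. This is a sketch only; on Lighthouse, substitute the actual device (e.g. /dev/sdb) for disk.img:

```shell
# Create a scratch image to practice on (a real run would target /dev/sdb)
truncate -s 64M disk.img

# Non-interactive equivalent of steps 4-7: new (n), primary (p),
# accept the defaults for partition number and sectors, then write (w)
printf 'n\np\n\n\n\nw\n' | fdisk disk.img > /dev/null

# The new partition appears as disk.img1 in the listing
fdisk -l disk.img
rm disk.img
```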

  8. Add the new partition as a physical volume:

    pvcreate /dev/sdb1

    Note:  Even if the disk was added as /dev/xvdb, the new partition /dev/xvdb1 is mapped to /dev/sdb1, so be sure to use /dev/sdb1 here.

  9. Extend the volume group with the new physical volume:

    vgextend lhvg /dev/sdb1

  10. Assuming the new disk gives you at least 2GB of extra space, expand the lh_data logical volume:

    lvextend -L +2G /dev/mapper/lhvg-lh_data

  11. Update the file system of the lh_data disk to use the extra space:

    resize2fs /dev/mapper/lhvg-lh_data

  12. When you log into the shell, the disk should no longer be listed as "unused".
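Steps 8 through 11 can be collected into a small script. The sketch below is a dry run that only echoes each command (it assumes the new partition is /dev/sdb1); remove the echo prefixes to run the commands for real on Lighthouse:

```shell
# Dry-run sketch of steps 8-11; assumes the new partition is /dev/sdb1.
# Remove the 'echo' prefixes to execute the commands for real.
PART=/dev/sdb1
echo pvcreate "$PART"
echo vgextend lhvg "$PART"
echo lvextend -L +2G /dev/mapper/lhvg-lh_data
echo resize2fs /dev/mapper/lhvg-lh_data
```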

Mount the Hard Disks with ogconfig-cli

Extra hard disks can be mounted in the Lighthouse VM by adding them to the configuration. Each new disk must have a partition created and formatted. Partitions can be created using fdisk or cfdisk, and should be formatted as ext4 using the mkfs.ext4 command:

root@lighthouse:~# mkfs.ext4 /dev/sdb1
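Formatting can likewise be tried against an image file first. The sketch below uses mkfs.ext4 on a scratch file (the -F flag is needed only because the target is a file rather than a block device) and then reads back the UUID that the next section relies on:

```shell
# Practice run against an image file; on Lighthouse you would format
# the real partition, e.g. mkfs.ext4 /dev/sdb1
truncate -s 64M fs.img
mkfs.ext4 -q -F fs.img   # -F only because this is a file, not a block device

# blkid reports the UUID assigned to the new filesystem
blkid -s UUID -o value fs.img
rm fs.img
```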

The directory in which to mount the filesystem must be created. In general, new filesystems should be mounted in the provided mountpoint of /mnt/aux; any other filesystems should be mounted within the filesystem mounted there. The UUID can be obtained by running blkid, which outputs the UUIDs of all the extra hard disks on the system. When referencing the UUID, ensure the entire UUID is enclosed within quote marks, like this:

"UUID=33464920-f54f-46b6-bd84-12f76eeb92da"

else the command will not run correctly.
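On a live system the value can be captured directly with blkid -s UUID -o value. The sketch below instead extracts it from a sample blkid output line (using the example UUID above) and builds the quoted string that ogconfig-cli expects:

```shell
# A sample blkid output line (the actual UUID will differ per disk):
line='/dev/sdb1: UUID="33464920-f54f-46b6-bd84-12f76eeb92da" TYPE="ext4"'

# Extract the value; on a live system: uuid=$(blkid -s UUID -o value /dev/sdb1)
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

# The string to paste into ogconfig-cli, quote marks included:
printf '"UUID=%s"\n' "$uuid"
# prints "UUID=33464920-f54f-46b6-bd84-12f76eeb92da"
```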

Add the information to the configuration system using ogconfig-cli as follows, modifying the UUID and path for your specific situation.

ogcfg> var m !append system.mountpoints map
{8435270-fb39-11e7-8fcf-4fa11570959}: Map <>
ogcfg> set {m}.node "UUID=33464920-f54f-46b6-bd84-12f76eeb92da"
{b8c37c6-fb39-11e7-971c-23517b19319}: String </dev/sdb1>
ogcfg> set {m}.path "/mnt/aux"
{1fb50d8-fb39-11e7-994c-0f10b09cbd4}: String </mnt/aux>
ogcfg> push
OK