Add an SSD to Parallels Cloud Storage Server

Adding a Solid State Disk (SSD) to a Parallels Cloud Server can speed things up nicely, and the PCS installation disk will recognize and configure an SSD for you. If you didn't install with an SSD in place, you can add one later, and the online manuals give some examples.

Here we combine the instructions from the manuals for each cloud storage role (MDS, CS, and Client), and set up the filesystem to mimic what the PCS installer does.

Desired Setup

When the PCS installer finds and configures an SSD, it mounts it at /pstorage/ssd0, puts the CS write journals and the Client data cache there, and creates the MDS exclusively on the SSD. If your PCS cluster name is pcs1 you will have:

[root@pcs1 ~]# ls -l /pstorage/
total 16
drwx------ 1 root root    0 May  8 16:04 pcs1
drwxr-xr-x 4 root root 4096 Apr 25 17:17 pcs1-cs
drwxr-xr-x 4 root root 4096 Apr 25 17:04 pcs1-cs1
drwxr-xr-x 4 root root 4096 Apr  4 07:41 pcs1-cs2
lrwxrwxrwx 1 root root   18 Apr 25 17:03 pcs1-mds -> /pstorage/ssd0/mds
drwxr-xr-x 7 root root 4096 Apr 25 17:50 ssd0
 
[root@pcs1 ~]# ls -l /pstorage/ssd0
total 16496312
-rw------- 1 root     root     16891183104 May 16 17:49 client_cache
drwx------ 2 root     root           16384 Apr 24 18:44 lost+found
drwxr-xr-x 3 root     root            4096 Apr 25 17:01 mds
drwx------ 3 pstorage pstorage        4096 Apr 25 17:27 pcs1-cs0-ssd
drwx------ 3 pstorage pstorage        4096 Apr 25 17:36 pcs1-cs1-ssd
drwx------ 3 pstorage pstorage        4096 Apr 25 17:36 pcs1-cs2-ssd
 
[root@pcs1 ~]# cat /etc/pstorage/clusters/pcs1/mds.list
# autogenerated, unique
/pstorage/ssd0/mds/data
 
[root@pcs1 ~]# cat /etc/pstorage/clusters/pcs1/cs.list
# autogenerated, unique
/pstorage/pcs1-cs/data
/pstorage/pcs1-cs1/data
/pstorage/pcs1-cs2/data
 
[root@pcs1 ~]# ls -l /pstorage/pcs1-cs*/data/control/journal
lrwxrwxrwx 1 pstorage pstorage 31 Apr 25 17:36 /pstorage/pcs1-cs1/data/control/journal -> /pstorage/ssd0/pcs1-cs1-ssd
lrwxrwxrwx 1 pstorage pstorage 31 Apr 25 17:36 /pstorage/pcs1-cs2/data/control/journal -> /pstorage/ssd0/pcs1-cs2-ssd
lrwxrwxrwx 1 pstorage pstorage 31 Apr 25 17:26 /pstorage/pcs1-cs/data/control/journal -> /pstorage/ssd0/pcs1-cs0-ssd
 
[root@pcs1 ~]# grep pstorage /etc/fstab
/dev/mapper/vg_pcs1-lv_pstorage_pcs1cs /pstorage/pcs1-cs   ext4    defaults,noatime 1 2
UUID=a133908b-2b39-46d9-b697-c8803189cd9c /pstorage/ssd0 ext4 defaults,noatime 0 0
UUID=e3cb2e35-717c-44c1-861f-7408a15a9a27 /pstorage/pcs1-cs1 ext4 defaults,noatime 0 0
UUID=6a57d4c4-e55a-42d0-93a5-41f26bab7a40 /pstorage/pcs1-cs2 ext4 defaults,noatime 0 0
pstorage://pcs1 /pstorage/pcs1 fuse.pstorage cache=/pstorage/ssd0/client_cache,cachesize=16081 0 0

Current Setup

The server I'll be setting up has only two CS devices, vs. three in the Desired Setup above.

[root@pcs2 ~]# ls -l /pstorage/
total 8
drwx------ 1 root root    0 Oct  3 15:38 pcs1
drwxr-xr-x 4 root root 4096 Apr 24 12:48 pcs1-cs
drwxr-xr-x 5 root root 4096 Apr 24 12:48 pcs1-cs1
lrwxrwxrwx 1 root root   26 Apr 24 12:48 pcs1-mds -> /pstorage/pcs1-cs1/mds
 
[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/mds.list
# autogenerated, unique
/pstorage/pcs1-cs1/mds/data
 
[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/cs.list
# autogenerated, unique
/pstorage/pcs1-cs/data
/pstorage/pcs1-cs1/data
 
[root@pcs2 ~]# ls -l /pstorage/pcs1-cs*/data/control/journal
ls: cannot access /pstorage/pcs1-cs*/data/control/journal: No such file or directory
 
[root@pcs2 ~]# grep pstorage /etc/fstab
/dev/mapper/vg_pcs2-lv_pstorage_pcs1cs /pstorage/pcs1-cs   ext4    defaults,noatime 1 2
UUID=a2c2d67b-81be-40cf-9598-9d61b0943ac8 /pstorage/pcs1-cs1 ext4 defaults,noatime 0 0
pstorage://pcs1 /pstorage/pcs1 fuse.pstorage defaults 0 0

I'm adding a 100GB SSD drive, which I'll allocate as follows (see the sizing note after the list):

  • 21GB for the Client cache
  • 21GB for each CS journal (42GB total)
  • 36GB (the remainder) for the MDS
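
A note on units: if I'm reading the admin guide right, both the make-cs -s flag and the fstab cachesize option take sizes in megabytes, so the plan above works out roughly as follows (a back-of-the-envelope check; the exact values appear in the commands later):

[root@pcs2 ~]# echo "$(( 21441 + 2 * 20480 )) MB committed, remainder for the MDS"
62401 MB committed, remainder for the MDS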

Here we go

According to the Cloud Storage Admin Guide, preparing the SSD should be this simple:

[root@pcs2 ~]# /usr/libexec/pstorage/prepare_pstorage_drive --ssd /dev/sdc

However, that didn't work for me:

[root@pcs2 ~]# /usr/libexec/pstorage/prepare_pstorage_drive --ssd /dev/sdc
Given device --ssd is not valid

So I'll set it up manually. We want to mount the SSD on /pstorage/ssd0. First, we need to partition the disk (/dev/sdc in my case; yours may differ):

[root@pcs2 ~]# parted /dev/sdc
GNU Parted 2.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart pcs-ssd0 ext4 0 100GB
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore                                                     
(parted) p                                                                
Model: ATA INTEL SSDSC2BA10 (scsi)
Disk /dev/sdc: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
 
Number  Start   End    Size   File system  Name      Flags
 1      17.4kB  100GB  100GB               pcs-ssd0
 
(parted) quit                                                                
Information: You may need to update /etc/fstab.                           
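
Incidentally, the alignment warning comes from starting the partition at the first usable sector. I ignored it, but starting at 1MiB should produce an aligned partition and no warning (a variant I didn't re-run on this disk):

[root@pcs2 ~]# parted /dev/sdc
(parted) mklabel gpt
(parted) mkpart pcs-ssd0 ext4 1MiB 100%
(parted) quit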

Then create a filesystem:

[root@pcs2 ~]# mkfs -t ext4 /dev/sdc1
mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6111232 inodes, 24421437 blocks
1221071 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
746 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424, 20480000, 23887872
 
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
 
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
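
Since this filesystem will only hold caches and journals, the periodic check mentioned above is mostly a reboot delay; as the mkfs output suggests, tune2fs can disable it (optional):

[root@pcs2 ~]# tune2fs -c 0 -i 0 /dev/sdc1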

Now we add an entry to /etc/fstab and mount the disk:

[root@pcs2 ~]# blkid /dev/sdc1
/dev/sdc1: UUID="385767aa-e4d0-4ceb-b3b9-faca94fbe792" TYPE="ext4" 
[root@pcs2 ~]# echo 'UUID=385767aa-e4d0-4ceb-b3b9-faca94fbe792 /pstorage/ssd0 ext4 defaults,noatime 0 0' >> /etc/fstab
[root@pcs2 ~]# mkdir /pstorage/ssd0
[root@pcs2 ~]# mount /pstorage/ssd0 
[root@pcs2 ~]# df -k /pstorage/ssd0
Filesystem     1K-blocks   Used Available Use% Mounted on
/dev/sdc1       96151552 192176  91075092   1% /pstorage/ssd0
 

Now shut down the services involved in cloud storage so we can move things around. Refer to the Shutting Down Parallels Cloud Storage Clusters section of the Cloud Storage Admin Guide.

[root@pcs2 ~]# service pvaagentd stop
Shutting pvaagentd:                                        [  OK  ]
 
   #
   #  Shut down all containers/virtual machines here!!
   #
 
[root@pcs2 ~]# service vz stop
Shutting down Container:  105 1
Container 105 suspend:                                     [  OK  ]
Container 1 suspend:                                       [  OK  ]
Shutting down Container: 
Shutting down vzeventd:                                    [  OK  ]
Umount pfcache image /vz/pfcache.hdd                       [  OK  ]
Stopping Parallels Cloud Server:                           [  OK  ]
 
[root@pcs2 ~]# service shamand stop
Shutting down shamand-monitor:                             [  OK  ]
 
[root@pcs2 ~]# service pstorage-mdsd stop
Shutting down pstorage metadata server pcs1:/pstorage/pcs1/mds/data:  [  OK  ]
 
[root@pcs2 ~]# service pstorage-csd stop
Shutting down pstorage chunk server pcs1:/pstorage/pcs1-cs/data:  [  OK  ]
Shutting down pstorage chunk server pcs1:/pstorage/pcs1-cs1/data: [  OK  ]

Adding the read cache to the Client is straightforward:

# add Client cache
[root@pcs2 ~]# umount /pstorage/pcs1
[root@pcs2 ~]# sed -i 's|^\(pstorage:.*\)\(defaults\)|\1cache=/pstorage/ssd0/client_cache,cachesize=21441|g' /etc/fstab
[root@pcs2 ~]# mount /pstorage/pcs1
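
That sed swaps the bare defaults on the fuse.pstorage line for the cache options, so it's worth confirming the substitution took:

[root@pcs2 ~]# grep fuse.pstorage /etc/fstab
pstorage://pcs1 /pstorage/pcs1 fuse.pstorage cache=/pstorage/ssd0/client_cache,cachesize=21441 0 0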

Next we add a write journal to each CS. I don't know of a safe way to do this to an existing CS, so I'll simply remove each old CS first and recreate it with a journal.

Note I set replicas=3:2 for this cluster, so I'm not going to lose data here. Run pstorage top to check your replication state and make sure your cluster is healthy first!
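
Something like this is what I mean; pstorage top is interactive, so watch the header before touching anything:

[root@pcs2 ~]# pstorage -c pcs1 top
 
   #
   #  Check that the replication figures match your settings (3 norm,
   #  2 limit here) and that no chunks are degraded or urgent before
   #  removing any CS.
   #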

[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/cs.list 
# autogenerated, unique
/pstorage/pcs1-cs/data
/pstorage/pcs1-cs1/data
 
[root@pcs2 ~]# pstorage -c pcs1 rm-cs -l /pstorage/pcs1-cs/data
[root@pcs2 ~]# pstorage -c pcs1 rm-cs -l /pstorage/pcs1-cs1/data
[root@pcs2 ~]# rm -rf /pstorage/pcs1-cs/data/{control,}.deleted 
[root@pcs2 ~]# rm -rf /pstorage/pcs1-cs1/data/{control,}.deleted 
 
[root@pcs2 ~]# pstorage -c pcs1 make-cs -r /pstorage/pcs1-cs/data -j /pstorage/ssd0/pcs1-cs-ssd -s 20480
[root@pcs2 ~]# pstorage -c pcs1 make-cs -r /pstorage/pcs1-cs1/data -j /pstorage/ssd0/pcs1-cs1-ssd -s 20480
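
If that worked, each CS now has its write journal symlinked onto the SSD, mirroring the Desired Setup above:

[root@pcs2 ~]# ls -l /pstorage/pcs1-cs*/data/control/journal
 
   #
   #  Expect symlinks pointing at the SSD:
   #  /pstorage/pcs1-cs/data/control/journal  -> /pstorage/ssd0/pcs1-cs-ssd
   #  /pstorage/pcs1-cs1/data/control/journal -> /pstorage/ssd0/pcs1-cs1-ssd
   #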

The MDS is straightforward; let's just move the current mds directory and change the pointer to it:

# move MD
[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/mds.list 
# autogenerated, unique
/pstorage/pcs1-cs1/mds/data
[root@pcs2 ~]# mv /pstorage/pcs1-cs1/mds/ /pstorage/ssd0/
[root@pcs2 ~]# sed -i s/pcs1-cs1/ssd0/ /etc/pstorage/clusters/pcs1/mds.list
[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/mds.list 
# autogenerated, unique
/pstorage/ssd0/mds/data
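
The installer also points the /pstorage/pcs1-mds symlink at the SSD (see the Desired Setup above). As far as I can tell the services only read mds.list, but updating the symlink keeps the layout consistent:

[root@pcs2 ~]# ln -sfn /pstorage/ssd0/mds /pstorage/pcs1-mds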

The setup is now done, so just start things up in reverse order:

[root@pcs2 ~]# service pstorage-csd start
Starting pstorage chunk server pcs1:/pstorage/pcs1-cs/data: [  OK  ]
Starting pstorage chunk server pcs1:/pstorage/pcs1-cs1/data: [  OK  ]
 
[root@pcs2 ~]# service pstorage-mdsd start
Starting pstorage metadata server pcs1:/pstorage/ssd0/mds/data: [  OK  ] 
 
[root@pcs2 ~]# /etc/init.d/shamand start
Starting shamand-monitor:                                  [  OK  ]
Waiting 600 seconds for shaman-monitor to start            [  OK  ]
 
   #
   #  Start all containers/virtual machines again here
   #
 
[root@pcs2 ~]# service vz start
Starting Parallels Cloud Server:                           [  OK  ]
Bringing up interface venet0:                              [  OK  ]
Load OOM groups                                            [  OK  ]
Starting vzeventd:                                         [  OK  ]
Loading Parallels Cloud Server license:                    [  OK  ]
Set vzprivrange:                                           [  OK  ]
Configuring Parallels Cloud Server accounting:             [  OK  ]
Waiting for license monitor start:.
Mount pfcache image /vz/pfcache.hdd                        [  OK  ]
Starting Container: 1 105 500
Container 1 started:                                       [  OK  ]
Container 105 started:                                     [  OK  ]
Container 500 started:                                     [  OK  ]
 
[root@pcs2 ~]# service pvaagentd start
Starting pvaagentd:                                        [  OK  ]

I actually had a problem starting shamand above, and had to stop/start it a couple of times. Try shaman -c pcs1 stat, then start it again.
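
For reference, the cycle that eventually worked for me was roughly:

[root@pcs2 ~]# service shamand stop
[root@pcs2 ~]# shaman -c pcs1 stat
 
   #
   #  check the reported cluster state, then
   #
 
[root@pcs2 ~]# service shamand start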

And we're done... nearly. The two removed CS instances still show as offline in pstorage top, so I'll remove them by ID (look up the CS_ID in pstorage top).

[root@pcs2 ~]# pstorage -c pcs1 rm-cs 1038
connected to MDS#13
Removing CS#1038 ...
[root@pcs2 ~]# pstorage -c pcs1 rm-cs 1039
connected to MDS#13
Removing CS#1039 ...

Things look good, so I rebooted to make sure everything comes back up cleanly on its own. Checking things out, we have:

[root@pcs2 ~]# ls -ltr /pstorage/
total 12
lrwxrwxrwx 1 root root   26 Apr 24 12:48 pcs1-mds -> /pstorage/pcs1-cs1/mds
drwxr-xr-x 4 root root 4096 Oct 10 15:20 pcs1-cs1
drwxr-xr-x 4 root root 4096 Oct 10 16:18 pcs1-cs
drwxr-xr-x 6 root root 4096 Oct 10 16:18 ssd0
drwx------ 1 root root    0 Oct 10 16:49 pcs1
 
[root@pcs2 ~]# ls -ltr /pstorage/pcs1
total 0
drwx------ 1 root root 0 Apr  4  2014 private
drwx------ 1 root root 0 Apr  4  2014 vmprivate
drwx------ 1 root root 0 Apr  4  2014 del
 
[root@pcs2 ~]# ls -ltr /pstorage/ssd0/
total 21993312
drwxr-xr-x 3 root     root            4096 Apr 24 12:48 mds
drwx------ 2 root     root           16384 Oct 10 11:51 lost+found
drwx------ 3 pstorage pstorage        4096 Oct 10 16:21 pcs1-cs-ssd
drwx------ 3 pstorage pstorage        4096 Oct 10 16:21 pcs1-cs1-ssd
-rw------- 1 root     root     22521118720 Oct 10 16:49 client_cache
 
[root@pcs2 ~]# ls -ltr /pstorage/ssd0/mds/data/
total 248992
-rw------- 1 pstorage pstorage         8 Apr 24 12:48 cluster_name
-rw------- 1 pstorage pstorage         2 Apr 24 12:48 id
lrwxrwxrwx 1 root     root            39 Apr 24 13:41 logs -> /var/log/pstorage/pcs1/mds-6E4dhC25
-rw------- 1 pstorage pstorage         0 Apr 24 13:41 lock
-rw------- 1 pstorage pstorage   8318976 Oct 10 07:34 journal.485.sn
-rw------- 1 pstorage pstorage        22 Oct 10 16:49 journal
drwx------ 2 pstorage pstorage      4096 Oct 10 16:49 control
-rw------- 1 pstorage pstorage 246628864 Oct 10 16:55 journal.485.lj
 
[root@pcs2 ~]# cat /etc/pstorage/clusters/pcs1/cs.list
# autogenerated, unique
/pstorage/pcs1-cs/data
/pstorage/pcs1-cs1/data
 
[root@pcs2 ~]# grep pstorage /etc/fstab
/dev/mapper/vg_pcs2-lv_pstorage_pcs1cs /pstorage/pcs1-cs   ext4    defaults,noatime 1 2
UUID=a2c2d67b-81be-40cf-9598-9d61b0943ac8 /pstorage/pcs1-cs1 ext4 defaults,noatime 0 0
pstorage://pcs1 /pstorage/pcs1 fuse.pstorage cache=/pstorage/ssd0/client_cache,cachesize=21441 0 0
UUID=385767aa-e4d0-4ceb-b3b9-faca94fbe792 /pstorage/ssd0 ext4 defaults,noatime 0 0

All services are running, the cluster status is healthy, and things look good. I've got a couple more servers to go, but I hope this helps you too.
