Module 4 - Introduction to XenServer Storage Repositories

Table of contents
Scenario
Exercise 1: Creating an NFS Storage Repository
Exercise 2: Probing a Storage Repository
Exercise 3: Detach and Reattach a Storage Repository
Exercise 4: Forget and Introduce a Storage Repository using the CLI
Exercise 5: Configure a Dedicated Storage Network Interface
Exercise 6: Configure Multipathing
Exercise 7: Review volumes and virtual disks
Exercise 8: Migrate a virtual disk (VDI)
Exercise 9: Create and Resize a Virtual Disk
Scenario
You work for an Infrastructure as a Service (IaaS) provider and have been tasked
with the configuration of the virtualization infrastructure based on XenServer.
The base installation of XenServer has been completed and you are now in a
position to configure the storage system for use in the environment. You are also
required to review the configuration to confirm functionality and perform
administration tasks in support of the hosted virtual machines.
The requirement is that virtual machines must be able to move between servers
without interrupting their services.
Exercise 1: Creating an NFS Storage
Repository
In this exercise we will create an NFS shared Storage Repository that will allow for
virtual machine images (disks/VDIs) to be stored and accessed from an NFS share.
This SR will be available to a XenServer host or all hosts in a XenServer pool.
Step-by-step guidance
Estimated time to complete this lab: 5 minutes.
Step Action
Create a new NFS storage repository via XenCenter
1. Using XenCenter click New Storage.
2. Select NFS VHD and click Next.
3. Keep the default name NFS virtual disk storage and click Next.
Note: By default, a description will be auto-generated based on information you
provide in the wizard.
4. Enter the vStorage server IP and path to the export/share: 192.168.10.12:/nfs
5. Click Scan.
6. No advanced options will be specified at this time.
7. Select Create a new SR and click Finish.
Note: All existing SRs detected on the NFS export/share will be listed and can be
reattached.
8. Confirm the NFS SR has been attached to the pool.
9. Right-click the new NFS virtual disk storage SR and click Set as Default.
10. On the General tab for the new SR expand the Status drop-down field and
confirm all hosts in the pool are connected to the SR.
Note: An NFS SR is not capable of using Multipathing.
Key takeaways
The key takeaways for this exercise are:
• You will be able to create an NFS SR to store virtual machine
images (disks/VDIs)
• You will be able to set a default SR, which is used to store
crash dump data and images of suspended VMs, and which
will be the default SR used for new virtual disks.
Notes
The Advanced Options field allows administrators to configure
the transport protocol to UDP using the useUDP=true
parameter.
Note that it is possible to use different SRs for VMs, crash
dump data and suspended VMs using the XenServer xe
command line interface (CLI). See the XenServer
Administrator's Guide for more information.
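For reference, the same SR can also be created from the CLI. A rough sketch (the server address matches this lab; the useUDP device-config key is an assumption based on the Advanced Options note above, so verify it against your XenServer version):
xe sr-create content-type=user type=nfs shared=true name-label="NFS virtual disk storage" device-config:server=192.168.10.12 device-config:serverpath=/nfs device-config:useUDP=true
The pool-level SR assignments mentioned above can be set with xe pool-param-set, for example:
xe pool-param-set uuid=<pool UUID> default-SR=<SR UUID> suspend-image-SR=<SR UUID> crash-dump-SR=<SR UUID>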
Exercise 2: Probing a Storage Repository
Overview
In this exercise we probe an iSCSI and NFS SR to confirm the iSCSI IQN of the
target, confirm which LUN or export has been exposed to the XenServer host, and
whether it contains an existing storage repository.
Step-by-step guidance
Estimated time to complete this lab: 15 minutes.
Step Action
Use sr-probe to review iSCSI server accessible LUN information
1. From the landing desktop launch an SSH session using PuTTY and connect to
vXS01
2. Execute the following:
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.12
Note: For speed and accuracy it is recommended to copy and paste commands
from the lab guide into your lab.
3. The output from the command will provide the Target IQN of the iSCSI Target.
4. Now that we have the IQN execute the following:
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.12 device-config:targetIQN=iqn.2015-01.com.storage:server.target1
5. The output from this command will provide the SCSI IDs for the LUNs available
on the target.
6. Now that we have the SCSI ID execute the following:
xe sr-probe type=lvmoiscsi device-config:target=192.168.10.12 device-config:targetIQN=iqn.2015-01.com.storage:server.target1 device-config:SCSIid=1IET_00010001
7. The output from this command will provide information on existing SRs on the
LUN.
Use sr-probe to locate NFS server export/share information
9. Using the SSH connection, execute the following command to view available
exports:
xe sr-probe type=nfs device-config:server=192.168.10.12
10. The output from the command will list the target export, export path, and
access list i.e. which IPs or hosts can access the export.
Note: In this example the export/share “/nfs” is accessible by all IPs as
represented by the “*” wildcard/asterisk.
Key takeaways
The key takeaways for this exercise are:
• Probe an iSCSI target via the CLI
• Confirm the IQN of a target storage service or device, confirm
which LUN is available to the server, and confirm whether any
existing SRs are located on the LUN.
• Probe an NFS server via the CLI
• Confirm the exports provided by the NFS server
Exercise 3: Detach and Reattach a
Storage Repository
Overview
In this exercise you will detach a storage repository from a host and/or pool.
Detaching an SR allows an administrator to migrate an SR to a different host or
pool and to perform maintenance on an SR. You will perform these steps by using
XenCenter and the CLI.
Step-by-step guidance
Estimated time to complete this lab: 15 minutes.
Step Action
Detach an SR using XenCenter
1. Using XenCenter select the NFS virtual disk storage SR in the Infrastructure
pane and click the Storage tab.
2. Click Add…
3. Name the virtual disk nfs-disk1, specify 2GB as the size, enter a description
“NFS Virtual disk used for demo”, and ensure the NFS virtual disk storage
SR is selected in the location: field.
4. Click Add
5. Confirm the virtual disk was created.
Note: The Virtual Machine field is empty as this virtual disk/vdi is not currently
attached to any virtual machines.
6. Right-click on the NFS SR and click Detach.
7. Review the warning message and click Yes to confirm.
8. Confirm the SR has been detached by the state icon and the Status field on
the General tab in XenCenter.
Repair a storage repository using XenCenter
10. Select the NFS virtual disk storage which is currently detached.
11. Right-click the SR and select Repair
12. Review the status of the SR, all hosts should be Unplugged
13. Click Repair
14. The repair process should complete with all hosts in the pool Connected to the SR
15. Close the Repair wizard when connection to the SR is repaired.
16. Confirm the SR is connected and select the NFS SR Storage tab.
17. Confirm the nfs-disk1 is again available
Forget an SR using XenCenter
19. Right-click on the NFS SR and select Detach
20. Click Yes on the warning message
21. Confirm the SR is detached/Unplugged from all hosts
22. Right-click on the detached SR and select Forget
23. Review the warning and click Yes, Forget
Reattaching an existing SR
24. Click New Storage
25. Select NFS VHD SR type and click Next
26. Leave all fields default and click Next
27. Enter share name 192.168.10.12:/nfs
28. Click Scan
29. Select Reattach an existing SR and highlight the SR UUID listed
30. Click Finish
31. Review the warning message and click Yes
32. Confirm the SR has been reattached and set the SR as Default
33. Select the Storage tab
34. Confirm a 2GB virtual disk is available.
Note: The name and description you provided are no longer available and were
lost as part of the Forget process.
35. Select the virtual disk and click Properties
36. Name the disk nfs-disk1 and add a description demo disk
37. Confirm the name and description are added to the disk
Detach a storage repository via CLI
39. Using your SSH connection to vXS01 locate the PBD associated with the NFS
SR by running the following command:
xe pbd-list host-name-label=vXS01
40. Locate and note the PBD UUID that references the NFS server path in the
device-config field.
41. Now that we know the PBD UUID run the following command:
xe pbd-unplug uuid=<PBD UUID>
(Remember to use tab-completion for UUIDs by entering the first 4 characters of a
UUID and pressing TAB, e.g. 4483)
Note: the command should complete without any confirmation or error message
42. Confirm the PBD is unplugged by running:
xe pbd-list uuid=<PBD UUID> params=all
Note: the currently-attached field should state false
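TIP: to display only that field you can restrict the output, for example:
xe pbd-list uuid=<PBD UUID> params=currently-attached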
43. Switch to XenCenter and confirm the NFS SR is detached
Note: as we only unplugged the PBD for vXS01 it is the only host in the pool
that is disconnected from the SR.
Reconnect a storage repository using the CLI
45. Switch back to the SSH connection
46. To reconnect/repair the SR using the CLI, run the following command:
xe pbd-plug uuid=<PBD UUID that was unplugged>
Note: the command should complete without any message
47. Navigate to XenCenter and confirm that the SR is now reconnected to host
vXS01
Key takeaways
The key takeaways for this exercise are:
• Detach an SR via XenCenter
• Reattach an SR via XenCenter
• Detach an SR via the CLI
• Reattach an SR via the CLI
Notes
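In this exercise the PBD was unplugged on a single host only. As a rough sketch
(assuming the SR name used in this lab), the PBDs for the SR can be unplugged on
every host in the pool from one shell by combining xe list commands with --minimal
output:
SR_UUID=$(xe sr-list name-label="NFS virtual disk storage" --minimal)
for PBD in $(xe pbd-list sr-uuid=$SR_UUID --minimal | tr ',' ' '); do
  xe pbd-unplug uuid=$PBD
done
Replacing pbd-unplug with pbd-plug in the loop reattaches the SR to all hosts.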
Exercise 4: Forget and Introduce a
Storage Repository using the CLI
Overview
In this exercise you will detach a storage repository (SR) from a pool and forget the
association data for the SR. You will then reintroduce an SR via the CLI preserving
any existing data on the SR. This process includes creating an SR object, creating a
physical block device (PBD) object, and plugging these objects into a pool.
Step-by-step guidance
Estimated time to complete this lab: 15 minutes.
Step Action
Forget an SR
2. Select the NFS SR in the Resources pane
3. Right-click and click Detach
4. Click Yes. As per the previous exercise the SR will be detached.
5. Right-click on the detached SR and click Forget.
6. Review the warning message and click Yes, Forget to confirm.
Introduce an SR
8. Unlike detaching and reattaching an SR, forgetting an SR will remove the data
that matches a virtual disk to a virtual machine. Although it is possible to
recreate the SR as in the previous exercise, we will reintroduce the SR using the
CLI.
Probe the NFS storage server to locate the available exports:
xe sr-probe type=nfs device-config:server=192.168.10.12
10. Probe the export for existing SRs:
xe sr-probe type=nfs device-config:server=192.168.10.12 device-config:serverpath=/nfs
11. Using the details collected from the sr-probe, reintroduce the SR:
xe sr-introduce content-type=user type=nfs name-label='NFS virtual disk storage' shared=true uuid=<UUID from probe>
TIP: copy the SR UUID from the probe output by highlighting the UUID and
right-clicking in the shell.
Note: The output of this command will be an SR UUID you will use in the next
step.
12. Switch to XenCenter and confirm the SR status:
13. Create a PBD between the host and the SR:
xe pbd-create device-config:server=192.168.10.12 device-config:serverpath=/nfs sr-uuid=<UUID from sr-introduce> host-uuid=<UUID of vXS01>
TIP: now that the SR has been introduced you can use TAB completion for the
UUIDs in this command.
TIP: get the host UUID from the General tab in XenCenter.
Note: the output of this command will be a PBD UUID you will use in the next
step.
14. Switch back to XenCenter and confirm the Status of the SR:
15. Plug the PBD into the host:
xe pbd-plug uuid=<UUID of the PBD you created>
16. Switch back to XenCenter and confirm the Status of the SR:
17. When using the CLI, the PBD creation process needs to be completed on all
other hosts in the pool. To speed up this process we will use XenCenter to
repair the SR.
Right-click the NFS SR and select Repair
18. Click Repair.
19. Confirm the PBDs were automatically created and the SR is Connected and
click Close.
20. Set the NFS SR as Default
Key takeaways
The key takeaways for this exercise are:
• You will be able to forget an SR
• You will be able to introduce an SR
• You will be able to create a PBD
• You will be able to plug a PBD
• You will be able to repair an SR
Notes
All these introduction steps will have to be completed on each host
in the pool. The preferred method to work around this is to use the
Repair option in XenCenter to create and attach the required PBDs
for the rest of the hosts.
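Alternatively, a rough sketch of creating and plugging the remaining PBDs from the
CLI (the SR UUID placeholder is the one returned by sr-introduce; pbd-create will
fail for any host that already has a PBD for this SR, and that error can be ignored):
SR_UUID=<UUID from sr-introduce>
for HOST in $(xe host-list --minimal | tr ',' ' '); do
  PBD=$(xe pbd-create sr-uuid=$SR_UUID host-uuid=$HOST device-config:server=192.168.10.12 device-config:serverpath=/nfs)
  xe pbd-plug uuid=$PBD
done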
Exercise 5: Configure a Dedicated
Storage Network Interface
Overview
In this exercise we will create and configure a pool-wide Secondary Interface which is dedicated
to storage traffic.
When you want to dedicate a secondary interface for a specific purpose, you must ensure the
appropriate network configuration is in place to ensure the NIC is used only for the desired
traffic. For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or
VLAN must be configured so that the target is only accessible over the assigned NIC. If your
physical and IP configuration do not limit the traffic that can be sent across the storage NIC, it is
possible to send other traffic, such as management traffic, across the secondary interface.
Step-by-step guidance
Estimated time to complete this lab: 10 minutes.
Step Action
1. Using XenCenter select the pool node
2. Select the Networking tab
3. Click Configure…
4. Click Add IP address
5. Name the secondary interface Storage Network
6. Select Network 2
7. Enter the following IP address information:
IP address range: 192.168.11.21
Subnet mask: 255.255.255.0
Gateway: <leave blank>
Note: the additional IPs listed will be assigned to the member servers in the
pool.
8. Click OK
9. Confirm each server in the pool has a dedicated storage network created.
10. Ping each server in the pool using the new IP to confirm communication.
Note: use either the host console tab in XenCenter or the SSH connection.
Key takeaways
The key takeaways for this exercise are:
• You will be able to designate and configure a pool-wide
dedicated storage network
Notes
When selecting a NIC to configure as a secondary interface for use with
iSCSI or NFS SRs, ensure that the dedicated NIC uses a separate IP
subnet that is not routable from the management interface. If this is not
enforced, then storage traffic may be directed over the main
management interface after a host reboot, due to the order in which
network interfaces are initialized.
For more information review the Configuring a Dedicated Storage NIC
section in the admin guide.
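For reference, the same configuration can be applied per host from the CLI. A rough
sketch based on the dedicated storage NIC procedure in the admin guide (verify the
exact parameters against your XenServer version):
xe pif-list host-name-label=vXS01 network-name-label="Network 2"
xe pif-reconfigure-ip uuid=<PIF UUID> mode=static IP=192.168.11.21 netmask=255.255.255.0
xe pif-param-set uuid=<PIF UUID> disallow-unplug=true
xe pif-param-set uuid=<PIF UUID> other-config:management_purpose="Storage"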
Exercise 6: Configure Multipathing
Overview
In this exercise you will enable multipathing within the pool. Configuring multipathing
helps provide redundancy for network storage traffic in case of partial network or
device failure. The term multipathing refers to routing storage traffic to a storage
device over multiple paths for redundancy (failover) and increased throughput.
Step-by-step guidance
Estimated time to complete this lab: 20 minutes.
Step Action
Configure Multipathing
1. Using XenCenter right-click server vXS03 and select Enter Maintenance
Mode…
2. Click Enter Maintenance Mode
3. Repeat steps 1-2 for server vXS02 and lastly vXS01. All servers should now be in
maintenance mode.
4. A host cannot enter maintenance mode with running VMs. Leave the
Maintenance Mode window open and Shut Down the Demo Linux 2 VM.
5. Click Enter Maintenance Mode on the pool master.
Note: there will be a short delay while XenCenter migrates any active virtual
machines and unplugs the existing storage; if the server is a pool master, it will
be disconnected and may disappear from the Resources pane temporarily while
a new pool master is assigned.
6. Right-click vXS01 and select Properties
7. Select Multipathing and tick Enable multipathing on this server and click OK
Note: when enabling multipathing all SRs need to reattach. This can be
confirmed in XenCenter:
8. Repeat steps 6 and 7 for server vXS02 and lastly vXS03. All servers should now
have multipathing enabled. Wait for the process to complete on a server
before moving to the next server.
9. Right-click on server vXS01 and click Exit Maintenance mode.
10. Select Skip
Note: as we shut down the VM while running the Maintenance Mode wizard,
XenCenter will prompt to restore the VM to the host.
11. Complete the previous step for vXS02 and vXS03. All hosts in the pool should
now be out of Maintenance mode.
12. Select the iSCSI SR and expand the Multipathing field. Notice that only 1
path is available per server.
13. For some storage servers/devices manual configuration is required via the CLI.
Switch to or connect to an SSH session on server vXS01
14. Use the iscsiadm command to discover the available targets on the vStorage
server:
iscsiadm -m discovery -t st -p 192.168.10.12
Note: -t refers to the type, st refers to the sendtargets parameter (it must
always be sendtargets); -p refers to the ip:port parameter.
Note: for more information on the iscsiadm command use man iscsiadm.
15. Discover the target available for IP 192.168.11.12:
iscsiadm -m discovery -t st -p 192.168.11.12
Note: the same iSCSI target is accessible from both IPs (dedicated storage
network and management network). In a real-world scenario the dedicated
storage network should be isolated from all other traffic including management
and VM traffic.
16. Manually login to the targets:
iscsiadm -m node --login
Note: In our lab we have one target. If the vStorage server (TGT iSCSI Target)
had multiple targets available you would specify the target IQN as part of this
command. The errors are expected as a session already existed prior to
executing the command.
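For example, to log in to a single target only, the IQN discovered in step 14 would be
specified like this (a sketch; adjust the IQN and portal to your environment):
iscsiadm -m node -T iqn.2015-01.com.storage:server.target1 -p 192.168.11.12 --login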
17. Switch to XenCenter; select the iSCSI SR and confirm the current Multipath
status
Note: as we have manually logged in on vXS01, it now shows two paths.
18. Repeat steps 14-16 on servers vXS02 and vXS03
19. Switch to XenCenter; select the iSCSI SR and confirm the current Multipath
status
Review multipath status via the CLI
21. Switch to your SSH connection
22. Confirm there are two sessions from the host to the target:
iscsiadm -m session
Note: using this command we can see we have two sessions although
XenCenter indicates one.
23. Review additional details relating to the iSCSI sessions:
iscsiadm -m session -P3
Note: this command lists detailed information about the current iSCSI sessions
24. List the paths as seen by multipath:
multipath -ll
Note: the errors in the output are due to a bug in the multipath.conf file which is
fixed in later versions.
25. Although it is recommended to use the multipath command you can also list the
paths using mpathutil:
mpathutil status
Key takeaways
The key takeaways for this exercise are:
• You will be able to enable multipathing on a per-host basis
• You will be able to manually login to an iSCSI target
• You will be able to review the current multipathing status
Notes
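Multipathing can also be enabled from the CLI. A rough sketch based on the admin
guide (the host should be in maintenance mode or have its PBDs unplugged first;
verify the other-config keys against your XenServer version):
xe host-param-set uuid=<host UUID> other-config:multipathing=true
xe host-param-set uuid=<host UUID> other-config:multipathhandle=dmp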
Exercise 7: Review volumes and virtual
disks
Overview
In this exercise you will review the layout of virtual disks in XenServer. You will
review the LVM layout used by XenServer for managing virtual disks and the virtual
disk objects (VDI) managed by XAPI.
Step-by-step guidance
Estimated time to complete this lab: 15 minutes.
Step Action
1. Connect to vXS01 or switch to your SSH session
2. Display and review the physical volume information for the host:
pvs
Note the physical volume (PV) path, volume group (VG) name, and physical size
(PSize).
3. Display and review additional information about the physical volume:
pvdisplay
4. Display and review the volume group information for the host:
vgs
Note: in this configuration there is one logical volume that spans the whole
physical volume. On this logical volume we implement the EXT filesystem.
5. Display and review additional information about the volume group:
vgdisplay
6. Display and review the logical volumes in the pool:
lvs
7. Display and review additional information on the logical volumes in the pool:
lvdisplay
Note: in the lab we have a logical volume for the Local SR which has an EXT file
system loaded on top. This provides support for IntelliCache. We can also see the
logical volumes (virtual disks) on the iSCSI SR with the VG Name VG_XenStorage…
8. Display and review all current snapshots in the pool:
xe snapshot-list
9. List the parameters of the Initial Status snapshot you took:
xe snapshot-list uuid=<UUID of Initial Status snapshot> params=all
10. Locate the VM UUID of the snapshot by reviewing the snapshot-of field. This
indicates which VM the snapshot belongs to:
snapshot-of ( RO): <uuid>
11. List all the virtual disks and VBDs associated with the specific VM:
xe vm-disk-list uuid=<VM UUID>
Note: the output lists the VBD, which is the disk connection to the VM, and the VDI,
which represents the virtual disk.
12. List all the parameters of a virtual disk (VDI):
xe vdi-param-list uuid=<VDI UUID>
13. Locate the "vhd-parent" VDI UUID in the sm-config field
14. Use the vhd-util tool to list the associations between the virtual disks (the VHD tree):
TIP: use the vgs command to locate the volume group name of the iSCSI SR required
in this step.
vhd-util scan -fpm "VHD-*" -l VG_XenStorage-<UUID>
TIP: remember to use the highlight+right-click action to copy the VG (volume group)
name.
15. From the output, locate the virtual disk UUID of the snapshot/child VHD and the UUID
of the parent VHD
TIP: you can review all disk/storage related activities by monitoring the Storage
Manager (SM) log file /var/log/SMlog
16. Switch to XenCenter and start the Demo Linux 1 VM on server vXS01
17. Select the Snapshots tab and delete all snapshots you created previously.
18. Locate the call information in the SM logs on vXS01 and review the associated
actions:
less /var/log/SMlog | grep -iA10 deletion
19. Locate and review the VHD tree information in the SM logs on vXS01:
less /var/log/SMlog | grep -i tree -A5
Key takeaways
The key takeaways for this exercise are:
• You will be able to locate and review the LVM layout on a host
• Review snapshot objects and virtual disks
• Review the VHD tree/child-parent relationship
Notes
When deleting a child virtual disk the coalescing process is automatically
initiated. This process can be reviewed in the SM log file. For more
information on coalescing and methods to manually trigger the process
review the admin guide.
Exercise 8: Migrate a virtual disk (VDI)
Overview
In this exercise you will move the Virtual Disk Image (VDI) of a running VM from
iSCSI shared storage to NFS shared storage, without moving the VM. This process
can be used by Administrators when migrating disks to different storage tiers.
The options available for moving a VDI depend on the VM state, i.e. running or shut
down, and you will review these options for moving a VDI between Storage
Repositories.
Step-by-step guidance
Estimated time to complete this lab: 15 minutes.
Step Action
1. Using XenCenter select the running Demo Linux 1 VM
2. Select the Console node and login (root/Citrix123)
3. Configure a static IP address (192.168.10.50):
nano /etc/sysconfig/network-scripts/ifcfg-eth0 or vi /etc/sysconfig/network-scripts/ifcfg-eth0
4. Configure a gateway address (192.168.10.1):
nano /etc/sysconfig/network or vi /etc/sysconfig/network
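For reference, the relevant settings in these two files would look roughly like the
following (a sketch; device names and any additional keys may differ in your VM
template):
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.50
NETMASK=255.255.255.0
/etc/sysconfig/network:
NETWORKING=yes
GATEWAY=192.168.10.1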
5. Restart the network service:
/etc/init.d/network restart
Note: the error on eth1 is expected as the interface is set to be loaded on boot.
6. Initiate a ping to a public IP address to verify liveness:
ping 8.8.8.8
7. Select the Storage tab
8. Highlight the virtual disk attached to the running VM
9. Click Move…
10. Select the NFS virtual disk storage repository from the list of available
locations:
11. Click Move
Note: you can also migrate the virtual disk to local storage if required.
12. Note the icon for the VM changes as the virtual disk is migrated between
storage repositories.
13. Switch to the console tab and confirm the ping is still active.
14. Confirm the virtual disk now resides on the NFS SR using the VM Storage tab:
Note: the same process can be initiated directly from the SR node and has the
same result.
15. Select the Demo Linux 1 VM and shut it down
16. In the Storage tab, notice that the Move… option is not available when the VM
is shut down
17. Right-click on the VM and select Move VM
18. Select the iSCSI SR
19. Click Move
Note: you can also migrate the virtual disk to local storage if required.
Note: observe the progress indicator at the bottom of XenCenter. You can also follow the
Move process in the Notifications node in XenCenter.
Summary
Key takeaways
The key takeaways for this exercise are:
• You have created NFS shared storage for Pool A.
• You will be able to move a virtual disk between shared
storage SRs.
• You will be able to move a virtual disk from shared storage
to a local storage SR.
Notes
There are a number of cases where only the storage needs to be moved,
and not the VM itself, from one XenServer host
to another. A good example of this is moving storage from a
development NAS or SAN to a production SAN. If a storage
appliance is upgraded or replaced, it may also be desirable to keep
VMs on the servers they reside on and just migrate the storage.
Exercise 9: Create and Resize a Virtual
Disk
Overview
In this exercise we will resize the virtual disk (VDI) size for a Linux virtual machine.
Storage on XenServer VMs is provided by virtual disks. A virtual disk is a persistent,
on-disk object that exists independently of the VM to which it is attached. Virtual
disks are stored on XenServer Storage Repositories (SRs), and can be attached,
detached and re-attached to the same or different VMs when needed. New virtual
disks can be created at VM creation time (from within the New VM wizard) and they
can also be added after the VM has been created from the VM's Storage tab.
In many cases an administrator will be required to increase the disk size of a
particular virtual machine, i.e. to store memory dump files, etc. We will resize our
Windows 7 virtual machine via XenCenter and extend it using Disk Management. We
will then increase our Linux virtual machine disk size via the XenServer host CLI and
extend the file system using resize2fs.
Step-by-step guidance
Estimated time to complete this lab: 20 minutes.
Step Action
1. In this exercise we will use a VM running on our physical host and not the virtual
XenServer VMs.
Shut down the Win7-2 VM and click on the Storage tab.
2. Select the virtual disk (VDI), click Properties, and select the Size and Location node.
3. Increase the disk size to 30GB by changing the Size parameter and click OK.
4. The change should now be reflected in the Storage tab.
5. There are two options for extending the volume for Windows machines to utilize the
additional space allocated, i.e. via the Disk Management GUI or via the CLI command
diskpart.
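For reference, the equivalent diskpart sequence would look roughly like the following
(a sketch; the volume number must match the C: volume shown by list volume in your
VM):
diskpart
list volume
select volume 1
extend
exit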
Let’s review how to complete the task via the Disk Management GUI.
6. Start the Win7-2 VM and login as user 1 via the Console tab. Right-click on Start >
Computer and select Manage
7. In Computer Management select Disk Management. Confirm the additional space
allocated (6GB) is detected.
8. Right click on the active C partition and select Extend Volume
9. Click Next
10. Extend the partition to maximum size by using the default settings. Click Next.
11. Click Finish
12. Confirm the partition has been extended.
Resize your Demo Linux 2 VM virtual disk (VDI) via XenServer host CLI
13. Ensure the Demo Linux 2 VM is shut down.
14. Connect to vXS01 via SSH (PuTTY)
15. Locate the VDI UUID for the Demo Linux 2 VM that we will resize:
xe vm-disk-list vm=Demo\ Linux\ 2
Note: In the above example the Demo Linux 2 VM has two VDIs associated with it
(swap and root). We will increase the size of the root partition. We can locate the
correct VDI by comparing the disk sizes to find the correct UUID i.e. root= 4GB and
swap=512MB.
Note: For each VDI listed an associated VBD is displayed.
16. Extend the VDI size by 2 GB:
xe vdi-resize uuid=<uuid of VDI> disk-size=6GiB
Note: The disk size needs to be specified in gibibytes (binary size value), i.e. GiB
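For reference, 6GiB corresponds to 6 x 1024 x 1024 x 1024 = 6442450944 bytes; if the
GiB suffix is not accepted by your xe version, the size can be supplied in bytes instead
(an assumption to verify in your environment):
xe vdi-resize uuid=<uuid of VDI> disk-size=6442450944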
17. Confirm the resize was completed successfully by listing the VDI again:
xe vm-disk-list vm=Demo\ Linux\ 2
Extend the volume of your Linux VM
18. Start and login to the Demo Linux 2 VM
19. List the amount of disk space free on the file system of the VM:
df -h
Note: We will still see the original size value
20. List the partition information:
fdisk -l
Note: We can now see the additional space available for /dev/xvda
21. Extend the file system using resize2fs:
resize2fs /dev/xvda1
22. Confirm the file system size has increased using the df command
df -h
Summary
Key takeaways
The key takeaways for this exercise are:
• You will be able to resize a virtual disk using XenCenter.
• You will be able to resize a virtual disk using the CLI.
• You will be able to extend a partition using diskpart for Windows
VMs and fdisk for Linux VMs.
• You will be able to resize a Linux file system using resize2fs.
Notes
Virtual disks on paravirtualized VMs (that is, VMs with XenServer Tools installed) can
be "hotplugged", that is, you can add, delete, attach and detach virtual disks without
having to shut down the VM first. VMs running in HVM mode (without XenServer Tools
installed) must be shut down before you carry out any of these operations; to avoid
this, you should install XenServer Tools on all HVM virtual machines.