VIRSH:KVM – Moving disks to another storage pool

Moving disks around is part of the life cycle of a guest. Storage pools (local or network) may fail or fill up due to poor capacity management. Ordinarily, one would have to shut down the guest, copy the storage volume file elsewhere (if it is a file), wait, update the XML configuration, and launch the guest again. However, in mission-critical environments this downtime may not always be acceptable.
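For reference, the offline approach looks roughly like this; a sketch only, assuming a file-backed volume and the same guest and pool paths used throughout this post (the live method described below avoids the downtime):

# virsh shutdown vm0
# cp /mnt/images/vm0.img /mnt/images2/vm0.img
# virsh edit vm0        # point <source file=...> at the new path
# virsh start vm0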

To get the source path, check the guest's XML configuration file or the storage volume. This requires knowing which storage pool is in use.

[root@nuc4 instances]# virsh vol-list --pool images
 Name                 Path
------------------------------------------------------------------------------
 vm0.img              /mnt/images/vm0.img

[root@nuc4 instances]# virsh vol-list --pool images | awk '$1 ~ /^vm0.img$/ {print $2}'
/mnt/images/vm0.img
[root@nuc4 instances]#

[root@nuc4 mnt]# virsh pool-dumpxml images | awk '/<path>.*<\/path>/ {print $1}'
<path>/mnt/images</path>
[root@nuc4 mnt]#
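Since copy time scales with the size of the volume, a quick size check up front helps estimate how long the move will take; a minimal check, assuming the same pool and volume names as above:

# # Report capacity and allocation as seen by libvirt:
# virsh vol-info --pool images vm0.img

# # Or inspect the image file directly:
# qemu-img info /mnt/images/vm0.img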

Ensure that the destination is an existing storage pool; if not, go ahead and create it.

[root@nuc4 pools]# virsh pool-define-as images2 dir - - - - "/mnt/images2"
Pool images2 defined

[root@nuc4 pools]# virsh pool-build images2
Pool images2 built

[root@nuc4 mnt]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 images               active     yes
 images2              inactive   no
 iso                  active     yes


[root@nuc4 pools]# virsh pool-start images2
Pool images2 started

[root@nuc4 pools]# virsh pool-autostart images2
Pool images2 marked as autostarted

[root@nuc4 pools]# virsh pool-info images2
Name:           images2
UUID:           ae7d1787-1790-4c42-9555-c3ce85242fb2
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       102.03 GiB
Allocation:     24.90 GiB
Available:      77.13 GiB

[root@nuc4 pools]# virsh vol-list --pool images2
 Name                 Path
------------------------------------------------------------------------------
 vm10.img             /mnt/images2/vm10.img

[root@nuc4 pools]#

Moving a disk can take a while, so make sure enough time is available before starting. Perform the following steps:

# 1 - Dump the inactive XML configuration file for the guest:
[root@nuc4 instances]# virsh dumpxml --inactive vm0 > vm0.xml


# 2 - Undefine the guest with the following command:
[root@nuc4 instances]# virsh undefine vm0
Domain vm0 has been undefined
[root@nuc4 instances]#


# 3 - Copy the virtual disk to the new location by executing the following:
[root@nuc4 instances]# virsh blockcopy --domain vm0 --path /mnt/images/vm0.img --dest /mnt/images2/vm0.img --wait --verbose --pivot
Block Copy: [100 %]
Successfully pivoted
[root@nuc4 instances]#
 

# 4 - Now, edit the guest's XML configuration file and change the disk path to the new location:
[root@nuc4 instances]# diff -u vm0.xml.orig vm0.xml
--- vm0.xml.orig        2018-02-11 19:18:57.399105600 +0300
+++ vm0.xml     2018-02-11 19:29:14.198094975 +0300
@@ -33,7 +33,7 @@
     <emulator>/usr/libexec/qemu-kvm</emulator>
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
-      <source file='/mnt/images/vm0.img'/>
+      <source file='/mnt/images2/vm0.img'/>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
     </disk>
[root@nuc4 instances]#
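# # (Optional) Instead of editing by hand, a hypothetical one-liner can patch the
# # path, assuming the old path appears exactly once in vm0.xml:
# sed -i 's|/mnt/images/vm0.img|/mnt/images2/vm0.img|' vm0.xml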
 
 
# 5 - Redefine the guest, as follows:
[root@nuc4 instances]# virsh define vm0.xml
Domain vm0 defined from vm0.xml
[root@nuc4 instances]#


# 6 - Remove the source disk by running the following command:
[root@nuc4 instances]# virsh vol-delete --pool images --vol vm0.img
Vol vm0.img deleted
[root@nuc4 instances]#

Notes:

Disk moving can only be performed on transient domains, which is why we execute virsh undefine first. To make the guest persistent again after the transfer, we also dump its XML configuration file and modify the storage volume path there.
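A quick way to check whether a domain is currently persistent or transient; a minimal check, assuming the guest name vm0:

# virsh dominfo vm0 | grep -i persistent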

Moving the disk does two things:

  • Firstly, it copies all the data of the source to the destination.
  • Secondly, when the copying is complete, the source and destination remain mirrored until the job is either canceled with blockjob --abort or actually switched over to the new target with blockjob --pivot.

The preceding blockcopy command does everything at the same time. The --wait option will not give control back to the user until the command fails or succeeds. It is essentially the same as the following:

# virsh blockcopy --domain vm0 --path /mnt/images/vm0.img --dest /mnt/images2/vm0.img

# # Monitor the progress of the copy by executing the following:
# watch -n 5 "virsh blockjob --domain vm0 --path /mnt/images/vm0.img --info"

# # When it's done, execute this:
# virsh blockjob --domain vm0 --path /mnt/images/vm0.img --pivot
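If the copy needs to be canceled rather than pivoted, the same blockjob interface can abort it; a sketch, assuming the job is still running against the original path:

# virsh blockjob --domain vm0 --path /mnt/images/vm0.img --abort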

It is also possible to change the disk format on the fly by specifying the --format argument with the format that we want to convert the disk into. If we want to copy it to a block device, specify --blockdev.
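Two illustrative invocations; sketches only, since the target names vm0.raw and /dev/vg0/vm0 are hypothetical:

# # Convert the disk from qcow2 to raw while copying:
# virsh blockcopy --domain vm0 --path /mnt/images/vm0.img --dest /mnt/images2/vm0.raw --format raw --wait --verbose --pivot

# # Copy onto a block device (for example, an LVM logical volume):
# virsh blockcopy --domain vm0 --path /mnt/images/vm0.img --dest /dev/vg0/vm0 --blockdev --wait --verbose --pivot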
