QEMU XML file format

Answer (Stephen Harris): after you modify the XML file you should run virsh define myguest.

Asker: that did it. Time to go read the documentation.

With the raw disk image format, if your file system supports holes (for example, ext2 or ext3), then only the written sectors will reserve space.

Although raw images give optimal performance, only very basic features are available with a raw image; for example, no snapshots are available. The qcow2 format, by contrast, offers optional AES encryption, zlib-based compression, support for multiple VM snapshots, and smaller images, which are useful on file systems that do not support holes.

Note that this expansive feature set comes at the cost of performance. Although only the formats above can be used to run a guest virtual machine or host physical machine, qemu-img also recognizes and supports a number of other formats in order to convert from them into either raw or qcow2 format.

QEMU driver support since 0. The optional emulatorpin element specifies which host physical CPUs the "emulator" (a subset of the domain not including vCPUs or iothreads) will be pinned to. If this is omitted, and the cpuset attribute of the vcpu element is not specified, the "emulator" is pinned to all the physical CPUs by default. It contains one required attribute, cpuset, specifying which physical CPUs to pin to. The optional iothreadpin element similarly pins IOThreads to host physical CPUs; if it is omitted and the cpuset attribute of the vcpu element is not specified, the IOThreads are pinned to all the physical CPUs by default.

See the iothreadids description for valid iothread values. The optional shares element specifies the proportional weighted share for the domain. If this is omitted, it defaults to the OS provided defaults. NB, there is no unit for the value; it is a relative measure based on the setting of other VMs, e.g. a VM configured with a value of 2048 will get twice as much CPU time as a VM configured with a value of 1024.

The value should be in the range [2, 262144]. The optional period element specifies the enforcement interval (unit: microseconds). Within a period, each vCPU of the domain will not be allowed to consume more than its quota worth of runtime. The value should be in the range [1000, 1000000].

A period with value 0 means no value. Only the QEMU driver supports this. The optional quota element specifies the maximum allowed bandwidth (unit: microseconds). A domain with quota set to any negative value indicates that the domain has infinite bandwidth for vCPU threads, which means that it is not bandwidth controlled. The value should be in the range [1000, 17592186044415] or less than 0.

A quota with value 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed. Equivalent period and quota elements exist for the emulator threads and for IOThreads (only the QEMU driver supports them), with the same value ranges; the IOThread quota can likewise be used to ensure that all IOThreads run at the same speed. The optional vcpusched, iothreadsched and emulatorsched elements specify the scheduler type (values batch, idle, fifo, rr) for particular vCPU, IOThread and emulator threads respectively. For vcpusched and iothreadsched, the attributes vcpus and iothreads select which vCPUs/IOThreads this setting applies to; leaving them out selects the default. The element emulatorsched does not have that attribute.

Valid vcpus values start at 0 and go up to one less than the number of vCPUs defined for the domain. Valid iothreads values are described in the iothreadids description. If no iothreadids are defined, then libvirt numbers IOThreads from 1 to the number of iothreads available for the domain. For real-time schedulers (fifo, rr), priority must be specified as well (and is ignored for non-real-time ones). The value range for the priority depends on the host kernel (usually 1-99). The optional cachetune element can control allocations for CPU caches using resctrl on the host.
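
Putting the cputune elements described above together, a configuration might look like the following sketch (CPU numbers, shares and quota values are purely illustrative, not defaults):

<cputune>
  <vcpupin vcpu='0' cpuset='1-4,^2'/>
  <vcpupin vcpu='1' cpuset='0,1'/>
  <emulatorpin cpuset='1-3'/>
  <iothreadpin iothread='1' cpuset='5,6'/>
  <shares>2048</shares>
  <period>1000000</period>
  <quota>-1</quota>
  <vcpusched vcpus='0-4,^3' scheduler='fifo' priority='1'/>
  <iothreadsched iothreads='2' scheduler='batch'/>
</cputune>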

Whether or not cachetune is supported can be gathered from the capabilities XML, where some limitations such as minimum size and required granularity are reported as well. The required attribute vcpus specifies to which vCPUs this allocation applies.

A vCPU can only be a member of one cachetune element allocation. The vCPUs specified by cachetune can be identical to those in memorytune, but they are not allowed to overlap. The optional, output-only id attribute identifies the cache uniquely.

Supported subelements are cache and monitor. The cache element's type attribute gives the type of allocation: code for code (instructions), data for data, or both for both code and data (unified). Its size attribute gives the size of the region to allocate; the value by default is in bytes, but the unit attribute can be used to scale the value. The optional element monitor creates the cache monitor(s) for the current cache allocation and has the following required attributes: level (the host cache level the monitor belongs to) and vcpus (the vCPU list the monitor applies to). The default monitor has the same vCPU list as the associated allocation.

For non-default monitors, overlapping vCPUs are not permitted. The optional memorytune element can control allocations for memory bandwidth using resctrl on the host. Whether or not this is supported can be gathered from the capabilities XML, where some limitations such as minimum bandwidth and required granularity are reported as well.
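
As a sketch (cache ids, sizes and bandwidth percentages are illustrative and depend on the host's resctrl capabilities), the cachetune and memorytune allocations live inside cputune:

<cputune>
  <cachetune vcpus='0-3'>
    <cache id='0' level='3' type='both' size='3' unit='MiB'/>
    <monitor level='3' vcpus='0-3'/>
  </cachetune>
  <memorytune vcpus='0-3'>
    <node id='0' bandwidth='60'/>
  </memorytune>
</cputune>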

A vCPU can only be a member of one memorytune element allocation. The vcpus specified by memorytune can be identical to those specified by cachetune; however, they are not allowed to overlap each other. The memory element specifies the maximum allocation of memory for the guest at boot time. The memory allocation includes possible additional memory devices specified at startup or hotplugged later.

The units for this value are determined by the optional attribute unit, which defaults to "KiB" (kibibytes, 2^10 or blocks of 1024 bytes). Valid units are "b" or "bytes" for bytes, "KB" for kilobytes (10^3 or 1,000 bytes), "k" or "KiB" for kibibytes (1,024 bytes), "MB" for megabytes (10^6 or 1,000,000 bytes), "M" or "MiB" for mebibytes (2^20 or 1,048,576 bytes), "GB" for gigabytes (10^9 or 1,000,000,000 bytes), "G" or "GiB" for gibibytes (2^30 or 1,073,741,824 bytes), "TB" for terabytes (10^12 or 1,000,000,000,000 bytes), or "T" or "TiB" for tebibytes (2^40 or 1,099,511,627,776 bytes).

However, the value will be rounded up to the nearest kibibyte by libvirt, and may be further rounded to the granularity supported by the hypervisor. Some hypervisors also enforce a minimum, such as 4000KiB. In case NUMA is configured for the guest, the memory element can be omitted. In the case of a crash, the optional attribute dumpCore can be used to control whether the guest memory should be included in the generated coredump or not (values "on", "off").

The maxMemory element specifies the run-time maximum memory allocation of the guest. Its slots attribute specifies the number of slots available for adding memory to the guest; the bounds are hypervisor specific. Note that due to alignment of the memory chunks added via memory hotplug, the full size allocation specified by this element may be impossible to achieve. The currentMemory element specifies the actual allocation of memory for the guest. This value can be less than the maximum allocation, to allow for ballooning up the guest's memory on the fly.

If this is omitted, it defaults to the same value as the memory element. The unit attribute behaves the same as for memory. The optional memoryBacking element may contain several elements that influence how virtual memory pages are backed by host pages. Its hugepages element tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size. The page sub-element selects specific hugepage sizes; it has one compulsory attribute, size, which specifies which hugepages should be used (especially useful on systems supporting hugepages of different sizes).

The default unit for the size attribute is kibibytes (a multiplier of 1024). To use a different unit, use the optional unit attribute. In the example snippet below, one-gigabyte hugepages are used for every NUMA node except node number four.
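
For illustration, the memory sizing elements and hugepage backing described above might be combined as follows (all sizes and the node layout are hypothetical):

<maxMemory slots='16' unit='KiB'>16777216</maxMemory>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<memoryBacking>
  <hugepages>
    <page size='1' unit='G' nodeset='0-3,5'/>
    <page size='2' unit='M' nodeset='4'/>
  </hugepages>
</memoryBacking>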

For the correct nodeset syntax, see the guest NUMA topology description later in this document. The nosharepages element instructs the hypervisor to disable shared pages (memory merge, KSM) for this domain. When the locked element is set and supported by the hypervisor, memory pages belonging to the domain will be locked in the host's memory and the host will not be allowed to swap them out, which might be required for some workloads such as real-time.

Thus, enabling this option opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it is running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Using the type attribute of the source element, it is possible to provide "file" to use file memory backing or keep the default "anonymous". Using the mode attribute of the access element, specify whether the memory is to be "shared" or "private".

This can be overridden per NUMA node by memAccess. Using the mode attribute of the allocation element, specify when to allocate the memory by supplying either "immediate" or "ondemand". When the discard element is set and supported by the hypervisor, the memory content is discarded just before the guest shuts down or when a DIMM module is unplugged.
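
A memoryBacking block combining the elements just described might look like this sketch (whether each element is honored depends on the hypervisor):

<memoryBacking>
  <nosharepages/>
  <locked/>
  <source type='file'/>
  <access mode='shared'/>
  <allocation mode='immediate'/>
  <discard/>
</memoryBacking>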

Please note that this is just an optimization and is not guaranteed to work in all cases (e.g. when the hypervisor does not support it). The optional memtune element provides details regarding the memory tunable parameters for the domain; its sub-elements include hard_limit, soft_limit, swap_hard_limit and min_guarantee. The last piece is hard to determine, so one needs to guess and try. For backwards compatibility, output is always in KiB. The units for these values are kibibytes (i.e. blocks of 1024 bytes).
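
A memtune sketch with hypothetical limits (the actual values should be sized for the workload, and swap_hard_limit must not be lower than hard_limit):

<memtune>
  <hard_limit unit='G'>4</hard_limit>
  <soft_limit unit='G'>2</soft_limit>
  <swap_hard_limit unit='G'>6</swap_hard_limit>
  <min_guarantee unit='M'>512</min_guarantee>
</memtune>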

The optional memory element specifies how to allocate memory for the domain process on a NUMA host. It contains several optional attributes. Attribute mode is either 'interleave', 'strict', 'preferred', or 'restrictive', and defaults to 'strict'. The value 'restrictive' specifies using the system default policy, with only cgroups used to restrict the memory nodes; it requires setting mode to 'restrictive' in memnode elements.

Attribute nodeset specifies the NUMA nodes, using the same syntax as attribute cpuset of element vcpu. The optional placement attribute can be used to indicate the memory placement mode for the domain process; its value can be either 'static' or 'auto'.

If placement of vcpu is 'auto', and numatune is not specified, a default numatune with placement 'auto' and mode 'strict' will be added implicitly. Optional memnode elements can specify memory allocation policies per each guest NUMA node. For those nodes having no corresponding memnode element, the default from element memory will be used. Attribute cellid addresses guest NUMA node for which the settings are applied. Attributes mode and nodeset have the same meaning and syntax as in memory element.
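
A numatune sketch (node and cell numbers are illustrative):

<numatune>
  <memory mode='strict' nodeset='1-4,^3'/>
  <memnode cellid='0' mode='strict' nodeset='1'/>
  <memnode cellid='2' mode='preferred' nodeset='2'/>
</numatune>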

The memnode settings are not compatible with automatic placement (QEMU only). The optional blkiotune element provides the ability to tune Blkio cgroup tunable parameters for the domain. Its optional weight element gives the overall I/O weight of the guest; the value should be in the range [100, 1000].

After kernel 2.6.39, the value can be in the range [10, 1000]. The domain may have multiple device elements that further tune the weights for each host block device in use by the domain.

Each device element has two mandatory sub-elements, path describing the absolute path of the device, and weight giving the relative weight of that device, in the range [100, 1000]. Hypervisors may allow for virtual machines to be placed into resource partitions, potentially with nesting of said partitions.
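
A blkiotune sketch (device paths and weights are examples only):

<blkiotune>
  <weight>800</weight>
  <device>
    <path>/dev/sda</path>
    <weight>1000</weight>
  </device>
  <device>
    <path>/dev/sdb</path>
    <weight>500</weight>
  </device>
</blkiotune>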

The resource element groups together configuration related to resource partitioning. It currently supports a child element partition whose content defines the absolute path of the resource partition in which to place the domain.

If no partition is listed, then the domain will be placed in a default partition. Only the hypervisor specific default partition can be assumed to exist by default. Resource partitions are currently supported by the QEMU and LXC drivers, which map partition paths to cgroups directories, in all mounted controllers.
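
For example, placing the domain in a hypothetical partition named /virtualmachines/production:

<resource>
  <partition>/virtualmachines/production</partition>
</resource>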

This can be configured by using the appid attribute of the fibrechannel element (a child of the resource element). The attribute contains a single string (max 128 bytes) and it is used by the kernel to create the VMID (since libvirt 7). Requirements for the CPU model, its features and topology can be specified using the following collection of elements.

In case no restrictions need to be put on the CPU model and its features, a simpler cpu element can be used. The cpu element is the main container for describing guest CPU requirements. Its match attribute specifies how strictly the virtual CPU provided to the guest matches these requirements. Possible values for the match attribute are "minimum", "exact" and "strict". With "minimum", the specified CPU model and features describe the minimum requested CPU: a better CPU will be provided to the guest if it is possible with the requested hypervisor on the current host. This is a constrained host-model mode; the domain will not be created if the provided virtual CPU does not meet the requirements.

With "exact", the virtual CPU provided to the guest should exactly match the specification. If such a CPU is not supported, libvirt will refuse to start the domain.

With "strict", the domain will not be created unless the host CPU exactly matches the specification. This is not very useful in practice and should only be used if there is a real reason. Sometimes the hypervisor is not able to create a virtual CPU exactly matching the specification passed by libvirt; an optional check attribute can be used to request a specific way of checking whether the virtual CPU matches the specification. It is usually safe to omit this attribute when starting a domain and stick with the default value.

Once the domain starts, libvirt will automatically change the check attribute to the best supported value to ensure the virtual CPU does not change when the domain is migrated to another host. The following values can be used: with "none", libvirt does no checking and it is up to the hypervisor to refuse to start the domain if it cannot provide the requested CPU; with "partial", libvirt will check the guest CPU specification before starting a domain, but the rest is left to the hypervisor.

With "partial", the hypervisor can therefore still provide a different virtual CPU. Possible values for the mode attribute are "custom", "host-model", "host-passthrough" and "maximum". In the "custom" mode, the cpu element describes the CPU that should be presented to the guest.

"custom" is the default when no mode attribute is specified; this mode makes it so that a persistent guest will see the same hardware no matter what host the guest is booted on. The "host-model" mode instead copies the host CPU definition into the domain configuration: since the CPU definition is copied just before starting a domain, exactly the same XML can be used on different hosts while still providing the best guest CPU each host supports.

The match attribute can't be used in this mode. Specifying a CPU model is not supported either, but the model element's fallback attribute may still be used. Using the feature element, specific flags may be explicitly enabled or disabled in addition to the host model; this may be used to fine-tune features that can be emulated.

On the other hand, the ABI provided to the guest is reproducible. During migration, complete CPU model definition is transferred to the destination host so the migrated guest will see exactly the same CPU model for the running instance of the guest, even if the destination host contains more capable CPUs or newer kernel; but shutting down and restarting the guest may present different hardware to the guest according to the capabilities of the new host.

Prior to libvirt 3.x, the CPU configuration created using host-model may not work as expected. In the "host-passthrough" mode, the CPU visible to the guest should be exactly the same as the host CPU, even in the aspects that libvirt does not understand. The downside of this mode is that the guest environment cannot be reproduced on different hardware.

Thus, if you hit any bugs, you are on your own. Further details of that CPU can be changed using feature elements. Migration of a guest using host-passthrough is dangerous if the source and destination hosts are not identical in both hardware, QEMU version, microcode version and configuration.

If such a migration is attempted then the guest may hang or crash upon resuming execution on the destination host. Depending on the hypervisor version, the virtual CPU may or may not contain features which may block migration even to an identical host. The "maximum" mode behaves as follows: when running a guest with hardware virtualization this CPU model is functionally identical to host-passthrough, so refer to the description above; when running a guest with CPU emulation, this CPU model will enable the maximum set of features that the emulation engine is able to support.
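
The mode attribute alone is often enough; for instance, any one of the following sketches is a valid cpu element on hypervisors that support the respective mode (the check values shown are optional):

<cpu mode='host-model'/>
<cpu mode='host-passthrough' check='none'/>
<cpu mode='maximum'/>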

Both host-model and host-passthrough modes make sense when a domain can run directly on the host CPUs (for example, domains with type kvm). However, for backward compatibility host-model may be implemented even for domains running on emulated CPUs, in which case the best CPU the hypervisor is able to emulate may be used rather than trying to mimic the host CPU model.

If an application does not care about a specific CPU and just wants the best feature set without a need for migration compatibility, the maximum mode is a good choice on hypervisors where it is available. The content of the model element specifies the CPU model requested by the guest. If a hypervisor is not able to use the exact CPU model, libvirt automatically falls back to the closest model supported by the hypervisor while maintaining the list of CPU features.

Supported values for the fallback attribute are allow (this is the default) and forbid. The model element's optional vendor_id attribute sets the vendor id seen by the guest; it must be exactly 12 characters long, and if not set the vendor id of the host is used. The vendor element specifies the CPU vendor requested by the guest; if this element is missing, the guest can be run on a CPU matching the given features regardless of its vendor. The topology element specifies the requested topology of the virtual CPU provided to the guest. Four attributes, sockets, dies, cores, and threads, accept non-zero positive integer values.

They refer to the number of CPU sockets per NUMA node, number of dies per socket, number of cores per die, and number of threads per core, respectively. The dies attribute is optional and will default to 1 if omitted, while the other attributes are all mandatory. Hypervisors may require that the maximum number of vCPUs specified by the cpus element equals the number of vCPUs resulting from the topology.

The cpu element can contain zero or more feature elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the same file as CPU models. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values: force, require, optional, disable or forbid. With require, guest creation will fail unless the feature is supported by the host CPU or the hypervisor is able to emulate it.

Individual CPU feature names are specified as part of the name attribute. The virtual CPU cache can be described with an optional cache element; if the element is missing, the hypervisor will use a sensible default.

Its optional level attribute specifies which cache level is described by the element; a missing attribute means the element describes all CPU cache levels at once. Mixing cache elements with the level attribute set and those without the attribute is forbidden.
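
A custom-mode cpu definition using the elements described above might look like this sketch (the model, vendor and feature names are examples; valid names come from the hypervisor's CPU map):

<cpu match='exact' check='partial'>
  <model fallback='allow'>core2duo</model>
  <vendor>Intel</vendor>
  <topology sockets='1' dies='1' cores='2' threads='1'/>
  <cache level='3' mode='emulate'/>
  <feature policy='require' name='ssse3'/>
  <feature policy='disable' name='lahf_lm'/>
</cpu>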

Guest NUMA topology can be specified using the numa element, whose cell child elements each describe one guest NUMA node. Mixing cells with and without the id attribute is not recommended as it may result in unwanted behaviour. Each cell element can have an optional memAccess attribute controlling whether the memory is mapped as shared or private; this is valid only for hugepages-backed memory and nvdimm modules. Each cell element can also have an optional discard attribute which fine-tunes the discard feature for the given NUMA node, as described under Memory Backing.

Accepted values are yes and no. Within an optional distances element, the sibling sub-element is used to specify the distance value between sibling NUMA cells. If no distances are given to describe the SLIT data between different cells, it will default to a scheme using 10 for local and 20 for remote distances.
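
A guest NUMA topology sketch with two cells and explicit distances (ids, CPU ranges, memory sizes and distance values are illustrative):

<cpu>
  <numa>
    <cell id='0' cpus='0-3' memory='512000' unit='KiB' discard='yes'>
      <distances>
        <sibling id='0' value='10'/>
        <sibling id='1' value='21'/>
      </distances>
    </cell>
    <cell id='1' cpus='4-7' memory='512000' unit='KiB' memAccess='shared'>
      <distances>
        <sibling id='0' value='21'/>
        <sibling id='1' value='10'/>
      </distances>
    </cell>
  </numa>
</cpu>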

The cache element has a level attribute describing the cache level, and thus the element can be repeated multiple times to describe different levels of the cache. The associativity attribute describes the cache associativity (accepted values are none, direct and full). The policy attribute describes the cache write policy (accepted values are none, writeback and writethrough). The cache element then has two mandatory child elements, size and line, which describe the cache size and cache line size.

Both elements accept two attributes, value and unit, which set the value of the corresponding cache attribute. The interconnects element can have zero or more latency child elements to describe latency between two memory nodes and zero or more bandwidth child elements to describe bandwidth between two memory nodes. Both of these have the following mandatory attributes: initiator and target (the source and destination NUMA nodes), type and value.

The type attribute gives the type of the access; accepted values are access, read and write. The value attribute holds the actual value: for latency this is the delay in nanoseconds, for bandwidth the value is in kibibytes per second.
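
A sketch of the cell-level cache and the interconnects element (all numbers are illustrative):

<numa>
  <cell id='0' cpus='0-3' memory='2097152' unit='KiB'>
    <cache level='1' associativity='direct' policy='writeback'>
      <size value='32' unit='KiB'/>
      <line value='64' unit='B'/>
    </cache>
  </cell>
  <interconnects>
    <latency initiator='0' target='0' type='access' value='5'/>
    <latency initiator='0' target='0' cache='1' type='access' value='10'/>
    <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
  </interconnects>
</numa>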

Use the additional unit attribute to change the units. To describe latency from one NUMA node to a cache of another NUMA node, the latency element has an optional cache attribute which, in combination with the target attribute, creates a full reference to the distant NUMA node's cache level. It is sometimes necessary to override the default actions taken on various events; not all hypervisors support all events and actions. Using virsh reboot or virsh shutdown would also trigger the relevant event. The following collections of elements allow the actions to be specified when a guest OS triggers a lifecycle operation.

A common use case is to force a reboot to be treated as a poweroff when doing the initial OS installation; this allows the VM to be re-configured for the first post-install bootup. With the rename-restart action, the domain will be terminated and then restarted with a new name (only supported by the libxl hypervisor driver). With coredump-destroy, the crashed domain's core will be dumped, and then the domain will be terminated completely and all resources released.

With coredump-restart, the crashed domain's core will be dumped, and then the domain will be restarted with the same configuration. For the on_lockfailure element, the following actions are recognized by libvirt, although not all of them need to be supported by individual lock managers; when no action is specified, each lock manager will take its default action.
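
In libvirt these lifecycle actions are configured with the on_poweroff, on_reboot, on_crash and on_lockfailure elements; a typical combination (illustrative, not a default) might be:

<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>coredump-restart</on_crash>
<on_lockfailure>poweroff</on_lockfailure>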

The pm element controls BIOS support for the S3 (suspend-to-mem) and S4 (suspend-to-disk) ACPI sleep states; NB: only the qemu driver supports this. If nothing is specified, then the hypervisor will be left with its default value. Note: this setting cannot prevent the guest OS from performing a suspend, as the guest OS itself can choose to circumvent the unavailability of the sleep states (e.g. S4) by turning off completely. All features are listed within the features element; omitting a togglable feature tag turns it off. The available features can be found by asking for the capabilities XML and domain capabilities XML, but a common set for fully virtualized domains is shown below.
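
The exact set depends on the guest and hypervisor; the following is only a representative sketch of the sleep-state settings and a features block:

<pm>
  <suspend-to-disk enabled='no'/>
  <suspend-to-mem enabled='yes'/>
</pm>
<features>
  <pae/>
  <acpi/>
  <apic/>
  <hap state='on'/>
  <pvspinlock state='on'/>
</features>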

The hap feature: depending on the state attribute (values on, off), enable or disable use of Hardware Assisted Paging. The default is on if the hypervisor detects availability of Hardware Assisted Paging. The privnet feature: always create a private network namespace; this is automatically set if any interface devices are defined.

This feature is only relevant for container-based virtualization drivers, such as LXC. A passthrough mode for the Hyper-V enlightenments (the hyperv element, since libvirt 8) enables all features currently supported by the hypervisor, even those that libvirt does not understand.

Migration of a guest using passthrough is dangerous if the source and destination hosts are not identical in hardware, QEMU version, microcode version and configuration. The mode attribute can be omitted and will default to custom. The pvspinlock feature notifies the guest that the host supports paravirtual spinlocks, for example by exposing the pvticketlocks mechanism.

The pmu feature: depending on the state attribute (values on, off, default on), enable or disable the performance monitoring unit for the guest. The vmport feature: depending on the state attribute (values on, off, default on), enable or disable the emulation of the VMware IO port, for vmmouse etc. Some architectures use a different interrupt controller model; for example, the 'aarch64' architecture uses gic instead of apic.

The optional version attribute specifies the GIC version; however, it may not be supported by all hypervisors. Accepted values are 2, 3 and host. The smm feature: depending on the state attribute (values on, off, default on), enable or disable System Management Mode. Its optional tseg sub-element specifies how much memory the extended TSEG takes; the size can be specified as the value of that element, and the optional unit attribute can be used to specify the unit of the aforementioned value (defaults to 'MiB').
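
For example (GIC version and TSEG size are illustrative):

<features>
  <gic version='3'/>
  <smm state='on'>
    <tseg unit='MiB'>48</tseg>
  </smm>
</features>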

If set to 0 the extended size is not advertised and only the default sizes (see above) are available. If the VM is booting you should leave this option alone, unless you are very certain you know what you are doing. This value is configurable because the calculation cannot be done with a guarantee that it will work correctly. Starting with newer pc-q35 machine types the extended TSEG is available; the values may also vary based on the loader the VM is using.

Because this setting is similar in nature to "how much RAM should the guest have", users are advised to either consult the documentation of the guest OS or loader (if there is any), or to find a working value by trial and error, changing it until the VM boots successfully. See Memory Allocation for more details about the unit attribute. The hpt feature controls the Hash Page Table of pSeries guests; possible values for its resizing attribute are enabled, which causes HPT resizing to be enabled if both the guest and the host support it; disabled, which causes HPT resizing to be disabled regardless of guest and host support; and required, which prevents the guest from starting unless both the guest and the host support HPT resizing.

If the attribute is not defined, the hypervisor default will be used. The optional maxpagesize subelement can be used to limit the usable page size for HPT guests. The vmcoreinfo feature enables the QEMU vmcoreinfo device to let the guest kernel save debug details; possible values for the state attribute are on and off. The nested-hv feature configures nested HV availability for pSeries guests. This needs to be enabled from the host (L0) in order to be effective; having HV support in the L1 guest is very desirable if it is planned to run nested (L2) guests inside it, because it will result in those nested guests having much better performance than they would when using KVM PR or TCG.
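
A sketch of the features just described (HPT resizing, nested HV and vmcoreinfo; the page size is illustrative):

<features>
  <hpt resizing='required'>
    <maxpagesize unit='MiB'>16</maxpagesize>
  </hpt>
  <nested-hv state='on'/>
  <vmcoreinfo state='on'/>
</features>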

It's possible to switch this by setting the unknown attribute of msrs to ignore. If the attribute is not defined, or is set to fault, unknown reads and writes will not be ignored. Possible values for the value attribute are broken (no protection), workaround (software workaround available) and fixed (fixed in hardware).

Configure ibs (Indirect Branch Speculation) availability for pSeries guests. Possible values for the value attribute are broken (no protection), workaround (count cache flush), fixed-ibs (fixed by serializing indirect branches), fixed-ccd (fixed by disabling the cache count) and fixed-na (fixed in hardware, no longer applicable). The guest clock is typically initialized from the host clock. Most operating systems expect the hardware clock to be kept in UTC, and this is the default.

Windows, however, expects it to be in so called 'localtime'. The offset attribute takes four possible values, allowing fine grained control over how the guest clock is synchronized to the host.

NB, not all hypervisors support all modes. With offset='utc', the guest clock will always be synchronized to UTC when booted. If the adjustment value is 'reset', the conversion is never done (not all hypervisors can synchronize to UTC on each boot; use of 'reset' will cause an error on those hypervisors).

A numeric value forces the conversion to 'variable' mode using the value as the initial adjustment; the default adjustment is hypervisor specific. With offset='localtime', the guest clock will be synchronized to the host's configured timezone when booted, if any. With offset='timezone', the guest clock will be synchronized to the requested timezone using the timezone attribute.

With offset='variable', the guest clock will have an arbitrary offset applied relative to UTC or localtime, depending on the basis attribute. The delta relative to UTC (or localtime) is specified in seconds, using the adjustment attribute. The guest is free to adjust the RTC over time and expect that it will be honored at the next reboot.
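
Some illustrative clock configurations (only one clock element is used per domain; the timezone and adjustment values are examples):

<clock offset='utc'/>
<clock offset='localtime'/>
<clock offset='timezone' timezone='Europe/Paris'/>
<clock offset='variable' basis='utc' adjustment='123456'/>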

A clock may have zero or more timer sub-elements. Each timer element requires a name attribute, and has other optional attributes that depend on the name specified. Various hypervisors support different combinations of attributes. The name attribute selects which timer is being modified, and can be one of "platform" (currently unsupported), "hpet" (xen, qemu, lxc), "kvmclock" (qemu), "pit" (qemu), "rtc" (qemu, lxc), "tsc" (xen, qemu) or "hypervclock" (qemu).

The hypervclock timer adds support for the reference time counter and the reference page for iTSC feature for guests running the Microsoft Windows operating system.

The track attribute specifies what the timer tracks, and can be "boot", "guest", "wall" or "realtime". The tickpolicy attribute determines what happens when QEMU misses a deadline for injecting a tick to the guest; this can happen, for example, because the guest was paused. With tickpolicy="delay", QEMU continues to deliver ticks at the normal rate. The guest OS will not notice anything is amiss, as from its point of view time will have continued to flow normally.

The time in the guest should now be behind the time in the host by exactly the amount of time during which ticks have been missed. With tickpolicy="catchup", ticks are delivered at a higher rate to catch up with the missed ticks. Once the timer has managed to catch up with all the missing ticks, the time in the guest and in the host should match. With tickpolicy="merge", the missed ticks are merged into one tick and injected.

The guest time may be delayed, depending on how the OS reacts to the merging of ticks. With tickpolicy="discard", the missed ticks are thrown away and future injection continues normally. The guest OS will see the timer jump ahead by a potentially quite significant amount all at once, as if the intervening chunk of time had simply not existed; needless to say, such a sudden jump can easily confuse a guest OS which is not specifically prepared to deal with it.

Assuming the guest OS can deal correctly with the time jump, the time in the guest and in the host should now match. If the policy is "catchup", there can be further details in the catchup sub-element. The catchup element has three optional attributes, each a positive integer. The attributes are threshold , slew , and limit. Other timers are always emulated. The present attribute can be "yes" or "no" to specify whether a particular timer is available to the guest.

Some platforms allow monitoring of performance of the virtual machine and the code executed inside. To enable the performance monitoring events you can either specify them in the perf element or enable them via virDomainSetPerfEvents API.
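
A perf sketch enabling a few events (event names must match those known to libvirt and supported by the host kernel):

<perf>
  <event name='cpu_cycles' enabled='yes'/>
  <event name='page_faults' enabled='yes'/>
  <event name='cache_misses' enabled='no'/>
</perf>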

The page_faults event, for example, counts minor, major, invalid and other types of page faults. The final set of XML elements is used to describe the devices provided to the guest domain. All devices occur as children of the main devices element.

The contents of the emulator element specify the fully qualified path to the device model emulator binary. To help users identify devices they care about, every device can have a direct child alias element with a name attribute where users can store an identifier for the device. The identifier has to have the "ua-" prefix and must be unique within the domain.
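
For instance (the emulator path is host-specific and the alias name is arbitrary apart from the required "ua-" prefix):

<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <!-- a device (described below) carrying a user-defined alias -->
  <disk type='file' device='disk'>
    <alias name='ua-myDisk'/>
    <!-- remaining disk sub-elements omitted for brevity -->
  </disk>
</devices>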

Any device that looks like a disk, be it a floppy, hard disk, cdrom, or paravirtualized driver, is specified via the disk element. The disk element is the main container for describing disks and supports the following attributes. The type attribute describes the underlying source of the disk; valid values include "file", "block", "dir", "network" and "volume". The device attribute indicates how the disk is to be exposed to the guest OS; possible values for this attribute are "floppy", "disk", "cdrom", and "lun", defaulting to "disk". Configured as "lun", the device behaves identically to "disk", except that generic SCSI commands from the guest are accepted and passed through to the physical device.

The model attribute indicates the emulated device model of the disk. Typically this is implied solely by the bus property, but for bus "virtio" the model can be specified further with "virtio-transitional", "virtio-non-transitional", or "virtio".

See Virtio transitional devices for more details. Indicates whether the disk needs rawio capability. Valid settings are "yes" or "no" default is "no". This attribute is only valid when device is "lun". NB, rawio intends to confine the capability per-device, however, current QEMU implementation gives the domain process broader capability than that per-process basis, affects all the domain disks.

To confine the capability as much as possible for the QEMU driver at this stage, sgio is recommended; it is more secure than rawio. Valid settings for sgio are "filtered" or "unfiltered", where the default is "filtered".

It is only available when the device is "lun". The snapshot attribute indicates the default behavior of the disk during disk snapshots: "internal" requires a file format such as qcow2 that can store both the snapshot and the data changes since the snapshot; "external" will separate the snapshot from the live data; and "no" means the disk will not participate in snapshots. Read-only disks default to "no", while the default for other disks depends on the hypervisor's capabilities.

Some hypervisors allow a per-snapshot choice as well, during domain snapshot creation. Not all snapshot modes are supported; for example, enabling snapshots with a transient disk generally does not make sense.
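
A typical local file-backed disk might be declared like this (the path and target name are examples):

<disk type='file' device='disk' snapshot='external'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>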

Representation of the disk source depends on the disk type attribute value, as follows. For type "file", the file attribute specifies the fully-qualified path to the file holding the disk. For type "block", the dev attribute specifies the fully-qualified path to the host device to serve as the disk. For type "dir", the dir attribute specifies the fully-qualified path to the directory to use as the disk. For type "network", the protocol attribute specifies the protocol used to access the requested image; possible values are "nbd", "iscsi", "rbd", "sheepdog", "gluster", "vxhs", "nfs", "http", "https", "ftp", "ftps", or "tftp".

For "nbd", the name attribute is optional. For protocols http and https an optional attribute query specifies the query string. For "iscsi" since 1. If not specified, the default LUN is zero. For "vxhs" since 3. If the tls attribute is set to "yes", then regardless of the qemu. The underlying disk source is represented by attributes pool and volume.

Attribute pool specifies the name of the storage pool managed by libvirt where the disk source resides. Attribute volume specifies the name of storage volume managed by libvirt used as the disk source.
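
Two sketches of non-file sources, one network-backed (RBD) and one volume-backed; the pool, volume and host names are placeholders:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd-pool/guest-image'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>

<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='blk-pool0' volume='blk-pool0-vol0'/>
  <target dev='vdc' bus='virtio'/>
</disk>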


