Performance Tuning on Virtual Machines

The qcow2 virtual disk format uses copy-on-write to delay allocation of storage until it is needed, reducing the amount of disk space used. It can also apply zlib compression to save further space. Last week I showed you an example of a virtual machine with a 16 GB virtual disk and 2.7 GB in use on its file system just after installation. Thanks to compression, the virtual disk image file occupied only 1.1 GB of actual storage.
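If you want to try this yourself, here is a minimal sketch using qemu-img; the file names are just placeholders:

    # Create a 16 GB qcow2 image; storage is allocated only as the guest writes
    qemu-img create -f qcow2 guest.qcow2 16G

    # Compress an existing image by converting it with the -c flag
    qemu-img convert -O qcow2 -c guest.qcow2 guest-compressed.qcow2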

Yes, in that case the qemu-kvm hypervisor must run all disk I/O through a compression module, and that inevitably has some impact on performance. But don’t assume the impact will be objectionable until you do some testing. For full details on testing methods and the concepts introduced here, check out Learning Tree’s Linux optimization and troubleshooting course.
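A quick first-order test is to time a large sequential write and read inside the guest, then repeat the test on a VM backed by a different storage format. This is only a sketch; for serious work use a purpose-built benchmark tool:

    # Inside the guest: write 1 GB, forcing it out to disk before dd exits
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 conv=fdatasync

    # As root, drop the page cache so the read test actually hits the disk
    echo 3 > /proc/sys/vm/drop_caches

    # Then read the file back
    dd if=/tmp/testfile of=/dev/null bs=1M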

Provisioning

The underlying infrastructure imposes limits on possible performance. Virtualization isn’t magic; you can’t endlessly pile more VMs onto a physical platform. They have to run somewhere.

Start with the physical platform. Most fundamentally, make sure that your CPU has hardware virtualization support (VT-x on Intel, AMD-V on AMD) and that your host Linux kernel has the KVM modules loaded.
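A couple of quick checks, run on the host:

    # Look for the hardware virtualization CPU flags (vmx = Intel, svm = AMD)
    grep -E 'vmx|svm' /proc/cpuinfo

    # Verify that the KVM modules are loaded
    lsmod | grep kvm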

Memory is crucial. Save at least 50% of your RAM for the host OS.

Provision each VM with enough RAM so that it can do its job without using a swap area. Make sure that the sum across all VMs doesn’t get over 50% of the physical RAM. If you run out, you’ve run out. Add more physical RAM or move VMs to another physical platform.
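To see where you stand, compare physical RAM to what your VMs are configured to use. Assuming you manage the VMs with libvirt, and with the domain name as a placeholder:

    # Physical RAM and current use on the host
    free -h

    # Memory allocated to a particular VM
    virsh dominfo myguest | grep -i memory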

Tuning the Host OS

Start your tuning with the host OS. Make sure that your file systems don’t get too full. There’s no single number to quote; it depends on your patterns of file system use. But once a file system passes some utilization threshold, fragmentation will lead to a noticeable slowdown.
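Checking is easy. A sketch, assuming ext4 and the common libvirt image directory (adjust the path for your layout):

    # How full are the file systems?
    df -h

    # Report (but don't fix) the fragmentation score, as root
    e4defrag -c /var/lib/libvirt/images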

If you use raw disk images, so they start at full size, and keep those images on a dedicated file system, fragmentation shouldn’t be an issue. But if you use qcow2 images that grow, and store them on a file system that also holds, say, growing log files rotated weekly, fragmentation will be a bigger problem.
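To create a raw image with its full size allocated up front, a sketch with a placeholder file name:

    # Allocate all 16 GB immediately rather than growing on demand
    qemu-img create -f raw -o preallocation=full guest.img 16G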

Select the appropriate disk queueing algorithm: deadline if your VMs are used interactively, noop if they’re for unattended computation.
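On older kernels these schedulers are selected per device through sysfs; on recent kernels with the multi-queue block layer, the equivalents are mq-deadline and none. For example, assuming the drive is sda:

    # See the available schedulers; the current one is in brackets
    cat /sys/block/sda/queue/scheduler

    # Switch to deadline, as root (not persistent across reboots)
    echo deadline > /sys/block/sda/queue/scheduler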

Once you select an algorithm, you may be able to do some useful tuning of its parameters. Again, see the Linux optimization and troubleshooting course, or the short overview here.
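As a taste of what’s there, the deadline scheduler exposes its tunables under iosched/; again assuming drive sda:

    # List the tunable parameters
    ls /sys/block/sda/queue/iosched/

    # For example, halve the read deadline from its 500 ms default
    # to favor interactive latency, as root
    echo 250 > /sys/block/sda/queue/iosched/read_expire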

Virtual memory use for storage is tuned under /proc/sys/vm/*; the dirty_ratio and vfs_cache_pressure settings are the most likely to be helpful. Again, your decision comes down to interactive use (tune for lower latency) versus unattended computation (tune for higher throughput).
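For example, to favor lower latency on an interactively used host:

    # Current values
    cat /proc/sys/vm/dirty_ratio /proc/sys/vm/vfs_cache_pressure

    # Lower the ceiling on dirty pages so write-backs come in smaller,
    # less disruptive bursts
    sysctl vm.dirty_ratio=10

    # Make a setting permanent by adding it to /etc/sysctl.conf or /etc/sysctl.d/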

Provisioning The Virtual Storage Devices

I can’t easily imagine a situation where qcow2 wouldn’t be fine for testing or building prototypes. But it will be the slowest of your options, especially when you enable compression.

A raw format disk image file will be faster. Some people warn that raw lacks the features of qcow2, but some of those aren’t really features after all. Encryption, for example: there are a number of problems with the way qcow2 does encryption; read the qemu-img manual page for details.
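If you started with qcow2 and want to move to raw, qemu-img will convert between the formats; the file names here are placeholders:

    # Convert a qcow2 image to raw
    qemu-img convert -O raw guest.qcow2 guest.img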

The best storage performance is achieved when you give each VM its own block device. This doesn’t have to be an entire physical device. Use Logical Volume Management (LVM) to create new block storage devices in /dev/mapper/*. With LVM you can resize the storage device later, as long as you are careful to shut the VM down while you do it.
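A minimal sketch, with the volume group and logical volume names as placeholders:

    # Create a 20 GB logical volume for a guest in volume group vg0
    lvcreate -L 20G -n guest1 vg0

    # The new device appears as /dev/vg0/guest1 and /dev/mapper/vg0-guest1

    # Later, with the VM shut down, grow it by another 10 GB
    lvextend -L +10G /dev/vg0/guest1

After growing the device you will still need to enlarge the partition table and file systems inside the guest.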

That’s all I can fit into a short blog post; performance tuning is a big area. Check out the Linux server administration course to get a handle on sysfs, the mechanism by which you test and adjust things like disk queueing algorithms, then the optimization and troubleshooting course to put that into practice.
