Author: 张华  Published: 2016-03-24
Copyright: this article may be reposted freely, but please include a hyperlink to the original source together with the author information and this copyright notice
( http://blog.csdn.net/quqi99 )
Theory
- vCPU topology: by default libvirt presents each vCPU to the guest as 1 socket with 1 core and no hyper-threads. Some operating system licenses limit the number of sockets, and in general two threads running on different cores perform better than two threads sharing the same core.
- NUMA topology: each NUMA node has its own RAM, bus, PCI devices and pCPU sockets; when assigning pCPUs and PCI devices to a VM, they should come from the same NUMA node whenever possible.
- Guest NUMA: what if a VM needs more vCPUs/RAM/PCIe than a single NUMA node physically provides?
- The guest itself can be split into multiple guest NUMA nodes (hw:numa_nodes=N).
- Large pages: the CPU supports 4K, 2M/4M and 1G page sizes (page_sizes=(any|small|large)). Huge pages reduce the number of page-table entries and improve the TLB hit rate, but the OS defaults to 4K pages, and once the system has been running for a while it is hard to find contiguous memory to allocate huge pages. The kernel currently offers no way to reserve huge pages for a specific NUMA node at boot; since each NUMA node has its own RAM, it also has its own pool of huge pages, so one node may still have enough free huge pages while another has just run out (see the host-side command sketch after this list).
- Dedicated resources: no overcommitting (overcommit_ram=0, overcommit_vcpus=0).
- KSM (Kernel Samepage Merging): merges identical memory pages; when a merged page is later written to, copy-on-write splits it back into a private copy. For dedicated guests the host must be prevented from merging their memory pages (see the host-side command sketch below).
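To make the huge-page and KSM points above concrete, here is a minimal host-side sketch; the sizes and node numbers are only examples, and the sysfs paths assume a reasonably recent Linux kernel:
# huge page pool, globally and per NUMA node
grep -i huge /proc/meminfo
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# try to reserve 1024 x 2M huge pages at runtime (succeeds only if contiguous memory is still available)
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
# KSM on the host: 1 = page merging enabled, 0 = disabled
cat /sys/kernel/mm/ksm/run
echo 0 | sudo tee /sys/kernel/mm/ksm/run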
Practice
- Image property approach: glance image-update --property hw_numa_nodes=2 --property hw_numa_cpus.0=0 --property hw_numa_mem.0=512 --property hw_numa_cpus.1=1 --property hw_numa_mem.1=512 image_name
- Flavor extra-spec approach: nova flavor-key flv_name set hw:numa_nodes=2 hw:numa_cpus.0=0 hw:numa_mem.0=512 hw:numa_cpus.1=1 hw:numa_mem.1=512
- hw:numa_nodes=NN                              # number of guest NUMA nodes
- hw:numa_mempolicy=preferred|strict            # how strictly guest RAM must come from the chosen host NUMA nodes
- hw:numa_cpus.N=<cpu-list>                     # vCPUs placed in guest NUMA node N
- hw:numa_mem.N=<ram-size>                      # RAM (MB) of guest NUMA node N
- hw:cpu_thread_policy=prefer|isolate|require   # how pinned vCPUs are placed relative to hyper-thread siblings
- hw:cpu_policy=shared|dedicated                # dedicated pins each vCPU to its own pCPU (a worked example follows this list)
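A minimal end-to-end sketch of the flavor approach; the flavor name, image name and network UUID are made up, and it assumes a 4-vCPU/4G guest plus the NUMATopologyFilter enabled as shown in the nova-scheduler section below:
nova flavor-create numa.pinned auto 4096 20 4
nova flavor-key numa.pinned set hw:cpu_policy=dedicated hw:cpu_thread_policy=isolate
nova flavor-key numa.pinned set hw:numa_nodes=2 hw:numa_cpus.0=0,1 hw:numa_mem.0=2048 hw:numa_cpus.1=2,3 hw:numa_mem.1=2048
nova boot --flavor numa.pinned --image trusty --nic net-id=<net-uuid> numa-vm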
libvirt.xml
<cputune>
  <!-- pin vCPU 0 to the pCPU set given in cpuset -->
  <vcpupin vcpu="0" cpuset="4-7,12-15"/>
</cputune>
<!-- export the guest NUMA topology under cpu/numa -->
<cpu>
  <topology sockets="2" cores="2" threads="1"/>
  <numa>
    <cell id="0" cpus="0-1" memory="1048576"/>
  </numa>
</cpu>
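The memory side of the placement lands in a separate <numatune> element; a hand-written sketch (the host node numbers are examples, roughly what hw:numa_mempolicy=strict would ask for):
<numatune>
  <!-- strict: allocate guest RAM only from the listed host nodes -->
  <memory mode="strict" nodeset="0-1"/>
  <!-- bind guest NUMA cell 0 to host node 0 -->
  <memnode cellid="0" mode="strict" nodeset="0"/>
</numatune>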
nova-scheduler
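For the extra specs above to take effect, the scheduler needs the NUMATopologyFilter, and compute nodes usually restrict guests to a subset of pCPUs with vcpu_pin_set; a minimal nova.conf sketch for the Liberty/Mitaka time frame (check the option names against your release):
# controller node (nova-scheduler)
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter
# compute node (nova-compute): only these pCPUs may be given to guests
[DEFAULT]
vcpu_pin_set = 4-7,12-15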
Some commands
hua@node1:~$ sudo virsh nodeinfo
CPU model: x86_64
CPU(s): 4
CPU frequency: 3348 MHz
CPU socket(s): 1
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 32753068 KiB
hua@node1:~$ sudo virsh freecell 0
0: 9974312 KiB
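A few more commands that are handy when checking NUMA layout and pinning on the host; the instance name instance-00000001 is only an example:
sudo virsh freecell --all                    # free memory in every NUMA cell
numactl --hardware                           # host NUMA layout as seen by the OS
sudo virsh vcpupin instance-00000001         # current vCPU -> pCPU pinning of a running guest
sudo virsh dumpxml instance-00000001 | grep -A8 cputune   # the cputune section rendered by nova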
References
https://wiki.openstack.org/wiki/VirtDriverGuestCPUMemoryPlacement
https://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/virt-driver-cpu-pinning.html
http://blog.csdn.net/canxinghen/article/details/41810241
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/virt-driver-cpu-thread-pinning.html
http://docs.openstack.org/developer/nova/testing/libvirt-numa.html
https://www.berrange.com/posts/2010/02/12/controlling-guest-cpu-numa-affinity-in-libvirt-with-qemu-kvm-xen/
https://review.openstack.org/#/c/140290/
http://slideplayer.com/slide/4868412/