**Author: Zhang Hua  Published: 2016-12-06
Copyright: This article may be reproduced freely, but please mark the original source and author with a hyperlink and keep this copyright notice
( http://blog.csdn.net/quqi99 )**
$ lspci -nn |grep 82576
06:00.0 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
06:00.1 Ethernet controller [0200]: Intel Corporation 82576 Gigabit Network Connection [8086:10c9] (rev 01)
07:10.0 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)
07:10.1 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)
07:10.2 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)
07:10.3 Ethernet controller [0200]: Intel Corporation 82576 Virtual Function [8086:10ca] (rev 01)
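For reference, one way to create the VFs shown above is the standard sriov_numvfs sysfs interface (a sketch, assuming the igb driver and the PF names enp6s0f0/enp6s0f1 used throughout this post; the older igb max_vfs module parameter also works):
# Create two VFs per PF; the value must not exceed sriov_totalvfs
$ echo 2 | sudo tee /sys/class/net/enp6s0f0/device/sriov_numvfs
$ echo 2 | sudo tee /sys/class/net/enp6s0f1/device/sriov_numvfs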
$ sudo virsh nodedev-list --tree
...
+- pci_0000_00_1c_4
| |
| +- pci_0000_06_00_0
| | |
| | +- net_enp6s0f0_2c_53_4a_02_20_3c
| +- pci_0000_06_00_1
| | |
| | +- net_enp6s0f1_2c_53_4a_02_20_3d
| +- pci_0000_07_10_0
| +- pci_0000_07_10_1
| +- pci_0000_07_10_2
| +- pci_0000_07_10_3
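An individual VF can also be inspected with virsh (for reference):
# Dump the VF's capabilities, including which PF it belongs to and its IOMMU group
$ sudo virsh nodedev-dumpxml pci_0000_07_10_0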
$ ip link show enp6s0f0
3: enp6s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 2c:53:4a:02:20:3c brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
$ ip link show enp6s0f1
4: enp6s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 2c:53:4a:02:20:3d brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
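For reference, the per-VF attributes shown above (MAC, spoof checking, link state) can be set manually from the PF with ip link, although in this setup Nova/Neutron programs the VF MAC when binding the port; the values below are only examples:
# Pin VF 0's MAC and disable spoof checking (example values)
$ sudo ip link set enp6s0f0 vf 0 mac fa:16:3e:11:22:33
$ sudo ip link set enp6s0f0 vf 0 spoofchk off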
#/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt pci=assign-busses"
$ sudo update-grub
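After rebooting, a quick sanity check that the IOMMU actually came up:
# DMAR messages confirm the Intel IOMMU initialized with the new kernel options
$ dmesg | grep -i -e DMAR -e IOMMU
# Devices available for passthrough are grouped under iommu_groups
$ ls /sys/kernel/iommu_groups/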
# Network info
neutron net-create private --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create --allocation-pool start=10.0.1.22,end=10.0.1.122 --gateway 10.0.1.1 private 10.0.1.0/24 --enable_dhcp=True --name private
neutron net-create public -- --router:external=True --provider:network_type flat --provider:physical_network public
neutron subnet-create --allocation-pool start=10.230.56.100,end=10.230.56.104 --gateway 10.230.56.1 public 10.230.56.0/21 --enable_dhcp=False --name public-subnet
neutron router-create router1
EXT_NET_ID=$(neutron net-list |grep ' public ' |awk '{print $2}')
ROUTER_ID=$(neutron router-list |grep ' router1 ' |awk '{print $2}')
SUBNET_ID=$(neutron subnet-list |grep '10.0.1.0/24' |awk '{print $2}')
neutron router-interface-add $ROUTER_ID $SUBNET_ID
neutron router-gateway-set $ROUTER_ID $EXT_NET_ID
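Optionally verify the resulting topology (a quick check):
# Confirm the networks, subnets and the router's interfaces/gateway
neutron net-list
neutron subnet-list
neutron router-port-list router1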
#/etc/neutron/plugins/ml2/ml2_conf.ini (also read by the SR-IOV agent below)
[ml2_type_flat]
flat_networks = physnet1,physnet2,sriov1,sriov2
[ml2_sriov]
supported_pci_vendor_devs = 8086:10ca
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = sriov1:enp6s0f0,sriov2:enp6s0f1
exclude_devices =
neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
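If the agent starts cleanly it should register itself with the Neutron server (it is reported as a 'NIC Switch agent'); a quick check:
# The SR-IOV agent should show an alive (:-)) status
neutron agent-list | grep -i sriov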
#/etc/nova/nova.conf
[pci]
passthrough_whitelist = {"devname": "enp6s0f0", "physical_network": "sriov1"}
passthrough_whitelist = {"devname": "enp6s0f1", "physical_network": "sriov2"}
[DEFAULT]
pci_alias={"vendor_id":"8086", "product_id":"10ca", "name":"a1"}
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, PciPassthroughFilter
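Then restart the Nova services so the whitelist, alias and scheduler filters take effect (a sketch, assuming Ubuntu's packaged service names; on devstack restart the corresponding screen sessions instead):
sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-compute restart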
neutron net-create sriov_net --provider:network_type flat --provider:physical_network sriov1
neutron subnet-create --allocation-pool start=192.168.9.122,end=192.168.9.129 sriov_net 192.168.9.0/24 --enable_dhcp=False --name=sriov_subnet
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova flavor-key m1.medium set "pci_passthrough:alias"="a1:1"
neutron port-create sriov_net --name sriov_port --binding:vnic_type direct
nova boot --key-name mykey --image trusty-server-cloudimg-amd64-disk1 --flavor m1.medium --nic port-id=$(neutron port-list |grep ' sriov_port ' |awk '{print $2}') i1
FLOATING_IP=$(nova floating-ip-create |grep 'public' |awk '{print $4}')
nova add-floating-ip i1 $FLOATING_IP
sudo ip addr add 192.168.101.1/24 dev br-ex
ssh ubuntu@192.168.101.101 -v
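Once logged in, the VF should be visible inside the guest as an ordinary PCI NIC (a quick check, run inside the instance):
# The 82576 VF shows up as a PCI Ethernet device in the guest
lspci -nn | grep -i ethernet
ip link show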
The sriov_port creation above is equivalent to creating the port with the binding details filled in manually and then attaching it:
neutron port-create sriov_net --name sriov_port --binding:host_id desktop --binding:vnic_type direct --binding:profile type=dict pci_vendor_info=8086:10ca,pci_slot=0000:07:10.0,physical_network=sriov1
nova interface-attach --port-id $(neutron port-show sriov_port |grep ' id ' |awk -F '|' '{print $3}') i1
1, VFIO_MAP_DMA cannot allocate memory. As a workaround, disable the libvirt AppArmor profiles:
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
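After removing the profiles, restart libvirt and confirm they are no longer loaded (a sketch, assuming Ubuntu's libvirt-bin service name):
sudo service libvirt-bin restart
sudo aa-status | grep -i libvirt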
1, 'nova boot' for a VM invokes _validate_and_build_base_options.
pci_request.get_pci_requests_from_flavor creates InstancePCIRequests(instance_uuid, InstancePCIRequest(count, spec, alias_name, is_new, request_id)) from the flavor's pci_alias extra spec.
self.network_api.create_pci_requests_for_sriov_ports then fills in request_net.pci_request_id for each requested SR-IOV port.
def _validate_and_build_base_options(...):
    ...
    # PCI requests come from two sources: instance flavor and
    # requested_networks. The first call in below returns an
    # InstancePCIRequests object which is a list of InstancePCIRequest
    # objects. The second call in below creates an InstancePCIRequest
    # object for each SR-IOV port, and append it to the list in the
    # InstancePCIRequests object
    pci_request_info = pci_request.get_pci_requests_from_flavor(
        instance_type)
    self.network_api.create_pci_requests_for_sriov_ports(context,
        pci_request_info, requested_networks)
def get_pci_requests_from_flavor(flavor):
    pci_requests = []
    if ('extra_specs' in flavor and
            'pci_passthrough:alias' in flavor['extra_specs']):
        pci_requests = _translate_alias_to_requests(
            flavor['extra_specs']['pci_passthrough:alias'])
    return objects.InstancePCIRequests(requests=pci_requests)
# nova/objects/instance_pci_requests.py - fields of InstancePCIRequests
fields = {
    'instance_uuid': fields.UUIDField(),
    'requests': fields.ListOfObjectsField('InstancePCIRequest'),
}
2, build_and_run_instance invokes instance_claim, which in turn invokes _test_pci to claim PCI devices for the instance by retrieving its InstancePCIRequests via instance_uuid.
def _test_pci(self):
    pci_requests = objects.InstancePCIRequests.get_by_instance_uuid(
        self.context, self.instance.uuid)
    if pci_requests.requests:
        devs = self.tracker.pci_tracker.claim_instance(self.context,
                                                       self.instance)
        if not devs:
            return _('Claim pci failed.')
def _claim_instance(self, context, instance, prefix=''):
    pci_requests = objects.InstancePCIRequests.get_by_instance(
        context, instance)
    ...
    devs = self.stats.consume_requests(pci_requests.requests,
                                       instance_cells)
    ...
    return devs
3, Finally, pci_request_id drives _populate_neutron_binding_profile, which uses pci_manager.get_instance_pci_devs to find the PCI device claimed for the instance, then sets binding:profile on the Neutron port.
def _populate_neutron_binding_profile(instance, pci_request_id,
                                      port_req_body):
    if pci_request_id:
        pci_dev = pci_manager.get_instance_pci_devs(
            instance, pci_request_id).pop()
        devspec = pci_whitelist.get_pci_device_devspec(pci_dev)
        profile = {'pci_vendor_info': "%s:%s" % (pci_dev.vendor_id,
                                                 pci_dev.product_id),
                   'pci_slot': pci_dev.address,
                   'physical_network':
                       devspec.get_tags().get('physical_network')
                   }
        port_req_body['port']['binding:profile'] = profile
NOTE: for the os-attach-interfaces extension API, the attach_interface function invokes allocate_port_for_instance, which simply passes pci_request_id=None down to self.allocate_for_instance/_populate_neutron_binding_profile. So it seems 'nova interface-attach' cannot hot-add an SR-IOV port; after defining pci_alias for the flavor, the existing VM has to be re-deployed to get its SR-IOV ports.