Data Center Bridging is a collection of standards-based extensions to classical Ethernet. It provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified fabric. In addition to supporting Fibre Channel over Ethernet (FCoE) and iSCSI over DCB, it enhances the operation of other business-critical traffic.
Data Center Bridging is a flexible framework that defines the capabilities required for switches and endpoints to be part of a data center fabric. It includes the following capabilities:
There are two supported versions of DCBX.
CEE Version: The specification can be found as a link within the following document: http://www.ieee802.org/1/files/public/docs2008/dcb-baseline-contributions-1108-v1.01.pdf
IEEE Version: The specification can be found as a link within the following document: https://standards.ieee.org/findstds/standard/802.1Qaz-2011.html
Note: The OS DCBX stack defaults to the CEE version of DCBX. If a peer transmits IEEE TLVs, the stack will automatically transition to the IEEE version.
For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to http://www.ieee802.org/1/pages/dcbridges.html
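On Linux, the configured and operational DCBX versions can be checked with the dcbtool utility described later in this document (a minimal sketch, assuming lldpad is already running):
dcbtool gc dcbx
dcbtool go dcbx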
Many DCB functions can be configured or revised using Intel® PROSet for Windows Device Manager, from the Data Center tab.
You can use Intel® PROSet to perform the following tasks:
Non-operational status: If the Status indicator shows that DCB is non-operational, there may be a number of possible reasons:
A non-operational status is most likely to occur when Use Switch Settings is selected or Using Advanced Settings is active. This is generally the result of one or more of the DCB features not being successfully exchanged with the switch. Possible problems include:
Note: Configuring a device in the VMQ + DCB mode reduces the number of VMQs available for guest OSes.
In the 2.4.x kernel, qdiscs were introduced. The rationale behind this effort was to provide QoS in software, as hardware did not provide the necessary interfaces to support it. In 2.6.23, Intel pushed the notion of multiqueue support into the qdisc layer. This provides a mechanism to map the software queues in the qdisc structure into multiple hardware queues in underlying devices. In the case of Intel adapters, this mechanism is leveraged to map qdisc queues onto the queues within our hardware controllers.
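As a concrete illustration of this mapping (ethX is a placeholder, as in the examples later in this section), attach the multiqueue qdisc as the root qdisc and then inspect the resulting bands:
# tc qdisc add dev ethX root handle 1: multiq
# tc -s qdisc show dev ethX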
Within the Data Center, the perception is that traditional Ethernet:
To address these concerns, Intel and a host of industry leaders have been working within the IEEE 802.1 standards body, where a number of task forces are developing enhancements to classical Ethernet. Listed below are the applicable standards efforts:
Enhanced Transmission Selection: IEEE 802.1Qaz
Lossless Traffic Class:
  Priority Flow Control: IEEE 802.1Qbb
  Congestion Notification: IEEE 802.1Qau
DCB Capability Exchange Protocol (DCBX): IEEE 802.1Qaz
The software solution being released represents Intel's implementation of these efforts. It is worth noting that many of these standards have not been ratified; this is a pre-standards release, so users are advised to check open-fcoe.org and open-lldp.org often. While we have worked with some of the major ecosystem vendors in validating this release, many vendors still have solutions in development. As these solutions become available and the standards are ratified, we will work with ecosystem partners and the standards bodies to ensure that the Intel solution works as expected.
-h | show usage information |
-f configfile | use the specified file as the config file instead of the default file (/etc/sysconfig/lldpad/lldpad.conf) |
-d | run lldpad as a daemon |
-v | show lldpad version |
-k | terminate the currently running lldpad |
-s | remove lldpad state records |
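For example, to start lldpad as a daemon and later terminate it, using only the options listed above:
lldpad -d
lldpad -k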
lldpad and dcbtool can be used to configure a DCB-capable driver, such as the
ixgbe driver, which supports the rtnetlink DCB interface. Once the DCB features
are configured, the next step is to classify traffic to be identified with an
802.1p priority and the associated DCB features. This can be done by using the 'tc'
command to set up the qdisc and filters that cause network traffic to be
transmitted on different queues.
The skbedit action mechanism can be used in a tc filter to classify traffic
patterns to a specific queue_mapping value from 0-7. The ixgbe driver will place
traffic with a given queue_mapping value onto the corresponding hardware queue
and tag the outgoing frames with the corresponding 802.1p priority value.
Set up the multi-queue qdisc for the selected interface:
# tc qdisc add dev ethX root handle 1: multiq
Setting the queue_mapping in a TC filter allows the ixgbe driver to classify
a packet into a queue. Here are some examples of how to filter traffic into
various queues using u32 matches on the IP destination port:
# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 80 \
0xffff action skbedit queue_mapping 0
# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 53 \
0xffff action skbedit queue_mapping 1
# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 5001 \
0xffff action skbedit queue_mapping 2
# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 20 \
0xffff action skbedit queue_mapping 7
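To confirm the filters were installed as expected (a quick check, not part of the original steps above):
# tc filter show dev ethX parent 1: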
Here is an example that sets up a filter based on EtherType. In this example the
EtherType is 0x8906, the FCoE EtherType (35078 in decimal, as used in the cmp match below).
# tc filter add dev ethX protocol 802_3 parent 1: handle 0xfc0e basic match \
'cmp(u16 at 12 layer 1 mask 0xffff eq 35078)' action skbedit queue_mapping 3
To test in a back-to-back setup, use the following tc commands to set up the qdisc and filters for TCP ports 5000 through 5007. Then use a tool, such as iperf, to generate UDP or TCP traffic on ports 5000-5007.
Statistics for each queue of the ixgbe driver can be checked using the ethtool utility: ethtool -S ethX
# tc qdisc add dev ethX root handle 1: multiq
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5000 0xffff action skbedit queue_mapping 0
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5000 0xffff action skbedit queue_mapping 0
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5001 0xffff action skbedit queue_mapping 1
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5001 0xffff action skbedit queue_mapping 1
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5002 0xffff action skbedit queue_mapping 2
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5002 0xffff action skbedit queue_mapping 2
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5003 0xffff action skbedit queue_mapping 3
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5003 0xffff action skbedit queue_mapping 3
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5004 0xffff action skbedit queue_mapping 4
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5004 0xffff action skbedit queue_mapping 4
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5005 0xffff action skbedit queue_mapping 5
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5005 0xffff action skbedit queue_mapping 5
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5006 0xffff action skbedit queue_mapping 6
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5006 0xffff action skbedit queue_mapping 6
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5007 0xffff action skbedit queue_mapping 7
# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5007 0xffff action skbedit queue_mapping 7
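As a sketch of the traffic-generation step (the address 192.168.1.10 and port 5000 are placeholders; any of the ports above may be used), run iperf as a server on one system and as a client on the other:
# iperf -s -p 5000
# iperf -c 192.168.1.10 -p 5000
The per-queue counters reported by ethtool -S ethX should then show activity on queue 0.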
dcbtool is used to query and set the DCB settings of a DCB-capable Ethernet interface. It connects to the client interface of lldpad to perform these operations. dcbtool will operate in interactive mode if it is executed without a command. In interactive mode, dcbtool also functions as an event listener and will print out events received from lldpad as they arrive.
dcbtool -h
dcbtool -v
dcbtool [-rR]
dcbtool [-rR] [command] [command arguments]
-h shows the dcbtool usage message
-v shows dcbtool version information
-r displays the raw lldpad client interface messages as well as the readable output
-R displays only the raw lldpad client interface messages
help | shows the dcbtool usage message |
ping | test command. The lldpad daemon responds with "PONG" if the client interface is operational. |
license | displays dcbtool license information |
quit | exit from interactive mode |
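For example, invoking dcbtool with no command enters interactive mode, where the ping command verifies that the lldpad client interface is operational (the prompt shown here is illustrative):
# dcbtool
dcbtool> ping
PONG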
The following commands interact with the lldpad daemon to manage the daemon and DCB features on DCB-capable interfaces.

lldpad general configuration commands:
<gc|go> dcbx | gets the configured or operational version of the DCB capabilities exchange protocol. If different, the configured version will take effect (and become the operational version) after lldpad is restarted. |
sc dcbx v:[1|2] | sets the version of the DCB capabilities exchange protocol which will be used the next time lldpad is started. Information about version 1 can be found at http://download.intel.com/technology/eedc/dcb_cep_spec.pdf and information about version 2 at http://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discovery-protocol-1108-v1.01.pdf |
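For example, to configure lldpad to use version 2 of the protocol the next time it is started (per the syntax above; lldpad must be restarted for the change to take effect):
dcbtool sc dcbx v:2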
DCB per-interface commands:
gc <ifname> <feature> | gets configuration of feature on interface ifname. |
go <ifname> <feature> | gets operational status of feature on interface ifname. |
gp <ifname> <feature> | gets peer configuration of feature on interface ifname. |
sc <ifname> <feature> <args> | sets the configuration of feature on interface ifname. |

feature may be one of the following:
dcb | DCB state of the port |
pg | priority groups |
pfc | priority flow control |
app:<subtype> | application specific data |
ll:<subtype> | logical link status |
where subtype can be 0|fcoe for Fibre Channel over Ethernet (FCoE)
<args> may include:
e:<0|1> | controls feature enable |
a:<0|1> | controls whether the feature is advertised via DCBX to the peer |
w:<0|1> | controls whether the feature is willing to change its operational configuration based on what is received from the peer |
[feature-specific args] | arguments specific to a DCB feature |
Feature-specific arguments for dcb:
on/off | enable or disable DCB for the interface. The go and gp commands are not needed for the dcb feature, and the enable, advertise, and willing parameters are not required. |
Feature-specific arguments for pg:
pgid:xxxxxxxx | Priority Group ID for the 8 priorities. From left to right (priorities 0-7), x is the corresponding Priority Group ID value, which can be 0-7 for Priority Groups with bandwidth allocations or f (Priority Group ID 15) for the unrestricted Priority Group. |
pgpct:x,x,x,x,x,x,x,x | Priority Group percentage of link bandwidth. From left to right (Priority Groups 0-7), x is the percentage of link bandwidth allocated to the corresponding Priority Group. The total bandwidth must equal 100%. |
uppct:x,x,x,x,x,x,x,x | Priority percentage of Priority Group bandwidth. From left to right (priorities 0-7), x is the percentage of Priority Group bandwidth allocated to the corresponding priority. The sum of percentages for priorities which belong to the same Priority Group must total 100% (except for Priority Group 15). |
strict:xxxxxxxx | Strict priority setting. From left to right (priorities 0-7), x is 0 or 1. 1 indicates that the priority may utilize all of the bandwidth allocated to its Priority Group. |
up2tc:xxxxxxxx | Priority to traffic class mapping. From left to right (priorities 0-7), x is the traffic class (0-7) to which the priority is mapped. |
Feature-specific arguments for pfc:
pfcup:xxxxxxxx | Enable/disable priority flow control. From left to right (priorities 0-7), x is 0 or 1. 1 indicates that the corresponding priority is configured to transmit priority pause. |
Feature-specific arguments for app:<subtype>:
appcfg:xx | xx is a hexadecimal value representing an 8-bit bitmap where bits set to 1 indicate the priority which frames for the application specified by subtype should use. The lowest order bit maps to priority 0. |
Feature-specific arguments for ll:<subtype>:
status:[0|1] | for testing purposes, the logical link status may be set to 0 or 1. This setting is not persisted in the configuration file. |
Enable DCB on interface eth2
dcbtool sc eth2 dcb on
Assign priorities 0-3 to Priority Group 0, priorities 4-6 to Priority Group 1
and priority 7 to the unrestricted priority. Also, allocate 25% of link
bandwidth to Priority Group 0 and 75% to group 1.
dcbtool sc eth2 pg pgid:0000111f pgpct:25,75,0,0,0,0,0,0
Enable transmit of Priority Flow Control for priority 3 and assign FCoE to
priority 3.
dcbtool sc eth2 pfc pfcup:00010000
dcbtool sc eth2 app:0 appcfg:08
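To verify the resulting operational and peer state, the go and gp commands described above can be used:
dcbtool go eth2 pfc
dcbtool gp eth2 pg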
How did Intel verify its DCB solution?
Answer - The Intel solution is continually evolving as the relevant standards
become solidified and more vendors introduce DCB-capable systems. That said, we
initially used test automation to verify the DCB state machine. As the state
machine became more robust and we had DCB-capable hardware, we began to test
back to back with our adapters. Finally, we introduced DCB-capable switches in
our test bed.
Prior to kernel 2.6.26, TSO (TCP segmentation offload) will be disabled when the driver is put into DCB mode.
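The offload state can be confirmed with ethtool (a quick check; the exact output label varies with ethtool version):
# ethtool -k ethX | grep -i segmentation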
A TX unit hang may be observed when link strict priority is set and a large amount of traffic is transmitted on the link strict priority.
lldpad and dcbtool - DCB daemon and command line utility for DCB configuration
Copyright(c) 2007-2011 Intel Corporation.
Portions of lldpad and dcbtool (basically program framework) are based on:
hostapd-0.5.7
Copyright (c) 2004-2007, Jouni Malinen <j@w1.fi>
This program is free software; you can redistribute it and/or modify it under
the terms and conditions of the GNU General Public License, version 2, as
published by the Free Software Foundation.
This program is distributed in the hope it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.
You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
The full GNU General Public License is included in this distribution in the file
called "COPYING".
Contact Information:
open-lldp Mailing List lldp-devel@open-lldp.org