IBM Netfinity Imbedded 10/100 Ethernet Adapter and Adapter 2
Device Driver Installation README File

This README file contains the latest information about installing NetWare
and UNIX ethernet device drivers for IBM Netfinity servers whose imbedded
ethernet controllers are compatible with the IBM Netfinity 10/100 Ethernet
Adapter.

CONTENTS
________

1.0  Known Problems
2.0  Change History
3.0  Installation and Configuration
     3.1  NetWare
          3.1.1  Installation using Novell Utilities INSTALL or NWCONFIG
          3.1.2  Instructions for manually loading drivers
          3.1.3  Command Line Keywords and Parameters
          3.1.4  Problem Solving
          3.1.5  NetWare Teaming and VLAN
                 3.1.5.1  VLAN Memory Considerations
                 3.1.5.2  Teaming and VLAN Problem Solving
          3.1.6  DMI-SNMP
                 3.1.6.1  Installation Instructions
                 3.1.6.2  Attributes Support
     3.2  SCO
          3.2.1  Open Server Installation Procedures
          3.2.2  UnixWare 7 Installation Procedures
     3.3  Advanced Features
          3.3.1  Adapter Fault Tolerance (AFT)
          3.3.2  Adaptive Load Balancing (ALB)
          3.3.3  Fast EtherChannel* (FEC)
          3.3.4  Virtual LAN (VLAN)
          3.3.5  Wake on LAN (WOL)
                 3.3.5.1  Troubleshooting Wake on LAN
          3.3.6  Boot Agent
                 3.3.6.1  Parameters
                 3.3.6.2  Troubleshooting Boot Agent
          3.3.7  Priority Packet
                 3.3.7.1  IEEE 802.1p tagging
                 3.3.7.2  Priority Queuing
4.0  Web Sites and Support Phone Number
5.0  Trademarks and Notices
6.0  Disclaimer


1.0 Known Problems
____________________

o None


2.0 Change History
_____________________

Changes made in this diskette, version 3.7.2c:
   - none (first release)


3.0 Installation and Configuration
____________________________________

3.1 NetWare
-------------

Location of driver:  \NWSERVER\IBMFE.LAN   (AHSM 3.31 driver)
                               IBMFE.LDI
                               CIBMFE.LAN  (CHSM 1.11 driver)
                               CIBMFE.LDI

There are two drivers provided for the 10/100 ethernet controller: the ODI
Assembly (AHSM) Specification v3.31 driver (IBMFE.LAN) and the ODI C (CHSM)
Specification v1.11 driver (CIBMFE.LAN).  Both are located in the \NWSERVER
directory on the diskette.  Only CIBMFE.LAN supports IBM's Advanced Network
Services.

NOTE: NetWare 5 ships with drivers for the 10/100 EtherJet PCI Adapters
that give base functionality (i.e., connection).  To use the Advanced
Network Services (such as Teaming and VLAN), use the drivers from this
diskette.

The required minimum versions of the three support NLMs are included on
this diskette for your convenience.

   For NW3.1x                      For NW4.1x, 4.2, 5.x
   ----------------------------    --------------------------
   NBI31X.NLM   v2.25 10/20/98     NBI.NLM      v2.25 9/17/98
   MSM31X.NLM   v3.95 5/12/99      MSM.NLM      v3.95 5/12/99
   ETHERTSM.NLM v3.8  3/1/99       ETHERTSM.NLM v3.8  3/1/99

NOTE: NBI.NLM (or NBI31X.NLM) must be loaded before MSM and ETHERTSM.
NetWare 3.12 needs MSM31X.NLM renamed to MSM.NLM.

IBM recommends you use the latest Novell Service Packs.  The latest server
NLMs and support files for NetWare can be found on Novell's automated
services under the heading of "Minimum Patch List".

3.1.1 Installation using Novell Utilities INSTALL or NWCONFIG
--------------------------------------------------------------

1) Load Novell's Install.nlm (3.x & 4.x) or nwconfig.nlm (5.x).

2) From the Installation Options screen, choose "Driver options" and press
   Enter.

3) Choose "Configure network drivers" and press Enter.  If any drivers are
   already loaded, a list of them appears.

4) Choose "Select an additional driver" and press Enter.  A list of drivers
   appears, but you will be installing an Unlisted Driver.

5) Insert the disk and press the Insert key.  If the path is not
   a:\nwserver, press F3 and type in the full path.  Press Enter.
6) CIBMFE.LAN and IBMFE.LAN will be displayed.  Choose one and press Enter.
   Select Yes to load the driver.  It is up to you whether you wish to save
   the old driver or not.

7) Next, the install utility will give you a choice of Modifying or Saving
   parameters.  If you choose modify, you may move around with the arrow
   keys.  You should specify slot and usually speed and duplex before
   arrowing down to save.  To avoid parameters (not recommended) you may
   press the Escape key multiple times and back out of the install screens.
   For more information on parameters, see section 3.1.3, Command Line
   Keywords and Parameters.

8) The utility will load the driver and assign network numbers for IPX with
   all four frame types.  You may override these numbers by entering your
   own number at each prompt.  (If you wish to limit the number of frame
   types loaded, edit autoexec.ncf.)

9) When returned to the Drivers screen, press the Esc key 3 times to exit
   the utility and return to the prompt.

3.1.2 Instructions for manually loading drivers
-----------------------------------------------

1) Copy the 10/100 Ethernet server driver (\NWSERVER\CIBMFE.LAN) and any
   updated NLMs to the NetWare server's hard drive.

   If you can't log in to the server (before starting the server), copy the
   IBMFE.LAN driver from the \NWSERVER directory on the diskette to the
   root directory of the server's hard drive.

   If you can log in to the server, copy the driver to the SYSTEM
   subdirectory.  If you do this, you won't need to specify a path on the
   load line.  If you copy it to another directory, make sure the LOAD
   statement includes the correct path.

2) Start the server.  At the server console, load NBI, MSM and ETHERTSM
   (for NetWare 3.12 load NBI31X, MSM31X and ETHERTSM) in this order.
   Next, load and bind the server driver.

3) Add the load and bind statements you need to the server's AUTOEXEC.NCF
   file so the 10/100 EtherJet PCI Adapter driver loads automatically each
   time the server starts.

   SAMPLE LOAD COMMANDS (server):
   -------------------------------
   LOAD IBMFE SLOT=n SPEED=100 FORCEDUPLEX=2 FRAME=ETHERNET_802.2
   BIND IPX TO IBMFE NET=yy

**IMPORTANT** When using command line options with the .LAN driver for
NWSERVER, make sure the equal sign is followed by a value.  Otherwise,
unpredictable results may occur.

3.1.3 Command Line Keywords and Parameters
------------------------------------------

Used with the LOAD command (in net.cfg or startnet.bat).

SPEED:       Syntax: SPEED=n  (where n = 10 or 100)
-----        Default: 10; the adapter automatically senses speed.
             If unable to autosense (including no network cable),
             default=10.

             NOTE: Match the speed/duplex of your switch (if set).  If you
             don't have an auto-negotiating switch and are forcing the
             duplex mode, you must specify the speed.

             NOTE: You must set the SPEED parameter to either 10 or 100 if
             you're setting the FORCEDUPLEX parameter to either half or
             full.

FORCEDUPLEX: Syntax: FORCEDUPLEX=n
-----------  Where n = 0  auto-negotiate (10/100 Ethernet TX adapter only)
                       1  half duplex
                       2  full duplex
             Default: auto-negotiate

             Auto-negotiate: The adapter negotiates with the switch whether
             to use full or half duplex.  If unsuccessful, the adapter
             defaults to half duplex.  You must have an auto-negotiating
             switch to get full duplex support using auto-negotiation.

             Full duplex: The adapter sends and receives packets at the
             same time.  This improves the performance of your adapter.

             Half duplex: The adapter communicates in one direction at a
             time.  It either sends or receives.
             Note: If you use the FORCEDUPLEX command, you must also set
             the SPEED parameter to either 10 or 100 (see SPEED above).

SLOT:        Syntax: SLOT=n  (where n = 1, 2, 3, 4, ...)
----         (required only when multiple adapters are installed)

             For PCI adapters, SLOT is derived from the bus number and
             device location as defined by the PCI specification and NBI.
             One way to determine the slot number is to load the driver
             from the command line.  You'll be prompted with valid device
             number(s) for the adapter(s).  Select one of them.

FRAME:       Syntax: FRAME=n
-----        where n = Ethernet_802.2
                       Ethernet_802.3
                       Ethernet_II
                       Ethernet_SNAP
             Default: Ethernet_802.2

             Configures the adapter to process the valid NetWare Ethernet
             frame types.

TXTHRESHOLD: Syntax: TXTHRESHOLD=n  (n = number of 8-byte units)
------------ Default: dynamically set

             Represents the threshold for transmits from the extended SRAM
             FIFO (output buffer).  If n=16, the threshold is set at 128
             bytes (16x8).  In this case, the LAN controller transmits
             after copying 128 bytes from the host memory.  The maximum
             number that you can specify is 200 (200x8=1600 bytes), which
             ensures there will not be any underruns.

IRQMODE (VLM clients ONLY):
-------      Syntax: IRQMODE n
             Where n = 0  automatically selects interrupt sharing mode
                       1  interrupt sharing is disabled
                       2  interrupt sharing is enabled
             Default: 0  automatically selects

             This parameter enables or disables the interrupt sharing mode
             of the driver.  It has the capability to automatically select
             the enabled or disabled state depending on system
             configuration.  If the IRQ assigned to the driver is not being
             shared with another device, then interrupt sharing is
             disabled.  If the IRQ assigned to the driver is being shared,
             then interrupt sharing is enabled.

NODE ADDRESS: Syntax: NODE=xNxxxxxxxxxx
------------  where N must = 2, 6, A, or E; x = hexadecimal number
              Default: the adapter's assigned address (UAA, Universal
              Address)

             Specifies a locally administered address (LAA) unique to each
             adapter.  The node address is a 12-digit hexadecimal number;
             the second digit must be one of the following digits: 2, 6, A,
             or E.

             02AA => LAA; 02 is set by the driver if not specified.
             00A0 => Typical IBM address (default)

PROTOCOL (VLM clients ONLY):
--------     Syntax: Protocol IPX E0 Ethernet_802.2
             Values: E0   = Ethernet_802.2
                     0    = Ethernet_802.3
                     8137 = Ethernet_II
                     8137 = Ethernet_SNAP

             Indicates the standard protocol in use.

3.1.4 Problem Solving
---------------------

1) If the error message "Loader cannot find public symbol: " is
   encountered: upgrade the ETHERTSM.NLM and MSM31x, or MSM.NLM, and be
   sure to rename MSM31x.NLM to MSM.NLM.

2) Installing multiple adapters:

   If you have multiple adapters in a single server, each adapter must have
   a different NET number and SLOT number.  Also, you may want to name each
   adapter.  For example:

      LOAD C:\IBMFE SLOT=3 NAME=LAN_A
      BIND IPX TO LAN_A NET=222
      LOAD C:\IBMFE SLOT=4 NAME=LAN_B
      BIND IPX TO LAN_B NET=333

3) If you have problems loading the driver on multiple adapters and the
   initialization fails due to "Insufficient RCBs," increase the number of
   buffers allocated to the server.  Add the following to STARTUP.NCF:

      SET MINIMUM PACKET RECEIVE BUFFERS = 100 (or larger)

   The MINIMUM value you specify should be at least 100 times the number of
   10/100 EtherJet PCI Adapters in the computer.  The MAXIMUM you can
   specify depends on the amount of memory in the server, but it must be
   greater than the MINIMUM.
   In NetWare 4.1x, this can be set in the STARTUP.NCF, but NetWare 3.12
   requires that it be set in the AUTOEXEC.NCF:

      SET MAXIMUM PACKET RECEIVE BUFFERS = 2000 (or larger)

4) Installing one adapter with multiple frame types:

   When binding multiple frame types to one adapter, enter a LOAD and BIND
   statement for each frame type.  Each LOAD statement uses the same SLOT
   number, but each BIND statement needs a unique network number.  You must
   also include a name on each load line to avoid being prompted for the
   adapter to bind IPX to.  Example:

      LOAD C:\IBMFE SLOT=3 FRAME=ETHERNET_802.2 NAME=LAN8022
      BIND IPX TO LAN8022 NET=88888
      LOAD C:\IBMFE SLOT=3 FRAME=ETHERNET_802.3 NAME=LAN8023
      BIND IPX TO LAN8023 NET=77777

3.1.5 NetWare Teaming and VLAN
------------------------------

With NetWare, the VLAN and ANS (teaming) features may be used
simultaneously.  Teaming includes Adapter Fault Tolerance (AFT), Adaptive
Load Balancing (ALB), and Port Aggregation (IBM Link Aggregation, Cisco*
Fast EtherChannel* (FEC), or Gigabit EtherChannel* (GEC)).  VLAN (Virtual
LAN) support includes support of IEEE 802.1q and Cisco ISL*.

BANS.LAN is the IBM software for NetWare 4.11 and higher that provides a
variety of advanced networking services (ANS).  These benefits include the
teaming and VLAN based features.

The install utility (INSTALL.NLM for NetWare 4.x, and NWCONFIG.NLM for
NetWare 5.0) should not be used to configure BANS.LAN because of the
configuration complexity of this driver.  It may be used to copy BANS.LAN
into place, which will be done automatically if the utility is used to
install CIBMFE.  However, an attempt to install and configure BANS.LAN with
that tool will result in several lines being added to the AUTOEXEC.NCF that
appear as a normal LAN driver load and bind statement.  These lines will
need to be removed or replaced with a valid set of BANS commands.  An
alternative is to manually copy the BANS.LAN driver to the sys:\system
directory.

When using BANS.LAN, do NOT bind the network protocols (IPX, IP, etc.)
directly to the base driver of an adapter used with BANS.  Instead, bind
BANS to the base driver and the protocol(s) to BANS.  Doing otherwise can
cause routing error messages, but most likely the protocols bound directly
to the base drivers will simply not work.

The basic steps for setting up teaming and/or VLAN support are:

   LOAD the base driver (e.g. CIBMFE.LAN) with appropriate parameters.

   LOAD the BANS driver.  For a VLAN, you would include a VLAN ID number.
   For a VLAN or multiple frame type configuration, include a name.  For a
   multiple team setup, include a team number.

   BIND the BANS driver to the adapter name.  If creating multiple teams,
   include the team # here to define which team uses which adapter(s).  If
   you want to specify which adapter is to be primary or secondary, list it
   on this line.

   LOAD BANS COMMIT with a mode value.  This creates the team.  (The mode
   value distinguishes which advanced service to enable.)  If creating
   multiple teams, include the team #.

   BIND the protocol to the BANS driver using the name that was assigned
   when BANS was loaded.  If you are using only one frame type, a single
   team and no VLAN, you can bind the protocol directly to the BANS driver
   without using an assigned name.  Include a net=n number (especially in a
   script, since the system will request the number if not given).
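The same sequence applies to the load sharing modes.  The following is a
sketch only (the slot numbers, adapter names, and net number are
placeholders, and the MODE=ALB commit value is an assumption made by
analogy with the documented MODE=AFT examples below) of a two-adapter
Adaptive Load Balancing team:

   Load CIBMFE slot=2 name=NIC_A
   Load CIBMFE slot=3 name=NIC_B
   Load BANS
   Bind BANS NIC_A
   Bind BANS NIC_B
   Load BANS COMMIT MODE=ALB
   Bind IPX BANS net=4

   (MODE=ALB above is an assumed value; only MODE=AFT appears in the
   examples that follow.)

Both adapters in a load sharing team must be connected to the same
switch/segment at the same speed and duplex (see the NOTE following the
fault tolerance example below).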
Example of a Mixed Speed Fault Tolerance Team:

   Load CIBMFE name=100Meg
   Load IBMGE name=Gigabit
   Load BANS
   Bind BANS Gigabit Primary
   Bind BANS 100Meg
   Load BANS COMMIT MODE = AFT
   Bind IPX BANS Net=2

To display the current status for all adapters in a BANS team:

   LOAD BANS STATUS team=(name)

NOTE: All adapters must use the same frame type parameter.  AFT teams must
be connected to the same segment but may consist of adapters using
different speeds and duplex modes.  Only fault tolerance should use mixed
speed teams.  If using different speed adapters on an AFT team, set the
fastest adapter as the "primary".  Link all adapters of an ALB, FEC or GEC
team to the same segment using the same speed and duplex; otherwise
performance will be greatly degraded.

All other adapters compatible with your onboard controller may be part of
an ANS team with all features except for some VLAN capability.  All
compatible adapters will work for 802.1q VLAN.

3.1.5.1 VLAN Memory Considerations
----------------------------------

When using multiple VLANs, the server's default packet buffers will
probably need to be increased.  To do this, add the following lines to the
STARTUP.NCF file, which is located in the same directory that NetWare is
launched from, usually C:\NWSERVER\STARTUP.NCF:

   SET MINIMUM PACKET RECEIVE BUFFERS = 200 (or higher)
   SET MAXIMUM PACKET RECEIVE BUFFERS = 500 (or higher)

"SET MINIMUM PACKET" designates the minimum number of packet receive
buffers the system will allocate and "SET MAXIMUM PACKET" designates the
maximum.  Make sure that the maximum setting is equal to or greater than
the minimum setting.

The number of buffers required is based on the number of VLANs and whether
or not load balancing or FEC / GEC is in use.  For every VLAN used, BANS
will request 64 buffers.  When in the load sharing modes (ALB, FEC and
GEC), 64 buffers are needed for each adapter in the team.  The non load
sharing fault tolerant mode (AFT) only requires the 64 buffers for one of
the adapters in the team.

As an example, an ALB team of 2 adapters that uses 12 VLANs would require
64 [buffers] * 2 [adapters] * 12 [VLANs] = 1536 for the minimum packet
receive buffers.  This number is in addition to any other buffers that the
server may require for other purposes.  (A sample set of STARTUP.NCF lines
for this case is sketched at the end of this section.)

The default amount of memory used by each NetWare receive buffer is
approximately 4K (this varies slightly with different versions).  If the
extra memory requirement for VLAN is a problem, there are several things
that can be done to reduce the impact.

If ethernet is the only network topology that the server uses and you are
not using "jumbo frames", the size of the buffer can be reduced to 2000
bytes (the maximum ethernet frame size plus some out of band data used by
BANS) without impacting the performance of the server.  This is done by
adding the following line to the STARTUP.NCF file:

   SET MAXIMUM PHYSICAL RECEIVE PACKET SIZE = 2000

Note that this will not work if ethernet "jumbo frames" are in use.  In
fact, the packet size will have to be increased to accommodate the jumbo
frames.  Jumbo frames are supported only with a Gigabit adapter and require
a switch infrastructure that supports jumbo frames.

A keyword is included for BANS that allows the administrator to reduce the
buffer requirement per VLAN from 64 down to as low as 32; however, this
will negatively impact the server's performance.  The syntax is:

   LOAD BANS TX_ECBS_TO_USE = X

Where "X" is the number of buffers to use for each VLAN.
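As referenced in the buffer calculation above, an ALB team of 2 adapters
using 12 VLANs needs at least 1536 packet receive buffers for BANS, plus
whatever the server needs for other purposes.  A sketch of corresponding
STARTUP.NCF lines follows (the figures are placeholders: the minimum adds
a 100-buffer allowance for other server needs, and the reduced packet size
line applies only when jumbo frames are not in use):

   SET MINIMUM PACKET RECEIVE BUFFERS = 1636
   SET MAXIMUM PACKET RECEIVE BUFFERS = 2000
   SET MAXIMUM PHYSICAL RECEIVE PACKET SIZE = 2000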
Example for a Single 802.1q VLAN Team:

   Load cibmfe slot=5 frame=ethernet_802.3 name=e83
   Load bans vlanid=2 frame=ethernet_802.3 name=T1-VL2 team=1
   Load bans vlanid=3 frame=ethernet_802.3 name=T1-VL3 team=1
   Bind bans e83 team=1
   Load bans commit mode=AFT team=1
   Bind ipx T1-VL2 net=2
   Bind ipx T1-VL3 net=3

In order to function properly, the adapters configured for VLAN must be
connected to a "tagged" port (called a trunk port by Cisco) on an 802.1q
capable switch.

3.1.5.2 Teaming and VLAN Problem Solving
----------------------------------------

If you receive the ERROR MESSAGE (at commit): "Failed to create new team,"
check the following:

   a. All adapters have loaded drivers and have the same frame types
      loaded.
   b. BANS is loaded once for each VLAN and frame type.
   c. All adapters are connected to the same network segment.
   d. BANS is not being bound to an unsupported adapter.
   e. A "BANS BIND" command has been issued for each adapter and frame
      type in the team.

NOTE: Novell's CONFIG command will not reflect the BANS BIND statements
until after the COMMIT has been successfully issued.

3.1.6 DMI-SNMP
--------------

The files needed for DMI-SNMP support are located in the \NWSERVER\SNMP
directory of this diskette.  They are:

   IBMNDM.NLM   - DMI-SNMP Network Device Manager NLM, Version 1.00
   IBMNI.NLM    - DMI-SNMP Client Instrumentation NLM, Version 1.00
   IBMNI.MIF    - DMI-SNMP Client Instrumentation MIF, version 1.0 MIF file
   IBMDRV.DAT   - DMI-SNMP managed Drivers Name and Export Symbols,
                  version 1.0 DAT file
   DMI2SNMP.NLM - SNMP Extension module

NOTE: This support has been tested only with the Intel DMI Service
Provider, HP-OpenView* Network Node Manager 5.0, Intel Deviceview for web
and Intel DMI Explorer.

Prerequisites:

   1. Novell NetWare v4.11
   2. Novell NetWare SNMP Master Agent
   3. Intel DMI Service Provider.  Please download the Service Provider
      from www.intel.com/ial/wfm/tools/netsdk/index.htm and follow the
      installation instructions.

NOTE: It is strongly recommended that both NetWare SNMP and the Intel DMI
Service Provider be installed before installing this package.

3.1.6.1 Installation Instructions
---------------------------------

1. Install the Intel DMI Service Provider for NetWare, taking extra care
   to follow the instructions provided with the download.  This requires a
   Windows NT 4.0 station on your network to install from, with your
   NetWare server mapped as a drive.

2. On the NetWare server, load the Install NLM from the command prompt.
   Select Other Product options and the appropriate source drive, e.g.
   A:\dmi-snmp\snmp\nwserver

3. Follow the directions given by the 'DMI-SNMP Client Instrumentation'
   install from there.

Usage with Applications:

   1. With SNMP Agent:
      - Make sure that the NetWare SNMP Agent NLM and Intel DMI Service
        Provider are installed and configured.

   2. With SNMP Manager (HP-OpenView etc.):
      - Load/compile ibmni.mif and ibmfenic.mib.

   3. Use Intel DMI Explorer or any other DMI compliant management
      application on the Management Station (any Win32 client connected to
      the NetWare server) to access management information.

3.1.6.2 Attributes Support
--------------------------

The following shows what you should expect to see:
   GROUP          ATTRIBUTE                                          SUPPORT
   -------------  -------------------------------------------------  -------
   Component ID   Manufacturer                                       Yes
                  Product                                            Yes
                  Version                                            No
                  Serial Number                                      No
                  Installation                                       Yes
                  Verification                                       Yes

   802 Driver     Index                                              Yes
                  Name                                               Yes
                  Version                                            Yes
                  Description                                        Yes
                  Size                                               Yes
                  Interface                                          Yes
                  Interface Ver.                                     Yes
                  Interface Desc.                                    Yes

   802 Port       Index                                              Yes
                  Permanent addr.                                    Yes
                  Current addr.                                      Yes
                  Connector type                                     Yes
                  Data rate                                          Yes
                  Packets xmitted                                    Yes
                  Bytes xmitted                                      No
                  Packets rcv'd                                      Yes
                  Bytes rcv'd                                        No
                  Xmit errors                                        Yes
                  Receive errors                                     Yes
                  Host errors                                        Yes
                  Network errors                                     Yes

   Ether Group    Number of frames recv'd with checksum errors       Yes
                  Frames received with alignment error               Yes
                  Frames transmitted with one collision              Yes
                  Frames transmitted with more than one collision    Yes
                  Frames transmitted after deferral                  Yes
                  Frames not transmitted due to collisions           Yes
                  Frames not received due to overrun                 Yes
                  Frames not transmitted due to underrun             Yes
                  SQE Test Errors                                    No
                  Times carrier sense signal lost                    Yes
                  Late collisions detected                           Yes
                  Frames received with errors                        Yes
                  Ethernet Chip set                                  Yes

Unsupported features in the DMI-SNMP Client Instrumentation
===========================================================

   1. An exclusive Instrumentation directory for placing the NLM files
   2. SETS/TRAPS

3.2 SCO
-------

3.2.1 Open Server Installation Procedures
-------------------------------------------

Location of driver package:  \SCO5\IBMFE.VOL

1) Copy the ibmfe.vol file to any directory, say /tmp, on the SCO system,
   renaming the file as VOL.000.000.  Also, make the file read-only by
   using 'chmod'.  For example:

      # cp ibmfe.vol /tmp/VOL.000.000
      # chmod 444 /tmp/VOL.000.000

2) If there is an older version of the ibmfe driver on the system, you
   must first remove it.  To do this, run 'netconfig'.  Remove all
   instances of the "IBM 100/10..." adapters.  Exit netconfig without
   opting to relink the kernel.  Run 'custom'.  Remove the older driver
   for the IBM 100/10 adapter.  Exit from 'custom'.

3) Install the new driver using 'custom'.  When asked for the installation
   media, choose 'media images', and type the directory path to the
   VOL.000.000 file.  (In step 1, if you copied it to /tmp, type '/tmp'.)
   After the installation of the driver is complete, exit 'custom'.

4) Run 'netconfig' and add the adapters.  For each adapter that is present
   in the system, enter the appropriate TCP/IP parameters.  By default,
   the driver automatically detects the line speed and duplex mode.  If
   you want to force any of these settings, choose 'Advanced Options' and
   set the speed and duplex modes.  Exit 'netconfig' and choose to relink
   the kernel.

5) Reboot the system.

3.2.2 UnixWare 7 Installation Procedures
------------------------------------------

Location of driver package:  \UW7\IBMFE.PKG       (for UnixWare 7.0)
                             \UW7DDI8\IBMFE8.PKG  (for UnixWare 7.1)

1) Copy the appropriate package file into any directory on the UnixWare
   system, such as the /tmp directory.

2) Make sure no other users are logged on and all user applications are
   closed.

3) If there is an older version of the ibmfe driver on the system, first
   run 'netcfg' and remove any configured NICs.  Exit 'netcfg'.  Remove
   the old driver by typing 'pkgrm ibmfe'.  (You can find the driver
   version by typing 'pkginfo -l ibmfe'.)

4) Install the new driver using 'pkgadd'.  For example:

      # pkgadd -d /tmp/ibmfe.pkg

5) Run 'netcfg' to add and configure the NICs.

6) Reboot the system.

NOTE: If you require Hot Plug PCI capabilities, the DDI 8 ibmfe driver
must be used.  The DDI 8 driver is supported on UnixWare 7.1.0 and later
versions.  For more information about Hot Plug PCI capabilities, please
refer to the SCO UnixWare 7 documentation.

3.3 Advanced Features
---------------------

3.3.1 Adapter Fault Tolerance (AFT)
-----------------------------------

Adapter Fault Tolerance creates a team of 2 - 8 controllers to provide
automatic redundancy for your ethernet connection.
If the primary controller adapter fails, a secondary takes over, enabling
you to maintain uninterrupted network performance.

AFT is implemented with a primary controller and one or more backup, or
secondary, controllers.  These ethernet controllers can be the imbedded
controller(s) in your server or IBM Server Adapters.  During normal
operation, the backup will have transmit disabled.  If the link to the
primary adapter fails, the link to the next backup adapter automatically
takes over.

3.3.2 Adaptive Load Balancing (ALB)
-----------------------------------

Adaptive Load Balancing creates a team of 2 - 8 controllers to increase
transmission throughput.  With ALB, as you add adapters to your server,
you can group them in teams to provide up to an 800 Mbps transmit rate and
a 100 Mbps receive rate, with a maximum of eight controllers.  The ALB
software continuously analyzes transmit loading on each adapter and
balances the transmission across the adapters as needed.  Adapter teams
configured for ALB also provide the benefits of AFT described in the
preceding section.  Receive rates remain at 100 Mbps.

To use ALB, you must have two to eight compatible network controllers
installed in your server, all linked to the same network switch/segment.
ALB works with any 100BASE-TX switch.

3.3.3 Fast EtherChannel* (FEC)
------------------------------

Fast EtherChannel* creates a team of 2, 4, 6 or 8 controllers to increase
transmission and reception throughput.  FEC is a performance technology
developed by Cisco to increase your server's throughput.  Unlike ALB, FEC
can be configured to increase both transmission and reception channels
between your server and switch.  FEC works only with FEC-enabled Cisco
switches such as the Catalyst 5000 series.

With FEC, as you add adapters to your server, you can group them in teams,
with a maximum of eight compatible controllers.  The FEC software
continuously analyzes loading on each controller and balances network
traffic across the controllers as needed.  Adapter teams configured for
FEC also provide the benefits of AFT.

To use FEC, you must have 2, 4, 6, or 8 compatible network controllers
installed in your server, all linked to the same FEC-enabled Cisco switch.
(Note that the switch must support more than 4 controllers in FEC in order
for more than 4 controllers to work in FEC.  Consult your switch
documentation.)

3.3.4 Virtual LAN (VLAN)
------------------------

A Virtual LAN is a logical grouping of network devices put together as a
LAN regardless of their physical grouping or collision domains.  VLANs let
a user see and access only specified network segments.  This increases
network performance and improves network security.

VLANs offer the ability to group users and stations together into logical
work-groups.  This can simplify network administration when connecting
clients to servers that are geographically dispersed across the building,
campus, or enterprise network.  Typically, VLANs consist of co-workers
within the same department but in different locations, groups of users
running the same network protocol, or a cross-functional team working on a
joint project.  Joining workers with VLANs forms logical working groups.

Normally, VLANs are configured at the switch, and any computer can be a
member of one VLAN per installed network adapter.  The controller in your
IBM server supersedes this by communicating directly with the switch,
allowing multiple VLANs on a single adapter (up to 64 VLANs).
To set up VLAN membership, your adapter must be attached to a switch with
VLAN capability.

3.3.5 Wake on LAN (WOL)
-----------------------

NOTE: The information in this section applies only to servers which
implement the WOL function.  Consult your server documentation.

The Wake on LAN (WOL) feature of the server allows it to be powered-on
remotely by a network management program.  In some operating systems, it
can also wake the computer from suspend mode.  This is accomplished by
sending a special type of data packet containing the adapter's specific
Ethernet address.  When the computer is powered off, the ethernet
controller continues to operate by using standby power.  As long as AC
power is available to the power supply, the controller will draw standby
power when the machine is powered off, allowing it to "listen" to the
network for a wake up packet.

3.3.5.1 Troubleshooting Wake on LAN
-----------------------------------

NOTE: The reception of a wake up packet will set the adapter to a special
state.  This condition must be reset before the adapter will accept
another wake up packet and power-on the computer.  The drivers for this
adapter are written to reset this condition when they load.  Once you have
sent the computer a wake up packet and powered-on the computer, you must
let a driver load or it will no longer accept any wake up packets.  The
only other way of resetting this condition is removing AC power from the
computer for a short duration (~10 - 15 seconds).

If the computer still will not power-on when a wake up packet is sent,
check the computer's BIOS for power settings.

If you are connected to a hub or switch, the link LED on the back of the
adapter should be on, even though computer power is off.  If the link LED
is not on, try powering the computer on.  If the LED now lights, then the
controller is not receiving power in standby mode.  If this is the case,
the computer may need to be serviced.

If the link LED does not come on when the computer is powered-on, you have
not established link with your hub or switch.  Check your cabling or
substitute it with a cable that has been verified to work correctly.  Make
sure your hub or switch is either 10BaseT or 100BaseTX and is powered-on
and fully functional.

If everything listed above is correct and the computer will still not
respond to a wake up packet, your computer may need to be serviced.

3.3.6 Boot Agent
----------------

The Boot Agent is a utility program that is stored in a portion of system
BIOS, allowing the ethernet controller to remotely boot the system from
the network using either of two methods.  The default method is PXE, a
remote boot procedure defined by the "Wired for Management" specifications
and used by powerful network management programs.  The alternate method is
RPL, an established industry standard historically utilized for remote
booting of diskless workstations from network operating systems such as
NetWare* and Windows NT* Server.  Computers do not need to be Wake on LAN
enabled to use this feature.

When the computer is first powered-on, the Boot Agent will execute and
display a message similar to the following, with the current version
number:

   Initializing Boot Agent Version X.X
   Press Ctrl+S to enter the Setup Program..

By default, this message will display for 2 seconds, and the system will
then attempt to boot from a local drive.  If the attempt to boot from a
local drive fails, the agent will attempt to boot remotely.
To change the configuration of the Boot Agent, press the "Ctrl" key and
"S" key simultaneously during the time that this message is displayed.
This will bring up the Boot Agent configuration screen.

NOTE: Depending on the current setup options, the "Press Ctrl+S" message
may not appear.  In this case you can still press the "Ctrl" and "S" keys
to enter the Setup Program.

3.3.6.1 Parameters
------------------

There are 7 configurable parameters.  Follow the on-screen instructions to
select, change and save the different parameters.  The parameters are
explained below, with the default selection listed first.

Boot Protocol
   Selections are PXE and RPL.  Select PXE for use with Wired for
   Management compliant network management programs.  Select RPL for
   legacy style remote booting.

PnP/BEV Boot
   Selections are Disable and Enable.  Select Disable to use the Boot
   Agent for remote boot operation.  Select Enable if your computer BIOS
   has a BEV (Boot Entry Vector) capable remote boot program built in and
   you wish to use that remote boot agent instead of the IBM Boot Agent.

Default Boot
   Selections are Local and Network.  If Local is selected, the Boot Agent
   will attempt to boot according to the boot sequence option (defined in
   the system BIOS setup) first, then attempt to boot from the network if
   local boot fails.  If Network is selected, the Boot Agent will attempt
   to boot from the network first, regardless of the boot sequence option
   defined in system BIOS.

Local Boot
   Selections are Enable and Disable.  If Enable is selected, the system
   will be able to boot from a local drive (floppy drive or hard drive).
   If Disable is selected, the system will not be able to boot from a
   local drive.  This will be true regardless of the Default Boot setting.

Prompt Time
   Selections are 2, 3, 5 and 8.  The number represents the amount of time
   in seconds the "Initializing Boot Agent Version 2.2 - Press Ctrl+S to
   enter the Setup Program.." message is displayed every time the system
   is booted.

Setup Message
   Selections are Disable and Enable.  If Enable is selected, the message
   "Initializing Boot Agent Version 2.2 - Press Ctrl+S to enter the Setup
   Program.." will be displayed during boot up.  If Disable is selected,
   only the message "Initializing Boot Agent Version 2.2" will appear.
   However, you will still be able to press Ctrl+S to enter the setup
   program at that time.

Power Management
   The selections are ACPI and APM.  ACPI should work in most computers.
   In servers supporting Wake on LAN, the APM selection will pre-enable
   the Wake on LAN function of the adapter.  Set this selection to APM if
   you are having difficulty with remote wake up in computers that are
   compliant to the PCI 2.2 specification and are running an OS that is
   not ACPI (Advanced Configuration and Power Interface) aware.

3.3.6.2 Troubleshooting Boot Agent
----------------------------------

If you do not see a message similar to "Initializing Boot Agent Version
X.X" during the computer start-up, check the following:

In the system BIOS setup menu, select "Devices and I/O Ports".  Then check
that the "Planar Ethernet PXE/DHCP" option is enabled.

When using an RPL remote boot with a plug-in adapter based on an Intel
82557, 82558 or 82559 LAN controller chip, the boot agent may try to
remote boot using the controller built into your server instead of the
plug-in adapter.  To avoid this, disable the built-in controller in the
computer BIOS settings.

3.3.7 Priority Packet
---------------------

This adapter provides support for Priority Packet.
Software for Priority Packet is located on a separate diskette which is
available from this same site.

Priority Packet is a program that adds IEEE 802.1p tagging (also known as
Traffic Class Expediting) and Priority Queuing features to adapters.

3.3.7.1 IEEE 802.1p tagging
---------------------------

IEEE 802.1p is a new IEEE standard for tagging, or adding additional bytes
of information to, packets with different priority levels.  Tagging is a
method of assigning different levels of priority to data packets based on
user defined Priority Filters.  This allows you to grant a greater share
of available network bandwidth to critical applications.

Packets are tagged with 4 additional bytes, which increase the packet size
and indicate a priority level.  The addition of 4 bytes to the packet
increases the maximum Ethernet packet size from 1514 bytes to 1518 bytes.
This increase in maximum packet size imposes certain restrictions on the
use of this technology.

First, your interconnecting network infrastructure must support this
standard or you will realize no gain from tagging.  Furthermore, many LAN
adapters, switches and routers that do not support 802.1p will reject any
packets over 1514 bytes.  Check with the hardware vendor if you are not
sure whether your network hardware will forward packets up to 1518 bytes
in length.  Simple repeater type hubs (class I and II) will forward the
tagged packets, but no gain will be realized from tagging.  802.1p
compliant hardware must be configured to remove the additional 4 bytes
from the tagged packet before forwarding it to internetworking hardware or
end nodes that will reject packets over 1514 bytes in length or do not
support 802.1p tagging.

If your network infrastructure does not support 802.1p tagging, set IEEE
802.1p/802.1q tagging to "Disable" (the default setting) in the adapter
advanced properties.  Priority queuing will still prioritize packets based
on your priority filter settings.

In systems equipped with more than one supported adapter, each adapter
must have IEEE 802.1p/802.1q tagging enabled separately.  Priority filters
apply to all adapters in a system and cannot be applied individually to
adapters in multi-homed systems.

Tagging is currently not supported in conjunction with the adapter teaming
features (Adapter Fault Tolerance, Adaptive Load Balancing and Fast
EtherChannel*).

3.3.7.2 Priority Queuing
------------------------

Priority queuing is a feature that creates a separate high priority queue
in addition to the existing queue.  Data packets are entered into these
queues prior to being transmitted.  The priority assigned to a packet
determines which queue it goes into.  Packets with a priority of 4-7 go
into the high priority queue and are sent ahead of packets in the normal
queue.  This feature does not modify the packet contents and will work
over legacy network equipment.

You can assign priorities based on network layer properties, such as the
node address of the destination computer, the Ethernet type, or various
properties of the TCP/IP and IPX protocol suites.

4.0 Web Sites and Support Phone Number
________________________________________

IBM Support Web Site:              http://www.pc.ibm.com/support
IBM Marketing Netfinity Web Site:  http://www.pc.ibm.com/netfinity

If you have any questions about this update, or problems applying the
update, go to the following Help Center World Telephone Numbers URL:
http://www.pc.ibm.com/qtechinfo/YAST-3P2QLY.html
5.0 Trademarks and Notices
____________________________

The following terms are trademarks of the IBM Corporation in the United
States or other countries or both:

   IBM
   OS/2
   Netfinity

Microsoft, Windows NT, and Windows 2000 are trademarks or registered
trademarks of Microsoft Corporation.

Cisco and Fast EtherChannel are trademarks or registered trademarks of
Cisco Systems, Inc.

Novell and NetWare are registered trademarks of Novell, Inc.

SCO is a registered trademark of The Santa Cruz Operation, Inc.

Intel is a registered trademark of Intel Corporation.

Other company, product, and service names may be trademarks or service
marks of others.

6.0 Disclaimer
_______________

THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND.  IBM
DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND
MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT.  BY
FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR
COPYRIGHTS.

Note to U.S. Government Users -- Documentation related to restricted
rights -- Use, duplication or disclosure is subject to restrictions set
forth in GSA ADP Schedule Contract with IBM Corp.