White Paper: Windows Server High Availability with Microsoft MPIO

Making MPIO-Based Solutions Work

The Windows operating system relies on the Plug and Play (PnP) Manager to dynamically detect and configure hardware (such as adapters or disks), including hardware used for high availability/high performance Multipathing solutions.

Note: A reboot is required when the MPIO feature is first installed.

Device Discovery and Enumeration

MPIO/multipath drivers cannot work effectively or efficiently until they discover, enumerate, and configure the devices that the OS sees through redundant adapters into a logical group. This section briefly outlines how MPIO, along with the DSM, discovers and configures devices.

Without a multipath driver, the same device seen through different physical paths would appear as entirely separate devices, leaving room for data corruption. Figure 1 depicts this scenario.

[Figure contents: without multipathing software, the host incorrectly interprets the two paths as leading to two storage units; with multipathing software, the server correctly interprets the two paths as leading to the same storage unit.]

Figure 1: What the operating system “sees” with and without MPIO

The following is the sequence of steps the device driver stack walks through to discover, enumerate, and group physical devices and device paths into a logical set (assuming a new device is being presented to the server):

1. A new device arrives.

2. The PnP manager detects the arrival of the device.

3. The MPIO driver stack is notified of the device arrival (it takes further action if the device is a supported MPIO device).

4. The MPIO driver stack creates a pseudo-device for the physical device.

5. The MPIO driver walks through all the available DSMs to find out which vendor-specific DSM can claim the device. After a DSM claims a device, the device is associated only with the DSM that claimed it.

6. The MPIO driver, along with the DSM, makes sure the path to the device is connected, active, and ready for I/O. If a new path to the same device arrives, MPIO works with the DSM to determine whether it leads to an already-claimed device, and groups the physical paths for the same device into a logical set called a multipath group.
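The grouping in steps 5 and 6 can be sketched in Python as follows. This is an illustrative model only, not driver code; the function name, path identifiers, and serial numbers are hypothetical:

```python
# Illustrative sketch of MPIO-style path grouping (not actual driver code).
from collections import defaultdict

def group_paths(discovered_paths):
    """Group (path_id, device_serial) pairs into multipath groups,
    keyed by the unique device identifier."""
    groups = defaultdict(list)
    for path_id, device_serial in discovered_paths:
        groups[device_serial].append(path_id)
    return dict(groups)

# Two HBAs each see the same LUN (serial "X"), so both paths land
# in a single multipath group instead of appearing as two disks.
paths = [("hba1-port0", "X"), ("hba2-port0", "X"), ("hba1-port1", "Y")]
groups = group_paths(paths)
# groups == {"X": ["hba1-port0", "hba2-port0"], "Y": ["hba1-port1"]}
```

The key point is that grouping hinges entirely on a per-device identifier that is stable across paths, which is the subject of the next section.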

Highly Available Storage: Multipath Solutions in Windows Server 2008 and Windows Server 2003


Unique Storage Device Identifier

For dynamic discovery to work correctly, an identifier must be obtainable for each device regardless of the path from the host to the storage device. Each logical unit must have a unique hardware identifier. The MPIO driver package does not use disk signatures stored in the data area of a disk for identification. Instead, the Microsoft-provided generic DSM manufactures a unique serial number from the hardware data reported by standard SCSI INQUIRY commands. MPIO also optionally supports using device manufacturer-assigned unique serial numbers.

Since not all storage IHVs assign their devices a unique hardware serial number, Microsoft includes in its sample generic DSM source code a means of deriving one from other SCSI INQUIRY data. Alternatively, vendor-specific mechanisms can be implemented in the IHV's DSM.
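As an illustration of the kind of identifier derivation described above, the following Python sketch concatenates normalized INQUIRY-style fields into a single identifier. The function name and field values are hypothetical; the real generic DSM operates on raw SCSI INQUIRY data in kernel mode:

```python
# Hypothetical sketch: derive a unique device identifier from SCSI
# INQUIRY-style fields (vendor, product, serial number), in the spirit
# of the generic DSM described above.
def derive_device_id(vendor, product, serial):
    """Concatenate normalized INQUIRY fields into one identifier."""
    return "-".join(part.strip().upper() for part in (vendor, product, serial))

# The same LUN reports the same INQUIRY data on every path, so every
# path to it yields an identical identifier despite whitespace noise.
id_a = derive_device_id("ACME ", "ARRAY100", "0001abcd")
id_b = derive_device_id("ACME", "ARRAY100 ", "0001ABCD")
assert id_a == id_b == "ACME-ARRAY100-0001ABCD"
```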

Dynamic Load Balancing

Load balancing, the redistribution of read/write requests to maximize throughput between server and storage device, is especially important in high-workload settings or other settings where consistent service levels are critical. Without multipath I/O software, a server sending I/O requests down several paths may place very heavy workloads on some paths while others sit underutilized.

The Microsoft MPIO software supports the ability to balance the I/O workload without administrator intervention. MPIO determines which paths to a device are in an active state and can be used for load balancing. Each vendor's load-balancing policy (which may use any of several algorithms, such as round robin, the path with the fewest outstanding commands, or a vendor-unique algorithm) is set in the DSM. This policy determines how the I/O requests are actually routed.

Note: In addition to the support for load balancing provided by MPIO, the hardware used must support the ability to use multiple paths at the same time rather than just fault tolerance.
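Two of the policies mentioned above, round robin and fewest outstanding commands, can be sketched as follows. This is an illustrative user-mode model; the class names and path labels are hypothetical, and real policies live in the vendor's DSM inside the driver stack:

```python
# Illustrative sketch of two DSM load-balancing policies: round robin
# and "fewest outstanding commands". Names are hypothetical.
import itertools

class RoundRobin:
    def __init__(self, paths):
        self._cycle = itertools.cycle(paths)
    def select(self, outstanding):
        # Ignore in-flight counts; simply rotate through active paths.
        return next(self._cycle)

class LeastOutstanding:
    def __init__(self, paths):
        self._paths = paths
    def select(self, outstanding):
        # Pick the active path with the fewest commands in flight.
        return min(self._paths, key=lambda p: outstanding[p])

rr = RoundRobin(["path0", "path1"])
lo = LeastOutstanding(["path0", "path1"])
inflight = {"path0": 5, "path1": 2}
assert [rr.select(inflight), rr.select(inflight)] == ["path0", "path1"]
assert lo.select(inflight) == "path1"  # path1 has fewer in-flight commands
```

Both policies assume MPIO has already told the DSM which paths are active; the policy only chooses among them per I/O request.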

Error Handling, Failover and Recovery

The MPIO driver, in combination with the DSM, supports end-to-end path failover. Like load balancing, the process of detecting failed paths and recovering from the failure is automatic, usually fast, and completely transparent to the IT organization; ideally, the data remains available at all times. Not all errors result in failover to a new path. Some errors are temporary and can be recovered by a recovery routine in the DSM; if recovery is successful, MPIO is notified and the path is revalidated to verify that it can be used again to transmit I/O requests.

When a fatal error occurs, the path is invalidated and a new path is selected. The I/O is resubmitted on the new path without requiring the application layer to resubmit the data.

Differences in load balancing terminology:
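The error-handling flow just described can be modeled roughly as follows. All names are hypothetical illustrations; a real DSM performs this work inside the storage driver stack, not in application code:

```python
# Sketch of the failover flow described above: a transient error is
# retried via the DSM's recovery routine; a fatal (or unrecovered)
# error invalidates the path and the I/O is resubmitted on an
# alternate path, transparently to the application layer.
def submit_io(io, paths, send, try_recover):
    for path in list(paths):
        status = send(path, io)
        if status == "ok":
            return path
        if status == "transient" and try_recover(path):
            # Recovery succeeded; the path is revalidated and reused.
            if send(path, io) == "ok":
                return path
        # Fatal or unrecovered error: invalidate this path, fail over.
        paths.remove(path)
    raise IOError("all paths to the device have failed")

# One path fails fatally; the I/O completes on the alternate path
# without the caller resubmitting anything.
statuses = {"path0": "fatal", "path1": "ok"}
used = submit_io("write-block", ["path0", "path1"],
                 send=lambda p, io: statuses[p],
                 try_recover=lambda p: False)
assert used == "path1"
```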

There are two primary types of load-balancing technology referred to within Microsoft Windows. This white paper discusses only the first.

1. MPIO Load Balancing – The type of load balancing supported by MPIO is the use of multiple data paths between server and storage to provide greater data throughput than could be achieved with only one connection.


2. Network Load Balancing (NLB) – A Microsoft Windows cluster technology that provides load balancing of network interfaces to deliver greater throughput across a network to the server; it is most typically used with Internet Information Server (IIS).

Differences in failover technology:

When speaking about data path failover, such as the failover of HBA or iSCSI connections to storage, the following main types of failover are available:

1. MPIO-based fault-tolerant (FT) failover – In this scenario, multiple data paths to the storage are configured; in the event that one path fails, the HBA or NIC fails over to the other path and re-sends any outstanding I/O.

a. For a server that has one or more HBAs or NICs, MPIO supports redundant switch fabrics or connections from the switch to the storage array.

b. For a server that has more than one HBA or NIC, MPIO also protects against the failure of one of those adapters within the server itself.

2. MPIO-based load balancing – In this scenario, multiple paths to storage are also defined, but the DSM balances the data load to maximize throughput. This configuration can also employ FT behavior, so that if one path fails, all data goes over an alternate path.

In some hardware configurations you may have the ability to perform dynamic firmware updates on the storage controller, so that a complete outage is not required for firmware updates. This capability is hardware dependent and requires (at a minimum) that more than one storage controller be present, so that data paths can be moved off a storage controller during an upgrade.

3. Cluster failover, such as Windows Server Failover Clustering (WSFC) – This type of configuration offers resource failover at the application level from one cluster node to another. It is more invasive than storage path failover in that client applications must reconnect after failover and resend data from the application layer. This method can be combined with scenarios 1 and/or 2 above to further mitigate exposure to different types of hardware failure.

Different behaviors are available depending on the type of failover technology used, and whether it is combined with a different type of failover or redundancy. Consider the following scenarios:

Scenario 1: Using MPIO without clustering:

This scenario provides either a fault-tolerant connection to data or a load-balanced connection to storage. Since this layer of FT operation protects only the connectivity between the server and storage, it does not protect against server failure.


Scenario 2: Combining the use of MPIO in fault tolerant mode with WSFC:

This configuration provides the following advantages:

• If a path to the storage fails, MPIO can use an alternate path without requiring client application reconnection.

• If an individual server experiences a critical event such as a hardware failure, the application managed by WSFC fails over to another cluster node. While this does require client reconnection, the time to restore service may be much shorter than the time required to replace the failed hardware.

Scenario 3: Combining the use of MPIO in load balancing mode with WSFC:

This scenario provides the same benefits as scenario 2, plus:

• During normal operation, multiple data paths may be employed to provide greater aggregate throughput than one path could provide.

Scenarios 2 and 3 may also be used as an aid to software update management to reduce total downtime perceived by clients by making the managed application available again on a different server while one is updated.
