Note: This is a reprint from Open-E Blog
There is plenty of talk about bonding and Multipath I/O (MPIO), but solid information about either one is hard to find. The documentation that does exist is typically bulky, and the most important practical questions go unanswered.
As a result the following questions are often heard:
- When should I use bonding and when should I use multipath?
- I was expecting better throughput with bonding; why am I not seeing it?
- My RAID array shows 400 MB/sec in a local test; how can I get 400 MB/sec over the network?
Before we answer the above questions, let's first understand how MPIO and bonding work.
MPIO allows a server with multiple NICs to transmit and receive I/O across all available interfaces to a corresponding MPIO-enabled server. If a client server has two 1 Gb NICs and the storage server has two 1 Gb NICs, the theoretical maximum is about 250 MB/s (2 × 125 MB/s per link), with roughly 200 MB/s achievable in practice.
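As a back-of-the-envelope check on those numbers (a sketch, not a benchmark; the 80% efficiency factor is an assumed allowance for TCP/iSCSI protocol overhead, not a measured value):

```python
GBIT = 1_000_000_000  # bits per second in one gigabit


def aggregate_throughput_mb_s(num_paths: int, link_gbit: float = 1.0,
                              efficiency: float = 0.8) -> float:
    """Estimate combined MPIO throughput in MB/s.

    MPIO spreads the I/O of a single session across all paths,
    so the per-link bandwidths add up. `efficiency` is an assumed
    factor covering TCP/iSCSI protocol overhead.
    """
    bits_per_sec = num_paths * link_gbit * GBIT * efficiency
    return bits_per_sec / 8 / 1_000_000  # bits -> MB/s


print(aggregate_throughput_mb_s(2))  # two 1 Gb NICs -> 200.0 MB/s
```

With `efficiency=1.0` the same formula gives the 250 MB/s theoretical ceiling; the point is that with MPIO the paths genuinely add up for one session.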
Link aggregation (LACP, 802.3ad, etc.) via NIC teaming does not work the same way as MPIO. Link aggregation does not improve the throughput of a single I/O flow: a single flow always traverses only one path. The benefit of link aggregation appears when several unique flows exist, each from a different source. A hash algorithm assigns each individual flow to one of the available NIC interfaces, so more unique flows spread across more NICs yield greater aggregate throughput. Link aggregation will therefore not improve throughput for a single iSCSI session, although it does provide a degree of redundancy.
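To illustrate why a single flow never spreads across aggregated links, here is a simplified sketch modeled loosely on the Linux bonding driver's layer-2 transmit hash (source MAC XOR destination MAC, modulo the number of slave NICs); the function name and the simplification are illustrative, not the driver's actual code:

```python
def select_nic(src_mac: str, dst_mac: str, num_nics: int) -> int:
    """Pick a NIC index from the XOR of source and destination MACs.

    Because the hash depends only on the address pair, every packet
    of one flow lands on the same NIC -- a single flow can never use
    more than one link's worth of bandwidth.
    """
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return ((src ^ dst) & 0xFF) % num_nics  # last byte, mod slave count


# One src/dst pair (one flow) always hashes to the same NIC:
flow = select_nic("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2)
# A different workstation (different source MAC) may land on the other NIC:
other = select_nic("00:11:22:33:44:56", "66:77:88:99:aa:bb", 2)
print(flow, other)  # prints: 0 1
```

This is exactly why bonding helps when many workstations talk to one storage server (many unique address pairs spread across the NICs) but does nothing for one server-to-server iSCSI session.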
Bonding works between a server and a switch. Numerous workstations, each using a single NIC connected to the switch, will benefit from bonded connections between the switch and the storage server.
MPIO works between a storage server and the client server, whether or not there is a switch in the path.
With these basic facts in mind, it is now easier to answer our questions.
Q: When do I need bonding, and when is multipath appropriate?
A: Bonding works for a NAS server with multiple workstations connected.
MPIO works between initiators and targets over FC or iSCSI. An example MPIO configuration, with a performance test showing 200 MB/sec using dual Gb NICs, is demonstrated step by step in: How to configure DSS V6 MPIO with Windows 2008 Server.
- Bonding works for NAS
- MPIO works for SAN