With all the recent Twitter chatter on the topic, I feel compelled to throw in my two cents on Fibre Channel over Ethernet. Here goes!
FCoE Selling Point
Traditionally, data center designers built separate Ethernet-based Data Networks (LANs) and FC-based Storage Networks (SANs). This separation was necessary because storage requires lossless operation, and Ethernet could not provide it. Meeting that requirement cost IT departments considerable extra money in fiber cabling and FC adapters in servers.
Data Center Bridging (DCB) - which IBM calls Converged Enhanced Ethernet (CEE) and Cisco has referred to as Data Center Ethernet (DCE) - adds lossless operation to Ethernet through a set of IEEE extensions, most notably Priority-based Flow Control (802.1Qbb). This allows us to multiplex storage traffic onto our data networks. Voila, we can save a relative fortune in cabling costs and dedicated FC equipment, and to a lesser degree, we can save on server network adapters by using Converged Network Adapters (CNAs).
Where Are Things Going?
This is great… Now we’re saving a bunch of money, and we still have the same basic network. But is this the best we can do? Of course not! Why do we want the same basic network? Why don’t we put our storage traffic into IP packets and zip it along like everything else? Then we won’t need a separate network at all, virtual or otherwise.
If this story sounds familiar, that’s because it is. At one time, SNA traffic had a dedicated network. Then we decided to encapsulate it in a layer 2 protocol via RSRB, SR/TLB, etc. Eventually, we tossed the traffic into IP packets via DLSw+. Ultimately, we put IP-capable adapters in our mainframes and dispensed with the legacy technologies.
Or maybe you're thinking about voice… Who remembers the MC3810? That was my first introduction to Voice over X technology. We plugged PBX T1s into a router and encapsulated voice into Frame Relay or ATM. There’s even a parallel here with the FCoE multi-hop controversy. Later we encapsulated the T1 traffic into IP packets on a router. Eventually, we put IP-capable adapters into our PBXs and dispensed with the legacy technologies.
As far as storage goes, we’re still stuck on step 2, encapsulating storage into a layer 2 protocol. Meanwhile, there are plenty of Storage over IP options available that don’t quite meet our performance needs. Does anyone actually want to bet that NAS or iSCSI is never going to meet our performance needs? Sure, there will always be specific high-performance computing (HPC) needs that push the envelope, but the vast majority of corporate needs will eventually be met by IP-based storage.
So What Should We Do?
Does this mean we should ignore FCoE? Definitely not. I am in no hurry to swap out any existing FC-based SANs for FCoE. I don’t see the financial justification for it, since I’ve already sunk my money into the cabling. If I were to build a new data center, and I could demonstrate that IP-based storage would not meet my performance needs, I would absolutely go for an FCoE solution. But I would also spend a lot of time determining whether I truly needed the extra performance offered by a SAN. Any new DC build is going to be based on 10 Gigabit Ethernet, so that should factor into the decision.
Ultimately, Fibre Channel will go away, just like every other dedicated network technology before it. During the transition phase, which we’re currently in, it often makes financial sense to go with the interim technology. I can’t come up with a scenario where it would make sense to replace an existing, working Fibre Channel network, but new builds should seriously consider going with FCoE.