Wednesday, August 4, 2010

Migrating from Catalyst to Nexus

There are already a couple of great resources on the Internet for network engineers who are migrating from a Catalyst-based data center to a Nexus-based data center. Two of my favorites are Cisco's site, which hosts a number of IOS –> NX-OS comparison pages, and a page put together by Carole Warner Reece of Chesapeake Netcraftsmen. Both of these resources helped me quite a bit in working out the syntax of NX-OS commands. This blog post is an attempt to supplement that base of knowledge with my own experiences in converting a production data center over to the NX-OS platform.

My Catalyst network was a fairly standard data center design, with a pair of 6509s in the core and multiple Top-of-Rack switches cascaded below. We used RAPID-PVST, with blocking occurring in the middle of a TOR stack.

The new Nexus environment looks pretty much the same. We have a pair of Nexus 7010s in the core with a layer of Nexus 5020 switches at the edge. Each 5020 supports 4 – 6 FEXs. The FEXs only uplink to a single 5020 switch. This new network was built alongside the existing Catalyst one. The plan was to interconnect the two environments at the cores (at layer 2) and migrate server ports over whenever possible. Once we reached a critical mass of devices on the Nexus side of the network we planned to move the Layer 3 functionality from the IOS environment to the NX-OS side.

A few months ago we finished building out the Nexus LAN and interconnected it to the Catalyst LAN. We used vPC on the Nexus side to reduce the amount of SPT blocking. All was well, and we began migrating servers to the Nexus infrastructure without any issues. Eventually we reached our pre-determined “critical mass” and scheduled a change window to migrate the SVIs to the Nexus side and reconfigure the core Catalyst 6509s as Layer-2 only devices. The configuration work for this migration was around 1500 lines, so it was not by any means trivial, but it was also quite repetitive due to the number of SVIs and size of the third party static routing tables. Here’s where the fun began.

The first issue we ran into was with an extranet BGP peering connection through a firewall. In our design, we connect various third parties to an aggregation router in a DMZ. The routes for these third parties are advertised to our internal network via BGP, through a statically-routed firewall. Most of our third party connections also utilize BGP, so we receive a variety of BGP AS numbers. In two cases, the BGP AS number chosen by the third party overlaps with one of our internal AS numbers. To rectify this, we enabled the “allowas-in” knob on the internal BGP peering routers. Unfortunately this knob will not be available on the Nexus platform until NX-OS 5.1. I should have caught this in my pre-implementation planning. This was fixed with a small set of static routes. Our medium-term plan is to work with the two third parties to change their AS numbers, and eventually we will implement “allowas-in”, once we upgrade to NX-OS 5.1. Another interesting thing to note about BGP on NX-OS is that the routers check the AS-path for loops for both eBGP and iBGP neighbors. IOS does not do any loop-checking on iBGP advertisements.
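For reference, here is a sketch of what the eventual configuration might look like once we are on NX-OS 5.1. The AS numbers and neighbor address are made up for illustration; check the syntax against your release:

```
! NX-OS 5.1+ -- accept routes whose AS-path contains our own AS
router bgp 65001
  neighbor 192.0.2.1 remote-as 65010
    address-family ipv4 unicast
      allowas-in 1
```

Until then, a plain static route toward the firewall covers each overlapping third-party prefix.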

With that behind us, we moved on to the SVIs and migrating our spanning-tree root to the Nexus switches. The SVI migration was trivial, but the SPT root migration caused issues. We were bitten by the behavior of Bridge Assurance, a default feature in NX-OS that was unavailable in our version of IOS (SXF train). Surprisingly, the lack of Bridge Assurance support didn’t prevent the Catalyst<->Nexus interconnect from working while the SPT root was on the Catalyst side of the network, but once we moved the root to the Nexus side, Bridge Assurance shut down the interconnects. The only acceptable solution to this issue (that I could find) was to disable Bridge Assurance globally on the Nexus switches. My error here is that I took for granted that my interconnect was properly configured, because it had been working for several months.

After encountering this issue I took another look at Terry Slattery’s blog post on Bridge Assurance, and at Cisco’s Understanding Bridge Assurance IOS Configuration Guide. The problem I experienced is that Bridge Assurance requires switches to send BPDUs upstream (towards the root), while normal RSTP behavior is to suppress the sending of BPDUs towards the root. When the Catalyst side of the network contained the root, the Catalyst switches sent BPDUs downstream to the Nexus switches (normal RSTP behavior) and the Nexus switches sent BPDUs upstream to the Catalysts, which is abnormal for RSTP, but harmless. The Catalyst switches simply discarded the BPDUs. Once the SPT root was migrated, the Nexus switches sent BPDUs to the Catalysts (normal), and the Catalysts suppressed all BPDUs towards the Nexus switches (normal for RSTP, but not correct for Bridge Assurance). For the first few seconds this was not a problem and forwarding worked fine, but eventually the Bridge Assurance timeout was reached and the Nexus switches put the ports into BA-Inconsistent state. The “right” way to solve this issue is to upgrade the Catalyst switches to SXI IOS and re-enable Bridge Assurance. My preference is to simply retire the 6509s, so I’ll have to keep tabs on the migration effort. If it looks like it will drag on for a while, I’ll schedule the upgrades.
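For anyone hitting the same symptom, the global workaround is a one-liner on the Nexus side:

```
! Disable Bridge Assurance globally on each Nexus switch
N7K(config)# no spanning-tree bridge assurance
```

Note this turns the feature off everywhere, including Nexus-to-Nexus links where it was doing useful work.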

(Edit on 8/9/2010 - commenter "wojciechowskipiotr" noted that configuring the port-channels towards the 6500s with "spanning-tree port type normal" would also disable Bridge Assurance, but only for those specific ports. If I get an opportunity to try this configuration, I will report on whether it is successful.)
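The per-port alternative the commenter describes would look something like this on the Nexus side (the port-channel number is illustrative). Setting the port type to "normal" exempts just that link from Bridge Assurance, which only runs on "network" type ports:

```
! On the port-channels facing the Catalyst 6500s
interface port-channel10
  spanning-tree port type normal
```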

The remaining issues I faced were minor, but in some cases are still lingering or just annoyed me:

1) Static routes with “name” tags are unavailable. I had gotten into the habit of adding named static routes to the network, especially for third-party routing. It appears that NX-OS does not support this.
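To illustrate the difference (addresses are made up), the IOS habit was the first line below; NX-OS takes the route but has no "name" option, and expects prefix-length notation:

```
! IOS: named static route
ip route 198.51.100.0 255.255.255.0 10.0.0.1 name PARTNER-A
! NX-OS: same route, no name tag
ip route 198.51.100.0/24 10.0.0.1
```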

2) VTP is unavailable. Based on conversations with other networkers, I’m probably the last living fan of VTP. I am sad to see it go. Fortunately in the Nexus environment there are fewer places to add VLANs (only the 5ks and 7ks).

3) Some of the LAN port defaults are different (when compared to IOS). For example, QoS trust is enabled by default. Also, “shutdown” is the default state for all ports; if a port is active, it will have a “no shutdown” line in the config.
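As an illustration (the interface and VLAN numbers are arbitrary), an active NX-OS access port carries the explicit "no shutdown" in the running config:

```
interface Ethernet1/10
  switchport
  switchport access vlan 100
  no shutdown
```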

4) The OSPF default reference bandwidth is now 40 Gbps, rather than the 100 Mbps value in IOS. This is a good thing, since 100 Mbps is woefully low in today’s networks.
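If you need matching OSPF costs on both platforms during a migration, the reference bandwidth can be aligned explicitly. A sketch (process ID is illustrative; IOS takes the value in Mbps):

```
! IOS: raise the 100 Mbps default to match NX-OS
router ospf 1
 auto-cost reference-bandwidth 40000
! NX-OS: 40 Gbps is already the default, but it can be pinned explicitly
router ospf 1
  auto-cost reference-bandwidth 40 Gbps
```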

5) Proxy-arp is disabled by default. Our migration uncovered a few misconfigured hosts. Not a big deal, but it is noteworthy. Proxy-arp can be enabled per SVI, but do you really want to do it?
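Where a host truly depends on it, proxy-ARP can be re-enabled per SVI (VLAN number illustrative), though fixing the misconfigured host is usually the better answer:

```
interface Vlan100
  ip proxy-arp
```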

6) We ran into a WCCP bug in NX-OS 4.2(3). For some reason NX-OS is not load-balancing between our two WAN accelerators; whichever WAE is activated last becomes the sole WAE that receives packets. I have an active TAC case open on this issue. For now, we are running through a single WAN accelerator, which reduces the effectiveness of our WAN acceleration solution.

I hope this helps someone with their own migration. This is going to be a common occurrence in our industry for the next few years, especially if Cisco has their way. If anyone has questions, please send me an email or post a comment.


Ben Story said...

Don't worry Jeremy, you're not the only one still in love with VTP. At my current job, we upgraded from Nortel to Cisco a year ago and it was VTP that I missed the most when I was doing the Nortel gear.

Gustavo Rodrigues Ramos said...

Another caveat I've run into: when enabling vPC and SVI interfaces on a pair of N7Ks using HSRP as the first-hop redundancy protocol, you need to disable IP redirects on the SVI interfaces. Without doing so, you'll eventually see some packet loss when routing packets through the N7K boxes.
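A sketch of the SVI configuration Gustavo describes (addresses and group numbers are illustrative; assumes "feature hsrp" and "feature interface-vlan" are already enabled):

```
interface Vlan100
  no ip redirects
  ip address 10.1.100.2/24
  hsrp 100
    ip 10.1.100.1
```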

Again, congrats for the excellent post!


Jeremy Filliben said...


Thanks, I really thought I was alone on the VTP issue :)


That is a good point. Besides the difference in HSRP configuration (which I think is an improvement), there are some gotchas with vPC. The one that I struggled with the most was the 'vpc peer-gateway' command. Our NetApp storage boxes insist on replying back to the real MAC address of any sender, which breaks vPC.
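For anyone searching for it, the fix is a single knob under the vPC domain (domain number illustrative):

```
vpc domain 1
  peer-gateway
```

With peer-gateway enabled, either vPC peer will route frames addressed to its partner's real MAC, instead of forwarding them across the peer link.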

Gustavo Rodrigues Ramos said...

Jeremy, I got lucky on the peer-gateway: I saw it in the docs just before finishing up my configs. But I was stuck on the "no ip redirects" issue for some days. Recently, I noticed there's a feature request to disable IP redirects automatically when using vPC, peer-gateway and SVIs.

Unknown said...

Hi Jeremy,

I've also run into a few glitches, but this time from a L2 perspective. Static mac-address entries only work for unicast MAC addresses. I had a Checkpoint cluster that used a multicast MAC for a unicast IP default-gateway address, and NX-OS didn't allow me to enter it statically.
Also, static ARP entries are now configured at the interface level, instead of globally.
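A sketch of the static ARP difference (addresses and MACs are made up):

```
! IOS: global static ARP entry
arp 10.1.100.50 0000.5e00.0101 arpa
! NX-OS: the equivalent lives under the L3 interface
interface Vlan100
  ip arp 10.1.100.50 0000.5e00.0101
```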
My two cents. Keep up posting!

(another) Gustavo.

Anonymous said...

I'm getting familiar with Bridge Assurance too, as I'm going to make migration of 6500 to Nexus 7000 in Data Center too soon, but instead of disabling BA globally isn't configuring 'spanning-tree port type normal' on edge ports toward 6500 or any other device a solution?

Unknown said...

Excellent post Jeremy,

did you have any QoS related issues in migration?

Jeremy Filliben said...


That's a good point. I did not come across this suggestion during my troubleshooting. I'll update the blog post to reflect this information.


Nexus QoS is worthy of its own blog post :) In brief, yes, I did run into issues, mostly with the lack of DSCP marking capability on the Nexus 5000. I had to make some uncomfortable compromises in how I perform packet marking.

Thank you both for your comments!

Anonymous said...

Sigh. The more I play with my Nexus gear, and the more I hear other people talking about it, the more I miss IOS. These Nexus boxes are nifty in their own way, but they still don't seem quite finished.

routerworld said...

I need to migrate from 6509 IOS to N7k in our situation.
I am still learning. peer-gateway command indeed did the trick for my EMC Celerra.

I am trying to understand this behavior, though. Please correct me if I have this non-RFC-compliant behavior wrong:

When the NAS device gets an L2 frame with a non-local router MAC, it does not do a route lookup in response, so the return traffic goes across the vPC peer link (which is only allowed when the local vPC member port is down) and then tries to exit a vPC member port, where it gets dropped. I am a big fan of VTP too.

Thank you

Jeremy Filliben said...


I started to write a brief response to your question, but I couldn't find a suitable link to demonstrate the reason for vpc peer-gateway. So I'm writing a new blog post that should be up in 24 hours or so.


Unknown said...

Hi Jeremy, this is my first time commenting on your posts, but I am a heavy lurker and I really enjoy your blog. :-)

With that being said, I was wondering if you could elaborate more on your actual migration process. Were your vlans contained within the same access switches or spanning across multiple? Was your N7k pair already configured with vPC during the migration or did you handle that as a second phase? During a phased migration, you would have had a scenario where your N7Ks using vPC were passing vPC and non-vPC VLANs across the interconnect for some time.

Jeremy Filliben said...


Thanks for reading. The previous Catalyst environment consisted of about 50 Top of Row (TOR) switches, with a pair of Catalyst 6500s as a core. The migration was accomplished in three steps:

1) Configured and activated the Nexus 7010 switches and all Nexus 5K/2Ks. All VLANs were created on these switches and vPC was configured for the links to the Nexus 5000 switches. This step left us with two distinct networks. The Nexus network was not 'active', so no end hosts were connected to it.

2) vPC was configured for the links between the Nexus 7Ks and the Catalyst 6Ks. Physical cables were connected and the Nexus environment was 'activated' for all production VLANs. No SVIs were created on the Nexus side, and the STP root bridge priority was set to a high value to ensure the 6K remained the active root bridge.

3) The final step is the one described in the blog post. I migrated the root bridge and SVIs to the Nexus 7Ks. There was no requirement to move all SVIs at the same time (nor did I have to reset the root bridge for all VLANs at once), but in my environment it made more sense to do this in one step.
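The root-priority staging across steps 2 and 3 might look something like this (priorities and VLAN range are illustrative):

```
! Step 2 -- on the Nexus 7Ks, stay out of the root election
spanning-tree vlan 1-999 priority 61440
! Step 3 -- migrate the root to the 7K pair
spanning-tree vlan 1-999 priority 4096    ! primary 7K
spanning-tree vlan 1-999 priority 8192    ! secondary, on the peer 7K
```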

I hope this helps,

Unknown said...

Jeremy, thanks a lot.

If you can elaborate on step 2, I'd appreciate it. It sounds like the 7ks were inserted in parallel to the 6ks (I'm guessing there were L3 core uplinks on the 7ks, etc?) and each 7k was connected to each 6k using vPC? I have more commonly heard of the approach of single connecting the 7ks to the 6ks.

Jeremy Filliben said...


You are correct, each Nexus 7K has an L3 uplink to the routing core (which happens to be another vdc on the same physical 7K chassis).

Each 6K has four cables to each 7K. The eight cables coming out of an individual 6K are configured as a single port-channel. vPC on the 7K-side allows us to split the port-channel to two different physical boxes.

This gives us 8 Gbps of usable bandwidth from each 6K. The cross-connect between the two 6Ks is blocked by Spanning-tree (and we actually admin-down'd the interfaces to be extra careful).
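The interconnect might be sketched like this (interface and channel numbers are illustrative): the 6K sees one eight-link EtherChannel, while the 7K side splits it across the pair with vPC.

```
! Catalyst 6K: eight 1G links, one port-channel
interface range GigabitEthernet1/1 - 8
 channel-group 20 mode active
! Each Nexus 7K peer: four of those links, tied together by vPC 20
interface Ethernet1/1-4
  channel-group 20 mode active
interface port-channel20
  switchport mode trunk
  vpc 20
```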

Unknown said...

Very helpful post; I will be running into a similar situation in the near future. For me it's not Cisco: we are migrating from Nortel 8600s to a Nexus layer, currently using OSPF within the Nortel layer. My plan for this migration is:
A single L2 link between the 8600s and N7Ks; I don't want to test vPC with Nortel.
All of the ports, uplink and down to the servers, are on the same VLAN as the new Nexus 5000 L2 switches. Thus, the servers will be on the same subnet whether they are connected to the existing switch or to the new fabric.

Both cores will be interconnected on 10Gig interfaces and will be running independent ospf.

We are running different L3 services, including internet, intranet, WAN, load balancers with mix of different vendor products like juniper, Nortel and Cisco.

We will define floating static routes on old core for all L3 services.

Once the required service is unplugged from the existing core and plugged into the new Nexus core, layer 3 connectivity for that particular service will be lost within the existing core and the floating static route will send the traffic to the new Nexus layer.

Suggestions are welcome...

Jeremy Filliben said...


Without knowing all the details I can't be certain it'll work, but that sounds like a good plan to me. You want to be careful about adding too much redundancy between the cores... Spanning-tree will block a lot of it anyway. You also want to carefully plan out your SPT root and backup root during the migration. Do you intend to move it per-vlan, or for the entire DC? The latter is better for performance, but requires a bit more juggling.

Good luck!

Unknown said...

thanks for your comment,

Is it possible for me to get any Low level document of similar nature? ;)

Jeremy Filliben said...


I'm not sure what I could provide to be more helpful. If you are building sufficiently large connections between the new and old networks, you shouldn't have an issue with performance. Therefore, I would recommend moving all the SPT roots at one time.


Anonymous said...

Hi Jeremy.

Interesting article. thanks.
Just an update on the number of FEXs supported on the 5010/5020: both the 5010 and the 5020 support up to 12 FEXs. The big 5500 currently supports up to 16, but there is word that this will increase at the end of 2011.

Keep up the good work
< ruhann >

2501 said...

Hello Jeremy - Thank you for the great blog posts. I am starting on a 6K to 7K/5K/2K migration scenario and am curious about the wccp redirect bug you mentioned. I searched the Bug Toolkit and couldn't find a reference to anything similar to what you mentioned. Can you post the specific BugID or other details? (found/fixed in versions, severity, etc) Also, if you have any other "post implementation" thoughts, that would be beneficial. Thanks!

drkc said...

Can the catalyst 4900M support bridge assurance with SXI IOS? Directed to Jeremy.

Unknown said...


This time I am involved in a migration from Cisco gear to H3C, which means going from PVST+ to MST. Also, the old DC and the new DC are not in the same physical location, so we can't just unplug the cables from the Cisco gear and plug them into the H3C.

Any ideas on how spanning tree will behave and how the L2 & L3 services can be migrated?

Jeremy Filliben said...


That is a very different project than the one I have described. If it is possible to re-address the servers, that would be my first recommendation. Then you can avoid the difficulties of SPT over the WAN.

If this is not possible, you will need to extend L2 over your WAN network. Point-to-point Ethernet circuits would be easiest, as you can then configure them as 802.1q trunks and run traditional spanning-tree. If you must go over a L3 backbone (MPLS, etc) you should consider technologies such as L2TPv3 or (if using Nexus 7k) OTV.

This sort of project can be quite simple in the best case, or maddeningly complicated in the worst. If you have any specific questions, please feel free to email me.