In my previous blog entry, I explained the role Performance Routing was expected to play in my network. This time around, I'd like to show the steps we've taken to prepare for its deployment.
My network core consists of three major locations. All three sites house significant user populations, while two of the locations also host our major North American data centers. The locations are interconnected with OC-3 and Ethernet technologies, and each site is connected to one of our two MPLS providers. The previous iteration of our network design used EIGRP to route between the three core locations, with filtering and summarization to prevent an unnecessarily large routing table, especially at the edges.
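To give a flavor of what that filtering and summarization looked like, here's a minimal sketch in IOS syntax. The EIGRP AS number, interface, and prefixes are hypothetical placeholders, not our actual addressing:

```
! Advertise a single summary toward the edge instead of every subnet
interface Serial0/0
 ip summary-address eigrp 100 10.20.0.0 255.255.0.0

! Filter outbound advertisements so only the summary range leaks out
ip prefix-list CORE-SUMMARY seq 10 permit 10.20.0.0/16

router eigrp 100
 distribute-list prefix CORE-SUMMARY out Serial0/0
```

The interface-level summary suppresses the component routes automatically, while the distribute-list acts as a belt-and-suspenders guard against anything else escaping.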
Our MPLS-based WAN uses BGP. With careful route injection at the core locations, and a small amount of AS-path prepending for the default route, we were able to achieve a rudimentary but effective load-balancing scheme. More effective load balancing could have been achieved by sending the same routing table through both providers and using "bgp bestpath as-path multipath-relax", but we had two significant limiting factors that made our providers less than equal. First, only one of our providers could route our multicast traffic. Second, the other provider had an onerous Quality of Service charge, so we treated their network as a best-effort path.
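For readers who haven't used these knobs, here is a rough sketch of both techniques in IOS syntax. All AS numbers and neighbor addresses below are hypothetical:

```
! Technique we used: prepend our own AS to the default route
! advertised toward one provider, making that path less preferred
ip prefix-list DEFAULT-ONLY seq 10 permit 0.0.0.0/0

route-map PREPEND-DEFAULT permit 10
 match ip address prefix-list DEFAULT-ONLY
 set as-path prepend 65010 65010
route-map PREPEND-DEFAULT permit 20

router bgp 65010
 neighbor 192.0.2.1 remote-as 64601
 neighbor 192.0.2.1 route-map PREPEND-DEFAULT out

! Technique we passed on: relax the best-path rule that requires
! identical AS paths, so equally long paths through different
! provider ASes can be installed side by side
router bgp 65010
 bgp bestpath as-path multipath-relax
 maximum-paths 2
```

With multipath-relax, routes of equal AS-path length learned from both providers would have been installed simultaneously, which is why it only makes sense when the two providers can be treated as equals.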
Here's a diagram of the previous network design:
"MPLS B" supported QoS and multicast, so we connected our "Core B" location to it, as that site housed the majority of our voice and video equipment.
In my view, one of the most intriguing features of Performance Routing is the ability to inject new routes into a routing domain based on network performance. This can be done via BGP or static routes. The alternative to route injection-based path manipulation is policy-based routing (PBR), which makes me uncomfortable from a troubleshooting perspective. I can relatively easily track changes to my routing table, but how will I recreate a PBR-based network if I need to document a transient issue? Static-based route injection has promise, as these routes can be redistributed into any IGP, but for us, BGP made the most sense.
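To make the BGP-based route-control approach concrete, here is a bare-bones PfR sketch: one master controller and one border router, with the master allowed to modify routing. The addresses, interface names, and key-chain name are hypothetical, and a real deployment needs learn/policy configuration beyond this fragment:

```
! On the master controller
key chain PFR-KEY
 key 1
  key-string pfr-secret

pfr master
 mode route control
 border 10.1.1.1 key-chain PFR-KEY
  interface GigabitEthernet0/1 external
  interface GigabitEthernet0/0 internal

! On the border router
pfr border
 master 10.1.1.2 key-chain PFR-KEY
 local Loopback0
```

With "mode route control", the master can instruct the border to inject BGP (or static) routes to steer traffic, rather than just reporting what it would do.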
As we considered how best to implement PfR, I kept coming back to the design decision to assign individual AS numbers to our core locations. When PfR injects a BGP route, it sets the 'no-export' community, which prevents that route from being advertised to an eBGP peer. The right choice during our initial MPLS implementation was now a stumbling block in our potential PfR deployment. It was also clear that we needed matching QoS policies with our MPLS providers. For this reason, and several others, we chose to migrate to a new provider for "MPLS A".
Transitioning to a Real Core
We decided to take this opportunity to create an independent Core AS. While site-level BGP AS assignments worked well for our remote locations, the core is a special case. By carving out a small network consisting of core WAN devices and interconnect circuits, we have compartmentalized our core attachments. The following diagram provides a view of the new topology. As before, redundant links have been omitted for clarity.
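The payoff for PfR is that the core WAN routers now peer with each other over iBGP inside the single Core AS, so a PfR-injected route carrying the no-export community can still reach every core site; it is only withheld at the eBGP edge facing the providers. A rough sketch, with hypothetical AS numbers and addresses:

```
router bgp 65000
 ! iBGP to the other core WAN routers (same AS) --
 ! routes tagged no-export ARE advertised here
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.3 remote-as 65000
 ! eBGP to the MPLS A provider --
 ! routes tagged no-export are NOT advertised here
 neighbor 192.0.2.1 remote-as 64601
```

Under the old per-site AS design, the core-to-core sessions would also have been eBGP, and the no-export community would have stopped injected routes at the first site boundary.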
In the previous topology, AS-path length dictated that packets exiting a core location would enter the nearest MPLS network. In the new design, packets are free to enter and exit the core at the most appropriate point. Overlaying BGP-based PfR on this topology is relatively straightforward, and it is nearly the next step in the project: we first have to deal with the placement of our Cisco WAAS WAN accelerators, which will be the topic of a later blog entry. Once WAN accelerator placement is settled, we'll be ready to start our PfR pilot.