
Welcome everybody to CCNP Enterprise Infrastructure. In this section we're going to be talking about campus network architecture. My name is Brian McGahan; I'm a Cisco Certified Design Expert as well as a CCIE in Routing & Switching, Security, Service Provider, and Data Center. For any questions that you have about this course, feel free to email me at bmcgahan@ine.com, follow me on Twitter, or find me on LinkedIn.

Now in this section we're mainly going to be focusing on the hierarchical LAN design models. Building hierarchy into the LAN divides the network architecture into modular layers, where each layer implements a specific function. The goal of this is to make the network easier to scale, to provide fault isolation, and to ease troubleshooting as things go wrong in the network. Specifically, the hierarchical LAN design model breaks the network into three building blocks, which we call the access layer, the distribution layer, and the core layer. The access layer is where our endpoints attach, like PCs, IP phones, or IP cameras. The distribution layer is the aggregation point for the access layer, and it's the services boundary between the access layer and the core. The core layer provides connectivity between the distribution blocks and high-speed switching of the traffic between them.

So here we have a visual representation of what the three-tier campus design looks like, where the access layer is multihomed up to the distribution layer; the idea being that if one of the distribution switches fails, we still have an alternate link to forward traffic up towards the distribution block. Likewise we have multihomed connectivity from the distribution up to the core block, so in case one of the core switches or the distribution switches fails, we have redundancy in terms of the nodes, and we also have redundancy in terms of the links.
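To make the access layer a little more concrete, here's a rough sketch of what an access-layer port facing a PC and an IP phone might look like on a Cisco switch. The interface name and VLAN numbers are my own illustrative assumptions, not values from the course:

```
! Hypothetical access-layer edge port: data VLAN for the PC, voice VLAN for the phone
interface GigabitEthernet1/0/10
 description Access port - PC and IP phone
 switchport mode access
 ! VLAN 10 for data and VLAN 110 for voice are assumed example numbers
 switchport access vlan 10
 switchport voice vlan 110
 ! edge port optimization, since only endpoints attach here
 spanning-tree portfast
```

Ports like this are the "network edge" the hierarchy talks about: endpoints only, never switch-to-switch links.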
Now in terms of scaling the network with blocks, the number of layers that we actually use depends on the network deployment characteristics. A single building, for example, might only require distribution and access layers, which would be called a two-tier design, or in other words a collapsed core. But as the network begins to grow, the core layer is used to scale out the network, where "scale out" means adding more boxes horizontally, as opposed to "scale up," which means adding more powerful boxes to scale an individual box vertically.

Now, the first of these layers, the access layer, is also known as the network edge. As I mentioned, this is where our end devices attach: for example PCs, phones, wireless access points, printers, IP cameras; any device running the IP protocol is going to attach at the access layer. This provides high-bandwidth access to both wired devices and wireless access points, and it's also our first stop, or first hop I should say, for quality of service trust points. The access layer can generally be logically segmented using VLANs, where we have different logical networks for performance, management, and security, and communication between the access blocks occurs through the distribution blocks. So from a physical point of view, the access switches are connected north-south to the distribution, and the access switches are not connected east-west to each other. If we go back to our diagram, we notice that the access layer does not have a connection between the two access switches; we would have to go up through the distribution block in order to go between two PCs that are connected to two different access switches.

Next we have the distribution layer, which is also known as the aggregation block. This is going to aggregate our access switches in the building or the campus, which does a couple of things for us. First and foremost,
it's going to reduce the number of cable runs, and we're not going to have to daisy-chain the access switches together. Under most normal circumstances, the distribution block is also the layer 2 to layer 3 boundary, where we're running layer 2 spanning tree down to the access layer and layer 3 routing up to the core layer. This is also the logical place where we would do IP summarization: we would take the IP subnets of the access layer and summarize those as we're advertising them northbound, or upstream, to the core.

The distribution switches should be deployed in pairs for redundancy, so again we have not only link redundancy in terms of how we're physically interconnecting the switches, but also node redundancy, so that if we lose an individual switch we're able to keep forwarding traffic around that failed node. This is also where we would see the feature called Stateful Switchover, or SSO, in chassis-based solutions, where we have multiple supervisors, or multiple CPUs, running the chassis-based switch; if one of the CPUs dies, we're able to switch over to the other one in a stateful manner, meaning that we're able to continue forwarding traffic without any loss in the data plane. The distribution switches themselves will interconnect to each other with either layer 2 links or layer 3 links, depending on our particular design.

Last but not least, we have the core layer. The core layer is the backbone, and it's the aggregation point for multiple networks. It provides high-speed layer 3 connectivity between the distribution blocks and the WAN, where the WAN is where we would have potentially private access to multiple remote sites, or where we're connecting to cloud providers like Amazon AWS, versus the Internet, where we're just doing regular Internet connectivity, or where our VPN users are coming in from our remote sites, whether these be remote offices or
end-users at their homes that are connecting over VPN. We also have the data center connected here at the core layer, so any e-commerce transactions that we're doing, or any type of big data, is going to be maintained in the data center. And then the network services layer is also where we're doing things like authentication, and other services like load balancing would happen in the network services layer.

The other goal of this is to reduce the cabling needs from a full mesh of connectivity, where a full mesh requires n x (n - 1) / 2 links to directly interconnect n distribution blocks, down to just n links for each of the redundant core switches. For example, fully meshing 6 distribution blocks would take 6 x 5 / 2 = 15 links, while each core switch needs only 6 links to reach every block.

In terms of different network design options, multiple options are available for building the campus network depending on the scale. We could have a two-tier design, where we're collapsing the core and the distribution together into a single layer, or we could have the three-tier design, where we have the core, distribution, and access. In either of these solutions we could be running layer 2 down to the access layer, where we're running spanning tree from the access layer up to the distribution, or we could be doing layer 3 access, where we're doing layer 3 routing from the access layer to the distribution layer. We'll also talk about the simplified campus design, where we're using multiple chassis that are clustered together, like in the Virtual Switching System or in StackWise, which simplifies the overall design in terms of spanning tree and in terms of the layer 2 connectivity.

The first of these, the two-tier design or collapsed core, might be useful for smaller networks that don't require a core layer for interconnectivity. This is a cost savings, in that we don't need additional devices in the core layer in order to connect the devices in the network. So the two-tier design is going to use just an access layer
and a distribution layer, and in this case the distribution layer is also acting as the core: we're connecting from the distribution southbound to the access layer switches, and we're connecting northbound to our services like the WAN, the server farm, the Internet, and our network services.

As the network starts to grow, though, larger networks are going to require a core network for interconnectivity between the distribution blocks. This might be a function of the cabling requirements between buildings, where it might be administratively prohibitive to run the amount of fiber that we would need to connect the distribution blocks directly together, or it might be a function of the northbound connectivity requirements, meaning the WAN or the Internet; as we start to overload the individual distribution block, we might need to separate these out into the core block. The idea is that the core layer is comprised of high-speed switches running layer 3 routing, and the core switches are going to connect to pairs of distribution switches using a layer 3 protocol such as OSPF or EIGRP. In any case, we're always going to be deploying the core layer and the distribution layer in pairs, so that if one of the switches goes down, we're able to continue forwarding our traffic in the data plane around the failure that has been introduced into the network.

Now, in the traditional network design we use a layer 2 access layer and a layer 3 distribution layer, where the distribution layer is going to be hosting the default gateways for our end stations. The potential problem with this is that loops in the layer 2 network are going to cause spanning tree protocol to block links, and redundant uplinks from the access layer to the distribution layer are going to remain unutilized. The ideal recommendation would be to limit VLANs to an individual access switch, so that loops could be removed by using layer 3 connectivity between the distribution switches
instead of layer 2, but this might not be possible if VLAN services need to span between multiple access switches.

With layer 2 access, we're typically running what's called a first hop redundancy protocol on the distribution switches, because these are going to be the default gateways for our end hosts. So a first hop redundancy protocol should be run between the distribution switches to provide default gateway redundancy, and we have one of three protocols we could use: the Hot Standby Router Protocol, or HSRP; the Virtual Router Redundancy Protocol, or VRRP; or the Gateway Load Balancing Protocol, or GLBP.

The first of these, HSRP, provides active/standby redundancy for the default gateway, where the switches agree on a virtual IP address and a virtual MAC address for the default gateway. The end stations then send an Address Resolution Protocol request, or ARP request, for the default gateway, and they're returned the virtual MAC address of the HSRP group. Now, the potential limitation of this is that only one of the distribution switches is going to be the active forwarder: one switch is going to be the active gateway, one is going to be the standby gateway, and the result is that the uplinks from the access layer to the distribution layer might be underutilized. Technically we could do load balancing by running multiple copies of HSRP; for example, we could use one HSRP group for the even VLANs and another HSRP group for the odd-numbered VLANs, but unfortunately the actual traffic load is generally not going to be split 50/50 between the individual VLANs, so we're still going to end up with underutilized links going from the access layer to the distribution layer. A second solution for this is to run the open-standard equivalent of HSRP, which is called the Virtual Router Redundancy Protocol, or VRRP. As I mentioned, it
is an open standard, defined in RFC 5798, Virtual Router Redundancy Protocol version 3. In this case the switches likewise agree on a virtual IP address and a virtual MAC address for the default gateway, and the end stations ARP for that default gateway and are returned the virtual MAC address. Still, the limitation we run into is that only one of the distribution switches is going to be the active forwarder. Specifically, we call one of the switches the master gateway and the other one the backup gateway, but the result is that the redundant uplinks from the access layer to the distribution layer might be underutilized, because we only have one active forwarder; we run this in what we call an active/standby type of design. Likewise, we could potentially do load balancing with VRRP by running multiple copies of the protocol, where we have one default gateway for the even VLANs and another default gateway for the odd VLANs, but again, unfortunately, in most real-world solutions the split of traffic is not going to be a 50/50 share between the VLANs and the services running on those particular virtual LANs.

A third potential solution is to run the Gateway Load Balancing Protocol, or GLBP, which does allow for active/active forwarding from both distribution switches at the same time. The idea here is that the switches agree on a single virtual IP address, but there are multiple virtual MAC addresses that represent the default gateway. The end stations ARP for the default gateway, and they're returned one of the virtual MAC addresses, each of which represents one of the distribution switches. So load balancing occurs because some end stations point to one MAC address, which relates to one of the distribution switches, and others point to a second MAC address, which points to the second distribution switch. It's still not the
greatest of solutions, because again we don't have a 50/50 split in the load between the individual hosts' traffic, and we're not load balancing per flow; we're only load balancing per host.

Now, in an alternate design called layer 3 access, we're moving the routed edge southbound, from the distribution layer to the access layer. In the layer 2 design, we would be running layer 2 up to the distribution, so layer 2 goes southbound from the distribution switches to the access layer, and then we run layer 3 routing as we go upstream from the distribution layer to the core layer. In the routed access model, we're pushing the layer 3 boundary down all the way to the access layer: we run layer 2 switching southbound to the access devices, layer 3 northbound as we go upstream towards the distribution switches, and then likewise layer 3 routing as we go towards the core. The difference here is that these interconnecting links are now running layer 3 IP routing as opposed to layer 2 switching, which means they're not subject to spanning tree protocol. This is the main advantage of layer 3 access: spanning tree protocol is limited to the access layer itself. So the access-to-distribution layer 2 trunks are replaced with layer 3 routed links, and the access layer now needs to participate in the routing protocols, so either OSPF or EIGRP, for example, are typical design options that we would use in our campus environment. This also means that the default gateway moves down one layer: the access layer switches become the default gateways for the end hosts, which removes the need for the first hop redundancy protocols, because the end stations are typically only
single-homed to an access switch. If we had multiple connections from the end host to the access switch, that would be a different story, and we would run first hop redundancy protocols at the access layer, but in the vast majority of cases we only have a single connection going from the end device to the access switch. Also, as I mentioned, the layer 3 access design removes the need for spanning tree protocol, so there are no longer any blocked links going from the access layer to the distribution layer, and load balancing now occurs based on layer 3 routing. The big advantage of this is that load balancing can now occur per flow; an individual transaction like web browsing would have multiple flows as we're doing HTTP GETs to the server, and we could be load balancing those between the multiple links going from the access layer up to the distribution layer. So we're increasing the uplink bandwidth utilization, and we're not doing active/standby forwarding like we would be doing with spanning tree. This also makes the network easier to troubleshoot, now that spanning tree is removed, and it gives us faster convergence, because convergence is now a function of either OSPF or EIGRP, whatever our routed protocol is, and we can tune them for sub-second convergence, assuming we have the network configured properly. The potential limitation, though, of the layer 3 access design is that we cannot span VLANs between multiple access switches, because we're not running layer 2 trunk links from the access layer up to the distribution layer.

Now, the best of both worlds would be if we were using what would be considered a simplified campus design, where we're using switch clustering techniques, such as the Virtual Switching System, or VSS, or StackWise at the access layer, where multiple switches are going to act as one.
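As a rough sketch of the routed access idea, an access switch uplink might look something like this. The interface names, addressing, and OSPF process and area numbers are my own illustrative assumptions, not values from the course:

```
! Hypothetical routed uplink from an access switch to one distribution switch
interface GigabitEthernet1/0/49
 description Uplink to distribution switch 1
 ! layer 3 routed port instead of a layer 2 trunk; no spanning tree on this link
 no switchport
 ip address 10.1.1.1 255.255.255.252
!
! SVI acting as the default gateway for the locally attached access VLAN
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
!
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
```

With the second uplink configured the same way toward the other distribution switch, equal-cost layer 3 routes keep both links forwarding, giving the per-flow load balancing described above.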
This is going to remove the requirement for our first hop redundancy protocols, or FHRPs; we don't need HSRP, VRRP, or GLBP when we're running the Virtual Switching System, because we have one logical IP address and one logical MAC address representing the multiple physical chassis that exist in the physical topology. It's also going to reduce the need for spanning tree, since the access layer uses port channels, or EtherChannels, to connect to the distribution layer, where spanning tree would still be used as a fallback in the case of system failures.

So here we have an example of the physical connectivity versus the logical connectivity of the VSS. The VSS consists of a pair of distribution switches that are multihomed down to the access layer, and at the access layer we're running a port channel, or EtherChannel, going northbound to the VSS. From a logical point of view, the southbound access switch thinks that the upstream device is just one physical switch, which means that if we lose an individual link, or we lose an entire node because the whole switch goes down, convergence is based on the port channel load balancing mechanism: we just remove a member interface from the port channel, and then we're able to continue forwarding our traffic northbound up to the distribution layer.

So the simplified campus design increases our uplink bandwidth utilization: traffic is load balanced per flow from the access layer to the distribution layer across the port channel, based on the port channel load balancing mechanism. It also reduces convergence time, as port channels offer sub-second reconvergence on the failure of either a member interface or a node in the VSS. Another advantage of the simplified campus design is that we can distribute
our VLANs between multiple access switches without blocked links, because the port channels are layer 2 interfaces going from the access layer up to the distribution layer. And last but not least, this is also going to simplify our management, because the cluster of switches is managed as if it were one single logical device.
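To illustrate, the access-switch uplinks toward a VSS pair might be bundled roughly like this. The interface names, channel-group number, and VLAN list are assumptions for illustration only:

```
! Hypothetical access-switch uplinks bundled into a single layer 2 port channel;
! one physical link goes to each VSS member, but logically it's one upstream switch
interface range GigabitEthernet1/0/49 - 50
 description Uplinks to VSS distribution pair
 switchport mode trunk
 switchport trunk allowed vlan 10,20,110
 ! bundle using LACP negotiation
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,110
```

Because the VSS presents both chassis as one switch, this multichassis EtherChannel keeps both uplinks forwarding, with no spanning-tree blocked ports, while still spanning VLANs across access switches.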
