This was a miserable week at work. Lots of production issues, lots of meetings to review status of production issues, etc. Every once in a while, I’d get time to actually WORK on the issues, so that we can put them behind us. ;) But it was long days, long phone calls, come home with a brain like tapioca, try to get enough sleep (generally failing), and then struggle out of bed the next day to try again.
I’ve been hacking on the rack for about 8 hours today, and was able to complete scenario 6, except for the RSVP QoS, which I need to read up on. These scenarios are getting harder, I think. Or maybe they just have more dependencies. These last 2 scenarios (5 & 6) have had a lot of little gotchas that would hose much of the rest of the lab if you weren’t paying attention. And just a TON of detail. Frankly, it’s been frustrating. It’s hard to see ahead through all of the issues, and then choose the one solution that will work just right and meet all of the requirements. With each scenario I get a little smarter, and certain ways of thinking about a problem come easier. But getting my speed up to where it will need to be to actually pass the lab is still intimidating.
Right now, I’m just under 25% complete with the NMC practice labs. I have learned a ton of stuff. I wish that some of the material was recycled more, to reinforce some concepts. To be fair, core routing and redistribution issues are a re-hash, lab after lab, as are most of the switching issues. So, I’m getting speedier with what needs to be done to meet esoteric requirements for RIP, OSPF, EIGRP and BGP. I’m still lame at redistribution, in this sense: I need to be able to look at the redistribution requirements, and know at a glance where I’m going to have to tweak distance and control inbound or outbound routes. I don’t have that down yet. I definitely understand the administrative distances and all that, no problem. I can rattle them all off the top of my head. But if you throw down a diagram with all the IGP domains and then mark the mutual redistribution routers, I can’t immediately say, “Oh, well those RIP (120) routes are going to be redistributed into EIGRP (170), and then EIGRP is going to pop them into OSPF (110). So on this OSPF (110) router that’s also running RIP (120), we’ll need to tweak distance to make sure that RIP native routes stay RIP and don’t converge towards the OSPF cloud.” I’m aware of those sorts of issues, but thinking through 8 or 10 or 12 potential issues like that at once is still hard for me.
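To make that RIP/OSPF example concrete, here’s a minimal sketch of the kind of distance tweak I’m talking about. This is a hypothetical config (process numbers and the specific AD value are made up), not from the lab itself:

```
! Hypothetical router running both OSPF and RIP after mutual redistribution.
! Native RIP routes (AD 120) would lose to the same prefixes coming back
! around as OSPF externals (AD 110). Raising the AD of OSPF EXTERNAL routes
! above 120 keeps native RIP routes preferred via RIP, without touching
! legitimate intra/inter-area OSPF routes.
router ospf 1
 distance ospf external 171
```

The nice part of bumping only the external distance is that real OSPF-native routes keep their normal AD of 110; only the redistributed (looped-back) copies are demoted.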
This lab had me going out of my mind trying to get a particular BGP route distributed throughout the network as required, with synchronization on everywhere except for one router (you choose which one). Oh, man…that was killing me. They had me redistributing a prefix that was already redistributed from OSPF. But then they wanted the route to show up as learned via BGP on certain routers, as D EX on other routers, etc. Depending on the router, the route may have been learned via eBGP (AD 20), or iBGP (AD 200), meaning that it was almost case-by-case to think of what you needed to tweak on each router to get the prefix to show up in each routing table as required.
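The two knobs involved look roughly like this. A hypothetical sketch only – the AS number, prefix, and ACL name are made up, and which router gets which tweak depends entirely on how the scenario wants each routing table to look:

```
! On the one router where synchronization is turned off:
router bgp 65001
 no synchronization

! On a router where the prefix must show up as a BGP route instead of the
! IGP-learned copy, one option is to lower the iBGP distance below the IGP's
! (e.g. below OSPF's 110) for just that prefix. The ACL matches the routes
! the new distance applies to; 0.0.0.0 255.255.255.255 matches any neighbor.
router bgp 65001
 distance 109 0.0.0.0 255.255.255.255 PREFIX10
!
ip access-list standard PREFIX10
 permit 192.168.100.0
```

That per-prefix `distance` statement is what makes it so case-by-case: you’re deciding, router by router, which protocol “wins” for a single route.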
The multicast scenario was simple, as far as the multicast part went – dense mode, make sure everyone joins and responds to the ping. Easy enough. But the kicker was this screwball requirement to evenly load balance the multicast traffic to one of the routers, where there was an equal-cost path to get to that router from the multicast source. I didn’t have a clue how to do this. The solution was to create a GRE tunnel, add a static mroute so that the traffic would come via the tunnel, and then do CEF per-packet load sharing so that the GRE packets (with the multicast inside) would be evenly distributed across the equal-cost paths. Wow. I NEVER would have conjured that one up on my own, but it made total sense once I saw it. That’s just creative right there…
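For my own reference, the three pieces of that solution sketch out something like this. All addresses and interface names here are made up, and this assumes `ip multicast-routing` and PIM are already up:

```
! GRE tunnel between the two routers; the multicast rides inside it as
! ordinary unicast GRE packets.
interface Tunnel0
 ip address 10.99.99.1 255.255.255.252
 ip pim dense-mode
 tunnel source Loopback0
 tunnel destination 10.2.2.2
!
! Static mroute: point the RPF check for the multicast source at the
! tunnel, so the multicast traffic is accepted via Tunnel0.
ip mroute 10.1.1.0 255.255.255.0 Tunnel0
!
! CEF per-packet load sharing on the two equal-cost physical links, so the
! unicast GRE packets get split evenly across both paths.
interface Serial0/0
 ip load-sharing per-packet
interface Serial0/1
 ip load-sharing per-packet
```

The trick is that CEF can’t per-packet load-share the multicast itself, but once it’s encapsulated in GRE it’s just a unicast flow, and per-packet sharing applies.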
There was some IPv6 tunneling over IPv4, which I have a feeling is going to show up on the lab. It seems like a no-brainer to me – since most of us are running IPv4 networks, and will be working on IPv6 at some point in the future, doesn’t it make sense that the CCIE lab would cover IPv4 to IPv6 dual-stack issues? Sure does…and IPv6 over IPv4 tunneling is a major component of that. Thankfully, configuring IPv6-in-IPv4 tunnels is generally easy. BUT – there are several different ways to perform this task. Understanding all of the IPv6 over IPv4 tunnel methods, and knowing when each method is appropriate, is not so easy. This time around, I needed to use IPv6IP (easy, makes sense, like every tunnel you’ve ever built in your life), and also an ISATAP tunnel. I built the ISATAP tunnel, following the instructions like a good little puppy. But I still have no idea what was going on there. The endpoints were autodiscovering each other, they were assigning themselves IPv6 addresses…it was out of control. Again, I felt like a monkey looking at a helicopter. So I definitely need to go back through ISATAP tunneling again.
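Side by side, the two tunnel types look something like this (addresses and interface names are hypothetical):

```
! Manually configured IPv6-in-IPv4 tunnel: fixed endpoints, fixed address.
! This is the "like every tunnel you've ever built" flavor.
interface Tunnel0
 ipv6 address 2001:DB8:1::1/64
 tunnel source 192.0.2.1
 tunnel destination 192.0.2.2
 tunnel mode ipv6ip

! ISATAP tunnel: no fixed destination. The host portion of the IPv6 address
! is built from the IPv4 tunnel source (the ::5EFE:a.b.c.d modified EUI-64
! format), which is why the endpoints seem to "autodiscover" each other and
! assign themselves addresses.
interface Tunnel1
 tunnel source Ethernet0/0
 tunnel mode ipv6ip isatap
 ipv6 address 2001:DB8:2::/64 eui-64
 ! ISATAP tunnels suppress router advertisements by default; re-enable them
 ! so clients can autoconfigure (older IOS uses "no ipv6 nd suppress-ra").
 no ipv6 nd ra suppress
```

Seeing that the ISATAP addresses are just the prefix plus `5EFE` plus the IPv4 source address embedded at the end takes some of the magic out of it.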
They tossed in a little bit of Multiple Spanning Tree (MST), where you can map multiple VLANs to a specific instance of STP. I didn’t get that working exactly like the answer key said I should have: there was one port that remained a “boundary” port (at the edge of the MST region) when supposedly it should have come up as an “internal” port. Part of the reason may be that I didn’t get the dot1q tunneling working quite right either, even after diligently reviewing all of the code and making sure I matched up with the answer key. I missed something. I ended up bridging 3 VLANs, when I was only supposed to have been bridging 2. I blew a LOT of time trying to figure that out, because it seemed like it should have been so straightforward. I just missed something somewhere, and time was marching on.
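The basic MST mapping itself is only a few lines. A hypothetical sketch (region name, revision, and VLANs made up) – the relevant gotcha for the “boundary port” symptom is in the comment:

```
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 1
 instance 1 vlan 10,20
 instance 2 vlan 30,40
! Every switch in the region must agree EXACTLY on the name, revision
! number, and VLAN-to-instance mapping. Any mismatch makes the neighbor
! look like it's outside the region, and the connecting port stays a
! "boundary" port instead of coming up as "internal".
```

So a port stuck at “boundary” when it should be “internal” is often just a digest mismatch between the two switches’ MST configurations.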
I don’t think I’ve mentioned it in any posts yet, but some of the scenarios have included NAT. Today was simple enough. NAT address X to address Y when it flows through these points on the router. No problem. NAT can be confusing, but we do a lot of it in my world, so I’m pretty comfortable with it.
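For the “address X to address Y through these points” case, it’s just a static translation plus marking the inside and outside interfaces. Addresses and interfaces here are made up for illustration:

```
! Mark which interfaces are the "points" the traffic flows through.
interface FastEthernet0/0
 ip nat inside
interface Serial0/0
 ip nat outside
!
! One-to-one static translation: inside local 10.1.1.5 appears as
! inside global 192.0.2.5 on the outside.
ip nat inside source static 10.1.1.5 192.0.2.5
```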
So, a long day, and a lot done. I’ll pick up with scenario 7 on Monday, and hopefully I’ll have caught up on my sleep in the meantime.