Tag Archives: ACI

Watch out for Docker hosts

I had a perplexing issue with endpoint learning.  I traced the MAC address to a VM that was running Docker.

Interestingly enough, the IP address I ran the show endpoint command for does not exist in the fabric.  I've masked the IP addresses so they are not the actual IPs, but you'll see the results.

Leaf_105# show endpoint ip 10.299.66.16
Legend:
 O - peer-attached    H - vtep            a - locally-aged    S - static
 V - vpc-attached     p - peer-aged       L - local           M - span
 s - static-arp       B - bounce
+--------------------------------+-----------+------------------+-----------+-----------+
      VLAN/Domain                  Encap       MAC Address        MAC Info/   Interface
                                   VLAN        IP Address         IP Info
+--------------------------------+-----------+------------------+-----------+-----------+
 105                               vlan-1615   0050.56bf.30d7     LV          po7
 common:CM_Primary_PN              vlan-1615   10.299.38.20       LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.221.37     LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.221.38     LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.49.19      LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.112.19      LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.88.40       LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.88.33       LV          po7
 common:CM_Primary_PN              vlan-1615   10.299.38.24       LV          po7
 common:CM_Primary_PN              vlan-1615   10.299.66.110      LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.213.70     LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.223.71     LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.213.96     LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.156.71      LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.88.20       LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.88.35       LV          po7
 common:CM_Primary_PN              vlan-1615   172.299.222.116    LV          po7
 common:CM_Primary_PN              vlan-1615   10.400.120.116     LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.112.32      LV          po7
 common:CM_Primary_PN              vlan-1615   10.400.120.42      LV          po7
 common:CM_Primary_PN              vlan-1615   10.300.9.163.106

<80 more lines of the same stuff>

The solution was to check the "Enforce Subnet Check for IP Learning" check box on the bridge domain's L3 Configurations tab.

[Screenshot: BD-Setting]

You can read up on Docker networking fun-ness here: https://docs.docker.com/v1.6/articles/networking/

This does not occur in "traditional" networks because ACI moves endpoint learning into the hardware, and the fabric learns IPs in many different ways.
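What the subnet check is doing can be sketched offline in a few lines: compare each IP learned against a MAC/interface with the bridge domain's configured subnets, and anything outside them (like Docker's default 172.17.0.0/16 bridge addresses) gets flagged. The subnets and IPs below are hypothetical examples, not the masked ones from the output above.

```python
import ipaddress

# Subnets configured on the bridge domain (hypothetical examples).
bd_subnets = [ipaddress.ip_network("10.10.38.0/24"),
              ipaddress.ip_network("10.10.66.0/24")]

# IPs learned against a single MAC/interface, as in the endpoint table above.
# 172.17.0.0/16 is Docker's default bridge network.
learned = ["10.10.38.20", "10.10.66.110", "172.17.0.2", "172.17.0.3"]

def rogue_ips(ips, subnets):
    """Return learned IPs outside every BD subnet; with 'Enforce Subnet
    Check for IP Learning' enabled, the fabric would not learn these."""
    return [ip for ip in ips
            if not any(ipaddress.ip_address(ip) in net for net in subnets)]

print(rogue_ips(learned, bd_subnets))  # -> ['172.17.0.2', '172.17.0.3']
```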

ACI hell part 1

Connecting access ports via static paths within an EPG that also has trunked ports: what a pain.

So basically, if you have a static path binding using 802.1p and then try to add an access port in 802.1p Access Untagged mode, things may not work.

The reason is that the 802.1p Access Untagged setting sets the VLAN ID to 0 in the header, but the frame still carries a VLAN tag.  Some access devices won't accept it because they are not expecting a tag, period.  This is especially meaningful with appliances.
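What actually goes on the wire can be sketched in a few lines: the 4-byte 802.1Q tag is still present (TPID 0x8100), only the 12-bit VLAN ID field inside it is zero. A minimal sketch, assuming IPv4 (0x0800) as the inner EtherType:

```python
import struct

def dot1q_header(pcp: int, vid: int, ethertype: int = 0x0800) -> bytes:
    """Build the 802.1Q tag plus inner EtherType: TPID 0x8100, then the
    TCI (3-bit priority, 1-bit DEI = 0, 12-bit VLAN ID)."""
    tci = (pcp << 13) | vid
    return struct.pack("!HHH", 0x8100, tci, ethertype)

# An "802.1p" priority-tagged frame: tag present, but VLAN ID is 0.
print(dot1q_header(pcp=5, vid=0).hex())   # -> 8100a0000800
```

A host expecting truly untagged frames sees EtherType 0x8100 where it expected 0x0800, which is why some appliances drop these frames.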

If you set your mode to 802.1p Access Untagged and use the same encapsulation VLAN tag as the trunked ports, it will not work.  ACI will give you an error saying that you can't have tagged and untagged in the same EPG.  Yet if you change the encapsulation VLAN ID to a different number, it will work.

Remember that a VLAN in ACI is basically bogus (it's only locally significant, since ACI uses VXLAN across the fabric), but endpoint devices still care about that VLAN number.  Below is an example of one EPG with multiple endpoints in the same bridge domain using different VLAN encapsulations.

[Screenshot: ACI8021P]

 

Little Gotcha with APICs within ACI

This applies to versions of APIC controller software up to 1.1(4e)

It turns out there is a bug that occurs when you connect an APIC to multiple leaves.  And yes, that is stupid, because you're supposed to connect them to different leaves for redundancy.  This bug manifests itself when integrating the VMM with the fabric.

Just be sure that you create an APIC policy under the policy groups: Fabric > Access Policies > Interface Policies > Policy Groups.

It's a simple policy; mine is:
LLP = default
CDP = Disabled
MCP = Enabled
LLDP = Enabled
L2 Interface Policy = default
AEP = default

Then bind the policy to the leaves that the APICs are connected to: Fabric > Access Policies > Interface Policies > Profiles > Leaf###.  Then click the plus sign (+) and add your newly created APIC policy group for the interfaces the APIC is connected to.
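If you'd rather push the same policy group through the APIC REST API instead of the GUI, the payload looks roughly like this. This is a sketch, not a verified config: the class names (infraAccPortGrp and its infraRs* relations) follow the standard ACI object model, but the policy-group and interface-policy names here are hypothetical, so confirm the exact objects with the APIC's API Inspector before posting.

```python
import json

# Hypothetical name for the APIC-facing interface policy group.
policy_name = "APIC_PolGrp"

# Leaf access port policy group mirroring the settings listed above
# (CDP disabled, MCP enabled, LLDP enabled, default AEP).
payload = {
    "infraAccPortGrp": {
        "attributes": {
            "dn": f"uni/infra/funcprof/accportgrp-{policy_name}",
            "name": policy_name,
        },
        "children": [
            {"infraRsCdpIfPol":  {"attributes": {"tnCdpIfPolName":  "CDP_Disabled"}}},
            {"infraRsMcpIfPol":  {"attributes": {"tnMcpIfPolName":  "MCP_Enabled"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP_Enabled"}}},
            {"infraRsAttEntP":   {"attributes": {"tDn": "uni/infra/attentp-default"}}},
        ],
    }
}

print(json.dumps(payload, indent=2))
```

You would POST this to /api/mo/uni.json on the APIC after authenticating; the GUI steps above accomplish the same thing.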

Can’t log into your APIC?

I ran into a split fabric issue setting up my test lab and got the following error trying to log into my 2nd APIC:

REST Endpoint user authorization datastore is not initialized - Check Fabric Membership Status of this fabric node

I was able to log into the APIC with the following username and a blank password:

rescue-user

NOTE: as always, physical access to a Cisco device equals total ownership.

Basically, when installing the fabric for the first time, you should power on only one APIC and discover the entire fabric, then add the other APICs one at a time.