Sunday, July 31, 2022

Cisco ACI Fabric Access Policies (Physical) Constructs

 

Fabric Access Policies enable connectivity for systems that are attached to the Cisco ACI fabric.

You build a fabric access policy from the following configuration elements:

  • Pool: Defines a range of identifiers, such as VLANs.
  • Physical domain: References a pool. You can think of it as a resource container.
  • Attachable Access Entity Profile (AAEP): References a physical domain, and therefore specifies the VLAN pool that is activated on an interface.
  • Interface policy: Defines protocol or interface properties that are applied to interfaces.
  • Interface policy group: Gathers multiple interface policies into one set and binds them to an AAEP.
  • Interface profile: Chooses one or more access ports and associates them with an interface policy group.
  • Switch profile: Chooses one or more leaf switches and associates them with an interface profile.


VLAN Pool

A pool represents a range of traffic encapsulation identifiers (for example, VLAN IDs, VNIDs, and multicast addresses). A pool is a shared resource and can be consumed by multiple domains, physical or virtual. A leaf switch does not support overlapping VLAN pools, so you must not associate overlapping VLAN pools with the same virtual domain.

When you create a VLAN pool you must define the allocation mode that the pool uses. There are two types:

Static Allocation:

  • The administrator chooses which VLAN is used. This mode is used primarily to attach physical devices to the fabric.
  • The EPG has a relation to the domain, and the domain has a relation to the pool. The pool contains a range of VLAN and VXLAN encapsulations. For static EPG deployment, the user defines the interface and the encapsulation. The encapsulation must be within the range of a pool that is associated with a domain with which the EPG is associated.

Dynamic Allocation:

  • ACI decides which VLAN is used for a specific EPG. You will most often see this when integrating with a hypervisor such as VMware.
  • In this case ACI defines the VLAN that will be used (and configures the port group on the hypervisor to use that specific VLAN). This is ideal for situations in which you don’t care which VLAN carries the traffic, as long as it is mapped into the right EPG.

Note: For completeness, there are also VXLAN pools. You can use these to attach devices that support VXLAN, such as a hypervisor. Most fabrics only use VLAN pools; just be aware that VXLAN pools exist and that you could use them if required.

Steps to Navigate to Access Policies and Create a VLAN Pool for a Physical Domain:

Step A: Navigate

  1. Click Fabric
  2. Click Access Policies
  3. Expand Pools by clicking the toggle arrow (>)
  4. Right-click on VLAN
  5. Click Create VLAN Pool


Step B1: Create a Static VLAN Pool and its VLAN range

  1. Name the VLAN Pool: <User defined name>
  2. Ensure Static Allocation is selected
  3. Then click the plus sign (+) button to add your VLAN pool range
  4. VLAN Range: For example, 2900 – 2949
  5. Click Ok


Step B2: Create a Dynamic VLAN Pool and its VLAN range

    1. Name the VLAN Pool: <User defined name>
    2. Ensure Dynamic Allocation is selected
    3. Then click the plus sign (+) button to add your VLAN pool range
    4. VLAN Range: For example, 2950 – 2999
    5. Click Ok
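
For reference, the same VLAN pool can be created through the APIC REST API instead of the GUI. The following is a minimal Python sketch, assuming a placeholder APIC address and admin credentials, and reusing the static pool name and VLAN range from the steps above (fvnsVlanInstP is the pool class, fvnsEncapBlk the encap block):

```python
# Minimal sketch: create a static VLAN pool via the APIC REST API.
# The APIC URL, credentials, and pool name are placeholder assumptions.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use a trusted certificate in production

# Authenticate (aaaLogin); the token is kept in the session cookie
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# fvnsVlanInstP is the VLAN pool object; fvnsEncapBlk is the encap range
vlan_pool = {
    "fvnsVlanInstP": {
        "attributes": {
            "dn": "uni/infra/vlanns-[aci_p29_static_vlp]-static",
            "name": "aci_p29_static_vlp",
            "allocMode": "static",
        },
        "children": [
            {"fvnsEncapBlk": {"attributes": {
                "from": "vlan-2900", "to": "vlan-2949", "allocMode": "static"}}}
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=vlan_pool)
print(resp.status_code, resp.text)
```

For a dynamic pool, allocMode would be "dynamic" and the DN suffix changes accordingly (vlanns-[name]-dynamic).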



    Physical Domain

A domain defines the scope of VLANs in the Cisco ACI fabric; in other words, where and how a VLAN pool will be used.
Domains map an EPG to a VLAN pool. An EPG must be a member of a domain, and the domain must reference a VLAN pool. This is what makes it possible for an EPG to use a VLAN encapsulation.

    There are several types of domains:

    • Physical domains (physDomP): Typically used for bare metal server attachment and management access.
    • Virtual domains (vmmDomP): Required for virtual machine hypervisor integration
    • External Bridged domains (l2extDomP): Typically used to connect a bridged external network trunk switch to a leaf switch in the ACI fabric.
    • External Routed domains (or L3 domains) (l3extDomP): Used to connect a router to a leaf switch in the ACI fabric. Within this domain protocols like OSPF and BGP can be used to exchange routes
    • Fibre Channel domains (fcDomP): Used to connect Fibre Channel VLANs and VSANs

Steps to Navigate to Physical Domains for L2 Connections

    1. Click Fabric
    2. Click Access Policies
3. In the left navigation pane, all the way at the bottom, expand Physical and External Domains by clicking the toggle arrow (>)
    4. Right-click on Physical Domains
    5. Click Create Physical Domain
    6. Name the Physical Domain: <User-defined Name>
    7.  In the VLAN Pool dropdown, select your VLAN Pool created in the previous section
    8.  Click Submit

Steps to Navigate to L3 Domains and Create an External Routed Domain

      1. Click Fabric
      2. Click Access Policies
3. In the left navigation pane, all the way at the bottom, expand Physical and External Domains by clicking the toggle arrow (>)
      4. Right-click on L3 Domains and Click Create Layer 3 Domain
      5. Name the Layer 3 Domain: aci_p29_extrtdom
      6. Click Submit
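
Both domains can also be created via the REST API: physDomP for the physical domain and l3extDomP for the L3 domain, each linked to the VLAN pool through an infraRsVlanNs relation. A minimal sketch with placeholder APIC address, credentials, and names:

```python
# Minimal sketch: create a physical domain and an L3 domain via the APIC REST API,
# linking both to the VLAN pool created earlier. Names and addresses are assumptions.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

pool_dn = "uni/infra/vlanns-[aci_p29_static_vlp]-static"

phys_domain = {
    "physDomP": {
        "attributes": {"dn": "uni/phys-aci_p29_physdom", "name": "aci_p29_physdom"},
        "children": [{"infraRsVlanNs": {"attributes": {"tDn": pool_dn}}}],
    }
}

l3_domain = {
    "l3extDomP": {
        "attributes": {"dn": "uni/l3dom-aci_p29_extrtdom", "name": "aci_p29_extrtdom"},
        "children": [{"infraRsVlanNs": {"attributes": {"tDn": pool_dn}}}],
    }
}

for payload in (phys_domain, l3_domain):
    resp = session.post(f"{APIC}/api/mo/uni.json", json=payload)
    print(resp.status_code)
```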

Attachable Access Entity Profile (AAEP)

The AAEP is another connector. It connects the domain (and thereby the VLAN and the EPG) to the policy group, which defines the policy on a physical port. When defining an AAEP you need to specify which domains are to be available to it. These domains (and their VLANs) will be usable by the physical port.

Sometimes you need to configure a lot of EPGs on a lot of ports. Say, for example, you’re not doing any VMware integration, but you do need to have ESXi hosts connected to your fabric. The old way of doing this was to create trunk ports and trunk all the required VLANs to the VMware host. In ACI you’d need to configure a static port to the ESXi host on every EPG that needs to be available on that host. If you’re not automating this, it can take a lot of work, and even with automation it can get messy.

That’s why you can configure an EPG directly under the AAEP. This causes every port that is a member of the same AAEP to automatically carry all the EPGs defined at the AAEP level.


      Steps to Navigate and Create AAEP

      1. Click Fabric
      2. Click Access Policies
      3. Expand Policies by clicking the toggle arrow (>)
      4. Expand Global by clicking the toggle arrow (>)
      5. Right-click on Attachable Access Entity Profiles
      6. Click Create Attachable Access Entity Profile
For a Physical Domain
1. Name the AEP: <User-defined Name>
2. Click the plus button (+) to add a Domain
3. In the Domain Profile dropdown, select your Physical Domain created in the previous section
4. Click Update
5. Click Next
For an L3 Domain
1. Name the AEP: <User-defined Name>
2. Click the plus button (+) to add a Domain
3. In the Domain Profile dropdown, select your Layer 3 Domain created in the previous section
4. Click Update
5. Click Next
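
A minimal REST sketch of the same AAEP, associating both domains created earlier (infraAttEntityP with infraRsDomP children); the APIC address, credentials, and the AAEP name are placeholders:

```python
# Minimal sketch: create an AAEP and associate the physical and L3 domains with it.
# infraAttEntityP is the AAEP object; infraRsDomP links it to a domain. Names are assumptions.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

aaep = {
    "infraAttEntityP": {
        "attributes": {"dn": "uni/infra/attentp-aci_p29_aep", "name": "aci_p29_aep"},
        "children": [
            {"infraRsDomP": {"attributes": {"tDn": "uni/phys-aci_p29_physdom"}}},
            {"infraRsDomP": {"attributes": {"tDn": "uni/l3dom-aci_p29_extrtdom"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=aaep)
print(resp.status_code, resp.text)
```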

      Interface policy group

The Interface Policy Group is a group of policies that define the operation of the physical interface. Think of settings such as interface speed, CDP, BPDU handling, LACP, and more.

This is also the place where the AAEP is referenced. So, the Interface Policy Group takes care of attaching the VLAN, domain, and EPG to an interface through the AAEP.

      The specific policies are interface policies which are configured beforehand.

Steps to Navigate to Interface Policy Groups and Create an Access Port Policy Group

      1. Fabric
      2. Access Policies
      3. Expand Interfaces by clicking the toggle arrow (>)
      4. Expand Leaf Interfaces by clicking the toggle arrow (>)
      5. Expand Policy Groups by clicking the toggle arrow (>)
      6. Right-click on Leaf Access Port
      7. Click Create Leaf Access Port Policy Group


      • Name the Policy Group: <User-defined name>
      • For the AEP, select aci_p29_l3_aep
      • For Link Level Policy, select aci_lab_10G
      • For CDP Policy, select aci_lab_cdp
      • For LLDP Policy, select aci_lab_lldp
      • For MCP Policy, select aci_lab_mcp
      • For L2 Interface Policy, select aci_lab_l2global
• Scroll down
      • Click Submit
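
The equivalent REST object is an infraAccPortGrp that references the AEP and the individual interface policies by name. A minimal sketch, assuming the interface policies named above already exist and using a placeholder APIC and credentials:

```python
# Minimal sketch: create a leaf access port policy group that references the AEP and the
# interface policies named in the steps above (assumed to already exist in the fabric).
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

policy_group = {
    "infraAccPortGrp": {
        "attributes": {
            "dn": "uni/infra/funcprof/accportgrp-aci_p29_intpolg_access",
            "name": "aci_p29_intpolg_access",
        },
        "children": [
            {"infraRsAttEntP":   {"attributes": {"tDn": "uni/infra/attentp-aci_p29_l3_aep"}}},
            {"infraRsHIfPol":    {"attributes": {"tnFabricHIfPolName": "aci_lab_10G"}}},
            {"infraRsCdpIfPol":  {"attributes": {"tnCdpIfPolName": "aci_lab_cdp"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "aci_lab_lldp"}}},
            {"infraRsMcpIfPol":  {"attributes": {"tnMcpIfPolName": "aci_lab_mcp"}}},
            {"infraRsL2IfPol":   {"attributes": {"tnL2IfPolName": "aci_lab_l2global"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=policy_group)
print(resp.status_code, resp.text)
```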

Similarly, we can create a Port Channel policy group, which is used as a Layer 2 connectivity policy for a port channel on a single node. In ACI, each policy group for a Port Channel or Virtual Port Channel identifies the bundle of interfaces as a single interface policy in the fabric. A sketch of the equivalent bundle policy group object follows.
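
For a port channel or vPC, the policy group class is infraAccBndlGrp rather than infraAccPortGrp, with lagT set to "link" for a port channel or "node" for a vPC. A minimal sketch, assuming a placeholder APIC, credentials, and an LACP interface policy (the name aci_lab_lacp_active is hypothetical):

```python
# Minimal sketch: create a port channel (PC) interface policy group.
# lagT="link" makes it a PC; lagT="node" would make it a vPC. Names are assumptions.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

pc_group = {
    "infraAccBndlGrp": {
        "attributes": {
            "dn": "uni/infra/funcprof/accbundle-aci_p29_intpolg_pc",
            "name": "aci_p29_intpolg_pc",
            "lagT": "link",
        },
        "children": [
            {"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-aci_p29_l3_aep"}}},
            # hypothetical LACP policy name; replace with an existing lacpLagPol
            {"infraRsLacpPol": {"attributes": {"tnLacpLagPolName": "aci_lab_lacp_active"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=pc_group)
print(resp.status_code)
```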

      Interface Profile/Selector

Interface Profiles are the way the policy group is attached to a switch. Part of an Interface Profile is the Interface Selector. The interface selector specifies the interfaces and attaches the policy group to those specific interfaces. However, it does not specify which switch(es) those interfaces belong to.

You can have multiple interface selectors listed under a single Interface Profile. How you use them depends on the way you like to work:

      • Interface Profiles per switch
      • Interface Profiles per policy group

The advantage of using an Interface Profile per policy group is that you can use consistent naming to map policy groups to interface profiles, making it easier to find the interface profile to which a policy group is attached. However, if you have a lot of policy groups, this can lead to long lists in the GUI. This way of working is better suited for automation when you’re working with large fabrics.

Steps to Create Interface Profiles

      1. Fabric
      2. Access Policies
      3. Expand Quick Start by clicking the toggle arrow (>)
      4. Right-click on Interface Configuration
      5. Click Configure Interface


      Now  you can create the interface profiles for:
      • Access port Interface
      • Port-channel Interface
      • VPC Interface

      Steps to Create Access Port Interface

      1. Set the Leafs to 203
      2. Set the Interfaces to 1/29
      3. Ensure the Interface Type is set to Individual
      4. In the dropdown, select your Leaf Access Port Policy Group you created earlier: aci_p29_intpolg_access
      5. The Leaf Profile Name will be aci_p29_access_sp
      6. The Interface Profile Name will be aci_p29_acc_intf_p

      Steps to Create Port-channel Interface

        1. Set the Leafs to 205
        2. Set the Interfaces to 1/57-58
        3. Ensure the Interface Type is set to Port Channel (PC)
        4. In the dropdown, select your Port Channel Policy Group you created earlier: aci_p29_intpolg_pc
        5. The Leaf Profile Name will be aci_p29_pc_sp
        6. The Interface Profile Name will be aci_p29_pc_intf_p
        7. Click Next

        Steps to Create VPC Interface

        1. Set the Leafs to 207 - 208
        2. Set the Interfaces to 1/29
        3. Ensure the Interface Type is set to Virtual Port Channel (VPC)
        4. In the dropdown, select your VPC Port Policy Group you created earlier: aci_p29_intpolg_vpc
        5. The Leaf Profile Name will be aci_p29_vpc_sp
        6. The Interface Profile Name will be aci_p29_vpc_intf_p
        7. Click Next
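
The interface profile and selector from the access-port example above map to the infraAccPortP, infraHPortS, and infraPortBlk classes. A minimal sketch using the lab values (leaf 203, eth1/29) with a placeholder APIC and credentials:

```python
# Minimal sketch: create an interface profile with an access-port selector for eth1/29
# and tie it to the access port policy group created earlier. Names mirror the lab values.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

intf_profile = {
    "infraAccPortP": {
        "attributes": {"dn": "uni/infra/accportprof-aci_p29_acc_intf_p",
                       "name": "aci_p29_acc_intf_p"},
        "children": [
            {"infraHPortS": {
                "attributes": {"name": "eth1_29", "type": "range"},
                "children": [
                    # Port block: card 1, port 29 only
                    {"infraPortBlk": {"attributes": {
                        "name": "blk1", "fromCard": "1", "toCard": "1",
                        "fromPort": "29", "toPort": "29"}}},
                    # Attach the access port policy group to this selector
                    {"infraRsAccBaseGrp": {"attributes": {
                        "tDn": "uni/infra/funcprof/accportgrp-aci_p29_intpolg_access"}}},
                ],
            }}
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=intf_profile)
print(resp.status_code, resp.text)
```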

        Switch Profiles

A switch profile is the mapping between the policy model and the actual physical switch. The switch profile maps the interface profile, which contains the interface selectors, to the physical switch. So, as soon as you apply an interface profile to a switch profile, the ports are programmed according to the policy group you defined.

        Step to Create Switch Profiles

        1. Fabric
        2. Access Policies
        3. Expand Quick Start by clicking the toggle arrow (>)
        4. Expand the Switch policies
        5. Right-click on Profile to create switch Profile
6. Configure the Switch Profile name and assign to it the leaf switch and the Interface Profile created above
        7. Click Submit
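
A minimal sketch of the same switch profile as a REST object (infraNodeP selecting node 203 and referencing the access interface profile); the APIC address and credentials are placeholders:

```python
# Minimal sketch: create a leaf switch profile that selects node 203 and associates the
# interface profile created earlier. Names are taken from the lab values above.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

switch_profile = {
    "infraNodeP": {
        "attributes": {"dn": "uni/infra/nprof-aci_p29_access_sp",
                       "name": "aci_p29_access_sp"},
        "children": [
            {"infraLeafS": {
                "attributes": {"name": "leaf203", "type": "range"},
                "children": [
                    {"infraNodeBlk": {"attributes": {
                        "name": "blk203", "from_": "203", "to_": "203"}}}
                ],
            }},
            {"infraRsAccPortP": {"attributes": {
                "tDn": "uni/infra/accportprof-aci_p29_acc_intf_p"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=switch_profile)
print(resp.status_code, resp.text)
```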

        Wrapping it all up together

So, we’ve seen that all these policies ultimately configure a port with specific parameters. We’ve also seen that the domain and the AAEP ensure that an EPG can be programmed onto a port. But how does the ACI fabric know which EPGs to put onto the port?

        Several options exist. The most common ones are:

        • Static configuration
        • Dynamic configuration through VMM domains

        Static configuration

With static configuration, you as an administrator configure static ports at the EPG level. You need to define which port (or port channel) to use and which encap must be used. Encap in this context is usually a VLAN tag, but could in theory also be a VXLAN or QinQ tag.


Another way is to attach an EPG directly to the AAEP. This causes the EPG with the specified encap to be attached to all policy groups that are configured with this AAEP, as described earlier. Both options are sketched below.
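
A minimal sketch of both options via the REST API. The tenant, application profile, EPG, node/port, and VLAN values are placeholders, and the EPG-under-AAEP modeling (infraGeneric/infraRsFuncToEpg) should be verified against your APIC version:

```python
# Minimal sketch: the two ways of mapping an EPG to ports described above.
# Tenant, AP, EPG, node, port, and VLAN values are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Option 1: static port binding at the EPG level (fvRsPathAtt)
static_path = {
    "fvRsPathAtt": {
        "attributes": {
            "dn": ("uni/tn-aci_p29_tenant/ap-aci_p29_ap/epg-aci_p29_epg/"
                   "rspathAtt-[topology/pod-1/paths-203/pathep-[eth1/29]]"),
            "encap": "vlan-2900",
            "mode": "regular",      # trunk; use "untagged" for access-style ports
        }
    }
}

# Option 2: attach the EPG directly under the AAEP (infraGeneric/infraRsFuncToEpg),
# which pushes it to every port that uses that AAEP
epg_on_aaep = {
    "infraGeneric": {
        "attributes": {"dn": "uni/infra/attentp-aci_p29_aep/gen-default", "name": "default"},
        "children": [
            {"infraRsFuncToEpg": {"attributes": {
                "tDn": "uni/tn-aci_p29_tenant/ap-aci_p29_ap/epg-aci_p29_epg",
                "encap": "vlan-2900"}}}
        ],
    }
}

for payload in (static_path, epg_on_aaep):
    resp = session.post(f"{APIC}/api/mo/uni.json", json=payload)
    print(resp.status_code)
```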

        Dynamic Configuration

With dynamic configuration based on VMM domains, a port group corresponding to the EPG is automatically created in the virtual machine manager when the EPG is made a member of the VMM domain.

        Saturday, July 30, 2022

        Application Centric Infrastructure (ACI) Fabric Initialization

        How do I connect the APICs to the fabric?

To set up the Application Centric Infrastructure (ACI) fabric, the following tasks need to be done:

        • Rack and Cable the Hardware
        • Configure each Cisco APIC's Integrated Management Controller (CIMC)
        • Check APIC firmware and software
        • Check the image type (NX-OS/Cisco ACI) and software version of your switches
        • APIC1 initial setup
        • Fabric discovery
        • Setup the remainder of APIC Cluster

        Rack and Cable the Hardware

        APIC Connectivity

        The APICs will be connected to Leaf switches. When using multiple APICs, we recommend connecting APICs to separate Leafs for redundancy purposes.

        For #9: If it's APIC M3/L3, VIC 1445 has four ports (port-1, port-2, port-3, and port-4 from left to right). Port-1 and port-2 make a single pair corresponding to eth2-1 on the APIC; port-3 and port-4 make another pair corresponding to eth2-2 on the APIC. Only a single connection is allowed for each pair. For example, you can connect one cable to either port-1 or port-2 and another cable to either port-3 or port-4, but not 2 cables to both ports on the same pair. All ports must be configured for the same speed, either 10G or 25G.

        Switch Connectivity

        All Leaf switches will need to connect to spine switches and vice versa. This provides your fabric with a fully redundant switching fabric.  In addition to the fabric network connections, you'll also connect redundant PSUs to separate power sources, Management Interface to your 1G out-of-band management network, and a console connection to a Terminal server (optional, but highly recommended).

        Configure each Cisco APIC's Integrated Management Controller (CIMC)

When you first connect the CIMC port (marked "mgmt.") on the rear-facing interface, it will be configured for DHCP by default.  Cisco recommends that you assign a static address for this purpose to avoid any loss of connectivity or changes to address leases.  You can modify the CIMC details by connecting a crash cart (physical monitor, USB keyboard and mouse) to the server and powering it on.  During the boot sequence, it will prompt you to press "F8" to configure the CIMC.  From here you will be presented with the CIMC configuration screen; the exact layout depends on your firmware version.

        • For the "NIC mode" we recommend using Dedicated which utilizes the dedicated "mgmt." interface in the rear of the APIC appliance for CIMC platform management traffic. 
        • Using "Shared LOM" mode which will send your CIMC traffic over the LAN on Motherboard (LOM) port along with the APICs OS management traffic.  This can cause issues with fabric discovery if not properly configured and not recommended by Cisco. 

        Aside from the IP address details, the rest of the options can be left alone unless there's a specific reason to modify them.  Once a static address has been configured you will need to Save the settings & reboot.  After a few minutes you should then be able to reach the CIMC Web Interface using the newly assigned IP along with the default CIMC credentials of admin and password.  It’s recommended that you change the CIMC default admin password after first use.

        Logging into the CIMC Web Interface

        To log into the CIMC, open a web browser to https://<CIMC_IP>. You'll need to ensure you have flash installed & permitted for the URL.  Once you've logged in with the default credentials, you'll be able to manage all the CIMC features including launching the KVM console.

        Note: Launching the KVM console will require that you have Java version 1.6 or later installed.  Depending on your client security settings, you may need to whitelist the IMC address within your local Java settings for the KVM applet to load.   Open the KVM console and you should be at the Setup Dialog for the APIC assuming the server is powered on.  If not powered up, you can do so from the IMC Web utility. 

        Check APIC firmware and software

Equally important to note is that all your APICs must run the same version when joining a cluster.  This may require manually upgrading/downgrading your APICs prior to joining them to the fabric.  Instructions on upgrading standalone APICs using KVM vMedia can be found in the "Cisco APIC Management, Installation, Upgrade, and Downgrade Guide" for your respective version.

        Switch nodes can be running any version of ACI switch image and can be upgraded/downgraded once joined to the fabric via firmware policy.

        Check the image type (NX-OS/Cisco ACI) and software version of switches

        For a Nexus 9000 series switch to be added to an ACI fabric, it needs to be running an ACI image.  Switches that are ordered as "ACI Switches" will typically be shipped with an ACI image.  If you have existing standalone Nexus 9000 switches running traditional NXOS, then you may need to install the appropriate image (For example, aci-n9000-dk9.14.0.1h.bin).  For detailed instructions on converting a standalone NXOS switch to ACI mode, please see the "Cisco Nexus 9000 Series NX-OS Software Upgrade and Downgrade Guide" on CCO for your respective version of NXOS.

        APIC1 initial setup

Now that you have basic remote connectivity, you can complete the setup of your ACI fabric from any workstation with network access to the APIC. If the server is not powered on, do so now from the CIMC interface.  The APIC will take 3-4 minutes to fully boot. The next thing we'll do is open a console session via the CIMC KVM console. Assuming the APIC has completed the boot process, it should be sitting at the prompt "Press any key to continue…".  Doing so will begin the setup utility.

From here, the APIC will guide you through the initial setup dialogue.  Carefully answer each question.  Some of the items configured can't be changed after initial setup, so review your configuration before submitting it.

        Fabric Name: User defined, will be the logical friendly name of your fabric.

        Fabric ID: Leave this ID as the default 1.

        Number of Controllers in fabric: Set this to the number of APICs you plan to configure. This can be increased/decreased later.

Pod ID: The Pod ID to which this APIC is connected.  If this is your first APIC or you don't have more than a single Pod installed, this will always be 1.  If you are locating additional APICs across multiple Pods, you'll want to assign the appropriate Pod ID where each one is connected.

        Standby Controller: Beyond your active controllers (typically 3) you can designate additional APICs as standby.  In the event you have an APIC failure, you can promote a standby to assume the identity of the failed APIC.

APIC-X: A special-use APIC model used for telemetry and other heavy ACI App purposes.  For your initial setup this typically would not be applicable.  Note: In future releases this feature may be referenced as "ACI Services Engine".

TEP Pool:  This will be a subnet of addresses used for internal fabric communication.  This subnet will NOT be exposed to your legacy network unless you're deploying the Cisco AVS or Cisco ACI Virtual Edge.  Regardless, our recommendation is to assign an unused subnet with a size between /16 and /21.  The size of the subnet used will impact the scale of your Pod.  Most customers allocate an unused /16 and move on. This value can NOT be changed once configured; modifying it requires a wipe of the fabric.

        Note: The 172.17.0.0/16 subnet is not supported for the infra TEP pool due to a conflict of address space with the docker0 interface. If you must use the 172.17.0.0/16 subnet for the infra TEP pool, you must manually configure the docker0 IP address to be in a different address space in each Cisco APIC before you attempt to put the Cisco APICs in a cluster.

Infra VLAN: This is another important item.  This is the VLAN ID for all fabric connectivity.  This VLAN ID should be allocated solely to ACI and not used by any other legacy device in your network.  Though this VLAN is used for fabric communication, there are certain instances where this VLAN ID may need to be extended outside of the fabric, such as the deployment of the Cisco AVS/AVE.   Due to this, we also recommend you ensure the Infra VLAN ID selected does not overlap with any "reserved" VLANs found on your networks.  Cisco recommends a VLAN ID lower than 3915 as a safe option, as it is not a reserved VLAN on Cisco DC platforms as of today. This value can NOT be changed once configured; modifying it requires a wipe of the fabric.

        BD Multicast Pool (GIPO): Used for internal connectivity.  We recommend leaving this as the default or assigning a unique range not used elsewhere in your infrastructure. This value can NOT be changed once configured. Having to modify this value requires a wipe of the fabric.

Once the Setup Dialogue has been completed, it will allow you to review your entries before submitting.  If you need to make any changes enter "y"; otherwise enter "n" to apply the configuration.  After applying the configuration, allow the APIC 4-5 minutes to fully bring all services online and initialize the REST login services before attempting to log in through a web browser.

        Fabric discovery

With our first APIC fully configured, we will now log in to the GUI and complete the discovery process for our switch nodes.

        When logging in for the first time, you may have to accept the Cert warnings and/or add your APIC to the exception list.

        Now we'll proceed with the fabric discovery procedure.  We'll need to navigate to Fabric tab > Inventory sub-tab > Fabric Membership folder.

From this view you are presented with your registered fabric nodes.  Click on the Nodes Pending Registration tab in the work pane and you should see the first Leaf switch awaiting discovery.  Note this will be one of the Leaf switches to which the APIC is directly connected.

        To register our first node, click on the first row, then from the Actions menu (Tool Icon) select Register.

        The Register wizard will pop up and require some details to be entered including the Node ID you wish to assign, and the Node Name (hostname).

Hostnames can be modified, but the Node ID will remain assigned until the switch is decommissioned and removed from the APIC.  This information is provided to the APIC via LLDP TLVs.  If a switch was previously registered to another fabric without being erased, it will never appear as an unregistered node, so it's important that all switches have been wiped clean prior to discovery.   It's common practice for Leaf switches to be assigned Node IDs from 100 upward, and Spine switches IDs from 200 upward; to accommodate your own numbering convention or larger fabrics you can implement your own scheme.  RL TEP Pool is reserved for Remote Leaf usage only and doesn't apply to local fabric-connected Leaf switches. Rack Name is an optional field.

        Once the registration details have been submitted, the entry for this leaf node will move from the Nodes Pending Registration tab to the Registered Nodes tab under Fabric Membership.  The node will take 3 to 4 minutes to complete the discovery, which includes the bootstrap process and bringing the switch to an "Active" state.  During the process, you will notice a tunnel endpoint (TEP) address gets assigned.  This will be pulled from the available addresses in your Infra TEP pool (such as 10.0.0.0/16).
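
Node registration can also be scripted. A minimal sketch that creates the fabricNodeIdentP entry for a leaf; the serial number, node ID, and hostname are placeholders:

```python
# Minimal sketch: register a leaf node via the APIC REST API instead of the GUI, by
# creating a fabricNodeIdentP entry. Serial number, node ID, and name are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

node_reg = {
    "fabricNodeIdentP": {
        "attributes": {
            "dn": "uni/controller/nodeidentpol/nodep-FDO12345ABC",  # keyed by serial number
            "serial": "FDO12345ABC",
            "nodeId": "101",
            "name": "leaf-101",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=node_reg)
print(resp.status_code, resp.text)
```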

        In depth, Fabric Discovery process:

        First, Cisco APIC uses LLDP neighbor discovery to discover a switch.

        After a successful discovery, the switch sends a request for an IP address via DHCP

Cisco APIC then allocates an address from the DHCP pool. The switch uses this address as a TEP address. You can verify the allocated address from the APIC shell by using the acidiag fnvread command and by pinging the switch from the Cisco APIC.

        In the DHCP Offer packet, Cisco APIC passes the boot file information for the switch. The switch uses this information to acquire the boot file from Cisco APIC via HTTP GET to port 7777 of Cisco APIC.

        The boot file HTTP GET 200 OK response from the Cisco APIC contains the firmware that the switch will load. The switch then retrieves this file from the Cisco APIC with another HTTP GET to port 7777 on the Cisco APIC. 

Finally, the Cisco APIC initiates an encrypted TCP session to the switch, which listens on TCP port 12183, to establish policy element Intra-Fabric Messaging (IFM).


        In summary, the initial steps of the discovery process are:

        • LLDP neighbor discovery
        • Cisco APIC assigns TEP address to the switch via DHCP
        • The switch downloads the boot file from Cisco APIC and performs firmware upgrade if necessary.
• Policy element exchange via intra-fabric messaging (IFM)

        Note: Communication between the various nodes and processes in the Cisco ACI Fabric uses IFM, and IFM uses SSL-encrypted TCP communication. Each Cisco APIC and fabric node has 1024-bit SSL keys that are embedded in secure storage. The SSL certificates are signed by Cisco Manufacturing Certificate Authority (CMCA).

        In the discovery process, a fabric node is considered active when the Cisco APIC and the node can exchange heartbeats through the IFM process.

Node status may fluctuate between several states during the fabric registration process. The states are shown in the Fabric Node Vector table. The APIC CLI command to show the Fabric Node Vector table is acidiag fnvread.
        Following are the States and descriptions:

• Unknown – Node discovered but no Node ID policy configured
• Undiscovered – Node ID configured but node not yet discovered
• Discovering – Node discovered but IP address not yet assigned
• Unsupported – Node is not a supported model
• Disabled – Node has been decommissioned
• Inactive – No IP connectivity
• Active – Node is active

Note: ACI uses intra-fabric messaging (IFM) packets to communicate between the different nodes and between leaf and spine. These IFM packets are TCP packets secured by 1024-bit SSL encryption, and the keys used for encryption are stored in secure storage. These keys are signed by the Cisco Manufacturing Certificate Authority (CMCA). Any issues with the IFM process can prevent fabric nodes from communicating and from joining the fabric.

After the first Leaf has been discovered and moved to an Active state, it will then discover every Spine switch it's connected to.  Go ahead and register each Spine switch in the same manner.

Since each Leaf switch connects to every Spine switch, once the first Spine completes the discovery process you should see all remaining Leaf switches pending registration.  Go ahead and register all remaining nodes and wait for all switches to transition to an Active state.

        With all the switches online & active, our next step is to finish the APIC cluster configuration for the remaining nodes.  Navigate to System > Controllers sub menu > Controllers Folder > apic1 > Clusters as Seen by this Node folder.

        From here you will see your single APIC along with other important details such as the Target Cluster Size and Current Cluster Size.  Assuming you configured apic1 with a cluster size of 3, we'll have two more APICs to setup.

        Setup the remainder of APIC Cluster

        At this point we would want to now open the KVM console for APIC2 and begin running through the setup Dialogue just as we did for APIC1 previously.  When joining additional APICs to an existing cluster it's imperative that you configure the same Fabric Name, Infra VLAN and TEP Pool.  The controller ID should be set to ID 2.  You'll notice that you will not be prompted to configure Admin credentials.  This is expected as they will be inherited from APIC1 once you join the cluster.

Allow APIC2 to fully boot and bring its services online.  You can confirm everything was successfully configured as soon as you see the entry for APIC2 in the Active Controllers view.  During this time, it will also begin syncing with APIC1's config; allow 4-5 minutes for this process to complete.  You may see the state of the APICs transition back and forth between Fully Fit and Data Layer Synchronization in Progress. Continue through the same process for APIC3, ensuring you assign the correct controller ID.

        This concludes the entire fabric discovery process.  All your switches & controllers will now be in sync and under a single pane of management.  Your ACI fabric can be managed from any APIC IP.  All APICs are active and maintain a consistent operational view of your fabric.

        The Complete steps of IFM (Intra-Fabric Messaging)

After all of these processes are completed, the fabric is ready for production configuration.

        1. Link Layer Discovery Protocol (LLDP) Neighbor Discovery
        2. Tunnel End Point (TEP) IP address assignment to the node via DHCP
        3. Node software upgraded if necessary
4. IS-IS adjacency
5. Certificate validation
6. Start of DME processes on switches
7. Tunnel setup (iVXLAN)
        8. Policy Element IFM Setup

        Fabric Initialization Tasks

        • Configure APIC1
        • Add first Leaf to fabric.
• Add all spines to fabric.
• Add remaining Leafs to fabric.
• Add remaining APICs to fabric.
• Set up NTP
• Configure OOB Management IP Pool
        • Configure Export Policies for Configuration and Tech Support Exports
        • Configure Firmware Policies (For Upgrades)

        Friday, July 29, 2022

        Application Centric Infrastructure (ACI) Overview

        What is ACI?

        • Old: IP endpoint-based network
        • New: Application based network

        • Old: Manually configured network
        • New: Software based network

• Declarative model → Promise Theory
  • We don't want to tell every single port explicitly how to behave; we want to use promise theory to describe how we want the application to behave and let the fabric translate that down to the hardware.
  • For example, when we get into a taxi we tell the driver where we want to go; we don't tell them how to get there, where to turn, or how fast to drive. We just give the destination, and that is essentially what promise theory is based on. We just tell ACI what we want to accomplish, and ACI translates it down to the hardware as required.

        • Separation of Control Plane and Data Plane

        Cisco Application Centric Infrastructure (Cisco ACI) in the data center is a holistic architecture with centralized automation and policy-driven application profiles. Cisco ACI delivers software flexibility with the scalability of hardware performance that provides a robust transport network for today's dynamic workloads. Cisco ACI is built on a network fabric that combines time-tested protocols with new innovations to create a highly flexible, scalable, and resilient architecture of low-latency, high-bandwidth links.

        This system-based approach simplifies, optimizes, and accelerates the entire application deployment life-cycle across data center, WAN, access and cloud environments. In doing so, this system empowers IT to be more responsive to changing business and application needs. This ability enhances agility and adds business value.

         Cisco ACI characteristics:

        • Application-centric fabric connectivity for:
          • Multi-tier applications
          • Traditional application
          • Virtualized applications
        • Multivendor support
        • Physical and virtual endpoints
        • Policy abstraction

        ACI Starts with a Better Switch – Nexus 9000

The Cisco Nexus 9000 platform has two modes of operation.

        • In the first mode, Nexus 9000 utilizes an enhanced version of the NXOS operating system to provide a traditional switching model with advanced automation and programmability capabilities.
• In the second mode, ACI mode, the Nexus 9000 provides an application-centric representation of the network, utilizing advanced features and profile-based deployments to abstract the complexity of the underlying network while improving application visibility and enabling greater agility through DevOps methodologies.

        Standalone Mode

        • Nexus 9300 and 9500
        • Behave as a regular Nexus L2/L3 switch.
        • Best in Class efficiency
        • Low latency and High 10G/40G Port Density

        ACI Mode

        • Nexus 9300, Nexus 9500 Switches
        • Run an “ACI version” of software.
        • Managed by APIC
        • Spine and Leaf fabric Design

        ACI Network Topology

        ACI topology is a CLOS Fabric

        • All leafs uplink to all spines with 40/100 GigE
        • APICs connect to leafs with redundant 10 GigE links
        • Leafs do not plug into leafs
        • Spines  do not plug into spines
• Traffic flow is Host > Leaf > Spine > Leaf > Host
        • Scale out bandwidth by adding more spines

        ACI is made up of 3 main components

        • Nexus 9K spine switches
        • Nexus 9K leaf switches
        • Application Policy Infrastructure Controller (APIC)

        Cisco APIC

        Cisco APIC is a policy controller. It relays the intended state of the policy to the fabric. The APIC does not represent the control plane and does not sit in the traffic path. The hardware consists of a cluster of three or more servers in a highly redundant array.

        Key Point:

        • Policy Controller
        • Holds the defined policy - Management plane (Not the control plane, not in the traffic path)
        • Redundant cluster of three or more servers - Each server dual-homed for resilience
• Leaf port density determines cluster requirements (see the Verified Scalability Guide for Cisco ACI)
        • Instantiates the policy changes

        The Cisco APIC software is delivered on Cisco Unified Computing System (UCS) C-Series server appliances. The product consists of the server hardware and pre-installed Cisco APIC software.

        • Currently, two models and two generations
          • APIC-L2 (Large) and APIC-M2 (Medium) - C220 M4
          • APIC-L1 (Large) and APIC-M1 (Medium) - C220 M3
        • APIC controls the topology via a single GUI
          • Like UCSM, APIC is a shared management plane
          • APIC also supports CLI and APIs for automation

        ACI Fabric Initialization

        ACI Fabric supports discovery, boot, inventory, and systems maintenance processes via the APIC.

        • Fabric Discovery and Addressing
          • APIC finds a leaf
          • Leaf finds the spines
          • Spines find all other leafs
          • Minimal GUI configuration steps
        • Image Management
        • Topology validation through wiring diagram and system checks

More detail on fabric initialization is provided in the previous section.

        Spine-Leaf Topology

By using a spine-leaf topology, the fabric is easier to build, test, and support. Scalability is achieved by simply adding more leaf nodes if there are not enough ports for connecting hosts, and adding spine nodes if the fabric is not large enough to carry the load of the host traffic. The symmetrical topology allows for optimized forwarding behavior, needing only two hops for any host-to-host connection.
        Advantages:

        • Simple and consistent topology
        • Scalability for connectivity and bandwidth
        • Symmetry for optimization of forwarding behavior
        • Least-cost design for high bandwidth
        • Low-latency and oversubscription

        IS-IS Fabric Infrastructure Routing

        The fabric leverages a densely tuned environment utilizing Level 1 connections within the topology for advertising loopback addresses. These loopback addresses are the VTEPs (VXLAN Tunnel Endpoints) that are used in the integrated overlay and advertised to all other nodes in the fabric for overlay tunnel use.
Main features of IS-IS in Cisco ACI:
IS-IS is responsible for infrastructure connectivity:

• Advertises VTEP addresses
• Computes multicast trees
• Announces tunnels from every leaf to all other fabric nodes

IS-IS is tuned for a densely connected fabric.
IS-IS is also responsible for generating the multicast forwarding tag (FTAG) trees in the fabric using vendor TLVs.

        Decoupling of Endpoint Location and Policy

        The Cisco ACI fabric decouples the endpoint address from the location of that endpoint and defines the endpoint by its locator or VTEP address. Forwarding between VTEPs leverages an enhanced VXLAN header format. The mapping for host and tenant MAC and IP address to the location is performed using a distributed mapping database for reachability.
        Main points about Endpoint location and policy:

        • Endpoints identified by IP or MAC address
        • Endpoint location specified by VTEP address
        • Forwarding occurs between VTEPs
        • Transport based on enhanced VXLAN header format
        • Distributed reachability database maps endpoints to VTEP locations

        Notes: ACI Behind the Scenes

        • An automated VXLAN overlay tunnel system
        • Support both layer 2 and layer 3 VXLAN gateways
        • VLANs now have port-local significance
        • Underlay network uses IS-IS for transport

Leafs are VXLAN Tunnel Endpoints (VTEPs); the fabric provides VTEP-to-VTEP IP transport through the spines.

        Physical, Virtual and Distributed

• Some endpoints are bare metal and some are virtual, and today more and more workloads are transitioning to virtualized environments.
• So we must support any type of hypervisor and any type of bare-metal host.
• That is what ACI can do: it supports hypervisors such as Microsoft Hyper-V and KVM as well as bare metal, and policy can be applied to any of them.

The other great thing that ACI can do for us is called normalization.

• We take the encapsulation that comes into the fabric, which can be a standard 802.1Q VLAN tag, a VXLAN ID, or NVGRE, and we normalize the traffic into application endpoint groups (which we talk about in the next section).
• Essentially, we can speak any language coming into the fabric.
• Once an endpoint is in the fabric, ACI will apply the same policy to it regardless of the encapsulation type it is using.

Important Points to Remember

        • STP (Spanning-tree) is not used in ACI because STP blocks one of the links due to its behavior.
• ACI uses ECMP (equal-cost multipathing) between two Leaf switches; the Spine is the only intermediate hop, and when the cost is equal the traffic is load balanced.
• ACI is a Layer 3 fabric; the IS-IS routing protocol is used to build the routing table.
        • VXLAN is used for building Overlay Network.
• Every network in ACI is host-based, i.e., /32.
        • LLDP is the protocol for discovering the switches at Layer 2.
        • DHCP is used for allocating IPs to each switch by APIC.
• In ACI, we follow a whitelisting model: by default everything is blocked unless we allow it. This is very good from a security point of view.
• In ACI, everything we configure is stored in the form of objects and policies, which can be accessed using the APIC API.
• Configuration is stored in XML or JSON format and can be queried and configured through the API as well, as sketched below.
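
As a small illustration of that API, the sketch below logs in and reads objects back as JSON; the APIC address and credentials are placeholders:

```python
# Minimal sketch: read objects from the APIC as JSON. Lists all fabric nodes and counts
# tenants. The APIC address and credentials are placeholder assumptions.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Class-level queries return every object of that class in the fabric
nodes = session.get(f"{APIC}/api/class/fabricNode.json").json()
for item in nodes.get("imdata", []):
    attrs = item["fabricNode"]["attributes"]
    print(attrs["id"], attrs["name"], attrs["role"])

tenants = session.get(f"{APIC}/api/class/fvTenant.json").json()
print("tenant count:", tenants.get("totalCount"))
```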