Everyone in IT talks about SDN, the ubiquitous abbreviation for Software-Defined Networking; it is impossible to work in this field without coming across the term. As a customer, you sometimes pose a provocative question: “Is SDN the new IPv6?” Or, put another way: “Are SDN solutions, with their completely novel approaches, in reality only suitable for a smaller market and its specific requirements?”

One can debate at length whether Cisco ACI, owing to its architecture, constitutes an SDN solution in the original sense. Some vendors try to approach the topic with similarly abstract, philosophical questions. While the consolidation of network, server and application in the data center certainly brings many advantages, finding the right product becomes more complex for the customer at the same time, as the vendors of networks, servers and applications suddenly find themselves competing with one another. SDN solution or not, Cisco ACI is a concrete product with concrete approaches.


Are there ACI use cases for every data center?

What can ACI offer me compared to a traditional network? How complex is a migration?


In this blog I would like to address these questions from the perspective of a “networker on the front line”.



In network engineering, we are historically used to thinking in VLAN and IP network structures. Application developers, in contrast, generally take an IP network for granted and prefer a flat design for the communication between their applications. In between sit various server and virtualization architectures. These traditionally rather siloed areas of work, each with its own requirements, cause structural contradictions in an overall solution with regard to stability, flexibility and security. Provisioning new services is complex, as different configurations are necessary in many places to reach the goal.

Cisco ACI takes a completely new approach to these issues by putting applications and their communication, whether across the network or via server and virtualization architectures, at the center of a comprehensive, centrally controllable solution.


There are shorter papers on ACI that attempt, in vain, to make the product quickly palatable; the concept simply cannot be described in a few paragraphs. Because this blog is more extensive, I would like to summarize my assessment of the advantages and disadvantages of a Cisco ACI solution up front. The individual topics are discussed in more detail in the following chapters.

From a global perspective, the advantages are substantial:


  • “East-west” security in the data center, without scalability problems, via EPGs and security contracts. This security concept is available at full forwarding performance in the ACI Fabric and is currently a main argument for using ACI.
  • Optimized traffic forwarding in the ACI Fabric through the reduction of broadcast and unknown-unicast frames
  • Central administration by means of a well-organized GUI, a CLI and API programming
  • Automated provisioning of EPG (endpoint group) communication all the way to the VM, directly from the central administration of the ACI solution
  • Policy-based configuration, whereby each new service, including its dependencies, can be provisioned efficiently any number of times
  • Simple, modular topology with maximum flexibility for extensions


The disadvantages are the very complex and sometimes tedious migration of services from an existing data center into a service- and contract-based ACI policy, and the initial training effort for staff. Apart from that, the integrated ACI security does not correspond to a stateful firewall, but to a so-called “reflexive” access control list (ACL). For a clean implementation of security contracts, you have to know exactly which communication exists between the servers in the data center.




One argument is that VLANs cannot scale because of the hard limit of roughly 4,000 usable IDs. It is correct that ACI scales much better (up to 16 million segments) thanks to its integral VXLAN encapsulation. However, considering the size of most of the Cisco networks known to us, this cannot be a main sales argument.
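The difference in scale follows directly from the header fields: a VLAN tag carries a 12-bit ID, while a VXLAN header carries a 24-bit network identifier (VNID). A quick back-of-the-envelope check:

```python
# A VLAN tag carries a 12-bit ID, a VXLAN header a 24-bit VNID.
vlan_segments = 2 ** 12    # 4096 (a few IDs are reserved in practice)
vxlan_segments = 2 ** 24   # 16,777,216 possible segments

print(vlan_segments, vxlan_segments)  # → 4096 16777216
```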

Data transmission via a traditional VLAN network entails serious limitations:


  • Spanning tree to resolve network loops, with its known limitations
  • Decentralized MAC address learning, which prevents a central view of connected devices
  • Flooding behavior in VLANs that is hardly manageable
  • No transport of layer 2 frames across layer 3 boundaries


To remove these limitations, an ACI Fabric uses VXLAN overlay features. An overview of VXLAN technology can be found in my blog: VXLAN – overlays in the data center.


However, ACI is much more than a mere VXLAN overlay; the ACI-specific functions are discussed in more detail below.


VXLAN can also be operated on the Nexus 9000 platform in classic NX-OS mode, without an ACI Fabric. There, the complex configuration and the decentralized management via the command line are the biggest disadvantages. This disadvantage does not exist with Cisco ACI, as the Fabric configures and manages VXLAN automatically!

VLAN encapsulation still exists with ACI, but only for certain connections between end systems and the ACI Fabric. One differentiates between “localized” and “normalized” encapsulation in the Fabric. The transition happens at the leaf switches, which translate from the local encapsulation to the normalized VXLAN encapsulation by taking on the role of a VTEP (VXLAN Tunnel Endpoint).
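As a rough illustration of this normalization, here is a toy mapping in Python (leaf names, VLAN IDs and VNIDs are all invented for the example): frames arriving with a locally significant VLAN tag are mapped at the leaf to the fabric-wide VNID of their endpoint group, so the same EPG can use different local VLANs on different leaves.

```python
# Invented example of a leaf's localized-to-normalized encapsulation mapping.
local_vlan_to_vnid = {
    ("leaf-101", 10): 16001,   # EPG "web" arrives on VLAN 10 at this leaf
    ("leaf-102", 20): 16001,   # the same EPG may use a different VLAN elsewhere
}

def normalize(leaf, vlan):
    """Return the fabric-wide VNID for a frame arriving on (leaf, vlan)."""
    return local_vlan_to_vnid[(leaf, vlan)]

print(normalize("leaf-101", 10))  # → 16001
print(normalize("leaf-102", 20))  # → 16001 (same EPG, different local VLAN)
```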



Cisco ACI demands a strict leaf-spine Fabric topology that permits no alternative cabling variants. Phrased like that, it sounds like an inflexible solution that nobody would actually want in their own network. In reality, however, this approach is a big advantage in terms of scalability and stability.

Leaf switches provide the connections to end systems and external networks. Spine switches connect only to the leaf switches (and, in special cases, to external connections). Neither spines nor leaves may be connected to each other. This guarantees that inter-leaf traffic always passes through a spine switch, a very important requirement of the ACI solution.

The available bandwidth between leaf and spine is of course of vital importance in this topology. Meanwhile, 40G and 100G in the data center come at prices that were unthinkable just two years ago. Moreover, the VXLAN encapsulation, in conjunction with the IS-IS routing protocol in the ACI Fabric, ensures an even load distribution across multiple uplinks between leaf and spine switches.



Leaf and spine switches are based on the Cisco Nexus 9000 platform, running ACI software instead of the NX-OS operating system. Besides the leaf and spine components, the APIC (Application Policy Infrastructure Controller) is the focal point for all configuration. This results in the individual building blocks of an ACI solution with the following main functions.


Leaf switches are selected based on the following parameters, which differ by hardware generation.

  • Current leaf switches offer connectivity of up to 25 Gbps towards end systems and 100 Gbps towards the spine
  • Physical connection via 10GBase-T or 10/25 Gbps SFP towards end systems
  • Buffer and queue management, dynamic load balancing and QoS for latency-sensitive flows
  • Policy CAM: hardware memory for segmenting endpoint groups into security zones; security contracts are implemented in it
  • Multicast routing support
  • Analytics support
  • Endpoint scaling, depending on the size of the model's TCAM tables
  • FCoE support
  • Layer 4-7 redirect features via so-called service graphs
  • Micro-segmentation: isolation of endpoints based, for instance, on VM properties, IP or MAC addresses, etc. (comparable to private VLANs)


In the ACI Fabric, traffic is forwarded based on host lookups. Each leaf has a local lookup database for the end systems connected to it. Spine switches have a global view of all endpoints of the entire Fabric; all known endpoints are stored in the spine's mapping database. This reduces the hardware demands on the leaves. Spine switches are therefore dimensioned not only by uplink bandwidth but also by the number of endpoints.


There are modular and fixed spine switches:

  • Nexus 9336PQ (fixed): up to 200,000 endpoints
  • Nexus 9500, 4 slots: up to 300,000 endpoints
  • Nexus 9500, 8 slots: up to 600,000 endpoints
  • Nexus 9500, 16 slots: up to 1.2 million endpoints

Cisco Application Policy Infrastructure Controller – APIC



The APIC controller consists of three Cisco UCS C220 servers delivered as an appliance and represents the central administration of the entire ACI infrastructure. The controllers are connected physically to the leaf switches at 10G; additionally, each controller has a 1G OOB connection. The APIC is available in two sizes: APIC-M for up to 1,000 edge ports and APIC-L for more than 1,000 edge ports.


From the first versions of the APIC software, a generic graphical user interface was present that had been developed very closely along the modular policy approach of the ACI solution. This interface is predominantly used for configuring an ACI Fabric. The GUI is really just a visual representation of an underlying API of the APIC controller. This REST API is also available to the customer as an open interface for programmatic control of the entire ACI Fabric with all its functions. Currently, Python, Ruby and Windows PowerShell SDKs are supplied for the API.

In response to numerous customer requests, an additional GUI mode (basic) and a full-fledged NX-OS-style CLI were added later for the purists. This CLI is very helpful in many troubleshooting scenarios.
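To give a flavor of the REST API, here is a minimal Python sketch that only constructs the login payload and a class query URL; the host name is a placeholder, and an actual session would POST the payload to /api/aaaLogin.json and reuse the returned authentication cookie for subsequent queries.

```python
import json

APIC_HOST = "apic.example.com"  # placeholder, not a real controller

def login_payload(user, password):
    # The APIC authenticates a session via a POST to /api/aaaLogin.json
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})

def class_query_url(mo_class):
    # Class-level queries return all managed objects of a class,
    # e.g. fvTenant for tenants or fvAEPg for endpoint groups
    return f"https://{APIC_HOST}/api/class/{mo_class}.json"

print(class_query_url("fvTenant"))
# → https://apic.example.com/api/class/fvTenant.json
```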




The Cisco ACI software supports the most popular hypervisor solutions and can therefore be connected to various VM managers such as VMware vSphere, Microsoft SCVMM and OpenStack.

This integration creates an additional advantage of an ACI solution over classic data center networking, where the engineer first has to provision VLANs on each component and then again, in the same way, in the VM infrastructure. With the ACI VM integration, port profiles are created automatically on the VM manager based on the ACI service definition. There is therefore only a single point of configuration for provisioning new network services in the data center, and it covers the VM infrastructure all the way down to the virtual machine.

In the current ACI software version, one can also follow the opposite path and configure the ACI networking from the VM manager's perspective via a vCenter plugin. Additional plugins for orchestration and automation are currently available for Cisco UCS Director, CloudCenter, Microsoft Azure Pack and VMware vRealize.



In a traditional design, security is generally implemented between network segments at the layer 3 transition.

Filters within a layer 2 segment are hardly feasible in the data center. With uplink bandwidths currently around 40 to 100 Gbps, it is inefficient to place a firewall at this point. At the same time, there are growing requirements to isolate servers and their applications from each other and to control access precisely. ACI fulfills these demands with the concept of endpoint groups (EPGs) and contracts between them.

Devices such as servers (bare metal, VM, etc.) are assigned to EPGs based on defined criteria such as VLAN tag, IP address or MAC address. A device can only be assigned to one endpoint group. Systems within an endpoint group may communicate with each other without restriction (as long as no micro-segmentation is used). Devices in different EPGs are not allowed to communicate with each other by default; this can only happen via so-called contracts (which correspond to access rules).

From a technical standpoint, contracts are so-called “reflexive” access lists: server and client roles (provider and consumer) are defined, and the return traffic is activated automatically. These rules are implemented directly in the ACI leaf hardware. Hence, this kind of security is available in the Fabric at line rate, i.e. at full ACI forwarding performance.
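The behavior of EPGs and contracts can be modeled in a few lines of Python (all endpoint and EPG names are invented for the example): intra-EPG traffic is unrestricted, inter-EPG traffic is only allowed if a contract exists, and, reflexively, the return direction of a permitted consumer/provider relation is opened automatically.

```python
# Toy model of EPG membership and contracts; all names are invented.
epg_of = {"web-vm1": "web", "app-vm1": "app", "db-vm1": "db"}
contracts = {("web", "app"), ("app", "db")}   # (consumer, provider) pairs

def allowed(src, dst):
    s, d = epg_of[src], epg_of[dst]
    if s == d:
        return True  # intra-EPG traffic is unrestricted (no micro-segmentation)
    # "Reflexive" behavior: the return traffic of a contracted flow is
    # permitted automatically, hence the check in both directions.
    return (s, d) in contracts or (d, s) in contracts

print(allowed("web-vm1", "app-vm1"))  # → True (contract exists)
print(allowed("web-vm1", "db-vm1"))   # → False (zero trust by default)
```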




The described concepts of EPGs and contracts are among the unique characteristics of the ACI architecture. Whether the traffic stays inside an EPG or crosses EPGs via contracts, VXLAN is used for forwarding inside the ACI Fabric. Here, too, sensible adaptations have been made, which we will discuss in more detail later.

For a device, it should not matter whether it communicates via a traditional network or via an ACI Fabric. For layer 2 frames, ACI has the concept of bridge domains (BDs), which also represent broadcast domains. Endpoint groups are always subordinate to a BD; several EPGs can belong to one bridge domain.

Bridge domains are often compared to VLANs, as they define the boundaries for layer 2 frames analogously to VLANs and handle, among other things, ARP and other broadcast traffic. Additionally, bridge domains can be assigned (several) IP addresses as gateways, analogous to default gateways in VLAN structures. This comparison is nevertheless misleading, as bridge domains operate completely independently of VLANs: there are no VLAN tags inside the ACI Fabric. Servers in different EPGs can even be assigned to the same bridge domain, although locally (e.g. at the VM level) they sit in different VLANs!

Special optimizations for ARP, broadcast and multicast traffic were implemented in ACI. These optimizations aim to minimize traffic flooding within a broadcast domain. Without going into further detail, it is worth mentioning that many broadcast packets can be converted into unicast packets in the Fabric, as the Fabric already knows the target systems (and the leaves they are connected to) from its internally maintained mapping tables.



The top level of the ACI architecture consists of tenants. Tenants can be understood as clients, and several of them can be operated in parallel.

Each tenant can contain one or several virtual routing and forwarding (VRF) instances. Below that, each VRF can in turn host one or several bridge domains (BDs), and each BD supports several IP subnets (and their default gateways for hosts). Finally, end systems are connected to the ACI Fabric via endpoint groups (EPGs). An EPG can comprise several end systems and is ultimately subordinate to a bridge domain; several EPGs can belong to one BD.

From this, one can draw up the following example of a hierarchy with two tenants and their subordinate ACI building blocks:
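Such a hierarchy can be sketched as nested Python data (tenant → VRF → bridge domain → subnets/EPGs; all names and addresses are invented for the example):

```python
# Invented example of the ACI object hierarchy with two tenants.
fabric = {
    "tenant-A": {
        "vrf-prod": {
            "bd-frontend": {"subnets": ["10.1.1.1/24"], "epgs": ["web", "app"]},
            "bd-backend":  {"subnets": ["10.1.2.1/24"], "epgs": ["db"]},
        },
    },
    "tenant-B": {
        "vrf-lab": {
            "bd-lab": {"subnets": ["192.168.1.1/24"], "epgs": ["test"]},
        },
    },
}

def epgs_in_bd(tenant, vrf, bd):
    """Several EPGs can belong to one bridge domain."""
    return fabric[tenant][vrf][bd]["epgs"]

print(epgs_in_bd("tenant-A", "vrf-prod", "bd-frontend"))  # → ['web', 'app']
```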

As already mentioned, devices within an endpoint group can communicate “freely”, without security contracts. An endpoint group always belongs to exactly one bridge domain. Devices in different EPGs can communicate with each other via contracts; this is also possible between EPGs in different bridge domains and VRFs. Devices in different tenants can communicate within the ACI Fabric as well, even though they sit in different VRFs. This is made possible by a route-leaking feature within the ACI Fabric. However, this inter-tenant communication has to be configured consciously and explicitly, in line with ACI's “zero trust” contract architecture.

If contracts are exported between tenants, they can communicate with each other via the integrated route leaking:

The fundamental routing architecture inside and outside an ACI Fabric is covered in the next chapter.



A leaf switch that receives a packet from a host has to determine whether the destination IP address lies inside or outside the ACI Fabric. To make this possible, a leaf switch knows all subnets of all bridge domains of its own tenants. All these ACI-internal networks point to the spine switches (the spine proxy address).


Thus, both possibilities are covered in the ACI Fabric:

  • ACI-internal networks belong to tenants and their bridge domains
  • ACI-external networks correspond to L3out routes that are learned inside the ACI Fabric

IP routing has to be activated explicitly per bridge domain. If it is switched on, the spine learns the IP addresses of a BD in addition to the MAC addresses. Each leaf additionally holds a cache with local and remote destinations. A leaf switch first tries to deliver packets directly using its local cache. If that is not possible, the spine takes over, as long as the destination is registered in its mapping table.

If the spine layer does not know the destination either, ARP packets are generated towards all leaf switches in the broadcast domain.

For destinations outside the ACI Fabric, leaf switches learn the routes via the MP-BGP routing protocol. This is necessary because spines only perform exact-match route lookups instead of the longest-match lookups usually done by routers. External destinations are recognized by comparing the destination IP with all subnets of all bridge domains of the own tenant: if no match with the internal networks is found, the leaf consults its routing table for external routes.
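The decision logic described above can be sketched as follows (all tables and addresses are invented placeholders): first the leaf-local cache, then the spine proxy for Fabric-internal subnets, otherwise the external L3out routes.

```python
import ipaddress

# Invented example tables of one leaf switch.
local_cache = {"10.1.1.10": "local-port-1", "10.1.2.20": "remote-leaf-102"}
bd_subnets = ["10.1.1.0/24", "10.1.2.0/24"]   # all BD subnets of the tenant

def forward(dst_ip):
    if dst_ip in local_cache:                 # 1. hit in the local cache
        return local_cache[dst_ip]
    addr = ipaddress.ip_address(dst_ip)
    for subnet in bd_subnets:                 # 2. Fabric-internal subnet?
        if addr in ipaddress.ip_network(subnet):
            return "spine-proxy"              # spine does an exact-match lookup
    return "l3out"                            # 3. external route via MP-BGP

print(forward("10.1.1.10"))  # → local-port-1
print(forward("10.1.1.99"))  # → spine-proxy
print(forward("8.8.8.8"))    # → l3out
```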

Due to the described ACI hierarchy, inter-tenant communication always takes place between VRF instances. The logical consequence would be that endpoints in different tenants could only reach each other via ACI-external routing. However, the ACI Fabric offers the possibility to allow this traffic between tenants inside the Fabric, by means of route leaking that has to be configured deliberately in both tenants via contract export. This guarantees that one tenant (e.g. a customer) does not import the routes of another without the latter's knowledge.

Finally, a reminder: from the standpoint of the endpoint groups, contracts for targets outside their own EPG must exist, and the communication between the EPGs must be permitted. This process is essentially identical whether a packet is “switched” within a bridge domain or has to be delivered across BD/VRF boundaries through a routing process. Put differently, EPGs with all their security contracts are a logic that is abstracted from the forwarding control plane.


Cisco ACI supports an arbitrary mix of leaf and spine platform generations within one ACI Fabric. This is an advantage for long-term operation that should not be underestimated. You can always add leaf switches of the most recent generation to an existing Fabric and thereby gain new features on those devices. Local features, such as the EPG classification mode, are available immediately on the new leaf. Global features, for instance multicast routing, would however have to be brought into operation in isolation on these leaves and require more thorough planning.

When extending a Fabric, there is maximum flexibility for the spine hardware: the spine type used has no influence on the features supported by the Fabric as a whole.

Generally speaking, a spine is chosen only by the following criteria:

  • Uplink bandwidth between leaves and spines
  • Scaling of the mapping database (the maximum number of devices operated simultaneously in the Fabric)

Running different software versions on the leaves is supported. In this case, features not supported by leaves with older software are acknowledged with a “reject” on the APIC controller, but no fault is raised for the feature in the Fabric as a whole.



Finally, some thoughts on how to integrate an ACI infrastructure into an existing network. The functions of an ACI solution described above make it clear: a lot has to be considered when migrating an existing, VLAN-based data center network with all its devices into a new ACI architecture.



The simplest implementation is possible when you can start from scratch. Naturally, this is the case with a new data center location, which offers the best conditions for a clean implementation of ACI according to its concepts.


The advantages are obvious:

  • A new IP address concept adjusted to the bridge domain design
  • Definition of suitable service groups and applications in endpoint groups
  • Security policy definitions in contracts derived from them
  • A controlled and manageable implementation per service/application from the beginning


It is not necessarily the case that you have to replace all components of an existing data center network completely. A clearly defined layer 3 coupling between the existing network and a new ACI infrastructure enables the structured approach described above, in the form of a migration of manageable complexity. The disadvantage is that, during migration into the ACI Fabric, you will have to give all existing services of the “old world” new IP addresses so that they can be defined and implemented in a controlled way according to an ACI security contract. This can be a very tedious process, and it has to be evaluated case by case in the initial planning stage whether this is a valid approach.


The implementation of this variant is well controllable and manageable:

  • Layer-3-only coupling at the transition to the existing network
  • Contracts according to ACI design guidelines
  • Service-by-service migration onto new server components, or onto existing servers that can be provisioned completely new into the ACI Fabric


This approach is defined by the requirement to migrate existing servers, services or applications into an ACI Fabric with the lowest possible effort on the side of the end systems. To make this possible, a layer 2 coupling between the existing network and the ACI Fabric has to be implemented in addition to the layer 3 coupling.

Cisco calls these brownfield migrations “network-centric ACI”. Here, the existing VLAN structure is effectively transferred unchanged into the ACI Fabric via identical bridge domains. In this case, the ACI hierarchy is aligned completely (VLAN = EPG = BD).
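This 1:1 alignment can be sketched in a few lines (the VLAN IDs and naming scheme are invented examples): every existing VLAN becomes exactly one bridge domain with exactly one EPG.

```python
# Network-centric mapping: one BD and one EPG per existing VLAN.
existing_vlans = [10, 20, 30]   # example VLAN IDs from the "old world"

def network_centric_objects(vlan):
    return {"bd": f"bd-vlan{vlan}", "epg": f"epg-vlan{vlan}"}

mapping = {v: network_centric_objects(v) for v in existing_vlans}
print(mapping[10])  # → {'bd': 'bd-vlan10', 'epg': 'epg-vlan10'}
```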

With this mapping based on the existing VLANs, security contracts can, and must, be applied between the VLANs (= BDs and EPGs), as may already have been the case via a central data center firewall. The advantage here is the security policy integrated directly into the Fabric at maximum forwarding performance.

Once the ACI network has been prepared in this manner, servers merely have to be “pushed” into the ACI Fabric (for example via VMware vMotion). Bare-metal servers, too, can be migrated far more easily this way.

After completing the server migration, one can start implementing granular security by allocating individual end-system resources to different endpoint groups in order to achieve a higher level of security in the data center. With this approach, the assigned bridge domains have to remain unchanged because of the given IP addresses. New systems, of course, can be connected to the ACI Fabric in any way desired in the future.

A direct migration from an existing VLAN structure into different EPGs is possible too, but hardly manageable, as you would have to know exactly, while moving the resources, which communication exists between the servers in the data center. This kind of migration is not recommended.

A particularly prominent feature of ACI is the granular security integrated into a high-performance data center network. But the many other implemented and improved concepts, which address limitations we know from the past, should not be disregarded either. On the other hand, it must also be mentioned that certain requirements cannot be implemented in Cisco ACI as easily as we were used to in the past.

There is of course a lot more to report on Cisco ACI; here we have only scratched the surface. Herwig Gans is gladly available for any further information or questions.

Herwig Gans
Senior Systems Engineer
