NSX-T + eVPN: Understanding and walkthrough configuration guide for NSX-T 3.0 and Cisco ASR 9000

So, NSX-T 3.0 has finally landed with a plethora of new features, and one of great interest to the Telco space is Ethernet VPN (eVPN). Hopefully this blog post will give you an insight into what eVPN is, how it's delivered within NSX-T, and a step-by-step configuration guide on getting it to work between an NSX-T Edge Node and a Cisco ASR 9000.

eVPN Primer:

The first thing to note is that eVPN is not a new technology – it's just new to NSX-T. eVPN generally runs in two different modes (L2 vs. L3) – more on that later – but the focus for NSX-T is the L3 mode; more specifically, the use case of mapping a VRF through a Multi-Protocol BGP (MP-BGP) eVPN Address Family (AF) session between an NSX-T Tier 0 (T0) router and a Data Center Gateway (DCGW) implementation (Cisco Nexus 9000, Cisco ASR 9000, etc.).

One key thing to note is that the implementation of eVPN requires two new constructs for NSX-T:

  • Virtual eXtensible Local Area Networks (VXLAN) enablement
  • Tier 0 Virtualised Routing & Forwarding Instances (VRFs)

Yes, you read that right: for the implementation of eVPN, NSX-T leverages VXLAN between the T0 and the DCGW. This is primarily because the number of upstream DCGWs supporting GENEVE encapsulation is low (and adding support may require a hardware refresh).

So, back to eVPN: MP-BGP is implemented as a control-plane protocol for VXLAN. This model introduces control-plane learning for hosts and networks connected beyond the Tunnel EndPoint (TEP), which enables control / data plane separation (watch this space in the future) for L3 forwarding over a VXLAN network.

In the case of the NSX-T implementation, because we are focusing on an L3 eVPN interconnect, the routes we exchange are eVPN Type 5 prefixes; referencing the IETF BESS draft, we can see the structure of the Type 5 IP Prefix route below:

With this structure, we have the ability to transport IP (IPv4 and IPv6) prefixes over an MP-BGP session; to do this, we also leverage the EVPN address family (aka the bgp l2vpn evpn AF in Cisco terminology).

To quote from the draft, the eVPN Route Type 5 decouples IP prefix advertisements from the MAC/IP route advertisements (Route Type 2) in eVPN (RFC 7432).

We will get into more detail later in the post with examples of these prefixes (both IPv4 and IPv6), as well as decoding the BGP NLRI so that we can reference back to the draft RFC and see how this implementation actually works.
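Since the figure from the draft didn't make it into this post, here is a sketch of the Type 5 field layout as I read it in the BESS draft (since published as RFC 9136). This is a Python illustration of the field sizes, not NSX-T code:

```python
# Sketch of the eVPN Type 5 (IP Prefix) route NLRI layout, per my
# reading of the IETF BESS draft (later RFC 9136). Sizes in bytes;
# IP Prefix and GW IP are 4 bytes for IPv4 and 16 for IPv6, so the
# whole NLRI is 34 or 58 bytes.
TYPE5_FIELDS_IPV4 = [
    ("route_distinguisher", 8),
    ("ethernet_segment_id", 10),
    ("ethernet_tag_id", 4),
    ("ip_prefix_length", 1),
    ("ip_prefix", 4),
    ("gw_ip_address", 4),
    ("mpls_label", 3),  # carries the VXLAN VNI in this use case
]

# Same layout with 16-byte address fields for IPv6
TYPE5_FIELDS_IPV6 = [
    (name, 16 if name in ("ip_prefix", "gw_ip_address") else size)
    for name, size in TYPE5_FIELDS_IPV4
]

def nlri_length(fields):
    """Total NLRI length in bytes for a given field layout."""
    return sum(size for _, size in fields)

print(nlri_length(TYPE5_FIELDS_IPV4))  # 34
print(nlri_length(TYPE5_FIELDS_IPV6))  # 58
```

The 34/58-byte total is how a receiver tells an IPv4 Type 5 route from an IPv6 one.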

MP-EBGP (eVPN) Use Cases

A common telco use case for eVPN involves the transport of Corporate APNs (C-APNs) over an L3 Data Center fabric; in the Evolved Packet Core (EPC) use cases, each Corporate Access Point Name (APN) has its own VRF that typically extends into the (now virtualized) packet-gateway node.
The challenge, as shown below, is how to maintain this routing isolation on top of a common / shared L3 fabric that does not implement MPLS – basically, how to extend the VRF from the VM to the PE / DCGW. Sure, you could provide a regular VLAN and then create an overlay on the L3 fabric, but that's neither a simple nor a scalable solution – the idea being to deploy the underlying fabric and (as much as possible) leave it alone.

[Image: l3fab.png]

Hence MP-BGP eVPN and the NSX-T VRF capability. Although the diagram above doesn't include the T0 / Edge Nodes, they are a fundamental element of the NSX-T eVPN solution; basically, the connectivity model looks similar to the model below: (Image to be added)…

Note: this diagram shows only a single NSX-T Edge node; however, the solution fully works in active/active mode – you just need to repeat the configuration on multiple NSX-T Edge Nodes and configure BGP multipath on the VM, T0 and DCGW.


Let's break this down a little so that it's clearer to understand. What we have is a Virtual Network Function (VNF – a VM) that is running VRFs (in the example configs we will see later, this is a Cisco CSR-1Kv node); this runs vanilla eBGP between the VRF (or global construct) and the new T0 VRF construct on NSX-T. Note: this needs to be an 802.1Q-tagged interface on the VM, so we need a Geneve segment with Guest VLAN Tagging (GVT) enabled – the GVT is used to distinguish the VRF constructs on the VNF.

That pretty much takes care of the VNF side of things, but what about the T0 and this new VRF construct?

[Image: t0-vrf-1.png]

Here we see the new configuration elements of the Tier 0 Gateway – as you can see, we have a VRF defined, VRFBlue; this is the new NSX-T T0 VRF construct that provides:

  • Tenant routing isolation
  • Increased T0 scale on a single edge node – prior to T0 VRFs we could only have a single T0 per edge node; this now increases the effective number of T0s to 100 through the use of VRFs.

[Image: t0-definition-1.png]

In addition to creating these T0 VRFs, there are some specific VRF settings required for configuration of the eVPN service; under the new VRF Settings section we see the following options:

Route Distinguisher (RD): leveraged to create unique RD+IPvX prefixes in the routing tables
Route Targets (RT): well-known BGP extended communities that determine route import and export targets
eVPN Transit VXLAN Network Identifier (VNI): the upstream VXLAN (between the T0 and the DCGW) that will be used for transporting prefixes (and traffic) between the T0 and the DCGW – this is the VXLAN encapsulation addition to NSX-T that allows interop between the T0 and the DCGW for eVPN.
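As a toy illustration of what the RD buys us (a sketch, not NSX-T code): two tenants can reuse the exact same prefix, and qualifying each route with its VRF's RD keeps them distinct in the MP-BGP table:

```python
def route_key(rd: str, prefix: str) -> str:
    """Qualify a prefix with its VRF's Route Distinguisher – this is
    how MP-BGP keeps overlapping tenant prefixes distinct."""
    return f"{rd}:{prefix}"

# Two VRFs advertising the identical prefix remain separate routes
# (example RD values, in the same ASN:value form used in this post):
blue = route_key("65000:120", "10.1.1.0/24")
red = route_key("65000:130", "10.1.1.0/24")
assert blue != red
print(blue)  # 65000:120:10.1.1.0/24
```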


So, thus far we have seen the draft that is used for the eVPN implementation, a common use case, and a little detail around the T0 VRF construct; let's start a walkthrough of how to get this configured from scratch on NSX-T.

I will assume here that you have NSX-T deployed, a host cluster prepared, and an Edge Cluster deployed with at least one edge node and a regular T0 configured.
Note: all configuration is done under the Policy UI (not the legacy Manager UI – you won't see the option for toggling the UI unless you have something configured that relies on it anyway).

Step 1: Create a VXLAN pool used for the upstream connectivity between the T0 and the DCGW:

Under Network settings, go to VNI Pool and create a new VNI pool; give it a name and a start–end range. These are the VXLAN segments that will be used upstream towards the DCGW.
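If you're wondering what bounds are sensible for the range: a VXLAN VNI is a 24-bit field, so any pool has to fit inside it. A quick generic sketch (not an NSX-T API call; the pool values are just the ones I use in this lab):

```python
# A VXLAN VNI is a 24-bit identifier, so a pool must sit within
# 0..2**24 - 1 (16777215).
VNI_MAX = 2**24 - 1

def vni_pool(start: int, end: int) -> range:
    """Sanity-check and build a VNI pool range (illustrative only)."""
    if not (0 <= start <= end <= VNI_MAX):
        raise ValueError(f"VNI pool {start}-{end} outside 24-bit range")
    return range(start, end + 1)

pool = vni_pool(120000, 120100)
print(120020 in pool)  # True - the transit VNI used later in this lab
```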

[Image: vnipool.png]

Once this is created, go to the main T0 and select this pool under the eVPN settings – if you don't, you will have an issue creating your T0 VRF later. Also enable BGP and allocate your BGP Local AS, but do not add any neighbours (yet)…

[Image: evpnpool.png]

Step 2: Create the Geneve GVT segment

Under Segments, create a new segment in your Overlay domain. Note that here we have set the VLAN range (100-300); these are the VLANs that will be used as sub-interfaces towards the VNF (VM). Remember, all traffic from the T0 to the VM is sent as a VLAN inside the Geneve overlay – we need to make sure we set a valid range here.
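For the curious, the GVT is just a standard 802.1Q tag inserted by the guest and carried inside the Geneve overlay. A small sketch of what that 4-byte tag looks like on the wire (illustrative only):

```python
import struct

def dot1q_tag(vid: int, pcp: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag (TPID 0x8100 + TCI) that the guest
    inserts in its frames - the Guest VLAN Tag in this design.
    A sketch for illustration, not NSX-T code."""
    if not 0 <= vid <= 4094:
        raise ValueError("VLAN ID out of range")
    tci = (pcp << 13) | vid  # priority bits, then the 12-bit VLAN ID
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(197).hex())  # 810000c5 - VLAN 197, as used on the CSR later
```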

[Image: segment.png]

Step 3: Create a new Tier 0 VRF construct

Under Tier-0 Gateways, select Add Gateway and choose VRF; this will deploy our first T0 VRF. You need to fill in the following details:

  • Name – Give the VRF a name
  • T0 Gateway – Connect this to your main (Parent) T0 Gateway
  • VRF Settings: RD – Give this VRF a unique RD value
  • eVPN Transit VNI – Select a VXLAN number you wish to use upstream for this VRF. Note: this must exist within the range configured in step 1.
  • Route Targets – Here you have a choice to use automatic route-targets or select them manually

[Image: deploy-t0vrf-1.png]

Personally, I prefer to set the route-targets manually; that way I have a little more control and I can track what values are used between the T0 and the DCGW. Add your route-targets here in the form BGP-ASN:xxx, where xxx represents a number you wish to use to define how prefixes are imported / exported between peers.
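If you're curious what an RT like 65000:120 actually looks like on the wire, it's an 8-byte BGP extended community (two-octet-AS specific, type 0x00, subtype 0x02, per RFC 4360). A quick sketch:

```python
import struct

def rt_ext_community(asn: int, value: int) -> bytes:
    """Encode an ASN:value route-target as the 8-byte two-octet-AS
    specific BGP extended community (RFC 4360). Illustration only."""
    # type 0x00 (two-octet AS), subtype 0x02 (route target),
    # then the 2-byte AS and 4-byte assigned value
    return struct.pack("!BBHI", 0x00, 0x02, asn, value)

rt = rt_ext_community(65000, 120)  # "65000:120" as used in this lab
print(rt.hex())  # 0002fde800000078
```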

[Image: evpn-rts-1.png]

Step 4: Create an interface between the T0 VRF and the VNF (VM)

Here we will build the interface between our T0 VRF and the VM; the following information needs to be configured:

  • Name – Interface Name
  • Type – This must be configured as a Service interface
  • IP Address – this is the IPv4 and / or IPv6 address of the T0 Interface
  • Connected to (Segment) – use the Segment we built in step 2 that has guest VLAN tags configured
  • Edge Node – which edge node you want this interface to belong to (in my case EN1)
  • Access VLAN ID – this is the Guest VLAN Tag (GVT) that is used between the T0 and the VM; it needs to be within the range of VLANs permitted when the segment was created in step 2.

[Image: t0vnf-interface.png]

Note: for now I'm only configuring an interface between NSX Edge Node 01 and the VM. If I wanted Edge Node redundancy, I would create a second service interface on the VRF with a different set of IP addresses AND a different Access VLAN ID, and bind it to NSX Edge Node 02 – this would mean that from a single vNIC of the VM there would be two GVTs, one terminating on EN1 and the other on EN2.

At this point we can hop over to our VM and check connectivity towards this new T0 VRF interface; in the case of this lab, I'll connect to my CSR-1Kv node:

CSR-1KV#show run int gig 2.197
Building configuration…
Current configuration : 121 bytes
!
interface GigabitEthernet2.197
 encapsulation dot1Q 197
 vrf forwarding blue
 ip address 10.1.1.1 255.255.255.252
end

Here we can see the CSR-1Kv is using GigabitEthernet2 (under vSphere this interface is connected to the Geneve segment we built in step 2), and we have a matching encapsulation (VLAN 197).
Note: here I've configured VRFs on the CSR-1Kv so that I can create many VRFs and test 1, 2 or 10 VRFs with a single VM. You don't have to do this, but it's recommended if you want to use a single VM and demonstrate multiple isolated routing tables end-to-end between VM and DCGW.

CSR-1KV#ping vrf blue 10.1.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/3 ms
CSR-1KV#ping vrf blue ipv6 aaaa:bbbb::2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to AAAA:BBBB::2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
CSR-1KV#

Success! We have both IPv4 AND IPv6 reachability between our VM and the T0 VRF; let's add some BGP to this to make it interesting.

Step 5 – Configuring BGP between the VNF-VM and the Tier-0 VRF

Under the VRF Tier 0, go to BGP and configure your BGP neighbours; be sure to set the matching Remote AS and so on.
Note: the BGP configuration of the T0 VRF (in terms of BGP AS number, etc.) is inherited from the parent T0 – so for now, at least make sure you have BGP enabled and a BGP AS configured there (we will add the eVPN part of the BGP configuration to the parent T0 shortly).

[Image: config-bgp.png]

Below is the configuration from the CSR-1Kv. Remember, we have the interface in a VRF here, so we need to configure BGP IPv4 and IPv6 inside the VRF; you need to enable the command vrf upgrade-cli on the CSR-1Kv to be able to implement this. I've also added the global VRF configuration, as well as a loopback interface I want to advertise over BGP to the DCGW VRF constructs.

vrf definition blue
 rd 65100:1
!
address-family ipv4
 exit-address-family
!
address-family ipv6
 exit-address-family
!
interface Loopback6
 vrf forwarding blue
 ip address 11.11.11.11 255.255.255.255
 ipv6 address 6666:6666::1/128
end
!
router bgp 65100
bgp router-id 192.168.100.194
bgp log-neighbor-changes
!
address-family ipv4 vrf blue
 network 11.11.11.11 mask 255.255.255.255
 neighbor 10.1.1.2 remote-as 65000
 neighbor 10.1.1.2 update-source GigabitEthernet2.197
 neighbor 10.1.1.2 version 4
 neighbor 10.1.1.2 timers 5 15
 neighbor 10.1.1.2 activate
 neighbor 10.1.1.2 send-community both
 neighbor 10.1.1.2 soft-reconfiguration inbound
exit-address-family
!
address-family ipv6 vrf blue
 network 6666:6666::1/128
 neighbor AAAA:AAAA::1 remote-as 65000
 neighbor AAAA:AAAA::1 activate
 neighbor AAAA:AAAA::1 send-community both
 neighbor AAAA:BBBB::2 remote-as 65000
 neighbor AAAA:BBBB::2 update-source GigabitEthernet2.197
 neighbor AAAA:BBBB::2 timers 5 15
 neighbor AAAA:BBBB::2 activate
 neighbor AAAA:BBBB::2 send-community both
 neighbor AAAA:BBBB::2 soft-reconfiguration inbound
exit-address-family
!

Looking at some common BGP commands on the CSR-1Kv, I can see that the BGP v4 and v6 neighbours are up!
Note: here I'm receiving prefixes because my lab is already finished end-to-end; at this point you shouldn't be seeing any prefixes (although you could go to the T0 VRF, configure a loopback and advertise that).

CSR-1KV#show bgp vrf blue all sum
For address family: IPv4 Unicast


For address family: IPv6 Unicast


For address family: VPNv4 Unicast
BGP router identifier 192.168.100.194, local AS number 65100
BGP table version is 8, main routing table version 8
2 network entries using 512 bytes of memory
3 path entries using 384 bytes of memory
3/2 BGP path/bestpath attribute entries using 840 bytes of memory
1 BGP AS-PATH entries using 40 bytes of memory
2 BGP extended community entries using 80 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1856 total bytes of memory
1 received paths for inbound soft reconfiguration
BGP activity 16/12 prefixes, 27/21 paths, scan interval 60 secs

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.1.1.2        4        65000    1992    1948        8    0    0 02:45:48        1

For address family: VPNv6 Unicast
BGP router identifier 192.168.100.194, local AS number 65100
BGP table version is 5, main routing table version 5
2 network entries using 560 bytes of memory
3 path entries using 468 bytes of memory
3/2 BGP path/bestpath attribute entries using 840 bytes of memory
1 BGP AS-PATH entries using 40 bytes of memory
2 BGP extended community entries using 80 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1988 total bytes of memory
1 received paths for inbound soft reconfiguration
BGP activity 16/12 prefixes, 27/21 paths, scan interval 60 secs

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
AAAA:BBBB::2    4        65000    1992    1947        5    0    0 02:45:48        1

Step 6 – Build the T0 Uplink (towards the fabric)

Here we are using a VLAN uplink (in my case VL102) towards the fabric. In my case the T0 Edge Node and the Cisco ASR9000 are connected on the same segment – in most deployments this will NOT be the case, thus eBGP multihop is possible for the VXLAN use case.
Remember, this is just a VLAN for the uplink; the VXLAN is IP routed and NOT L2 switched.
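One practical consequence of the routed VXLAN: budget for the encapsulation overhead in the underlay MTU. A back-of-the-envelope sketch (assuming an untagged IPv4 outer header; add 4 bytes per outer 802.1Q tag):

```python
# VXLAN encapsulation overhead: outer Ethernet + outer IPv4 + outer
# UDP + VXLAN header.
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8
OVERHEAD = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN_HDR  # 50 bytes

def underlay_frame_size(inner_frame: int) -> int:
    """Size on the wire of an encapsulated inner Ethernet frame."""
    return inner_frame + OVERHEAD

print(OVERHEAD)                   # 50
print(underlay_frame_size(1514))  # 1564 - why a raised/jumbo MTU helps
```

This is why you'll see mtu 9000 on the ASR9k interface later in this post.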

[Image: t0uplink.png]

Right now this lab is set up to use only a single ASR9000, so I've bound this segment to NSX Edge Node 01; if you wanted a redundant uplink, you would build a second uplink to NSX Edge Node 02 and obviously give that a different IP address.

Step 7 – Set the eVPN TEP address between (Parent) Tier-0 and the DCGW (Cisco XRv in this lab)

First, let's go back to the parent T0 gateway (not the VRF) and edit it – here you can set the eVPN tunnel endpoint. In my case I am just using the same address as the interface, although you could use an altogether different address (more on this shortly).

[Image: evpn-tep.png]
[Image: t0-tep.png]
[Image: t0redist.png]

Note: if you set the tunnel endpoint to a different IP address, then you need to make sure it is reachable from the DCGW, effectively converting the MP-eBGP session into a multi-hop session (which is fully supported). To support this, there is a new option to advertise the eVPN TEP address across the IPv4 portion of the MP-BGP session between the T0 and the DCGW: under route redistribution (which is a little different in 3.0) you will see an option to advertise the TEP address (as I'm using the interface address, I don't need to do this).

Step 8 – Configuring MP-eBGP on the Tier 0

First, let's take a look at the regular BGP configuration – nothing special. We already configured the BGP AS; configure ECMP if you want multipath. Inter-SR iBGP allows an iBGP session to be established between the Edge Nodes for uplink resiliency (this may be important if your VM is not dual-connected to both edge nodes).

[Image: bgp1.png]

Next, add the DCGW (the ASR9k in this lab) as a BGP neighbour on the parent T0 – this is where we finally add the neighbour we held off configuring earlier.

[Image: bgp2.png]

What's important is to configure the Route Filter: click on this, and we can see a new L2VPN_EVPN address family that we need to enable to ensure that the eVPN address family is configured.

[Image: bgp3.png]

Voila! By this point, if everything is good, you should have an MP-eBGP session between your T0 and the ASR9k. Wait, what's that you say – how do I configure the ASR9k? Oh yeah…

Step 9 – Configure your ASR9000 (or XRv node)

So let's take a little look at the ASR9k configuration and go through it in some detail so we can see what each section is doing. First we need to build our VRFs on the ASR9k – I've been very creative and used the same VRF name here throughout (keep it simple). Note that we need the stitching keyword to redistribute prefixes between the eVPN table and the IP BGP (VRF) table.

vrf blue
 address-family ipv4 unicast
  import route-target
   65000:120 stitching
  !
  export route-target
   65000:120 stitching
  !
 !
 address-family ipv6 unicast
  import route-target
   65000:120
   65000:120 stitching
  !
  export route-target
   65000:120
   65000:120 stitching
  !
 !
!

For good measure I also added a loopback (to test whether this prefix makes it all the way through to the VM we configured earlier):

interface Loopback66
 vrf blue
 ipv4 address 65.65.65.1 255.255.255.0
 ipv6 address 7777:7777::1/64
!

The interface towards the T0 in my lab is Gig 0/0/0/0 with a simple configuration:

interface GigabitEthernet0/0/0/0
 description VM NetworkAdapter4
 mtu 9000
 ipv4 address 192.168.102.243 255.255.255.0
!

Next I need some new constructs on the ASR9k: a BVI and an NVE interface (the NVE is the Network Virtualisation Endpoint interface). We also need to configure the l2vpn section.

interface BVI100
 host-routing
 ipv4 address 22.222.22.1 255.255.255.0     <-- Can use any IP here.
!
interface nve1
 member vni 120020
  vrf blue
  host-reachability protocol bgp
 !
 overlay-encapsulation vxlan
 source-interface Loopback0
!
l2vpn
 bridge group blue
  bridge-domain blue
   routed interface BVI100
   !
   member vni 120020
   !
  !
 !

What we are doing here on interface nve1 is creating a VXLAN (in this case 120020) – you will note this matches the eVPN transit VNI that we configured back in step 3. We are binding this VXLAN segment to VRF blue and using BGP for host reachability; we are also telling the NVE construct that we want to use VXLAN as our overlay and to source the tunnels from Loopback0.

Lastly, under l2vpn (which is traditionally used for different configurations), we build a bridge group and bridge domain, use the routed BVI interface, and bind this again to VNI 120020.

If you remember IOS-XR: without configured BGP policies, no routes are allowed ingress or egress, so let's not forget to make a simple policy first:

route-policy IN
  pass
end-policy

Lastly, we have the BGP configuration. This is a little longer than what we have seen before, so I'll try to comment on the relevant parts inline…

router bgp 3301
 bgp router-id 44.44.44.1
 address-family ipv4 unicast
  network 44.44.44.1/32                                  <--- Advertise our Lo0 IP via IP4 AF, this is needed as this is the source of the ASR9k TEP.
 !
 address-family vpnv4 unicast
  advertise best-external
 !
 address-family ipv6 unicast
 !
 address-family vpnv6 unicast
 !
 address-family l2vpn evpn
 !
 neighbor 192.168.102.240                                 <-- This is the T0 neighbour
  remote-as 65000                                         <--- The T0 BGP AS
  ebgp-multihop 10
  address-family ipv4 unicast
   route-policy IN in
   route-policy IN out                                    <--- Lets not forget those route-policies
  !
  address-family l2vpn evpn                               <--- Enabling L2VPN EvPN AF
   import stitching-rt re-originate                       <--- Import eVPN prefixes with Stitching RT
   send-community-ebgp
   route-policy IN in
   encapsulation-type vxlan                               <--- This is VERY important, without this line I was not able to see BGP prefixes being sent to the NSX T0
   route-policy IN out
   send-extended-community-ebgp
   advertise vpnv4 unicast re-originated stitching-rt     <--- 
   advertise vpnv6 unicast re-originated stitching-rt     <--- Advertise our V4 and V6 prefixes
   soft-reconfiguration inbound always
  !
 !
 vrf blue                                                 <--- Remember the VRF Blue configuration
  rd 65000:120                                            <--- Set the VRF RD
  address-family ipv4 unicast                             <--- Enable IPv4
   advertise best-external
   maximum-paths ebgp 4
   redistribute connected metric 1 route-policy IN
  !
  address-family ipv6 unicast                            <--- Enable IPv6
   redistribute connected route-policy IN
  !
 !
!

At this point we should be set and everything should be working… let's have a look at some troubleshooting commands on the ASR9k and the NSX-T Edge Node:

// Lets check the interface NVE and its configuration 

interface nve1
 member vni 120020
  vrf blue
  host-reachability protocol bgp
 !
 overlay-encapsulation vxlan
 source-interface Loopback0
!

// Lets also make sure its UP!

RP/0/RP0/CPU0:ASR9000-7.1.1#show int nve 1
Sun Apr 19 18:18:09.849 UTC
nve1 is up, line protocol is not ready
  Interface state transitions: 1
  Hardware is Overlay
  Internet address is Unknown
  MTU 1500 bytes, BW 0 Kbit
     reliability Unknown, txload Unknown, rxload Unknown
  Encapsulation Unknown(0),  loopback not set,
  Last link flapped 3d11h
  Last input Unknown, output Unknown
  Last clearing of "show interface" counters Unknown
  Input/output data rate is disabled.

// Lets check the VNI (VXLAN) mapping and make sure that its also UP

RP/0/RP0/CPU0:ASR9000-7.1.1#show nve vni
Sun Apr 19 18:19:20.450 UTC
Interface  VNI          MCAST        VNI State        Mode
nve1       120020       0.0.0.0      Up               L3 Control

// If everything is working you should also see a NVE peer to the NSX-T0

RP/0/RP0/CPU0:ASR9000-7.1.1#show nve peers
Sun Apr 19 18:19:48.438 UTC
Interface  Peer-IP        Local VNI  Output VNI Peer-MAC        Mode      Flags
nve1       192.168.102.240   120020     120020     0250.5600.0001  control   0xc

// We should also see an L2VPN BGP peering established with the NSX T0. NOTE: here we only see one peer
// because we only configured the T0 on Edge Node 01; if we had allocated an uplink / BGP neighbour to
// Edge Node 02 we would see two.

RP/0/RP0/CPU0:ASR9000-7.1.1#show bgp l2vpn evpn summary
Sun Apr 19 18:20:24.705 UTC
BGP router identifier 44.44.44.1, local AS number 3301
BGP generic scan interval 60 secs
BGP table state: Active
Table ID: 0x0   RD version: 0
BGP main routing table version 44
BGP scan interval 60 secs

BGP is operating in STANDALONE mode.


Process       RcvTblVer   bRIB/RIB   LabelVer  ImportVer  SendTblVer  StandbyVer
Speaker              44         44         44         44          44          44

Neighbor        Spk    AS MsgRcvd MsgSent   TblVer  InQ OutQ  Up/Down  St/PfxRcd
192.168.102.240   0 65000    5025    5094       44    0    0 02:49:55          2

// Here we also see we are receiving two prefixes, so let's take a look:

RP/0/RP0/CPU0:ASR9000-7.1.1#show bgp l2vpn evpn
Sun Apr 19 18:21:17.035 UTC
BGP router identifier 44.44.44.1, local AS number 3301
BGP generic scan interval 60 secs
BGP table state: Active
Table ID: 0x0   RD version: 0
BGP main routing table version 44
BGP scan interval 60 secs

Status codes: s suppressed, d damped, h history, * valid, > best
              i - internal, r RIB-failure, S stale, N Nexthop-discard
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network            Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 65000:100
*> [5][0][32][11.11.11.11]/80
                      192.168.102.240          0             0 65000 65100 i
*> [5][0][128][6666:6666::1]/176
                      192.168.102.240          0             0 65000 65100 i

Processed 2 prefixes, 2 paths

// So, we see the two prefixes (one v4, one v6); the NH is 192.168.102.240 (the IPv4 peer address of the
// NSX T0 on Edge Node 01).

// What you might find interesting is that at this point we are NOT seeing the local prefix of VRF blue
// in the eVPN table, but I can assure you that it's being sent to the NSX T0. Even if you check the
// ASR9k's advertised prefixes it will show a count of 0! But let's check on NSX.

nsx-edge01>
nsx-edge01> get logical-routers
Logical Router
UUID                                   VRF    LR-ID  Name                              Type                        Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0      0                                        TUNNEL                      3    
b2e75c32-5cb2-4cc6-91ac-55421daf1716   1      5      SR-Main T0 GW                     SERVICE_ROUTER_TIER0        10   
bb9b07dc-8f8d-43b2-a7e3-417492890676   5      31     SR-VRF-VRFBlue                    VRF_SERVICE_ROUTER_TIER0    6

// Let's switch to the main T0 SR construct (also called a VRF here - although this is an internal VRF construct, not the VRF we have just configured).

nsx-edge01> vrf 1

// Let's check BGP eVPN

nsx-edge01(tier0_sr)> get bgp evpn
BGP table version is 147, local router ID is 10.1.1.60
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete
EVPN type-2 prefix: [2]:[EthTag]:[MAClen]:[MAC]:[IPlen]:[IP]
EVPN type-3 prefix: [3]:[EthTag]:[IPlen]:[OrigIP]
EVPN type-4 prefix: [4]:[ESI]:[IPlen]:[OrigIP]
EVPN type-5 prefix: [5]:[EthTag]:[IPlen]:[IP]

   Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 65000:100
*> [5]:[0]:[32]:[11.11.11.11]
                    192.168.102.240          0             0 65100 i
*> [5]:[0]:[128]:[6666:6666::1]
                    192.168.102.240          0             0 65100 i
Route Distinguisher: 65000:120
*> [5]:[0]:[24]:[65.65.65.0]
                    44.44.44.1               1             0 3301 ?
*> [5]:[0]:[64]:[7777:7777::]
                    44.44.44.1               0             0 3301 ?

Displayed 4 prefixes (4 paths)

Here we're seeing not 2 but 4 prefixes in the eVPN table. You can tell from the AS_PATH that two come from AS 65100 – that's the BGP AS of our CSR-1Kv VM – and two come from AS 3301, which is the ASR9k advertising its loopback prefixes.
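To make sense of the bracketed route strings in these outputs, here's a small parser sketch based purely on the displays above – it accepts both the ASR's [5][0][32][x]/80 form and NSX-T's colon-separated variant (the trailing /NN on the ASR output is the route-key length in bits):

```python
import re

def parse_type5_display(s: str) -> dict:
    """Parse the bracketed EVPN Type 5 display string, e.g.
    '[5][0][32][11.11.11.11]/80' or '[5]:[0]:[32]:[11.11.11.11]'.
    A sketch derived from the show-command outputs in this post."""
    m = re.fullmatch(
        r"\[(\d+)\]:?\[(\d+)\]:?\[(\d+)\]:?\[([0-9A-Fa-f.:]+)\](?:/(\d+))?", s)
    if not m:
        raise ValueError(f"unrecognised EVPN route string: {s}")
    rtype, eth_tag, plen, prefix, keylen = m.groups()
    return {
        "route_type": int(rtype),  # 5 = IP Prefix route
        "ethernet_tag": int(eth_tag),
        "prefix_length": int(plen),
        "prefix": prefix,
        "key_bits": int(keylen) if keylen else None,
    }

r = parse_type5_display("[5][0][32][11.11.11.11]/80")
print(r["route_type"], r["prefix"])  # 5 11.11.11.11
```

Note how the /80 and /176 key lengths line up with 1 byte of route type, 4 of Ethernet tag, 1 of prefix length, plus a 4- or 16-byte address: 8 × (1+4+1+4) = 80 bits for IPv4 and 8 × (1+4+1+16) = 176 bits for IPv6.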

Let's have a little peek at the detail of some of these prefixes. It's more descriptive getting this from the ASR than from NSX-T, so let's look at a prefix that we receive from NSX-T:

RP/0/RP0/CPU0:ASR9000-7.1.1#show bgp l2vpn evpn rd 65000:100 [5][0][32][11.11.11.11]/80
Sun Apr 19 18:25:29.216 UTC
BGP routing table entry for [5][0][32][11.11.11.11]/80, Route Distinguisher: 65000:100
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                 43          43
Last Modified: Apr 19 15:30:30.408 for 02:54:58
Paths: (1 available, best #1)
  Not advertised to any peer
  Path #1: Received by speaker 0
  Not advertised to any peer
  65000 65100, (received & used)
    192.168.102.240 from 192.168.102.240 (10.1.1.60)
      Received Label 120020
      Origin IGP, metric 0, localpref 100, valid, external, best, group-best, import-candidate, reoriginate, not-in-vrf
      Received Path ID 0, Local Path ID 1, version 43
      Extended community: Flags 0x6: Encapsulation Type:8 Router MAC:0250.5600.0001 RT:65000:120
      EVPN ESI: 0000.0000.0000.0000.0000, Gateway Address : 0.0.0.0

So, looking at this prefix (and referencing back to the RFC highlighted earlier), we can see the following:

  • Received Label: 120020 – this is the VXLAN VNI that we configured on both the T0 and the ASR
  • The EVPN ESI is all zeros, indicating this is NOT an overlay index but a regular Type 5 prefix
  • A GW IP of all zeros, also indicating this is not an overlay index
  • The RTs are set as per the advertisement from the NSX-T T0
  • The RMAC is that of the VXLAN construct on the NSX-T Edge node:

nsx-edge01(tier0_sr)> get vrf vni
VRF                                   VNI        VxLAN IF             L3-SVI               State Rmac
VRF-30                                120020     vxlan-120020         br-120020            Up    02:50:56:00:00:

Note: future models may set the ESI and GW IP, but that's a story for the future. The overlay index allows for a recursive lookup – that is, using the GW IP as the next hop for the VXLAN TEP instead of the BGP NH (as used in this example).
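To illustrate the recursive lookup an overlay index would enable (a toy sketch with entirely hypothetical values – nothing in this lab sets the GW IP):

```python
# Toy model of overlay-index resolution: when the Type 5 route carries
# a GW IP, the forwarder resolves prefix -> gateway -> TEP, rather than
# using the BGP next hop directly. All values here are made up.
evpn_routes = {"11.11.11.0/24": {"gw_ip": "10.9.9.1"}}  # Type 5 with GW IP set
gw_to_tep = {"10.9.9.1": "44.44.44.1"}                  # gateway learned elsewhere

def resolve_tep(prefix: str) -> str:
    """Resolve a prefix to its VXLAN TEP via the gateway (recursive step)."""
    gw = evpn_routes[prefix]["gw_ip"]
    return gw_to_tep[gw]

print(resolve_tep("11.11.11.0/24"))  # 44.44.44.1
```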


That's it! Hopefully you found this interesting – please let me know!

VMware Telco Cloud Automation – Deploy your first VNF (kind of)…

So, in the last post we went through and deployed VMware Telco Cloud Automation with a link to vCloud Director; now how about we take that deployment and actually deploy our first VNF?

For this example, I have a demo 'VNF' (a simple 3-tier application) that has some scaling and workflows built in. Thanks to Umar Shaikh for both the workload and the permission to make this available to those who might want to try it.

First up, let's take a look at the stack I'm going to use to deploy this dummy workload. We will basically be using the vCloud NFV 3.2.1 stack – you can see the release builds used here, as well as the TCA and HCX builds. At the time of writing there is a new TCA image [Rel 140] which I haven't upgraded to (yet) – maybe an opportunity for another blog post, after one covering the HCX/TCA upgrade process 😉

So, what are we going to deploy, I hear you ask? As alluded to earlier, we have a very simple workload – it's basically 3 VMs (a DB, App and Web tiered application). I will go through this in a little more detail later, but for now, download the files by clicking here; this should take you to a OneDrive share where you can download the 4 files.
Note: each OVA is around 700 MB or so (just so you know :))

In order to save time, I'm going to assume that you have a working environment; this means a working vSphere, NSX-V, vCloud Director, TCA, HCX, RabbitMQ and vRO platform. If you have this set up and ready, then let's proceed.

Step 1 – Create a vCloud Director Organization.

Before we onboard any workloads, we first need to create the Virtual Infrastructure on TCA, but to be able to do this we first need to create the relevant constructs within vCloud Director. Note: I'm going to assume you've at least added your vCenter / NSX environment into vCD and have created your Provider Virtual Data Centers (pVDCs). So, let's quickly go through the steps of creating this organisation – for simplicity, we'll call it vTelco 😉

Note: I'm not going to go through the effort of integrating this with LDAP in this example; for the purposes of this demo we can make do with a local user.

Here’s where we make the local user – we will need to remember the credentials for later when we add this to TCA!

There's nothing really to do with Catalog, Email or Policies unless we want to make any specific changes – you'll understand later why the Catalog is not important (in this release of TCA).

Click Finish and we should now have our Organization created within vCloud Director!

Step 2 – Create an Organization Virtual Data Center (OrgvDC)

Now that we have our Org construct, the next step is to create an OrgVDC – this allows us to assign resources to the Organization. There are different OrgVDC allocation models available, and in a production environment the type of OrgVDC you select is important, but I'm not going to get into that here – I'll use a simple PAYG OrgVDC, as you can see below:

First, let's make sure we select the vTelco Organization that we created in step 1.

We need to pick the relevant Provider vDC (we did talk about this earlier, right?) – in this case I'm picking my pVDC called RefArch, which is backed by an NSX-V provisioned cluster. We should also take NOTE of the external VLANs available to this Provider vDC (this will become relevant later when we deploy our VNF).

As mentioned we will use a PAYG allocation model

Let's configure our PAYG OrgVDC – I'm using a 10% guarantee as I have limited resources (sob).

Next let's allocate storage. This is important – make sure you have the right storage allocated (we will see why later). If you don't see the policies here, make sure they're imported into the pVDC! Here I'm going to use my ZFS storage array with thin provisioning…

We also want to make sure that we are using our VXLAN overlay network pool for the internal networks that will be used by this VNF!

We can skip the edge gateway. Give this OrgVDC a name and then click Finish – we are now good to go!

OK, so now we have our Organization and an associated OrgVDC – let's head over to TCA (yay) and add this…

Note: we should also create an external network for this OrgVDC, so let's head into the OrgVDC (as a vCD admin) and click the + sign on OrgVDC networks.

Here we want to create an external network – pick the network segment you want. Note: this needs to first be created on the vSphere Distributed Switch AND under the pVDC.
Give it a name and finish.

Here you’ll see the newly created external network in this case I called it vTelco-VLAN 102

Step 3 – Create the new Virtual Infrastructure

So, let's go to Infrastructure – Virtual Infrastructure (VI) and click '+ Add'.

We are going to add a new vCloud Director VI. Some things of note here; at the end, click Validate and hopefully this will be successful and you'll be set:

  • Cloud URL – this is the HCX URL that you deployed earlier (you did deploy this right?) – this is NOT the vCD url!
  • Username – this is the user we created as part of step 1 (you do remember the password right! :))
  • Tenant name – this is whatever you called the tenant (Organization) in vCD.

If it's all good, click Add and we should be set!

Give it a few seconds and click Refresh – you should see that the VI is connected!

Step 4 – Create a compute profile

So, now that we have associated a tenant with TCA, we need to create a compute profile for that tenant (Organization). In a nutshell, the compute profile is a link to the OrgVDC within the actual Organization.

If you click on the VI you will see its current state. Here, click Add Compute Profile to allocate the PAYG OrgVDC we just created in vCloud Director; we also need to assign the correct storage profile and supply any tags (maybe time for another post?).

Once this is added you'll see the compute profile. Clearly, if you have multiple OrgVDCs you would make multiple compute profiles; you could also have multiple profiles linked to the same OrgVDC but using different storage policies (for example).

So, thus far we have created pretty much all the constructs we need – now what do we do with those pesky OVAs we had earlier?

Step 5 – Upload OVAs

In this current build of TCA, the OVAs that will comprise the VNF need to be loaded into vCenter – yes, that's right, vCenter. For now TCA doesn't support picking these up from a vCloud Director catalog (hence why the catalog setup wasn't so important earlier ;)), so let's go ahead to vCenter and upload the three OVAs.

Basically you need to upload the OVAs into vCenter, then (and this is important) – DON'T power them on, just convert them to templates. Once you've done this, in the VMs and Templates view you should see your 3 CentOS VMs as highlighted below.

Step 6 – Upload the TOSCA template to TCA!

OK, so now we have all our stuff ready in vCD (Orgs, OrgVDCs – you did add some external networks to your OrgVDC, right?).

Let's go to Network Functions -> Catalog and upload the CSAR you downloaded earlier.

After you do this, you should be taken to the VNF designer where you will see a graphical representation of the VNF.
Note: You cannot change this at this point in time

Lets take a quick look at what we have here:

First up we have the VDUs (Virtualisation Deployment Units) – in this case we have 3 of these. These are the 'VMs' that actually make up the workload.

  • App – the centos-small-app OVA we deployed earlier
  • db – the centos-small-db OVA we deployed earlier
  • loadbalancer – the centos-small-lb OVA we deployed earlier

In addition to this we have 3 networks (called Virtual Links in ETSI terminology)

  • app_network – this is used between the LB and the app VDUs
  • db_network – this is used between the app and db VDUs

You will also note a blue square on the loadbalancer – this is used as an external connection point (it links to the external network provided by vCD).

We will see the options that we have when we instantiate this.

Sadly, in this build we cannot easily click on the objects to see their properties; if you want to take a look, hit the Source button – the snippet below shows you the TOSCA configuration for the app VDU.

Step 7 – Instantiate the VNF!

OK, so far so good – we have all our constructs ready. Let's try and deploy this workload: go to the Catalog, select the workload you want to deploy and click Instantiate!

This will bring you to a deployment screen as shown below. The first thing we need to do here is give our VNF a name, then pick which CLOUD we are going to deploy this to. This is where TCA gets to be pretty cool: in this case we have only deployed HCX for vCD and connected TCA to that, but
we could also incorporate OpenStack or other 'cloud infrastructures' into TCA (note: this would require deploying multiple HCXs – one per cloud endpoint) – we could then take this TOSCA model and deploy it to vCD, OpenStack (VIO) or some other supported cloud.

What we actually pick here is the VI we added earlier – remember the VI is effectively pointing to the Organization within vCD (if this was VIO it would be the tenant / project). So I'm picking my vTelco Demo VI, which we created in step 3 – this links back to the vTelco Organization we created in step 1 (phew!).

Now we need to select the 'Compute Profile', which we created in step 4; this basically links back to the OrgVDC we created in step 2 and the associated storage policy etc.

Lastly, for now leave the instantiation level as default – maybe something to cover in another blog post? 🙂

Click Next.

Now you’ll see the VDUs and Virtual Links as above, note two things:

  1. we have to SELECT an external network
  2. we can EDIT the internal networks

Let's look at the internal networks first. If we don't do anything, by default TCA will tell the VI to create these networks; we can choose to select existing networks if we have already provisioned networks in vCD (in this case) that we would want to use as these network segments.

Now for the external network (remember that blue square we talked about earlier?) – we need to select that.

Here it's telling us the name of the external CP (lb-external); the external network inside vCD that we will use is called vTelco-VLAN 102. Note: here it says vxlan, but don't worry – this is actually a VLAN-backed port group that we are connecting to in vCD. You'll note it's the external network
we created back in step 2 (you were paying attention there, right?).

Pick that, click OK and then we are on to the VNF inputs!

So what do we have here? We need to enter some parameters… first up is the Workflows tab.

Workflows: this is used to execute the workflows included in this demo. But in order to execute them we need to know WHERE to execute them, so we put in the LB_IP (the other values are defaulted) – in this case I'm using 192.168.102.210.

Once we've done this we need to enter the details in the Loadbalancer item, so let's go there.

For IP.0 (the externally facing address), let's make sure we use the same address as before (192.168.102.210); the prefix in my lab for this is /24 (255.255.255.0) and the GW is 192.168.102.1. Note: this is the externally accessible interface, so there needs to be routing to this segment (the internal interfaces will use VXLAN overlay segments).

Note to self: make sure you use the prefix length (not the a.b.c.d netmask), otherwise the VM won't come up properly and the deployment will fail. 😦
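If you're unsure of the prefix length for a given netmask, Python's standard-library `ipaddress` module converts it in one line – a quick sanity check, nothing TCA-specific:

```python
import ipaddress

# Convert a dotted-decimal netmask to the prefix length the VNF
# inputs form expects (e.g. 255.255.255.0 -> 24).
def to_prefix_len(netmask: str) -> int:
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

print(to_prefix_len("255.255.255.0"))  # 24
```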

The other settings you can leave at their defaults. You will see the second network on the LB is 192.168.50.5 – from the app layer you'll see this is on the same subnet. The app layer also has an IP on the DB network (192.168.100.11), which you will see is in the same subnet as the DB VDU (its IP is 192.168.100.11).

So, hopefully you can now see that you have the 3 VDUs and the 3 Virtual Links (network segments) aligned as per the topology diagram we saw earlier.

Click Next, take a look at the summary and then click Instantiate (you also crossed your fingers, right?).

Now lets go and track the deployment of this VNF…

Step 8 – Checking the Deployed VNF

So this will take a few minutes. While it's going through the motions we can look in both vCenter and vCloud Director to see the tasks. Looking in vCenter we can see the networks being created – these are created through vCD, which then uses vCenter/NSX APIs to create the actual networks. We also see the VMs being cloned (from template to VM) and then some reconfiguration actions happening; ultimately these will power on and we should be up and running.

Looking in vCD we can see the networks have been created


Looking in My Cloud we see the vApps created – currently TCA creates one vApp per VDU (I've asked for this behaviour to be adjustable)…

So let's head back to TCA. We can see this has completed, the status was successful and the NF state is 'Instantiated' – congratulations, your first VNF has been deployed! But let's take a quick look through the steps it took to get us here:

  • Grant – This is the standard grant of NFVO talking to VNFM (to get the resources / policy management) through the VIM.
  • Create Network – This is TCA creating the internal networks (Virtual Links) for the App and DB Layers, this is made through the API calls to vCloud Director.
  • Create Server – This is TCA requesting the servers be cloned (this is done through vCenter today), these VMs are then imported into vCloud Director as we just saw.
  • Post Instantiate – this is a post instantiation script that runs after the VM has powered up, this script basically just adds the app node to the LB.
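To give a flavour of the Create Network step, here is a hedged sketch of the kind of call TCA makes to vCloud Director's legacy admin API to create an Org VDC network. The endpoint path and media type follow the vCD 9.x API, but the host, VDC id and network values are made-up placeholders – treat this as an illustration of the mechanism, not a tested integration:

```python
import urllib.request

VCD_HOST = "vcd.example.lab"       # placeholder - your vCD endpoint
VDC_ID = "placeholder-vdc-id"      # placeholder - the OrgVDC's id

# Build (but don't send) a request resembling the "Create Network" call
# TCA issues against vCloud Director during instantiation.
def build_create_network_request(name: str, gateway: str, netmask: str) -> urllib.request.Request:
    body = f"""<OrgVdcNetwork name="{name}" xmlns="http://www.vmware.com/vcloud/v1.5">
  <Configuration>
    <IpScopes><IpScope>
      <IsInherited>false</IsInherited>
      <Gateway>{gateway}</Gateway>
      <Netmask>{netmask}</Netmask>
    </IpScope></IpScopes>
    <FenceMode>isolated</FenceMode>
  </Configuration>
</OrgVdcNetwork>"""
    return urllib.request.Request(
        url=f"https://{VCD_HOST}/api/admin/vdc/{VDC_ID}/networks",
        data=body.encode(),
        method="POST",
        headers={
            "Accept": "application/*+xml;version=31.0",
            "Content-Type": "application/vnd.vmware.vcloud.orgVdcNetwork+xml",
            # A real call also needs the x-vcloud-authorization session header.
        },
    )

req = build_create_network_request("app_network", "192.168.50.1", "255.255.255.0")
print(req.get_method(), req.full_url)
```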

That's all folks (for now)… We now have our VNF deployed. Oh wait – just to prove it works, we can browse via HTTPS to the website (in this case 192.168.102.210) and see if we
get a response.
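That check can be scripted too. A minimal sketch using only the standard library – certificate verification is disabled because the lab uses self-signed certificates, which is fine for a throwaway lab check but never for production:

```python
import ssl
import urllib.request

LB_IP = "192.168.102.210"  # the externally facing LB address from the VNF inputs

# Fetch the LB's landing page over HTTPS and return the status code.
def check_lb(ip: str, timeout: float = 5.0) -> int:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # self-signed lab cert
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(f"https://{ip}/", context=ctx, timeout=timeout) as resp:
        return resp.status

# In the lab: check_lb(LB_IP) should return 200 once the VNF is up.
```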

Up next I’ll go over some of the finer stuff regarding this VNF, how to scale, how to run the workflows and so on.

#enjoy.

VMware Telco Cloud Automation: Overview and Deployment Walkthrough with vCloud Director

As you are probably aware, a few weeks ago (April 2020) VMware released the VMware Telco Cloud Automation (TCA) platform – an innovative new platform designed to help CSPs (Telcos) onboard new applications and automate the design and deployment of both VNFs and CNFs.

Note: this blog post focuses on a brief overview and the deployment of TCA; subsequent posts (coming soon) will walk through deploying and operating your first application (application and TOSCA manifest provided… watch this space). So at the end of this you'll have a working deployment of VMware TCA but nothing to do with it – I'll address that in my next post!

So what problems is this product trying to solve? First, there are a number of challenges in the MANO (Management and Orchestration) space: Telcos are seeing a real lack of cloud expertise from many of the vendors; it's true that this has come a long way, however there are still challenges.
The vendor lock-in approach is also common – take my VNF, take my MANO / VNFM stack – which helps with efficiency but doesn't really deliver on the promise of an open, agnostic NFV infrastructure. There are also integration challenges in adhering to ETSI standards, which have made progress towards MANO slower than most would have liked.

Most would like to avoid the monolithic, slow development cycle and vendor lock-in approach commonly offered; pure OSS/BSS providers offer a rich integration approach but require a lot of work, and boutique players offering pure-play orchestration have challenges keeping up with a multi-cloud approach.

Leveraging the VMware cloud-first approach, coupled with compliance to ETSI standards and VMware's agnostic approach towards vendor functions, allows TCA users to achieve the following at a minimum:

  • Orchestrate & Automate Virtual Network and Container Network Functions, from private to public cloud infrastructure, innovating and automating workloads.
  • Unify the management of any vendor's functions with a focus on ETSI compliance, leveraging VMware's open ecosystem and wide range of partners under the NFV Ready program

As can be seen, VMware Telco Cloud Automation supports multiple clouds, both public and private. This delivers on the promise of effectiveness as delivered through the traditional NEP MANO offerings, while offering the multi-vendor approach delivered by the integrated solutions.

VMware Telco Cloud Automation delivers on multiple pillars, acting as a Generic VNFM with a TOSCA based composer for Network Functions (VM or Container) AND Network Services (a service comprised of multiple VNFs and/or CNFs), with full lifecycle management.

The NFVO portion leverages the Network Services element, allowing the deployment of not just a single element, but multiple elements built together to create an overall network service.

Lastly, through intelligent placement and policy, VMware Telco Cloud Automation delivers a wide range of automation capabilities and placement opportunities – delivering services across single or multiple cloud components, providing monitoring and reporting of faults in the environment, and offering the ability to heal and scale network functions and services.

As part of this walkthrough we will be using the VMware vCloud NFV stack as documented below. There are 4 elements that need to be properly configured before we can get to the stage where we are ready to onboard our first VNF:

  • vCloud Director – used in this example as the NFV VIM
  • RabbitMQ – Used by vCD for notifications and blocking tasks, also used by HCX to collect vCloud Director Inventory
  • VMware HCX (v 3.53)
  • VMware Telco Cloud Automation

Step 1: Configuring vCloud Director

We won’t go through the deployment of vCloud Director here, suffice to say there are enough blogs out there that cover this, however we will ensure the Extensibility settings are configured correctly.

What's important here are the AMQP settings – the following elements need to be accurately configured, and you should also test them to make sure they are working correctly.
Here I have used an exchange called systemExchange97; I used this specific exchange so I can use the same RabbitMQ implementation for different versions of vCloud Director. What matters here is the Exchange and Prefix.

Note: I'm just using guest here to keep it simple – it's the default user for RabbitMQ. Also, make sure you have a signed or self-signed certificate for vCloud Director; in my case I'm using a self-signed certificate.

Step 2: Configure RabbitMQ

As before, I'm not going to go through the setup and configuration of RabbitMQ. Here we can see the systemExchange97 exchange – this is of type topic.

Also, make sure the user has the correct permissions for this exchange:
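If you'd rather verify this from the command line than the management UI, a short sketch with the pika client can confirm the exchange exists with the right type and that the user can reach it. The host and credentials here are lab-specific assumptions – adjust to your RabbitMQ deployment:

```python
RMQ_HOST = "rmq.example.lab"   # placeholder - your RabbitMQ host
EXCHANGE = "systemExchange97"
EXCHANGE_TYPE = "topic"

# Passive-declare the exchange: this only checks that it exists with the
# given name/type and that the user has access - it never creates anything.
def verify_exchange(host: str, user: str, password: str) -> bool:
    import pika  # third-party client: pip install pika
    params = pika.ConnectionParameters(
        host=host, credentials=pika.PlainCredentials(user, password)
    )
    with pika.BlockingConnection(params) as conn:
        ch = conn.channel()
        # Raises ChannelClosedByBroker if the exchange is missing or mismatched.
        ch.exchange_declare(EXCHANGE, exchange_type=EXCHANGE_TYPE, passive=True)
    return True

# verify_exchange(RMQ_HOST, "guest", "guest")  # raises if misconfigured
```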

Step 3: Deploy HCX

So, start off by deploying the HCX OVA into vCenter. There's nothing overly complicated here – the VM only requires a single interface.

Once the VM has been deployed, power it on and wait a few minutes, then open a web browser and go to the URL / IP address where you deployed HCX.

The first thing you will need to do is enter your HCX license key (you DO have a license, right?? 🙂 ). Once you've entered the correct license, the VM will connect to VMware and download the relevant software.
Note: the same installer will be used to deploy TCA, however a different license key will be used for that deployment

You should see the two screens as follows; click Continue and then wait for the upgrade. Note: the license description here is HCX for Telco Cloud – you will see this is different when we install TCA later.

The upgrade will kick off and take a few minutes – go get a coffee! 🙂

Almost Done !

Step 4: Configure HCX

Now that the system has been upgraded, we need to go back to our HCX appliance, only this time on port 9443 – this allows us to start configuring HCX. In this case we will use a vCloud Director endpoint, as discussed.

The first thing you’ll be prompted for is to configure the location of your datacenter.

Pick a system name – go on, be innovative! 🙂

Now we get to pick our cloud, we have the following options:

  • vSphere – native vCenter environments (not used in Telco due to the lack of real multi-tenancy)
  • vCloud Director – our VIM of choice here (maybe I'll do a VIO post later)
  • VMware Integrated OpenStack – just in case you're not a fan of vCD
  • Kubernetes – leveraging TKG or some other Kubernetes deployment

So, go through and pick vCloud Director and then use a system account (default administrator@system) to register against vCloud Director. You need to ensure two things are properly configured to have vCD correctly registered:

  • Ensure the public address is set – this is important, as in vCloud Director 9.7 this sets specific parameters that allow HCX to connect to it. If you are using vCloud Director 10.0 you will need to first enable the Flex client and set the public address via the Flex UI – for some reason this sets additional information that is not configured when setting it through the System H5 client (/provider).
  • Ensure you’ve configured AMQP as discussed earlier.

Once vCloud Director is registered, you will need to configure AMQP as part of HCX – this is used to collect all the inventory information from vCloud Director (we will see how it does this through RMQ later).

That's it! You're done – restart the application service and you will have a functioning HCX deployment.

Note: port 9443 is where you log in to administer HCX; normal HTTPS (443) is where you log in to HCX generally (for the purposes of TCA we don't need to do this!).

Step 5 – Trust Certificates

One thing to do is go in and add your trusted root certificate (if you have one) and your NSX certificates too. To do this, log in to the administration portal of your fresh HCX deployment (port 9443) and go to Administration / Certificates.

Use the import button to import either from a file or from a URL, I imported the ROOT cert from file and then went on to add vCloud Director (although this should already be imported) and NSX-V certificates (by URL to make it easier).

While you're here, check the rest of the settings to make sure everything is good… now let's move on to the TCA side of things.

Step 6 – Installing TCA

As before, deploy (again) the same OVA that we just used for HCX. Once deployed and powered on, you can HTTPS to the IP/URL and you will go through the same steps – note that when adding the TCA license key, the actual description and graphic change slightly (note the red 'you are here' icon).

Ready for another coffee? Wait a few minutes for the package to download and install…

Once the upgrade has completed, you'll have to go back through the configuration stages like we did with HCX – configuring where the system is located and what the system name is, as shown in the diagrams below.

What's interesting next is that we are asked for a vCenter to connect to – wait, why do we need to connect to a vCenter if we already configured HCX to talk to vCloud Director?

So, the reasoning is that VMware TCA uses the vCenter / SSO login for its internal RBAC – you configure access to TCA based on the users / groups in vCenter (using the local domain or AD). My home SSO domain is home.lab (I just wanted to use something other than vsphere.local), so go ahead and configure your vCenter / SSO domain as shown below.

Once this is configured, the base TCA should be up and running. As with the HCX deployment, log in to the administrative console (port 9443) and be sure to trust the certificates of the HCX node we just deployed; if need be, restart the web / application service before logging into the VMware TCA UI.

Step 7 – Login to VMware Telco Cloud Automation

So, now we're ready to log in to VMware Telco Cloud Automation – go to the URL and log in. Initially you will need to use the user you configured when setting up the vCenter connection earlier; for now only that user will be able to log in. You can create more users with different roles and permissions – I'll make a blog post about that later.

OK ! Congratulations

So now we have a VMware Telco Cloud Automation deployment, but how do we onboard a VNF to this? Watch for my next blog post (coming soon) where we will learn how to onboard a vCloud Director environment (OrgVDC) into VMware Telco Cloud Automation, create deployment profiles, and then onboard our first workload (I'll supply a dummy workload / CSAR too)…

Happy playing, hopefully the follow-up post will come in the next day or two…
