This next post may seem like science fiction. I woke up this morning and checked that my output files – MP3 files – really did exist and that I actually made my Cisco network “talk” to me!
This post is right out of Star Trek so strap yourself in!
Can we make the network talk to us?
After my success with my #chatbot, my brain kept going further and further until I found myself thinking about how I could actually make this real. Like most problems, let's break it down and describe it: how much of this can I already achieve, and what tools do I need to get the rest of the solution in place?
I know I can get network data back in the form of JSON (text) – so in theory all I need is a text-to-speech conversion tool.
Enter: Google Cloud !
That's right, Google Cloud offers exactly what I am looking for – a RESTful API that accepts text and returns "speech"! With more than 200 voices across dozens of languages, in both male and female variants, I could even get this speech in Canadian French spoken in a dozen or so different voices!
I am getting ahead of myself but that is the vision:
Go get data, automatically, from the network (easy)
Convert to JSON (also easy)
Feed the JSON text to the Google Cloud API (in theory, also easy)
The process – Google Cloud setup
There is some Google Cloud overhead involved here, and this service is "free" – for up to 1 million processed text words or 3 months, whichever comes first. It also looks like you get $300 in Google bucks and only 3 months of free access to Google Cloud.
Credit card warning: you need a credit card to sign up for this. They assured me, multiple times, that the trial does not automatically roll over to a paid subscription after it expires – you have to actually engage, click, and accept a full registration. So I hope this turns out to be free for 3 months and no actual charges show up on my credit card. But in the name of science fiction I press on.
So go set up a Google Cloud account, then your first project, and eventually you will land on a page that looks like this.
Enable an API and search for text
Enable this API and investigate the documentation and examples if you like.
Now, Google Cloud APIs are very secure – to the point of confusion. I have not fully ironed out the whole automation pipeline yet – mainly because of how complex their OAuth2 requests seem to be – but for now I have a workaround I will show you so we can at least achieve the theoretical goal. We can circle back and sort out the authentication later. (Agile)
Setup OAuth2 credentials (or a Service Account if you want to use a JSON file they provide you)
Make sure it is a Web Application
This will provide your Client ID and secret.
For most OAuth2 that’s all you need – maybe I am missing the correct OAuth2 token URL to request a token but for now there is another tool you can use to get a token.
Google has an OAuth2 Developer Playground you can use to get a token while we figure out the OAuth stuff in the background
Follow the steps to select the API you want to develop (Cloud Text-To-Speech)
Then in the next step you can request and receive a development token
You can also refresh this token / request a new token. Copy that Access Token – we will need it for Postman and Ansible in the next steps.
Moving to Postman
Normally, under my collection, I would set up my OAuth2 – here is the screenshot of the settings I'm just not sure of. This is the missing link to full automation.
So far so good here
Again, this might be something trivial – I am 99% sure it's because I have not set up the redirection or linked the two things – but it was getting late and I wanted to get this working rather than get stuck in the OAuth2 weeds.
First here is what I think I have right:
Token name: Google API
Auth URL: https://accounts.google.com/o/oauth2/auth
Access Token URL: https://accounts.google.com/o/oauth2/token
Client ID:
Client Secret:
Scope: https://www.googleapis.com/auth/cloud-platform
But the one I'm really not sure what to do with is this Callback URL – I don't have one of those?
Callback URL: this is my problem – I am really not sure what I need to do here.
I believe I need to add it here:
But I have no cookie crumbs to follow – all of the ? help icons just say something like "This is where your Authorized redirect URLs go" and that's it.
Open call to Google Cloud Development – I just need this last little step then the rest of this can be automated
Anyway moving along – pretending we have this OAuth2 working – we can cheat for now using the OAuth2 Playground method.
So here is the Postman request:
We want a new POST request in the Google Cloud API Postman Collection. The URL is:
So cheat (for now) and grab the token from the OAuth2 Playground and then in your Postman Request Authorization tab – select Bearer Token and paste in your token. From my experience this will likely start with ya29 (so you know you have the right data here to paste in)
Tab over to Headers and double-check you have Bearer ya29.(your key here)
As far as the body goes – again we want RAW JSON
Now for the science fiction – in the body, in JSON, we set up what text we want converted to speech
The canned example they provide
So we need the input (text) which is the text string we want converted.
We then select our voice – including the languageCode and name (and there are over 200 to choose from) – and the gender of the voice we want.
Lastly we want the audioConfig including the audioEncoding – and since MP3 was life for me in the mid to late 90s – let’s go with MP3 !
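For reference, a minimal sketch of such a body – the voice name below is just one pick from Google's published voice list, and the sample sentence is my own:

{
  "input": {
    "text": "The quick brown fox jumps over the lazy dog."
  },
  "voice": {
    "languageCode": "en-US",
    "name": "en-US-Wavenet-A",
    "ssmlGender": "MALE"
  },
  "audioConfig": {
    "audioEncoding": "MP3"
  }
}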
Hit Send and fire away!
Good start – a 200 OK with 52KB of data returned. It is returned as a JSON string:
Incredible stuff – this is the human voice pattern saying the text string – expressed as a base64-encoded audio string !
Ok – very cool stuff – but what am I supposed to do with it?
Fortunately Google Cloud has the insight we need
Hey – it’s exactly where we are at ! We have the audioContent and now we need to decode it!
Since I am still developing in my Windows world with Postman let’s decode our canned example
Carefully right-click and copy (don't actually click the link – Postman will spawn a GET tab against the URL, thinking you are trying to follow it)
Now create a standard text (.txt) file in a C:\temp\decode folder and paste in the buffered response
I've highlighted the "audioContent": " prefix – you have to strip this out / delete it, as well as the trailing " at the end of the string – we just want the data starting with // onward to the end of the string
Launch cmd, change to the C:\Temp\Decode folder, and run the command
certutil -decode sample.txt sample.mp3
As you can see, if your text file was valid you should get a "completed successfully" response from the certutil utility. If not, check your string again for leading and trailing characters.
Otherwise – launch the file and have a listen!
How cool is that?!?!?
Enter: Network Automation
As I've said before, anything I can make work with Postman I can automate with the Ansible URI module! But instead of some static text, I plan on getting network information back and having my network talk to me!
The playbook:
First we will prompt for credentials to authenticate
Now – let's start with something relatively simple – can I "ask" the Core what IOS version it's running? Sure – let's go get the Ansible Facts, of which the IOS version is one, and pass the results along to the API!
For now we will hard-code our token – once I figure this out I will just add an earlier Ansible URI step that goes and gets the token, with a prompted Client ID / Client Secret at the start of the playbook along with the Cisco credentials. Again, a temporary workaround.
Again because I have used body_format: json I can write the body in YAML.
Ok, so for our body let's have some fun and try an English (US) Wavenet-A male voice.
For the actual text let's mix a static string – "John the Lab Core is running version" – with the magic Ansible Facts variable {{ ansible_facts['net_version'] }}
And see if this works
We need to register the response from the Google Cloud API and delegate the task to the localhost:
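As a rough sketch – the variable names and the hard-coded token variable here are my own assumptions, and the URL is the standard text:synthesize endpoint – the task might look something like this:

- name: Ask Google Cloud to speak the IOS version
  uri:
    url: https://texttospeech.googleapis.com/v1/text:synthesize
    method: POST
    headers:
      Authorization: "Bearer {{ google_token }}"   # hard-coded / playground token for now
    body_format: json
    body:
      input:
        text: "John the Lab Core is running version {{ ansible_facts['net_version'] }}"
      voice:
        languageCode: "en-US"
        name: "en-US-Wavenet-A"
        ssmlGender: "MALE"
      audioConfig:
        audioEncoding: "MP3"
    return_content: yes
  register: tts_response
  delegate_to: localhost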
So now we need to parse and get just the base64-audio string into a text file. Just like in Postman this is contained in the json.audioContent key:
Now we have to decode the file! But this time with a Linux utility not a Windows utility
We can call the shell from Ansible easily to do this task:
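A sketch of those two steps, reusing the registered variable from the API call above (the file paths are illustrative):

- name: Save the base64 audio string to a text file
  copy:
    content: "{{ tts_response.json.audioContent }}"
    dest: "{{ playbook_dir }}/output/{{ inventory_hostname }}_version.txt"
  delegate_to: localhost

- name: Decode the base64 text file into an MP3
  shell: "base64 -d {{ playbook_dir }}/output/{{ inventory_hostname }}_version.txt > {{ playbook_dir }}/output/{{ inventory_hostname }}_version.mp3"
  delegate_to: localhost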
Now in theory this should all work and I should get a text file and an MP3 file. Let’s run the playbook!
Let's check if Git picked up a new file!
Ok ! What does it sound like!?!
Ok this is incredible!
Let’s try some Genie / pyATS parsing and some different languages !
Copy and paste and rename the previous playbook and call the new file GoogleCloudTextToSpeech_Sh_Int_Status.yml
Replace the ios_facts task with the following tasks
So now, for our actual API call, we want to conditionally loop over each interface if it is DOWN / DOWN (meaning not UP / UP and not administratively down)
Now, as an experiment, let's use French (Canadian) in a female WaveNet voice.
Does this also translate the English text to French? Or do I need to write the text en français? Let's try it!
So this whole task now looks like this:
Now we need another loop and another condition to register the text from the results. We loop over the results of the first loop and when there is audioContent send that content to the text file.
Caution! RegEx ahead! Don't be alarmed! Because of the slashes in an interface name (Gigabit10/0/5) the file path will get messed up, so let's regex them to underscores
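A sketch of that file-naming step, assuming the per-interface API results were registered as tts_interface_results in the previous loop (the variable names are illustrative):

- name: Save each interface's audioContent with slashes converted to underscores
  copy:
    content: "{{ result.json.audioContent }}"
    dest: "{{ playbook_dir }}/output/{{ inventory_hostname }}_{{ result.item.key | regex_replace('/', '_') }}.txt"
  loop: "{{ tts_interface_results.results }}"
  loop_control:
    loop_var: result
  when: result.json is defined and result.json.audioContent is defined
  delegate_to: localhost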
Then we need to decode the text files!
So let’s run the playbook!
So far so good – now our conditionals should kick in – we should see some items skipped in light blue text then our match hits in green
Similarly, our next step should also have skipped items and then yellow text indicating the audioContent has been captured
Which is finally converted to audio
What does it sound like!??! Did it automatically translate the text as well as convert it to speech?
TenGig
Gig
A little more fun with languages
I won’t post all the code but I’ve had a lot of fun with this !
How about the total number of port-channels on the Core – in Japanese ?!
Summary
In my opinion this Google Cloud API and network automation integration could change everything! Imagine if you will:
Global, multilingual teams
Elimination of technical text -> Human constructed phrasing, context, and simplicity
Audio files in your source of truth
Integrated with #chatops and #chatbots
Accessibility for the visually impaired or those with other physical challenges around text-based operations
A talking network!
This was a lot of fun and I hope you found it interesting! I would love to hear your feedback!
As you may know I love to play with new toys. I especially love connecting new toys with my old toys. What you may not know is that I am also an avid World of Warcraft fan and player! In order to run what are known as “raids”, group content designed for 10 – 30 players, I use a program called Discord.
My goal was simple – could I send myself messages in Discord from my Ansible playbooks with network state data? Could I create a #chatbot in this way ?
As it turns out, not only could I achieve this – it is actually pretty straightforward and simple to do!
Setup
There are not a lot of steps in the setup.
Download and install Discord
Setup an account
Create a Server
Create a Channel
I named my channel AutomateYourNetwork and set it up as private with invite only RBAC to see the channel
Once we have a server and channel set up we need to set up the Integrations
Now setup a WebHook
Select the channel you want your chatbot to send messages to
We will need the Webhook URL
Postman Development
As with all new API development I always start in Postman to sort out the authentication, headers, and body to POST against this new Discord Webhook.
First let’s setup a new Postman Collection called Discord; add a new POST request to this collection
For the request itself make sure you change the default GET to a POST and then use the following URL:
https://discord.com/api/webhooks/<your URL copied from Discord here>
The body is flexible but to get started let’s just use a small set of values.
Set your body to RAW JSON
And add this body (change your username unless you want this message to look like it came from me!!)
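A minimal body to get started might look like this (username and content are, of course, yours to change):

{
  "username": "John Capobianco",
  "content": "Hello Discord! This message was sent from Postman!"
}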
Now if you really want to see how fast / real-time / amazingly cool this is – make sure you have your Discord logged in but minimized to your system tray
Hit SEND in Postman
Your Discord should have notified you about a new message! In my system tray the icon has also changed!
Which is no surprise, because back in Postman we see we received a valid 204 No Content response from the API
Let's see what Discord looks like
How cool is this?!?
Integrating with Network Automation and Infrastructure as Code
Ok this is great – but can we now integrate this into our CI/CD pipeline Ansible playbooks?
Can I send myself network state data ? Can we create a #chatops bot ?
Yes we can!
Let's start with Ansible Facts – and see if we can chat ourselves the current version of a device.
First let’s setup our credential prompts
Then, let’s use ios_facts
That's more or less all I need to do – next let's use the URI module to send ourselves a chat!
I will break down this next URI task; first setup the URL – again after /webhooks/ {{your URL here }}
This is a POST
I like to do this next step for two reasons: one, to set the body of the POST to JSON (obvious); and two, to allow me to use YAML syntax in Ansible to write the body of the POST (not so obvious). Without this my body would need JSON formatting (think moustaches and brackets), which is hard enough to write on its own and very hard to write inside a YAML Ansible task
Meaning I can format the body as such (in YAML):
And, like we saw in Postman, we are expecting a 204 back and no content
Make sure you are delegating to the localhost (you don't want this step to run on your Cisco switch)
Again back to the body we are accessing the Ansible magic variable ansible_facts and the key net_version
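Putting those pieces together, a sketch of the whole task might look like this (the discord_webhook variable stands in for your own webhook ID and token):

- name: Send the Core's running version to Discord
  uri:
    url: "https://discord.com/api/webhooks/{{ discord_webhook }}"
    method: POST
    body_format: json
    body:
      username: "John Capobianco"
      content: "The Lab Core is running IOS version {{ ansible_facts['net_version'] }}"
    status_code: 204
  delegate_to: localhost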
Let's run the playbook!
Discord looks happy
And, with the power of Internet Magic, here is our message!
This is incredible – let me integrate Cisco Genie / pyATS now and send some parsed network state data next – to show the real power here
The playbook structure is more or less the same so save the Ansible Facts version playbook and copy / rename it to Show Int Status.
Keep the prompts; remove the ios_facts task and replace it with this task
Followed by the Genie parsing step
Then we need to adjust the Discord message – for my example I only want a message if an interface is configured to be UP / UP (meaning it is not administratively down) but is DOWN / DOWN (notconnect state). I don't care about UP / UP or administratively down interfaces.
Again, I will break this down
Most of this is the same
Here comes my magic with Genie. We want to loop over each interface Genie has parsed into the registered variable pyatsint_status_raw.interfaces. We need to convert this dictionary into a list of items, so we filter it with | dict2items
Now we want a condition on this loop: only call the Discord API when the {{ item.value.status }} key (that is to say, each iteration's status value) equals "notconnect"
Now we can reference item.key for the per-interface name and item.value.status for the notconnect status when it hits a match, in the body of the message we are sending to Discord.
The task as a whole looks like this:
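Not the exact task from the playbook, but a sketch of it under the same assumptions (registered variable pyatsint_status_raw, webhook variable as before):

- name: Alert Discord about any notconnect interfaces
  uri:
    url: "https://discord.com/api/webhooks/{{ discord_webhook }}"
    method: POST
    body_format: json
    body:
      username: "John Capobianco"
      content: "{{ inventory_hostname }} interface {{ item.key }} is {{ item.value.status }}"
    status_code: 204
  loop: "{{ pyatsint_status_raw.interfaces | dict2items }}"
  when: item.value.status == "notconnect"
  delegate_to: localhost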
So we run this new playbook which sort of looks like this. Remember we have a condition so the light blue text indicates non-matches / skipped interfaces (because they are connected or admin down); green indicates a hit.
Drumroll please
And now in Discord I have this wonderful, pager-like, real-time “alert”
Now go build one!
Here is the GitHub repository – go try to build one for your network!
Ansible playbooks that chat with Discord – using Ansible, Genie/pyATS, and Discord webhooks to send network state information as Discord messages!
The security of my network keeps me up at night. Honestly it does. We live in a world where enterprise networks are defending themselves against state-sponsored attacks. But what do attackers look for? Typically, open, well-known, vulnerabilities.
Well, at least to the hackers, attackers, and even script kiddies, these vulnerabilities are well known. Those playing defense are often a step or two behind just identifying vulnerabilities – and often at the mercy of patch-management cycles or operational constraints that prevent them from addressing (patching) these waiting-to-be-exploited holes in the network.
What can we do about it?
The first thing we can do is to stay informed ! But this alone can be a difficult task with a fleet of various platforms running various software versions at scale. How many flavours of Cisco IOS, IOS-XE, and NXOS platforms make up your enterprise? What version are they running? And most importantly is that version compromised?
The old way might be to get e-mail notifications (hurray more e-mail!), maybe RSS-feeds, or go device-by-device, webpage-by-webpage, looking up the version and if it’s open to attack or not.
Do you see now how an enterprise becomes vulnerable? It's tedious and time-intensive work. And the moment you are done, the data is stale – what, are you going to wake up every day and review threats like this manually? Assign staff to do this? Just accept the risk and do the best you can trying to patch quarterly?
Enter: Automation
These types of tasks beg to be solved with automation !
So how can you do it?
Let’s just lay out a high level, human language, defined use-case / wish list.
Can we, at scale, go get the current IOS / IOS-XE / NXOS software version from a device?
Then can we send that particular version somewhere to find out if it has been compromised ?
Can we generate a report from the data above?
Technically the above is all feasible; easy even!
Yes, we can use Ansible and the Cisco Genie Parser to capture the current software version
As with any new REST API I like to start with Postman and then transform working requests into Ansible playbooks.
First, under a new or existing Cisco.com Collection, add a new request called IOS Vulnerabilities
Cisco.com uses an OAuth2 authentication mechanism: you first authenticate against a token API (https://cloudsso.cisco.com/as/token.oauth2), which returns an authorization Bearer token that is then used to authenticate and authorize against subsequent Cisco.com APIs
Your Client ID and Secret are found in the API portal after you register for the OpenVuln API
Let’s hard code a version and see what flaws it has
Ok so it has at least 1 open vulnerability!
Does it tell us what version fixes it?
Ok let’s check that version quickly while we are still in Postman
This version has no disclosed vulnerabilities!
One interesting thing of note – and our automation is going to need to handle it – is that if there are no flaws found we get a 404 back from the API not a 200 like our flawed response!
The Playbook
For the sake of the example I am using prompted inputs / response but these variables could easily be hardcoded and Ansible Vaulted.
So first prompt for your Cisco hosts username and password and your Cisco.com ClientID and Client Secret
Register the response
Then in the IOS.yml group_vars I have my Ansible network connections
I’ve put all IOS-platforms in this group in the hosts file to target them
Next step: run the show version ios_command and register the response
Genie parse and register the JSON
Now we need, just like in Postman, to go get the OAuth2 token but instead of Postman we need to use the Ansible URI module
Then we have to parse this response and setup our Bearer and Token components from the response JSON.
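A sketch of those two steps – the prompted variable names (client_id, client_secret) are assumptions, and the grant type is the standard client credentials flow:

- name: Get OAuth2 token from Cisco.com
  uri:
    url: https://cloudsso.cisco.com/as/token.oauth2
    method: POST
    body_format: form-urlencoded
    body:
      grant_type: client_credentials
      client_id: "{{ client_id }}"
      client_secret: "{{ client_secret }}"
  register: token
  delegate_to: localhost

- name: Set the token type and access token facts
  set_fact:
    token_type: "{{ token.json.token_type }}"
    access_token: "{{ token.json.access_token }}"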
Now that we have our token we can authenticate against the IOS Open Vulnerability API
There are a couple of things going on here (a sketch of the task follows this list):
We are passing the Genie parsed .version.version key to the API for each IOS host in our list
We are using the {{ token_type }} and {{ access_token }} to Authorize
We have to expect two different status codes; 200 (flaws found on a host) and 404 (no flaws for the host software version)
I've added until and delay to slow down / throttle the API requests so as not to get a 406 back because I've overwhelmed the 10-requests-per-second upper limit
We register the JSON response from the API
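A sketch of that task – the registered Genie variable name and the exact openVuln endpoint path are assumptions to confirm against your own parse output and the API documentation:

- name: Check each host's running version against the openVuln API
  uri:
    url: "https://api.cisco.com/security/advisories/ios?version={{ pyats_version.version.version }}"
    method: GET
    headers:
      Authorization: "{{ token_type }} {{ access_token }}"
      Accept: application/json
    status_code: [ 200, 404 ]   # 200 = advisories found, 404 = no disclosed vulnerabilities
  register: vuln_raw
  until: vuln_raw.status == 200 or vuln_raw.status == 404
  retries: 3
  delay: 10   # throttle to stay under the API rate limit
  delegate_to: localhost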
As always I like to create a “nice” (easy to read) version of the output in a .json file
Note we need to Loop over each host in our playbook (using the Ansible magic variable ansible_play_hosts) so the JSON file has a data set for each host in the playbook.
Lastly we run the template module to pass the data into Jinja2 where we will create a business-ready CSV file
Which looks like this broken apart:
Like building the JSON file first we need to loop over the hosts in the playbook.
Then we check if the errorCode is defined. You could also look at the 404 status code here. Either way if you get the error it means there are no vulnerabilities.
So add a row of data with the hostname, “N/A” for most fields, and the json.errorMessage from the API (or just hardcode “No flaws” or whatever you want here; “compliant”)
Now if the errorCode is not defined it means there are open flaws and we will get back other data we need to loop into.
I am choosing to do a nested set of two loops – one for each advisory in the list of advisories per software version. Then inside that loop another loop, adding a row of data to the spreadsheet, for each BugID found as well (which is also a list).
There are a few more lists – CVEs, for example. You can either loop over these as well (but we start to get into too much repetition / too many rows of data) or just use regex_replace(',', ' ') to remove all commas inside a field. The result is the list spaced out inside the cell, sorted alphabetically; if you do not do this it will throw off the number of cells in your CSV
The results
What can you do with “just” a CSV file?
With Excel or simply VS Code with the Excel Preview extension – some pretty awesome things!
I can, for example, pick the ID field and filter down a particular flaw in the list – which would then provide me all the hosts affected by that bug
Or pick a host out of the list and see what flaws it has, if any. Or any of the columns we have setup in the Jinja2
Included in the report is also the SEVERITY and BASE SCORE for quick decision making or the Detailed Publication URL to get a detailed report for closer analysis.
Here you can easily start capturing Ansible Facts for IOS and NXOS and transform the JSON into CSV and Markdown !
Also included are a bunch of valuable Genie parsed show commands which transform the response into JSON then again transforms the JSON into CSV and Markdown!
The playbooks use prompts, so you should be able to clone the repo, update the hosts file, and start targeting your hosts! For full enterprise support I suggest you refactor the group_vars, remove the prompts, and move to full Ansible Vault – but for portability and ease of start-up I've made them prompted playbooks for now.
I would love to hear how they work out for you – please comment below if you have success!
One of my favourite recipes is the Hakuna Frittata – both because I am a big fan of puns and because I enjoy this hearty vegetarian meal that even I can handle putting together.
Inspired by this simple recipe I have decided to try and document my highly successful Ansible Cisco NXOS Facts playbook that captures and transforms raw facts from the data centre into business-ready documentation – automatically.
Ansible Cisco NXOS Facts to Business-Ready Documentation
Prep: 60-90 Min
Cook: 2-3 Min
Serves: An entire enterprise
Ingredients
1 Preheated Visual Studio Code
1 Git repository and Git
1 stick of Linux (a host with Ansible installed and SSH connectivity to the network devices)
3 pinches of Python filters
1 Cup of Ansible playbook (a YAML file with the serially executed tasks Ansible will perform)
1 Cup of Ansible module – NXOS_Facts
2 Tablespoons of Jinja2 Template
1 Teaspoon of hosts file
1 Tablespoon of group_vars
2 Raw Eggs – Cisco NXOS 7000 Aggregation Switches
Helpful Tip
This is not magic, but it did not necessarily come easy to me. You can use debug and print a msg to yourself at the CLI. At each step where I register or have data inside a new variable I like to print it to the screen (one, to see what the data, in JSON format, looks like; and two, to confirm my variable is not empty!)
Directions
1. You will need to first setup a hosts file listing your targeted hosts. I like to have a hierarchy as such:
hosts

[DC:children]
DCAgg
DCAccess

[DCAgg]
N7K01
N7K02

[DCAccess]
N5KA01
N5KB01
N5KA02
N5KB02
Or whatever your logical topology resembles.
2. Next we need to be able to securely connect to the devices. Create a group_vars folder and inside create a file that matches your hosts group name – in this case DC.yml
DC.yml:
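A minimal sketch of what DC.yml might contain – how you feed in the credentials (prompts, Vault, or otherwise) is up to you:

ansible_connection: network_cli
ansible_network_os: nxos
ansible_become: yes
ansible_become_method: enable
ansible_user: "{{ nxos_user }}"
ansible_ssh_pass: "{{ nxos_password }}"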
3. Create all the various output folder structure you require to store the files the playbook creates. I like something hierarchical again:
4. Create a playbooks folder to store the YAML file format Ansible playbook and a file called CiscoDCAggFacts.yml
In this playbook, which runs serially, we first capture the facts then transform them into business-ready documentation.
First we scope our targeted hosts (hosts: DCAgg)
Then we use the NXOS_Facts module to go gather all of the data. I want all the data so I choose gather_subset: all, but I could pick a smaller subset of facts to collect.
Next, and this is an important step, we take the captured data, now stored in the magic Ansible variable – {{ ansible_facts }} and put that into output files.
Using the | to_nice_json and | to_nice_yaml Python filters we can turn the "RAW JSON" inside the variable (one long string if you were to look at it) into human-readable documentation.
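A sketch of those first tasks – the output paths are illustrative and should match whatever folder structure you created in step 3:

- name: Gather all NXOS facts
  nxos_facts:
    gather_subset:
      - all

- name: Create Nice JSON facts file
  copy:
    content: "{{ ansible_facts | to_nice_json }}"
    dest: "../documentation/DC/AGG/{{ inventory_hostname }}_facts.json"
  delegate_to: localhost

- name: Create Nice YAML facts file
  copy:
    content: "{{ ansible_facts | to_nice_yaml }}"
    dest: "../documentation/DC/AGG/{{ inventory_hostname }}_facts.yaml"
  delegate_to: localhost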
4b. Repeatable step
NXOS Facts provides facts that can be put into the following distinct reports:
Platform information (hostname, serial number, license, software version, disk and memory information)
A list of all of the installed Modules hosted on the platform
A list of all IP addresses hosted on the platform
A list of all VLANs hosted on the platform
A list of all of the enabled Features on the platform
A list of all of the Interfaces, physical and virtual, including Fabric Extenders (FEX)
A list of all connected Neighbors
Fan information
Power Supply information
For some of these files, if the JSON data is structured in a way that lends itself to it, I will create both a comma-separated values (csv; a spreadsheet) file and a markdown (md; "html-light") file. Some of the reports are just the csv file (IPs, Features, and VLANs specifically).
The following code can be copied 9 times and adjusted by updating the references – the task name, the template name, and the output file name – otherwise the basic structure is repeatable.
In order to create the HTML mind map you will also need mark map installed.
Another example of the code – this is the Interfaces section – notice only the name, src, and dest file names need to be updated as well as the MD and HTML file names in the shell command.
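A sketch of one such repeatable block, here for the Interfaces report – the template and output paths are assumptions that follow the folder structure above:

- name: Template Interface facts into CSV and Markdown
  template:
    src: ../roles/dc/dc_agg/templates/CiscoDCAggInterfaceFacts.j2
    dest: "../documentation/DC/AGG/{{ inventory_hostname }}_Interfaces.{{ item }}"
  loop:
    - csv
    - md
  delegate_to: localhost

- name: Render the Markdown into an interactive HTML mind map
  shell: "npx markmap-cli ../documentation/DC/AGG/{{ inventory_hostname }}_Interfaces.md"
  delegate_to: localhost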
5. The Jinja2 Templates
Now that we have finished our Ansible playbook we need to create the Jinja2 templates we reference in the Ansible template module (in the src line)
Create the following folder structure to store the templates:
roles\dc\dc_agg\templates
Then, for each of the 9 templating tasks, create a matching .j2 file – for example the “base facts” as I like to call them – CiscoDCAggFacts.j2
In this template we need an If Else End If structure to test if we are templating csv or markdown then some For Loops to iterate over the JSON lists and key value pairs.
Add a header row with columns for the various fields of data. Reference your Nice JSON file to find the key value pairs.
No “For Loop” is required here just straight data from the JSON
Since it's not csv it must be md; so add the appropriate markdown header rows
Then add the data row using markdown pipes for delimiters instead of commas
Close out the If
An example with For Loops might be Interfaces or Neighbors but the rest of the syntax and structure is the same
Now because there are multiple interfaces I need to loop or iterate over each interface.
Now add the row of data
Note you can include “In-line” If statements to check if a variable is defined. Some interfaces might not have a Description for example. Test if it is defined first, and if not (else) use a default of “No Description”
Other fields are imperative and do not need to be tested.
Close the Loop
Now do the markdown headers for Interfaces
Then the For Loop again and data row again but using pipes
Then close out the If statement
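Putting those pieces together, a skeleton of the Interfaces template might look like the following – the key names under net_interfaces are assumptions, so check your Nice JSON file for the real ones:

{% if item == "csv" %}
Interface,Description,MAC Address,MTU,Speed,Duplex
{% for interface in ansible_facts.net_interfaces %}
{{ interface }},{{ ansible_facts.net_interfaces[interface].description | default('No Description') }},{{ ansible_facts.net_interfaces[interface].macaddress | default('N/A') }},{{ ansible_facts.net_interfaces[interface].mtu | default('N/A') }},{{ ansible_facts.net_interfaces[interface].speed | default('N/A') }},{{ ansible_facts.net_interfaces[interface].duplex | default('N/A') }}
{% endfor %}
{% else %}
| Interface | Description | MAC Address | MTU | Speed | Duplex |
| --------- | ----------- | ----------- | --- | ----- | ------ |
{% for interface in ansible_facts.net_interfaces %}
| {{ interface }} | {{ ansible_facts.net_interfaces[interface].description | default('No Description') }} | {{ ansible_facts.net_interfaces[interface].macaddress | default('N/A') }} | {{ ansible_facts.net_interfaces[interface].mtu | default('N/A') }} | {{ ansible_facts.net_interfaces[interface].speed | default('N/A') }} | {{ ansible_facts.net_interfaces[interface].duplex | default('N/A') }} |
{% endfor %}
{% endif %}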
Complete the remaining templates. Save everything and Git commit / push up to your repo.
Cooking Time
Let's run the playbook against two fully-loaded production Nexus 7000s using the Linux time command
Two minutes in the oven !
Results
Some samples of the output.
First the Nice JSON – note the lists have been collapsed to be brief but any of the lists can be expanded in VS Code for the details
Interfaces
Neighbors
Now some prefer YAML to JSON so we have the exact same data but in YAML format as well
Now the above is already incredible but I wouldn’t call JSON and YAML files “business-ready” – for that we need a spreadsheet!
The real tasty stuff are the CSV files!
The general facts
Interfaces
Note that you can filter these csv files directly in VS Code – here I have applied a filter on all interfaces without a description
This captures all types of interfaces
Including SVIs
The Markdown provides a quickly rendered VS Code or browser experience
And the Interactive HTML is pretty neat!
Now remember we have all of these file types for all of the various facts these are just a few samples I like to hand out to the audience – for the full blown experience you can hopefully follow this recipe and cook your own Cisco NXOS Ansible Facts playbook!
Please reach out if you need any additional tips or advice ! I can be reached here or on my social media platforms.
In the wake of some very high profile IT security breaches and state-sponsored attacks using compromised software, today I wrote some infrastructure as code Ansible playbooks that create business-ready documentation to help us understand our Cisco software version footprint against the release the vendor recommends. It is very important to run "Safe Harbor" code in the form of the Gold Star release. These releases are as close as it gets to being bug-free, secure, tested, and supported in production environments.
The 'old way' involved getting the Cisco Part ID (PID), or several PIDs, and looking up the recommended release on Cisco.com using an ever-deepening hierarchy of platforms, operating systems, and PIDs. At scale this is a day's worth of work to gather all of this information and present it in a way the business can understand.
Building on my recent success with the Serial2Info Cisco.com API as well as Ansible Facts I thought this might be another nice use-case for business-centric, non-technical (not routes, IP addresses, mac addresses, etc), extremely important and critical insight.
Use Case
Can I automatically get the PID from a host or group of hosts and provide it to the Cisco.com Software Suggestion API building business-ready reports in CSV and markdown?
Answer: Yes!
The Playbook
Again you are going to need:
* A Linux Host with SSH access to your Cisco IOS devices and HTTPS access to the Cisco.com API
* Credentials for the host and for the OAuth2 API
* We are not using Genie parsers here so just "base" Ansible will work
Step 1. Setup credential handling
Create a playbook file called CiscoCoreRecommendedReleaseFacts.yml
Again I use prompted methodology here same as the Serial2Info API
Gather the username, enable secret, Cisco.com API ClientID, Client Secret
Step 2. Gather Ansible Facts
Using the ios_facts module gather just the hardware subset
Because we are using Ansible Facts we do not need to register anything – the JSON is stored in the Ansible magic variable ansible_facts
I need 2 keys from this JSON – the PID and ideally the current running version. These can be found as follows in the ansible_facts variable:
Which is accessed as ansible_facts.net_model
Which again is accessed as ansible_facts.net_version
With the information above – without going any further – I could already build a nice report about what platforms and running versions there are!
But let’s go a step further and find out what Cisco recommends I should be running!
Step 3. Get your OAuth2 token
First, using the Ansible URI module
We need to get our token using the registered prompted credentials.
The API requires the following headers and body formatting; register the response as a variable (token):
We have to break apart the RAW JSON token to pass it to the ultimate Recommended Release API:
Now we are ready to send PIDs to the API.
Step 4 – Send PID to Cisco.com API
Again using the URI module:
Here we pass the ansible_facts.net_model Fact to the API as an HTTP GET:
The headers and body requirements. Notice the authentication and how we pass the Bearer Token along. We also register the returned JSON:
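As a sketch – the exact endpoint path here is an assumption to confirm against the Software Suggestion API documentation:

- name: Look up the recommended release for this platform
  uri:
    url: "https://api.cisco.com/software/suggestion/v2/suggestions/software/productIds/{{ ansible_facts.net_model }}"
    method: GET
    headers:
      Authorization: "{{ token_type }} {{ access_token }}"
      Accept: application/json
  register: RecommendedRelease
  delegate_to: localhost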
Here is what the returned JSON looks like:
The highest level key is json or accessed via RecommendedRelease.json
There is a productlist
Which, as you can see, is a list as denoted by the [ ]
Inside this list is another product key with the values from the API about the product itself
A little further down we find the recommended software release
Step 5 – Transform technical documentation into business-ready CSV / MD files
These JSON and YAML files (I also use the | to_nice_yaml filter to create a YAML file along with the JSON file) are created for technical purposes, but we can do a bit better, making the information more palatable using business formats like CSV and markdown.
It is just a matter of using Jinja2 to template the CSV and Markdown files from the structured JSON variables / key-value pairs.
Add a final task in the Ansible playbook that will loop over the CSV and MD file types using the template module to source a new .j2 file – CiscoCoreRecommendedReleaseFacts.j2 – where our logic will go to generate our artifacts.
The Jinja2 starts with an If Else EndIf statement that checks if the Ansible loop is on CSV or not. If it is it uses the CSV section of templated file format otherwise it uses markdown syntax.
First we want to add a CSV header row
Then we need a For Loop to loop over each product in the productList
Now we add our “data line” per product in the loop using the various keys
Hostname for example uses the Ansible magic variable inventory_hostname
Then we want the Base PID. We use the Ansible default filter to set a default value in case the variable happens to be empty.
We continue accessing our keys and then we close the loop.
Now we need to create the Markdown syntax
And the same logic for the “data row” but with pipes instead of commas. Make sure to close off the If statement
Step 6 – Run playbook and check results
We run the playbook as ansible-playbook CiscoCoreRecommendedReleaseFacts.yml
Answer the prompts
Let the playbook run and check the results!
Summary
Again, with a few free, simple tools like Ansible and the Cisco.com API we can – at scale – gather and report on the current running version and the vendor-recommended version quickly, easily, and fully automatically!
Now go and start protecting your enterprise network armed with these facts!
Layer 9 issues – finance – are often some of the most challenging a network engineer faces. Contract management can be particularly difficult in an organization of any scale, especially if you are not "sole source" purchasing. Serial numbers and contracts are also not typically things the "network people" want to deal with, but when that P1 hits and you try to open a SEV 1 TAC case – only to find out you are not under contract – well, I've been in less terrifying car accidents than this nightmare scenario.
I have good news! Using a mix of automation and developer-like tools, the network engineer can now create a real source of truth that, along with routes and MAC address tables and other technical information, can include inventory and contractual business documentation built from stateful, truthful, real-time facts from Cisco.
Ok so let’s get into it!
As a rough outline for our logic here is the use case:
Can I automatically gather the serial numbers from Cisco device hostnames and then provide them to Cisco and get my contractual state for each part on each device?
Answer: YES !
What you will need:
* Linux host with Ansible, Genie parser
* Linux host requires both SSH access to the Cisco host and Internet Access to the OAuth2 and Cisco.com API HTTPS URLs
* Cisco SmartNet Total Care – I have written up instructions in this repo under the "OnBoarding Process" section
The Playbook
Step 1 – We will need to get the serial number for every part for a given hostname. For this we will use the standard show inventory command for IOS using the Ansible ios_command module. I will be using prompted methods for demonstration purposes, or for on-demand multi-user (each with their own accounts) runtime, but we could easily Ansible Vault these credentials for fully hands-free runtime or to containerize this playbook. I am also targeting a specific host – the Core – but I could easily change this to every IOS device in the enterprise. This playbook is called CiscoCoreSerial2InfoFacts.yml
First prompt for username, enable secret, Cisco Customer ID, Cisco Customer Secret and register these variables:
Then run the ios_command show inventory and register the results in a variable.
Step 2 – Parse the raw output from the IOS command
Next, we use Genie to parse the raw results and register a new variable with the structured JSON. Genie requires, for show inventory, the command, the operating system, and the platform (in this case a Cisco 6500)
And here is what that structured JSON looks like:
So now we have a nice list of each part and their serial number we can feed the Cisco.com API to get back our contract information.
Step 3 – Get an OAuth 2 token from Cisco web services.
Cisco.com APIs use OAuth2 for authentication, meaning you cannot go directly against the API with a username and password. First you must retrieve a Bearer token and then use that token, within its limited lifetime, against the ultimate API.
Using the Ansible URI module go get a token and register the results as a variable. Provide the Customer ID and Client secret prompts to the API for authentication. This is an HTTP POST method.
With the new raw token, set up the token type and access token from the raw response
Step 4 – Provide token to the Serial2Contract Cisco API to get back contractual information for each serial number.
In this step we are going to use an Ansible loop to loop over the Genie-parsed structured JSON from the show inventory command, providing the sn key for each item in the list. We need to use the | dict2items Ansible filter to transform the dictionary into a list we can iterate over.
The loop is written as
loop: "{{ pyats_inventory.index | dict2items }}"
And each serial number is referenced in the URL each iteration through the loop:
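A sketch of that task – the registered variable names follow the ones used in this post, while the SmartNet Total Care endpoint path is an assumption to confirm against the API documentation:

- name: Look up contract coverage for each serial number
  uri:
    url: "https://api.cisco.com/sn2info/v2/coverage/summary/serial_numbers/{{ item.value.sn }}"
    method: GET
    headers:
      Authorization: "{{ token_type }} {{ access_token }}"
      Accept: application/json
  loop: "{{ pyats_inventory.index | dict2items }}"
  register: Serial2Info
  delegate_to: localhost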
We register the returned structured JSON from the API as Serial2Info which looks like this:
So now I have the JSON – let’s make it a business ready artifact – a CSV file / spreadsheet and a markdown file – using Jinja2
Step 5 – Using Jinja2, let's template the structured JSON into a CSV file for the business.
Create a matching Jinja2 template called CiscoCoreSerial2InfoFacts.j2 and add a task to Ansible that uses the template module to build both a CSV file and a markdown file from the JSON.
In the Jinja2 file we need a section for CSV (if item = “csv”) and a section for markdown (else) based on their respective syntax. Then we need to loop over each of the responses.
result in Serial2Info['results'] is the loop used. I also add a default value using a filter – | default('N/A') – in case a value is not defined. SFPs, for example, do not have all of the fields that a supervisor module has, so to be safe it's best to build in a default value for each variable.
The final Jinja2 looks something like this:
Which results in a CSV and Markdown file with a row for every serial number and their contractual facts from the API.
Summary
Large-scale inventory and contract information can easily be automated into CSV spreadsheets that the business can easily consume. Ansible, Genie, Cisco.com APIs, Jinja2 templating, and a little bit of logic come together into an automation pipeline that ensures contractual compliance and inventory fidelity at scale!
And he's right! I can't stop making new GitHub repositories that turn Genie-parsed show commands into documentation – like show ip interface brief, as seen above!
“We become what we behold. We shape our tools, and thereafter our tools shape us.” ― Marshall McLuhan
For my first attempt at network automation I used the tools I had managed infrastructure with for the past 20+ years: a file editor (Notepad) and a file transfer program (WinSCP) on my Windows 10 machine, plus a Linux machine (CentOS at the office / Ubuntu at home). Ansible was the first new tool introduced to me, and every other tool here followed either to support my Ansible development or because Ansible led me down the path to its discovery. So – I installed Ansible on the Linux environment at the office and made sure it had SSH connectivity to my in-band management network.
Ready to go! I had everything I needed to write YAML files and make my playbooks, group / host variable files, and inventory file. I could either write these in Notepad in Windows and transfer them to Linux or just “vi” them directly on Linux. All set right?
As a beginner trying to orchestrate a series of serially executed commands in Ansible playbook tasks, I was suffering from a mix of ignorance and arrogance. I didn't know what I didn't know. And while, yes, my playbook would eventually go on to be successful and make a large-scale, complex change across multiple devices without causing an outage of any kind, the process was brutal: back-and-forth trial and error, with all of these – what turned out to be unnecessary – steps of trying to track my latest version of working code across my development environments.
Shameful filenames like “John_Working_Code_v2_latest_new02.yml” were sprawling out of control and I was starting to feel like a “The Price is Right” contestant where they guess the value of 5 items, pull a lever to see how many they got right, then run back and guess again and try to figure out which ones were correct, then run back and pull the lever again.
Eventually I got all 5 items priced correctly but it was a lot of panic-driven, chaotic, running around, as it turns out, for no reason.
Was there a better way? Surely this isn’t what people mean when they say DevOps or infrastructure as code or network automation. Why would anyone do it this way? It doesn’t scale. It took weeks longer than had I just logged into each router at the CLI and configured the device manually by hand.
Ansible wasn’t that hard – but the tools I was using were simply wrong. Enter the modern toolkit.
TL:DR
– This toolkit took about three years to put together through a lot of hard work and discovery.
– Tools make all the difference.
– You need an Integrated Development Environment (IDE).
– Version and source control are good things. They are not just middle management talk.
– Network automation means treating infrastructure as code.
– You have crossed over from IT into Development; act accordingly.
– Use software development tools to solve software development problems.
– Git. Git. Git. Git. Git. Git.
– Powerful stuff.
– Leads to Continuous Integration / Continuous Delivery (CI/CD) in the long run.
– Both for configuration management as well as state capture, validation, and testing.
Version / Source Control – Git
I start with Git because, in order of software installation, Git should come first. You want to build up a development environment in order to work with infrastructure as code, and you want both version control (current working code; previous working code; testing new things without affecting old working things) and source control (source of truth; which copy is the master / primary / main copy; allowing for distributed development – like RBAC but over your code base). For Windows you need to download and install Git (first, before your IDE, so Git can be integrated when you install your IDE). On most Linux distributions Git either comes pre-installed or is available from the standard package manager.
Git is the actual software that does the version and source control. Git creates a hidden .git folder that tracks changes inside that Git-enabled folder. Git has commands used to work with code.
GitHub is an online Git repository. The largest collection of code in the universe, GitHub provides a free place to store Git repositories (the folder with the .git subfolder tracking all artifacts within the parent folder). GitHub, or other Git repository hosting sites or services, provides the source control over your infrastructure as code.
Git is used to clone Git repositories (from GitHub or another Git repository hosting site) locally – that is, take a full copy of the remote repository locally – where developers can make changes and then push those changes back into the remote repository.
Branching
Git uses branching as its version control system. A main branch (previously, and sometimes still, referred to as master) serves as the known-good, stable, current source of truth and intent; the master copy of a key, for example.
A branch – another full copy of the code with a different identifier from main/master – can be created for development purposes. Bug fixes, feature releases, scaling, or routine changes can be done within a branch, protecting the main branch; once it has been tested and QA has been performed, the branch is merged back into main through a mechanism known as a pull request, updating main's artifacts accordingly.
It might not seem obvious at first but in larger distributed development environments the pull request system allows for cross-team orchestration and collaboration. Pull requests can require approvals and reviews and can then also be used to trigger software builds and releases. The ever evolving history of a piece of infrastructure is completely documented and tracked in the pull request history handling the entire lifecycle of any given product, platform, or host.
I love VS Code. I really do. After Git is installed, download and install VS Code. VS Code is where you will be writing all of your code and reviewing the artifacts your code generates. It is fully Git integrated and aware, and things like git clone, git add, git commit, and git push are all simply point-and-click operations. Split-screen editing, syntax checking, and a vast library of extensions make VS Code my number one pick for an IDE.
VS Code Extensions
Extensions are plug-ins you can install to further enhance VS Code’s already awesome capabilities. There are thousands of extensions available out there. Here is my VS Code extension list that I find helps enhance my infrastructure as code development experience.
Formerly Microsoft Team Foundation Server (TFS), AzureDevOps provides development services for distributed development teams in the form of work boards (Kanban, other Agile systems, waterfall), Git repositories, software builds, tests, and software releases, allowing for full CI/CD DevOps.
ADO has the advantage, for me, as being an on-prem / private cloud solution with full enterprise controls (RBAC; AD integration) and feature sets.
Git repositories can transition to SSH key authentication. Docker container images can be built and deployed based on Git triggers and actions, which build and deploy Ansible playbooks automatically.
Rich history, version and source controls, and an amazing collaboration space – particularly around Git pull requests – fully enable and charge up infrastructure as code development. Adapt and adopt Agile practices for infrastructure teams.
Docker Integration
Moving towards infrastructure as code and full CI/CD in AzureDevOps, Docker has become a very important tool and a key component of my success in DevOps. A Docker container image can be thought of as an immutable, CD-ROM/DVD-like ISO (hence the "image" part) which can run an operating system and software inside of it, self-contained and without the need for a full-blown hypervisor like VMWare or Hyper-V. Docker images can be interactive and you can "log into" / shell into them, but any changes made inside this session are discarded when the session ends. Ansible and pyATS can both be "containerized" and run inside Docker container images.
Why is this important?
It allows me to set up a software build (create a Docker container image based on a specified Dockerfile) and a software release in an AzureDevOps CI/CD pipeline. Now any Ansible playbook or pyATS test I previously scheduled with a human operator executing the automation can move to fully automated, human-independent CI/CD that is triggered by Git actions like pull requests.
A quick approach to Docker:
– Make it work at the CLI.
– Wrap this / convert this to Ansible playbook.
– Wrap the Ansible playbook in Docker.
– Build and release Docker image based on Git actions that trigger the CI/CD.
A sample infrastructure as code build:
And the matching release:
With detailed logs showing the Ansible playbook and Docker status.
Automation Tool – Ansible
After discovering Ansible in April of 2017 my entire approach to solving infrastructure problems changed. Now I work with an automate-first principle, and nearly every solution I've developed in the past three years has been an Ansible playbook of some kind. It really has been a one-size-fits-all tool for me. Cisco, Microsoft, Linux, VMWare, Azure, anything with a RESTful API; Ansible has done it all for me.
My key points about Ansible:
– Simple, powerful, agentless.
– No previous coding skills required. This is not like learning Python from scratch.
– Can be used for anything from gathering facts, tactical one-time changes at scale, or full configuration management.
The loaded term “infrastructure as code” or even “network automation” really boils down to the fact that you will be working with a few new file types to create artifacts like data models, templates and playbooks.
YAML Ain’t Markup Language (YAML)
YAML is a human readable data serialization language. In terms of Ansible both your data models (infrastructure represented as intent-based code) and playbooks (the file containing the serially executed tasks to perform) will be YAML files.
A data model for a switch might look like this:
As you can see the file format is simple, made up of lists of key-value pairs, and very human readable. This represents the intent for this particular device.
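For illustration, a minimal data model consistent with the host_vlans variable used in the Jinja2 example further down might look like this:

---
hostname: CampusAccess01
host_vlans:
  10:
    name: DATA
  20:
    name: VOICE
  30:
    name: MGMT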
A playbook, on the other hand, might look like this:
This playbook is scoped for the CampusAccess group in the Ansible hosts inventory file. Prompts for username and password and then runs the ios_facts module printing the gathered facts on the screen.
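Not the original playbook, but a minimal sketch along the same lines (connection settings such as network_cli / ios are assumed to live in group_vars):

---
- hosts: CampusAccess
  vars_prompt:
    - name: ios_user
      prompt: Username
      private: no
    - name: ios_password
      prompt: Password
      private: yes
  vars:
    ansible_user: "{{ ios_user }}"
    ansible_ssh_pass: "{{ ios_password }}"
  tasks:
    - name: Gather IOS facts
      ios_facts:

    - name: Print the gathered facts to the screen
      debug:
        msg: "{{ ansible_facts }}"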
JavaScript Object Notation (JSON)
You may not need to write JSON but you should be able to consume JSON if you are working with Ansible. All Ansible facts, for example, are represented in JSON. This is much like a RESTful API that returns JSON in response to an HTTP GET. You may need to write JSON if you are POST / PUT (creating / updating) records with an API as the body of the HTTP POST / PUT will be JSON data.
Ansible facts get returned by default like this as an example of an Ansible-related JSON artifact:
Jinja2
Jinja2 is Ansible’s (and Python’s) templating language. Saved as .j2 files, a Jinja template is exactly that – a template of another file type. Configuration stanzas, JSON, YAML, CSV, markdown, HTML, plain-text; more or less anything can be templated with Jinja2.
Logic is often applied inside a Jinja2 template such as For Loops or If Else End If declarations. Jinja2 also allows for the use of variables – which reference the original data model examples.
The VLANs from the data model of example could be templated for Cisco IOS as follows:
{% for vlan in host_vlans %}
vlan {{ vlan }}
 name {{ host_vlans[vlan].name }}
{% endfor %}
Automated Documentation with Ansible filters – RAW JSON, Nice JSON, Nice YAML, CSV, Markdown, and Interactive HTML Mind Maps
Probably my favourite, and often overlooked, Ansible capability is to generate automated network and infrastructure state documentation. I do this with Ansible filters. Starting with this simple playbook:
The Ansible magic variable ansible_facts can be transformed. To simply take the RAW JSON inside ansible_facts you can use the Ansible copy module to copy it into a file:
But using Ansible filters – adding | and then the filter, the ugly RAW JSON can be transformed into “nice” human readable JSON:
Or even better – Nice YAML!
Which looks like this:
CSV and Markdown files can also be created using the ansible_facts JSON and another filter, JSON_Query, an SQL-like query tool used against JSON.
So we set_facts (create our own variables) from the parsed JSON:
Which we can then use to make CSV:
Or Markdown:
Mark Map
Mark Map is a nifty little program that converts any valid, well-formed Markdown file into an interactive HTML “mind map”.
You need to have node.js and npm installed in the Linux environment.
Then simply run the one line command referencing the markdown file you want to convert to a mind map.
npx markmap-cli <filename>
The output, which is fully interactive, looks like this:
Ansible Vault
A big part of moving from human-driven automation to CI/CD and automated builds and releases is securing the credentials used to authenticate against any given infrastructure. As all of the infrastructure as code is in a central Git repository, you don't want to store your credentials in plain / clear text. One approach is to use prompted mechanisms for securely handling credentials, but this does not lend itself to fully autonomous automation in Docker containers.
Ansible Vault provides a way to encrypt full files or, in our case, specific variables, such as the credentials key-value. Once vaulted, the encrypted variable can be safely stored inside the Ansible group_vars file, inside the Git repo, for all to see. The matching password to unlock the variable can be provided at runtime (again, counterintuitive) or – and this is my approach – saved in plain text in a file in a secure location on the Linux host.
The magic happens at Docker container image runtime where the password file is mounted as a volume into the Docker image so the Ansible playbook can dynamically unlock the credential variables at runtime. Because the lock and key are separate this is a very secure way to automate Ansible with Docker safely.
To move from something like this, that uses prompted inputs from a human operator
Vaulted variables that can be run non-interactively
Now you can replace the ansible_ssh_pass variable with the vaulted password.
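One way to generate the vaulted block is ansible-vault encrypt_string, which prints a ready-to-paste ansible_ssh_pass: !vault | block for the group_vars file (substitute your own password; you will be prompted for the Vault password):

ansible-vault encrypt_string 'YourEnablePasswordHere' --name 'ansible_ssh_pass'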
To make it fully non-interactive, save your Vault password (<A Strong Encryption password here>) to a plain-text file (yes, it seems counterintuitive, but this is fine and safe) stored somewhere secure on the Linux host.
sudo vi /etc/secrets/AnsibleVault.txt < A Strong Encryption password here >
(ESC, :wq!)
Then, in your ansible.cfg file stored in the same location as the playbooks add the following line under [defaults]
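Assuming the path used above, that line would be:

[defaults]
vault_password_file = /etc/secrets/AnsibleVault.txt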
The playbook will now securely and automatically authenticate without the need for prompts or for insecurely saving naked credentials in the clear.
Latest tool – Application Programmable Interfaces (APIs) for Infrastructure
APIs have finally arrived for infrastructure in the form of RESTful interfaces on switches and routers, on the Cisco Catalyst 9000 series for example, and other appliances and web services. I have had great success with F5’s AS3 API. Cisco.com has amazing APIs. Cisco Prime Infrastructure and Cisco Identity Services Engine APIs are extremely capable and easy to use. BlueCat Address Manager has an API. They are popping up everywhere!
Command Line Interface: cURL
Client Uniform Resource Locator (cURL) is a command-line tool used to interact with an API. As of May 2018 cURL is even included in Windows by default.
Try it yourself – launch a command prompt (Start -> Run; cmd) and type in:
curl https://quotes.rest/qod
You should get back a Quote of the Day in JSON from the public open RESTful API:
Graphical User Interface: Postman
Postman is a GUI API client. Postman can be used for quick and simply API interactions but is also a very powerful API automation and development tool that should not be dismissed as just a simple API browser.
The same Quote of the Day API call would look like this in Postman:
Automation: Ansible URI Module
Ansible has a universal module, the URI module, that allows for API automation. The following Ansible playbook, quoteoftheday.yml, can be created to automate the Quote of the Day.
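The full quoteoftheday.yml is in the GitHub repository linked below; a minimal sketch of the idea, using only the uri and debug modules, might look like this:

---
- name: Quote of the Day
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Get the Quote of the Day from the public API
      uri:
        url: https://quotes.rest/qod
        method: GET
        return_content: yes
      register: quote_of_the_day

    - name: Print the returned JSON
      debug:
        msg: "{{ quote_of_the_day.json }}"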
Using additional Ansible tools, the JSON can be manipulated to create usable reports.
The Quote of the Day playbook is available on Automate Your Network’s GitHub.
State Validation: Cisco Testing Automation Solution (CTAS)
The foundation here is parsing. Using the Genie library framework, various infrastructure CLI commands are parsed and transformed into JSON output. From there pyATS can run automated boolean tests against the Genie-parsed key-value pairs. xPresso is a GUI-based ecosystem that adds RBAC, scheduling, and more advanced, easier-to-build testing workflows.
Similar to Ansible, which keeps connection strings inside group variables, CTAS uses testbed files that describe the devices and provide shell connectivity for the parsing and testing.
A sample testbed file for a Cisco ISR (note the password can be encrypted using pyATS cryptography methods):
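Since the original file appears as a screenshot, here is a minimal sketch of what such a testbed file could look like; the device name, IP address, and the encrypted-string placeholder are illustrative assumptions:

devices:
  ISR01:                     # hypothetical device name
    os: iosxe
    type: router
    credentials:
      default:
        username: admin
        # placeholder for a string encrypted with the pyATS secret-strings feature
        password: "%ENC{<encrypted string>}"
    connections:
      cli:
        protocol: ssh
        ip: 10.0.0.1         # hypothetical management IP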
A sample crc_errors pyATS test file, written in Python. This test could be used with the ISR testbed to check for CRC errors on all interfaces.
A sample log, taken from an AzureDevOps CI/CD release inside a Docker container image:
xPresso also offers automation and orchestration capabilities:
Summary
Over the past three years my toolkit as a network engineer has grown dramatically from a humble text editor and a file transfer program to dozens of new tools each with their own amazing capabilities. In short – go get these tools and start using them to solve problems.
– Git
– VS Code
– VS Code Extensions
– GitHub account and repositories for public use
– AzureDevOps for Enterprise use
– Linux of any flavour
– Ansible
– Python
– Docker
– Postman
– cURL
– Ansible URI module
– YAML experience
– JSON experience
– Jinja2 templating experience
– Markdown experience
– HTML experience
– Mind map transformations
– Genie parsers
– pyATS
– xPresso
I am always open to questions, comments, or feedback if you need help getting started!
Downloading the tools and exploring yourself is the best way to get started but I’m here to help!
Infrastructure as Code and Network Automation – Where to Start
Learning any new skill takes time, patience, a willingness to try and fail, and ideally the drive to continuously learn and grow from our mistakes until we become more and more proficient. The number one question I get is “How do you get started?”. I got started the hard way – trying to automate a tactical, one-time, unique, complicated, large-scale problem out of necessity, with little time to learn the best way to approach such a problem. This post provides you with safe, easy, valuable, scalable Ansible playbooks you can copy, study, and modify to fit your infrastructure. I want to stress that the following code does not attempt to change, modify, add, remove, update, or delete any data or configuration. The playbooks simply connect, securely, to a target host or set of hosts; capture stateful facts (that is, truthful key-value pairs and lists of information about the current state or configuration); parse those facts; and then transform them into usable, human-readable, automated documentation.
TL;DR
– Documenting enterprise networks and servers is tedious work at best.
– Most enterprise documentation is, for lack of a better word, wanting, if it exists at all.
– Various Ansible modules can be used to gather stateful, truthful facts from infrastructure.
– Not limited to network devices. Windows, Linux, and VMWare provide facts to Ansible as well.
– Easy.
– After you capture facts they are easily transformed into automated state documentation.
– RAW JSON, Nice JSON, Nice YAML, CSV (spreadsheets!), Markdown, and interactive HTML mind maps from Ansible facts.
– Scales to n+x devices.
– Safe, secure, no possibility of disrupting the network. Think of it as running a bunch of show commands or doing HTTP GETs.
– Loved by management everywhere.
Enter: Ansible
If you are familiar with me at all you likely already know Ansible is my automation tool of choice. If you are new around here – let me tell you why. I believe Ansible is so easy that I can write a simple blog post with a few lines of code that you should be able to reproduce and make it work for you. There is little to no barrier to entry and your solution complexity will scale along with your personal growth and muscle memory with the tool. So let’s get started.
Linux
You are going to need a Linux environment. If you are a traditional Windows user who may not have access to a RHEL, CentOS, Debian, Ubuntu, or other Linux platform, you can use the Windows Subsystem for Linux (WSL2) on Windows 10 to run a Linux environment.
For example to install Ubuntu on Windows 10:
Right-click the Windows Start icon – select Apps and Features.
In the Apps and Features window – click Programs and Features under Related Settings on the right side of Apps and Features.
Click Turn Windows Features On or Off (with the shield icon) on the left side of the Programs and Features window.
Scroll to the bottom of the Features window and put a check mark beside Windows Subsystem for Linux; click OK and close the open windows.
Launch the Microsoft Store.
Search for Ubuntu – click the first result.
Click Install.
Wait for Ubuntu to install.
Press Windows Key and start typing Ubuntu – click and launch Ubuntu.
The first time Ubuntu launches it has to set itself up – give this some time.
Enter your username and password for Ubuntu.
Update Ubuntu – this step will take some time.
$ sudo apt update
$ sudo apt-get upgrade -y
Install Ansible
Make sure Python is installed (on newer Ubuntu releases the package is python3).
$ sudo apt-get install python -y
Install Ansible.
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible -y
hosts
You will need a hosts file. This is the foundation for a good, scalable, modular Ansible install base. Hosts can be organized hierarchically to match your physical or logical topologies. The Linux machine must be able to resolve the hostnames (if you use hostnames rather than IP addresses) and must have IP connectivity to the targets for the playbooks to work. For a standard Cisco enterprise design you might have a hosts file like this:
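The original hosts file appears as a screenshot; as an illustration (the hostnames are hypothetical, and the group hierarchy is an assumption built from the group names used later in this post), a hierarchical inventory might look like this:

[ENTERPRISE:children]
CAMPUS
DC

[CAMPUS:children]
ACCESS
DIST

[ACCESS]
access01
access02

[DIST]
dist01
dist02

[DC]
dc-access01
dc-access02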
Ansible needs to be able to securely connect to the targeted host. There are no agents, and Ansible uses SSH, WinRM, or HTTPS as transport protocols. For most devices a username and password are required to authenticate and authorize the Ansible session. There are a few ways this can be handled, but for beginners I would set up a prompted mechanism to get going. Eventually you can learn about Ansible Vault, but to avoid hard coding plain-text passwords, a mistake even I made when I was beginning to use Ansible, start with prompted interactive playbooks where a human has to enter a username and password.
These connection strings are first set up in what’s known as a group variable, or group_vars, file, where all of the individual hosts in a group (i.e. dist01 and dist02 in the DIST group) inherit the variables that are set. Because we have everything nested under [ENTERPRISE], create the following file in a folder called group_vars.
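The file itself appears as a screenshot in the original post; a minimal sketch of group_vars/ENTERPRISE.yml might look like this (the username and password variable names are assumptions and must match the prompted variables in the playbook below):

ansible_connection: network_cli
ansible_network_os: ios
ansible_user: "{{ username }}"
ansible_password: "{{ password }}"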
This tells all the hosts in the Enterprise hosts group to use the Ansible network_cli connection mechanism; that the target operating system is Cisco IOS; and that the Ansible user and password are variables.
Playbooks
At the heart of Ansible are playbooks. Playbooks are YAML files made up of key-value pairs and lists of serially executed tasks. The first step in the playbook is to establish the scope of the playbook tasks from either a group or single host in the hosts file or locally using the localhost option. For this example target the Campus Access layer. One of the tasks in these facts playbooks will either call a specific facts module (like ios_facts), use the setup module, or target an API using the uri module. But first, we have to prompt the user for their credentials and store them in variables to be used by the Ansible connection strings in the group vars files.
Create a file called CiscoAccessFacts.yml inside the playbooks folder as follows:
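The playbook itself appears as a screenshot in the original; a sketch of the top of CiscoAccessFacts.yml, with prompted credentials feeding the variables used in the group_vars file above, could look like this:

---
- name: Gather Cisco IOS facts from the Campus Access layer
  hosts: ACCESS
  vars_prompt:
    - name: username
      prompt: "Enter your username"
      private: no
    - name: password
      prompt: "Enter your password"
      private: yes
  tasks: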
Now that we have connectivity to our devices in the ACCESS group, using the prompted credentials that are passed to the group_vars Ansible connection strings, we are ready to perform the actual IOS facts task as follows:
- name: Gather Ansible IOS Facts
  ios_facts:
    gather_subset:
      - all
That’s it! Now we have captured the Ansible IOS Facts. Because these are Ansible facts we do not need to register them as a variable; they are stored in the ansible_facts magic variable.
To print your facts to the screen you can use the Ansible debug with the following message as the next task in your playbook:
- debug: msg="{{ ansible_facts }}"
Save and run the file.
ansible-playbook CiscoAccessFacts.yml
Answer the prompts for credentials. After authenticating and gathering the facts something like this should be displayed on the screen, except with actual data values completed.
Cisco NXOS_Facts
Much like IOS, Ansible has an NXOS facts module as well. The NXOS module, as expected, provides the same baseline facts as IOS but adds hardware facts such as modules, fans, and power supplies, as well as software facts such as features, licensing, and VLANs.
Copy the Campus files and update them accordingly. Typically in a data centre, where the NXOS facts will be gathered, HA is configured with paired devices. These playbooks have been tested on Nexus 9000, Nexus 7000, Nexus 5000, and Nexus 2000 FEX modules.
- name: Gather Ansible NXOS Facts about DC Access
  nxos_facts:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
Save and run the playbook.
ansible-playbook CiscoNXOSAccessFacts.yml
Review the output on the screen and notice the new sets of facts only NXOS can provide.
Notice again the change from ios_facts to nxos_facts but that’s about it. Now you have all of your Data Centre Ansible Facts as well as your Campus!
This is great, right? What other facts can we get? How about compute facts! Yes, that’s right – we can use Ansible to get Windows, Linux, and VMWare (bare metal or virtual guest) facts too, using more or less the same steps.
Compute Facts
Ansible is not limited to gathering facts from Cisco or other network devices. In fact Ansible can be used to gather even more facts from compute platforms like Microsoft Windows, Linux of any flavour, and VMWare (both bare metal hosts and virtual guests).
Microsoft Windows Facts
That’s right – we can use Ansible, a Linux-only tool, to gather Microsoft Windows facts! More or less the same approach applies and the building blocks are the same: a hosts file, a group_vars file, and a playbook. Windows hosts, like Cisco hosts, can be logically organized any way you see fit – grouped by product line, OS, function, location, or other values. For now create a simple hosts file with one parent group called Windows.
The requirements and WinRM installation and configuration guide can be found here. Either HTTP or HTTPS can be used because Kerberos is ultimately securing the payload, even if the transport is only HTTP.
hosts
[Windows]
Server01
Server02
The group vars Ansible connectivity variables for Microsoft Windows are as follows:
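The group_vars file appears as a screenshot in the original; a sketch of a Windows.yml group_vars file, assuming the same prompted username and password variables as before, might look like this:

ansible_user: "{{ username }}"
ansible_password: "{{ password }}"
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_scheme: http
ansible_port: 5985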
Note the Ansible WinRM scheme needs to be set to either HTTP or HTTPS, and the corresponding Ansible port (5985 for HTTP; 5986 for HTTPS) needs to be selected to match the transport protocol. The Ansible connection uses WinRM, and the WinRM transport is specified as Kerberos.
Now, in the playbook, target the Windows group of hosts and use the same prompted mechanism code as before, updating it cosmetically to reflect Windows. The only change to the facts task is to switch from the ios_facts or nxos_facts module to the setup module.
- name: Gather Ansible Windows Facts about Windows hosts
  setup:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
Save the playbook as playbooks/WindowsFacts.yml and run the playbook.
ansible-playbook WindowsFacts.yml
Notice all of the amazing facts Ansible can discover about a Windows host or groups of Windows hosts.
Linux Facts
The great thing about the setup module is that it can be used against both Windows and Linux hosts. This means you simply need to clone the Windows artifacts (group_vars file, playbook, and hosts inventory) and refactor all of the Windows references to Linux (a Linux.yml group_vars file; a [Linux] hosts list; Windows-to-Linux cosmetic references), but the core Ansible task remains the same:
- name: Gather Ansible Linux Facts about Linux hosts
  setup:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
However, much like IOS vs NXOS facts, the amount of Linux facts eclipses even the huge list of facts from the Windows hosts. This is due to the native Ansible / Linux coupling and support.
VMWare
VMWare does not use the generic setup module; like Cisco IOS or NXOS, it has a specific facts module. VMWare facts actually use the underlying vSphere API, and there are two additional required fields on top of an authorized username and password: hostname and esxi_hostname. This module, vmware_host_facts, gathers facts about the bare-metal hosts, not the virtual guests. From my testing I found it best to populate both hostname and esxi_hostname with the ESXi hostname from the Ansible hosts inventory file.
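A sketch of the facts task (the credential variable names are assumptions carried over from the prompted playbooks above):

- name: Gather VMWare facts about the bare metal hosts
  vmware_host_facts:
    hostname: "{{ inventory_hostname }}"
    esxi_hostname: "{{ inventory_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
  delegate_to: localhost
  register: vmware_facts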
Very rich JSON, similar to that of Linux, is returned, including hardware information about virtual NICs, VMWare datastores, BIOS, and processors.
Microsoft Azure
Even clouds have Ansible facts! Azure facts are actually even easier to retrieve because of the simplified authentication mechanism. Username and password still work, or you can set up Service Principal credentials. Inside Azure you need to create an account with at least API read-only permissions. There are some prerequisites to install. First, pip install the Ansible Azure libraries.
$ pip install 'ansible[azure]'
You can create the following file $HOME/.azure/credentials to pass credentials to the various Azure modules without username and password prompts or credential handling.
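If you are using Service Principal credentials, the format of that file is roughly as follows (the values shown are placeholders):

[default]
subscription_id=<your subscription id>
client_id=<your client id>
secret=<your client secret>
tenant=<your tenant id>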
In the list of Ansible cloud modules find the Azure section. Each Azure module has two components – a config module and an info (facts) module.
Using the same process, along with json_query and a with_together loop, we can, for example, capture all Azure Virtual Network info. First we capture the Azure resource groups, then pass each resource group along to a second API call to get the associated networks.
- name: Get Azure Facts for Resource Groups
  azure_rm_resourcegroup_info:
  register: azure_resource_groups
- name: Get Azure Facts for all Networks within all Resource Groups
  azure_rm_virtualnetwork_info:
    resource_group: "{{ item.0 }}"
  register: azure_virtual_network
  with_together:
    - "{{ azure_resource_groups | json_query('resourcegroups[*].name') }}"
Ok great. So what? What can I do with these facts?
So far we have simply dumped the facts to the console to explore the various modules. What I like to do with these facts is create living, automated, stateful, truthful, human-readable (and management and operations will love you for it) documentation. With a little work, changing the playbooks from interactive, on-demand playbooks to non-interactive, scheduled, automatically executed ones, they can run all by themselves, creating snapshots of state in the form of reports.
First I like to capture the RAW JSON as a forensic artifact: the raw facts, unchanged and unfiltered, in case audit, compliance, security, or other downstream machine consumers require the unchanged RAW JSON.
This is easily done in Ansible using the copy module. We have the RAW JSON in the Ansible magic variable {{ ansible_facts }}; we just need to copy it into a file.
We will need a repository for the new output files so create a documentation folder structure with subfolders for your various platforms.
Add the following task, customizing the output file name based on the playbook environment, after the debug task. For example, in the IOS Access Facts playbook:
The Ansible magic variable {{ inventory_hostname }} can be used to reference the currently iterated inventory host, which we will use to identify the parent switch for each set of facts.
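A sketch of the task (the destination path follows the documentation folder structure created above; the exact file name is an assumption):

- name: Create RAW JSON file
  copy:
    content: "{{ ansible_facts }}"
    dest: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts_RAW.json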
Save and re-run the playbook. All of the IOS facts will now be stored, albeit in an ugly and hard-to-use form, in a RAW JSON file.
to_nice filters
Ansible has various filters that can be used to help parse or transform data. Using two of these filters, to_nice_json and to_nice_yaml, we can create human-readable, nice, pretty, and easy to consume JSON and YAML files.
Simply copy and paste the Create RAW JSON file task and modify the new copies as follows:
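A sketch of the two additional tasks, assuming the same destination folder as the RAW JSON file:

- name: Create Nice JSON file
  copy:
    content: "{{ ansible_facts | to_nice_json }}"
    dest: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts_Nice.json

- name: Create Nice YAML file
  copy:
    content: "{{ ansible_facts | to_nice_yaml }}"
    dest: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts_Nice.yml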
Save and re-run the playbook. Now you should have two human-readable files: the _Nice.json file (displayed in the first screenshot) and an even easier to read YAML file:
Traditional Reports from Facts
While the RAW and Nice JSON and YAML files are great for programming, data modeling, logic, templating, and other infrastructure as code purposes, they are still not exactly consumable by a wider audience (management; operations; monitoring; capacity planning). Using Ansible’s ability to parse the registered JSON and another filter, json_query, an SQL-like tool used to query and parse JSON, we can capture individual fields and place them into an ordered CSV or Markdown structure.
First we are going to use Ansible’s set_fact module to create our own variables out of the key-value pairs and lists in the JSON, which we can then re-use to create reports.
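A sketch of such a task; the exact fact keys depend on the platform and module, so the ones below (modeled on typical ios_facts output) are assumptions you will need to adapt to your own data:

- name: Set facts from the gathered IOS facts
  set_fact:
    image: "{{ ansible_facts['net_image'] }}"
    version: "{{ ansible_facts['net_version'] }}"
    serial: "{{ ansible_facts['net_serialnum'] }}"
    model: "{{ ansible_facts['net_model'] }}"
    # filesystem key may be flash: or bootflash: depending on platform
    disk_total: "{{ ansible_facts['net_filesystems_info']['bootflash:']['spacetotal_kb'] }}"
    disk_free: "{{ ansible_facts['net_filesystems_info']['bootflash:']['spacefree_kb'] }}"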
Now that we have set our own facts / variables from the JSON facts we simply put them into order to create a CSV file.
- name: Create Cisco IOS Access Facts CSV
  copy:
    content: |
      {{ inventory_hostname }},{{ image }},{{ version }},{{ serial }},{{ model }},{{ disk_total }},{{ disk_free }}
    dest: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
Some of the RAW JSON characters need to be cleaned up to pretty up the CSV file. The Ansible replace module can be used in combination with Regular Expression (RegEx) to clean up the file as follows:
- name: Format and cleanup CSV
  replace:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
    regexp: '[|]|"'
    replace: ''

- name: Format and cleanup CSV
  replace:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
    regexp: "'"
    replace: ''
Now we can add the header row to the CSV using Ansible’s lineinfile module.
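A sketch of the header task, with column names matching the order of the values written above:

- name: CSV Header Row
  lineinfile:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
    insertbefore: BOF
    line: "Hostname,Image,Version,Serial Number,Model,Total Disk,Free Disk"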
Save and re-run the playbook. You should now have a CSV file that looks similar to this but with data values in the rows following the header row.
Markdown
Think of Markdown as HTML-lite. Markdown reports from facts render nicely in browsers or in VS Code with the Markdown Preview extension. The process is more or less the same as for the CSV file: place the variables between pipes and create a header row. Markdown has strict rules for well-formed .md files, so pay close attention.
(There is more formatting clean up required which you can find in the GitHub repo links at the bottom)
Using the Ansible looping mechanism, with_items, we need to create three header rows for a valid Markdown file, as follows:
- name: Header Row
  lineinfile:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.md
    insertbefore: BOF
    line: "{{ item.property }}"
  with_items:
    - { property: '| -------- | ----- | ------- | ------------- | ----- | ---------- | --------- |' }
    - { property: '| Hostname | Image | Version | Serial Number | Model | Total Disk | Free Disk |' }
    - { property: '# Cisco IOS Facts for {{ inventory_hostname }}' }
This generates a Markdown file like this:
Mark Map / Interactive HTML Mind Map
Now that we have a well-formed markdown file we can use a relatively new tool to create a relatively new file type. Markmap is a node.js tool that can be used to transform any markdown file into an interactive HTML mind map.
First install the required libraries (node.js and npm)
This will generate an interactive HTML page with a mind map of the markdown like this:
Summary
Ansible facts are a great way to get started with network automation and working with infrastructure as code. They are safe, non-intrusive, valuable, and typically management-approved playbooks to get you started towards configuration management. They are also a great way to document that enterprise network you’ve been neglecting. Using these simple tools and techniques, your full enterprise network, from campus to data centre, cloud to WAN, Cisco to Microsoft to Linux to VMWare to Azure or AWS, can be automatically documented with real-time stateful facts!
GitHub Repositories
Here is a collection of GitHub repositories developed by Automate Your Network that you can use to explore the code or even transform into playbooks customized for your environment.