Merlin Feature: WebEx #chatbots with pyATS

With all the big WebEx news – including a new logo – I wanted to revisit the basic #chatbots I have working using pyATS, Python requests, and the WebEx API after the conversation came up in the #pyATS WebEx Community space today:

First, let’s take a look at what this does. This is not limited to Merlin; any pyATS job has this capability

If you create the pyats.conf file as Takashi suggests and add the [webex] information, the pyATS job will report its job summary into the WebEx space you provide in the config file.
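For reference, a minimal sketch of what that section might look like (key names assumed from the pyATS configuration docs; substitute your own token and space ID):

```
[webex]
token = <your bot or 12-hour token>
space = <the WebEx space ID to post into>
```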

This looks something like this inside of WebEx:

This in itself is pretty handy! And all you need to do is go to the Cisco WebEx for Developers portal and either make a Bot under My Apps

Or, right from the browser, grab one of the 12-hour tokens

The easiest way to get one of these is to go to the Documentation

Find the API Reference

Find Messages

Pick POST

COPY THIS BEARER TOKEN

Paste that into your pyats.conf

But how do I get the Room / Channel / Space ID?

If you browse to Rooms

You can GET your current Room list

This will give you the JSON list – here is the Merlin Room ID
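If you would rather script this lookup, here is a quick sketch of the same call using Python requests (assuming a valid token in webex_token):

```python
import requests

webex_token = "<your bot or 12-hour token>"

# List the rooms/spaces this token can see and print their IDs
response = requests.get(
    "https://webexapis.com/v1/rooms",
    headers={"Authorization": f"Bearer {webex_token}"},
)
for room in response.json()["items"]:
    print(room["title"], room["id"])
```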

That’s it! You are ready to connect your pyATS jobs for a job summary as a WebEx message!

Adding Network State Data

With the above foundational WebEx integration in place, and given WebEx’s simplicity, I thought I would integrate a few sample commands into a Merlin pyATS job to show the community how you can send Network State data to WebEx!

I want the message to be in Markdown, so I am going to use a Jinja2 template to craft the JSON we can POST with Python requests after pyATS has parsed the command or learned the feature.

We don’t need a lot to make this happen either; here is everything I import

  • Update – I’ve also come to discover we need one more import, and to pip install requests_toolbelt, in order to attach files to WebEx messages

We set up our WebEx room and token (12-hour or bot) as variables we can call later
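A representative sketch of those imports and variables (the room ID and token are placeholders; the exact set in Merlin may differ):

```python
import json
import requests
from jinja2 import Environment, FileSystemLoader
from pyats import aetest
from requests_toolbelt.multipart.encoder import MultipartEncoder

# WebEx details we will reuse in every POST (placeholder values)
webex_room = "<your WebEx room ID>"
webex_token = "<your bot or 12-hour token>"
```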

The general_functionalities are important; this is object-oriented code that gets reused for each pyATS learn or parse library call.

Then, for this example, I will do two learn functions, platform and routing, and see if I can transform real network state data into meaningful WebEx messages

I tell Python where to find the Jinja2 templates and set up a variable I can use later to load said templates
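Using the imports above, that setup is a one-liner (the templates/ path is an assumption):

```python
# Point Jinja2 at the folder holding the templates
jinja_env = Environment(loader=FileSystemLoader("templates/"))
```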

We then set up our pyATS framework and connect (testbed.connect) to our topology

Again the testbed file looks like this

Now that we have connected, we can begin our Test Steps, ultimately looping (for) over each device in our topology (testbed)

Yes, in this Sandbox there is only one device, but this could scale to X devices. Just add them to the testbed.

Now we can learn platform
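In sketch form, the loop and the learn call inside the test case look something like this:

```python
for device in testbed:
    device.connect()
    # Genie returns structured, JSON-like data for the learned feature
    self.learned_platform = device.learn("platform").info
```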

As of right now we have the following JavaScript Object Notation (JSON) data inside the self.learned_platform variable

Our goals:

  1. Send a log of our pyATS Merlin job to a WebEx Room or Individual
  2. Send this data as a human friendly message
  3. Create an XLSX spreadsheet we can attach to our message

Now we start our test steps

We will get a boolean pass/fail from the “Create CSV and Send to WebEx” step

Next I set up a few variables – namely the Jinja2 references, the directory for the XLSX file, and the file name.

Also – for attachments we will declare another variable, the MultipartEncoder with the information required to attach the Learned_Platform.csv file
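A sketch of that encoder (the text field is an assumption):

```python
multipart = MultipartEncoder(
    fields={
        "roomId": webex_room,
        "text": "Learned Platform",
        "files": ("Learned_Platform.csv", open("Learned_Platform.csv", "rb"), "text/csv"),
    }
)
```

When we later POST this to https://webexapis.com/v1/messages, the Content-Type header must come from multipart.content_type rather than a hard-coded value.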

Next we template the .xlsx file from the Jinja2 template

Which looks like this:

That renders the file that looks like this

We will use two more Jinja2 templates for the actual message we will send. Because the JSON body we POST to WebEx is a single line, and in Markdown a header starts with a # symbol, we will send the header as its own message first to avoid turning the whole thing into a header.

Here is the line in Python

And the matching Jinja2 template

Remember, we are sending one long single line / string as Markdown, so if we want multiple lines we need to add <br/>, the Markdown line-break tag

Here is how we send the header
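In sketch form, reusing the variables from earlier (template name assumed):

```python
# Render the small header template and send it as its own Markdown message
header_markdown = jinja_env.get_template("webex_header.j2").render(device=device.alias)
requests.post(
    "https://webexapis.com/v1/messages",
    headers={"Authorization": f"Bearer {webex_token}"},
    json={"roomId": webex_room, "markdown": header_markdown},
)
```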

Which looks like this in WebEx:

Now let’s go ahead and template the Markdown

Which looks like:

Important! I had to “trim” this down from the “full” Markdown as there *is* a character limit, so watch for that!

But that is also why we can attach the full CSV

So go get #chatbotting using real network state data!

Reach out if you hit any snags and watch for the full development video!

Creating a Network Search Engine

Imagine being able to use a keyword search engine against your network! A Google-like query for “VLAN 100”, a MAC address, an IP address, even an ACL, or simply the keyword “Down” that returns real-time search results!

It sounds far-fetched, but that is exactly what I’ve been able to do in the latest addition to my open source project, Merlin!

merlin by automateyournetwork

Network Magic: Transforming the CLI and REST API using Infrastructure As Code automation

High Level Goals

  1. Use the pyATS framework and Genie SDK to:
    a. Learn features
    b. Parse show commands
  2. With the JavaScript Object Notation (JSON) we get back from pyATS
    a. Index the JSON into a database
    b. Index the JSON into a search engine
    c. Visualize the database

Enter: Elastic

As you may know, Merlin already creates a no-SQL document database using TinyDB – a serverless database that is very easy to use. My only problem is that I haven’t found (and this was confirmed by the TinyDB author) a UI or frontend to consume and present the TinyDB data.

Poking around the Internet I found Elastic – a suite of tools that seems like a perfect fit for my goals: “Elastic – Free and Open Search”.

I suggest you start here and read about the ELK Stack

The Solution – Elastic Setup:

I set up a 14-day trial in the Elastic Cloud for the purposes of getting going. Elastic can also be run in a local Docker container or hosted on Linux.

  • Note – I tried using WSL Ubuntu but systemd is not currently supported and you will get this error:
System has not been booted with systemd as init system

Once you have logged into Elastic (you can use a Google account for this) you will want to set up a Deployment

Here is Merlin as a Deployment

Which then opens up a full menu of amazing features and capabilities

Some key information:

When you first set up your Deployment you will get one-time displayed credentials – you *need* to make sure you capture this information!

Your Endpoint (the URL to your database) is available for click-to-copy here in the main dashboard. You can also launch Kibana and Enterprise Search or copy their unique endpoint URLs here.

Since we are using Elastic Cloud, make note of the Cloud ID, as we need it in Python to connect to our endpoints.

In order to set up the Search Engine, click Enterprise Search and then, when presented with the option, Elastic App Search

Create an Engine

Name your engine (I would suggest whatever you named your deployment plus -engine or -search)

Now the next screen will present you with four methods of populating the Search Engine with JSON

We are going to be Indexing by API, and if you pay attention to the Example it will give you what you need to do this, along with a sample JSON body

(You get your URL and Bearer Token; make note of both, as we need them in the Python.)

The Solution – The Python:

Here is the relevant Python / pyATS code you need to build your own Elastic index (Deployment) and then also the Elasticsearch search engine!

First you need to pip install pyATS, Elasticsearch, and elastic_enterprise_search

pip install pyATS[full]
pip install elasticsearch
pip install elastic_enterprise_search

Next, in the actual Python, you will need to import the above libraries

As well as the pyATS framework

Next, in the Python, we need to set up a few things to interact with Elastic
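A minimal sketch of that setup, assuming the 7.x Python clients and placeholder credentials (the Cloud ID, password, endpoint, and key all come from the Elastic Cloud steps above):

```python
from elasticsearch import Elasticsearch
from elastic_enterprise_search import AppSearch

# Connect to the Elasticsearch deployment via the Elastic Cloud ID
es = Elasticsearch(
    cloud_id="<your Cloud ID>",
    http_auth=("elastic", "<your one-time displayed password>"),
)

# Connect to the App Search engine with its endpoint URL and private key
app_search = AppSearch(
    "<your Enterprise Search endpoint URL>",
    http_auth="<your private API key>",
)
```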

Now we get into the actual pyATS job, first setting up the AEtest section and using testbed.connect to establish our SSH connection to the network device

Next we set up our Test Case and define self, testbed, section, and steps. Each Step is a boolean test in pyATS.

For device in testbed kicks off the loop that runs the commands per device in the testbed topology (list of devices in the testbed file)

Now I have defined a reusable function that wraps a step and tries device.learn(function_name).info (failing gracefully if the feature could not be learned)
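A sketch of that reusable pattern (function name and messages assumed):

```python
def learn_feature(self, device, steps, feature):
    """Learn a pyATS feature inside a step, failing gracefully if unsupported."""
    with steps.start(f"Learning {feature}", continue_=True) as step:
        try:
            return device.learn(feature).info
        except Exception:
            step.failed(f"{device.alias} could not learn {feature}")
```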

Now we simply feed this function the various features we want to learn

In this case it was written for the Cisco DevNet Sandbox – NXOS – Nexus 9k, which only supports a limited number of features. In a real environment we can learn even more!

Then we use a different function for the parsed show commands

And run a variety of show commands

Now Merlin has, to date, created business-ready documents (CSV, markdown, HTML) and experimental documents (Mind Maps, Network Graphs) from the JSON we have inside all of these variables.

Now here is how we send the JSON to be Indexed in Elastic

Let’s take a few examples, starting with learn BGP. As a second fail-safe check, in case it did parse correctly but for some reason came back empty, I first check that it is not None

If it’s not None, we index it in our Deployment

Then we index it in our Search Engine
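Together, those two indexing calls look something like this (index and engine names assumed):

```python
if self.learned_bgp is not None:
    # Index the learned BGP JSON into the Elasticsearch deployment
    es.index(
        index="devnet_sandbox_nexus9k",
        id=f"{device.alias}_learned_bgp",
        body=self.learned_bgp,
    )
    # Index the same JSON into the App Search engine
    app_search.index_documents(
        engine_name="merlin-engine",
        documents=[self.learned_bgp],
    )
```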

Here is show ip interface brief

It’s easy and repetitive code – so much so that I will likely write another function for these six lines of code and just feed it the learn / show command.

The Outcome – Elastic

In order to confirm your Elastic Deployment is up – you can use cURL or Postman or the Elastic API Console

Wait what? I have just built a database that has an API I can query with Postman???

Y E S !

Check this out

Launch Postman and setup a new Collection called Elastic

Add your username and password (the one-time displayed stuff I told you to write down!) under the Authorization – Type – Basic Auth

Add a Request called Deployment

Copy and Paste your Elastic endpoint ID

Paste it in as a GET in your Deployment Request

You should get a 200 Status back

And something like this

In Elastic – You can do the same thing!

Launch the API Console

If you leave the field empty and click Submit you get the same status data back

What about our Network Data?!

Now if you pay close attention to the pyATS and Python logs – you will see this call and URL (and status) when you send the data to your Deployment to be Indexed

The 200 means it was successful – but you can take this string into Postman / Elastic API Console !

Back in Postman

Which gives us:

And in the API Console we just GET /devnet_sandbox_nexus9k/_doc/show_ip_interface_brief

Now – check this out – make your next GET against just the base index

In the DevNet Sandbox there are almost 35,000 rows of data! WHAT!?

The full state as JSON

Over in API Console

Very cool – but what about the Search Engine??

Well the engine is now populated with Documents and Fields

Which look like this

We can filter on, say, VRF documents and the search engine magic starts

Now let’s check out keyword searches in the Query Tester

VRF

How about an IP Address

What about “Up”?

Visualizations

I want to be open: I have not fully developed any visualizations yet, but I want to show you Kibana and the absolutely incredible dashboards we can create using the Elastic Deployment data

Launch Kibana and then select Kibana again

Now take a look at the incredible things we can do

As I said, I have barely scratched the surface, but let’s look at what we could do in a Dashboard

First thing we have to do is create an Index Pattern

I’ve selected the devnet_sandbox_nexus9k to be my index pattern

Now I have 6670 fields (!) to work with in Kibana Dashboards

Now it becomes, for a beginner like me, a little overwhelming simply because of the vast choices we have to work with this data

Summary

Kibana discovery and learning aside, my adventure into network search engines was fun and I learned a lot along the way. I’ve made a video of my development process here if you would like to check it out before you try it yourself.

ipSpace.net Must Read!

When I was tagged on Twitter about @ioshints’ (Ivan Pepelnjak, CCIE #1354 Emeritus) latest blog post, I thought somebody was telling me I should read the latest blog

I flagged this as “Hey what a coincidence I was just writing about #chatbots with Discord – I gotta read this later”

Turns out it was my article that was Worth Reading!

Why this is so special to me is that I started my automation journey with an ipSpace.net subscription which was a very key part of my early success with Ansible and Cloud automation specifically. Ivan has also personally helped me write better code and taken a personal interest in my success.

I am so incredibly humbled and thankful for Ivan’s recognition but even more by his commitment to be honest and open with his vast knowledge.

In a lot of ways I am trying to emulate Ivan’s approach and appreciate having a virtual mentor of such quality and capability.

Thanks!

Discussing the Future of Network Operations with #init6

I had a chance to meet some of my heroes at #init6 including Daren Fulwell!

Make sure you check out my open source project fueling the demo

Your Automation Journey can start anywhere!

So you want to automate all the things in your network? Great! More and more enterprises are realizing the benefits of automating the network and are demanding more from their engineering staff. You’re probably also thinking about how these automation skills will come in handy to progress your career. Like I said, Network Automation skills are in high demand now.

So now what?

If you’re like me, you’ll jump straight into it and quickly realize that automating what can be considered the “lifeblood” of an enterprise can be a daunting task. Rest assured though, there are things you can do today to begin your automation journey without needing to jump right into the deep end.

Standardize and Document

I would be remiss if I didn’t start off by saying that JC, in his book “Automate Your Network”, recommends starting your Automation Journey by documenting and standardizing anything and everything in your network. For me, a big benefit of doing this before I dove into Python and scripting was that I was able to bring my environment into a standard that makes building automation a very simple task.

I admit that it took me a bit of time and investment to bring my network into a working standard but once things were aligned it made the code I wrote that much more reusable.

Template all the Things!

If you’re like me, then you probably have a bunch of standard configuration commands stowed away in an Excel document. You may not know it, but one simple thing you can do as you start the automation journey is to convert those Excel documents into clean (and reusable) code using the Jinja2 Python library.

By templating your commands with Jinja2 you’ll immediately see the sky is the limit when it comes to where those commands can be run from. Whether you push those templates to another Python library like pyATS / Netmiko or simply document them for other team members, you’ll quickly see the benefits of “codifying” those CLI commands at an early stage.
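For example, a “standard access port” row from a spreadsheet might become a small template like this (a hypothetical standard):

```python
from jinja2 import Template

# A hypothetical access-port standard lifted out of a spreadsheet row
port_template = Template(
    "interface {{ interface }}\n"
    " description {{ description }}\n"
    " switchport access vlan {{ vlan }}\n"
    " spanning-tree portfast\n"
)

print(port_template.render(interface="GigabitEthernet1/0/1",
                           description="Office Printer", vlan=100))
```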

Create Tooling for your Team

As a Senior Engineer, whenever I see a team member create a little tool or even a dashboard I get all jittery and excited. In my opinion, enabling your team is a selfless task that is often forgotten about. This is something I recommend you do (regardless of where you are along your automation journey) as it will get you working with other frameworks and you’ll be one step closer to a more automated and programmatic environment.

Reimagining The CLI with Python

The Command Line Interface. A nearly 60-year-old technology that a majority of the world’s networking teams rely on solely for their architectures, designs, monitoring, operations, and support. Most of us, pre-DevNet, have been raised and groomed to become CLI wizards and warriors, with a super-hero level of ability being the standard expectation.

Command-line interface – Wikipedia

But the dirty secret is the CLI sucks, and we are literally using a tool that was released half a century ago. Imagine if COBOL (1959) was still the only programming language in the world? Welcome to Enterprise IT Networks – here is the CLI – good luck to you!

Well, today I am going to reimagine a modern, next-generation CLI using Python and see if I can breathe new life into our very old friend.

This post in Meme format

Why does the CLI still suck?

It is grounded in its historical roots. Take, for example, that the original IOS had 256 kB of memory on pre-Pentium chipsets! So they needed a lean, text-driven CLI to configure and operate the early routers and switches.

Cisco IOS – Wikipedia

It is also why modern next-generation devices come with a REST API that supports RESTCONF, NETCONF, and YANG models – they want you to get away from the CLI and move towards a modern approach.

But what about the legacy fleet? The brownfield? What if my staff are religious about the CLI? Or fearful of learning REST, of using Postman and JSON, or even of dreaded network automation like Ansible or Python?

Culture will change over time, but we need an easy, accessible, and palatable alternative to SSH’ing into devices one by one to perform CLI operations.

Take the most basic, first-to-be-run-on-any-device command: show version.

In this video, the first I’ve ever done, I explore how, using Python, pyATS, Genie, Jinja2 Templates, and Cloud REST APIs, we can create the next-generation show version command – check it out!

My First Pure-Python Network Automation with pyATS / Genie !

I am very proud of this next piece of infrastructure as code for a few reasons.

  1. It addresses the problems with performance and speed at scale I’ve had with my current methodology and tools (Ansible)
  2. I feel like I am ready to “graduate” from Ansible to Python
  3. I’m already using Genie
  4. I’m already using pyATS
    * Limited to the handful of Solution Examples
  5. I already have working automation solutions and I think I can translate / refactor / at least be inspired by previous Ansible-based solutions.

Where to start?

I’ve been down the road of learning network automation from scratch – this time let’s start with simple information gathering and transformation.

Speaking of inspiration – I am going to start with a “Just the Facts” approach and get show interfaces status – my favourite command – into a CSV, a Markdown file, and, to spice it up this time, an HTML page. All from Genie-parsed JSON.

Only this time using pure Python – no Ansible training wheels (crutches?)

How to attack this?

One approach is to break it down in human language and then see if we can translate it to Python. Another is to find working examples and guides provided by the Cisco team. Using a mix of the two, and some other online resources, here is how I did it.

The job folder is where I will keep the pyATS job file and code file. Output will hold the three output files. I plan on using Jinja2 just like in Ansible, so we need a Templates folder. Finally, pyATS uses the concept of testbed files to set up connectivity and authentication. These are very similar to Ansible group_vars.

I’ve included a .gitignore file to keep the .pyc files out of the Git repository.

The Job file. This is a pyATS control file you can use to run the code. You can feed arguments in this way but I have not done that here.

The job file

Pretty simple so far – import the os and run the code.
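A sketch of such a job file, assuming the test script sits next to it (file name hypothetical):

```python
import os
from pyats.easypy import run

def main(runtime):
    # Run the actual test script as a pyATS task
    run(
        testscript=os.path.join(os.path.dirname(__file__), "show_interfaces_status.py"),
        runtime=runtime,
    )
```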

The first thing in the Python code is to set up the Python environment you need. Make sure to import json, as we need to work with the Genie-parsed data.

The actual code

Next we will set up Jinja2 and the file loader

Jinja2 setup

Now we import Genie and pyATS

Set up a logger

Ok, so we need three source templates, one for each file type

Turn on the logger

Let’s load up the testbed file

A testbed looks like this:

Note that yes! We CAN encrypt the string! %ENC{ } represents the pyATS encrypted string! Safe to store in Git repos!
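A minimal testbed sketch along those lines (host details are placeholders; the password shows the %ENC{ } form):

```yaml
devices:
  dist-sw01:
    os: iosxe
    type: switch
    credentials:
      default:
        username: automation
        password: "%ENC{gAAAAABf...}"
    connections:
      cli:
        protocol: ssh
        ip: 10.10.20.100
```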

Now some magic – we parse our command into a variable as JSON
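The parse itself is a one-liner once connected (testbed path and device name assumed):

```python
from genie.testbed import load

# Load the testbed, connect, and parse the command into structured data
testbed = load("testbed/testbed.yml")
device = testbed.devices["dist-sw01"]
device.connect()
parsed = device.parse("show interfaces status")
```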

Run the results through the templates

Which look like this:

CSV
Markdown
HTML

Then we create the output files back in Python to finish the job
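In sketch form, the rendering and file writing might look like this (template and output names assumed):

```python
from jinja2 import Environment, FileSystemLoader

jinja_env = Environment(loader=FileSystemLoader("Templates/"))

# Render each template against the parsed JSON and write the three output files
for ext in ("csv", "md", "html"):
    template = jinja_env.get_template(f"show_interfaces_status.{ext}.j2")
    with open(f"Output/show_interfaces_status.{ext}", "w") as output_file:
        output_file.write(template.render(interfaces=parsed))
```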

Which look like – ha! – we don’t know if this works yet! Let’s check it out!

The job in action

The command to run the job
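It is the standard pyATS job runner; with the folder layout above it would be something like:

```
pyats run job job/show_interfaces_status_job.py --testbed-file testbed/testbed.yml
```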

Next it loads up the testbed

pyATS is very verbose but in a good verbose with valuable information about your job

Next the actual SSH connection is set up using Unicon (this is different from Ansible, which uses Paramiko)

Ok, my device’s banner is displayed. My banner is left over from some CI/CD work but it’s the right banner – I’m in!

Some basic platform stuff gets dumped to the job log followed by my next job steps

It seems to be working so far
show interfaces status

Ok it’s fired the command! Milestone in the job reached – now it should register this result as JSON in a variable next.

Now during my development I added the following to confirm this step was working to dump the variable to the screen:

print(variable_name)

print() replaces Ansible’s debug: msg=“{{ }}” – good!

Similar to an Ansible recap we get a pyATS Easypy Report

Easypy Report > Ansible.log

The Git Add * test

I like to build suspense so I change directories up a folder and try to stage, hopefully, the 3 new files into Git

cd ..

git add *

git commit -am "did my first python code work?"


Amazing… but what do they look like?!?

They look incredible!

CSV output
Markdown Output
HTML Output – RAW
HTML Rendered

What does this mean ?

It means, seemingly, I’ve been mastering the wrong tool. That a faster, easier, and more elegant tool is available. This is ok – I feel like Ansible was primary school and I’m moving into the next stage of my life as a developer and moving up into high school with Python.

It also means I have a lot of code to refactor into Python – also fine – a good opportunity to teach my colleagues.

It also means I will be focusing less and less on Ansible, I think, and more and more on Python

Twenty years ago I was studying to become a computer programmer analyst in college, writing C++, Java, Visual Basic 6, COBOL, CICS, JCL, HTML, CSS, SQL, and JavaScript. Now, two decades later, I still have the magic touch and have figured out Python.

You can expect a lot more solutions like this – in fact I am going to see if I can work my #chatbot / #voicebot capabilities into Python.

Modern_Show_Interfaces_Status by automateyournetwork

A modern approach to the Cisco IOS-XE show interfaces status command using Python pyATS / Genie and Jinja2 templating to create business-ready CSV, Markdown, and HTML files

BlueCat Cisco Live Contest!

I am sharing this because I love both BlueCat and Cisco!

CiscoLive changed my life and is where I was introduced to DevNet and automation!

Make sure you get into this giveaway!

Good luck!

Happy Birthday – A retrospective look at “Automate Your Network” two years later

Two years have flown by since self-publishing “Automate Your Network: Introducing the Modern Approach to Enterprise Network Management” and I wanted to reflect on the book, being an author, self-publishing, and how the world of infrastructure as code and network automation has changed since the book’s release.

Why did I write a book in the first place?

My primary motivation was sharing knowledge and experience. I was doing things in an entirely new way with entirely new tools and had also put these tools together in such a way that they formed an actual software-development-like ecosystem. An infrastructure as code framework, if you will, that included automation capability. Two years ago, and even still to this day, I found myself baffled that this new methodology had not set the networking world on fire. Why were we still using the CLI, device by device, manually handcrafting our work when we could create little robots programmatically to do those manual, repetitive tasks, or even the full configuration of every single device at scale? I was a 2xCCNP / 5xCisco Specialist, but in the dozens of books, hundreds of videos, and hours and hours in the lab and in the live production network I had never been exposed to automation or infrastructure as code. You have to remember Cisco DevNet was around but they did not have any formal training or certifications; the Cisco Certification Apocalypse was in March 2020 – almost 18 months after I sat down and started writing my book!

My second driving force was the actual market for this type of material. I had read two books. “Network Programmability and Automation: Skills for the Next-Generation Network Engineer” – in my opinion the book on network automation.

I followed up with “Ansible for DevOps: Server and configuration management for humans”

I have absolutely nothing but 5-star reviews for both of these books. Edelman, Lowe, and Oswalt’s work covered the broader landscape with Linux and other technology tips beyond just Ansible, while Geerling’s is laser-focused on Ansible.

But, and I mean no disrespect here, while those two books basically invented the Network Automation industry, after reading them I was still a bit lost on how to connect all these new technologies together. They are incredible technical pieces that really paved the way for me, but I wanted to cover slightly lower-hanging fruit and try to write about the entire end-to-end workflow of automation. How do you use Git – beyond the commands – to version and source control your Ansible? What tools – VS Code and extensions – should I use to write the automation code effectively and easily? I did not come away with these two key pieces after reading the above books.

Third, I love writing. I was an Arts major coming out of high school; not a technology major. I went back to school to learn how to program. But I love writing. And I thought I had finally found something I could talk about and write that Great Canadian Novel – ok well not a novel but a technology book in my case.

Lastly, and I still believe this, and it’s certainly a little arrogant, but I think my way is the best way to solve technical infrastructure problems. My book was a way of trying to be an authority on this topic – because it’s not theoretical this is how I solve problems on a massive, important, complex, production enterprise network – and I believe it is easy and you can do it my way too!

On writing:

Writing the book took a lot of discipline. And the writing was actually, for me, the easy part. I have a stream-of-consciousness approach to writing and just let it flow. Working out “universally accessible” code was a bit of a challenge. What if they are on CentOS or Ubuntu or RHEL or whatever? Are you sure this code works?

The hard part was the edits. So. Many. Edits. Now I am not sure if this is because my wife was the editor of the book or if this is just a natural thing authors and editors struggle with.

I really enjoyed writing the book. I tried to find a middle ground between easy enough for any beginner to pick up and figure out and more advanced people who may have used Ansible before.

On self-publishing

So I struck out with about 10 different publishers. I did make it into a formal discussion with one, but it broke down because ultimately they didn’t see a market for such a book at the time – most of these publishers really didn’t know what to make of a book about network automation – so I looked around, and the Amazon Kindle Direct Publishing platform turned out to be the easiest way to get my book published.

And it was! I had a final copy ready to go for the paperback. So, using the Kindle Create software, I also had an e-Book version as well – and anyone with Kindle Direct or Amazon Prime (I think) can read the book for free!

The KDP portal is easy to use and I’ve had no complaints about Amazon’s printing of my books or hosting it.

Pitfall: marketing!

My only comment on marketing a book if you decide to self-publish – be careful. You can quickly turn what you thought was going to be passive income into a very active drain on your resources. Between Twitter sponsored tweets, Facebook ad campaigns, LinkedIn feature posts, and just basic Amazon keyword auction bids – it turns out in the first three months my book was costing me money! And a lot of it!

My editor was *not* happy with this arrangement!

I shut down all spend on marketing after it cost me about $2,000. My new approach, and continued approach, was to shitpost like crazy, try and establish my own “brand” if you will, and set up a larger online profile for myself. This has mainly been on Twitter and LinkedIn, and I wanted to focus on podcasts and other interactive new media.

So how did it go?

I think it’s gone great. The book has won some awards. It is referenced in VMware’s “Network Automation for Dummies” special edition. I’ve been invited to do some amazing podcasts and continue to see more and more interest in the book. I’ve become a minor, very minor, contributor to a larger automation and infrastructure as code movement. Cisco’s invited me to be a part of DevNet Create. BlueCat has invited me to a round table discussion with other industry leaders. I’ve met some amazing people.

And, ultimately, I believe I’ve helped a lot of people learn how to solve problems differently. That is the general feedback I get – the book is great and changed my entire approach to solving problems. I consider this a success.

By the numbers

I do want to stress that while yes, I hoped to sell lots of copies, I did not set out to make a lot of money with my book. I really wanted to try and contribute what I had learned and my new way of working with enterprise networks, using infrastructure as code and automation, with the larger community. I want to thank each and every person who paid for the book – honestly I am so humbled – and for those interested – here are my sales figures to date:

2019:
* Books: 337
* Kindle Pages Read: 13,000

2020:
* Books: 117
* Kindle Pages Read: 9,300

2021:
* Books: 58
* Kindle Pages Read: 2,200

Totals:
* Books: 512
* Kindle Pages Read: 24,500

Impact

The book is more like a single raindrop in a storm really – but it was ahead of its time in some ways. Cisco DevNet totally revamped their certifications in March 2020 as mentioned, but also introduced an entirely new certification track for developers and infrastructure as code and automation. The industry is rapidly catching up to the book and I see more Tweets about automation than ever. I have connected with some really brilliant people who, like me, are totally absorbed in the world of automating networks. I’ve had a few dozen people reach out directly thanking me and supporting me, which is always humbling.

What’s next for the book?

It is early, but I’m teaming up with Educative.io to transform the book into an online interactive digital course!

I am in talks about a follow-up book with an actual interested publisher – I’ve submitted my proposal this week!

And I’m actually selling more books than ever – interest has never been higher in network automation or infrastructure as code!

So thank you, again, for joining and supporting me on this journey!

My Network Talks To Me – Literally!

This next post may seem like science fiction. I woke up this morning and checked that my output files – MP3 files – really did exist and that I actually made my Cisco network “talk” to me!

This post is right out of Star Trek so strap yourself in!

Can we make the network talk to us?

After my success with my #chatbot, my brain decided to keep going further and further until I found myself thinking about how I could actually make this real. Like most problems, let’s break it down and describe it. How much of this can I already achieve and what tools do I need to get the rest of the solution in place?

I know I can get network data back in the form of JSON (text) – so in theory all I need is a text-to-speech conversion tool.

Enter: Google Cloud !

That’s right, Google Cloud offers exactly what I am looking for – a RESTful API that accepts text and returns “speech”! With over 200 voices, male and female, across dozens of languages, I could even get this speech in Canadian French spoken in a dozen or so different voices!

I am getting ahead of myself but that is the vision:

  1. Go get data, automatically, from the network (easy)
  2. Convert to JSON (also easy)
  3. Feed the JSON text to the Google Cloud API (in theory, also easy)

The process – Google Cloud setup

There is some Google Cloud overhead involved here, and this service is “free” – for up to 1 million processed text words or 3 months, whichever comes first. It also looks like you get $300 in Google bucks and only 3 months of free access to Google Cloud.

Credit card warning: you need a credit card to sign up for this. They assured me, multiple times, that this does not automatically roll over to a paid subscription after the trial expires; you have to actually engage and click and accept a full registration. So I hope this turns out to be free for 3 months and no actual charges show up on my credit card. But in the name of science fiction I press on.

So go set up a Google Cloud account, then your first project, and eventually you will land on a page that looks like this.

Enable an API and search for text

Enable this API and investigate the documentation and examples if you like.

Now, Google Cloud APIs are very secure, to the point of confusion. I have not fully ironed out the whole automation pipeline yet – mainly because of how complex their OAuth2 requests seem to be – but for now I have a workaround I will show you to at least achieve the theoretical goal. We can circle back and mess with the authentication later. (Agile)

Setup OAuth2 credentials (or a Service Account if you want to use a JSON file they provide you)

Make sure it is a Web Application

This will provide you your clientID and secret.

For most OAuth2 that’s all you need – maybe I am missing the correct OAuth2 token URL to request a token but for now there is another tool you can use to get a token.

Google has an OAuth2 Developer Playground you can use to get a token while we figure out the OAuth stuff in the background

Follow the steps to select the API you want to develop (Cloud Text-To-Speech)

Then in the next step you can request and receive a development token

You can also refresh this token / request a new token. So copy that Access Token; we will need it for Postman / Ansible in the next steps.

Moving to Postman

Normally under my collection I would setup my OAuth – here is the screenshot of the settings I’m just not sure of. And here is the missing link to full automation.

So far so good here

Again, this might be something trivial, and I am 99% sure it’s because I have not set up this redirection or linked the two things, but it was getting late and I wanted to get this working and not get stuck in the OAuth2 weeds

First here is what I think I have right:

Token name: Google API
Auth URL: https://accounts.google.com/o/oauth2/auth
Access Token URL: https://accounts.google.com/o/oauth2/token
Client ID:
Client Secret:
Scope: https://www.googleapis.com/auth/cloud-platform

But what I’m not really sure what to do with is this Callback URL – I don’t have one of those?

Callback URL: this is my problem; I am really not sure what I need to do here

I believe I need to add it here:

But I have no cookie crumbs to follow; all of the “?” help icons are sort of “This is where your Authorized redirect URLs go” and that’s it

Open call to Google Cloud Development – I just need this last little step then the rest of this can be automated

Anyway moving along – pretending we have this OAuth2 working – we can cheat for now using the OAuth2 Playground method.

So here is the Postman request:

We want a new POST request in the Google Cloud API Postman Collection. The URL is:

https://texttospeech.googleapis.com/v1/text:synthesize

So cheat (for now) and grab the token from the OAuth2 Playground and then in your Postman Request Authorization tab – select Bearer Token and paste in your token. From my experience this will likely start with ya29 (so you know you have the right data here to paste in)

Tab over to Headers and double-check you have Bearer ya29.(your key here)

As far as the body goes – again we want RAW JSON

Now for the science fiction – in the body, in JSON, we set up what text we want converted to speech

The canned example they provide

So we need the input (text) which is the text string we want converted.

We then select our voice – including the languageCode and name (and there are over 200 to choose from) – and the gender of the voice we want.

Lastly we want the audioConfig, including the audioEncoding – and since MP3 was life for me in the mid-to-late 90s, let’s go with MP3!
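Putting those three parts together, the body looks something like this (mirroring Google’s documented example, with the WaveNet voice we use later; the sample text is mine):

```json
{
  "input": {
    "text": "Now the network can talk to us!"
  },
  "voice": {
    "languageCode": "en-US",
    "name": "en-US-Wavenet-A",
    "ssmlGender": "MALE"
  },
  "audioConfig": {
    "audioEncoding": "MP3"
  }
}
```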

Hit Send and fire away!

Good start – a 200 OK with 52KB of data returned. It is returned as a JSON string:

Incredible stuff – this is the human voice pattern saying the text string – expressed as a base64-encoded audio string!

Curious, I found the Base64 Guru!

Ok – very cool stuff – but what am I supposed to do with it?

Fortunately Google Cloud has the insight we need

Hey – it’s exactly where we are at ! We have the audioContent and now we need to decode it!

Since I am still developing in my Windows world with Postman let’s decode our canned example

Carefully right-click copy (don’t actually click the link; Postman will spawn a GET tab against the URL, thinking you are trying to follow it)

Now create a standard text (.txt) file in a C:\temp\decode folder and paste in the buffered response

I’ve highlighted the “audioContent”: “ – you have to strip this out / delete it, as well as the trailing quotation mark at the end of the string – we just want the data starting with // and beyond, to the end of the string

Launch cmd, change to the C:\Temp\Decode folder, and run the command

certutil -decode sample.txt sample.mp3

As you can see, if your text file was valid you should get a “completed successfully” response from the certutil utility. If not, check your string again for leading and trailing characters.

Otherwise – launch the file and have a listen!

How cool is that?!?!?

Enter: Network Automation

As I’ve said before, anything I can make work with Postman I can automate with the Ansible URI module! But instead of some static text, I plan on getting network information back and having my network talk to me!

The playbook:

First we will prompt for credentials to authenticate

Now – let’s start with something relatively simple – can I “ask” the Core what IOS version it’s running? Sure – let’s go get the Ansible Facts, of which the IOS version is one, and pass the results along to the API!

For now we will hard-code our token – again, once I figure this out I will just have another Ansible URI step to go get my token, with a prompted ClientID / Client Secret at the start of the playbook along with the Cisco credentials. Again, a temporary workaround.

Again because I have used body_format: json I can write the body in YAML.

Let’s mix up the voice a little bit too so hit the Voices Reference Guide

Ok so for our body let’s have some fun and try an English (US) WaveNet-A Male.

For the actual text, let’s mix a static string, “John the Lab Core is running version”, with the magic Ansible Facts variable {{ ansible_facts[‘net_version’] }}

And see if this works

We need to register the response from the Google Cloud API and delegate the task to the localhost:
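A sketch of that task (google_token standing in for the hard-coded token):

```yaml
- name: Ask the Core what version it is running (text-to-speech)
  uri:
    url: https://texttospeech.googleapis.com/v1/text:synthesize
    method: POST
    headers:
      Authorization: "Bearer {{ google_token }}"
    body_format: json
    body:
      input:
        text: "John the Lab Core is running version {{ ansible_facts['net_version'] }}"
      voice:
        languageCode: en-US
        name: en-US-Wavenet-A
        ssmlGender: MALE
      audioConfig:
        audioEncoding: MP3
  register: speech_response
  delegate_to: localhost
```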

So now we need to parse and get just the base64-audio string into a text file. Just like in Postman this is contained in the json.audioContent key:

Now we have to decode the file! But this time with a Linux utility not a Windows utility

We can call the shell from Ansible easily to do this task:
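A sketch of those two steps (file names assumed):

```yaml
- name: Save the audioContent to a text file
  copy:
    content: "{{ speech_response.json.audioContent }}"
    dest: show_version.txt
  delegate_to: localhost

- name: Decode the base64 text into an MP3
  shell: base64 -d show_version.txt > show_version.mp3
  delegate_to: localhost
```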

Now in theory this should all work and I should get a text file and an MP3 file. Let’s run the playbook!

Let’s check if Git picked up a new file!

Ok! What does it sound like?!

Ok this is incredible!

Let’s try some Genie / pyATS parsing and some different languages!

Copy and paste and rename the previous playbook and call the new file GoogleCloudTextToSpeech_Sh_Int_Status.yml

Replace the ios_facts task with the following tasks

So now, for our actual API call, we want to conditionally loop over each interface if it is DOWN / DOWN (meaning not UP / UP and not administratively DOWN)

Now, as an experiment, let’s use French (Canadian) in a female WaveNet voice.

Does this also translate the English text to French? Or do I need to write the text en français? Let’s try it!

So this whole task now looks like this:

Now we need another loop and another condition to register the text from the results. We loop over the results of the first loop and when there is audioContent send that content to the text file.

Caution! RegEx ahead! Don’t be alarmed! Because of the “slashes” in an interface name (Gigabit10/0/5), the file path will get messed up, so let’s regex them to underscores
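A sketch of that save step, with the regex_replace filter handling the slashes (loop and variable names assumed):

```yaml
- name: Save audioContent per interface (slashes made path-safe)
  copy:
    content: "{{ item.json.audioContent }}"
    dest: "{{ item.item | regex_replace('/', '_') }}.txt"
  loop: "{{ tts_results.results }}"
  when: item.json is defined and item.json.audioContent is defined
  delegate_to: localhost
```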

Then we need to decode the text files!

So let’s run the playbook!

So far so good – now our conditionals should kick in – we should see some items skipped in light blue text then our match hits in green

Similarly, our next step should also have skipped items and then yellow text indicating the audioContent has been captured

Which is finally converted to audio

What does it sound like!??! Did it automatically translate the text as well as convert it to speech?

TenGig

Gig

A little more fun with languages

I won’t post all the code but I’ve had a lot of fun with this!

How about the total number of port-channels on the Core – in Japanese?!

Summary

In my opinion this Google Cloud API and network automation integration could change everything! Imagine if you will:

  • Global, multilingual teams
  • Elimination of technical text in favour of human-constructed phrasing, context, and simplicity
  • Audio files in your source of truth
  • Integrated with #chatops and #chatbots
  • Accessibility for the visually impaired or those with other physical challenges around text-based operations
  • A talking network!

This was a lot of fun and I hope you found it interesting! I would love to hear your feedback!