I already have working automation solutions and I think I can translate / refactor / at least be inspired by previous Ansible-based solutions.
Where to start?
I’ve been down the road of learning network automation from scratch – this time let’s start with simple information gathering and transformation.
Speaking of inspiration – I am going to start with a “Just the Facts” approach and get my favourite command – show interfaces status – into a CSV, a Markdown file, and, to spice things up this time, an HTML page. All from Genie-parsed JSON.
Only this time using pure Python – no Ansible training wheels (crutches?)
How to attack this?
One approach is to break it down in human language and then see if we can translate it to Python. Another is to find working examples and guides provided by the Cisco team. Using a mix of the two and some other online resources, here is how I did it.
The job folder is where I will keep the pyATS job file and code file. Output will hold the 3 output files. I plan on hopefully using Jinja2 just like in Ansible, so we need a Templates folder. Finally, pyATS uses the concept of testbed files to set up connectivity and authentication. These are very similar to Ansible group_vars.
I’ve included a .gitignore file to keep the .pyc files out of the Git repository.
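Putting that layout together, the project tree looks roughly like this (the folder names come from the text above; the individual file names are my placeholder assumptions):

```
pyats_project/
├── job/
│   ├── job.py
│   └── show_int_status.py
├── output/
├── templates/
├── testbed/
│   └── testbed.yml
└── .gitignore
```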
The Job file. This is a pyATS control file you can use to run the code. You can feed arguments in this way but I have not done that here.
Pretty simple so far – import os and run the code.
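For reference, a minimal job file might look like this – a sketch only, and the script name show_int_status.py is my placeholder, not necessarily what the repo uses:

```python
# job.py - minimal pyATS job file (sketch; file names are placeholder assumptions)
import os


def main(runtime):
    # pyATS discovers and calls main(runtime) when you launch: pyats run job job.py
    # The pyATS import is done lazily so this file can be read without pyATS installed.
    from pyats.easypy import run

    run(
        testscript=os.path.join(os.path.dirname(__file__), "show_int_status.py"),
        runtime=runtime,
    )
```

You could also pass custom arguments through run() here, which is the "feed arguments in this way" option mentioned above.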
First thing in the Python code is to set up the Python environment you need. Make sure to import json as we need to work with the Genie-parsed data.
Next we will setup Jinja2 and the File loader
Now we import Genie and pyATS
Setup a logger
OK, so we need 3 source templates – one for each file type
Turn on the logger
Let’s load up the testbed file
A testbed looks like this:
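The screenshot of the testbed did not survive this text export, so here is a hedged reconstruction of a minimal testbed – the device name, OS, address, and credentials are all placeholders, not the author's actual values:

```yaml
# testbed.yml - hedged example; every value here is a placeholder
devices:
  dist-sw01:
    os: iosxe
    type: switch
    credentials:
      default:
        username: cisco
        password: "%ENC{gAAAAABh...}"   # pyATS-encrypted string (truncated placeholder)
    connections:
      cli:
        protocol: ssh
        ip: 10.0.0.1
```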
Note that yes! We CAN encrypt the string! %ENC{ } represents the pyATS encrypted string! Safe to store in Git repos!
Now some magic – we parse our command into a variable as JSON
Run the results through the templates
Which look like this:
Then we create the output files back in Python to finish the script
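With no device on hand here, the parse-transform-write flow can still be sketched stdlib-only. The parsed dict below is a hand-made miniature of what Genie's show interfaces status parser returns from device.parse("show interfaces status") – the real schema is richer – and the templating is inlined as plain Python instead of loading Jinja2 files from the Templates folder:

```python
import csv
import io

# Hand-made miniature of Genie-parsed "show interfaces status" output (assumption;
# the real parser returns more keys per interface)
parsed = {
    "interfaces": {
        "GigabitEthernet1/0/1": {"status": "connected", "vlan": "10", "duplex_code": "a-full", "port_speed": "a-1000"},
        "GigabitEthernet1/0/2": {"status": "notconnect", "vlan": "20", "duplex_code": "auto", "port_speed": "auto"},
    }
}

# CSV: one row per interface, mirroring what the CSV template would produce
csv_buf = io.StringIO()
writer = csv.writer(csv_buf)
writer.writerow(["Interface", "Status", "VLAN", "Duplex", "Speed"])
for name, attrs in parsed["interfaces"].items():
    writer.writerow([name, attrs["status"], attrs["vlan"], attrs["duplex_code"], attrs["port_speed"]])

# Markdown: the same rows, pipe-delimited with a header and separator row
md_lines = ["| Interface | Status | VLAN | Duplex | Speed |",
            "| --- | --- | --- | --- | --- |"]
for name, attrs in parsed["interfaces"].items():
    md_lines.append(f"| {name} | {attrs['status']} | {attrs['vlan']} | {attrs['duplex_code']} | {attrs['port_speed']} |")

print(csv_buf.getvalue())
print("\n".join(md_lines))
```

In the real script each rendered string is written to the Output folder with open(..., "w") instead of printed.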
Which look like – ha! – we don’t know if this works yet! Let’s check it out!
The job in action
Next it loads up the testbed
Next the actual SSH connection is set up using Unicon (this is different from Ansible, which uses paramiko)
Ok my device’s banner is displayed. My banner is left over from some CI/CD work but it’s the right banner – I’m in !
Some basic platform stuff gets dumped to the job log followed by my next job steps
Ok it’s fired the command! Milestone in the job reached – now it should register this result as JSON in a variable next.
Now during my development I added the following to confirm this step was working by dumping the variable to the screen:
print(variable_name)
Similar to an Ansible recap we get a pyATS Easypy Report
The Git Add * test
I like to build suspense so I change directories up a folder and try to stage, hopefully, the 3 new files into Git
cd ..
git add *
git commit -am "did my first python code work?"
Amazing – but what do they look like?!?
They look incredible!
What does this mean ?
It means, seemingly, I’ve been mastering the wrong tool. That a faster, easier, and more elegant tool is available. This is ok – I feel like Ansible was primary school and I’m moving into the next stage of my life as a developer and moving up into high school with Python.
It also means I have a lot of code to refactor into Python – also fine – a good opportunity to teach my colleagues.
It also means, I think, that I will be focusing less and less on Ansible and more and more on Python
20 years ago I was studying to become a computer programmer analyst in college writing C++, Java, Visual Basic 6, COBOL, CICS, JCL, HTML, CSS, SQL, and JavaScript and now, two decades later, I still have the magic touch and have figured out Python.
You can expect a lot more solutions like this – in fact I am going to see if I can work my #chatbot / #voicebot capabilities into Python.
A modern approach to the Cisco IOS-XE show interfaces status command using Python pyATS / Genie and Jinja2 templating to create business-ready CSV, Markdown, and HTML files
One of my favourite recipes is the Hakuna Frittata, both because I am a big fan of puns and because I enjoy this hearty vegetarian meal that even I can handle putting together.
Inspired by this simple recipe I have decided to try and document my highly successful Ansible Cisco NXOS Facts playbook that captures and transforms raw facts from the data centre into business-ready documentation – automatically.
Ansible Cisco NXOS Facts to Business-Ready Documentation
Prep: 60-90 Min
Cook: 2-3 Min
Serves: An entire enterprise
Ingredients
1 Preheated Visual Studio Code
1 Git repository and Git
1 stick of Linux (a host with Ansible installed and SSH connectivity to the network devices)
3 pinches of Python filters
1 Cup of Ansible playbook (a YAML file with the serially executed tasks Ansible will perform)
1 Cup of Ansible module – NXOS_Facts
2 Tablespoons of Jinja2 Template
1 Teaspoon of hosts file
1 Tablespoon of group_vars
2 Raw Eggs – Cisco NXOS 7000 Aggregation Switches
Helpful Tip
This is not magic and did not necessarily come easy to me. You can use the debug module to print messages to yourself at the CLI. At each step where I register data into a new variable I like to print it to the screen (one, to see what the data, in JSON format, looks like; and two, to confirm my variable is not empty!)
Directions
1. You will need to first setup a hosts file listing your targeted hosts. I like to have a hierarchy as such:
hosts

[DC:children]
DCAgg
DCAccess

[DCAgg]
N7K01
N7K02

[DCAccess]
N5KA01
N5KB01
N5KA02
N5KB02
Or whatever your logical topology resembles.
2. Next we need to be able to securely connect to the devices. Create a group_vars folder and inside create a file that matches your hosts group name – in this case DC.yml
DC.yml:
3. Create all the various output folder structure you require to store the files the playbook creates. I like something hierarchical again:
4. Create a playbooks folder to store the YAML file format Ansible playbook and a file called CiscoDCAggFacts.yml
In this playbook, which runs serially, we first capture the facts then transform them into business-ready documentation.
First we scope our targeted hosts (hosts: DCAgg)
Then we use the NXOS_Facts module to go gather all of the data. I want all the data so I choose gather_subset : – all but I could pick a smaller subset of facts to collect.
Next, and this is an important step, we take the captured data, now stored in the magic Ansible variable – {{ ansible_facts }} and put that into output files.
Using the | to_nice_json and | to_nice_yaml Python filters we can turn the “RAW JSON” inside the variable (one long string if you were to look at it) into human-readable documentation.
4b. Repeatable step
NXOS Facts provides facts that can be put into the following distinct reports:
Platform information (hostname, serial number, license, software version, disk and memory information)
A list of all of the installed Modules hosted on the platform
A list of all IP addresses hosted on the platform
A list of all VLANs hosted on the platform
A list of all of the enabled Features on the platform
A list of all of the Interfaces, physical and virtual, including Fabric Extenders (FEX)
A list of all connected Neighbors
Fan information
Power Supply information
For some of these files, if the JSON data is structured in a way that lends itself, I will create both a comma-separated values (CSV; a spreadsheet) file and a Markdown (MD; “HTML-light”) file. Some of the reports are just the CSV file (IPs, Features, and VLANs specifically).
The following code can be copied 9 times and adjusted by updating the references – the task name, the template name, and the output file name – otherwise the basic structure is repeatable.
In order to create the HTML mind map you will also need Markmap installed.
Another example of the code – this is the Interfaces section – notice only the name, src, and dest file names need to be updated as well as the MD and HTML file names in the shell command.
5. The Jinja2 Templates
Now that we have finished our Ansible playbook we need to create the Jinja2 templates we reference in the Ansible template module (in the src line)
Create the following folder structure to store the templates:
roles\dc\dc_agg\templates
Then, for each of the 9 templating tasks, create a matching .j2 file – for example the “base facts” as I like to call them – CiscoDCAggFacts.j2
In this template we need an If Else End If structure to test if we are templating csv or markdown then some For Loops to iterate over the JSON lists and key value pairs.
Add a header row with columns for the various fields of data. Reference your Nice JSON file to find the key value pairs.
No “For Loop” is required here just straight data from the JSON
Since it’s not CSV it must be MD, so add the appropriate markdown header rows
Then add the data row using markdown pipes for delimiters instead of commas
Close out the If
An example with For Loops might be Interfaces or Neighbors but the rest of the syntax and structure is the same
Now because there are multiple interfaces I need to loop or iterate over each interface.
Now add the row of data
Note you can include “In-line” If statements to check if a variable is defined. Some interfaces might not have a Description for example. Test if it is defined first, and if not (else) use a default of “No Description”
Other fields are imperative and do not need to be tested.
Close the Loop
Now do the markdown headers for Interfaces
Then the For Loop again and data row again but using pipes
Then close out the If statement
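Pulling the whole walkthrough together, here is a hedged miniature of the if/else-plus-for-loop pattern, rendered from Python with the jinja2 package. The variable names (filetype, interfaces) and the dict shape are my stand-ins, not necessarily what the repo's templates use:

```python
from jinja2 import Template

# One template, two output formats - a miniature of the csv/md switch described above.
TEMPLATE = """\
{% if filetype == "csv" %}\
Interface,Description,Status
{% for name, intf in interfaces.items() %}\
{{ name }},{{ intf.description | default("No Description") }},{{ intf.status }}
{% endfor %}\
{% else %}\
| Interface | Description | Status |
| --- | --- | --- |
{% for name, intf in interfaces.items() %}\
| {{ name }} | {{ intf.description | default("No Description") }} | {{ intf.status }} |
{% endfor %}\
{% endif %}\
"""

# Hand-made sample data; Gi1/0/2 has no description so the default() filter kicks in
interfaces = {
    "Gi1/0/1": {"description": "Uplink to core", "status": "connected"},
    "Gi1/0/2": {"status": "notconnect"},
}

template = Template(TEMPLATE)
csv_out = template.render(filetype="csv", interfaces=interfaces)
md_out = template.render(filetype="md", interfaces=interfaces)
print(csv_out)
print(md_out)
```

The default() filter is the same "in-line if defined" safety net described above, just in filter form.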
Complete the remaining templates. Save everything and Git commit / push up to your repo.
Cooking Time
Let’s run the playbook against two fully loaded production Nexus 7000s using the Linux time command
Two minutes in the oven !
Results
Some samples of the output.
First the Nice JSON – note the lists have been collapsed to be brief but any of the lists can be expanded in VS Code for the details
Interfaces
Neighbors
Now some prefer YAML to JSON so we have the exact same data but in YAML format as well
Now the above is already incredible but I wouldn’t call JSON and YAML files “business-ready” – for that we need a spreadsheet!
The real tasty stuff are the CSV files!
The general facts
Interfaces
Note that you can filter these csv files directly in VS Code – here I have applied a filter on all interfaces without a description
This captures all types of interfaces
Including SVIs
The Markdown provides a quickly rendered VS Code or browser experience
And the Interactive HTML is pretty neat!
Now remember we have all of these file types for all of the various facts – these are just a few samples I like to hand out to the audience. For the full-blown experience you can hopefully follow this recipe and cook your own Cisco NXOS Ansible Facts playbook!
Please reach out if you need any additional tips or advice! I can be reached here or on my social media platforms.
Infrastructure as Code and Network Automation – Where to Start
Learning any new skill takes time, patience, a willingness to try and fail, and ideally a commitment to continuously learn and grow from our mistakes until we become more and more proficient. The number one question I get is “How do you get started?”. I got started the hard way – trying to automate a tactical, one-time, unique, complicated, large-scale problem out of necessity with little time to learn the best way to approach such a problem.

This post is to provide you with safe, easy, valuable, scalable Ansible playbooks you can copy, study, and modify to fit your infrastructure. I want to stress that the following code does not attempt to change, modify, add, remove, update, or delete any data or configurations. The playbooks simply connect, securely, to a target host or set of hosts; capture stateful facts – that is to say, truthful key-value pairs and lists of information about the current state or configuration; parse those facts; and then transform them into usable, human-readable, automated documentation.
TL;DR
– Documenting enterprise networks and servers is tedious work at best.
– Most enterprise documentation is, for lack of a better word, wanting, if it exists at all.
– Various Ansible modules can be used to gather stateful, truthful facts from infrastructure.
– Not limited to network devices. Windows, Linux, and VMWare provide facts to Ansible as well.
– Easy.
– After you capture facts they are easily transformed into automated state documentation.
– RAW JSON, Nice JSON, Nice YAML, CSV (spreadsheets!), Markdown, and interactive HTML mind maps from Ansible facts.
– Scales n+x devices.
– Safe, secure, no possibility of disrupting the network. Think of it as running a bunch of show commands or doing HTTP GETs.
– Loved by management everywhere.
Enter: Ansible
If you are familiar with me at all you likely already know Ansible is my automation tool of choice. If you are new around here – let me tell you why. I believe Ansible is so easy that I can write a simple blog post with a few lines of code that you should be able to reproduce and make it work for you. There is little to no barrier to entry and your solution complexity will scale along with your personal growth and muscle memory with the tool. So let’s get started.
Linux
You are going to need a Linux environment. If you are a traditional Windows user who may not have access to a RHEL, CentOS, Debian, Ubuntu, or other Linux platform, you can use the Windows Subsystem for Linux (WSL2) on Windows 10 to run a Linux environment.
For example to install Ubuntu on Windows 10:
Right-click the Windows Start icon – select Apps and Features.
In the Apps and Features window – click Programs and Features under Related Settings on the right side of Apps and Features.
Click Turn Windows Features On or Off in the left (with the shield icon) side of the Programs and Features window.
Scroll to the bottom of the Features window and put a check mark beside Windows Subsystem for Linux; click OK and close the open windows.
Launch the Microsoft Store.
Search for Ubuntu – click the first result.
Click Install.
Wait for Ubuntu to install.
Press Windows Key and start typing Ubuntu – click and launch Ubuntu.
The first time Ubuntu launches it has to setup – give this some time.
Enter your username and password for Ubuntu.
Update Ubuntu – this step will take some time.
$ sudo apt update
$ sudo apt-get upgrade -y
Install Ansible
Make sure Python is installed.
$ sudo apt-get install python -y
Install Ansible.
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible -y
hosts
You will need a hosts file. This is the foundation for a good, scalable, modular Ansible install base. Hosts can be organized hierarchically to match your physical or logical topologies. The machine hosting Linux must be able to resolve the hosts if you use their hostname and have IP connectivity for the playbooks to work. For a standard Cisco enterprise design you might have a hosts file like this:
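The example inventory did not make it into this text version; a hedged reconstruction for a standard core/distribution/access design might look like the following (the hostnames are placeholders, though dist01 and dist02 are referenced later in the post):

```
hosts

[ENTERPRISE:children]
CORE
DIST
ACCESS

[CORE]
core01
core02

[DIST]
dist01
dist02

[ACCESS]
access01
access02
```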
Ansible needs to be able to securely connect to the targeted host. There are no agents; Ansible uses SSH, WinRM, or HTTPS as transport protocols. For most devices a username and password are required to authenticate and authorize the Ansible session. There are a few ways this can be handled, but for beginners I would set up a prompted mechanism to get going. Eventually you can learn about Ansible Vault, but to avoid hard-coding plain-text passwords – a mistake even I made when I was beginning to use Ansible – start with prompted interactive playbooks where a human has to enter a username and password.
These connection strings are first set up in what’s known as a group variable, or group_vars, file where all of the individual hosts in a group (i.e. dist01 and dist02 in the DIST group) inherit the variables set. Because we have everything nested in [ENTERPRISE], in a folder called group_vars, create the following file.
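The group_vars file itself was lost in this text export; a hedged reconstruction consistent with the description that follows (the username and password variables are assumed to come from the playbook's prompts) would be:

```yaml
# group_vars/ENTERPRISE.yml - hedged reconstruction
ansible_connection: network_cli
ansible_network_os: ios
ansible_user: "{{ username }}"       # set by the playbook's vars_prompt
ansible_password: "{{ password }}"   # set by the playbook's vars_prompt
```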
This tells all the hosts in the Enterprise hosts group to use the Ansible network_cli connection mechanism; that the target operating system is Cisco IOS; and that the Ansible user and Ansible passwords are variables.
Playbooks
At the heart of Ansible are playbooks. Playbooks are YAML files made up of key-value pairs and lists of serially executed tasks. The first step in the playbook is to establish the scope of the playbook tasks from either a group or single host in the hosts file or locally using the localhost option. For this example target the Campus Access layer. One of the tasks in these facts playbooks will either call a specific facts module (like ios_facts), use the setup module, or target an API using the uri module. But first, we have to prompt the user for their credentials and store them in variables to be used by the Ansible connection strings in the group vars files.
Create a file called CiscoAccessFacts.yml inside the playbooks folder as follows:
Now that we have connection to our devices in the ACCESS group using the prompted credentials which are passed to the group_vars Ansible connection strings we are ready to perform the actual IOS Facts Ansible task as follows:
- name: Gather Ansible IOS Facts
  ios_facts:
    gather_subset:
      - all
That’s it! Now we have captured the Ansible IOS Facts. Because these are Ansible facts we do not need to register them as a variable; they are stored in the ansible_facts magic variable.
To print your facts to the screen you can use the Ansible debug with the following message as the next task in your playbook:
- debug: msg="{{ ansible_facts }}"
Save and run the file.
ansible-playbook CiscoAccessFacts.yml
Answer the prompts for credentials. After authenticating and gathering the facts something like this should be displayed on the screen, except with actual data values completed.
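The screenshot is missing here, but the documented ios_facts keys give a sense of the shape – a hedged, heavily truncated sample with placeholder values:

```json
{
    "ansible_net_hostname": "access01",
    "ansible_net_model": "WS-C3850-48P",
    "ansible_net_serialnum": "FOC00000XXX",
    "ansible_net_version": "16.9.5",
    "ansible_net_image": "flash:packages.conf",
    "ansible_net_filesystems": ["flash:"],
    "ansible_net_interfaces": {},
    "ansible_net_neighbors": {}
}
```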
Cisco NXOS_Facts
Much like IOS, Ansible has an NXOS facts module as well. The NXOS module, as expected, provides the same baseline facts as IOS but adds hardware facts such as modules, fans, and power supplies, as well as software facts such as features, licensing, and VLANs.
Copy the Campus files and update them accordingly. Typically, in a data centre where the NXOS facts will be gathered, there are paired devices configured for HA. These playbooks have been tested on Nexus 9000, Nexus 7000, Nexus 5000, and Nexus 2000 FEX modules.
- name: Gather Ansible NXOS Facts about DC Access
  nxos_facts:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
Save and run the playbook.
ansible-playbook CiscoNXOSAccessFacts.yml
Review the output on the screen and notice the new sets of facts only NXOS can provide.
Notice again the change from ios_facts to nxos_facts but that’s about it. Now you have all of your Data Centre Ansible Facts as well as your Campus!
This is great right? What other facts can we get? How about compute facts! Yes that’s right we can use Ansible to get Windows, Linux, and VMWare (bare metal or virtual guest) facts too using more or less the same steps.
Compute Facts
Ansible is not limited to gathering facts from Cisco or other network devices. In fact Ansible can be used to gather even more facts from compute platforms like Microsoft Windows, Linux of any flavour, and VMWare (both bare metal hosts and virtual guests).
Microsoft Windows Facts
That’s right – we can use Ansible, a Linux-only tool, to gather Microsoft Windows facts! The approach and building blocks are more or less the same: a hosts file, a group_vars file, and a playbook. Windows hosts, like Cisco hosts, can be logically organized any way you see fit – grouped by product line, OS, function, location, or other values. For now create a simple hosts file with one parent group called Windows.
The requirements and WinRM installation and configuration guide can be found here. Either HTTP or HTTPS can be used because Kerberos is ultimately securing the payload, even if the transport is only HTTP.
hosts

[Windows]
Server01
Server02
The group vars Ansible connectivity variables for Microsoft Windows are as follows:
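The variables themselves are missing from this text version; a hedged reconstruction matching the note that follows (credentials again assumed to come from prompts; the Kerberos realm is a placeholder):

```yaml
# group_vars/Windows.yml - hedged reconstruction
ansible_user: "{{ username }}@AD.EXAMPLE.COM"
ansible_password: "{{ password }}"
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_scheme: http   # or https
ansible_port: 5985           # 5986 for https
```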
Note the Ansible WinRM scheme needs to be setup as either HTTP or HTTPS and the corresponding Ansible port (5985 for HTTP; 5986 for HTTPS) needs to be selected depending on the transport protocol. Ansible connection is using WinRM and the WinRM transport is specified as Kerberos.
Now in the playbook target the Windows group of hosts and use the same prompted mechanism code as before updating it to reflect Windows cosmetically. The only change to the Facts task is to change from the ios or nxos facts module to the setup module.
- name: Gather Ansible Windows Facts about Windows hosts
  setup:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
Save the playbook as playbooks/WindowsFacts.yml and run the playbook.
ansible-playbook WindowsFacts.yml
Notice all of the amazing facts Ansible can discover about a Windows host or groups of Windows hosts.
Linux Facts
The great thing about the setup module is that it can be used against Windows and Linux hosts. Meaning you simply need to clone the Windows artifacts (group_vars file, playbook, and hosts inventory) and refactor all of the Windows references to Linux (Linux.yml group var file; [Linux] hosts list; Windows to Linux cosmetic references) but the core Ansible task remains the same:
- name: Gather Ansible Linux Facts about Linux hosts
  setup:
    gather_subset:
      - all
- debug: msg="{{ ansible_facts }}"
However, much like IOS vs NXOS facts, the amount of Linux facts eclipses even the huge list of facts from the Windows hosts. This is due to the native Ansible / Linux coupling and support.
VMWare
VMWare does not use the generic setup module; it has a specific facts module like Cisco IOS or NXOS. VMWare facts actually use the downstream vSphere API, and there are two additional required fields beyond an authorized username and password: hostname and esxi_hostname. This module, vmware_host_facts, gathers facts about the bare-metal hosts, not the virtual guests. From my testing I found it best to target the hostname and esxi_hostname using the ESXi hostname in the Ansible hosts inventory file.
Very rich JSON, similar to that of Linux, is provided back, including all hardware information about virtual NICs, VMWare datastores, BIOS, and processors.
Microsoft Azure
Even clouds have Ansible Facts! Azure Facts are actually even easier to retrieve because of the simplified authentication mechanism. Username and password still works or you could setup Service Principal Credentials. Inside Azure you need to create an account with at least API read-only permissions. There are some prerequisites to install. First pip install the Ansible Azure libraries.
$ pip install 'ansible[azure]'
You can create the following file $HOME/.azure/credentials to pass credentials to the various Azure modules without username and password prompts or credential handling.
In the list of Ansible cloud modules find the Azure section. Each Azure module has two components – a config module and an info (facts) module.
Using the same process – along with json_query and a with_together loop – we can, for example, capture all Azure Virtual Network info. First we capture the Azure resource groups and then pass each resource group along to a second module to get the associated networks.
- name: Get Azure Facts for Resource Groups
  azure_rm_resourcegroup_info:
  register: azure_resource_groups
- name: Get Azure Facts for all Networks within all Resource Groups
  azure_rm_virtualnetwork_info:
    resource_group: "{{ item.0 }}"
  register: azure_virtual_network
  with_together:
    - "{{ azure_resource_groups | json_query('resourcegroups[*].name') }}"
Ok great. So what? What can I do with these facts?
So far we have simply dumped the facts to the console to explore the various modules. What I like to do with these facts is create living, automated, stateful, truthful, human-readable (management and operations love me for it) documentation. With a little work – changing the playbooks from interactive, on-demand playbooks to non-interactive, scheduled, automatically executed ones – they can run all by themselves, creating snapshots of state in the form of reports.
First I like to capture the RAW JSON as a forensic artifact – the raw facts unchanged and unfiltered – in case audit, compliance, security, or other downstream machine processing requires unchanged RAW JSON.
This is easily done in Ansible using the copy module. We have the RAW JSON in a variable – the Ansible magic variable {{ ansible_facts }} – we just need to copy it into a file.
We will need a repository for the new output files so create a documentation folder structure with subfolders for your various platforms.
Add the following line of code, customizing the output file name based on the playbook environment, after the debug. For example the IOS Access Facts playbook.
The Ansible magic variable {{ inventory_hostname }} can be used to reference the current iterated inventory host file target which we will use to identify the parent switch for each of the facts.
Save and re-run the playbook. All the IOS facts will now be stored in a RAW JSON file – albeit an ugly, not directly usable, one.
to_nice filters
Ansible has various filters that can be used to help parse or transform data. Using two of these filters, to_nice_json and to_nice_yaml, we can create human-readable, nice, pretty, and easy to consume JSON and YAML files.
Simply copy and paste the Create RAW JSON file task and modify the new stanzas as follows:
Save and re-run the playbook. Now you should have 2 human readable files. The _Nice.json (displayed in the first screenshot) file and now an even easier to read YAML file:
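Under the hood these filters are thin wrappers: to_nice_json is essentially Python's json.dumps with indentation (to_nice_yaml similarly wraps PyYAML's dumper, not shown here to stay stdlib-only). The facts dict is a hand-made miniature stand-in:

```python
import json

# A miniature stand-in for ansible_facts; the real dict has many more keys
facts = {"ansible_net_hostname": "dist01", "ansible_net_version": "16.9.5"}

raw = json.dumps(facts)             # one long string - the "RAW JSON"
nice = json.dumps(facts, indent=4)  # multi-line - what | to_nice_json produces

print(nice)
```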
Traditional Reports from Facts
While the RAW and Nice JSON and YAML files are great for programming, data modeling, logic, templating, and other infrastructure as code purposes they are still not exactly consumable by a wider audience (management; operations; monitoring; capacity planning). Using Ansible’s ability to parse the registered variable JSON and another filter, JSON_Query, an SQL-like tool used to query and parse JSON, we can capture individual fields and place them into CSV or markdown ordered structure.
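json_query evaluates JMESPath expressions against the registered JSON. For a flat query like resourcegroups[*].name (used in the Azure section earlier), a plain Python comprehension does the same job – the dict below is a hand-made stand-in for the module's registered output:

```python
# Mimicking Ansible's json_query('resourcegroups[*].name') with a comprehension;
# the dict shape is an illustrative stand-in, not real module output
azure_resource_groups = {
    "resourcegroups": [
        {"name": "rg-network", "location": "canadacentral"},
        {"name": "rg-compute", "location": "canadaeast"},
    ]
}

names = [rg["name"] for rg in azure_resource_groups["resourcegroups"]]
print(names)
```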
First we are going to use Ansible’s set_facts module to create our own variables out of the key-value pair and lists in JSON which we can then re-use to create reports.
Now that we have set our own facts / variables from the JSON facts we simply put them into order to create a CSV file.
- name: Create Cisco IOS Access Facts CSV
  copy:
    content: |
      {{ inventory_hostname }},{{ image }},{{ version }},{{ serial }},{{ model }},{{ disk_total }},{{ disk_free }}
    dest: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
Some of the RAW JSON characters need to be cleaned up to pretty up the CSV file. The Ansible replace module can be used in combination with Regular Expression (RegEx) to clean up the file as follows:
- name: Format and cleanup CSV
  replace:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
    regexp: '[|]|"'
    replace: ''
- name: Format and cleanup CSV
  replace:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.csv
    regexp: "'"
    replace: ''
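For intuition, the two replace tasks map directly onto Python's re.sub; the sample row below is a hand-made stand-in for a freshly templated CSV line:

```python
import re

# Same two cleanup passes as the replace tasks above
row = "'access01',\"WS-C3850-48P\",'16.9.5'"

row = re.sub(r'[|]|"', "", row)  # first pass: strip pipes and double quotes
row = re.sub(r"'", "", row)      # second pass: strip single quotes

print(row)  # access01,WS-C3850-48P,16.9.5
```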
Now we can add the header row to the CSV using Ansible’s lineinfile module.
Save and re-run the playbook. You should now have a CSV file that looks similar to this but with data values in the rows following the header row.
Markdown
Think of markdown as HTML-lite. Markdown reports from facts render nicely in browsers or in VS Code with the Markdown Preview extension. The process is more or less the same as for the CSV file: place the variables between pipes and create a header row. Markdown has strict rules for well-formed .md files, so pay close attention.
(There is more formatting clean up required which you can find in the GitHub repo links at the bottom)
Using the Ansible looping mechanism, with_items, we need to create 3 header rows for the valid markdown file as follows:
- name: Header Row
  lineinfile:
    path: ../documentation/FACTS/CAMPUS/ACCESS/{{ inventory_hostname }}_IOS_facts.md
    insertbefore: BOF
    line: "{{ item.property }}"
  with_items:
    - { property: '| -------- | ----- | ------- | ------------- | ----- | ---------- | --------- |' }
    - { property: '| Hostname | Image | Version | Serial Number | Model | Total Disk | Free Disk |' }
    - { property: '# Cisco IOS Facts for {{ inventory_hostname }}' }
This generates a mark down file like this:
Mark Map / Interactive HTML Mind Map
Now that we have a well-formed markdown file we can use a relatively new tool to create a relatively new file type. Markmap is a node.js tool that can be used to transform any markdown file into an interactive HTML mind map.
First install the required libraries (node.js and npm)
This will generate an interactive HTML page with a mind map of the markdown like this:
Summary
Ansible facts are a great way to get started with network automation and working with infrastructure as code. They are safe, non-intrusive, valuable, and typically management approved playbooks to get you started towards configuration management. They are also a great way to document that enterprise network you’ve been neglecting. Using these simple tools and techniques your full enterprise network, from campus to data centre; cloud to WAN; Cisco to Microsoft to Linux to VMWare to Azure or AWS; can be automatically documented with real time stateful facts!
GitHub Repositories
Here are a collection of Automate Your Network developed GitHub repositories you can use to explore the code or even transform into playbooks customized for your environment.