
How to locally host a Terraform plugin offline

I recently had to run a Terraform build in an air-gapped network (no internet access), and these are the steps I took to locally host the Terraform provider I needed (vsphere). This was tested with Terraform v1.0.4 on CentOS 7.9 servers.

Local file mirror

Start by downloading the latest version of the desired plugin from https://releases.hashicorp.com/.

In this example I wanted the vsphere provider (terraform-provider-vsphere_2.0.2_linux_amd64.zip), so I grabbed it from https://releases.hashicorp.com/terraform-provider-vsphere/

Unzip the executable to the following directory on your Terraform machine: ~/.terraform.d/plugins/registry.terraform.io/hashicorp/vsphere/2.0.2/linux_amd64

For example: mkdir -p ~/.terraform.d/plugins/registry.terraform.io/hashicorp/vsphere/2.0.2/linux_amd64

unzip terraform-provider-vsphere_2.0.2_linux_amd64.zip -d ~/.terraform.d/plugins/registry.terraform.io/hashicorp/vsphere/2.0.2/linux_amd64

Keep in mind, your file path will be different depending on which provider you use, so please adjust accordingly!

Further information available here: https://www.terraform.io/docs/cli/config/config-file.html#implied-local-mirror-directories
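The implied mirror directory above is picked up automatically, but if you'd rather declare it explicitly, you can add a filesystem_mirror block to your Terraform CLI configuration (~/.terraformrc). A minimal sketch, with the path adjusted for the vsphere example above (replace /home/youruser with the actual home directory of your Terraform user):

provider_installation {
  filesystem_mirror {
    path    = "/home/youruser/.terraform.d/plugins"
    include = ["registry.terraform.io/hashicorp/*"]
  }
}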

Network mirror

This method is a bit tricky if you are using a self-signed certificate because Terraform enforces strict TLS checking. There is no command-line option that I could find that allows you to turn off TLS enforcement.

To start, use an internet-connected machine and run the terraform providers mirror command as described here (https://www.terraform.io/docs/cli/commands/providers/mirror.html). Then dump the resulting files onto the local Apache/nginx web server in your offline environment.
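On the internet-connected machine, run the command from the directory containing your Terraform configuration; the target directory is arbitrary:

terraform providers mirror /tmp/terraform-mirror

Everything under /tmp/terraform-mirror (including the registry.terraform.io folder) is what you copy to the web server's document root.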

Web server configuration

I had to mess around quite a bit with the self-signed certificate creation on my Apache server in order for it to work.

To generate my self-signed cert, I executed:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout self.key -out self.crt -config /etc/ssl/openssl.cnf

I had to make sure the basicConstraints setting in my openssl.cnf file was set to basicConstraints = CA:TRUE. In case it helps anyone, here’s an example of the openssl.cnf file I used:

[req]
default_bits = 2048
prompt = no
distinguished_name = dn
req_extensions = req_ext
x509_extensions = v3_req

[dn]
C = US
ST = MD
L = Annapolis
O = MyOrg
CN = yum01.mydomain.net

[req_ext]
subjectAltName = @alt_names
basicConstraints = CA:TRUE

[v3_req]
subjectAltName = @alt_names
basicConstraints = CA:TRUE

[alt_names]
DNS.1 = yum01.mydomain.net
IP.1 = 192.168.1.20

Then copy the cert and key to the /etc/pki/tls locations referenced by your /etc/httpd/conf.d/ssl.conf file (again, using Apache as the example here).

For example:

SSLCertificateFile /etc/pki/tls/certs/self.crt

SSLCertificateKeyFile /etc/pki/tls/private/self.key

Terraform CLI configuration

Make sure to add your web server’s self-signed certificate to your Terraform server’s trust store. This Stack Overflow thread (https://stackoverflow.com/questions/22509271/import-self-signed-certificate-in-redhat) shows how to do that on CentOS/Red Hat with the following steps:

  1. Copy the CA cert to /etc/pki/ca-trust/source/anchors/
  2. update-ca-trust extract
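To quickly confirm the certificate is now trusted, you can hit your mirror with curl (the hostname here matches the earlier cert example); if the TLS handshake completes without a certificate error, Terraform should be able to connect too:

curl https://yum01.mydomain.net/terraform/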

Create a .terraformrc file in the HOME directory of your Terraform machine, if it doesn’t already exist, with the following inside:

provider_installation {
  network_mirror {
    url = "https://fileserver.yourdomain.com/terraform/"
  }
}

In my case, the top level directory /terraform/ contained the registry.terraform.io folder.

That should be it! Try it out by going to your terraform directory and running terraform init.

Splunk: How to move your .conf files out of /etc/system/local with the Deployment Server

NOTE: I consider this post a DRAFT — I have not validated these settings in my home lab, this article was written from memory of doing this for a customer. I intend to validate these steps later and remove this warning once I do so.

If you have a large number of deployment clients with a .conf file stuck in /etc/system/local that you need to move out in order to manage it properly in an app, try this. I recently came across this at a customer who needed to update deploymentclient.conf to point to their new deployment server, but all of their deployment clients had deploymentclient.conf in $SPLUNK_HOME/etc/system/local, which couldn’t easily be updated remotely.

To solve this dilemma, we did the following:

First, we created and pushed our new deploymentclient.conf app, pointing to our NEW deployment server, to all the deployment clients. Of course, at this point the app’s conf file will be trumped by the deploymentclient.conf in /etc/system/local until that local file is removed or renamed.

Second, we created a custom app with a one-shot scripted input to delete the deploymentclient.conf file out of $SPLUNK_HOME/etc/system/local. This can be done with either a shell script on Linux or a batch script on Windows. Here are some examples:

Linux:

customapp/bin/removeDeploymentClient.sh

#!/bin/sh
rm -f $SPLUNK_HOME/etc/system/local/deploymentclient.conf

Windows:
customapp\bin\removeDeploymentClient.bat

del "%SPLUNK_HOME%\etc\system\local\deploymentclient.conf"

Then add your inputs.conf. The key here is the setting interval = -1, which means the script will only run once, at startup. Reference here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf

customapp/local/inputs.conf

Linux:

[script://./bin/removeDeploymentClient.sh]
interval = -1
source = removeDeploymentClient
sourcetype = scriptedInput
index = _internal
disabled = 0

Windows:

[script://.\bin\removeDeploymentClient.bat]
interval = -1
source = removeDeploymentClient
sourcetype = scriptedInput
index = _internal
disabled = 0

That’s all! Set your custom app to restart Splunkd on your deployment server and push it to your clients.

NOTE: the Splunk service on your deployment clients may need to be restarted TWICE for the change to take effect. The first time Splunk is restarted, it removes $SPLUNK_HOME/etc/system/local/deploymentclient.conf AFTER Splunk starts, so the old file will already be loaded in memory. After you restart Splunk again, that file will no longer be present, so Splunk will then load the deploymentclient.conf you pushed in your custom app. One way to force a restart of the Splunk service from the DS is to create a dummy app with “Restart splunkd” checked and push it.

Splunk Workaround: Push a *nix TA from a Windows Deployment Server (DS)

Per the Operating system compatibility section of the following doc, https://docs.splunk.com/Documentation/Splunk/latest/Updating/Planadeployment, Windows deployment servers cannot manage Linux deployment clients because Windows does not understand how to maintain Linux file permissions for scripts and executables. So when you push a TA that contains an executable or script to a Linux deployment client, you will receive “Permission denied”.

The best solution is to switch your deployment server to Linux, but if you don’t have the ability to do that, read on.

Follow these steps to work around the issue.

  1. Download the Linux TA that you want to push and extract it on any Linux machine.
  2. Make any required changes to the app: for example, add your <app>/local/inputs.conf file with whatever changes you want to implement.
  3. Repackage the tgz file WITHOUT the parent directory. For example, if you are repackaging Splunk_TA_nix, you would navigate into Splunk_TA_nix and create your tgz file with your local, default, bin, etc. folders all at the top level of the tgz file.
    tar cvfz <newname.tgz> *
  4. Move the repackaged app to your Windows deployment server and place it under $SPLUNK_HOME/etc/deployment-apps
  5. Back up your serverclass.conf file on your Windows deployment server. If you’ve been managing your Forwarder Management through the web UI, it will typically be located at $SPLUNK_HOME/etc/system/local/serverclass.conf
  6. Edit serverclass.conf to include the following setting under your (third level) app stanza:

appFile = <YourModifiedApp>.tgz

Reference: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverclassconf

Also verify that your (second level) serverClass stanza looks correct for whitelist/blacklist, etc.
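For context, here's a sketch of how the relevant stanzas might fit together; the serverclass name, whitelist pattern, and app file name are all hypothetical:

[serverClass:linux_forwarders]
whitelist.0 = linuxhost*

[serverClass:linux_forwarders:app:Splunk_TA_nix]
appFile = Splunk_TA_nix_modified.tgz
restartSplunkd = true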

  7. Restart Splunk on your DS (might not be necessary)

That’s it! When the deployment client phones home, the Windows DS should provide the tgz file with all the Linux file permissions intact.

Verification steps:

  • Search splunkd.log on your DS for the name of the custom tgz app you created. It should indicate that the app loaded successfully.
  • On your search head, run a search for something like this:
    index=_internal host=<linuxforwarder> Splunk_TA_nix

Where Splunk_TA_nix would be a keyword for whichever app you were having problems with. In my case, we were trying to get the rlog.sh script that ships with the Splunk_TA_nix app to work, but it kept erroring out with “Permission denied”, so I used “rlog” as my search keyword. I had already verified that the Splunk forwarder service on the Linux host was running as root, so after we did this workaround, it executed as expected.

Ansible Essentials for Beginners

This article is an adapted transcript of the video version of this lesson.

INTRO

In this post I’ll be going over the basics of Ansible. The goal of this lesson is to take you from having very little understanding of Ansible to being able to run a simple Ansible playbook to automatically configure a system. I’ll start with a brief overview of Ansible and its capabilities, go over the components of Ansible, and then demonstrate how to install and execute some simple tasks.

So, if you don’t know what Ansible is, let’s do a quick review: Ansible is a configuration management tool owned by Red Hat that can be used to automate tasks on a wide variety of systems. Instead of manually configuring systems one by one like IT administrators used to do, Ansible can perform these tasks in an automated fashion, so that once your configuration is set up, all you have to do is press a button and watch the magic happen. Ansible can manage Linux, macOS, Windows, network devices like routers and switches, hypervisors like VMware, and pretty much any software with an API.

Ansible is declarative, which means that you declare the end state you would like your system to be in, and Ansible makes the desired changes only once. For example, if you want Ansible to install a particular software package on a system, it will check whether that program is already installed before attempting to re-run the command. This means you can run the same Ansible command over and over, but the change will only be performed the first time it is run. You will hear the word ‘idempotence’ used to describe this aspect of Ansible. An idempotent operation is one that can be applied multiple times without changing the result beyond the initial application.

Ansible is also agentless, meaning you don’t have to install anything on the systems you would like to manage. In general, all that is needed is an SSH connection. There are some exceptions to this but I’ll get into those later.

ARCHITECTURE

Let’s jump in and review the architecture of Ansible.

One of the nice things about Ansible is that from an architecture standpoint it’s very simple. At its most basic implementation, all you really need is one Ansible controller and one system you would like it to manage. The heart of Ansible is the controller: this is usually a Linux system, and it is what YOU, the Ansible admin, will be interacting with to control Ansible. Ansible is entirely command-line based, so you will be using a local shell or SSH to interact with it. The exception to this is Ansible Tower, which is a paid offering from Red Hat that provides a web UI and lots of other additional features. I’m not going to delve into Ansible Tower in this lesson, but from an architectural standpoint, just know that Ansible Tower would replace your Ansible controller, provide additional premium features, and, sadly, is not free.

Now, sitting on your controller you have some of the core components of Ansible, most notably your inventory, playbooks, and modules. I’ll get into these further in just a minute, but for now just know that the inventory is a list of all the systems you are managing with Ansible (the WHO), playbooks (or plays) are the instructions that tell Ansible what you want done on those systems (the WHAT), and modules are how those instructions get executed (the HOW).

When you run a playbook on the Ansible controller, it typically uses SSH to connect to the machines in your inventory and execute the tasks you define. In a simple three-node setup, for example, the inventory would just list Node1, Node2, and Node3. A couple of notes on this: for Windows systems, Ansible uses WinRM (Windows Remote Management) rather than SSH for management. And if you are using Ansible to manage software through a RESTful API, Ansible will not connect to the system over SSH, but will instead use a module called ‘uri’ to talk to the HTTP web service.

Another important note regarding SSH: the standard way for Ansible to authenticate to managed nodes is with pre-shared SSH keys. Password authentication is technically possible (with sshpass and the --ask-pass flag), but you really wouldn’t want to manage passwords at scale. That means there is some manual work involved when you are first setting up Ansible in order to ensure your controller has SSH keys in place for the systems in your inventory.

INVENTORY

Let’s talk about the Ansible inventory real quick. On your Ansible controller, you need a text file that lists the hostnames and/or IP addresses of the systems you would like to manage with Ansible. By default, this is located at /etc/ansible/hosts, but if you want to put it somewhere else, you can, as long as you tell Ansible where you put it. This can be done either at runtime with the -i flag, or statically in your ansible.cfg file.
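For example, to point Ansible at an inventory file in a custom location (the path here is just an example), you could add this to your ansible.cfg:

[defaults]
inventory = /home/admin/inventory/hosts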

Ansible supports multiple formats for the inventory; the most popular are INI and YAML. The easiest to get started with is the INI format, where you put your group name in brackets and list one host per line below it, like this:

[hosts]
node1
node2
node3

If you’re just getting started with Ansible, I would stick with a simple INI inventory. As you progress, or your inventories become larger and you start grouping systems, you may find that you prefer another inventory format such as YAML.
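For reference, the same three nodes in a YAML inventory would look something like this:

all:
  hosts:
    node1:
    node2:
    node3: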

Inventory documentation: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html

PLAYBOOKS

Now, if you’re not familiar with the YAML format, this is the perfect place for me to elaborate as we get into Ansible playbooks. Playbooks are where you put the instructions for the tasks and configurations you would like Ansible to perform. Playbooks are written in YAML, which is a human-readable data-serialization language. Basically this just means that YAML can be easily read and understood by humans, but is still structured in a standardized format so that software can interpret it. YAML is a recursive acronym for “YAML Ain’t Markup Language”. It is commonly used for configuration files and in applications where data is being stored or transmitted.

Here’s an example of an Ansible playbook in YAML format. This example would install the Git version control package on your system(s).

---
- hosts: all
  tasks:
    - name: Install git
      package:
        name: git
        state: latest

Let’s break down this example. On the first line you will see three dashes (---); this conventionally marks the beginning of a YAML document.

Also notice the indentation in this example. YAML has strict indentation requirements, and Ansible playbooks will not run unless the indentation is done correctly.

Line 2 specifies that the playbook should run on all hosts in your inventory; you could change this to a group of hosts or a single host if needed.

The ‘tasks:’ keyword on line 3 indicates the start of the Ansible task list; everything that follows this line will be performed in sequential order on the hosts in your inventory, from top to bottom.

Line 4 gives the name of the task; you can name this whatever you like.

Line 5 is the Ansible module to run; in this case we are running the ‘package’ module. Modules are the heart of Ansible, and I’m going to talk more about them in just a moment.

Lines 6 and 7 are the module arguments; in the case of the ‘package’ module, only two are necessary. The “name: git” argument tells the module the name of the software to be installed (Git), and the “state: latest” argument says to install the latest version of Git available in the package repository.

Let me pause here for a second and just say: if you’re confused about how you would even know what modules and arguments are available, that’s where the Ansible documentation comes in. When learning and working with Ansible, you will spend a good amount of time searching through the documentation in order to find which modules are required to perform the tasks you want to accomplish. Documentation can be found at docs.ansible.com, or via the command line by running the ansible-doc command.
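For example, to list the available modules or pull up the docs for the ‘package’ module right from the terminal:

ansible-doc -l
ansible-doc package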

I should also mention that in addition to playbooks, you can run ad-hoc commands with Ansible. Ad-hoc commands are commands that you want to perform quickly, like a ping test, that you don’t need to repeat or automate later. To run an ad-hoc command, you simply type the “ansible” command followed by the target host(s), then the “-m” flag (m for module) and the module you would like to call. For example: ansible node1 -m ping would ping node1 from your inventory.

MODULES

Like I mentioned earlier, modules are the heart of Ansible. There are thousands of Ansible modules available, and if you run into the unlikely scenario where one is NOT available, you can write your own Ansible module in Python.

Modules tell Ansible how to interact with software or services to perform a given task. In simplistic terms, the way Ansible works is by calling the modules you’ve written into your playbook, which then execute tasks on your managed systems. In the example I used before, I used the ‘package’ module to install Git. As another example, if I wanted to add or modify text in a configuration file, I could use the module called ‘lineinfile’. I’ll be demonstrating some more examples in just a minute.

Ansible playbooks are basically a string of modules put together in the sequence you would like them performed (in YAML format, of course).

As I said, when you are getting started with writing your own playbooks you will need to rely heavily on the Ansible documentation in order to find the modules and their parameters.

DEMONSTRATION

Ok now I’m going to jump into a demonstration and show you how to install Ansible, setup the inventory, and write and run a simple playbook.

For my demo environment, I’ve set up two CentOS virtual machines; one will be the Ansible controller, and the other will be a node I’d like it to manage.

First, use your package manager to install Ansible on your controller; since I’m using CentOS, I’m going to use ‘yum’.
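On CentOS 7, Ansible is provided by the EPEL repository, so (on an internet-connected machine) the install looks something like this:

sudo yum install -y epel-release
sudo yum install -y ansible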

Once it’s installed, you can run ansible --version to check which version of Ansible you’re on.

Now that Ansible is installed, let’s edit the inventory in /etc/ansible/hosts. Since I’ve only got one node for this demo, this will be easy.
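Since I’m managing a single system (node1 here is my hypothetical hostname), my /etc/ansible/hosts only needs one line:

node1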

Next I need to configure SSH pre-shared keys so that my Ansible controller can authenticate to my managed node. Use the ssh-keygen and ssh-copy-id commands to do this.

On the controller type: ssh-keygen and accept the defaults.

Then, still on your controller type ssh-copy-id node1 to copy your SSH key to the managed node.

Now I’ll ping my managed node from my controller to verify connectivity. I can use the ‘ping’ module with an Ansible ad-hoc command to do this.

ansible node1 -m ping

Now I’ll show you how to write a basic playbook. I’ll pause here for a second and go over a hypothetical example of how I would plan this out.

Generally when writing a playbook, you have some idea of the end-state you would like your system to be in. In my example, I would like to start with a bare installation of CentOS 7 minimal and have it configured as a webserver with my own custom webpage installed. Let’s break down the steps needed to achieve this:

  1. Install the web server software package (httpd)
  2. Configure the system firewall to allow web traffic
  3. Ensure my web server is running.
  4. Copy my web configuration to the root directory of the web server

Pretty simple, right? First I’ll map out which Ansible modules I will need to accomplish these tasks, so I’m going to go to Google and search the Ansible documentation for the module I need for each step.

  1. For step one I’ll Google “Ansible module package install” and I’ll quickly see that I can use a distro-specific module such as ‘yum’ for Red Hat based distros or ‘apt’ for Debian based distros, OR I can use a distribution-agnostic module like ‘package’. I’m going to use the ‘package’ module because I want this playbook to work on more than one Linux distribution. So I’ll note the ‘package’ module next to step one, and continue on to my next step.
  2. Next I need to configure my firewall, so I’ll do the same thing I did in step one and Google “Ansible module to configure firewall”. I quickly see that I can use the ‘firewalld’ module so I’ll note that next to step 2.
  3. Third, I need to ensure my web server is started, for this I can use the ‘service’ module.
  4. Finally I need to copy my web configuration, and after searching through the documentation I find that I can use either the template module or the copy module. I’m going to use the copy module.

Now that I know which modules I need to perform my desired tasks, I can get to work with writing my playbook.

I’ll use a text editor to create a file called ‘deploy-httpd.yml’ and start writing my playbook. As I get to each step in my playbook, I can reference the Ansible documentation pages that I searched for earlier to know which arguments are available to use for each module.
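Here’s a minimal sketch of what such a playbook could look like; the real playbook I used is linked below, and the index.html source file here is a stand-in for your own custom page:

---
- hosts: all
  become: yes
  tasks:
    # Step 1: install Apache using the distro-agnostic package module
    - name: Install the web server package
      package:
        name: httpd
        state: latest

    # Step 2: open the firewall for HTTP traffic
    - name: Allow web traffic through the firewall
      firewalld:
        service: http
        permanent: yes
        immediate: yes
        state: enabled

    # Step 3: make sure the web server is running (and starts at boot)
    - name: Ensure httpd is running
      service:
        name: httpd
        state: started
        enabled: yes

    # Step 4: copy my custom page to the web server root
    - name: Copy my web configuration
      copy:
        src: index.html
        dest: /var/www/html/index.html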

View my example playbook on my GitHub page: https://github.com/mikeleidner/ansible-example/blob/master/deploy-httpd.yml

Now that I’ve finished my playbook I can test it out. I’ll run it by executing the ‘ansible-playbook’ command and pointing it to my newly created deploy-httpd.yml playbook.
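With my one-node inventory from earlier, the run is simply:

ansible-playbook deploy-httpd.yml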

Looks like it was successful, Ansible didn’t report any errors and showed that it made 4 changes. As I touched on earlier, since Ansible is idempotent, I could now try running the playbook again and no changes would be made since it sees they have already been completed.

Let’s browse to my newly configured web site and admire my work.

One final recommendation: use a version control system like Git when you start writing more complicated playbooks. If you haven’t already used Git, it’s a great skill to learn and there are many benefits to using it. I put the above example on my GitHub page if you’d like to check it out: github.com/mikeleidner.

That’s it for this lesson. If you found this beneficial, please leave me a comment, and feel free to tell me what you’d like me to do a lesson on next! Thanks for reading!