Ansible – Backup Files Before Change

Assuming you used Ansible to create your configuration file, you would not really need to back it up: your Ansible playbooks and included variable files should already be under version control, so you have a copy of the configuration at each stage of change. But many system configuration files are part of the OS installation and were likely never touched by Ansible until a need arose.

I ran across this while the security team was using CIS Benchmarking to harden our systems. The benchmark would recommend steps to close potential vulnerabilities, and we would create Ansible playbooks to put those configurations in place. Some members of the DevOps team wrote playbooks that created time-stamped backup copies of the configuration files. This was helpful for the day-to-day admins who would receive calls when a benchmark caused something to stop working, sometimes days later: the system admin could see what had changed.

However, as more benchmarks were deployed, the number of backup files became unmanageable. Worse, if a file under /etc/sysconfig were changed, copies left in the same directory with merely a timestamp appended would still be read by the system services, and which configuration actually took effect was the luck of the draw. That is when I went back to something I had done for years before DevOps and Ansible were a thing. On the systems I administered I would always install RCS, the Revision Control System. I tried others such as Subversion and CVS, but they would modify file ownership and permissions, which is a big problem for system configuration files. Before making manual changes to system configuration files, I would check them in and back out of RCS. The Ansible playbook below shows an example of using RCS to create a backup of a file prior to applying a configuration change to multipath.conf. The same process may be used with any file to store a version locally.

  pre_tasks:
    - name: Install RCS, device-mapper and device-mapper-multipath packages
      ansible.builtin.package:
        name:
          - device-mapper
          - device-mapper-multipath
          - rcs
        state: present

    - name: Check if /etc/multipath.conf exists
      ansible.builtin.stat:
        path: /etc/multipath.conf
      register: multipath_conf_stat

    - name: Create revision control backup directory
      ansible.builtin.file:
        path: /etc/RCS
        state: directory
        mode: '0775'

    - name: Check existing multipath.conf file into revision control
      ansible.builtin.shell: ci -t-"initial version" -i -l /etc/multipath.conf || ci -m"updated defaults" -l /etc/multipath.conf
      when: multipath_conf_stat.stat.exists


Finally, another reason to store a version locally is that if the change happens to knock the system off the network, you will have to work through the console. The system may not have access to a remote version control system or an offline backup of the modified configuration files. Having the local RCS copies allows for faster troubleshooting and recovery.
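The manual round trip that the playbook automates looks like this; a throwaway-directory sketch that assumes the rcs package is installed and skips itself otherwise:

```shell
# sketch of the manual RCS round trip the playbook automates;
# runs in a throwaway directory and skips itself if rcs is absent
WORK=$(mktemp -d) && cd "$WORK" && mkdir RCS
echo "defaults { user_friendly_names yes }" > multipath.conf
if command -v ci >/dev/null 2>&1; then
    ci -t-"initial version" -l multipath.conf      # first check-in (1.1), keep a locked working copy
    echo "# tuned blacklist" >> multipath.conf
    ci -m"updated defaults" -l multipath.conf      # record the change as revision 1.2
    rlog multipath.conf                            # show the revision history
    co -p1.1 multipath.conf > multipath.conf.r1.1  # recover the pre-change version
else
    echo "rcs not installed; skipping demo"
fi
```

Because the ,v archives live in the local RCS subdirectory, system services never see them as candidate configuration files, unlike timestamped copies.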

Linux Command History Logging

If you want to track what users are doing, there is of course the last command, which shows who has logged in and for how long. Then there is the auditd service, which logs transactions to the /var/log/audit/audit.log file. This tracks some of the commands executed on a system along with some of their arguments. However, the audit logs use parameter=value syntax and store some values as hex, which makes them less user friendly.

It is not just about tracking what users are doing incorrectly, but also about being able to reproduce on another system something that worked on the first. You may need to put the commands or arguments into a script or work them into a DevOps tool like Ansible.

To configure a Linux system so that user shells record command history, add the following lines to /etc/profile. They create a .history directory under each user's home directory, and the HISTFILE is created under it using the name of the real user rather than the su or sudo user.

## setup history
export REALUSER=$(/usr/bin/who -m | awk '{print $1}')   # login (real) user
export EFCTUSER=$(/bin/whoami)                          # effective user after su/sudo
shopt -s histappend
[[ ! -d ${HOME}/.history ]] && mkdir -p ${HOME}/.history
export HISTTIMEFORMAT="%Y/%m/%d %T "
export HISTFILE=${HOME}/.history/.${REALUSER:-$USER}
export HISTSIZE=6000
if [ "${HISTCONTROL:-}" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi
export PROMPT_COMMAND="history -a ; ${PROMPT_COMMAND}"
readonly HISTFILE HISTSIZE

The HISTTIMEFORMAT will date- and time-stamp each action taken. The ignoredups setting records only the last instance of a command run multiple times, to save space. Prepending history -a to PROMPT_COMMAND forces history to be written after each command instead of at logout, ensuring actions are recorded: if a user loses their connection, the session might otherwise never write its command history.

Alternatively, you may want to store histories under /var/spool/history/{effective user}/{real user}.
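That layout can be wired up with a small variation on the profile snippet; a sketch, assuming root has pre-created a /var/spool/history tree with per-user writable subdirectories (the HISTROOT override is an addition here, for illustration):

```shell
## variation: central spool layout /var/spool/history/{effective user}/{real user};
## assumes root has pre-created the tree with per-user writable subdirectories
HISTROOT=${HISTROOT:-/var/spool/history}
REALUSER=$(who -m 2>/dev/null | awk '{print $1}')   # login (real) user
EFCTUSER=$(whoami)                                  # effective user after su/sudo
HISTDIR="${HISTROOT}/${EFCTUSER}"
mkdir -p "$HISTDIR" 2>/dev/null                     # no-op if root already made it
export HISTFILE="${HISTDIR}/.${REALUSER:-$EFCTUSER}"
export HISTTIMEFORMAT="%Y/%m/%d %T "
```

This keeps each admin's trail separate even when several people su or sudo to the same effective account.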

NAS In-Flight Encryption

Introduction

Information Technology deals with transmitting and storing a lot of data. Some of this information may be classified as PHI (Protected Health Information) or PII (Personally Identifiable Information), which is regulated by laws enacted by the US (HIPAA and SOX) and foreign governments (GDPR). To protect personal information from being corrupted or leaked to others who might use it to harm an individual, there are requirements for protecting integrity and privacy. To meet the privacy requirements, encryption, isolation, or both are specified by standards documented in or referenced by government regulations.

Network Attached Storage is typically accessed either as file-based storage through NFS exports or SMB/CIFS shares, or as block storage through iSCSI. Since NAS traffic traverses enterprise networks, it is recommended that connections to NAS be encrypted, especially where HIPAA data or sensitive PII may be accessed and transmitted across the network. This article covers how to enable and configure in-flight encryption for NAS storage.
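As a taste of what the article covers, encryption can be requested at mount time; illustrative /etc/fstab entries (the server, export, and share names are placeholders) using Kerberos privacy for NFSv4 and SMB 3 sealing:

```
# NFSv4 with Kerberos privacy (krb5p = authenticate, sign, and encrypt)
nas01:/export/phi   /mnt/phi       nfs4   sec=krb5p,vers=4.2                          0 0

# SMB 3 share with in-flight encryption ("seal" requires SMB 3.0 or later)
//nas01/phi         /mnt/phi-smb   cifs   vers=3.1.1,seal,credentials=/etc/smb-cred   0 0
```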


PowerShell – Gather Bitlocker Recovery Keys

If you have enabled BitLocker encryption on your system, there are circumstances when it may come up with errors and ask for the recovery keys to the encrypted volume. You can sometimes get these from your Microsoft account if you use one to log in to your system. Alternatively, in a business office or datacenter you can set up a key management server to manage these keys. However, it is still a good idea to grab another copy and back it up to a secure location. In smaller environments this may be as simple as sending it to a secured share and backing that up to a secure location, or even putting it on a thumb drive and storing that in a firebox or fire safe. But how do you access these keys? There are several ways.

  • Access the keys from your Microsoft account, if you are using one. However, if you are a business, you are likely using a local account or a domain account.
  • Access the keys from Active Directory on your domain controller. Under {domain}->{Computers}->{Computer Name} and Properties there is a BitLocker tab with the keys.
  • Use a PowerShell script to access the keys. The script will need to be run with administrator privileges. This option is what this post is primarily about.

The code below may be saved to a script GatherBitLockerRecoveryKeys.ps1. The script loops through each partition on a system and lists its recovery keys to a file. BasePath should be set to a secure location. Obviously, in this example it points to the C drive, which in all likelihood is encrypted and would therefore be of no use in an emergency: you would have a chicken-and-egg scenario where you couldn't get to your recovery keys when an issue occurred.
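A minimal sketch of the approach, using the built-in Get-BitLockerVolume cmdlet (the share path and file name here are placeholder assumptions, not the full script from this post; the session must be elevated):

```powershell
# Hedged sketch: dump each volume's recovery passwords to a file.
# $BasePath is a placeholder -- point it at a secure, *unencrypted-reachable* location.
$BasePath = '\\backupserver\secure$\bitlocker'
$OutFile  = Join-Path $BasePath "$env:COMPUTERNAME-RecoveryKeys.txt"

Get-BitLockerVolume | ForEach-Object {
    foreach ($kp in $_.KeyProtector) {
        # only RecoveryPassword protectors carry the 48-digit recovery key
        if ($kp.KeyProtectorType -eq 'RecoveryPassword') {
            '{0}  {1}  {2}' -f $_.MountPoint, $kp.KeyProtectorId, $kp.RecoveryPassword
        }
    }
} | Out-File -FilePath $OutFile
```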


SSH Agent Automation

On Linux systems, many of us administrators and engineers have our favorite profiles and configuration file settings. One of the most used tools, and a must for securing an environment, is secure shell, or SSH. Secure shell uses asymmetric encryption: a public/private key pair in which data encrypted with one key can only be decrypted with the other. OpenSSH supports several algorithms, such as RSA, ECDSA, and Ed25519. The public key may then be shared to other systems in the ~/.ssh/authorized_keys file, indicating that a system holding the matching private key may ssh directly in by answering a public-key challenge instead of a password. Further, the key pair may be protected with a passphrase that must be entered before the keys can be used for authentication.

Many DevOps Infrastructure-as-Code tools, other management tools, and even home-grown scripts may use ssh to query and remotely manage multiple systems in an environment. The passphrase requirement can get in the way of such automation and cause batch processes to fail. The ssh-agent was created to resolve this limitation by caching keys and their passphrases so that subsequent ssh sessions are not prompted. The script below may be added to a .bashrc or .kshrc user profile to start an ssh-agent that later sessions can reuse. It creates a link to the agent's socket as ~/.ssh/ssh_auth_sock and points the SSH_AUTH_SOCK environment variable at that link, allowing sessions going forward to piggyback on the initial ssh-agent. This may also be used with scheduled jobs.

## Check if the agent is accessible and if not remove socket file and kill agents
export SSH_AUTH_SOCK=~/.ssh/ssh_auth_sock
ssh-add -l >/dev/null 2>&1 ; RT=$?
## ssh-add -l returns 2 when it cannot contact the agent (1 only means no keys loaded)
if [ -h ~/.ssh/ssh_auth_sock ] && [ ${RT} -ge 2 ]; then
    echo "SSH agent is dead (${RT}); removing socket link file and killing hung ssh-agent!"
    rm -f ~/.ssh/ssh_auth_sock
    pkill -u "$(whoami)" -x ssh-agent    # -x matches the process name exactly
fi
## if the auth socket link does not exist start the agent and recreate the link
if [ ! -h ~/.ssh/ssh_auth_sock ]; then
    echo "ssh-agent socket link does not exist; starting new agent!"
    eval "$(ssh-agent -s)"
    ln -sf "$SSH_AUTH_SOCK" ~/.ssh/ssh_auth_sock
fi
export SSH_AUTH_SOCK=~/.ssh/ssh_auth_sock
ssh-add -l >/dev/null 2>&1 || ssh-add
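For scheduled jobs, the same stable socket path can be leaned on; a hypothetical wrapper function (not part of the profile snippet above) that verifies the agent is alive before running the given command:

```shell
# agent_run -- hypothetical cron helper: point SSH_AUTH_SOCK at the stable
# socket link, confirm the agent answers, then run the requested command
agent_run() {
    SSH_AUTH_SOCK="${HOME}/.ssh/ssh_auth_sock"
    export SSH_AUTH_SOCK
    if ssh-add -l >/dev/null 2>&1; then
        "$@"                      # agent alive: run the job with key access
    else
        echo "no live ssh-agent behind ${SSH_AUTH_SOCK}" >&2
        return 1
    fi
}

# example cron entry (placeholder host and paths):
# 0 2 * * * . $HOME/.bashrc && agent_run rsync -a /etc/ backup@nas01:/srv/etc/
```

Failing fast when the agent is gone beats letting an unattended job hang on a passphrase prompt.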

VMware vSphere Discovery and Inventory

As an Infrastructure Engineer or Architect, you need a good grasp of what systems comprise your environment. In the past this was fairly straightforward: you kept configuration items in your CMDB, and teams had their workbooks and playbooks. However, in this new world of DevOps and CI/CD, with automation tool sets such as Terraform, Chef, Ansible, BladeLogic, and many others, developers can stand up their own virtual instances and tear them down. This can make it hard to have a complete picture of your environment. It is especially difficult for storage infrastructure, since many virtual instances can be deployed on large data stores, and when there is an I/O performance problem, tracking down the related hardware can be like following the rabbit to Wonderland.

I ran into this while leading several projects for storage infrastructure servicing an ESX environment, and I developed a PowerCLI script to pull the necessary virtual instance and datastore data from the vSphere systems. First you will need to install PowerCLI for VMware, which can be found here:
https://developer.vmware.com/web/tool/12.4/vmware-powercli
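As a taste of the inventory pull, a minimal PowerCLI sketch mapping virtual machines to their datastores (the vCenter name and output path are placeholders, not the full script from this post):

```powershell
# Connect to a hypothetical vCenter and export a VM-to-datastore map
Connect-VIServer -Server vcenter01.example.com

Get-VM | ForEach-Object {
    [PSCustomObject]@{
        VM         = $_.Name
        PowerState = $_.PowerState
        # join the names of every datastore backing this VM
        Datastores = ($_ | Get-Datastore | Select-Object -ExpandProperty Name) -join ','
    }
} | Export-Csv -NoTypeInformation -Path vm-datastores.csv
```

With that CSV in hand, an I/O hotspot on a datastore can be traced back to the tenant VMs in seconds instead of chasing it by hand.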


Optimizing Disk IO Through Abstraction

To Engineer or Not To…

When disk capacity is released to a new application or service, the project often does not consider how best to use the storage that has been provided. Essentially, the approaches fall into one of two schools of thought. The first is to reduce upfront engineering to a couple of design options and resolve issues as they arise. The second is to engineer several solution sets with variable parameters, providing a broader palette of solutions and policies from which an appropriate one may be selected.

Reduced Simplified Engineering

  • Apply one of a couple infrastructure designs to a project.
  • This approach requires less work upfront, has a simpler execution, and needs less requirements gathering.
  • Potentially more time and effort will be spent resolving issues when resources and design are insufficient.

Pushing Your Profile and SSH Keys

Whenever you start supporting a new environment, especially in a large corporation, you are usually confronted with many systems. Security will take care of setting up your access across whatever platforms there may be, but generally you are left holding the bag when it comes to setting up your ssh keys and profile customizations, not to mention distributing any scripts or tools you have come to rely upon. Of course, before you put any tools on a system there are several things to consider. Think about which environments you perform the distributions to first; it is always good to start with development or lab environments and move out from there. Also consider the corporate policies for the environment, which might limit your ability to have your own set of tools and scripts at all; you may be limited to simple .profile changes and ssh keys. Implementing a script to push these keys and profiles out may need to go through various degrees of red tape. Whatever policies and requirements exist in your organization, it is your responsibility to know them and to determine how or whether the tools discussed here may be used.
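As a starting point, a dry-run sketch of such a push loop (the hostnames are placeholders, it assumes an existing ed25519 key pair, and it only prints what it would do unless DO_PUSH=1 is set):

```shell
# hypothetical push loop -- dry-run by default; set DO_PUSH=1 to execute
HOSTS="dev01 dev02 lab03"              # placeholder target systems
PUBKEY="${HOME}/.ssh/id_ed25519.pub"   # assumes this key pair already exists

for h in $HOSTS; do
    if [ "${DO_PUSH:-0}" = "1" ]; then
        ssh-copy-id -i "$PUBKEY" "$h"      # append key to remote authorized_keys
        scp "${HOME}/.profile" "${h}:~/"   # push profile customizations
    else
        echo "would push ${PUBKEY##*/} and .profile to $h"
    fi
done
```

The dry-run default keeps the script safe to demo while the red tape is still being cleared; only flip DO_PUSH on once policy allows it.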

