DevOps – Missing Version Control

For many DevOps implementations, from the perspective of system deployment and configuration management, tools such as Ansible, Chef, SaltStack and Puppet, just to name a few, are the staple for meeting DevOps requirements. Version control is used for the playbooks and recipes, and of course for any source code for components and applications that are compiled and deployed to systems. When I started my first job out of college I worked with a contractor named LeRoy Budnick who had done work on the development of the original IT Service Management and ITIL standards.

LeRoy introduced me to the idea of an expanded role for version control: using RCS to manage versions of configuration files. If changes caused issues on systems they could quickly be backed out by checking out the previous version of the configuration file. On production systems version control could be monitored to see if unapproved changes were made to a system, and those changes could easily be backed out. It was a great idea and I have used it ever since instead of just making backup copies of configuration files. In fact there are times when making backup copies of configuration files can cause problems. In Red Hat's /etc/sysconfig boot configuration directory, if you make a backup copy by just adding a suffix to the file name, you will find that on boot the system will use not only the configuration files but these backups as well. The configuration that ends up running after boot will depend upon which config file was read last and how the variables overlaid one another. In short, you have no idea what your configuration is after boot. Using RCS, the versions are placed in the RCS subdirectory and do not end up being executed, since they are no longer in the sysconfig or sysconfig/network-scripts directories.

Enter the world of DevOps and tools for deploying operating systems, compiling applications on the fly, installing them on systems and updating configurations across your data centers. One might think that the need for such version control has been deprecated by tools such as Ansible and Chef, or at the very least that we only need to use version control for the playbooks and recipes. But not so fast. My security team started using Ansible to roll out CIS Benchmark system hardening configurations, and occasionally this would hang systems and make them inaccessible. If the system is inaccessible then the DevOps tools cannot access it to back out the changes, and an administrator has to get involved. Then figuring out how far along the change was, what changed, and backing those changes out was quite a chore. The team then started having the Ansible playbooks make backup copies of the configuration files before implementing changes. This made it a little easier, but then enter /etc/sysconfig. As mentioned earlier, making date-stamped backup copies in /etc/sysconfig doesn't work. The security team ended up creating Frankenstein configs with many versions of config files, all of which were being executed. Further, not all configuration files are managed by configuration management tools; for example, network TCP/IP configurations are specific to each system and sometimes require system-specific tuning. At this point it should be clear that there is a good case for using version control on configuration files. In fact I wish DevOps tools like Chef or Ansible would build it into their implementations.

This prompted me to write the previous article about how to use RCS with Ansible to make backups of configurations before deploying changes. In the past I tried to use more modern version control like CVS or Subversion, but found that checking the configuration files in and out resulted in them being recreated with different permissions. RCS doesn't recreate files; it just overwrites them, retaining their permissions and ACLs. However, RCS is now being deprecated and is no longer available for install on some Linux BaseOS deployment images. It may be available on some supplementary images, but it is clear that RCS is going away. I have been working on transitioning to Git and have come up with some procedures for using Git with hooks to back up and restore ACLs and permissions. I will go over how to use Git for configuration file version control in the next article.
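As a rough preview of that approach, here is a sketch, assuming git is installed; the hook contents and the .metadata file name are my own illustration, not an established convention. Git itself only stores the executable bit, so the hook records the full mode and ownership alongside each commit:

```shell
set -e
repo=$(mktemp -d)            # stand-in for /etc in this demo
cd "$repo"
git init -q
git config user.email admin@example.com
git config user.name admin

# Pre-commit hook: record mode, owner and group of every tracked file
# into .metadata and include it in the commit.
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
git ls-files | while read -r f; do
    stat -c '%a %U %G %n' "$f"
done > .metadata
git add .metadata
HOOK
chmod +x .git/hooks/pre-commit

printf 'ONBOOT=yes\n' > demo.conf
chmod 600 demo.conf
git add demo.conf
git commit -qm "initial version"

# After a checkout/restore, .metadata can drive replaying the modes
# (chown needs root, so it is shown commented out here):
while read -r mode user group file; do
    [ "$file" = ".metadata" ] && continue
    chmod "$mode" "$file"
    # chown "$user:$group" "$file"
done < .metadata
stat -c '%a' demo.conf
```

The next article will cover the full procedure; this is only the core idea.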

Ansible – Backup Files Before Change

Assuming you've used Ansible to create your configuration file, you would not really need to back it up. That is because you should already be using version control on your Ansible playbooks and included variable files, so you should have a copy of all the configuration changes at various stages. But many system configuration files are part of the OS installation and were likely never touched by Ansible until a need arose.

I ran across this as the security team was using CIS Benchmarking to harden our systems. The benchmark would recommend steps to take to close potential vulnerabilities, and we would create Ansible playbooks to put these configurations in place. Some members of the DevOps team created playbooks that would create time-stamped backup copies of the configuration files. This was helpful for the day-to-day admins who would receive calls when the benchmarks caused something to stop working, sometimes days later. The system admin could see what had changed.

However, as more benchmarks were deployed, the number of files became too much to handle. Also, if an /etc/sysconfig file were changed, any copies with merely a timestamp in the same directory would still be processed by the system services, and the configuration that ended up being used was the luck of the draw. That is when I went back to something I had done for years before DevOps and Ansible were a thing. On the systems I administered I would always install RCS, the Revision Control System. I tried to use others like Subversion and CVS, but they would modify the file ownership and permissions, which was a big problem for system configuration files. Before making manual changes to system configuration files, I would check them into and back out of RCS. The Ansible playbook below shows an example of using RCS to create a backup of a file prior to applying a configuration change to multipath.conf. This same process may be used with any file to store a version locally.

  pre_tasks:
    - name: Install RCS, device-mapper and device-mapper-multipath packages
      ansible.builtin.package:
        name:
          - device-mapper
          - device-mapper-multipath
          - rcs
        state: present

    - name: Check if /etc/multipath.conf exists
      ansible.builtin.stat:
        path: /etc/multipath.conf
      register: multipath_conf_stat

    - name: Create revision control backup directory
      ansible.builtin.file:
        path: /etc/RCS
        state: directory
        mode: '0775'

    - name: Check existing multipath.conf file into revision control
      ansible.builtin.shell: ci -t-"initial version" -i -l /etc/multipath.conf || ci -m"updated defaults" -l /etc/multipath.conf
      when: multipath_conf_stat.stat.exists


Finally, another reason to store a version locally is that if the change happens to knock the system off the network, you will have to work through the console. The system may not have access to a remote version control repository or an offline backup of the modified configuration files. Having the local RCS history allows for faster troubleshooting and recovery.
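For completeness, a hedged sketch of a matching rollback task, assuming the file was previously checked in as in the playbook above; the task name and revision number are illustrative:

```yaml
  tasks:
    - name: Back out to a prior revision of multipath.conf
      ansible.builtin.shell: co -f -r1.1 /etc/multipath.conf
      # -f overwrites the working file; 1.1 is illustrative, and
      # rlog /etc/multipath.conf lists the revisions actually available.
```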

Linux Command History Logging

If you want to track what users are doing, there is of course the last command, which shows who has logged in and for how long. Then there is the auditd service, which logs transactions in the /var/log/audit/audit.log file. This tracks some of the commands executed on systems and some arguments. However, the audit logs use sets of parameter=value syntax, and some values are stored as hex, making them less user friendly.

It is not just about tracking what users are doing incorrectly but also being able to reproduce something which effectively worked on a system on another system. You may need to put the commands or arguments into a script or work them into a DevOps tool like Ansible.

To configure a Linux system so that users' shells record command history, add the following lines to /etc/profile. This will create a .history directory under each user's home directory, and the HISTFILE will be created under it using the name of the real user, not the su or sudo user.

## setup history
export REALUSER=$(/usr/bin/who -m | awk '{print $1}')
export EFCTUSER=$(/bin/whoami)
shopt -s histappend
[[ ! -d ${HOME}/.history ]] && mkdir -p ${HOME}/.history
export HISTTIMEFORMAT="%Y/%m/%d %T "
eval export HISTFILE=${HOME}/.history/.${REALUSER:-$USER}
export HISTSIZE=6000
if [ "${HISTCONTROL:-}" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi
export PROMPT_COMMAND="history -a ; ${PROMPT_COMMAND}"
readonly HISTFILE HISTSIZE

The HISTTIMEFORMAT will date and time stamp each action taken. The ignoredups setting will only record the last instance of a command run multiple times, to save space. Prepending history -a to the prompt command forces history to be stored after each command is run instead of at logout, ensuring actions taken are recorded; if a user loses their connection, the session may otherwise not record the command history.

Alternatively, you may want to store histories under /var/spool/history/{effective user}/{real user}.
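A minimal sketch of that alternative layout, using a temporary directory in place of /var/spool/history so it can run unprivileged; the user names are placeholders for the values the profile snippet derives from who and whoami:

```shell
base=$(mktemp -d)          # stand-in for /var/spool/history
REALUSER=alice             # normally: $(/usr/bin/who -m | awk '{print $1}')
EFCTUSER=root              # normally: $(/bin/whoami)

histdir="$base/$EFCTUSER"
mkdir -p "$histdir"
chmod 1733 "$base"         # writable but not listable/removable by others

HISTFILE="$histdir/.$REALUSER"
: > "$HISTFILE"            # shell will append history here
echo "$HISTFILE"
```

This keeps histories off users' home directories, which makes them harder to tamper with and easier to collect centrally.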

Excel – Formatting MAC/WWN with Colons

Often you will receive data about machine addresses or world wide names from one source without any colons, while another source will have these addresses formatted with colons. You can remove the colons from one, but if you have to provide your data in a colon-delimited format for another application, you will have to add a colon after every two characters. The easiest way to do this is with the following formula:

=TEXTJOIN(":",TRUE,MID(A1,SEQUENCE(1,LEN(A1)/2,1,2),2))

This assumes that your data is in cell A1; you will need to adjust according to your worksheet's layout. The formula uses the SEQUENCE function to create an array of numbers for every other character, according to the length of the data in the cell, with a step of two. Each of these numbers is the starting position of a two-character group, which the MID function in array mode then extracts for joining.
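For comparison, the same every-two-characters logic sketched in Python (the function name and sample address are my own, not from the post); it mirrors what SEQUENCE and MID are doing in the formula:

```python
def add_colons(addr: str) -> str:
    """Insert a colon after every two characters of a MAC/WWN string."""
    # range(0, len, 2) plays the role of SEQUENCE's step-of-two starts,
    # and the slice addr[i:i+2] plays the role of MID(...,2).
    return ":".join(addr[i:i + 2] for i in range(0, len(addr), 2))

print(add_colons("0025B5000A0C"))  # a sample MAC-style address
```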


NAS In-Flight Encryption

Introduction

Information Technology deals with transmitting and storing a lot of data. Some of this information may be classified as PHI (Protected Health Information) or PII (Personally Identifiable Information), which are regulated by laws enacted by the US (HIPAA and SOX) and foreign governments (GDPR). To protect personal information from being corrupted or leaked to others who might use it to negatively impact an individual, there are requirements for protecting integrity and privacy. To meet privacy requirements, encryption, isolation, or both are specified by standards documented or referenced by government regulations.

Network Attached Storage is typically accessed either as file-based storage through NFS exports or SMB/CIFS shares, or as block storage through iSCSI. Since NAS traffic traverses enterprise networks, it is recommended that connections to NAS be encrypted, especially where HIPAA data or sensitive PII may be accessed and transmitted across the network. This article covers how to enable and configure in-flight encryption for NAS storage.


PowerShell – Gather Bitlocker Recovery Keys

If you have enabled BitLocker encryption on your system, there are circumstances when it may come up with errors and ask for the recovery keys to the encrypted volume. You can sometimes get these from your Microsoft account, if you use one to log in to your system. Alternatively, in a business office or datacenter you can set up a key management server to manage these keys. However, it is still a good idea to grab another copy and back it up to a secure location. In smaller environments this may be as simple as sending it to a secured share and backing that up to a secure location, or even putting it on a thumb drive and storing that in a firebox/firesafe. But how do you access these keys? There are several ways.

  • Access the Keys from your Microsoft Account if you are using one. However if you are a business then you are likely using a local account or a domain account.
  • Access the keys from your Domain Controller Active Directory. Under {domain}->{Computers}->{Computer Name} and Properties there is a Bitlocker tab with the keys.
  • Use a PowerShell script to access the keys. The script will need to be run with administrator privileges. This option is what this post is primarily about.

The code below may be saved to a script GatherBitLockerRecoveryKeys.ps1. This script will loop through each partition on a system and list its recovery keys to a file. The BasePath should be set to a secure location. Obviously in this example it is pointing to the C drive, which in all likelihood is encrypted and therefore would be of no use in an emergency, since you would have a chicken-and-egg scenario where you couldn't get to your recovery keys if an issue occurred.


Excel – Converting Byte Units to Gb

The previous two posts covered how to extract numbers and then letters from a cell. This post brings it all together into a function that can convert a number of bytes with a unit to gigabytes. In the past I have done this with VBA macro functions such as the one inserted below. The issue with using VBA macros is that they can limit the portability of a spreadsheet. Since hackers started using VBA macros to write malware and spyware, many organizations have put policies in place that block the execution of VBA macros or require macro signatures registering the functions to the organization. These limit the ability of a spreadsheet to be distributed and used without significant steps to get approvals and registrations under a company's policies.

VBA GetNumber Function

Function GetNumber(rWhere As Variant, Optional Accept_Decimals As Boolean, Optional Accept_Negative As Boolean) As String

Dim CharPosition As Integer, i As Integer, StringLength As Integer
Dim sText As String, mchNeg As String, mchDec As String
Dim ThisNum As String
Dim vChar
mchNeg = vbNullString
mchDec = vbNullString

Select Case TypeName(rWhere)
    Case Is = "Range"
        ' Get the text from the supplied range
        sText = Trim(rWhere.Text)
    Case Else
        sText = Trim(rWhere)
End Select

If Accept_Decimals = True Then
    mchDec = "."
End If
If Accept_Negative = True Then
    mchNeg = "-"
End If

' Walk the string from right to left collecting digits (and, when
' allowed, the decimal point and negative sign)
StringLength = Len(sText)
For CharPosition = StringLength To 1 Step -1
    vChar = Mid(sText, CharPosition, 1)
    If IsNumeric(vChar) Or vChar = mchNeg Or vChar = mchDec Then
        i = i + 1
        ThisNum = Mid(sText, CharPosition, 1) & ThisNum
        If IsNumeric(ThisNum) Then
            If CDbl(ThisNum) < 0 Then Exit For
        Else
            ' Drop a leading sign or point that does not form a valid number
            ThisNum = Replace(ThisNum, Left(ThisNum, 1), "", , 1)
        End If
    End If
    If i = 1 And ThisNum <> vbNullString Then ThisNum = CDbl(Mid(ThisNum, 1, 1))
Next CharPosition
GetNumber = ThisNum
End Function

VBA BytesToGb Function

Function BytesToGb(Where As Range) As Double

Dim strWhere As String
Dim NumPart As Variant, NumUnit As String
Dim NumGB As Double

' Get the text from the supplied range
strWhere = Trim(Where.Text)

' Extract the number and unit from the text
NumPart = GetNumber(Where, True, True)
NumUnit = LCase(Trim(Replace(strWhere, NumPart, "")))

' Use the unit to convert the number part to a GB value
Select Case NumUnit
    Case "kb", "k"
        NumGB = NumPart / 1024 ^ 2
    Case "mb", "m"
        NumGB = NumPart / 1024
    Case "gb", "g"
        NumGB = NumPart
    Case "tb", "t"
        NumGB = NumPart * 1024
    Case Else
        NumGB = NumPart / 1024
End Select

' Return the GBs
BytesToGb = CDbl(NumGB)
End Function

Rather than use a VBA macro function like the one above, since newer versions of Excel give us more array functions, this can be done using the formulas discussed in the previous two posts combined into a named lambda function.

'LAMBDA(vX,IFERROR(VALUE(TEXTJOIN("",TRUE,FILTER(MID(vX,SEQUENCE(1,LEN(vX)),1),ISNUMBER(VALUE(MID(vX,SEQUENCE(1,LEN(vX)),1)))+(MID(vX,SEQUENCE(1,LEN(vX)),1)="."),"")))/10^((MATCH(LEFT(UPPER(TRIM(TEXTJOIN("",TRUE,FILTER(MID(vX,SEQUENCE(1,LEN(vX)),1),NOT(ISNUMBER(VALUE(MID(vX,SEQUENCE(1,LEN(vX)),1))))*(MID(vX,SEQUENCE(1,LEN(vX)),1)<>"."),"")))),1),{"P","T","G","M","K"},0)-3)*3),""))

Formula Explanation

For the explanation of the formulas on either side of the /10^, please see the previous two posts. So let's make it into a function using the Name Manager under Formulas with the LAMBDA function. You may be familiar with creating named ranges: under the Formulas menu in Excel you can select a range of cells and choose Define Name so that you can refer to it elsewhere. But you can also create a name that behaves like a function. Open Name Manager, add the name BytesToGB, and paste the formula above into the reference field. Then you can reuse this formula against any reference just by using =BytesToGB(Ref).

Data: 2.5 TB    Result (GB): 2500

Formula components:

LAMBDA(vX,
IFERROR(VALUE(TEXTJOIN("",TRUE,FILTER(
MID(vX,SEQUENCE(1,LEN(vX)),1),
ISNUMBER(VALUE(MID(vX,SEQUENCE(1,LEN(vX)),1)))+
(MID(vX,SEQUENCE(1,LEN(vX)),1)="."),"")))
/
10^((MATCH(LEFT(UPPER(TRIM(TEXTJOIN("",TRUE,FILTER(
MID(vX,SEQUENCE(1,LEN(vX)),1),NOT(ISNUMBER(VALUE(
MID(vX,SEQUENCE(1,LEN(vX)),1))))*(
MID(vX,SEQUENCE(1,LEN(vX)),1)<>"."),"")))),1),
{"P","T","G","M","K"},0)-3)*3),""))

In this function we divide the number extracted from the cell by 10 raised to a power derived from matching the extracted letter against the array {"P","T","G","M","K"} and subtracting 3. So for petabytes, 1 minus 3 equals negative 2, times 3 equals negative 6; dividing a number by 10 raised to the power of -6 appends six zeros, arriving at the number of gigabytes. Similarly for terabytes, 2 minus 3 equals -1, multiplied by 3 is -3, which results in 3 zeros or decimal places being added to the number, converting it to gigabytes.
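The same arithmetic can be sketched in Python as a comparison aid (the function name is my own); note it mirrors the lambda's decimal powers of ten, not the 1024-based VBA version:

```python
import re

def bytes_to_gb(text: str) -> float:
    """Convert a string like '2.5 TB' to gigabytes, decimal units."""
    num = float(re.sub(r"[^0-9.\-]", "", text))      # keep digits, '.', '-'
    unit = re.sub(r"[^A-Za-z]", "", text).upper()[0]  # first unit letter
    # Position in {"P","T","G","M","K"}; (pos-3)*3 is the power of ten
    # to divide by, exactly as in the MATCH(...)-3)*3 of the formula.
    pos = {"P": 1, "T": 2, "G": 3, "M": 4, "K": 5}[unit]
    return num / 10 ** ((pos - 3) * 3)

print(bytes_to_gb("2.5 TB"))  # matches the 2500 shown in the table above
```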

Excel – Extracting Letters from a String

In the previous post I covered how to extract numbers, including decimal numbers, from a string. The flip side of that is to extract the letters from the same string. The formula to do this is very similar to the number-extraction formula, but with the sense reversed: we don't want numbers, and we don't want periods or decimals. So here is the formula, and after it I will describe how it works.

=TRIM(TEXTJOIN("",TRUE,FILTER(MID(T4,SEQUENCE(1,LEN(T4)),1),NOT(ISNUMBER(VALUE(MID(T4,SEQUENCE(1,LEN(T4)),1))))*(MID(T4,SEQUENCE(1,LEN(T4)),1)<>"."),"")))
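For comparison, the same letter extraction sketched in Python (an illustrative aid, not part of the post); it keeps every character that is neither a digit nor the decimal point, then trims, as the formula does:

```python
import re

def extract_letters(s: str) -> str:
    """Keep non-digit, non-period characters, then trim whitespace."""
    return "".join(re.findall(r"[^0-9.]", s)).strip()

print(extract_letters("2.5 TB"))
```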

Excel – Extracting Numbers from a String

I was looking for a way to extract numbers from a string in an Excel cell. Years ago, I created a VBA module with regular expression functions which could do this kind of work. However, using VBA tends to decrease the portability of Excel spreadsheets, depending upon the policies that might be in place. So I have been trying to use standard functions and formulas to improve my spreadsheet portability. I searched the internet for a solution to this problem, and indeed I found several examples of formulas which extracted numbers, but what I eventually found is that they left the decimals behind.

After more searching, I concluded that I was going to have to figure it out and write my own formula. I thought I should be able to use array-based formulas to solve the problem and a little while later I came up with this.

=VALUE(TEXTJOIN("",TRUE,FILTER(MID(A2,SEQUENCE(1,LEN(A2)),1),ISNUMBER(VALUE(MID(A2,SEQUENCE(1,LEN(A2)),1)))+(MID(A2,SEQUENCE(1,LEN(A2)),1)="."),"")))
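For comparison, the same digits-and-decimal extraction sketched in Python (an illustrative aid, not part of the post); the FILTER condition's "digit OR period" test becomes a single character class:

```python
import re

def extract_number(s: str) -> float:
    """Keep digits and the decimal point, as the FILTER formula does."""
    return float("".join(re.findall(r"[0-9.]", s)))

print(extract_number("2.5 TB"))
```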

Enterprise SAN Switch Upgrade

Introduction

In an Enterprise setting, upgrading storage infrastructure is quite different from running updates on your home PC; or at least it should be. While updates expand functionality, simplify interfaces, fix bugs and close vulnerabilities, they can also introduce new bugs and vulnerabilities. Sometimes the new bugs are contingent upon factors which exist in your environment, and you end up encountering the issue the bug creates. In an Enterprise environment where many users, and sometimes customers, rely upon the storage infrastructure, the impact of an issue caused by an upgrade can be broad, affecting business credibility and potentially even carrying legal ramifications. Therefore, having a process to mitigate as many risks as possible is a necessity. The process presented here rests on a general framework, with specific steps related to Cisco and Brocade SAN switch upgrades.

Overview

The process described at a high-level here is a good general framework for any shared infrastructure upgrade in an Enterprise environment.

  1. Planning
    • Document current environment cross section from CMDB and/or direct system inquiry.
    • (Server Hardware Model, OS and Adapter Model/Firmware/Driver as well as SAN Switch Model/Firmware and current Storage Model/Code Level)
    • Ensure the SAN infrastructure is under vendor support so that code may be downloaded and support may be engaged if any problems are encountered.
    • Download and Review Release notes for the top 3 recent code releases.
    • Use vendor interoperability documents or web applications to validate supportability in your environment using this previously gathered information.
    • Choose the target code level. (Often N-1 is preferred over N, the bleeding-edge latest release, unless significant vulnerabilities or incompatibilities with your environment exist.)
  2. Preparation
    • Download the target release installation code and any upgrade test utilities provided by the vendor.
    • Upload the target code and test utility, and run the test utility. (Clean up old diagnostics and install images no longer needed, to provide the necessary space for the new code and upgrade process.)
    • Run initial health checks on the storage systems.
    • Gather connectivity information from SAN and Storage devices and verify connection and path redundancy.
    • Initiate a resolution plan before scheduling the upgrade for any identified issues.
    • Submit change control and obtain approval for upgrade.
  3. Upgrade
    • Rerun the upgrade test utility to verify issues are still resolved.
    • Perform health checks and gather interface status showing pre-upgrade connectivity
    • Clear stats and logs so that all events will be related to the upgrade
    • Run configuration backup, diagnostic snapshot and list logs to a file downloading each to a central configuration repository.
    • Initiate any prerequisite components microcode upgrades (transceiver firmware, etc) and validate completion.
    • Initiate system update and monitor upgrade process
    • Upon completion validate upgrade, perform health checks and gather post-upgrade interface status and validate the dependent systems connectivity.