CredStash and Ansible – Hide those CloudFormation Secrets

One of the great features of Infrastructure as Code (IaC) is the ability to keep the entire infrastructure description in a Git repository. You can track any change made to the infrastructure, and even revert the infrastructure to a specific deployment, just like with any other code.
CloudFormation is one of the best IaC examples out there. I use it to describe almost all of the AWS resources I manage. I generate the JSON template using the wonderful Python-based tool named Troposphere. To make the JSON template creation even more flexible, I turn the Troposphere Python files into Ansible templates. I use Jenkins to orchestrate all the templating, JSON file generation, and creation or updating of the CloudFormation stacks. So, the flow is basically as follows:
  1. Create the Troposphere file as an Ansible template and insert Ansible variables or lookups where appropriate.
  2. Generate the Troposphere Python file from the Ansible template using the Ansible templating engine.
  3. Run the Troposphere Python file to get the JSON template (I do it with Ansible as well).
  4. Use the JSON template to create or update a stack on the CloudFormation service (I use Ansible CloudFormation module for this step).
  5. Grab a cup of coffee and watch the infrastructure being formed.
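Steps 2–4 above can be sketched as an Ansible playbook. This is a minimal sketch, not my actual playbook: the file names, paths, and stack name here are hypothetical placeholders, and a real playbook would add variables and error handling.

```yaml
# Hypothetical playbook sketching steps 2-4 of the flow above.
- hosts: localhost
  connection: local
  tasks:
    # Step 2: render the Troposphere Python file from the Ansible template
    - template:
        src: postgres_rds.py.j2
        dest: /tmp/postgres_rds.py

    # Step 3: run the Troposphere file to produce the CloudFormation JSON
    - command: python /tmp/postgres_rds.py
      register: cfn_template

    - copy:
        content: "{{ cfn_template.stdout }}"
        dest: /tmp/postgres_rds.json

    # Step 4: create or update the stack with the cloudformation module
    - cloudformation:
        stack_name: postgres-rds
        state: present
        region: eu-west-1
        template: /tmp/postgres_rds.json
```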
I keep both the Ansible templates and playbooks in a Git repository so I can track and revert changes. This solution served me well many times, until I needed to put passwords in the CloudFormation JSON templates.
One good example of this is an RDS instance declaration, which requires that the master password be set (MasterUserPassword). An Ansible-templated Troposphere code snippet declaring a PostgreSQL RDS instance will have the following form:
PostgreSqlRDS = t.add_resource(DBInstance(
    "PostgreSqlRDS",
    AllocatedStorage=100,
    AllowMajorVersionUpgrade=False,
    AutoMinorVersionUpgrade=True,
    CopyTagsToSnapshot=True,
    BackupRetentionPeriod=14,
    DBInstanceIdentifier="PostgreSqlRDS",
    DBInstanceClass="db.t2.medium",
    DBName="PostgreSqlRDS",
    Engine="postgres",
    EngineVersion="9.6.1",
    DBSubnetGroupName=Ref("{{ VpcName }}RdsSubnetGroup"),
    MasterUsername="postadmin",
    MasterUserPassword="PA$$WORD",
    MultiAZ=True,
    PubliclyAccessible=True,
    StorageType="gp2",
    Tags=Tags(
        ENV="{{ ENV }}",
        ROLE="{{ ROLE }}",
    ),
    VPCSecurityGroups=[
        Ref(PostgreSqlSecurityGroup),
        ImportValue("{{ VpcName }}ManagementSecurityGroupId"),
    ],
))
We can see the usage of the Ansible variables in the code snippet ({{ VpcName }}, {{ ENV }}, {{ ROLE }}), but we can also see that we need to specify a value for the MasterUserPassword key. Committing this snippet to Git with the password in cleartext would be a very bad idea, but not committing the template at all would mean losing the big advantages I described in the beginning. So, to solve this problem, we should encrypt the password string before committing it to Git, and decrypt it just before generating the CloudFormation JSON template.
There are a number of tools for the encryption part, but the one that caught my eye was CredStash. This Python-based tool leverages AWS KMS to encrypt secrets and stores the encrypted secrets, together with their KMS-wrapped data encryption keys, in DynamoDB. Setup is beyond the scope of this blog, but I found it really simple just by following the instructions on the GitHub page. After CredStash and all the necessary AWS resources and IAM permissions are set up, we can encrypt the RDS password and store it in DynamoDB just by executing:
$ credstash put RDSMasterUserPassword 'PA$$WORD'  # single-quote the password so the shell doesn't expand $$
This will store an encrypted value of PA$$WORD for the key RDSMasterUserPassword in DynamoDB.
We can retrieve the decrypted value of the RDSMasterUserPassword key by executing:
$ credstash get RDSMasterUserPassword
Now we have the password stored, safely encrypted, in DynamoDB. But how do we make Ansible use it during the templating step? We just use the Ansible CredStash lookup. This lookup runs the equivalent of “credstash get RDSMasterUserPassword” during the templating step and places the decrypted password in the resulting file. To do this, we set the following value for the MasterUserPassword key (set the region to the AWS region you are using; in this example, I have set it to eu-west-1):
MasterUserPassword="{{ lookup('credstash', 'RDSMasterUserPassword', region='eu-west-1') }}"
The outcome of the Ansible templating step will be:
MasterUserPassword="PA$$WORD"
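Conceptually, the lookup just fetches the secret (by shelling out to the credstash CLI or calling its Python code) and substitutes the result while Jinja2 renders the template. Here is a minimal standalone sketch of that substitution; the `credstash_get` function is a stub I made up so the example runs without AWS access, and the `lookup` wiring only mimics what Ansible's lookup plugin does:

```python
from jinja2 import Environment

def credstash_get(key, region="eu-west-1"):
    # Stub standing in for `credstash -r <region> get <key>`.
    # The real lookup would return the decrypted secret from DynamoDB/KMS.
    return "PA$$WORD"

# Expose a credstash-style lookup() to Jinja2, mimicking Ansible's
# lookup plugin mechanism during the templating step.
env = Environment()
env.globals["lookup"] = lambda plugin, key, region=None: credstash_get(key, region)

line = "MasterUserPassword=\"{{ lookup('credstash', 'RDSMasterUserPassword', region='eu-west-1') }}\""
rendered = env.from_string(line).render()
print(rendered)  # MasterUserPassword="PA$$WORD"
```

The decrypted value only ever exists in the generated file, never in the template that gets committed.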
Following the steps above produces a template file that can be committed to Git without worries. Moreover, we don't even need to specify the encrypted value itself, only the name of the key in DynamoDB (RDSMasterUserPassword in the example), which makes it even more secure.