Friday 12 August 2016

TripleO Deploy Artifacts (and puppet development workflow)



For a while now, TripleO has supported a "DeployArtifacts" interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.

This started out as a way to enable faster iteration on puppet module development (the puppet modules are by default stored inside the images deployed by TripleO, and generally you'll want to do development in a git checkout on the undercloud node), but it's actually a generic interface that can be used for a variety of deployment-time customizations.




Ok, how do I use it?


Let's start with a couple of usage examples, making use of some helper scripts that are maintained in the tripleo-common repo (in future, similar helper interfaces may be added to the TripleO CLI/UI, but right now this is more targeted at developers and advanced operator usage).

First clone the tripleo-common repo (you can skip this step if you're running a packaged version which already contains the following scripts):





[stack@instack ~]$ git clone https://git.openstack.org/openstack/tripleo-common




There are two scripts of interest: first, a generic script that can be used to deploy any kind of file (aka artifact), tripleo-common/scripts/upload-swift-artifacts, and second, a slightly modified version which optimizes the flow for deploying directories containing puppet modules, tripleo-common/scripts/upload-puppet-modules.
To make using these easier, I append the following to my .bashrc:


export PATH="$PATH:/home/stack/tripleo-common/scripts"

 

Example 1 - Deploy Artifacts "Hello World"


So, let's start with a really simple example.  First, let's create a tarball containing a single /tmp/hello file:


[stack@instack ~]$ mkdir tmp
[stack@instack ~]$ echo "hello" > tmp/hello
[stack@instack ~]$ tar -cvzf hello.tgz tmp
tmp/
tmp/hello



Now, we simply run the upload-swift-artifacts script, accepting all the default options and passing only a reference to hello.tgz:


[stack@instack ~]$ upload-swift-artifacts -f hello.tgz
Creating heat environment file: /home/stack/.tripleo/environments/deployment-artifacts.yaml
Uploading file to swift: hello.tgz
hello.tgz
Upload complete.

There are currently only two supported file types:

  • A tarball (will be unpacked from / on all nodes)
  • An RPM file (will be installed on all nodes)
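
For illustration, the per-URL handling might look roughly like the following shell sketch (this is an assumed approximation, not the actual deploy-artifacts script; variable and message names are illustrative):

for url in $DEPLOY_ARTIFACT_URLS; do
  tmpfile=$(mktemp)
  curl -sf -o "$tmpfile" "$url"
  case $(file -b --mime-type "$tmpfile") in
    application/gzip|application/x-gzip)
      tar -xzf "$tmpfile" -C /     # tarballs are unpacked from /
      ;;
    application/x-rpm)
      yum install -y "$tmpfile"    # RPMs are installed
      ;;
    *)
      echo "Unsupported artifact type: $url" >&2
      ;;
  esac
done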

Taking a look inside the environment file the script generated, we can see it's using the DeployArtifactURLs parameter and passing a single URL (the parameter accepts a list of URLs).  This happens to be a Swift temp URL, created by the upload-swift-artifacts script, but it could be any URL accessible to the overcloud nodes at deployment time.

[stack@instack ~]$ cat /home/stack/.tripleo/environments/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_e9bcd2a11af94c319b164eba73c59a28/overcloud/hello.tgz?temp_url_sig=96ae277d85c3ee38dd61234b8c99351e64c8bd45&temp_url_expires=1502273853'
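
For reference, the temp URL generation the script automates looks roughly like this (an assumed sketch using the standard swift CLI; the key and account path shown are placeholders):

swift post -m "Temp-URL-Key:mysecretkey"
swift upload overcloud hello.tgz
swift tempurl GET 86400 /v1/AUTH_<tenant>/overcloud/hello.tgz mysecretkey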


This environment file is automatically generated by the upload-swift-artifacts script and put into the special ~/.tripleo/environments directory.  This directory is read by tripleoclient, and any environment files included here are always included automatically (no need for any -e options), but you can also pass a --environment option to upload-swift-artifacts if you prefer a different output location (e.g. so it can be explicitly included in your overcloud deploy command).
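
For example, to write the environment file somewhere explicit and pass it by hand (the filename here is arbitrary):

upload-swift-artifacts -f hello.tgz --environment ~/deploy-artifacts.yaml
openstack overcloud deploy --templates -e ~/deploy-artifacts.yaml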

To test this example, you simply do an overcloud deployment; no additional arguments are needed if you use the default .tripleo/environments/deployment-artifacts.yaml environment path:

[stack@instack ~]$ openstack overcloud deploy --templates

Then check on one of the nodes for the expected file (note the tarball is unpacked from / in the filesystem):

[root@overcloud-controller-0 ~]# cat /tmp/hello
hello

Note that the deploy artifact files are written to all roles; currently there is no way to deploy, e.g., only to Controller nodes.  We might consider an enhancement that allows role-specific artifact URL parameters in future, should folks require it.

Hopefully, despite the very simple example, you can see that this is a very flexible interface - you can deploy a tarball containing anything, even configuration files such as policy.json files.
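
As a hypothetical example, deploying a custom heat policy.json might look like this (the service path is illustrative; remember the tarball is unpacked from /):

mkdir -p etc/heat
cp my-policy.json etc/heat/policy.json
tar -czf heat-policy.tgz etc
upload-swift-artifacts -f heat-policy.tgz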

Note that you have to be careful though - most service configuration files are managed by puppet, so if you attempt to use the deploy artifacts interface to overwrite puppet-managed files it will not work.  Puppet runs after the deploy artifacts are unpacked (this is deliberate, as you will see in the next example), so you must use puppet hieradata to influence any configuration managed by puppet.  (In the case of policy.json files, there is a puppet module that handles this, but currently TripleO does not use it - this may change in future though.)
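
If you do need to change puppet-managed configuration, a hieradata environment is the way to go - a minimal sketch, assuming the generic ExtraConfig hieradata hook (the key shown is purely illustrative):

parameter_defaults:
  ExtraConfig:
    nova::debug: true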

Example 2 - Puppet development workflow

There is coupling between tripleo-heat-templates and the puppet modules it interfaces with (in particular the puppet profiles that exist in puppet-tripleo, as discussed recently in my composable services tutorial), so a common pattern for a developer is:

  1. Modify some puppet code
  2. Modify tripleo-heat-templates to match the new/modified puppet profile
  3. Deploy an overcloud
  4. *OH NO* it doesn't work!
  5. Debug the issue (hint: "openstack stack failures list overcloud" is a super-useful new heatclient command which helps a lot here, as it surfaces the puppet error in most cases)
  6. Make coffee; goto (1) :)

Traditionally, for TripleO deployments, all puppet modules (including the puppet-tripleo profiles) have been built into the image we deploy (stored in Glance on the undercloud), so one missing step above is getting the modified puppet code into the image.  There are a few options:

  • Rebuild the image every time (this is really slow)
  • Use virt-customize or virt-copy-in to copy some modifications into the image, then update the image in glance (this is faster, but it still means you must redeploy the nodes every time, and it's easy to lose track of what modifications have been made) - a sketch of this flow follows the list below.
  • Use DeployArtifactURLs to update the puppet modules on the fly during the deployment!
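
For reference, the virt-copy-in flow from the second option might look something like this (the image name and client flags are assumptions; check what your tripleoclient version supports):

virt-copy-in -a overcloud-full.qcow2 tripleo /etc/puppet/modules/
openstack overcloud image upload --update-existing
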
The last of these options is actually what prompted the implementation of the DeployArtifacts interface (thanks Dan!), and I'll show how it works below.

First, we clone one or more puppet modules to a local directory.  Note that the name of the repo, e.g. "puppet-tripleo", does not match the name of the deployed directory (on the nodes it's /etc/puppet/modules/tripleo), so you have to clone it to the "tripleo" directory:

mkdir puppet-modules
cd puppet-modules 
git clone https://git.openstack.org/openstack/puppet-tripleo tripleo 

Now you can make whatever edits are needed, or pull code that's under review (or just do nothing, if you want to deploy the latest trunk of a given module).  When you're ready, run the upload-puppet-modules script:

upload-puppet-modules -d puppet-modules

This works a little differently from the previous upload-swift-artifacts script: it takes the directory and creates a tarball using tar's --transform option, so the prefix is rewritten from /somewhere/puppet-modules to /etc/puppet/modules.
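
Roughly, the tar invocation it performs looks like this (an assumed sketch; the flags in the real script may differ):

tar --transform "s|^puppet-modules|etc/puppet/modules|" \
    -czf puppet-modules.tgz puppet-modules

Note the rewritten paths are relative (etc/puppet/modules/...), since the tarball is later unpacked from / on the nodes.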

The process after we create the tarball is exactly the same - we upload it to swift, get a temp URL, and create a heat environment file which references the location of the tarball.  On deployment, the updated puppet modules will be untarred, and this always happens before puppet runs, which makes the debug workflow above much faster.  Nice!

NOTE: There is one gotcha here - by default, upload-puppet-modules creates a differently named environment file ($HOME/.tripleo/environments/puppet-modules-url.yaml) to upload-swift-artifacts, and their content conflicts - if both environment files exist, one will be ignored when they get merged together.  (This is something we can probably improve in future when this heat feature lands, but right now the only options are to stick to one script or the other, or to manually merge the environment files, appending to rather than overwriting the DeployArtifactURLs parameter.)
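
A manually merged environment would look something like this (the URLs are elided/illustrative) - both tarballs stay on the single DeployArtifactURLs list:

parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_<tenant>/overcloud/hello.tgz?temp_url_sig=...'
    - 'http://192.0.2.1:8080/v1/AUTH_<tenant>/overcloud/puppet-modules.tgz?temp_url_sig=...'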




So how does it work?

[Diagram: Deploy Artifacts Overview]

So, it's actually pretty simple, as illustrated in the diagram above:

  • A tarball is created containing the files you want to deploy to the nodes
  • This tarball is uploaded to swift on the undercloud
  • A Swift temp URL is created, so the tarball can be accessed via a signed URL (no credentials are needed on the nodes to access it)
  • A Heat environment passes the Swift temp URL to a nested stack, "deploy-artifacts.yaml", which defines the DeployArtifactURLs parameter (a list)
  • deploy-artifacts.yaml defines a Heat SoftwareConfig resource, which references a shell script that downloads files from a list of URLs, checks the file type, and does something appropriate (e.g. in the case of a tarball, untars it!)
  • The deploy-artifacts SoftwareConfig is deployed inside the per-role "PostDeploy" template, which is where we perform the puppet steps (5 deployment passes which apply puppet in a series of steps). 
  • We use the heat depends_on directive to ensure that the DeployArtifacts deployment (ControllerArtifactsDeploy in the case of the Controller role) always runs before any of the puppet steps.
  • This pattern is replicated for all roles (not just the Controller as in the diagram above)
As you can see, there are a few steps to the process, but it's pretty simple, and it leverages the exact same Heat SoftwareDeployment patterns we use throughout TripleO to deploy scripts (and apply puppet manifests, etc.).
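
To make the wiring concrete, here is a much-simplified sketch of the per-role template structure (illustrative only, not the actual tripleo-heat-templates source; resource and file names are assumptions):

resources:
  ControllerArtifactsConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: {get_file: ./scripts/deploy-artifacts.sh}

  ControllerArtifactsDeploy:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ControllerArtifactsConfig}

  ControllerDeploymentStep1:
    type: OS::Heat::StructuredDeployments
    # depends_on guarantees the artifacts land before any puppet step runs
    depends_on: ControllerArtifactsDeploy
    properties:
      servers: {get_param: servers}
      config: {get_resource: ControllerPuppetConfig}  # puppet config elided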
