Thursday, 11 May 2017

OpenStack Summit - TripleO Project Onboarding

We've been having a productive week here in Boston at the OpenStack Summit, and one of the sessions I was involved in was a TripleO project Onboarding session.

The project onboarding sessions are a new idea for this summit, providing the opportunity for new or potential contributors (and/or users/operators) to talk with the existing project developers, get tips on how to get started, ask questions, and discuss ideas and issues.

The TripleO session went well, and I'm very happy to report it was well attended and we had some good discussions.  The session was informal with an emphasis on questions and some live demos/examples, but we did also use a few slides which provide an overview and some context for those new to the project.

Here are the slides used (also on my github). Unfortunately I can't share the Q+A aspects of the session as it wasn't recorded, but I hope the slides will prove useful - we can be found in #tripleo on Freenode if anyone has questions about the slides or getting started with TripleO in general.

Friday, 3 March 2017

Developing Mistral workflows for TripleO

During the newton/ocata development cycles, TripleO changed its architecture to make use of Mistral (the OpenStack workflow API project) to drive the workflows required to deploy your OpenStack cloud.

Prior to this change we had workflows defined inside python-tripleoclient, and most API calls were made directly to Heat.  This worked OK, but there was too much "business logic" inside the client, which doesn't work well if non-python clients (such as tripleo-ui) want to interact with TripleO.

To solve this problem, a number of Mistral workflows and custom actions have been implemented, which are available via the Mistral API on the undercloud.  This can now be considered the primary "TripleO API" for driving all deployment tasks.
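
You can get a feel for the scope of this interface by listing the TripleO specific workflows and actions registered on the undercloud, e.g:

$ mistral workflow-list | grep tripleo
$ mistral action-list | grep tripleo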

Here's a diagram showing how it fits together:

Overview of Mistral integration in TripleO


Mistral workflows and actions

There are two primary interfaces to Mistral: workflows, which are a yaml definition of a process or series of tasks, and actions, which are a concrete definition of how to do a specific task (such as calling some OpenStack API).

Workflows and actions can be defined directly via the Mistral API, or via a wrapper called a workbook.  Mistral actions are also defined via a python plugin interface, which TripleO uses to run some tasks, such as running jinja2 on tripleo-heat-templates prior to calling Heat to orchestrate the deployment.
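
For example, a minimal workbook wrapping a single workflow looks something like this (an illustrative sketch, not one of the real TripleO workbooks) - loading it via workbook-create registers the contained workflow as tripleo.example.v1.do_something, the workbook name providing the namespace:

---
version: '2.0'
name: tripleo.example.v1

workflows:
  do_something:
    type: direct
    tasks:
      say_hello:
        action: std.echo output="hello from a workbook"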

Mistral workflows, in detail

Here I'm going to show how to view and interact with the mistral workflows used by TripleO directly, which is useful to understand what TripleO is doing "under the hood" during a deployment, and also for debugging/development.

First we view the mistral workbooks loaded into Mistral - these contain the TripleO-specific workflows and are defined in tripleo-common:


[stack@undercloud ~]$ . stackrc 
[stack@undercloud ~]$ mistral workbook-list
+----------------------------+--------+---------------------+------------+
| Name                       | Tags   | Created at          | Updated at |
+----------------------------+--------+---------------------+------------+
| tripleo.deployment.v1      | <none> | 2017-02-27 17:59:04 | None       |
| tripleo.package_update.v1  | <none> | 2017-02-27 17:59:06 | None       |
| tripleo.plan_management.v1 | <none> | 2017-02-27 17:59:09 | None       |
| tripleo.scale.v1           | <none> | 2017-02-27 17:59:11 | None       |
| tripleo.stack.v1           | <none> | 2017-02-27 17:59:13 | None       |
| tripleo.validations.v1     | <none> | 2017-02-27 17:59:15 | None       |
| tripleo.baremetal.v1       | <none> | 2017-02-28 19:26:33 | None       |
+----------------------------+--------+---------------------+------------+

The name of the workbook constitutes a namespace for the workflows it contains, so we can view the related workflows using grep (I also grep for tag_node to reduce the number of matches).


[stack@undercloud ~]$ mistral workflow-list | grep "tripleo.baremetal.v1" | grep tag_node
| 75d2566c-13d9-4aa3-b18d-8e8fc0dd2119 | tripleo.baremetal.v1.tag_nodes                            | 660c5ec71ce043c1a43d3529e7065a9d | <none> | tag_node_uuids, untag_nod... | 2017-02-28 19:26:33 | None       |
| 7a4220cc-f323-44a4-bb0b-5824377af249 | tripleo.baremetal.v1.tag_node                             | 660c5ec71ce043c1a43d3529e7065a9d | <none> | node_uuid, role=None, que... | 2017-02-28 19:26:33 | None       |  

When you know the name of a workflow, you can inspect the required inputs and run it directly via a mistral execution.  In this case we're running the tripleo.baremetal.v1.tag_node workflow, which modifies the profile assigned in the ironic node capabilities (see tripleo-docs for more information about manual tagging of nodes):


[stack@undercloud ~]$ mistral workflow-get tripleo.baremetal.v1.tag_node
+------------+------------------------------------------+
| Field      | Value                                    |
+------------+------------------------------------------+
| ID         | 7a4220cc-f323-44a4-bb0b-5824377af249     |
| Name       | tripleo.baremetal.v1.tag_node            |
| Project ID | 660c5ec71ce043c1a43d3529e7065a9d         |
| Tags       | <none>                                   |
| Input      | node_uuid, role=None, queue_name=tripleo |
| Created at | 2017-02-28 19:26:33                      |
| Updated at | None                                     |
+------------+------------------------------------------+
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| 30182cb9-eba9-4335-b6b4-d74fe2581102 | control-0 | None          | power off   | available          | False       |
| 19fd7ea7-b4a0-4ae9-a06a-2f3d44f739e9 | compute-0 | None          | power off   | available          | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
[stack@undercloud ~]$ mistral execution-create tripleo.baremetal.v1.tag_node '{"node_uuid": "30182cb9-eba9-4335-b6b4-d74fe2581102", "role": "test"}'
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| ID                | 6a141065-ad6e-4477-b1a8-c178e6fcadcb |
| Workflow ID       | 7a4220cc-f323-44a4-bb0b-5824377af249 |
| Workflow name     | tripleo.baremetal.v1.tag_node        |
| Description       |                                      |
| Task Execution ID | <none>                               |
| State             | RUNNING                              |
| State info        | None                                 |
| Created at        | 2017-03-03 09:53:10                  |
| Updated at        | 2017-03-03 09:53:10                  |
+-------------------+--------------------------------------+

At this point the mistral workflow is running, and it'll either succeed or fail, and also create some output (which in the TripleO model is sometimes returned to the UI via a Zaqar queue).
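
For example, the tag_node workflow takes the queue_name input we saw above, and the tripleo-common workflows typically post their result with a final task along these lines (a simplified sketch of the pattern, not the exact workflow definition):

    send_message:
      action: zaqar.queue_post
      input:
        queue_name: <% $.queue_name %>
        messages:
          body:
            type: tripleo.baremetal.v1.tag_node
            payload:
              status: SUCCESS
              execution: <% execution() %>

We can view the status, and the outputs (truncated for brevity):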


[stack@undercloud ~]$ mistral execution-list | grep  6a141065-ad6e-4477-b1a8-c178e6fcadcb
| 6a141065-ad6e-4477-b1a8-c178e6fcadcb | 7a4220cc-f323-44a4-bb0b-5824377af249 | tripleo.baremetal.v1.tag_node                           |                        | <none>                               | SUCCESS | None       | 2017-03-03 09:53:10 | 2017-03-03 09:53:11 |
[stack@undercloud ~]$ mistral execution-get-output 6a141065-ad6e-4477-b1a8-c178e6fcadcb
{
    "status": "SUCCESS", 
    "message": {
...

So that's it - we ran a mistral workflow, it succeeded, and we looked at the output.  Now we can see the result by looking at the node in Ironic - it worked! :)


[stack@undercloud ~]$ ironic node-show 30182cb9-eba9-4335-b6b4-d74fe2581102 | grep profile
|                        | u'cpus': u'2', u'capabilities': u'profile:test,cpu_hugepages:true,boot_o |

Mistral workflows, create your own!

Here I'll show how to develop your own custom workflows (which isn't something we expect operators to necessarily do, but is now part of many developers' workflow during feature development for TripleO).

First, we create a simple yaml definition of the workflow, as defined in the v2 Mistral DSL - this example lists all available ironic nodes, then finds those which match the "test" profile we assigned in the example above:
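
Here's a minimal sketch of what that shtest.yaml can look like (I'm assuming dict-style results from the ironic action, and hardcoding the 'profile:test' capability for brevity - the exact YAQL expressions may vary):

---
version: '2.0'

test_nodes_with_profile:
  type: direct
  input:
    - profile: test
  output:
    available_nodes: <% $.available_nodes %>
    matching_nodes: <% $.matching_nodes %>
  tasks:
    get_nodes:
      action: ironic.node_list
      input:
        detail: true
      publish:
        # All nodes ironic knows about which are in the "available" state
        available_nodes: <% task(get_nodes).result.where($.provision_state = 'available').select($.uuid) %>
        # Nodes whose capabilities include the profile we tagged earlier
        # ('profile:test' is hardcoded here for brevity)
        matching_nodes: <% task(get_nodes).result.where($.properties.get('capabilities', '').split(',').contains('profile:test')).select($.uuid) %>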


This example uses the mistral built-in "ironic" action, which is basically a pass-through action exposing the python-ironicclient interfaces.  Similar actions exist for the majority of OpenStack python clients, so this is a pretty flexible interface.
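
You can also run such actions directly, without any workflow, which is handy for experimenting - for example (assuming a reasonably recent python-mistralclient):

$ mistral run-action ironic.node_list '{"detail": true}'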

Now we can upload the workflow (not wrapped in a workbook this time, so we use workflow-create), run it via execution-create, then look at the outputs - we can see that the matching_nodes output matches the ID of the node we tagged in the example above - success! :)

[stack@undercloud tripleo-common]$ mistral workflow-create shtest.yaml 
+--------------------------------------+-------------------------+----------------------------------+--------+--------------+---------------------+------------+
| ID                                   | Name                    | Project ID                       | Tags   | Input        | Created at          | Updated at |
+--------------------------------------+-------------------------+----------------------------------+--------+--------------+---------------------+------------+
| 2b8f2bea-f3dd-42f0-ad16-79987c75df4d | test_nodes_with_profile | 660c5ec71ce043c1a43d3529e7065a9d | <none> | profile=test | 2017-03-03 10:18:48 | None       |
+--------------------------------------+-------------------------+----------------------------------+--------+--------------+---------------------+------------+
[stack@undercloud tripleo-common]$ mistral execution-create test_nodes_with_profile
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| ID                | 2392ed1c-96b4-4787-9d11-0f3069e9a7e5 |
| Workflow ID       | 2b8f2bea-f3dd-42f0-ad16-79987c75df4d |
| Workflow name     | test_nodes_with_profile              |
| Description       |                                      |
| Task Execution ID | <none>                               |
| State             | RUNNING                              |
| State info        | None                                 |
| Created at        | 2017-03-03 10:19:30                  |
| Updated at        | 2017-03-03 10:19:30                  |
+-------------------+--------------------------------------+
[stack@undercloud tripleo-common]$ mistral execution-list | grep  2392ed1c-96b4-4787-9d11-0f3069e9a7e5
| 2392ed1c-96b4-4787-9d11-0f3069e9a7e5 | 2b8f2bea-f3dd-42f0-ad16-79987c75df4d | test_nodes_with_profile                                 |                        | <none>                               | SUCCESS | None       | 2017-03-03 10:19:30 | 2017-03-03 10:19:31 |
[stack@undercloud tripleo-common]$ mistral execution-get-output 2392ed1c-96b4-4787-9d11-0f3069e9a7e5
{
    "matching_nodes": [
        "30182cb9-eba9-4335-b6b4-d74fe2581102"
    ], 
    "available_nodes": [
        "30182cb9-eba9-4335-b6b4-d74fe2581102", 
        "19fd7ea7-b4a0-4ae9-a06a-2f3d44f739e9"
    ]
}

Using this basic example, you can see how to develop workflows which can then easily be copied into the tripleo-common workbooks, and integrated into the TripleO deployment workflow.
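
One tip when iterating on a workflow like this: you can update the stored definition in place with workflow-update, rather than deleting and re-creating it:

$ mistral workflow-update shtest.yaml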

In a future post, I'll dig into the use of custom actions, and how to develop/debug those.

Monday, 10 October 2016

TripleO composable/custom roles

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the new-for-Newton composable services model.

The final piece of the composability model we've been developing this cycle is the ability to deploy user-defined custom roles, in addition to (or even instead of) the built in TripleO roles (where a role is a group of servers, e.g "Controller", which runs some combination of services).

What follows is an overview of this new functionality, the primary interfaces, and some usage examples and a summary of future planned work.


Thursday, 1 September 2016

Complex data transformations with nested Heat intrinsic functions

Disclaimer: what follows is either pretty neat or pure evil, depending on your viewpoint ;)  But it's based on a real use-case and it works, so I'm posting this to document the approach, why it's needed, and hopefully to stimulate some discussion around optimizations leading to an improved/simplified implementation in the future.


Friday, 12 August 2016

TripleO Deploy Artifacts (and puppet development workflow)



For a while now, TripleO has supported a "DeployArtifacts" interface, aimed at making it easier to deploy modified/additional files on your overcloud, without the overhead of frequently rebuilding images.

This started out as a way to enable faster iteration on puppet module development (the puppet modules are by default stored inside the images deployed by TripleO, and generally you'll want to do development in a git checkout on the undercloud node), but it is actually a generic interface that can be used for a variety of deployment time customizations.
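
Under the hood this is driven by a Heat parameter containing a list of URLs to fetch and deploy on the nodes, so a minimal environment file might look like this (a sketch - the swift URL here is hypothetical, and recent tripleo-common includes an upload-swift-artifacts helper script which builds such an environment file for you):

parameter_defaults:
  DeployArtifactURLs:
    - "http://192.0.2.1:8080/v1/AUTH_test/artifacts/puppet-modules.tar.gz"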


Friday, 5 August 2016

TripleO Composable Services 101

Over the newton cycle, we've been working very hard on a major refactor of our heat templates and puppet manifests, such that a much more granular and flexible "Composable Services" pattern is followed throughout our implementation.

It's been a lot of work, but it's been a frequently requested feature for some time, so I'm excited to be in a position to say it's complete for Newton (kudos to everyone involved in making that happen!) :)

This post aims to provide an introduction to this work, an overview of how it works under the hood, some simple usage examples and a roadmap for some related follow-on work.


Thursday, 9 June 2016

TripleO partial stack updates

Recently I was asked if it's possible to do a partial update of a TripleO overcloud - the answer is yes, so I thought I'd write a post showing how to do it.  Much of what follows is basically an update on my old post on nested resource introspection (some interfaces have changed a bit since I wrote that), combined with an introduction to heat PATCH updates.

Partial update?!  Why?

So, the first question is why would you do this - TripleO heat templates are designed to enforce a consistent state for your entire OpenStack deployment, so in most cases you really should update the entire overcloud, and not mess with the underlying nested stacks directly.

However, for some development usage, this creates a long feedback loop - you change something (perhaps one line in a puppet manifest or heat template), then have to wait several minutes for Heat to walk the entire tree of nested stacks, puppet to run all steps on all nodes, etc.

So, while you would probably never do this in production (seriously, please don't!), it can be a useful technique for developers seeking a quicker hack-then-test cycle, and also when attempting to isolate root-causes for some subset of overcloud stack update behavior.

Ok, with that disclaimer clearly stated, here's how you do it:

Step 1 - Find the nested stack to update


Let's take a specific example - I want to update only the ControllerNodesPostDeployment resource, which is defined in overcloud.yaml - this is a resource that maps to a nested stack that uses the cluster configuration interfaces I described in this previous post to apply puppet in a series of steps to all controller nodes.

Here's our overcloud (some CLI output removed for brevity):

$ heat stack-list
| 01c51e7e-ad2f-41d3-b056-3c4c84395114 | overcloud  | CREATE_COMPLETE | 2016-06-08T18:07:00 | None |

Here's the ControllerNodesPostDeployment resource:


$ heat resource-list overcloud | grep ControllerNodesPost
| ControllerNodesPostDeployment | e67fff24-8089-4cf8-adf4-9c6064bf01d6 | OS::TripleO::ControllerPostDeployment | CREATE_COMPLETE | 2016-06-08T18:07:00 |

e67fff24-8089-4cf8-adf4-9c6064bf01d6 is the resource ID of ControllerNodesPostDeployment, which is a nested stack - you can confirm this via:

$ heat stack-list -n | grep "^| e67fff24-8089-4cf8-adf4-9c6064bf01d6"
| e67fff24-8089-4cf8-adf4-9c6064bf01d6 | overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | UPDATE_COMPLETE | 2016-06-08T18:10:34 | 2016-06-09T08:52:45 | 01c51e7e-ad2f-41d3-b056-3c4c84395114 |

Note here the first column is the stack ID, and the last is the parent stack ID (e.g. "overcloud" above).

overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 is the name of the stack that implements ControllerNodesPostDeployment - we can refer to it by either that name or the ID (e67fff24-8089-4cf8-adf4-9c6064bf01d6).

Step 2 - Basic update of the stack

Heat supports PATCH updates, so it is possible to trigger a no-op update without passing any template or parameters (the existing data will be used), or to patch in some specific modification.

Here's how it works: we simply use either the name or ID we discovered above with heat stack-update (or the new openstack client equivalent commands).
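
For example, with recent clients a no-op PATCH update of the nested stack can be triggered via the --existing flag, which reuses the existing template and parameters:

$ openstack stack update --existing overcloud-ControllerNodesPostDeployment-smy5ygz2lc26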

First, however, we want to get the last event ID before triggering the update (or, on recent heatclient versions you can instead use openstack stack event list --follow):

$ heat event-list overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | tac | head -n2
+------------------------------------------------------+--------------------------------------+-------------------------------------+--------------------+---------------------+
| overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | 89e535ef-d414-4121-b726-9924eccb4fc3 | Stack UPDATE completed successfully | UPDATE_COMPLETE    | 2016-06-09T09:09:09 |


So the last event logged by this nested stack has the ID of 89e535ef-d414-4121-b726-9924eccb4fc3 - we can use this as a marker so we hide all previous events for the stack:

$ heat event-list -m 89e535ef-d414-4121-b726-9924eccb4fc3 overcloud-ControllerNodesPostDeployment-smy5ygz2lc26
+----+------------------------+-----------------+------------+
| id | resource_status_reason | resource_status | event_time |
+----+------------------------+-----------------+------------+
+----+------------------------+-----------------+------------+

Now, we can trigger the update, and use the marker event-list to follow progress:

heat stack-update -x overcloud-ControllerNodesPostDeployment-smy5ygz2lc26

<wait a short time>

$ heat event-list -m 89e535ef-d414-4121-b726-9924eccb4fc3 overcloud-ControllerNodesPostDeployment-smy5ygz2lc26
| resource_name | id | resource_status_reason | resource_status | event_time |
| overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | 2e08a022-ce0a-4e57-bf30-719fea6cbb74 | Stack UPDATE started | UPDATE_IN_PROGRESS | 2016-06-09T10:00:52 |
| ControllerArtifactsConfig | a55f9b17-f26c-4664-9ea5-535949c368e8 | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:00 |
| ControllerPuppetConfig | 21679c7f-c354-4319-9688-7fa290168664 | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:00 |
| ControllerPuppetConfig | f5761452-91dd-45dc-92e8-a5c371fa5004 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:02 |
| ControllerArtifactsConfig | 01abec3c-f472-4ec2-893d-0fddb8fc1696 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:02 |
| ControllerArtifactsDeploy | f8f7a21f-9169-4f8c-ab46-46ecbb141be8 | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:02 |
| ControllerArtifactsDeploy | 75937a57-e2f0-4d66-9b4c-2308593e56b1 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:04 |
| ControllerLoadBalancerDeployment_Step1 | 6058e29c-cded-4ad3-94d9-65909fd4911d | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:04 |
| ControllerLoadBalancerDeployment_Step1 | c9f93f1f-177c-4721-827f-a7d409b2cd50 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:06 |
| ControllerServicesBaseDeployment_Step2 | 92409e4c-24f2-4e68-bad9-47ce09107d7a | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:06 |
| ControllerServicesBaseDeployment_Step2 | a9203aa1-c438-47c0-977b-8e34669777bc | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:08 |
| ControllerOvercloudServicesDeployment_Step3 | aa7d78dc-d243-4d54-8ea6-3b59a6ed302a | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:08 |
| ControllerOvercloudServicesDeployment_Step3 | 4a1a6885-29d7-4708-a884-01f481ac1b35 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:10 |
| ControllerOvercloudServicesDeployment_Step4 | 7afd52c1-cbbc-431a-a22c-dd7459ed2255 | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:10 |
| ControllerOvercloudServicesDeployment_Step4 | 0dac2e72-0919-4e91-ac94-100d8d811c67 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:13 |
| ControllerOvercloudServicesDeployment_Step5 | ec57867f-e401-4756-bd30-0a566eced343 | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:13 |
| ControllerOvercloudServicesDeployment_Step5 | 427582fb-acd1-4939-a13c-7b3cbbc7527b | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:15 |
| ExtraConfig | 760fd961-fff6-4f4c-848e-80773e09e04b | state changed | UPDATE_IN_PROGRESS | 2016-06-09T10:01:15 |
| ExtraConfig | caee58b6-01bb-4805-b41f-4c48a8c7d767 | state changed | UPDATE_COMPLETE | 2016-06-09T10:01:16 |
| overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | 35f527a5-0761-46bb-aecb-6eee0e0f083e | Stack UPDATE completed successfully | UPDATE_COMPLETE | 2016-06-09T10:01:25 |


So, we can see that we triggered an update on the nested stack, and it ran to completion in around 30 seconds (much less time than updating the entire overcloud).

Step 3 - Update of the stack with modifications

So, those paying attention may have noticed that 30 seconds is too fast for puppet to run on all the controller nodes, and it is - the reason being that we did a no-op update, so Heat detects that no inputs have changed and thus doesn't cause puppet to re-run.

To work around this, and enable puppet to re-assert state on every overcloud update, we have an identifier in the nested stack that is normally updated to a value that changes on every update (it includes a timestamp when updates are triggered via python-tripleoclient, vs heatclient directly).

We can emulate this behavior in our patch update, and force puppet to re-run through all the deployment steps - let's first look at the NodeConfigIdentifiers parameter value:


$ heat stack-show overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 | grep NodeConfigIdentifiers
"NodeConfigIdentifiers": "{u'deployment_identifier': u'1465409217', u'controller_config': {u'0': u'os-apply-config deployment bb67a1d5-f0a5-48ec-9883-1f2ae578a8bd completed,Root CA cert injection not enabled.,TLS not enabled.,None,'}, u'allnodes_extra': u'none'}"

Here we can see various data, including a deployment_identifier, which is the timestamp-derived unique identifier normally passed via python-tripleoclient.

We could update just that field, but the content of this mapping isn't important, only that it changes (this data is not currently consumed by puppet on update; it's just used to trigger the SoftwareDeployment to re-apply the config due to an input value changing).
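
To see why this works, the identifier is wired into each deployment in the nested stack roughly like this (a simplified sketch of the pattern used in tripleo-heat-templates, not the exact template):

  ControllerOvercloudServicesDeployment_Step3:
    type: OS::Heat::StructuredDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ControllerPuppetConfig}
      input_values:
        step: 3
        update_identifier: {get_param: NodeConfigIdentifiers}

Because update_identifier is an input value to the deployment, changing NodeConfigIdentifiers changes the deployment's inputs, which is what forces the config to be re-applied.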

So we can create an environment file that looks like this (note this must use parameters, not parameter_defaults, so that it overrides the value passed from the parent stack).  Any value can be used, but you must change it on each update if you want the SoftwareDeployment resources to be re-applied to the nodes:

$ cat update_env.yaml
parameters:
  NodeConfigIdentifiers: 123

Then we can trigger another PATCH update including this data:

heat stack-update -x overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 -e update_env.yaml

This time I'm using the new openstack stack event list --follow approach to monitor progress (if you don't have this, you can repeat the marker event-list approach described above):


$ openstack stack event list --follow
2016-06-09 08:52:46 [overcloud-ControllerNodesPostDeployment-smy5ygz2lc26]: UPDATE_IN_PROGRESS  Stack UPDATE started
2016-06-09 08:52:54 [ControllerPuppetConfig]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:52:54 [ControllerArtifactsConfig]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:52:56 [ControllerPuppetConfig]: UPDATE_COMPLETE  state changed
2016-06-09 08:52:56 [ControllerArtifactsConfig]: UPDATE_COMPLETE  state changed
2016-06-09 08:52:56 [ControllerArtifactsDeploy]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:52:58 [ControllerArtifactsDeploy]: UPDATE_COMPLETE  state changed
2016-06-09 08:52:58 [ControllerLoadBalancerDeployment_Step1]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:53:32 [ControllerLoadBalancerDeployment_Step1]: UPDATE_COMPLETE  state changed
2016-06-09 08:53:32 [ControllerServicesBaseDeployment_Step2]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:54:00 [ControllerServicesBaseDeployment_Step2]: UPDATE_COMPLETE  state changed
2016-06-09 08:54:00 [ControllerOvercloudServicesDeployment_Step3]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:54:57 [ControllerOvercloudServicesDeployment_Step3]: UPDATE_COMPLETE  state changed
2016-06-09 08:54:57 [ControllerOvercloudServicesDeployment_Step4]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:56:14 [ControllerOvercloudServicesDeployment_Step4]: UPDATE_COMPLETE  state changed
2016-06-09 08:56:14 [ControllerOvercloudServicesDeployment_Step5]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:57:16 [ControllerOvercloudServicesDeployment_Step5]: UPDATE_COMPLETE  state changed
2016-06-09 08:57:16 [ExtraConfig]: UPDATE_IN_PROGRESS  state changed
2016-06-09 08:57:17 [ExtraConfig]: UPDATE_COMPLETE  state changed
2016-06-09 08:57:26 [overcloud-ControllerNodesPostDeployment-smy5ygz2lc26]: UPDATE_COMPLETE  Stack UPDATE completed successfully

So, here we can see the update of the stack took a little longer (around 5 minutes in my environment), and if you were to check the os-collect-config logs on each controller node, you would see puppet re-applying on each node, for every step defined in the template.
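
Also note that if you repeat this cycle, the identifier must change on every run.  One convenient way is to generate the environment file from a timestamp, mimicking what python-tripleoclient does (any unique value works):

$ cat > update_env.yaml <<EOF
parameters:
  NodeConfigIdentifiers: $(date +%s)
EOF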

This approach can be extended if you want to, e.g., test changes to the stack template (or files it references, such as puppet manifests or scripts) - you would do something like:

$ cp -r /usr/share/openstack-tripleo-heat-templates .
$ cd openstack-tripleo-heat-templates/
$ heat stack-update -x overcloud-ControllerNodesPostDeployment-smy5ygz2lc26 -e update_env.yaml -f puppet/controller-post.yaml

Note that if you want to do a final update of the entire overcloud, you would need to point to this copied tree (assuming you want to maintain any changes), e.g.:

$ openstack overcloud deploy --templates /path/to/copy/openstack-tripleo-heat-templates