Admins can either run Pulumi Automation API from a developer machine or integrate it into a CI/CD platform using the Pulumi command-line interface (CLI), which talks to the Pulumi engine. This approach is not uncommon. Infrastructure-as-code tools such as Pulumi and Terraform rely on the CLI to read the infrastructure blueprints, queue deployments and talk to the back-end engine for state maintenance.
But this creates an inherent dependency on CLI tools because admins move away from the comfort of their preferred programming language and use a binary to perform lifecycle actions. This was the case with Pulumi, until it introduced the Pulumi Automation API.
Pulumi Automation API provides all the functionality of the Pulumi CLI, such as the ability to create and update stacks, but it exposes that functionality in the general-purpose languages Pulumi supports, including C#, TypeScript, Python and Go. Machines still require the Pulumi CLI to be installed, but infrastructure developers no longer have to interface with the CLI directly. Instead, they can use Automation API constructs to perform the same changes the CLI drives, in the language they already develop infrastructure in.
Relying on a human to drive these CLI workflows adds an extra layer of dependency to what should be an automated process.
Automation API removes that dependency by exposing the full stack lifecycle, including creating, configuring, previewing, updating and destroying stacks, directly as code.
This tutorial revisits a CI/CD project originally built with the Pulumi CLI.
It refactors that project to use the Automation API instead of the Pulumi CLI. Using Automation API with an existing Pulumi project is referred to as a local program in the Pulumi documentation. This shows what Automation API does and how it integrates with existing projects.
A few modifications have been made to the existing project: It has been switched to the new Azure Native provider and .NET 6, and the test project has been removed from the directory. Because of these modifications, it's recommended that you complete the first tutorial before tackling this one. That tutorial walks you through the full setup process with the Pulumi CLI. Once complete, come back to this tutorial to learn how the Automation API integrates with existing projects and eliminates reliance on the Pulumi CLI.
Automation API documentation uses the terms local program and inline program to describe how one creates or references Pulumi projects inside the Automation API. A local program points the Automation API at an existing Pulumi project directory on disk, while an inline program defines the infrastructure as a function within the same codebase that drives the Automation API.
The goal of using Automation API with an existing Pulumi project is to cut the reliance on the Pulumi CLI in the CI/CD pipeline to perform infrastructure lifecycle changes.
In the sample Azure DevOps repo, there are two branches: main, which holds the original Pulumi CLI-based setup, and feature/Automation, which holds the Automation API refactoring covered here.
To achieve this with Pulumi Automation API, follow the steps below:
To start, add a new .NET console project under the src directory named WebServerStack.Automation. This project contains our own CLI, which replaces direct use of the Pulumi CLI. It is set up using Pulumi's Automation API sample for a local program in .NET as a reference. Walk through the code using the feature/Automation branch in our sample project.
Before authoring automation on top of it, admins must understand the core Automation API constructs, which are C# classes: LocalWorkspace, which manages the execution environment for a Pulumi project; WorkspaceStack, which represents a stack and exposes its lifecycle operations; and ConfigValue, which holds stack configuration values.
The Program class defines the static async Main() method, which takes input arguments.
Perform a check to see whether any arguments were passed. If an argument matches destroy or preview, the console application runs that operation; with no arguments, it updates the stack by default. We also define a string variable to hold the stack name, such as stage.
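Based on Pulumi's local-program sample, the entry point can be sketched roughly as follows; the stack name and argument handling here are assumptions that match the description above:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        // Inspect the first argument; "destroy" and "preview" select those
        // operations, while anything else (or no argument) means update.
        var destroy = args.Any() && args[0] == "destroy";
        var preview = args.Any() && args[0] == "preview";

        // Name of the stack this console app manages.
        var stackName = "stage";

        // ... locate the Pulumi project, select the stack and run the operation.
    }
}
```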
Then, use reflection in the console application to locate the directory the executable runs from. By examining the executing assembly's metadata, the application gets a reliable starting point from which to build a path to the WebServerStack project in question.
Next, create a path pointing to the WebServerStack project. This is an existing Pulumi project created in the above-mentioned prior tutorial, using the Pulumi CLI, and contains the infrastructure definition to deploy a WebServerStack.
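A sketch of the reflection-based path discovery, assuming the console project sits next to the WebServerStack project in the repository (the relative hops depend on your actual layout):

```csharp
using System.IO;
using System.Reflection;

// Find the directory the compiled console app runs from ...
var executingDir = new FileInfo(Assembly.GetExecutingAssembly().Location).DirectoryName!;

// ... then walk up from bin/<configuration>/<framework> to the sibling
// WebServerStack project. The number of ".." segments is an assumption.
var workingDir = Path.GetFullPath(
    Path.Combine(executingDir, "..", "..", "..", "..", "WebServerStack"));
```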
After discovering the path — via reflection — to the directory containing our Pulumi program that defines our infrastructure blueprint, use the Automation API construct named LocalWorkspace to discover the existing Pulumi stack.
The method CreateOrSelectStackAsync() can either create or select an existing stack. In our case, we created a dev stack using the Pulumi CLI originally, but here we will create a new stack called stage with the code below:
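With the working directory in hand, selecting or creating the stage stack looks like this (using the Pulumi.Automation package):

```csharp
using Pulumi.Automation;

// Point the Automation API at the existing Pulumi project on disk
// and create (or select, if it already exists) the "stage" stack.
var stackArgs = new LocalProgramArgs("stage", workingDir);
var stack = await LocalWorkspace.CreateOrSelectStackAsync(stackArgs);
```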
After creating the stack, use the code below to set specific stack configurations in the default location. This is equivalent to using the Pulumi CLI for configuration.
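Setting configuration through the API mirrors pulumi config set; the key and value below are assumptions for an Azure Native project:

```csharp
// Equivalent to: pulumi config set azure-native:location WestEurope
await stack.SetConfigAsync("azure-native:location", new ConfigValue("WestEurope"));
```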
After setting the stack configuration, follow the usual Pulumi lifecycle to make changes to the stack. Start by refreshing the stack. This reads the current state from the Pulumi back end.
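Refreshing the stack through the API mirrors pulumi refresh:

```csharp
// Reads the current state from the Pulumi back end and streams
// the engine's output to the console.
await stack.RefreshAsync(new RefreshOptions { OnStandardOutput = Console.WriteLine });
```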
Based on the arguments specified when running our console application, conditional statements are executed.
If the destroy argument is passed, the condition below is run.
The same happens for a preview argument.
If no argument is specified, then it will update the stack.
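The three branches above can be sketched as follows; each options object streams the engine's output to the console:

```csharp
if (destroy)
{
    // "destroy" argument: tear the stack's resources down.
    await stack.DestroyAsync(new DestroyOptions { OnStandardOutput = Console.WriteLine });
}
else if (preview)
{
    // "preview" argument: show pending changes without applying them.
    await stack.PreviewAsync(new PreviewOptions { OnStandardOutput = Console.WriteLine });
}
else
{
    // No argument: update the stack.
    await stack.UpAsync(new UpOptions { OnStandardOutput = Console.WriteLine });
}
```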
Use the .NET CLI to run the project locally and specify the preview argument to mimic Pulumi CLI.
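From the repository root, the invocation looks something like this; the project path is an assumption about the layout:

```shell
dotnet run --project src/WebServerStack.Automation -- preview
```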
We could run the other operations locally as well, but we will wait for our pipeline to run, which is the next step.
Our existing pipeline setup uses a Pulumi Azure DevOps task, which runs Pulumi CLI behind the scenes to perform changes. The azure-pipelines.yaml file in the main branch holds the references to the Pulumi task. Below is the task that generates the preview.
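For reference, a Pulumi task that generates a preview looks roughly like this; the service connection name, working directory and stack name are assumptions:

```yaml
- task: Pulumi@1
  displayName: Preview infrastructure changes
  inputs:
    azureSubscription: 'azure-service-connection'  # assumed service connection name
    command: 'preview'
    cwd: 'src/WebServerStack'
    stack: 'dev'
```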
However, since we switched to using the Automation API completely, we’ll be refactoring the pipeline to use a new workflow already familiar to software developers.
Figure 3 shows the changes that will follow.
Our pipeline definition has changed slightly for usage with Automation API. The triggers must also change to run the pipeline on the feature/Automation branch.
This pipeline uses two variable groups, which supply the secrets the stages need, such as the Pulumi access token and the Azure credentials.
This stage restores and builds the .NET solution, but for brevity, we skipped adding a test project here. It is the same as the Build stage we performed previously without Automation API. We will not publish artifacts in this tutorial.
Figure 4 shows this in a completed pipeline.
Automation API enables us to perform the preview operation from the .NET console app itself rather than having to branch out to an Azure DevOps extension. The extension runs Pulumi CLI behind the scenes and makes these changes.
The YAML definition for this stage uses a PowerShell task to run the .NET CLI and pass the preview argument to the console application.
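A minimal sketch of the Preview stage's task, with assumed variable names supplied by the variable groups:

```yaml
- task: PowerShell@2
  displayName: Preview infrastructure changes
  inputs:
    targetType: inline
    script: dotnet run --project src/WebServerStack.Automation -- preview
  env:
    PULUMI_ACCESS_TOKEN: $(PulumiAccessToken)   # assumed variable names
    ARM_CLIENT_ID: $(AzureClientId)
    ARM_CLIENT_SECRET: $(AzureClientSecret)
    ARM_TENANT_ID: $(AzureTenantId)
    ARM_SUBSCRIPTION_ID: $(AzureSubscriptionId)
```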
The catch is that we must pass environment variables to the task. These typically include PULUMI_ACCESS_TOKEN for the Pulumi back end, plus the Azure credentials (ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID and ARM_SUBSCRIPTION_ID) so the console application can authenticate against Azure.
The end execution of this stage can be seen in Figure 5.
Finally, let’s revisit our Deploy stage, which will deploy the changes. The YAML definition is similar to the Preview stage, but no argument is passed to the console application. Without any arguments, the application will perform update operations on the stack.
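The Deploy stage's task differs only in dropping the argument, so the console application performs an update (same assumed names and environment variables as the Preview stage):

```yaml
- task: PowerShell@2
  displayName: Deploy infrastructure changes
  inputs:
    targetType: inline
    script: dotnet run --project src/WebServerStack.Automation  # no argument -> update
```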
The result of the above stage looks like Figure 6 in an end-to-end pipeline.
Once these changes are in place, run the pipeline and confirm that the stages succeed. You can now perform the lifecycle operations for our infrastructure without an Azure DevOps extension invoking the Pulumi CLI.