
· 5 min read

stackql-deploy is a multi-cloud resource provisioning framework using stackql. It is inspired by dbt (data build tool), which manages data transformation workflows in analytics engineering by treating SQL scripts as models that can be built, tested, and materialized incrementally. With StackQL, you can create a similar framework for cloud and SaaS provisioning. The goal is to treat infrastructure-as-code (IaC) queries as models that can be deployed, managed, and interconnected.

This ELT/model-based framework for IaC allows you to provision, test, update, and tear down multi-cloud stacks, similar to how dbt manages data transformation projects, with the benefits of version control, peer review, and automation. This approach enables you to deploy complex, dependent infrastructure components in a reliable and repeatable manner.

Features

StackQL simplifies the interaction with cloud resources by using SQL-like syntax, making it easier to define and execute complex cloud management operations. Resources are provisioned with INSERT statements, and tests are structured around SELECT statements.
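
For example, a resource can be created with an INSERT and its desired state verified with a SELECT (a minimal sketch using the Azure resource group resource covered in detail later in this post; the literal values are placeholders):

-- provision the resource
INSERT INTO azure.resources.resource_groups(
   resourceGroupName,
   subscriptionId,
   data__location
)
SELECT 'my-resource-group', '<subscription-id>', 'eastus';

-- test the desired state
SELECT COUNT(*) as count FROM azure.resources.resource_groups
WHERE resourceGroupName = 'my-resource-group'
AND subscriptionId = '<subscription-id>';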

Features include:

  • Dynamic state determination (eliminating the need for state files)
  • Pre-flight and post-deploy assurance tests for resources
  • Simple flow control with rollback capabilities
  • Single code base for multiple target environments
  • SQL-based definitions for resources and tests

Installing stackql-deploy

To get started with stackql-deploy, run the following:

pip install stackql-deploy

stackql-deploy will automatically download the latest release of stackql using the pystackql Python package. You can then use the info command to get runtime information:

$ stackql-deploy info
stackql-deploy version: 1.1.0
pystackql version     : 3.6.1
stackql version       : v0.5.612
stackql binary path   : /home/javen/.local/stackql
platform              : Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12

Project structure

A stackql-deploy project is a directory with declarative SQL definitions to provision, de-provision, or test resources in a stack. The key components and their definitions are listed here:

  • stackql_manifest.yml : The manifest file for your project, defining resources and properties in your stack.
  • stackql_resources directory : Contains StackQL queries to provision and de-provision resources in your stack.
  • stackql_tests directory : Contains StackQL queries to test the desired state for resources in your stack.

Getting started

Use the init command to create a starter project directory:

stackql-deploy init activity_monitor

You will now have a directory named activity_monitor with stackql_resources and stackql_tests directories and a sample stackql_manifest.yml file, which will help you to get started.
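
The resulting project layout looks like this (a sketch; the exact sample files created by init may vary between versions, and the monitor_resource_group.iql files shown correspond to the example resource defined later in this post):

activity_monitor/
├── stackql_manifest.yml          # defines the resources and properties in your stack
├── stackql_resources/            # queries to provision and de-provision resources
│   └── monitor_resource_group.iql
└── stackql_tests/                # queries to test the desired state of resources
    └── monitor_resource_group.iql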

Usage

The general syntax for stackql-deploy is described here:

stackql-deploy [OPTIONS] COMMAND [ARGS]...

Commands include:

  • build: Create or update resources based on the defined stack.
  • teardown: Remove or decommission resources that were previously deployed.
  • test: Execute test queries to verify the current state of resources against the expected state.
  • info: Display the version information of the stackql-deploy tool and current configuration settings.
  • init: Initialize a new project structure for StackQL deployments.

Optional global options (for all commands) include:

  • --custom-registry TEXT: Specify a custom registry URL for StackQL. This URL will be used by all commands for registry interactions.
  • --download-dir TEXT: Define a download directory for StackQL where all files will be stored.
  • --help: Show the help message and exit.

Options for build, test, and teardown include:

  • --on-failure [rollback|ignore|error]: Define the action to be taken if the operation fails. Options include rollback, ignore, or treat as an error.
  • --dry-run: Perform a simulation of the operation without making any actual changes.
  • -e <TEXT TEXT>...: Specify additional environment variables in key-value pairs.
  • --env-file TEXT: Path to a file containing environment variables (see the example following this list).
  • --log-level [DEBUG|INFO|WARNING|ERROR|CRITICAL]: Set the logging level to control the verbosity of logs during execution.
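
For example, rather than passing variables with -e as shown in the commands later in this post, they could be supplied from a file (a sketch, assuming a dotenv-style KEY=VALUE file; the file name is hypothetical):

# prd.env (hypothetical file name)
AZURE_SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000000

stackql-deploy build activity_monitor prd --env-file prd.env --dry-run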

Example

Using the activity_monitor project scaffolded earlier with the init command, we can now define the stack and its associated queries; here is the manifest file:

version: 1
name: activity_monitor
description: oss activity monitor stack
providers:
  - azure
globals:
  - name: subscription_id
    description: azure subscription id
    value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
  - name: location
    value: eastus
  - name: resource_group_name_base
    value: "activity-monitor"
resources:
  - name: monitor_resource_group
    description: azure resource group for activity monitor
    props:
      - name: resource_group_name
        description: azure resource group name
        value: "{{ globals.resource_group_name_base }}-{{ globals.stack_env }}"
# more resources would go here...

globals.stack_env is a variable referencing the user-specified environment label. For example, deploying with prd as the environment label renders resource_group_name as activity-monitor-prd.

Our stackql_resources directory must contain a .iql file (StackQL query file) with the same name as each resource defined in the resources key in the manifest file. Here is an example for stackql_resources/monitor_resource_group.iql:

/*+ createorupdate */
INSERT INTO azure.resources.resource_groups(
   resourceGroupName,
   subscriptionId,
   data__location
)
SELECT
   '{{ resource_group_name }}',
   '{{ subscription_id }}',
   '{{ location }}'

/*+ delete */
DELETE FROM azure.resources.resource_groups
WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'

Similarly, our stackql_tests directory must contain a .iql file (StackQL query file) with the same name as each resource defined in the stack. Here is an example for stackql_tests/monitor_resource_group.iql:

/*+ preflight */
SELECT COUNT(*) as count FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'

/*+ postdeploy, retries=2, retry_delay=2 */
SELECT COUNT(*) as count FROM azure.resources.resource_groups
WHERE subscriptionId = '{{ subscription_id }}'
AND resourceGroupName = '{{ resource_group_name }}'
AND location = '{{ location }}'
AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'

Now we can build, test, and teardown our example stack using these commands (starting with a dry-run, which will render the target queries without executing them):

# stackql-deploy build|test|teardown {stack_name} {stack_env} [{options}]
stackql-deploy build activity_monitor prd -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
stackql-deploy build activity_monitor prd -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy test activity_monitor prd -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
stackql-deploy teardown activity_monitor prd -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000

Give us your feedback! ⭐ us here!

· One min read

StackQL allows you to query and interact with your cloud and SaaS assets using a simple SQL framework. Use cases include CSPM, asset inventory and analysis, finops and more, as well as IaC and sysops (lifecycle management).

Using stackql and the awscc provider (AWS Cloud Control provider for stackql), here's how you can query your entire AWS estate in real time (globally) and generate a simple report like this...

aws-inventory-example

Check out the code at AWS Global Inventory!
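
To give a flavour of the queries behind such a report, counting instances per region looks like this (a minimal sketch reusing the aws.ec2.instances table and region filter shown later on this blog; the linked repo covers many more services):

SELECT region, COUNT(*) as num_instances
FROM aws.ec2.instances
WHERE region IN ('us-east-1','us-east-2','us-west-1','us-west-2')
GROUP BY region;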

Visit us and give us a ⭐ on GitHub

· 3 min read

StackQL allows you to query and interact with your cloud and SaaS assets using a simple SQL framework. Use cases include CSPM, asset inventory and analysis, finops and more, as well as IaC and ops (lifecycle management).

The three major cloud providers all offer a built-in Linux shell for executing commands using their respective CLIs; in some cases, these shells come with tools like terraform pre-installed. They are pre-authorized with the credentials of the user you are signed in to the cloud console as.

Now you can easily use stackql - a unified analytics and IaC dev tool - in all major cloud providers' built-in shells, using cloud shell scripts packaged with the stackql Linux binary (available from v0.5.587 onwards).

StackQL is particularly useful for asynchronously querying across regions in AWS, projects in Google, or resource groups in Azure, which is challenging to do via the CLIs. For example:

SELECT region, COUNT(*) as num_functions
FROM aws.lambda.functions
WHERE region IN (
'us-east-1','us-east-2','us-west-1','us-west-2',
'ap-south-1','ap-northeast-3','ap-northeast-2',
'ap-southeast-1','ap-southeast-2','ap-northeast-1',
'ca-central-1','eu-central-1','eu-west-1',
'eu-west-2','eu-west-3','eu-north-1','sa-east-1')
GROUP BY region;

Additionally, you can authenticate to another provider from within one cloud shell and run multi-cloud inventory queries. For example:

SELECT 
name,
SPLIT_PART(machineType, '/', -1) as instance_type,
'google' as provider
FROM google.compute.instances
WHERE project IN ('myproject1','myproject2')
UNION
SELECT
instanceId as name,
instanceType as instance_type,
'aws' as provider
FROM aws.ec2.instances
WHERE region IN (
'us-east-1','us-east-2','us-west-1','us-west-2',
'ap-south-1','ap-northeast-3','ap-northeast-2',
'ap-southeast-1','ap-southeast-2','ap-northeast-1',
'ca-central-1','eu-central-1','eu-west-1',
'eu-west-2','eu-west-3','eu-north-1','sa-east-1');

Getting Started

To get started with StackQL in your preferred cloud shell environment, download the StackQL package using the following command:

curl -L https://bit.ly/stackql-zip -O \
&& unzip stackql-zip

This command downloads and unzips the StackQL package. From there, you can use our tailored scripts for AWS, Google Cloud, or Azure to integrate StackQL seamlessly into your cloud shell environment.

Using StackQL in the AWS Cloud Shell

Run the stackql-aws-cloud-shell.sh script as follows to launch the StackQL command shell within the AWS Cloud Shell:

sh stackql-aws-cloud-shell.sh

An example is shown here:

aws-cloud-shell-example

You can also run stackql exec commands using the stackql-aws-cloud-shell.sh script; for instance, this command writes the results of a query to a CSV file which can then be downloaded from the Cloud Shell:

sh stackql-aws-cloud-shell.sh exec \
--output csv --outfile instances.csv \
"SELECT region, instanceType FROM aws.ec2.instances WHERE region IN ('us-east-1')"

Additionally, you can supply an IAM role ARN using the --role-arn argument to assume another role for your query or mutation operation; an example is shown here:

sh stackql-aws-cloud-shell.sh \
--role-arn arn:aws:iam::824532806693:role/SecurityReviewerRole exec \
--infile query.iql \
--output csv --outfile output.csv

Using StackQL in the Azure Cloud Shell

Run the stackql-azure-cloud-shell.sh script as follows to open a StackQL command shell from the Azure Cloud Shell:

sh stackql-azure-cloud-shell.sh

An example is shown here:

azure-cloud-shell-example

As with the AWS script, you can also invoke stackql exec; an example is shown here:

sh stackql-azure-cloud-shell.sh exec \
--output csv --outfile instances_by_location.csv \
"SELECT location, COUNT(*) as num_instances FROM azure.compute.virtual_machines WHERE resourceGroupName = 'stackql-ops-cicd-dev-01' AND subscriptionId = '631d1c6d-2a65-43e7-93c2-688bfe4e1468' GROUP BY location"

Using StackQL in the Google Cloud Shell

Run the stackql-google-cloud-shell.sh script as shown below to launch a StackQL command shell from within the Google Cloud Shell:

sh stackql-google-cloud-shell.sh

An example is shown here:

google-cloud-shell-example

As with the other two providers, you can run exec commands, as shown in the example below:

sh stackql-google-cloud-shell.sh exec \
--output csv --outfile instances.csv \
"SELECT name, status FROM google.compute.instances WHERE project = 'stackql-demo'"

Please give us your feedback! Star us at github.com/stackql.

· One min read

StackQL allows you to query and interact with your cloud and SaaS assets using a simple SQL framework. Use cases include CSPM, asset inventory and analysis, finops and more, as well as IaC and ops (lifecycle management).

We're excited to announce the general availability of the latest StackQL providers for Azure. These include expanded resource and method coverage, including all of the latest Resource Manager services. The StackQL Azure provider catalog now includes the following (an example of pulling and querying a provider follows the list):

  • azure - core Azure RM services
  • azure_extras - additional Azure services
  • azure_isv - Azure Native ISV software and services (like Databricks, Datadog, Confluent, Astro and more)
  • azure_stack - Azure Hybrid app framework
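
To start working with any of these providers, pull it from the StackQL provider registry and explore its services (a minimal sketch; the resource group query mirrors the Azure examples elsewhere on this blog, and the subscription id is a placeholder):

-- download the provider and list its services
REGISTRY PULL azure;
SHOW SERVICES IN azure;

-- query a resource from the provider
SELECT name, location FROM azure.resources.resource_groups
WHERE subscriptionId = '<subscription-id>';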

by the numbers...

Provider      | Total Services | Total Methods | Total Resources
azure         | 195            | 13841         | 3920
azure_extras  | 38             | 1164          | 339
azure_isv     | 20             | 906           | 253
azure_stack   | 18             | 470           | 142

More Data Plane services like Azure Container Registry coming as well, stay tuned!

· One min read

stackql is a dev tool that allows you to query and manage cloud and SaaS resources using SQL, which developers and analysts can use for CSPM, assurance, user access management reporting, IaC, XOps and more.

We're excited to announce the release of two new StackQL providers: datadog and pagerduty. The datadog provider includes 41 services and 405 methods; with it, you can query and manage everything from APM retention filters and audit logs to cloud workload security and more. More information on the datadog provider can be found here.

The pagerduty provider includes an array of services like events, metrics, monitors, and users, letting you fully leverage the operational capabilities of these platforms. Whether it's maintaining your security posture with cloud_workload_security and security_monitoring or managing resources with containers and incidents, StackQL gives you visibility into and control over pagerduty, datadog, and numerous other XaaS platforms. More information on the pagerduty provider can be found here.
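
To start exploring either provider, pull it from the StackQL provider registry and list what's available (a minimal sketch; authentication for each provider is configured separately):

-- download the new providers
REGISTRY PULL datadog;
REGISTRY PULL pagerduty;

-- list the services exposed by each provider
SHOW SERVICES IN datadog;
SHOW SERVICES IN pagerduty;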

Let us know your thoughts! Visit us and give us a ⭐ on GitHub