GitHub Provider Update - April 2026

· 3 min read
Technologist and Cloud Consultant

We've released an update to the StackQL GitHub provider adding new services and expanding coverage across several existing ones.

New Services

Newly added services of note include:

agent_tasks

The agent_tasks service exposes GitHub's AI agent task API, allowing you to query and manage agent task runs within a repository. Resources include:

  • agent_tasks - list, get, and manage agent task runs scoped to a repository
  • agent_task_steps - retrieve individual step-level detail for an agent task run
  • agent_task_labels - manage labels assigned to agent tasks

This covers the audit and observability side of AI agent operations in GitHub - useful for tracking what agent tasks have run, their status, and step-level output.
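As a sketch, a step-level query might look like the following (the owner, repo, and task_id parameter names are assumptions for illustration - check the provider docs for the exact fields):

```sql
-- hypothetical sketch: step-level detail for an agent task run
-- (parameter and column names are assumed, not confirmed)
SELECT *
FROM github.agent_tasks.agent_task_steps
WHERE owner = 'my-org'
AND repo = 'my-repo'
AND task_id = '12345';
```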

campaigns

The campaigns service covers GitHub's security campaign management, part of GitHub Advanced Security. Resources include:

  • campaigns - create, list, get, update, and close security campaigns scoped to an organization
  • campaign_repositories - list repositories participating in a campaign

Security campaigns let you coordinate remediation of code scanning or secret scanning alerts across an org. This service lets you query campaign state and participation programmatically.
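For example, querying campaign state for an org could be sketched as follows (the org parameter name is an assumption - consult the provider docs for exact fields):

```sql
-- hypothetical sketch: list security campaigns for an organization
SELECT *
FROM github.campaigns.campaigns
WHERE org = 'my-org';
```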

classroom

The classroom service brings GitHub Classroom into StackQL. Resources include:

  • classrooms - list and get classrooms accessible to the authenticated user
  • assignments - list and get assignments within a classroom
  • accepted_assignments - query student-accepted assignment repositories
  • assignment_grades - retrieve grade data for accepted assignments

This is useful for institutions managing GitHub Classroom at scale - querying assignment completion or generating reports across multiple classrooms.
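A grading report could be sketched like this (the assignment_id parameter and any column names are assumptions, not confirmed):

```sql
-- hypothetical sketch: grade data for an assignment
SELECT *
FROM github.classroom.assignment_grades
WHERE assignment_id = 12345;
```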

hosted_compute

The hosted_compute service covers GitHub's hosted compute networking resources, relevant to organizations using GitHub-hosted runners with custom network configurations. Resources include:

  • hosted_compute_networks - manage hosted compute network configurations at the org level
  • hosted_compute_network_settings - query and update settings for a hosted compute network

This is relevant if you're using GitHub's hosted compute with Azure private networking or similar integrations.

private_registries

The private_registries service exposes org-level private registry configurations - credentials and settings stored in GitHub for use by Actions workflows. Resources include:

  • org_private_registries - list, get, create, update, and delete private registry configurations for an organization

This allows you to audit which private registries (npm, Docker Hub, Maven, etc.) are configured at the org level without going through the GitHub UI.
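An audit query might be sketched as follows (the org parameter name is an assumption):

```sql
-- hypothetical sketch: private registry configurations for an org
SELECT *
FROM github.private_registries.org_private_registries
WHERE org = 'my-org';
```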

enterprise_teams

The enterprise_teams service provides enterprise-scoped team management, distinct from org-level teams. Resources include:

  • enterprise_teams - list and get teams at the enterprise level
  • enterprise_team_members - query membership for enterprise teams

This is useful for enterprises managing teams that span multiple organizations.
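A membership query could be sketched as (the enterprise and team_slug parameter names are assumptions):

```sql
-- hypothetical sketch: members of an enterprise-level team
SELECT *
FROM github.enterprise_teams.enterprise_team_members
WHERE enterprise = 'my-enterprise'
AND team_slug = 'platform-team';
```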

Updates

Notable updates to existing services include:

actions

New resources added to the actions service:

  • hosted_runners - query and manage GitHub-hosted runner configurations at the org or repo level
  • runner_group_network_configurations - network config details for runner groups

orgs

  • New org_roles and org_role_assignments resources for querying custom org role definitions and their assignments

repos

  • New repo_rules_suites resource for querying rule suite evaluation history (useful for auditing branch protection rule evaluations)
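An audit over rule suite evaluations might be sketched as (the owner and repo parameter names are assumptions - check the provider docs for exact fields):

```sql
-- hypothetical sketch: rule suite evaluation history for a repo
SELECT *
FROM github.repos.repo_rules_suites
WHERE owner = 'my-org'
AND repo = 'my-repo';
```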

Get Started

Pull the latest GitHub provider:

stackql registry pull github

Visit us on GitHub and let us know how you're using it.

stackql-deploy 2.0 - Rewritten in Rust

· 5 min read
Technologist and Cloud Consultant

stackql-deploy 2.0 is a full rewrite in Rust. The Python package (stackql-deploy on PyPI) is archived at 1.9.4. The CLI interface and stack file format are unchanged - no migration is required.

Why Rust

The move to Rust was primarily about distribution and operational simplicity. Rust also brings stronger guarantees around performance and memory safety. Running everything in-process without Foreign Function Interface (FFI) boundaries simplifies the architecture while maintaining predictable resource usage.

Embedded Postgres Wire Protocol Server

The most significant functional change in 2.0 is that stackql-deploy now runs the StackQL engine as an embedded in-process server over a local Postgres wire protocol connection, rather than shelling out to an external StackQL binary for each operation.

There is nothing to start, stop, or configure. The server is lifecycle-managed by stackql-deploy itself and binds to localhost only - no port is exposed on the network, no inbound firewall rules needed in CI.

The previous model spawned a new StackQL process per operation. The embedded server keeps a persistent connection for the duration of a deployment run. For stacks with many resources, the reduction in process spawn overhead is noticeable - particularly on Windows where process creation is expensive.

Additional Features Added

In addition to the architectural change to use the embedded server, 2.0 adds several other workflow improvements:

  • Resource-scoped variable exports in /*+ exists */ queries: When an exists query returns a named field (e.g. vpc_id) instead of count, the value is captured as a resource-scoped variable (this.vpc_id) and made available to all subsequent queries for that resource (e.g. statecheck, exports). This eliminates redundant lookups to resolve resource identifiers between query stages.
  • Capturing RETURNING payloads from DML operations: INSERT, UPDATE, and DELETE statements can include a RETURNING clause. Fields from the response can be mapped to resource-scoped variables via return_vals in the manifest, keyed by operation (create, update, delete). This lets identifiers assigned by the provider during creation be used immediately, without a round-trip query.
  • Additional template filters: Including the to_aws_tag_filters filter, which converts global_tags to the AWS Resource Groups Tagging API TagFilters format, and type-preserving YAML-to-JSON serialization that maintains string types through the rendering pipeline.
  • Improved logging and exception handling: Enhanced visibility that simplifies troubleshooting.
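As a rough illustration of the return_vals mapping described above, a manifest fragment might look like this (a hedged sketch - the resource name and surrounding layout are illustrative; only return_vals and the operation keys come from the description above):

```yaml
# hypothetical stackql_manifest.yml fragment
resources:
  - name: example_vpc          # illustrative resource name
    return_vals:
      create:
        - vpc_id               # mapped from the RETURNING payload of the create INSERT
```

The captured value (this.vpc_id) would then be available to subsequent queries for that resource without a follow-up lookup.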

Installation

The canonical install URL detects your OS and redirects to the latest release asset automatically. You can also download directly from your browser at get-stackql-deploy.io.

curl -L https://get-stackql-deploy.io | tar xzf -

Usage

The CLI interface is unchanged from the Python version:

# deploy a stack
stackql-deploy build my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# test a stack
stackql-deploy test my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# tear down a stack
stackql-deploy teardown my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# dry run
stackql-deploy build my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT} \
--dry-run

Stack files and stackql_manifest.yml structure are unaffected - no migration work needed.

Python Package Deprecation

stackql-deploy 1.9.4 on PyPI is the final Python release. The Python source repository is archived. If you have pip install stackql-deploy in any scripts or CI pipelines, replace it with one of the install methods above. The 1.9.4 package remains on PyPI and installable, but will not receive updates.

Run StackQL Queries from the Databricks Web Terminal

· 2 min read
Technologist and Cloud Consultant

If you have access to a Databricks workspace, you can run StackQL queries directly from the Databricks Web Terminal using your Databricks identity.

How It Works

Download the latest release of stackql, then run the included convenience script (similar scripts are provided for other cloud provider terminals - e.g. AWS CloudShell).

curl -L https://bit.ly/stackql-zip -O && unzip stackql-zip
sh stackql-databricks-shell.sh

Example Queries

Here are some sample queries; just change the deployment_name to match your workspace.

User entitlements

SELECT
deployment_name,
id,
userName,
displayName,
entitlement
FROM databricks_workspace.iam.vw_user_entitlements
WHERE deployment_name = 'dbc-74aa95f7-8c7e';

All workspace settings

SELECT * FROM
databricks_workspace.settings.vw_all_settings
WHERE deployment_name = 'dbc-74aa95f7-8c7e';

Tag policies filtered by key prefix

SELECT
tag_key as key,
description
FROM databricks_workspace.tags.tag_policies
WHERE deployment_name = 'dbc-74aa95f7-8c7e'
AND key LIKE 'class%';

Catalog count by type

SELECT
catalog_type,
COUNT(*) as num_catalogs
FROM databricks_workspace.catalog.catalogs
WHERE deployment_name = 'dbc-74aa95f7-8c7e'
GROUP BY catalog_type;

Provider Coverage

The databricks_workspace provider covers workspace-related services, while the databricks_account provider covers account-level operations including provisioning, billing, and account IAM.

The web terminal flow covers workspace-scoped queries using the token of the logged-in user. For account-level queries (provisioning, billing, account IAM), you need a Databricks service principal with account admin rights and OAuth2 credentials:

export DATABRICKS_ACCOUNT_ID="your-account-id"
export DATABRICKS_CLIENT_ID="your-client-id"
export DATABRICKS_CLIENT_SECRET="your-client-secret"

These are the same variables used by the Databricks CLI and Terraform provider, so if you already have those configured the auth story is identical.

Get Started

Full provider documentation is available for the databricks_account and databricks_workspace providers.

Visit StackQL on GitHub.

New Databricks Providers for StackQL Released

· 2 min read
Technologist and Cloud Consultant

Updated StackQL providers for Databricks are now available: databricks_account and databricks_workspace, giving you SQL access to the full Databricks control plane across account-level and workspace-level operations.

Provider Structure

The following updated providers are available:

  • databricks_account - account-level scope, 8 services
  • databricks_workspace - workspace-level scope, 26 services

Coverage

There are over 30 services, 300+ resources, and 983 operations spanning IAM, compute, catalog, billing, jobs, ML, serving, sharing, vector search, and more.

Example Queries

List workspaces in an account

SELECT
workspace_id,
workspace_name,
workspace_status,
aws_region,
compute_mode,
deployment_name,
datetime(creation_time/1000, 'unixepoch') as creation_date_time
FROM databricks_account.provisioning.workspaces
WHERE account_id = 'ebfcc5a9-9d49-4c93-b651-b3ee6cf1c9ce';

Query account users and roles

SELECT
id as user_id,
displayName as display_name,
userName as user_name,
active,
IIF(JSON_EXTRACT(roles,'$[0].value') = 'account_admin', 'true', 'false') as is_account_admin
FROM databricks_account.iam.account_users
WHERE account_id = 'ebfcc5a9-9d49-4c93-b651-b3ee6cf1c9ce';

List catalogs in a workspace

SELECT
full_name,
catalog_type,
comment,
datetime(created_at/1000, 'unixepoch') as created_at,
created_by,
datetime(updated_at/1000, 'unixepoch') as updated_at,
updated_by,
enable_predictive_optimization
FROM databricks_workspace.catalog.catalogs
WHERE deployment_name = 'dbc-36ff48e3-4a69';

Download billable usage to CSV

This one is worth calling out. You can pull billable usage data for a given period and write it straight to a CSV file:

./stackql exec \
-o text \
--hideheaders \
-f billable_usage.csv \
"SELECT contents
FROM databricks_account.billing.billable_usage
WHERE start_month = '2025-12'
AND end_month = '2026-01'
AND account_id = 'your-account-id'"

Authentication

Both providers authenticate using OAuth2 with a Databricks service principal. Set the following environment variables:

export DATABRICKS_ACCOUNT_ID="your-account-id"
export DATABRICKS_CLIENT_ID="your-client-id"
export DATABRICKS_CLIENT_SECRET="your-client-secret"

These are the same variables used by Terraform, the Databricks SDKs, and the Databricks CLI.

Get Started

Pull the providers:

registry pull databricks_account;
registry pull databricks_workspace;

Start querying via the shell or exec:

SELECT * FROM databricks_account.iam.account_groups WHERE account_id = 'your-account-id';

Full documentation is available at databricks-account-provider.stackql.io and databricks-workspace-provider.stackql.io. Let us know what you think on GitHub.

New Dedicated AWS Cloud Control Provider Released

· 2 min read
Technologist and Cloud Consultant

We've released a new dedicated StackQL AWS Cloud Control provider, offering full CRUDL operations across AWS services via the Cloud Control API, with purpose-built resource definitions that leverage Cloud Control's consistent schema.

Resource Naming Convention

Resources follow a clear pattern to differentiate operations:

  • {resource} (e.g., s3.buckets) - supports SELECT, INSERT, UPDATE, and DELETE; full CRUD with complete resource properties
  • {resource}_list_only (e.g., s3.buckets_list_only) - supports SELECT only; fast enumeration of resource identifiers

This separation means listing thousands of resources won't trigger rate limits from individual GET calls:

-- Fast enumeration (list operation only)
SELECT bucket_name
FROM awscc.s3.buckets_list_only
WHERE region = 'us-east-1';

-- Full resource details (get operation)
SELECT *
FROM awscc.s3.buckets
WHERE region = 'us-east-1'
AND data__Identifier = 'my-bucket';

Provider Coverage

The awscc provider includes:

  • 237 services and 2371 resources covering the breadth of AWS
  • Full CRUDL support for all Cloud Control compatible resources
  • Consistent schema derived from AWS CloudFormation resource specifications

Example Operations

Create an S3 Bucket

INSERT INTO awscc.s3.buckets (
BucketName,
region
)
SELECT
'my-new-bucket',
'us-east-1';

Query EC2 Instances

SELECT 
instance_id,
instance_type,
tags
FROM awscc.ec2.instances
WHERE region = 'ap-southeast-2'
AND data__Identifier = 'i-1234567890abcdef0';

Delete a Resource

DELETE FROM awscc.lambda.functions
WHERE data__Identifier = 'my-function'
AND region = 'us-east-1';

Enhanced Documentation

The provider documentation at awscc.stackql.io now features:

  • Interactive schema explorer with expandable nested property trees
  • Complete field documentation including complex object structures
  • Ready-to-use SQL examples for SELECT, INSERT, and DELETE operations
  • IAM permissions reference for each resource operation

Get Started

Pull the new provider:

stackql registry pull awscc

Query your AWS resources:

stackql shell
>> SELECT region, bucket_name FROM awscc.s3.buckets_list_only WHERE region = 'us-east-1';

Let us know your thoughts! Visit us and give us a star on GitHub.