
Confluent Provider Update - May 2026

2 min read · Technologist and Cloud Consultant

We've released an update to the StackQL Confluent provider adding eight new services and 40 additional resources across existing services.

New Services

The eight new services added in this update are:

  • ccl - Custom Code Logging: manage log topics that capture stdout/stderr and worker process logs from custom connectors running in Confluent Cloud
  • ccpm - Custom Connect Plugin Management: upload, version, and manage custom connector plugins at the environment level, including plugin version resources for JAR/ZIP artifacts
  • endpoints - manage PrivateLink access points and private network endpoints used to reach Confluent Cloud clusters and serverless products over private networking
  • pipelines - manage Stream Designer pipelines, the visual SQL/ksqlDB pipeline builder for connecting sources, transforms, and sinks across Kafka topics
  • share_group - manage Kafka share groups (KIP-932 / Queues for Kafka), which provide queue-like consumption semantics with per-message acknowledgement and consumer parallelism beyond partition count
  • streams_group - manage Kafka Streams groups, the broker-side coordination resource for Kafka Streams applications introduced alongside the next-generation consumer rebalance protocol
  • tableflow - materialize Kafka topics as Apache Iceberg or Delta Lake tables, including catalog integrations, storage configuration, and table maintenance settings
  • usm - Unified Stream Manager: register and govern self-managed Confluent Platform clusters from Confluent Cloud, including agent deployment and hybrid cluster monitoring
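Once pulled, the new services are queryable like any other provider service. As a sketch, a query against the tableflow service might look like the following - the resource and column names here are illustrative assumptions, so check the provider documentation for the exact schema:

```sql
-- illustrative only: resource and column names are assumptions
SELECT display_name, suspended
FROM confluent.tableflow.tableflow_topics
WHERE environment = 'env-abc123';
```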

Updates

This release also adds 40 additional resources across existing services, expanding coverage for:

  • kafka - additional cluster configuration and topic-level resources
  • connect - new connector status, offset, and task management resources
  • flink - expanded coverage for Flink statements, compute pools, and artifacts
  • iam - new resources for service accounts, identity providers, and role bindings
  • networking - additional resources for transit gateways, peerings, and DNS forwarders
  • schema_registry - new resources for schema exporters, modes, and compatibility
  • billing - new cost and usage resources
  • metrics - additional query and descriptor resources

Get Started

Pull the latest Confluent provider:

stackql registry pull confluent

Visit us on GitHub and let us know how you're using it.

GitHub Provider Update - April 2026

3 min read · Technologist and Cloud Consultant

We've released an update to the StackQL GitHub provider adding new services and expanding coverage across several existing ones.

New Services

Notable new services include:

agent_tasks

The agent_tasks service exposes GitHub's AI agent task API, allowing you to query and manage agent task runs within a repository. Resources include:

  • agent_tasks - list, get, and manage agent task runs scoped to a repository
  • agent_task_steps - retrieve individual step-level detail for an agent task run
  • agent_task_labels - manage labels assigned to agent tasks

This covers the audit and observability side of AI agent operations in GitHub - useful for tracking what agent tasks have run, their status, and step-level output.
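As a sketch, listing recent agent task runs for a repository might look like this - the column names are illustrative assumptions, so check the provider documentation for the exact schema:

```sql
-- illustrative only: column names are assumptions
SELECT id, status, created_at
FROM github.agent_tasks.agent_tasks
WHERE owner = 'myorg' AND repo = 'myrepo';
```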

campaigns

The campaigns service covers GitHub's security campaign management, part of GitHub Advanced Security. Resources include:

  • campaigns - create, list, get, update, and close security campaigns scoped to an organization
  • campaign_repositories - list repositories participating in a campaign

Security campaigns let you coordinate remediation of code scanning or secret scanning alerts across an org. This service lets you query campaign state and participation programmatically.

classroom

The classroom service brings GitHub Classroom into StackQL. Resources include:

  • classrooms - list and get classrooms accessible to the authenticated user
  • assignments - list and get assignments within a classroom
  • accepted_assignments - query student-accepted assignment repositories
  • assignment_grades - retrieve grade data for accepted assignments

This is useful for institutions managing GitHub Classroom at scale - querying assignment completion or generating reports across multiple classrooms.

hosted_compute

The hosted_compute service covers GitHub's hosted compute networking resources, relevant to organizations using GitHub-hosted runners with custom network configurations. Resources include:

  • hosted_compute_networks - manage hosted compute network configurations at the org level
  • hosted_compute_network_settings - query and update settings for a hosted compute network

This is relevant if you're using GitHub's hosted compute with Azure private networking or similar integrations.

private_registries

The private_registries service exposes org-level private registry configurations - credentials and settings stored in GitHub for use by Actions workflows. Resources include:

  • org_private_registries - list, get, create, update, and delete private registry configurations for an organization

This allows you to audit which private registries (npm, Docker Hub, Maven, etc.) are configured at the org level without going through the GitHub UI.

enterprise_teams

The enterprise_teams service provides enterprise-scoped team management, distinct from org-level teams. Resources include:

  • enterprise_teams - list and get teams at the enterprise level
  • enterprise_team_members - query membership for enterprise teams

This is useful for enterprises managing teams that span multiple organizations.

Updates

Notable updates to existing services include:

actions

New resources added to the actions service:

  • hosted_runners - query and manage GitHub-hosted runner configurations at the org or repo level
  • runner_group_network_configurations - network config details for runner groups

orgs

  • New org_roles and org_role_assignments resources for querying custom org role definitions and their assignments

repos

  • New repo_rules_suites resource for querying rule suite evaluation history (useful for auditing branch protection rule evaluations)

Get Started

Pull the latest GitHub provider:

stackql registry pull github

Visit us on GitHub and let us know how you're using it.

stackql-deploy 2.0 - Rewritten in Rust

5 min read · Technologist and Cloud Consultant

stackql-deploy 2.0 is a full rewrite in Rust. The Python package (stackql-deploy on PyPI) is archived at 1.9.4. The CLI interface and stack file format are unchanged - no migration required.

Why Rust

The move to Rust was primarily about distribution and operational simplicity. Rust also brings stronger guarantees around performance and memory safety. Running everything in-process without Foreign Function Interface (FFI) boundaries simplifies the architecture while maintaining predictable resource usage.

Embedded Postgres Wire Protocol Server

The most significant functional change in 2.0 is that stackql-deploy now runs the StackQL engine as an embedded in-process server over a local postgres wire protocol connection rather than shelling out to the StackQL binary as an external process.

There is nothing to start, stop, or configure. The server is lifecycle-managed by stackql-deploy itself and binds to localhost only - no port is exposed on the network, no inbound firewall rules needed in CI.

The previous model spawned a new StackQL process per operation. The embedded server keeps a persistent connection for the duration of a deployment run. For stacks with many resources, the reduction in process spawn overhead is noticeable - particularly on Windows where process creation is expensive.

Additional Features Added

In addition to the architectural change to an embedded server, this release adds several other workflow improvements:

  • Enabling resource scoped variable exports in /*+ exists */ queries: When an exists query returns a named field (e.g. vpc_id) instead of count, the value is captured as a resource-scoped variable (this.vpc_id) and made available to all subsequent queries for that resource (e.g. statecheck, exports). This eliminates the need for redundant lookups to resolve resource identifiers between query stages.
  • Support for capturing RETURNING payloads from DML operations: INSERT, UPDATE, and DELETE statements can include a RETURNING clause. Fields from the response can be mapped to resource-scoped variables via return_vals in the manifest, keyed by operation (create, update, delete). This allows identifiers assigned by the provider during creation to be used immediately without a round-trip query.
  • Additional template filters: Including the to_aws_tag_filters filter, which converts global_tags to the AWS Resource Groups Tagging API TagFilters format, and type-preserving YAML-to-JSON serialization that maintains string types through the rendering pipeline.
  • Improved logging and exception handling: better visibility into failures, simplifying troubleshooting.
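As a sketch of the RETURNING capture, a hypothetical stackql_manifest.yml fragment might look like this - the resource name and field names are illustrative assumptions, and only the return_vals shape keyed by operation is described above, so treat this as a sketch rather than the definitive schema:

```yaml
# hypothetical manifest fragment - resource and field names are illustrative
resources:
  - name: my_vpc
    # fields from the RETURNING clause of the create INSERT are mapped
    # to resource-scoped variables (e.g. this.vpc_id), keyed by operation
    return_vals:
      create:
        - vpc_id
```

Combined with named-field /*+ exists */ exports, identifiers assigned by the provider are then available to later statecheck and exports stages without a round-trip lookup.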

Installation

The canonical install URL detects your OS and redirects to the latest release asset automatically. You can also download directly from your browser at get-stackql-deploy.io.

curl -L https://get-stackql-deploy.io | tar xzf -

Usage

The CLI interface is unchanged from the Python version:

# deploy a stack
stackql-deploy build my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# test a stack
stackql-deploy test my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# tear down a stack
stackql-deploy teardown my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT}

# dry run
stackql-deploy build my-stack prod \
-e GOOGLE_PROJECT=${GOOGLE_PROJECT} \
--dry-run

Stack files and stackql_manifest.yml structure are unaffected - no migration work needed.

Python Package Deprecation

stackql-deploy 1.9.4 on PyPI is the final Python release, and the Python source repository is archived. If you have pip install stackql-deploy in any scripts or CI pipelines, replace it with one of the install methods above. The 1.9.4 package remains installable from PyPI but will not receive updates.

Run StackQL Queries from the Databricks Web Terminal

2 min read · Technologist and Cloud Consultant

If you have access to a Databricks workspace, you can run StackQL queries directly from the Databricks Web Terminal using your Databricks identity.

How It Works

Download the latest release of stackql, then run the included convenience script (similar scripts are provided for other cloud provider terminals, e.g. AWS Cloud Shell).

curl -L https://bit.ly/stackql-zip -O && unzip stackql-zip
sh stackql-databricks-shell.sh

Example Queries

Here are some sample queries - just change the deployment_name value to match your workspace.

User entitlements

SELECT
deployment_name,
id,
userName,
displayName,
entitlement
FROM databricks_workspace.iam.vw_user_entitlements
WHERE deployment_name = 'dbc-74aa95f7-8c7e';

All workspace settings

SELECT * FROM
databricks_workspace.settings.vw_all_settings
WHERE deployment_name = 'dbc-74aa95f7-8c7e';

Tag policies filtered by key prefix

SELECT
tag_key as key,
description
FROM databricks_workspace.tags.tag_policies
WHERE deployment_name = 'dbc-74aa95f7-8c7e'
AND key LIKE 'class%';

Catalog count by type

SELECT
catalog_type,
COUNT(*) as num_catalogs
FROM databricks_workspace.catalog.catalogs
WHERE deployment_name = 'dbc-74aa95f7-8c7e'
GROUP BY catalog_type;

Provider Coverage

The databricks_workspace provider covers workspace-related services, while the databricks_account provider covers account-level operations including provisioning, billing, and account IAM.

The web terminal flow covers workspace-scoped queries using the token of the logged-in user. For account-level queries (provisioning, billing, account IAM), you need a Databricks service principal with account admin rights and OAuth2 credentials:

export DATABRICKS_ACCOUNT_ID="your-account-id"
export DATABRICKS_CLIENT_ID="your-client-id"
export DATABRICKS_CLIENT_SECRET="your-client-secret"

These are the same variables used by the Databricks CLI and Terraform provider, so if you already have those configured the auth story is identical.

Get Started

Full provider documentation is available at databricks-account-provider.stackql.io and databricks-workspace-provider.stackql.io.

Visit StackQL on GitHub.

New Databricks Providers for StackQL Released

2 min read · Technologist and Cloud Consultant

Updated StackQL providers for Databricks are now available: databricks_account and databricks_workspace, giving you SQL access to the full Databricks control plane across account-level and workspace-level operations.

Provider Structure

The following updated providers are available:

  • databricks_account - account scope - 8 services
  • databricks_workspace - workspace scope - 26 services

Coverage

There are over 30 services, 300+ resources, and 983 operations spanning IAM, compute, catalog, billing, jobs, ML, serving, sharing, vector search, and more.

Example Queries

List workspaces in an account

SELECT
workspace_id,
workspace_name,
workspace_status,
aws_region,
compute_mode,
deployment_name,
datetime(creation_time/1000, 'unixepoch') as creation_date_time
FROM databricks_account.provisioning.workspaces
WHERE account_id = 'ebfcc5a9-9d49-4c93-b651-b3ee6cf1c9ce';

Query account users and roles

SELECT
id as user_id,
displayName as display_name,
userName as user_name,
active,
IIF(JSON_EXTRACT(roles,'$[0].value') = 'account_admin', 'true', 'false') as is_account_admin
FROM databricks_account.iam.account_users
WHERE account_id = 'ebfcc5a9-9d49-4c93-b651-b3ee6cf1c9ce';

List catalogs in a workspace

SELECT
full_name,
catalog_type,
comment,
datetime(created_at/1000, 'unixepoch') as created_at,
created_by,
datetime(updated_at/1000, 'unixepoch') as updated_at,
updated_by,
enable_predictive_optimization
FROM databricks_workspace.catalog.catalogs
WHERE deployment_name = 'dbc-36ff48e3-4a69';

Download billable usage to CSV

This one is worth calling out. You can pull billable usage data for a given period and write it straight to a CSV file:

./stackql exec \
-o text \
--hideheaders \
-f billable_usage.csv \
"SELECT contents
FROM databricks_account.billing.billable_usage
WHERE start_month = '2025-12'
AND end_month = '2026-01'
AND account_id = 'your-account-id'"

Authentication

Both providers authenticate using OAuth2 with a Databricks service principal. Set the following environment variables:

export DATABRICKS_ACCOUNT_ID="your-account-id"
export DATABRICKS_CLIENT_ID="your-client-id"
export DATABRICKS_CLIENT_SECRET="your-client-secret"

These are the same variables used by Terraform, the Databricks SDKs, and the Databricks CLI.

Get Started

Pull the providers:

registry pull databricks_account;
registry pull databricks_workspace;

Start querying via the shell or exec:

SELECT * FROM databricks_account.iam.account_groups WHERE account_id = 'your-account-id';

Full documentation is available at databricks-account-provider.stackql.io and databricks-workspace-provider.stackql.io. Let us know what you think on GitHub.