
4 posts tagged with "analytics"


· 4 min read

StackQL is a dev tool that allows you to query and manage cloud and SaaS resources using SQL. Developers and analysts can use it for CSPM, assurance, user access management reporting, IaC, XOps, and more.

This quick start guide outlines how to create a Superset + StackQL dashboard on your laptop using Docker Desktop, Helm, and Kubernetes. We won't go into depth on Superset, a third-party application, so this guide is terse.

Supplying secrets

In this example, we use the AWS, Azure, DigitalOcean, GitHub, and Google providers.

All of the associated principals must be granted access using provider-specific access controls.

NOTE: keep all of these values secret and certainly do not commit them into source control. We have supplied examples for numerous providers; we suggest that you configure only what you need.

Create a file helm/stackql-dashboards/secrets/secret-values.yaml, containing the following, replacing placeholders:

AWS_ACCESS_KEY_ID: '<your aws access key id>'
AWS_SECRET_ACCESS_KEY: '<your aws secret key>'
AZURE_CLIENT_ID: '<your azure client id>'
AZURE_CLIENT_SECRET: '<your azure client secret>'
AZURE_TENANT_ID: '<your azure tenant id>'
DIGITALOCEAN_TOKEN: '<your digitalocean token>'
STACKQL_GITHUB_TOKEN: '<your github personal access token>'
GOOGLE_APPLICATION_CREDENTIALS: '/opt/stackql/config/google-credentials.json'
google-credentials.json: |
  <full google json key>

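# the superset admin password, used to log in later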
password: 'mypassword'

Expand templates and deploy locally

Here we will set up and expose a local dashboard using the local Kubernetes cluster supplied with Docker Desktop.

These steps assume that your kubectl config points at your local cluster (depending on your version of Docker, something like kubectl config use-context docker-desktop should do the trick) and that you execute from the root directory of the stackql-cloud repository. We will let the system dynamically assign a local port.

helm dependency update helm/stackql-dashboards
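The helm template command below writes its output into helm/stackql-dashboards/out/; if that directory does not already exist in your checkout, create it first:

mkdir -p helm/stackql-dashboards/out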

helm template --release-name v1 --namespace default --set superset.service.type=NodePort --set superset.service.nodePort.http="" -f helm/stackql-dashboards/secrets/secret-values.yaml helm/stackql-dashboards > helm/stackql-dashboards/out/stackql-demo-dashboards.yaml

kubectl apply -f helm/stackql-dashboards/out/stackql-demo-dashboards.yaml

Log into and set up Superset

Allow a minute or so for init actions to complete.
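You can watch the pods come up with something like:

kubectl get pods --watch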

First, inspect the output of kubectl get svc and note the host port for the service v1-superset. In my case, I see (redacted):

$ kubectl get svc | grep NodePort      
v1-superset NodePort ... ... 8088:31930/TCP ...

So, my local port is 31930 on this occasion. Hereafter, let us refer to this port as <SUPERSET_LOCAL_PORT>.
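If you would rather not eyeball the output, something along these lines (assuming the release name v1 used above) should return the node port directly:

kubectl get svc v1-superset -o jsonpath='{.spec.ports[0].nodePort}'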

Go to your browser address bar and punch in http://localhost:<SUPERSET_LOCAL_PORT>. Log in using admin / mypassword (or other if you reconfigured), and then you can begin using superset.

From the top RHS Settings dropdown, select Database Connections. Then, select the + DATABASE button (just below Settings) and do the following (the password does not matter in this context, add anything you want):

Initial database settings


Follow up database settings

Press "FINISH"

NOTE: we have enabled DML here so that meta queries like show and describe will work. You certainly do not have to do this if you don't want to.
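For example, once DML is enabled, meta queries like these should work from SQL Lab:

show providers;

describe google.compute.machine_types;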


Here we present a simple GCP scenario; you can follow the same pattern to create many charts and populate a dashboard...

Navigate to SQL > SQL Lab and then input the below, replacing <your gcp project> with a google project your service account can access:

select name, guestCpus from google.compute.machine_types where project = '<your gcp project>' and zone = 'australia-southeast1-a';


A table of results should appear.

Press "Save" > "Save Dataset"

Give it whatever name you want.

You can click the option to create a chart immediately or navigate to your chart via the Charts menu item.

Once inside the UI for your new dataset, do something like this (we will leave it to your creativity)...

My First Chart

...and then...

Chart To Dashboard

Click on "SAVE & GO TO NEW DASHBOARD", and you have your first dashboard + stackql!


· 6 min read

This exercise will show you how to run a real-time query across your AWS and Google cloud environments. You may do this for inventory analysis, security analysis, or any other reason you can think of. We will use StackQL to query the state of your cloud resources across your AWS and Google environments. You can also use StackQL to provision, de-provision, or manage resources across different cloud and SaaS providers.

The steps we will take are:

  1. Prepare your environment for StackQL usage.
  2. Use StackQL to provision some resources in the cloud (optional).
  3. Use StackQL to query resources present in the cloud.
  4. Use StackQL to tear down resources created in step (2), if any. Important: you must destroy any resources created through this exercise, or you will incur ongoing charges.


For this exercise, credentials with privileges against Google and AWS are required. It is outside the scope of this document to go into great detail on the various topics and options relevant to this. Instead, the steps below provide (i) references to vendor documentation and (ii) suggestions for workarounds to get yourself going.

for old hands

All the materials required for this exercise are:

  1. A current stackql executable.
  2. A Google Service Account Key JSON file, where the corresponding Service Account possesses permissions sufficient to create, interrogate, and delete compute block storage.
  3. AWS credentials stored in the traditional AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, where the corresponding principal possesses permissions sufficient to create, interrogate, and delete EC2 block storage.

step by step

First, please do the following:

  1. Download and install stackql from our website.
  2. For google:
    • (i) Create and download a Google Service Account Key as per Google documentation. Remember the location of your key file.
    • (ii) You will need to grant the Service Account at least read, list, create, and delete privileges. For more information about google iam and Service Accounts in particular, please consult the documentation. For this exercise, granting your service account the roles/compute.storageAdmin role would be adequate.
  3. For AWS:
    • (i) Create and download AWS user credentials as per AWS documentation. We will require long-lived credentials. In keeping with vendor advice, we strongly recommend against using root user credentials. We have created a dedicated CICD user for this exercise.
    • (ii) Set up the AWS CLI environment variables as per the documentation.
    • (iii) The user will need create / read / delete privileges against EC2 volumes. This can be done through the AWS IAM console in various ways; for example, one can use groups and permission policies. Adding your user to a group with AmazonEC2FullAccess will certainly work, although lesser privileges may be adequate, as in the sketch below.
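If you prefer a narrower grant than AmazonEC2FullAccess, a minimal custom policy along these lines should cover this exercise (the exact action set here is our assumption, not vendor-verified guidance):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": "*"
    }
  ]
}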

Then, create some shell variables:

# you will need to edit the file path as appropriate
GOOGLE_DOWNLOADED_KEY_FILE_PATH='/path/to/your/downloaded/key.json'
AWS_AUTH_FRAGMENT='{ "type": "aws_signing_v4", "credentialsenvvar": "AWS_SECRET_ACCESS_KEY", "keyIDenvvar": "AWS_ACCESS_KEY_ID" }'

GOOGLE_AUTH_FRAGMENT='{ "credentialsfilepath": "'"${GOOGLE_DOWNLOADED_KEY_FILE_PATH}"'", "type": "service_account" }'

export STACKQL_AUTH_CTX='{ "aws": '"${AWS_AUTH_FRAGMENT}"', "google": '"${GOOGLE_AUTH_FRAGMENT}"' }'

Setting up Provider Auth in PowerShell

$GOOGLE_DOWNLOADED_KEY_FILE_PATH = "C:\path\to\your\downloaded\key.json"

$AWS_AUTH_FRAGMENT = '{ "type": "aws_signing_v4", "credentialsenvvar": "AWS_SECRET_ACCESS_KEY", "keyIDenvvar": "AWS_ACCESS_KEY_ID" }'

$GOOGLE_AUTH_FRAGMENT = '{ "credentialsfilepath": "' + $GOOGLE_DOWNLOADED_KEY_FILE_PATH + '", "type": "service_account" }'

$env:STACKQL_AUTH_CTX = '{ "aws": ' + $AWS_AUTH_FRAGMENT + ', "google": ' + $GOOGLE_AUTH_FRAGMENT + ' }'

Start a stackql shell session

To start an interactive shell session, in the same shell where you set up your environment variables, run:

stackql --auth="${STACKQL_AUTH_CTX}" shell

You can exit at any time with ctrl + C.
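If you prefer a one-shot, non-interactive invocation, stackql also provides an exec command; a minimal sketch, using the same auth context:

stackql --auth="${STACKQL_AUTH_CTX}" exec "show providers;"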

Setup and meta queries to get started

StackQL providers are installed from the StackQL Provider Registry using the REGISTRY command. StackQL supports meta queries such as SHOW and DESCRIBE, which can be used to explore the services, resources, fields, and operations available in a given cloud or SaaS provider.

-- see available providers
registry pull list;

-- pull the required providers
registry pull google;

registry pull aws;

-- show the installed providers
show providers;

-- some meta queries
show services in google;

show resources in google.compute;

describe google.compute.disks;

Create block storage (optional)

You will need to replace the items in <ANGLE_BRACKETS>.

-- create a google volume, await and verify creation completes successfully
insert /*+ AWAIT */ into google.compute.disks(project, zone, data__name, data__sizeGb)
select '<YOUR_GCP_PROJECT>', 'australia-southeast1-a', 'my-stackql-demo-disk-01', '10' ;

-- create an aws volume, operation despatched on a BEST EFFORT basis
insert into aws.ec2.volumes(AvailabilityZone, Size, region)
select 'ap-southeast-2a', 10, 'ap-southeast-2';

Interrogate cloud block storage

-- query one resource from google
select
name,
split_part(split_part(type, '/', 11), '-', 2) as type,
sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a';

-- query the equivalent from aws
select
volumeId as name,
volumeType as type,
size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- union the equivalent resources across clouds
select
'google' as vendor,
name,
split_part(split_part(type, '/', 11), '-', 2) as type,
sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
union
select
'aws' as vendor,
volumeId as name,
volumeType as type,
size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- create a view for convenience (status is included so that we can
-- later check whether volumes are deletable)
create view dual_cloud_block_storage as
select
'google' as vendor,
name,
split_part(split_part(type, '/', 11), '-', 2) as type,
status,
sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
union
select
'aws' as vendor,
volumeId as name,
volumeType as type,
status,
size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- select from the newly created view, with ordering
select * from dual_cloud_block_storage order by name desc;

Delete block storage (if required)

This will only work if the disks are deletable. For example, aws.ec2.volumes must have status = available; you can check this with the view we created above.
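For example, a query along these lines (using the status column carried through the view; status values are vendor-specific) will list any aws volumes that are not yet deletable:

select name, status from dual_cloud_block_storage
where vendor = 'aws' and status <> 'available';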

/* delete a google volume, await and verify deletion completes successfully.
One at a time only... */
delete /*+ AWAIT */ from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
and disk = 'my-stackql-demo-disk-01';

-- delete an aws volume, operation despatched on a BEST EFFORT basis
delete from aws.ec2.volumes
where VolumeId = 'vol-049ee07b31aff451a'
and region = 'ap-southeast-2';

Verify the cleanup was successful

select * from dual_cloud_block_storage order by name desc;

That's it for the scripted demo!

Get involved

We Need Your Help!

If you find bugs, want features, or have tech questions, then go to our GitHub repository and raise the appropriate issue 🙏

· 2 min read

The StackQL Sumologic provider is now available in the public StackQL Provider Registry. Docs are available at sumologic provider docs.

StackQL is an intelligent API client which uses SQL as a front-end language. StackQL can be used for querying cloud and SaaS providers, as well as provisioning and lifecycle operations.

The StackQL Sumo provider can query, create, update and delete Sumologic collectors and sources, view and manage ingest budgets, health events, dashboards, user and account access and activity, and more.

Some example queries include:

SELECT id, name FROM sumologic.collectors.collectors WHERE region = 'au';

or using built-in functions to simplify and format query outputs, such as:

SELECT alive, datetime(lastSeenAlive/1000, 'unixepoch') AS lastSeenAliveUtc,
datetime(lastSeenAlive/1000, 'unixepoch', 'localtime') AS lastSeenAliveLocal
FROM sumologic.collectors.collectors
WHERE region = 'au' AND id = 116208196;

another example...

SELECT id, email,
firstName || ' ' || lastName AS fullName,
round(julianday('now') - julianday(lastLoginTimestamp), 0) as daysSinceLastLogin
FROM sumologic.users.users WHERE region = 'au';

An example using StackQL with the Sumologic provider to query users and roles and join the results to get a list of users and their roles:

SELECT AS email, AS role
FROM sumologic.users.users u
JOIN sumologic.roles.roles r
ON JSON_EXTRACT(u.roleIds, '$[0]') =
WHERE u.region = 'au' AND r.region = 'au';

An example using StackQL and Jupyter is shown here (see stackql/stackql-jupyter-demo):

Use StackQL and Jupyter to query SumoLogic

StackQL can also be used to provision objects in Sumologic; for instance, the following query can be used to create a collector:

INSERT INTO sumologic.collectors.collectors(region, data__collector)
SELECT 'au',
'{ "collectorType":"Hosted", "name":"My Hosted Collector", "description":"An example Hosted Collector", "category":"HTTP Collection" }';

Let us know what you think!

· One min read

StackQL is an intelligent API client which uses SQL as a front-end language, with support for multi-cloud and SaaS provider environments; you can find more information on our website.

StackQL can provide valuable insights into your cloud and SaaS estates, whether for security posture management, cross-cloud entitlements reporting, cost optimization, or asset/inventory management.

Jupyter notebooks, as an interactive analysis tool, can leverage StackQL as a source for cloud and SaaS provider data.

GCP Nodes

We've recently added magic function support for running StackQL queries in Jupyter notebooks, making the integration between StackQL and Jupyter more seamless. StackQL magic can be used on a single line within a cell or applied to the entire cell, as shown here:
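A quick sketch of what this might look like in a notebook (the %stackql / %%stackql magic names and the pystackql extension module are our assumptions here; see the demo repo for the exact incantation). Load the extension, then use the line magic for one-liners:

%load_ext pystackql.magic
%stackql show providers

Or use the cell magic, where the whole cell body is the query:

%%stackql
show services in google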

The stackql-jupyter-demo Docker image is available from Docker Hub, along with instructions to run using the Docker Hub image or using docker-compose.
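As a minimal sketch (the image name here is inferred from the repo name, and the port mapping assumes Jupyter's default 8888; check the Docker Hub page for the authoritative run command):

docker run -p 8888:8888 stackql/stackql-jupyter-demo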

Give us your feedback!