
16 posts tagged with "analytics"


· 3 min read
info

stackql is a dev tool that allows you to query, manage, and perform analytics against cloud and SaaS resources in real time using SQL, which developers and analysts can use for CSPM, assurance, user access management reporting, IaC, XOps and more.

The googleadmin StackQL provider is now available, which allows you to query, provision, or manage Google Workspace users, groups, devices, and more using StackQL. The googleadmin provider can be used with the google provider or other cloud providers to generate entitlements reports (or user access reviews) where Google Workspace identities are used in identity federation or IAM bindings.

The full documentation on how to use a Google service account for authentication to the googleadmin provider is available here. Information about the available directory resources and their fields and methods is available in the StackQL Provider Registry Docs.
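You can also explore the provider interactively from the StackQL shell; for example, the following meta queries (a quick sketch using the SHOW and DESCRIBE commands described in the StackQL docs) list the directory resources and the fields of the users resource:

show resources in googleadmin.directory;

describe googleadmin.directory.users;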

Simple Query

A simple query using the googleadmin provider is shown here:

SELECT
  primaryEmail,
  lastLoginTime
FROM
  googleadmin.directory.users
WHERE domain = 'stackql.io'
  AND primaryEmail = 'javen@stackql.io';

which would return the following results...

|------------------|--------------------------|
| primaryEmail     | lastLoginTime            |
|------------------|--------------------------|
| javen@stackql.io | 2023-07-08T23:30:31.000Z |
|------------------|--------------------------|

Example Query Using Built-In Functions

Here is an example using built-in functions in StackQL (more information about built-in functions is available in the StackQL docs):

SELECT
  primaryEmail,
  json_extract(name, '$.fullName') as full_name,
  lastLoginTime
FROM
  googleadmin.directory.users
WHERE domain = 'stackql.io'
  AND primaryEmail = 'javen@stackql.io';

which would return results like this...

|------------------|--------------|--------------------------|
| primaryEmail     | full_name    | lastLoginTime            |
|------------------|--------------|--------------------------|
| javen@stackql.io | Jeffrey Aven | 2023-07-08T23:30:31.000Z |
|------------------|--------------|--------------------------|

Example Query Using Aggregate Functions

Here is an example of a summary query that could be useful:

SELECT
  isAdmin,
  COUNT(*) as num_admins
FROM
  googleadmin.directory.users
WHERE domain = 'stackql.io'
GROUP BY isAdmin;

results in...

|---------|------------|
| isAdmin | num_admins |
|---------|------------|
| false   | 9          |
|---------|------------|
| true    | 2          |
|---------|------------|

Entitlements Report Using a LEFT JOIN with the google provider

Using the LEFT OUTER JOIN capability with StackQL, you can generate entitlements or user access management reports that span across Google Workspace as an Identity Provider (IdP) and a Google Cloud resource (including Organizations, Folders, Projects, and resources), such as:

SELECT
  split_part(json_extract(iam.members, '$[0]'), ':', 2) as member,
  iam.role as role,
  users.lastLoginTime
FROM google.cloudresourcemanager.organizations_iam_bindings iam
LEFT OUTER JOIN googleadmin.directory.users users
  ON split_part(json_extract(iam.members, '$[0]'), ':', 2) = users.primaryEmail
WHERE users.domain = 'stackql.io'
  AND iam.organizationsId = 141318256085
  AND users.primaryEmail = 'javen@stackql.io';

which would return...

|------------------|------------------------------|--------------------------|
| member           | role                         | lastLoginTime            |
|------------------|------------------------------|--------------------------|
| javen@stackql.io | roles/bigquery.resourceAdmin | 2023-07-08T23:30:31.000Z |
|------------------|------------------------------|--------------------------|
| javen@stackql.io | roles/logging.admin          | 2023-07-08T23:30:31.000Z |
|------------------|------------------------------|--------------------------|
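The same approach should extend to other IAM scopes; the sketch below is a project-level variant and assumes a google.cloudresourcemanager.projects_iam_bindings resource analogous to the organizations resource used above (check the provider docs for the actual resource and field names):

SELECT
  split_part(json_extract(iam.members, '$[0]'), ':', 2) as member,
  iam.role as role,
  users.lastLoginTime
FROM google.cloudresourcemanager.projects_iam_bindings iam -- assumed resource name
LEFT OUTER JOIN googleadmin.directory.users users
  ON split_part(json_extract(iam.members, '$[0]'), ':', 2) = users.primaryEmail
WHERE users.domain = 'stackql.io'
  AND iam.projectsId = '<YOUR_GCP_PROJECT>'; -- assumed parameter name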

Let us know what you think!

· 3 min read
info

stackql is a dev tool that allows you to query, manage, and perform analytics against cloud and SaaS resources in real time using SQL, which developers and analysts can use for CSPM, assurance, user access management reporting, IaC, XOps and more.

We are pleased to announce the addition of support for OUTER JOIN operations in StackQL queries. This is a significant addition to the language, and we are excited to see what our users will do with it!

info

An OUTER JOIN is a type of JOIN operation that returns all records from one table (or StackQL resource) and only those records from a second table or resource where the joined fields are equal (i.e. the JOIN condition is met). If there is no match, the missing side of the JOIN is filled with NULL values.

OUTER JOIN operations are important because they allow you to combine data from two or more resources (within a StackQL provider or across StackQL providers), even when there is no match between the two resources. This is a common scenario when performing analytics and reporting on user access management (for example between an IdP (like Okta) and a resource provider like AWS or Google).
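As a minimal sketch of this behaviour (using the same resources and join key as the worked example in the next section), every row from the left-hand resource is returned, and the right-hand columns are null wherever no match is found:

select
  aws_users.UserName as aws_user_name,
  json_extract(google_users.name, '$.fullName') as google_user_name -- null where there is no matching Google Workspace user
from aws.iam.users aws_users
left outer join googleadmin.directory.users google_users
  on lower(substr(aws_users.UserName, 1, 5)) = lower(substr(json_extract(google_users.name, '$.fullName'), 1, 5))
where aws_users.region = 'us-east-1'
  and google_users.domain = 'stackql.io';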

Using OUTER JOIN operations in StackQL

If you wanted to find all users in your AWS account that have not logged in to their account in the last 20 days, and compare that to the same information for users in your Google Workspace account, you could use an OUTER JOIN operation to do this. The following query:

select
  aws_users.UserName as aws_user_name
  ,aws_users.PasswordLastUsed as aws_last_Login_time
  ,CASE
    WHEN aws_users.PasswordLastUsed = '' then 'false'
    WHEN ( strftime('%Y-%m-%d %H:%M:%SZ', aws_users.PasswordLastUsed) > ( datetime('now', '-20 days') ) ) then 'true'
    else 'false' end as aws_is_active
  ,json_extract(google_users.name, '$.fullName') as google_user_name
  ,google_users.lastLoginTime as google_last_Login_time
  ,CASE
    WHEN google_users.lastLoginTime is null then 'false'
    WHEN google_users.lastLoginTime = '' then 'false'
    WHEN ( strftime('%Y-%m-%d %H:%M:%SZ', google_users.lastLoginTime) > ( datetime('now', '-20 days') ) ) then 'true'
    else 'false' end as google_is_active
from
  aws.iam.users aws_users
LEFT OUTER JOIN
  googleadmin.directory.users google_users
  ON lower(substr(aws_users.UserName, 1, 5)) = lower(substr(json_extract(google_users.name, '$.fullName'), 1, 5))
WHERE aws_users.region = 'us-east-1'
  AND google_users.domain = 'stackql.io';

would produce a result like this:

|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| aws_user_name          | aws_last_Login_time  | aws_is_active | google_user_name | google_last_Login_time   | google_is_active |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| demo-stackql-cicd-user | null                 | false         | null             | null                     | false            |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| github_actions         | null                 | false         | null             | null                     | false            |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| jeffrey.aven           | 2023-06-30T04:29:14Z | true          | null             | null                     | false            |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| kieran.rimmer          | 2023-06-03T08:40:49Z | false         | Kieran Rimmer    | 2023-06-23T06:01:46.000Z | true             |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|
| ...                    | ...                  | ...           | ...              | ...                      | ...              |
|------------------------|----------------------|---------------|------------------|--------------------------|------------------|

Currently only LEFT OUTER JOIN support is available, but we will be adding support for RIGHT OUTER JOIN and FULL OUTER JOIN in the near future. Stay tuned!
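In the meantime, a RIGHT OUTER JOIN can be approximated by reversing the table order in a LEFT OUTER JOIN, and a FULL OUTER JOIN can be approximated by unioning two LEFT OUTER JOINs. A rough sketch using the resources and join key from the example above (assuming standard SQL UNION semantics):

-- sketch only: approximate a FULL OUTER JOIN with two LEFT OUTER JOINs
select
  aws_users.UserName as aws_user_name,
  json_extract(google_users.name, '$.fullName') as google_user_name
from aws.iam.users aws_users
left outer join googleadmin.directory.users google_users
  on lower(substr(aws_users.UserName, 1, 5)) = lower(substr(json_extract(google_users.name, '$.fullName'), 1, 5))
where aws_users.region = 'us-east-1' and google_users.domain = 'stackql.io'
union
select
  aws_users.UserName as aws_user_name,
  json_extract(google_users.name, '$.fullName') as google_user_name
from googleadmin.directory.users google_users
left outer join aws.iam.users aws_users
  on lower(substr(aws_users.UserName, 1, 5)) = lower(substr(json_extract(google_users.name, '$.fullName'), 1, 5))
where aws_users.region = 'us-east-1' and google_users.domain = 'stackql.io';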

· 4 min read
info

stackql is a dev tool that allows you to query and manage cloud and SaaS resources using SQL, which developers and analysts can use for CSPM, assurance, user access management reporting, IaC, XOps and more.

This quick start guide outlines how to create a superset + stackql dashboard on your laptop using docker desktop, helm, and kubernetes. We certainly do not want to go into depth on superset, a third-party application, so this guide is terse.

Supplying secrets

In this example, we supply credentials for the aws, azure, digitalocean, github, and google providers (matching the secret values shown below).

All of the associated principals must be granted access using provider-specific access controls.

NOTE keep all of these values secret and certainly do not commit them into source control. We have supplied examples for numerous providers, and we suggest that you configure only what you need.

Create a file helm/stackql-dashboards/secrets/secret-values.yaml, containing the following, replacing placeholders:

stackql:
  extraSecretEnv:
    AWS_ACCESS_KEY_ID: '<your aws access key id>'
    AWS_SECRET_ACCESS_KEY: '<your aws secret key>'
    AZURE_CLIENT_ID: '<your azure client id>'
    AZURE_CLIENT_SECRET: '<your azure client secret>'
    AZURE_TENANT_ID: '<your azure tenant id>'
    DIGITALOCEAN_TOKEN: '<your digitalocean token>'
    STACKQL_GITHUB_TOKEN: '<your github personal access token>'
    GOOGLE_APPLICATION_CREDENTIALS: '/opt/stackql/config/google-credentials.json'
  extraSecrets:
    google-credentials.json: |
      <full google json key>

superset:
  init:
    adminUser:
      password: 'mypassword'

Expand templates and deploy locally

Here we will set up and expose a local dashboard using the local kubernetes cluster supplied with docker desktop.

These steps assume that your kubectl config is pointed at your local cluster (depending on your version of docker, something like kubectl config use-context docker-desktop should do the trick) and that you execute from the root directory of the stackql-cloud repository. We will let the system dynamically assign a local port.

helm dependency update  helm/stackql-dashboards

helm template --release-name v1 --namespace default --set superset.service.type=NodePort --set superset.service.nodePort.http="" -f helm/stackql-dashboards/secrets/secret-values.yaml helm/stackql-dashboards > helm/stackql-dashboards/out/stackql-demo-dashboards.yaml

kubectl apply -f helm/stackql-dashboards/out/stackql-demo-dashboards.yaml

Log into and set up superset

Allow a minute or so for init actions to complete.

First, inspect the output of kubectl get svc and note the host port for the service v1-superset. In my case, I see (redacted):

$ kubectl get svc | grep NodePort      
v1-superset NodePort ... ... 8088:31930/TCP ...

So, my local port is 31930 on this occasion. Hereafter let us refer to this port as <SUPERSET_LOCAL_PORT>.

Go to your browser address bar and punch in http://localhost:<SUPERSET_LOCAL_PORT>. Log in using admin / mypassword (or other if you reconfigured), and then you can begin using superset.

From the top RHS Settings dropdown, select Database Connections. Then, select the + DATABASE button (just below Settings) and do the following (the password does not matter in this context, add anything you want):

Initial database settings

Press "CONNECT"

Follow up database settings

Press "FINISH"

NOTE: we have enabled DML here so that meta queries like show and describe will work. You certainly do not have to do this if you don't want to.
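For example, with the stackql database connection in place, meta queries like these (assuming the google provider has been pulled by the stackql backend) can be run from SQL Lab:

show services in google;

describe google.compute.machine_types;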

Experiment

Here we present a simple GCP scenario; you can follow the same pattern to create many charts and populate a dashboard...

Navigate to SQL > SQL Lab and then input the below, replacing <your gcp project> with a google project your service account can access:

select name, guestCpus from google.compute.machine_types where project = '<your gcp project>' and zone = 'australia-southeast1-a';

Press "RUN SELECTION"

A table of results should appear.

Press "Save" > "Save Dataset"

Give it whatever name you want.

You can click the option to create a chart immediately or navigate to your chart via the Charts menu item.

Once inside the UI for your new dataset, do something like this (we will leave it to your creativity)...

My First Chart

...and then...

Chart To Dashboard

Click on "SAVE & GO TO NEW DASHBOARD", and you have your first dashboard + stackql!

Dashboard

· 6 min read

This exercise will show you how to run a real-time query across your AWS and Google cloud environments. You may do this for inventory analysis, security analysis, or any other reason you can think of. We will use stackql to query the state of your cloud resources across your AWS and Google environments. You can also use stackql to provision, de-provision or manage resources across different cloud and SaaS providers.

The steps we will take are:

  1. Prepare your environment for stackql usage.
  2. Use stackql to provision some resources in the cloud (optional).
  3. Use stackql to query resources present in the cloud.
  4. Use stackql to tear down resources created in step (2), if any. Important: you must destroy any resources created through this exercise, or you will incur ongoing charges.

Preparation

For this exercise, credentials with privileges against google and aws are required. It is outside the scope of this document to go into great detail on the various topics and options relevant to this. Instead, the steps below provide both (i) references to vendor documentation and (ii) suggestions for workarounds to get yourself going.

for old hands

All the materials required for this exercise are:

  1. A current stackql executable.
  2. A Google Service Account Key JSON file, where the corresponding Service Account possesses permissions sufficient to create, interrogate and delete compute block storage.
  3. AWS credentials stored in the traditional AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, where the corresponding IAM user possesses permissions sufficient to create, interrogate and delete ec2 block storage.

step by step

First, please do the following:

  1. Download and install stackql from our website.
  2. For google:
    • (i) Create and download a Google Service Account Key as per Google documentation. Remember the location of your key file.
    • (ii) You will need to grant the Service Account at least read, list, create, and delete privileges. For more information about google iam and Service Accounts in particular, please consult the documentation. For this exercise, granting your service account the roles/compute.storageAdmin role would be adequate.
  3. For AWS:
    • (i) Create and download AWS user credentials as per AWS documentation. We will require long-lived credentials. In keeping with vendor advice, we strongly recommend against using root user credentials. We have created a dedicated CICD user for this exercise.
    • (ii) Set up the AWS CLI environment variables as per the documentation.
    • (iii) The user will need create / read / delete privileges against ec2 volumes. This can be done through the AWS IAM console in various ways. For example, one can use groups and permission policies. Adding your user to a group with AmazonEC2FullAccess will certainly work, although lesser privileges may be adequate.

Then, create some shell variables:

# you will need to edit the file path as appropriate

GOOGLE_DOWNLOADED_KEY_FILE_PATH="/path/to/your/downloaded/key.json"

AWS_AUTH_FRAGMENT='{ "type": "aws_signing_v4", "credentialsenvvar": "AWS_SECRET_ACCESS_KEY", "keyIDenvvar": "AWS_ACCESS_KEY_ID" }'

GOOGLE_AUTH_FRAGMENT='{ "credentialsfilepath": "'"${GOOGLE_DOWNLOADED_KEY_FILE_PATH}"'", "type": "service_account" }'

export STACKQL_AUTH_CTX='{ "aws": '"${AWS_AUTH_FRAGMENT}"', "google": '"${GOOGLE_AUTH_FRAGMENT}"' }'
Setting up Provider Auth in PowerShell
$GOOGLE_DOWNLOADED_KEY_FILE_PATH = "C:\path\to\your\downloaded\key.json"

$AWS_AUTH_FRAGMENT = '{ "type": "aws_signing_v4", "credentialsenvvar": "AWS_SECRET_ACCESS_KEY", "keyIDenvvar": "AWS_ACCESS_KEY_ID" }'

$GOOGLE_AUTH_FRAGMENT = '{ "credentialsfilepath": "' + $GOOGLE_DOWNLOADED_KEY_FILE_PATH + '", "type": "service_account" }'

$env:STACKQL_AUTH_CTX = '{ "aws": ' + $AWS_AUTH_FRAGMENT + ', "google": ' + $GOOGLE_AUTH_FRAGMENT + ' }'

Start a stackql shell session

To start an interactive shell session, in the same shell where you set up your environment variables, run:

stackql --auth="${STACKQL_AUTH_CTX}" shell

You can exit at any time with ctrl + C.

Setup and meta queries to get started

StackQL providers are installed from the StackQL Provider Registry using the REGISTRY command. StackQL supports meta queries such as SHOW and DESCRIBE which can be used to explore the available services, resources, fields, and operations available in a given cloud or SaaS provider.

-- see available providers
registry pull list;

-- pull the required providers
registry pull google;

registry pull aws;

-- show the installed providers
show providers;

-- some meta queries
show services in google;

show resources in google.compute;

describe google.compute.disks;
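Two more meta queries can be handy before provisioning anything; SHOW METHODS lists the lifecycle operations available on a resource, and SHOW INSERT generates a template INSERT statement (a quick sketch; see the StackQL docs for the full set of meta queries supported in your version):

-- list the methods available on a resource
show methods in google.compute.disks;

-- generate a template insert statement for a resource
show insert into google.compute.disks;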

Create block storage (optional)

You will need to replace the items in <ANGLE_BRACKETS>.

-- create a google volume, await and verify creation completes successfully
insert /*+ AWAIT */ into google.compute.disks(
  project,
  zone,
  data__name,
  data__sizeGb
)
select
  '<YOUR_GCP_PROJECT>',
  'australia-southeast1-a',
  'my-stackql-demo-disk-01',
  '10';

-- create an aws volume, operation despatched on a BEST EFFORT basis
insert into aws.ec2.volumes(
  AvailabilityZone,
  Size,
  region)
select
  'ap-southeast-2a',
  10,
  'ap-southeast-2';

Interrogate cloud block storage


-- query one resource from google
select
  name,
  split_part(split_part(type, '/', 11), '-', 2) as type,
  status,
  sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a';

-- query the equivalent from aws
select
  volumeId as name,
  volumeType as type,
  status,
  size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- union the equivalent resources across clouds
select
  'google' as vendor,
  name,
  split_part(split_part(type, '/', 11), '-', 2) as type,
  status,
  sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
union
select
  'aws' as vendor,
  volumeId as name,
  volumeType as type,
  status,
  size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- create a view for convenience
create view dual_cloud_block_storage as
select
  'google' as vendor,
  name,
  split_part(split_part(type, '/', 11), '-', 2) as type,
  status,
  sizeGb as size
from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
union
select
  'aws' as vendor,
  volumeId as name,
  volumeType as type,
  status,
  size
from aws.ec2.volumes
where region = 'ap-southeast-2';

-- select from the newly created view, with ordering
select * from dual_cloud_block_storage order by name desc;

Delete block storage (if required)

This will only work if the disks are deletable. For example, aws.ec2.volumes must have status = available; you can check this with the view we created above.
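For instance, a quick pre-flight check using the view created earlier might look like this:

-- check the status of the aws volume before attempting to delete it
select name, status from dual_cloud_block_storage where vendor = 'aws';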

/* delete a google volume, await and verify deletion completes successfully.
One at a time only... */
delete /*+ AWAIT */ from google.compute.disks
where project = '<YOUR_GCP_PROJECT>'
and zone = 'australia-southeast1-a'
and disk = 'my-stackql-demo-disk-01';

-- delete an aws volume, operation despatched on a BEST EFFORT basis
delete from aws.ec2.volumes
where VolumeId = 'vol-049ee07b31aff451a'
and region = 'ap-southeast-2';

Verify the cleanup was successful

select * from dual_cloud_block_storage order by name desc;

That's it for the scripted demo!

Get involved

We Need Your Help!

If you find bugs, want features, or have tech questions, go to github.com/stackql/stackql/issues and raise the appropriate issue 🙏

· 2 min read

The StackQL Sumologic provider is now available in the public StackQL Provider Registry. Docs are available at sumologic provider docs.

StackQL is an intelligent API client which uses SQL as a front-end language. StackQL can be used for querying cloud and SaaS providers, as well as provisioning and lifecycle operations.

The StackQL Sumo provider can be used to query, create, update, and delete Sumologic collectors and sources, as well as to view and manage ingest budgets, health events, dashboards, user and account access and activity, and more.

Some example queries include:

SELECT id, name FROM sumologic.collectors.collectors WHERE region = 'au';

or using built-in functions to simplify and format query outputs, such as:

SELECT alive, datetime(lastSeenAlive/1000, 'unixepoch') AS lastSeenAliveUtc,
datetime(lastSeenAlive/1000, 'unixepoch', 'localtime') AS lastSeenAliveLocal
FROM sumologic.collectors.collectors
WHERE region = 'au' AND id = 116208196;

another example...

SELECT id, email,
firstName || ' ' || lastName AS fullName,
isMfaEnabled,
lastLoginTimestamp,
round(julianday('now') - julianday(lastLoginTimestamp), 0) as daysSinceLastLogin
FROM sumologic.users.users WHERE region = 'au';

An example using StackQL with the Sumologic provider to query users and roles and join the results to get a list of users and their roles:

SELECT u.email as email, r.name AS role
FROM sumologic.users.users u
JOIN sumologic.roles.roles r
ON JSON_EXTRACT(u.roleIds, '$[0]') = r.id
WHERE u.region = 'au' AND r.region = 'au';

An example using StackQL and Jupyter is shown here (see stackql/stackql-jupyter-demo):

Use StackQL and Jupyter to query SumoLogic

StackQL can also be used to provision objects in Sumologic; for instance, the following query can be used to create a collector:

INSERT INTO sumologic.collectors.collectors(region, data__collector)
SELECT 'au',
'{ "collectorType":"Hosted", "name":"My Hosted Collector", "description":"An example Hosted Collector", "category":"HTTP Collection" }';

Let us know what you think!