
· 2 min read

Multi-cloud visibility, SecOps, FinOps, DevOps made easy

Today marks a significant milestone in the evolution of the InfraQL/StackQL project. The StackQL provider registry allows contributors to add support for different providers (major cloud, alt cloud, and SaaS providers) using a no-code approach. Developers simply add extensions to a provider's OpenAPI spec using configuration documents (currently supporting YAML and JSON, with future support for TOML and HCL). These extensions allow StackQL to map an ORM to provider services, resources, and methods.
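
As a purely illustrative sketch of what such an extension document could look like (the field names below are hypothetical, not the actual registry schema; see the Provider Registry Developer Guide for the real format):

# hypothetical extension document; the actual registry schema may differ
services:
  ec2:
    resources:
      instances:
        methods:
          list:
            operationId: DescribeInstances   # maps the resource method to an OpenAPI operation
          terminate:
            operationId: TerminateInstances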

For example, for a future AWS provider you could run discovery commands such as:

SHOW SERVICES IN aws;
/* shows the available services in AWS */
SHOW RESOURCES IN aws.ec2;
/* shows the available resources in the AWS EC2 service */
DESCRIBE aws.ec2.instances;
/* shows the available attributes in the aws.ec2.instances resource schema */
SHOW METHODS IN aws.ec2.instances;
/* shows available lifecycle methods – such as start, stop, etc. – which can be invoked using the EXEC command */
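
As a purely hypothetical sketch (the AWS provider did not exist yet, so the method and parameter names here are illustrative only), invoking a lifecycle method might look like:

EXEC aws.ec2.instances.stop @instanceId = 'i-0abc123def456';
/* hypothetical method and parameter names for a future AWS provider */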

Or create a new EC2 instance using:

INSERT INTO aws.ec2.instances (<field1>, <field2>, ...) SELECT '<value1>', '<value2>', ...;

View and report on instances and their properties using:

SELECT <col1>, <col2>, ... FROM aws.ec2.instances WHERE <condition>;

Or clean up resources using:

DELETE FROM aws.ec2.instances WHERE <condition>;
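
Filled in, a hypothetical session might look like the following (all field names and values are illustrative, following the data__ convention used by existing StackQL providers):

/* all identifiers below are illustrative */
INSERT INTO aws.ec2.instances (data__imageId, data__instanceType)
SELECT 'ami-0abc123def456', 't3.micro';

SELECT instanceId, instanceType
FROM aws.ec2.instances
WHERE region = 'us-east-1';

DELETE FROM aws.ec2.instances
WHERE instanceId = 'i-0abc123def456';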

The StackQL beta version supporting the provider registry is available for macOS (arm64 and amd64) and Linux, with a Windows version coming in the next few weeks.

Providers are currently available for Google and Okta; see the StackQL Provider Registry repo and the Developer Guide. We encourage developers to contribute and would be happy to assist: just raise an issue or a PR.

· 3 min read

Queries (particularly repetitive queries) that don't take advantage of results caching can lead to extraordinarily high bills.

StackQL, with its backend SQL engine, allows you to query BigQuery statistics in real time, including identifying queries which are not served from cache and understanding billable charges per query or time slice.

Here is a simple query to break a time period down into hours and show the total queries, the queries served from cache, and the total query charges per hour:

SELECT
  STRFTIME('%H', DATETIME(SUBSTR(JSON_EXTRACT(statistics, '$.startTime'), 1, 10), 'unixepoch')) AS hour,
  COUNT(*) AS num_queries,
  SUM(JSON_EXTRACT(statistics, '$.query.cacheHit')) AS using_cache,
  SUM(JSON_EXTRACT(statistics, '$.query.totalBytesBilled') * {{ .costPerByte }}) AS queryCost
FROM google.bigquery.jobs
WHERE projectId = '{{ .projectId }}'
AND allUsers = 'true'
AND minCreationTime = '{{ .minCreationTime }}'
AND maxCreationTime = '{{ .maxCreationTime }}'
AND state = 'DONE'
AND JSON_EXTRACT(statistics, '$.query') IS NOT NULL
GROUP BY STRFTIME('%H', DATETIME(SUBSTR(JSON_EXTRACT(statistics, '$.startTime'), 1, 10), 'unixepoch'));
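
Building on this, here is a sketch of a query to surface the most expensive individual queries (assuming the configuration field is returned as a JSON document, in the same way statistics is above):

SELECT
  JSON_EXTRACT(configuration, '$.query.query') AS query_text,
  JSON_EXTRACT(statistics, '$.query.totalBytesBilled') * {{ .costPerByte }} AS queryCost
FROM google.bigquery.jobs
WHERE projectId = '{{ .projectId }}'
AND allUsers = 'true'
AND minCreationTime = '{{ .minCreationTime }}'
AND maxCreationTime = '{{ .maxCreationTime }}'
AND state = 'DONE'
AND JSON_EXTRACT(statistics, '$.query') IS NOT NULL
ORDER BY queryCost DESC
LIMIT 10;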

Many more examples to come, including using this data to create visualisations in a Jupyter notebook. Stay tuned!

· 4 min read

Understanding roles is integral to applying the principle of least privilege to GCP environments.

A quick primer on roles in GCP

A Role in GCP is a collection of permissions to services and APIs on the platform. Roles are "bound" to principals or members (users, groups and service accounts).

These bindings are referred to as "policies" which are scoped at a particular level - organisation, folder, project, resource.

There are three types of roles - Primitive Roles, Predefined Roles and Custom Roles.

Primitive (or Basic) Roles

These are legacy roles set at the GCP project level, which include Owner, Editor, and Viewer. They are generally considered excessive in terms of permissions, and their use should be minimised if not avoided altogether.

Predefined Roles

These are roles with fine grained access to discrete services in GCP. Google has put these together for your convenience. In most cases predefined roles are the preferred mechanism to assign permissions to members.

Custom Roles

Custom roles can be created with a curated collection of permissions if required. Reasons for doing so include:

  • the permissions in predefined roles are excessive for your security posture
  • you want to combine permissions across different services and cannot find a suitable predefined role (although assigning multiple predefined roles to a given member is generally preferred)

Anatomy of an IAM Policy

An IAM Policy is a collection of bindings of one or more members (user, group or service account) to a role (primitive, predefined or custom). Policies are normally expressed as JSON objects as shown here:

{
"bindings": [
{
"members": [
"group:project-admins@my-cloud-identity-domain.com"
],
"role": "roles/owner"
},
{
"members": [
"serviceAccount:provisioner@my-project.iam.gserviceaccount.com",
"user:javen@avensolutions.com"
],
"role": "roles/resourcemanager.folderViewer"
}
]
}

Groups are Google Groups created in Cloud Identity or Google Workspace (formerly known as G Suite).

Application of policies is an atomic operation, which will overwrite any existing policy attached to an entity (org, folder, project, resource).
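
Because of this, a common pattern is read-modify-write: fetch the current policy, merge in your new binding, then set the merged policy. A rough sketch of what this could look like in StackQL is below; the method and parameter names here are assumptions, not confirmed API, so check the google provider docs for the methods actually exposed:

/* assumed method and parameter names; verify against the google provider docs */
EXEC google.cloudresourcemanager.projects.getIamPolicy @projectsId = 'my-project';
/* merge your new binding into the returned policy, then apply the merged result: */
EXEC google.cloudresourcemanager.projects.setIamPolicy @projectsId = 'my-project'
  @@json = '{ "policy": { "bindings": [ ... ] } }';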

Querying Roles with StackQL

Predefined and primitive roles are defined in the roles resource in StackQL (google.iam.roles), which returns the following fields (as shown by DESCRIBE google.iam.roles):

| Name | Description |
|------|-------------|
| name | Name of the role in the format roles/[{service}.]{role} for predefined or basic roles, or qualified for custom roles, e.g. organizations/{org_id}/roles/[{service}.]{role} |
| description | An optional, human-readable description for the role |
| includedPermissions | An array of permissions this role grants (only displayed with view = 'FULL') |
| etag | Output only, used internally for consistency |
| title | An optional, human-readable title for the role (visible in the Console) |
| deleted | A read-only boolean field showing the current deleted state of the role |
| stage | The current launch stage of the role, e.g. ALPHA |
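
For example, to list the generally available roles with their friendly titles (a quick sketch; the stage filter here may be applied by StackQL's SQL engine rather than by the API itself):

SELECT name, title, stage
FROM google.iam.roles
WHERE stage = 'GA';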

Get the name for a role

Often, you may know the "friendly" title for a role, like "Logs Bucket Writer", but you need the actual role name to use in an IAM policy, which is roles/logging.bucketWriter. A simple query to find this using StackQL is shown here:

SELECT name
FROM google.iam.roles
WHERE title = 'Logs Bucket Writer';
/* RETURNS:
|----------------------------|
| name |
|----------------------------|
| roles/logging.bucketWriter |
|----------------------------|
*/

Conversely, if you have the name but want the friendly title you could use:

SELECT title
FROM google.iam.roles
WHERE name = 'roles/logging.bucketWriter';
/* RETURNS:
|--------------------|
| title |
|--------------------|
| Logs Bucket Writer |
|--------------------|
*/

Wildcards can also be used with the LIKE operator; for example, to get the name and title for each predefined role in the logging service you could run:

SELECT name, title
FROM google.iam.roles
WHERE name LIKE 'roles/logging.%';

Get the permissions for a role

To return the includedPermissions you need to add the following WHERE clause:

WHERE view = 'FULL'

An example query to list the permissions for a given role is shown here:

SELECT includedPermissions
FROM google.iam.roles
WHERE view = 'FULL' AND
name = 'roles/cloudfunctions.viewer';
/* RETURNS
["cloudbuild.builds.get","cloudbuild.builds.list",...]
*/

A more common challenge is that you know a particular permission, such as cloudfunctions.functions.get, and you want to know which roles contain it. To do this, you could run the following query:

SELECT name, title
FROM google.iam.roles
WHERE view = 'FULL'
AND includedPermissions LIKE '%cloudfunctions.functions.get%';

Creating custom roles and more...

In forthcoming articles, we will demonstrate how you can create custom roles using StackQL INSERT operations, as well as how you can construct a simple IAM framework to manage and provision access to resources in GCP. Stay tuned!

· 2 min read

I grappled with Terraform for the better part of a day trying to provision a GKE Autopilot cluster in a Shared VPC service project. With StackQL, I was able to do it in 2 minutes. This is how...

Before starting, you will need the following to use GKE Autopilot in your Shared VPC:

  • control plane IP address range
  • control plane authorized networks (if desired)
  • the host network and node subnet you intend to use
  • pod and services secondary CIDR ranges

(all of the above would typically be pre-provisioned in the Shared VPC design and deployment)

Step 1: Using the GCP console, navigate to your service project, then go to Kubernetes Engine --> Clusters --> Create --> GKE Autopilot --> Configure. Enter all of the desired configuration options (including the network configuration specified above). Do not select CREATE.

Step 2: At the bottom of the dialog used to configure the cluster in the console, use the Equivalent REST button to generate the GKE Autopilot API request body.

Step 3: Supply this as input data to a StackQL INSERT command, either via an iql file or as inline configuration. Optionally, you can convert this to Jsonnet and parameterise it for use in other environments.

<<<json
{
  "cluster": {
    ..from equivalent REST command..
  }
}
>>>

INSERT INTO google.container.`projects.locations.clusters` (
  parent,
  data__cluster
)
SELECT 'projects/my-svc-project/locations/australia-southeast1',
       '{{ .cluster }}';
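
If you do convert the request body to Jsonnet as suggested above, a minimal parameterised sketch could look like the following (the field values are illustrative; the full body comes from the Equivalent REST output):

// illustrative fragment only; take the complete body from the Equivalent REST output
local region = 'australia-southeast1';
{
  cluster: {
    name: 'autopilot-cluster-1',
    autopilot: { enabled: true },
    network: 'projects/my-host-project/global/networks/shared-vpc',
    subnetwork: 'projects/my-host-project/regions/' + region + '/subnetworks/node-subnet',
  },
}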

easy!

· 2 min read

Jsonnet is a fantastic configuration language as discussed in Using Jsonnet to Configure Multiple Environments. Going slightly beyond the basics, this article is an introduction to anonymous functions and the map and format methods in the Jsonnet standard library.

Similar to map methods in various other functional programming languages and data processing frameworks, map in Jsonnet evaluates a named or anonymous function for each element within an array. map is a higher-order function, meaning it is a function that takes another function as an argument. Its signature is shown here:

std.map(func, arr)

The func argument can be a named function or an unnamed (or anonymous) function. arr is the input array, which can include embedded dictionaries or other lists as well.
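
For example, a trivial self-contained sketch that doubles each element of an array with an anonymous function:

// evaluates the anonymous function once for each element of the array
std.map(function(x) x * 2, [1, 2, 3])   // yields [2, 4, 6]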

In this example, I am templating some config for a NAT gateway in GCP for use in a StackQL routine, where I have a list of external IPs that need to be formatted in the Google selfLink format. This is a perfect use for the map method, as well as the format method, which is similar to printf or equivalent commands in various other languages. The easiest way to use format is similar to the way you would invoke its Python equivalent:

"%s/%s/%s" % [string1, string2, string3]

Putting it all together in the following practical example, you can see the input Jsonnet in the Jsonnet tab and the templated (rendered) output in the JSON tab. x represents an element of the extIps array; for each element, the function returns the fully qualified selfLink URL.

{
  local project_id = 'myproject-123',
  local region = 'australia-southeast1',
  // no trailing slash here, as the format string below inserts one
  local self_link_prefix = 'https://compute.googleapis.com/compute/v1/projects',
  local extIps = [
    { name: 'syd-extip1', region: region },
    { name: 'syd-extip2', region: region },
  ],

  nats: [
    {
      name: 'nat-config',
      natIpAllocateOption: 'MANUAL_ONLY',
      // map an anonymous function over extIps to build fully qualified selfLink URLs
      natIps: std.map(function(x) '%s/%s/regions/%s/addresses/%s' % [self_link_prefix, project_id, x.region, x.name], extIps),
      sourceSubnetworkIpRangesToNat: 'ALL_SUBNETWORKS_ALL_IP_RANGES',
    },
  ],
}

more to come...