gocloud - writing data to a bucket: 403
2022-12-23
Categories: DevOps Programming
Problem
We are writing some integration tests using the Go CDK. After writing some data to a bucket:
```go
writer, err := buckOut.NewWriter(ctx, fileDst, nil)
if err != nil {
    logger.Errorf("failed to write to fileDst: %v", err)
    return err
}
defer writer.Close()
```
we got an error when reading:
```
(code=NotFound): storage: object doesn't exist
```
Reading the documentation, I noticed this:
> Closing the writer commits the write to the provider, flushing any buffers, and releases any resources used while writing, so you must always check the error of Close.
So, let's check that error to see what happens:
```go
defer func() {
    if closeErr := writer.Close(); closeErr != nil {
        logger.Errorf("failed to close the writer: %v", closeErr)
    }
}()
```
Here’s the result:
```
googleapi: Error 403: prod-iac@infra-prod.iam.gserviceaccount.com does not have storage.objects.create access to the Google Cloud Storage object.
Permission 'storage.objects.create' denied on resource (or it may not exist)., forbidden
```
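This is exactly why the plain `defer writer.Close()` hid the failure: the 403 happens at commit time, inside `Close`. The effect can be reproduced with any `io.WriteCloser`; here is a self-contained sketch (a stand-in type, not the gocloud API) showing how the error-checked defer surfaces a commit error that a bare `defer w.Close()` would silently drop:

```go
package main

import (
	"errors"
	"fmt"
)

// failingWriter is a stand-in for a bucket writer whose Close
// (the "commit" step) can fail, e.g. with a 403.
type failingWriter struct{}

func (failingWriter) Write(p []byte) (int, error) { return len(p), nil }
func (failingWriter) Close() error                { return errors.New("403: forbidden") }

// write propagates the Close error through the named return value,
// so the caller sees the commit failure instead of a silent success.
func write() (err error) {
	w := failingWriter{}
	defer func() {
		if closeErr := w.Close(); closeErr != nil && err == nil {
			err = fmt.Errorf("failed to close the writer: %w", closeErr)
		}
	}()
	_, err = w.Write([]byte("data"))
	return err
}

func main() {
	fmt.Println(write()) // prints: failed to close the writer: 403: forbidden
}
```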
We have a script to activate staging service account:
```bash
#!/bin/bash

echo $GCP_SERVICE_ACCOUNT_STAGING | base64 -d > /credential.json
gcloud auth activate-service-account --key-file=/credential.json
export GOOGLE_APPLICATION_CREDENTIALS=/credential.json
```
And it's called in the Bitbucket pipeline:
```yaml
script:
  - /gcp-auth staging
  - go test -count=1 -race -v ./...
```
Why is it still using the prod account instead of the staging account?
Troubleshooting
First, we need to understand how Application Default Credentials (ADC) works. As you can see in the above doc, ADC searches for credentials in the following locations, in order:
- `GOOGLE_APPLICATION_CREDENTIALS` environment variable
- User credentials set up with the Google Cloud CLI
- The attached service account, as provided by the metadata server
So it looks like ADC is using the attached service account provided by the metadata server. Why is it not using `GOOGLE_APPLICATION_CREDENTIALS`?
So, let's see what happens when executing a shell script, by adding `pstree -p $$` at the end:
```bash
#!/bin/bash

export GOOGLE_APPLICATION_CREDENTIALS=/credential.json
pstree -p $$
```
```
$ ./gcp-auth.sh
 \-+= 00407 quanta -fish
 \-+= 19287 quanta bash
 \-+= 30895 quanta /bin/bash ./gcp-auth.sh
 \-+- 30896 quanta pstree -p 30895
 \--- 30897 root ps -axwwo user,pid,ppid,pgid,command
```
```
$ echo $GOOGLE_APPLICATION_CREDENTIALS

```
Here you can see that `gcp-auth.sh` is executed in a subshell (`/bin/bash`). Since a child process cannot alter its parent's environment, `GOOGLE_APPLICATION_CREDENTIALS` is never exported to the parent shell, so ADC falls back to the attached service account from the metadata server.
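You can reproduce this without gcloud at all. A minimal sketch (using a throwaway script at the hypothetical path `/tmp/setvar.sh`):

```shell
unset DEMO_VAR

# A child script that exports a variable.
cat > /tmp/setvar.sh <<'EOF'
export DEMO_VAR=from-script
EOF

# Executing it forks a child bash; the export dies with the child.
bash /tmp/setvar.sh
echo "after execute: '${DEMO_VAR}'"   # prints: after execute: ''

# Sourcing runs the same lines in the current shell, so the export sticks.
. /tmp/setvar.sh
echo "after source: '${DEMO_VAR}'"    # prints: after source: 'from-script'
```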
Solution
To run a shell script in the current shell, use `source` or `.`:
```
$ source gcp-auth.sh
 \-+= 35696 quanta -fish
 \-+= 35845 quanta bash
 \-+= 37050 quanta pstree -p 35845
 \--- 37051 root ps -axwwo user,pid,ppid,pgid,command
```
```
$ echo $GOOGLE_APPLICATION_CREDENTIALS
/tmp/credential.json
```
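Applied to the pipeline, the fix is to source the auth script instead of executing it (a sketch assuming the step layout shown earlier stays the same):

```yaml
script:
  - source /gcp-auth staging
  - go test -count=1 -race -v ./...
```

Now `GOOGLE_APPLICATION_CREDENTIALS` is set in the shell that runs `go test`, so ADC picks up the staging key file.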