Using Regional Google Storage Buckets with S3 APIs
Learn how to use regional GCS buckets with S3 API compatibility for data sovereignty and compliance. This guide covers creating a bucket, setting permissions, and using HMAC for S3 API access. Test your setup with rclone and create presigned URLs for secure file sharing.
For organizations prioritizing data sovereignty and compliance, regional Google Storage buckets with S3 API compatibility are a strong option. This post is a quick guide to getting started with them.
First, create a bucket in a region that has a supported regional endpoint. We'll use us-central1 for our example.
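If you'd rather use the CLI than the console, something like the following should do it (the bucket name matches the one used later in this post; project and other flags are up to you):
gcloud storage buckets create gs://demo-bucket-for-aimoda-blog --location=us-central1 --uniform-bucket-level-access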
Next, give a service account the appropriate permissions. In our case, we'll give it access to read and write files. (If we were only using this service account to share existing files, then Storage Object Viewer alone would be the correct and more secure setup.)
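On the CLI, a bucket-level IAM binding along these lines grants read/write access; the service account email here is a placeholder, and Storage Object Admin is one reasonable role choice for reading and writing objects:
gcloud storage buckets add-iam-policy-binding gs://demo-bucket-for-aimoda-blog --member="serviceAccount:gcs-s3-demo@my-project.iam.gserviceaccount.com" --role="roles/storage.objectAdmin"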
And finally, create an HMAC key for the service account. This will allow us to make standard S3 API calls from any application that supports S3 (which is a lot of them).
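With gcloud this is a single command (again, the service account email is a placeholder); it prints an access ID and a secret, which become your S3 access key and secret key:
gcloud storage hmac create gcs-s3-demo@my-project.iam.gserviceaccount.com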
That's it! To test out our new bucket, we will use rclone.
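Recent rclone versions support GCS as a provider of the S3 backend, so a minimal remote in rclone.conf might look like the sketch below. The access key and secret come from the HMAC step above, and the regional endpoint hostname follows the storage.REGION.rep.googleapis.com pattern (double-check the exact hostname for your region in Google's docs):
[gcs-testing-1]
type = s3
provider = GCS
access_key_id = GOOG1EXAMPLEACCESSID
secret_access_key = EXAMPLESECRETKEY
endpoint = https://storage.us-central1.rep.googleapis.com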
We'll copy in a small text file as a test.
rclone copy --progress --header-upload='Cache-Control: public, immutable, max-age=31536000, s-maxage=31536000' --header-upload='Content-Language: en' test.1.txt gcs-testing-1:demo-bucket-for-aimoda-blog/
And use the handy rclone link command to make a presigned URL.
rclone link gcs-testing-1:demo-bucket-for-aimoda-blog/test.1.txt
Which will print out a URL that looks similar to the one below. (Note: this URL won't work by the time the blog post is published; it's just for demo purposes.)
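If you want a shorter-lived URL, rclone link also accepts an --expire flag on backends that support expiry, for example:
rclone link --expire 1h gcs-testing-1:demo-bucket-for-aimoda-blog/test.1.txt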
You can tell this endpoint really is in us-central1 by comparing its latency from probes around the world with the standard storage.googleapis.com endpoint. (Thanks to Globalping!)
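For a quick-and-dirty version of the same check without Globalping, curl's timing variables work from any machine you have handy; the regional hostname below is the same assumed endpoint as in the rclone config above:
curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' https://storage.us-central1.rep.googleapis.com/
curl -o /dev/null -s -w 'connect: %{time_connect}s  ttfb: %{time_starttransfer}s\n' https://storage.googleapis.com/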