Automating Aurora DSQL and Cloudflare Hyperdrive Integration with Workers
Learn how to automatically manage rotating credentials and IAM tokens for your PostgreSQL databases using Cloudflare Workers. Perfect for teams using Hyperdrive with Aurora DSQL, or with any database that requires periodic credential updates.

When using Cloudflare Hyperdrive with databases that employ rotating credentials or IAM-based authentication, you need a reliable way to keep your connection configurations up to date. This is particularly relevant for services like Amazon Aurora DSQL, which requires IAM tokens that expire after a week.
In this post, we'll walk through a practical solution using Cloudflare Workers to automatically manage Hyperdrive configurations for Aurora DSQL endpoints. The Worker runs on a schedule, generating fresh IAM tokens and updating Hyperdrive configurations before the existing tokens expire.
While we're focusing on Aurora DSQL, the approach we'll discuss can be adapted for any PostgreSQL-compatible database that uses rotating credentials. The solution handles multiple database endpoints, supports different AWS regions, and includes error handling for production use.
The code we'll examine:
- Generates AWS IAM tokens for database authentication
- Manages Hyperdrive configurations through the Cloudflare API
- Runs automatically via Cron Triggers
- Supports multiple database endpoints
- Includes proper logging
Let's dive into the implementation details. (Or skip straight to the GitHub repository, linked at the end of this post.)
Understanding IAM Authentication with Aurora DSQL
Before we dive into the code, let's clarify how authentication works with Aurora DSQL. Unlike traditional PostgreSQL setups where you might use a static username and password, Aurora DSQL uses AWS IAM authentication. This means:
- Instead of a password, you need an IAM token
- These tokens are actually signed URLs that expire after a maximum of 7 days
- The token serves as both authentication and authorization
Here's what a typical authentication flow looks like:
// This signed URL becomes your "password"
const token = await generateDbConnectAdminAuthToken(
  "your-cluster.region.dsql.amazonaws.com",
  "us-east-1",
  "DbConnectAdmin",
  "YOUR_ACCESS_KEY",
  "YOUR_SECRET_KEY"
);
// The connection details end up looking like this
const connection = {
  host: "your-cluster.region.dsql.amazonaws.com",
  user: "admin",    // The default username
  password: token,  // The signed URL serves as the password
  port: 5432,
  database: "postgres"
};
The Challenge with Hyperdrive
Cloudflare Hyperdrive needs stable connection details to cache and accelerate your database queries. However, with Aurora DSQL's rotating credentials, we need a way to:
- Generate new IAM tokens before the old ones expire
- Update Hyperdrive's configuration with the new tokens
- Do this automatically and reliably
- Handle multiple database endpoints if you're using cross-region replication
This is where our Worker comes in. It acts as the automation layer between AWS IAM authentication and Hyperdrive configuration management.
The Worker Implementation
Let's break down the key components of our Worker. The core functionality revolves around two main tasks: generating IAM tokens and managing Hyperdrive configurations.
Generating IAM Tokens
We use the aws4fetch library to handle AWS signature generation. Here's the token generation function:
import { AwsV4Signer } from 'aws4fetch';

async function generateDbConnectAdminAuthToken(
  yourClusterEndpoint: string,
  region: string,
  action: string,
  accessKeyId: string,
  secretAccessKey: string
): Promise<string> {
  // Construct the base URL with the maximum allowed expiration
  const url = new URL(`https://${yourClusterEndpoint}`);
  url.searchParams.set('Action', action);
  url.searchParams.set('X-Amz-Expires', '604800'); // 7 days in seconds

  // Create and configure the AWS V4 signer
  const signer = new AwsV4Signer({
    url: url.toString(),
    method: 'GET',
    service: 'dsql',
    region,
    accessKeyId,
    secretAccessKey,
    signQuery: true, // Required for URL signing
  });

  // Generate the signed URL
  const { url: signedUrl } = await signer.sign();

  // Return without the https:// prefix (AWS requirement)
  return signedUrl.toString().substring('https://'.length);
}
Managing Hyperdrive Configurations
The Worker handles both creating new configurations and updating existing ones through a single upsertConfig function:
import Cloudflare from 'cloudflare';

async function upsertConfig(
  client: Cloudflare,
  accountId: string,
  endpoint: EndpointConfig,
  existingConfig: any | undefined,
  password: string
) {
  const origin: HyperdriveOrigin = {
    scheme: 'postgres',
    database: 'postgres',
    user: 'admin',
    host: endpoint.host,
    port: 5432,
    password, // This is our IAM token
  };

  if (existingConfig) {
    return await client.hyperdrive.configs.edit(existingConfig.id, {
      account_id: accountId,
      origin,
    });
  }

  return await client.hyperdrive.configs.create({
    account_id: accountId,
    name: endpoint.configName,
    origin,
  });
}
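The EndpointConfig and HyperdriveOrigin types referenced above aren't shown in these excerpts. A plausible sketch, inferred from how the fields are used (the repository's actual definitions may differ):
// Inferred from usage in the snippets above.
interface EndpointConfig {
  configName: string; // Name of the Hyperdrive configuration
  host: string;       // Aurora DSQL cluster endpoint
  region: string;     // AWS region of the cluster
}

interface HyperdriveOrigin {
  scheme: 'postgres';
  database: string;
  user: string;
  host: string;
  port: number;
  password: string; // The signed IAM token
}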
Scheduled Execution
The Worker runs on a schedule defined in wrangler.toml:
[env.production.triggers]
crons = [ "30 15 * * *" ] # Runs daily at 15:30 UTC
When triggered, it:
- Fetches existing Hyperdrive configurations
- Generates new IAM tokens for all endpoints in parallel
- Updates or creates configurations with the new tokens
The scheduled handler orchestrates this process:
async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
  const client = new Cloudflare({
    apiToken: env.CLOUDFLARE_API_KEY_HYPERDRIVE,
  });

  // Define your endpoints
  const endpoints: EndpointConfig[] = [
    {
      configName: "dsql-admin-demo-primary",
      host: env.AWS_DSQL_ENDPOINT_PRIMARY,
      region: env.AWS_DSQL_REGION_PRIMARY,
    },
    // Add secondary endpoints as needed
  ];

  // Generate tokens in parallel
  const tokens = await Promise.all(
    endpoints.map(ep => generateDbConnectAdminAuthToken(/*...*/))
  );

  // Update configurations
  await Promise.all(
    endpoints.map((ep, index) => upsertConfig(/*...*/))
  );
}
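The handler's first step, fetching existing configurations, is elided above. One way to implement it uses the Cloudflare SDK's paginated list endpoint and matches configurations by name (a sketch; the repository may organize this differently):
// A sketch of looking up an existing Hyperdrive config by name.
async function findConfigByName(
  client: Cloudflare,
  accountId: string,
  name: string
) {
  // The SDK auto-paginates list results as an async iterable.
  for await (const config of client.hyperdrive.configs.list({ account_id: accountId })) {
    if (config.name === name) {
      return config;
    }
  }
  return undefined;
}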
Deployment and Configuration
Let's set up the Worker with proper security and configuration management. We'll use Wrangler's environment-based configuration and secure secret storage.
Environment Configuration
First, create a wrangler.toml that separates development and production settings:
name = "hyperdrive-dsql-demo-worker-dev"
main = "src/index.ts"
compatibility_date = "2024-11-11"
[vars]
ENVIRONMENT = "dev"
AWS_DSQL_REGION_PRIMARY = "us-east-2"
AWS_DSQL_REGION_SECONDARY = "us-east-1"
[env.production]
name = "hyperdrive-dsql-demo-worker"
[env.production.vars]
ENVIRONMENT = "production"
AWS_DSQL_REGION_PRIMARY = "us-east-2"
AWS_DSQL_REGION_SECONDARY = "us-east-1"
[env.production.triggers]
crons = [ "30 15 * * *" ]
IAM Policy Configuration
Your AWS IAM user needs specific permissions to generate database tokens. Here's a secure inline policy that limits access to specific regions and clusters:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDSQLAccess",
      "Effect": "Allow",
      "Action": [
        "dsql:DbConnectAdmin",
        "dsql:DbConnect"
      ],
      "Resource": [
        "arn:aws:dsql:us-east-2:ACCOUNT:cluster/CLUSTER-ID",
        "arn:aws:dsql:us-east-1:ACCOUNT:cluster/CLUSTER-ID"
      ]
    },
    {
      "Sid": "DenyAccessOutsideUS",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "us-east-2"
          ]
        }
      }
    }
  ]
}
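You can attach this as an inline policy with the AWS CLI. For example, assuming the JSON is saved as dsql-policy.json and your IAM user is named hyperdrive-dsql-worker (both placeholder names):
# Attach the inline policy to the IAM user (placeholder names)
aws iam put-user-policy \
  --user-name hyperdrive-dsql-worker \
  --policy-name dsql-token-generation \
  --policy-document file://dsql-policy.json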
Testing Locally
Before deploying, test the Worker locally:
# Start the local development server
wrangler dev
# Test the scheduled event
curl "http://localhost:60345/__scheduled?cron=30+15+*+*+*"
Watch the console output for:
- Successful token generation
- Hyperdrive configuration updates
- Any error messages
Here's an example of a successful run:
Starting Hyperdrive configuration update...
Generating authentication tokens for each Aurora DSQL endpoint...
[generateDbConnectAdminAuthToken] Starting IAM token generation...
[generateDbConnectAdminAuthToken] Preparing to sign URL: https://REMOVED.dsql.us-east-2.on.aws/?Action=DbConnectAdmin&X-Amz-Expires=604800
[generateDbConnectAdminAuthToken] Starting IAM token generation...
[generateDbConnectAdminAuthToken] Preparing to sign URL: https://REMOVED.dsql.us-east-1.on.aws/?Action=DbConnectAdmin&X-Amz-Expires=604800
Fetching existing Hyperdrive configurations from Cloudflare...
[generateDbConnectAdminAuthToken] Signed URL obtained
[generateDbConnectAdminAuthToken] Trimmed token (used as password): REMOV...
[generateDbConnectAdminAuthToken] Signed URL obtained
[generateDbConnectAdminAuthToken] Trimmed token (used as password): REMOV...
Waiting for all IAM tokens to be generated...
Upserting Hyperdrive configurations for each endpoint...
Creating configuration for endpoint "dsql-admin-demo-primary"
Creating configuration for endpoint "dsql-admin-demo-secondary"
Created new configuration: dsql-admin-demo-primary
Created new configuration: dsql-admin-demo-secondary
Hyperdrive configuration update complete!
Production Deployment
Deploy to production with:
wrangler deploy -e production
The Worker will now run daily at 15:30 UTC, maintaining fresh IAM tokens for your Hyperdrive configurations.
Managing Secrets
For sensitive values, use Wrangler's secret management. Never commit these to your repository:
# AWS Credentials
wrangler secret put -e production AWS_DSQL_ACCESS_KEY_ID
wrangler secret put -e production AWS_DSQL_SECRET_ACCESS_KEY
# Aurora DSQL Endpoints
wrangler secret put -e production AWS_DSQL_ENDPOINT_PRIMARY
wrangler secret put -e production AWS_DSQL_ENDPOINT_SECONDARY
# Cloudflare Configuration
wrangler secret put -e production CLOUDFLARE_API_KEY_HYPERDRIVE
wrangler secret put -e production CLOUDFLARE_ACCOUNT_ID
When running these commands, Wrangler will prompt you to enter each value securely. For example:
$ wrangler secret put -e production AWS_DSQL_ACCESS_KEY_ID
Enter a secret value: ********
🌟 Successfully created secret AWS_DSQL_ACCESS_KEY_ID
These secrets are encrypted and stored securely in your Cloudflare account, and are only accessible to your Worker at runtime.
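Together with the [vars] in wrangler.toml, these secrets make up the Worker's Env binding. A sketch of the corresponding TypeScript interface, using the names from the snippets in this post:
// Inferred from the vars and secrets above; keep in sync with wrangler.toml.
interface Env {
  ENVIRONMENT: string;
  AWS_DSQL_REGION_PRIMARY: string;
  AWS_DSQL_REGION_SECONDARY: string;
  // Secrets (set via `wrangler secret put`)
  AWS_DSQL_ACCESS_KEY_ID: string;
  AWS_DSQL_SECRET_ACCESS_KEY: string;
  AWS_DSQL_ENDPOINT_PRIMARY: string;
  AWS_DSQL_ENDPOINT_SECONDARY: string;
  CLOUDFLARE_API_KEY_HYPERDRIVE: string;
  CLOUDFLARE_ACCOUNT_ID: string;
}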
Best Practices and Considerations
When running this Worker in production, there are a few things to keep in mind:
Token Lifecycle
The IAM tokens we generate are valid for 7 days (604800 seconds), which is the maximum allowed by Aurora DSQL. Our Worker runs daily, meaning we're refreshing tokens well before they expire. This gives us a comfortable safety margin:
- Tokens are refreshed 6 times before the original expires
- Even if multiple updates fail, we have several days to investigate and fix issues
- No need for complex error handling or immediate alerting
Monitoring
While we don't need complex error handling, it's still useful to monitor the Worker's execution. Cloudflare provides a straightforward way to do this through Workers observability:
[env.production.observability]
enabled = true
head_sampling_rate = 1
With this configuration, you can view the detailed execution logs in the Cloudflare dashboard.
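You can also stream logs live from the deployed Worker, for example during or right after a scheduled run:
# Stream live logs from the production Worker
wrangler tail -e production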
Using the Hyperdrive Connection
Once configured, connecting from another Worker to your database through Hyperdrive is straightforward; see Cloudflare's Hyperdrive documentation for details.
This Worker handles all the complexity of IAM authentication behind the scenes, allowing your other Workers to remain simple and focused on business logic.
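To make that concrete, here's a minimal sketch of a consuming Worker using Postgres.js, assuming a Hyperdrive binding named HYPERDRIVE in that Worker's wrangler.toml:
// A sketch of a consuming Worker; assumes a [[hyperdrive]] binding named
// HYPERDRIVE pointing at one of the configurations created above, and the
// Hyperdrive type from @cloudflare/workers-types.
import postgres from 'postgres';

export interface Env {
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Hyperdrive's connection string already carries the current IAM token.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      const rows = await sql`SELECT now() AS server_time`;
      return Response.json(rows);
    } finally {
      // Close the connection after the response is sent.
      ctx.waitUntil(sql.end());
    }
  },
};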
Conclusion
We've demonstrated how to automate the integration between Aurora DSQL and Cloudflare Hyperdrive using a Worker. This solution solves a specific problem: maintaining valid IAM authentication tokens for databases that use rotating credentials.
The key advantages of this approach are:
- Fully automated token management
- Support for multiple database endpoints
- Simple, maintainable code
- Minimal operational overhead
Complete Solution
The entire implementation is available in a single Worker file, with clear separation of concerns:
- Token generation using AWS V4 signing
- Hyperdrive configuration management
- Scheduled execution via Cron Triggers
Next Steps
If you're implementing this solution, you might want to:
- Write a Worker that uses Hyperdrive with Postgres.js
- Add more database endpoints for multiple projects
- Customize the logging to send an email from your Worker using AWS SES if there are any errors
Source Code
The complete source code for this implementation is available on our GitHub.
https://github.com/aimoda/hyperdrive-dsql-demo-worker
Feel free to adapt it for your specific use case!