# S3 Proxy Credentials
The CloudTaser S3 proxy runs as a sidecar container alongside your workload. It intercepts S3 API calls on localhost:8190, encrypts object bodies using envelope encryption with an EU-hosted Vault Transit key, and forwards requests to the upstream S3 endpoint. The proxy re-signs every request using its own credentials -- the workload's original Authorization header is stripped and never forwarded upstream.
## How Re-Signing Works
```
Workload --> S3 SDK (signs with workload creds) --> http://localhost:8190
  --> S3 Proxy:
        1. Strips the original Authorization and X-Amz-* signing headers
        2. Encrypts the body (PutObject) or decrypts it (GetObject)
        3. Re-signs with the proxy's own credentials (SigV4)
        4. Forwards to the upstream S3 endpoint
```
The workload's credentials are not validated by the proxy -- it accepts any request on localhost. Only the proxy's credentials need access to the upstream S3 bucket.
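The re-signing step is the standard AWS Signature Version 4 computation. As a minimal stdlib-only sketch (illustrative only, not the proxy's actual code; function names are ours), the proxy effectively does the following after stripping the workload's headers:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def sign_v4(method, host, path, region, access_key, secret_key,
            body=b"", service="s3", now=None):
    """Compute SigV4 headers for a path-style S3 request (sketch)."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(body).hexdigest()

    # Canonical request: method, URI, query string, headers, signed headers, payload hash
    canonical_headers = (f"host:{host}\n"
                         f"x-amz-content-sha256:{payload_hash}\n"
                         f"x-amz-date:{amz_date}\n")
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join(
        [method, quote(path), "", canonical_headers, signed_headers, payload_hash])

    # String to sign, scoped to date/region/service
    scope = f"{datestamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    # Derive the signing key by chained HMACs, then sign
    key = _hmac(_hmac(_hmac(_hmac(("AWS4" + secret_key).encode(), datestamp),
                            region), service), "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return {
        "x-amz-date": amz_date,
        "x-amz-content-sha256": payload_hash,
        "Authorization": (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
                          f"SignedHeaders={signed_headers}, Signature={signature}"),
    }
```

Because the proxy produces a fresh signature over the (now-encrypted) body, the upstream S3 endpoint only ever sees the proxy's identity and the ciphertext.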
## Credential Scenarios
### EKS with IRSA (IAM Roles for Service Accounts)
The simplest setup on AWS. Both the workload and the proxy sidecar share the pod's service account, so both inherit the same IAM role.
How it works:
- The pod's service account is annotated with `eks.amazonaws.com/role-arn`
- The EKS mutating webhook injects `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` into all containers in the pod
- The proxy picks up the IRSA token automatically via the default credential chain
- No explicit credentials needed
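The webhook's injection can be sanity-checked from inside any container in the pod. A minimal sketch (the helper name is ours, not part of CloudTaser):

```python
import os


def irsa_credentials_present(env=os.environ):
    """Return the (role ARN, token file path) pair injected by the EKS
    webhook, or None if the pod has no IRSA configuration.

    Any SDK that uses the default credential chain resolves these two
    variables automatically; this helper only makes the mechanism visible.
    """
    role_arn = env.get("AWS_ROLE_ARN")
    token_file = env.get("AWS_WEB_IDENTITY_TOKEN_FILE")
    if role_arn and token_file:
        return role_arn, token_file
    return None
```

If this returns `None` in the proxy container, the service-account annotation or the webhook is misconfigured.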
IAM policy (attached to the IRSA role):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```
Workload change: Set S3 endpoint to http://localhost:8190.
### EKS with Pod Identity
Similar to IRSA but uses the newer EKS Pod Identity mechanism.
How it works:
- Pod Identity association links the service account to an IAM role
- EKS injects credentials via the pod identity agent
- Both workload and proxy sidecar containers receive credentials automatically
Workload change: Set S3 endpoint to http://localhost:8190.
**Pod Identity vs IRSA:** Pod Identity is the recommended approach for new EKS clusters. It removes the need for OIDC provider configuration and simplifies IAM role trust policies. Both work identically with the CloudTaser S3 proxy.
### GKE with Workload Identity
GKE Workload Identity binds a Kubernetes service account to a GCP service account. For S3-compatible access to GCS, the proxy uses HMAC keys.
How it works:
- The pod's KSA is bound to a GCP service account via Workload Identity
- GCS S3-compatible API requires HMAC keys (not OAuth2 tokens)
- HMAC keys are created for the GCP service account and injected as environment variables on the proxy sidecar
- The proxy uses `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with the GCS S3 endpoint
Proxy sidecar environment variables:
```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: gcs-hmac-credentials
        key: access-key
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: gcs-hmac-credentials
        key: secret-key
  - name: CLOUDTASER_S3PROXY_S3_ENDPOINT
    value: "https://storage.googleapis.com"
  - name: CLOUDTASER_S3PROXY_S3_REGION
    value: "auto"
```
**Creating HMAC keys:** HMAC keys are tied to a GCP service account. Create them via `gcloud storage hmac create SERVICE_ACCOUNT_EMAIL` or Terraform's `google_storage_hmac_key` resource. The GCS S3-compatible API uses SigV4 signing with region `auto`.
Workload change: Set S3 endpoint to http://localhost:8190.
### Static Credentials
For environments without IAM or Workload Identity, or for testing and development.
How it works:
- The proxy sidecar receives `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` via environment variables or mounted secrets
- These credentials are for the proxy only -- the workload can use different (or dummy) credentials
Proxy sidecar environment variables:
```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: s3-proxy-credentials
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: s3-proxy-credentials
        key: secret-access-key
```
**Static credentials and data sovereignty:** Static AWS access keys are long-lived credentials. Rotate them regularly, or prefer IAM roles (IRSA / Pod Identity) for production workloads. Note that the proxy credential grants access only to the encrypted ciphertext in S3 -- the encryption keys are held in your EU vault, so a leaked proxy credential does not expose plaintext data.
Workload change: Set S3 endpoint to http://localhost:8190.
## Workload SDK Configuration
The only change required in the workload is pointing the S3 client to the proxy sidecar on localhost.
| SDK | Environment Variable | Programmatic |
|---|---|---|
| AWS SDK v2 (Go, Python, JS) | `AWS_ENDPOINT_URL_S3=http://localhost:8190` | `endpoint_url="http://localhost:8190"` in client config |
| AWS SDK v1 (Go) | -- | `aws.Config{Endpoint: aws.String("http://localhost:8190")}` |
| AWS CLI | `AWS_ENDPOINT_URL_S3=http://localhost:8190` | `--endpoint-url http://localhost:8190` |
| boto3 (Python) | `AWS_ENDPOINT_URL_S3=http://localhost:8190` | `boto3.client('s3', endpoint_url='http://localhost:8190')` |
**Automatic endpoint injection:** The operator webhook can inject `AWS_ENDPOINT_URL_S3` automatically into the workload container when the `cloudtaser.io/s3-proxy` annotation is present. No manual SDK configuration needed.
## Path-Style vs Virtual-Hosted Style
The proxy expects path-style URLs: `http://localhost:8190/bucket/key`. Most S3 SDKs default to virtual-hosted style (`bucket.s3.amazonaws.com/key`), which does not work with localhost. Configure the SDK to use path-style:

| SDK | Setting |
|---|---|
| AWS SDK v2 (Go) | `UsePathStyle: true` in S3 client options |
| boto3 | `Config(s3={'addressing_style': 'path'})` |
| AWS CLI | `aws configure set s3.addressing_style path` |
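The difference between the two addressing styles is easiest to see in the URLs themselves. A small illustrative helper (not part of any SDK) that builds both forms:

```python
from urllib.parse import quote


def s3_url(endpoint: str, bucket: str, key: str, path_style: bool) -> str:
    """Build an S3 object URL in either addressing style.

    path_style=True puts the bucket in the URL path; virtual-hosted style
    prefixes it to the hostname. Path style is required when the endpoint
    is the localhost proxy, because 'bucket.localhost' does not resolve.
    """
    scheme, host = endpoint.split("://", 1)
    if path_style:
        return f"{scheme}://{host}/{bucket}/{quote(key)}"
    return f"{scheme}://{bucket}.{host}/{quote(key)}"
```

For example, `s3_url("http://localhost:8190", "my-bucket", "a/b.txt", path_style=True)` yields the `http://localhost:8190/my-bucket/a/b.txt` form the proxy expects.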
## Summary
| Scenario | Proxy Credential Source | Explicit Keys Needed? | Workload Change |
|---|---|---|---|
| EKS + IRSA | Pod's IAM role (automatic) | No | Endpoint only |
| EKS + Pod Identity | Pod Identity (automatic) | No | Endpoint only |
| GKE + Workload Identity | HMAC keys (env vars) | Yes (HMAC) | Endpoint only |
| Static credentials | Env vars / K8s Secret | Yes | Endpoint only |
In all cases, the workload's only required change is setting the S3 endpoint to http://localhost:8190. The proxy handles re-signing, encryption, and forwarding transparently.