SonataMediaBundle - AWS S3: The configured bucket "my-bucket" does not exist

Spomsoree

I'm trying to configure the AWS S3 filesystem in my Sonata project, but I always get the following error:

The configured bucket "my-bucket" does not exist.

My sonata_media.yml:

cdn:
    server:
        path: http://%s3_bucket_name%.s3-website-%s3_region%.amazonaws.com

providers:
    image:
        filesystem: sonata.media.filesystem.s3
    file:
        resizer:    false
        allowed_extensions: ['pdf']
        allowed_mime_types: ['application/pdf', 'application/x-pdf']


filesystem:
    s3:
        bucket: '%s3_bucket_name%'
        accessKey: '%s3_access_key%'
        secretKey: '%s3_secret_key%'
        region: '%s3_region%'

I added the following parameters to my parameters.yml:

s3_bucket_name: my-bucket
s3_region: eu-central-1
s3_access_key: MY_ACCESS_KEY
s3_secret_key: MY_SECRET_KEY

At the moment I use this library:

    "aws/aws-sdk-php": "2.8.10"

(With the latest versions I got an error about the s3_region parameter.)

Bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

Needless to say, the bucket does exist.

Does anyone have an idea what the problem is?

Thomas Kekeisen

I ran into this issue too and spent about three hours fixing it.

TL;DR

I am pretty sure you are using aws-sdk-php 3, so you have to switch your service configuration to this:

services:
    acme.aws_s3.client:
        class: Aws\S3\S3Client
        factory: [Aws\S3\S3Client, 'factory']
        arguments:
            -
                version: latest
                region: '%amazon_s3.region%'
                credentials:
                    key: '%amazon_s3.key%'
                    secret: '%amazon_s3.secret%'

instead of this one:

services:
    acme.aws_s3.client:
        class: Aws\S3\S3Client
        factory: [Aws\S3\S3Client, 'factory']
        arguments:
            -
                key: '%amazon_s3.key%'
                secret: '%amazon_s3.secret%'
                region: '%amazon_s3.region%'

as described here. With the old, v2-style format, aws-sdk-php 3 does not pick up the top-level key and secret, so you were always connecting to AWS without any credentials.
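For completeness, here is how a client service like the one above can be wired into a Gaufrette filesystem (a sketch only: the service id `acme.aws_s3.client` comes from this answer, while the adapter/filesystem names and the `directory` prefix are placeholders I made up):

```yaml
knp_gaufrette:
    adapters:
        media_adapter:
            aws_s3:
                service_id: acme.aws_s3.client     # the S3Client service defined above
                bucket_name: '%s3_bucket_name%'
                options:
                    directory: uploads             # optional key prefix (assumption)
    filesystems:
        media_fs:
            adapter: media_adapter
```

The important point is that the bucket name and credentials now flow through the explicitly configured client, rather than through a client that was silently built without credentials.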

Configuring knp_gaufrette correctly

1) Create an IAM user

Don't use your root access key and secret to interact with Amazon S3. Instead, create a new IAM user with the access type "Programmatic access" and explicitly allow it to interact with a single bucket. I called my user s3-bucket-staging, and Amazon gave it the ARN arn:aws:iam::REMOVED:user/s3-bucket-staging.

(Screenshot: creating the IAM user in the AWS console)

You don't have to add the user to a group or attach any policies. Make sure you save the generated Access key ID and Secret access key, since this is the only time you will be able to see them.
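If you prefer the command line, the same user can be created with the AWS CLI (a sketch: it assumes the CLI is already configured with credentials that may manage IAM, and reuses the user name from above):

```shell
# Create the IAM user and generate its programmatic credentials
aws iam create-user --user-name s3-bucket-staging
aws iam create-access-key --user-name s3-bucket-staging
# The create-access-key output contains AccessKeyId and SecretAccessKey;
# store them now, as the secret cannot be retrieved again later.
```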

2) Edit your bucket policy

For a very basic setup with public read access but no public list permission (so people can access individual files but cannot list the bucket's contents), you can use the following bucket policy:

{
    "Version": "2012-10-17",
    "Id": "Policy1489062408719",
    "Statement": [
        {
            "Sid": "AllowGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        },
        {
            "Sid": "AllowListBucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::REMOVED:user/s3-bucket-staging"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::BUCKET-NAME"
        },
        {
            "Sid": "AllowPutObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::REMOVED:user/s3-bucket-staging"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        }
    ]
}
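To check that the policy behaves as intended, you can probe each statement (a sketch: BUCKET-NAME is a placeholder, and it assumes an AWS CLI profile named s3-bucket-staging configured with the new user's keys):

```shell
# PutObject and ListBucket should succeed with the IAM user's credentials
aws s3 cp test.txt s3://BUCKET-NAME/test.txt --profile s3-bucket-staging
aws s3 ls s3://BUCKET-NAME/ --profile s3-bucket-staging

# Anonymous GetObject should succeed; anonymous listing should be denied
curl -sf https://BUCKET-NAME.s3.amazonaws.com/test.txt
curl -sf https://BUCKET-NAME.s3.amazonaws.com/    # expect a 403 error
```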
