Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

To avoid the error "No 'Access-Control-Allow-Origin' header is present on the requested resource":

  • Enable CORS on your S3 bucket
  • Forward the appropriate headers on your CloudFront distribution

Enable CORS on S3 Bucket

In S3 -> [your bucket] -> Permissions -> Cross-origin resource sharing (CORS):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
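
If you prefer the CLI over the console, the same rules can be applied with aws s3api. Note that put-bucket-cors expects the rules wrapped in a CORSRules key, unlike the console editor (the bucket name below is a placeholder):

$ cat > cors.json <<'EOF'
{
    "CORSRules": [
        {
            "AllowedHeaders": ["*"],
            "AllowedMethods": ["GET"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": []
        }
    ]
}
EOF

$ aws s3api put-bucket-cors \
--bucket your-bucket \
--cors-configuration file://cors.json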

ref:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

Configure Behaviors on CloudFront Distribution

In CloudFront -> [your distribution] -> Behaviors -> Create Behavior:

  • Path Pattern: *
  • Allowed HTTP Methods: GET, HEAD, OPTIONS
  • Cached HTTP Methods: +OPTIONS
  • Origin Request Policy: Managed-CORS-S3Origin
    • This managed policy includes the following headers in origin requests:
      • Access-Control-Request-Headers
      • Access-Control-Request-Method
      • Origin

ref:
https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/

Validate that it's working:

fetch("https://metadata.perp.exchange/config.production.json")
    .then((res) => res.json())
    .then((out) => { console.log(out); })
    .catch((err) => { throw err; });
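
Alternatively, check the response headers with curl. S3 and CloudFront only emit CORS headers when the request carries an Origin header (the origin below is arbitrary):

$ curl -sI \
-H "Origin: https://example.com" \
https://metadata.perp.exchange/config.production.json | \
grep -i "access-control"

If the grep prints nothing, the header is still missing; remember to invalidate the CloudFront cache after changing the configuration.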

Upload files to Amazon S3 when Travis CI builds pass

Assume that you want to upload a xxx.whl file generated by pip wheel to Amazon S3 so that you can run pip install https://url/to/s3/bucket/xxx.whl.

CAUTION! By default, only builds on the master branch can trigger deployments in Travis CI.

Configuration

before_install:
  - pip install -U pip
  - pip install wheel

script:
  - python setup.py test

before_deploy:
  - pip wheel --wheel-dir=wheelhouse .

deploy:
  provider: s3
  access_key_id: "YOUR_KEY"
  secret_access_key: "YOUR_SECRET"
  bucket: YOUR_BUCKET
  acl: public_read
  local_dir: wheelhouse
  upload_dir: wheels
  skip_cleanup: true

# install from a URL directly
$ pip install https://url/to/s3/bucket/wheels/xxx.whl
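
Checking the secret key into .travis.yml in plaintext is risky for a public repository. Travis CI supports encrypted values via the travis CLI (a Ruby gem); a sketch, assuming the gem is installed and you run it from the repository root:

$ gem install travis
$ travis encrypt YOUR_SECRET --add deploy.secret_access_key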

ref:
https://docs.travis-ci.com/user/deployment/s3

Setup a static website on Amazon S3

Say that you would like to host your static site on Amazon S3 with a custom domain and, of course, HTTPS.

Create two S3 buckets

To serve requests from both the root domain, such as codetengu.com, and the subdomain, such as www.codetengu.com, you must create two buckets named exactly codetengu.com and www.codetengu.com.

In this post, I assume that you want to redirect www.codetengu.com to codetengu.com.
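
The buckets can also be created from the command line; the region below is just an example:

$ aws s3 mb s3://codetengu.com --region ap-northeast-1
$ aws s3 mb s3://www.codetengu.com --region ap-northeast-1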

ref:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html

Upload your static files

$ cd /path/to/your_project_root/

$ aws s3 sync . s3://codetengu.com \
--acl "public-read" \
--exclude "*.DS_Store" \
--exclude "*.gitignore" \
--exclude ".git/*" \
--dryrun

$ aws s3 website s3://codetengu.com --index-document index.html --error-document error.html

ref:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

Setup bucket policy for public accessing

In your S3 Management Console, click codetengu.com bucket > Properties > Edit bucket policy, enter:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::codetengu.com/*"
        }
    ]
}
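
The same policy can be applied with the CLI, assuming it is saved as policy.json:

$ aws s3api put-bucket-policy \
--bucket codetengu.com \
--policy file://policy.json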

Setup www redirecting

In your S3 Management Console, click www.codetengu.com bucket > Properties > Static Website Hosting, choose Redirect all requests to another host name, type codetengu.com.
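
The equivalent CLI command:

$ aws s3api put-bucket-website \
--bucket www.codetengu.com \
--website-configuration '{"RedirectAllRequestsTo": {"HostName": "codetengu.com"}}'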

Now you're able to access your website via the S3 website endpoint: http://codetengu.com.s3-website-ap-northeast-1.amazonaws.com

Configure a custom domain

In the "Setting Up a Static Website Using a Custom Domain" guide I mentioned above, it uses Amazon Route 53 to manage DNS records; In this post, I use CloudFlare as my website's DNS provider instead.

  • Create a CNAME for codetengu.com to point to codetengu.com.s3-website-ap-northeast-1.amazonaws.com
  • Create a CNAME for www.codetengu.com to point to codetengu.com.s3-website-ap-northeast-1.amazonaws.com

Yep, you CAN create a CNAME record for the root domain on CloudFlare (it flattens the record to A records behind the scenes), just like you can add an "Alias" record on Route 53.

Wait for the DNS records to propagate, then visit https://codetengu.com/.
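
You can check whether the records have propagated with dig:

$ dig +short codetengu.com
$ dig +short www.codetengu.com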

awscli: Command-line Interface for Amazon Web Services

awscli is the official command-line interface for Amazon Web Services (AWS).

ref:
https://github.com/aws/aws-cli

Configuration

$ pip install awscli

$ aws configure
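
The configure command prompts for four values; the access keys below are AWS's documented placeholders:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: ap-northeast-1
Default output format [None]: json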

ref:
https://docs.aws.amazon.com/cli/latest/index.html

S3

Download A Folder

$ aws s3 sync \
s3://files.vinta.ws/static/images/stickers/ \
.

ref:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-managing-objects

Rename A Folder

$ aws s3 cp \
s3://files.vinta.ws/static/images/stickers_BACKUP/ \
s3://files.vinta.ws/static/images/stickers/ \
--recursive

ref:
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html

Make A Folder Public Read

$ aws s3 sync \
s3://files.vinta.ws/static/ \
s3://files.vinta.ws/static/ \
--grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers

Upload Files

# also make them public read
$ aws s3 cp \
. \
s3://files.vinta.ws/static/images/stickers/ \
--recursive \
--grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers

$ aws s3 cp \
db.sqlite3 \
s3://files.albedo.one/

# sync is recursive by default, so no --recursive flag is needed (or accepted)
$ aws s3 sync \
./ \
s3://files.albedo.one/ \
--exclude "*" --include "*.pickle"

Copy Files Between S3 Buckets

$ aws s3 sync s3://your_bucket_1/media s3://your_bucket_2/media \
--acl "public-read" \
--exclude "track_audio/*"

Remove Files

$ aws s3 rm s3://your_bucket_1/media/track_audio --recursive

ref:
https://docs.aws.amazon.com/cli/latest/reference/s3/rm.html

Grant Access to a Single S3 Bucket via Amazon IAM

Create an IAM user that is only allowed to access specific resources.

Go to Users > Attach User Policy > Policy Generator on the web console.

ref:
https://console.aws.amazon.com/iam/home?#users

Example 1

Allow full access to a certain bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::files.albedo.one",
                "arn:aws:s3:::files.albedo.one/*"
            ]
        }
    ]
}
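
To attach it from the CLI instead of the console, as an inline user policy (the user and policy names below are hypothetical, and the JSON above is assumed to be saved as policy.json):

$ aws iam put-user-policy \
--user-name backup-bot \
--policy-name files-albedo-one-full-access \
--policy-document file://policy.json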

Example 2

For BackWPup, a WordPress plugin:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::files.vinta.ws",
                "arn:aws:s3:::files.vinta.ws/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3:Put*"
            ],
            "Resource": [
                "arn:aws:s3:::files.vinta.ws",
                "arn:aws:s3:::files.vinta.ws/*"
            ]
        }
    ]
}
