
How I Built This Site

This site was built using a mix of infrastructure as code (IaC) and CI/CD. It is designed to rebuild itself completely in AWS S3 after every push to the master branch. Below are the steps I took to deploy it.

Register Domain and Create Certificate

The first steps to make this a production-level site were to register a domain and obtain a certificate. Both were fairly straightforward by following the AWS docs.

I ended up registering mkrochta.com and created an ACM Certificate for it.

Deploy my CloudFormation to build out most of the IaC.

Next I created and deployed a CloudFormation template to build out my infrastructure. The template contained resources for the following:

  • S3 Bucket
  • Bucket Policy
  • CloudFront
  • Origin Access Identity
  • WAFV2 WebACL with rules - detached in 2022 since this is a static site and to avoid the AWS costs of running WAF.

Below is a sample of the template:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: Deploys a Static S3 Website behind a CloudFront Distribution protected by WAF.
Parameters:
  FullDomainName:
    Description: FQDN for website
    Type: String
  CertificateARN:
    Type: String
    Description: The ACM Certificate ARN for your website's domain name.
  BucketName:
    Type: String
    Description: The name for your S3 bucket.
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: Private
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
      BucketName:
        Ref: BucketName
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
    DeletionPolicy: Retain
  BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket:
        Ref: Bucket
      PolicyDocument:
        Id: CFOAIPolicy
        Version: '2012-10-17'
        Statement:
          - Sid: AllowCloudFrontOAI
            Effect: Allow
            Action: s3:GetObject
            Resource:
              Fn::Join:
                - ''
                - - 'arn:aws:s3:::'
                  - Ref: Bucket
                  - "/*"
            Principal:
              AWS:
                Fn::Sub: 'arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${OriginAccessID}'
  OriginAccessID:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment:
          Ref: Bucket
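
The sample above stops short of the CloudFront distribution itself. A trimmed sketch of that resource, matching the settings described later in this post (SNI, a modern minimum TLS version, and a US-only geo restriction), might look like this; the exact property values are illustrative rather than my production config:

```yaml
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Aliases:
          - Ref: FullDomainName
        DefaultRootObject: index.html
        Origins:
          - Id: S3Origin
            DomainName:
              Fn::GetAtt: [Bucket, RegionalDomainName]
            S3OriginConfig:
              OriginAccessIdentity:
                Fn::Sub: origin-access-identity/cloudfront/${OriginAccessID}
        DefaultCacheBehavior:
          TargetOriginId: S3Origin
          ViewerProtocolPolicy: redirect-to-https
          # AWS managed "CachingOptimized" cache policy
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6
        ViewerCertificate:
          AcmCertificateArn:
            Ref: CertificateARN
          SslSupportMethod: sni-only
          MinimumProtocolVersion: TLSv1.2_2021
        Restrictions:
          GeoRestriction:
            RestrictionType: whitelist
            Locations:
              - US
```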

Create an IAM user.

After the majority of the infrastructure was configured, I created an IAM user to access the S3 bucket. I scoped the IAM policy down so it could only reach the bucket created earlier, and limited the allowed actions as well.

Below is a sample IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": [
                "arn:aws:s3:::INSERT-BUCKET-NAME",
                "arn:aws:s3:::INSERT-BUCKET-NAME/*"
            ],
            "Sid": "S3CICDPerms",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ]
        }
    ]
}

I then created a set of IAM access/secret keys to be used in GitHub.

Create GitHub Repo / Configure Secrets

Next I created a new private GitHub repo.

Once that was created I configured two Secrets in the GitHub repo for my IAM keys.

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

Note: I also cloned the repo locally at this point and created an initial MkDocs project to make sure the initial config would all be in place.

Configure GitHub Actions

After that I created a CI/CD pipeline by configuring GitHub Actions.

To do this I created a new file in my repo at .github/workflows/main.yml.

In the main.yml I configured steps to do the following:

  • Set up Python
  • Install dependencies (pip, mkdocs, mkdocs-material theme)
  • Build the site (mkdocs build)
  • Configure AWS credentials
  • Deploy the static site to the S3 bucket (aws s3 sync ./site/ s3://INSERT-BUCKET-NAME --delete)
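
Put together, a main.yml along those lines might look like the following. The action versions and region here are assumptions, and the bucket name stays a placeholder as above:

```yaml
name: Deploy site
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: pip install mkdocs mkdocs-material
      - name: Build site
        run: mkdocs build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to S3
        run: aws s3 sync ./site/ s3://INSERT-BUCKET-NAME --delete
```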

Configure Site

At this time my site was mostly up and running. Any push to master would rebuild my MkDocs site in the S3 bucket.

The only thing left to do was to add Markdown pages and reference them in the mkdocs.yml file.
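
As a rough sketch, assuming a page under a skillscerts/ directory (a path that comes up again later in this post), the mkdocs.yml might look like the following; the nav entries are illustrative:

```yaml
site_name: mkrochta.com
theme:
  name: material
nav:
  - Home: index.md
  - Skills / Certs: skillscerts/index.md
```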

Build Lambda@Edge function

After configuring my site pages I tested it by going directly to the CloudFront URL. SUCCESS - the home page worked perfectly!

However, clicking any other page came back with an AccessDenied error, like the one you get when you hit an S3 path you don't have permission to access. After some googling I found that this is a known issue with S3 origins in CloudFront when subdirectories contain index.html files.

Basically, a request for mkrochta.com/skillscerts caused CloudFront to ask S3 for the object skillscerts instead of skillscerts/index.html, and since no such object exists, S3 returned AccessDenied.

To fix this I had to deploy a Lambda@Edge function that would add default directory indexes to each subdirectory following these instructions.
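
The rewrite logic boils down to a small origin-request handler. Below is a sketch of that logic in Python; the AWS example I followed is written in Node.js, and the handler structure here is my own:

```python
# Lambda@Edge origin-request handler sketch: appends index.html to
# directory-style URIs so CloudFront can fetch subdirectory pages from S3.

def lambda_handler(event, context):
    # CloudFront delivers the origin request under Records[0].cf.request
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    if uri.endswith("/"):
        # /skillscerts/ -> /skillscerts/index.html
        request["uri"] = uri + "index.html"
    elif "." not in uri.split("/")[-1]:
        # /skillscerts -> /skillscerts/index.html (last path segment has
        # no file extension, so treat it as a directory)
        request["uri"] = uri + "/index.html"

    # Returning the (possibly rewritten) request forwards it to the origin
    return request
```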

Map DNS

After deploying the Lambda@Edge function my site was fully up and running. The final step I did was map mkrochta.com as an Alias to the CloudFront distribution in my Route 53 hosted zone.
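
If this record were folded into the CloudFormation template, it might look roughly like the sketch below, assuming a CloudFront resource named Distribution (which is not part of the template sample shown earlier). The hosted zone ID shown is the fixed value AWS uses for all CloudFront alias targets:

```yaml
  DNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mkrochta.com.
      Name: mkrochta.com
      Type: A
      AliasTarget:
        # Fixed hosted zone ID for CloudFront distributions
        HostedZoneId: Z2FDTNDATAQYW2
        DNSName:
          Fn::GetAtt: [Distribution, DomainName]
```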

Final Product / Security Considerations

With everything deployed and configured my final product ended up being a fully functional static website that is served through a CDN and protected by a WAF.

Security was considered in every piece of infrastructure built, including the following:

  • S3 Bucket Encryption with AES256.
  • S3 Bucket Policy is limited to the CloudFront OAI.
  • S3 Bucket Public Access Settings are all set to block.
  • CloudFront uses the latest TLS version and SSL support method.
  • CloudFront has a GeoRestriction to whitelist the US only.
  • CloudFront uses an ACM Certificate.
  • CloudFront & WAFV2 are used at all. I could have just left this as a public static bucket.
  • WAFV2 is configured with a default block policy.
  • WAFV2 has a number of AWS managed rules attached.
  • WAFV2 has a GeoRestriction rule to whitelist the US only.

Things to Improve Upon

There's always room for improvement! To further enhance this I could do the following:

  • Automate creating an IAM user with the IAM policy to the bucket.
  • Automate creating the certificate.
  • Automate the Lambda@Edge function I had to create to get subdirectory content to load correctly.