React hosted in AWS S3 with CloudFront and Route53

Learn how to easily deploy and host your React application as an AWS S3 website, with CloudFront for distribution and Route53 for DNS

Photo by Árpád Czapp on Unsplash

Hi people!

One of the challenges of developing a website is to decide where to host it and how to deploy it easily.

AWS offers the option of using S3 buckets as static websites, giving all the availability and reliability of S3 with the convenience of being able to host your web application.

In addition to S3 website hosting, AWS also offers the possibility of easily connecting it to CloudFront, a fast content delivery network (CDN) service to securely deliver your website globally with low latency and high speeds.

And last, AWS has a DNS service called Route53 that lets you easily add DNS routing to your website. So, instead of accessing your website through your S3 or CloudFront URL, you’ll be able to access it via your own domain, like https://my-website.com

In this story, I will show you how to easily create your infrastructure in AWS with Terraform and deploy a React application to S3 using GitHub Actions.

Requirements

- NodeJS
- npm
- AWS Account

Let’s get to it

Let’s start by generating our React app. We are going to use Vite for that, a fast, modern frontend build tool for scaffolding and managing projects:

npm create vite@latest my-react-app -- --template react-ts

This will generate a new React + TypeScript application.

Now, move everything except the .git folder into a new folder named app.
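The move can be sketched as a small shell loop. The demo/ files below are stand-ins for what Vite generates; in your repository, you would run the same loop against the real files at the root:

```shell
# demo setup: stand-in files for a freshly generated Vite project
mkdir -p demo/src demo/.git
touch demo/package.json demo/index.html demo/.gitignore

# move everything into demo/app/, except app itself and the .git directory
mkdir demo/app
for entry in demo/* demo/.[!.]*; do
  name=$(basename "$entry")
  case "$name" in app|.git) continue ;; esac
  if [ -e "$entry" ]; then mv "$entry" demo/app/; fi
done
```

Dotfiles like .gitignore need the second glob pattern, since `*` alone does not match them.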

You then should have a folder structure similar to:

.
└── app/
    ├── public/
    │   └── vite.svg
    ├── src/
    │   ├── assets/
    │   │   └── .gitkeep
    │   ├── App.css
    │   ├── App.tsx
    │   ├── index.css
    │   ├── main.tsx
    │   └── vite-env.d.ts
    ├── .eslintrc.cjs
    ├── .gitignore
    ├── index.html
    ├── package.json
    ├── README.md
    ├── tsconfig.json
    ├── tsconfig.node.json
    └── vite.config.ts

Now, go into the app folder and install the dependencies with:

npm install

Then run the application with:

npm run dev

You should see a message in the console with some instructions and the localhost URL:

vite console message

You can go to the given URL, like http://localhost:5173 and you should have an initial website:

Initial vite React Typescript website

Great! We have a React application running. Now let’s move to the infrastructure and deployment to the S3 static website.

Building our infrastructure

Let’s use Terraform to create our infrastructure in AWS.

We start by creating an iac folder at the root level and add a providers.tf file to define our Terraform configuration:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "YOUR_BUCKET"
    key    = "state.tfstate"
  }
}

# Configure the AWS Provider
provider "aws" {}
Note that the backend section is optional; it lets Terraform keep track of your infrastructure state remotely. You need to create this bucket yourself before using Terraform to set up your infrastructure. Without a remote backend, state is stored locally, so a fresh CI run has no record of earlier runs and Terraform will try to create the infrastructure from scratch every time.
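If you haven't created the state bucket yet, a one-off sketch with the AWS CLI could look like this (the bucket name and region are placeholders; the LocationConstraint argument is required for any region other than us-east-1):

```shell
# create the bucket that will hold the Terraform state
aws s3api create-bucket \
  --bucket YOUR_BUCKET \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1

# versioning lets you recover earlier state files if something goes wrong
aws s3api put-bucket-versioning \
  --bucket YOUR_BUCKET \
  --versioning-configuration Status=Enabled
```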

Now, let’s create a website.tf file to define the website infrastructure:

resource "aws_s3_bucket" "website" {
  bucket = "YOUR_BUCKET_NAME"
}

resource "aws_s3_bucket_public_access_block" "website_bucket_public_access" {
  bucket                  = aws_s3_bucket.website.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "public_bucket_policy" {
  bucket = aws_s3_bucket.website.id
  policy = data.aws_iam_policy_document.bucket_policy.json
}

resource "aws_s3_bucket_website_configuration" "website_configuration" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}

data "aws_iam_policy_document" "bucket_policy" {
  statement {
    principals {
      type        = "*"
      identifiers = ["*"]
    }

    actions = [
      "s3:GetObject"
    ]

    resources = [
      "arn:aws:s3:::${aws_s3_bucket.website.bucket}/*"
    ]
  }
}

You need to replace YOUR_BUCKET_NAME with a globally unique bucket name of your choice.

Here we are defining our S3 bucket, enabling public access, setting a bucket policy that allows s3:GetObject on all objects, and configuring the bucket as a static website host.

Last, to deploy we’ll be using GitHub Actions. So create a folder .github/workflows and add a deploy-infrastructure.yml file:

name: Deploy Infrastructure

on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - iac/**/*
      - .github/workflows/deploy-infrastructure.yml

defaults:
  run:
    working-directory: iac/

jobs:
  terraform:
    name: "Terraform"
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: YOUR_REGION

      # Install the latest version of the Terraform CLI
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      # Initialize the working directory: load remote state, download providers, etc.
      - name: Terraform Init
        run: terraform init

      # Check that all Terraform configuration files adhere to the canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generate an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -out=plan -input=false

      # On push to "main", build or change infrastructure according to the plan
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false plan

Note that you need to set the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY secrets in your repository. And you need to replace YOUR_REGION with your region.

Your URL should be http://BUCKET_NAME.s3-website.REGION.amazonaws.com or, depending on the region, http://BUCKET_NAME.s3-website-REGION.amazonaws.com. If you’d like to see the exact URL, you can find it under Static website hosting in the Properties tab of your bucket.
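Since the endpoint follows a fixed pattern, you can also construct it yourself; a small sketch (the bucket name and region below are assumptions):

```shell
# the website endpoint follows a fixed pattern; some regions use a dot
# instead of a dash before the region (check the S3 console if unsure)
BUCKET=my-react-website
REGION=eu-west-1
echo "http://${BUCKET}.s3-website-${REGION}.amazonaws.com"
```

This prints `http://my-react-website.s3-website-eu-west-1.amazonaws.com`.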

Deploying your website

Now we just need to build and deploy our website through GitHub Actions to our S3 bucket. So let’s do that by creating a deploy-website.yml file in .github/workflows:

name: Deploy Website

on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - app/**/*
      - .github/workflows/deploy-website.yml

defaults:
  run:
    working-directory: app/

jobs:
  deploy:
    name: "Deploy"
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: YOUR_REGION

      - name: Setup NodeJS
        uses: actions/setup-node@v4
        with:
          node-version: 22

      - name: Install dependencies
        run: npm install

      - name: Build
        run: npm run build -- --mode production

      - name: Deploy to S3
        run: aws s3 sync dist/ s3://YOUR_WEBSITE_BUCKET_NAME

You need to replace YOUR_WEBSITE_BUCKET_NAME with the name of the bucket you defined as your website. Note that the trigger paths use app/**/* so that pushes to your application code kick off a deployment.

After pushing to GitHub and waiting for the workflow to complete, you can test your app under the URL given by your S3 bucket. You should see the same page as when running npm run dev locally.
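A couple of quick checks from the CLI can confirm the sync worked (the bucket name and region are placeholders):

```shell
# list what the workflow uploaded (expect index.html and an assets/ folder)
aws s3 ls s3://YOUR_WEBSITE_BUCKET_NAME/

# fetch the website endpoint and print only the HTTP status (expect 200)
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://YOUR_WEBSITE_BUCKET_NAME.s3-website-YOUR_REGION.amazonaws.com"
```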

Enabling distribution with CloudFront

Now we just need to link a CloudFront distribution to our S3 website for global content distribution.

In the iac folder, create a cloudfront.tf file with the following content:

locals {
  website_origin_id = "WebsiteBucket"
}

resource "aws_cloudfront_origin_access_control" "oac" {
  name                              = "ReactWebsite"
  description                       = "Example Policy"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.oac.id
    origin_id                = local.website_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "My React Website Distribution"
  default_root_object = "index.html"

  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.website_origin_id

    cache_policy_id = aws_cloudfront_cache_policy.website.id

    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# Sends requests to the origin and caches the responses
resource "aws_cloudfront_cache_policy" "website" {
  name = "react_cache_policy"

  parameters_in_cache_key_and_forwarded_to_origin {
    headers_config {
      header_behavior = "none"
    }

    cookies_config {
      cookie_behavior = "all"
    }

    query_strings_config {
      query_string_behavior = "all"
    }
  }
}

Here we are doing a few things:

- Create an Origin Access Control (OAC), a security feature that lets CloudFront securely access AWS services.
- Create a CloudFront distribution, linking CloudFront to our S3 static website.
- Set the custom error responses. This is very important because CloudFront needs to know where to go if something goes wrong. If you request /test, for example, CloudFront will display a default error message unless these settings are in place; serving /index.html instead lets the React app handle the route.
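One practical consequence of the cache policy above: a redeploy won't show up at the edge immediately. A common workaround is to invalidate the cache after each sync (the distribution ID below is a placeholder for yours):

```shell
# flush all cached paths so the next request fetches fresh objects from S3
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*"
```

You could add this as a final step of the Deploy Website workflow if stale caches become a problem.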

Now we need to update our bucket policy to allow only our CloudFront OAC to be able to access our resources. So in the website.tf, update the aws_iam_policy_document bucket_policy and block the bucket public access with the following:

resource "aws_s3_bucket" "website" {
  bucket = "YOUR_BUCKET_NAME"
}

resource "aws_s3_bucket_public_access_block" "website_bucket_public_access" {
  bucket                  = aws_s3_bucket.website.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_policy" "public_bucket_policy" {
  bucket = aws_s3_bucket.website.id
  policy = data.aws_iam_policy_document.bucket_policy.json
}

resource "aws_s3_bucket_website_configuration" "website_configuration" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}

data "aws_iam_policy_document" "bucket_policy" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    actions = [
      "s3:GetObject"
    ]

    resources = [
      "arn:aws:s3:::${aws_s3_bucket.website.bucket}/*"
    ]

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.s3_distribution.arn]
    }
  }
}

With this, push your code to GitHub, wait for the workflow to finish, and then go to the CloudFront console in AWS.

You should see your CloudFront Distribution and the domain name, which is how you can access it.

You can paste it into your browser to see your website running.
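From the command line, a quick sanity check looks like this (the domain below is a placeholder for the one shown in your console):

```shell
# print only the status line; expect a 200 once the distribution is deployed
curl -sI "https://YOUR_DISTRIBUTION_DOMAIN.cloudfront.net" | head -n 1
```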

Add a domain with Route53

Note that this step requires you to own a domain. You can buy one directly in Route53 or from any other domain registrar. I recommend Namecheap or Cloudflare.

Now we are going to add a domain to our CDN with Route53. If you are bringing your domain from another registrar, this step can take a while and requires some extra configuration. To learn more about the Route53 resources, you can check the Terraform docs.

To use Route53 with CloudFront, we are required to create a public certificate for our domain, and AWS requires that it be created in the us-east-1 region. So let’s add a new provider for this region to our providers.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "YOUR_BUCKET"
    key    = "state.tfstate"
  }
}

# Configure the AWS Provider
provider "aws" {}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

And now, create a certificates.tf file:

resource "aws_acm_certificate" "root" {
  domain_name       = "YOUR_DOMAIN"
  validation_method = "DNS"
  provider          = aws.us-east-1
}

Don’t forget to replace YOUR_DOMAIN with your desired domain.

We also need to create Route53 records for our certificate validation. So, let’s create a route53.tf to fetch our existing hosted zone and configure the records:

data "aws_route53_zone" "hosted_zone" {
  name = "YOUR_DOMAIN"
}

resource "aws_route53_record" "certificate" {
  for_each = {
    for validationOptions in aws_acm_certificate.root.domain_validation_options : validationOptions.domain_name => {
      name   = validationOptions.resource_record_name
      record = validationOptions.resource_record_value
      type   = validationOptions.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.hosted_zone.zone_id
}

resource "aws_acm_certificate_validation" "certificate" {
  certificate_arn         = aws_acm_certificate.root.arn
  validation_record_fqdns = [for record in aws_route53_record.certificate : record.fqdn]
  provider                = aws.us-east-1
}

Again, don’t forget to replace YOUR_DOMAIN with your desired domain.
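While validation is pending, you can poll the certificate from the CLI; the ARN below is a placeholder you would copy from the ACM console or Terraform output. The status should move from PENDING_VALIDATION to ISSUED once the DNS records propagate:

```shell
aws acm describe-certificate \
  --region us-east-1 \
  --certificate-arn YOUR_CERTIFICATE_ARN \
  --query 'Certificate.Status' \
  --output text
```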

And now let’s link it to our CloudFront distribution in cloudfront.tf :

locals {
  website_origin_id = "WebsiteBucket"
}

resource "aws_cloudfront_origin_access_control" "oac" {
  name                              = "ReactWebsite"
  description                       = "Example Policy"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.oac.id
    origin_id                = local.website_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "My React Website Distribution"
  default_root_object = "index.html"

  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.website_origin_id

    cache_policy_id = aws_cloudfront_cache_policy.website.id

    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }

  aliases = ["YOUR_DOMAIN_NAME"]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.root.arn
    ssl_support_method  = "sni-only"
  }
}

# Sends to the origin and caches it
resource “aws_cloudfront_cache_policy” “website” {
name = “react_cache_policy”

parameters_in_cache_key_and_forwarded_to_origin {
headers_config {
header_behavior = “none”
}
cookies_config {
cookie_behavior = “all”
}

query_strings_config {
query_string_behavior = “all”
}
}
}

Here we add aliases with our domain name and replace the default viewer_certificate with our ACM certificate.

Lastly, we need to create a record in Route53 to point to CloudFront. So, in the route53.tf file, let’s add this record:

data "aws_route53_zone" "hosted_zone" {
  name = "YOUR_DOMAIN"
}

resource "aws_route53_record" "certificate" {
  for_each = {
    for validationOptions in aws_acm_certificate.root.domain_validation_options : validationOptions.domain_name => {
      name   = validationOptions.resource_record_name
      record = validationOptions.resource_record_value
      type   = validationOptions.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = data.aws_route53_zone.hosted_zone.zone_id
}

resource "aws_acm_certificate_validation" "certificate" {
  certificate_arn         = aws_acm_certificate.root.arn
  validation_record_fqdns = [for record in aws_route53_record.certificate : record.fqdn]
  provider                = aws.us-east-1
}

resource "aws_route53_record" "root" {
  name    = "YOUR_DOMAIN"
  zone_id = data.aws_route53_zone.hosted_zone.zone_id
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.s3_distribution.domain_name
    zone_id                = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
    evaluate_target_health = true
  }
}

Now, you can push it to GitHub and wait for the build to finish. It can take some time, since there are many changes to apply and CloudFront distributions are slow to deploy.

After it completes, you should see your records in Route53.

Note that it can take up to 60 seconds for AWS to replicate the DNS changes. So don’t worry if it doesn’t work right away.
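You can watch the record go live with dig (the domain below is a placeholder):

```shell
# the A record should resolve to CloudFront edge IPs once it propagates
dig +short YOUR_DOMAIN A

# if nothing comes back, trace the delegation to spot misconfigured name servers
dig +trace YOUR_DOMAIN
```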

Now you can access your domain and you should see your website working!

Conclusion

In this story, we saw how easy it is to deploy a React application to S3 and link a CloudFront distribution as a global CDN.

Hosting a React application is made easy by leveraging GitHub Actions and S3 static website hosting, which turns an S3 bucket into a static web host. S3 also provides a website endpoint URL so we can access the site directly.

Beyond hosting, we used AWS’s CDN, CloudFront, to make the application globally available with edge-location caching and fast performance.

By making use of OAC, we allowed our S3 static website to be accessed only through CloudFront, adding an extra layer of security.

And last, we learned how to connect a domain to our CloudFront distribution so we can use a custom domain for our websites.

With Terraform, we made it easy to build each piece of our infrastructure and link it together in AWS with the help of GitHub Actions to run our IaC code.

Thank you for reading!

Happy coding! 💻

The code for this story can be found here.

Hosting your React Application in AWS was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.
