Build and Deploy a Node API and React Web App on AWS

Posted by J Cole Morrison

Over the past couple of weeks I've thrown out a series of guides that, while completely independent, can be used to learn and set up a full Node API and React web application. I wanted to use this post to glue these together and fill in the gaps. From this setup you get:

  1. A Node + Express + Loopback API application. Featuring authorization, authentication and even database migrations!
  2. A React, Redux, Redux Saga, Redux Form and React Router based web application that USES the above API.
  3. Each service "Dockerized" and ready to be ported for both development and production.
  4. Each service hosted on AWS, load balanced, fault tolerant, and leveraging Docker.
  5. An actual explanation of what's going on from a practical level in addition to the "how"
  6. (Edit 4/3/2017) A Continuous Deploy Pipeline For Git -> Semaphore CI -> CloudFormation -> AWS ECS

The Node API

The guide can be found here:

Authorized Resources and Database Migrations with Strongloop's Loopback

Okay, I know the choice of title was terrible. I was trying to focus more on the database migrations, but those wound up being a very small piece of the guide. The majority of it is on everything else. A better name would be something like Setting up an Authenticated and Authorized Node + Express + Loopback Application. It covers:

  • Setting Up The Development Environment (with Docker)
  • Setting up a Stand Alone MySQL DB (with Docker!)
  • Scaffolding Out Our Models with Relations
  • Automated Database Migrations and Updates
  • Authenticating and Authorizing Resources

If seeing "with Docker" is off-putting.. trust me, it's so much better to have DBs and settings segmented out and not clobbering your other local ones. In fact, if there's anything Docker can help with in day-to-day non-DevOps work, it's keeping development environments from eating each other.
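For example, spinning up a throwaway MySQL container keeps a project's database completely separate from everything else on your machine. A minimal sketch (the container name, credentials and database name here are placeholders):

docker run -d \
  --name api-mysql \
  -e MYSQL_ROOT_PASSWORD=devpassword \
  -e MYSQL_DATABASE=api_dev \
  -p 3306:3306 \
  mysql:5.7

Kill it with docker rm -f api-mysql and your machine is back to exactly where it started.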

The React Web Application

This is a 3 part series that gives a practical overview, setup and application of the following:

  • Scaffolding out a baseline React app wired with Redux, Redux Saga, Redux Form and React Router.
  • Wiring in Redux Devtools
  • Thinking in "redux" mode and creating a variety of states
  • Leveraging Redux Saga to deal with our async Redux API logic (as opposed to Thunks) - see the sketch after this list
  • Using Redux Form to make form posting and validation a breeze
  • Hooking up the application to authenticate, authorize and POST/GET to the Node API mentioned above
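To give a feel for the saga pattern the series uses, here's a minimal sketch. The action types and the loginApi helper are hypothetical stand-ins, not the series' actual names:

import { call, put, takeLatest } from 'redux-saga/effects'

// hypothetical API helper - POSTs credentials and parses the JSON response
const loginApi = (creds) =>
  fetch('/api/users/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(creds),
  }).then((res) => res.json())

// worker saga: performs the async call and dispatches success or failure
function* loginSaga(action) {
  try {
    const user = yield call(loginApi, action.payload)
    yield put({ type: 'LOGIN_SUCCESS', user })
  } catch (err) {
    yield put({ type: 'LOGIN_FAILURE', error: err.message })
  }
}

// watcher saga: takes the latest LOGIN_REQUEST and runs the worker
export function* watchLogin() {
  yield takeLatest('LOGIN_REQUEST', loginSaga)
}

The nice part over Thunks is that loginSaga is a plain generator: you can step through it in tests with no store and no mocking framework.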

Part 1 - Scaffolding up the base project and signing up Users.

Part 2 - Making the Login and Authentication/Authorization flow

Part 3 - Working with protected resources and completion!

Further Reading:

Want to set up CRA with Sass? Storybook?? Yarn??? Here ya go:

Create React App with SASS, Storybook and Yarn in a Docker Environment

Deploying the Apps to AWS with Docker and ECS

So heads up - A LOT goes on in this post:

Guide to Fault Tolerant and Load Balanced AWS Docker Deployment on ECS

As the name states - it's a guide. So it covers everything needed to get a full setup on AWS with Docker and ECS. This includes things like Identity and Access Management, Load Balancing, Launch Configurations, Autoscaling Groups and more.

How do you apply this to the React Web Application?

This part's easy - the beginning of the guide explains how to take any React application that compiles down to a final set of HTML/CSS/JS and get it set up with an NGINX Dockerfile. Additionally, it shows how to get that up and hosted on AWS EC2 Container Registry (ECR).
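The core of that Dockerfile is tiny. A minimal sketch (the guide's actual Dockerfile and NGINX config have more to them; build is Create React App's default output directory):

FROM nginx:latest

# copy the compiled React build output into NGINX's default web root
COPY build /usr/share/nginx/html

That's genuinely it for the simplest case - NGINX's stock config already serves static files out of that directory on port 80.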

So for getting the React web application up, just follow that guide directly: create the NGINX configuration files and Dockerfile as shown, and the rest of the steps apply as-is.

If you're looking for CI/CD I actually have an entire setup with AWS CloudFormation + CodePipeline + CodeBuild project setup for that:

https://github.com/jcolemorrison/redux-sagas-authentication-app/tree/deploy-alt

Of course, it requires you to figure out CloudFormation, CodePipeline and CodeBuild. I do have a large series in the works on how to use those technologies, but teaching them is very time-intensive, so it's a while out. You can also copy the CI process I have set up for the Node application, which I'll go into below.

How do you apply this to the Node API Application?

Setting up the Node application is a little more work. Most of what needs to be done is the same, but the differences are:

1. Create a production Dockerfile in the API root directory:

FROM node:6.9.5

RUN mkdir -p /usr/src/api  
WORKDIR /usr/src/api  
COPY . .

RUN chmod +x ./entrypoint.sh

ENTRYPOINT ["./entrypoint.sh"]  
CMD ["node", "."]  

You can use an entrypoint shell script for anything else that needs to happen when the container starts (the Dockerfile above assumes one exists at the API root).
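A minimal pass-through sketch - the Dockerfile's CMD arrives as "$@", so exec-ing it hands the process over to Node:

#!/bin/bash
# entrypoint.sh
# run anything that should happen at container start (e.g. migrations) here,
# then hand control over to the CMD ("node", ".")
exec "$@"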

All the Dockerfile does is set the image up with Node and run the node command to get the app started.

We don't need anything else in there, because all that's needed to run it in production is the node process. If you're looking for a CI/CD pipeline, you'd want to install dependencies, run tests, etc. on the CI server/image rather than including all of that in the image and fattening it up with things that aren't used at runtime.

Once that's built locally, just follow the instructions for pushing it up to AWS ECR that are also in the AWS Guide.
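Roughly, that build-and-push flow looks like this. The account ID, region and repository name are placeholders, and the login command shown is the older aws ecr get-login form (newer AWS CLI versions use get-login-password piped into docker login):

# build and tag the production image
docker build -t api-image .
docker tag api-image <account-id>.dkr.ecr.us-east-1.amazonaws.com/api-image:latest

# authenticate the Docker client against ECR, then push
$(aws ecr get-login --no-include-email --region us-east-1)
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/api-image:latest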

2. Create a server/datasources.production.js file with bindings to a production DB:

Example:

'use strict'

module.exports = {  
  'mysqlDb': {
    'host': process.env.PRODUCTION_RDS_HOST,
    'port': 3306,
    'database': process.env.PRODUCTION_RDS_DB,
    'user': process.env.PRODUCTION_RDS_USER,
    'password': process.env.PRODUCTION_RDS_PWD,
    'name': process.env.PRODUCTION_RDS_DB,
    'connector': 'mysql',
  },
}

This assumes AWS RDS and that those variables are made available to the container upon launch (the NODE_ENV=production variable below is also what tells Loopback to load datasources.production.js instead of the default). Making environment variables available to ECS tasks is done by adding the environment property to the containerDefinition. Note that a task definition is plain JSON, so the values must be literal strings - substitute your real values, or template them in at deploy time:

{
    "containerDefinitions": [
        {
            "name": "aws-docker-task",
            "image": "<yourawsaccountnumber>.dkr.ecr.us-east-1.amazonaws.com/<yourusername>/api-image",
            "memory": 300,
            "cpu": 256,
            "essential": true,
            "portMappings": [
                {
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "environment": [
                {
                    "name": "PRODUCTION_RDS_HOST",
                    "value": "<your-rds-endpoint>"
                },
                {
                    "name": "PRODUCTION_RDS_DB",
                    "value": "<your-database-name>"
                },
                {
                    "name": "PRODUCTION_RDS_USER",
                    "value": "<your-database-user>"
                },
                {
                    "name": "PRODUCTION_RDS_PWD",
                    "value": "<your-database-password>"
                },
                {
                    "name": "NODE_ENV",
                    "value": "production"
                }
            ],
            "mountPoints": null,
            "volumesFrom": null,
            "hostname": null,
            "user": null,
            "workingDirectory": null,
            "extraHosts": null,
            "logConfiguration": null,
            "ulimits": null,
            "dockerLabels": null
        }
    ],
    "volumes": [],
    "networkMode": "bridge",
    "placementConstraints": [],
    "family": "aws-docker-task",
    "taskRoleArn": ""
}

This is covered (without the environment property) in the AWS Guide at Step 61.
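Once the JSON is filled in, registering it with ECS is a single CLI call (the file name here is a placeholder):

aws ecs register-task-definition --cli-input-json file://task-definition.json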

If you want a holistic example, I actually have an entire repo of this type of app set up to do full continuous deployment to ECS through CircleCI:

https://github.com/jcolemorrison/authorized-migrating-loopback-app

Feel free to use it as a reference.

3. Launch an AWS RDS instance

Yes, you will have to set up either a MySQL or Aurora DB instance on your own. And I'm going to commit a cardinal sin here and reference some AWS documentation:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateInstance.html

Ugh, I hate doing that, but it is pretty straightforward.. well, as straightforward as anything AWS and databases can be. What you're looking for here is to make the values you use for your RDS instance match the environment variables:

process.env.PRODUCTION_RDS_DB = the Database name which is specified in the Database Options section of Advanced Settings:

RDS Database Advanced Options

process.env.PRODUCTION_RDS_USER & process.env.PRODUCTION_RDS_PWD = the Master Username and Master Password specified here:

RDS Database Basic Options

At least until (or unless) you set up a specific user for your application in the RDS instance post-launch.

process.env.PRODUCTION_RDS_HOST = the endpoint for your DB, which will be available once the RDS instance is created (minus the port, since we define that in the Node app).

Once you have those values and make them available to the task definition, the API will hook up to RDS and even migrate it!
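A quick way to sanity-check those values before baking them into the task definition is to connect with the stock MySQL client (the endpoint and user are placeholders):

mysql -h <your-rds-endpoint> -P 3306 -u <your-database-user> -p

If that connects from a machine the RDS security group allows, the container will be able to connect too.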

But wait, won't I have two different ECS Clusters running after this?

Yep and that's what we want. We want an API that's independent of the web application. Suppose we later decide to add a mobile application, open up our API to other apps or...

*gasp*

a new front-end stack is declared king, everyone loses their minds and we have to rebuild our app with this new imperative technology!!

What then??

Well, with the API stack separated out, we don't have to worry about our hypothetical mobile application or other API consumers being dragged along with the revamped web app. (i.e. say the API traffic begins rising while the web app traffic stays at the same consistent level. If you need more API capacity, guess what? With the "fullstack" setup you wind up scaling out the web app when you didn't need it.)

But wait again, isn't this more difficult to maintain??

Not at all. It's easier to cater the infrastructure, code and teams to their purpose. Developers can optimize their environment for what they need. DevOps can scale the API for the needs of all consumers without worrying about the overhead of the web app. Separation of concerns.

Further reading:

Want to run one shell command and hit all of your servers? Want to unify all instance logs? App logs? Here ya go:

How to Setup Unified AWS ECS Logs in CloudWatch and SSM

Continuous Deployment of a Dockerized Node.js Application to AWS ECS

The aforementioned ECS guide is a very manual process. Again, this was to help give readers a grasp of what is actually going on behind the scenes. What if you want to automate it though? What about CI/CD??

Well! I recently wrote another mini-guide on Semaphore CI's blog that shows how to set up a continuous deployment pipeline using AWS ECS, Semaphore CI and CloudFormation! You can find the full guide here:

Continuous Deployment of a Dockerized Node.js Application to AWS ECS

You can view the full codebase, build script and CloudFormation template here.

Be forewarned though - this guide does assume knowledge of ECS and CloudFormation. The ECS know-how can be gained from the previously mentioned guide, but CloudFormation requires a good amount of deep diving. Its usefulness is directly correlated to how well you know AWS resources in general.

Either way the Guide and Git Repo can help shed some light on the process. The template doesn't completely cover every single resource in the other guides or apps, but it does serve as a great starting point.

Summary

My goal for this post was to unify all of the things I've been blogging about over the past months. While I was aware there was a common theme, that holistic view can easily get lost over time and in the nature of web reading. Any questions about the context, or feedback, are highly appreciated!

None of this stuff is rocket science. It's just generally a maze of bad documentation, lots of gotchas, and the specialized language of 1,000 new terms that makes it seem like it is. Yes, the above will require filling in a few gaps, which I hope to personally fill later on when time permits.


As usual, if you find any technical glitches or hiccups, PLEASE leave a comment or hit me up on Twitter or with a message!

Be sure to sign up for weekly updates!!

