
Continuous integration with AWS ECS, GitHub & Travis


Continuous Deployment to Amazon’s Elastic Container Service (ECS) might be possible with AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation, but I have limited time on planet Earth and, to make matters worse, I am trying to reduce complexity so I can spend more time with my family.

AWS Fargate might have the answers for deploying the Docker images in which the Unee-T application is packaged, but it’s far, far away in us-east-1, and still in beta.

So to deploy our Docker images, we use ecs-cli, driven by Travis CI.

Explaining exactly how ECS works, and all the weird and wonderful permission errors and load balancer setup issues you might encounter, would not only take a long time but might trigger flashbacks and cause me another mental fit. So instead I am just going to list some top tips.

  1. Use the ECS Web console. It’s actually easier to set up the service and load balancer there, since wrangling ARNs for everything and getting permissions right from the CLI (aws or ecs-cli) is just too damn difficult. We have three clusters right now, for demo, staging and production, and hopefully we won’t need another one anytime soon.
  2. Test locally with a docker-compose.yml and only then try to deploy with ecs-cli. Once that’s working, automate it into a one-step deployment script.
  3. You may notice the awslogs logging driver in our compose file. Don’t forget to create the log group in advance. Then in CloudWatch those logs should tell you everything you need to know. Make sure your application leverages this by shoving everything to /dev/stdout; /var/log should be as active as the Dodo bird.
  4. Also watch the ECS service events during deployments to make sure everything is going well.
  5. For blue/green deployments, aka DO NOT DROP A SINGLE HTTP REQUEST WHILST DEPLOYING, run a two-instance ECS service with the default balanced spread placement strategy, and set minimumHealthyPercent to 50 and maximumPercent to 100. I wrote some scripts for you to check your clusters are in that sane configuration; it’s easy to make a mistake when creating a service by hand.
  6. Study our Travis configuration: we deploy to staging whenever something hits master, and only when I do a `git tag $tag; git push origin $tag` does it deploy to production. We trust Travis with power user AWS credentials to each of our isolated AWS accounts to do the deploy. Continuous {Integration, Deployment}!
  7. By default ecs-cli sets a 512 MB hard limit on a task. You probably want to change that, for example `mem_reservation: 1g` and `mem_limit: 3g` on a t2.medium, which has 4 GB of RAM. Do the math for the EC2 instance type and the applications you’re using.
  8. Track HTTPCode_ELB_5XX_Count. Alert on it. That’s your DevOps canary metric that something has gone horribly wrong.
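For tip 3, the logging configuration in docker-compose.yml looks something like this (the group name, region and prefix are illustrative); create the group first with `aws logs create-log-group --log-group-name myapp`:

```yaml
# Illustrative docker-compose.yml service fragment using the awslogs driver.
# The log group must already exist in CloudWatch before the task starts.
web:
  image: myapp:latest
  logging:
    driver: awslogs
    options:
      awslogs-group: myapp
      awslogs-region: ap-southeast-1
      awslogs-stream-prefix: web
```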
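The flow described in tip 6 (master goes to staging, tags go to production) can be expressed in a .travis.yml along these lines; the deploy script name is hypothetical:

```yaml
# Illustrative .travis.yml deployment stanza: every push to master deploys
# to staging, and only tags promote the build to production.
deploy:
  - provider: script
    script: ./deploy.sh staging
    on:
      branch: master
  - provider: script
    script: ./deploy.sh production
    on:
      tags: true
```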
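The memory settings from tip 7 go straight into the compose file; on a t2.medium with 4 GB of RAM you might reserve 1 GB soft and cap at 3 GB hard:

```yaml
# Soft reservation and hard limit for a container on a 4 GB t2.medium,
# leaving headroom for the ECS agent and the OS.
web:
  image: myapp:latest
  mem_reservation: 1g
  mem_limit: 3g
```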
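The one-step deployment from tip 2 can be sketched roughly like this; the project name, cluster name and ECR repository URL below are placeholders, not our actual values:

```shell
#!/bin/bash
# Sketch of a one-step deploy: build, push, then update the ECS service.
# PROJECT, CLUSTER and the ECR repository URL are hypothetical placeholders.
set -euo pipefail

PROJECT=myapp
CLUSTER=staging
REPO=123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/$PROJECT
TAG=$(git describe --tags --always)

# Build and push the image that docker-compose.yml references
docker build -t "$PROJECT:$TAG" .
docker tag "$PROJECT:$TAG" "$REPO:$TAG"
docker push "$REPO:$TAG"

# Create or update the service from the same compose file we tested locally
ecs-cli compose --project-name "$PROJECT" --file docker-compose.yml \
        service up --cluster "$CLUSTER" --timeout 10
```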
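The sanity check from tip 5 can be approximated with the AWS CLI and jq; the cluster and service names are placeholders:

```shell
#!/bin/bash
# Verify a service is configured for zero-downtime deploys:
# minimumHealthyPercent 50 and maximumPercent 100.
set -euo pipefail

CLUSTER=prod
SERVICE=myapp

aws ecs describe-services --cluster "$CLUSTER" --services "$SERVICE" |
jq -e '.services[0].deploymentConfiguration |
       .minimumHealthyPercent == 50 and .maximumPercent == 100' \
  || echo "WARNING: $SERVICE is not configured for blue/green deploys"

# The same describe-services output also carries the recent service
# events worth watching during a deployment (tip 4)
aws ecs describe-services --cluster "$CLUSTER" --services "$SERVICE" |
jq '.services[0].events[:5]'
```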
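An alarm on tip 8’s canary metric can be created with the AWS CLI; the load balancer dimension and SNS topic ARN below are placeholders:

```shell
#!/bin/bash
# Alarm whenever the ALB itself returns any 5XX within a minute; this
# catches deploys gone wrong even when the containers look healthy.
aws cloudwatch put-metric-alarm \
  --alarm-name myapp-elb-5xx \
  --namespace AWS/ApplicationELB \
  --metric-name HTTPCode_ELB_5XX_Count \
  --dimensions Name=LoadBalancer,Value=app/myapp/0123456789abcdef \
  --statistic Sum --period 60 \
  --evaluation-periods 1 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:ap-southeast-1:123456789012:ops-alerts
```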

Study our open source GitHub issues and code; learn lessons and steal ideas from it, like using the EC2 Systems Manager parameter store to fill in environment variables.
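Filling environment variables from the parameter store can look like this; the parameter names here are illustrative:

```shell
#!/bin/bash
# Pull secrets from the SSM parameter store at deploy time so they never
# have to live in the repository; parameter names are illustrative.
export MYSQL_PASSWORD=$(aws ssm get-parameter --name MYSQL_PASSWORD \
  --with-decryption --query Parameter.Value --output text)
export JWT_SECRET=$(aws ssm get-parameter --name JWT_SECRET \
  --with-decryption --query Parameter.Value --output text)
```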

If you like working out in the open on this sort of stuff (it can be a bit like therapy), do give us a shout; we are looking for people to join our team. Otherwise, I hope you found this post interesting, and if you’re thinking about how to manage some property, why not give Unee-T a whirl?

