The year was 2016. I was a hobbyist with ideas to burn. Naturally, I needed an inexpensive hosting provider for my latest web app. Where would I start? Who would prove to be the elusive "hostess with the mostest"?
This is the story of my migration to and from DigitalOcean, AWS, and Heroku: the trial and error, the pros and the pain points. You've seen the archetypal hero's tale. Now cinch up your belt, oil your sword. Prepare to experience firsthand the epic tale of a simple village hobbyist and his quest for just the right host.
Background: The idea and the app.
I like to run. I followed a training plan as I prepared for my next race, but all the training plans had one thing in common: they prescribed different paces for different types of runs. For example, I was expected to adjust my pacing depending on whether I was heading out for an "easy" run, a "distance" run, or a "long" run. But what is an "easy" run? Give me the numbers in minutes per mile; how do I know when I'm on target? To find out, I would consult pacing charts based on a runner's latest 5K race time and write down my prescribed pace for a scheduled run.
Manually looking up paces in pacing charts was tedious and boring. So I wrote an app to do it for me: RunbyPace. No matter where I am, I can look up my prescribed pace for a given run type and hit the ground running.
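To make that concrete, here is a minimal Ruby sketch of the kind of lookup RunbyPace performs. The class name, run-type factors, and numbers are illustrative assumptions, not the app's actual code; the real app derives its paces from published pacing charts.

    # Illustrative only: maps a recent 5K time and a run type to a target pace range.
    class PaceSketch
      # Multipliers applied to the runner's average 5K pace (assumed values, not RunbyPace's).
      RUN_TYPE_FACTORS = {
        easy:     1.30..1.40,
        distance: 1.15..1.25,
        long:     1.20..1.35
      }.freeze

      # five_k_time: recent 5K race time in minutes (e.g., 25.0)
      # run_type:    :easy, :distance, or :long
      def target_pace(five_k_time, run_type)
        five_k_pace = five_k_time / 3.107 # average minutes per mile over 5K (3.107 miles)
        factor = RUN_TYPE_FACTORS.fetch(run_type)
        (five_k_pace * factor.min)..(five_k_pace * factor.max)
      end
    end

    # A 25:00 5K suggests roughly 10.5-11.3 minutes per mile for an easy run.
    pace = PaceSketch.new.target_pace(25.0, :easy)
    puts format('%.2f-%.2f min/mile', pace.min, pace.max)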
But where would I host my app? I didn't want to spend much money. I budgeted about $5-10 a month for it. My search began at DigitalOcean.
Hosting with DigitalOcean: The savings (and the struggle) are real.
DigitalOcean was a great start. They have droplets that come pre-installed with the requisite tooling for different types of apps. In my case, I had a Rails app, so Ruby, Postgres, Bundler, Unicorn, and Nginx were all pre-installed and ready to roll on the rails. The biggest pain point I experienced surrounded updates: keeping the OS patched, updating the Ruby ecosystem, and pushing new versions of the app itself. It was all manual and difficult because of the droplet's limited memory.
DigitalOcean Pros
- Inexpensive, at $5 per month.
- Easy to get started.
- Lots of well-written guides.
DigitalOcean Cons
- Instead of working on new app features, I had to manage everything myself, including Nginx and SSL/TLS certs.
- Manual updates. OS updates were a manual process. I configured a cron job to automatically install security updates (see the sketch after this list), which helped somewhat, but everything else was still on me.
- Updates to the Ruby ecosystem were also manual.
- Manual deploys. Releasing a new version of the app was not a one-click affair. I had to ssh into the VM and run my scripts by hand.
- Availability. Only one droplet, so availability was so-so. Updates and new app versions meant the app was offline temporarily.
- Scaling. The only scaling option was vertical: a bigger droplet.
- Limited RAM. The basic droplet had only 512 MB of RAM, which left just enough headroom for basic operations. I had to kill Unicorn in order for Rake tasks to complete.
- Environmental differences between production and development made me afraid of releasing big changes to the app because I wasn't sure if they would work in production. Often I put off releases.
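A minimal sketch of the security-update cron job mentioned above, assuming Debian/Ubuntu's unattended-upgrades package is installed; the exact job isn't shown in this post, so treat the schedule and command as illustrative.

    # /etc/cron.d/security-updates (illustrative)
    # Runs unattended-upgrade once a night to install pending security updates.
    30 4 * * * root /usr/bin/unattended-upgrade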
DigitalOcean got me going, but the pain of releasing new features made me want something more automated and more robust. I got interested in trying AWS.
Hosting with AWS: No one visits your site, but everyone could, if they wanted to.
After my DigitalOcean experience, I decided to give AWS a go. I signed up and qualified for AWS' free tier, which meant I could have a little fun at limited cost, but for only a year.
Goals with AWS
With DigitalOcean fresh in mind, here were my goals with AWS:
- Parity between development and production environments.
- High availability, even during updates, releases, and deploy failures.
Architecture on AWS
I went all in on the AWS ecosystem. I moved my runbypace.com domain to AWS Route 53 and used its DNS infrastructure. DNS directed visitors to an Elastic Load Balancer (ELB), which routed them to EC2 instances running Dockerized Nginx and my Rails app on top of the Elastic Container Service (ECS). On the back end, Postgres ran on AWS RDS. AWS also handled SSL certificates for me.
# AWS architecture diagram

               Route 53
                  |
         Elastic Load Balancer
            |            |
         ec2/ecs      ec2/ecs
         (nginx)      (nginx)
         (rails)      (rails)
            |            |
              PostgreSQL
AWS Pros
- Cost. On the free tier, the whole robust world-class architecture only cost me around $11 per month. Most of that was due to the second t2.micro instance.
- Easy deployments. Once the infrastructure was in place and I'd written my deployment scripts, I could deploy new versions of the app with a single command.
- Dev/Prod parity via containers. On my dev box I only had to run docker-compose up and start writing code. Because the app was containerized, I had the confidence that if it worked on my box, it would work in the cloud. It wasn't 100% seamless though, as you'll see in the "Cons" section. Dockerfiles made it pretty easy to package up and deploy assets and artifacts that I didn't want open-sourced. (A minimal compose sketch follows this list.)
- Availability. The entire time I had the site on AWS, I never had one outage. When I pushed a new app version, ECS would handle stopping one of the web tasks to make room for the newly registered one. Once the new task passed the health check, ECS would stop the other web task(s) and replace them with the new version. The whole process was seamless. If I made a boneheaded move and broke something, the new version would fail the health check and ECS would keep running the old version of the site.
- Less update management. If you run standard AMIs (Amazon Machine Images), they handle the security updates for you.
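A minimal docker-compose.yml sketch of the local setup described in the parity item above. Service names, the port mapping, and the Postgres tag are assumptions for illustration, not the project's actual files.

    # docker-compose.yml (illustrative)
    version: "3"
    services:
      web:
        build: .                                    # Dockerfile for the Rails app
        command: bundle exec unicorn -c config/unicorn.rb
        env_file: .env                              # same config file referenced in the Cons below
        ports:
          - "3000:3000"                             # assumes unicorn.rb binds to port 3000
        depends_on:
          - db
      db:
        image: postgres:9.6
        environment:
          POSTGRES_PASSWORD: postgres

With a file like this in place, docker-compose up brings up locally the same containers that ECS runs in production, which is where the dev/prod parity comes from.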
AWS Cons
- Cost, post free tier. After the free tier expired, I expected to pay around $50 to $65 per month for my existing architecture. Ouch! Way too rich for a hobby app that makes me $0.00.
- AWS documentation is a little scatterbrained.
- ECS is awkward. At the time, Kubernetes wasn't a thing on AWS; ECS was the only option. Wait, so you mean I have to manage my containers and the VMs they run on? I just wanted to push my containers to the cloud. And there's no easy way to autoscale EC2 instances in response to pressure from the ECS containers? I have to manually increase the number of EC2 instances? Ugh. So I'd better have access to the AWS console if I ever experience a big traffic spike.
- Still have to manage the EC2 instances. Again, why do I have to manage EC2? And why do I have to manually update the ECS container agent when I'm using AMIs?
- App config differences between dev/prod. For local development, I kept configs in .env files consumed by docker-compose. For production, I pushed configs along with the ECS task definitions. When I added or modified an .env config but forgot to update the task definitions, trouble ensued. If I'd given this more thought, I suppose I could have automated this.
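A sketch of the automation hinted at above: read the same .env file docker-compose uses and splice its variables into an ECS task definition before registering it. The file names and the template are assumptions, not part of the original setup.

    #!/usr/bin/env ruby
    # Illustrative: keep ECS task-definition environment variables in sync with .env.
    require 'json'

    # Parse KEY=VALUE lines, skipping blanks and comments.
    env_pairs = File.readlines('.env', chomp: true)
                    .reject { |line| line.strip.empty? || line.start_with?('#') }
                    .map    { |line| line.split('=', 2) }
                    .map    { |name, value| { 'name' => name, 'value' => value } }

    # Inject the variables into a task-definition template and print the result,
    # ready for: aws ecs register-task-definition --cli-input-json file://task-def.json
    task_def = JSON.parse(File.read('task-definition.template.json'))
    task_def['containerDefinitions'].each { |container| container['environment'] = env_pairs }
    puts JSON.pretty_generate(task_def)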
AWS was incredibly robust, but not pain free, and the pain machine was about to be cranked up to $50+ per month! I looked into Elastic Beanstalk, which is supposed to deploy code directly to the cloud without the need to manage EC2, but it is expensive too: the cost would have been about the same as my previous AWS architecture. So our quest for the right hosting provider continues, this time with Heroku.
Hosting with Heroku: So far, so good
I just started using Heroku, so I'm still in the honeymoon phase, but so far things look pretty good. The price is good for my needs, and deployment is a cinch.
Heroku Pros
- Price. Only $7 per month for a Hobby dyno.
- Scalability. Apparently it's possible to scale both horizontally (additional dynos) and vertically (more powerful dynos).
- Drop-dead simple automatic deploys. Now all I have to do is push to master, and Heroku deploys to production automatically.
- Build configuration via Heroku buildpacks. With my AWS setup I used Dockerfiles to tweak my build setup. With Heroku I can easily create my own buildpacks. For some examples, see a buildpack for deploying the current commit hash to an arbitrary path, and another for deploying a Keybase proof to an arbitrary directory. (A minimal buildpack sketch follows this list.)
- Free SSL certs
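A classic Heroku buildpack is just a few executable scripts (bin/detect, bin/compile, bin/release). Below is a minimal bin/compile sketch along the lines of the commit-hash buildpack mentioned above; the output path is an assumption, and it assumes Heroku exposes the commit being built as SOURCE_VERSION at compile time.

    #!/usr/bin/env bash
    # bin/compile <BUILD_DIR> <CACHE_DIR> <ENV_DIR> -- illustrative, not the actual buildpack.
    # (bin/detect only needs to print a name and exit 0 for the buildpack to apply.)
    set -euo pipefail
    BUILD_DIR="$1"

    mkdir -p "$BUILD_DIR/public"
    # SOURCE_VERSION holds the commit being built (assumption: set in the build environment).
    echo "${SOURCE_VERSION:-unknown}" > "$BUILD_DIR/public/version.txt"
    echo "-----> Wrote commit hash to public/version.txt"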
Heroku Cons
- Issues with SSL certificates on naked domains with Google Domains. After leaving Route 53, I took runbypace.com back to Google Domains. For my custom domain runbypace.com, Heroku requires me to configure my DNS provider with an ANAME record pointing to runbypace.herokudns.com. Unfortunately, Google Domains does not support ANAME records… The workaround on Google Domains is to create a CNAME pointing to www.runbypace.herokudns.com and then a synthetic record to redirect @ to www. Unfortunately again, this breaks down somewhere with the SSL certs, causing https://runbypace.com to have an invalid certificate, even though https://www.runbypace.com works just fine. This was a big bummer, so I switched to PointDNS for now, as it does support ANAME records.
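To make the DNS situation concrete, here is a simplified sketch of the two configurations described above (record syntax is abbreviated and illustrative; the hostnames come from the description in the item above).

    # What Heroku wants (needs ANAME/ALIAS support, e.g. PointDNS):
    runbypace.com.        ANAME   runbypace.herokudns.com.
    www.runbypace.com.    CNAME   www.runbypace.herokudns.com.

    # Google Domains workaround (no ANAME support):
    www.runbypace.com.    CNAME   www.runbypace.herokudns.com.
    # plus a "synthetic record" redirecting runbypace.com -> https://www.runbypace.com,
    # which is where the certificate on the naked domain breaks down.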
As I gain more life experience with Heroku, I'll report back with new findings.
A respite in my hosting quest
Fellow travelers, we have reached another respite in this quest for the best hosting provider. We've met DigitalOcean, which is inexpensive, but leaves the burden of setup and configuration to you. Then there's the Hilton of hosting, AWS. If you have the need for massive scalability and a bazillion 9s of availability, it's one of your only choices. If you just want to dream dreams and deploy code, Heroku seems like the best fit, for now… Safe travels and happy coding!
Originally published at tygertec.com on January 27, 2018.
Hard up for hosting: From DigitalOcean, to AWS, to Heroku was also published in Hacker Noon on Medium.