Tuesday, December 13, 2011

Using Amazon’s AWS

In my last article, “Exploring Amazon’s Cloud IaaS & PaaS”, I introduced Amazon Web Services (AWS) and used it to explain the terms “Infrastructure as a Service” (IaaS) and “Platform as a Service” (PaaS).  To better understand the value of AWS, I deployed a real-world application.  In this article, I’ll step through how I deployed and scaled that example application on AWS and comment on where I found the greatest value.  If you haven’t read the first article, I suggest starting there as an introduction to AWS.


Creating an EC2 Server Instance

The first step in developing an AWS app was to create a virtual server in the Amazon cloud.  After learning about the various services and how to create Elastic Compute Cloud (EC2) instances, I had my Linux test environment up and running with 8GB of storage in about 15 minutes.  The process involved clicking on “EC2” in the AWS Management Console and following the step-by-step wizard.  During the creation process I answered questions about which operating system I wanted, how much compute power the instance should have, which location it should run from, the key pair to use when connecting to it, and the security group to which it should belong (for firewall settings). (See picture to the right.)  AWS offers several flavors of Linux and Windows operating systems, but the default Amazon Linux flavor was free for me (see costs below for more details).  Once I pressed the “Launch” button, AWS deployed the server and booted it up.  Within about five minutes I had the public and private DNS names to use to connect to it.  Using the crypto keys issued, I was able to connect from anywhere on the Internet using any ssh client (even iSSH on my Apple iPad).
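The same console steps can also be scripted.  Here is a rough sketch using Amazon’s command-line tools; the AMI ID, instance type, key name, and security group below are illustrative placeholders, not values from my setup:

```shell
# IDs and names below are illustrative placeholders.
# Launch one small, free-tier-eligible Amazon Linux instance.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-keypair \
    --security-group-ids sg-xxxxxxxx

# Look up the public DNS name once the instance is running.
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].PublicDnsName'

# Connect using the private half of the key pair.
ssh -i my-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```

The console wizard asks the same questions interactively; scripting matters later, when you want to launch many servers at once.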

Creating a Database Instance

Step 2 was to create a database for my test application.  Rather than installing a database server on the Linux system created in step 1, I decided to use Amazon’s Relational Database Service (RDS).  Creating a MySQL database instance is much like creating an EC2 instance.  Using the RDS tab on the AWS Management Console, I started the process by clicking on “Launch a DB Instance”.  As with EC2, I was led through a step-by-step set of questions: the type of database to use, the amount of compute power and storage to allocate, the authentication credentials for the master database user, the security group to use (which controls which EC2 instances can connect to the database), and whether the database server should be made redundant by placing a replica in another data center.  The setup wasn’t hard, and within ten minutes the MySQL database was up and running.
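The scripted equivalent looks roughly like this; the identifier, instance class, credentials, and storage size are illustrative placeholders:

```shell
# Values below are illustrative placeholders.
# Create a small MySQL instance; --multi-az places a standby
# replica in a second data center for redundancy.
aws rds create-db-instance \
    --db-instance-identifier tikiwiki-db \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --allocated-storage 20 \
    --master-username dbadmin \
    --master-user-password 'change-me' \
    --multi-az

# Fetch the endpoint hostname the application will connect to.
aws rds describe-db-instances \
    --db-instance-identifier tikiwiki-db \
    --query 'DBInstances[].Endpoint.Address'
```

The endpoint address returned by the last command is what you give your application in place of “localhost”.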

Install Your Application

In step 3, I installed my application.  With the server and database ready, I poked around my Linux EC2 server and installed Apache httpd, PHP, and some supporting libraries using yum.  I spent a little time configuring Apache, then uploaded the TikiWiki collaboration package and unzipped it.  The final step was to run TikiWiki’s web-based installer.  The install took all of about two minutes while I answered questions about the database server and the user and password it should use to connect.  Once TikiWiki had what it needed, it went to work creating the tables and necessary seed data on the RDS database and was ready for general use.
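The package installation on the EC2 instance looked roughly like this; the TikiWiki download URL and version are placeholders, so check tiki.org for the current release:

```shell
# Install the web stack; package names are for Amazon Linux.
sudo yum install -y httpd php php-mysql

# Fetch and unpack TikiWiki into the web root
# (the URL and version here are placeholders).
cd /var/www/html
sudo wget https://example.com/downloads/tikiwiki-x.y.tar.gz
sudo tar xzf tikiwiki-x.y.tar.gz --strip-components=1

# Start Apache, then finish setup in the browser
# using TikiWiki's web-based installer.
sudo service httpd start
```

From there the web installer asks for the RDS endpoint, database user, and password, and creates its tables on its own.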

Configuring TikiWiki’s 1,000+ features and settings is beyond the scope of this article, but in general, at this point, my application was ready to be used by my customers and fully accessible via the web.  I didn’t configure SSL for better security, but, for the most part, it was ready for use.  The downside is that this is only a single-server implementation.  To handle very large traffic loads and implement some fault tolerance, I had two more things to do.

Creating an EC2 Load Balancer

Step 4 involved setting up a load balancer in order to scale my application and provide redundancy.  This was just as easy as creating an EC2 instance: I clicked on “Load Balancers” in the EC2 tab, then “Create Load Balancer”.  The step-by-step process let me choose a name, the ports that should be load balanced (normally port 80), the health-check settings, and the EC2 instances it will manage.  Once the load balancer was started, I could begin using its URL to access my application, and I was ready to move on to the last step: adding more servers for scale and redundancy.
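As a sketch, the same load balancer can be created from the command line; the balancer name, zones, health-check target, and instance ID are illustrative placeholders:

```shell
# Names, zones, and IDs are illustrative placeholders.
# Create a classic load balancer listening on port 80.
aws elb create-load-balancer \
    --load-balancer-name tikiwiki-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a us-east-1b

# Configure the health check used to probe each server.
aws elb configure-health-check \
    --load-balancer-name tikiwiki-lb \
    --health-check Target=HTTP:80/index.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Attach the EC2 instance created earlier.
aws elb register-instances-with-load-balancer \
    --load-balancer-name tikiwiki-lb \
    --instances i-xxxxxxxx
```

A server that fails its health check is automatically taken out of rotation until it recovers.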

Adding Servers For Scale

My final step was to scale my web application; this is where the true power of AWS started to show.  Once my application was running and its configuration files were in place, I cloned my existing application server by making a snapshot of it.  This is done by creating an image of an EC2 instance from the management console, which places an Amazon Machine Image (AMI) snapshot into a list.  I then went to that list, clicked on “Launch”, chose the number of servers I wanted to launch and the data center they should reside in (putting them in a separate data center provides better fault protection), and pressed “Launch”.  Once the new EC2 instance was running, it was a simple matter of adding it to the list of machines attached to the load balancer created in step 4.  With the new instances attached, my application could handle more concurrent requests, was more fault tolerant because the new server was in another data center, and should perform better under heavy traffic loads.
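Scripted, the clone-and-attach cycle looks roughly like this; all IDs and names are illustrative placeholders:

```shell
# IDs below are illustrative placeholders.
# Snapshot the configured server as an Amazon Machine Image (AMI).
aws ec2 create-image \
    --instance-id i-xxxxxxxx \
    --name "tikiwiki-app-server"

# Launch two clones from that image into a different
# availability zone for better fault protection.
aws ec2 run-instances \
    --image-id ami-yyyyyyyy \
    --count 2 \
    --instance-type t2.micro \
    --placement AvailabilityZone=us-east-1b

# Put the new servers behind the load balancer from step 4.
aws elb register-instances-with-load-balancer \
    --load-balancer-name tikiwiki-lb \
    --instances i-yyyyyyyy i-zzzzzzzz
```

Because the clones are exact copies of the configured server, no further application setup is needed on them.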

Additional Considerations

I say my application trial is fully ready, but there are a few other things that I would want to investigate before going to full production.  Here are a few:
  • Ensure I have a good understanding of the backup implementation.
  • Test the restore process.
  • Investigate the auto-scaling capabilities in AWS.
  • Evaluate my high-availability implementation, ensure it fits my needs and pocketbook, then test it.
  • Test AWS database fail-over.
  • If warranted, investigate implementing region fail-over.  If both availability zones become unavailable, I may want additional redundancy.  Although the probability of a complete AWS region failing is very low, it’s not zero - it happened once in 2011.

A Note about Windows Servers

Amazon AWS does offer Microsoft Windows server machine images, and I did a little testing with one.  I created a small server and connected to it via Remote Desktop from my PC and an iPad.  To add more storage to a Windows server, use AWS Elastic Block Store (EBS) and attach it to the server just as you would with Linux.  Once the storage is attached, you can treat it like any other unformatted disk: format it with Disk Manager and mount it.  Microsoft Windows applications (like IIS) are added by mounting an EBS volume that is a snapshot of the utilities you want to install.  If you want to experiment with Windows Server on AWS, I suggest searching the net for examples and following Amazon’s recommendations regarding the administration of your server farm.
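The storage-attach step is the same from the command line for Windows and Linux; the IDs and sizes below are illustrative placeholders:

```shell
# IDs are illustrative placeholders; the volume must be created
# in the same availability zone as the instance it will attach to.
aws ec2 create-volume \
    --size 100 \
    --availability-zone us-east-1a

# Attach it to the Windows instance as a new device;
# it then shows up in Disk Manager as an unformatted disk.
aws ec2 attach-volume \
    --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx \
    --device xvdf
```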

The AWS Value Proposition

Ease of setup, management, and scalability are the biggest values in my test.  Now that I have experience using AWS, I can set up a fairly large application environment in about an hour.  The ability to scale an application like this in just minutes is amazing: I could have added 50 servers to my cluster, all sitting behind my load balancer, in half a day.  With a little more work, I could have enabled auto-scaling so that when my application servers became swamped with traffic, AWS would automatically add servers to my cluster and then remove compute instances when they were no longer needed.  I also had better-than-average fault tolerance, with primary and backup database servers in two different locations and redundant web servers spread over two different locations, all with automatic fail-over should disaster strike.

Cost / Value

Cost is an area that’s difficult to comment on because it’s going to vary based on consumption, but the value proposition is multi-fold.  With AWS, you only pay for what you use.  If I were deploying this app in my own data center (assuming I had one, with capacity to spare), I would have to plan for server costs, power costs, and the time to order and set up the hardware, install the base OS, and configure networking, storage, and maintenance.  If I purchased too much compute power, too bad - I’m stuck with it until I can sell it or it becomes fully depreciated and I send it to the recycler.  Conversely, if I didn’t install enough compute power, my users suffer while I try to deploy more; if I am delivering a new application to internal or external customers, that first impression will be difficult to shake.  With AWS, there is no hardware setup, and I can scale as demand grows without going through the cycle time of order-install-deploy.

Using my own data center assumes I still have room in the data center, enough power coming in, cooling capacity, and backup power capacity as well as enough storage for my backups.  If I have to add more capacity to my data center in one of these areas, the incremental cost can be quite large.  

Most small and medium companies only have a single data center, so redundant systems in redundant data centers may be out of the question.  With AWS, I can easily have redundant systems in multiple data centers, something that most small and medium businesses would have difficulty affording.  

As you work through the cost-benefit analysis for your implementation, take a look at my earlier article on the benefits of cloud computing.  To complete the analysis, you will need to add up AWS pricing for all the services you plan to use and compare the total against the cost of implementing in a private data center.

For my testing, costs were extremely low.  My database was small, and as part of AWS’s Free Usage Tier, new AWS customers can get started with Amazon EC2 for free.  Upon sign-up, new AWS customers receive the following EC2 services each month for one year:
  • 750 hours of EC2 running Linux/Unix Micro instance usage
  • 750 hours of Elastic Load Balancing plus 15 GB data processing
  • 10 GB of Amazon Elastic Block Storage (EBS) plus 1 million IOs and 1 GB snapshot storage
  • 15 GB of bandwidth out aggregated across all AWS services
  • 1 GB of Regional Data Transfer

I think that public cloud computing services like those AWS delivers today will continue to come down in cost compared to deploying locally.  The general benefits of cloud computing, combined with quick scalability, easy redundancy for high availability, and paying only for what you use, make a compelling case for public cloud computing like AWS.

In my next article, I’ll discuss some ideas on how to get started with cloud IaaS for people who already have local compute resources.

- Chris Claborne
