Sunday, January 25, 2015

AWS re:Invent 2014

Although I'm getting this out a bit late, here is my annual review of the AWS re:Invent cloud love-in.  I was excited to see what this year’s conference in Las Vegas would bring.  I attend the conference for several reasons: I want to see what’s announced at the show, but more importantly, I want to find out what the themes, core messaging, and trends are.  By attending, you pick up on the theme and tenor of AWS by listening to all of the presenters.  This applies not only to the AWS employees at the show, but also to their customer presenters.  AWS curates their presenters and topics to align with their marketing vector.


AWS continues to be the leader of the pack by a wide margin, taking their planetary scale to the next level year after year.  What follows are some of my thoughts about AWS as influenced by re:Invent.



Themes

Infrastructure as code

To me, this may be the biggest shift in computing since client-server arrived in the late 80s.  Today’s tools make the AWS administration of just a few years ago look barbaric, but it’s more than that.


Think of the statement “My data center is in this JSON file.  It’s checked-in right next to our application source code.”  


Infrastructure as code enables developers to think differently about their application development and deployment.  With this approach, the idea of blades, disks, networking, and physical sizing of the deployment infrastructure is just another section of code for their application. Furthermore, a simple change in the code enables an infrastructure to quickly adapt to the needs of the designer.  It strips away the idea of a physical data center with equipment that has to be ordered, shipped, racked, and configured, a process that takes many companies months to complete.  Today’s designers and developers approach the compute side completely differently than yesterday’s programmers.  Compute and software are now just code (see the deployment sketch after this list).
  • Infrastructure creation and changes are all in code, checked in with the rest of the project.
  • A continuous infrastructure release is just a code change, updating n servers, databases, networking, etc.
  • Bug fixes to the infrastructure code will be more about optimizing cost to performance ratios as well as tweaking auto-scale decisions.
  • Designers are no longer tied to hardware decisions made years ago, or 5 minutes ago.
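
To make this concrete, here is a minimal sketch of the idea using Python and boto3 (my choice for illustration, not something shown at the conference).  The datacenter.json template, stack name, and IAM capability are hypothetical placeholders; it assumes AWS credentials are already configured.

```python
# Minimal sketch: deploy a version-controlled JSON template as a
# CloudFormation stack.  "datacenter.json" and the stack name are
# hypothetical placeholders.
import boto3


def deploy_infrastructure(template_path="datacenter.json",
                          stack_name="my-app-infra"):
    """Create a stack from the template checked in next to the app code."""
    with open(template_path) as f:
        template_body = f.read()

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
    )

    # Block until the stack (your "data center") is fully created.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)


if __name__ == "__main__":
    deploy_infrastructure()
```

A change to that template, checked in and re-deployed, is exactly the “continuous infrastructure release” described above.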


If my son enters this business like I think he may, how he gets work done will be entirely different from how it was done for most of my career.  This has an impact on staffing the new IT & development crew.  Developers have to think more about the infrastructure side of their solution, and dev-ops engineers need to be programmers.


“Cloud is the new Normal”

You gotta love the marketing crew at AWS, and the above tag-line is snappy.  What I saw behind this was that the AWS cloud messaging is moving from “Come to the cloud side, your problems are all solved over here.” to “You know about the cloud, you use it, it’s just the way you now do business.”  Many of the presenters have been using AWS for a while and presented their implementation history and how some of the tools that are now available make their lives easier and less prone to disaster.


“I’m all in” & “The Hybrid Cloud”

Although these are opposing views, the point is that AWS realizes that their customers will be “hybrid” for some time, maybe forever.  Going all-in is easy for companies that grew up in the cloud, but established companies like the ones I’ve worked for will always have on-prem compute for a myriad of reasons.  There were multiple presentations on hybrid cloud, thus enlarging the tent for AWS users who aren’t “all-in”.

Announcements - Here are a few:

Aurora

In my opinion, Aurora is the single biggest announcement at re:Invent 2014.  This is Amazon’s shot across the bow of Oracle.  AWS customers have been struggling for years to achieve web-scale with MySQL on AWS, and many opted for Oracle’s flagship database for scale and performance.  Aurora aims to erase whatever advantage Oracle may have here and make life easy for MySQL users (a provisioning sketch follows the list below).  The secret sauce is in the storage layer, which provides high durability and continuously scans for corrupt data, repairing it on the fly.
  • Aurora improves performance 5X over MySQL via parallel scatter writes, with up to 15 read replicas achieving 6 million inserts per minute and up to 30 million selects per minute.  
  • Scaling to a larger machine is fairly straightforward: establish a larger replica (up to 32 vCPUs and 244 GB of RAM) and then initiate a failover when you can take a minute of downtime.  (Nope, the “minute” was not a mistake.  The downtime is only as long as it takes for the DNS cache to be cleared.)
  • Aurora is designed for the AWS cloud, providing redundancy by default (two copies in each of three AZs in a region = 6 copies) with 99.99% availability and auto-recovery should the worst happen.  Four copies need to be online to write, and customers can lose up to three copies and still keep reading.  It also doesn’t require the “replay of redo logs” during recovery, alleviating huge headaches.  Although the data is redundant, you still need to establish a replica in a second availability zone to get automatic failover.
  • There is support for up to 15 read replicas.
  • Backups offer 11 9’s of durability, which suggests they land in S3 or Glacier.
  • There’s no more space planning (guessing).  Aurora auto-expands in 10GB increments up to a 64TB database.
  • Aurora is engine-compatible with MySQL, so if you have an application that’s using MySQL, you should be able to switch to Aurora without any porting.
  • A check box is all you need to enable encryption and SSL communication.
  • Aurora supports point-in-time restores.  
  • Although failures at the engine level can happen, AWS has isolated the page cache so that if the engine goes down, it’s restarted right away with the cache already “warmed up”.  This alleviates performance brownouts and reduces recovery times to seconds.
  • Aurora itself is free; you pay nothing for the software.  All you pay for is the instance size, like you would for MySQL.  There’s no additional licensing like there would be if you were to use Oracle or MS SQL Server.
  • If it isn’t already, expect Aurora to be well integrated with the newly announced Key Management Service.
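
For illustration, here is a hedged sketch of what provisioning an Aurora cluster with a cross-AZ replica might look like in Python with boto3.  The identifiers, instance class, password, and AZ are placeholders, and this is not an official AWS example.

```python
# Sketch: provision an Aurora cluster, a writer instance, and a read
# replica in a second AZ.  All names and credentials are placeholders.
import boto3

rds = boto3.client("rds")

# The cluster owns the shared, auto-expanding storage volume.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="change-me",   # placeholder
    StorageEncrypted=True,            # the "check box" for encryption at rest
)

# The writer instance (up to 32 vCPUs / 244 GB RAM on db.r3.8xlarge).
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-writer",
    DBClusterIdentifier="my-aurora-cluster",
    DBInstanceClass="db.r3.8xlarge",
    Engine="aurora",
)

# A replica in a second AZ provides automatic failover plus read capacity.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",
    DBClusterIdentifier="my-aurora-cluster",
    DBInstanceClass="db.r3.8xlarge",
    Engine="aurora",
    AvailabilityZone="us-east-1b",    # placeholder AZ
)
```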


Key Management Service (KMS)

The main purpose of KMS is to make it easier to manage the keys used to protect data.  AWS uses hardware security modules to protect the keys, and KMS is integrated with other AWS services like EBS, S3, and Redshift.  This foundational component removes one of the biggest barriers to implementing “encryption at rest” while using the cloud.
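
As a quick sketch of what this looks like in practice, here is a hypothetical boto3 example that creates a KMS master key and uses it for server-side encryption of an S3 object; the bucket and object names are made up.

```python
# Sketch: encryption at rest with a KMS-managed key.
# Bucket name, object key, and key description are placeholders.
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# One-time setup: create a customer master key managed by KMS.
key = kms.create_key(Description="Master key for encrypting app data")
key_id = key["KeyMetadata"]["KeyId"]

# Ask S3 to encrypt the object server-side with that key.
s3.put_object(
    Bucket="my-secure-bucket",
    Key="reports/q4.csv",
    Body=b"sensitive,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```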


Lambda

Lambda is Amazon’s event-driven code engine in the cloud.  Let’s say you have an application that allows users to upload any size image, but you have to create a resized thumbnail as part of your process flow.  All you need to do is deploy the event-driven code to Lambda and then have S3 fire an event every time it receives a new file.  Lambda’s pricing model is interesting: code is deployed to Lambda, Lambda auto-scales to meet demand, and customers pay only for the compute time consumed.  That’s it.  This is the first platform-as-a-service that AWS has built this way.  It will be interesting to see if this is the start of a trend.
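
Here’s a rough sketch of what that thumbnail handler could look like in Python.  It assumes the S3 “object created” event format, that the Pillow imaging library is bundled with the deployment package, and a hypothetical destination bucket; it’s an illustration, not AWS’s reference code.

```python
# Sketch: Lambda handler that thumbnails images as they land in S3.
# THUMBNAIL_BUCKET is a placeholder; Pillow must ship with the function.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "my-app-thumbnails"  # placeholder


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the full-size image that was just uploaded.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize it in memory.
        img = Image.open(io.BytesIO(body)).convert("RGB")
        img.thumbnail((128, 128))
        buf = io.BytesIO()
        img.save(buf, format="JPEG")

        # Write the thumbnail to a separate bucket.
        s3.put_object(Bucket=THUMBNAIL_BUCKET,
                      Key="thumb-" + key,
                      Body=buf.getvalue())
```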


AWS Container Service

Amazon EC2 Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of Amazon EC2 instances.  What’s interesting is AWS’s quick response to this fairly new technology.  Containers may be the wave of the future for a lot of deployments, and having a service that supports Docker from the start and integrates it with the rest of their services is really smart.  I think this is one of the products that was driven directly by customer requests.  AWS has done some interesting things with ECS around scaling containers up and down where you need them.
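
As a sketch of the workflow, here is what registering a Docker container and running it on a cluster might look like via boto3; the cluster name, task family, and image are placeholders.

```python
# Sketch: register a Docker container as an ECS task and run it on a
# pre-created cluster of EC2 instances.  Names and image are placeholders.
import boto3

ecs = boto3.client("ecs")

# Describe the container: image, CPU/memory reservation, ports.
ecs.register_task_definition(
    family="web-app",
    containerDefinitions=[{
        "name": "web",
        "image": "mycompany/web-app:1.0",  # any Docker image
        "cpu": 256,
        "memory": 512,
        "essential": True,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Launch one copy of the task somewhere on the cluster.
ecs.run_task(
    cluster="my-ecs-cluster",
    taskDefinition="web-app",
    count=1,
)
```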


Big EC2 + Intel Partnership

Amazon going bigger normally isn’t news, but the partnership to have Intel build a custom CPU for the cloud is.  Amazon talks a lot about how things change “at scale”.  The new Haswell processor is in the new C4 EC2 instances, allowing customers to choose a 36-vCPU system with 60GB of RAM.  Speaking of going big, EBS volumes now support up to 16TB each, handling 20K IOPS.
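
For completeness, here’s a small hypothetical boto3 snippet showing how you’d ask for that capacity; the AMI, key pair, and AZ are placeholders.

```python
# Sketch: launch a 36-vCPU c4.8xlarge and create a 16 TB provisioned-IOPS
# EBS volume.  AMI ID, key pair, and AZ are placeholders.
import boto3

ec2 = boto3.client("ec2")

# c4.8xlarge is the 36 vCPU / 60 GB RAM instance on the custom Haswell part.
ec2.run_instances(
    ImageId="ami-12345678",         # placeholder AMI
    InstanceType="c4.8xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",           # placeholder
)

# A 16 TB io1 volume provisioned for 20,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=16384,                     # size in GiB (16 TB)
    VolumeType="io1",
    Iops=20000,
)
```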


There were a lot of other announcements, but for me, these were the biggies.


My favorite quotes

Below are a few of my favorite quotes from the vendor show, backs of t-shirts, or statements made by presenters and attendees.
  • My data center is a JSON file.
  • Data is the new bacon.
  • A typical data center is less than 50% utilized.
  • Anytime someone tells me something that rhymes, I become suspicious.
  • Coca-Cola (marketing) realized 40% operational savings in the cloud.
  • Automating my deployments in AWS allows me to go from weeks to minutes.
    [This kills me since saying “weeks” last year was the big deal.]
  • Iterative infrastructure development.


AWS continues to pull away from the pack

Amazon pulling away from the pack probably isn’t huge news to anyone who read Gartner’s latest Lydia Leong report on cloud computing.  As AWS continues to mature their product, being the biggest allows them to re-invest considerably in new product offerings to lengthen their lead.
  • While competitors try to follow, AWS is busy creating new tools, databases, key management, and more, pulling further away.
  • Implementation using OpsWorks, automation tools, and services puts the cloud into code.


Many of the things that were said last year still hold.  AWS has 5 times more capacity than its 14 closest competitors combined, and they are still adding enough capacity to run all of Amazon as it was back when it was a $7 billion revenue company.


The number of AWS announcements in 2014 was significantly higher than the previous year’s.  Although this is a fun stat, and one easily manipulated by the marketing department, I think the point is still relevant: the pace of innovation and growth hasn’t slowed down.

-- Christian Claborne