Sunday, December 2, 2018

2018 AWS re:Invent Keynotes Review


What follows are my thoughts and takeaways from the AWS re:Invent conference keynote presentations by Andy Jassy and Werner Vogels in November 2018. I attended the keynote sessions virtually and took notes as usual. I've included a lot more detail about the keynote presentations at the end.

Thoughts, Observations and Takeaways
     Jassy is probably the best pitchman in the industry.  Right out of the gate he's on fleek.  AWS is still the gold standard on how to market the cloud (Google, take notes, your marketing stinks!).

     With the announcement of Graviton, an ARM-based CPU for lower cost compute, and Inferentia, a machine learning ASIC to boost inference, AWS is becoming a hardware company, primarily aimed at making AWS stronger, not providing hardware to customers (though there was some of that too with AWS Outposts).  Paulo Santos has his take on the CPU on his blog, seekingalpha.com.  I agree with Paulo, but for different reasons.


     I'm sure AWS crunched the numbers on this, but I think they would be better off focusing on purpose-built chips like the new Inferentia machine learning chip rather than ARM chips.  On Inferentia, they are following Google's lead from over two years ago with the "tensor processing unit".  Designing chips is hugely expensive, and I would leave the R&D for general purpose CPUs to AMD, Intel, or Qualcomm and let them battle it out.  If they want the latest in low power, go to someone like Qualcomm who knows how to do it best.  This would focus their energy on differentiating capabilities and game-changer opportunities in ASICs.  Inferentia is just the start here.  I'm looking forward to seeing what else they do in this space (even if all this AI tech is a bit scary).

      There is one other announcement in the hardware space that has me a bit perplexed: AWS Outposts, a way to order hardware to be plugged in at your data center, fully managed by AWS and supporting either the AWS APIs for provisioning or VMware Cloud.  This sounds similar to Microsoft's Azure on-prem approach, but probably without all of the licensing fees :).  It will be interesting to see how much interest this generates among AWS customers.

     Speaking of machine learning, something that is a perfect use case for the cloud, Amazon continues to deepen their capability in this area by making machine learning easier with the announcement of Amazon SageMaker Ground Truth which enables designers to build accurate ML training data sets. 

     SageMaker is turning into a brand for a constellation of services that make machine learning easier. (attribution Bill Houle)

     The announcement of Amazon Timestream, a time-series database, continues their push into purpose-built database tooling.  Their pitch: why use a sledgehammer to pound in a finish nail?  Use a purpose-built tool that fits into a constellation of services like Kinesis to get the job done and scale like a boss, because our data is growing like crazy.  The amount of data being handled today exceeds the capability of generalized RDBMS solutions, hence the need for things like NoSQL databases, e.g., DynamoDB.
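     Timestream was only just announced, so the details are still guesses.  Here's a minimal sketch of what the write path might look like from boto3, where the client name ("timestream-write"), the record shape, and the database/table names are all my assumptions, not a published API:

        import time
        import boto3

        # Sketch only: Timestream is brand new, so the client name and the
        # record shape here are assumptions based on the keynote pitch.
        ts = boto3.client("timestream-write", region_name="us-east-1")

        ts.write_records(
            DatabaseName="iot",                # hypothetical database
            TableName="engine_telemetry",      # hypothetical table
            Records=[{
                "Dimensions": [{"Name": "device_id", "Value": "sensor-42"}],
                "MeasureName": "temperature_c",
                "MeasureValue": "87.5",
                "MeasureValueType": "DOUBLE",
                "Time": str(int(time.time() * 1000)),  # epoch millis
            }],
        )

     The pitch is that you just append measurements and let the service handle scaling and retention.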

     Every conference I've attended (in person or virtually) they are always pitching ways to save you money, like Amazon S3 Intelligent-Tiering this time around.  In past conferences there were a host of announcements about reducing costs for various services.  The only one I heard this year was the reduction for long term storage called S3 Glacier Deep Archive, which will lower my Glacier costs by almost 80%.
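     Deep Archive isn't live yet, but lifecycle transitions are how this kind of tiering gets wired up today.  A minimal boto3 sketch, assuming the storage class ends up being named DEEP_ARCHIVE (my guess from the announcement; bucket and prefix are made up):

        import boto3

        s3 = boto3.client("s3")

        # Sketch: migrate objects under backups/ to the new deep archive
        # tier after 90 days.  DEEP_ARCHIVE is an assumed name.
        s3.put_bucket_lifecycle_configuration(
            Bucket="my-backup-bucket",  # hypothetical bucket
            LifecycleConfiguration={
                "Rules": [{
                    "ID": "to-deep-archive",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 90,
                                     "StorageClass": "DEEP_ARCHIVE"}],
                }]
            },
        )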

     Jassy also announced Amazon Quantum Ledger Database (QLDB) and Amazon Managed Blockchain.  If you think of the blockchain ledger as just a database, that's what QLDB is: a database that provides transparency, immutability, and cryptographically verifiable transaction logging.  This is for companies that need the features the blockchain data store provides but without all of the other unneeded complexity of trust networks, replication, etc.  Audit logs and transaction logs are a good use case for QLDB.
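     To see why "blockchain minus the trust network" is just a verifiable database, here's a toy hash-chained journal in plain Python.  This is a concept illustration only, not the QLDB API:

        import hashlib
        import json

        def _digest(entry: dict, prev_hash: str) -> str:
            # Chain each entry to its predecessor so history can't be
            # rewritten without breaking every later hash.
            payload = json.dumps(entry, sort_keys=True) + prev_hash
            return hashlib.sha256(payload.encode()).hexdigest()

        class ToyLedger:
            def __init__(self):
                self.entries = []  # append-only journal

            def append(self, entry: dict) -> str:
                prev = self.entries[-1]["hash"] if self.entries else ""
                h = _digest(entry, prev)
                self.entries.append({"data": entry, "hash": h})
                return h  # hand this to auditors as a receipt

            def verify(self) -> bool:
                prev = ""
                for e in self.entries:
                    if e["hash"] != _digest(e["data"], prev):
                        return False  # history was tampered with
                    prev = e["hash"]
                return True

     QLDB presumably layers SQL-like queries, performance, and managed operations on top of exactly this kind of verifiable log.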

     The second offering, Amazon Managed Blockchain, is more of a traditional blockchain solution built on either Hyperledger Fabric or Ethereum.  I'm familiar with Ethereum, which I think is one of the leading candidates for use by business.

     Serverless services continue to grow.  Serverless means you can provision code and only be charged when it runs, true pay-for-use.  Even better, you can control the knobs of bandwidth and scaling without complex provisioning rules for a farm of servers.  One of the advantages of AWS over on-prem is the ability to easily scale up and down; the ROI is that you only provision what you need at the time vs. buying compute for peak capacity on-prem (blowing $$$ when it's not in use).  Auto-provisioned compute lets designers automatically add more compute in whatever sized chunks they feel appropriate, but this is complex and based on a lot of guessing and good algorithms the developer has to build.  On-demand provisioning instead gives you more, in smaller chunks, just when you need it, and scales back when you don't.  This means the potential to be over-provisioned is almost eliminated.
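     For anyone who hasn't touched serverless, the whole unit of deployment is just a handler function.  A minimal Python Lambda (the event shape is whatever your trigger sends; this one is hypothetical):

        # You deploy this function and pay only for the milliseconds it
        # runs; provisioning and scaling are AWS's problem, not yours.
        def handler(event, context):
            name = event.get("name", "world")  # caller-defined event shape
            return {"statusCode": 200, "body": "Hello, " + name + "!"}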

Serverless started with Lambda and then moved on to Aurora and other services as they were released.  Today they released DynamoDB Read/Write Capacity On Demand.  It looks like the on-demand model is being made part of all the new offerings, like their new Timestream database.  (Amazon's billing systems must require a whole separate AWS building just to crunch the numbers for billing customers!)
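A sketch of what the new DynamoDB on-demand mode looks like from boto3: you simply skip the read/write capacity math entirely (the table name is made up):

        import boto3

        dynamodb = boto3.client("dynamodb")

        # With on-demand billing there is no ProvisionedThroughput block
        # to guess at; you just pay per request.
        dynamodb.create_table(
            TableName="orders",  # hypothetical table
            AttributeDefinitions=[{"AttributeName": "order_id",
                                   "AttributeType": "S"}],
            KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
            BillingMode="PAY_PER_REQUEST",  # the on-demand announcement
        )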

     Some products focus on making AWS easier to use and/or scale.  Making things easier must resonate with their customers, and it obviously helps attract more customers.  All of this is great for them, because AWS just keeps growing (I'll say it again: 46% growth YoY).

     They have a history of releasing features that will cannibalize revenue.  It seems to be working for them though. 

     AWS is good at marketing AND coming up with ways to drive more business.  Examples are things like SageMaker, which will drive more use of ML.  Amazon Personalize and Amazon Forecast are good examples of AWS leveraging the value that comes from the use of their own systems to generate more business.  Making ML and AI easier to use and driving more business is big business.  It will drive storage revenue (their bread and butter), compute, and other services.  Also, they must know that the more data a company puts in AWS, the bigger the "data gravity" becomes, pulling in more uses of AWS services on that data, or additional services like reporting.

     VMware was featured as a big partner.  My sense is that this helps them move more workloads from hesitant customers.  In my opinion, customers have to break the habit, and expense, of VMware and move to the AWS control plane and infrastructure as code.  I think the ROI advantages are there.  But, in the end, AWS may not care, as long as they get your workload.  It looks like they partnered with Dell for their on-prem hardware announcement, AWS Outposts.

     Dr. Werner Vogels, a big-time technologist and architect, dedicated his keynote to lifting the covers on some of the key technologies, to build more confidence with customers.  It also showed their drive toward efficiency and better hardware utilization (lowering cost for AWS and allowing more scale).

     As I listened to the keynote and looked at all of the "#ANNOUNCE" tags in my notes below, it struck me that it will be a LOT harder for competitors to keep up, and even harder to catch up.  Sure, some of these are more automation-script platforms that drive toward a common, known design pattern wrapped in a GUI than actual products.  But products like Amazon's new graph database and Aurora are born in the cloud, and the cloud is all about scale in size, speed, and reliability.  The barrier to entry is more than cool scalable products, it's pure experience.  AWS launched in 2002, and since then the company has been learning how to scale, listening to customers, and, once at scale, instrumenting their infrastructure so that they can learn how to optimize it and make it better.

     Finally, it's really hard to walk away from these two keynote presentations without thinking that AWS is the cloud provider of choice. 



Below are my notes from the two keynotes.
Andy Jassy Keynote, 11/28/2018.
You can find a replay HERE.
     46% growth on $27B in revenue.
     In Gartner's magic quadrant for IaaS, AWS is #1, 4 times the size of the next 4 competitors combined.
     140 services.
     Windows business is growing; AWS has 57.7% of cloud Windows workloads vs. Azure's 30.9%.
     Competitor strategy is to show products that are similar to AWS to "check it off".  But competitors lack the depth, by a lot!
     Andy touted encryption and KMS (key management service).
     They have 11 relational and non-relational databases, plus a database migration service.
     Multiple container services
     Lambda, their serverless compute offering, is now integrated with 47 other services.
     Storage options and data transfer options, and the ability to change the size without an infra rebuild.
     #ANNOUNCE, new transfer services: AWS DataSync (faster than rsync) and SFTP support.
     S3 holds 10,000+ data lakes, which can be audited, encrypted, and monitored for unusual access patterns.
     Users can replicate across multiple availability zones.  S3 also supports cross-region data replication.
     Multiple storage options.
     #ANNOUNCE, S3 Intelligent-Tiering.  This is machine learning that migrates data to a colder tier when it's not being accessed; data is migrated back when needed.
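     A quick sketch of opting an object into the new tier at upload time with boto3 (the INTELLIGENT_TIERING storage class string is my assumption from the announcement wording; bucket and key are made up):

        import boto3

        s3 = boto3.client("s3")

        # Sketch: let S3's machine learning decide which access tier this
        # object should live in.
        s3.put_object(
            Bucket="my-data-bucket",  # hypothetical bucket
            Key="logs/2018-11-28.json",
            Body=b'{"event": "reinvent"}',
            StorageClass="INTELLIGENT_TIERING",
        )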

     #ANNOUNCE, Glacier Deep Archive, a lower cost Glacier.
     11 9s durability
     Recovery sounds like current Glacier storage: hours.  The target is anyone storing data on tape.
     $.00099/gig ($1 / terabyte / month).  UNREAL!

     #ANNOUNCE, a lower cost version of AWS Elastic File System (EFS).
     AWS has 57.7% of cloud Windows workloads compared to Microsoft Azure 30.9% according to Gartner
     #ANNOUNCE Amazon FSx for Windows File Server.  Fully native, managed file service.  Ready to go with PCI, HIPAA compliance.
     #ANNOUNCE Amazon FSx for Lustre for HPC workloads.  High throughput, low latency, millions of IOPS.  HIPAA and PCI ready.  His pitch positioned it as a way to support massive storage workloads, then move the data to S3 and shut down.

CIO Dean Del Vecchio of Guardian Insurance takes the stage
     He took over a company with a lot of tech debt.
     Interesting that he mentions the move to implement an innovation environment in the workplace with work-spaces, etc.
     His company took a "Production First" approach.
     They migrated 200 apps in 12 months and shut down their own data centers, reducing their data center footprint by 80%.
     They are no longer focused on managing data centers.
     They use a lot of SaaS and sole-source AWS for PaaS and IaaS.
     All acquisitions are migrated to AWS
     They are still migrating apps to AWS.
     AWS is also supporting their digital transformation.

Back with Andy
     #ANNOUNCE AWS Control Tower for setting up and configuring multi-account environments.  These are implementation patterns with best practices and "guardrails" that make it easy to set up various access and control schemes.  A dashboard shows all accounts, guardrails applied, etc.  Because AWS implements infrastructure as code, it's probably easy for them to implement this design-pattern framework.  Great for companies that are just starting out.
     #ANNOUNCE AWS Security Hub (ASH), one place to go to get a summary of centrally managed security and compliance across an AWS environment.  Initially it looks like this may put a few security vendors out of business if they make it easy, but wait... there's more: they have multiple partners integrating their services with ASH.  This should make for an interesting 3rd party security product ecosystem.
     #ANNOUNCE Lake Formation, making it easier to set up data lakes.  It solves the problems of creating a lake with metadata tags, security (pre-built policies), encryption, access control policies, etc.
     Old guard database vendors (he's talking about Oracle) are constantly auditing you and fining you.  He took shots at Microsoft and Oracle, especially Oracle, who charge customers double to run in AWS.  Amazon Aurora is Amazon's answer to commercial DBs, with as good or better performance and better durability.  There has been massive adoption of Aurora.
     There have been 35 new features added to Aurora, like Aurora Serverless and Aurora Global Database, which is multi-region with sub-second sync.
     RDBMS was fine at gigabyte or terabyte sizes, but users have much more data and demand better performance.  (He's building a story for non-RDBMS.)  Andy used examples like Lyft with millions of GPS points stored, or gamer data in online games.  Their key-value DB is DynamoDB, which has been out for years.
     Airbnb wants an in-memory DB, like Memcached.  Nike has athletes, followers, customers, and the connections between them, a fit for a graph DB like Amazon Neptune.
     #ANNOUNCE DynamoDB read/write capacity on demand, so you no longer need to guess what capacity you need.  Capacity will auto-scale up and down with demand.
     #ANNOUNCE A new database, Amazon Timestream, a fast, scalable, managed time-series database.  Built from the ground up to serve this specific purpose.  Trillions of daily events, auto-scaling, fully managed, etc.

Blockchain
     Andy then delved into what AWS thinks about "blockchain".  They've had blockchain customers running on AWS for a while now, but they didn't understand what those customers needed.  They talked to hundreds of customers and found there are two jobs.  Some want a centralized ledger that is immutable and cryptographically secure: DMVs, manufacturers, healthcare, etc.  None of this is easy to do with an RDBMS, and consensus ledgers are not performant and are difficult to set up for this use case.  Other customers have peers that want to work together with decentralized trust, approving transactions via consensus (the more classic blockchain we see today).
     #ANNOUNCE Amazon Quantum Ledger Database (QLDB).  Supports the first use case, with APIs, high performance, and SQL-like abilities.
     #ANNOUNCE Amazon Managed Blockchain, supporting Hyperledger Fabric or Ethereum (good move), for the second use case.  (I've written about Ethereum before and I think it still stands as one of the best platforms for business blockchains.)

Moving into Machine Learning
Machine learning is still hard, not only the theory but also the laborious job of getting the data loaded in, preparing the data, etc.  So they have the following as part of SageMaker, with the goal of making it easier (a quick training sketch follows these notes).
     Pre-built notebooks for common problems, covering collection, preparation, and training
     Built-in, high performance algorithms
     Auto-scaling for training, as well as using AI to help you tune your training models.  (Using AI to do AI, gotta love it.)
     One click deployment to multi-availability zones with auto-scaling.
     They claim that 85% of TensorFlow workloads are running on AWS (probably a shot at Google, which is sad because it was the Google Brain team that developed it).
     10,000+ customers.  GE is all in, Intuit, etc...
     They've worked to improve the TensorFlow framework to improve scaling, moving from a 65% utilization to 90% utilization on GPUs on the neural network.  This is really good for customers who don't want to pay for one more minute of training than absolutely needed.
     Showed a company that used a proprietary tool to do training that took 30 minutes to complete.  On AWS neural nets with the latest deployments, it completes in 14 minutes.  The message or pitch was that AWS makes this available to any customer, not just the guys with cool on-prem secret toys. 
     Pitched the fact that they support all ML frameworks.
     #ANNOUNCE, Amazon Elastic Inference, allowing you to add GPU acceleration to any EC2 instance for faster inference at much lower cost (up to 75% savings).  It will go from 1TFLOP to 32 TFLOPS.  This is supercomputing power for the masses.
     #ANNOUNCE, Inferentia, a custom processing unit to scale inference.  This is clearly an answer to Google's advantage with its "Tensor Processing Unit".
(I need to look into this more to understand where the advantage is.  Certainly writing code to get better utilization across the training neural net may erode some of the Google TPU advantage.  We'll see.)  Andy claims Inferentia offers another 10x savings in cost.  It's due out next year.
     #ANNOUNCE, AWS SageMaker Ground Truth.  Helps reduce the cost of building highly accurate training data sets, cutting the cost of labeling that data by up to 70%.
     #ANNOUNCE, AWS Marketplace for machine learning.  More than 150 algorithms and models that can be deployed directly to Amazon SageMaker.  This is making it much easier for anyone to be an ML expert.
     #ANNOUNCE, Amazon SageMaker RL.  Capabilities in SageMaker to build, train, and deploy with reinforcement learning.  Fully managed, with example notebooks and tutorials, plus 2D and 3D simulation environments via Amazon Sumerian and AWS RoboMaker, a robotics service.
     #ANNOUNCE, AWS DeepRacer, a 1/18th scale race car with a host of sensors to experiment with reinforcement learning.  Allows users to build a learning algorithm, use a simulator in the cloud, use SageMaker to execute the training and then download it to the car and race.   Order yours today!
     #ANNOUNCE, AWS DeepRacer league.  This came out of the fact that AWS employees got so competitive in building reinforcement learning on their cars they decided to deploy a racing league open to anyone.  Pretty funny.  They will host races at AWS summits and have a championship cup in Vegas next year.  If nothing else, this will drive new developers to use and learn about machine learning and the AWS ecosystem.
     The announcements about DeepRacer were followed by a presentation by Dr. Matt Wood, the GM of Deep Learning and AI at AWS.  He showed how you load your car into a 3D model track simulating all of the sensors and then use reinforcement learning against your algorithms.  This was worth watching.
     #ANNOUNCE, Amazon Textract.  An OCR++ service to easily extract text and data from any document.  No ML experience required.  Rather than getting a bag of words out of an OCR, it identifies columns, tables, and forms, and recognizes what certain chunks of data are (like SSN, name, date).
     #ANNOUNCE, Amazon Personalize.  Real-time personalization and recommendation service based on the same technology used at Amazon.com.  No ML experience required.  This is a good example of AWS leveraging the value that comes from the use of their systems on Amazon.com to generate more business.
     #ANNOUNCE, Amazon Forecast, a time-series forecasting service based on the same technology used at Amazon.com.  Uses machine learning models and algorithms, etc., to generate time-series forecasts.  Benchmarks show 50% better forecasts at one tenth the cost of supply chain software.  This also came from the Amazon.com business.
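     As promised above, a rough sketch of the SageMaker workflow using the Python SDK.  The image URI, IAM role, and bucket names are placeholders, not real resources, and the exact arguments may differ:

        import sagemaker
        from sagemaker.estimator import Estimator

        session = sagemaker.Session()

        # Sketch: train one of SageMaker's built-in algorithms on data in
        # S3, then deploy the model behind an endpoint.
        estimator = Estimator(
            image_name="<built-in-algorithm-image-uri>",  # placeholder
            role="arn:aws:iam::123456789012:role/SageMakerRole",  # fake role
            train_instance_count=1,
            train_instance_type="ml.m5.xlarge",
            output_path="s3://my-bucket/models/",  # hypothetical bucket
            sagemaker_session=session,
        )
        estimator.fit({"train": "s3://my-bucket/train/"})  # managed training
        estimator.deploy(initial_instance_count=1,  # "one click" deployment
                         instance_type="ml.t2.medium")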

Presentation from Ross Brawn OBE, Managing Director, Motorsports, Formula 1
     Primarily discussed how they are using the AWS machine learning platform to add additional content to telecasts and display predictions based on track, driver, and telematics.  Remarkable presentation on how AI is being used for sports in a way we wouldn't expect.  MLB presented last year.
     They also used AWS massive compute to do aerodynamic analysis of their race cars to improve wheel-to-wheel racing.
     The cars have (or will have, it wasn't clear) 120 sensors and generate 1.1 million telemetry data points per second.
     They're using machine learning to understand if an overheating tire is a problem or not by integrating historical data from the car at that point in the race, tire type, track, track conditions, etc.  Crazy cool.
     Future uses of ML will influence race formats, track design, addition of sprint races, and change of grid formations.

Moving on to migrating to the cloud from your on-prem data center. 
     The pitch: the longer you wait, the greater the opportunity loss and the higher the overall cost of going to the cloud.  Most of the world is virtualized on VMware.
     They have VMware Cloud on AWS and VMware has a partnership with AWS.
     Pat Gelsinger, CEO of VMware states that everywhere Amazon is, a VMware cloud instance will be there. 
     #ANNOUNCE, AWS Outposts, the ability to order physical infrastructure delivered to your on-premises data center for a consistent hybrid experience.  Allows customers to order an AWS rack with compute and storage, either with VMware Cloud or as an "AWS native" Outpost option for customers that are used to the AWS control plane.  The hardware is AWS designed.
     #ANNOUNCE, VMware is announcing more services to better integrate with the VMware cloud on AWS.  They will build on vMotion to make it easier to migrate.  NSX (virtual networking) should also wrap around all of this for customers that are experienced with it.
     Discussed all of the other solutions that support hybrid for customers. 
     #ANNOUNCE (a few days ago), Snowball Edge Compute Optimized, a storage option with compute for when you don't have network connectivity.  It is a 100TB data transfer device with on-board storage and compute capabilities that can act as a temporary storage tier for large local datasets or support local workloads in remote or offline locations.  Better description and use cases here.

Final Pitch "It's for the builders"
     AWS removes the barriers to discovery and experimentation because builders don't have to compete for on-prem resources.
     Mentioned that this changes the culture at companies when it comes to trying things out to get to an end solution.  This resonates with what I've heard from other CIOs at cloud first companies that I spoke to last year.
     Talked about the services that support builders

Dr. Werner Vogels on Thursday, 11/29/2018
(Replay found HERE)
Starting last year, they changed the format a little and had Andy do all of the product announcements, with Werner focusing on an architecture and technology presentation, something that I think Werner enjoys a lot more.  This year he did that but was also responsible for announcing products.  He's a deep technical thinker, and with his experience in distributed, reliable systems he was most likely a key influence or designer on DynamoDB and S3, two of the first solutions developed to support Amazon.com.  He really speaks well to the engineers and developers in the crowd.

Discussed the architecture of Aurora, the AWS-built database introduced a few years ago.
     Discussed scaling databases in the old days, blast radius containment, cell based architectures, availability zones.
     Aurora is born in the cloud and discussed the high availability and performance designs.
     Discussed the quorum-based scale-out for Aurora and its 6 replicas.  Aurora uses 10GB blocks to improve recovery time (replicating to a new node on a full failure).
     In Aurora they only move the log over to a write queue that other nodes can ingest.  The log is the database, since the destination is database-aware.  (Toy sketch after these notes.)
     His pitch ... "Aurora is a Cloud native database as a foundation for innovation. "
     Schema changes in Aurora are applied over time rather than via one very large copy-and-rebuild, making it a lot faster to change the schema.
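     The toy sketch I promised on "the log is the database": the writer ships only redo records and every storage node replays them to materialize pages.  This is a concept illustration, nothing like Aurora's actual code:

        # Concept sketch: replicas ingest redo records instead of full
        # pages, so replication traffic is just the log.
        class StorageNode:
            def __init__(self):
                self.pages = {}  # page_id -> latest value

            def apply(self, record):
                page_id, value = record
                self.pages[page_id] = value  # replay the redo record

        log = [("page-1", "v1"), ("page-2", "v1"), ("page-1", "v2")]

        # Six replicas ingest the same log; a failed node recovers by
        # replaying it rather than restoring a full database copy.
        replicas = [StorageNode() for _ in range(6)]
        for record in log:
            for node in replicas:
                node.apply(record)

        assert all(n.pages == {"page-1": "v2", "page-2": "v1"}
                   for n in replicas)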

Now moving on to other topics.
     Customers started moving to purpose-built databases rather than an RDBMS for everything.  Key-value pairs: DynamoDB, starting in 2006.  Discussed performance at scale, where they have touted some huge numbers for users.  (I've seen other presentations on DynamoDB before and the performance is staggering.)
     There is a cell based architecture applied to DynamoDB.
     A lot of companies used MySQL, and he discussed sharding: shards get hot, and they solved it with automatic resharding (toy sketch below).
     The pitch: you can move an RDBMS-style architecture to DynamoDB.  Amazon did this for the five billion updates to their storefront, with 30%+ growth for items, offers, and variations.  They moved 600 billion records to DynamoDB, in near real time and with no loss, moving over the item and offer services.  Now they have something that will scale.  Amazon has been trying for years to get off Oracle on the commercial side.  They must be very close.
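     The toy resharding sketch mentioned above: this is roughly the routing layer people bolt onto MySQL, and changing the shard count means migrating keys, which is exactly the operational pain DynamoDB automates away:

        import hashlib

        def shard_for(key: str, num_shards: int) -> int:
            # Route a key to a shard by hashing it.
            h = int(hashlib.md5(key.encode()).hexdigest(), 16)
            return h % num_shards

        def reshard(data: dict, new_n: int) -> dict:
            # Rebuild the {shard_id: {key: value}} layout under a new
            # shard count; every key may move, hence the pain.
            shards = {i: {} for i in range(new_n)}
            for key, value in data.items():
                shards[shard_for(key, new_n)][key] = value
            return shards

        hot = {"user-%d" % i: i for i in range(10)}
        print(reshard(hot, 4))  # a hot 2-shard setup split across 4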

     Discussed S3 storage. 
     They manage exabytes of storage.  In a single region they manage 60TB of growth in S3 and Glacier per day (I think).
     There is incredible focus on data durability.  235 distributed microservices run S3 and Glacier.  One of them does nothing but prepare new services and bring them online.
     AWS has a culture of durability.  It includes durability reviews for any new feature, like you would do for a security review.  They do static analysis, checksums (looking for uncommanded bit flips), proofs, durability checks, operational safeguards, etc.
     S3 and Glacier are built on fault-tolerant hardware storage, but the design also accounts for a full loss of an AZ.
     The math that characterizes the risk of failure and data loss comes out to 11 9s of durability.  The calculation includes time-to-fail of hosts and disks, and time to repair.  They don't use mean time to failure but actual time to repair, and they look at the worst case, not the mean or best.  They checksum data at rest and monitor data in flight.
     S3 durability is resilient to the loss of a data center / zone.
      
     With millions of customers they can observe and improve.  (Sub-pitch: because they are so big, they can learn to improve faster than competitors, and certainly faster than on-prem implementations.)
     Werner's happiest day: on Nov 1, 2018, they turned off their Oracle data warehouse and changed over to Redshift.  (See my notes on the performance of Redshift given by Zynga.)
     By looking at fleet telemetry from others using Redshift, they have been able to improve performance: 17x for repetitive queries, 10x for bulk deletes, 3x for single-row inserts, and 2x for commits.
     87% of AWS Redshift databases don't have significant wait times.  (I don't know what the hell that means.)
     #ANNOUNCE, Redshift concurrency scaling.  When they see performance dip, they scale it up.  The first hour is free; most customers will never see a cost.

Serverless & Lambda
     Fender's CEO gave a presentation on how they use AWS at Fender to deliver and archive video, as well as for their subscription-based services.  He talked about how they reduced their bill by 15% while delivering 21x the requests using serverless, and about using machine learning for instruction improvement.
     (One of my thoughts here: once your data is in the cloud, adding machine learning and AI is much easier.)

Werner moves onto serverless
     Advantages: no infra to deploy, lower cost because it's truly pay-for-use, etc.
     Lambda processes trillions of requests per month.
     Discussed Firecracker: speed, performance, and security via isolation.
     The Firecracker microVM lets them run more instances within EC2, with isolation, for their serverless computing infrastructure.
     95% of AWS features and services are built based on direct customer feedback.
     Werner moves into "Systems of Systems", discussing how they stitch their services together along with their partner offerings.
     He moved on to their agnostic approach to development.
     #ANNOUNCE, AWS Toolkits for all the popular IDEs, allowing them to do serverless development in addition to AWS's own IDE, Cloud9.  If you don't want to use Cloud9, you can use just about anything else.
     Agnostic on Languages
     #ANNOUNCE, adding Ruby support on Lambda
     #ANNOUNCE, Custom Runtimes allowing you to bring any Linux-compatible language to Lambda (bootstrap sketch after these notes).
     #ANNOUNCE, Lambda Layers extend the Lambda execution environment with binaries, dependencies, or runtimes.  They're providing "partner layers" as well.
     #ANNOUNCE, Nested applications using the Serverless Application Repository, allowing you to stitch apps together.
     #ANNOUNCE, Step Functions service integrations to support better orchestration via Step Functions.  Connect and coordinate AWS services together without writing code.
     Manage APIs with API gateway.
     #ANNOUNCE, WebSocket support for API Gateway allowing you to build stateful applications.
     #ANNOUNCE, ALB support for Lambda to integrate Lambda functions into existing web architectures.
     Discussed how Fender does better quality control using vision and machine learning
     Talking about stream processing.  95% of auto problems can be solved by placing a small mic in the engine compartment.  "Now that is a stream to be analyzed."  Also discussed how Kinesis is being used to analyze video, etc.
     "Video and audio are becoming data streams to be analyzed".

Kafka issues... most likely another blast of announcements
     #ANNOUNCE, Amazon Managed Streaming for Kafka, a fully managed and highly available Apache Kafka service (producer sketch after these notes).
     Well-Architected is a set of principles that are used up front, not after the fact.
     Cue Yuri Misnik, GM of National Australia Bank, who spoke.
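     The producer sketch mentioned above: once the managed cluster hands you broker endpoints, producing is plain Kafka.  The bootstrap server below is a placeholder for whatever MSK returns (using the kafka-python library):

        from kafka import KafkaProducer  # pip install kafka-python

        # Sketch: MSK runs the brokers and ZooKeeper; your client code is
        # unchanged vanilla Kafka.
        producer = KafkaProducer(
            bootstrap_servers=[
                "b-1.mycluster.kafka.us-east-1.amazonaws.com:9092"],  # placeholder
        )
        producer.send("telemetry", b'{"sensor": "engine-mic", "db": 71}')
        producer.flush()  # running the cluster is AWS's problem now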

Well Architected Framework
     Talking about the AWS well architected framework.  They have formed a set of partners to help do the reviews.
     #ANNOUNCE, the AWS Well-Architected Tool to find best practices, so that reviews are self-service.  Get deep insights into all workloads, for example finding issues with key rotation.
     "Now Go Build" are a set of videos that follow Werner around on speaking tours on architect in the cloud.  See YouTube.
     Werner loves music and always announces the band that will play at tonight's party.  Werner likes Skrillex... so they're back.

-- Christian Claborne, chris
