Monday, April 17, 2017

Google Next 2017 Cloud Conference thoughts



I attended Google’s first broadly accessible cloud conference, “Google Cloud Next 2017,” back in March. Google has been lurking in the background of the top three cloud players, and I’ve been eager to learn more about their offering and potential.



Executive Summary

My summary observation is that Google Cloud is AWS circa 2014, but with a solid container story (Kubernetes) and strong machine learning and big data capabilities. Google customers I conversed with spoke highly of Google’s network and routing. BTW, I had a hard time finding attendees who were actually using Google’s cloud. Most attendees were AWS customers trying to understand whether Google is a viable option for a multi-cloud strategy.

Google’s compute pricing is simpler than AWS’s. Rather than charging up-front fees for better pricing, Google rewards customers who don’t shut down their instances with lower prices, which eliminates one of the biggest complaints from AWS customers: having to manage complex “reserved instance” strategies. It’s unknown what the impact is when doing blue/green deployments, but I think they look at overall load. What’s interesting here is that a week later, AWS responded by changing their “reserved instance” offering to narrow the gap. Google also allows finer control over the number of CPUs and the amount of memory, versus AWS’s pre-packaged machine types. This model lends itself well to a large number of varied batch jobs, each with different resource needs, and therefore could save a lot of $$$ over AWS if we can parcel work out at the machine level. Google claims companies with good habits around CPU and memory configs can save 30%+.
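
To make that concrete, here is a quick back-of-envelope sketch in Python. The hourly rates and instance shapes below are made-up placeholders, not published prices from Google or AWS; the point is simply that paying for exactly the vCPUs and memory each batch job needs, instead of rounding up to the next pre-packaged instance size, is where the savings would come from.

```python
# Back-of-envelope comparison: pre-packaged instance sizes vs. custom sizing.
# All prices below are made-up placeholders for illustration only.

PRICE_PER_VCPU_HOUR = 0.033   # hypothetical $/vCPU-hour
PRICE_PER_GB_HOUR = 0.0045    # hypothetical $/GB-hour of memory

def custom_machine_cost(vcpus, mem_gb, hours):
    """Cost when you can dial in exactly the CPU/memory a job needs."""
    return (vcpus * PRICE_PER_VCPU_HOUR + mem_gb * PRICE_PER_GB_HOUR) * hours

def prepackaged_cost(vcpus_needed, mem_gb_needed, hours):
    """Cost when you must round up to the next fixed instance shape
    (here: power-of-two vCPUs with ~3.75 GB of memory per vCPU)."""
    shape_vcpus = 1
    while shape_vcpus < vcpus_needed or shape_vcpus * 3.75 < mem_gb_needed:
        shape_vcpus *= 2
    return custom_machine_cost(shape_vcpus, shape_vcpus * 3.75, hours)

# A fleet of batch jobs, each with different needs: (vCPUs, GB of memory, hours/month).
jobs = [(3, 4, 200), (5, 20, 100), (9, 12, 300), (2, 14, 400)]

fixed = sum(prepackaged_cost(c, m, h) for c, m, h in jobs)
custom = sum(custom_machine_cost(c, m, h) for c, m, h in jobs)
print(f"pre-packaged: ${fixed:,.2f}  custom: ${custom:,.2f}  "
      f"savings: {100 * (1 - custom / fixed):.0f}%")
```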

Overall, the conference failed to generate any real excitement. Most customers told me they are AWS customers considering Google, except for one startup that had decided to go all-in with Google. Google’s customers tell me they see fewer noisy-neighbor issues and love Google’s network. Google’s overriding marketing messages were “Be multi-cloud” and “Love our open standards,” both clearly aimed at chipping away at AWS’s stronghold.

At this point in time, I don’t recommend investing in Google Cloud unless you can find an ROI that warrants fragmenting your focus on standing up mature AWS and Azure operations, and the business case can pay for all of the setup and beachhead services necessary to get going. For cloud-based office productivity, I definitely recommend Google’s productivity suite, G Suite. It’s exceptionally faster than O365 and, I think, has more features, which isn’t surprising given they’ve been doing this longer than any of their competitors.

Impressions

Google needs a better conference partner

My first impression was a bit rough. We were greeted with a massive line to get in, wrapping around Moscone Center, and this was about an hour after the keynote was supposed to start. They were expecting three thousand people (AWS re:Invent draws 30,000) but failed to find a vendor that could onboard people quickly enough to get them into the first keynote. The line was still blocks long an hour after the keynote started, and any excitement people had about Google Cloud was quickly evaporating outside. Had it been raining, as it can in San Francisco, this would have been an epic disaster. I listened to the keynote while standing in line and finished watching it inside.

Google needs to rethink their marketing

The speakers and format failed to generate any excitement. Later keynotes weren’t much better, though they did start doing more product introductions.

Google is seen as a company of highly skilled geeks with no marketing, and this is attractive to many small startups and others chasing the bleeding edge. The issue I see is a lack of the maturity needed to address the enterprise; they are viewed as a technology leader rather than a general-purpose compute provider. For some, the number of services AWS offers is a bit daunting, and AWS may be starting to collect some cruft in the form of older offerings that have to be brought along.

Many customers I spoke to see using Google for specific use cases like machine learning or big data, or would use them as part of a multi-cloud strategy for heightened availability. As for the latter, AWS helped Google out by accidentally causing an S3 storage outage on the east coast a couple of weeks before the conference. [As a side note: customers who are multi-region at AWS weren’t affected by the outage unless they were trying to get into the console, and AWS quickly published a fully transparent review of what happened and what they are doing to prevent it in the future. Google’s public cloud hasn’t been around long enough to generate an outage story. A true test of any cloud company is how they handle an outage, how transparent they are with customers, and how they learn from it.]

Very light on details and lots of BETA

Most of the sessions I attended were extremely light on details, which was strange. For example, we sat in a couple of presentations about their cloud storage, as well as one focused on using it for off-prem backups, but there were few details other than pricing and what the product offering looked like. There were no real details about how the products worked, nor any solid story about how customers are supposed to get their data into them. Google is known for its network, yet there was not one mention of using it to move data into or out of storage, which was strange. They have no answer at all to AWS devices like Snowball or Snowmobile for getting your data into the Google cloud. Unless I misunderstood, most of the tools in Cloud Functions (their serverless product), and others, were BETA.

Google is trying to use “multi-cloud” as a way to your heart

One message that came through loud and clear is that Google wants to be part of your multi-cloud strategy (did you know you need to be multi-cloud?). For web-service-based companies this may make sense when you absolutely must be up no matter what. Waze, a cool traffic mapping and routing service purchased by Google, presented how they moved much of their services from AWS to Google but plan on leaving a portion at AWS for availability (I think there may be more to it, though).

Another message that was beaten into customers during the final keynote was “We are going to be the open cloud,” with the subtle demand that all cloud providers should be using “open standards” like Google. This is clearly a shot at AWS, where tools like DynamoDB are proprietary. Being a little late to the game works well for Google here: open source products are catching up with the ones developed by AWS and Google, potentially enabling Google to leap-frog AWS in some areas. But their highly scalable multi-region database, “Cloud Spanner” (BETA), is every bit as proprietary as some of AWS’s tools. Google’s position is that if everyone used “standards,” the choice would just come down to price. I disagree, however, as you will see below.

Cloud as a commodity?

There was marketing about easily shifting workloads between vendors. For example, companies that use a standard container deployment could easily move it to Google. The idea of moving your workloads to whichever provider has the best pricing sounds really cool, and technically it holds water, but there’s more to it. Here’s the thing: it’s harder and more complex than just using similar tooling so you can move a container deployment from one cloud to another. Most enterprise applications have data, right? “Data ohana” means no data gets left behind. Until competitors are sitting right next door to each other, the data needs to stay with the core processing. In the enterprise, moving an application from one regional data center to another, or from one cloud to another, means moving its data and possibly its core dependencies. No matter what, cloud fungibility means having to test in each cloud for compatibility, network latency, etc. And if your design uses micro-services, it also means moving, or possibly duplicating, all of those micro-services as well. In addition, to treat cloud as a commodity at this level, you’ll need tooling for all of your security practices, monitoring, deployment automation, backup and DR plans, and a host of other items that go with cloud operations. Unless you design specifically for this use case, I think it’s going to be hard, and in my opinion most companies aren’t going to do it. As an example, it’s taken Waze over a year of planning and execution to move from AWS to Google, so jumping clouds isn’t as easy as the marketing brochure says. Maybe the fungible cloud is in our future, but I just don’t see it as a viable solution today. Google should focus on winning customers with better products.
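
A little back-of-envelope math shows why the data is the anchor. The inputs below (data volume, sustained bandwidth, egress price) are assumptions I picked for illustration, not numbers from any vendor, but the shape of the result holds: moving the containers is the easy part; moving the data behind them is not.

```python
# Rough data-gravity math: what it takes (in time and dollars) to follow
# your containers to another cloud. All inputs are illustrative assumptions.

data_tb = 100                 # assumed size of the application's data set
link_gbps = 10                # assumed usable, sustained network throughput
egress_per_gb = 0.08          # assumed $/GB egress charge from the source cloud

data_gb = data_tb * 1024
transfer_hours = (data_gb * 8) / (link_gbps * 3600)   # GB -> gigabits, then hours
egress_cost = data_gb * egress_per_gb

print(f"Moving {data_tb} TB over a sustained {link_gbps} Gbps link: "
      f"~{transfer_hours:.0f} hours of transfer")
print(f"Egress charges at ${egress_per_gb}/GB: ~${egress_cost:,.0f}")
# And that's before re-testing, latency checks, security tooling, monitoring,
# deployment automation, and DR plans in the new environment.
```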

Google’s Network

I spoke to several Google customers who raved about the network performance and capabilities. The main comments I heard were about routing and load balancing. A few AWS customers complained that AWS ELB throughput wasn’t big enough and said Google’s load balancers blow it away. Also, because of the configurability and performance of Google’s network, routing, and load balancing products, some customers use Google to tie their AWS regions together. I thought this was inventive.

Pricing strategy

It looks like Google will try to use price to penetrate further into the public cloud market. They are pushing hard for people to use open source tools; if everyone used exactly the same tools, competing on price would work in Google’s favor. I prefer their approach to pricing: rather than “reserved” pricing, they use usage-based discounts. They also touted the ability to select exactly how much CPU and memory you need (rather than AWS’s fixed instance sizes), potentially saving their customers 30%+. I’ll leave it to others to do the analysis, but on the surface it looked promising. If the discount is tied to overall consumption, it works; if it’s tied to the individual compute instance, the benefit breaks down when you do a lot of blue/green deployments.
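
Since I don’t know how Google actually computes the discount when instances are replaced mid-month (that’s the open question above), here is a small sketch of the two possibilities: the discount tied to each individual instance’s uptime versus the discount computed on overall consumption. The discount tiers and hourly rate below are placeholders, not Google’s published schedule.

```python
# Two ways a usage-based discount could be computed when blue/green
# deployments replace instances every week. The tiers are placeholders,
# not Google's actual discount schedule.

def discount_for_fraction_of_month(fraction):
    """Hypothetical tiered discount based on how much of the month was used."""
    if fraction >= 1.0:
        return 0.30
    if fraction >= 0.75:
        return 0.20
    if fraction >= 0.50:
        return 0.10
    return 0.0

HOURS_IN_MONTH = 730
RATE = 0.10  # hypothetical $/hour for one instance

# Blue/green: four instances, each alive for one quarter of the month,
# together providing one full instance-month of capacity.
per_instance_runs = [HOURS_IN_MONTH / 4] * 4

# (a) Discount tied to each individual instance's uptime:
#     every run is too short to earn any discount.
cost_per_instance = sum(
    h * RATE * (1 - discount_for_fraction_of_month(h / HOURS_IN_MONTH))
    for h in per_instance_runs
)

# (b) Discount tied to overall consumption: the four runs add up to a
#     full instance-month, so the full discount applies.
total_hours = sum(per_instance_runs)
cost_aggregated = total_hours * RATE * (
    1 - discount_for_fraction_of_month(total_hours / HOURS_IN_MONTH)
)

print(f"per-instance discount: ${cost_per_instance:,.2f}")
print(f"aggregated discount:   ${cost_aggregated:,.2f}")
```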

G Suite

A final piece worth mentioning is that G Suite (formerly Google Apps) was on full display. I’ve been a huge promoter of Google Apps for years, and it just gets better with age. I wrote articles on it years ago, and it has fulfilled its promise of high availability, performance, and capability that kicks O365’s butt, and they’ve continuously improved features over the years. (I’m currently using it to write this.) G Suite customers gush about the product (with almost a cult following), while the O365 users at the conference only use it because they have to (Outlook is a hard habit to break). My attempt to collaborate on a 2,000+ row O365 spreadsheet was a complete failure; performance made it unusable. When I moved the spreadsheet to Google, it handled it with ease, with performance nearly matching a local client application. Google, known for good internet collaboration features within its apps, introduced “Jamboard,” a digital whiteboard that improves the virtual meeting experience and persists your ideas in Google Drive. The demo was impressive and geeky. I’d love to have something like that to enable real-time collaboration, as well as to persist ideas and share them with teams later.

Key takeaways from Sessions

Below are some quick notes pulled from my session notes:
  • Data backed up to Google’s “cold storage” backup service is available in seconds, whereas AWS Glacier has a 4-hour retrieval window. Pricing is similar to AWS’s.
  • Fifteen minutes after Pokemon Go launched, they were at 50x their worst-case traffic estimate. Google allowed them to scale quickly.
  • Home Depot’s most-discussed ROI was shorter release cycles, going from 3+ months per release to 1 day. No financial business case or other factors were presented. [To me, this is the benefit of DevOps and cloud.]
  • Home Depot’s core tenets are security, multi-region high availability, and infrastructure as code.
  • Spanner is Google’s RDBMS that scales horizontally for massive transaction volumes and can be load-balanced regionally, but it isn’t compatible with any other open tools. If the marketing is true, Spanner’s scalability and reliability are amazing, and it will be interesting to see how other cloud vendors respond. It’s currently in BETA.
  • Spanner has an SLA of 99.99% uptime for single-region and 99.999% uptime for multi-region deployments (see the sketch after this list for what those figures mean in downtime per year).
  • Google’s load balancer is a global service that supports regional load balancing.
  • Google also has a serverless product, “Cloud Functions,” comparable to AWS Lambda for supporting microservices architectures. The solution is built around Kubernetes and is currently in BETA.
  • Google Drive sees 800 million users a day.
  • The state government of Wyoming saves over $1M/year on video conferencing by moving to Google Hangouts.
  • The ex-CIO of Wyoming’s state government realized that to get exponential improvement, you have to change your perception. He had some very compelling stories and now works at Google.
  • Waze runs 100+ microservices spread across regions and also uses AWS. They have been migrating services to Google for over a year. The team uses Spinnaker, an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.
  • There was a very cool demo in the Java workloads session: Google’s debugger can add logging to production code without touching the code, and they can do this at scale with minimal performance impact.
  • IntelliJ has a Google plug-in for deployment.
  • App Engine basically containerizes Java apps for deployment. Google does this in a lot of areas and depends heavily on Kubernetes, their container management service.
  • Google’s DNS SLA is 100%. Every Google customer I spoke with was very happy with Google’s network services.
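
For context on what the Spanner and DNS SLA figures above actually mean, here is the standard downtime-budget arithmetic behind “the nines” (nothing Google-specific, just math):

```python
# Downtime budget implied by an availability SLA ("the nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("99.99% (single region)", 0.9999),
                            ("99.999% (multi-region)", 0.99999),
                            ("100% (Google DNS SLA)", 1.0)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label:>24}: ~{downtime_min:,.1f} minutes of allowed downtime per year")
```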

Attendee takeaways

  • Several attendees talked about how much they liked Google’s networking. One attendee uses Google’s network backbone to tie his AWS regions together. Another specific point was that Google’s virtual networks can handle a lot more traffic than the AWS equivalent (which he said was limited to about 2 Gbps). The attendee also mentioned that they were never able to saturate Google’s network load balancers.
  • One attendee complained that Google doesn’t have a spot market for compute. I thought they did.
  • One attendee mentioned that he thinks AWS will come out with region peering.  We’ll see in November :)
  • eBay is starting to use Google.
  • A Disney engineer mentioned that his team likes Google because you don’t have to understand infrastructure the way you do in AWS. (Other Disney teams use AWS.)
  • Turner Broadcasting is looking at Google for their analytics capabilities.
