AWS Summit in San Fran: From Monitoring & Log Management to VPC Peering … What Flux7 Learned

As we expected, the AWS Summit this week was an excellent experience. We talked to many interesting people, both new contacts and old friends. We gathered several customer leads, shared technology best practices, talked about business development strategies, and explored several partnership opportunities. As promised, I am now sharing my experience in San Francisco, so please read on.

Even before the keynote began, we received a surprise that proved a mixed blessing. One of our large enterprise customers took part in an interview detailing the benefits they received from migrating to the cloud with the help of the Flux7 team. But even though this was amazing validation of our work, we have not yet received approval from their legal department to promote ourselves as the team that guided their cloud migration.

As for the keynote by Andy Jassy, it was interesting. There were two announcements relevant to AWS users. The first was the release of the next generation of high-storage and high-memory instances. The second was steep price cuts across several of the Amazon instance classes and S3.

One noticeable recent development is that Amazon is breaking into the enterprise with its cloud solution. Borrowing terminology from Geoffrey A. Moore's great book "Crossing the Chasm," it seems that the cloud is right now crossing the chasm and finally breaking into the enterprise market. It's apparent that Amazon is heavily focused on making sure it comes out on the other side, and it is quickly rolling out features demanded by the enterprise market. In fact, Andy acknowledged in his keynote the importance of private data centers to enterprise customers, and talked about the value of hybrid implementations with public and private clouds, as well as using Amazon as a cloud-bursting solution.


Earlier this week, if you recall, I went through the session descriptions and decided on which sessions I was going to attend. I shared that information with you guys. But, alas, I couldn't attend all of them.

The first session I wanted to attend, "Cloud Adoption: Understanding AWS Security" by Stephen Schmidt, was full by the time I got to the doors. So I ended up exploring the expo and got caught up in that for quite some time. To my delight, though, I had several interesting conversations with many vendors, especially on the monitoring and log management side. I now have two observations about these solutions.

The first observation is that good monitoring and log management suffer from the same issue as all DevOps investments: they require a lot of upfront investment before the whole team starts to see the value of the solution, and that requires developer buy-in. The problem is that developers, when faced with an immediate issue, will make whatever decision solves their current problem. So, to save time in the short term, the long-term investment in the infrastructure never happens, even though there is a tipping point beyond which the long-term solution starts to deliver more immediate value than the short-term workaround. This is exactly why many people are interested in having Flux7 set up their developer workflows and processes for code reviews and continuous integration.

The management team, of course, recognizes the value of an efficient workflow, but on their own they are unable to reach the tipping point. Log management is the same: only once the setup provides a high amount of immediate value does it see organic growth from the combined investment of team members. So the ramp-up needs to be ridiculously easy to cross the tipping point, after which the tool can become an effective solution to the customer's problems.

The second observation is that all the solutions have some weaknesses. For example, most of them made a distinction between log management and performance event management and did one or the other, when in reality the two are very tightly coupled, if not the same. When debugging an issue, we need a holistic approach that spans both performance events and logs, so there needs to be a lot more consolidation in this space. I am impressed by Splunk's abilities here, in terms of processing new kinds of data and building new dashboard widgets.

The only session that I was able to attend turned out to be extremely useful, and that's because I hadn't yet read Amazon's announcement today about VPC peering (http://aws.typepad.com/aws/2014/03/new-vpc-peering-for-the-amazon-virtual-private-cloud.html). I was interested in the talk precisely because the description seemed to present solutions that I didn't think were possible in the existing AWS infrastructure, especially without a feature like having two VPCs talk to each other. Apparently, I wasn't missing any architectural secret sauce; Amazon simply had the required feature up its sleeve. So, you may wonder why having two VPCs talk to each other is so important.

To us, having a VPN gateway controlling access to your onsite IT assets is an old-school solution. In the cloud, we believe that for many applications it makes sense to bring each one up in its own VPC. Doing so guarantees isolation of an application from external factors, better security, and higher fidelity between development and production environments. Unfortunately, because of enterprise security requirements, we have never been able to recommend this solution to our enterprise clients. The only way to do it was to maintain a ridiculously high number of VPN connections, which is impractical: it requires either purchasing extremely expensive hardware routers to support the tunnels, or setting up SWAN servers to create VPN connections between VPCs.
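As a rough illustration of what "a VPC per application" looks like in practice, here is a minimal sketch using boto3; the application names and CIDR blocks are hypothetical placeholders, and a real setup would also create subnets, route tables, and security groups:

```python
# Minimal sketch: bringing up an isolated VPC per application with boto3.
# Application names and CIDR blocks are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def create_app_vpc(app_name: str, cidr: str) -> str:
    """Create a dedicated VPC for one application and tag it by name."""
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id],
                    Tags=[{"Key": "Name", "Value": app_name}])
    return vpc_id

# Each application gets its own non-overlapping address space, so a
# development copy of the stack can mirror production exactly.
web_vpc = create_app_vpc("web-frontend", "10.0.0.0/16")
api_vpc = create_app_vpc("api-backend", "10.1.0.0/16")
```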

The ability to peer VPCs, at no additional expense, now allows us to programmatically set up a VPC for each application as an isolated environment, while still meeting the corporate security requirements that previously demanded keeping all private traffic encrypted on a VPN tunnel. The extra bonus is that Amazon supports cross-account VPC peering. We had long understood that the best way to provide an isolated environment to a business unit for billing purposes was to use separate accounts with consolidated billing, but this was not practical for enterprise clients because it required each business unit to have its own VPN tunnel. So, as a last resort, customers had to come up with a tagging scheme, a method that not only ran into limitations on tagging but also failed to account for all AWS charges. It also kept the operations team from letting developers log in to an environment and create instances, because operations is responsible for the budget and there is no guarantee developers will obey the tagging guidelines. Now, thanks to cross-account VPC peering, separate accounts with consolidated billing become practical, giving enterprises a reliable accounting mechanism on AWS.
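To make that workflow concrete, here is a minimal sketch of cross-account peering using boto3; all VPC, route table, and account IDs are hypothetical placeholders, and the request must be accepted using the peer account's own credentials:

```python
# Minimal sketch: peering two VPCs across accounts with boto3.
import boto3

# Requester side: our account, where the application VPC lives.
ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",        # hypothetical: requester VPC in our account
    PeerVpcId="vpc-22222222",    # hypothetical: VPC in the business unit's account
    PeerOwnerId="123456789012",  # hypothetical: peer AWS account ID
)
pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accepter side: must run under the peer account's credentials.
peer_ec2 = boto3.client("ec2", region_name="us-east-1")
peer_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side then routes the other VPC's CIDR block over the peering connection.
ec2.create_route(
    RouteTableId="rtb-33333333",          # hypothetical route table in our VPC
    DestinationCidrBlock="10.1.0.0/16",   # the peer VPC's CIDR block
    VpcPeeringConnectionId=pcx_id,
)
```

Because the accept step runs in the other account, each business unit keeps full billing isolation while its VPC still reaches shared services over the peering connection.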

At the end of the event, we met some new startups at the AWS Activate After Party. Afterward, we gave Ofir (Twitter: @IAmONdemand), author of the blog, a ride to his hotel. On the way, we discussed strategy for VyScale. It was a very interesting conversation, and it sparked many thoughts about the future of Flux7.

Overall, the AWS Summit was a great experience. It gave us a lot more information about what is happening across the AWS landscape so we can serve you, our customers, better.

About the Author

Ali Hussain
