As we expected, the AWS Summit this week was an excellent experience. We talked with many interesting people, both new contacts and old friends. We gathered several customer leads, shared technology best practices, discussed business development strategies, and explored several partnership opportunities. As promised, here is my account of the experience in San Francisco, so please read on.
After my post last week about using AWS in the cloud, I thought I’d share the sessions at the upcoming AWS Summit in San Francisco that have us excited. My picks reflect my own interests, shaped by my role at Flux7 and by the technology development I work on both internally and for our professional services clients.
It seems like only yesterday that Aater and I were in Las Vegas immersed in the Amazon Web Services re:Invent experience. AWS re:Invent was an incredibly well-run conference featuring pertinent and informative sessions. Participants spanned the full range of cloud users, from decision makers to the feet-on-the-ground folks, from those just dipping their toes in to those managing massive clusters. It was a rich learning experience and a wonderful networking opportunity. Flux7 Labs received an honorable mention for VyScale, our spot optimization solution, at the Spotathon, and we’ve made several sales since then. And now it’s already time for the AWS Summit in San Francisco.
Amazon has changed the face of the startup world with its cloud services. Now it’s possible for two people in a garage to spin up large compute clusters with virtually no upfront capital cost.
Last week we explored how business goals should inform every good DevOps strategy. This week we’ll discuss how to use those goals to validate your DevOps architecture. From our experience at Flux7, the best way to do this is to define the workflows of key users.
An organization moving to the cloud only truly realizes the cloud’s benefits when it establishes sound DevOps methodologies and cloud automation tailored to its needs. The process is replete with tool choices at every stage, and the overriding goal throughout is to understand and serve the organization’s needs.
While advising a client with a strong interest in ARM servers, we decided to evaluate the computational overhead of various big data technologies, which led to some interesting discoveries. Since we in the field are all trying to figure out how big data technology will evolve, Flux7 Labs thought we’d share some of what we’ve learned.
I’ve been doing a lot of analysis of latency and throughput recently as a part of benchmarking work on databases. I thought I’d share some insights on how the two are related. For an overview of what these terms mean, check out Aater’s post describing the differences between them here.
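For readers who want a concrete handle on that relationship before reading further, one common way to connect the two metrics in a system under steady load is Little's Law (concurrency = throughput × latency). The sketch below is an illustrative assumption on my part, not a result from our benchmarks; the numbers are made up.

```python
# A minimal sketch of Little's Law, which relates throughput and latency
# for a system holding a steady number of requests in flight:
#   concurrency = throughput * average latency
# All names and figures here are hypothetical, for illustration only.

def throughput(concurrency: int, avg_latency_s: float) -> float:
    """Requests completed per second when `concurrency` requests are
    kept in flight and each takes `avg_latency_s` seconds on average."""
    return concurrency / avg_latency_s

# Example: a database serving 32 concurrent queries at 4 ms average latency
print(throughput(32, 0.004))  # 8000.0 queries/sec
```

Note what this implies: cutting per-query latency in half doubles throughput only if the client can keep the same number of requests in flight, which is one reason the two metrics diverge in real benchmarks.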
On January 11, Aater and I attended Data Day Texas 2014 here in Austin. Sponsored by Geek Austin, it was such a great event that I thought I’d share some highlights. Data Day Texas holds special significance for Flux7 Labs because it was at Data Day 2013 that we made our first public presentation: Aater’s talk on the role of microservers in big data, which you can find here.