At Flux7 we work with a wide variety of customers, and regardless of their level of IT maturity we are passionate about helping them apply DevOps processes in their pursuit of continuous improvement. In doing so, we naturally find ourselves moving from simple application-driven problems to layers deeper in the technology stack, where DevOps automation can play a significant role in growing efficiency and productivity. Today we’d like to share the story of how we deployed AWS Step Functions to help drive DevOps automation in pursuit of continuous improvement for a Flux7 customer.
We recently worked with a Fortune 500 manufacturer of heavy equipment that is focused on quality, productivity, and effectively connecting its customers with data-driven insights via technology. As an international, publicly traded organization, it is also careful about managing security, risk, and compliance. So, when this manufacturer asked if we could set up an audit and notification system, we were happy to roll up our sleeves and begin work. (You can read the full case study of this customer here.)
This month’s re:Invent in Las Vegas drew over 32,000 attendees, and the show did not disappoint as AWS continued its tradition of unveiling a number of new features and products. With AWS news peppered throughout two days of lengthy keynote sessions, we asked Ali Hussain, Flux7 co-founder and CTO, to weigh in on what caught his attention and where he thinks the impact will be greatest for enterprise organizations like those Flux7 serves.
Flux7 engineer Ahsan Ali and CTO Ali Hussain collaborated on this post
The rise of IoT has created a new generation of needs in the world of big data processing. We now need to handle data ingress from many sensors around the world and make real-time decisions to be executed by those devices. It is no surprise, then, that we are seeing new services built to process streaming data, such as Amazon Kinesis.
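To make this concrete, here is a minimal sketch of the kind of per-record decision logic a stream consumer (for example, one reading records off an Amazon Kinesis shard) might run. The class name, sensor IDs, and threshold are hypothetical illustrations, not from any Flux7 project; a real consumer would receive these values from the stream rather than direct method calls.

```python
from collections import deque

class SensorMonitor:
    """Keeps a sliding window of readings per sensor and flags
    out-of-range behavior, the way a stream consumer might."""

    def __init__(self, window=5, threshold=100.0):
        self.window = window
        self.threshold = threshold
        self.readings = {}  # sensor_id -> deque of recent values

    def ingest(self, sensor_id, value):
        """Process one record; return True if an alert should fire."""
        buf = self.readings.setdefault(sensor_id, deque(maxlen=self.window))
        buf.append(value)
        avg = sum(buf) / len(buf)
        # Real-time decision: alert when the moving average
        # exceeds the configured threshold.
        return avg > self.threshold

m = SensorMonitor(window=3, threshold=50.0)
m.ingest("sensor-1", 40.0)          # moving average 40.0 -> no alert
m.ingest("sensor-1", 55.0)          # moving average 47.5 -> no alert
print(m.ingest("sensor-1", 70.0))   # moving average 55.0 -> True
```

In a real deployment the `ingest` call would sit inside a loop over records fetched from the stream, and the boolean decision would trigger a notification or a command back to the device.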
Last Saturday, I was talking with one of our Attune customers about their needs, which led to a very useful conversation about improvements we can make to our Attune package. The customer described some features that would be useful to them, and I decided we should implement those features in Fluxboard™, our visual dashboard that helps organizations own their IT. But it got me thinking about papercuts, and how DevOps can be thought of as the business practice of paying attention to the papercuts.
Last week, Rackspace announced its intention to remove its unmanaged cloud service offerings. You can read about it here: Where does IaaS fit into the managed Cloud.
This past weekend, we solved two problems for two customers. They both had working configuration management solutions. One used Puppet; the other used Chef. One was Red Hat-based; the other was Debian-based. But, both of them had the same problem.
Recently, I was helping a client set up log management. While talking to the internal team, I found myself frequently arguing that you have to make the setup easy, or it will not get done. As I repeated this statement, I realized I was cutting right to what is needed to create a working DevOps solution.
Recently, I read this article in TIME Magazine about the launch and resurrection of healthcare.gov. Reading the entire article, the only thing I could think was that the key problem is summed up in one word: DevOps.
As we expected, the AWS Summit this week was an excellent experience. We talked to a lot of interesting people, both new contacts and old friends. We gathered several customer leads, shared technology best practices, talked about business development strategies, and explored several partnership opportunities. As promised, I am now sharing my experience in San Francisco, so please read on.
After my post last week about AWS, I thought I’d share the sessions at the upcoming AWS Summit in San Francisco that have us excited. My picks are heavily influenced by my own interests, which come from my role within Flux7 and the technology development I work on both internally at Flux7 and for our professional services clients.
It seems like only yesterday that Aater and I were in Las Vegas immersed in the Amazon Web Services re:Invent experience. AWS re:Invent was an incredibly well run conference featuring pertinent and informative sessions. Participants comprised a variety of cloud users, from decision makers to the feet-on-the-ground folks, from those just dipping their toes to those managing massive clusters. It was a rich learning experience and a wonderful networking opportunity. Flux7 Labs received an honorable mention at the Spotathon for VyScale, our spot optimization solution, and we’ve made several sales since then. And now it’s already time for the AWS Summit in San Francisco.
Amazon has changed the face of the world of startups with its cloud services. Now it’s possible for two men in a garage to set up large computer clusters for zero capital cost.
Last week we explored how business goals should inform every good DevOps strategy. This week we’ll discuss how to use those goals to validate your DevOps architecture. From our experience at Flux7, the best way to do this is to define the workflows of key users.
An organization moving to the cloud truly understands the cloud’s benefits only when it sets up good DevOps methodologies and cloud automation to meet its needs. The process is replete with tool choices at every stage, and the overall goal is to understand and meet the organization’s needs.
While advising a client with a strong interest in ARM servers, we decided to evaluate the computational overhead of various big data technologies, which led to some interesting discoveries. Since we in the field are all trying to figure out how big data technology will evolve, Flux7 Labs thought we’d share some of what we’ve learned.
I’ve been doing a lot of analysis of latency and throughput recently as a part of benchmarking work on databases. I thought I’d share some insights on how the two are related. For an overview of what these terms mean, check out Aater’s post describing the differences between them here.
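The core relationship between the two metrics is captured by Little’s Law: mean concurrency equals throughput multiplied by mean latency. As a quick illustration (the function name and numbers are mine, not from the benchmarking work described above):

```python
def throughput(concurrency, latency_s):
    """Little's Law: mean concurrency = throughput * mean latency,
    so throughput = concurrency / latency."""
    return concurrency / latency_s

# e.g. 32 requests in flight at a 4 ms mean latency
# works out to roughly 8,000 requests per second
print(throughput(32, 0.004))
```

One practical consequence: if latency is fixed by the workload, the only way to raise throughput is to raise concurrency, which is why database benchmarks sweep the number of concurrent clients.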
On January 11, Aater and I attended Data Day Texas 2014 here in Austin. Sponsored by Geek Austin, it was such a great event that I thought I’d share some highlights. Data Day Texas holds special significance for Flux7 Labs because it was at Data Day 2013 that we made our first presentation, when Aater gave a talk on the role of microservers in big data, which you can find here.