Using Docker Containers for Complex IoT Application - Flux7 Blog

Recently at Flux7 Labs we developed an end-to-end Internet of Things project that received sensor data to provide reports to service-provider end users. Our client asked us to support multiple service providers for his new business venture. We knew that rearchitecting the application to incorporate major changes would prove to be both time-consuming and expensive for our client. It also would have required a far more complicated, rigid and difficult-to-maintain codebase.

We had been exploring the potential of using the container technology Docker to set up Flux7 Labs’ internal development environments and, based on our findings, believed we could use it to avoid a major application rewrite. So we decided to use Docker containers to provide quick, easy, and inexpensive multi-tenancy by creating isolated environments that run a separate instance of the app tier for each provider.

What is Docker?

Docker provides a user-friendly layer on top of Linux Containers (LXC). LXC provides operating-system-level virtualization by isolating and limiting the resources available to a group of processes. Much as the chroot command restricts the directories a given process can see, Docker effectively isolates a group of processes, together with its filesystem, from the rest of the system’s files and processes, without the expense of running another operating system.
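
For instance, starting a shell in a stock image takes a single command, and that shell sees only the container’s own filesystem and processes (the ubuntu image here is purely illustrative):

    # Launch an interactive, isolated shell; processes and files on the
    # host remain invisible from inside the container.
    docker run -i -t ubuntu /bin/bash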

In the Beginning

The “single provider” version of our app had three components:

  1. Cassandra for data persistence, which we later use for generating each gateway’s report.

  2. A Twisted TCP server listening at PORT 6000 for data ingestion from a provider’s multiple gateways.

  3. A Flask app at PORT 80 serving as the admin panel for setting customizations and for viewing reports.

In the past, we’d used the following to launch the single-provider version of the application:
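
(A sketch of the launch sequence; the script names ingest_server.py and admin_app.py are illustrative placeholders, not our actual file names.)

    # Start Cassandra, then the two application processes.
    sudo service cassandra start
    # Twisted TCP server for gateway data ingestion, listening at PORT 6000
    python ingest_server.py &
    # Flask admin panel, serving at PORT 80
    sudo python admin_app.py &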

The Cassandra KEYSPACE was hard-coded inside both code bases.


Our New Approach

While Docker is an intriguing emerging container technology, it’s still in the early stages of development and, as might be expected, has issues that remain to be resolved. The biggest for us was that, at this point, Docker can’t support multiple Cassandra instances running on a single machine, so we couldn’t give each provider its own Cassandra container to achieve multi-tenancy. Hosting multiple database instances on a single machine can also quickly cause resource shortages.

We addressed this in a fairly traditional way for making an application multi-tenant: we used a Cassandra KEYSPACE as the namespace for each provider in the data store. We made corresponding code changes to both the data ingestion and web servers by adding a keyspace parameter to the DB accesses, and we passed the Cassandra KEYSPACE (the provider ID) to each app instance on the command line, which also makes it possible to add custom skins and other per-provider features in the future. Thus, we were able to create a separate namespace for each provider in the data store without making changes to the column family schema.

The beauty of our approach was that, by using Docker to provide multi-tenancy, the only code changes needed to make the app multi-tenant were those described above. Had we not used Docker in this way, we’d have had to make major code changes bordering on a total application rewrite.
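
As a rough sketch of the change (the column family name sensor_readings is illustrative, not our actual schema), each server now receives the keyspace on the command line and opens its pycassa connection pool against it:

    import sys
    import pycassa

    # The provider ID doubles as the Cassandra KEYSPACE name and is the
    # only per-tenant configuration each process needs.
    keyspace = sys.argv[1]    # e.g. "provider1"

    # All reads and writes are scoped to this provider's keyspace;
    # the column family schema itself is shared across providers.
    pool = pycassa.ConnectionPool(keyspace, server_list=['127.0.0.1:9160'])
    readings = pycassa.ColumnFamily(pool, 'sensor_readings')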

How We Did It - Multi Tenant Database Architecture


First we created a Docker container for the new software version by correctly setting up all of the environments and dependencies. Next we started a Cassandra container. Even though we weren’t running multiple instances of Cassandra, we wanted to make use of Docker’s security, administration and easy-configuration features. You can download our Cassandra Dockerfile from our GitHub here. We started a locally running container serving at PORT 9160 by using this command:
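
(A sketch, assuming the image built from that Dockerfile is tagged flux7/cassandra; the tag is illustrative.)

    # Run Cassandra detached and publish its Thrift port so clients
    # such as pycassa can reach it at localhost:9160.
    docker run -d -p 9160:9160 flux7/cassandra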

We then created a keyspace “provider1” using pycassaShell.
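
(Inside pycassaShell this is a one-liner against the built-in SYSTEM_MANAGER; the single-node replication settings below are an assumption.)

    # Create the per-provider keyspace; column families added to it
    # are namespaced under "provider1".
    SYSTEM_MANAGER.create_keyspace('provider1',
                                   strategy_options={'replication_factor': '1'})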

We fired up our two code bases on two separate containers like this:
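
(Again a sketch rather than the exact invocation; the image names flux7/ingest and flux7/admin are illustrative. Each container is told which keyspace, and therefore which provider, it serves.)

    # Data ingestion server for provider1, published on host PORT 6000
    docker run -d -p 6000:6000 flux7/ingest python ingest_server.py provider1
    # Admin panel for provider1, published on host PORT 80
    docker run -d -p 80:80 flux7/admin python admin_app.py provider1
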
Voila! We had a provider1 instance running in no time.

Automation

We found docker-py, the Python client for Docker, extremely useful for automating all of these processes and used it as follows:
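
(A condensed sketch rather than our production code; error handling is omitted, and the image and script names carry over from the earlier examples, so they remain illustrative.)

    import docker

    # docker-py talks to the local Docker daemon over its Unix socket.
    client = docker.Client(base_url='unix://var/run/docker.sock')

    def launch(image, command, container_port, host_port):
        # Create the container, then start it with the port published on the host.
        container = client.create_container(image, command, ports=[container_port])
        client.start(container, port_bindings={container_port: host_port})
        return container

    # Bring up provider1's two containers, mirroring the manual commands above.
    launch('flux7/ingest', 'python ingest_server.py provider1', 6000, 6000)
    launch('flux7/admin', 'python admin_app.py provider1', 80, 80)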

To complete the solution, we added a small piece of logic to allocate ports for newly added providers and to create a Cassandra keyspace for each one.
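
(A sketch of that logic, reusing the hypothetical launch() helper above; the port-allocation scheme shown is an assumption.)

    from pycassa.system_manager import SystemManager

    next_port = {'ingest': 6001, 'admin': 8081}    # hypothetical starting ports

    def add_provider(provider_id):
        # Give the new provider its own namespace in the shared Cassandra.
        SystemManager('127.0.0.1:9160').create_keyspace(
            provider_id, strategy_options={'replication_factor': '1'})

        # Allocate host ports and start the provider's two app containers.
        launch('flux7/ingest', 'python ingest_server.py ' + provider_id,
               6000, next_port['ingest'])
        launch('flux7/admin', 'python admin_app.py ' + provider_id,
               80, next_port['admin'])
        next_port['ingest'] += 1
        next_port['admin'] += 1

    add_provider('provider2')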

Conclusion

In the end, we quickly brought up a multi-tenant solution for our client, the key idea being to run each provider’s app in its own contained space. We couldn’t use virtual machines to provide that functionality, because a VM requires too many resources and too much dedicated memory. In fact, Google is now switching away from using VMs and has become one of the largest contributors to Linux containers, the technology that forms the basis of Docker. We could have used multiple server instances, but then we’d have significantly over-allocated resources. Changing the app itself also would have added unnecessary complexity, expense and implementation time.

At the project’s conclusion, our client was extremely pleased that we’d developed a solution that met his exact requirements, while also saving him money. And we were pleased that we’d created a solution that can be applied to future customers’ needs.

Read more about Docker container technology.

 


January 27, 2014 / Docker, IoT, Data, Energy

About the Author

Anubhav
