This week a new DevOps initiative was launched to foster the next generation of continuous delivery collaboration. Designed as a new foundation for the diverse continuous integration and delivery (CI/CD) space, the Continuous Delivery Foundation (CDF) will serve as a vendor-neutral home for open source continuous delivery projects. The Linux Foundation reports that the CDF aspires to foster collaboration among the industry’s top developers, end users, and vendors to evangelize CI/CD and DevOps methodologies, define and document best practices, provide guidelines, and create training materials that enable any software development team around the world to implement CI/CD best practices.
The first projects the CDF will host include Jenkins, Jenkins X, Spinnaker, and Tekton. Additional projects are expected to join the CDF through its soon-to-be-formed Technical Oversight Committee (TOC), with a focus on bringing together the CD ecosystem to build specifications and projects around portability and interoperability.
- Last week F5 acquired NGINX in a bid to bridge NetOps and DevOps. The company reports that the strategic acquisition will enable multi-cloud application services across all environments, providing the ease of use and flexibility developers require while also delivering the scale, security, reliability, and enterprise readiness that network operations teams demand.
- Chef announced it has reached three significant security milestones, helping its government and enterprise customers achieve and maintain the secure infrastructure needed to accelerate their cloud strategies. According to the announcements, the three achievements include Security Technical Implementation Guide (STIG) profiles for RHEL 7 and Windows Server 2016 in Chef InSpec, along with FIPS 140-2 compliance and Center for Internet Security (CIS) certification for AWS Foundations Benchmarks Level 1 and 2 in Chef Automate. Chef’s ability to automate configurations, ensure compliance, and remediate vulnerabilities is critical to automating infrastructure, particularly in highly regulated industries.
- Submariner, a new open-source project enabling network connectivity between Kubernetes clusters, made its debut last week. According to the press release, the solution provides network connectivity for microservices deployed in multiple Kubernetes clusters that need to communicate with each other, overcoming barriers between clusters and allowing for a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying a service mesh across clusters.
- AWS has introduced Open Distro for Elasticsearch, a distributed, document-oriented search and analytics engine. Open Distro for Elasticsearch supports structured and unstructured queries and does not require a schema to be defined ahead of time. According to AWS, Elasticsearch serves as a search engine and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics.
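To illustrate the point about structured and unstructured queries, here is a minimal sketch of an Elasticsearch-style query body built in Python. The index, field names, and sample document are hypothetical; the query combines a free-text match (unstructured) with exact-value and range filters (structured), and documents like the sample can be indexed without defining a schema first.

```python
import json

# Hypothetical log document; Elasticsearch can index this without a
# predefined schema (fields are mapped dynamically on first use).
doc = {
    "service": "checkout",
    "status": 500,
    "message": "upstream timeout contacting payments",
}

# Query body combining unstructured and structured predicates.
query = {
    "query": {
        "bool": {
            "must": [
                # Unstructured: full-text match over the message field.
                {"match": {"message": "timeout"}}
            ],
            "filter": [
                # Structured: exact-value and range predicates.
                {"term": {"service": "checkout"}},
                {"range": {"status": {"gte": 500}}},
            ],
        }
    }
}

# In practice this body would be POSTed to the cluster's _search
# endpoint, e.g. POST http://localhost:9200/logs/_search
print(json.dumps(query, indent=2))
```

The `bool` query is the standard way to mix both styles: `must` clauses contribute to relevance scoring, while `filter` clauses are exact, unscored, and cacheable.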
- Applications in an Amazon VPC can now securely access AWS PrivateLink endpoints across VPC peering connections. AWS PrivateLink allows you to privately access services hosted on the AWS network in a highly available and scalable manner, without using public IPs and without requiring traffic to traverse the internet. With AWS PrivateLink's new support for VPC peering, operators can privately connect to a service even when that service's endpoint resides in a different Amazon VPC connected via VPC peering.
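As a rough sketch of what provisioning such an endpoint looks like, the snippet below assembles the parameters for an interface VPC endpoint as they would be passed to boto3's `create_vpc_endpoint`. All IDs and the service name are hypothetical placeholders; the new peering support means applications in a VPC peered with the one below can now reach the endpoint's private IPs as well.

```python
# Parameters for creating an interface VPC endpoint (PrivateLink).
# Every ID and the service name here is a placeholder, not a real
# resource; substitute values from your own account.
endpoint_params = {
    "VpcEndpointType": "Interface",
    # VPC that will host the endpoint's network interfaces (placeholder).
    "VpcId": "vpc-0a1b2c3d",
    # PrivateLink service to connect to (placeholder).
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0example",
    # Subnets and security groups for the endpoint ENIs (placeholders).
    "SubnetIds": ["subnet-0a1b2c3d"],
    "SecurityGroupIds": ["sg-0a1b2c3d"],
    "PrivateDnsEnabled": False,
}

# The actual provisioning call would be:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.create_vpc_endpoint(**endpoint_params)
#
# With PrivateLink's VPC peering support, clients in a VPC peered
# with vpc-0a1b2c3d can also resolve and reach this endpoint
# privately, without the traffic leaving the AWS network.
print(endpoint_params["VpcEndpointType"])
```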
- AWS CEO Andrew Jassy spoke at the CERAWeek conference, hosted by IHS Markit, last week on the topic of how the cloud can help companies in the energy sector manage data costs. Jassy was joined by the company’s VP of Engineering, Bill Vass, who commented that, “…things go up and down so the ability to scale massively when you need to, and scale down significantly when you don’t, fits very nicely with the cycles you see in the oil and gas industry.” In fact, a small oil company that once took 100 days to conduct a reservoir simulation is “now … doing it in four days, to simulate all the different possibilities of how they would get more yield and make more money.” Our DevOps consulting team enjoyed CNBC’s full summary here.
Read our latest blog series on the importance of DevOps training and knowledge transfer to successfully achieve enterprise agility and DevOps at scale:
- DevOps Training, Knowledge Transfer Key to Successful Agile Enterprises
- DevOps Training: Flux7 Knowledge Transfer Solutions