Metrics, monitoring and a generous helping of real-world experience set the stage for an intensely informative evening at the latest #AHODevOps.
For those who were unable to join us, and those whose memory needs refreshing, a quick summary of what we covered during the evening lies below – although we can’t possibly hope to capture the energy or passion of the night’s two presenters in a blog post. To get a feel for that, you’ll need to attend a session in person!
Now on to our evening…
Data-driven Analytics for DevOps
Andi began the evening’s session with a thought-provoking discussion on how decision-making could be aided, or even automated, based on data collected from across the software lifecycle, with the end goal of improving velocity, quality and impact. However, the DevOps ecosystem is much more complicated than the sum of its parts, especially once you consider business impact.
Elaborating on this, Andi introduced the idea of using data as the single dependable constant across an entire business. He argued that if decisions were rationally based on good data that everyone can access, then everyone should be able to understand the reasons behind each business decision and why it was made.
This approach inevitably leads to the following questions: what data is good data, and which metrics matter? It turns out that stakeholders could share more information than initially expected, as their concerns overlap.
Finally, the focus turned to discussing three examples of how to put this process into practice:
- Making data available to internal stakeholders to analyse velocity
- Automatically analysing code quality to provide blameless feedback
- Visualising business impact to give everyone a better idea of what's working and what isn't
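To make the second example a little more concrete, here is a minimal sketch of what "blameless" quality feedback might look like: findings are aggregated per module rather than per author, so the data points at code hotspots instead of people. All of the names and sample data here are illustrative assumptions, not anything shown in the talk.

```python
from collections import Counter

# Hypothetical findings as (module, rule) pairs, e.g. exported from a linter
# run in CI. Note that no author information is recorded at all.
findings = [
    ("checkout", "missing-docstring"),
    ("checkout", "too-many-branches"),
    ("payments", "missing-docstring"),
    ("checkout", "missing-docstring"),
]

def summarise_by_module(findings):
    """Aggregate findings per module, deliberately omitting author
    information so the feedback stays blameless."""
    return dict(Counter(module for module, _rule in findings))

print(summarise_by_module(findings))  # {'checkout': 3, 'payments': 1}
```

Keeping authorship out of the aggregation is the design choice that matters: the team sees *where* quality is slipping without the metric becoming a leaderboard.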
Andi's slides (including an impressive list of references) can be found here: Data-Driven Decisions for Better DevOps Outcomes
What we learned about Monitoring Production Container workloads in an Enterprise
During his session Chris shared some of his experiences in monitoring a custom, Kubernetes-based platform for digital education.
He began by describing his vision of "Development on rails" – the process of standardising and automating as much as possible, even offering self-service provisioning of databases and other services, so that developers can concentrate on actual development.
We then moved on to the idea of "Monitoring-driven development" – easily the quote of the day (after "the Spider-Man principle": with great power...). Specifically, this is the idea that you should use APM data to inform developers about what needs to change to get their apps containerised.
Throughout both topics, Chris’s passion and belief in proactive, User Experience-centric monitoring came through strongly. This can be a hard thing for enterprises with an established monitoring set-up to come around to, despite it being more important than ever since the advent of containerisation.
The key takeaway here is that monitoring and alerting should be geared towards what affects customers (internal or external!) and NOT simply on error rates or basic infrastructure metrics.
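As a rough illustration of that takeaway, the sketch below alerts on the fraction of *user-facing* requests that breach a latency target, rather than on raw error counts or host metrics. The threshold, target and data shape are assumptions for the example, not anything prescribed in the session.

```python
# Illustrative SLO values – not from the talk.
SLO_LATENCY_MS = 500
SLO_TARGET = 0.99  # 99% of user-facing requests should finish under 500 ms

def should_alert(requests):
    """requests: list of (latency_ms, is_user_facing) samples.

    Only user-facing traffic counts towards the alert decision; background
    or internal requests are filtered out entirely.
    """
    user_facing = [lat for lat, facing in requests if facing]
    if not user_facing:
        return False  # no customer impact possible, so nothing to page on
    within_slo = sum(1 for lat in user_facing if lat <= SLO_LATENCY_MS)
    return within_slo / len(user_facing) < SLO_TARGET
```

For example, a batch job timing out would never fire this alert, while a slowdown on the checkout page would – which is exactly the customer-centric bias the takeaway argues for.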
You can also find Pearson’s GitHub page for their custom Kubernetes/Jenkins Pipeline tool here: Custom Kubernetes/Jenkins Pipeline tool
As you can see, the evening covered some really insightful and engaging topics, so we’d encourage you to book your place at our next Meetup before seats fill up!
Thanks to everyone who came along. We always enjoy sharing our knowledge, pizza and beer! We hope everyone had a great time and learned something new.
As always, we'd love to hear any ideas and suggestions you might have for our next event.