Easy Agile Podcast Ep.12 Observations on Observability
On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.
Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.
💥 What is observability?
💥 How can you improve observability?
💥 What's the end goal?

"This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""
Be sure to subscribe, enjoy the episode 🎧
Transcript
Jared Kells:
Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.
Jared Kells:
So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting that we've got a couple of primarily front end engineers and Angad and I are going to share some front end technical stuff and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. So I'll pass it over to Jess.
Jess Belliveau:
Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?
Jordan Simonovski:
I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.
Angad Sethi:
Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.
Jared Kells:
Nothing fancy!
Jess Belliveau:
Don't sell yourself short.
Jared Kells:
Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?
Jess Belliveau:
Yeah, yeah. That's it, we'll wrap up!
Jared Kells:
Podcast over! Tell us about observability.
Jess Belliveau:
Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this and sort of for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience that maybe people don't know a lot about. And there's a lot of movements in the broad IT scope, I guess, that you could talk about. There's so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about and it's something that you can give an introduction to without getting too technical. We don't want to get too deep into the weeds, this is something you can go really deep on, so we picked it as something that hopefully we can explain to you both at a level that might interest the people at home listening as well.
Jess Belliveau:
Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.
Jordan Simonovski:
Okay!
Jess Belliveau:
So we thought we'd try and describe to you, first of all, what is observability. Because that's a pretty, the term doesn't give you much of what it is. It gives you a little hint, but it'll be good to base line set what are we talking about when we say what is observability. And then why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry hot buzz words and commit to stuff that you might not need, or that sort of stuff.
Jared Kells:
Yep.
Jordan Simonovski:
Yep.
Jess Belliveau:
We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then we just thought because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading, if people want to do that, and go from there. So yeah, Jordan you want to-
Jared Kells:
Sounds good.
Jess Belliveau:
Yeah. I hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, that's something we should, if there's stuff that you think we don't cover or that you want to know more, ask away.
Jordan Simonovski:
I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop or close a feedback loop on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to maybe differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose and they're not in any way obsolete either. Things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzzwords.
Jordan Simonovski:
But even for things like a traditional kind of monolith. Observability really serves to help us ask new questions from our systems. So the way it tends to get explained is monitoring exists for our known unknowns. With seniority comes the ability to predict, almost, in what way your systems will fail. So you'll know. The longer you're in the industry, you know this, like a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.
Jared Kells:
I was going to say that!
Jordan Simonovski:
I'll try not to get too much into-
Jared Kells:
Runs out of memory!
Jordan Simonovski:
Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about, you see, it's an unknown unknown of our system, that we want to find out about and ask new questions from. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.
Jared Kells:
Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like if I've got a, traditionally with a Java app, I might log memory, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-
Jordan Simonovski:
Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually submitting some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of this additional context. You can throw user agents in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data. High cardinality data being data sets that can change quite largely, in terms of the kinds of data they represent, or the combinations of data sets that you could have.
Jordan Simonovski:
So if you want to ship metrics on something on a per-user basis and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that users from a particular time zone or a particular device would only be experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things. Or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
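To make the idea of a wide, high-cardinality event concrete, here's a minimal sketch in TypeScript. The event shape and every field name here are illustrative assumptions, not Atlassian's or any vendor's actual schema:

```typescript
// One "wide" event per request, carrying high-cardinality context so questions
// can be asked later that weren't anticipated when the code shipped.
interface RequestEvent {
  timestamp: string;
  service: string;
  operation: string;
  durationMs: number;
  status: number;
  userId: string;     // high-cardinality: unique per user
  userAgent: string;  // high-cardinality: many possible combinations
  region: string;
  buildVersion: string;
}

const event: RequestEvent = {
  timestamp: new Date().toISOString(),
  service: "issue-api",              // hypothetical service name
  operation: "updateIssueSummary",   // hypothetical operation name
  durationMs: 182,
  status: 200,
  userId: "user-1234",
  userAgent: "Mozilla/5.0 ...",
  region: "ap-southeast-2",
  buildVersion: "3.2.1",
};

// Ship the whole event; the backend can later slice by any field,
// e.g. "errors for this browser in this region on this build".
console.log(JSON.stringify(event));
```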
Angad Sethi:
Would it be fair to say-
Jared Kells:
Yeah. It's [crosstalk 00:11:02].
Angad Sethi:
Oh, sorry Jared.
Jared Kells:
No you can-
Angad Sethi:
Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?
Jordan Simonovski:
Yeah.
Angad Sethi:
Oh.
Jess Belliveau:
And better equip you to find them. One of the things I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of word of it being a set of principles or best practices. It's sort of giving you some guidance around things like having good logging coming out of your application. So structured logs. So you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about. So giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong, and metrics as well. So the good thing with metrics is we're turning things a bit around and trying to make an application, instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside, trying to peer in with probes and checks like that. But the idea with metrics is the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
Jess Belliveau:
Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?
Jordan Simonovski:
Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.
Jess Belliveau:
Oh, I didn't sign up for that!
Jordan Simonovski:
I think why teams would want it is because, it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. It is something that you can, observability capabilities themselves, you won't achieve it just by buying a thing, like you can't buy dev ops, you can't buy Agile, you can't buy observability either.
Jared Kells:
Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-
Jess Belliveau:
Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.
Jared Kells:
Yeah, sorry, sorry, yeah! Go on.
Jordan Simonovski:
You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time, and to build, I guess, good operational hygiene and good production excellence, I guess as Liz Fong-Jones would put it, is you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CICD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PM's now if you want to, with feature flags, which is great.
Jordan Simonovski:
But what you can also do now is completely close this loop, and as you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for Beta releases or something of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLOs in place as well. And SLOs-
Jared Kells:
What's an SLO?
Jordan Simonovski:
They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. This is how, I guess the 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser or this particular, or where we've deployed this app to," let's say if you have a global deployment of some kind, you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them." And you can roll it back before you impact 100% of your users.
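As a rough illustration of the canary idea Jordan describes, deploying to a slice of users and watching how they're affected before a full release, here is a hedged sketch of stable percentage bucketing. It isn't any particular feature-flag vendor's SDK, just the shape of the idea:

```typescript
// Minimal sketch: hash a user id into a stable bucket (0-99) so the same user
// always lands in the same cohort between deployments.
function isInCanary(userId: string, rolloutPercent: number): boolean {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple, stable hash
  }
  return hash % 100 < rolloutPercent;
}

// Example: roll the new code path out to 10% of users and tag telemetry by
// cohort so error rates can be compared against the control group.
const userId = "user-1234"; // hypothetical user id
if (isInCanary(userId, 10)) {
  console.log("serve new code path, cohort=canary");
} else {
  console.log("serve existing code path, cohort=control");
}
```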
Jared Kells:
Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-
Jess Belliveau:
Yeah, that's a really great example, right?
Jared Kells:
That's what I really care about.
Jess Belliveau:
The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know, whereas if you're getting that deeper level info of the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.
Angad Sethi:
How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?
Jordan Simonovski:
Yeah. I think that would come with good instrumenting-
Angad Sethi:
Good question?
Jordan Simonovski:
Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing, or writing tests for your code, as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think as a lot of Agile and dev ops principles are telling us now is that we do want our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measuring your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
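A small sketch of what "measure the thing you actually care about" might look like for the drag-an-issue example. None of this is Easy Agile's real code; `sendEvent`, `persistDrag` and the event names are stand-ins for whatever telemetry pipeline is in use:

```typescript
// Stand-in for the real telemetry pipeline: emit a structured event.
function sendEvent(name: string, fields: Record<string, unknown>): void {
  console.log(JSON.stringify({ name, at: new Date().toISOString(), ...fields }));
}

// Placeholder for the real persistence call behind the drag interaction.
async function persistDrag(issueKey: string, toColumn: string): Promise<void> {
  return Promise.resolve();
}

// Instrument the user-facing interaction itself, recording success or failure
// and how long it took, so "can a user drag an issue?" becomes a measurable signal.
async function dragIssue(issueKey: string, toColumn: string): Promise<void> {
  const start = Date.now();
  try {
    await persistDrag(issueKey, toColumn);
    sendEvent("issue.drag", {
      issueKey,
      toColumn,
      outcome: "success",
      durationMs: Date.now() - start,
    });
  } catch (err) {
    sendEvent("issue.drag", {
      issueKey,
      toColumn,
      outcome: "failure",
      error: String(err),
      durationMs: Date.now() - start,
    });
    throw err; // still surface the error to the caller
  }
}

dragIssue("EA-101", "In Progress");
```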
Jared Kells:
I think that we have to build-
Jordan Simonovski:
Yeah?
Jared Kells:
Oh, sorry Jordan.
Jordan Simonovski:
No, you go.
Jared Kells:
I was just going to say we have to build our apps with integration testing in mind already. So doing browser based tests around new features. So it would be about building features with that and the same thing in mind but for testing and production.
Jess Belliveau:
Yeah and the actual how, the actual writing code part, there's this really great project, the open telemetry project, which provides all these sorts of APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" Or, "How do I emit metrics?" They provide all these helpful libraries and includes that you can have, because the last thing you want to do is have to roll this custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
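For the vendor-agnostic point, here's a minimal sketch using the OpenTelemetry JavaScript API. The tracer name, span name, attribute and function are hypothetical; the exporter (Datadog, SignalFx or anything else) is configured separately in the SDK setup, so code like this shouldn't need to change when the backend vendor does:

```typescript
import { trace } from "@opentelemetry/api";

// Hypothetical tracer name; without an SDK registered this is a no-op,
// with one registered the spans go to whichever exporter is configured.
const tracer = trace.getTracer("easy-agile-app");

async function saveRoadmap(roadmapId: string): Promise<void> {
  await tracer.startActiveSpan("saveRoadmap", async (span) => {
    try {
      // Attach context to the span so traces carry useful detail.
      span.setAttribute("roadmap.id", roadmapId);
      // ... persist the roadmap here ...
    } finally {
      span.end(); // always close the span, even on error
    }
  });
}

saveRoadmap("rm-42");
```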
Jess Belliveau:
And then the other thing that comes to mind with something like open telemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. And you would push your metrics and your logs and your traces to them and they give you a UI to display. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start Datadog and then in six months time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.
Jordan Simonovski:
I think NorthX.
Jess Belliveau:
Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
Jordan Simonovski:
Your data structures remain the same.
Jess Belliveau:
Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.
Jared Kells:
Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.
Jess Belliveau:
Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.
Jordan Simonovski:
Observability suggests dashboards, right?
Jess Belliveau:
Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. And for me, production excellence, or at Apptio we call it production readiness, operational readiness and that sort of stuff. It's like, we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a real great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, we've all probably been there, we've deployed code to production and we have no observability, we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix, and it's hard to learn from that, or you have to learn from that I guess, and protect the user against future stuff, yeah.
Jess Belliveau:
When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which is really this idea that your application is logging, first of all. Quite important as a baseline starting point, but then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line, with a timestamp, an error message. Whatever the developer decided to write to the standard out, or to the error file or something like that. Now I think there's a general move to having JSON, an actual formatted blob with that known structure so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with open telemetry and libraries and stuff. Requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
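A quick before-and-after sketch of the structured logging idea: instead of a free-form text line, emit a JSON blob with a known structure that can be parsed and queried programmatically. The field names here are illustrative:

```typescript
// Minimal structured logger: same known shape every time, extra context as fields.
function logEvent(
  level: "info" | "warn" | "error",
  message: string,
  fields: Record<string, unknown> = {}
): void {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      message,
      ...fields,
    })
  );
}

// Instead of: console.log("user 1234 failed to save roadmap: timeout")
logEvent("error", "roadmap save failed", {
  userId: "user-1234",
  roadmapId: "rm-42",
  reason: "timeout",
  durationMs: 30000,
});
```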
Jordan Simonovski:
I think at the start, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code might get more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing or surfacing the things you know might break, those would probably be your quickest wins.
Jess Belliveau:
Well, let's mention some things for further reading. If you want to go get the whole picture, observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that, there's some great chapters on that. O'Reilly has an observability book, I think, just dedicated to observability now.
Jordan Simonovski:
I think that's still in early release, if people want to google chapters.
Jess Belliveau:
The open telemetry stuff, we'll drop a link to that I think that's really handy to know.
Angad Sethi:
From [inaudible 00:26:12], which is my perspective as a developer, say I wanted to introduce observability, or use Datadog at Easy Agile. I'm not very familiar, I'm not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability? Sorry, at my direct job or at my workplace.
Jordan Simonovski:
I would lean, I could be biased here. Jess correct me or give your opinion on this, I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-
Jess Belliveau:
What does SLO stand for, Jordan?
Jordan Simonovski:
Okay, sorry. Buzzwords! SLO is a service level objective, not to be confused with service level agreement. An agreement itself is contractual and you can pay people money if you do breach those. An SLO is something you set in your team and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary, it's not unreliable or reliable. Most of the time, it's mostly reliable and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.
Jordan Simonovski:
Datadog, I think, the last time I used it, had an SLO offering. But I think like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability. You'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user facing things are we implementing that we can start to measure?"
Jordan Simonovski:
Other things you could probably look at are, I think they're all covered in the book anyway, data freshness in a way. You want to make sure the data users are being displayed is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer like a ping check, that you're just saying, "Yes, HTTP, okay. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea, at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
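As a worked example of the SLO and error budget arithmetic (the numbers are illustrative, not anyone's actual targets): a 99.9% target over a 30-day window leaves roughly 43 minutes of "bad" time, and the same target can be expressed over events instead of time:

```typescript
// Time-based view: how much downtime does a 99.9% SLO allow in 30 days?
const sloTarget = 0.999;             // "users can drag an issue" succeeds 99.9% of the time
const windowMinutes = 30 * 24 * 60;  // 30-day rolling window
const errorBudgetMinutes = (1 - sloTarget) * windowMinutes;
console.log(errorBudgetMinutes);     // 43.2 minutes of error budget

// Event-based view: same target expressed as a ratio of good events.
const totalRequests = 1_000_000;
const failedRequests = 700;
const availability = 1 - failedRequests / totalRequests; // 0.9993
console.log(availability >= sloTarget ? "within SLO" : "SLO breached");
```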
Jared Kells:
We're going to talk a little bit about state and how with some of these, like our very front end heavy applications that we're building, so the applications we build just basically run inside the browser and the traditional state as you would think about it, is just pulling a very simple API that writes some things into the database with some authentication, and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny API's just never have problems, because it's just so simple. And well, they've got plenty of monitoring around it. But all our state is actually, when you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?
Jess Belliveau:
A big thing is really, there's not one thing that fits all as well. When we talk about the SLO stuff as well, it's understanding what is important to not so much maybe your company but your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge, as well, of front end stuff, but when we say we want to observe the state as well, we don't necessarily mean specifically just the state. You could want to understand with each one of those APIs when it's firing, what the request response time is for that API firing. So that might be an important metric. So you can start to see if one of those APIs is introducing latency, and so your user experience is degraded. Like, "Hey when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release and since then, now we're seeing it's now in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team then can look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
Jared Kells:
That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front end application is not running in the same order as it used to, or there's some complex interaction we didn't think of, so it's requesting more data than expected. The APIs are returning. They're never slow, for the most part, but we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. That makes sense? I want to know how long did it take for this particular interaction to happen.
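A sketch of what browser-level interaction timing could look like using the standard User Timing API (`performance.mark` / `performance.measure`); the interaction name and version field are made up for illustration:

```typescript
// Time the interaction itself, since in a front-end-heavy app the slow part is
// often state work and re-rendering rather than the API call behind it.
performance.mark("issue-drag-start");

// ... the interaction runs here: state update, re-render, persistence ...

performance.mark("issue-drag-end");
performance.measure("issue-drag", "issue-drag-start", "issue-drag-end");

const entry = performance.getEntriesByName("issue-drag", "measure").pop();

// Report the duration alongside context so regressions between releases can be
// spotted per interaction, not just per endpoint.
console.log(
  JSON.stringify({
    interaction: "issue-drag",
    durationMs: Math.round(entry?.duration ?? 0),
    appVersion: "3.2.1", // hypothetical release identifier
  })
);
```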
Jess Belliveau:
Yeah. I've never done that sort of side of things. As well, the other thing I guess, you could potentially be impacted in as well as then, you're dealing with end user manifestations as well. You could perceive-
Jared Kells:
Yeah sure.
Jess Belliveau:
... Greater performance on their laptop or something, or their ISP or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.
Jordan Simonovski:
Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript, is you'll upload a source map of your minified JS to Sentry, deploy your code and then if something does break or work in a fairly unexpected way, that tends to get surfaced, and Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-
Jared Kells:
Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
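A minimal sketch of the action-breadcrumb idea Jared describes: keep a bounded buffer of action labels (not the full state) and attach it to crash or support reports. The names and buffer size are arbitrary:

```typescript
// Bounded in-memory log of recent user actions ("breadcrumbs").
const MAX_BREADCRUMBS = 100;
const breadcrumbs: Array<{ at: string; action: string }> = [];

function recordAction(action: string): void {
  breadcrumbs.push({ at: new Date().toISOString(), action });
  if (breadcrumbs.length > MAX_BREADCRUMBS) {
    breadcrumbs.shift(); // keep only the most recent actions
  }
}

// Attach the recent actions alongside whatever state snapshot gets sent
// when the user clicks "send support information" or a crash is reported.
function buildSupportReport(): string {
  return JSON.stringify({ recentActions: breadcrumbs });
}

recordAction("UPDATE_ISSUE_SUMMARY");
recordAction("DRAG_ISSUE");
console.log(buildSupportReport());
```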
Jess Belliveau:
Yeah, that's almost like, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're more shifting that. That's sort of all stuff in the browser interacting and just having that history of this is what the user did and how they've ended up-
Jared Kells:
In that state.
Jess Belliveau:
In that state, yeah.
Jordan Simonovski:
I guess even if you don't have a lot of microservices, if you're talking about particular, like you're saying for the most part your API requests are fine but sometimes you have particularly large payloads-
Jared Kells:
We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring maybe who we're integrating with. It's actually much more likely that we'll have a performance issue on a Xero API rather than... We don't see it, the browser sees it as well, which is-
Jordan Simonovski:
Yeah, and tracing does surface all of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend. I can just tell you about Node, because I probably have the most experience writing Node stuff. You pretty much just drop in dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with, I think. Like if you're working with Express or a lot of the HTTP libraries, as well as a few AWS services, it will kind of hook itself into that. And you can actually pinpoint. It will kind of show you on this pretty cool service map exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
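For reference, the drop-in pattern Jordan mentions looks roughly like this with dd-trace in a Node/TypeScript backend; the service name and options are placeholders and exact configuration will vary by setup:

```typescript
// tracer.ts - a sketch of initialising dd-trace before the rest of the app.
import tracer from "dd-trace";

tracer.init({
  service: "roadmaps-backend", // hypothetical service name
  env: "production",
});

export default tracer;
```

This module is typically imported before anything else (before Express or HTTP handlers are loaded) so the auto-instrumentation can hook those libraries as they're required.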
Jess Belliveau:
It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third party providers as well. And something I think that's really important that sometimes gets missed is so many of us today are relying on third party providers, like AWS is a huge thing. A lot of people writing apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever, is 100% up time, always available. And they don't write their code in such a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API and they don't implement exponential back off. And so they're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience, have a retry mechanism automatically built in and that sort of stuff. It doesn't really tie into the observability thing, but it's something.
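Jess's retry point can be sketched generically: wrap the upstream call in exponential backoff with jitter so a throttled request gets retried before the user ever sees an error. This is a generic sketch, not the AWS SDK's own retry configuration:

```typescript
// Retry a flaky upstream call with exponential backoff plus jitter.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // out of retries: surface the error
      const backoff = baseDelayMs * 2 ** (attempt - 1);
      const jitter = Math.random() * backoff; // spread retries out to avoid thundering herd
      await new Promise((resolve) => setTimeout(resolve, backoff + jitter));
    }
  }
}

// Example with a fake flaky call: fails twice (e.g. simulated throttling), then succeeds.
let calls = 0;
withRetries(async () => {
  calls++;
  if (calls < 3) throw new Error("throttled");
  return "ok";
}).then((result) => console.log(result, "after", calls, "attempts"));
```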
Jared Kells:
And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.
Jess Belliveau:
Well, they're using your app. Exactly right. It reflects on you sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when it's that case. Yeah.
Jared Kells:
Well, I think we're going to have to call it, this podcast, because it's been an hour. We were instructed max 45 minutes.
Jess Belliveau:
We could just keep going. We might need a part two! Maybe we can request [cross talk 00:39:21].
Jared Kells:
Maybe! Yeah.
Jess Belliveau:
Or we'll just start our own podcast! Yeah.
Jared Kells:
So what were your biggest learnings today? Given that Angad and I are just learning about observability, Angad, what was your biggest learning today about observability?
Angad Sethi:
My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...
Jess Belliveau:
Any takeaway is a good takeaway! What about you, Jared?
Jared Kells:
I think, because we were going to talk about state management, and part of it was how we have this ability, at the moment to, the way our front ends are architected, we can capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and just see exactly how it was, just the way our state's designed. But what might be even cooler is to build maybe some observability into that front end for support. I'm thinking instead of just having, we have this button to send us out your support information that sends us a bunch of the state, but instead of console logging to the browser log, we could be console logging, logging in our front end somewhere that when they click, "send support information," our customers should be sending us the actions that they performed.
Jared Kells:
Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.
Jess Belliveau:
Yeah, for sure. It'll probably be a lot less intrusive, as well, as some of the third party stuff that I've seen around.
Jared Kells:
Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.
Jess Belliveau:
Yeah
Jared Kells:
You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.
Jess Belliveau:
Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...
Jared Kells:
All thanks!
Jess Belliveau:
Thanks for having us, yeah.
Jared Kells:
Thanks all for your time, and thanks everybody for listening.
Jordan Simonovski:
Thanks everyone.
Angad Sethi:
That was [inaudible 00:41:55].
Jess Belliveau:
Tune in next week!
Related Episodes
Easy Agile Podcast Ep.25 The Agile Manifesto with Jon Kern
"Thoroughly enjoyed my conversation with Jon, he shared some great perspectives on the impact of the Agile manifesto" - Amaar Iftikhar
Amaar Iftikhar, Product Manager at Easy Agile is joined by Jon Kern, Co-author of the Agile Manifesto for Software Development and a senior transformation consultant at Adaptavist.
Amaar and Jon took some time to speak about the Agile Manifesto. Covering everything from the early days, ideation, process, and first reactions, right through to what it means for the world of agile working today.
They touch on the ideal state of an agile team, and what the manifesto means for distributed, hybrid and co-located teams.
We hope you enjoy the episode!
Transcript
Amaar Iftikhar:
Hi everyone. Welcome to the Easy Agile Podcast. My name is Amaar Iftikhar. I'm a product manager here at Easy Agile. And before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the people of the Dharawal speaking country. We pay our respects to elders past, present, and emerging. And extend that same respect to all Aboriginal, Torres Strait Islander, and First Nations peoples joining us today.
Today, we have on the podcast Jon Kern, who is the co-author of the Agile Manifesto for Software Development and an Agile consultant. If you're wondering, you're correct. I did mention the Agile Manifesto for Software Development. The Agile Manifesto. So Jon, welcome for being here and thank you for joining us.
Jon Kern:
Oh, my pleasure, Amaar. Thank you.
Amaar Iftikhar:
Yeah, very excited to have you on. Let's just get started with the absolute basic. Tell the audience about, what is the Agile manifesto?
Jon Kern:
Well, it's something that if you weren't around, and I know you're young, so you weren't around 21 years ago, I guess now, to maybe understand the landscape of what software development process and tooling and what most of us were facing back then, it might seem like a really obvious set of really simple values. Who could think that there's anything wrong with what we put into the manifesto? But back in the day, there were, what I practiced under as a... I'm an aerospace engineer, so I was in defense department work doing things like fighter simulation, F-14 flat spins and working with a centrifuge and cool stuff like that. And subject to a mill standard specification, which makes sense for probably weapons systems, and aircraft manufacturing, and all sorts of other things. But they had one, lo and behold, for software development. And so there was a very large, what I would call heavy handedness around software development process. We call it heavyweight process. Waterfall was the common term back then, and probably still used today.
And there were plenty of, I would say the marketing juggernaut of the day, IBM and the Rational Unified Process, these large, very much like SAFe. Where it's a really large body of work, awesome amount of information in it, but very heavy process even though everything would, say you tailor it, it could be whatever you wanted. I mapped my own lightweight process into RUP for example. Sure. But the reality was we were facing kind of the marketplace leader being heavyweight process that was just soul crushing, and from my perspective, wasting taxpayers' money. That was kind of my angle was, well, I'm a taxpayer, I'm not going to just do this stupid process for process sake. That has to have some value, has to be pragmatic. So lo and behold, there were a handful of us, 17 that ended up there, but there are a handful of us that practiced more lightweight methods. So the manifesto was really an opportunity for coming together and discovering some of the, what you might think of as the commonality between many different lightweight practices. There was the XP contingent. I first learned about Scrum there, for example. Arie van Bennekum, a good friend, he taught us about DSDM. I don't even remember what it stands for anymore. It was a European thing.
Alistair and Jim Highsmith, they had, I forget, like crystal methodologies. So there was a fair amount of other processes that did not have the marketing arm that erupted, or didn't have the mill standard. So it was really all about what could we find amongst ourselves that was some sort of common theme about all these lightweight processes. So it was all about discovering that, really.
Amaar Iftikhar:
You all get together, the principles kind of come to fruition, and let's fast forward a little bit. What was the initial reaction to the original manifesto?
Jon Kern:
Yeah, it was even kind of funny that the four values, the four bullets is as simple as it was. The principles came a bit later. I want to say we collaborated over Ward's wiki, but the original... If you go to Agile Uprising, you can see I uploaded some artifacts, because apparently I'm a pack rat. And I had the original documents that Alistair probably printed out, because he was the one... He and Jim lived there near Salt Lake City. So it was like, "Hey, let's come here." And we like to go skiing, so let's do it here. So he arranged the room and everything. And so there's some funny artifacts that you can find. And the way that it actually came about was an initial introduction of each of us about our methods. And really I think a key, we left our egos at the door. I mean I was a younger one. Uncle Bob, some of these, he was at Luminar, I know I have magazines still in the barn that he was either the editor of, or authors of for people who don't remember what magazines are. Small little booklets that came out. So Uncle Bob was like, Ooh, wow, this is pretty cool.
And I wasn't shy because I had a lot of experience with heavyweight methods. So I really wanted to weigh in on... Because I had published my own lightweight method a few years earlier. So I had a lot of opinions on how to avoid the challenges of big heavyweight process. So the culmination as we were going out the door and after we had come up with the four values was I think Ward said, "Sir, want me to put this on the web?" And again, this is 2001 so dot com and the web's still kind of new so to speak. And we're all like, yeah, sure, why not? What the hell, can't hurt. We got something, might as well publish it. I don't think to a person, anybody said, "Oh yeah, this is going to set the world on fire because we're so awesome." And we were going to anoint the world with all of this wonderful wisdom. So I don't think anybody was thinking that that much would happen.
Amaar Iftikhar:
Yeah. So what were you thinking at that time? So how would the principles that you had come up with together, was that maybe just for the team to take away? Everyone who was there? What was the plan at that time?
Jon Kern:
I think it was a common practice. Like I said, there were other groups that would often meet and have little consortiums or little gatherings and then publish something. So I think it was just, oh yeah, that's a normal thing to do is you spent some time together and you wrote things down, you might as well publish it. So I think it wasn't any deeper than that other than Bob, I think Bob might say that he wanted to come up with some kind of a manifesto of sorts or some kind of a document because that's I think what those sort of... I was never at one of those gatherings, but you know, you could see that they did publish things. I have a feeling it was just something as innocent as, well we talked, wrote some things down, might as well share it.
And then the principles, there were a lot of different practices in the room. So some of what I would say the beauty of even the values page is the humility at the top is it's still active voice. We are uncovering not, hey all peasants, we figured it all out. No, we're still uncovering it. And the other thing is by doing it, because I'm still an active coder. And plus we value this more on the left, more than on the right. Some people might say it's a little ambiguous or a little fuzzy, but that's also a sign of humility and that it's not A or B. And it really is fuzzy, and you need to understand your context enough to apply these things. So from a defense department contracting point of view, certainly three of the four bullets were really important to me because I learned... Sure, we did defense department contracting. But it's way more important to develop a rapport with the customer than it is... Because by the time you get to the contract you've already lost, which goes along with developing a rapport with the customer, the individual.
And one of Peter Coad's, when we worked with customers and whatnot, one of our mantras was frequent tangible working results, AKA working software. You can draw a lot and you can do use cases for nine months, but if you don't have anything running, it's pretty, I would guess risky that you don't have anything, no working software yet. So it really was I think an opportunity to share the fact that some people thought two weeks and other people thought a month. Even some of the principles had a pretty good wide-ranging flexibility so to speak. That I think is really important to note.
Amaar Iftikhar:
Yeah, no, absolutely. And it makes sense. Did you or anyone else in the room at that time ever imagine what the impact downstream would be of the work that was being done there?
Jon Kern:
Not that I'm aware of. I certainly did not. I remember a couple times in my career walking in and seeing some diagrams when I worked with the company TogetherSoft, and we'd build some cool stuff and I'd see people having some of the... Oh yeah, there's a diagram I remember making on their wall. That's kind of cool. But nothing near how humbling and sort of satisfying it is. Especially I would say when I'm in India or Colombia or Greece, it almost seems maybe they're more willing to be emotional about it. But people are, it's almost like they were freed by this document. And in some sense this is a really, really tiny saying it with the most humility possible. A little bit like the Declaration of Independence, and the fact that a handful of people... And the Constitution of the United States. A handful of people met in a moment of time, never to be repeated again and created something that was dropped on the world so to speak, that unleashed, unleashed a tremendous amount of individual freedom and confidence to do things. And I think in a very small, similar fashion, that's what the manifesto did.
Amaar Iftikhar:
As you mentioned, there was a point in time when the manifesto was developed and that was almost over 20 years ago. So now the way of working, and the world of working has drastically changed. So what are your thoughts on that? Do you see another version coming? Do you think there are certain updates that need to be made? Do you think it's kind of a timeless document? I'd love to hear your thoughts on that.
Jon Kern:
Yeah, that's a good question. I personally think it's timeless and I welcome other people to create different documents. And they have. Alistair has The Heart of Agile, Josh Kerievsky's got Modern Agile.
There's a few variations of a theme and different things to reflect upon, which I think is great. Because I do believe, unlike the US Constitution, which built in a mechanism to amend itself, we didn't need that. And I believe it captured the essence of how humans work together to produce something of value. Mostly software, because that's what we came to practice from, is the software experience. But it doesn't take a lot of imagination to replace the word software with product or something like that and still apply much of the values that are there with very, very minor maybe adjustments because frequent tangible working results.
There might have to be models, because you're not going to build a skyscraper and tear it down and say, "Oh, that wasn't quite right," and build it again. But nonetheless, there are variations of how you can show some frequent results. So I think by and large it's timeless. And I would challenge anybody. What's wrong with it? Point out something that's somehow not true 20 years later. And I think that's the genius behind it was we stumbled on... And probably because most of us were object modelers, that's one of the things we're really good at, is distilling the essence of a system into the most critical pieces. That's kind of what modeling is all about. And so I think somehow innately, we got down to the core bits that make up what it is to produce software with people, process and tools. And we wrote it down. That's why I think it's timeless.
Amaar Iftikhar:
Yeah, no absolutely. I think that was a really good explanation about why it's timeless. I think one of the principles that comes to mind in a kind of modern hybrid or flexible working arrangement is one of the principles talks about the importance of face to face conversations. And in a world now where a lot of conversations aren't happening physically face to face, they might be happening on Zoom. Do you think that still applies?
Jon Kern:
Yeah, I think what we're finding out with... Remote was literally remote, so to speak, back 20 years ago. I was working with a team of developers in Russia and we had established enough trust and physical... I would travel there every month. So kind of established enough of a team, and enough trust in the communication that we could do ultimately some asynchronous work because different time zones. And me being in the east coast. 7:00 AM in the US was maybe 3:00 PM in Russia if I recall. St. Petersburg. So we were able to overcome the distance, but it's hard to beat real life. And I would often sometimes even spar a little bit with Ron Jeffries that on the one hand you could say the best that you can do is in person. But on the other hand, I could argue a little bit of some of the remoteness makes things... You have to be a little more verbose, possibly a little more precise, but also a little more verbose. A little more relaxed with... You might take a couple of passes to get something just because, I mean there are two time zones passing in the night. But that was based off of some often initial face to face meetings, and then you could go remote and still be successful and highly effective.
So I think it's important that teams don't just say that they can still do everything. And zoom is way better than 20 years ago, admittedly. Zoom gets, at least you can see a face. But nothing replaces the human contact. And I think also for wellbeing, I think human contact is important. So I would still say that the interaction aspect in the manifesto is still best served with a healthy dose of in-person. And that's kind of the key about most things in Agile. It's to me it's about pragmatism, and not just being dogmatic but rather, what might work better for us? And even experimenting with try something a little bit and see how that works. So even how you treat the manifesto, you should treat it in an Agile manner so to speak.
Amaar Iftikhar:
Yeah, no absolutely. That's a great point. On that note, as an Agile consultant or the Agile guy, what have you seen are the best practices or what works, what doesn't work for distributed teams?
Jon Kern:
Well I think the things that are most challenging that I've run across big companies and even smaller ones is that... I don't know if it's natural, God forbid if it's natural, but tendencies that I've seen in some companies to set up silos where you're the quality control, you're the UX, you're the front end, you're the back end, makes my head want to explode. Because that's building in a lag and building in communication roadblocks and building in cooperation, which is hand-offs from silo to silo, versus collaboration. So I've seen more of that. And I get it, you might want to have a specialty, but customer doesn't care. Customer wants something out the door. If I showed up and I'm going to pull a feature off the stack, what do you mean I can only do part of it? I don't get that. And yeah, I know I'm not an expert in everything but we probably have an expert that we can figure out what the pattern is. So I find that sort of trend, I don't know if it's a trend, but I find that's a step backwards in my opinion. And it's better to try to be more cross-functional, collaborative, everybody trying to work to get the feature out the door, not just trying to do your little part.
Amaar Iftikhar:
Yeah, a hundred percent. I think knocking down silos is a big part of being agile, or even being digital for that matter. And often the remedies for it too are there at hand, but it's a lot harder to actually be practical with it, to actually implement it in an organization, a living, breathing business where there's real people and there's dynamics to deal with, and there's policies and processes to follow. So I guess as generic as you can be, what is your thought as an Agile consultant to a business that's kind of facing that issue?
Jon Kern:
One of the things that... Adaptive is what my colleague John Turley has really opened my eyes to. I tend to call it the secret sauce, or the missing piece to my practice. And it has to do with individual's mindset and what we call vertical development. So it might sound like weird wishy-washy fluffy stuff, but it's actually super critical. And I've always said people, process, and tools for, I want to say since late nineties probably, I mean a long time. And the first I've been able to realize why sometimes I would have just spectacular super high performing teams and other times it'd be just really, really well performing but not always that spark and sometimes kind of like, eh, that was a little meh. And a lot of it comes down to where people lie on in terms of how they make their meaning and what their motivational orientation is, command and control versus autonomy.
So what we do is we've learned that we can help people first off recognize this exists, and help people with what we call developmental practices. Something that, even the phrase, you probably heard it, like safe experiments. Failure, or trying something and failing. Well if you chop someone's head off for it, guess what? They're just going to probably stay pretty still and only do what they're told, not try to... I have a super high dose of autonomy in me, so I've long lived by the, better to beg forgiveness than ask permission, and always felt as long as I'm trying to do the right thing to succeed and do the best for the company, they probably won't fire me if I make a mistake. But not everybody has that amount of freedom in the way they work. So you have to help establish that as management, and that's a big thing that we work with, with teams.
And then we also start with the class. If you've ever watched office space, and if you haven't you should, but the, what is it that you do here? So there's a great, the consultants Bob and Bob coming in, the efficiency consultants, "So Amaar, what is it that you do here?" But literally that's something, whether we're helping teams build a new product, is okay, what's the purpose? What's the business purpose of this product? What is it that you do here? What do you want to do with this product? What value does it provide? Same thing with anything you're working with as a team. And that's why whether it's software, producing some feature that has an outcome that provides value to the customer, or some product. But the point is if you don't understand that, now it's making, the team is going to have a real hard time being able to make decisions which are helping us move forward.
So if you help everybody understand what it is we're here to do, and then try to get the folks that might reflect all the different silos if you're siloed, but all the different elements. How do we go from an idea to cash, so to speak, or idea to value in the customer's hand? And have a good look at that. Because there are so many things that just sort of... Technical debt often creeps into software code bases. And the same thing, we sort of say the organizational debt, the same thing can happen. Your process debt. You can just end up with, all right, we want the development team to go faster, John and company, can you come in and help coach us? We want to go agile. Sure, okay yeah. All right. We roll up our sleeves, we look around and after an initial kind of value stream look, like, wait I'm sorry but there's a little tiny wedge, it's about 15%, that's the development. And then you spent the 85% thinking about it.
Let's pretend we could double the speed of development. Which was initially the... Yeah, we need the developers to code faster or something. That's a classic. And no you don't, you need to stop doing all this bullshit up front that's just crazy-ass, big-waterfall, project-y stuff with multiple sign-offs. And as a matter of fact, one of the sign-offs, oh my gosh, it only meets once a week, and then if you have a typo in it, you get rejected. You don't come back for another... Are you insane? You spent eight months deciding to do eight weeks' worth of work. Sorry, it's not the eight weeks. So things like that, what I recommend anybody self-inspect is try to... If you're worried about your team, how you can do better is just start trying to write down what your process steps look like and what a typical time frame is.
How much time are you putting value into the... Because a lot of times people batch things up in sprints. That's a batch, why are you putting things in a batch? Or they have giant issues. Well, that's the big batch. So there's often lots of low-hanging fruit. But to your point, it's often encrusted in, this is the way we work, and nobody feels the ability to change or even to stop and look to see how we are working. So I think that's where we usually start: let's see how you actually work today. And then while we're doing that you can spill your guts, you can tell us all the things that hurt and that are painful, and then we'll try to design a better way that we can move towards, in terms of working more effectively. Because our goal is to help teams be able to develop ways to do more meaningful and joyous work, really. Because it's a lot of fun when it's clicking and when you're on a good team and you're putting smiles on the customers' faces; it's almost hard to stay away from work because it's so much fun. But if it's not that, if it's drudgery and you're just a cog in the machine and stuff takes months to get out the door, it's a job. It's not that much fun.
Amaar Iftikhar:
Yeah. A lot of the points that you mentioned there strongly resonated with me, and the common pain points. It sounds like you've kind of seen it all. And by the way, if you haven't seen Office Space, you definitely need to watch it. It's a really good one. You've now mentioned a lot about the challenges that a distributed team faces. Now I want to flip it over and ask you, what does the perfect distributed team look like today, one that lives and breathes agile values?
Jon Kern:
Yeah. I don't know if you can ever have such a thing, a perfect team of any kind. So I would say, harking back to the types of distributed teams that I've worked with, and this goes back to the late nineties. So I've been doing this for a long, long time. I've only really done remote, whether it was with developers in Russia or down in North Carolina, or places like that. And I think that the secret was having a combination of in-person... If you want to go somewhere as a group, there are things you can do to break the ice, to establish some, what you might call, team-building type activities.
And not just, hey, let's go do a high ropes course and be scared out of our wits together. But rather also things regarding why are we here, what are we trying to achieve? And let's talk about, whether it's the product we're trying to build, and take that as an opportunity to coalesce around something and get enough meat on the bone, enough of a skeleton of what it might look like. Because there are good ways to start up and have a good foundation. And that's part of what I've been practicing for decades. If you get things set up properly, with just enough requirements, understanding... And I do a lot of domain modeling with UML and things like that, just understanding what the problem domain is that we're trying to solve to achieve the goals we're looking for, and have a sense of the architecture that we want. So all those things are collaborative efforts.
And so if you have enough of a starting point where you've worked together, you come in and, let's say you even had to go rent someplace, because nobody lived near an office, so you all flew somewhere. I mean, that's money well spent in my opinion, because that starts the foundation. If you've broken bread, so to speak, or drank some beers, or coded together and did stuff, then you go back to your remote offices to take the next steps and realize when you might need to meet again. So it's really important to understand the value of establishing those relationships early on so that you can talk bluntly. And I have some good folks that I've run a production app with, for firefighters, since like 2006.
Amaar Iftikhar:
Yeah, very cool.
Jon Kern:
And that friend that I've worked with, we are so tight that we can... It makes our conversations, we don't have to beat around the bush, we don't have to worry about offending anyone, we just, boom, cut to the chase. Because we know we're not calling each other's kids ugly. We're just trying to get something done fast.
And building that kind of rapport takes time and effort and working together. And that's what I think a good, successful distributed team needs: you need to come together every so often and build those relationships, and know when you might need to come together again if something is a problem. But that, I think, is a key to success, because it shortens the time. Because you may have heard of things like, the group forms, if this is performance on the Y axis they form and they're at some performance level, then they need to storm before they get to norming, and before they start high performing. So it's this form, storm. You get worse when you're storming. And storming means really understanding where we're at. When we argue about, I don't think that should be inheritance, Amaar. And then you're like, "Oh bull crap, it really..."
And again, we're not personal, but we're learning each other's sort of perspectives and we're learning how to have respectful debates and have some arguments, so to speak, to get to the better place. And I've worked in some companies that are afraid to storm, and it feels like you're never high performing.
Everyone's too polite. It's like, come on. And I loved it when I worked with my Russian colleagues. They didn't give a crap if I was one of the founders. And I'm glad, because I don't want any privilege, I don't want anything like that. No, let's duke it out. May the best ideas win. That's where you want to get to. And if you can't get there because you don't have enough of a relationship, and you tend not to say the things that needed to be said because you're being polite, well, it's going to take you a really long time to succeed. And that's a lot of money, and that's a lot of success, and people might leave.
So I think the important thing is if you're remote, that's okay, but sheer remote is a real challenge. And you have to somehow figure out, if you can't get together to learn how to form and storm, and build those bonds face to face, then you need to figure out how to do it over Zoom. Because you need to do it, because if you don't, if you never have words, then trust me, you're still not high performing.
Amaar Iftikhar:
Yeah, I kind of feel like being fully remote is being offered as almost a competitive advantage to candidates in the marketplace now, because it's a fight for talent. But if I'm understanding correctly, what you are saying is that the in-person element is so important to truly be high performing, and those ideas kind of contradict each other, I feel.
Jon Kern:
Yeah. And again, having been remote since the late nineties, I've been doing this a long time. And commuting to Russia is the longest commute I ever did, for three years. I mean that's a hell of a long flight to commute there over seven times, or whatever the hell it was. Anyway, I used to say that being remote is not for everyone, because it really isn't. I mean you have to know how to work without anybody around, and work. I mean it has its own challenges. And yeah, it might be a perk, but I think what you need to do is look at potentially what the perks are and figure out too, can I fold them into... It doesn't have to be all or nothing. And I think that can be an easy mistake to make maybe, is to go, all right, cool, we don't have to have office space. That's a lot of savings for the company. Yeah, but maybe that means you need to have some remote workspaces for occasional gatherings, or figure it out.
But yeah, I think even... And certain businesses might work differently. In the beginning of building a product, I want to have heavy collaboration, and I want to get to a point where it's almost, I feel like the product goes like this, where once you get things rolling and you kind of get some momentum going, now the hardest thing to do is be in front of an agile team, whether they're in-person or remote. Once things are rolling and rocking and kicking and it's like everything's clicking, you can just bang out features left and right, like boom, boom, boom. Yeah, okay, then we probably need to be...
Unless we've got ways that we're pairing or things like that. I will say, when we're together, mobbing is easier. I'm sure there's ways to do it remote, but being in a room, I don't know, it's a lot easier than coordinating over Zoom. You just go, hey, there's this problem, let's all hang out here after standup because we're just going to mob on this. So it doesn't take a whole lot, versus anything remote, there's a little extra, okay, we've got to coordinate, and with different time zones it gets even worse. So yeah, don't get carried away with remote being the end-all, be-all. Because I have a feeling there's going to be a... I would wager there will be a backlash.
Amaar Iftikhar:
And I'll take that, coming from the person who does this day to day, who helps teams become agile; I'll definitely kind of take your word for it. Plus, with my experience too, I've seen nothing really beats a good whiteboarding session. That is really hard to replicate online. I mean, we have these amazing tools, but nothing quite mimics the real-life experience of just having a plain whiteboard and a marker in your hand. That communication is so powerful.
Jon Kern:
Great point. You're so right, because just with the one company that I was with for five years, we were doing a high-level, engineered-to-order pump manufacturing sales type tool for... So it was my favorite world because it blended my fluid dynamics as an aerospace engineer, plus my love for building SaaS products, and building new software and things like that. And even having a young... we would interview at Lehigh University and we'd have some young graduates that would be working with us, and being able to bring them into the fold, and there was a room behind where my treadmill was and we'd go in there, we'd have jam sessions on modeling and building out new features. And man, you're right. Just that visceral, three-dimensional experience. Yeah, Miro's great. Or any other kind of tool, but yeah, it's not the same. You're absolutely right. That's a great point. You're almost making me pine for the good old days. [inaudible 00:42:04]
Amaar Iftikhar:
I think the good old days very much still exist. I think even now, it's kind of been a refreshing time for me to be with Easy Agile. I've only been here for just under two months now. And there's a strong in-person dynamic. And again, it's optional, where if people are remote or they're hybrid or they need to commute once in a while, it's a very understanding environment. But once you're in the office or you're in person, you kind of feel the effect you were describing, you're motivated to deliver for the end customer. You just want to come back. It's an addictive feeling of, I want to be back in person and I want to collaborate in real time in person.
Jon Kern:
That's beautifully said, because that's... One of the companies that we're beginning to engage with in South Africa, they're at this very crossroad of struggling with, everybody's been remote, but boy, the couple of times we were together, we got so much done. And you're describing the flame of, the warmth of delivering, and let the moths come to the flame. I mean, nurture it and then fan the flames of the good and let people opt in and enjoy it. And still sometimes, yeah, I've got to stay home, I've got the kids or the dog, that's okay too. But giving the option I think is where we're going to head. And I believe the companies that are able to build that hybrid culture of accepting both, neither mandating one nor the other, but building such a high performing team that it basically encourages people to opt into the things that make the most sense at that time, I think those companies will rule the day, so to speak.
Amaar Iftikhar:
Yeah, absolutely. It's been so nice to chat with you, Jon, and I've really enjoyed this. I want to leave the audience with one piece of advice for distributed agile teams from you. We've talked a lot about the importance of in-person collaboration. We've talked about the principles of the agile manifesto. Now, what would the one piece of advice be when you're thinking of both? When you want the agile manifesto to be something that's living and breathing in distributed agile teams, what one piece of advice can you give businesses today, right now, who are going through the common struggles? What can you tell them as that last piece of advice?
Jon Kern:
Well, I think kind of a one phrase that I like to use to capture the manifesto is, "Mind the gap." In my sort of play on words, what I mean is the gap in time between taking an action and getting a response. Whether it's what do we do about the office, what do we do about remote, what do we do about this feature, what do we do about this line of code? The gap in time is, it's sort of a metaphor about being humble enough to treat things as a hypothesis. So don't be so damn sure of yourself one way or the other about the office or remote or distributed. But instead, treat things as a hypothesis. Be curious and experiment safely with different ways and see what works. And don't be afraid of change. It's not a life sentence to, you got to run your business or your project or your team one way for the rest of your life. No. Don't tell the boss, but work is subsidized learning. I never understood people who just keep doing the same thing because they weren't given permission. Just try it. So that's what my departing phrase would be regarding making those decisions. Mind the gap and really be humble about making assumptions, and test your hypotheses, and shorten the gap in time between taking actions and seeing a reaction.
Amaar Iftikhar:
Oh, that's awesome. Thank you. I really wish we could let the tape roll and just keep talking about this for a couple more hours, but we'll end it right there, on that really good piece of advice that you've left the audience with. Jon, thank you again for being on the podcast. We've really, really enjoyed hearing from you and learning from your experiences.
Jon Kern:
Oh, my pleasure. Any time. Happy to talk another couple hours, but maybe after some beers.
Amaar Iftikhar:
Yeah.
Jon Kern:
Except it's your morning, my evening. I'm going to have to work on that.
Amaar Iftikhar:
Yeah.
Jon Kern:
My pleasure, Amaar.
Easy Agile Podcast Ep.20 The importance of the Team Retrospective
"It was great chatting to Caitlin about the importance of the Team Retrospective in creating a high performing cross-functional team" - Chloe Hall
In this episode, I was joined by Caitlin Mackie - Content Marketing Coordinator at Easy Agile.
In this episode, we spoke about;
- Looking at the team retrospective as a tool for risk mitigation rather than just another agile ceremony
- The importance of doing the retrospective on a regular cycle
- Why you should celebrate the wins
- Taking the action items from your team retrospective to your team sprint planning
- Timeboxing the retrospective
- Creating a psychologically safe environment for your team retrospective
I hope you enjoy today's episode as much as I did recording it.
Transcript
Chloe Hall:
Hi, everyone. Welcome to the Easy Agile Podcast. I'm Chloe, Marketing Coordinator at Easy Agile, and I'll be your host for today's episode. Before we begin, we'd like to acknowledge the traditional custodians of the land from which I am recording today, the Wodi Wodi people of the Dharawal-speaking nation, and pay our respects to elders past, present, and emerging. We extend that same respect to all Aboriginal and Torres Strait Islander peoples who are tuning in today. So today, we have a bit of a different episode for you. I'm going to be talking with Easy Agile's very own Content Marketing Coordinator, Caitlin Mackie. Caitlin is the Product Owner of our Brand and Conversions Team. Now this team is a cross-functional team who have only been together for roughly six months. And within their first few months as a team, mind you they also had two brand new employees, they worked on a company rebrand.
Chloe Hall:
A new team, a huge task; the possibility of the team being high performing was unlikely at this point in time. The team was too new to have already formed that trust, strong relationships, and psychological safety, but somehow they came together, managed to work together, created a flow of continuous improvement and shipped this rebrand. So I've brought Caitlin onto the podcast today to discuss the team's secret for success. Welcome to the podcast, Caitlin.
Caitlin Mackie:
Thanks, Chloe. It's a bit different sitting on this side. I'm used to being in your shoes. I feel [inaudible 00:01:45]. I feel uncomfortable. [inaudible 00:01:46].
Chloe Hall:
Yeah. It's my first time hosting as well, so very strange. Isn't it? How are you feeling today?
Caitlin Mackie:
Yeah. Good. I'm excited. I'm excited to chat about our experience coming together as a cross-functional Agile team, and hopefully share some of the things that worked for us with our listeners.
Chloe Hall:
Yes, I know myself, and I'm sure our audience is very excited to hear what your team's secret to success was. Did you want to start off by telling us what was this big secret that really helped you work together as a team?
Caitlin Mackie:
That's a great question, Chloe. And that's a big question. I'm not sure if there's one key thing, I suppose, that is that ultimate secret sauce or that one thing that led to the success. I'm sure we all want to hear what that is. I would also love to know if there's just this one key ingredient, but I think something for us, and probably one of the most memorable things that really worked for us, and there was a lot for us to benefit from doing this, was actually doing our retrospectives. So that's probably the first thing that comes to mind when it comes to what led to our success.
Chloe Hall:
Okay. Yeah. In the beginning, why did you start doing the retrospectives?
Caitlin Mackie:
So, we were a new forming team, like you mentioned before, and we saw retrospectives as another Agile ceremony, and we saw other teams doing it and they were having a lot of success from it, so we decided to jump on that bandwagon. And I think with being a new forming team, there are so many things that come into play. So, you're trying to figure each other out, how we all like to work and communicate with each other, all of that. And we were the first ever team dedicated to owning and improving our website. And we also knew it was likely that we'd be responsible for designing and launching a rebrand.
Caitlin Mackie:
So when you try and stitch all of that together, and then consider all those elements, we knew that we needed to reserve some time to be able to quickly iterate and call out what works and what doesn't. And what we did understand is that retrospectives are a great opportunity for the whole team to get together and uncover any problematic issues and have an open discussion aimed at really identifying room for improvement, or calling out what's working well, so we can continue to do that. So, I think retros allowed us to understand where we can have the most impact and how to be a really effective cross-functional Agile team.
Chloe Hall:
Wow. That is already so insightful. Yeah, it sounds like the retrospectives really helped you to gain that momentum into finding who your team is, becoming a well-working, high-performing cross-functional team. So, how often were you doing the retro? Were you doing this on a regular cycle, or was it just, "Okay. We have a problem. Some blockers have come up, we need to do a retro"?
Caitlin Mackie:
Yeah. I think initially we kind of viewed retros as this thing where like, "Oh, we've done a few sprints now. We should probably do a retro and just reflect on how those few sprints went." It was kind of like this thing that was always at the back of our mind. And we knew we needed to do it, but weren't really sure about the cadence and the way to go about it. So now, we do retros on a Friday morning, which is the last day of our weekly sprint. And then we jump into sprint planning after that, after a bio break as well, to let the team digest everything we talked about in the retrospective. And then we come into sprint planning with all the topics that we discussed, and we have a really nice, fresh perspective.
Chloe Hall:
Yeah.
Caitlin Mackie:
So, I think this works really well for us because everything is happening in a timely manner. We've just had a discussion about the best things that happened in the sprint, or what worked really well, so you want to make sure you can practice the same behavior in the following sprint, and vice versa for the improvements that you want to make. So, that list of action items that comes out of a retrospective provides really nice context, and you have them all in mind during sprint planning.
Caitlin Mackie:
So for example, in the previous sprint, it might have come up that you underestimated your story points or there wasn't enough detail on your user stories. So, with each story or task that you are bringing into the sprint, you're then asking the question, is everyone happy with the level of detail? What are we missing? Or, we've only story pointed this as a two, is it more likely to be a five? So, everything is really fresh in your mind, and I definitely think that helps create momentum, when you've got the whole team working to figure out how you can be more effective with every sprint.
Chloe Hall:
That's such a great point that you just made Caitlin. And I love how going from doing the team retrospective, that you actually can take those action items and go into your sprint and put them into place straight away. It's really good. Otherwise, I feel like if you do the sprint retrospective on the Friday, and you're like, "Okay, these are our action items," get to Monday sprint planning and you're just thinking of the weekend. That [inaudible 00:07:20]
Caitlin Mackie:
Yeah, a hundred percent. Yeah. It's all super fresh in everyone's mind. So, it might not work for every team, but we find it works really well for us, because we're being really deliberate with how we approach sprint planning.
Chloe Hall:
Yeah. And then with that, I could see how doing the retro, how it could easily go over time, but then your team has sprint planning scheduled after. So, it's like you can't go over time. How have you managed to kind of time box that retrospective?
Caitlin Mackie:
Yeah, that's a really, really good question. And it is on purpose as well that they are scheduled closely together. So, as mentioned above, the discussion you've had in the retrospective provides a nice momentum going into sprint planning, but it does mean we have to watch the clock. And initially, this can be quite awkward, because you want to make sure that everyone feels heard and that everybody has the same opportunity to contribute. And I think this responsibility falls on the scrum master, or the product owner, or whoever's facilitating the retrospective, to call it out and make sure everyone has the chance to be heard. You'll naturally have people tell the longer story or add a lot of extra context before getting to the point. And then you'll have others that will be a lot more direct. And I'm a lot like the former. I struggle to get to the point, which doesn't work well when you're trying to time box a retrospective, right?
Chloe Hall:
And I can relate, same personality.
Caitlin Mackie:
Yes. So with this, I think it really comes down to communicating the expectation and the priority from the get-go. With our team, and with any team, you want to figure out how you can perform really well and continually improve to exceed expectations and be better and learn and grow together. And I think if you all share that same mindset going into the retrospective, and acknowledge that it's a safe space to have difficult conversations, and as long as you're communicating with empathy, the team knows that it's never anything personal, and it's all in the best interest of the team. And that then helps the less direct communicators, like myself, address their point more concisely and really forces them to be more deliberate and succinct with their communication style.
Caitlin Mackie:
And that's really key to being able to stick to that time box, I think. And it does take practice, because it comes down to creating that psychological safety in your team. But once that's there, it's so much easier to call out when someone's going down a windy track, and bring the focus back and sort of say, "I hear you, what's the action item?" And just become a lot more deliberate.
Chloe Hall:
Wow. I couldn't even imagine how hard it would be, with the personalities that you and I have, just trying to be so direct and get rid of all the fluffy stuff. I mean, look at what it's done to form such an amazing team. So, you mentioned that aspect of psychological safety before. And being in a new cross-functional team... only six months together, and you had those new employees, do you think you were able to create a psychologically safe space at any point?
Caitlin Mackie:
That's another fantastic question. And I feel like, honestly, it would be best to have a team discussion around this. It'd be interesting to hear everybody's perspectives around what contributes to that element of psychological safety and if everybody feels the same. So, I can't speak for the team, but my personal opinion on this or personal experience is that creating an environment of psychological safety really comes down to a mutual trust and respect. And at the end of the day, we all share the same goal. So, we all really, really respect what each other brings to the table and understand how all of these moving parts that we are working on individually all come together to achieve the goal. So, when we're having these open discussions in retros, or not even in retros, just communicating in general really, it's clear that we're asking questions in the best interest of the team and individual motives never come into play, or people aren't just offering their opinion when it's unwarranted or providing feedback, or being overly critical when they weren't asked to do so.
Caitlin Mackie:
So, none of those toxic behaviors happen, because we all respect that, whatever piece of work is in question or the topic of discussion, the person owning that work, at the end of the day, is the expert. And we trust them, and we don't doubt each other for a second. And I think the other half of that is that we're also really lucky that if something doesn't go as we planned, we're all there to pick each other up and go again. So, this actually ties quite nicely into one of our values at Easy Agile, commit as a team. And this is all about acknowledging that we grow and succeed when we do it together, and to look after one another and engage with authenticity and courage. So, I may be biased, but I wholeheartedly believe that our team completely embraces that. And there's just such an admiration for what we all bring to the table, and I think that's really key to creating the psychological safety.
Chloe Hall:
I love that your team is really embracing our value, commit as a team and putting it into place, because that's what we're all about at Easy Agile, and it's just so great to see it as well. I think the other thing that
I wanted to address was... So again, within this cross-functional team you've got design and dev, so how do you think retros assisted you in working out what design and dev needed from each other?
Caitlin Mackie:
For sure. So, for some extra context for our listeners, in our team we've got two developers, Haley and David, a designer, Matt, and myself, who's in marketing. So, we're very much a cross-functional little mini team. We all have the same goal and that same focus, but we're also all working on these little individual components that we then stitch together. So, I think, by doing retros regularly, what we were able to identify was a really effective design and development cycle. So, we figured out a rhythm for what one another needed at certain points. For example, something we discovered really early was making sure that we didn't bring design and dev work into the same sprint. We needed to have a completely finished design file before dev starts working on it. And that might sound really obvious, but initially we thought, "Oh, well, if you have a half finished design file, dev can start working on that. And by the time that's done, the rest of the design file will be done."
Caitlin Mackie:
But what we failed to acknowledge is that by doing that, we weren't leaving enough capacity to iterate or address any issues or incorporate feedback on the first part of that design file. Or, if dev started working on it and design then gets told, "Oh, this part right here, it's not possible," the designer is back working on the first part. And it just creates a lot of these roadblocks. So in retros, this came up and we were able to raise it and understand what design needed from dev and what dev needed from design in order to make sure we weren't blockers for each other. And the action item out of the retro was that we all agreed a design file had to be completely finished before dev picks up the work.
Chloe Hall:
I think it's so great that you were able to identify these blockers early on. Do you think doing the retro on a weekly recurring basis was able to bring up those blockers quickly, or do you think it wouldn't have made a difference?
Caitlin Mackie:
No, definitely. I, a hundred percent, think that retros allowed us to address the blockers in a way more timely and effective manner. And we kind of touched on that before, but yeah, retros let you address the blockers, unpack them, understand why they're happening and what we need to do to make sure they don't happen again. So, for sure.
Chloe Hall:
Yeah. Yeah. I guess I want to talk a little bit now about the wins, the very exciting part of the retro, the part that we all love. So, how important do you think the wins are within the retro?
Caitlin Mackie:
So important. So, so, so important. It's like, when you achieve something epic as a team, you have to call it out. Celebrate all the wins, big, small. Some weeks will be better than others, but embrace that glass half full mentality. And there's always something to be proud of and celebrate, so call it out amongst
each other, share it with the whole company, publicly recognize it. Yeah, I think it's so important to embrace the wins. It just sort of creates a really positive atmosphere when you're in the team, makes everybody feel heard and recognized for their really positive contribution that they're making. And I think a big thing here as well is that if you've achieved something epic as a team, it's helpful for other teams to hear that as well, right? You figured out a cool new way to do something, share it. If it helped you as a team, it's most likely going to help another team.
Caitlin Mackie:
So I think celebrating the wins isn't even just reserved for work stuff either, right? If somebody's doing something amazing outside of work or hit a personal goal, get behind it.
Chloe Hall:
Yeah.
Caitlin Mackie:
Celebrate all the wins, always.
Chloe Hall:
Yeah. And I think it's so good how you mentioned that it's vital to celebrate the wins in someone's personal life as well, because at the end of the day, we're all human beings. Yes, we come to work, but we do have that personal element. And knowing what someone's like outside of work is part of creating that psychologically safe space and team bonding, which is so vital to having a good team at the end of the day. Yeah.
Caitlin Mackie:
Yeah, a hundred percent. Yeah, you hit the nail on the head with that. We talked about psychological safety before, and I definitely think incorporating that, acknowledging that, yeah, we are ourselves at work, but we also have a whole other life outside of that too, so just being mindful of that and just cheering each other on all the time. That's what we've got to do, be each other's biggest cheerleaders.
Chloe Hall:
Yeah, exactly. That's the real key to success. Isn't it?
Caitlin Mackie:
Yeah, that's it. That's the key.
Chloe Hall:
So, you've been working really well as a new cross-functional, high-performing Agile team. How do you think... What is your future process for retros?
Caitlin Mackie:
We will for sure continue to do them weekly. Part of the Agile manifesto that we want to focus on is responding to change, and I think retros really allow us to do that. It's beneficial and really valuable for the team. And when you can set the team up for success, you're going to see the positive impact that has across the organization as a whole. So yeah, we've found a nice cadence and a rhythm that works for us. So, if it ain't broke, don't fix it.
Chloe Hall:
Yeah.
Caitlin Mackie:
Is that what they say? Is that the saying?
Chloe Hall:
I don't know. I think so, but let's just go with it. [inaudible 00:19:02], don't fix it.
Caitlin Mackie:
There we go. Yeah.
Chloe Hall:
You can quote Caitlin Mackie on that one.
Caitlin Mackie:
Quote me on that.
Chloe Hall:
Okay, Caitlin. Well, there's just one final thing that I want to address today. I thought, for the end of the podcast, let's just have a little bit of fun, and we're going to do a little snippet of Caitlin's hot tip. So, for the audience listening, Caitlin, I want you to think of something that they can take away from this episode, an action item that they can start doing within their teams today. Take it away.
Caitlin Mackie:
Okay. Okay. All right. I would say always have the retrospective. Don't skip it. Even if there's minimal items to discuss, new things will always come up. And you have to regularly provide ways for the team to share their thoughts. And I'll leave you with, always promote positive dialogue and show value and appreciation for team ideas and each other. That's my-
Chloe Hall:
I love that.
Caitlin Mackie:
That's my hot tip.
Chloe Hall:
Thanks, Caitlin. Thanks for sharing. I really like how you said always promote positive dialogue. I think that is so great. Yeah. Well, thanks, Caitlin. Thanks for jumping on the podcast today and-
Caitlin Mackie:
Thanks, Chloe.
Chloe Hall:
Yeah, and for sharing your team's experience with retrospectives as a new cross-functional team. It's been really nice hearing from you, and there's so much that our audience can take away from what you've shared with us today. And I hope that we've truly inspired everybody listening to get out there and implement the team retrospective on a regular basis. So, yeah, thank you.
Caitlin Mackie:
Thank you so much, Chloe. Thanks for having me. It was fun, fun to be on this side. And I hope everyone enjoys this episode.
Chloe Hall:
Thanks, Caitlin.
Caitlin Mackie:
Thanks. Bye.
Easy Agile Podcast Ep.23: How to navigate your cloud migration journey
"Having gone through a cloud migration at Splunk, Greg share's some insightful key learnings, challenges and opportunities" - Chloe Hall
Greg Warner has been involved with the Atlassian ecosystem since 2006 and is a frequent speaker at Atlassian events. Greg has worked as a senior consultant for a solution partner, supported Jira and Confluence at Amazon, and in his current role at Splunk, executed a cloud migration to Atlassian Enterprise Cloud for over 10,000 of his colleagues.
In this episode, Greg and Chloe discuss the cloud migration journey:
📌 The mental shift to cloud migration and how to think beyond the technical side
📌 How to navigate the journey without a roadmap to follow
📌 The four pillars to success for your cloud migration journey
📌 Finding the right time to migrate & thinking about future opportunities beyond your migration
📌 The unexpected value that can come from a cloud migration
+ more!
📲 Subscribe/Listen on your favourite podcasting app.
Thanks, Greg and Chloe!
Transcript
Chloe Hall:
Hey everyone and welcome back to the Easy Agile Podcast. So I'm Chloe, Marketing Coordinator at Easy Agile, and I'll be your host for today's episode. So before we begin, we'd like to acknowledge the traditional custodians of the land from which I am recording today, the Wodiwodi people of the Dharawal-speaking nation, and pay our respects to elders past, present, and emerging. We extend that same respect to all Aboriginal and Torres Strait Islander peoples who are tuning in today.
Chloe Hall:
So we have a very exciting guest on the podcast today. This guest has been involved with the Atlassian ecosystem since 2006 and is a frequent speaker at Atlassian events. He has worked as a senior consultant for a solution partner, supported Jira and Confluence at Amazon, and in his current role at Splunk, executed a cloud migration to Atlassian Enterprise Cloud for over 10,000 colleagues. So welcome to the Easy Agile podcast, Greg Warner.
Chloe Hall:
How are you?
Greg Warner:
Good, and thank you for having me.
Chloe Hall:
No worries. It's great to have you here today.
Greg Warner:
This is one of my favorite topics. We talk about cloud migration and yeah, I hope I can explain why.
Chloe Hall:
Yes, that's exactly what we want for you because I remember when we met at Team 22, you were just so passionate about cloud migration and had so many insights to share and I was very intrigued as well.
Greg Warner:
To give a bit of background about myself.
Chloe Hall:
Yeah.
Greg Warner:
I haven't always been a cloud person. So you mentioned before about being involved since 2006. I was involved in the early days, when Jira had several different flavors of standard and professional, when you'd order an enterprise license from Atlassian and they'd send you a shirt. That was one of the differences between the licenses. So I was based a lot in the server versions over many years. I looked at the cloud as being the poorer cousin, if you like.
Greg Warner:
I'd been to several Atlassian Summits and later Team events where there were always things about what was happening in cloud, but not necessarily server. I participated in writing exam questions for the Atlassian certification program for both server and DC. So for me, in the last 18 months to two years, to make this fundamental shift from being a proponent of what we were doing on server and DC to now being absolutely cloud first, that is the definite direction that we as a company have chosen, and certainly why I'm so passionate about speaking to other enterprise customers about their cloud migration journey.
Chloe Hall:
Wow. So what do you think it was that made you go, okay, let's migrate to the cloud, given you were so involved in the server and DC side of it? What was it that grabbed your attention?
Greg Warner:
I joined Splunk in 2019 and it wasn't all roses in regards to how we maintained Jira and Confluence. It wasn't uncommon to have outages that would last hours. For two systems that were just so critical to our business operations to have that, I was kind of dumbfounded but I thought, hey, I've been here before. I have seen this. And so it was a slow methodical approach to root cause our problems, get us to a version that was in long-term support, and then take a breather.
Greg Warner:
Once we got to that point where we didn't have outages, we could kind of think about what the future would be. And for me, that future was exactly what I'd done before, what I'd done at Amazon, which is where we would move all of our on-prem infrastructure, Jira, Confluence, and Crowd, to public cloud, whether it would be AWS or GCP, something of that flavor. I'd done that before. I knew how we were going to do that, to the extent that I'd even held meetings in my team about how we were going to stand up the infrastructure, what the design was going to be.
Greg Warner:
But there was probably one pivotal conversation, with our CIO, and it was one of those just-passing-by moments, and he's like, "Greg, I've seen the plans and the funding requests." He's like, "But have you considered Atlassian Cloud?" Now, my immediate personal reaction was, we are not going to do that, because I'd seen the iterations. I'd seen it over time. I'd worked for a solution partner. I'd worked with customers in cloud, and never really thought it could be enterprise-ready. So my immediate reaction was, not going to do that. I said, "I'm not going to answer that question right now." I said, "I don't know enough to give you an answer."
Greg Warner:
And I'm absolutely glad I did that, because I would've put a foot in my mouth had I given the immediate response that was... So yeah, I took that question, went and did some analysis, spoke to our technical account manager at the time, and really looked at what had been going on and where was cloud today? Where was it in its maturity? And the monumental thing for me was realizing, I think it's actually ready. People make excuses for why they can't do it, but there are a bunch of reasons why you should. And if you look at us as a company, with our own products, we are moving our own customers to cloud, and we are using cloud services like Google Workspace and Zoom and a variety of SaaS applications. What was so different about what we did in engineering that couldn't go to cloud? And that was like, okay, I think the CIO was actually asking me a much bigger question here.
Greg Warner:
So the result of that was yes, we decided that it was the right time for Splunk to move. And that is a monumental shift. And I know there's a lot of Jira admins out there that are like, if you do this, you're putting your own jobs at risk. The answer is no, you're not. And even within my team, when we had we'd discussed this, there was emotional connection to maintaining on-premise infrastructure and were we giving our own jobs away if we do this? There's all those... No.
Greg Warner:
And there have actually been two people in my team that got actually promoted through the work of our cloud migration that otherwise wouldn't have because they could demonstrate the skills. But that's kind of like the backstory about how we decided to go to cloud. And I think as we are thinking about it, there is a mental shift first. Before you even go down the technical path about how you would do it, change your own mind so that it's open so that you're ready for it as well.
Chloe Hall:
Yeah, I love that. It's so good. And I think just the fact that you didn't respond to your CIO, did you say that?
Greg Warner:
Yep.
Chloe Hall:
That you didn't respond to your CIO straight away and you weren't like, "No, I don't want to do that." You actually stepped away, took that time to do your research, and thought maybe cloud is the better option for Splunk, which is just so great and really created that mental shift in yourself. So when you say that your employees, like everyone, kind of had that belief that, oh, we're going to lose our jobs if we move from on-prem to cloud, and those employees ended up getting promoted, how did their roles change?
Greg Warner:
When we moved from on-prem to cloud, you no longer have to maintain the plumbing, right?
Chloe Hall:
Yeah.
Greg Warner:
You no longer have to maintain all the plumbing that's supporting Jira, Confluence, Bitbucket, whatever is going to move. Now, we thought that was the piece that was actually providing value to the organization. And it wasn't until we went to cloud that we actually realized it wasn't. What we can do now is different. And that's what my team has done. They've up-leveled.
Greg Warner:
So in the times since we moved from Jira, Confluence on-prem to cloud, we now get involved a lot more with the business analysis and understanding what our project teams want. So when someone from engineering is requesting something that has an integration or a workflow, we've got more time to spend on that than are we going to upgrade? Are we on the current feature release? Is there a bug we have to close? Log for J as a prime example where the extent of where we covered was logging a call with the Atlassian enterprise support and then telling us, "Yep, it's done."
Greg Warner:
Whereas other colleagues within the ecosystem that I spoke to spent a week dealing with that, right? Dealing with patching and upgrades. So the value of the work our team does has shifted up. We've also done Jira Advanced Roadmaps in that time. So we've been able to provide things we would've never got to because we were too busy with the plumbing, to the extent now that we have a very small footprint of on-prem that remains, and that's primarily FedRAMP and IO5. It's not quite certified yet. It's going to get there. So we have a very small footprint, and I'm the one who has to do the upgrades, and now you look at it like, oh my god, that's going to be this couple-week task we're going to do, when I could do all this other better work that's waiting for us in cloud. You don't realize it until you have it removed how much you used to do.
Greg Warner:
And so we used to do two upgrades of Jira a year and two upgrades of Confluence a year. We put that down at about a month's work each, by the time you do all of your testing and your staging and then do it. So you're really looking at four months of the year spent doing upgrades. We don't have that anymore. It's completely gone. And so now we make sure that we do things cloud first. We don't bring across behaviors that we were doing on-prem into cloud. So that's probably one thing we learned: don't implement server DC in cloud.
Chloe Hall:
Yeah, that's so great. It seems like it's opened up a lot more opportunity for you as well. So I think something that I kind of want to look into and understand a bit more is that people focus a lot on the technical aspect of the cloud migration. What other aspects do you think need to be considered?
Greg Warner:
Certainly people. I mentioned at the very front here the mental mindset, and that really started with my team, to get their minds around how we were going to do this cloud migration. There isn't necessarily yet a roadmap that says these are all the steps you need to take to get ready for your cloud migration. So we had to invent some of those, and one of those, too, was, what did we want to get out of the cloud migration?
Greg Warner:
When I speak to other Atlassian customers, they talk about running a project where the project is the cloud migration, and the start and the end is the cloud migration day. No, completely wrong. The cloud migration actually has a beginning, a middle, and an end. What you're talking about here, these first changes, is the beginning, and that should be: we're moving to cloud because it should be fundamentally better than what we have today.
Greg Warner:
If it's not better, there's no value in doing the activity. So we started with a vision, and that vision was that all of the core things had to work from day one and they had to work better. So create issue, edit issue, update issue, that just needs to work. There should be no argument whether it does or does not. That needs to work and work better. Create a page, edit a page, share a page. That stuff needs to work in Confluence without any problems. We also need to make sure that there are people in the organization for whom this could be a fundamental change of how they work, depending on how much they work with Jira and Confluence. So appreciating that there is some change management and some communications that needs to be ready as you do your cloud migration to ensure that your vision is going to work, but also acknowledging you will break some things. You're not going to be able to do a cloud migration and shift from A to B without breaking anything.
Greg Warner:
It will go wrong. So we were aware of that, and for that, what I would always tell people was that we're really fixed on the vision of making sure it's better than it is today, but flexible on the details of how we get there. We will probably find different ways as we go along, because things will change. Cloud changes itself. You'll discover things you didn't know before. There was a Jira admin that made a decision 10 years ago, and you've now found that. So yeah, very, very fixed on that vision, that from day one we had to have this unboxing experience, that when people got to use Jira and Confluence Cloud for the first time, they could see why we'd spent so much effort to make sure it was polished and things just worked. And as you went a bit further out, there might be things to do with apps that might not be quite the same.
Greg Warner:
That's okay. And then further out, things you just ultimately can't control. And for that, we had 76 integrations from teams all over the company that had written automations. We're never going to get to find out what they all do, but we knew that some of those would probably break. And so it was just dealing with some change control and letting those people know this is coming, what the REST endpoints will be, how to set up their API keys. We did a lot of that, but we did have one integration that broke, and that integration broke because the entire team was on PTO, or leave, that week. We can't avoid that one. But it was good to see other teams actually jumped in, because they'd been involved in updating theirs, to go help fix that. So that was okay. We had one integration that we really gave the white glove support to, and that was for... We have a Salesforce to Jira integration that's a revenue-generating integration.
Greg Warner:
We gave that a lot of attention to make sure that just worked. But for the 76 others, we provided a runbook. The runbook was essentially: teams, you do things like this. So they knew how to change and update to the new system. But yeah, certainly there's the beginning, middle and end. The beginning is all those shifts that you're going to have to make and probably some history about design decisions. The middle is, in fact, your cloud migration, and the middle to the end is everything you do with it afterwards. So that's where the real value comes from in your cloud migration. It's once you're in, what can we do with it?
Greg Warner:
And we are towards the end of that now. There have been things that I couldn't have planned for that people have done. So we did Jira Advanced Roadmaps, saving the forest there, but also we're encouraging our staff to extend the platform. That used to be really difficult, and we've worked with Atlassian to understand what that should look like. And we've settled on using Atlassian Forge. And so now we have our first app this week, in UAT, in Atlassian Cloud, to solve business problems that we have. That's a custom Atlassian Forge app. And we're encouraging our engineers to build those so they can extend and get that real value from the cloud migration.
Chloe Hall:
Yeah, wow. You've come so far, and it's nice to hear that you're towards the end of it and all the opportunities are coming with it, and you're seeing all the value. It's all paying off as well. I think I just want to go back to that moment where you talked about there not essentially being a roadmap laid out. There isn't someone or something to follow that says this is where you need to start, these are the steps to cloud migration. And I think a lot of people, that's what they fear. They're like, we're not sure exactly where to start. We're not sure what roadmap we'll follow. How do you navigate that?
Greg Warner:
So I get back to when I talked about the vision. We said we're fixed on the vision, flexible on the details. Early on, when we signed for the cloud migration, it was in the first week after we'd signed for it that the same CIO asked me, "Greg, what's our date? When are we moving? Because you've sold me that this is so much better. Where's the action? When do we get this?" And we took a good six weeks after we signed to actually understand the tooling that's available. So for Jira, there's really two options. There's the Jira site import and the Jira Cloud Migration Assistant. And on the Confluence side, there's one that's called the Confluence Cloud Migration Assistant. We had to understand how those technologies work. And for a couple of weeks there, my team actually considered that if we did the migration ourselves, we could probably save the company a bunch of money and we would own it.
Greg Warner:
We would know how this thing worked. We got about four weeks in and decided that was a terrible idea. Do not do that. Any enterprise customers I talk to that say, we're going to do it ourselves: do not do that. Do not do that. And part of the reason is that there are really four pillars to success for your cloud migration: Jira migration, Confluence migration, apps, and users. And we did not know how to do apps and users, and we probably could have gotten away with Confluence and Jira. But we said, look, this is something where we actually need to have a partner involved. And so we did ask partners to provide their way of doing it, knowing what they knew about us. And we did provide as much detail as we could. We had two partners that actually provided completely different methodologies to get there.
Greg Warner:
So this is that flexible-on-the-details part, but we really had to make a decision on what worked for us. So when it really came down to Jira, would we do a big bang approach and just switch it over in the course of a weekend, or did we want to go cohort by cohort over time? And we decided for us, because we are a 24/7 organization that's supporting our customers, doing the big bang switchover was the best way to do it. So that's one of the reasons we chose the partner we did. But that partner didn't necessarily have a roadmap of where they wanted to go. We did then explain what we wanted to get out of this. That was the first thing, that it needed to happen on a weekend. So that then filters down what your choices are. The ecosystem apps part is really important. For one, there may have been apps installed in your system that have been there for 10 years and you're not sure why they're there anymore, because it was four Jira admins ago.
Greg Warner:
Nobody knows what's there. But if they don't have a cloud migration pathway, you really should consider that they're probably going to hit their end, because there is no equivalent. So you can rule them out. Identify the ones that do have a business process attached to them. And for that, Salesforce for us, we had to find a cloud-first connector that would work. So that meant we knew that was going forward. But really, I think the key thing that we invented, that we didn't know about, was that we created this thing called an App Burn Down. And that's where we looked at all the apps we had. We had about 40 apps. We said, okay, which ones are not going to go to cloud? Which ones don't have a migration pathway? Which ones are going to replace something else? And so we started to remove apps over the course of about three months.
Greg Warner:
So people would see that we're starting to get away from on-prem design decisions and old ways of doing things. But we also said, but once we get to cloud, this is the pathway out of it. So that we said, look, we're going to turn this app off but you're going to get this one instead, which is the cloud-first app. So people could see how we're going to make the jump over the river to get there. But it meant that we would, over time, identify apps that weren't used. If we turned them off and nothing happened, it's fine. But also we did come across some where they were critical to a business use. And so if we didn't have an answer for those yet, it gave us time to find one. And with your user base, typically it's your colleagues, that's going to be your most critical customers. They're going to ask, okay, you're turning it off. When do I get the functionality back?
Greg Warner:
And by doing that App Burn Down over time, you buy yourself time to have that answer. It's a much easier conversation than simply turning off functionality and saying, "I don't have an answer for you yet." It wasn't necessarily a roadmap, but working with a solution partner is absolutely the right way to go. Don't try and do it yourself. They also work with Atlassian, and they have far better reach into getting some of these answers than you can possibly ever have. On at least three different occasions our solution partner went and spoke directly with an ecosystem partner to find out what the path forward was and how we could make it work. So it is good. The migration is really a three-way collaboration between yourself, your solution partner, and Atlassian. And you all have the same goals. You want to get to cloud, and it does work really well.
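To make the App Burn Down that Greg describes a little more concrete, here is a minimal, hypothetical sketch of how such an inventory could be tracked. The app names, categories, and dates are illustrative assumptions, not Splunk's actual tooling or data; the idea is simply that every app gets a migration pathway and a planned removal date, so the number of remaining on-prem apps burns down over time.

```python
# Toy sketch of an "App Burn Down": give every installed app a migration
# pathway and a planned removal date, then report how the on-prem app
# count burns down month by month. All entries below are made up.
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class App:
    name: str
    pathway: str      # e.g. "retire", "replace-with-cloud-first", "cloud-equivalent"
    remove_by: date   # planned removal from the on-prem instance

inventory = [
    App("Legacy reporting plugin", "retire", date(2021, 8, 1)),
    App("Salesforce connector (server)", "replace-with-cloud-first", date(2021, 9, 1)),
    App("Diagramming app", "cloud-equivalent", date(2021, 10, 1)),
]

def burn_down(apps):
    """Count how many apps come off the on-prem instance in each month."""
    return Counter(app.remove_by.strftime("%Y-%m") for app in apps)

if __name__ == "__main__":
    remaining = len(inventory)
    for month, removed in sorted(burn_down(inventory).items()):
        remaining -= removed
        print(f"{month}: removing {removed}, {remaining} on-prem apps left")
```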
Chloe Hall:
Wow. Yeah. I hope everyone got that advice. Definitely don't take this on alone. Reach out to a solution partner. And I really like how you went to two different solution partners and found out what their ideas were and which way they wanted to take you, so you could explore your options and work out what was the best route for Splunk. And it's worked very well for you as well, having that support. Yeah. Sorry, you go.
Greg Warner:
The choice of the partner is really important, and it's probably one of the earliest decisions we had to get right. And I remember several times thinking, have we got the right people on board? Did we speak to... It was an interview process, to the extent that when we had our final day, after we'd been working with Atlassian and with our partner for six months and one month after our migration was completed, we had one final Zoom call with all of us and took a photo. But it kind of felt like a breakup, to be honest, because we'd been in each other's faces for six months working, and now we were all saying goodbye and might not see each other again. It was the weirdest feeling. But it did work. And so yeah, it is a real fundamental choice.
Greg Warner:
Just take the time, make sure they understand what you want to do, and make sure you understand how they're going to do it. If we had done it ourselves, we would've got ourselves all caught up in knots and it wouldn't have been a successful migration. I'm a technical guy. I want to solve it. But I think the right answer was no, you don't need to know how this works 100%, because you're hopefully going to do this just once. So focus on the real business-value things: dealing with stakeholders and the change, and making design decisions that are really important for you, because you're going to own those for probably the next decade, rather than worrying about how you get your data from A to B.
Chloe Hall:
Yeah. It definitely would've felt like a breakup for you because you would've been working side by side for so long, dealing with so much. Are you still in contact with them or...
Greg Warner:
Yeah. We had this fundamental thing we always said: if there's a problem, we're cautiously optimistic we're going to solve it. We did have engineering challenges that we went through, but I did say right early on, the ecosystem is only so big and we're all going to bump into each other at some point, so let's make sure that we're still friends at the end of this. And I didn't realize how important that was until later, when I was in New York for Christmas and I arranged to meet the project manager who worked with us. She lives in New York, so I said, how about I meet you? So we met at the hotel and she said, "I have never met a customer outside of work to do this." I told the story about it feeling like a breakup, and she said, "At the beginning you said we'd be friends after."
Greg Warner:
Yeah, it is, because it can be really hard. I've been on the consultant side where you have to have some hard conversations sometimes. You want to make sure that everyone understands the problem and that you're trying to make it better, so that at the end of it you can still be friends. That is the thing. There will probably be engagements later on where you might need them again. So you want to make sure you have your choice of best-in-breed partners to choose from, you have those relationships, and they understand what you're trying to achieve. So yeah, it is really important to choose the right partner. Don't necessarily choose based on price; choose the partner that's going to work for you, understands what you're trying to get out of your cloud migration, and will be there in the future when you need them for another cloud migration or a much more gnarly project. Try and be friends at the end of it.
Chloe Hall:
And definitely it's good that you have that friendship now because they have that understanding about your business and what you want and the value of it. So if you do need help again, it's a lot easier to bring them on board straight away. So now that you've performed a cloud migration and you're coming towards the end of it, do you look at the process any differently to when you were at the very beginning?
Greg Warner:
Yeah, I thought we were just executing a data migration, just on-prem to cloud.
Chloe Hall:
Yeah.
Greg Warner:
Pretty straightforward, nothing big. I was pleasantly surprised, as we were making some of these decisions along the way, that it was more than that. There were business processes that we could improve. There was a beginning, a middle, and an end; I didn't realize that until after the end, actually. So when we did our cloud migration, it was the week before Thanksgiving in the US, November 19. And even that decision was made just going for a walk at lunchtime, thinking about when we should really do this. I came back, spoke to my project manager and said, "How about we do the cloud migration the week before Thanksgiving?" Because 50% of our workforce is located in the US, and a large proportion of them would be on leave or PTO that week.
Greg Warner:
So by doing it over the weekend before, we were ensuring that... it's like when you open a new restaurant: you don't want all of your tables full on the first night. We knew that we weren't going to have everybody using Jira and Confluence on day one after the migration, because we were going to break some stuff. That actually turned out to be an exceptionally good idea, and I'd encourage people to look at your data and work out when the low-usage time is to do this. I've been involved in Jira and Confluence for a long time and just thought, it's a task tracker and it's a wiki, there's nothing there that I don't really know about. But one of the decisions we made was that when we completed the data migration and it was ready to go, I always asked, if we waited, would we get a better result? And the answer was no.
Greg Warner:
We should make this available to people now. And so we opened it up on a Sunday morning in the US, which was starting to be business hours in Australia. We started making teams aware that they could now go ahead and use Jira and Confluence. And it was the feedback we immediately got from those teams, using Jira Service Management in cloud for the first time: "Wow, this is so much better than it was on-prem." And people said, "I can actually see the attention to detail you've put into fields and descriptions and the changes you've made." It started to impact people's workday that this was better than what they had. I didn't expect that to come back. And so I have a montage that we share with the team of all these Slack messages from people saying, "This is really good. This is much better than we had before."
Greg Warner:
What I also didn't realize is that when we moved from on-prem to cloud, the data we had became more usable and accessible. We hadn't planned that. It seems obvious now, but once it's in cloud, with all the security controls around it and no longer needing things like a VPN to get access to it, people could build new things that use it, that interact with your issues and interact with pages. And so we started with 76 integrations, and over the space of the first three months we had this big jump up to about a hundred-something, and now we're going to Forge. What it means is that people who have had this need to get to the data can now get to it. I didn't see that coming. I just thought we were going server to cloud. But having it more accessible has led to improvements in the way our teams are working, but also in how they use it in other applications, which just simply wasn't possible before.
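Greg's point about the data becoming more accessible once it's in cloud is easy to see in practice: with no VPN in the way, an integration can be as small as an authenticated HTTPS call. Here is a minimal sketch using Jira Cloud's standard REST search endpoint; the site URL and the credential environment variables are placeholders, and this is an illustration rather than one of the actual integrations discussed in the episode.

```python
# Minimal sketch: list recently updated issues from a Jira Cloud site.
# The site URL and credential environment variables are placeholders.
import os
import requests

SITE = "https://your-company.atlassian.net"  # hypothetical cloud site
AUTH = (
    os.environ["ATLASSIAN_EMAIL"],      # Atlassian account email
    os.environ["ATLASSIAN_API_TOKEN"],  # API token generated at id.atlassian.com
)

def recently_updated(jql="updated >= -1d", max_results=20):
    """Return (key, summary) pairs for issues matching a JQL query."""
    resp = requests.get(
        f"{SITE}/rest/api/3/search",
        params={"jql": jql, "maxResults": max_results, "fields": "summary"},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return [(issue["key"], issue["fields"]["summary"])
            for issue in resp.json()["issues"]]

if __name__ == "__main__":
    for key, summary in recently_updated():
        print(key, "-", summary)
```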
Chloe Hall:
Yeah. Wow. That's great. And it's good that you were able to receive that feedback straight away from the teams that you had in Australia. I think that's really good and it sounds like it's created such a good opportunity for you at Splunk as well now that you're on cloud.
Greg Warner:
Yeah, it's certainly a business enabler that can propel you forward, and I eagerly come in now and look at what other teams are going to do with it. So when we had the first team that said they wanted to build a Forge app, I was like, sure, we should not discourage that at all. Extend the platform. That's why we spent the money and time to do it. What can you do with it now? And we did certainly make Atlassian aware, on the product side, of how we're using it and where we'd like to see improvements. If you look at the Server and Data Center comparison, I used to be that person who would look at the new features in cloud and ask, when is that new feature coming to on-prem? Now I'm the customer who has that feature today, and I'm using it, because we don't wait for it.
Greg Warner:
So, you mentioned things that weren't planned on the roadmap. There are design decisions that I make enterprise customers aware of when I talk to them. One of them is to do with release tracks. In enterprise cloud, you can choose to bunch up the changes to cloud so they get released periodically, every two weeks or every month. When I looked at that, it came back to one of our principles: don't implement server in cloud. Why would we do that? Atlassian has far more data points on whether a change works for customers at scale than we do, so why would we hold back functionality? As a result, we don't do release tracks. We let all of the new functionality get delivered to us as Atlassian sees fit. And the result of that is our own engineering staff and our own support staff who use Jira get the notifications about new products and features, and that's fantastic.
Greg Warner:
Again, why would we implement server, which is where you would bunch up all your changes and then go forward? The other thing about our cloud migration journey is: don't be blinkered into thinking you're just doing a cloud migration today and then the project ends. There are things you need to be thinking about as you go along, like what the impact is in the future. For us, we have multiple sites; enterprise customers have multiple sites. So there are design decisions that we've made so that we can, in the future, do cloud-to-cloud migrations. You will move sites. Your organization could be bought, or could be buying companies, so you do mergers and acquisitions. And as part of that, we have some runbooks now that cover using the cloud-to-cloud tooling, so we can move a Jira project from a site here to a site there, and how we'd move users here and users there.
Greg Warner:
And that actually came about through the assistance of our TAM, not focusing just on the cloud migration date but also on what it looks like six months later, 12 months later. So that you don't perform your cloud migration and then back yourself into a corner where later on you have to unwind something you had the opportunity to fix. So yeah, I do encourage migration customers to also think six months, 12 months beyond their cloud migration, about what could also happen, and then speak to your solution partner about design decisions today that could affect you in the future.
Chloe Hall:
Yeah. So you definitely need to be thinking future-focused when you're doing this cloud migration. I know you've addressed a lot of the opportunities that came out of the cloud migration. Was there anything else that was an unexpected value that came from it that you wanted to share?
Greg Warner:
The other value is making it more accessible. We have seen people use it in places we hadn't thought about. Some of the things we were doing before required a company-owned asset to get on the VPN, things like that, which actually restricted where people could do work. Whereas now, as long as you've got a computer or mobile device connected to the Internet, you can get access to it; there's mobile device support. Approvals that used to be done on a computer are now done on a mobile device, those sorts of things. But I think the integrations have probably been the one thing I'm most... We're not the catalyst, we kind of pushed it along, but seeing people get real use out of it and use the data for other purposes. We have seen people build some microservices that use the data from Jira, which we couldn't do before. Again, you're just unlocking that potential by making it more usable and accessible.
Chloe Hall:
After going through the whole migration journey, and like you said, you're coming towards the end of it, what were the things that stood out to you where you thought, okay, that didn't go so well, and maybe if I was to do this again, how would I do it better next time?
Greg Warner:
So I get back to that day one unboxing experience. You know you want to give people that best experience, and we delivered that for people in Australia and APAC as we opened it up; they got to use Jira for the first time and it worked fine. And that was mainly the result of a lot of emphasis on the Jira piece, because we said, we know this is going to be hard. It's got workflows, issue schemes, notification schemes. This is going to be hard.
Greg Warner:
So we started that one really early, and then probably about 60% of the way through our migration journey, we started on Confluence. We thought, how hard can Confluence be? It's a bunch of spaces and pages. It can't be that hard. We actually hit some migration challenges with the engineering tooling for Confluence, which meant that the Confluence UAT was delayed. The Jira UAT was fantastic. It ran for a month, we found some problems, got them fixed, got answers. We were really confident that was going to be fine.
Greg Warner:
And then we hit this Confluence piece and thought, wow, this is going to be a challenge. There was at least one time I can think of, a Saturday morning at breakfast, where our solution partner sent me a Slack message saying, I think we've got a problem here with some tooling. What are we going to do? Towards the middle of the day I was scratching my head; this could be a real blocker. We actually worked with Atlassian, came up with the engineering solution, and cleared that out. That was good to see, that in the space of 12 to 24 hours there was a solution. But what it meant was that it delayed the Confluence UAT, so it only ran for a week. And there was something we found to do with the new Confluence editor and third-party apps right at the end of that week, and we had to really negotiate with our stakeholders to make this go ahead.
Greg Warner:
Because again, if we'd waited, would we get a better result? No, we really should go. We knew there was this problem; it's not system-wide, but it affects a small group of people. So we did it. But about a hundred people had this really bad Confluence experience because of that thing. And so for me, I couldn't deliver on the thing I'd promised, which was a day one experience that was going to be better than what they had before.
Greg Warner:
Now, we did work with Atlassian and app vendors to get some mitigation, so it wasn't as bad by day five. It wasn't fixed on day one and it still wasn't perfect. But I would certainly encourage people to make sure you treat Jira and Confluence with as much importance as each other. They do go together. We did our cloud migration on a weekend, and I remember coming back after dropping my kids at school on the Tuesday, sitting in the car park thinking, wow, we actually pulled that off.
Greg Warner:
If we'd proposed to the company to move the email system and the finance system in a weekend, the answer would be no, because it's too big an ask. But what we'd said is we're going to move our whole Atlassian stack in a weekend, which really is two big systems, Jira and Confluence. So if I had the time again, we would've started Confluence much, much earlier and then we wouldn't have needed to rush it at the end. That really did result in a bad day one experience for those people. We have worked with Atlassian since then and we're getting that resolved; we know other Atlassian customers have the same problem. I would start early and not underestimate the complexity. There will be some things outside of your control.
Greg Warner:
I talk about this Confluence problem with the migration tooling, which really only shows up at scale. Not every customer will see it. We saw it. I conducted customer interviews when we were making our solution partner decision, and a customer actually told me this: they should have started Confluence earlier, because they had this problem and wasted some time. I even have it in my notes. But it wasn't until later that I realized: same problem, you even had the answer, they told you, and you still waited. So I'm spending a few minutes on this podcast talking about it because it happened to me, and it's probably going to happen to the next person. So if I could do one thing, it's just to encourage you to start it earlier. You're going to end up with a much, much better migration and hopefully you can deliver on that day one experience that I couldn't.
Chloe Hall:
Yeah, no, I'm so glad that you've shared that with the Easy Agile audience as well, because now they know and hopefully the same mistake won't keep getting repeated. Well, Greg, my final question for you today, and I don't know if you want that to be your answer, but I think it's really good for the audience: if there's one key takeaway that they can go away with today from this podcast, what would be that one piece of advice for everyone listening who wants to start their migration journey?
Greg Warner:
The first thing to do is to prioritize it. If you're an Atlassian customer using on-prem Jira or Confluence and you don't have a timeline and you don't have a priority for your cloud migration, start there. Open up the task, which is to start investigating Atlassian Cloud, and choose a date. Because there will come a situation down the track where you might be asked by your CIO, and it's better to have an answer prepared already. I would encourage people to start looking at it because it is the future. If you look across the industry, people are moving to SaaS. It's really a question: do you want to stay on-prem and be the customer wondering when that cloud feature is coming to you, or do you want to be the customer in cloud who has it today? We have seen a monumental shift since we moved to cloud in functionality, availability, all the good things that cloud delivers. And it's made me one of its biggest promoters. The person who used to write exam questions for Server is now saying go to cloud.
Greg Warner:
Absolutely. So when I've spoken to other enterprise customers, particularly at Team, I've asked, when do you plan to do your cloud migration? And the answer is, we're going to start it in three years. Three years? You need to go back to the office next week and start; make it 12 months, because there is absolutely a competitive advantage to doing it. And it's not just me now being one of the biggest cloud proponents. We see it every day, and for me, this is one of the most influential projects I've been involved in with Atlassian since 2006. This one is going to have a long-lasting effect at Splunk for a long time, and I'm happy to speak to yourselves at Easy Agile and others about it and hear about their cloud journeys, because I want to go to Team next year and have those conversations in the hallway about, "I got that one thing. Either I started my Confluence migration earlier, or I actually put in a timeline for when we should start our cloud migration."
Chloe Hall:
Yeah, beautiful. That is some great advice to take away, Greg. Honestly, thank you so much for coming on the podcast today. You have provided some brilliant insights and takeaways, and because there is no roadmap, I feel like your guidance is so valuable for those who are looking to start their cloud migration. We really appreciate you sharing your knowledge.
Greg Warner:
All right. Thanks for having me on. Thank you for listening.
Chloe Hall:
No worries.