Easy Agile Podcast Ep.21 LIVE from Agile2022!

"That's a wrap on Agile2022! It was great to be able to catch up with so many of you in the agile community in-person!" - Tenille Hoppo

This bonus episode was recorded LIVE at Agile2022 in Nashville!

The Easy Agile team got to speak with so many amazing people in the agile community, reflecting on conference highlights, key learnings, agile ceremonies + more!

Thanks to everyone who stopped by the booth to say G’Day and enjoyed a Tim Tam or two ;)

Huge thank you to all of our podcast guests for spending some time with us to create this episode!

  • Cody Wooten
  • Gil Broza
  • Maciek Saganowski
  • Lindy Quick
  • Carey Young
  • Leslie Morse
  • Dan Neumann
  • Joseph Falú
  • Kai Zander
  • Avi Schneier
  • Doug Page
  • Evan Leybourn
  • Jon Kern
  • Joshua Seckel
  • Rob Duval
  • Andrew Thompson

Transcript

Caitlin:

Hi, everyone. Well, that's a wrap on Agile 2022 in Nashville. The Easy Agile team is back home in Australia, and we spent most of our journey home talking about all of the amazing conversations that we got to have with everyone in the Agile community. It was great catching up with customers, partners, seeing old friends, and making lots of new ones. We managed to record some snippets of those amazing conversations, and we're excited to share them with you, our Easy Agile Podcast audience. So enjoy.

Maciek:

[inaudible 00:00:26].

Tenille:

Maciek, thanks so much for taking time with us today.

Maciek:

No worries.

Tenille:

[inaudible 00:00:30], can you let us know what was the best thing you've learned this week?

Maciek:

Oh, that was definitely at Melissa Perri's talk, when she talked about... Like, to me, she was talking about slowing down. What we do in Agile, it's not just delivery, delivery, delivery, but very much learning and changing things that we've already built, and finding out what value we can give to customers. It's not just about shipping features; it's all about value. That's what I learned.

Tenille:

That's great. Thank you. So what do you think would be the secret ingredient to a great Agile team?

Maciek:

Humility. Somehow, the team culture should embrace humility and mistakes. And people should not be afraid of making mistakes, because without making mistakes, you don't learn. That's what I think.

Tenille:

So what would be, I guess, if there's one Agile ceremony that every team should do, what do you think that might be?

Maciek:

For sure, retro, and that comes back to the mistakes and learning part.

Tenille:

Yeah. Fantastic.


Maciek:

No worries.

Tenille:

That's great. Thanks so much for taking time.

Maciek:

Okay. Thank you.

Tenille:

Cheers.

Gil:

[inaudible 00:01:42].

Caitlin:

Gil, thank you so much for chatting with us. So we're all at Agile 2022 in Nashville at the moment. There's lots of interesting conversations happening.

Gil:

Yes.

Caitlin:

If you could give one piece of advice to a newly forming Agile team, what would it be?

Gil:

It would be to finish small, valuable work together. It has a terrible acronym, FSVWT, so it cannot be remembered that way. Finish small, valuable work together. There's a lot of talk about process, working agreements, tools. This is all important, but sometimes it's too much for a team that's starting out. And so if we just remember to finish small, valuable work together, that's a great start.

Caitlin:

Yeah, I love that. And you were a speaker at the conference?

Gil:

Yes.

Caitlin:

Can you give our audience a little bit of an insight into what your conversation was about?

Gil:


What happens in many situations is that engineering or development doesn't really work collaboratively with product/business. And instead, there is a handoff relationship. But what happens is that in the absence of a collaborative relationship, it's really hard to sustain agility. People make a lot of one-sided assumptions. And over time, how decisions get made causes the cost of change to grow, and the safety to make changes to decrease. And when that happens, everything becomes harder to do and slower to do, so the agility takes a hit. So the essence of the talk was how can we collaboratively, so both product and engineering, work in ways that make it possible for us to control the cost of change and to increase safety? So it's not just collaboration of any kind. There are very specific principles to follow. It's called technical agility, and when we do that, we can have agility long-term.

Caitlin:

Great. I love it. Well, thank you so much and I hope you enjoy the rest of your time at the conference.

Gil:

Thank you.

Caitlin:

Great. Thank you.

Tenille:

Hi, Tenille here from Easy Agile, with Josh from Deloitte, and we're going to have a good chat about team retrospectives. So Josh, thank you for taking the time to have a good chat. So you are a bit of an expert on team retrospectives. What are your top tips?

Josh:

So my top tips for retrospectives: first, actually make a change. Don't do a "lessons observed" – I've seen lots of them. Actually make a change at the end, even if it's just a small one. And part of that: make your change an experiment. Something you can measure, something that lets you actually say, yes, we did this thing and it had an impact. It may not be the impact you wanted, but it did have some kind of impact. The second tip is to vary your retrospectives. Having a retrospective that's the same sprint after sprint after sprint will work for about two sprints, and then your productivity and creativity out of the retrospective will significantly reduce.

Tenille:

That's an excellent point. So how do you create [inaudible 00:05:03]?

Josh:

Lots and lots of thinking about them, doing research, and using websites like TastyCupcakes, but also developing my own retrospectives. I've done a retrospective based on the Pixar pitch – there are six sentences that define every Pixar movie. Take the base sentences, apply them to your sprint or to your PI, and do a retro, and allow the team the creativity to create an entire movie poster if they want to. Directed by [inaudible 00:05:34], because it happens. People get involved and engaged when you give them alternatives, different ways of doing retrospectives.
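
For readers unfamiliar with the format: the Pixar pitch is a six-sentence story template popularised by Daniel Pink. One way a team might map it onto a sprint retro – an illustration on our part, not necessarily Josh's exact version – is:

  • Once upon a time, our team set out to...
  • Every day, we...
  • One sprint, ... happened.
  • Because of that, we...
  • Because of that, we also...
  • Until finally, we...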


Tenille:

That's right. So for those teams that aren't doing retrospectives at the moment, what's the one key thing they need to think about that you... What's the one key thing you could tell them to encourage them to start?

Josh:

If you're not doing retrospectives, you're not doing [inaudible 00:05:54]. So I shouldn't say that. But if you're not doing retrospectives... If you truly believe that you have absolutely nothing to improve and you are 100% the best of the best – meaning you're probably working at Google or Amazon or Netflix, although they do retrospectives – so if you truly believe that you are the equivalent of those companies, then maybe you don't need to do them. But I'm pretty sure that every team has something they can improve on. And acknowledging that, and then saying, how are we going to do that? A retrospective's a very fast, easy way to start actually making those improvements and making them real.

Tenille:

Fantastic. Great. Thanks so much for taking the time to chat to us briefly about retrospectives.

Josh:

Thank you.

Caitlin:

We're here with Leslie, who is the president of Women in Agile. Leslie, there was an amazing event on Sunday.

Leslie:

Yes.

Caitlin:

Just talk to us a little bit about it. What went into the planning? How was it to all be back together again?

Leslie:

It was amazing to have the Women in Agile community back together, right? Our first time since 2019, when everyone was together in Washington DC for that event. It took the better part of six or seven months of planning, and we had almost 200 people in the room. Fortunately, we know the [inaudible 00:07:10] of these Women in Agile sessions that we do as part of the Agile Alliance conferences every year, right? We've got a general opening. We've got a great keynote, who is always someone adjacent to the Agile space. We don't want to just like... We want to infuse our wisdom and knowledge with people that aren't already one of us, because we get all of the Agile stuff at the big conference when we're there.

Leslie:

So as part of that, we always have Launching New Voices, which is probably one of my favorite Women in Agile programs. Three mentees that have been paired with seasoned speakers, taking the stage for the first time to share their talent and their perspective. So that's really great. And then some sort of interactive networking event. That pattern has served us really well – we've been doing this since 2016, which is a little scary to think it's been happening that long. And it's become a flagship opportunity for the community to come together in a more global fashion, because the Agile Alliance does draw so many people for their annual event.

Caitlin:

Yeah, for sure. Well, it was a great event. I know that we all had a lot of fun being there. What was your one key takeaway from the event?

Leslie:

I'm going to go to the [inaudible 00:08:14] interactive networking that she did with us, really challenging us to lean into our courage around boundaries and ending conversations. We don't have to give a reason. If a conversation's not serving us or is not the place that we need to be for whatever reason, you absolutely have that agency within yourself to end that conversation and just move on. I love the tips and tricks she gave us for doing that well.

Caitlin:

Yes, yes, I love that too. That's great. Well, thank you so much. Appreciate it.

Leslie:

Yeah. Thanks for having me.

Tenille:

Hi, Evan. How are you?

Evan:

Very good.

Tenille:

That's good. Can you please tell me what's the best thing you learned today?

Evan:

The best quote I've got, "Politics is the currency of human systems." Right?

Tenille:

Wow.

Evan:

So if you want to change a human system, you got to play the politics.

Tenille:


Fantastic.

Evan:

Which feels crappy, but-

Tenille:

It's the way it is.

Evan:

... that's the way it is.

Tenille:

[inaudible 00:09:07]. Okay, next question. What is the Agile ceremony that you and your team can't live without?

Evan:

Retrospective. With the retrospective, you can like create everything else.

Tenille:

Fantastic. That's really good. And what do you think is probably the key ingredient to a good retrospective?

Evan:

Oh, trust. Trust requires respect. It requires credibility. It requires empathy. So trust is like that underpinning human capability.

Tenille:

Yeah. Fantastic. Thanks very much.

Evan:

Thank you.

Tenille:

Yes.

Caitlin:

Right. We're here with Cody from Adfire. So Cody, how are you enjoying the conference so far?

Cody:

I'm really loving the conference. It's been awesome. To be honest, when we first got here, it seemed maybe a little bit smaller than we thought, but the people here have been incredible, highly engaged, which is always great. And plus, a lot of people are using Jira and Atlassian. So, lots of big points.


Caitlin:

Win-win for both, huh?

Cody:

Yeah. Always, always, always.

Caitlin:

Very good.

Cody:

Yeah.

Caitlin:

Lots of interesting talks happening. Have you attended any that have really sparked an interest in you? What's [inaudible 00:10:15]-

Cody:

Yeah. I can't remember any of the talk names right off the top, but they've all been incredibly insightful. Tons of information. It seems like there's been a topic for everything, which is always a great sign. And my notes – I have pages and pages and pages of notes, which is always a good sign.

Caitlin:

Yeah, that's [inaudible 00:10:34].

Cody:

So I'll have to go back and [inaudible 00:10:35] again.

Caitlin:

Yes.

Cody:

But it's been incredible and the talks have been very plentiful, so yeah.

Caitlin:

Good. Good. And what is the one key takeaway that you are looking forward to bringing back and sharing with the team?

Cody:

Well, I think one of the key takeaways for us was that... I talked about the engagement that everybody has, but one thing that's been incredible is to hear everybody's stories, to hear everybody's problems, their processes, all of that stuff. So all of that information's going to be a great aggregate for us to take back and create a better experience with our product and all that good stuff. So yeah.


Caitlin:

For sure. I love it. Now, I have one last question for you. It's just a fun one. It's a true or false. We're doing Aussie trivia. Are you ready for this one?

Cody:

Okay.

Caitlin:

Okay.

Cody:

Hopefully.

Caitlin:

So my true or false is, are Budgy Smugglers a type of bird?

Cody:

Are buggy smugglers-

Caitlin:

Budgy Smugglers.

Cody:

Budgy Smugglers.

Caitlin:

A type of bird.

Cody:

True.

Caitlin:

False. No.

Cody:

What are they?

Caitlin:

Speedos.

Cody:


Yeah. Well, I've got some of those up there in my luggage. So I'll bring the budgys out now.

Caitlin:

With your Daisy Dukes.

Cody:

Exactly. Exactly.

Caitlin:

Yeah. And cowboy boots, right?

Cody:

Yeah.

Caitlin:

Well, thank you so much.

Cody:

Thank you.

Caitlin:

Really appreciate it.

Cody:

Yeah. Thank you.

Tenille:

Doug, how are you?

Doug:

I'm great. Thank you.

Tenille:

Awesome. Well, tell me, what's the best thing you've learned today?

Doug:

I think learning how our customers are using our products that we didn't even know about is really interesting.

Tenille:

That's amazing. Have you had a chance to get out to many of the sessions at all?


Doug:

I actually have not. I've been tied to this booth, or I've been in meetings that were already planned before I even came down here.

Tenille:

[inaudible 00:12:01].

Doug:

Yeah.

Tenille:

That's good. So when you're back at work, what do you think is probably the best Agile ceremony that you and your team can't live without?

Doug:

I think what I'm bringing back to the office is not so much ceremony. It's really from a product perspective. I work in product management. So for us, it's how we can explain how our product brings value to our customers. So many lessons learned from here that we're really anxious to bring back and kind of build into our value messaging.

Tenille:

Fantastic.

Doug:

Yeah.

Tenille:

Thanks. That's great. Thanks very much.

Caitlin:

We're here with Jon Kern, one of the co-authors of the Agile Manifesto. Firstly, how are you doing at the conference so far?

John:

Well, working hard.

Caitlin:

Yeah, good stuff.

John:

Enjoying Nashville.

Caitlin:


Yeah. It's cool, isn't it? It's so different from the [inaudible 00:12:46] what's happening.

John:

Yeah. It's good. Yes. It's nice to see a lot of people I haven't seen in a while.

Caitlin:

Yeah. Yeah.

John:

And seeing them three-dimensional.

Caitlin:

Yes. Yeah, I know. It's interesting-

John:

It's there-

Caitlin:

... [inaudible 00:12:54] and stuff happening.

John:

Yeah, IRL.

Caitlin:

Lots of interesting [inaudible 00:13:01] that's happening. Any key takeaways for you? What are you going to take away to share with the team?

John:

Oh, well, that's a good question. I've mostly been talking with a lot of friends that I haven't seen in a while. [inaudible 00:13:14].

Caitlin:

Yes.

John:

And since I've only been here a couple of days, I haven't actually gone to much, if anything, to be frank.

Caitlin:

I know. Well, we're pretty busy on the booths, aren't we?

John:


Yeah. Yeah. But certainly, the kinds of conversations that are going on are... I was a little bit worried about Agile. Like, I don't want to say... Yeah, I don't want to say it. But I don't want to say, Agile's becoming a jump turf.

Caitlin:

Yes.

John:

But I think there's a lot of people here that are actually really still embracing the ideals and really want to learn, do and practice [inaudible 00:14:00].

Caitlin:

Yeah.

John:

So I'm frankly surprised and impressed and happy. There's a lot. If you just embrace more of the manifesto, and maybe not all of the prescriptive stuff sometimes, and you get back to basics. [inaudible 00:14:22]-

Caitlin:

Yeah. So let's talk about that, the Agile Manifesto that you mentioned. Embracing that. What does embracing mean? Can you elaborate on that a bit more? So we know we've got the principles there. Is there one that really stands out more than another to you?

John:

Well, my world, what I was doing at the time – I'd done a lot of defense department waterfall, and built my own sort of lightweight process, as we called it before Agile. So to me, the real key... This doesn't have the full-

Caitlin:

Full manifesto, yeah.

John:

But if you go to the website and read at the top, it talks about how we are uncovering better ways by doing, and I'm still learning, still uncovering. And I think it's important for people to realize we really did leave our ego at the door. Being humble in our business is super important. So that might not be written anywhere in the principles, but it's the whole preamble at the top, and the fact that we talk about how we value those things on the left versus the whole... There's a pendulum where you could see both of those things collide. In my opinion, one of the most important traits that we should exercise is being humble, treating things as a hypothesis. Like, don't just build features [inaudible 00:15:58] bottom up – how do you sneak up on the answers? That's what I want people to take away.

Caitlin:


That's great. That's great advice. Well, thank you so much, John. Appreciate you taking the time to chat with us.

John:

You're welcome, Caitlin.

Caitlin:

Yeah. Enjoy what's [inaudible 00:16:11].

John:

Thank you.

Caitlin:

Thank you.

John:

[inaudible 00:16:13] tomorrow.

Caitlin:

All right.

Tenille:

Avi and Kai, thanks for joining us today. Can I ask you both, what do you think is the best thing you've learned today?

Avi:

Best thing I've learned?

Tenille:

Yeah.

Avi:

That's a really interesting one, because I'm here at the booth a lot, so I don't get to attend a lot of things. But there were two things I learned that were really important. One, which is that the Easy Agile logo is an upside down A, because it means you're from Australia. So it's down under. And then the second most important thing I learned about today was we were in a session talking about sociocracy, and about how to make experiments better with experiments, which sounded a little weird at first, but it was really all about going through like a mini A3 process. For those of you listening, that's something that was done at Toyota. It's a structured problem solving method, but instead of going [inaudible 00:17:02] around it and going through the experiment, you go around two or three times and then decide that's the right experiment to go forward with.
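
For context: an A3 – named for the paper size it fits on – typically walks through a problem on a single page: background, current condition, goal, root-cause analysis, proposed countermeasures, an implementation plan, and follow-up. The "mini A3" Avi describes is, as best we can tell, a lightweight pass through those same steps, repeated a couple of times before committing to an experiment.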

Tenille:


Thank you. How about your time?

Kai:

I've been at the booth most of the time, but from that you meet a lot of people from all over the world. And we really have one thing in common, which is wanting to help people. It's really been nice to be in a room of people – whether they're at the beginning of their journey or really seasoned – whose motivation is just to really empower others. So it's been really nice to be around that kind of energy.

Avi:

We've really learned that our friends from Australia are just as friendly up here as you are on the other side. I feel when you come on this side, you get mean, but it turns out you're just as nice up here too.

Tenille:

Well, it depends how long you've been on the flight.

Avi:

Oh, exactly.

Tenille:

[inaudible 00:17:44], we're okay.

Kai:

Yeah.

Avi:

Exactly. Good.

Tenille:

All right. One more question here.

Avi:

Sure.

Tenille:

What do you think is the secret ingredient for a successful team?

Avi:

What do I think the secret? Oh, that's a really good question. That's a-

Kai:

He's the best one to answer that question.


Avi:

That's a little longer than a two-second podcast, but I'll tell you this. It may not be psychological safety-

Tenille:

Okay.

Avi:

... just because Google said that and Project Aristotle showed that. I think to have a really, really successful team, you need a really skilled scrum master. Because the team having psychological safety is one ingredient; it's not the only ingredient. A strong scrum master is someone who's really skilled at creating that psychological safety, but who also helps with all the other aspects of getting ready to collaborate and coordinate in the most positive way possible. Plus, searching for... Her name is Cassandra. On Slack, she calls herself Kaizen. You get it? It's a joke. But that's the whole thing – a really skilled scrum master helps the teams find the kaizens that they need to become high performing. So psychological safety is an enabler of it, but that doesn't mean it creates the performance. It's an ingredient to make it happen.

Tenille:

Fantastic.

Kai:

There's no better answer than that one. Let's do exclamation.

Tenille:

Excellent. Thanks very much for taking the time.

Avi:

Thank you so much.

Kai:

Of course.

Hayley:

We're here with Carey from Path to Agility. Carey, what have you been really loving about this conference?

Carey:

I think what I've loved most about this conference so far is the interaction with all the people that are here. It's really nice to get together, meet different folks, network around, and have the opportunity to see what else is out there in the marketplace. And then, of course, talk about the product that we have with Path to Agility. It's a wonderful experience to get out here and to see everybody. And it's so nice to be back out in person instead of being in front of a screen all the time.


Tenille:

Yeah, absolutely. Have you had a chance to get to many of the sessions?

Joseph:

I've tried to as much as I can, but it's also important to take that time to decompress and let everything sink in. So here we are having fun.

Tenille:

Yeah, absolutely. So thinking back to work, what do you think is the one Agile ceremony that helps you and your team the most?

Joseph:

I think it's finding different ways to collaborate – effective ways to collaborate. And in terms of work management, how are we solving some of the problems that we have? There are so many tools here to make that easier, which has made it pretty special. Speaking to people and finding out how they go about solving problems.

Tenille:

And what do you think makes a really great Agile team?

Joseph:

Well, you could say something very cliche, like being very adaptive to change and so on and so forth. But I think it really comes down to the interaction between people. Understanding one another, encouraging one another, and just the way you work together.

Tenille:

Fantastic. Great. Well, thanks very much for taking the time to chat.

Joseph:

Thank you. It was nice chatting with you guys all week long.

Tenille:

Cheers.

Tenille:

Dan, thanks for taking the time to chat.

Dan:

You're welcome.

Tenille:

[inaudible 00:22:54] questions. What do you think is the best thing you learned today?


Dan:

Oh, the best thing I learned today – the morning product keynote was excellent. Got a couple of tips on how to do product management, different strategies, how to have folks think about their focus on the tactical and the strategic. So just some nice little nuggets, how to [inaudible 00:23:12].

Tenille:

[inaudible 00:23:13], thanks for joining us today. Can I start by asking, what do you think is the best thing you've learned this week?

Speaker 17:

The best thing I've learned this week is there's no right way to do Agile. There's a lot of different ways you can do it. And so it's really about figuring out what the right process is for the organization you're in, and then leveraging those success patterns.

Tenille:

Well, I guess on that, is there one kind of Agile ceremony that you think your team can't do without?

Speaker 17:

The daily standup being daily. I think a lot of our teams talk all day long; they don't necessarily need to sync up that frequently. I've had a few teams already go down to like three days a week, and it seems to work for them. The other key takeaway that I've seen folks do is time boxes – so no meetings from 10:00 to 2:00, or whatever it may be – and really driving that from a success perspective.

Tenille:

I guess on that note, what do you think makes a really successful Agile team?

Speaker 17:

The ability to talk to each other, that ability to communicate. And so with all of our teams being either hybrid or remote, making sure that we have the tools that let them feel like they can just pick up and talk to somebody anytime they want, I think, is key. And a lot of folks still don't have cameras, right, which is baffling to me. But being face to face, that ability to see facial expressions, has been so nice because we're able to get that. So the other key is just that ability to talk to each other as though I could reach out and touch you.

Tenille:

Okay. Fantastic. Well, thanks so much.

Speaker 17:

You're welcome. Thank you.

Tenille:

Okay. Rob and Andrew, thanks so much for taking a few minutes with us. Can I start by asking you, what do you think is the best thing you learned this week?


Rob:

For me, it's definitely fast scaling Agile, which we learned about this morning. We're going to try it.

Andrew:

For me, I really enjoyed the mob programming session and learning different ways to connect engineers and collaborate.

Tenille:

Great. Next up, I guess, what do you think makes a great Agile team?

Rob:

First and foremost, that they're in control of how they work and what they work on, more than anything else.

Andrew:

Yeah. For me, it's obviously psychological safety and just having a good team dynamic where they can disagree, but still be respectful and come up with great ideas.

Tenille:

And is there one Agile ceremony that you think a great team can't live without?

Rob:

Probably retrospective. I think the teams need to always be improving, and that's a good way to do it.

Andrew:

Agreed. Yeah. Agreed.

Tenille:

Okay. That's great. Thanks so much for taking the time.

Andrew:

Thanks so much. Appreciate it.

Related Episodes

  • Podcast

    Easy Agile Podcast Ep.34 Henrik Kniberg on Team Productivity, Code Quality, and the Future of Software Engineering

    TL;DR

    Henrik Kniberg, the agile coach behind Spotify's model, discusses how AI is fundamentally transforming software development. Key takeaways: AI tools like Cursor and Claude are enabling 10x productivity gains; teams should give developers access to paid AI tools and encourage experimentation; coding will largely disappear as a manual task within 3–4 years; teams will shrink to 2 people plus AI; sprints will become obsolete in favour of continuous delivery; product owners can now write code via AI, creating pull requests instead of user stories; the key is treating AI like a brilliant intern – when it fails, the problem is usually your prompt or code structure, not the AI. Bottom line: Learn to use AI now, or risk being left behind in a rapidly changing landscape.

    Introduction

    Artificial intelligence is fundamentally reshaping how software teams work, collaborate, and deliver value. But with this transformation comes questions: How do we maintain team morale when people fear being replaced? What happens to code quality when AI writes most of the code? Do traditional agile practices like sprints still make sense?

    In this episode, I sit down with Henrik Kniberg to tackle these questions head-on. Henrik is uniquely positioned to guide us through this transition – he's the agile coach and entrepreneur who pioneered the famous Spotify model and helped transform how Lego approached agile development. Now, as co-founder of Abundly AI, he's at the forefront of helping teams integrate AI into their product development workflows.

    This conversation goes deep into the practical realities of AI-powered development: from maintaining code review processes when productivity increases 10x, to ethical considerations around AI usage, to what cross-functional teams will look like in just a few years. Henrik doesn't just theorise – he shares real examples from his own team, where their CEO (a non-coder) regularly submits pull requests, and where features that once took a sprint can now be built during a 7-minute subway ride.

    Whether you're a developer wondering if AI will replace you, a product owner looking to leverage these tools, or a leader trying to navigate this transformation, this episode offers concrete, actionable insights for thriving in the AI era.

    About Our Guest

    Henrik Kniberg is an agile coach, author, and entrepreneur whose work has shaped how thousands of organisations approach software development. He's best known for creating the Spotify model – the squad-based organisational structure that revolutionised how large tech companies scale agile practices. His work at Spotify and later at Lego helped demonstrate how agile methodologies could work at enterprise scale whilst maintaining team autonomy and innovation.

    Henrik's educational videos have become legendary in the agile community. His "Agile Product Ownership in a Nutshell" video, created over a decade ago, remains one of the most-watched and shared resources for understanding product ownership, with millions of views. His ability to distil complex concepts into simple, visual explanations has made him one of the most accessible voices in agile education.

    More recently, Henrik has turned his attention to the intersection of AI and product development. As co-founder of Abundly AI, he's moved from teaching about agile transformation to leading AI transformation – helping companies and teams understand how to effectively integrate generative AI tools into their development workflows. His approach combines his deep understanding of team dynamics and agile principles with hands-on experience using cutting-edge AI tools like Claude, Cursor, and GitHub Copilot.

    Henrik codes daily using AI and has been doing so for over two and a half years, giving him practical, lived experience with these tools that goes beyond theoretical understanding. He creates educational content about AI, trains teams on effective AI usage, and consults with organisations navigating their own AI transformations. His perspective is particularly valuable because he views AI through the lens of organisational change management – recognising that successful AI adoption isn't just about the technology, it's about people, culture, and process.

    Based in Stockholm, Sweden, Henrik continues to push the boundaries of what's possible when human creativity and AI capabilities combine, whilst maintaining a pragmatic, human-centred approach to technological change.

    Transcript

    Note: This transcript has been lightly edited for clarity and readability.

    Maintaining Team Morale and Motivation in the AI Era

    Tenille Hoppo: Hi there, team, and welcome to this new episode of the Easy Agile Podcast. My name is Tenille Hoppo, and I'm feeling really quite lucky to have an opportunity to chat today with our guest, Henrik Kniberg.

    Henrik is an agile coach, author, and entrepreneur known for pioneering agile practices at companies like Spotify and Lego, and more recently for his thought leadership in applying AI to product development. Henrik co-founded Abundly AI, and when he isn't making excellent videos to help us all understand AI, he is focused on the practical application of generative AI in product development and training teams to use these technologies effectively.

    Drawing on his extensive experience in agile methodologies and team coaching, Henrik seems the perfect person to learn from when thinking about the intersection of AI, product development, and effective team dynamics. So a very warm welcome to you, Henrik.

    Henrik Kniberg: Thank you very much. It's good to be here.

    Tenille: I think most people would agree that motivated people do better work. So I'd like to start today by touching on the very human element of this discussion and helping people maintain momentum and motivation when they may be feeling some concern or uncertainty about the upheaval that AI might represent for them in their role.

    What would you suggest that leaders do to encourage the use of AI in ways that increase team morale and creativity rather than risking people feeling quite concerned or even potentially replaced?

    Henrik: There are kind of two sides to the coin. There's one side that says, "Oh, AI is gonna take my job, and I'm gonna get fired." And the other side says, "Oh, AI is going to give me superpowers and give us all superpowers, and thereby give us better job security than we had before."

    I think it's important to press on the second point from a leader's perspective. Pitch it as this is a tool, and we are entering a world where this tool is a crucial tool to understand how to use – in a similar way that everyone uses the Internet. We consider it obvious that you need to know how to use the Internet. If you don't know how to use the Internet, it's going to be hard.

    "I encourage people to experiment, give them access to the tools to do so, and encourage sharing. And don't start firing people because they get productive."

    I also find that people tend to get a little bit less scared once they learn to use it. It becomes less scary. It's like if you're worried there's a monster under your bed, maybe look under your bed and turn on the lights. Maybe there wasn't a monster there, or maybe it was there but it was kind of cute and just wanted a hug.

    Creating a Culture of Safe Experimentation

    Tenille: I've read that you encourage experimentation with AI through learning – I agree it's the best way to learn. What would you encourage leaders and team leaders to do to create a strong culture where teams feel safe to experiment?

    Henrik: There are some things. One is pretty basic: just give people access to good AI tools. And that's quite hard in some large organisations because there are all kinds of resistance – compliance issues, data security issues. Are we allowed to use ChatGPT or Claude? Where is our data going? There are all these scary things that make companies either hesitate or outright try to stop people.

    Start at that hygiene level. Address those impediments and solve them. When the Internet came, it was really scary to connect your computer to the Internet. But now we all do it, and you kind of have to, or you don't get any work done. We're at this similar moment now.

    "Ironically, when companies are too strict about restricting people, then what people tend to do is just use shadow AI – they use it on their own in private or in secret, and then you have no control at all."

    Start there. Once people have access to really good AI tools, then it's just a matter of encouraging and creating forums. Encourage people to experiment, create knowledge-sharing forums, share your own experiments. Try to role-model this yourself. Say, "I tried using AI for these different things, and here's what I learned." Also provide paths for support, like training courses.

    The Right Mindset for Working with AI

    Tenille: What would you encourage in team members as far as their mindset or skills go? Certainly a nature of curiosity and a willingness to learn and experiment. Is there anything beyond that that you think would be really key?

    Henrik: It is a bit of a weird technology that's never really existed before. We're used to humans and code. Humans are intelligent and kind of unpredictable. We hallucinate sometimes, but we can do amazing things. Code is dumb – it executes exactly what you told it to do, and it does so every time exactly the same way. But it can't reason, it can't think.

    Now we have AI and AI agents which are somewhere in the middle. They're not quite as predictable as code, but they're a lot more predictable than humans typically. They're a lot smarter than code, but maybe not quite as smart as humans – except for some tasks when they're a million times smarter than humans. So it's weird.

    You need a kind of humble attitude where you come at it with a mindset of curiosity. Part of it is also to realise that a lot of the limitation is in you as a user. If you try to use AI for coding and it wrote something that didn't work, it's probably not the model itself. It's probably your skills or lack of skills because you have to learn how to use these tools. You need to have this attitude of "Oh, it failed. What can I do differently next time?" until you really learn how to use it.

    "There can be some aspect of pride with developers. Like, 'I've been coding for 30 years. Of course this machine can't code better than me.' But if you think of it like 'I want this thing to be good, I want to bring out the best in this tool' – not because it's going to replace me, but because it's going to save me a tonne of time by doing all the boring parts of the coding so I can do the more interesting parts – that kind of mindset really helps."

    Maintaining Code Quality and Shared Understanding

    Tenille: Our team at Easy Agile is taking our steps and trying to figure out how AI is gonna work best for us. I put the question out to some of our teams, and there were various questions around people taking their first steps in using AI as a co-pilot and producing code. There are question marks around consistency of code, maintaining code quality and clean architecture, and even things like maintaining that shared understanding of the code base. What advice do you have for people in that situation?

    Henrik: My first piece of advice when it comes to coding – and this is something I do every day with AI, I've been doing for about two and a half years now – is that the models now, especially Claude, have gotten to the level where it's basically never the AI's fault anymore. If it does anything wrong, it's on you.

    You need to think about: okay, am I using the wrong tool maybe? Or am I not using the tool correctly?

    For example, the current market leader in terms of productivity tools with AI is Cursor. There are other tools that are getting close like GitHub Copilot, but Cursor is way ahead of anything else I've seen. With Cursor, it basically digs through your code base and looks for what it needs.

    But if it fails to find what it needs, you need to think about why. It probably failed for the same reason a human might have failed. Maybe your code structure was very unstructured. Maybe you need to explain to the AI what the high-level structure of your code is.

    "Think of it kind of like a really smart intern who just joined your team. They're brilliant at coding, but now they got confused about something, and it's probably your code – something in it that made it confused. And now you need to clarify that."

    There are ways to do that. In Cursor, for example, you can create something called cursor rules, which are like standing documents that describe certain aspects of your system. In my team, we're always tweaking those rules. Whenever we find that the AI model did something wrong, we're always analysing why. Usually it's our prompt – I just phrased it badly – or I just need to add a cursor rule, or I need to break the problem down a little bit.
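
    To make that concrete, here is an illustrative sketch of what such a rule might contain – a hypothetical example, not a rule from Henrik's team, and the file paths in it are made up. Cursor reads standing instructions like these (for example, from a .cursorrules file in the project root) alongside your prompt:

        # Hypothetical .cursorrules sketch (illustrative only)
        # Standing context for the AI – structure it can't easily infer from the code itself.
        - API handlers live in src/api/; shared validation helpers live in src/lib/validate.
        - Reuse the error types in src/errors.ts; never throw raw strings.
        - Any database schema change must ship with a migration in db/migrations/.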

    It's exactly the same thing as if you go to a team and give them this massive user story that includes all these assumptions – they'll probably get some things wrong. But if you take that big problem and sit down together and analyse it and split it into smaller steps where each step is verifiable and testable, now your team can do really good work. It's exactly the same thing with AI.

    Addressing the Code Review Bottleneck

    Tenille: One of our senior developers found that he was outputting code at a much greater volume and faster speed, but the handbrake he found was actually their code review processes. They were keeping the same processes they had previously, and that was a bit of a handbrake for them. What kind of advice would you have there?

    Henrik: This reminds me of the general issue with any kind of productivity improvement. If you have a value stream, a process where you do different parts – you do some development, some testing, you have some design – whenever you take one part of the process and make it super optimised, the bottleneck moves to somewhere else.

    If testing is no longer the bottleneck, maybe coding is. And when coding is instant, then maybe customer feedback – or lack of customer feedback – is the bottleneck. The bottleneck just keeps moving. In that particular case, the bottleneck became code review. So I would just start optimising that. That's not an AI problem. It's a process problem.

    Look at it: what exactly are we trying to do when we review? Maybe we could think about changing the way we review things. For example, does all code need to be reviewed? Would it be enough that the human who wrote it and the AI, together with the human, agree that this is fine? Or maybe depending on the criticality of that change, in some cases you might just let it pass or use AI to help in the reviewing process also.

    "I think there's value in code review in terms of knowledge sharing in a large organisation. But maybe the review doesn't necessarily need to be a blocking process either. It could be something you go back and look at – don't let it stop you from shipping, but maybe go back once per week and say, 'Let's look at some highlights of some changes we've made.'"

    We produce 10 times more code than in the past, so reviewing every line is not feasible. But maybe we can at least identify which code is most interesting to look at.

    Ethical Considerations: Balancing Innovation with Responsibility

    Tenille: Agile emphasises people over process and delivering value to customers. Now with AI in the mix, there's potential for raising some ethical considerations. I'm interested in your thoughts on how teams should approach these ethical considerations that come along with AI – things like balancing rapid experimentation against concerns around bias, potential data privacy concerns.

    Henrik: I would treat each ethical question on its own merits. Let me give you an example. When you use AI – let's say facial recognition technology that can process and recognise faces a lot better than any human – I kind of put that in the bucket of: any tool that is really useful can also be used for bad things. A hammer, fire, electricity.

    That doesn't have so much to do with the tool itself. It has much more to do with the rules and regulations and processes around the tool. I can't really separate AI in that sense. Treat it like any other system. Whenever you install a camera somewhere, with or without AI, that camera is going to see stuff. What are you allowed to do with that information? That's an important question. But I don't think it's different for AI really, in that sense, other than that AI is extremely powerful. So you need to really take that seriously, especially when it comes to things like autonomous weapons and the risk of fraud and fake news.

    "An important part of it is just to make it part of the agenda. Let's say you're a recruitment company and you're now going to add some AI help in screening. At least raise the question: we could do this. Do we want to do this? What is the responsible way to do it?"

    It's not that hard to come up with reasonable guidelines. Obviously, we shouldn't let the AI decide who we're going to hire or not. That's a bad idea. But maybe it can look at the pile of candidates that we plan to reject and identify some that we should take a second look at. There's nothing to lose from that because that AI did some extra research and found that this person who had a pretty weak CV actually has done amazing things before.

    We're actually working with a company now where we're helping them build some AI agents. Our AI agents help them classify CVs – not by "should we hire them or not," but more like which region in Sweden is this, which type of job are we talking about here. Just classifying to make it more likely that this job application reaches the right person. That's work that humans did before with pretty bad accuracy.

    The conclusion was that AI, despite having biases like we humans do, seemed to have fewer biases than the humans – mainly because it's never going to be in a bad mood because it hasn't had its coffee today. It'll process everybody on the same merits.

    I think of it like a peer-to-peer thing. Imagine going to a doctor – ideally, I want to have both a human doctor and an AI doctor side by side, just because they both have biases, but now they can complement each other. It's like having a second opinion. If the AI says we should do this and the doctor says, "No, wait a second," or vice versa, having those two different opinions is super useful.

    Parallels Between Agile and AI Transformations

    Tenille: You're recognised as one of the leading voices in agile software development. I can see, and I'm interested if you do see, some parallels between the agile transformations that you led at Spotify and Lego with the AI transformations that many businesses are looking at now.

    Henrik: I agree. I find that when we help companies transition towards becoming AI native, a lot of the thinking is similar to agile. But I think we can generalise that agile transformations are not really very special either – it's organisational change.

    There are some patterns involved regardless of whether you're transitioning towards an agile way of working or towards AI. Some general patterns such as: you've got to get buy-in, it's useful to do the change in an incremental way, balance bottom-up with top-down. There are all these techniques that are useful regardless. But as an agilist, if you have some skills and competence in leading and supporting a change process, then that's going to be really useful also when helping companies understand how to use AI.

    Tenille: Are you seeing more top-down or bottom-up when it comes to AI transformations?

    Henrik: So far it's quite new still. The jury's not in yet. But so far it looks very familiar to me. I'm seeing both. I'm seeing situations where it's pure top-down where managers are like "we got to go full-out AI," and they push it out with mixed results. And sometimes just completely bottom-up, also with mixed results.

    Sometimes something can start completely organically and then totally take hold, or it starts organically and then gets squashed because there was no buy-in higher up. I saw all of that with agile as well. My guess is in most cases the most successful will be when you have a bit of both – support and guidance from the top, but maybe driven from the bottom.

    "I think the bottom-up is maybe more important than ever because this technology is so weird and so fast-moving. As a leader, you don't really have a chance if you try to control it – you're going to slow things down to an unacceptable level. People will be learning things that you can't keep up with yourself. So it's better to just enable people to experiment a lot, but then of course provide guidance."

    AI for Product Owners: From Ideation to Pull Requests

    Tenille: You're very well known for your guidance and for your ability to explain quite complex concepts very simply and clearly. I was looking at your video on YouTube today, the Agile Product Ownership in a Nutshell video, which was uploaded about 12 years ago now. Thinking about product owners, there's a big opportunity now with AI for generating ideas, analysing data, and even suggesting new features. What's your advice for product owners and product managers in using AI most effectively?

    Henrik: Use it for everything. Overuse it so you can find the limits. The second thing is: make sure you have access to a good AI model. Don't use the free ones. The difference is really large – like 10x, 100x difference – just in paying like $20 per month or something. At the moment, I can particularly strongly recommend Claude. It's in its own category of awesomeness right now. But that of course changes as they leapfrog each other. But mainly: pay up, use a paid model, and then experiment.

    For product owners, typical things are what you already mentioned – ideation, creating good backlog items, splitting a story – but also writing code. I would say as a PO, there is this traditional view, for example in Scrum, that POs should not be coding. There's a reason for that: because coding takes time, and then as PO you get stuck in details and you lose the big picture.

    Well, that's not true anymore. There are very many things that used to be time-consuming coding that is basically a five-minute job with a good prompt.

    "Instead of wasting the team's time by trying to phrase that as a story, just phrase it as a pull request instead and go to the team and demonstrate your running feature."

    That happened actually today. Just now, our CEO, who's not a coder, came to me with a pull request. In fact, quite often he just pushes directly to a branch because it's small changes. He wants to add some new visualisation for a graph or something in our platform – typically admin stuff that users won't see, so it's quite harmless if he gets it wrong.

    He's vibe coding, just making little changes to the admin, which means he never goes to my team and says, "Hey, can you guys generate this report or this graph for how users use our product?" No, he just puts it in himself if it's simple.

    Today we wanted to make a change with how we handle payments for enterprise customers. Getting that wrong is a little more serious, and the change wasn't that hard, but he just didn't feel completely comfortable pushing it himself. So he just made a PR instead, and then we spent 15 minutes reviewing it. I said it was fine, so we pushed it.

    It's so refreshing that now anybody can code. You just need to learn the basic prompting and these tools. And then that saves time for the developers to do the more heavyweight coding.

    Tenille: It's an interesting world where we can have things set up where anyone could just jump in and with the right guardrails create something. It makes Friday demos quite probably a lot more interesting than maybe they used to be in the past.

    Henrik: I would like to challenge any development team to let their stakeholders push code, and then find out whatever's stopping you from doing that and fix that. Then you get to a very interesting space.

    Closing the Gap Between Makers and Users

    Tenille: A key insight from your work with agile teams in the past has been to really focus on minimising that gap between maker and user. Do you think that AI helps to close that gap, or do you think it potentially risks widening it if teams are focusing too much on AI predictions and stop talking to their customers effectively?

    Henrik: I think that of course depends a lot on the team. But from what I've seen so far, it massively reduces the gap. Because if I don't have to spend a week getting a feature to work, I can spend an hour instead. Then I have so much more time to talk to my users and my customers.

    If the time to make a clickable prototype or something is a few seconds, then I can do it live in real time with my customers, and we can co-create. There are all these opportunities.

    I find that – myself, my teams, and the people I work with – we work a lot more closely with our users and customers because of this fast turnaround time.

    "Just yesterday I was teaching a course, and I was going home sitting on the subway. It was a 15-minute subway ride. I finally got a seat, so I had only 7 minutes left. There's this feature that I wanted to build that involved both front-end and back-end and a database schema change. Well, 5 minutes later it was done and I got off the subway and just pushed it. That's crazy."

    Of course, our system is set up optimised to enable it to be that fast. And of course not everything will work that well. But every time it does, I've been coding for 30 years, and I feel like I wake up in some weird fantasy every day, wondering, "Can I really be this productive?" I never would have thought that was possible.

    Looking Ahead: The Future of Agile Teams

    Tenille: I'd like you to put your futurist hat on for a moment. How do you see the future of agile teamwork in, say, 10 to 15 years time? If we would have this conversation again in 2035, given the exponential growth of AI and improvements over the last two to three years, what do you think would be the biggest change for software development teams in how they operate?

    Henrik: I can't even imagine 10 years. Even 5 years is just beyond imagination. That's like asking someone in the 1920s to imagine smartphones and the Internet. I think that's the level of change we're looking at.

    I would shorten the time a little bit and say maybe 3 or 4 years. My guess there – and I'm already seeing this transfer happen – is that coding will just go away. It just won't be stuff that we humans do because we're too slow and we hallucinate way too much.

    But I think engineering and the developer role will still be there, just that we don't type lines of code – in the same way that we no longer make punch cards or we no longer write machine code and poke values into registers using assembly language. That used to be a big part of it, but no longer.

    "In the future, as developers, a lot of the work will still be the same. You're still designing stuff, you're thinking about architecture, you're interacting with customers, and you're doing all the other stuff. But typing lines of code is something that we're gonna be telling our kids about, and they're not gonna believe that we used to do that."

    The other thing is smaller teams, which I'm already seeing now. I think the idea of a cross-functional team of 5 to 7 people – traditionally that was considered quite necessary in order to have all the different skills needed to deliver a feature in a product. But that's not the case anymore. If you skip ahead 2 or 3 years when this knowledge has spread, I think most teams will be 2 people and an AI, because then you have all the domain knowledge you need, probably.

    As a consequence of that, we'll just have more teams. More and smaller teams. Of course, then you need to collaborate between the teams, so cross-team synchronisation is still going to be an issue.

    Also, I'm already seeing this now, but this concept of sprints – the whole point is to give a team some peace of mind to build something complex, because typically you would need a week or two to build something complex. But now, when it takes a day and some good prompting to do the same thing that would have taken a whole sprint, then the sprint is a day instead. If the sprint is a day, is there any difference between a sprint planning meeting and a daily standup? Not really.

    I think sprints will just kind of shrink into oblivion. What's going to be left instead is something a little bit similar – some kind of synchronisation point or follow-up point. Instead of a sprint where every 2 weeks we sit down and try to make a plan, I think it'll be very much continuous delivery on a day-to-day basis. But then maybe every week or two we take a step back and just reflect a little bit and say, "Okay, what have we been delivering the past couple of weeks? What have we been learning? What's our high-level focus for the next couple of weeks?" A very, very lightweight equivalent of a sprint.

    I feel pretty confident about that guess because personally, we are already there with my team, and I think it'll become a bit of a norm.

    Final Thoughts: Preparing for the Future

    Henrik: No one knows what's gonna happen in the future, and those who say they do are kidding themselves. But there's one fairly safe bet though: no matter what happens in the future with AI, if you understand how to use it, you'll be in a better position to deal with whatever that is. That's why I encourage people to get comfortable with it, get used to using it.

    Tenille: I have a teenage daughter who I'm actually trying to encourage to learn how to use AI, because I feel like when I was her age, the Internet was the thing that was sort of coming mainstream. It completely changed the way we live. Everything is online now. And I feel like AI is that piece for her.

    Henrik: Isn't it weird that the generation of small children growing up now are going to consider this to be normal and obvious? They'll be the AI natives. They'll be like, "Of course I have my AI agent buddy. There's nothing weird about that at all."

    Tenille: I'll still keep being nice to my coffee machine.

    Henrik: Yeah, that's good. Just in case, you know.

    ---

    Thank you to Henrik Kniberg for joining us on this episode of the Easy Agile Podcast. To learn more about Henrik's work, visit Abundly AI or check out his educational videos on AI and agile practices.

    Subscribe to the Easy Agile Podcast on your favourite platform, and join us for more conversations about agile, product development, and the future of work.


    Easy Agile Podcast Ep.27 Inclusive leadership

    "It was a pleasure speaking with Ray about empowering teams and helping people reach their full potential" - Mat Lawrence

    Mat Lawrence, Chief Operating Officer at Easy Agile, is joined by Ray Arell. Ray currently works as the Director of Agile Transformations at Dell Technologies, is the host of the ACN Podcast, and is the President of the Board of Directors for the nonprofit Forest Grove Foundation Inc.

    Ray is passionate about collaborative and inclusive leadership, and loves to inspire and motivate others to achieve their full potential. This is exactly what Mat and Ray dive into in this episode.

    Ray and Mat explore concepts such as inclusive and situational leadership and their connection to agile ways of working, empowering the organisational brain, and fostering authenticity within teams.

    This is a fantastic episode for aspiring, emerging and existing leaders! Lots of great tips and advice to share with colleagues and friends about the ways we can empower and enable one another.

    We hope you enjoy the episode!

    Transcript:

    Mat Lawrence:

    Hi folks, it's Mat Lawrence here. I'm the COO at Easy Agile, and I'm really excited today to be joined by Ray Arell. Before we jump into our podcast episode, Easy Agile would like to acknowledge the traditional custodians of the land from which we're broadcasting today, the people of the Gadigal-speaking country. We pay our respects to elders past, present, and emerging, and extend that same respect to all Aboriginal, Torres Strait Islander and First Nations people joining us today. Ray, thanks for joining us today. Ray is a collaborative and inclusive leader who loves to inspire and motivate others to achieve their full potential. Ray has 30 years of experience building and leading outstanding multinational teams in Fortune 100 companies, nonprofits, and startups. He's also recognized as a leading expert in large-scale agile adoptions, engineering practices, lean, and complex adaptive systems. So Ray, welcome, really good to have you on the podcast today.

    Ray Arell:

    Thank you.

    Mat Lawrence:

    Love to get started by understanding what you enjoy most about being an inclusive leader and working with teams.

    Ray Arell:

    Yeah, so I've been in leadership probably for about 15 years, leading teams of different sizes, from the more intimate, smaller teams of maybe five or six people up to several hundred people working within an organization that I might be the leader of. And what I enjoy the most about it is just connecting with the talented people that do the work. I mean, when you go into leadership, one of the things you transition away from is being the expert person in the room that's coding or doing hardware development or something else. You have these people who are now looking for direction or vision or other things to give them purpose in order to move forward with their day.

    And I enjoy coaching. I enjoy mentoring. I mean, a lot of the technical side of me is more nostalgia now than it is relevant with the latest technologies. There's something rewarding when you see somebody who, if you think of Daniel Pink's work on autonomy, mastery and purpose, suddenly finds that they are engaged with the purpose that we're pursuing as an organization, and then has the autonomy to just do their day and be able to work and collaborate with others. And that's always been exciting to me.

    Mat Lawrence:

    I can relate to that. Yeah. I think in our audience today we're going to have a mixture of emerging leaders, aspiring leaders, and experienced leaders. I'd love to tap into your experience and ideally rewind a little bit to earlier in your career, when you were transitioning into being a leader. And I'd love to understand, around that time, what were some of the successes in the approach you took that you've been trying to repeat over the years?

    Ray Arell:

    Well, I think early on, especially when you grow up through the technical ranks, at least the company that I was with at the time had a very expert-based culture. If you were the smartest person in the room, those are the people that they looked at and said, "Okay, we're going to promote you to lead, or we're going to promote you to manager or promote you into the leadership ranks." Looking back on that, I think Ray 2.0 or Ray 3.0, whatever version I was at the time, very much led from that expert leadership stance, which is sort of: I know what is the best way to go and approach the delivery of something, and everyone should be following my technical lead for however this product comes together.

    And I don't think that was really a good approach. I think that constrained people, because you ended up more or less just telling people what to go do versus allowing them to experiment and learn and grow themselves in order to become what I had become as a senior technical person. And so I think lesson learned number one was that leading a team from an expert slant is probably not the best approach, especially if you think of agile and other more inclusive, teamwork-based types of projects. You're going to want to give people more of a catalytic or catalyst leader type of synergistic-based leadership style so that they can self-organize and they can move forward and learn and grow as engineers.

    Mat Lawrence:

    Are there any times that stand out for you where you got it horribly wrong? I know I've got a few stories which I can happily share as well.

    Ray Arell:

    I'd love to hear some of yours. Horribly wrong, I think... The question is, is anything ever really not fixable, not recoverable? And in most cases, most of the issues that we've dealt with were recoverable. I think that looking at, and again, kind of back into that stance of, well, am I creating a team, or am I creating just a group of individuals that are taking their work from the manager, with me passing tasks out like cards... I think early on, probably the big mistake was just being too controlling, and the consequence of that control meant that I couldn't have a vacation. Others were dependent on me versus being interdependent on one another. And I think that made the organization run slower and not as efficient as it could be.

    Mat Lawrence:

    I've certainly been guilty of that same approach earlier in my leadership career where I became the bottleneck, absolutely.

    Ray Arell:

    Yeah. Exactly.

    Mat Lawrence:

    And to recognize that, it can be quite hard to undo, but it's definitely worth persevering with. Something else: I was fortunate to get some training in situational leadership, oh, probably nearly 10 years ago now. And that really opened my eyes to the way I was treating different people in my team: I was treating them the way I first judged them. So if I saw [inaudible 00:07:01] an expert and a master, I would treat them as an expert and a master in all things. And [inaudible 00:07:05] if someone was less capable at that point in their career, I'd kind of assume the same thing. And so I would apply the same level of direction, or lack of direction, to those people for everything. In situational leadership, the premise, for those who don't know at home, is that you change the level of direction that you give depending on the task at hand. Have you used that approach or something similar to guide how you include people in different ways?

    Ray Arell:

    Well, in order to include people, I think part of it is you need to... As you said, you were situationally looking at each person, and you were structuring it in an approach that was very individualized. Not everyone is very open or can communicate very well about their skills and their strengths. In certain cases, some people might be good at something but they don't exercise it, because they themselves feel that it's not one of their strengths, but in reality it is. So from a situational leadership perspective, when you hear somebody place doubt that they could be the one to do something, or to take up, say, even leadership of something, I think part of that just gets into that whole coaching and mentoring, and really setting it up and helping them to be successful through that.

    And I think from an inclusive perspective, there's a set of honesty that you have to bring into your work, and humility about what you've accomplished. Because in engineering in particular, you tend to see that when you put people into a room, the people who are newer will sit back, and they will yield to whoever they think has the more experience. And the reality is that somebody who just got fresh out of college might actually have more skills in a particular area, based upon what they just went through in their curriculum, than we might have. And so the question of how we use the whole organizational brain to bring all of the ideas onto the table, I think at times it requires us to be effective listeners and to sometimes just pause and allow people to have the floor and pick up the pen, and not hog the space, if that makes sense.

    Mat Lawrence:

    It really does, and I think I've seen that in every company I've worked in to some level. I'd be really interested to tap into how you go about addressing that scenario. For the people who are listening that would face that situation, it might be the first time they've been a leader and seeing that scenario and observing it. Is there any advice you would give them to help change that dynamic?

    Ray Arell:

    Well, one, just becoming aware of it. I frequently doodle when I'm in a group of people, and what I'll do is I'll sit there and put dots on a paper for where people are at in the room, and then I start drawing lines between those individual dots if I see communication happening between certain players. And what's interesting is, if you watch that over about a 15-minute period of time, you start to see this emergent pattern that maybe someone's dominating the conversation or they're the focus point of the conversation, and it isn't going around the full room. So that's when you get to be a gatekeeper and you invite others into the conversation. And then you politely help the ones who are being dominant in the conversation to pause, to just give space and allow those other people to talk and get that out.

    And then, whether what the person says is coherent or not coherent to the conversation, or maybe they're still trying to learn about the dynamics of everything, you just have to help get that out of people sometimes, and use open questions to pull that out from them. And I think that works really well.


    Mat Lawrence:

    I love that. I'm a doodler as well. I'm an artist originally, from my early career, and I worked my way into solving problems through tech a long time ago now, but I still can't... I need that physical drawing to help my mind think as much as anything else [inaudible 00:12:30] than just doodling on a pad.

    Ray Arell:

    Same here.

    Mat Lawrence:

    Something that you said a little earlier, we touched a little bit on inclusivity. In your LinkedIn bio you talk about being an inclusive leader who loves to inspire and motivate others to achieve their full potential. Something I'm really passionate about is that last part in particular: helping people achieve their full potential. It's why I love being a people leader and a COO. You get to do that across a whole company. I'd love to first touch on the idea of being an inclusive leader. How do you define what it means to be one?

    Ray Arell:

    Well, inclusive leadership... There was an old bag that I used to have, a little coaching bag that I used to carry around with me. At the very top of it, "Take it to the team" was the motto. And at the bottom of the bag it basically said, "Treat people like adults." Those were the two core things. I think part of what being inclusive is, is that I have to accept the fact that, yeah, I'm a smart person, but do we get a better decision if we socialize that around the team? Do we see what other ideas or possibility thinking emerge? Sort of in the lean sense, make the decision as late as you can.

    It's more towards the Eastern culture of, well, if I keep the decision open, maybe we're going to find something that's cheaper or better or even just more exciting for our customers. And so I think part of that is knowing that you don't have to be the one that makes the decision. You can let the team make the decision. And we all embrace it, because we're empowering ourselves: this was what we all thought, not just what Ray thought, which I think is cool.

    Mat Lawrence:

    There's a second part to that piece you talked about in your bio around helping motivate others to achieve their full potential.

    Ray Arell:

    Yeah, yeah.

    Mat Lawrence:

    Yeah. Let's talk about where that came from for you, that passion, and what are some of the ways you look to help emerging leaders reach their full potential?

    Ray Arell:

    Yeah, I mean, I was lucky enough when I joined Intel Corporation that Andy Grove was still running the organization at the time. As a matter of fact, he taught my Welcome to Intel class. At the time when I joined Intel, there were only about 32,000 employees. And here's the CEO, founder of the company, teaching the Welcome to Intel class, which I thought was incredibly cool, a great experience to have. He oozed this leadership, whatever mojo or whatever it is he's got going, out into the environment as he's talking about the company. But he was really strong on the one-on-ones, the time that you can spend with your manager or others within the organization, because you can have a one-on-one with anyone within the company. And he encouraged that. And I think that helps... When somebody is trying to figure it out, they're brand new to the company, and you get a standing invitation from the CEO that says, "You can come and have a conversation with me," I think that sets the cultural norm right up front that this is a place that's going to assist and help me along my career.

    And I could tell you that there's been a number of different times that those developed into full-blown, "I'm the mentee and they're the mentors." And in those relationships over time, it's sort of like then you say, "Well, I'm going to pay that forward." Today I have at least six or seven mentees that have all sorts of questions about how do they guide through their career or if they had some specific area that they wanted to go focus on. And it's their time to pick my brain. And in certain cases, if I don't have the full answer, I can guide them to other mentors that can help them to grow.

    Mat Lawrence:

    I love that approach of pay it forward that you touched on there. It's definitely something that I've been trying to do in the last couple of years myself, and I wish I'd started sooner mentoring. I've had the privilege of working with some amazing leaders in my career who I've learned a lot from. And once I started mentoring, I realized how much I learned by being a mentor because you have to think. You really think about what these people are going through and not just project yourself onto them. And it validates the rationale about why you do things yourself, why you think that way. And it forces me to challenge myself.

    And I think if there's anything... I talk to some of the younger people at work who are emerging leaders, and they're exceptional in their own way. They've all got very different backgrounds, but a lot of them don't feel like they're ready to be a mentor. They really are. They're amazing people. And I wonder, have you seen people earlier in their careers try and pay it forward early on, or do people feel they have to wait until [inaudible 00:18:22]?

    Ray Arell:

    I think it depends. One, I think the education system, at least in the United States, has shifted a bit. When people went for their undergraduate degree, it used to be that they were by themselves; they did their book studies. Very little interaction or teamwork was created for the study. I mean, back when I got my electrical engineering degree, it was just me by myself. There might be occasional lab work and lab projects, but it wasn't something that was very inclusive, nor did they have people step up into leadership roles that early. I look now at my daughter, who's going to university, and everything is a cohort group. There's cohorts that are getting together. In the studying that they do, they each have to pick up leadership in some regard for some aspect of a project that they're working on. So I think some of the newer people coming into the workforce are built in with the skills so that, if they need to take up leadership of something, run a little program, run a project, they've been equipped to do it. At least that's what I've seen.

    Mat Lawrence:

    I love that concept. Something that I've been observing, and I talk about it a lot with our leadership team and our mentor exec teams for the [inaudible 00:19:56] as well. A lot of the conversation that comes up is around team dynamics, team trust, agility within teams, and generally trying to empower teams: set them up so they can be autonomous, so they are truly empowered, and they're trusted to make great decisions and drive work forwards. You've got a lot of experience in agile and agile [inaudible 00:20:21] agile leader. In your experience leading agile teams, those adoptions and those transformations, I'd love to understand if you see a connection between being agile as a team and those traits that an inclusive leader will have. Is there a connection there in your mind between what it means to be agile and to be an inclusive leader?

    Ray Arell:

    I think so. Because if you think of early on, it was established that servant leadership was a better leadership style for agile teams. And so when we talk about transformation, some of the biggest failures that occur tend to be based not on agile, but on issues of trust and other organizational impediments that already existed there before they got started. And if they don't address those, their agile journey is painful.

    I've heard people say that they've gotten "Scrummed," using it in a really derogatory way, thinking that, well, instead of getting a team of empowered people to go do work within the Scrum framework, they end up being put under a micromanagement lens because the culture of the manager didn't shift, and the manager is using it as a daily way to make sure that everyone is working at 120%. Whereas what we should be seeing in the pattern is that the team understands their flow. They're pulling work into the team. It's not being pushed. And those dynamics, I think, are something that if leadership doesn't shift and change the way that they work, then it just doesn't work in organizations.

    Mat Lawrence:

    In the many places that you've worked and coached and guided people, have you started to come across... There's a term that we've started to use, "agile natives," for people who've really not known any different, because so many companies in the world are going through agile transformations, and that'll continue for a long time. But as some companies are born with agility at the forefront, have you experienced many people coming through into leadership roles who don't know anything but true agility, really authentic agility, as you've just described?

    Ray Arell:

    Well, I think it's kind of interesting, because as you talked about that phrase, I was thinking about it: well, if you knew nothing else... But I can also say that you could become native after you've been in the culture for a period of time as well. Eventually that becomes your first reaction; your first habit is pulling more from the agile principles than you would be pulling from something else. Yeah, there are those people, but it's been interesting watching companies like Spotify or watching Salesforce or watching Pivotal, and I can just go down the list of companies that have started as an agile organization: they got large, and then suddenly the anti-patterns of a large company start to emerge within those companies. So even though the people within the smaller tribe are working in an agile way, the company slowly stops working in an agile way. It falls underneath the larger context of what we see happening with the older companies.

    And I think some of that could be the executive culture, where they bring somebody in from the outside who wasn't a native, and they have a hard time dealing with the notion that, well, we're committing to a delivery date sometime over here, and we think we're going to hit it. But no, we don't have what would be affectionately known as a 90% confident plan that says that we've cleared all risk out of the way and, yeah, it's going to absolutely happen on that day. And some of those companies feel that they have to commit everything to the street, and if they don't meet it... They've already glued those commitments into some executive bonus program, which ends up driving bad behaviors, unfortunately.

    Mat Lawrence:

    Yes, I have been there. I'm assuming that in our audience, we're going to have people who are transitioning into more senior leadership roles. They're not emerging leaders; they've been doing it for a while, and they've probably run some successful agile teams at the smaller level, as you've described. For those people who are moving into more senior roles, maybe into exec positions, is there any guidance that you'd give them for navigating that change and trying to maintain agile principles, and what it means to be agile, in those more senior roles?

    Ray Arell:

    Yeah, I think part of it is that the work that you did as a smaller team can still scale up. And I hate to use the word scale, because I think scale is kind of... People kind of use it... What would be the right word? It's misused in our industry. I think values and principles are scale-free. You can still walk into your team each day still embracing those 12 principles, and you're going to do good work. The question, though, is if you're doing that at the lower level, say with a Kanban board, what does it look like when you're at your executive desk? What is the method that you pull from? If you look at most of the scaled frameworks that are out today, there's very little guidance given on what should be in the day in the life of an agile executive. What should that look like?

    And for me, if I think about the business team: the management team should be working with the delivery teams daily. So what are you going to put in place to facilitate that? What are you going to do about... Stop doing these big annual budget processes. Embrace things like Beyond Budgeting or other approaches where you're funding the organization strategically, and you're not trying to lock everything in on an annual cadence while your organization beneath is working every two weeks. You should be able to move your bets within any organization based upon the performance of each sprint. Can you do that?

    The last one is probably the most important one: impediments. And that is, how long does it take for information to go from the lowest part of the organization to the highest point of the organization? If that takes three weeks, two weeks, or sometimes even longer for certain organizations, optimize that. How do you find an impediment that you can personally help to remove for people, so that they're not slowed down by it any longer, whatever that might be?

    Mat Lawrence:

    You're touching on something there which I think is a fundamental part of being agile, which is that ability to learn and adapt. And you can only learn when you are aware of what's happening around you, when you can observe [inaudible 00:28:39] to it.

    Ray Arell:

    Well, I said something a couple months ago, and everyone just went, "Why did you say... I can't believe you said that out loud." It's saying the quiet stuff out loud sometimes. [inaudible 00:28:53]. We were trying to get a meeting together to go fix one of these impediments, and all the senior leadership was busy. They were busy. And my question was, if this isn't the most important thing right now for us, what are you really doing in your day? If this one isn't the highest priority that you walk into... Questioning senior leaders, suggesting that maybe they're not paying attention to the right things, and sometimes speaking that truth to power, is something we have to do every once in a while.

    Mat Lawrence:

    I agree. That level of candor is definitely required at all levels, and so is being able to receive that feedback so you can learn and adapt as an individual, as we were talking about earlier, about being adaptive as a leader, but also as a team. There's a point that I'd like to touch on before we wrap up. As you climb up the career ladder and you get into a more senior position, you become responsible for a broader range of things, particularly as you start reaching that executive level. I've witnessed people struggle with the transition from being the person, as you talked about right at the start of this discussion, who knows everything and who can direct and have all the answers, into someone whose job changes to being the person who can identify what we know least about, what we as an exec team know least, where we have the least confidence, where we see the impediments and we don't know what to do with them.

    How do you go about guiding people to embrace that? Because I think what I see is the fear that comes with it, almost a fear of exposure: "Oh, I'm admitting to people I don't know what I'm doing." And I've been rewarded through my entire career by becoming more of an expert, and suddenly my job is to be the person who's confident enough to call out: this is what we don't understand yet; let's get together and try and resolve it. When the risk is greater, the impact is greater, and you're responsible for more things, how do you help people transition into that higher-level role?

    Ray Arell:

    Well, I think part of it is, can they let go of that technical side, having to have their hands dirty all the time? And I've seen certain leaders where, really, somebody needs to go back and say, "Are you really sure that this is the career that you want? You seem to want to be in the nuts and bolts of things, and maybe that's the best place for you, because you feel more comfortable in that space." The other aspect, though, as they transition: I think trust becomes critical. Trust the people that are working for you, that they're not coming in and being lazy, so that you don't have to look over their shoulders all the time because you feel that they might not be productive. You have to have the ability to say that, look, the people that you hired are talented, and they are moving us towards our goals.

    I think what becomes more critical for the health of the organization is that you do a much better job of actually saying, "Okay, well, here is our vision," whether it be a product vision, whether it be the company's vision, whatever that might be, helping people to understand what that North Star is, and then reinforcing it not from your own perspective, but from the perspective of the customer. And I think this is where a lot of companies start to drift, because they start to optimize some internal metric that, yeah, will build efficiency within your organization. But what does the customer think? Constantly being able to represent, if you think of it from an agile perspective, as the chief product owner of the organization: this is what the customers need and want, and to be able to voice that in the vision and the ambitious missions that are set up for the organization. Make it real for people.

    And then the last part of that is, not everything is going to happen and come true. If you read most executives' bios, there's lots and lots and lots of mistakes. And I remember this of one leader who was retiring. And I thought this was the most awkward time that he actually did this: he went up on the stage and he talked about his biggest failure. Throughout my career working with this person, I always wondered whether or not they were human. And then on the day of this person's exit, they finally decided to give you a few stories about mistakes that they made. And I think that he really needed to share those stories much, much earlier, because I think people would've probably found him more human. They were a little stressed working around him. It would also show some vulnerability for you as a leader to say that you don't have everything figured out, and sometimes it's just a guess. We think that this is where the product needs to go.

    And then as soon as you put it in front of the customers, they're going to tell you whether or not... If you take the Kano model, you think you're going to hit "this is the most exciting thing since sliced bread," but are they going to love it, or are they going to go, [inaudible 00:35:12], "I'll take it if it's free"? You get into this situation where it's like, well, we can't charge as much. But I think those stories become important and anchor organizations. One other aspect of this is, I think that by having somebody who's approachable and can relay those stories effectively into the organization and talk about these things, that opens the door for everyone else to do it as well. Because like it or not, humans are hierarchical in the way that we think about things. A lot of people manage up, so they mimic leaders. So be that leader that somebody would want to mimic.

    Mat Lawrence:

    I think that's great advice, Ray. The connection for me that's run through this whole conversation is around engaging with your work authentically, whether it's the team that you're trying to lead, or the agile practices at whatever scale and level you're operating at. And building the trust to enable that to work requires that level of authenticity.

    Ray Arell:

    Yeah, exactly.

    Mat Lawrence:

    I would love, as we wrap up, for you to leave any final tips or advice for both current and emerging leaders on that topic. Beyond just sharing your own personal stories, how would you advise people? What would you leave them with to help build some trust in their teams?

    Ray Arell:

    Well, a couple of things. Number one, you have to be mindful about who you are as a person. Again, like I was saying, people manage up. And if you send out an email at three o'clock in the morning, and five minutes later your people are responding to you, then you're not being a really good role model of good work-life balance. A lot of your tendencies will bleed off into the organization. So regardless of how you assess yourself, do an assessment of your leadership, where you think it is. Harvard Business Review, a long time ago, put out the levels of what they saw as leadership models. And the lowest levels are the expert and the achiever-based leaders. And if you're one of those, those are not very conducive to a good agile or collaborative culture. So if you're currently sitting in that slant, then you should look at ways of moving yourself more towards a catalytic or a synergistic-based leader.

    And that journey's not an easy one, because I went through it myself. It took years to pull away from some of those tendencies that I had as an expert leader. As an example, an expert-based leader tends to only talk to other experts. If they perceive somebody not to be an expert in something, they tend to discount those individuals and not engage with them. And again, the full organizational brain is what's going to solve the problem. So how do you engage the entire organization and pull those ideas together?

    The other one, from an emerging leader perspective, and I think you said it yourself earlier: it's not just the bias of "you're not an expert, I'm not going to talk to you," but any bias that you might have can affect the way that you lead and judge an individual, and could really limit or grow their career based upon maybe a snap judgment that you might have made. So I think you have to be mindful of the decisions that you're taking within the organization, and especially the ones you're making about people. You've got to be careful of those.

    The last one is probably just... And this gets into the complex adaptive systems space. Not everything is cut and dried, black and white, or mechanistic; we can take the same product, redo it again and again and again, and we're going to get different answers. We're going to get different requirements. We're going to get different things. It's okay for that stuff to be there. And it's okay for the stuff that's coming out of our products to be different every once in a while, specifically because it's a very complex environment. With cause and effect relationships in complexity, a customer can change their mind, and we have to be comfortable with a customer changing their mind. Our customer might have new needs that come up.

    And likewise, our employees sometimes will have a change of thought or a change in what they are excited about. How do you encourage that? How do you grow those individuals to retain them in the company, not to use them for the skill they have right now, but to play the long game there? And I know I'm getting a little long-winded here, but the thing that I see most, even with all the layoff notices that are going on right now, is that those companies are not playing the long game. I think that's a bad move, because all you're doing by letting an employee go is enabling your competitor with a whole bunch of knowledge that you should be retaining. So anyway, I'll cut it short there.

    Mat Lawrence:

    Right. Thank you for sharing your wisdom with us today. It's been an absolute pleasure. I've really enjoyed the chat. So yes, thank you for joining me on the Easy Agile Podcast.

    Ray Arell:

    Awesome. Thank you for having me.


    Easy Agile Podcast Ep.12 Observations on Observability

    On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan as they share their thoughts on observability.

    Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.

    💥 What is observability?
    💥 How can you improve observability?
    💥 What's the end goal?

    Angad Sethi

    "This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""

    Be sure to subscribe, enjoy the episode 🎧

    Transcript

    Jared Kells:

    Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.

    Jared Kells:

    So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting! We've got a couple of primarily front-end engineers, so Angad and I are going to share some front-end technical stuff, and Jess and Jordan are going to talk a bit about observability. So we'll start by introductions. I'll pass it over to Jess.

    Jess Belliveau:

    Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?

    Jordan Simonovski:

    I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.

    Angad Sethi:

    Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.

    Jared Kells:

    Nothing fancy!

    Jess Belliveau:

    Don't sell yourself short.

    Jared Kells:

    Yeah, I'll say. Yeah, so my name's Jared, and I'm a senior developer at Easy Agile, working on our apps. Mainly I work on Programs and Roadmaps, and yeah, they're front-end, JavaScript-heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?

    Jess Belliveau:

    Yeah, yeah. That's it, we'll wrap up!

    Jared Kells:

    Podcast over! Tell us about observability.

    Jess Belliveau:

    Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this, and sort of, for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience, that maybe people don't know a lot about. And there's a lot of movements in the broad IT scope, I guess, that you could talk about. There's so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about, and it's something that you can give an introduction to without getting too technical. This is a topic where you can go really deep into the weeds, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.

    Jess Belliveau:

    Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.

    Jordan Simonovski:

    Okay!

    Jess Belliveau:

    So we thought we'd try and describe to you, first of all, what observability is. Because the term doesn't give you much of what it is. It gives you a little hint, but it'll be good to baseline what we're talking about when we say observability. And then, why would a development team want observability? Why would a company want observability? Sort of high level: what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry hot buzz words and commit to stuff that you might not need, that sort of stuff.

    Jared Kells:

    Yep.

    Jordan Simonovski:

    Yep.

    Jess Belliveau:

    We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then, because we're not going to try and get too deep, we thought we could just give a few pointers to some websites and some YouTube talks for further reading, and go from there. So yeah, Jordan, you want to-

    Jared Kells:

    Sounds good.

    Jess Belliveau:

    Yeah, hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, if there's stuff that you think we don't cover or that you want to know more about, ask away.

    Jordan Simonovski:

    I guess to start with observability: it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose, and they're not in any way obsolete either, things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture: microservices, other buzz words.

    Jordan Simonovski:

    But even for something like a traditional monolith, observability really serves to help us ask new questions of our systems. The way it tends to get explained is: monitoring exists for our known unknowns. With seniority comes the ability to almost predict in what ways your systems will fail. The longer you're in the industry, you know this: a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.

    Jared Kells:

    I was going to say that!

    Jordan Simonovski:

    I'll try not to get too much into-

    Jared Kells:

    Runs out of memory!

    Jordan Simonovski:

    Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about is, it's the unknown unknowns of our system that we want to find out about and ask new questions of. And that's where observability gets introduced, to answer these questions. Is that a good enough answer? Do you want me to go any further into detail about this stuff? I can talk all day about this.

    Jared Kells:

    Is it like a [crosstalk 00:08:05]... So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might log memory usage, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-

    Jordan Simonovski:

    Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work, and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually around some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of these additional attributes. You can throw user agents in there if you want to. The idea of observability is that you're not necessarily constrained by high-cardinality data, high-cardinality data being data sets that can change quite largely in terms of the kinds of data they represent, or the combinations of data sets that you could have.

    Jordan Simonovski:

    So if you want to ship metrics on something on a per-user basis, and you want to look at how different users are affected by things, that would be considered a high-cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that only users from a particular time zone or a particular device are experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things, or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
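
    To make the high-cardinality idea concrete, here is a minimal sketch of the kind of "wide event" Jordan describes: one record per interaction, enriched with context you can slice by later. The field names and the emitEvent helper are hypothetical, not from any particular vendor.

    ```typescript
    // A hypothetical "wide event": one record per user interaction, carrying
    // high-cardinality context (user, region, device) for later slicing.
    interface RequestEvent {
      name: string;
      timestamp: string;
      durationMs: number;
      status: number;
      userId: string; // high-cardinality: unique per user
      region: string;
      userAgent: string;
    }

    // Stand-in emitter: a real setup would ship this to an observability
    // backend (Honeycomb, Datadog, etc.) rather than stdout.
    function emitEvent(event: RequestEvent): void {
      console.log(JSON.stringify(event));
    }

    emitEvent({
      name: "issue.drag",
      timestamp: new Date().toISOString(),
      durationMs: 187,
      status: 200,
      userId: "user-48213",
      region: "ap-southeast-2",
      userAgent: "Mozilla/5.0 ...",
    });
    ```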

    Angad Sethi:

    Would it be fair to say-

    Jared Kells:

    Yeah. It's [crosstalk 00:11:02].

    Angad Sethi:

    Oh, sorry Jared.

    Jared Kells:

    No you can-

    Angad Sethi:

    Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?

    Jordan Simonovski:

    Yeah.

    Angad Sethi:

    Oh.

    Jess Belliveau:

    And it better equips you to find them. One of the things I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's giving you some guidance around things like having good logging coming out of your application. So structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you the ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things a bit around: instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
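
    As a rough illustration of that last point, an application emitting its own metrics rather than being probed from outside, here is a minimal sketch using the prom-client Node library as one option; the metric name and label are invented for the example.

    ```typescript
    import http from "http";
    import client from "prom-client";

    // The application reports its own state, instead of being probed
    // from the outside. Metric name and label are illustrative.
    const dragCounter = new client.Counter({
      name: "issue_drag_total",
      help: "Issue drag interactions, labelled by outcome.",
      labelNames: ["outcome"],
    });

    // Somewhere in application code, after handling an interaction:
    dragCounter.inc({ outcome: "success" });

    // Expose all registered metrics for a scraper such as Prometheus.
    http
      .createServer(async (_req, res) => {
        res.setHeader("Content-Type", client.register.contentType);
        res.end(await client.register.metrics());
      })
      .listen(9100);
    ```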

    Jess Belliveau:

    Yeah, I like your choice of words there, Angad, that it's these practices, this sort of guide of where to go, which probably leads into the next point of why a team would want to implement it. Do you want to start again, Jordan?

    Jordan Simonovski:

    Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.

    Jess Belliveau:

    Oh, I didn't sign up for that!

    Jordan Simonovski:

    I think why teams would want it is because... it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in-house. Observability capabilities themselves, you won't achieve just by buying a thing. Like you can't buy dev ops, you can't buy Agile, you can't buy observability either.

    Jared Kells:

    Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-

    Jess Belliveau:

    Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.

    Jared Kells:

    Yeah, sorry, sorry, yeah! Go on.

    Jordan Simonovski:

    You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start-ups like LightStep and Honeycomb which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time. And to build, I guess, good operational hygiene and good production excellence, as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CI/CD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PMs now if you want to, with feature flags, which is great.

    Jordan Simonovski:

    But what you can also do now is completely close this loop. As you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for Beta releases of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLO's in place as well. And SLO's-

    Jared Kells:

    What's an SLO?

    Jordan Simonovski:

    They're service level objectives, and they're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features, or whether you drop everything and work on reliability to make sure your users aren't affected. So it's a very user-centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. The 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser, or where we've deployed this app to..." Let's say you have a global deployment of some kind and you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them," and you can roll it back before you impact 100% of your users.
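
    As a rough sketch of how an SLO tied to a user interaction might be evaluated, here is some illustrative TypeScript; the target, event shape, and function names are invented for the example and are not any particular vendor's API.

    ```typescript
    // Illustrative SLO check: "can users drag an issue?" Each event records
    // whether the interaction succeeded for one user.
    interface InteractionEvent {
      ok: boolean;
    }

    // Invented target: 99.5% of drag interactions succeed over the window.
    const SLO_TARGET = 0.995;

    function compliance(events: InteractionEvent[]): number {
      if (events.length === 0) return 1; // no data: treat as compliant
      return events.filter((e) => e.ok).length / events.length;
    }

    // If a canary cohort falls below target, roll back before the change
    // reaches 100% of users.
    function shouldRollBack(canaryEvents: InteractionEvent[]): boolean {
      return compliance(canaryEvents) < SLO_TARGET;
    }

    const recent = [{ ok: true }, { ok: true }, { ok: false }];
    console.log(compliance(recent).toFixed(3)); // "0.667"
    console.log(shouldRollBack(recent)); // true
    ```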

    Jared Kells:

    Yeah. I liked what you said about the test, I forgot the acronym, but actually testing the end-user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is," but now I don't really care about that; can a user open up the app and drag an issue around? It's like-

    Jess Belliveau:

    Yeah, that's a really great example, right?

    Jared Kells:

    That's what I really care about.

    Jess Belliveau:

    The 1% CPU thing: you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level of info about the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.

    Angad Sethi:

    How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?

    Jordan Simonovski:

    Yeah. I think that would come with good instrumenting-

    Angad Sethi:

    Good question?

    Jordan Simonovski:

    Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing it, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and dev ops principles are telling us now, we do want to own our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measure your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.

    Jared Kells:

    I think that we have to build-

    Jordan Simonovski:

    Yeah?

    Jared Kells:

    Oh, sorry Jordan.

    Jordan Simonovski:

    No, you go.

    Jared Kells:

    I was just going to say, we have to build our apps with integration testing in mind already, doing browser-based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.

    Jess Belliveau:

    Yeah, and for the actual how, the actual writing-code part, there's this really great project, the OpenTelemetry project, which provides all these APIs and SDKs that developers can consume, and it's vendor-agnostic. So when you talk about the how, "How do I do this? How do I instrument things?" or "How do I emit metrics?", they provide all these helpful libraries and includes that you can use, because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
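
    For a flavour of what that looks like in code, here is a minimal sketch using the vendor-agnostic @opentelemetry/api package; the span name, attribute keys, and saveIssuePosition function are made up for the example, and a real setup would also wire up an SDK and an exporter for whichever backend you choose.

    ```typescript
    import { trace, SpanStatusCode } from "@opentelemetry/api";

    const tracer = trace.getTracer("example-app"); // tracer name is arbitrary

    async function handleDragIssue(userId: string, issueId: string) {
      // startActiveSpan makes this span current, so spans created inside
      // (e.g. by instrumented HTTP or DB clients) join the same trace.
      return tracer.startActiveSpan("issue.drag", async (span) => {
        span.setAttribute("app.user_id", userId); // illustrative keys
        span.setAttribute("app.issue_id", issueId);
        try {
          await saveIssuePosition(issueId); // made-up application logic
          span.setStatus({ code: SpanStatusCode.OK });
        } catch (err) {
          span.setStatus({ code: SpanStatusCode.ERROR });
          throw err;
        } finally {
          span.end(); // always end the span so the exporter can ship it
        }
      });
    }

    async function saveIssuePosition(issueId: string): Promise<void> {
      // placeholder for real persistence logic
    }
    ```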

    Jess Belliveau:

    And then the other thing that comes to mind with something like OpenTelemetry, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability, and you would push your metrics and your logs and your traces to them, and they give you a UI to display them. If you choose something that's vendor-agnostic, let's just use the example of Easy Agile: let's say they start with Datadog, and then in six months' time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.

    Jordan Simonovski:

    I think NorthX.

    Jess Belliveau:

    Yeah. You can change your endpoint, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to be tied in to a single thing.

    Jordan Simonovski:

    Your data structures remain the same.

    Jess Belliveau:

    Yeah. So you could almost do it seamlessly without the developers knowing. There have even been companies in the past that have pushed to multiple vendors. So you could be consuming vendor A, and then you want to do a proof of concept with vendor B to see what the experience is like, and you just push your data there as well.

    Jared Kells:

    Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.

    Jess Belliveau:

    Yeah. That's sort of the big upsell, right? It's how you interact. That's where they want to get their hooks in: making it easier for you to interpret that data and manipulate it to meet your needs, that sort of stuff.

    Jordan Simonovski:

    Observability suggests dashboards, right?

    Jess Belliveau:

    Yeah, perhaps. You used this term as well, Jordan: "production excellence." When we talk about who needs observability, I was thinking a bit about that while you were talking. For me, production excellence, or what at Apptio we call production readiness or operational readiness, is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a really great idea there, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, and we've all probably been there, we've deployed code to production with no observability, and we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix. It's hard to learn from that, or you have to learn from that, I guess, and protect the user against future problems.

    Jess Belliveau:

    When I think of easy wins for observability, the first thing that comes to mind is this whole idea of structured logging. First of all, your application is logging; that's quite important as a baseline starting point. But then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line: a timestamp, an error message, whatever the developer decided to write to standard out, or to the error file, or something like that. Now I think there's a general move to JSON, an actual formatted blob with a known structure, so you can look into it. Tracing's probably not an easy win; that's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff, but it requires a bit more understanding of your code base, I guess, and where you want tracing to fire, passing context through, things like that.
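
    A minimal sketch of structured logging in Node, using pino as one popular option; the fields here are invented.

    ```ts
    // Each log line is a single JSON object, so it can be parsed
    // programmatically instead of regexed out of plain text.
    import pino from 'pino';

    const logger = pino();

    logger.info({ userId: 'u-123', action: 'login' }, 'user logged in');
    // => {"level":30,"time":...,"userId":"u-123","action":"login","msg":"user logged in"}
    ```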

    Jordan Simonovski:

    I think when you're starting out, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want some kind of uptime trend. And then as your code gets more complex, or your product gets a bit more complex, you can start adding things in there. But I think actually knowing, or surfacing, the things you know might break, those would probably be your quickest wins.

    Jess Belliveau:

    Well, let's mention some things for further reading. If you want to get the whole picture, observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there are some great chapters on it. O'Reilly has an observability book, I think, dedicated just to observability now.

    Jordan Simonovski:

    I think that's still in early release, if people want to google it and read some chapters.

    Jess Belliveau:

    The OpenTelemetry stuff, we'll drop a link to that. I think that's really handy to know.

    Angad Sethi:

    From [inaudible 00:26:12], which is my perspective as a developer: say I wanted to introduce observability. We use Datadog at Easy Agile, and I'm not very familiar, not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability at my day job, at my workplace?

    Jordan Simonovski:

    I would lean, and I could be biased here, Jess, so correct me or give your opinion on this, but I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-

    Jess Belliveau:

    What does SLO stand for, Jordan?

    Jordan Simonovski:

    Okay, sorry. Buzzwords! SLO is a service level objective, not to be confused with a service level agreement. An agreement is contractual, and you can pay people money if you do breach it. An SLO is something you set in your team: you have a target of reliability, because we're getting to the point where we understand that all systems, at any point in time, are in some kind of degraded state. Reliability isn't necessarily binary; it's not just unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.

    Jordan Simonovski:

    Datadog, I think, the last time I used it, had an SLO offering. But like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability; you'll probably burn yourself out trying to get there. And you have this target set for yourself, and you know exactly what you're measuring: particular types of functionality. And you know that when you do breach it, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" or, "What user-facing things are we implementing that we can start to measure?"
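
    For a feel of the arithmetic, here's a tiny sketch of checking an availability SLO and the error budget remaining against that 99% target; the event counts are made up.

    ```ts
    // SLO compliance: the fraction of "good" events out of all events.
    function sloCompliance(goodEvents: number, totalEvents: number): number {
      return totalEvents === 0 ? 1 : goodEvents / totalEvents;
    }

    const target = 0.99; // "my user can do this 99% of the time"
    const compliance = sloCompliance(9_950, 10_000); // 0.995

    // Error budget: the 1% of failures you're allowed. Here half of it is spent.
    const errorBudgetRemaining = (compliance - target) / (1 - target); // 0.5

    console.log({ compliance, errorBudgetRemaining });
    ```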

    Jordan Simonovski:

    Other things you could probably look at, and I think they're all covered in the book anyway, are things like data freshness. You want to make sure the data users are being shown is relatively fresh; you don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer a ping check, where you're just saying, "Yes, HTTP OK, my application is fine." You're saying, "My users are actually being affected by things not working," and you can start measuring things from there. And that should give you a better understanding, or a better idea at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started if you do want to introduce this.

    Jared Kells:

    We're going to talk a little bit about state, because the applications we're building are very front-end heavy; they basically run inside the browser. The traditional state, as you would think about it, is just behind a very simple API that writes some things into a database, with some authentication and that sort of stuff. So in terms of reliability, those services are really reliable. Those tiny APIs just never have problems, because they're so simple, and they've got plenty of monitoring around them. But when you say, "Observe the state of the system," for the most part all our state is state in a browser. How do we get observability into that?

    Jess Belliveau:

    A big thing is really that there's no one-size-fits-all, as well. When we talk about the SLO stuff, it's understanding what's important, not so much to your company but to your team. If you're delivering this product, what's important to you specifically? So an SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge of front-end stuff, but when we say we want to observe the state, we don't necessarily mean just the state specifically. You could want to understand, for each one of those APIs, when it's firing, what the request-response time is. So that might be an important metric, so you can start to see if one of those APIs is introducing latency and your user experience is degraded. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release, and since then we're seeing it's now in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something the team can then look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
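
    One hedged sketch of measuring that percentile latency: record request durations into an OpenTelemetry histogram, and let the backend chart p50/p95/p99 across releases. The instrument and route names are invented.

    ```ts
    import { metrics } from '@opentelemetry/api';

    const meter = metrics.getMeter('easy-agile-api');
    const requestDuration = meter.createHistogram('http_request_duration_ms', {
      description: 'API request duration in milliseconds',
    });

    // Wrap a handler so every request records its duration, tagged by route.
    async function timed<T>(route: string, fn: () => Promise<T>): Promise<T> {
      const start = Date.now();
      try {
        return await fn();
      } finally {
        requestDuration.record(Date.now() - start, { route });
      }
    }
    ```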

    Jared Kells:

    That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front-end application not running in the same order as it used to, or some complex interaction we didn't think of, so it's requesting more data than expected. The APIs themselves are never slow, for the most part, but we can have performance regressions that we wouldn't know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long this particular interaction took.
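
    In the browser, the standard Performance API is one way to time a specific interaction like that; 'drag-issue' is a made-up interaction name.

    ```ts
    // Mark the start and end of an interaction, then measure between them.
    performance.mark('drag-issue:start');

    // ... the drag-and-drop interaction runs ...

    performance.mark('drag-issue:end');
    performance.measure('drag-issue', 'drag-issue:start', 'drag-issue:end');

    const [measure] = performance.getEntriesByName('drag-issue');
    console.log(`drag-issue took ${measure.duration}ms`);
    ```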

    Jess Belliveau:

    Yeah. I've never really done that side of things. The other thing, I guess, is that you could be impacted by end users' environments as well. You could perceive-

    Jared Kells:

    Yeah sure.

    Jess Belliveau:

    ... Different performance because of their laptop or something, or their ISP, or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.

    Jordan Simonovski:

    Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you upload a source map of your minified JS to Sentry, deploy your code, and then if something does break, or works in a fairly unexpected way, that gets surfaced: Sentry will tell you exactly which line this kind of stuff is happening on. It's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-

    Jared Kells:

    Yeah, we use a similar tool, and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or clicked this button, that sort of thing, and we send those with our crash reports. It's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
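
    A sketch of that pattern, using Sentry's browser SDK as one example (Jared mentions a similar tool, not Sentry specifically, and the DSN below is a placeholder): record action labels, not state, as breadcrumbs that ride along with crash reports.

    ```ts
    import * as Sentry from '@sentry/browser';

    Sentry.init({ dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0' });

    // Log a label for each user action; Sentry attaches recent breadcrumbs
    // to any crash report it sends.
    function recordAction(label: string): void {
      Sentry.addBreadcrumb({ category: 'ui.action', message: label });
    }

    recordAction('issue.summary.updated');
    recordAction('board.issue_moved');
    ```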

    Jess Belliveau:

    Yeah, that's almost, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're shifting that: it's all stuff in the browser interacting, and you have that history of what the user did and how they've ended up-

    Jared Kells:

    In that state.

    Jess Belliveau:

    In that state, yeah.

    Jordan Simonovski:

    I guess even if you don't have a lot of microservices, like you're saying, for the most part your API requests are fine, but sometimes you have particularly large payloads-

    Jared Kells:

    We actually have to monitor, and I don't know, maybe you can help with this, we actually should be monitoring who we're integrating with. It's much more likely that we'll have a performance issue on a Xero API rather than our own... We don't see it, the browser sees it, which is-

    Jordan Simonovski:

    Yeah, and tracing does solve a lot of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend, and I can just talk about Node because I probably have the most experience writing Node stuff, you pretty much just drop dd-trace, which is Datadog's library for tracing, into your backend, and it hooks itself into all of the common libraries that you'll tend to work with, I think. Like if you're working with Express or a lot of the HTTP libraries, as well as a few AWS services, it will hook itself into those. And you can actually pinpoint: it will show you, on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
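
    The drop-in Jordan describes is roughly this; dd-trace wants to be initialised before anything else is imported, and the service name here is hypothetical.

    ```ts
    // tracer.ts: import this file first in your entrypoint so dd-trace can
    // patch Express, HTTP clients, and supported AWS SDK calls automatically.
    import tracer from 'dd-trace';

    tracer.init({
      service: 'my-backend', // hypothetical service name
      env: 'production',
    });

    export default tracer;
    ```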

    Jess Belliveau:

    It's funny, this is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third-party providers as well. Something I think is really important, and that sometimes gets missed, is that so many of us today rely on third-party providers. AWS is a huge one; a lot of people write apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever has 100% uptime, is always available, and they don't write their code in a way that deals with failures. I think that's super important. So many times now I've seen people use the AWS API without implementing exponential backoff. They're basically trying to hit the AWS API, it fails or they get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience and have a retry mechanism automatically built in. It doesn't really tie into the observability thing, but it's something.
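
    A minimal sketch of the retry-with-exponential-backoff idea Jess describes, in plain TypeScript and not tied to any particular SDK.

    ```ts
    // Retry an async call with exponential backoff plus jitter, instead of
    // failing straight through to the user on the first throttle.
    async function withBackoff<T>(
      fn: () => Promise<T>,
      maxAttempts = 5,
      baseDelayMs = 100,
    ): Promise<T> {
      for (let attempt = 0; ; attempt++) {
        try {
          return await fn();
        } catch (err) {
          if (attempt + 1 >= maxAttempts) throw err; // budget exhausted
          // 100ms, 200ms, 400ms, ... plus up to 100ms of random jitter.
          const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }

    // Usage: await withBackoff(() => callSomeThrottledApi());
    ```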

    Jared Kells:

    And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.

    Jess Belliveau:

    Well, they're using your app, exactly right. It reflects on you, so it's in your interest to guard against an upstream failure, or at least inform the user when that's the case. Yeah.

    Jared Kells:

    Well, I think we're going to have to call it, this podcast, because it's been an hour. We were instructed: max 45 minutes.

    Jess Belliveau:

    We could just keep going. We might need a part two! Maybe we can request [crosstalk 00:39:21].

    Jared Kells:

    Maybe! Yeah.

    Jess Belliveau:

    Or we'll just start our own podcast! Yeah.

    Jared Kells:

    So what were your biggest learnings today? Given that Angad and I are just learning about observability, Angad, what was your biggest learning today about observability?

    Angad Sethi:

    My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...

    Jess Belliveau:

    Any takeaway is a good takeaway! What about you, Jared?

    Jared Kells:

    I think, because we were going to talk about state management, and part of it was this ability we have at the moment, the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. We can load it into our app and see exactly how it was, just because of the way our state's designed. But what might be even cooler is to build some observability into that front end for support. I'm thinking: we have this button to send us your support information, which sends us a bunch of the state, but instead of console logging to the browser log, we could be logging in our front end somewhere, so that when they click "send support information," our customers would also be sending us the actions that they performed.

    Jared Kells:

    Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.

    Jess Belliveau:

    Yeah, for sure. It'd probably be a lot less intrusive, as well, than some of the third-party stuff that I've seen around.

    Jared Kells:

    Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.

    Jess Belliveau:

    Yeah

    Jared Kells:

    You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.

    Jess Belliveau:

    Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...

    Jared Kells:

    Thanks, all!

    Jess Belliveau:

    Thanks for having us, yeah.

    Jared Kells:

    Thanks all for your time, and thanks everybody for listening.

    Jordan Simonovski:

    Thanks everyone.

    Angad Sethi:

    That was [inaudible 00:41:55].

    Jess Belliveau:

    Tune in next week!