Easy Agile Podcast Ep.22 The Scaled Agile Framework
"Rebecca is an absolute gold mine of knowledge when it comes to SAFe, can't wait to continue the conversation at SAFe Summit 2022!"" - Tenille Hoppo
In this episode, Rebecca and Jasmin are talking:
📌 The value of the Scaled Agile Framework, who it’s for & who would benefit
📌 The Importance of having a common language for organizations to scale effectively
📌 When to connect the Scaled Agile Framework with your agile transformation
📌 Is there ever really an end state?
+ more!
📲 Subscribe/Listen on your favourite podcasting app.
Thanks, Jasmin and Rebecca!
Transcript
Jasmin Iordandis:
Hello, and welcome to the Easy Agile podcast, where today we're chatting all things Scaled Agile with Rebecca Davis, SAFe Fellow, SPCT, principal consultant and member of the SAFe framework team. Rebecca is passionate about teamwork, integrity, communication, and dedication to quality. She's coached organizations on building competitive, market-changing products at scale while also bringing joy to the work, for what is work without joy? Today, we chatted all things Scaled Agile: implementations, challenges, opportunities, and the idea of optimizing flow, the topic of a workshop Rebecca is hosting at the SAFe Summit in Denver in August this year. Hope you enjoy the podcast.
Jasmin Iordandis:
Hello everyone, and welcome to the Easy Agile podcast. I'm your host Jasmin Iordandis, product marketing manager here at Easy Agile. And today, we are delighted to welcome Rebecca Davis from the Scaled Agile framework. Welcome, Rebecca, and thanks for joining us.
Rebecca Davis:
Thanks. I appreciate being here. I'm excited.
Jasmin Iordandis:
Me too, especially because we are counting down the days before we get to meet you face to face, in person, at the SAFe Summit over in Denver, Colorado. And before we kick off our conversation, I just want to acknowledge the traditional custodians of the land from which we broadcast our podcast today, the people of the Djadjawurrung speaking country. We pay our respects to elders past, present and emerging, and extend that same respect to all Aboriginal and Torres Strait Islander and First Nations people joining us today. So before we kick off, Rebecca, can you tell us a little bit about yourself and your role within Scaled Agile?
Rebecca Davis:
Sure. I'm actually relatively new to working for Scaled Agile. I've been there a little over 90 days now, and I'm a member of the framework team, which means I help actually create the Scaled Agile framework and future versions of it. Prior to that, I led the LACE (Lean-Agile Center of Excellence) at a company called CVS Health, and I've worked at a bunch of different healthcare organizations over the years implementing or organizing agile transformation and digital transformation. And I think one of the reasons that Scaled Agile was interested in me joining the team is just a lot of different experience across business agility as a whole, outside of technology in addition to within technology. So marketing transformations and HR transformations, legal transformations. But I love being at Scaled Agile and being part of the framework team. It's really exciting to help more organizations, not just the one I'm at, really understand how to bring joy to their workplace and bring value out to the world.
Jasmin Iordandis:
Yeah, cool. And you've given a little bit of information there around why Scaled Agile was interested in you. What attracted you to Scaled Agile, and did you use the Scaled Agile framework in these previous roles that you've just described?
Rebecca Davis:
Yeah. Those are great questions. I think I'm going to try to answer both of them together. But the reason I have always been drawn to the Scaled Agile framework is I ran a few different organizations, both as owning my own company and then also working in startups and working with larger organizations, where I knew that agility was important. But I was struggling as a change leader to find a way to really bring connectedness across large amounts of people. And to me, that's what Scaled Agile does for us, is after a certain size, it's a lot easier to create this common language and this common way to move forward and produce value with the framework. I also really enjoy it because there's a lot of thought that's already kind of done for you.
Rebecca Davis:
So if you're in an organization and you're trying to create change or change leadership, I'd much rather be leading the conversations and my context and making sure that I have a pulse on my particular cultural environment and pull from all these pieces, from the framework, where the thought's already been done about what are the right words and what do we do next, and what's the next step. So I've just found it an invaluable toolkit as a change leader.
Rebecca Davis:
I joined the framework team for a few reasons. One, I'd led so much change in so many different areas that, it's not that I wasn't challenged anymore, but I was really looking for something larger and different, and I've always had a belief that I really want to be the change that I want to see in the world. And I think being part of the framework team gives me access to things like this and all over the world to really help connect the humanness of people alongside with all the great techniques that we've learned, and hopefully expand it and just create a better place to be in.
Jasmin Iordandis:
Yeah. Cool. And you kind of touched on that in your response, but if we had to say, who is the Scaled Agile framework for and who would it most benefit, what would you say to that?
Rebecca Davis:
Yeah. I guess my opinion on that is I believe the Scaled Agile framework is for people who believe that their organizations have it in them to be better, both internally inside of themselves, as well as have this gigantic potential to go help the customers they serve and may be struggling right now, to really realize that potential. So I don't really see the framework as it's for a specific role necessarily. I think it's for people who believe in betterness. And those people, I found, live across an organization and across multiple different roles, and the framework just really helps you align that.
Jasmin Iordandis:
Yeah. And I think one thing that's evident from SAFe, once you learn how all the different practices and ceremonies work together, is exactly as you've said around connectedness. And you also touched on having a common language. How important is that, when we're talking really large organizations with multiple different functions where, let's be honest, it's quite common for different functions to fall into different silos and for things to break down? So how important is that connectivity and that common language, so that an organization as a whole can scale together?
Rebecca Davis:
Yeah. I don't even know how to state how important that is. Specifically, the organization I just came from had over 400,000 people that worked there. And the last thing I want to do is debate what the word feature means, because we never actually end up in a conversation where we have an understanding of why we want the feature, or why we want this particular outcome, or how this outcome relates to this other outcome, if we're spending so much time just on word choice and having a conversation instead about what the word even means.
Rebecca Davis:
So I like it mostly because it gives us all this common framework to debate, and we need to be able to do that in really transparent and open ways across all of our different layers. So I don't even know how to quantify how much value it brings just to have this ability to bring stability, and the same language across the board, same word choice, same meaning behind those word choices, so that we can have all those debates that we need to have about what's the best possible thing we could be doing. Since everything that we can do is valuable, we have to decide that some things are more valuable than others.
Jasmin Iordandis:
Yeah. And I think that really talks to what you were saying about helping an organization reach its potential. It's easy to get bogged down in what you call things or how you discuss things, and to be able to align on a common meaning in the end, you kind of need that common structure or that common language. You're only going to get in your own way if you don't have it. So it makes total sense that the framework could really enable organizations on that journey. And in your experience, because it's implied in the name, it's about scaling agile. When we think of the Scaled Agile framework, we think of organizations of such a large size as the one you just mentioned, 400,000 employees. In your experience, what's a good time to introduce the Scaled Agile framework? Does it need to be right from the beginning? Does it need to be those organizations that are 400,000 people strong? Where is the right time to introduce the framework into an agile transformation?
Rebecca Davis:
Yeah. I think that's a really fascinating question, and my answer has changed over the years. I originally started researching Scaled Agile, because it was my first big transformation alongside of a large organization, and I knew there had to be some solutions out there to the problems I was seeing, and I discovered SAFe. But thinking back, I started my own startup company right out of high school actually. And I really wish that I would've had something to pull from, that gave me information about lean business cases, and speaking with my customer and getting tests and getting feedback. So I feel like the principles and the practices and the values are something that could be used at any size.
Rebecca Davis:
I think the part about scaling, the part about deciding like, "Hey, I'm going to do PI planning," I don't personally feel like you need to do PI planning if you have four people at your organization, because the point is to get teams across different groups to talk. You should definitely plan things 100%. So I think part of the idea is like, "When do I implement a train," or, "When do I have a solution train," or, "When do I officially call something LPM," versus just having discussions because my company is so small that we can all have discussions about things. I think those are a different part of implementing the Scaled Agile framework than just living and believing in the principles and the values and the mindset from whatever size or get-go you're at. Does that make sense at all?
Jasmin Iordandis:
That does make sense. And I guess then the question becomes, where do you begin and what would the first step be in implementing SAFe? And taking from your own experience, where do you start with this framework?
Rebecca Davis:
Yeah. I love that you asked that, as I've honestly seen this happen to me as well as some other change agents, where Scaled Agile gives us this thing called the implementation roadmap, and it has all the steps that you can start with. And it's proven, and companies use it and it works. And what I've found in my own change leadership is when I skip a step or I don't follow that because I get pressure to launch a train, instead of starting with getting my leaders at the right tipping point or having that executive buy in, it causes me so much pain downstream.
Rebecca Davis:
So if I were to give advice to somebody, it's, "Look, pull that map, the implementation roadmap, down from the SAFe site and follow it. And keep following it. And if you find that you..." I think that, when I look back and do my own retrospective, the moments where I've decided to launch a train without training my people, or start doing more product management practices without actually training my people, it causes me a world of hurt later on with coaching, with communication, with feedback. So it's there for that reason. Just follow it. It's proven.
Jasmin Iordandis:
Yeah. And that's really good advice. And I think when people look at the roadmap for SAFe, there's a lot on there. But when we're talking agile transformations, there's necessarily going to be a lot involved in getting there. So it kind of makes sense when all the thinking has been done for you and all those steps have been laid out. Just trust the process, I guess, is the message there, and follow through on all of that. And I think it's really interesting, because the first step with SAFe is, as you say, getting your leaders on board. And often, we might be attracted to doing the work better. So let's start with those ceremonies. Let's start with all those things that make the day to day work better. How important is starting with the leaders of an organization?
Rebecca Davis:
Yeah. I've run the grassroots SAFe implementations where you start with the bottom and then you kind of move up. And personally, and this is a personal opinion, I'd much rather take the time and the efforts to get the communication right with the leaders and get the full leadership buy-in than be in that place again, where I'm trying to grassroot to move up and I hit the ceiling. The one thing I used to kind of tell the coaches that reported to me, and something I believe in deeply, is what we're trying to do with transformation is a journey. It's not a destination. So because we want to start that journey healthy and with a full pack of food and all those things, we need to take the time to really go and be bold and have conversations with our leaders, get their buy-in to go to Leading SAFe.
Rebecca Davis:
If they're not bought in to coming to a two-day course, then why would we believe that they're going to come to PI plannings and speak the way that we hope they will and create the change that they need to really lead? So I think that's one of the most important things, if not the most important thing from the very beginning, is be bold as that first change leader in your organization, go make those connections.
Rebecca Davis:
It may take a while. I've been in implementations or transformations where it started with just me discovering issues that senior leaders or executives were having, and going and solving some of those, so that there was trust built that I was a problem solver. So I could ask for the one-hour executive workshop, which really should be a four to six-hour executive workshop, to get to the point where I could do the four to six-hour executive workshop, and then Leading SAFe. And if that's what it takes to gain you that street cred to go do it, then, man, go do it, because that's where you get full business agility, I think, is getting that really senior buy-in and getting that excitement.
Jasmin Iordandis:
Yeah. That's really interesting. And I think building that level of understanding and building that foundation, we can't go past that. And I guess on that as well, from your experience, you've kind of hinted at one there, but what have been some of the challenges that you've experienced in implementing SAFe or even just in agile transformations more broadly, and as well as some of those opportunities that the framework has helped to unlock? So let's start with the challenges. What's some of the hard things you've experienced about an agile transformation and even implementing the framework?
Rebecca Davis:
Yeah, I'll give some real examples, and this first thing is going to sound a little wishy washy, but I also believe it: the biggest challenge to transformation is you. So what I've discovered over the years is that I needed to step up. I needed to change. I think it's really easy to be in an organization and say, "My leaders don't get it," or, "Some won't understand," or, "It's been this way and I can't change it." And I think that the first thing you have to decide is that that's not actually acceptable to you as a person. And so you as a person are going to go fight. Not you're going to go try to convince somebody else to fight, but you are going to go fight. So I think that personal accountability is probably the biggest challenge, to wake up every single day and say, "I'm going to get back in there."
Rebecca Davis:
I think from an example point of view, I've definitely seen huge challenges when the executive team shifts. So when we've got a set of leaders that we did the tipping point, we've gone through Leading SAFe, we've launched our trains. And then the organization, because every organization is going through a lot of change right now, and people are finding new roles and retiring and all that, there's a whole new set of executive leaders. And I think one of the things to discover there is there are going to be moments where it sucks, but you have to go and restart that implementation roadmap again, and reach that tipping point again, because there are new leaders. And that's hard. It really is, and it drains you a little bit, but you've just got to do it.
Rebecca Davis:
I think other challenges I've run into is there's a point after you've launched the trains and after you have been running for a while, where if you don't pay attention, people will stop learning, because you're not actively saying like, "Here's the next thing to learn. Here's the next new thing to try." So I do think it's the responsibility of a change leader, no matter if you're a LACE leader or not, to pay attention to maintaining excitement, pay attention to the continuous learning culture and really motivate people to get excited about learning and trialing and trying.
Jasmin Iordandis:
Yeah. That's an interesting point. How have you done that?
Rebecca Davis:
Hmm. So I think a few things. One, I had big lessons learned that there's a point inside of a transformation where, as an SPC or as a change leader, that transformation is not yours anymore. So I had kind of a painful realization at one point that I had in my head the best next thing for the organization, and I was losing the pulse of the people who are actually doing the work. So I think what I've discovered after that is, to me, there's a point where your LACE members and your change leaders and your SPCs need to start coming from a lot more areas. And honestly, they need to start to be made up of people who are not, at the moment, excited about the SAFe implementation, so you can hear from the pulse of the people.
Rebecca Davis:
And then I think if you can get those people and invite them in and say, "I'm inviting you to share with me what's frustrating, what's good, what's bad, what's great, as well as inviting you to tell me all the things that you're discovering out there in webcasts or videos that you'd like to try, but we're not trying yet," and start giving back the ability to try new things, even things that you feel are probably going to be anti-patterns, because they need to try them anyway. So kind of like what a scrum master would do with a team: "Yeah, go try, and then we'll retrospect." I think you have to do that at scale and let people get excited about owning their own transformation.
Jasmin Iordandis:
And what's the balance there between implementing the framework and taking all the good stuff that the framework says is good to do, and then letting people experiment and try those things, as you say, that may be anti-patterns? Where's that sweet spot to allow that autonomy and that flexibility and that experimentation while still maintaining the integrity of the framework?
Rebecca Davis:
So I think the interesting thing is they are not actually different. So in the framework, we say hypothesis first, test first. So what I found is a layered kind of brain path: there are the steps in the framework, making sure we have teams and balanced trains and all the principles and the values, and then you live those principles and values all the time while you're testing new things. So you test first, like, "Hey, I want to try having my train off cadence from the other trains. I think it would be helpful for us." "Cool. Test that." And what we have to test it against is: are we still living our principles? Are we still applying our values? Are we still applying the core fundamentals of agility and lean throughout that test, and also as proof points?
Rebecca Davis:
So do we have an outcome where, "Hey, I just made my train into a silo," or do we have an outcome where, "Well, now we have two different PI plannings within the overall PI cadence, where one of them we merge with all the other trains and the other one is shorter because our market cadence is faster"? Well, that's a beautiful win. So I think the key is it's not different, but one of the test points is to make sure to check in on those principles and values.
Jasmin Iordandis:
Yeah. Have you ever seen that work well? The example that you just provided with the PI cadence, that makes complete sense, and it doesn't seem like it's going against the grain with anything that SAFe is there to help you achieve.
Rebecca Davis:
Yeah, I think so. This was a little bit of what my summit talk was on last year. During COVID, there were some trains. We had, I don't know, 30 trains. Two of them had new requirements emerging daily from all the different states across the United States, from the government, from everything. Those trains were making sure everybody could get vaccinated across the United States. That's really darn important. And they needed to re-plan sometimes daily. It just didn't make sense to say, "Now we're just going to stop and go into PI planning for three days," when there wasn't any way that they could even think about what the next day's requirements could be. Since then, they still have a faster market rhythm. Then there are other trains that have a set of knowns: trains that know that these holidays are when we need to release something, or end of year is when we need to make sure that we've got something ready.
Rebecca Davis:
COVID is still in a reactive state. So where they've ended up this year is those trains are still doing PI planning, from my knowledge, I'm not there anymore, but from my knowledge. But they do eight a year instead of four a year. Four a year are on the same cadence and the other four are not, and it meets both needs. So I do think the key is test, and don't test just for the sake of it, just because something feels dry or you get a new leader and they haven't gone through Leading SAFe. Test because something actually doesn't feel right: "We're not meeting our principles or values right now. We think that we could meet them better in this way. We think we could accelerate the flow of value in this way. Let's try it."
Jasmin Iordandis:
Yeah, cool. And on that, what are some of the red flags that you've seen in practice where those values aren't being met to be able to say, "Hang on a sec. This isn't working. We need to switch course"?
Rebecca Davis:
Yeah. Some of the things I've seen are the whole thing around people prioritizing their hierarchy or their piece of the organization over the enterprise value. So I've definitely seen people come to me and say, "Hey, I'd like to do this test." And when I ask the reasons why, a lot of the reasons are a thinly veiled, "Because I would like more control."
Rebecca Davis:
So I think back to the values piece: "Okay, what's your why? Let's start with why. Why would you like to try something? What does that trial outcome achieve?" And if it's really hard to articulate, there's probably a bad thing going on, or if it is articulated and it actually goes against agility or lean practice, or diminishes flow, or creates a silo, that's an initial gut check. I think throughout testing, it's important to, the same way that we would do with iterations, have check-ins and demos, not just of the product being produced, but of what the change is producing. So figure out what those leading indicators would be and treat it the same way as we would treat a feature hypothesis or an epic hypothesis. We have some outcome we believe we could achieve. We're 100% open to being proven wrong. These are the things that we want to see as leading indicators of success, and we're really open with each other.
Jasmin Iordandis:
Yeah, cool. And it sounds like what's key to that though is having some concept of what that intended outcome is as a result of that experiment. It's not just going in for, as you say, the sake of doing an experiment. You want to have an idea of where you want to end up, so you can see if we're actually getting there or not.
Rebecca Davis:
Yeah.
Jasmin Iordandis:
That's really fascinating. And I think experimentation and iterative improvement, it kind of goes together. It's not just blindly following something because that's what you are supposed to do. It's preserving the values. That's a really interesting concept. And I think in that, would also come enormous opportunity. So in your experience as well, going back to the times where you've brought SAFe to an organization, or you've been going through an agile transformation, what are some of those opportunities that you've seen the framework unlock for enterprises or organizations that you've been leading those transformations within?
Rebecca Davis:
Yeah. I was always drawn to this idea of true value flow and business agility. So for me, what Scaled Agile helped unlock in a few of my organizations is, I always targeted that, like I'm not trying to make my thing better, I'm trying to make everything better. And with that mindset, really pushing for anybody should be able to take a class. Anybody should be able to take any of the classes. And these days, the enterprise subscription helps with that a lot. When I first started, we didn't have that. So it was also like anybody can take a class, and there should be creative ways of getting it paid for.
Rebecca Davis:
But through that kind of invite model of really anybody, I had a nurse come take one of my SAFe for Teams classes, just because she was curious and she saw something about it on my blog. That ended up with her being more excited and getting to do agile team coaching for a set of nurses who were highly frustrated, because their work on an individual basis was ebbing and flowing so much that they felt like they weren't giving good patient care. Coaching them on Kanban had them all get really excited, because they got to nurse as a team, whoever was available took the next patient case, and the patients were happier. It's just being able to invite people in and then say yes to coaching all of these roles that are so meaningful, and they're so excited, and they're something different.
Rebecca Davis:
And that same model ended up going from nothing to having a marketing person randomly take one of my Leading SAFe classes, which then turned into them talking to the VPs of marketing, which then turned into an 800-person marketing implementation. So I think the key is be open and spend time with the curious. And it doesn't matter if they're in your org. It's not like that's what I was paid to do, it's just really fun. So why not? If somebody wants to talk to you about agile, talk to them about agile. It's really cool.
Jasmin Iordandis:
Yeah, cool. And I think what I love about that is often agile may be associated just with software development teams. But as someone who's in marketing myself, I love the benefit and the way of thinking that it can provide to very traditional challenges, and the way that it can unlock those challenges in ways that have not been approached before. And I think there's something to be said in that too, around what you were saying earlier around maintaining excitement. And I feel like this question's already been answered, because often it's discussed, "Okay, we are scaling agile, we're going through a transformation." And it implies that there's this end state where it's done. It's transformed or we've scaled agile, but it doesn't sound like that's the case at all.
Rebecca Davis:
No, I don't think at all. I think mostly the opposite of... If you look at even yourself as a human, your whole life, you're transforming in different ways. Everything's impacting you. The environment's impacting you, whatever happens in your life is just this whole backpack that you carry around and you're transforming all the time. And the exact same thing, I think, for an organization and company. Today's age is nuts. There're updates all the time, there's new technology all the time. You and I are doing a talk from completely different countries, and there's change literally everywhere.
Rebecca Davis:
So yeah, I think part of transformation is helping your organization feel comfortable or as comfortable as possible with the rate of change happening and all the people within it, and not see change as a bad word, but as a positive thing where we can make betterness out there. And it's forever. It's a journey. It's not done. I really like Simon Sinek when he talks about that infinite game. I just feel really close to that of, we're not in it to win this moment or this year, we're in it to make a better future for ourselves and our children, and that's going to take forever. The people are in it right now and they've got to be excited about that.
Jasmin Iordandis:
Yeah. And I think that's that balance of delayed gratification, but constant improvement. So you'll feel and experience the improvement along the way. It's not like it'll be way out in the future where you won't feel the benefit of what you're doing, but it's something that's going to be built up and happen over time.
Rebecca Davis:
Yeah. And I think you reminded me just from saying that. I did that marketing transformation, and I just deeply remember a call with one of the marketing VPs who, after four or five iterations, I did a check in with her. And she's like, "My team is so happy. Is this because of agile? Is this what agile is, is happy with [inaudible 00:32:17]?" "Yes."
Jasmin Iordandis:
Yeah, joy at work, right?
Rebecca Davis:
Yeah.
Jasmin Iordandis:
Isn't that what it's all about? That is so cool. And yet the goal initially is never to go out and make people happy. It's just one of those bonus kind of side effects, a happy side effect.
Rebecca Davis:
Yeah.
Jasmin Iordandis:
Awesome. And I think I really want to talk about this idea, because you've mentioned it a couple times, you've even just mentioned then marketing, nursing. But then when you're in these larger organizations, you've got all these different functions. And I think it raises this idea around organizing around value. So I want to make sure we talk a bit about that, because value doesn't just happen from one function, or it's not delivered from just one function or one team. It's something that many people across an organization may have a hand in delivering. But I really want to get your take around this concept of organizing around value. What does that mean and what does that look like?
Rebecca Davis:
Yeah. I think there's a base concept that is also in that implementation roadmap around what happens first. So how do we first organize around value, because organizations tend to be organized around hierarchy. I am a VP of marketing and I have marketing all the way down. And so there's that first step of identifying what the value is that you produce as an organization. So being able to articulate it to begin with, which is not always an easy conversation. Sometimes it takes a bit of time, and then organizing all the different types of roles around what that value is. So I think that's your first thing in what most organizations implementing scaled agile start with, is just identifying it, forming around it, which ends up being what your trains end up being.
Rebecca Davis:
My experience is, because of that same rapid market change, with the world changing so fast, it's really important to re-evaluate how you've organized around value over time. So in my experience, one of the really healthy things that we used to do is, at the end of each year, give a chance to look at the different train structures and look at how we've organized and say, "Is this still right? And what's our strategy for next year? Where are we trying to head for our consumers and our users? And is there a different way to organize, that helps us with that?" And I say give a chance because in some years, we'd be like, "No. 80% of our portfolio is actually good to go. Things are flowing. We're doing okay." 20% of it has an entirely new strategic shift that's going to hit them, or, "Last year felt not good. We had too many dependencies. We didn't have the right people on the right trains," all those things.
Rebecca Davis:
And so at least take a pause and look at it, and see if our value still means the same thing as it did a year ago or two years ago. Do we need to reorganize? What does that mean? What does the change leadership around it look like if we do need to? So that we're always focused on value, and it's not a definition that we gave ourselves five years ago while we stopped noticing that the world has changed.
Jasmin Iordandis:
Yeah. A living definition, because it changes depending on what's going on in the world, but also what's going on within the organization, coming back to that idea of experimenting as well, like if you've tried out a new way of working and that's gotten in the way. But something that you said there really stood out: "Okay, it didn't feel good. We might have had too many dependencies." And that brings up the idea of, "Well, how does that flow of value happen?" That sounds like something stifling the delivery of value. So how do you optimize that flow, particularly when there may be multiple people delivering that value?
Rebecca Davis:
Yeah. And I think Scaled Agile gives us some tools for that. So I think one of them is that first session I talked about, value stream identification, so that you have a real process for talking and discussing with the right blend of people: what is the value and how can we organize around that? Past that point, there's another tool that I see used far less than I would expect, which is value stream mapping. So after we've identified it, now can we actually map what's happening? From concept to cash, which teams are doing pass-offs? How long does it take to get an answer on an email? How long is it taking from testing all the way to release?
Rebecca Davis:
So doing a lot of intentional measurement. Not measurement because we're judging people, but intentional measurement of: we organize this way, this is where all the pieces are connecting, how long things are taking, as well as how people feel inside of their steps. Does it feel siloed? Does it have an outcome? Did we put all of the designers and HR people and engineers on a train, but we made them separate teams, and so it still doesn't feel connected? That's what mapping's for. And those maps, and also the program boards that actually visualize, "Here's the dependencies," versus, "At the end of the PI, this is what those dependencies actually ended up being."
Rebecca Davis:
It's not that dependencies are bad, but they should be adding value, not restricting flow. So I think those connected stories, as well as things like employee survey scores and just employee happiness, are really good inputs into whether we're delivering flow. And it is a blended view. Some of it's qualitative and some of it's quantitative. Are our own internal measures showing us good, bad and different, and how are our customers doing? Do they feel like they're receiving value, or are they receiving bits and pieces and unsure about the connected value? I think all of those are indicators.
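To make the "intentional measurement" Rebecca describes a little more concrete, here is a minimal sketch of the kind of arithmetic a value stream mapping exercise produces. The work-item fields and stage names are illustrative assumptions for this sketch, not something taken from the episode.

```typescript
// Hypothetical record captured while value stream mapping a work item.
interface WorkItem {
  id: string;
  startedAt: Date;   // when the item entered the value stream ("concept")
  releasedAt: Date;  // when it reached the customer ("cash")
  handoffs: number;  // how many times it changed hands between teams
}

// Lead time in days from concept to cash for a single item.
const leadTimeDays = (item: WorkItem): number =>
  (item.releasedAt.getTime() - item.startedAt.getTime()) / (1000 * 60 * 60 * 24);

// Simple flow indicators across a set of items: average lead time and
// average number of hand-offs, the kind of numbers a mapping workshop surfaces.
function flowIndicators(items: WorkItem[]) {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return {
    averageLeadTimeDays: avg(items.map(leadTimeDays)),
    averageHandoffs: avg(items.map((i) => i.handoffs)),
  };
}
```

The point is less the code than the habit: as Rebecca notes below, numbers like these only matter if they lead to a conversation and an action rather than becoming vanity metrics.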
Jasmin Iordandis:
Yeah. And would you say you'd need to have an idea of what those indicators are beforehand, so you can keep an eye on them as the PI progresses? So for example, you've done your value stream mapping, you've built your ART. At that point, do you identify what those measurements of flow ought to be and keep an eye on them, or is it more retrospectively where you see these kind of things getting a little bit stuck?
Rebecca Davis:
I think there's both. So definitely those metrics that we indicate inside of the framework are healthy, good for teams and trains and solution trains and portfolio. So I think there is a set of metrics that you should and can utilize. Retrospectives are key, because retrospectives create action. So while we measure, then what's the conversation we have about them? Because what we don't want is vanity metrics. And my personal way of defining vanity metrics is any metric that you do nothing with.
Rebecca Davis:
So I think a key is use them to hold conversations and create outcomes, and create actions and make sure that you're prioritizing those actions. I think there's another piece of just understanding that this is not just about team and train. So teams and trains definitely do need to improve and measure themselves, but so does the portfolio, so does the enterprise, so do the pieces that connect to each other across different trains. So I do think if you over focus on, "Let's just make our teams go faster," you may be missing the whole point of how do we make our organization flow better, which may or may not equate to moving faster right away.
Jasmin Iordandis:
Yeah. Yeah. And team and train don't exist in a vacuum within that organization, like a whole bunch of-
Rebecca Davis:
No, [inaudible 00:40:43].
Jasmin Iordandis:
Yeah. Well, I think we've touched on some really, really interesting concepts, and I just can't wait to hit the SAFe Summit, which is a really good segue to the fact that the next time we meet, Rebecca, it will be in person. And you're hosting a workshop at the summit. Can you give us a sneak peek of what we can expect to be excited about?
Rebecca Davis:
Yeah. First of all, when we meet each other in person, I'm very short. I think I'm maybe five foot. So that'll be exciting. So Harry, on the framework team, and I are running a workshop about flow. So we'll be doing a flow workshop. I can't talk about all of it yet, because some of it we're going to announce inside the summit, but I'm really excited. So I think if you do sign up for our workshop, you're going to get active advice, and be able to work alongside other organizations and other people, really understanding flow, how to apply improvements to flow, how to identify blockers to flow and what to do about it. So we're really focusing on why certain things matter and what you can specifically do about it, whether you're at the team level or the train level or solution level or the portfolio level.
Jasmin Iordandis:
Cool. That sounds exciting.
Rebecca Davis:
And we [inaudible 00:42:08] a lot of other workshops, but definitely come to ours.
Jasmin Iordandis:
Well, we've just spoken about the importance of flow, so it makes sense. Right?
Rebecca Davis:
Yeah.
Jasmin Iordandis:
Awesome. Well, I personally am really looking forward to coming to SAFe and coming to Colorado and to get to chat with you a little bit more. But thank you so much for your time and joining us and sharing your expertise and experience on agile transformations, scaling agile and the SAFe framework itself. Thank you so much for your time, Rebecca.
Rebecca Davis:
Yeah, I appreciate it. And I look forward to maybe one day being able to do this in person with you in your own country. So that'll be really awesome.
Jasmin Iordandis:
Yeah. Cool. That would definitely be awesome. Thanks a lot.
Rebecca Davis:
Yeah. Thanks.
Related Episodes
- Podcast
Easy Agile Podcast Ep.12 Observations on Observability
On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.
Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.
💥 What is observability?
💥 How can you improve observability?
💥 What's the end goal?
"This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""
Be sure to subscribe, enjoy the episode 🎧
Transcript
Jared Kells:
Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.
Jared Kells:
So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting that we've got a couple of primarily front end engineers and Angad and I are going to share some front end technical stuff and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. So I'll pass it over to Jess.
Jess Belliveau:
Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?
Jordan Simonovski:
I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.
Angad Sethi:
Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.
Jared Kells:
Nothing fancy!
Jess Belliveau:
Don't sell yourself short.
Jared Kells:
Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?
Jess Belliveau:
Yeah, yeah. That's it, we'll wrap up!
Jared Kells:
Podcast over! Tell us about observability.
Jess Belliveau:
Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this and, for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience, that maybe people don't know a lot about. And there's a lot of movement in the broad IT scope, I guess, that you could talk about. There are so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about, and it's something that you can give an introduction to without getting too technical. We don't want to get too far into the weeds, because this is something you can go really deep on, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.
Jess Belliveau:
Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.
Jordan Simonovski:
Okay!
Jess Belliveau:
So we thought we'd try and describe to you, first of all, what observability is. Because the term doesn't give you much on its own. It gives you a little hint, but it'll be good to baseline what we're talking about when we say observability. And then, why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry buzzwords and commit to stuff that you might not need, that sort of stuff.
Jared Kells:
Yep.
Jordan Simonovski:
Yep.
Jess Belliveau:
We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then, because we're not going to try and get too deep, we thought we could just give a few pointers to some websites and some YouTube talks for any further reading people want to do, and go from there. So yeah, Jordan, you want to-
Jared Kells:
Sounds good.
Jess Belliveau:
Yeah. I hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, that's something we should, if there's stuff that you think we don't cover or that you want to know more, ask away.
Jordan Simonovski:
I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to differentiate it from what's currently out there. A lot of us working in tech know about monitoring and logging and things like that. And I think they serve their own purpose and they're not in any way obsolete either, things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzzwords.
Jordan Simonovski:
But even for things like a traditional kind of monolith, observability really serves to help us ask new questions of our systems. So the way it tends to get explained is monitoring exists for our known unknowns. With seniority comes the ability to predict, almost, in what way your systems will fail. The longer you're in the industry, you know this: a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.
Jared Kells:
I was going to say that!
Jordan Simonovski:
I'll try not to get too much into-
Jared Kells:
Runs out of memory!
Jordan Simonovski:
Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about, you see, it's an unknown unknown of our system, that we want to find out about and ask new questions from. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.
Jared Kells:
Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might log memory, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-
Jordan Simonovski:
Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually serving some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of these additional attributes. You can throw user agents in there if you want to. The idea of observability is that you're not necessarily constrained by high-cardinality data, high-cardinality data being data sets that can vary quite widely in terms of the kinds of values they represent, or the combinations of values that you could have.
Jordan Simonovski:
So if you want to ship metrics on something on a per-user basis, and you want to look at how different users are affected by things, that would be considered a high-cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that only users from a particular time zone or a particular device are experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things, or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
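To make the "events with context" idea concrete, here is a minimal sketch using the OpenTelemetry JavaScript API. The span name and attribute values are illustrative assumptions, not something discussed in the episode.

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('checkout-service');

// Wrap one request in a span and attach the kind of high-cardinality
// context Jordan mentions: host, region/time zone, user, user agent.
async function handleCheckout(req: { userId: string; userAgent: string }) {
  return tracer.startActiveSpan('checkout.submit', async (span) => {
    span.setAttribute('host.name', process.env.HOSTNAME ?? 'unknown');
    span.setAttribute('deployment.region', process.env.REGION ?? 'unknown');
    span.setAttribute('enduser.id', req.userId);          // per-user context
    span.setAttribute('http.user_agent', req.userAgent);  // per-browser context
    try {
      // ... do the actual work here ...
      span.setStatus({ code: SpanStatusCode.OK });
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```

Being able to slice on attributes like `enduser.id` or `http.user_agent` is what lets you ask the "unknown unknown" questions: which users, browsers, or regions an error is actually confined to.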
Angad Sethi:
Would it be fair to say-
Jared Kells:
Yeah. It's [crosstalk 00:11:02].
Angad Sethi:
Oh, sorry Jared.
Jared Kells:
No you can-
Angad Sethi:
Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?
Jordan Simonovski:
Yeah.
Angad Sethi:
Oh.
Jess Belliveau:
And better equip you to find them. One thing I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's sort of giving you some guidance around these things: having good logging coming out of your application, so structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things around a bit, instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside trying to peer in with probes and checks like that. The idea with metrics is the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
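As a small illustration of the structured logs Jess mentions, here is a sketch using the pino library for Node.js; the library choice and field names are our assumption for the example, not something prescribed in the episode.

```typescript
import pino from 'pino';

// pino emits one JSON object per log line, so every entry has the same shape
// and can be parsed, filtered, and correlated downstream.
const logger = pino({ level: 'info' });

logger.info(
  { event: 'issue.moved', issueId: 'EA-123', userId: 'u-42', durationMs: 87 },
  'issue moved between sprints'
);
// => {"level":30,"time":...,"event":"issue.moved","issueId":"EA-123","userId":"u-42","durationMs":87,"msg":"issue moved between sprints"}
```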
Jess Belliveau:
Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?
Jordan Simonovski:
Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.
Jess Belliveau:
Oh, I didn't sign up for that!
Jordan Simonovski:
I think why teams would want it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability tooling yourself in house. But observability capabilities themselves are not something you achieve just by buying a thing: you can't buy dev ops, you can't buy Agile, you can't buy observability either.
Jared Kells:
Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-
Jess Belliveau:
Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.
Jared Kells:
Yeah, sorry, sorry, yeah! Go on.
Jordan Simonovski:
You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time, and to build good operational hygiene and good production excellence, as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. We have CI/CD systems in place. We have feature flags now, which help us decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PMs now if you want to, with feature flags, which is great.
Jordan Simonovski:
But what you can also do now is completely close this loop, and as you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for Beta releases or something of our application, and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLO's in place as well. And SLO's-
Jared Kells:
What's an SLO?
Jordan Simonovski:
Service level objectives. They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features, or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. The 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser, or this particular region we've deployed this app to." Let's say if you have a global deployment of some kind, you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them." And you can roll it back before you impact 100% of your users.
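A minimal sketch of the SLO and error-budget arithmetic Jordan is describing, with made-up numbers and an assumed 99.9% target:

```typescript
// SLI: the fraction of user interactions that succeeded, e.g. "issue drag
// completed within 2 seconds". These counts would come from your telemetry.
const goodEvents = 998_200;
const totalEvents = 1_000_000;
const sli = goodEvents / totalEvents; // 0.9982

// SLO: the target we set for ourselves, e.g. 99.9% of interactions succeed.
const slo = 0.999;

// Error budget: how much failure the SLO allows, and how much we've burned.
const allowedFailures = (1 - slo) * totalEvents;            // 1,000 failures allowed
const actualFailures = totalEvents - goodEvents;             // 1,800 failures
const budgetRemaining = 1 - actualFailures / allowedFailures; // negative => over budget

console.log({ sli, slo, budgetRemaining });
// Here budgetRemaining is -0.8: the budget is spent, which is the signal to
// stop shipping features and work on reliability, as Jordan describes.
```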
Jared Kells:
Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-
Jess Belliveau:
Yeah, that's a really great example, right?
Jared Kells:
That's what I really care about.
Jess Belliveau:
The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know, whereas if you're getting that deeper level info of the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.
Angad Sethi:
How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?
Jordan Simonovski:
Yeah. I think that would come with good instrumenting-
Angad Sethi:
Good question?
Jordan Simonovski:
Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think what a lot of Agile and dev ops principles are telling us now is that we do want our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measuring your success metrics. So if you know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
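As a sketch of what "measure exactly what you care about" could look like in code, here is a counter instrumented with the OpenTelemetry metrics API. The metric and attribute names are illustrative assumptions for the drag-and-drop example in the conversation.

```typescript
import { metrics } from '@opentelemetry/api';

const meter = metrics.getMeter('easy-agile-board');

// Count the user interaction the business actually cares about:
// an issue being dragged onto a sprint, by outcome.
const dragCounter = meter.createCounter('issue_drag.count', {
  description: 'Issue drag-and-drop attempts, by outcome',
});

function recordDrag(outcome: 'success' | 'failure', boardType: string) {
  dragCounter.add(1, { outcome, 'board.type': boardType });
}

// Called from the UI or API handler once the interaction completes.
recordDrag('success', 'sprint');
```

The ratio of successful to total counts from an instrument like this is exactly the kind of SLI that feeds the SLO discussion above.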
Jared Kells:
I think that we have to build-
Jordan Simonovski:
Yeah?
Jared Kells:
Oh, sorry Jordan.
Jordan Simonovski:
No, you go.
Jared Kells:
I was just going to say we have to build our apps with integration testing in mind already. So doing browser based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.
Jess Belliveau:
Yeah, and the actual how, the actual writing code part, there's this really great project, the OpenTelemetry project, which provides all these sorts of APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" Or, "How do I emit metrics?" They provide all these helpful libraries and includes that you can have, because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
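A minimal sketch of what that vendor-agnostic instrumentation can look like with the OpenTelemetry API, assuming an SDK and exporter are configured elsewhere in the app (that configuration, not this code, decides which backend receives the spans). The span and attribute names here are illustrative.

```typescript
// Instrumenting a feature with the OpenTelemetry API. The SDK/exporter setup
// that ships the spans to a vendor is assumed to live elsewhere in the app.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("login-service");

export async function loginUser(username: string, password: string): Promise<boolean> {
  // startActiveSpan creates a span and makes it current for anything called inside.
  return tracer.startActiveSpan("user.login", async (span) => {
    try {
      const ok = await checkCredentials(username, password);
      span.setAttribute("login.success", ok);
      return ok;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

// Placeholder for the real credential check.
async function checkCredentials(username: string, password: string): Promise<boolean> {
  return username.length > 0 && password.length > 0;
}
```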
Jess Belliveau:
And then the other thing that comes to mind with something like OpenTelemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. And you would push your metrics and your logs and your traces to them and they give you a UI to display them. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start with Datadog and then in six months' time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.
Jordan Simonovski:
I think NorthX.
Jess Belliveau:
Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
Jordan Simonovski:
Your data structures remain the same.
Jess Belliveau:
Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.
Jared Kells:
Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.
Jess Belliveau:
Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.
Jordan Simonovski:
Observability suggests dashboards, right?
Jess Belliveau:
Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. And for me, production excellence, or at Apptio we call it production readiness, operational readiness and that sort of stuff, is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a really great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, we've all probably been there, we've deployed code to production and we have no observability, and we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix, and it's hard to learn from that, or you have to learn from that I guess, and protect the user against future stuff, yeah.
Jess Belliveau:
When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which is really this idea that your application is logging, first of all. Quite important as a baseline starting point, but then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line, with a timestamp, an error message. Whatever the developer decided to write to the standard out, or to the error file or something like that. Now I think there's a general move to having JSON, an actual formatted blob with that known structure so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff. It requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
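A small sketch of the difference between free-text and structured logging. The field names are illustrative, and in practice a logging library such as pino would normally produce the JSON for you.

```typescript
// Unstructured: fine for humans, hard to query programmatically.
console.log(`2024-01-01T00:00:00Z ERROR login failed for user 42`);

// Structured: one JSON object per line, so log tooling can filter on any field.
function logEvent(
  level: "info" | "warn" | "error",
  message: string,
  fields: Record<string, unknown> = {}
): void {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      message,
      ...fields,
    })
  );
}

logEvent("error", "login failed", { userId: "42", reason: "bad_password", requestId: "req-123" });
// => {"timestamp":"...","level":"error","message":"login failed","userId":"42",...}
```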
Jordan Simonovski:
I think, at Atlassian, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code might get more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing or surfacing the things you know might break, those would probably be your quickest wins.
Jess Belliveau:
Well, let's mention some things for further reading. If you want to go get the whole picture, real observability started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there are some great chapters on that. O'Reilly has a book, I think, just dedicated to observability now.
Jordan Simonovski:
I think that's still in early release, if people want to Google it.
Jess Belliveau:
The OpenTelemetry stuff, we'll drop a link to that. I think that's really handy to know.
Angad Sethi:
From [inaudible 00:26:12], which is my perspective as a developer, say I wanted to introduce or use Datadog at Easy Agile. I'm not very familiar or very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability, sorry, at my day job or at my workplace?
Jordan Simonovski:
I would lean, and I could be biased here, Jess, correct me or give your opinion on this, I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-
Jess Belliveau:
What does SLO stand for, Jordan?
Jordan Simonovski:
Okay, sorry. Buzzwords! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual and you can pay people money if you do breach those. An SLO is something you set in your team and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary, it's not unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of these.
Jordan Simonovski:
Datadog, I think, the last time I used it, had an SLO offering. But I think, like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability. You'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user-facing things are we implementing that we can start to measure?"
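As a rough sketch of the arithmetic behind an SLO and its error budget, here is one way to evaluate a success-ratio target over a window. The event counts and the 99% target are illustrative assumptions.

```typescript
// Evaluating an availability-style SLO and its error budget.

interface SloResult {
  sli: number;             // measured success ratio over the window
  target: number;          // the SLO, e.g. 0.99
  errorBudgetUsed: number; // fraction of the allowed failures already consumed
  breached: boolean;
}

function evaluateSlo(successfulEvents: number, totalEvents: number, target: number): SloResult {
  const sli = totalEvents === 0 ? 1 : successfulEvents / totalEvents;
  const allowedFailures = (1 - target) * totalEvents;
  const actualFailures = totalEvents - successfulEvents;
  const errorBudgetUsed = allowedFailures === 0 ? 0 : actualFailures / allowedFailures;
  return { sli, target, errorBudgetUsed, breached: sli < target };
}

// Example: "the user could log in and drag an issue" succeeded 99,200 times out of 100,000
// against a 99% target: the SLO is met, but 80% of the error budget is already spent.
console.log(evaluateSlo(99_200, 100_000, 0.99));
// => { sli: 0.992, target: 0.99, errorBudgetUsed: 0.8, breached: false }
```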
Jordan Simonovski:
Other things you could probably look at are, I think they're all covered in the book anyway, data freshness in a way. You want to make sure the data being displayed to users is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer like a ping check, where you're just saying, "Yes, HTTP OK. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea, at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
Jared Kells:
We're going to talk a little bit about state, and how it works with some of these very front-end-heavy applications that we're building. The applications we build basically just run inside the browser, and the traditional state, as you would think about it, is just calling a very simple API that writes some things into the database with some authentication, and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny APIs just never have problems, because they're just so simple. And, well, they've got plenty of monitoring around them. But when you say, "Observe the state of the system," for the most part, our state is actually state in a browser. And how do we get observability into that?
Jess Belliveau:
A big thing is really that there's not one size fits all, as well. When we talk about the SLO stuff as well, it's understanding what is important, not so much maybe to your company but to your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge, as well, of front end stuff, but when we say we want to observe the state as well, we don't necessarily mean specifically just the state. You could want to understand, with each one of those APIs, when it's firing, what the request response time is for that API firing. So that might be an important metric. So you can start to see if one of those APIs is introducing latency, and so your user experience is degraded. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release and since then, now we're seeing it's in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team then can look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
Jared Kells:
That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's that something in this very complicated front end application is not running in the same order as it used to, or there's some complex interaction we didn't think of, so it's requesting more data than expected. The APIs are returning. They're never slow, for the most part, but we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long it took for this particular interaction to happen.
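One hedged sketch of that browser-level interaction timing, using the standard Performance API. The interaction name and the collection endpoint are assumptions for the example.

```typescript
// Timing a single user interaction in the browser and shipping the duration somewhere
// it can be aggregated into percentiles per release.

function startInteraction(name: string): void {
  performance.mark(`${name}:start`);
}

function endInteraction(name: string): void {
  performance.mark(`${name}:end`);
  performance.measure(name, `${name}:start`, `${name}:end`);

  const [entry] = performance.getEntriesByName(name, "measure").slice(-1);
  if (entry) {
    navigator.sendBeacon(
      "/rum-metrics", // hypothetical collection endpoint
      JSON.stringify({ interaction: name, durationMs: entry.duration })
    );
  }
}

// Usage around the interaction you actually care about:
startInteraction("drag-issue");
// ... user drags the issue, state updates, re-render finishes ...
endInteraction("drag-issue");
```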
Jess Belliveau:
Yeah. I've never done that side of things. The other thing, I guess, is that you could potentially be impacted by end users' environments as well. You could see-
Jared Kells:
Yeah sure.
Jess Belliveau:
... Variations in performance on their laptop or something, or their ISP, or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.
Jordan Simonovski:
Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you'll upload a source map for your minified JS to Sentry, deploy your code, and then if something does break or work in a fairly unexpected way, that tends to get surfaced; Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-
Jared Kells:
Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
Jess Belliveau:
Yeah, that's almost like, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're more shifting that. That's sort of all stuff in the browser interacting and just having that history of this is what the user did and how they've ended up-
Jared Kells:
In that state.
Jess Belliveau:
In that state, yeah.
Jordan Simonovski:
I guess even if you don't have a lot of microservices, if you're talking about particular, like you're saying for the most part your API requests are fine but sometimes you have particularly large payloads-
Jared Kells:
We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring maybe who we're integrating with. It's actually much more likely that we'll have a performance issue on a Xero API rather than... We don't see it, the browser sees it as well, which is-
Jordan Simonovski:
Yeah, and tracing does solve all of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend, and I can just tell you about Node, because I probably have the most experience writing Node stuff, you pretty much just drop in dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with. Like if you're working with Express or a lot of the HTTP libraries, as well as a few AWS services, it will kind of hook itself into that. And you can actually pinpoint it. It will kind of show you on this pretty cool service map exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
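A minimal sketch of that dd-trace setup for a Node service. The service and environment names are assumptions, and Datadog's documentation also describes bootstrapping via `node --require dd-trace/init`; the key point is that the tracer initializes before the libraries you want auto-instrumented are loaded.

```typescript
// Enabling Datadog APM tracing with dd-trace in a Node/TypeScript (ESM) service.
import tracer from "dd-trace";

tracer.init({
  service: "boards-api", // assumed service name
  env: "production",
});

// Load the web framework only after the tracer is initialized so auto-instrumentation
// can hook into Express and the underlying http libraries.
const { default: express } = await import("express");

const app = express();

app.get("/issues/:id", (req, res) => {
  // Outbound calls made while handling this request (for example to a third-party API)
  // show up as child spans on the same trace, which is where upstream slowness appears.
  res.json({ id: req.params.id });
});

app.listen(3000);
```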
Jess Belliveau:
It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third party providers as well. And something I think that's really important that sometimes gets missed is so many of us today are relying on third party providers, like AWS is a huge thing. A lot of people writing apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever is 100% up time, always available. And they don't write their code in such a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API and they don't implement exponential backoff. And so they're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience, have a retry mechanism automatically built in and that sort of stuff. It doesn't really tie into the observability thing, but it's something.
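A generic sketch of the retry-with-exponential-backoff idea, independent of any particular SDK. The attempt count and delays are illustrative assumptions.

```typescript
// Retrying a flaky upstream call with exponential backoff and jitter,
// rather than surfacing the first throttle or failure straight to the user.

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function withBackoff<T>(operation: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // 100ms, 200ms, 400ms, 800ms... plus random jitter so clients don't retry in lockstep.
      const delay = 100 * 2 ** attempt + Math.random() * 100;
      await sleep(delay);
    }
  }
  throw lastError;
}

// Usage: wrap the upstream call; callers only see an error once retries are exhausted.
const result = await withBackoff(() => callUpstreamApi());

// Placeholder for the real third-party call (AWS, Xero, etc.).
async function callUpstreamApi(): Promise<string> {
  return "ok";
}
```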
Jared Kells:
And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.
Jess Belliveau:
Well, they're using your app. Exactly right. It reflects on you sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when it's that case. Yeah.
Jared Kells:
Well, I think we're going to have to call it, this podcast, because it's been an hour. We were instructed max 45 minutes.
Jess Belliveau:
We could just keep going. We might need a part two! Maybe we can request [cross talk 00:39:21].
Jared Kells:
Maybe! Yeah.
Jess Belliveau:
Or we'll just start our own podcast! Yeah.
Angad Sethi:
So what were your biggest learnings today? Given that Angad and I are just learning about observability, Angad, what was your biggest learning today about observability? My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...
Jess Belliveau:
Any takeaway is a good takeaway! What about you, Jared?
Jared Kells:
I think, because we were going to talk about state management, and part of it was how we have this ability at the moment, the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and just see exactly how it was, just because of the way our state's designed. But what might be even cooler is to build maybe some observability into that front end for support. I'm thinking, instead of just having this button to send us your support information that sends us a bunch of the state, and instead of console logging to the browser log, we could be logging in our front end somewhere, so that when they click "send support information," our customers would be sending us the actions that they performed.
Jared Kells:
Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.
Jess Belliveau:
Yeah, for sure. It'll probably be a lot less intrusive, as well, as some of the third party stuff that I've seen around.
Jared Kells:
Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.
Jess Belliveau:
Yeah
Jared Kells:
You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.
Jess Belliveau:
Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...
Jared Kells:
Thanks all!
Jess Belliveau:
Thanks for having us, yeah.
Jared Kells:
Thanks all for your time, and thanks everybody for listening.
Jordan Simonovski:
Thanks everyone.
Angad Sethi:
That was [inaudible 00:41:55].
Jess Belliveau:
Tune in next week!
- Podcast
Easy Agile Podcast Ep.13 Rethinking Agile ways of working with Diversity, Equity and Inclusion at the core
"The episode highlights that Interaction, collaboration, and helping every team member reach their potential is what makes agile work" - Terlya Hunt
In this episode join Terlya Hunt - Head of People & Culture at Easy Agile and Caitlin Mackie - Marketing Coordinator at Easy Agile, as they chat with Jazmin Chamizo and Rakesh Singh.
Jazmin and Rakesh are principal contributors of the recently published report "Reimagining Agility with Diversity, Equity and Inclusion".
The report explores the intersection between agile, business agility, and diversity, equity, and inclusion (DE&I), as well as the state of inclusivity and equity inside agile organizations.
“People are the beating heart of agile. If people are not empowered by inclusive and equitable environments, agile doesn't work. If agile doesn't work, agile organisations can't work."
📌 What led to writing the report
📌 Where the misalignments lie
📌 What we can be doing differently as individuals and business leaders
Be sure to subscribe, enjoy the episode 🎧
Transcript
Terlya Hunt:
Hi, everyone. Thanks for joining us for another episode of the Easy Agile podcast. I'm Terlya, People & Culture business partner in Easy Agile.
Caitlin Mackie:
And I'm Caitlin, marketing coordinator at Easy Agile. And we'll be your hosts for this episode.
Terlya Hunt:
Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to the elders past, present and emerging, and extend the same respect to any Aboriginal people listening with us today.
Caitlin Mackie:
Today, we'll be joined by Jazmin Chamizo and Rakesh Singh. Both Jazmin and Rakesh are principal contributors and researchers of Reimagining Agile for Diversity, Equity and Inclusion, a report that explores the intersection between Agile business agility and diversity equity and inclusion published in May, 2021.
Terlya Hunt:
We're really excited to have Jazmin and Rakesh join us today. So let's jump in.
Caitlin Mackie:
So Jazmin and Rakesh, thank you so much for joining us today. We're so excited to be here with you both today, having the conversation. So I suppose today we'll be unpacking and asking you questions in relation to the report, which you were both principal contributors of, Reimagining Agility with Diversity, Equity and Inclusion. So for our audience tuning in today who may be unfamiliar with the report, Jazmin, could you please give us a summary of what it's all about?
Jazmin Chamizo:
Absolutely. And first of all, thank you so much for having us here today and for your interest in our report. Just to give you a little bit of background of our research and how everything started out, the founder and the owner of the Business Agility Institute, Evan Leybourn, he actually attended a talk given by Mark Green. And Mark who used to be, I mean, an Agile coach, he was referring to his not very positive experience with Agile. So this actually grabbed the attention of Evan, who was a big advocate of agility, as all of us are. And they decided to embark upon this adventure and do some research trying to probe on and investigate the potential relationship between diversity, equity and inclusion and Agile.
So we had, I mean, a couple of hypotheses at the beginning of the research. And the first hypothesis was that despite the positive intent of agility and despite the positive mindset and the values of Agile, which we all share, Agile organizations may be at risk of further excluding marginalized staff and customers. And the second hypothesis that we had was that organizations who actually embed diversity, equity and inclusion directly into their Agile transformation and strategy may outperform those organizations who don't. So we actually spent more than a year interviewing different participants from many different countries. And we actually ended up seeing that those hypotheses are true. And today, we would like to share with you, I mean, part of this research, and also encourage you to read the whole report and contribute to this discussion.
Terlya Hunt:
Amazing. And Jazmin, you touched on this a little bit in your answer just then, but I guess, Rakesh, could you tell us a bit more about what was the inspiration and catalyst for writing this report?
Rakesh Singh:
Yeah. So thanks for inviting us once again. And it's a great [inaudible 00:03:51] talk about this beautiful project. The BAI had actually been into this activity for a long time, and I happened to hear one of the presentations from Evan, and this presentation actually got me interested in business agility and how it's associated with DEI. So that was one thing. And the second thing, when Evan talked about this particular project and invited all of us: I had been with transformation in my job with Siemens for about three decades, for a very long time. And we found that there were always some people, whenever you do transformation, who were not interested or who were skeptical. "We are wasting our time." And okay, that was to be expected, but what was surprising was that even though Agile came up in a big way and people thought, "Okay. This is a solution to all our miseries," even though there was a focus on culture, culture was still our biggest issue. So it appeared to me that we were not really addressing the problem.
And as Jazmin talked about our goal and our hypotheses, that was attractive to me: maybe this project would help me to understand why it's sometimes [inaudible 00:05:12] to get people on board in some of the Agile transformations.
Terlya Hunt:
Thank you. That was awesome. I think it definitely comes through in the report that this is a topic that's near and dear to all of you. And in the report you mentioned there's a lack of consensus and some misalignment in defining some of these key terms. So, to frame the conversation today, Jazmin, could you walk us through some of these key definitions: agility, diversity, equity, and inclusion?
Jazmin Chamizo:
That's a great question, because over the last year, there's been a big boom in different topics related to diversity, equity and inclusion, I mean, especially with the Black Lives Matter movement and many different events that have affected our society in general. And with the rise of social movements, I mean, there's been a lot of talk in the area of diversity, equity and inclusion. And when we talk about agility, equality, equity and inclusion and diversity, I mean, it's very important to have a very clear understanding of what we mean by these terms. Agility is the mindset. I mean, it's really about having the customer, people, at the very center of the organization. So we're talking about agile ways of working. We're talking about more collaborative ways of working. So we can bring the best out of people and then innovate and put products into the market as fast as possible.
Now, when we were thinking about agility and this whole idea of putting people at the very core and customer at the very core of organization so we can respond in a very agile and nimble way to the challenges that our society presents at the moment, we found a lot of commonalities and a lot of similarities with diversity, equity and inclusion. However, when we talk about diversity, equity and inclusion, there's some nuances in the concepts that we need to understand. Diversity really refers to the mix. It refers to numbers, to statistics, all the differences that we have. There's a very long list of types of diversity. Diversity of gender, sexual orientation, ways of our thinking, our socioeconomic status, education and you name it, several types of diversity.
Now, when we talk about equality, I mean, we're talking about applying the same resources and support structures, I mean, for all. However, equality does not actually imply the element of equity, which is so important when we talk about creating inclusive environments. With equity, we're talking about the element of fair treatment, we're talking about social justice, we're talking about giving equal access to opportunities for all. So it's pretty much about leveling the field, so all those voices can be part of the conversation and everybody can contribute to the decision making in organizations and in society. So it's that element of fair treatment, it's that element of social justice that the element of equity has to contribute and that we really need to pay attention to.
And inclusion is really about that act of welcoming people in the organization. It's about creating all the conditions so people, everybody, can thrive and everybody can succeed in an organization. So I think it's very important, I mean, to have those definitions very clear to get a better understanding of how they overlap and how there's actually, I mean, a symbiotic relationship between these concepts.
Caitlin Mackie:
Yeah. Great. And I think just building on that, interaction, collaboration and helping every team member reach their potential is what makes Agile work. So your report discusses that there are lots of overlaps in those values with diversity, equity and inclusion. So I think, Rakesh, what are those key overlaps? It seems those qualities and traits go hand in hand. So how do we embrace them?
Rakesh Singh:
So if you look at most of the organizations which are big organizations and have been around for about two decades or so, and you compare them with a startup organization, in the traditional setup, normally people are working in their functional silos, so to say. And so the Agile transformation is taken care of by one business function. It could be a quality team. It could be a transformation team. And DEI normally is the domain of HR or the people function in the organization. And the issue is that sometimes these initiatives are handled separately and the amount of collaboration that's required does not happen, whereas in a startup company, they don't have these kinds of divisions.
So taking that as a basis, what we need to look at is that the organization should be sensitized so that these groups work together on some of these projects and look at what the underlying commonality is, and they can possibly either help each other or complement each other. Because one example, if I can give one, is that it's very easy to justify an Agile transformation relating to a business outcome, okay, but any people-related change is a very long-term change. So you cannot relate that to a business outcome in a shorter timeframe. So I call Agile and DEI symbiotic. Agile can be helped by a DEI process and DEI itself can be justified by having an Agile project. So they are symbiotic.
Now, what is the common thing between the two? So there are four items. I mean, there are many things which are common, but four things which I find are most important. Yeah? The first thing is respect for people, like Jazmin talked about being inclusive. So respect for people, for both Agile and DEI, that's a basis for that. And make people feel welcomed. So no matter what diversity they come from, what background they come from, they're feeling welcome. Yeah? The second part is the work environment. So it's a big challenge to create some kind of psychological safety. And I think people are now recognizing, management is now understanding, that they think they have provided a safe place, but people are still not feeling safe, for whatever reason. That's one thing.
The other thing is that whatever policies you write, documentation, policies or announcements, the basic things that people see: is it fair and is it transparent? Yeah? So I used to always see that if there are two people given a bonus, and one person gets 5% more, no matter how big the amount is, there's always a feeling of, "I have not got my due." Yeah? So be fair and be transparent. And the last one is that you have to invest in people. The organization needs to invest in people. The organization needs to invest in enabling them with opportunities to make use of new opportunities, and also to grow through learning. So these are the four things that I can see, which actually can help with both being agile and also having an inclusive environment in the company.
Caitlin Mackie:
The report mentions that some of those opportunities to combine agile and diversity equity inclusion are being overlooked. Why do you think this is?
Rakesh Singh:
So I think that the reason why they're being overlooked is that it's basically about educating the leaders. So it's just, if I'm in the agile world, I do not really realize that there are certain people-related aspects. I think, if I just make an announcement, people will participate. Okay? So that's the understanding. On the other side, we got input from quite a few respondents saying that some of the DEI projects are basically just words, and not really sincere about it. It's a waste of time. "I'm being forced to do certain training. I'm forced." So the sincerity part, sometimes that's lacking, so people have to be educated more at a leadership level and at an employee level.
Caitlin Mackie:
I think a really interesting call out in your research is that many agile processes and rituals are built to suit the majority, which excludes team members with diverse attributes. Jazmin, what are some of those rituals?
Jazmin Chamizo:
Yeah, that's a great question. Now, if you think about agile and agile rituals, for example, I mean, daily standups, a lot of those rituals have not actually thought about diversity, or been designed for diversity and inclusion. I mean, agile rituals are very on the spot, a very "who can talk" type of ritual. But there are a lot of people, I mean, who might need more time to process information before they can provide inputs so fast. So that requirement of processing information or giving input in a very fast manner, in daily standups, might be overlooking the fact that a lot of people with different types of thought processing styles or preferences may need more time to carry out those processes.
So that would be, I mean, number one: the fact that it's very on the spot and sometimes only the loud voices can be heard. So we might be losing a lot of opportunities to get feedback and input from people with different thinking styles. Now, also, if you think about organizations in different countries, where English is not the native language of a lot of people, they may also feel at a lot of a disadvantage. This happens a lot in multinational organizations, where people whose, you know, first language is English feel more confident and they're the ones who practically may monopolize the conversations. So people whose first language is not English, I mean, might feel at a disadvantage.
If you think about older employees who sometimes may not be part of an agile transformation, they might also feel that they are not part of the team and they may not have the sense of belonging, which is so important in an agile transformation and for any organization. Another example, I mean, would be people who, because of their religious beliefs, I mean, maybe need to pray five times in a day, and a morning stand up might be very difficult to adapt to. Or even people with disabilities or language differences, they may feel a little intimidated by agile. So there are a lot of different examples. And the report actually collects several lived experiences from the respondents that we interviewed that illustrate how agile has been designed for the majority and for a more dominant type of culture, and that highlights the need to redesign many of these rituals and many of these practices.
Caitlin Mackie:
Yeah, I think just building on that in your recommendations, you mentioned consciously recreating and redesigning these agile ways of working. What are some of the ways we can rethink and consciously create these?
Jazmin Chamizo:
Mm-hmm (affirmative). Well, the good news is that, during our research, and during our field work and the conversations that we had with some organizations, I mean, there are a lot of companies and organizations that have been actively implementing different types of practices, starting from the way they're managing their meetings, their rituals, their stand ups, giving people an opportunity to communicate in different ways. Maybe giving some room for silence, so people can process the information, or providing alternative channels for people to communicate and comment, either in writing or maybe the next day. So it doesn't have to be right there on the spot, and they don't feel under that type of pressure.
Now, another example would be allowing people, I mean, to also communicate in their native language. I mean, not necessarily using English, I mean, all the time as the main language. I think it's also important for people to feel that they can contribute in their own language, and also starting to analyze, I mean, the employee experience. We're talking about maybe using non-binary options in recruitment processes or in payroll. So, I mean, starting to be more inclusive in the different practices and analyzing, I mean, the whole employee journey. I mean, those are some examples that we can start implementing to create more inclusive environments. And the one that is the most important for me is encouraging leadership to intentionally design inclusive work environments, creating environments where people really feel safe, where they feel psychologically safe.
Terlya Hunt:
The whole section on exploring and challenging existing beliefs is so interesting. And I would definitely encourage everyone listening to go and read it. I could ask you so many questions on this section alone, because I think it was full of gold, and honestly, my copy is highlighted and scribbled and I read it and reread it, there was so much to absorb. The first thing that really stood out to me as an HR practitioner in an agile organization was this belief that focusing on one or two areas of diversity first is a good start. And from your research, what you actually found was that survey respondents found this method ineffective and actually harmful for DEI. And in your research, you also reference how important it is to be intentional and deliberate. So I guess, how do we balance this need for focus and creating change with these findings that being too narrow in our focus can actually be harmful? Might throw this one to you, Rakesh.
Rakesh Singh:
So actually, thanks, the data from the report was very interesting; in fact, we presented it to quite a few groups. And one of the things that I observed when we were talking about some of the beliefs and challenges, there was an immediate response saying, "Hey, we do experience this in our area." So, what we realized is that this whole aspect, as Jazmin talked about, has many dimensions. So if you look at inclusiveness, and diversity and equity across an organization, there are many streams and many triggers. Diversity, we understand, okay, in a very limited way, it may be gender, or it may be religion or country, but actually, it's much more; in a working environment, there are many dynamics which are [inaudible 00:22:15]. So the challenge, what we saw, was that if you pick up a project in a very sincere way and say, "I'll solve one problem, okay?" Let me say I solve the problem of a region or language, yeah? Now the issue is that most of the time, we look at the most dominant group and identify that problem.
So what happens is that you actually create an inequity right there, because there are other people who are suffering. They are, I won't say "suffering," but they're influenced by other factors of diversity and they feel, "Okay, nobody's really caring for me." Yeah? So you have to look at the very holistic picture, and you have to look at it in a way that everybody is on board, yeah? So you may not be able to find a solution to every specific problem, but getting everybody on board, and letting people work on the environment, either at the psychological safety or the policy level, so you create an environment where everybody can participate, and issues can be different so they can bring up their own issues, and make sure they feel that they're cared for. And that's what we actually observed.
Terlya Hunt:
And the second belief I thought was really interesting to call out was this belief that we will adapt to somebody's needs if they ask. And your research found that not everyone is able to disclose their needs, no matter how safe the working environment, so that by relying on disclosure as the first step in the process, organizations will always be a step behind, and also place the burden of change on marginalized groups. What are some things we can do, Rakesh, to remove this pressure and to be more proactive?
Rakesh Singh:
So there are a couple of things that we need to look at. When we talked to people, they actually discussed the problem, and they also recommended what could be the right thing to do about it. And we also discussed it among ourselves. So one thing which was very clear was that there was a little doubt about the sincerity of leadership. And so, we felt that any organization where the leader was very proactive... Like, for example, what is the basic reason? If I have a problem, if I talk about it, I am always worried what will happen when I disclose it. And is it the right issue to talk about? So, these are the questions that would inhibit a lot of people from talking about it at all. So, that's where proactive leadership can help people to overcome their inhibition and talk about it, and unless they discuss it, you'll never know if there's a problem. So, that's the one thing. So, that's the approach. So there are a couple of things that we could also recommend: proactive leadership to start with, and something which can be done is that there are a lot of tools available for the managers, yeah? People leaders, I would call them. Things like coaching, so you have the GROW model where you can coach an individual person, even as a manager or as an independent coach, and then there are facilitation techniques. When I started my career, there was no training on facilitation; you just went into the room and conducted the meeting. But there are very nice tools, facilitation techniques, which can be brought out to get people to participate, and things like that can be very useful for being proactive and drawing people out of their inhibition. That definitely sits with the leader. That's why we call it servant leadership. It is their job to initiate and take the lead, and get people out of their shell.
Terlya Hunt:
It ties quite nicely into the next question I had in mind. You both actually today have mentioned a lot of challenging beliefs, and calling things out. We need to build this awareness, and create safe spaces, and create psychological safety in our teams. What are some examples of how we can create safe spaces for these conversations?
Rakesh Singh:
The examples of someone creating safe places is ... I would say that educating people and the leaders. What I have seen is that if the leadership team recognizes that and educates the managers and other people ... You need to actually train people at different level, and create an environment that everybody's participating in the decision making, and they're free to make choices within, of course, the constraint of the business.
The focus, where I would put it, is that there are many educational programs and people would like to be educated, because I normally felt that I was never trained to be a good leader. There was never training available. But these days we find a lot of educational programs highlighting various issues, like microaggression, unconscious bias, psychological safety. People should understand them. Things like being empathetic. These terminologies are there, but I find that people don't really appreciate and understand them to the extent that they need to, even though they are in a leadership position.
Caitlin Mackie:
Thanks for sharing, Rakesh. I really love what you mentioned around proactive leadership there. Your research found that 47% of respondents believed organizations who achieve this unity of Agile and diversity, equity, and inclusion will reap the benefits and exceed competitors. Jazmin, what did these organizations do differently?
Jazmin Chamizo:
Yes. That's a great question. Actually this ties very nicely with idea of servant leadership, inclusive leadership, and how leaders have this incredible challenge of creating workspaces that are psychologically safe, as Rakesh just mentioned. This is really everybody's responsibility, but it has a lot to do with a very strong leadership.
We found that several of the organizations that we interviewed had a very strong leadership team that was really committed to diversity, equity, and inclusion in their agile transformation, and they were able to put DEI at the very core of the organization. That's number one: having a very strong leadership team that's actually committed to diversity, equity, and inclusion, and that does not perceive DEI efforts as isolated actions or initiatives.
This is something that we're seeing a lot nowadays. As a DEI coach and consultant, sometimes you see, unfortunately, several organizations that only try very isolated and very ... They don't have long-term strategy. What we have seen that actually works is having this committed leadership team that has been able to put DEI at the very core of their strategy.
Also a team that has been able to serve as an advocate in diversity, equity, and inclusion, and agility, and they're able to have advocates throughout the organization. It's not just one person's job. This calls for the effort of the whole organization and individuals to commit to DEI and be actively part of the agile transformation.
Also, I would say, leaders that embrace mistakes and embrace errors throughout the process. This is something that came up a lot during our conversations with people in different organizations, that in many cultures and in many organizations, mistakes are punished. They're not perceived as a source of opportunity.
One of the tips or best practices would be having leaders who are able to show the rest of their organization that mistakes are actually learning opportunities, that you can try things out of the box, and you can be more innovative. That even if you fail, you're not going to be punished, or there won't be any consequences because of that, and, quite on the contrary, that this is actually a learning opportunity that we can all thrive on.
Caitlin Mackie:
Yeah. I completely agree. What benefits did they see?
Jazmin Chamizo:
They definitely saw a greater working environment. This is something that was quoted a lot during our interviews with respondents, that individuals saw that they had the chance to try new and innovative ideas. Definitely greater innovation, more creativity. Business morale actually ultimately went up, because they saw that the organization was actually embracing different perspectives, even if they fail. This definitely called for greater innovation.
I would say innovation, more creativity, and a better working environment. Absolutely new products, new ideas. That if you think about the current circumstances with COVID, this is what organizations have to aim at. New products, more innovation to face all the challenges that we have nowadays.
Terlya Hunt:
Powerful things for the listeners to think about. Here at Easy Agile, our mission is to help teams be agile. Because we believe for too long the focus has been on doing, when the reality is that Agile is a constant journey of becoming.
There's a specific part in the report that really stood out to me that I'd like to read. "Agility is a journey with no fixed endpoint. The road towards creating diverse, equitable, and inclusive environments is the same. Agility and DEI can be pursued, but never fully achieved. They are a process of ongoing learning, reflection, and improvement. A team cannot enter the process of improving business agility or DEI with a mindset towards completion, and any model that unites Agile and DEI will ultimately be ineffective if those taking part are not ready to embark on an ongoing quest for self improvement."
I absolutely love this quote. Rakesh, let's explore this a little bit further. What more can you tell me about this?
Rakesh Singh:
Actually there's an interesting thing that I would like to share to start with. We wanted to look for an organization who would help us interview their people and talk to their people. The way organizations responded ... Some responded, "Shall I allow my people to talk to somebody? It could be a problem." But then we got other organizations that were actually chasing us. "We would like to be part of this, and we would like to get our people interviewed." They were very positive about the whole thing.
I happened to talk to the DEI corporate manager, a lady, and the way she was talking was ... She was so, I would say, passionate about the whole thing, even though, at least I felt, they already had a very high level of awareness of DEI. But the quest for learning and finding out what they could do better was quite astonishing and quite positive.
That's where my answer is, is that ... If you look at the current pandemic, and people realized that, "Okay. We have to work from home," initially some people found it great. It's a great thing. Work-life balance. "I can attend my home." But after some time they found it's a problem. There's other problem.
The point is that, in any organization, where it's a business or a social life, or people, it just keeps changing. There's no method or policy which is going to be forever valid. There's a continuous learning process that we have to get in.
What we need to do is focus on the goal that we want to achieve, depending on the environment; that's what we call business agility. Now bring it to people as well, because it is a people thing ... We talk about customer centricity, and all that. But in the end it's the people who are going to deliver whatever the organization wants to. You have to see how their lives are getting impacted.
We are discussing getting people back to the office. The problem is that a city like Bangalore is a very costly city and a very crowded city. People have gone to their hometowns and they can work from there. Now, to bring them back, you have to move them back again. To cut short the explanation, our life is changing, constantly changing, and technology and everything is putting ... People have to look at methods and approaches for how they can keep adapting themselves on a continuous basis. Learning is a continuous process. In fact, when I got into Agile and people asked me, "How many years of experience do you have?" I generally said five years, because anything that I did before five years ago is actually the wrong practice. You have to be continuously learning, and DEI and Agile are no strangers to this situation.
Caitlin Mackie:
I love that. I think fostering that continuous learning environment is really key. I suppose, on that, a few of the recommendations from the report are centered around getting deeper training and intentional expertise. Jazmin, what further recommendations, or courses, or practitioners are there that people can engage with after this episode?
Jazmin Chamizo:
Sure. An important part of our report was a series of recommendations to the entire agile community, and practitioners, to organizations, and agile coaches. You can see that. You could get more specific information in our reports. I would like to encourage all of you to read. Definitely when it comes to agile coaches and consultants, we're encouraging people to learn more about diversity, equity, and inclusion because one of the insights and the learnings we drew from this research is that diversity, equity, and inclusion is not specifically included in the agile world.
When we talked to the respondents in many different countries, they did not spontaneously make the connection between agility, Agile, and diversity, equity, and inclusion. But the more we talked about it, they discovered that, indeed, they were very closely overlapped. There was a symbiotic relationship between them, because you're putting the person and everything that relates to that individual at the very core of the organization, of the transformation.
Definitely we do encourage ... Leaders and agile coaches need to start learning more about DEI, building that proficiency, learning more about unconscious bias and the impact of unconscious bias, and the discrimination and racism that we continue to see in organizations. They need to be more mindful of those voices that are not being heard at the moment in the present conversations. They can learn different techniques or different methods to be more engaging and more inclusive.
When it comes to the agile community in general and influencers, it is important to mention that Evan Leybourn, the founder of the Agility Institute, is having at the moment some conversations with important institutions in the agile community, such as the Agile Alliance, because we are looking for ... That's what Gen Z-ers are looking for. There's a big call out there for organizations to embrace this type of transformation, but putting DEI at the very core of the organization. That's what I would like to say.
Contribute to the discussion. This is a pilot project, and we are hoping to conduct more research on other DEI areas related to agility. We would like listeners to be part of the conversation, and to contribute with their experience, to improve the state of agility in the current moment.
Caitlin Mackie:
Thank you both so much for joining us today. Thoroughly enjoyed our conversation. I can't wait to see how Agile and diversity, and equity, and inclusion evolves in the future. Thank you.
Jazmin Chamizo:
Thank you so much for having us. It's been a pleasure.
Rakesh Singh:
Thanks a lot to both of you. It was nice to share our experience. Thank you very much.
- Podcast
Easy Agile Podcast Ep.8 Gerald Cadden Strategic Advisor & SAFe Program Consultant at Scaled Agile Inc.
Gerald shared that companies often face the same challenges over & over again when it comes to implementing agile, but the real challenge and most crucial is overcoming a fixed mindset.
"Gerald helps massive companies work better together while keeping teams focused on people and on the customer. I'll be revisiting this episode."
Gerald also highlights the difference between consultants & coaches, and the value of having good mentors + more
I loved this episode and know you will too!
Be sure to subscribe, enjoy the episode 🎧
Transcript
Sean Blake:
Hello, and welcome to this episode of the Easy Agile Podcast. Sean Blake here with you today. And we've got a great guest for you: it's Gerald Cadden, a Strategic Advisor and SAFe Program Consultant Trainer at Scaled Agile, Inc. Gerald is an experienced business and IT professional, Strategic Advisor and Scaled Agile Program Consultant Trainer (SPCT) at Scaled Agile. Thanks, Gerald. Welcome to the Easy Agile Podcast. It's really great to have you on as a guest today, and thank you for spending a bit of time with us and sharing your expertise with our audience on the Easy Agile Podcast.
Sean Blake:
So I'm really interested, and I'm interested in this story for all the guests that we have on the podcast, but can you tell me a little bit about your career to date? I find that people find their way to these Agile roles or the Agile industry through so many diverse types of jobs in the past. Some people used to be plumbers or tradies, or they worked in finance or in banking. How did you find your way into working at somewhere like Scaled Agile?
Gerald Cadden:
Good morning, Sean. Thanks for having me here, guys. I'm very happy to be here with you guys today. Career things are always an interesting question. I'm 53, and so when I look back I wonder, how did I get to where I am? And you can often look at just a series of fortunate events. I worked in retail shoe stores and then I decided to do something with my life. Did an IT diploma, then did a degree, and I started working on the IT side. I pretty much started as a developer because that was where the money was and so that's where you wanted to go. I didn't stay as a developer long. Okay. All right. I was a terrible developer, so I wasn't good at it. It was frustrating.
Gerald Cadden:
I moved into some pre-sales work and that led me to doing business analysis, and I really liked the BA work because I got to work with people and see changes. I could work with the developers, and still got to work really directly with the customer, which was much more interesting for me. So I spent a lot of time in BA doing the development work, doing business process reengineering, then transitioned over to Rational Unified Process when it was around, spent countless hours writing use cases, doing UML diagrams, convincing people on how to make the changes on those. And then Agile came along and I had to make a complete brain switch. So all of this stuff that I'd learned and depended on as a BA suddenly disappeared, because Agile didn't require that as an upfront way of working. It required that to be in the background if you wanted it, and it was more a collaboration.
Gerald Cadden:
So around 2004, 2005 I started working with Agile a lot more. By this time I was living in the U.S., so that's where I got my Agile experience. I stayed there for a long time, got great experience, and then I moved over to working with SAFe around 2011. The catalyst for that was that I was working for a large financial firm in New York with a team there, and we were redesigning a large methodology for them to implement Agile at scale. I went to a seminar in 2011 at an Agile conference, saw Dean Leffingwell's presentation on SAFe, and just looked up and went, "Well, we can stop working on our methodology. It's done."
Gerald Cadden:
So shortly after that meeting I ran outside and tackled Dean Leffingwell, because I wanted him to look at my diagrams and everything and give me some affirmation that I was doing the right thing. Dean has got a very frank face, and he pulled his frank face, looked at me, and just said, "You know what? Just use SAFe." And I'm like, "Yeah, we will." And so I started my SAFe journey around that time, we implemented it at that financial company, and I've been on that journey ever since.
Sean Blake:
So take us back 10 years ago to 2011. You're working at this financial company, you've heard of this concept of SAFe really for the first time, and you've started to implement it. How did the people at that company respond to you bringing in this new way of thinking, this new framework? It sounded like you already had the diagrams and the frameworks and the concepts forming in your mind; did you find that an easy process? I think I already know the answer, but how complex was it to try and introduce SAFe for the first time into an organization of that magnitude?
Gerald Cadden:
Yeah, this was a very large financial firm, a very old financial firm, so very traditional ways of working. What's interesting is that the same challenges SAFe comes up against today were present before SAFe even began. The same challenges of past management approaches trying to move to faster ways of working were still there. So as we were furiously drawing diagrams in Visio, trying to create models for people to understand, it was hard to create a continuum of knowledge and education that would get people to move from the mindset they had to the mindset we wanted them to have. And it was an evolving journey for myself and the team that I was working with. I worked with a really great guy, his name is Algona, a very, very smart man.
Gerald Cadden:
And so the two of us were always scratching our heads as to how to get the management to change their minds. We focused on education, but it was still a big challenge. I finished on the project as they started with SAFe, and I moved to a different management role in the company while the work continued there. Michael Stump, who used to work for Scaled Agile, I think he works at a different company now, continued a lot of that work and did a really good job, and they did implement SAFe. They made changes, but they faced all the same challenges: the management mindset, moving away from the silos to a more network-structured organization. Just the tooling, just the simple things, were still a challenge, and they're still a challenge today. So the nature of the organization is still evolving, even in the modern-day Agile world.
Sean Blake:
You mentioned there that part of the challenge is around mindset and education. Have you found any shortcuts for how you change a team's mindset? The way they approach their work, the way that they approach working with other teams in the organization? I assume success has a lot to do with whether the team has changed their mindset about the way they were working before and committed to this new way of working. Can you talk to us a little bit about how you go about changing a team's mindset?
Gerald Cadden:
Maybe I'll change the direction of your question here, because what I've found is that you usually don't have to work too hard to change the mindset of a team. Most teams are really eager to try new things and be innovative. You only come across some people in teams whose career path has maybe got them to a certain point where they're happy with the way the world is and they don't want to change. The mindset you really need to change is in that leadership space, and that's still true today. The teams will readily adapt if management can create the environment that allows them to do it and if they can be empowered. But it's really... If you want to enable the team, it's getting the leadership around them to change their mindset, to change the structures that are constraining the teams from doing the best job they can.
Gerald Cadden:
And so that, for me, was the big discovery as you went along, and it's still true today. As Agile has been evolving, I've noticed that people don't always put leadership at the top of the list of challenges, but for me it's always been at the top of that list. A lot of people want to look at leadership and say unflattering things about them, but you have to remember these are human beings. And the best way to come to leadership is to really begin with a conversation, help them understand. They know the challenges, but we need to help them understand what's causing the issues that are creating those challenges.
Gerald Cadden:
As you work with them and educate them, you can open their minds up a little more. Does that mean they'll actually change? Not necessarily. Political motivations, ideologies, other things constrain leadership from moving. But conversations and education, I think, are the way to really approach leadership. And getting to know them as a person: take an interest in their challenges, take an interest in them as an individual. Creating that social bond is an important thing. As a consultant that was always hard to do, because as a consultant you're always seen as an external force, and it's hard to build that somewhat social relationship with the leadership and build that trust.
Sean Blake:
Yeah, that's so true, isn't it? I remember on an Agile transformation that I was on previously, how the Agile coach would spend just as much time with the leadership team as they would with us, the Agile team. And it seems strange that the coach was spending so much time trying to coach the leadership team on how they should think about this new way of working, but you put it in the right context there: it's so important that they create that environment for their people and for their teams to feel safe in trying something new. Yeah, that's really important.
Gerald Cadden:
I think if you look at how Agile evolved, when you look at the creation of the Agile Manifesto and its principles and then the frameworks that followed, like Scrum, XP, et cetera, it evolved from a team perspective. Everybody made the assumption that we needed to create these things for the teams to follow, but as people worked with teams they found that it wasn't the teams at all. The teams adapt, but the management and the structures of the organizations were not adapting. And so that's really where it went.
Gerald Cadden:
I can't count the number of Scrum implementations I worked on where you just hit that ceiling of organizational challenges, and it was always very frustrating for the teams. I think there's an opposite side to that too: too many in the Agile world just look at the teams as the center of the world, and you can't approach it that way either. The teams are very important to delivering value to the customers, but it's the organization as a whole that delivers value. And I think you really have to sit back and just say, "The teams are part of that; how do we change the organization, inclusive of the teams?"
Sean Blake:
Okay. That's really interesting. Gerald, you've spoken a bit about teams and mindset. When you go into an organization, a big auto manufacturer or a big airline or a financial services company, and they're asking for your help or your training, how do you assess where that organization is up to? What's their level of maturity from an Agile point of view? Do you have organizations that come to you thinking they're ready to go SAFe, and then you turn up on day one and it turns out no one has any real idea about what that type of commitment looks like?
Gerald Cadden:
Yeah, it's a good question. Because as I look back at the history of this, in 2011, 2012 when SAFe really got going, there was no concept of where to begin. Consultants were just figuring it out for themselves, and like most methodologies it got engaged in the IT space and at the team level, and people would try to grow from the team level upwards. I've struggled a lot with this, because I was just trying to figure out where to begin. So my consulting hat was always on: sit down, talk to people about their challenges, find a way to help figure out how to solve those challenges, whether it was going to be Scrum or SAFe or whatever was going to be right.
Gerald Cadden:
Those are just tools in the toolbox. But as I was working with SAFe, Scaled Agile brought out the Implementation Roadmap, and it produced so much more clarity. That came later in my time with SAFe, and I wish it had come earlier, because it really began to help me clarify that initial thing that we call getting over the tipping point: how to work with the organization you're talking to, work with the right people, understand their challenges, help them understand what causes those problems, which is the more traditional ways of working, the traditional management mindset, help them connect SAFe as a way to overcome those challenges, and begin to show them. If you look at the roadmap it's this contiguous step-by-step thing, but what you find in reality is that there are gaps between those steps, and in those gaps is the time when you as a transition team are having lots of conversations with the management.
Gerald Cadden:
If you put them through a training class, they're not going to come out of the class going, "Oh, wow, that's it. We know what to do." It takes follow-up conversation. You have to have one-on-one and one-to-many conversations, cover topics again so you can remove the assumptions, or sorry, the misassumptions. It's a lot of that kind of work that the roadmap is there for. Those who are implementing SAFe today: use it. It is one of the most helpful tools you'll have.
Sean Blake:
Awesome. Yeah. I think it's about acknowledging the difference between the tools in the toolbox and the fact that you're dealing with humans, with attitudes and motivations and behaviors and habits; those are two very different things really. It sounds like you need to take them both together on that journey.
Gerald Cadden:
Yeah. A side to that: we train so many SPCs, SAFe Program Consultants. We're training them out of classes all the time, with us and our partners. The thing is, you can teach them about the framework, but you can't necessarily teach them how to be a good consultant or a good coach... I use the terms consultant and coach, right?
Sean Blake:
Yes.
Gerald Cadden:
Sometimes I like to say a good consultant can be a good coach, but a good coach can't necessarily be a good consultant, because there's another world of knowledge you need to have, like: how do you sit down and talk to executives? How do you learn the patience and the kinds of questions you need to ask? How do you learn to build those relationships and understand how to work the politics? So there are things outside the knowledge of an SPC that they need to gain. For young people coming in and rushing to do the SPC course: it won't prepare you for everything, but it gives you the foundations.
Sean Blake:
So when you're in an organization, or you're coaching people to go back to their organization, how do you teach them those coaching skills? So that when they come in, they can learn the politics, identify the red flags, manage the dependencies, bring new teams onto the train. How do you go about equipping them with that more human and communication side of the toolbox, really?
Gerald Cadden:
I think you can obviously teach the fundamentals of the framework by running through the training courses. But mentoring, for me, is the way to go. Every time I teach a training class I make it very clear to people: when they go back and they're starting a transformation, don't go it alone. Find experienced people who have done this, and the experience shouldn't just be with SAFe; their experience should be having worked with large organizations, having experience with the portfolio level if necessary. Simply because there are skills that people develop over the years of their career that they don't have at the beginning.
Gerald Cadden:
I mean, if I look back at some of the horrific things I said in meetings and in front of executives, my boss would put his hands up in front of his face, because I was young and impulsive and immature, and I see that today. When I first came to the U.S. I worked with some younger BAs, and they would say things in meetings and you'd quickly have to dance around them: "We didn't really want to say that right now." So I think mentoring is the way. We can teach you the tactical skills, but teaching you the political skills, the human skills, is something that takes mentoring and time.
Sean Blake:
Mentoring is so important in that context, isn't it?
Gerald Cadden:
Yeah.
Sean Blake:
Okay. So let's rewind 12 months to March 2020, a month that's probably burned into a lot of people's minds as the month that COVID changed our lives for the foreseeable future. I know that Easy Agile had a lot of content out there, articles about how to do remote PI Planning, how to help your virtual teams work better together. We didn't know that COVID was coming; we just saw this trend happening in the workforce, and we had this content available.
Sean Blake:
And then I was checking our website analytics and we had this huge spike in what I assume were people in these companies trying to work out, for the first time, how to do PI Planning virtually, how to keep, very literally, their release trains on the tracks at a time when people were either leaving the state or working from home for the first time. It's really like someone dropped a bomb in the middle of these release trains, and people were scrambling: how are we going to do this virtually now? Did you have a lot of questions at the time about how you were going to do this? And how have you seen companies respond to those challenges?
Gerald Cadden:
Yeah. I remember being in Boulder, Colorado in January of 2020. I'd just come back from vacation in Australia, and that's when COVID was coming around; you were hearing about things in January 2020. I was talking with my colleagues and we were wondering how bad this was going to be. Within two months the world was falling apart. And for us, I think a good way to tell that story is to look at what Scaled Agile did. We knew our business was very reliant on our partners' success, and it still is today. And so as we began to see the physical world of PI Planning and training completely falling apart, the company had to adapt quickly.
Gerald Cadden:
We already had a set of priorities set for the PI, and we implement Scaled Agile internally in the company. At the time we were running the company as a train itself, because it's 170-odd people. So they had to reprioritize the different epics, we pushed in new features, and it was all about: what do we need to change now to keep our partners afloat by getting them online? A really good team at Scaled Agile, in a really cross-company effort, got short-term online materials created to keep the partners upright so they could keep teaching, so they could find ways to do this, to do PI Planning, to do their Inspect and Adapts, all online. And so we pushed out a lot of material simply in the form of PowerPoint slides that they could then incorporate into tools like Mural, that kind of tool. And SAFe Collaborate, we went about developing that and we've been maturing it over time.
Gerald Cadden:
And so now we're in a world where we have a lot more stability. We saw a big dip like everybody else, but the question is, are you going to come out of that dip? And what we noticed, probably within the second quarter of that year, or the tail end of it, was that it started to come up again, with our partners starting to teach more online. So the numbers told us that the materials we were producing were working. For us it was just a great affirmation that organizing ourselves the way we did, the quick way we could adapt, saved us. Scaled Agile could have gone the way of a lot of companies and not survived, because our partners wouldn't have survived. We had the ability to adapt. So it's a great success story from my perspective.
Sean Blake:
Well, that's great. We're all glad you're still around to tell the story.
Gerald Cadden:
Yes we are.
Sean Blake:
And Gerald, whether you're reflecting on companies you've worked with in the past, or maybe even that internal Scaled Agile example you just touched on, are there specific meetings or ceremonies or check-in points that are really important as part of the Agile Release Train process? What are the things that, for you, are mandatory, or the most important elements that a company should really hold onto during that setup stage of trying to move towards the Scaled Agile approach?
Gerald Cadden:
So, if I interpret your question correctly, I think for me, when you're implementing, the really important thing to focus on as a team, first of all, is PI Planning. That is the number one thing. It's the first one people want to change, because it's two days long and everybody has to come, and it can cost companies quite a significant sum of money to run that every 10 to 12 weeks. And so you will run very quickly, as I did in the past at the car company, into the financial controller who wants to understand why you're spending $40,000 a quarter on a big two-day meeting. And so they start questioning every item on the bill, but that's the most significant one.
Gerald Cadden:
PI Planning is significant. The Inspect and Adapt is the other one, simply because if you remove that feedback cycle at the end, what we call closing the loop, then we have no opportunities to improve. So those two events themselves create the bookends: what we get started with and how we close the loop. There are smaller events that happen in between, and the team events are obviously all important. But more significant for me is the constant event for the product management team or program management team, however you want to refer to them, excuse me,
Gerald Cadden:
who are going to need to get together on a regular basis. We call this the Sync, so this is the ART Sync or the POPM Sync. You need to make sure those are happening, because those are the more dynamic feedback loops that ensure the progress of good architectural requirements or good features coming through, so that when you get to PI Planning the teams have significant things to work on. So if you had to ask me for my top three events: PI Planning, Inspect and Adapt, and the ART Sync and POPM Sync.
Sean Blake:
Awesome. I know there's always that temptation for teams to find shortcuts and workarounds where they don't have to do certain meetings or certain check-ins, but in terms of communication it must be terribly important for these teams to make sure they're still communicating and don't use the framework as an excuse to stop meeting together and to stop collaborating.
Gerald Cadden:
Yeah. I mean, when I started implementing at the large car company in the U.S., I decided to rip the band-aid off. They had several teams working on projects and they weren't doing well. When I looked at the challenges and decided we were going to implement SAFe, some of the management were like, "Are you crazy? Why would you do this?" But they trusted me. And so we did rip the band-aid off, we formed them all into an ART, and we launched it. And I remember at the end of the PI, some of the management who'd had a lot of doubts came up after they sat through the PI Planning and said they just couldn't believe how great that was.
Gerald Cadden:
Even though the first PI was a little chaotic, they understood the work, and the collaboration, the alignment, just the discussions that took place were far more powerful for them. And teams were happier; they were walking out into a different environment. So it changed the mood a great deal. I think one of the most significant places where teams have the ability to be heard is during PI Planning. They get that chance to be heard. They get that chance to participate, rather than just being at the end where they're told what to do.
Sean Blake:
Mm-hmm (affirmative). So it really empowers the team.
Gerald Cadden:
Yeah. Absolutely.
Sean Blake:
That's great. So as a company moves out of the implementation phase and becomes a little bit more used to the way of doing things, what's the best way for them to go about communicating that progress to the wider organization, and then really evangelizing this way of working to try and get more teams on board and more Agile Release Trains set up, so that it's really a whole-company approach?
Gerald Cadden:
Yeah. A good question. So I think, first of all, the system demo that we do. The regular system demos that take place, these are events you can invite people to. So when you get to the end of the Program Increment, the eight, 10 or 12 weeks, and you're doing your PI system demo, that's a chance for you to invite people in the organization who are next on the list and are going to be doing this, or who are curious, or, if you have external suppliers who you're trying to get on board as part of the training, have them come. Have them come to these events so they can just participate. They can see what goes on, and it takes away some of the fear of what that stuff is. It gives them a feel for it.
Gerald Cadden:
So the system demos, whether you do them during the PI, but definitely the PI system demo, you want that one. Then there are more ad hoc things. One of the things that I've seen organizations really fail to do is, when they're having success, the leadership around the train need to go out and, I hate the term evangelize, but go out and show the successes. Get out and talk about this at the next company meeting; present where they were and where they are now. But as part of that, don't share just the metrics that show greater delivery of value. Show the human metrics: show how the team went from maybe a certain level of disgruntlement to feeling happier and getting better feedback, show how the business and technology have come closer together because they're able to collaborate and actually produce value together rather than being at odds because the system makes them at odds.
Sean Blake:
Awesome. Gerald, is there anything else you'd like to share with our audience before we wrap up the episode? Any tips or words of encouragement, or perhaps some advice for those who are considering scaling up their Agile teams?
Gerald Cadden:
I think the one piece of advice, and again I'll reiterate the earlier point I made, is: as you are going through the implementation process and you're starting to launch your train and train your teams, figure out how you're going to support them when you launch. Putting people through an SPC class or through all the other classes, they won't come out SAFe geniuses. They'll have knowledge, and they'll have enthusiasm and some trepidation as well, but you need good coaching. So as you're beginning the implementation pattern, where you're designing the teams et cetera, figure out what your coaching pattern is going to be. Hire the people with the knowledge and the experience, or work with a partner for the knowledge and experience. They shouldn't stay there forever if you work with consultants.
Gerald Cadden:
Their job should be to come in and empower you, not to stay there permanently. But without that coaching, and coaching over a couple of PIs, your teams tend to run into problems and go backwards. So to keep that momentum moving forward, for me, it's: figure out the coaching pattern. The only other one I would say is make sure that you get good collaboration between product, the people who are going to be in the product management role, and architecture. Get rid of the grievances, have them work together, because those can stifle you. Get in and talk about the environments before you launch. You don't want funny problems when you launch: "Oh, the architecture is terrible." Okay, let's talk about that before we launch. So those are just a couple of things that I think are really important to focus on before you launch the train.
Sean Blake:
Awesome. I really appreciate that, Gerald. I've actually learned a lot in our chat: it's the same challenges that you had 10 years ago, the same challenges that we have today, even with COVID; the real challenge is how you focus on the mindset change. We've talked about how the teams are eager to change, there might be a few grumbly voices along the way, but really it's about leadership providing a welcoming and safe environment to foster that change, and the difference between being a coach and a consultant, the importance of mentoring. Wow, we actually covered a lot of ground, didn't we?
Gerald Cadden:
I may get some hate mail for that comment, but...
Sean Blake:
Oh, we'll see. Time will tell. Thanks so much, Gerald, for joining us on the Easy Agile Podcast. We appreciate you sharing your expertise with us and the audience of the podcast. Thanks for coming on.
Gerald Cadden:
Happy to do it anytime. Thanks for having me here today.
Sean Blake:
Thanks Gerald.