Easy Agile Podcast Ep.17 Defining a product manager: The idea of a shared brain
In this episode, I was joined by Sherif Mansour - Distinguished Product Manager at Atlassian.
We spoke about styles of product management and the traits that make a great product manager, before exploring the idea of a shared brain and the role of a product engineer.
Sherif has been in software development for over 15 years. During his time at Atlassian, he was responsible for Confluence, a popular content collaboration tool for teams.
Most recently, Sherif spends most of his days trying to solve problems across all of Atlassian’s cloud products. Sherif also played a key role in developing new products at Atlassian such as Stride, Team Calendars and Confluence Questions. Sherif thinks building simple products is hard and so is writing a simple, short bio.
Hope you enjoy the episode as much as I did. Thanks for a great conversation Sherif.
Related Episodes
- Podcast
Easy Agile Podcast Ep.10 Kate Brodie, Director of Digital AI and CCAI Program at Optus
"It was great chat about Kate's experience working in an agile environment, and learning what artificial intelligence looks like at Optus"
Kate shares her experience of an agile transformation at Optus and the incredible impact it has had on the company, delivering faster & enabling a sense of ownership & accountability amongst teams.
Kate also shares some great advice, from embracing hybrid roles throughout her career and adopting a “no risk, no return” mentality, to a gentle reminder to never put limits on yourself. Be sure to subscribe, enjoy the episode 🎧
Transcript
Hayley Rodd:
Well, thank you for joining us here on the Easy Agile Podcast. Here in Wollongong things are a little different from when we last had a chat. We've since been put into lockdown as part of the Greater Sydney region, but I'm delighted to bring you this podcast from here in Wollongong. And hopefully it helps ease some of those lockdown blues you might be suffering if you're in the same part of the world as I am today, or if you're in another part of the world that finds itself in the same predicament we're in here in Wollongong. So, I'd like to introduce myself. My name is Hayley Rodd and I am the product marketing manager, or one of the product marketing managers, here at Easy Agile. And I have a great guest today, an old friend of mine. But before we kick off with the podcast, I'd like to say an acknowledgement of country.
Hayley Rodd:
So here at Easy Agile, we acknowledge the traditional custodians of the land where we work and we live. We celebrate the diversity of Aboriginal people and their ongoing cultures and connections to the land and waters of New South Wales. We pay our respects to elders past, present and emerging and acknowledge the Aboriginal and Torres Strait Islander people and their contribution to the development of this tool. And now, to our guest, Kate Brodie. Kate is an old friend of mine from here in The Gong, or Wollongong if you're not from this region, and has been very successful in her pursuit of a career in technology. So a little bit about Kate. Kate is the director of digital AI and CCAI programs at Optus. Kate is now based in Sydney, Australia and is a leader in AI, digital and new emerging technology. Kate is responsible for Optus's AI and digital portfolio and chapter, working in an agile environment every day.
Hayley Rodd:
Kate leads the development of new products to take to market and scale in an agile environment, advocating for a build, measure, learn culture. Most recently, Kate has been in charge of leading an enterprise first-to-market AI chatbot in Australia, compatible with Google Home. So obviously Kate is an extremely impressive person and I wanted to chat to her today about her career and also her role in the Agile team. But beyond that, I wanted to touch on women in technology and leadership, something that Kate has spoken about recently with Vogue Australia. So, thanks so much to Kate for joining us today. And I can't wait to share some of the advice from the lessons that Kate has learned through her career. Thank you so much for joining me today, Kate. It's really wonderful to see you. So could you tell me a bit about, I guess, what your day-to-day looks like when you're at the office?
Kate Brodie:
Yeah, thank you for having me. My day-to-day is quite varied. I would say that in my role, I'm very lucky to work with lots of different people, engineers, designers, business people, marketers and more recently a lot of different partners including Google. So, a lot of my day is working between different groups, strategically thinking about how we're going to continue to create a particular vision and future together for our customers. And then parts of it are related more to the technology and how we're ensuring that we've got our teams performing at a level that will allow us to meet those goals. And yeah, day-to-day, I'm talking to a lot of different people.
Hayley Rodd:
So, when we were chatting just before we started recording, you were telling me a little bit about your start in marketing and now you've moved over to technology. Can you tell me a little bit about how you don't want people to feel pigeonholed, I guess, in their careers or in their career path?
Kate Brodie:
Yeah, absolutely. I really believe that anyone can get into anything if they put the effort behind it. And so I really think that no one should ever put limits on themselves. For me, it was partly because I was surrounded by really great people who supported me in trying lots of different things. And I think the way in which you build your confidence and start to move between different disciplines is by getting your hands dirty and just having a crack. So, I think it's important particularly and in this day and age for people to be open and not really put strong defined titles on themselves so that they do have a sense of freedom to kind of move around and try different roles because ultimately what is available today is probably going to look very different in 30 years time so... Yeah.
Hayley Rodd:
And do you still consider yourself a marketer, or are you something hybrid? What are you now?
Kate Brodie:
I would say that I am a technologist. I think that it requires the ability to have a bit of a marketing brain because you need to know how you're going to apply it in order to make a real impact, whether that's for customers, employees or commercially. But definitely with a strong kind of technology digital focus now, I wouldn't say that I would be purely seen as a marketer these days, but definitely it's about having that broad skill set and I think marketing's critical to being able to create great products.
Hayley Rodd:
Perfect. So, when I think of AI, I think of self-driving cars, someone who is very new to the technology industry myself. Could you unpack, I guess what AI means for Optus?
Kate Brodie:
Mm-hmm (affirmative). I think that what you've just said is shared by many. Artificial intelligence is just such a broad concept and it really is related to creating intelligent machines that can ultimately perform tasks or imitate behavior that we might consider human-like. And so that can range from really narrow experiences, like reading a brochure in a different language using AI to read it in a language that you can understand, to these macro experiences like you've just described with self-driving cars, completely changing the way that we travel. So, I think that AI is such a broad term where it will mean different things for different groups. At Optus, it's about creating lasting customer relationships with people and allowing them to connect with others. And so we use AI in a variety of different places; it can be in our products themselves.
Kate Brodie:
So for instance, we just recently launched a really amazing product called Call Translate. And that's where, in the call, you can actually interact with people in different languages on that same phone call, breaking down those communication barriers that have existed before. So that's super exciting. And then there's other places where we're using it, for instance in our sales and service functions, so that we can more easily automate the simple tasks and give more time to our people to grow and create those types of relationships with our customers. So, we're using artificial intelligence in many different ways, but I think it's really exciting that in everything we do, it's driven towards how can we create a better customer experience. It's not about the technology in and of itself, which is what I really like about it.
Hayley Rodd:
Yeah. Nice. And it sounds like that call translation could have so many applications. I'm even just thinking about this COVID circumstance, where you're trying to get a message across to people to stay home and all those sorts of things. Wow. Okay.
Kate Brodie:
Yeah. And there are some beautiful stories of people who are not able to go home with their young kids, travel home to their countries where their families are. And so they can have the grandkids talking to the grandparents more easily as they are learning different languages. So, it's really cool.
Hayley Rodd:
Wow. That's beautiful. So, in your title there is, I assume, an abbreviation that says CCAI. Could you tell me what that is?
Kate Brodie:
CCAI stands for Contact Center Artificial Intelligence and it's actually a program of work that is used increasingly by different industries and refers to a particular product that Google is working with companies on. And so it's about re-imagining your contact center. So traditionally today banks, telecommunications companies, big organizations with lots of customers have a lot of customers that contact us regularly. And so this is a way of actually, how do we use AI to increasingly get to a point where you don't need to reach out to us but instead we're reaching out to you to better optimize your experiences with us. So, that's a little bit more of a program piece that's attached to my title at the moment.
Hayley Rodd:
Wonderful. So, we're now going to get into the Agile space, which I know you're extremely excited about and which has helped drive some massive changes at Optus. Prior to your current role, what was your experience with Agile?
Kate Brodie:
My current role and my experience with Agile have evolved. So, a couple of years ago, we rolled out Agile at a very large scale across our enterprise. Previously we had been using Agile in our IT teams for software development, but we actually started to roll out agile for product development. And I originally started as a product owner. So I was given a goal around creating a chatbot from scratch that would be supporting our teams. And with that, our Agile transformation involved breaking down the silos of divisions, so functional divisions. We started to merge into cross-functional squads, and our squad was given the autonomy and the ownership to take on a particular initiative; in my case, it was the chatbot. And so I've actually experienced multiple roles within Agile, including as a product owner and as a chapter lead, which was where I looked after a particular craft of people who run across multiple squads in Agile.
Kate Brodie:
And now more recently, I've got squads that are working within my area to produce these products and these outcomes for us. My experience with Agile has been brilliant. The amount of impact that it's had on our company is incredible. So, over the last couple of years, and this is pre COVID, we had a big target around moving towards a really digital led experience. And so we've seen our customers who used to choose digital around six...
Kate Brodie:
Around 65% of our customers would choose digital a couple of years ago and now it's more like 85%. So these big swings have come as a result of, I think, breaking down those silos and working in a more Agile way. Just on that I think what I like about Agile is that it's not about showcases and stand ups, it's actually about the culture that Agile allows for. So I think it allows for a lot more ideas and innovation because you have this mix of people who didn't traditionally sit with one another being together. And then also you can just deliver faster because you can cut through a lot of noise by working together. And the last piece I think is definitely that ownership and accountability for driving an outcome as opposed to delivering a piece of the puzzle, I think, yeah, Agile's been massive for us.
Hayley Rodd:
So, you said that it was a big roll out across the organization. Does that mean that everyone within Optus works within an Agile framework, or are there still sections that, I guess, don't employ Agile?
Kate Brodie:
There are areas of the business that aren't completely agile at this point in time. And I think they're areas of the business where that makes sense. So sometimes in research and the like, you need to have a bit more freedom to sit back and ideate, although they would adopt principles of Agile, so that they time box ideas and the like. From a delivery perspective, most of the organization has transformed into Agile delivery.
Hayley Rodd:
Wow. So it sounds like your customers would be seeing a lot of value from the organization transforming to Agile. You said before that there were a lot of people in your life who helped you feel confident in your ability, because I guess they helped you get there. So, has there been a mentor that you look back on in your career, or even now, who has had an impact on where you are?
Kate Brodie:
I think that I've had a lot of different people who have been my mentor at different stages and who I would call upon now. So, I like to probably not have one mentor, but sort of look at the variety of people and their different skills and take a little bit of this, take a little bit of that, learn from this person on a particular area. There have definitely been some people who stand out. And so, early on, one of the things that was really useful for me was being supported by a particular general manager who basically sort of pushed me into digital and technology. I was just very fortunate that he believed in me and said, "Now, you can run this area." I had never really been exposed to it. This is 10 years ago, when digital was still sort of seen as more of a complementary area as opposed to core to a business.
Kate Brodie:
And him supporting me in having a go at everything, that's actually one of the most pivotal moments, I would say, in my career very early on, and that's really led the way for me to increasingly get into the area that I'm in today. And along the way, obviously, there have been many people who have made a huge contribution to where I'm at, both in my career and outside. So, people that you play sport with, people that you share different stories with. I think that often you take a little bit from everyone, and hopefully you give back something to those people too.
Hayley Rodd:
Yeah, I'm sure you do. So, is there any... Looking back on all those people that you've had throughout your life who have helped you get to where you are, is there a piece of advice that might have stuck with you that you could share with us?
Kate Brodie:
There's lots of different advice. I think one of them is, no risk, no return. I really do think that you have to have a crack, you have to put yourself out there. The things that have always been the most satisfying experiences have come from having a go at something that I hadn't done before. So I think no risk, no return is something that I definitely subscribe to. And then in terms of some practical advice, particularly as a female in your career, there's something called the assumptive close, which is a sales technique, around almost not asking if someone would like something but sort of implying that they would. I actually use that technique not to necessarily directly sell to someone, but in everything that I do, and I would really encourage most people to use it. It was some early feedback in my career and it's been quite useful along the way.
Hayley Rodd:
Yeah. [inaudible 00:18:51] after working in real estate for a little while, I think a lot of real estate agents also assume the sale. And I think it can help with the confidence, going in there and, I guess, almost putting yourself in a position of power in the conversation when you assume you've got this in the bag. So yeah, it probably comes naturally to some people more than others; I would struggle with that myself. But that's a really good piece of advice, and I'm sure it will be helpful for a lot of people who are listening to the podcast now. So what about... What's your proudest moment as a leader there at Optus so far? I know that you were in Vogue recently. That's an amazing moment. And as a person who's known you for a really long time, that was a proud moment for me, to see someone that I'd known do that. But for you, what's the proud moment?
Kate Brodie:
[inaudible 00:19:58] I think probably my proudest moment is when I've launched something large. So recently we launched a large piece of technology that will change the experience for our customers, but it wasn't as much the launch as it was looking around me and seeing the people that are there with me doing it. And there are quite a few amazing people that I get to work with. And having sort of started with a few of them in the early days, a few years ago, where we were sort of spitballing ideas and we had no products, to now having large products that make a real impact to Australian consumers and to our business. It's those moments, and it's actually the team around me, that I'm most proud of. It's just the high engagement and the drive and the culture that we've created, where people want to work in this area and we all enjoy creating these experiences together. So I think definitely I'm most proud of the team culture and environment that we've set.
Hayley Rodd:
Yeah. Sounds amazing. We're lucky enough here at Easy Agile to have, I guess, the same, a culture that you can be proud of as well, so I can understand how it can be something that makes a huge impact every day. So, we're getting close to the end of our time together, but I wanted to touch a bit on gender diversity. How does gender diversity benefit technology companies? What do you think?
Kate Brodie:
I think diversity in general is going to benefit any business and particularly technology businesses, because it's imperative that you have a representation of the population and the people that will use your technology and the experiences that you're trying to create. So I think that it's only by ensuring that we are tapping into the entire talent pool, that we can represent people and represent customers, but also we're going to get the best ideas. And so that's gender diversity but also culturally and in every sort of facet, the more that we can tap into the entire talent pool, the more we'll create better experiences, better technology, solve more of the world's problems and capture more opportunities.
Hayley Rodd:
Mm. Fantastic. And last question, what advice would you give a young woman hoping to enter the technology industry or a technology company?
Kate Brodie:
I would say go for it. I would say don't ever put limits on yourself and speak up, learn as much as you can and get your hands dirty because it's through that kind of confidence... Oh, sorry. It's through working with lots of different people and creating things with people from scratch that you'll gain your confidence as well. And always ask, don't sit there waiting for someone to sort of tap you on the shoulder, ask for that new opportunity, ask for the salary increase, ask, it won't hurt. I promise.
Hayley Rodd:
That's good advice. What's the worst they could say?
Kate Brodie:
No, exactly.
Hayley Rodd:
No, yeah.
Kate Brodie:
Yeah. And that's why.
Hayley Rodd:
Or they might say yes. And then that's awesome too. Okay. Well, thank you so much, Kate, for your time. That was really wonderful. It was wonderful to catch up, but it was also wonderful to hear from someone who's so young in their career but has also done so much, who has reached some amazing goals and has a team behind them. And I think that there are so many people who will watch this, myself included, who will learn a lot from you. So I really appreciate your time. Thank you.
- Podcast
Easy Agile Podcast Ep.12 Observations on Observability
On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.
Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.
💥 What is observability?
💥 How can you improve observability?
💥 What's the end goal?

"This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""
Be sure to subscribe, enjoy the episode 🎧
Transcript
Jared Kells:
Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any aboriginal people listening with us today.
Jared Kells:
So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting. We've got a couple of primarily front end engineers, so Angad and I are going to share some front end technical stuff, and Jess and Jordan are going to talk a bit about observability. So we'll start with introductions. I'll pass it over to Jess.
Jess Belliveau:
Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?
Jordan Simonovski:
I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.
Angad Sethi:
Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.
Jared Kells:
Nothing fancy!
Jess Belliveau:
Don't sell yourself short.
Jared Kells:
Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?
Jess Belliveau:
Yeah, yeah. That's it, we'll wrap up!
Jared Kells:
Podcast over! Tell us about observability.
Jess Belliveau:
Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this, and sort of, for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience that maybe people don't know a lot about. And there's a lot of movement in the broad IT scope, I guess, that you could talk about. There are so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about, and it's something that you can give an introduction to without getting too technical. This is something where you can go really deep into the weeds, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.
Jess Belliveau:
Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.
Jordan Simonovski:
Okay!
Jess Belliveau:
So we thought we'd try and describe to you, first of all, what is observability. Because the term doesn't give you much of what it is. It gives you a little hint, but it'll be good to baseline what we're talking about when we say, what is observability. And then, why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry hot buzzwords and commit to stuff that you might not need, or that sort of stuff.
Jared Kells:
Yep.
Jordan Simonovski:
Yep.
Jess Belliveau:
We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then, because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading that people might want to do, and go from there. So yeah, Jordan, you want to-
Jared Kells:
Sounds good.
Jess Belliveau:
Yeah, hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, if there's stuff that you think we don't cover or that you want to know more about, ask away.
Jordan Simonovski:
I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to maybe differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose and they're not in any way obsolete either, things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzz words.
Jordan Simonovski:
But even for things like a traditional kind of monolith, observability really serves to help us ask new questions of our systems. So the way it tends to get explained is, monitoring exists for our known unknowns. With seniority comes the ability to almost predict in what way your systems will fail. The longer you're in the industry, you know this: a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.
Jared Kells:
I was going to say that!
Jordan Simonovski:
I'll try not to get too much into-
Jared Kells:
Runs out of memory!
Jordan Simonovski:
Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about, you see, it's an unknown unknown of our system, that we want to find out about and ask new questions from. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.
Jared Kells:
Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might log memory usage, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-
Jordan Simonovski:
Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually around serving some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of these additional attributes. You can throw the user agent in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data, high cardinality data being data sets that can change quite largely in terms of the kinds of data they represent, or the combinations of data sets that you could have.
Jordan Simonovski:
So if you want to ship metrics on something on a per user basis and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that only users from a particular time zone or a particular device are experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things, or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
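To make that concrete, here's a rough sketch of the kind of wide, context-rich event Jordan is describing. The field names are illustrative assumptions, not any particular vendor's schema:

```js
// One "wide event" per request, carrying high cardinality context.
// Field names here are hypothetical; real schemas vary by tool.
function buildRequestEvent(req, res, durationMs) {
  return {
    timestamp: new Date().toISOString(),
    name: "http.request",
    durationMs,
    status: res.statusCode,
    server: process.env.HOSTNAME || "unknown",
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    userAgent: req.headers["user-agent"],
    userId: req.user && req.user.id, // per-user fields are high cardinality
  };
}
// Querying events like this later lets you ask new questions, e.g.
// "are the 500s concentrated in one time zone or one device type?"
```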
Angad Sethi:
Would it be fair to say-
Jared Kells:
Yeah. It's [crosstalk 00:11:02].
Angad Sethi:
Oh, sorry Jared.
Jared Kells:
No you can-
Angad Sethi:
Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?
Jordan Simonovski:
Yeah.
Angad Sethi:
Oh.
Jess Belliveau:
And better equip you to find them. One of the things I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's sort of giving you some guidance around things like having good logging coming out of your application. So structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things a bit around: instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
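As a sketch of that last point, an application emitting its own metrics rather than being probed from the outside, here's roughly what it can look like with prom-client, a common Prometheus client for Node. The metric name and labels are made up for the example:

```js
const client = require("prom-client");

// A histogram the application itself emits, describing its own state.
const requestDuration = new client.Histogram({
  name: "app_request_duration_seconds", // hypothetical metric name
  help: "Duration of handled requests in seconds",
  labelNames: ["route", "status"],
});

// Wrap a unit of work and record how long it took, labelled by outcome.
async function handle(route, work) {
  const end = requestDuration.startTimer({ route });
  try {
    const result = await work();
    end({ status: "ok" });
    return result;
  } catch (err) {
    end({ status: "error" });
    throw err;
  }
}
```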
Jess Belliveau:
Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into the next point of why a team would want to implement it. Do you want to start again, Jordan?
Jordan Simonovski:
Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.
Jess Belliveau:
Oh, I didn't sign up for that!
Jordan Simonovski:
I think why teams would want it is because, it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. But observability capabilities themselves, you won't achieve just by buying a thing; like you can't buy dev ops, you can't buy Agile, you can't buy observability either.
Jared Kells:
Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-
Jess Belliveau:
Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.
Jared Kells:
Yeah, sorry, sorry, yeah! Go on.
Jordan Simonovski:
You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time. And to build, I guess, good operational hygiene and good production excellence, as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CICD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PMs now if you want to, with feature flags, which is great.
Jordan Simonovski:
But what you can also do now is completely close this loop, and as you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for Beta releases or something of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLO's in place as well. And SLO's-
Jared Kells:
What's an SLO?
Jordan Simonovski:
They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. This is how, I guess the 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser or this particular, or where we've deployed this app to," let's say if you have a global deployment of some kind, you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them." And you can roll it back before you impact 100% of your users.
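As a rough sketch of the canary mechanics Jordan describes: hash each user to a stable bucket and compare against the rollout percentage. In practice you'd lean on a feature flag service rather than hand-rolling this; the helper below is just to make the idea concrete:

```js
const crypto = require("crypto");

// Stable percentage rollout: the same user always lands in the same bucket.
function inCanary(userId, percent) {
  const hash = crypto.createHash("sha256").update(userId).digest();
  const bucket = hash.readUInt32BE(0) % 100; // stable 0-99 bucket per user
  return bucket < percent;
}

if (inCanary("user-42", 10)) {
  // Serve the new code path to ~10% of users, watch the telemetry,
  // then widen the rollout or roll back based on what you observe.
}
```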
Jared Kells:
Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-
Jess Belliveau:
Yeah, that's a really great example, right?
Jared Kells:
That's what I really care about.
Jess Belliveau:
The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level info of the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.
Angad Sethi:
How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?
Jordan Simonovski:
Yeah. I think that would come with good instrumenting-
Angad Sethi:
Good question?
Jordan Simonovski:
Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and dev ops principles are telling us now, we do want to own our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measuring your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
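To make that concrete, here's a minimal sketch of instrumenting exactly that success metric. trackEvent and the event fields are hypothetical stand-ins for whatever telemetry client a team actually uses:

```js
// Minimal stand-in for a real telemetry client.
function trackEvent(event) {
  console.log(JSON.stringify({ timestamp: Date.now(), ...event }));
}

// If "user can drag an issue" is the success metric, instrument exactly
// that interaction when the feature is built.
function onIssueDragEnd(issueKey, durationMs, succeeded) {
  trackEvent({
    name: "issue.drag",
    issueKey,
    durationMs,
    outcome: succeeded ? "success" : "failure",
  });
}
```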
Jared Kells:
I think that we have to build-
Jordan Simonovski:
Yeah?
Jared Kells:
Oh, sorry Jordan.
Jordan Simonovski:
No, you go.
Jared Kells:
I was just going to say, we have to build our apps with integration testing in mind already, doing browser based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.
Jess Belliveau:
Yeah, and on the actual how, the actual writing code part, there's this really great project, the OpenTelemetry project, which provides all these APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" or, "How do I emit metrics?", they provide all these helpful libraries and includes that you can have, because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
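As a sketch of what that looks like with OpenTelemetry's vendor-neutral Node API; the tracer and span names here are made up, and this assumes the SDK and an exporter have been configured at startup:

```js
// Application code only touches the vendor-neutral API package;
// the SDK and exporter are wired up elsewhere at startup.
const { trace } = require("@opentelemetry/api");

const tracer = trace.getTracer("checkout-service"); // name is illustrative

async function processOrder(orderId) {
  return tracer.startActiveSpan("process-order", async (span) => {
    try {
      span.setAttribute("order.id", orderId);
      // ... the actual work happens here ...
    } finally {
      // Swap exporters (Datadog, SignalFx, ...) without touching this code.
      span.end();
    }
  });
}
```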
Jess Belliveau:
And then the other thing that comes to mind with something like OpenTelemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. You would push your metrics and your logs and your traces to them, and they give you a UI to display it all. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start with Datadog and then in six months' time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.
Jordan Simonovski:
I think NorthX.
Jess Belliveau:
Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
Jordan Simonovski:
Your data structures remain the same.
Jess Belliveau:
Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.
Jared Kells:
Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.
Jess Belliveau:
Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.
Jordan Simonovski:
Observability suggests dashboards, right?
Jess Belliveau:
Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. For me, production excellence, or at Apptio we call it production readiness, operational readiness, that sort of stuff, is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a really great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, and we've all probably been there, we've deployed code to production with no observability and we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix. And it's hard to learn from that, or you have to learn from that, I guess, and protect the user against future stuff, yeah.
Jess Belliveau:
When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which starts with the idea that your application is logging in the first place. Quite important as a baseline starting point, but then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line: a timestamp, an error message, whatever the developer decided to write to standard out or to the error file or something like that. Now I think there's a general move to having JSON, an actual formatted blob with a known structure so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff. It requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
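To illustrate the difference, here's a plain-text line versus a structured JSON log, sketched with pino, one common structured logger for Node. The field names are illustrative:

```js
// Plain text: fine for humans, awkward to parse programmatically.
// "2021-08-12 10:02:11 ERROR payment failed for user 42"

// Structured: every line is a JSON blob with a known shape,
// so tooling can filter and aggregate on any field.
const pino = require("pino");
const logger = pino();

logger.error(
  { userId: 42, route: "/payment", errorCode: "CARD_DECLINED" }, // context fields
  "payment failed"
);
// Emits something like:
// {"level":50,"time":1628733731000,"userId":42,"route":"/payment",
//  "errorCode":"CARD_DECLINED","msg":"payment failed"}
```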
Jordan Simonovski:
I think at the start, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code might get more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing, or surfacing, the things you know might break, those would probably be your quickest wins.
Jess Belliveau:
Well, let's mention some things for further reading. If you want to get the whole picture, observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there are some great chapters on it. O'Reilly has an observability book, I think, just dedicated to observability now.
Jordan Simonovski:
I think that's still in early release, if people want to Google the chapters.
Jess Belliveau:
The open telemetry stuff, we'll drop a link to that I think that's really handy to know.
Angad Sethi:
From [inaudible 00:26:12], which is my perspective as a developer: say I wanted to introduce observability. We currently use Datadog at Easy Agile. I'm not very familiar, I'm not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability in my direct job or at my workplace?
Jordan Simonovski:
I would lean, I could be biased here. Jess correct me or give your opinion on this, I would lean heavily towards SLO's for this. And you can have a quick read in the SRE-
Jess Belliveau:
What does SLO stand for, Jordan?
Jordan Simonovski:
Okay, sorry. Buzz words! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual, and you can pay people money if you do breach those. An SLO is something you set in your team, and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary; it's not unreliable or reliable. Most of the time, it's mostly reliable. And this gives us a better shared language, I guess. And you can have a read of the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.
Jordan Simonovski:
Datadog, I think the last time I used it, had an SLO offering. But I think, like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability. You'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user facing things are we implementing that we can start to measure?"
Jordan Simonovski:
Other things you could probably look at, and I think they're all covered in the book anyway, are things like data freshness. You want to make sure the data users are being shown is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer a ping check, where you're just saying, "Yes, HTTP, okay. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
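As a rough illustration of the arithmetic behind that: a service level indicator (SLI) is just good events over total events, and the error budget is whatever the SLO target leaves over. The numbers below are made up:

```js
// SLI = good events / total events; against a 99% SLO, the error
// budget is the 1% of events you're allowed to fail.
function sloReport(goodEvents, totalEvents, sloTarget = 0.99) {
  const sli = goodEvents / totalEvents;
  const allowedFailures = totalEvents * (1 - sloTarget);
  const actualFailures = totalEvents - goodEvents;
  return {
    sli, // e.g. 0.9987
    budgetRemaining: 1 - actualFailures / allowedFailures, // 1 = untouched, < 0 = breached
  };
}

// "My user can log in" measured over a month of attempts:
console.log(sloReport(998_700, 1_000_000));
// => { sli: 0.9987, budgetRemaining: 0.87 }
```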
Jared Kells:
We're going to talk a little bit about state, and how with some of these very front end heavy applications that we're building, so the applications we build basically run inside the browser, the traditional state as you would think about it is just calling a very simple API that writes some things into the database, with some authentication and that sort of stuff. So in terms of reliability of the services, they're really reliable. Those tiny APIs just never have problems, because they're just so simple. And they've got plenty of monitoring around them. But all our state is actually, when you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?
Jess Belliveau:
A big thing is really, there's no one-size-fits-all as well. When we talk about the SLO stuff, it's understanding what is important to, not so much maybe your company, but your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge of front end stuff as well, but when we say we want to observe the state, we don't necessarily mean specifically just the state. You could want to understand, with each one of those APIs, when it's firing, what the request response time is for that API. So that might be an important metric, so you can start to see if one of those APIs is introducing latency and your user experience is degraded. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release, and since then, now we're seeing it's in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team then can look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
Jared Kells:
That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front end application not running in the same order as it used to, or some complex interaction we didn't think of, so it's requesting more data than expected. The APIs themselves are never slow, for the most part, but we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long it took for this particular interaction to happen.
Jess Belliveau:
Yeah. I've never done that sort of side of things. The other thing, I guess, is that you could potentially be impacted by end user manifestations as well. You could perceive-
Jared Kells:
Yeah sure.
Jess Belliveau:
... degraded performance on their laptop or something, or their ISP or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.
Jordan Simonovski:
Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you'll upload a source map of your minified JS to Sentry, deploy your code, and then if something does break or work in a fairly unexpected way, Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-
Jared Kells:
Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
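A rough sketch of that breadcrumb idea: keep the last N action labels, not the state itself, and attach them to any crash report. The names here are hypothetical:

```js
// Ring buffer of recent action labels (not the state itself).
const MAX_ACTIONS = 50;
const recentActions = [];

function recordAction(label) {
  recentActions.push({ label, at: Date.now() }); // e.g. "issue.summary.updated"
  if (recentActions.length > MAX_ACTIONS) recentActions.shift();
}

// Attach the breadcrumb trail to any crash report that gets sent.
function buildCrashReport(error) {
  return {
    message: error.message,
    stack: error.stack,
    actions: [...recentActions], // what the user did leading up to the crash
  };
}
```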
Jess Belliveau:
Yeah, that's almost like, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're more shifting that. That's sort of all stuff in the browser interacting and just having that history of this is what the user did and how they've ended up-
Jared Kells:
In that state.
Jess Belliveau:
In that state, yeah.
Jordan Simonovski:
I guess even if you don't have a lot of microservices, if you're talking about particular, like you're saying for the most part your API requests are fine but sometimes you have particularly large payloads-
Jared Kells:
We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring maybe who we're integrating with. It's actually much more likely that we'll have a performance issue on a Xero API rather than... We don't see it, the browser sees it as well, which is-
Jordan Simonovski:
Yeah, and tracing does help with all of those regressions. Most tracing libraries, like if you're running Node apps or whatever on your backend, and I can just tell you about Node, because I probably have the most experience writing Node stuff, you pretty much just drop dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with. Like if you're working with Express or a lot of HTTP libraries, as well as a few AWS services, it will kind of hook itself into that. And you can actually pinpoint, it will show you on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
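For reference, the basic setup Jordan describes looks roughly like this. The route is made up; the key detail is that the tracer is initialised before anything else is required, so supported libraries get patched:

```js
// dd-trace must be initialised before other modules are required,
// so it can patch supported libraries (Express, HTTP clients, etc.).
const tracer = require("dd-trace").init();

const express = require("express"); // required after init, so it's instrumented
const app = express();

app.get("/orders/:id", (req, res) => {
  // Requests through patched libraries are traced automatically;
  // no per-route tracing code is needed for the basic case.
  res.json({ id: req.params.id });
});

app.listen(3000);
```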
Jess Belliveau:
It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third party providers as well. And something I think that's really important, that sometimes gets missed, is so many of us today are relying on third party providers; AWS is a huge thing, and a lot of people are writing apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever is 100% up time, always available, and they don't write their code in such a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API and they don't implement exponential backoff. So they're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience and have a retry mechanism automatically built in, that sort of stuff. It doesn't really tie into the observability thing, but it's something.
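A minimal sketch of the retry pattern Jess describes, with exponential backoff and jitter. The helper name and defaults are made up for illustration:

```js
// Retry a flaky upstream call with exponential backoff plus jitter.
async function withBackoff(fn, { retries = 5, baseMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // budget exhausted: surface the error
      const delayMs = baseMs * 2 ** attempt * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap the upstream call instead of failing on the first throttle, e.g.
// await withBackoff(() => s3Client.send(command));
```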
Jared Kells:
And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.
Jess Belliveau:
Well, they're using your app. Exactly right. It reflects on you sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when it's that case. Yeah.
Jared Kells:
Well, I think we're going to have to call it, this podcast, because it's been an hour. We were instructed max 45 minutes.
Jess Belliveau:
We could just keep going. We might need a part two! Maybe we can request [crosstalk 00:39:21].
Jared Kells:
Maybe! Yeah.
Jess Belliveau:
Or we'll just start our own podcast! Yeah.
Jared Kells:
So what were your biggest learnings today, given Angad and I are just learning about observability? Angad, what was your biggest learning today about observability?
Angad Sethi:
My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the unknown unknowns. I don't know if that's a good takeaway, but...
Jess Belliveau:
Any takeaway is a good takeaway! What about you, Jared?
Jared Kells:
I think, because we were going to talk about state management, and part of it was how we have this ability at the moment, the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and just see exactly how it was, just the way our state's designed. But what might be even cooler is to build maybe some observability into that front end for support. I'm thinking, we have this button to send us your support information that sends us a bunch of the state, but instead of just console logging to the browser log, we could be logging in our front end somewhere, so that when customers click "send support information," they're sending us the actions that they performed.
Jared Kells:
Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.
Jess Belliveau:
Yeah, for sure. It'd probably be a lot less intrusive, as well, than some of the third party stuff that I've seen around.
Jared Kells:
Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.
Jess Belliveau:
Yeah
Jared Kells:
You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.
Jess Belliveau:
Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...
Jared Kells:
Thanks, all!
Jess Belliveau:
Thanks for having us, yeah.
Jared Kells:
Thanks all for your time, and thanks everybody for listening.
Jordan Simonovski:
Thanks everyone.
Angad Sethi:
That was [inaudible 00:41:55].
Jess Belliveau:
Tune in next week!
- Podcast
Easy Agile Podcast Ep.34 Henrik Kniberg on Team Productivity, Code Quality, and the Future of Software Engineering
TL;DR
Henrik Kniberg, the agile coach behind Spotify's model, discusses how AI is fundamentally transforming software development. Key takeaways: AI tools like Cursor and Claude are enabling 10x productivity gains; teams should give developers access to paid AI tools and encourage experimentation; coding will largely disappear as a manual task within 3–4 years; teams will shrink to 2 people plus AI; sprints will become obsolete in favour of continuous delivery; product owners can now write code via AI, creating pull requests instead of user stories; the key is treating AI like a brilliant intern – when it fails, the problem is usually your prompt or code structure, not the AI. Bottom line: Learn to use AI now, or risk being left behind in a rapidly changing landscape.
Introduction
Artificial intelligence is fundamentally reshaping how software teams work, collaborate, and deliver value. But with this transformation comes questions: How do we maintain team morale when people fear being replaced? What happens to code quality when AI writes most of the code? Do traditional agile practices like sprints still make sense?
In this episode, I sit down with Henrik Kniberg to tackle these questions head-on. Henrik is uniquely positioned to guide us through this transition – he's the agile coach and entrepreneur who pioneered the famous Spotify model and helped transform how Lego approached agile development. Now, as co-founder of Abundly AI, he's at the forefront of helping teams integrate AI into their product development workflows.
This conversation goes deep into the practical realities of AI-powered development: from maintaining code review processes when productivity increases 10x, to ethical considerations around AI usage, to what cross-functional teams will look like in just a few years. Henrik doesn't just theorise – he shares real examples from his own team, where their CEO (a non-coder) regularly submits pull requests, and where features that once took a sprint can now be built during a 7-minute subway ride.
Whether you're a developer wondering if AI will replace you, a product owner looking to leverage these tools, or a leader trying to navigate this transformation, this episode offers concrete, actionable insights for thriving in the AI era.
About Our Guest
Henrik Kniberg is an agile coach, author, and entrepreneur whose work has shaped how thousands of organisations approach software development. He's best known for creating the Spotify model – the squad-based organisational structure that revolutionised how large tech companies scale agile practices. His work at Spotify and later at Lego helped demonstrate how agile methodologies could work at enterprise scale whilst maintaining team autonomy and innovation.
Henrik's educational videos have become legendary in the agile community. His "Agile Product Ownership in a Nutshell" video, created over a decade ago, remains one of the most-watched and shared resources for understanding product ownership, with millions of views. His ability to distil complex concepts into simple, visual explanations has made him one of the most accessible voices in agile education.
More recently, Henrik has turned his attention to the intersection of AI and product development. As co-founder of Abundly AI, he's moved from teaching about agile transformation to leading AI transformation – helping companies and teams understand how to effectively integrate generative AI tools into their development workflows. His approach combines his deep understanding of team dynamics and agile principles with hands-on experience using cutting-edge AI tools like Claude, Cursor, and GitHub Copilot.
Henrik codes daily using AI and has been doing so for over two and a half years, giving him practical, lived experience with these tools that goes beyond theoretical understanding. He creates educational content about AI, trains teams on effective AI usage, and consults with organisations navigating their own AI transformations. His perspective is particularly valuable because he views AI through the lens of organisational change management – recognising that successful AI adoption isn't just about the technology, it's about people, culture, and process.
Based in Stockholm, Sweden, Henrik continues to push the boundaries of what's possible when human creativity and AI capabilities combine, whilst maintaining a pragmatic, human-centred approach to technological change.
Transcript
Note: This transcript has been lightly edited for clarity and readability.
Maintaining Team Morale and Motivation in the AI Era
Tenille Hoppo: Hi there, team, and welcome to this new episode of the Easy Agile Podcast. My name is Tenille Hoppo, and I'm feeling really quite lucky to have an opportunity to chat today with our guest, Henrik Kniberg.
Henrik is an agile coach, author, and entrepreneur known for pioneering agile practices at companies like Spotify and Lego, and more recently for his thought leadership in applying AI to product development. Henrik co-founded Abundly AI, and when he isn't making excellent videos to help us all understand AI, he is focused on the practical application of generative AI in product development and training teams to use these technologies effectively.
Drawing on his extensive experience in agile methodologies and team coaching, Henrik seems the perfect person to learn from when thinking about the intersection of AI, product development, and effective team dynamics. So a very warm welcome to you, Henrik.
Henrik Kniberg: Thank you very much. It's good to be here.
Tenille: I think most people would agree that motivated people do better work. So I'd like to start today by touching on the very human element of this discussion and helping people maintain momentum and motivation when they may be feeling some concern or uncertainty about the upheaval that AI might represent for them in their role.
What would you suggest that leaders do to encourage the use of AI in ways that increase team morale and creativity rather than risking people feeling quite concerned or even potentially replaced?
Henrik: There are kind of two sides to the coin. There's one side that says, "Oh, AI is gonna take my job, and I'm gonna get fired." And the other side says, "Oh, AI is going to give me superpowers and give us all superpowers, and thereby give us better job security than we had before."
I think it's important to press on the second point from a leader's perspective. Pitch it as this is a tool, and we are entering a world where this tool is a crucial tool to understand how to use – in a similar way that everyone uses the Internet. We consider it obvious that you need to know how to use the Internet. If you don't know how to use the Internet, it's going to be hard.
"I encourage people to experiment, give them access to the tools to do so, and encourage sharing. And don't start firing people because they get productive."
I also find that people tend to get a little bit less scared once they learn to use it. It becomes less scary. It's like if you're worried there's a monster under your bed, maybe look under your bed and turn on the lights. Maybe there wasn't a monster there, or maybe it was there but it was kind of cute and just wanted a hug.
Creating a Culture of Safe Experimentation
Tenille: I've read that you encourage experimentation with AI through learning – I agree it's the best way to learn. What would you encourage leaders and team leaders to do to create a strong culture where teams feel safe to experiment?
Henrik: There are some things. One is pretty basic: just give people access to good AI tools. And that's quite hard in some large organisations because there are all kinds of resistance – compliance issues, data security issues. Are we allowed to use ChatGPT or Claude? Where is our data going? There are all these scary things that make companies either hesitate or outright try to stop people.
Start at that hygiene level. Address those impediments and solve them. When the Internet came, it was really scary to connect your computer to the Internet. But now we all do it, and you kind of have to, or you don't get any work done. We're at this similar moment now.
"Ironically, when companies are too strict about restricting people, then what people tend to do is just use shadow AI – they use it on their own in private or in secret, and then you have no control at all."
Start there. Once people have access to really good AI tools, then it's just a matter of encouraging and creating forums. Encourage people to experiment, create knowledge-sharing forums, share your own experiments. Try to role-model this yourself. Say, "I tried using AI for these different things, and here's what I learned." Also provide paths for support, like training courses.
The Right Mindset for Working with AI
Tenille: What would you encourage in team members as far as their mindset or skills go? Certainly a nature of curiosity and a willingness to learn and experiment. Is there anything beyond that that you think would be really key?
Henrik: It is a bit of a weird technology that's never really existed before. We're used to humans and code. Humans are intelligent and kind of unpredictable. We hallucinate sometimes, but we can do amazing things. Code is dumb – it executes exactly what you told it to do, and it does so every time exactly the same way. But it can't reason, it can't think.
Now we have AI and AI agents which are somewhere in the middle. They're not quite as predictable as code, but they're a lot more predictable than humans typically. They're a lot smarter than code, but maybe not quite as smart as humans – except for some tasks when they're a million times smarter than humans. So it's weird.
You need a kind of humble attitude where you come at it with a mindset of curiosity. Part of it is also to realise that a lot of the limitation is in you as a user. If you try to use AI for coding and it wrote something that didn't work, it's probably not the model itself. It's probably your skills or lack of skills because you have to learn how to use these tools. You need to have this attitude of "Oh, it failed. What can I do differently next time?" until you really learn how to use it.
"There can be some aspect of pride with developers. Like, 'I've been coding for 30 years. Of course this machine can't code better than me.' But if you think of it like 'I want this thing to be good, I want to bring out the best in this tool' – not because it's going to replace me, but because it's going to save me a tonne of time by doing all the boring parts of the coding so I can do the more interesting parts – that kind of mindset really helps."
Maintaining Code Quality and Shared Understanding
Tenille: Our team at Easy Agile is taking our steps and trying to figure out how AI is gonna work best for us. I put the question out to some of our teams, and there were various questions around people taking their first steps in using AI as a co-pilot and producing code. There are question marks around consistency of code, maintaining code quality and clean architecture, and even things like maintaining that shared understanding of the code base. What advice do you have for people in that situation?
Henrik: My first piece of advice when it comes to coding – and this is something I do every day with AI, I've been doing for about two and a half years now – is that the models now, especially Claude, have gotten to the level where it's basically never the AI's fault anymore. If it does anything wrong, it's on you.
You need to think about: okay, am I using the wrong tool maybe? Or am I not using the tool correctly?
For example, the current market leader in terms of productivity tools with AI is Cursor. There are other tools that are getting close like GitHub Copilot, but Cursor is way ahead of anything else I've seen. With Cursor, it basically digs through your code base and looks for what it needs.
But if it fails to find what it needs, you need to think about why. It probably failed for the same reason a human might have failed. Maybe your code structure was very unstructured. Maybe you need to explain to the AI what the high-level structure of your code is.
"Think of it kind of like a really smart intern who just joined your team. They're brilliant at coding, but now they got confused about something, and it's probably your code – something in it that made it confused. And now you need to clarify that."
There are ways to do that. In Cursor, for example, you can create something called cursor rules, which are like standing documents that describe certain aspects of your system. In my team, we're always tweaking those rules. Whenever we find that the AI model did something wrong, we're always analysing why. Usually it's our prompt – I just phrased it badly – or I just need to add a cursor rule, or I need to break the problem down a little bit.
It's exactly the same thing as if you go to a team and give them this massive user story that includes all these assumptions – they'll probably get some things wrong. But if you take that big problem and sit down together and analyse it and split it into smaller steps where each step is verifiable and testable, now your team can do really good work. It's exactly the same thing with AI.
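To make the cursor rules idea concrete, here's an illustrative example. The conventions and project layout below are invented, and the exact file location depends on your Cursor version (historically a .cursorrules file in the repo root, more recently files under .cursor/rules):

```
# Illustrative Cursor rule: standing context the model sees alongside prompts.
This is a Node/TypeScript monorepo.
- HTTP route handlers live in packages/api/src/routes; business logic lives in packages/core.
- Route handlers never talk to the database directly; always go through the repository layer in packages/core/src/repositories.
- Every new endpoint needs an integration test under packages/api/test.
```

Whatever the format, the point Henrik makes stands: when the model gets something wrong, encode the missing context as a standing rule rather than re-explaining it in every prompt.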
Addressing the Code Review Bottleneck
Tenille: One of our senior developers found that he was outputting code at a much greater volume and faster speed, but the handbrake he found was actually their code review processes. They were keeping the same processes they had previously, and that was a bit of a handbrake for them. What kind of advice would you have there?
Henrik: This reminds me of the general issue with any kind of productivity improvement. If you have a value stream, a process where you do different parts – you do some development, some testing, you have some design – whenever you take one part of the process and make it super optimised, the bottleneck moves to somewhere else.
If testing is no longer the bottleneck, maybe coding is. And when coding is instant, then maybe customer feedback – or lack of customer feedback – is the bottleneck. The bottleneck just keeps moving. In that particular case, the bottleneck became code review. So I would just start optimising that. That's not an AI problem. It's a process problem.
Look at it: what exactly are we trying to do when we review? Maybe we could think about changing the way we review things. For example, does all code need to be reviewed? Would it be enough that the human who wrote it and the AI, together with the human, agree that this is fine? Or maybe depending on the criticality of that change, in some cases you might just let it pass or use AI to help in the reviewing process also.
"I think there's value in code review in terms of knowledge sharing in a large organisation. But maybe the review doesn't necessarily need to be a blocking process either. It could be something you go back and look at – don't let it stop you from shipping, but maybe go back once per week and say, 'Let's look at some highlights of some changes we've made.'"
We produce 10 times more code than in the past, so reviewing every line is not feasible. But maybe we can at least identify which code is most interesting to look at.
Ethical Considerations: Balancing Innovation with Responsibility
Tenille: Agile emphasises people over process and delivering value to customers. Now with AI in the mix, there's potential for raising some ethical considerations. I'm interested in your thoughts on how teams should approach these ethical considerations that come along with AI – things like balancing rapid experimentation against concerns around bias, potential data privacy concerns.
Henrik: I would treat each ethical question on its own merits. Let me give you an example. When you use AI – let's say facial recognition technology that can process and recognise faces a lot better than any human – I kind of put that in the bucket of: any tool that is really useful can also be used for bad things. A hammer, fire, electricity.
That doesn't have so much to do with the tool itself. It has much more to do with the rules and regulations and processes around the tool. I can't really separate AI in that sense. Treat it like any other system. Whenever you install a camera somewhere, with or without AI, that camera is going to see stuff. What are you allowed to do with that information? That's an important question. But I don't think it's different for AI really, in that sense, other than that AI is extremely powerful. So you need to really take that seriously, especially when it comes to things like autonomous weapons and the risk of fraud and fake news.
"An important part of it is just to make it part of the agenda. Let's say you're a recruitment company and you're now going to add some AI help in screening. At least raise the question: we could do this. Do we want to do this? What is the responsible way to do it?"
It's not that hard to come up with reasonable guidelines. Obviously, we shouldn't let the AI decide who we're going to hire or not. That's a bad idea. But maybe it can look at the pile of candidates that we plan to reject and identify some that we should take a second look at. There's nothing to lose from that because that AI did some extra research and found that this person who had a pretty weak CV actually has done amazing things before.
We're actually working with a company now where we're helping them build some AI agents. Our AI agents help them classify CVs – not by "should we hire them or not," but more like which region in Sweden is this, which type of job are we talking about here. Just classifying to make it more likely that this job application reaches the right person. That's work that humans did before with pretty bad accuracy.
The conclusion was that the AI, despite having biases like we humans do, seemed to have fewer biases than the humans. Mainly because it's never going to be in a bad mood because it hasn't had its coffee today. It'll process everybody on the same merits.
I think of it like a peer-to-peer thing. Imagine going to a doctor – ideally, I want to have both a human doctor and an AI doctor side by side, just because they both have biases, but now they can complement each other. It's like having a second opinion. If the AI says we should do this and the doctor says, "No, wait a second," or vice versa, having those two different opinions is super useful.
Parallels Between Agile and AI Transformations
Tenille: You're recognised as one of the leading voices in agile software development. I can see, and I'm interested if you do see, some parallels between the agile transformations that you led at Spotify and Lego with the AI transformations that many businesses are looking at now.
Henrik: I agree. I find that when we help companies transition towards becoming AI native, a lot of the thinking is similar to agile. But I think we can generalise that agile transformations are not really very special either – it's organisational change.
There are some patterns involved regardless of whether you're transitioning towards an agile way of working or towards AI. Some general patterns such as: you've got to get buy-in, it's useful to do the change in an incremental way, balance bottom-up with top-down. There are all these techniques that are useful regardless. But as an agilist, if you have some skills and competence in leading and supporting a change process, then that's going to be really useful also when helping companies understand how to use AI.
Tenille: Are you seeing more top-down or bottom-up when it comes to AI transformations?
Henrik: So far it's quite new still. The jury's not in yet. But so far it looks very familiar to me. I'm seeing both. I'm seeing situations where it's pure top-down where managers are like "we got to go full-out AI," and they push it out with mixed results. And sometimes just completely bottom-up, also with mixed results.
Sometimes something can start completely organically and then totally take hold, or it starts organically and then gets squashed because there was no buy-in higher up. I saw all of that with agile as well. My guess is in most cases the most successful will be when you have a bit of both – support and guidance from the top, but maybe driven from the bottom.
"I think the bottom-up is maybe more important than ever because this technology is so weird and so fast-moving. As a leader, you don't really have a chance if you try to control it – you're going to slow things down to an unacceptable level. People will be learning things that you can't keep up with yourself. So it's better to just enable people to experiment a lot, but then of course provide guidance."
AI for Product Owners: From Ideation to Pull Requests
Tenille: You're very well known for your guidance and for your ability to explain quite complex concepts very simply and clearly. I was looking at your video on YouTube today, the Agile Product Ownership in a Nutshell video, which was uploaded about 12 years ago now. Thinking about product owners, there's a big opportunity now with AI for generating ideas, analysing data, and even suggesting new features. What's your advice for product owners and product managers in using AI most effectively?
Henrik: Use it for everything. Overuse it so you can find the limits. The second thing is: make sure you have access to a good AI model. Don't use the free ones. The difference is really large – like 10x, 100x difference – just in paying like $20 per month or something. At the moment, I can particularly strongly recommend Claude. It's in its own category of awesomeness right now. But that of course changes as they leapfrog each other. But mainly: pay up, use a paid model, and then experiment.
For product owners, typical things are what you already mentioned – ideation, creating good backlog items, splitting a story – but also writing code. I would say as a PO, there is this traditional view, for example in Scrum, that POs should not be coding. There's a reason for that: because coding takes time, and then as PO you get stuck in details and you lose the big picture.
Well, that's not true anymore. There are very many things that used to be time-consuming coding that are now basically a five-minute job with a good prompt.
"Instead of wasting the team's time by trying to phrase that as a story, just phrase it as a pull request instead and go to the team and demonstrate your running feature."
That happened actually today. Just now, our CEO, who's not a coder, came to me with a pull request. In fact, quite often he just pushes directly to a branch because it's small changes. He wants to add some new visualisation for a graph or something in our platform – typically admin stuff that users won't see, so it's quite harmless if he gets it wrong.
He's vibe coding, just making little changes to the admin, which means he never goes to my team and says, "Hey, can you guys generate this report or this graph for how users use our product?" No, he just puts it in himself if it's simple.
Today we wanted to make a change with how we handle payments for enterprise customers. Getting that wrong is a little more serious, and the change wasn't that hard, but he just didn't feel completely comfortable pushing it himself. So he just made a PR instead, and then we spent 15 minutes reviewing it. I said it was fine, so we pushed it.
It's so refreshing that now anybody can code. You just need to learn the basic prompting and these tools. And then that saves time for the developers to do the more heavyweight coding.
Tenille: It's an interesting world where we can have things set up so that anyone can jump in and, with the right guardrails, create something. It probably makes Friday demos a lot more interesting than they used to be.
Henrik: I would like to challenge any development team to let their stakeholders push code, and then find out whatever's stopping you from doing that and fix that. Then you get to a very interesting space.
Closing the Gap Between Makers and Users
Tenille: A key insight from your work with agile teams in the past has been to really focus on minimising that gap between maker and user. Do you think that AI helps to close that gap, or do you think it potentially risks widening it if teams are focusing too much on AI predictions and stop talking to their customers effectively?
Henrik: I think that of course depends a lot on the team. But from what I've seen so far, it massively reduces the gap. Because if I don't have to spend a week getting a feature to work, I can spend an hour instead. Then I have so much more time to talk to my users and my customers.
If the time to make a clickable prototype or something is a few seconds, then I can do it live in real time with my customers, and we can co-create. There are all these opportunities.
I find that – myself, my teams, and the people I work with – we work a lot more closely with our users and customers because of this fast turnaround time.
"Just yesterday I was teaching a course, and I was going home sitting on the subway. It was a 15-minute subway ride. I finally got a seat, so I had only 7 minutes left. There's this feature that I wanted to build that involved both front-end and back-end and a database schema change. Well, 5 minutes later it was done and I got off the subway and just pushed it. That's crazy."
Of course, our system is set up optimised to enable it to be that fast. And of course not everything will work that well. But every time it does, I've been coding for 30 years, and I feel like I wake up in some weird fantasy every day, wondering, "Can I really be this productive?" I never would have thought that was possible.
Looking Ahead: The Future of Agile Teams
Tenille: I'd like you to put your futurist hat on for a moment. How do you see the future of agile teamwork in, say, 10 to 15 years time? If we would have this conversation again in 2035, given the exponential growth of AI and improvements over the last two to three years, what do you think would be the biggest change for software development teams in how they operate?
Henrik: I can't even imagine 10 years. Even 5 years is just beyond imagination. That's like asking someone in the 1920s to imagine smartphones and the Internet. I think that's the level of change we're looking at.
I would shorten the time a little bit and say maybe 3 or 4 years. My guess there – and I'm already seeing this transition happen – is that coding will just go away. It just won't be stuff that we humans do because we're too slow and we hallucinate way too much.
But I think engineering and the developer role will still be there, just that we don't type lines of code – in the same way that we no longer make punch cards or we no longer write machine code and poke values into registers using assembly language. That used to be a big part of it, but no longer.
"In the future, as developers, a lot of the work will still be the same. You're still designing stuff, you're thinking about architecture, you're interacting with customers, and you're doing all the other stuff. But typing lines of code is something that we're gonna be telling our kids about, and they're not gonna believe that we used to do that."
The other thing is smaller teams, which I'm already seeing now. I think the idea of a cross-functional team of 5 to 7 people – traditionally that was considered quite necessary in order to have all the different skills needed to deliver a feature in a product. But that's not the case anymore. If you skip ahead 2 or 3 years when this knowledge has spread, I think most teams will be 2 people and an AI, because then you have all the domain knowledge you need, probably.
As a consequence of that, we'll just have more teams. More and smaller teams. Of course, then you need to collaborate between the teams, so cross-team synchronisation is still going to be an issue.
Also, I'm already seeing this now, but this concept of sprints – the whole point is to give a team some peace of mind to build something complex, because typically you would need a week or two to build something complex. But now, when it takes a day and some good prompting to do the same thing that would have taken a whole sprint, then the sprint is a day instead. If the sprint is a day, is there any difference between a sprint planning meeting and a daily standup? Not really.
I think sprints will just kind of shrink into oblivion. What's going to be left instead is something a little bit similar – some kind of synchronisation point or follow-up point. Instead of a sprint where every 2 weeks we sit down and try to make a plan, I think it'll be very much continuous delivery on a day-to-day basis. But then maybe every week or two we take a step back and just reflect a little bit and say, "Okay, what have we been delivering the past couple of weeks? What have we been learning? What's our high-level focus for the next couple of weeks?" A very, very lightweight equivalent of a sprint.
I feel pretty confident about that guess because personally, we are already there with my team, and I think it'll become a bit of a norm.
Final Thoughts: Preparing for the Future
Henrik: No one knows what's gonna happen in the future, and those who say they do are kidding themselves. But there's one fairly safe bet though: no matter what happens in the future with AI, if you understand how to use it, you'll be in a better position to deal with whatever that is. That's why I encourage people to get comfortable with it, get used to using it.
Tenille: I have a teenage daughter who I'm actually trying to encourage to learn how to use AI, because I feel like when I was her age, the Internet was the thing that was sort of coming mainstream. It completely changed the way we live. Everything is online now. And I feel like AI is that piece for her.
Henrik: Isn't it weird that the generation of small children growing up now are going to consider this to be normal and obvious? They'll be the AI natives. They'll be like, "Of course I have my AI agent buddy. There's nothing weird about that at all."
Tenille: I'll still keep being nice to my coffee machine.
Henrik: Yeah, that's good. Just in case, you know.
---
Thank you to Henrik Kniberg for joining us on this episode of the Easy Agile Podcast. To learn more about Henrik's work, visit Abundly AI or check out his educational videos on AI and agile practices.
Subscribe to the Easy Agile Podcast on your favourite platform, and join us for more conversations about agile, product development, and the future of work.