Hero Background

Our Stories

Webinar: Accelerate enterprise AI impacts with Team Topologies

As we move into 2026, one theme has become unmistakably clear: generative AI (GenAI) acts as an amplifying force. It strengthens what is already working well in an organization – but it also magnifies poor practices and weak decision-making. Where foundations are shaky, GenAI quickly exposes them, increasing business risk through data loss, legal exposure, and reputational damage. This reality has left many leaders asking the same question: where should we start?

In this recent webinar, we explored why the answer is not found in new operating models invented specifically for AI. Instead, the organizational patterns that enabled rapid, sustainable value delivery before GenAI are the very same patterns that enable success with GenAI. At the center of these patterns is empowering teams as long-term stewards of value, with value flow as the primary organizing principle for work.

In the session, agility and transformation experts Chris Ruch (CEO, Agile Rising) and Matthew Skelton (co-author of Team Topologies) unpacked the foundational elements that set organizations up for successful AI transformation—and, more broadly, for any technology-enabled change.

During the discussion, they examined how:

  • Organizations that double down on universal clarity, trust, and shared purpose are best positioned to succeed with both humans and agentic AI
  • Team Topologies provides the “infrastructure for agency” required for effective collaboration between people and AI systems
  • Enterprises in North America are navigating the complexity of secure, compliant AI adoption across multiple tools and platforms
  • Effective AI transformation depends on leaders establishing clear boundaries for trust, accountability, and decision-making
  • As AI reshapes how value is created and flows through organizations, Team Topologies offers a practical approach to designing teams that enable—rather than constrain—that flow

🎥 Watch the full webinar recording below to explore how strong organizational foundations make the difference between AI as a source of risk and AI as a force for sustainable advantage.

Matthew Skelton:

Hi everyone, I’m Matthew Skelton. I’m the co-author of Team Topologies, and I’m here today with Chris Ruch from Agile Rising. Hi Chris.

Chris Ruch:

Hi, Matthew. Great to be here.

Matthew Skelton:

It’s good to be here too. Thank you for joining me. Really appreciate it. Do you want to just give a quick overview of your background and Agile Rising your company and how that fits in with the current shift towards enterprise transformation in general, particularly the AI fueled AI infused?

Chris Ruch:

Yeah, sure. So Agile Rising is a consulting firm based in the United States. We help organisations traditionally, we’ve helped organisations adopt new ways of working, particularly focused around agile, particularly focused around large scale agile in big Fortune 500 companies, government agencies, fast moving startups, and we came to be, my background was in software development, product development, and over the course of the years and then decades got more interested in the process for how good products were developed and the foundation for that being good organisations, good cultures, and that led to finding Agile as the way to help companies improve. And what that has meant has continually changed over the years as new best practices have come in, as new tools have come in. And so now Agile Rising is really focused on helping companies adopt the latest and greatest tools and processes to enable the way that they want to work and the products they want to develop and solve the challenges that they have with their internal processes. I

Matthew Skelton:

It’s good to be here today. I mean your expertise around particularly the enterprise sector or segment or whatever, I think it’s going to be really valuable for lots of people to hear, particularly in the US but also worldwide too. So it is really great to be looking at this stuff together today. So we are towards the end of 2025. Now it’s quite clear that generative AI in particular is here to make some substantial disruption to how things are done in large enterprises. But I think that disruption, it is not clear what that looks like and there’s a lot of people searching around for some clarity about what’s the sensible way to approach things. Does generative AI make things completely new? Do we throw out all of the old stuff that we thought before? Some people think that’s true.

Matthew Skelton:

Personally don’t believe that at all. I think the vast majority of what we’ve learned in the industry, particularly in digitally enabled services like software, call it, I think the vast majority of that stuff remains. We need to learn some new things and apply them in a slightly different way, but I think that there’s a huge amount to learn from organisations like yours, like my experience and people that we work with. I think a lot remains absolutely the same, and that’s what we’ll explore today. We’re looking at accelerating, accelerating enterprise AI impacts with Team Topologies.

Chris Ruch:

This topic is really timely. Every organisation, almost everybody in every organisation is thinking about what impact is AI going to have? How can we use AI? And I think you’re correct that it’s going to be an enhancement, it’s going to be an enabler, but I think the thing that’s really unique about AI when we look back over the last couple of decades is that this is a technology that is touching every segment of the business environment. It’s not just a technology, it’s not just a development tool. It’s getting brought into the everyday work of people throughout the organisation, which leads to the, you can’t really even imagine the impact that it’s going to have because that impact’s going to be so broad across the organisation. And that’s why it’s an interesting topic to talk about.

Matthew Skelton:

Exactly. So I think, as I said, we’re at the end of 2025. We’ve had a few years of generative ally playing out in the workplace and so on and so on. There’s been some interesting gains. There’s been plenty of substantially spicy and expensive disasters or failures that people have had with geometry of AI. For me, the key takeaway is that organisations that have not worked out how to achieve real organisational agility, business agility before AI are going to fail with AI unless they actually get that foundation of true agility first.

Maybe that’s the punchline effectively, but that’s what we’re seeing. I think it’s probably the same for you, but that’s what we’re seeing. You can’t expect AI somehow magically to transform your organisation if you’ve already got some stuff in there which is incredibly ineffective, lots of handoffs, all this kind of stuff. It’s not going to fix that. It can accelerate it if you’ve already got that stuff in place. If loads of stuff has been fixed and you’ve already got a nimble organisation, it can probably accelerate things, but it’s not somehow magically going to get you out of that tar, that swamp of highly coupled like very slow decision making

Chris Ruch:

Fair.

Yeah, it’s not going to get you out of it and it’s probably going to make it even worse. I say one of the things that I frequently tell organisations is if you haven’t been successful with another big transformation effort, if you tried to implement Agile and it didn’t really go very well and then you tried implementing DevOps and that didn’t go very well, and so then you tried product operating model and that didn’t go very well, you’re going to get rolled over trying to like a steamroller by AI if you try to implement AI and try to transform your organisation with it. And the reason that that’s the case is because those fundamentals aren’t there. If you have the inefficiencies, the things that prevented previous process improvement efforts or your architectural problems with your products, AI is just going to compound them and it’s going to let people go faster. It’s going to let the organisation move much faster, but that’s going to be moving faster in the wrong way. It’s going to be creating more problems. It’s going to be amplifying problems instead of solving ’em unless you really apply the focus to identifying the inefficiencies and getting down to implementing good processes, good agile, lean processes, good team organisation and Team Topologies that then enable you to get the organisation going in the right direction. And then you can go very quickly. And then AI and generative AI can be a real productivity booster. Sure.

Matthew Skelton:

It’s worth us looking at a couple of examples. So there’s generative AI used for content generation, so it can be creating reports, documents, whatever, blog posts, a whole lot of stuff, but when the quality of that content generation is not the thing that’s being measured when it’s number of a number of words or it’s the number of documents you produced, then there’s a real danger that organisations are just piling up mountains of very poor quality or dubious quality material, which then causes downstream problems for other people. There was a study from Harvard Business Review recently, when was it? In September time anyway, and the title of the article is “AI-Generated “Workslop” Is Destroying Productivity.” That phrase “workslop”, I think they invented it. But this idea basically that if people are measured on the number of documents they produce, then guess what?

They’re going to use AI to produce load and load documents. They’re not going to really care about the quality or at least secondary. Does that help with value delivery? Almost certainly not. And there’s a similar problem with organisations using AI to generate code as in software. The number of lines of code is not the best measure of value from software by any means. It never has been. And now it’s quite clear just because you can generate a million lines of code. Is it any good? Does it meet any user needs? Does it meet any business needs? Who knows, right? But that’s the bad version. If you are already incentivizing people in the organisation to focus on lines of code or focus on number of documents, then you’re not going to succeed withAI that’s just going to make that whole problem way worse.

Chris Ruch:

And this has been one of the big challenges with agile for decades, right? Measuring teams with story points and velocity and what are the outputs instead of looking at the, what’s the value creation, what are we actually generating and what impact is it having? So if you don’t have that value focused key performance indicators and measuring what’s going out into your user’s hands, if you don’t already have that, just piling on more of those bad metrics are going to encourage bad behaviour, which encourages poor quality and ultimately measuring the wrong thing.

Matthew Skelton:

There’s clearly ways to use generative AI in particular. It’s a really good benefit. We can rapidly prototype new ideas, we can test out new combinations very easily if it previously would’ve taken two hours to generate a particular kind of content, and now we can generate that very, very quickly. We can generate 10 versions of that or 120 versions of that, test them out and suddenly we’re actually getting better quality because we’ve now got increased competence of this version of this report or this digital service or whatever. We’ve got more confidence that that’s the version which is actually going to really land because we’re properly testing it because the cost of generating that artefact is way lower and therefore, we can automate a bunch of other things. The cost of testing that thing is way lower. That’s a good use of it. That’s a good place to be in, getting a rapid feedback loop. We’ve reduced the uncertainty around the fit of that thing, content code, whatever it is that’s increasing value effectively. We’re increasing the value to the value consumer. But again, having that perspective in place, having that value first perspective is then guiding how we use AI. I know you know this, but it’s surprising perhaps how many organisations don’t have that value first approach and so therefore not really well set up to

Chris Ruch:

Take. And I think in that example that you just gave, the two key things are that you have a good quality data set. You’re starting with information that is good and accurate. You’re starting with your own data and you have a human in the loop that’s using AI to generate multiple versions of something that you already know is good quality. So those are some of the fundamental building blocks of using AI. Well, and not just going out and what the internet says or what’s, what’s an AI tool just making up and making sure that you’re targeting it with the right information? So yeah, we see people being very successful in using AI or organisations in creating AI agents that are going and kind of taking over manual processes that people are doing, comparing this to that, doing quality controller or taking data, and we are using it all the time in helping companies, helping product managers create product backlogs, take the ideas, take the requirements from a traditional requirements document, get it into a backlog, works really well for that because you’re giving it data that you own and that is good and then you’re reformatting it or doing something else with it

Matthew Skelton:

Effectively. It is a kind of translation from a set of high quality data. You’re kind of translating it because that’s obviously, well, maybe it’s not obvious. That’s the original use of large language models is translation, which you made that clear, we maybe talk about a little bit more about later. But that translation task, you’re reformat, reformatting it into another version of similar thing, remixing it. That’s actually kind of a good use for really

Chris Ruch:

Good,

Matthew Skelton:

A really good fit. Maybe it’s a good time to talk about these kinds of things. A little bit more depth. I mean around governance. So you mentioned the high quality of the data. That’s one aspect of governance, but there’s a whole bunch of different things, kind of what’s the provenance of data? I mean data provenance has always been important, but it’s particularly important now

Partly from technical details. You need to know where the data comes from because if you can’t, you’re not sure, and it’s not high quality as it got injection, have prompt injection in there. That’s a very, very serious threat, which has sort of come almost out the blue, the equivalent kind of threat in terms of injection threat from I guess the nineties and the two thousands. Something like SQL injection is relatively well understood by the early 2020s that we’re in now relatively well understood, relatively easy to defend against, but suddenly you’ve got this entirely new threat vector. The data might contain the execution. Since the 1960s, we’ve been able to kind of assume or reason when building digital systems that the data and the instructions are separate, but now we’ve got data instructions potentially all mixed together. It makes things pretty spicy

Chris Ruch:

And you’re potentially using a large language model that you have no idea what it was trained on. You have no idea where the data came from that and was it legally obtained, is it accurate? There’s a bunch of questions with those kind of publicly available LLMs at this point,

Matthew Skelton:

And the traceability of how content was generated feels kind of important. How do you get to this output? Show me the different steps now whether that’s just for competence in the organisation or whether there’s some internal audit or even external audit at some point, if you don’t have the traceability, I mean, we’re not saying that the generative AI is always repeatable, but if you don’t have a series, you’re not capturing the prompt. If you’re not capturing the details of the sources and the provenance of the sources, there’s no story for the auditor. And if I were a COO or someone responsible for this stuff inside a large organisation, and if I don’t have a story, I’m not very happy. I’m not in a position where I’m feeling particularly safe with that because my neck’s on the line for failures of governance and so on and so on. So a bit happens in pre AI, IT and technology usage. What is the start from the audit story? If we start from that audit story expecting to be able to explain how this thing happened or explain how we got to a particular piece of data, then we’re actually in a quite good place.

I’m not saying it’s easy, but it’s a good way to think through end to end. How are we going to do things and put things together If we lead up to last, we’re missing a huge key part of the enterprise management aspect of rolling out any technology, but particularly, yeah,

Chris Ruch:

Yeah, it’s that audit trail. It’s the quality of the data, it’s the defensibility of decision making. We have customers that are making credit decisions and are they going to work with a customer or not work with a customer and applying AI to that and who’s making the decision about whether you approve or deny an applicant and that’s something that somebody can be coming back and asking you to explain in the future. There’s all sorts of things. And then I think from the COO’s perspective or from the CIO’s perspective, we’re seeing a lot of organisations who are going through a traditional project or product evaluation of what AI are we going to select? What are we going to bring in as our enterprise AI solution? But at the same time, you have people in all sorts of other parts of the organisation who are just using AI off the shelf or it’s coming baked into a product that they already have.

And so it’s creating something very similar to the shadow IT problem that happened in the early two thousands and the rise of cloud computing and suddenly you’re, every department is picking the solution that works best for them or is most tailored to their part of the organisation, which is potentially great. But now you’re not just looking at what’s the providence of that data, what processes going into the quality of the AI for just one AI tool or one LLM. You’re now looking at it for many 10, 20 different products that might be getting used and how do you stay on top of those? So from our perspective, having the governance of what your process is for evaluating AI tools and then having the technical tools in place to look for AI activity happening, identifying it, ensuring that it’s meeting your corporate governance standards. That’s something that can’t be done manually.

That’s something that you need a tool, IBM’s governance or a product like that that can sit on top of your whole collection of AI. And unfortunately when I am talking to people aboutAI, even enterprise wide AI adoptions, it’s often only talking about prompt engineering, front end use of AI and you start talking about that kind of governance layer and how you’re going to enforce things across it and you get kind of deer in the headlights or, oh, I don’t think we’re thinking about that. And then that’s kind of like, well, that’s a huge security risk if you’re not thinking about that and getting the systems in place to deal with it.

Matthew Skelton:

I think it is important to note that the organisation that seem to be approaching this more holistically and effectively are not treating AI as a separate little piece of technology. It doesn’t sit underneath the CTO or CIO area or CDIO. It’s sitting underneath COO ideally or somewhere like that because it is, as you said, it’s got this enterprise wide effect. It’s not just a technology we bolt on. Arguably something similar was true with cloud in the sense of the disruptive effect back in 2008, 2010s and so on and more recently. But even that arguably still just about manageable by kind of CIO or technology or engineering, whatever you want to call it. But I think it’s quite clear that generative AI particularly, although it technically is a digital product, it is a digital capability, siloing that off into technology feels like the wrong thing to do. It quite clearly has such, it’s threaded, the capability is threaded throughout the entire organisation, as you said in every business process, assuming it’s just technology feels like the wrong thing to do. So how do we empower the COO for example, to have enough awareness and enough oversight and visibility and confidence so that they can go to sleep at night rather than be lying awake thinking, oh my, I’m going to go to prison for some AI breach, AI related data breach

Chris Ruch:

And how do they have the collaborative structures in place so that they’re hearing from their constituents about what’s needed, what kind of solution? Because it’s not a one size fits all solution. There’s not, there’s, we’re going to continue to see AI proliferate throughout a bunch of different tools and there needs to be a way to bring in that kind of best of breed in the right place, in the right organisation at the right time.

Matthew Skelton:

There’s obviously a difference between, there’s two kinds of ways of using generative AI in particular inside IT and modern enterprise at least two. But let’s talk about two. One is, well, let’s use some kind of AI tools out of the box, whatever they might be. It might be a prompts type tool, it might be a code generation tool, it might be, I don’t know, all kinds of different tooling, but you’re using the tool, but there’s a different class of use of generativeAI, which is like we are going to, sorry, as part of that first type is actually code generation. We are using AI to write some traditional code effectively that then executes in a normal way. The second version though is actually building solutions and products. They are themselves AI enabled, so there’s some sense of generation or non unpredictability and so on and so on. Those two feel, I mean obviously they’re related, but they feel somewhat separate in terms of how we might approach them I think. But I know that you’ve got more experience than I have recently in this space. It’d be interesting to see, to hear what you see in the industry about how organisations are approaching these two different types of

AI adoption.

Chris Ruch:

And I’ll say there’s even a third, which is process automation and replacing manual processes. But I think it’s one of those things where people often talk about AI as if it’s a thing, it’s like one thing that’s being adopted and it’s, there’s a lot of different frameworks, there’s a lot of different ways to use it. And generative AI over the last two years has really taken off because it’s easy to adopt and no matter what your role is, no matter what your hobbies are, no matter what you have to do in your day-to-day life, you can think of ways I can use generative AI to help me with what I do or speed things up. And for a lot of people that’s what AI means right now. It’s how can I use it in my job to speed up what I do or to make my job easier, to free me up to do other things?

And that’s why it being widely used among developers and testers on the technology side being widely used on the business side and that kind of AI adoption goes very much to the heart of how people work and how teams work that it’s a for if we’re significantly speeding up the way that we can do things, what impact does that have on the rest of the other people we work with and how things flow to others. And so that becomes really interesting from an agile process and from a Team Topologies perspective. The other is really a product management function. You’re building AI into your products. You’re either building new products based on AI or what we see with most of our customers is they have existing products and they’re thinking about how we can add AI into that? And in large part, that’s kind of a traditional product management challenge.

We have a new technology, we are putting things on our roadmap, we’re building it out. But it also very frequently in our experience in large organisations is resulting in spinning up new r and d organisations to go often go really deep intoAI, figure out how we might be able to do that and then hand it off to product management to try to commercialise. And so that often is creating totally new teams from a Team Topologies perspective. We’re creating new value streams, we’re creating new teams. So they’re very different in what it means to just use AI versus build AI into a product. And they have unique challenges and they also both impact the organisational design in different ways.

Matthew Skelton:

Yeah, just going back a little bit, like we said before, AI gen value in particular is amplifying good and bad existing good and bad practices or approaches. They put now a position where inside the organisation you’ve got multiple different handoffs along the value flow

The value consumer. You’ve now got one team or one group of people or one person inside that value flow able to generate way more things that the actual time to value is not shrinking. You’re essentially just building inventory in one particular part of the organisation. So again, that’s kind of why I think the Team Topologies principles we’re thinking about organising for value flow. So what are the actual flows of value inside the organisation already or what could they be and organise around that as a starting point combined with thinking about cognitive load. So the way in which we’re going to use generative AI is to reduce extraneous cognitive load on the teams that are sitting inside those value streams. And that simple combination of organised for value flow and thinking about cognitive load to my mind has served really well for guiding use of all kinds of different technologies prior to generative AI and also to be honest up to a point guides the use of how are we going to use generative AI, like what we going to use it for, but in terms of helping people inside the organisation do their work, does it increase their cognitive load?

If it does, why would we use it like that? Can we use it in a different way? They’re actually going to not just speed up what they’re doing, but help ’em to focus more on the core aspects of the value stream that they’re working on rather than get distracted by other things. There’s a way to frame the core principles around Team Topologies that actually help us with pretty much any technology adoption when we’re thinking about knowledge work. I think that ends up being quite useful framing.

Chris Ruch:

And the key there is light leaders always need to be thinking about what has that entire system impact? How do we speed up and how do we improve the quality of the entire system? And what I’m seeing with a lot of the AI adoptions is it’s a very individual adoption. How do we upskill individual people? How do we teach ’em about AI and they make their job faster And it’s kind of, I think what Team Topologies challenges you to think of is, well, how do we not just make the individual better and faster? How do we make the team better and faster and then ultimately the entire system and the whole product delivery process.

Matthew Skelton:

Exactly, exactly. And it’s a bit weird, but we didn’t really, in the original teen parties book, didn’t really talk about it in those terms, but I think you’re absolutely right. This is what was behind the intention behind what a lot of what we were doing is really thinking about value flow, the larger picture. And certainly that’s what we’re seeing from organisations that have adopted Team Topologies since the original book was published in 2019, they’re saying exactly that it’s helped ’em to think about value flow. It’s helped ’em to realise there’s a different way of thinking about knowledge work and doing work and building things and providing value to customers. I think there’s an interesting phrase that came out in some LinkedIn discussions a few months ago. A few of us were talking about generative AI and the challenges, particularly agentic AI. So AI agents, so

AI with some degree of autonomy and decision making or whatever, and someone who’s an AI product manager or product manager expert, AI product expert, his name is Stuart Winter tier, he’s based in the UK, he reread Team Topologies earlier in 2025 and he had kind of a big light bulb moment and he ended up saying that, I’m paraphrasing here, but end up saying that teen apologies is the infrastructure for agency, the rules and the principles and then the guidance that we’ve got in the Team Topologies book are there to empower groups of originally people to do useful work. But it turns out that actually the same principles around good boundaries for value flow limiting the scope of what we focus on and so on and so on, decoupling all this kind of stuff. Turns out that actually works pretty well for AI agents too because effectively it’s knowledge work.

We’re doing the same kind of work. It might be humans, it might be agents, it might be a combination, but we don’t want to accidentally couple concepts that are not supposed to be coupled together. We don’t want to be creating a huge tangled mess of dependencies which are very difficult to diagnose and pull apart later. Having some principles for how we organise work. It works for humans and it seems to work for AI agents too. And it was really interesting to see that, and this is not me saying this, someone who’s an AI product expert reflecting on what Team Topologies is providing. It’s like this almost like foundational, I dunno if you call it like a meta language for how work happens, particularly when we’re focused on value flow. And I think there’s a lot to explore there. There’s a lot to explore there in the next few years. And this is one of the things I’ll be focused on over the coming years is exactly how that actually works. Are there some frameworks we put in place for agents? Are there some new team shapes or patterns or language or concept? I dunno. We will see what comes out, but so far the fundamental teams ties, principles and shapes and so on seem to hold pretty well.

Chris Ruch:

And it makes sense, right? Because when you think about artificial intelligence, you want your artificial intelligence to behave the way people behave and you want it to follow some of those same rules and conventions and those sorts of things. And I think it’s also well known that the way your teams are structured influences the way the work is done, which influences the way your systems are designed. And so if we want well architected systems, well architected software products being created, then we need to have well architected teams. And in an AI world, that means we have to have well architected agents that are organised and behaving in ways that kind of make sense or else they’re not going to be able to make a product that makes sense. I think it all logically ties together and it’s kind of interesting to see the idea of AI architecture starting to come together, married to Team Topologies.

Matthew Skelton:

Absolutely. I think it’s a really interesting space to be involved in the combination of these two things. So let’s head towards wrapping up. It looks like towards the end of 2025 when we’re talking, the route to success with AI transformation is to get the fundamentals right. It’s about trust boundaries, it’s about empower teams, it’s about good technical practises, good nimble governance that accelerates value flow. To be honest, that stuff all sounds like the kind of stuff that, well, you and I you in particular have been doing for quite a long time.

It’s nothing particularly to do withAI, right? It’s foundational agility for the organisation. I think what Team Topologies is bringing is this infrastructure for agency as Stuart called it, a set of rules and principles that help us think through that more intentional design of the work, whether it’s humans or whether it’s AI agents or some sort of combination of both. And we’ve got that language and those visual patterns that the Team Topologies might modelling shapes and the technique and things, which is like a language to help us reason and think through what our options are in this. It seems to be very, very useful for people.

Chris Ruch:

Yeah, I think the good news is that the things that are necessary to be successful going forward in this new AI world are the same kind of basic foundations that we’ve been talking about. The bad news is that we’ve been talking about those foundations for decades and there lots of organisations still haven’t kind of gotten it right. And so I think the opportunity that’s presented by organisations saying we’re about to make really big investments in AI because that’s what it comes. It’s not just this kind of new idea and this new thing. There’s lots and lots and lots of money being spent on AI initiatives and leaders want those initiatives to be successful. It’s defining what’s this company going to look like for the next decade. So you want to get that right. And often what I see is when organisations have tried to do agile transformations or implement other kind of new ways of working, they often didn’t change the fundamentals and they just applied these new ways of working.

Hey, go off and do daily Scrums and do PI planning and work in sprints and magically everything’s going to be better. And it wasn’t often because way the teams were organised, the hierarchy and the organisation the way work flowed was not right. And I think the opportunity that we have now is to bring that focus back to before you go off and try these big expensive transformational initiatives, make sure your team organisation make sure your Team Topologies are right and you’ve optimised your system and then go or else you’re just going to repeat those same challenges of the past.

Matthew Skelton:

I think the thing I’d add to that would be leaders really need to work out how they get to trust groups of humans and then later groups of AI agents, if that’s the transformation kind of outcome that you want, things are faster, more nimble, greater access to markets, blah, blah, blah. The work that you need to do is like yourself as a leader, make sure you are going to transform personally yourself. There’s going to be some new ways of working, some new things you’re going to have to do in particular is work how you empower groups of people to do things. And that actually is a substantial chunk of work for the individual leader and for an entire board or entire exec team to transform themselves into having the trust to be able to do that. And that’s moving away from traditional very command and control hierarchical stuff into newer models where we’re devolving responsibility and so on and so on. And the kind of nimble governance that comes along with it. That is an entire webinar topic in itself, which maybe we can talk about another time, but

Chris Ruch:

Leadership starts at the top and leaders have to drive that change. And I think one of the things that the misapplication of coaching in agile transformations was a hyper focus on the teams. Let’s teach the teams how to be agile and it’s much more impactful to have coaching at the leadership level, having the leaders change the way that they lead, the way that they manage, which will then influence the teams rather than the other way around. And even if you’re talking about going and applying Team Topologies across an organisation trying to drive flow through the organisation, adopting Agile across an enterprise, that’s even more true.

Matthew Skelton:

Yep. Super. Where can people find out more? Chris? The Agile Rising website.

Chris Ruch:

Yep. Go to the Agile Rising website, agilerising.com/ai and we have more information about the topics that we were talking about here today. And then I think they can also go to the Team Topologies website. Of course.

Matthew Skelton:

Yes. We’ve got lots of options now around enterprise transformation. If you go to team.topologies.com/scale, you’ll find all of the options there that are designed to help large organisations adopt teen in particular, but these kind of techniques more generally through training, learning a book purchase so you can buy like a thousand copies of the teen parties book and have ’em delivered worldwide free shipping and so on and so on. And work with our teen topologies partners in our partner network of Worldwide two. So we are setting things up for substantial positive outcomes for organisations that are moving in this direction and looking to use teams Topologies, it’s worth saying Agile Rising is one of the core Team Topologies solution partners, so it’s great to be working with you on this, Chris, thank you very much. But at Teams bodies, we’re on a mission and a genuine mission to make work more effective and more humane for everyone using team sport principles. That’s what drives us. And if you are considering or already embarked on some sort of AI transformation, we genuinely think that Team Topologies is a key part of that approach. So get in touch and look forward to working with you. Chris, thank you so much. Thank you for joining me.

Thank you, Matt. It was a wonderful conversation. That’s good. Let’s jump into the Q and A. Hi, it’s a good chat. Thank you. And there’s some really great questions in here. We’ve been trying to put a few responses in the chat as we’ve been going, but I’d like to start with this one here from Amanda. Now this is on the Streamy yard, so I’ll just share the question here. Amanda, on Streamy Yard was saying there may be ways to train AI agents to the context of a specific team or art, at least in code you might use an MCP or something like that. So it was an interesting point, and I guess because what comes to mind here, I’m sure this is already happening and you might have seen some, but what comes to mind, mind is because the team types in inter bodies are so clearly defined with clear expectations around their purpose and mission and behaviours in particular, it’s almost like each team almost has a different kind of style or personality almost. And that’s clearly one of the things that you can actually start to, you can give to transformer based AI and say, adopt this role.

It’s a really good point. I’ve not actually looked for it yet, but I’m sure this stuff, this kind of thing is happening and you could give an AI agent a particular, say, act like an enabling team. What kind of information would you provide to, as a member of an enabling team, how would you go and help a streamline team to adopt a new something or change how they do testing or whatever it is something like this? Have you seen something in that space? Is that stuff you’re working on or have you heard about it? What’s that look like?

Chris Ruch:

Yeah, yeah, we’re absolutely doing that and we have customers who are doing it and it takes a couple different forms. I think kind of thinking about that as what kind of team are, we are doing it a lot as part of our role-based coaching as a product manager or as a product owner. AI can be very useful in helping a backlog and prepare user stories and help ideate and do research around customer needs. That’s very different than what a scrum master might be using in order to facilitate things. It’s very different than what developers or testers are going to be using for their kind of work. So you have that kind of role-based use of AI and agents that are helping specific roles. It’s very logical to extend that to specific kinds of teams. But the other is what’s the industry? What’s the nature of work that they’re doing? So kind of very common to be using agents or even in chat GPT, you can create projects that bring together a group of people and it’s learning from that context of that particular team and can help generate more targeted responses based on the problems trying to be solved, the industry and those sorts of things. So 100%, that’s how people should be thinking about using these agents to target in on the specific workspace that they’re in.

Matthew Skelton:

I would developing an agentic or AI based tool to help kind of operationalize some of the team’s bodies ideas. I’m not doing that right now by the way, but it sounds quite fun to do it. The kind of thing I’d be thinking through would be, okay, so we’ve got this idea of collaboration mode versus something as a service, the two different interaction modes there, or two of the three interaction modes. And if you’ve got a platform providing services out, typically you’d want to start with collaboration mode to discover something and then you can offer something as a service. Now if let’s say you plug into Slack or Teams or some other chat interface and you’re able to, or perhaps into Jira or some other texting tool and something like an AI enabled tool can detect the fact that there’s lots of back and forth negotiation and confusion and discussion around how this service works, for example, then that agent could prompt the teams involved to say, Hey, it looks like you’re having a lot of conversation around this. It should be really easy to come consume this thing as a service, but it looks like that’s no longer the case.

Matthew Skelton:

Recommend you switch into collaboration mode like a team bodies collaboration mode, like deliberately carve out the time for the next two or three days, just go and thrash that, thrash out the detail, work out how this service should now work, switch into collaboration mode with all of the associated expectations about behaviour. And then once you’ve discovered how that service should now work, switch back to as a service mode, or once a service has been provided or updated, you can switch back to just providing consuming. I can definitely imagine. I’m sure it’s already been worked on tooling that embodies the team support principles like that, but just prompts and makes it easier for people to just get started like that rather than having to wait for a human to notice it. Actually, if lots of the communication is going via digital tooling, maybe an AI agent can prompt that. So there’s a free business idea for anyone who wants to implement the Teach Body principles in Agen approach like that. But I’m sure that will emerge over the next few years without a doubt. I’m sure that’s going to be a thing. So Amanda, thank you so much. That was a cool question. I think let’s have a look. Did you have one to zoom into, Chris,

Chris Ruch:

I think Juan just put one in. Do you think that the industry is shifting from specific framework implementation like safe to a tailored set of practises based on frameworks and practises of AI? There’s always been a segment of organisations and leaders who are comforted by adopting frameworks, whether that’s Scrum or whether it’s Safe or Less or Nexus or others. There are others that want to innovate and find their own way of doing things. Certainly over my career and with Agile Rising, we’ve been involved in lots of big scale safe transformations. There’s still organisations that are interested in that and still adopting that and using that as kind of a platform to also drive AI adoption. But we also have organisations out there that adopted Agile and Scrum more than 20 years ago and they adopted Safe 15 years ago. And maybe looking at kind of like, Hey, we’ve got the benefit from that. Where are we going to get the benefit in the future? What’s the next kind of thing? And that’s where they get kind of interested in DevOps is the next thing or product operating model. And I think there’s always going to be that place. But the reason why Team Topologies resonates so well with so many organisations is it’s kind of recognising that regardless of the framework or the best practises that you’re using, how you’re organised and how people interact with each other is fundamental.

And it doesn’t really matter what the other practises that you’re using are if you don’t have that organisation and interaction and collaboration happening. None of the kind of specific ceremonies is going to work. So don’t, there’s always going to be interest in frameworks and specific operating models, but I think everybody who has gone through long or big experience with those frameworks, is there a starting point? A framework by its very nature is kind of the skeleton, but it’s not going to solve all your problems and it’s not going to do everything that you need. So yeah, the industry is evolving, but that’s agile, right? We’re always evolving, we’re always learning new things and we’re always moving on to what the next thing is.

Matthew Skelton:

I think I see something similar, and there’s obviously really good success stories with some of these frameworks and there’s some situations where they don’t seem to add very much at all. The thing that I’ve seen, which is actually separate from teen bodies, really teen bodies feels to me like the architecture for value flow, if you like, or the dynamics for value flow, something like that. But one of the things that we sort of assumed when we were writing the Team Topologies books and me and my co-author Emmanuel po, one of the things we assumed, we assumed slightly naively maybe was that loads of organisations have lots of cross organisational learning, like lots of communities of practise and sharing and internal conferences and a whole bunch of these things. It turns out most organisations have nothing like that. So you might have a framework for delivery, but what you need an approach to actively diffusing knowledge across the organisation learning very, very quickly and all you write incentives in place to make that happen.

So I mean, as it happens right now, I’m busy writing and co-authoring a book on exactly this because it’s another dimension for success, which is often missing. So I think your point is important, but it’s not about whether we should use a framework or not. It’s like whatever the delivery approach is that we’ve got in place, whether it’s an official framework like Safe or whatever, or it’s something that’s more based on principles like teams, bodies, there’s something if you just look at that stuff, there’s something missing. You need to have intentional learning across the organisation, which is almost never part of those kinds of frameworks by design because the frameworks are focused on delivery. So there’s a whole separate dimension which is often missing inside organisations. I think that’s something that’s really important to acknowledge. Most of these kinds of frameworks here don’t deal with the intentional sharing and innovation and knowledge diffusion and things which are really needed for success.

Okay, we are heading towards the top of the hour. I think it’s probably time to wrap. I’m just going to share one last point from Al. I think we won’t get a chance to talk about it too much, but so now you’re saying balancing the hub, the Central AI Governance Forum and the Spoke, which are ubiquitous use cases and agents in organisations. So balancing the exponential speed and scale of AI experiments in the enterprise with robust adoption and guardrails. That whole dynamic about central strategic intent with actual devolved or decentralised execution. I mean, that’s the really hard thing that lots of organisations really struggle with in general, right? In general. And working out how to get a dynamic like that not just for AI adoption, but to be honest, cloud adoption, a bunch of other things. Whatever comes afterAI, you’ll need the same dynamic in place.

You can’t centralise everything and you can’t evolve everything. It’s a dynamic between the two. And I guess that’s what Team Topologies is helping with, thinking through these different patterns and the fact that the entire third of the Part three of the Team Topologies book is all about dynamic patterns for continuous evolution. I guess Team Topologies helps with that. It’s not the only thing that you’re going to need, but it certainly helps with navigating that balance if you like. Not even balance. It’s a dynamic between centralised, some things are centralised and some things are distributed all the time. And we are coming out from bringing in awareness from the distributed aspects back into the centre and then back out again. There’s information flowing continuously from the centre out and from the outside into the centre, which I think breaks people’s brains a bit because it’s not simple.

Chris Ruch:

Well, and I think this is a great example of one of those things that the problem has been there forever and organisations have struggled with how do we do this centralised governance, but allow innovation and freedom in those areas. And most have not come up with that. And it is necessary to get that right in order to get the full benefit of AI because every tool is building AI into it right now. So we are working with customers that Salesforce is rolling out AI agents, ServiceNow is rolling out AI agents, HubSpot’s rolling out AI agents, and they already have with their existing tools, half a dozen different sources for genic AI that’s been introduced into their environment already. And there’s just going to be more. So this is one of those things where AI has to have a central governance model to make sure that you are following corporate policy and law and you have to be able to embrace those AI agents that are coming from everywhere.

Chris Ruch:

I think that’s where we’re going to wrap it up.

Matthew Skelton:

We’re going to wrap it, but I think we’re going to follow up on that one. I’ve got a feeling we’re going to follow up around governance and governance like in 2026. So thank you even for joining. Thanks for all the comments. Really appreciate it. Here is how to get in contact with Chris on his crew agile horizon com, and here’s how to get in contact with Team Topologies, team topologies.com/scale, and we look forward to seeing you in our communities and seeing you online and following upon the discussions. Thanks everyone.

Chris Ruch:

Thank you, Matthew. Thank you to everybody for attending.

Matthew Skelton:

Great. Thank you.