Detailed Notes
In this episode of Building Better Developers with AI, Rob Broadhead and Michael Meloche revisit their earlier conversation on defining “done” in Agile. They break down why “done” can’t just mean “I finished coding,” and how a clear, enforceable Definition of Done (DoD) prevents scope creep, reduces rework, and keeps projects on track.
You’ll learn:
✅ What “done” really means in Agile
✅ How ambiguity derails projects and creates scope creep
✅ Real-world examples from Michael’s career—before and after a clear DoD
✅ The essential components of a strong Definition of Done
✅ How to implement and enforce your DoD for better delivery
Your Challenge: If your team hasn’t reviewed its Definition of Done in the last 3 months, set aside time this week to review, refine, and commit to it. The clarity you gain could save you weeks of rework.
📌 Listen to the full episode here: https://develpreneur.com/definition-of-done-in-agile/
📌 Visit the website: https://develpreneur.com/
Transcript Text
[Music] Oh, by the way, hello everybody. We're recording our way through here. This episode we are going to do defining done in Agile: how to stay on track and avoid scope creep. This will be a fun one because it is really a follow-up to that last episode on scope creep, and now let's figure out how to avoid some of that scope creep. And I'm going to stick with my Spanish, as sucky as it may be. Hola. Hello and welcome back to the Building Better Developers podcast. I am Rob Broadhead, one of the founders of Develpreneur, and also a founder of RB Consulting. More about that in a second. First I want to talk about this season, this series, this episode. We are in the season Building Better Developers with AI. We're going back two seasons ago, I think it is, grabbing a topic, throwing it into AI, and asking what it would suggest for a podcast, and then we're basically analyzing that, and it's been giving us some great things to talk about. So that's what we're looking at this episode.
Our title for this one is going to be Defining Done in Agile: How to Stay on Track and Avoid Scope Creep. Now, back to RB Consulting. We are a company that helps others figure out the best way to use technology. Just like you can do a financial audit or a security audit, you can also do a technical assessment, which is very similar: a technical audit, things like that. We're going to sit down and help you figure out what you have and what your current situation is, and we're also going to talk about your business, because that's really the most important part of using technology: how to leverage it to do what you do. We're going to help you walk through your processes. What is it that you do, in detail? Sometimes we get too much in our own heads. Think about how you would explain to somebody how to tie a shoe. There are probably business things you do along that same line, where you just know how to do it, but explaining it to somebody else, which means explaining it to a computer or to technology, can be a bit of a challenge. So we're going to help you bridge that gap. We're going to help you understand what's out there, because there's a lot out there. We are technology agnostic, so we'll find ways to help you take your technology junk drawer and clean it up. Through integration, simplification, automation, and innovation, we're going to find the best approach for you, that custom recipe for success, so you have a roadmap you can execute on, or we can help you with that as well. Good thing, bad thing. This is going to be one of the goofiest ones we've had, maybe, out of a long list. Good thing today: I was sitting there eating lunch, something got stuck between my teeth, and I thought, okay, I've got to go get that thing out.
And it came free. The bad thing was, when it came free, part of my tooth came free too. So I had a cracked tooth that had somehow lost its strength. Not in a painful way; nothing is painful yet. I can drink hot and cold liquids without my head exploding, but it's enough that I'm going to have to go find a dentist very quickly and get all of that repaired. Sometimes the simple things turn into not-so-simple things. Sort of the story of my life right now, much like Michael's, which he has regaled us with in recent episodes. Let's see how it's going this time as we check in with Michael and he introduces himself. >> Hey everyone, my name is Michael Meloche. I'm one of the co-founders of Develpreneur and Building Better Developers. I'm also the founder and owner of Envision QA, where we help startups and growing companies build better software faster with fewer problems. Our services cover software development, quality assurance, test automation, and release support. Companies come to us when they want to avoid delays, reduce bugs, and launch with confidence. Whether you're building your first MVP or scaling a live project, we make sure your software is reliable, efficient, and ready for growth. You can learn more at envisionqa.com. Let's see, good thing, bad thing. Last time I talked about the water issue; that's been resolved. So, good thing: we now get to enjoy the new toilets we had installed a month ago. Now that the water is working again, we can finally enjoy all the upgrades we did in the house, which we weren't able to do last time because we had no water. As far as bad things go, I've got a project that's dragging out and dragging me down a little bit. But the weather's getting nice, so I'm not going to let it get me down. >> Yes, the weather has definitely been getting nicer.
It's been awesome enough that I've actually had the windows open a couple of mornings without dying of heat exhaustion, so that's always good. So, we're going to dive right in. This time I followed up from a prior post, so it didn't give me any grand new idea. I just said, "Hey, how about doing this?" And it said, "Absolutely, here's a detailed breakdown," and gave us the same kind of thing we've had in the past: a suggested episode structure, and item one with some bullet points. We'll dive right in. What does done really mean in Agile? Explain the Agile principle of a Definition of Done. Contrast it with "just finished coding." Why clear done criteria are critical for teams. I really want to jump to the end there, why clear done criteria are critical for teams, because this is one of those things where, when we start a project and we say one of the first things we need to do is define what done is, people look at us like we've got three heads. The thing about done is that there are varying understandings of what done means in a software project in particular. Does done mean that you just wrote some code? Does it mean that you wrote unit tests with that code? Does it mean it has gone through full QA? Does it mean that it's been deployed? Does it mean that the user is using it? There are a lot of different ways you can look at done. And within a development project, done may include other things: besides unit tests, has the code been properly commented or documented? Has it been committed to version control? Has it been merged into a branch, or something of that nature? Has the ticket that originated the task been moved through its process and marked complete so that it is done?
Has it been signed off on? There are things like that that are very much part of your development process and standards, your team's or even your corporate process and standards, that need to be taken into consideration when you consider what done is. In some places, done may mean it has to actually go through a code review and a security analysis review and all of these other things that are way more than done in the "hey, I wrote the code, I tried it on my local machine, and it works" sense. And I'm using air quotes everywhere here, for those who can't see them, because that's sort of how it is. What really is done? We need to make sure we define that, because it is the target for whatever we're doing. If we ask somebody, "Is it done?", we shouldn't get "well, sort of," or "yeah, but," or "it's kind of." You need to know: is it done, or is it not? That's going to be a key part of scope creep and estimation and things like that. So where do you want to pick this one up? >> So, you touched on a lot of things. I'm going to go with contrasting it with just finished coding, because one of my biggest pet peeves is when a developer does a lot of work, says they're done, and pushes the code up, and then it gets to testing. The tester reads the ticket and wonders: what is done? What did you do? It's not clear in the requirements what the developer was supposed to do. So when you're working on the requirements, the definition of done needs to be clear for everyone who reads the ticket, because if you're working on the ticket, working on this change, you want to make sure the change is what the ticket actually implies.
There have been times where I have made mistakes: I read the ticket one way, someone else reads it another way, and what gets implemented is not what the definition of done required. You run into these situations when the requirements may be clear, but not clear enough to really define done. Case in point: I'll pick on the login screen, because a login screen is just about everywhere. You could have a situation where you're basically told, hey, set it up so a registered user can log in with username and password. Cool. I write the code; I can log in. Now it gets to the tester, who reads that as: okay, I can log in with username and password. It does not specify things like case sensitivity, special characters, things of that nature. So if they test the login the way typical login security has worked for a while, they're going to break things. They're going to wonder why it isn't working as expected. So you need to make sure that within the requirements, the definition of done spells out what done is. Done might be implied as "a user can log in using any username and any password," or, if there are other requirements, you need to lay them out: the username can only be lowercase, the username can be camel case, the username can be any case as long as it matches a user. These are not just requirements; these are essentially the story for testing, so that you know it's done. If someone picks this up, or a user goes to test it, they know specifically how to test it and see how it works. Now, if it's a backend change, that's a little more difficult; you're going to have to have another developer test it. But to me, from a test-driven development approach, that's what definition of done means. Because if I can essentially lay out how this works, then I can code it.
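The login example above can be turned into explicit, executable acceptance criteria. This is only a sketch: the function name, the user store, and the rule that usernames match case-insensitively while passwords are case-sensitive are all assumptions for illustration, not requirements from the episode.

```python
# Illustrative sketch: acceptance criteria for the login story, written
# as code so "done" is testable instead of implied.
# Assumed rules (hypothetical): usernames match case-insensitively,
# passwords must match exactly.

USERS = {"alice": "S3cret!"}  # registered users (demo data)

def authenticate(username: str, password: str) -> bool:
    """Return True only when the username matches a registered user
    (case-insensitive) and the password matches exactly."""
    stored = USERS.get(username.lower())
    return stored is not None and stored == password

# Each acceptance criterion becomes one concrete, unambiguous check:
assert authenticate("alice", "S3cret!")      # registered user can log in
assert authenticate("Alice", "S3cret!")      # username case does not matter
assert not authenticate("alice", "s3cret!")  # password case does matter
assert not authenticate("bob", "S3cret!")    # unknown users are rejected
```

With checks like these attached to the ticket, the developer and the tester are reading the same definition of done rather than two different interpretations of one sentence.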
If there are ambiguities in what I need to do, then it is not a clear definition at all. This goes right into the next point: why ambiguous done leads to scope creep. When done means different things to different people, it leads to unfinished work, hidden bugs, or endless tweaking, and it creates mismatched expectations between dev, QA, and clients, which is really what we just talked about. It's that back and forth. I'm going to go right into the next one, since Michael sort of stole this one, and let him talk about the next couple of items; I'll just touch on this quickly to give my thoughts. The problem is that it becomes very frustrating when you don't know what done is, and it really does involve the developers, QA, and the customer. You will have stuff that, for example, goes to QA, and to them it's not done; it hasn't covered the requirements they think it needs to. So they kick it back to the developers, and they wonder why the developer isn't getting the work done. The developer wonders why QA is on their butt all the time. Why do they keep changing stuff? Why can't they just accept it? And of course the same thing happens with the customer. It goes all the way to the customer, and the customer says, this isn't what I wanted; this isn't how I needed it. Things get missed, work goes back, and people get frustrated. So it does lead to scope creep, and the scope creep tends to be that people start expanding what they want to talk about, or adding to the requirements, to try to make sure the work actually gets done. It's almost shoot for the stars so that if you fail you hit the moon. It's just a bad situation to be in. Real-world examples: stories from teams where unclear done led to delays or rework, and how a strong Definition of Done saved another team from project chaos.
I'm going to throw that one to you. >> Yeah, I'll run with this one, because the company I've worked for went through a transition over the last year. We were acquired by another company. Before we were acquired, we had clear requirements. We knew what needed to be done. Everything we had, we had a definition of done for. Our tickets were being completed on time, and we met expectations. Yes, there was occasionally some rework, because, like Rob said, when you deal with reports, you run into "oh, that's a simple change." But outside of reporting, almost everything we did was completed on time and on task, and we knew what we were doing and could test it. In the transition to the new company, as we were pulled in, almost every ticket I have had feels like a monolithic spike. Every single ticket is ambiguous. It is basically: make this work inside this ecosystem, hell or high water, just make it work. The problem is that this is such a monolithic application that you have no idea where to go within it. There are multiple teams working on this project, and unfortunately, even though we are in the process of transitioning into this new ecosystem, we're still making changes in the old ecosystem. So you could get one piece working, then go back and pull the latest change, and it's like, wait, you just redid this, or, oh, you changed this and now it doesn't work. It is so frustrating that having clear guidelines and a definition of done really avoids that, and can hopefully get you across projects and meeting your deadlines. >> Excellent. Good examples. And I'm going to dive into the next one, because we're going to try to get through a couple of these points this time.
Components of a good Definition of Done: code complete and reviewed, automated tests passing, documentation updated, deployment to staging or production verified, acceptance criteria met and signed off. I think that's a really good start, and I want to touch on each of these real quick. Code complete and reviewed is something I think we should do on a regular basis. There is very much a value to reviewing code. I have worked on projects ranging from very strong code reviews all the way to not doing them at all, and honestly, the stronger the better. Yes, it takes time and effort, and it can be frustrating because you get something kicked back to you: hey, you need to make this conform. But it does pay off in the long run. And this is from somebody who has been frustrated with a code review more than a few times, especially the static code analysis stuff I do all the time. I'll get frustrated with something; it gets kicked back and says you should do this, and I think, I don't really want to do it. There's always that temptation, and sometimes I fall for it, to just say, you know what, I'm going to pass it anyway and move on. But there is also value in doing those reviews. Automated tests passing: I've been on projects where it's like, okay, I'm creating tests for everything, and I've been in situations where it's like, all right, I'm going to whip a couple of tests out, we'll test it, and we'll move on. Yes, going through and writing those tests can be time-consuming, but getting those automated tests built will help you in the long run. And yes, sometimes they fail because requirements change, but that also gives you extra leverage to not change stuff, to say, look, here's what happens if we have to change this. And I have used this before.
If we have to change it, the change itself is not that big a deal, but we have to retest all of this stuff, or we have to update all of these tests, and then suddenly that thing that was "just a little change," in air quotes, actually is not a small impact, and we have to think about that. You could say, well, just skip the testing. But wait: anywhere this is tested, if one of those tests fails, we would have to go find the problem anyway. So you're going to have to keep doing it. Documentation updated: we skip this all the time. I know everybody does, but it really should be built into our processes. Part of done is that wherever we need to update documentation, we do. The deployment piece is getting better with CI/CD, pipelines, and those kinds of things, but I think we still don't do it enough. It's very good to deploy and run everything through its tests on the new site and make sure it all works. And of course, actually done means it's been signed off on. We probably have a done during a sprint, or done for a certain step, but that is not done for the feature, because it's not done until we can go all the way through and somebody can actually use it. Thoughts on those? >> Yeah, I want to briefly touch on that, then go right into the next one. One of the things Rob touched on with automated testing: going back and fixing those tests. Make sure you don't let your tests get stale, and don't just delete tests that are failing. In a lot of situations, when developers are rushing to get to the end, I've seen them stop maintaining tests. They modify the test just enough to make it pass, without really meeting the requirement the test is verifying. So make sure you keep your tests fresh with the requirements as they change.
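The component list above is easiest to enforce when it is an explicit checklist rather than a shared understanding. Here is a minimal sketch of that idea; the item names and the ticket structure are illustrative assumptions, not taken from any particular tool or team.

```python
# Illustrative sketch: a Definition of Done as an explicit checklist
# that a ticket must fully satisfy before it counts as done.
# Item names are hypothetical examples, not a prescribed standard.

DEFINITION_OF_DONE = [
    "code_reviewed",
    "automated_tests_passing",
    "documentation_updated",
    "deployed_to_staging",
    "acceptance_signed_off",
]

def is_done(ticket: dict) -> bool:
    """A ticket is done only when every DoD item is checked off."""
    return all(ticket.get(item, False) for item in DEFINITION_OF_DONE)

def missing_items(ticket: dict) -> list:
    """List what still blocks the ticket, so 'sort of done' has a name."""
    return [item for item in DEFINITION_OF_DONE if not ticket.get(item, False)]

ticket = {"code_reviewed": True, "automated_tests_passing": True}
assert not is_done(ticket)
assert missing_items(ticket) == [
    "documentation_updated", "deployed_to_staging", "acceptance_signed_off",
]
```

The point of a gate like this is that "is it done?" gets a yes or no answer, and when the answer is no, the blockers are listed rather than argued about.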
I'm just going to jump into five: who creates and maintains the Definition of Done? Project owners, scrum masters, and the dev teams collaborate, and the DoD evolves as the project matures. I'm going to take that first one: who creates and maintains the definition of done? The team: your project owners, the scrum masters, the dev teams. If you are working as developers, chances are your team needs to sit down at least quarterly and agree on what it wants in its definition of done. Everyone should be on the same page so there is no ambiguity, no confusion. When you scope out tickets, you flesh out the requirements, so that when you pick a ticket and commit to getting it done in a certain amount of time, you actually get it done in that amount of time. This does require working with the project owners and the scrum masters. At the beginning, it's going to be difficult, but in the long run it will save you a lot of time, headache, and hassle. What are your thoughts, Rob? >> Yeah, I think that's the whole point: if you have problems with it early on, if the scrum master, the product owner, or even the dev team doesn't have a good definition of done, that should show up in your retrospective. That should be something that gets flagged and corrected as you move forward, because part of the whole idea is that agile should be getting better as you go. And honestly, part of the reason I know it's important to define done is that this has come up in sprints as we've gone through an agile project, where we've reached a point and said, you know what, we need to do a better job of done. Maybe we need to add something, change something, tweak something. We've gotten away from one of our steps and now we're not doing it right, so let's go back to it.
The code review process: there have been more than a few times where we've needed to adjust it. Bring more people in, bring fewer people in, provide a different format of feedback, less feedback, more feedback, smaller chunks of work so they're easier to review. There's a lot of that kind of thing that goes on. We're cruising right along. So, how to implement the Definition of Done in your workflow: incorporate it into user stories and sprint planning, use checklists or tools like Jira, GitHub, and Notion, and make the Definition of Done visible and agreed upon by all stakeholders. Really, once you've defined done, you should document it. It should be in your team documentation, your development processes, your project processes: this is what done looks like; these are the steps, the bullet points that have to be completed in order for us to actually be done. And then, especially if you're using Jira or one of those kinds of tools, Trello or whatever it is, when a ticket goes into the done column, we know all of those things have actually been completed. It's not a bad idea, in some of those tools, to make the columns, the swim lanes you move your ticket through, be the very things that define done. Maybe a ticket starts out, then it's being coded, then once coding's done it goes to unit testing, then to QA review, then to code review, and not necessarily in that order, but in your swim lanes you can document all of the things that need to be done, and the ticket should move through them. And then you can even have things around that.
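A board workflow like the one just described can be sketched as a small state machine that refuses any move skipping a step. The lane names and the allowed transitions below are illustrative assumptions, not the configuration of any real board.

```python
# Illustrative sketch: swim lanes as a state machine, so a ticket
# cannot jump to Done without passing through each DoD step.
# Lane names and the transition map are hypothetical examples.

ALLOWED_TRANSITIONS = {
    "To Do": {"In Progress"},
    "In Progress": {"Unit Testing"},
    "Unit Testing": {"QA Review", "In Progress"},  # work can bounce back
    "QA Review": {"Code Review", "In Progress"},
    "Code Review": {"Done", "In Progress"},
    "Done": set(),                                 # done means done
}

def move(current: str, target: str) -> str:
    """Move a ticket between lanes, rejecting shortcuts."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current!r} to {target!r}")
    return target

lane = move("To Do", "In Progress")
lane = move(lane, "Unit Testing")
try:
    move(lane, "Done")  # skipping QA and code review is refused
except ValueError:
    lane = move(lane, "QA Review")
assert lane == "QA Review"
```

Tools like Jira support this kind of restriction through workflow configuration; the sketch just shows the underlying idea of transitions encoding the Definition of Done.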
You can have logic that says a ticket can only go from this swim lane to this swim lane, and only this person can move it between them, things like that that can really help you be more efficient with your definition of done and how you move your tasks through it. Thoughts on that one? >> So, the last thing I'll touch on here: holding yourself and your team accountable is one of the best ways to implement the definition of done in your workflow. It should really be a personal practice, because a lot of teams in some companies don't even do this, which is bad. Personally, if you want to be a good developer, to go from coding to becoming a developer, to keep growing and improving and being the best developer you can be, you need to hold yourself accountable and look at every task you work on with the mindset of: what is the definition of done? What am I trying to complete with this, and how does it fit into not just what I'm doing but the bigger picture? Because sometimes you're told, hey, build this, but in the bigger scope of things, that's not what's needed; it's actually something else that got lost in perspective. The best example I can think of is that tree swing picture that's all over the internet for software development. It starts with what was pitched: a tree swing. You go through multiple iterations, roller coasters, tree, no tree, and all the customer really wanted was a rope and a tire. They wanted a tire swing. Defining your definition of done helps you avoid scope creep, but it also helps you ensure that the requirements stay on task and get you the right product at the end of the development cycle. >> Yeah, we talk a lot about knowing your why. Your definition of done is your why for each individual task, basically. It really is the guard rails for your work, making sure you stay on task and stay focused. That being said, it is time to wrap this one up. As always, shoot us an email at info@develpreneur.com if you've got suggestions for topic ideas or product ideas; any of those things, we'll be happy to hear from you. We want your feedback, because we're here for you: so we can build better developers, and so you can help us build a better podcast by letting us know what your thoughts are and where you want to go, future topics, areas of interest, things like that. I know there are some things we haven't spent a lot of time on, so we can always go back to those. You can also check us out on X, and you can go to develpreneur.com, where we have plenty of places for you to leave us feedback. We've got a contact form; you can leave your thoughts there. We have a Develpreneur Facebook page; you can definitely post there. However you want to get hold of us, we're happy to get that feedback and incorporate it into building a better solution, a better bit of content for you. You can always check us out on YouTube on the Develpreneur channel. Also, if for some reason you're tired of seeing our faces and you just want audio, wherever you listen to podcasts you can find the Building Better Developers podcast. As always, I appreciate your time and appreciate you hanging out with us for a while. I appreciate you putting up with my very lame introduction in Spanish; I'll try to clean that up. Go out there and have yourself a great day, a great week, and we will talk to you next time. Bonus material. Let's see, we are, I guess, at seven: the DoD is a weapon against scope creep. It keeps features from expanding endlessly.
It forces conversations on what's really needed and provides objective criteria for complete. And then, tips for developers to advocate for a clear Definition of Done: push for DoD clarity early and often, use it to manage client expectations, and learn to say, let's meet the definition of done first, then consider enhancements. So where do you want to go with that? >> I think seven and eight kind of go together: forcing the conversation on what is really needed, and learning to say let's meet the DoD first. This gets back to that why. Why are we building this project? What is the need? What is the MVP? If you focus on the MVP, and on the requirements around getting it done and the why, then focusing on the definition of done for that should be a no-brainer. It should be fairly straightforward to stick to: what is our why, and what is the MVP we need to reach by the end of the project? If you start getting features that are not MVP, then you are off track. You need to stick to that MVP and focus on the definition of done to get the MVP done. Once the MVP is done, then maybe you can come back and look at other features. But that is a different requirement set and a different definition of done. >> Yeah, you sort of stole my thunder, but I want to go with that one as well, which is basically: let's finish the work first, and then add to it. That is very often where we need to go, and it protects us as developers as well. So when somebody comes at us and says, "Okay, what about doing this?", we can go back to the ticket and say, "Okay, is that in the ticket? Did that get done?" And it's like, "Okay, so we've done this and this and this." And when they say, "Well, hey, we want to make a change," it's, "Well, okay, let's finish this first.
Let's not go back and change it, because then we'll have to roll a step back; we've done these things based on that. Let's try to get it finished." Now, it is a little bit of a mini-waterfall approach, because part of the thing with waterfall is that once you say you're going to do something, you just do it, and anything else gets fixed in the next version. That's sort of what we're talking about here, but it really is: let's make sure we get done what we need to get done, and then we'll worry about tweaks and enhancements and things like that. And when you have a defined process, it's going to help you, because you'll be able to say, "This is where we're at; this is how long it typically takes us to get to done." So you can sit back and wait for x amount of time, then we'll be done, and then we can move on to the next thing. We can actually build on what's there, or make adjustments to what's already been done, as opposed to changing it midstream. >> Now, to be fair, it can be waterfall-like, but sometimes it's about sticking to the MVP. Yes, you can make some changes within the MVP, but make sure it still sticks to the MVP. If you stick to that, then you can still stay within the agile methodology, and it really applies to agile because you can pivot. Okay, the MVP slightly changed because this feature isn't quite fleshed out; you will run into that. But as Rob mentioned, sometimes it is waterfall, because you do have to stick to what you're trying to get done first before you can pivot outside of the MVP. >> Let me give an example on that: if you are walking and you try to pivot and you haven't planted your foot, you're going to plant your butt, basically, or your face. You pivot when you have settled, essentially. You have to actually have a stable position to pivot from.
If you pivot while you're in the middle of working on something, you are likely to fall on your butt, just as you would in real life. I've been in too many situations where pivoting keeps happening, something doesn't get finished, you pivot midstream, and the next thing you know you've got partial work everywhere. So use done to get yourself to a point where you're stable enough to pivot. Otherwise, things get out of hand really fast, and it gets really hard to keep track of what we're doing, where we're at, what is done, what is not, what's in motion, and what needs to not be in motion. So I think that very much is something that helps you say, "Yes, we'll be happy to pivot, but let us finish this thought first; let us get to a resting point we can pivot from," and not try to do it mid-stream. And this will help you with firefighting issues and a lot of the other disruptions that get thrown at a sprint: "Hey, change this," or "Throw this in." That's why we protect our sprints as scrum masters: you don't want to get into a situation where you're doing so much zigging and zagging and weaving and pivoting that the next thing you know, you really don't know what you have. You're sitting on mush instead of solid ground. I think it is time for me to move on from the solid ground that I'm on, which is the end of this episode. And we will return; we're not done. There's plenty more stuff out there, plenty of AI out there, and the AI is going to keep spitting topics at us, and we're going to keep talking about them until we run out of episodes from this season. We've got more than a few left, and then we'll figure out where we want to go next.
So, you know, we're always open for suggestions for future topics and future seasons, and now is the perfect time to send them, while we're starting to think about where we may go next. Shoot us an email, or send us feedback wherever you normally give us feedback, and we'll take a look at it. We're happy to take your suggestions and find a way to work them into what we are doing. I do appreciate so much you guys hanging out with us, and we will see you again next time. [Music]
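The "objective criteria for complete" idea from the episode can be made concrete as an explicit checklist. The snippet below is a minimal sketch, assuming a simple dictionary-shaped ticket and criteria drawn from the components discussed in the episode (code complete and reviewed, tests passing, docs updated, deployed, signed off); the names are hypothetical and not tied to any real tracker or API.

```python
# Hypothetical sketch: a Definition of Done modeled as an explicit checklist.
# The criteria names and ticket shape are illustrative assumptions.

DEFINITION_OF_DONE = [
    "code_complete",
    "code_reviewed",
    "tests_passing",
    "docs_updated",
    "deployed_to_staging",
    "signed_off",
]

def is_done(ticket: dict) -> bool:
    """A ticket counts as done only when every DoD criterion is satisfied."""
    return all(ticket.get(criterion, False) for criterion in DEFINITION_OF_DONE)

def missing_criteria(ticket: dict) -> list:
    """List the criteria that still block this ticket from being done."""
    return [c for c in DEFINITION_OF_DONE if not ticket.get(c, False)]
```

Wiring a check like this into a board or CI gate (for example, refusing to move a ticket into the Done column while `missing_criteria` is non-empty) is one way to make the DoD enforceable rather than aspirational, as the hosts suggest.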
Transcript Segments
[Music]
All right,
little water. And
>> yeah, sorry I tripped you up in that.
You went you talked so long and you
talked through so many of the bullet
points in the first one. I kind of lost
the thread. So by having you paste that,
that allowed me to kind of keep track of
where you were going.
>> Yeah, that was a great idea. I hadn't
really thought of that. Um, let's see.
Did I just do it for this?
>> Yeah, I need the next one.
>> Okay, let's see now. do it for
>> because I floundered on that first one
because it's like I I thought I had the
thread and then I lost the thread. I'm
like crap, what was the thread? So, I
just kind of waffled through it. There's
a I mean there's Yeah, this one is a
lot. So, I'm going to give
it make it. Let's see. Is it done? There
we go. Not main
take this and shove it into Oh, by the
way, hello everybody. Um, we're
recording our way through here. Uh, so
I'm going to paste for you in the
podcast ideas. I'm just going to paste
the next one here.
So this episode we are going to do
defining done in agile. How to stay on
track and avoid scope creep. So this
will be a fun one because it is really a
followup to that last one of scope
creep. And now let's figure out maybe
how to avoid some scope creep. And I'm
going to stick with my Spanish as sucky
as do uno it may be.
Hola. Hello and welcome back to building
better developers developer podcast. I
am Rob Broadhead, one of the founders of
developure, also a founder of RB
Consulting. More about that in a second.
First want to talk about this season,
this series, this episode. We are in the
season doing building better developers
with AI. We're going back two seasons
ago I think it is and we're grabbing a
topic throwing it in AI and saying what
would you suggest for a podcast and then
we're basically analyzing that and it's
giving us some great things to talk
about. So that's what we're looking at
this episode. Our title for this one is
going to be defining done in agile how
to stay on track and avoid scope creep.
Now uh back to RB Consulting. We are a
company that helps others figure out the
best way to use technology. That's the
best way to look at it. Just like you
can do a financial audit or security
audit, you can also do a technical
assessment, which is very similar to it.
You know, technical audit, things like
that. Well, we're going to sit down.
We're going to help you figure out what
do you have, what what is your current
situation, and we're also going to sit
down and talk about your business
because that's really the most important
part about using technology is how to
leverage technology to do what you do.
We're going to help you walk through
your processes. What is it that you do
like in detail? So, it's a, you know,
think about it like sometimes we get too
much in our head. Just like how would
you explain to somebody how to tie a
shoe? There's probably business things
that you do that are along that same
line where you just know how to do it,
but to explain to somebody else, which
means to explain it to a computer or
technology can be a bit of a challenge.
So, we're going to help you bridge that
gap. We're going to help you understand
what's out there because there's a lot
out there. We spent a lot of time. We
are technology agnostic and so we're
going to find ways to help you take your
technology drunk junk drawer and clean
it up and through integration,
simplification, automation, innovation,
we're going to find the best approach
for you that custom recipe for success
so you can have a road map that you can
execute on or we can help you with that
as well. Good thing, bad thing.
Uh this is going to be like one of the
goofiest ones we've had maybe so far out
of a long list. Um,
good thing today was I was sitting there
and I was eating lunch and I had
something like get stuck between my
teeth and I was like, "Okay, I got to go
like get that thing out." And it came
free. The bad thing was when that came
free, I also had part of my tooth came
free. So, I had like a cracked tooth
that somehow had lost its uh its
strength or whatever. So,
not in a painful way. There's nothing
painful yet. I can drink hot and cold
liquids. not causing my head to explode
or anything, but enough that I'm going
to have to go find a dentist very
quickly and get all that kind of stuff
repaired. So, you know, sometimes the
simple things turn into not very simple
things. Sort of the story of my life
right now. Much like Michael's, which he
has re regailed us with in recent
episodes. Let's see how it's going there
this time as we check in with Michael
and he introduces himself. Hey
>> everyone, my name is Michael Malash. I'm
one of the co-founders of Developer
Building Better Developers. I'm also the
founder and owner of Envision QA, where
we help startups and growing companies
build better software faster with fewer
problems. Our services cover software
development, quality assurance, test
automation, and release support.
Companies come to us when they want to
avoid delays, reduce bugs, and launch
with confidence. Whether you're building
your first MVP or scaling a live
project, we make sure that your uh
software is reliable, efficient, and
ready for growth. You can learn more at
envisionqa.com.
Uh, let's see. Good thing, bad thing.
So, last time I talked about the water
issue, so that's been resolved. Um, I
guess good thing we now get to enjoy the
new toilets we had installed a month
ago. Uh, now that the water is working
again, uh, we can finally enjoy all the
upgrades we kind of did in the house,
which we weren't able to do, uh, last
time because we had no water. Uh, and as
far as bad things go, I got a project
that's kind of dragging out and just
dragging me down a little bit. So, but
weather's getting nice, so I'm not gonna
let it get me down.
>> Yes, weather has definitely been getting
nicer. It's been awesome enough that
I've actually had the windows open on a
couple of mornings and not been like
dying of heat exhaustion. So, it's
always good. So, we're going to dive
right in. This time, I followed up from
a prior post. So, it didn't give me like
any, you know, excellent idea or
anything like that. It just uh I said,
"Hey, how about doing this?" And it
said, "Absolutely. Here's a detailed
breakdown." And it gives us the same
kind of thing that we've had in the
past. So, it's a suggested episode
structure and item one with some bullet
points. We'll dive right in. What does
done really mean in agile? Explain the
agile principle of a definition of done.
Do contrast it with just finished
coding. Why clear done criteria are
critical for teams.
I want to I really want to go with the
like jump to the end there. Why clear
done criteria are critical for teams
because
this is one of those things that when we
we sometimes when we start a project and
we say we need to make sure that one of
the first things we do is we define what
done is is that people look at us like
we've got three heads or something like
that. The thing about done is that there
are varying understandings of what done
in a software project in particularly
mean. Like does done mean that you just
wrote some code? Does it mean that you
wrote unit tests with that code? Does it
mean that you have done a full it's gone
through QA? Does it mean that it's been
deployed? Does it mean that the user is
using it? There's a lot of different
ways you can look at done. And within a
development project, there's also things
that done may include things like
uh has it been properly, you know,
besides unit test, has it been properly
commented or documented? Has it been
committed to version control? Has it
been merged into a branch or something
of those nature? Has the uh the ticket
that originally, you know, that
originated that task been moved through
its processes and moved to complete so
that it is done? Um has it been signed
off on? There are things like that that
are very much part of your uh your
development process and your standards
and your team or even your corporate
process and standards that need to be
taken under you know consideration when
you consider what done is some places
done may mean that it has to actually go
through uh like a code review and a
security analysis review and and all of
these other things that are way way more
than
done in the hey I wrote the code and I
tried it on my local machine and it
works. And I'm using air I'm using air
quotes everywhere here for those that
can't see it because that's sort of how
it is. It's like
what really is done and we need to make
sure we do that because that is the that
is the target for whatever we're doing.
So if we ask somebody is it done we're
not going to get well sort of or yeah
but it's not or it's kind of or any of
that. You need to you need to know is it
done or is it not because that's going
to be a key part of scope creep and
estimation and things like that. So
where do you want to pick this one up?
>> So yeah, so you kind of touch on a lot
of things. I'm going to go with
contrasting with just finished coding
because one of my biggest pet peeves is
you do all this work or a developer does
a lot of work and they say they're done,
they push the code up and then it gets
to testing and you go down and you sit
there and you read the the ticket and
you're like the tester's reading the
ticket and they're like what is done?
What did you do? you know, it's not
clear in the requirements what it is
that they were supposed to do. So, what
did you work on? So,
when you're working on the requirements,
the definition of done needs to be clear
for everyone that reads the ticket
because if you're working on the ticket,
um you're working on this change, you
want to make sure that the change is
what is implied in the ticket. There
have been times where I have made
mistakes where I read the ticket one
way, someone else reads the ticket
another way and what gets implemented is
not what was the requirement for the
definition of done. And you run into
these situations when the requirements
may be clear but may not be clear enough
to really define the definition to done.
Case in point, you could have I'll just
pick on login screen because login
screen is just about everywhere. You
could have a situation where I have a
login screen and it's you basically were
told, hey, set it up to where a
registered user can log in with username
and password. Cool. I write the code. I
can log in. Now, it gets to the tester
and they're going to read that as okay.
So, I can log in with username and
password. They it does not specify
things like um case sensitivity, uh
special characters, things of that
nature. So if they go to test a login as
typical login security which has been
around for a while they're going to
break things. They're going to think
well why is this not working as
expected? So they're you need to make
sure that within the requirements
definition of done is some of the things
of what is done. So done would be
implied user can log in using any
username any password or if there are
other requirements then you need to lay
that out that hey username can only be
lowercase username can be camel case
username could be any case as long as
the username matches a user. These are
not just requirements but these are what
needs to essentially be the story for
testing so that you know it's done. So
if someone picks this up or a user goes
to test this, they know specifically how
to test it to see how that works. Now,
if it's a backend change, that's a
little more difficult. You're going to
have to have another developer test
that. But this is to me from a
test-driven developer approach what
definition of done means to me. Because
if I can essentially lay out how this
works, then I can code it. If there are
ambiguities in what I need to do, then
it is not a clear definition at all.
This sort of goes right into the next
point. Why ambiguous done leads to scope
creep when done means different things
to different people leads to unfinished
work. uh hidden bugs or endless tweaking
creates mismatched expectations between
DevQA and clients which is really what
we just talked about is
and it's here it's that it's that back
and forth and I'm going to probably go
right into the next one since Michael
Sor stole this one uh and let him talk
about the next couple items and just
touch on this real quickly to give my
thoughts is really what the problem is
it does become very frustrating when you
don't know what done is because you have
and it it really is very much the
developers QA and customer because you
will have stuff that for example goes to
QA and it's to them not done. It hasn't
covered the requirements that they think
it needs to. So they kick it back to the
developers and they're like why is the
developer not getting the work done? The
developers like why is the QA, you know,
on my butt all the time? Why they keep
changing stuff? Why they why can't they
just accept it? And of course the same
thing happens with the customers like
it'll go all the way to the customer.
The customer's like this isn't what I
wanted. This isn't how I needed. they
miss stuff and it goes back and people
get frustrated. So it does lead to scope
creep and it's really more of that the
scope creep tends to be that like now
people start expanding what they want to
talk about or or add to the requirements
to try to make sure that they can figure
out what you know that it actually gets
done. It's almost shoot for the you know
the star so you fail and hit the moon.
It's that kind of stuff. It's just a bad
situation to be in. Real world examples,
stories from teams where unclear done
led to delays or rework. How a strong
definition of done saved another team
from project chaos. I'm going to throw
that one to you.
Yeah. So, I I'll run with this one
because
the company I've worked for over the
last year's transition. We were acquired
by another company. And before we were
acquired, we had clear
requirements. We knew what needed to be
done. Everything we had, we had
definition done. Our tickets were being
completed on time. We met the
expectations. Yes, there was
occasionally some rework because like
Rob said, when you deal with reports,
you run into, oh, that's a simple
change. But outside of reporting, almost
everything we did was able to be
completed on time, on task, and we knew
what it was we were doing and could test
it. In that transition shift to the new
company as we were pulled in
almost every ticket I have had it feels
like it is a monolithic spike. Every
single ticket I have is ambiguous. It is
basically make this work in inside of
this ecosystem.
Hell or high water just make it work.
The problem is this is such a monolithic
application that you have no idea where
to go within this application. There are
multiple teams working on this project
and unfortunately even though we are in
the project process of transitioning
into this new ecosystem, we're still
making change in the old ecosystem. So
you could have one piece you get it
working and then go back and pull the
latest change. What you just redid this
or oh you changed this now it doesn't
work. So this is so frustrating that
having clear guidelines and definition
of done really avoids that and can
hopefully get you across projects and
meet your deadlines.
>> Excellent. Good examples. And I'm going
to dive into the next one because we're
going to try to get through a couple of
these points this time. uh components of
a good definition of done code complete
and reviewed automated test passing
documentation updated deployment to
staging production verified acceptance
criteria met and signed off I think
that's a really good start and I I think
that I want to sort of touch on these
real quick uh each of these because they
code complete is and reviewed is
something that I think we should do on a
regular basis I think there is very much
a value to reviewing code I have worked
on projects that review have code
reviews very strong all the way to don't
do it at all and the strong honestly the
stronger the better. I think yes it
takes time, there's effort, there's it
can be frustrating because you get
something kicked back to you. It's like,
hey, you need to, you know, make this
conform, but but it does pay off in the
long run. And this is from somebody that
there's more than a few times I've been
frustrated with a code review,
especially uh the code analysis, static
analysis stuff I do all the time. I'll
get frustrated with something, it gets
kicked back and it says you should do
this and I'm like, I don't really want
to do it. like I'm just gonna and
there's always that temptation and
sometimes I fail I fall for it to just
say you know what I'm going to pass it
anyways and we're going to move on but
there is also a value in uh in doing
those uh automated test passing is like
I will I've been on those where it's
like okay I'm creating tests for
everything and I've been a situation
where I'm like all right I'm going to
whip a couple of tests out we're going
to test it we're going to move on um yes
going through and doing those tests can
be timeconuming but particularly getting
those autom automated test built will
help you in the long run. And yes,
sometimes they fail because you change
uh requirements or something like that
change, but it also gives you actually
an extra uh leverage to not change stuff
to say look if we have to change this
and I have used this before. If we have
to change it, the change is not that big
a deal, but we have to retest all of
this stuff or we have to update all of
these tests and then suddenly that thing
that was like, yes, it's a little change
in air quotes actually is something that
is not a small impact and we have to
actually think about that. Uh, and you
could say, well, just skip the testing.
But it's like, well, wait, but any of
those places it's testing, if one of
those fails, then we would have to go
find it. So, you're going to have to
keep doing it.
documentation update. We skip this all
the time. I know everybody does, but it
really should be something that we build
into our processes to make sure that's
part of done is that we, you know,
wherever we need to update
documentation, we do. So, I think the
deployment thing is something is getting
better with CI/CD and some of those
kinds of things and pipelines, but I
think we don't do it enough. I think
it's very good to deploy it and run it
through its tests on the on the new
site, make sure everything goes. Um, and
of course actual done is that it's been
signed off on. So, we probably have a
done during a sprint or done for a
certain step, but that is not done for
that feature because it's not done until
we can go all the way through and
somebody can actually use it. Uh,
thoughts on those?
>> Yeah, I want to briefly touch on that.
I'm going to just go right into the next
one. But one of the things that Rob
touched on, you know, the automated
testing, you know, going back and fixing
those tests, make sure you don't let
your tests get stale or just don't
delete tests that are failing. A lot of
situations, if you're rushing to get to
the end, they I've seen developers do
this where they don't maintain tests.
They just modify the test enough to make
the test pass, but not really meet the
requirement that the test is passing. So
make sure that you keep your tests
somewhat fresh to the requirements as
they change. Uh I'm just going to jump
into five. Uh who creates and maintains
the definition of done? You know project
owners, scrum masters and the dev teams
collaborate and uh DoD evolves as the
project matures. I'm going take that
first one. You know who creates and
maintains the definition done? The team,
your project owners, the scrum masters,
the dev teams. If you are working as the
developers, chances are within your team
itself, you as a team need to sit down
at least quarterly agree on what your
team
wants for definition of done. Everyone
should be on the same page so that there
is no ambiguity, no confusion of when
you scope out tickets, you flush out the
requirements that when you pick a
ticket, you set, hey, I'm going to get
it done in this amount of time. then
you're going to get it done in that
amount of time. And this does require
working with the project owners and the
scrum masters. At the beginning of this,
it's going to be difficult, but in the
long run, it's going to save you a lot
of time, headache, and hassle. What are
your thoughts, Rob?
>> Yeah, I think in that that's the whole
point is that if you have problems with
it early on, if you're if the scrum
master, the product owner u don't even
the dev team, if they don't have a good
definition of dumb, that should show up
in your retrospective. That should be
something that gets flagged. That should
be something that you correct as you
move forward because that's part of the
whole idea is that agile should be
getting better as you go. And honestly,
there have been more part of the reason
that I know that it's important to
define done is that we have had this
come up in sprints during as we've gone
through an agile project and we've
gotten to a point where we're like, you
know what, we need to do a better job of
done. Maybe we need to add something. We
need to change something, tweak
something. we've gotten away from maybe
one of our steps that now we're not
doing it right. So, let's go back to it.
Code review process. There been more
than a few times where it's like we need
to adjust the code review process. Uh
bring more people in, bring less people
in,
provide different uh a different format
of feedback. Um things or you know, less
feedback, more feedback, uh smaller
chunks of work so they're easier to
review. There's a lot of that kind of
stuff that goes on.
We're cruising right along. So um how to
implement definition of done in your
workflow incorporate into user stories
and sprint planning use checklists or
tools like Jira, GitHub and notion make
definition of done visible and agreed
upon by all stakeholders. Uh and this
really is just like once you've defined
done you should document it. There
should be something in your it should be
in your uh your team documentation, your
development processes, your project
processes that this is what done looks
like. These are the steps. These are the
bullet points that have to be a part of
that. They don't have to be included in
order for us to actually be done.
And then within that is we can then if
we're using this especially good if
you're using like you know Jira or one
of those kinds of things Trello or Son
or whatever it is that when it goes into
the done column then we know that all
those things have actually been
completed and it's not bad in some of
those that you have you know sometimes
the the columns the swim lanes that
you're moving your ticket through are
all of the things to define done. So
maybe it's like you start out and then
it's being coded and then once coding's
done it goes to unit testing and once
unit testing it goes to QA review and
then it goes to code review or you know
and like and not necessarily in that
order but it's like you can in your swim
lanes document all of the things that
need to be done and then that should
move through and then you can even have
things around that. You can have logic
that says it can only go from this
column to this swim lane to this swim
lane and only this person can move it
from this swim lane to this swim lane.
things like that that can really help
you
be more efficient with what your
definition of done is and how you move
your tasks through it. Thoughts on that
one?
>> So, the last thing I'll really touch on
with this is holding yourself and your
team accountable is one of the best ways
to implement definition of done into
your workflow.
If your team really it should be a
personal practice because a lot of teams
in some companies don't even do this
which is bad but personally if you want
to be a good developer go from coding to
becoming a developer to really just keep
growing and improving and being the best
developer that you can you need to hold
yourself accountable and make sure that
every task you go into or you work you
look at with the mindset of what is the
definition of done? what is it that I'm
trying to complete with this and how
does this fit into not just what I'm
doing but the bigger picture because
sometimes you could be say hey build
this but in the bigger scope of things
that's not what needs to be it's
actually something else but kind of got
lost in perspective the best example I
can think of for that is go back to that
tree swing picture that's all over the
internet for software development starts
out with this is what was pitched a tree
swing you go through multiple iterations
roller coasters tree, no tree, and all
the customer really wanted was a rope
and a tire. They wanted a tire swing.
Defining definition of done helps you
avoid scope creep, but also helps you
ensure that the requirements stay on
task and get you the right product at
the end of the development cycle.
>> Yeah, we talk a lot about knowing your
why. Your definition of done is your why
for each individual task basically. It's
like it really is. the things that keep
you uh it's a guard rails for your your
work and to make sure that you stay on
task and stay on focus.
That being said, it is time to wrap this
one up. Uh as always, shoot us an email
at info developer.com. If you've got uh
suggestions, product ideas or anything
like that for topic ideas or product
ideas, I guess, as well, any of those
things, we'll be happy to hear from you.
Uh we want your feedback because we're
here for you. so we can build better
developers, you can build a better
podcast uh by letting us know what your
thoughts are and where you want to go,
future topics, uh areas of interest,
things like that. I know there are some
things we haven't spent a lot of time
on, so we can always go back to those.
Um, also you can check us out on X. You
can go out at developer.com. You can go
to or you can go developer. You can go
developer.com and we have plenty of
places for you to leave us feedback.
We've got a contact us form. You can
leave stuff there. You can, we have a
developer Facebook page. You can
definitely put stuff out there. However
you want to get a hold of us, we're
happy to get that feedback and
incorporate into us building a better
piece of sol a better solution, better
bit of content for you. Uh you can
always check us out if you're not on the
uh out on YouTube on the developer
channel. Uh, also if for some reason
you're tired of seeing our faces and you
want to just listen to us on audio,
wherever you listen to podcasts, uh, you
can find the Building Better Developers
Development Podcast.
As always, I appreciate your time.
Appreciate you hanging out with us for a
while. I appreciate you putting up with
my very lame uh, introduction in
Spanish. I'll try to like clean that
stuff up. Uh, go out there and have
yourself a great day, a great week, and
we will talk to you next time.
bonus material. So let's see, we are I
guess to
seven
DoD is a weapon against scope creep.
Keeps features from expanding endlessly.
Forces conversations on what's really
needed. Provides objective criteria for
complete. And then tips for developers
to advocate for a clear definition of
done. Push for DoD clarity uh early and
often. Use it to manage client
expectations. learn to f say let's meet
the definition of done first then
consider enhancements. So where do you
want to go with that?
>> So it kind of goes I think one of seven
and eight kind of go together. So I
think forces conversation on what really
is needed and learn to say let's meet
the DoD first.
This gets back to that why. Why are we
building this project? What is the need?
What is the MVP? If you focus on the MVP
and you focus on the requirements around
getting it done and the why then really
focusing on the definition of done for
that should be a no-brainer. It should
be fairly straightforward to stick to
what is our why and what it you know
what is an MVP that we need to get to
the end of the uh you know end of the
project. If you start getting features
that are not MVP then you are off track.
you need to stick to that MVP and then
focus on the definition of done to get
that MVP done. Once the MVP is done,
then maybe you can come back and look at
some other features and things at that
point. But that is a different
requirement set and a different
definition of done.
Yeah, you sort of stole my thunder, but
I want to go with that one as well:
let's finish the work first and then add
to it. That is very often where we need
to go, and it protects us as developers,
too. When somebody comes at us and says,
"Okay, what about doing this?" we can go
back to the ticket and ask, "Is that in
the ticket? Did that get done? Okay, so
we've done this and this and this." And
when they say, "Well, hey, we want to
make a change," we can respond, "Okay,
but let's finish this first. Let's not
go back and change it now, because then
we'd have to roll a step back; we've
done these things based on that." Now,
it is a little bit of a mini waterfall
approach, because part of waterfall is
that once you say you're going to do
something, you just do it, and anything
else gets fixed in the next version.
That's sort of what we're talking about
here: let's make sure we get done what
we need to get done, and then we'll
worry about tweaks and enhancements. And
when you have a defined process, it
helps, because you can say, "This is
where we're at, and this is how long it
typically takes us to get to done." So
the requester can wait a known amount of
time, we finish, and then we move on to
the next thing. We can build on what's
already been done, or make adjustments
to it, as opposed to changing it
midstream.
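The "finish first, then change" flow described above can be sketched as a simple gate: a change request against in-progress work gets queued for the next iteration instead of being applied midstream. All of the names and the ticket shape here are hypothetical, purely to illustrate the policy.

```python
# Sketch of a "finish the ticket first" policy: change requests against
# in-progress work are deferred to the backlog rather than applied midstream.
# The ticket structure and names are hypothetical illustrations.

def handle_change_request(ticket: dict, request: str, backlog: list) -> str:
    if ticket["status"] != "done":
        # Protect the work in flight: queue the change instead of pivoting now.
        backlog.append(request)
        return f"Deferred: finish {ticket['id']} first, then revisit {request!r}"
    # Once the ticket meets its DoD, enhancements are fair game.
    return f"Accepted: {request!r} scheduled as a follow-up to {ticket['id']}"

backlog = []
ticket = {"id": "DEV-42", "status": "in_progress"}
print(handle_change_request(ticket, "add SSO support", backlog))

ticket["status"] = "done"
print(handle_change_request(ticket, "add SSO support", backlog))
```

The design choice is that a change request is never rejected outright; it is either accepted as follow-up work or parked in the backlog, which mirrors the "we'll be happy to pivot, but let us finish first" stance.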
Now, to be fair, this approach can be
waterfall-like, but sometimes it's about
sticking to the MVP. Yes, you can make
some changes within the MVP, but make
sure they still fit the MVP. If you
stick to that, you can still stay within
the agile methodology, because agile
lets you pivot. Okay, the MVP slightly
changed because this feature isn't quite
fleshed out; you will run into that. But
as Rob mentioned, sometimes it is
waterfall-like, because you do have to
stick to what you're trying to get done
first before you can pivot outside of
the MVP.
>> Let me give an example of that. If
you're walking and you try to pivot
without planting your foot, you're going
to plant your butt, basically, or your
face. You pivot when you're settled; you
have to have a stable position to pivot
from. If you pivot while you're in the
middle of working on something, you are
likely to fall on your butt, just like
you would physically. I've been in too
many situations where there's pivoting
going on, we don't get something done,
we pivot midstream, and the next thing
you know, you've got partial stuff
everywhere. So use done to get yourself
to a point where you're stable enough to
pivot. Otherwise, things get out of hand
really fast, and it gets really hard to
keep track of what we are doing, where
we are at, what is done, what is not,
what's in motion, and what needs to not
be in motion. So that very much is
something that helps you say, "Yes,
we'll be happy to pivot, but let us
finish this thought first." Let us get
to a resting point where we can pivot,
rather than trying to do it midstream.
This will also help with firefighting
issues and the like, because those are
the disruptions that get thrown at a
sprint: "Hey, change this," or "Throw
this in." That's why you protect your
sprints as a scrum master. You don't
want to get into a situation where
you're doing so much zigging, zagging,
weaving, and pivoting that the next
thing you know, you really don't know
what you have. You're sitting on mush
instead of solid ground.
I think it is time for me to move on
from the solid ground that I'm on, which
is the end of this episode. And we will
return; we're not done. There's plenty
more stuff out there, plenty of AI out
there, and the AI is going to keep
spitting things at us, and we're going
to keep talking about them until we run
out of episodes from that season, and
we've got more than a few left. Then
we'll figure out where we want to go
next. So, you know, we're always open to
suggestions for future topics and future
seasons, and now is the perfect time to
send them, while we're starting to think
about where we may go next. Shoot us an
email, or send us feedback wherever you
give us feedback, and we'll take a look.
We're happy to take your suggestions and
find a way to work them into what we are
doing. I appreciate so much you guys
hanging out with us, and we will see you
again next time.
[Music]