Summary
In this episode, Rob and Michael discuss the concept of scope creep in software development, how it can lead to cost overruns and missed schedules, and how to avoid it by asking the right questions and assuming nothing.
Highlights
- A missed requirement is, technically, a new feature because it was not in the original requirements.
- It is also a new feature in the sense that it was not part of the original estimate.
- Assume nothing when putting requirements together; do not assume that anything is already there.
- Constantly ask questions like 'Is there anything else?' and 'How else does this work?'
Key Takeaways
- A missed requirement is, technically, a new feature because it was not in the original requirements.
- Assuming nothing and asking the right questions are key to avoiding scope creep.
- The concept of 'done' is crucial in avoiding scope creep.
- Scope creep can lead to cost overruns and missed schedules.
- Examples of scope creep in real-world projects can be used to illustrate its importance.
Practical Lessons
- Developers should assume nothing and ask the right questions when putting requirements together.
- The concept of 'done' is crucial in avoiding scope creep.
- Developers should be mindful of scope creep and take steps to avoid it.
Strong Lines
- Scope creep is a new feature because it was not in the original requirements.
- Assuming nothing and asking the right questions are key to avoiding scope creep.
Blog Post Angles
- How to avoid scope creep in software development
- The importance of assuming nothing and asking the right questions
- Examples of scope creep in real-world projects
- How to deliver successful projects by avoiding scope creep
- The concept of 'done' and its role in avoiding scope creep
Keywords
- Scope creep
- Software development
- Requirements gathering
- Definition of done
- Assuming nothing
Transcript Text
Welcome to Building Better Developers, the developer podcast, where we work on getting better step by step, professionally and personally. Let's get started. Hello and welcome back. We are continuing our season of Building Better Developers. We are the Building Better Developers podcast, but this season we're really talking about the developer journey, which will hopefully take you from a developer to a better developer, whatever that looks like for you: whether you're going from the beginning of your career to the end and it's dramatically different, or whether you're just starting out. Maybe at the beginning of the year you ask, where am I at? Let's talk a little bit about your journey through the year. This episode, I want to jump right into the topic: we're going to talk about scope creep. We're going to talk about requirements. We're going to talk about what "done" is, and what that means. That's something that is often a problem in projects; it allows things to get a little out of hand, and then you get into cost overruns and missed schedules. But if you didn't know what you were building, it's hard to really schedule it right, either. Before we do all of that, I want to introduce myself, and I'll leave that little cliffhanger there so that you come back after the introductions. My name is Rob Broadhead. I am one of the founders of Building Better Developers and also a founder of RB Consulting, where we help you find ways to use technology better through integration, simplification, and automation: we take that big, expensive car that you've built in your technology and turn it into a fine, lean machine.
On the other side, I'm also going to bring up what we did the last couple of times, which is: what is a good thing and a bad thing that has happened since the last time we met that you want to throw out there? Introduce yourself, Michael. Hey, everyone, my name is Mike Mollage. I'm also one of the co-founders of Building Better Developers, and I'm also the founder of Envision QA. We focus on high-quality software and working with our customers to really establish what they need from their technology, whether that's better software, better integration with existing software, or building something very custom to their needs. So, good and bad. Good: making progress on my latest project, peeling back the onion and really getting down into some of the nitty-gritty details. It's going really well. Not so well: at the end of the month I forgot to pay a bill, so unfortunately I got hit with a late fee. But, you know, it happens. What about you? All right, so, good stuff; I've got a lot of stuff this time around. I'm just coming off a vacation. I got to go visit Aruba. If you haven't been, go. It's awesome, really a great time. We were exhausted by the time we got done with our vacation, but it was a good one. The bad thing, and I guess it's not that bad, is one of those projects where you need certain permissions and security and access in order to get the project done. And I'm just in this waiting period. Every day it's, hey, I need you to give me this, this, and this access, and they're just really slow to respond. So you're drumming your fingers going, okay, can't really help you. We've got this project we really want to get done; we should be able to get it done quickly, but not if you don't give us access to the proper stuff. So that's my bad. Which actually is their bad; it's just my bad experience right now.
Which fits right in with requirements and scope creep. Now, I think one of the big things about scope creep that I want to talk about first, because I haven't really heard a lot of people talk about it in this sense, is: what really is scope creep? A lot of times with projects, yes, there are dates that slip, or change requests and things like that come in, and so whatever the original project looked like, that little box that somebody put it in and said it's going to cost X amount of money and Y amount of time, has changed. And often it's just called scope creep because, oh, well, we added a feature, we added a feature, we added a feature. But people don't always see it that way, because sometimes they think, and it could be us as developers or it could be our customer, that we added a "feature", and I'm air quoting; I don't know what that looks like in the audio world, but in the video world you can see me. What we really did is not add a feature at all. We found a requirement that was missed. There was a gap in the requirements. And this is where it gets interesting, because depending on how the project is set up, sometimes, and actually quite often, there will be different views of whether that was a problem or a flaw with the development or implementation team, or whether it was scope creep or something along those lines. Now, in the very technical sense, it is a new feature, because it was not in the original requirements. Even if you thought it was in the original requirements, it's a new feature, because it was not part of the original estimate. It may seem like I'm splitting hairs a little here, but it is something that I think changes the conversation.
And this is about being a better developer. It changes the conversation about that particular change, particularly when these things come in deep into implementation, when maybe you're just barely on schedule or already behind schedule. Now you have a requirement come in and you realize, oh, we missed it; whoever "we" is, somebody missed that requirement. And now the discussion is: was it a requirement? Is it really a requirement, or is it something we just figured out as we went along that is a nice-to-have? If it technically should have been caught originally, if you had done everything right you would have had it in your requirements, then it is a requirement. It moves into the requirements, and we do have to adjust our schedules, but we have a reason for it. It's not what anybody likes; nobody's happy with it, but it is a valid reason. It is what it is. That is different from: we're working along on our project and, hey, we found out we need to integrate with this different system, or we don't like the way this works now that we see it, or we need a reporting feature, an export feature, an import feature; those things that show up all the time once you get a customer in front of the application and they actually start using it, especially if they don't have a really hard example to work from. And these days it happens more and more: people realize what the technology can do, and then they suddenly say, oh wait, because we can do this, we really need to do that. It may be a very firm requirement, but that's a different discussion.
And it's about how you approach it, because the leverage is different. When it's, hey, we're adding something to what we originally said we would provide, it really is a new feature. You need to take a different approach when it's, oh, we forgot to put that feature in, so in a way it's not a feature at all. This is particularly true, and it's something we were talking about a little before we hit record, when you have a customer, or a customer representative, that product owner, who has a lot of stuff in their head and a lot of assumptions that go with it. There are going to be things where it's, oh, you should know that this is how this process works, or, yes, of course that process always has these five steps. And you have to explain: well, to me it looked like it only had three steps; I didn't know about steps four and five. These are different conversations. The good thing is that you can avoid them if you ask those questions before you get into the implementation phase. The bad thing, and this is one of those good-and-bad things, is that you don't always know what you don't know. This goes back a little; call it a flashback to our how-do-you-gather-requirements discussion, because that's really where the value is. Your goal is to ask the right questions and assume nothing when you're putting those requirements together. Don't assume that anything is there, and constantly follow up: is there anything else? Is there something else? How else does this work? Let me walk you through the process. So, for example, if I see three steps but there are five, I walk you, the customer, through the process and say: step one is this, step two is this, step three is that. Does that complete it?
Hopefully they will say, wait, you missed steps four and five. Those are the kinds of questions. Now, I think I've talked long enough to set the stage on this, and hopefully you've got that. I also wanted to pitch it to you to talk about defining what done is, because I think that gets into your favorite place to talk about testing and things like that. I know it's subtle, but I've picked that up. So, thoughts on all this? Yeah. I liked how you laid out the two different ways: you have feature creep, and then you have a kind of bug creep, where there's actually something wrong with the system, but you don't catch it until you actually get into the software and work with it. Interestingly, I ran into that recently. We found out that we had reports created one way, we're switching to another data system on the backend, and all of a sudden the sort order of our reports is not the same. So we uncovered a bug, but they want apples to apples. They want the new report to look like the old report, which can't happen, because of the sorting; we would essentially have to create a bug to make it look like the old way, and since we're actually pulling different data, we can't really duplicate it. So it's an apples-and-oranges issue, which is something you do run into with features. It's, oh, we think the software works like this, like Rob touched on, but it is actually doing that. So it's not apples to apples; it's apples to oranges. It's not doing what we thought, so we need to take a step back, understand what that requirement is, define it, and make the software work the way it's expected to, not necessarily the way we think it does. That gets you to the definition of done. What is the end goal for the change, the requirements, whatever it is you're working on? What does it mean to be done?
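Michael's report-ordering mismatch is easy to reproduce in a few lines. This is a hypothetical sketch, with invented rows and sort keys: two systems hold the same data, but unless the expected sort order is written into the requirement, "same report" is ambiguous.

```python
# Two backends can return "the same" rows in different orders.
# The rows and keys here are invented for illustration.
rows = [
    {"name": "apple", "id": 3},
    {"name": "Banana", "id": 1},
    {"name": "cherry", "id": 2},
]

# Old system: case-sensitive sort by name (uppercase sorts first).
old_order = sorted(rows, key=lambda r: r["name"])
# New system: orders by internal id instead.
new_order = sorted(rows, key=lambda r: r["id"])

# Same data, so the reports look "equal" as a set...
assert {r["id"] for r in old_order} == {r["id"] for r in new_order}
# ...but not apples to apples: the row order differs.
assert [r["id"] for r in old_order] != [r["id"] for r in new_order]
```

The way out is the one the hosts describe: capture the intended sort as an explicit requirement (e.g. "ordered case-insensitively by name") rather than chasing byte-for-byte parity with the old system.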
Does the feature have to be complete? In agile, you try to go through smaller sprints so you are always delivering code. You don't want these monolithic requirements or monolithic changes. So how can you break it down in a way where you can roll things out in stages and still have different stages of done? For instance, if you're building a big application that has security, you can build the login screen and validate login, but you don't need the entire application for that. You can focus on one section, one feature, within that particular sprint. Now, circling back around to the definition of done: if you're working within the agile approach, or in these kinds of situations, one of the things that is interesting from a test-driven approach, a definition-of-done approach, is that you start with the end. What are you supposed to have at the end? And like Rob said, you shouldn't have any questions about this. It should be very straightforward: if this, then this, then this. If there's an "else" with a possible splitting of scenarios that aren't defined, you can't write that code. Those are unknown requirements, and therefore you could run into a situation where you're almost to the end and it's, oh, we need this, oh, we need this. Now you run into feature creep, or requirements creep from missing requirements. The one thing Rob didn't touch on is the other side of things. As developers, especially front-end developers, we will take a way of implementing something, like displaying a table, and we will build the table, but we don't like the way it looks. So we'll add some colors to it. We'll add some features, some bells and whistles. That is scope creep; it's not even feature creep. It's not necessary. Don't do that; if you do, your timelines get blown out of proportion and you don't meet the deadline.
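Michael's "start with the end" idea can be sketched as a definition of done written as tests before the real code exists. A minimal illustration, using the login-validation example from above; `validate_login` and the sample credentials are invented for this sketch.

```python
# Definition of done for a hypothetical login feature, expressed as checks.
# In practice the asserts would be agreed on first, the code written after.

KNOWN_USERS = {"alice": "s3cret"}  # stand-in for a real credential store

def validate_login(username: str, password: str) -> bool:
    """Return True only for a known username/password pair."""
    if not username or not password:   # requirement: reject empty input
        return False
    return KNOWN_USERS.get(username) == password

# Each assert is one line of the definition of done: happy path,
# bad password, empty input, unknown user. If a scenario is missing
# here, it is an unknown requirement and the feature is not "done".
assert validate_login("alice", "s3cret") is True
assert validate_login("alice", "wrong") is False
assert validate_login("", "s3cret") is False
assert validate_login("bob", "s3cret") is False
```

Notice that the checks stand alone for this one sprint-sized slice: you can call the login feature done without the rest of the application existing.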
So you've added scope creep to an application where it wasn't necessary. You made code changes that weren't required by the requirements and weren't in the definition of done. You've now essentially blown your timeline to do something you thought was cool, but it's not what the customer wanted or expected. I've run into that; I've done it myself. Especially in your early years as a developer, you want to show off, you want to show your skills, so you're going to make that mistake. It's not necessarily a bad one, but it is one that has impact; it has a negative impact on your timelines. And while the feature may look good, it may be something the customer absolutely hates, and then you get a bad reputation with the end user. So it goes back to that old adage: the customer is always right. But the customer is always right if, and only if, they give you the requirements they need for the feature you're building. If they leave something out, you're going to miss your deadline, and you're not going to be able to build them what they want. So before I pass it back to you, one of the things I would say, and we talked about this in the requirements discussion, is that as you go through the process of agile, and we'll get into this with agile more, you have different stages where you get the requirements for the next sprint, and you're supposed to go through and do a backlog refinement: look at the requirements and flesh them out. If you are not doing that, you are going to be in a lot of trouble, because you don't know if that ticket has all the requirements it needs. It may not have the definition of done. It may be missing, oh, hey, you need to integrate with this particular software, so before you can even write the changes, you need to get access, like Rob is running into on one of his projects.
These are things that have impact, and if you really look at the tickets or the requirements first, you should be able to identify them at the beginning. Now, sometimes something is going to get missed. You may be dealing with legacy software where no one in the company knows how the old stuff works anymore; you're trying to rebuild it or maintain it, and it's going to be a hot mess. You're going to run into situations where you hit scope creep, feature creep, and missing requirements. But in those situations, before you even begin, you need to set the expectation that this is how we perceive it working. We're writing the definition of done and the requirements on that basis. However, we're going to note within scope that this may not be what's expected, and at that point we're going to have to either create another ticket or go back and reanalyze the requirements. So you do have checkpoints, especially with agile, to protect yourself, but you need to make sure you utilize them, and not just go headlong and assume that everything the customer gives you, or everything in the tickets, is what you need to do the job. That's actually a really good junior-versus-mid-versus-senior-level developer distinction right in the midst of that, which I've noticed, and it does make a huge difference. When you get a task, however it's assigned, whether you've pulled a ticket off the board because it's part of the sprint and you're working in an agile mode, or it's something you've been assigned by your boss or your manager, one of the first things we should always do is some level of design. Even if we're given a design and most of the pieces, as a developer, even as a coder, there's some level of design, some level of: this is how I'm going to get this task completed.
And this is where I say that usually a junior developer is just going to grab it and start running with it. They're just going to be like, okay, I'm going to start cranking code. A little further on, depending on where you're at in your software maturity level, we'll say, you realize that you've got to think about this a little before you do it. And as you get further on, you realize you really need to think about it. As we've talked about before, it's not just coding the happy path; it's: how am I going to solve this problem? Hopefully they gave us everything we need: how am I going to solve this problem with, we'll call it the proper data, the happy path? Where could there be exceptions? What does it look like when there's an exception? How am I supposed to handle those? What are the constraints on values, and all of those things that are factors in how you implement the solution? So if you sit down, and the first thing you do is look at it and say, okay, I'm going to sketch out, in some way, shape, or form, with whatever tool you use, a design for what I'm doing, that is often going to immediately start bubbling up those questions Michael referred to. You're going to look at stuff and go, wait, I don't know what this is. I don't know how this is supposed to work. This is a flaw in the requirements. This is something that isn't fleshed out enough; there's a gap. So you ask questions. That is part of it, because, and this goes back to what I said earlier, if you assume, oh, well, I can assume all of this; you can, but you may build something that actually does not fit the solution.
Now, hopefully the requirements include all the things you need to know about done. What does done look like? What is proper input? What's proper output? What are all the different ways it can handle invalid input, and how is it supposed to do that? If you've got all of that, awesome. But if you don't, that sketch is going to be your safety net. Basically, regardless of whether it's agile or anything else, when you're given that task, sit down and sketch it out somewhere; that is going to highlight where there may be gaps. I'll throw it back to you before we wrap this one up. Yeah, so to add on top of that: especially for some of the junior and early beginner developers, one of the other issues you can run into if you just jump in and code, and I've seen this a lot, is copying and pasting code. You may have a feature somewhere else that you think meets the requirements, so you grab it, throw it in, and say you're done. If you don't test it against the requirements, that copied code could bite you in the ass. So take the time, look at the requirements, and even after you make the change, make sure that it functions to the definition of done. Don't just assume; always make sure that it works as expected. I will follow that. That is an excellent point, and I feel a little bad that I did not bring it up, because it's a bit of a sore point for me in some cases. Most importantly, take what Michael just said in the context of ChatGPT or any AI tool. I don't know how many times I've seen code that has been pulled in from somewhere. Back in my day, we had to write it all ourselves, but at some point there was Google, and all these other places you can get code, and it gets lifted and dropped in: boom, this solves the problem. No, it does not necessarily solve the problem. Most importantly, if you get it through one of these AI tools: test it, test it, test it.
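Michael's copy-paste warning is worth making concrete. In this hypothetical sketch (the functions and data are invented), a helper lifted from another feature runs without error and "works", yet fails the current feature's requirement; only a test written against this feature's definition of done catches it.

```python
# A pasted helper can run fine and still fail THIS feature's requirement.
# Hypothetical requirement: list events newest-first.

def copied_sort(events):
    # Lifted from another feature: sorts ascending (oldest-first) by timestamp.
    return sorted(events, key=lambda e: e["ts"])

events = [
    {"ts": 2, "name": "deploy"},
    {"ts": 1, "name": "build"},
    {"ts": 3, "name": "verify"},
]

expected_newest_first = [3, 2, 1]

# Test against this feature's definition of done: the pasted code fails it.
assert [e["ts"] for e in copied_sort(events)] != expected_newest_first

# The fix, made only after testing to the requirement:
def feature_sort(events):
    return sorted(events, key=lambda e: e["ts"], reverse=True)

assert [e["ts"] for e in feature_sort(events)] == expected_newest_first
```

The same discipline applies to AI-generated code: it compiles, it runs, it even looks right, but until it passes checks derived from the actual requirement, it has not solved your problem.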
I don't know how many times I have found stuff, and I do use those tools to some extent. And "to some extent" means that every time I do, I walk through everything to make sure it actually does what I need it to do, and if it doesn't, I adjust it accordingly. Nine times out of ten I can't just take that code, plug it in, and, boom, be off and running. Almost every time, it's not solving exactly the same problem, so I have to make some adjustments. So this is that being-a-better-developer piece: being a better developer does not mean you just slam code in and get it done faster. It means you're building good code, or at the very least you're solving the problem. And as you become a better developer, you're going to have the right habits to write better code from the start. If not, part of it is having some sort of technology backlog where you go back, address your tech debt, and clean that stuff up. But it's not throw the code in there, even though it's horribly broken, and then clean it up; it's make it work, and then come back, clean it up, and make sure it follows all your proper standards and things like that. That being said, we got started here, but we are not done. We teased a little the whole idea of agile and how that's a bit different, and it is, and that's what we're going to talk about next episode. So you already know what's coming. Hold your breath, but not too long, because it does take a while before these come out.
And then we'll come back and continue jumping right into this whole question of how you deal with done and avoid scope creep, or scope stampede, as it often becomes, or the world-famous death march, when you're in an agile world, because that can and often has happened, and how do you set expectations? But I've given you enough. You can send me an email at info at developer.com. You can check us out on developer.com. We've got a contact form; throw in your suggestions, your comments, your favorite things that we've covered, the things we need to cover more, the things we need to cover less. If we need to cover our faces in the videos, that's fine as well. I mean, we get that we're not the prettiest people, but we're here for you. So let us know what you want; let us know how we can help, because we're also getting close to the end of the season, and we're going to figure out what our next season is going to be, just like we sort of figure out our episodes sometimes right as we go. I think we figured out this season at the beginning of the first episode; we were like, hey, this is what the season is going to be. Sometimes we do that, because technology is everywhere and goes all over the place. That being said, you can go all over the place, but come back here next time. Go out there and have yourself a great day, a great week, and we will talk to you next time. Thank you for listening to Building Better Developers, the developer podcast. You can subscribe on Apple Podcasts, Stitcher, Amazon, anywhere you can find podcasts; we are there. And remember: just a little bit of effort every day adds up into great momentum and great success.