Summary
In this episode, we continue our exploration of the Agile Manifesto, focusing on its 12 principles. We dive into the third principle, which calls for delivering working software frequently, with a preference for the shorter time scale. We discuss the difference between working and valuable software, and why frequent delivery is crucial for successful software development.
Detailed Notes
The Agile Manifesto is a set of values and principles that guide software development, and its 12 principles are the foundation of Agile practice. The third principle states: "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale." This is a clarification of the first principle, which reads: "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." Notably, working software is not the same as valuable software: wireframes, clickable demos, and straw-man proposals can all deliver value before anything works. The speaker argues that frequent delivery of working software is crucial to successful development because it enables continuous improvement, faster time-to-market, and better alignment with customer needs.
Highlights
- Deliver working software frequently from a couple of weeks to a couple of months with a preference to the shorter time scale
- There is value in wireframes, clickable demos, and straw-man proposals
- Working software is not the same as valuable software
- Our highest priority is to satisfy the customer through early and continuous delivery of valuable software
Key Takeaways
- The third principle of the Agile Manifesto emphasizes the importance of delivering working software frequently, with a preference for the shorter time scale.
- Working software is not the same as valuable software.
- The first principle reads: "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."
- Frequent delivery is crucial to successful software development because it enables continuous improvement, faster time-to-market, and better alignment with customer needs.
Practical Lessons
- Deliver working software frequently to satisfy customer needs.
- Prioritize the shorter time scale for delivering working software.
- Make working software a goal for each sprint cycle.
Strong Lines
- Deliver working software frequently from a couple of weeks to a couple of months with a preference to the shorter time scale.
- There is value in wireframes, clickable demos, and straw-man proposals.
- Working software is not the same as valuable software.
Blog Post Angles
- Why delivering working software frequently is crucial for successful software development.
- The difference between working and valuable software.
- How to prioritize the shorter time scale for delivering working software.
- The importance of delivering working software frequently to satisfy customer needs.
Keywords
- Agile Manifesto
- 12 principles
- third principle
- working software
- valuable software
- customer needs
- faster time-to-market
- continuous improvement
- better alignment with customer needs
Transcript Text
This is Building Better Developers, the Develpreneur podcast. We will accomplish our goals through sharing experience, improving tech skills, increasing business knowledge, and embracing life. Let's dive into the next episode. Well, hello and welcome back. We are continuing our season where we're looking at the Agile Manifesto. Primarily, right now, we're focusing on the 12 principles that they lay out. This episode, we've gotten all the way up to principle number three. So let's dive into it. Third principle says, quote, deliver working software frequently from a couple of weeks to a couple of months with a preference to the shorter time scale. End quote. This is an interesting, we'll say clarification of the first principle. First principle started out with our favorite. Our highest priority is to satisfy the customer. But then it says through early and continuous delivery of valuable software. So we start off talking about valuable software, but now we get to the third principle and they start with working software. Now you may say, what's the difference? How can it be valuable if it doesn't work? Well, there's value in wireframes. There's value in clickable demos. There's a lot of stuff that we can provide that has value as a design tool, as a conversation as a straw man, as a way to essentially hang details on a proposal, a solution that's different from working. An interesting thing here is that they even put a timeframe to it. And this is any software. This is not, let's say, just small mobile apps or simple web applications or a quick calculator desktop application or something like that. This goes all the way up to huge enterprise resource management programs or huge CMSs or things like that. Some of those probably, I guess, CMS, probably easier to put something working sooner rather than later. 
You can think of some rather large scale enterprise software probably that you've worked on that it's difficult in a matter of weeks or maybe a few months to put something working out there. If you think of a big electronic medical records, there's a whole lot of stuff that goes into that. Financial applications, there's a lot of those. There's some simple stuff maybe, but if you go to enterprise level financial things, you're talking about huge amount of requirements and a lot of work that goes into those. So this is sort of putting their necks out a little bit to say, we want to deliver working software frequently from a couple of weeks to a couple of months with a preference to the shorter time scale. And now this works really interestingly into how sprints are done when you look at that approach. There are differences in how people focus on sprints, but if you look at the, we'll call it the official way to do a sprint, ideally each sprint cycle ends with a deployment, ends with working software theoretically. Now working is a, working and valuable are two different things. I know that I've been through sprints where we have delivered quote, you know, working software and really all it has done has been maybe, let's say for example, maybe it allows you to log in and log out or log in, see a homepage and log out. Technically it's working. Is it valuable? No, because there's not much you can do with it. Take that back. Because you can start talking about even in that situation, the registration method, what types of users are there? Are there say administrative versus regular users? Are there maybe read only versus write authentication or types? There's things that you could do that, there's conversations you can start that grow from log in and log out. Even simple things that are common like forgot password, password strength rules if there are any validations around what is, what is your log in? Is it an email address? Is it a phone number? 
Is it some unique identifier within the system? Is it maybe a unique identifier within an organization that's within a system? You can ask and answer some pretty good questions starting from that. But back to the point is that one of the goals of a sprint is to have working software at each step, at each completed sprint. That's why we code, we test, and we go through a deployment. I think most people don't. I'd be surprised by anybody that has had a 100% hit rate where every single sprint, you generate working software at the end of it that is significantly new or different. Usually after you get past the first sprint that generates software, then it becomes a little easier to make tweaks and adjustments to it going forward. But there are, even in that case, I know that there have been sprints that we've gone through where we haven't really produced working software. We've maybe put some stuff out there. It may or may not. I guess it works because we've tested it. But in some cases, it may not be really feature complete enough to be considered working. Sometimes we'll sort of roll out bits and pieces and don't turn everything on. But that's, I digress. But that idea of every sprint producing working software comes directly from this third principle, if you want to look at it from an agile point of view, or maybe vice versa, as this principle points directly to having working software at the end of each sprint. And it's frequently too. It's not just you get it out once and then they go away and you don't do anything again until the end. This "working software frequently" implies, I would say, and you can argue, but I think it would be hard to argue against the idea that it implies that since we are implementing along the way, this working software that we are frequently delivering is iterations on prior versions, prior releases. Now the shorter time scale, if you look at sprints, we've fast forwarded quite a bit from this. 
Sprints typically are, in my experience, two to four weeks long. Some are a week, some maybe five or six weeks, but the vast majority of companies that I've talked to and places that I've worked, you're in that two-to-four-week range. Usually it's actually two to three weeks. I'm trying to think of who's done four week sprints even. And it doesn't have to be, some people do, there's a few that do sort of odd things. So they'll do a 17 day or something like that. They do something a little odd because of how things work out, how things lay out in order to hit maintenance windows and stuff like that. So there are some things that, you know, shoehorn the sprints into some of the existing systems and processes within an organization. But that's usually adapted to the organization, which is fine. There's nothing wrong with that. But if you are essentially developing software in a vacuum where you don't have these other requirements that would cause you to shift stuff around, then it would make sense to your users, to your customer that your highest priority is to satisfy them. It would make sense to the customer to expect a new release every whatever the timeframe is, every two weeks, once a month, every three weeks, every six weeks, whatever it is. It makes sense for them to have a certain period of time that they are going to expect releases within that time. That's why we have release schedules all over the place, even when it has nothing to do with Agile. There is software that is waterfall, but they do major releases twice a year and they do fix releases once a month. First place I worked, that was exactly what we did. Major releases, I think it was twice a year, maybe once a quarter, but I think it was twice a year. And you had fixes, patches, or whatever that went out once a month. And there's hot fixes and stuff that go out more frequently, but that was the expected release schedule. 
And that gives your customers something to help them stay satisfied so that they're seeing progress, they're seeing the ball move forward, and it's coming in a way that they can schedule it, that they can work with it. If you just haphazardly throw releases out, then your customers are never really sure when they need to maybe schedule some time to be able to look at a new release or when they need to, I don't know, back up their data and things like that in order to handle the potentials of an issue in a new release. So there's a lot that comes out of that regular cadence of a release schedule. So of course, this is a very interesting, it's a very tightly focused principle, but one that is, I think, very critical to software success. I've definitely seen it. The people that signed on to this manifesto saw it. Early on, they realized, early on in the principles, they said, hey, one of the things we need to do is deliver working software to the user so they can, we'll say in quotes, play around with it so they can get used to it. We have talked about this many times, the idea of clickable demos and then working your way from a clickable demo into something that is steadily being turned real, turned live, turned into something useful for the end user. If you do it right, before your software is even complete, there is that value that we talked about that allows them to actually put it to use. There are definitely situations where this isn't going to work, where you have to do things like dual entry because there's an existing system you're replacing, things like that. There's definitely situations where valuable is not the same as they are going to use it for real work. The valuable side of it is you're just getting feedback, but not necessarily that they're using it for real work. Although, I think you should push that. I think that is something that is very useful, it's valuable as you're building these releases that you're delivering this working software. 
The one thing you do is you try to get them to use real data. If you've got imports or migrations or something like that, hopefully those can get worked in at some point. One, you want to see what the data actually looks like, especially if you're a designer from a user experience point of view. This is something that you probably know. It's probably one of the first things that you're concerned about because there's a difference between displaying, for example, a list of 20 items and results that is 20,000 items. The user experience is going to be different. How you handle that. The tools around that, the controls that you're going to want to provide to the user are different. Then there's simpler things like just the length of text, special characters. You want the user experience to work with real data. Now the other side of the back end side of it, you want to be able to look at things like performance and things like that. There's a big difference between an application that works for 10 records and then trying to push a million records through the same application. May not handle it. Good design and architecture can make that probably not a problem, but even so, it never hurts to put your design and your architecture through its paces. Put some scale size data in there and make sure it does work. Make sure it handles things the way you expect it to handle them. And the user as well. Maybe your idea of reasonable time to return a result is not quite the same as the customer's reasonable amount of time to wait for result. So you may have to work with that. You may have to make some adjustments. You may have to change requirements. This factor, this principle, I think is an excellent way to address one of the most common issues with software development. And that is at its core communication. Technical people talking to non-technical people and vice versa. 
There's almost always going to be, unless you have a really good translator or liaison working between the two teams, something where they just don't quite see eye to eye. And that's not necessarily a bad thing or a disagreement as much as it is a potential confusion, or maybe an opportunity that gets missed. Because the customer in most cases does not understand software development. They don't understand what you're doing. They don't need to. They shouldn't. And so they don't know what's available. They don't know what a, to some extent, they're not going to know what a hard issue to solve is or an easy thing to solve is going to be. And that impacts how they talk to you. That impacts their own internal estimation of whether it's worthwhile to bring something up or not. And so you want to have that steady communication that's clear and has examples, which would be that working software. Because they can point to it and they can say, here's how I use this or here's how I plan to use this. Here's the user experience or here's the results. And these don't work for what I need. And that may, that could be something that slips through in requirements. Now you could be tough on the requirements gatherer and say, well, if they're good at gathering requirements, they'll get that. And I can't argue that, but that doesn't mean that even if they are good, that they'll always catch it. Things can fall through the cracks no matter how good you are. None of us are perfect. So we use this as a validation, as a sanity check. And this is for us as well. I have been in situations where we have gotten basically to the end, thought we were ready to go and we weren't because something about the deployment process or the final deployment or something within our process was not thought through or tested or however you want to look at it. 
So you get all the way in and you get to the point where you're going to actually push this out to production and there's a failure or there's a bug. And it's usually something that would have been caught sooner or at least would have been seen, experienced sooner if you had tried to do a push to production before. Now again, it's not always possible because there are situations where you're replacing a system or pushing to production is a one-time, one-way process. But I don't think there's that many cases where that absolutely has to be the case, particularly in a modern day where you've got virtual environments and things like that that you can spin up. You should be able to, I will say close enough, model your production environment that you could do a dry run into this test production environment and validate that it works before you do it for real. Again, doesn't always work. There are cases where that absolutely cannot be done. But I would guess, I would also guess that the vast majority of situations you can do that. Some way it may be costly, it may be in money or time or resources, but I think it's valuable. I think it's worthwhile because otherwise you're playing with fire in your production environment or on your production application. It's the same reason we don't make hot fixes in production. Ideally, we do them in dev, we test them, and then we eventually push to production just to try to catch obvious bugs and errors and things like that. So that's the third principle. Deliver working software frequently from a couple of weeks to a couple of months with a preference to the shorter time scale. I know I just spun that simple principle, which I can't even say very well, into a full episode, but I think it's worthwhile. I think it's one of those that it makes sense for us to take a close look at. And that being said, I'll release you back to the wild. Go out there and have yourself a great day, a great week, and we will talk to you next time. 
Thank you for listening to Building Better Developers, the Develpreneur podcast. For more episodes like this one, you can find us on Apple Podcasts, Stitcher, Amazon, and other podcast venues, or visit our site at develpreneur.com. Just a step forward a day is still progress. So let's keep moving forward together. There are two things I want to mention to help you get a little further along in your embracing of the content of Develpreneur. One is the book, The Source Code of Happiness. You can find links to it on our page out on the Develpreneur site. You can also find it on Amazon, search for Rob Broadhead or Source Code of Happiness. You can get it on Kindle. If you're an Amazon Prime member, you can read it free. A lot of good information there. That'll be a lot easier than trying to dig through all of our past blog posts. The other thing is our mastermind slash mentor group. We meet roughly every other week, and this is an opportunity to meet with some other people from a lot of different areas of IT. We have a presentation every time. We talk about some cool tools and features and things that we've come across, things that we've learned, things that you can use to advance your career today. Just shoot us an email at info at develpreneur.com if you would like more information. Now go out there and have yourself a great one.