🎙 Develpreneur Podcast Episode


Handling Software Delivery Panic: Strategies for Developers


2024-05-25 • Season 21 • Episode 25 • Panic in software delivery • Podcast

Summary

In this episode, Rob and Michael discuss strategies for managing panic in software delivery. They share personal anecdotes and offer practical advice on how to handle unexpected issues.

Detailed Notes

In this episode, Rob and Michael discuss the importance of testing and communication in software delivery. They share personal anecdotes about times when they experienced panic and how they handled it. They also offer practical advice on how to identify and fix bugs, prioritize tasks, and communicate with stakeholders.

Highlights

  • Don't panic, take a step back, and assess the situation
  • Test your software thoroughly before release
  • Communicate with your team and stakeholders to manage expectations
  • Use testing frameworks and tools to identify and fix bugs
  • Prioritize and focus on solving the root cause of the issue

Practical Lessons

  • Use logging and exception handling to facilitate debugging
  • Prioritize and focus on solving the root cause of the issue
  • Communicate with your team and stakeholders to manage expectations
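The first lesson, logging and exception handling as a debugging aid, could look something like this minimal Python sketch (the `parse_quantity` function and its rules are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)

def parse_quantity(raw: str) -> int:
    """Parse a user-supplied quantity, logging enough context to debug failures."""
    try:
        qty = int(raw)
    except ValueError:
        # Log the bad input itself, not just "it failed" -- that is what
        # lets you reproduce the user's exact steps later.
        logger.exception("Could not parse quantity %r", raw)
        raise
    if qty <= 0:
        logger.warning("Rejected non-positive quantity %d", qty)
        raise ValueError(f"quantity must be positive, got {qty}")
    logger.info("Parsed quantity %d", qty)
    return qty
```

The point is that when a tester reports "it broke," the log already contains the offending value, so you are not digging through code to find where things went wrong.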

Blog Post Angles

  • The importance of testing in software delivery
  • Effective communication strategies for managing panic
  • Practical advice for handling unexpected issues

Keywords

  • software delivery
  • panic
  • testing
  • communication
  • agile development
Transcript Text
Welcome to Building Better Developers, the Develpreneur podcast, where we work on getting better step by step, professionally and personally. Let's get started. Hello and welcome back. We are diving into yet another episode of our podcast. It is Building Better Developers. It was originally the Develpreneur podcast. It became Building Better Developers because the lady in the box, also known as Alexa, does not recognize "Develpreneur" very well. If you say, hey, play the next episode of Develpreneur, she struggles. However, if you say play the next episode of Building Better Developers, she will pop right up. If I were to say that right now, she might even be so sensitive that she would fire that off wherever you are in the room. So, just so you know. That being said, my name is Rob Broadhead. I am a founder of Develpreneur and also a founder of RB Consulting, and we've just been cruising through however many episodes we're up to now, seven hundred plus. On the other side of the digital divide is my friend Michael. I'll let you introduce yourself. Hey, everyone. My name is Michael Meloche. I'm another co-founder of Develpreneur, and I'm also a founder of EnvisionQA. I don't want to leave you hanging too long: in this episode, we're going to talk about panic. This is probably not an uncommon thing, unfortunately, but it is very workable. It's something you can handle so that the panic does not become a horrible thing. This panic is what happens when you are delivering software. Obviously, if it's something we've tested 100 percent and it should be bulletproof, and somebody breaks it, panic ensues. It's probably even more panic in that case, because it means you didn't QA right, or they're doing something completely different that is new, and it should not be new at this point. We'll talk about that a little bit. But if it's a beta or something like that, there are certain people who seem born for this.
Are there some people who are just gifted at breaking stuff? You can test your software as thoroughly as you think you can, you put it out there, and two minutes later they email you back and say, hey, I can't make this work, this is just broken. I've run across several people like this. Usually they end up as QA people, because that's their gift; that's what they do best. But sometimes it's going to be your manager, or it could be a CEO, it could be whoever. And it's usually somebody you don't want it to be, because you just gave them something that should work, and they responded right away that it doesn't. The panic usually comes when you send it out, they're testing it like a beta product, and it is a non-starter for them. They have hit a roadblock; it is something where they cannot move forward. I mention this because I had exactly this just the other day with a customer. This is not somebody who lacks technical knowledge; I mean, they're not a developer, but they understand the software process and how these things go. So I said, hey, you can have an alpha version of it, start banging around a little bit. Right away he gets to a showstopper. I'm on a call with him and he said, yeah, I couldn't get this to work. And so there is a little bit of that panic. He was saying, this isn't going to work at all; I can't imagine that you're anywhere close to the end of this thing being done; none of it works, none of it looks right, all this kind of stuff. And even he, during this, was saying, you know, I know we haven't really focused on the design yet, and I know there are bugs and things like that.
But because he's blocked, he's panicked, because in his mind it goes from we're 90 or 95 percent there to we're not anywhere close. Those are the kinds of things where panic sets in. And especially if you're a consultant working with a customer, or if you're an employee and your manager is hitting this situation, panic can ensue on your end too, because now you're thinking, oh, crap, I have to fix this right away. Now, two things. One, don't panic. Take a step back. Take a deep breath. The first thing you want to do is get the exact situation they went through to break it, because sometimes it's as simple as, oh, I didn't get your user ID set up with the right permission, or, oh, you can't set that value because this other thing is going on, or, oh, yes, you did that, but that should never happen, because we discussed it, it's in the requirements, it doesn't happen that way; we only allow it because you're an admin, or something like that. And sometimes it is a bug. Sometimes it's, oh, yeah, that's right, you don't see that, or that value is not there. It's a matter of: in my testing and my development mindset, I missed it. I didn't think about somebody entering that value at that point, or I didn't realize that you could enter that value. Take that into advisement. Say, okay, cool, take a note, I will fix that, I will handle that. And then do so, because they found it. You do want to prioritize it a little and make sure it gets addressed, particularly before you cut them loose on the application again. If it's something you can fix right there, maybe, but be careful about rabbit holes, because I have been in more than a few, especially if you're on a demo and it breaks right there because somebody says, hey, can you try this?
And you're like, oh, sure, I can try that, because this works great. And then it blows up. One: don't ever say, yes, I can try that, because if you're not sticking to your script in a demo, you're going to have problems. Trust me. I don't care how good you think the software is; there is a magic about people saying, hey, can you enter this value, can you do this real quick in a demo, and it breaks everything. And if you try to fix it on the spot, inevitably you break things further, so now you can't even get back to the working state you had right before the demo. So don't panic. You know what level of trustworthiness your software is at. If it's rough, fine; it's like, yeah, I know it's full of bugs, we have one happy path, and if you vary from it, it will break. Just make sure you're clear about that. But if it's something where, hey, we're mostly there but we still have bugs, understand that this person just did you a favor: thank you, you found a bug, I'll knock that out, put it on my list, and move on. If they're panicking, especially if they go from we think this is almost there to this is not even close and we're going to bail out on the project, which would be the worst possible outcome, like, okay, we're done, we're not even going to talk to you anymore: take a deep breath. Say, hey, let me show you where this does work. Let me walk you through it instead of you walking through it, because obviously they found some broken stuff. Say, hey, let me just alleviate some fears; I'm going to walk you through this and show you that it does work. That will help alleviate those fears and keep people from panicking and overreacting. That's it in a nutshell.
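The advice above, find out the exact steps that broke it and pin them down so they cannot come back, can be sketched as a pytest-style regression test. Everything here (`cart_total` and the bug report it references) is an invented example, not from the episode:

```python
# Hypothetical bug report: "entering a quantity of 0 blanks out the cart total."
# Step one is reproducing the user's exact input; step two is keeping that
# reproduction forever as a regression test.

def cart_total(items):
    """Sum price * quantity over (price, quantity) pairs; zero quantity is valid."""
    return sum(price * qty for price, qty in items)

def test_zero_quantity_reported_by_customer():
    # The exact reproduction from the bug report, kept in the suite forever.
    assert cart_total([(19.99, 1), (5.00, 0)]) == 19.99

def test_happy_path_still_works():
    assert cart_total([(2.50, 2)]) == 5.00
```

Once the reproduction lives in the test suite, you can show the customer the fix and the proof that the same steps now pass.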
I want to throw it over to Michael, because he's been snickering along the way, and see what your thoughts are in these kinds of situations, because I also know you have experienced more than a couple of them. Yeah, thank you, Rob. Plenty of situations like that. One particular thing that came to mind, and we briefly touched on it before we kicked off the podcast, was that we actually worked together at a place where no matter what we did, no matter how we tested our code, the moment we said the code was ready, our boss would go out and within minutes: oh, I found a bug. And it's like, what did you do? So there was always a constant anxiety about hitting that button to say the code is done, it's committed, and things are good. That actually led me down an interesting path in my career. Prior to that, in college, I went through a lot of classes that pushed test-driven development. Early on, a lot of companies and a lot of developers don't follow that model, because a lot of people think, well, if I write my code and the code performs, there's my test; the code works. And in a way, sure, but we still need to write our unit tests; we still need some type of QA around our applications. However, you can go in with the mindset of approaching a requirement or an application with testing first. In fact, I did that this week. I had a ticket come across that was fairly well documented with the requirements, but there were a few missing pieces. So I had to go back, ask some more questions, and refine the requirements first. But then I literally started with a main method and started stepping through: okay, what is the final output? What is this supposed to generate? So I created my POJO, I'm sorry, the DTO, to generate the output from the API call. I had a little hard-coded JSON script that fed into this call and produced a DTO. And then I slowly walked through all the steps.
So by the time I was done, I literally had 15 methods; each method was a specific requirement point. It did a specific task that was easy to test. And when I was done, testing was trivial. All I had to do was write 15 unit tests to test my methods, done, and then write a couple of integration tests to test the software. Now, on the situation Rob was talking about: I run into a lot of situations, especially with new clients, or even prospective clients, where it's, hey, I have this problem, I need this application, or I'm looking to have this done. So what you do is you mock something up; you throw something together on rubber bands, shoestrings, and duct tape just to present what it will do. The problem is, especially in the enterprise, if you show it to the wrong person, they think, oh, this is great, it's going to production tomorrow. And you're like, oh, no, no. And then you end up in these weird coding situations where you're struggling 200, 400 hours trying to get something out the door that the bigwig thinks was ready yesterday, and it's not. So it's kind of a flip from going from a high level of hey, we're ready to go to no confidence in the project. In this situation, even though it isn't fully functional, you demoed something that looked functional enough, and they're already going to production. So now you're struggling to fill those holes and gaps to avoid the downfall that could be coming. And the testing is a good point there, because that is one of the things you can do to make lemonade out of the lemons of a situation like this: talk to them about how they generated the issue, how it came about. And the other thing is, if you're further into your software development life cycle, this should be a fairly mature product.
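The test-first decomposition Michael walks through above, small methods each mapping to one requirement point, fed from a hard-coded JSON fixture until the real integration exists, might look like this in Python (his actual ticket was Java with a DTO; every name and the fixture below are invented):

```python
import json

# Each small function maps to one requirement point, so each one
# gets its own trivially small unit test.

def parse_payload(raw: str) -> dict:
    """Requirement 1: accept the raw JSON the upstream call hands us."""
    return json.loads(raw)

def extract_total(payload: dict) -> float:
    """Requirement 2: total is the sum of the line-item amounts."""
    return sum(item["amount"] for item in payload["items"])

def build_response(total: float) -> dict:
    """Requirement 3: produce the output DTO for the API call."""
    return {"status": "ok", "total": round(total, 2)}

# A hard-coded JSON fixture stands in for the real upstream call,
# which is exactly what makes the pipeline testable from day one.
FIXTURE = '{"items": [{"amount": 19.99}, {"amount": 5.01}]}'
```

When the real API arrives, only the fixture is swapped out; the per-requirement functions and their unit tests stay as they are.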
And I've run into this as well. I've got another customer with a fairly mature product, but it's amazing: he's in the QA world, he's very good at breaking stuff, and he's got a couple of employees who are very good at it too. So what you do is you find out what they did. These days you can maybe have them record it or something like that; figure out those steps and then build them into your tests. So, for example, with the application I just talked about, one of the things we have is a basically Selenium-based robot that runs through the whole site, does a whole bunch of actions, checks different things, looks at values, and makes sure things are working right. It's like a smoke test, maybe a little deeper than that. When we deploy something, we kick this thing off, hit all these different pages, look at all the results, and make sure everything is still working. And over time, as they've said, oh, hey, we did this and this broke, we just add that in. So now the test will go look at that specific thing and say, okay, I'm going to do this action, and it had better come back correctly. Something I learned back in college about these special cases: we had a guy, and I will never forget his last name. His last name was Eigenschenk, and the way it was spelled actually broke everybody's sort. We had an assignment where you sort names, just strings into alphabetical order, and the sort we implemented, and I can't remember which one it was, assumed a certain order of letters. We didn't realize that until we actually hit it, and it was something like an I before E instead of E before I.
I can't remember exactly what it was, but it was something at that level, where only certain words would break it, and his name happened to be one of those words. And the test was that you put your name into it. This guy was the smartest guy in the class, and he could not get the thing to work. Finally he dragged the rest of us in, and we all figured it out together. But everybody else's sort worked, because their names didn't have that little magic problem. And that's so often what happens with an application: the user takes one step differently, or enters a different value, and that combination of things breaks it. It's a matter of finding out what that step is and reproducing it; then you should be able to figure out why their specific series of steps caused a break when yours did not. And so, again, it can be uncomfortable in the moment, because people are saying this doesn't work at all, and you're stuck trying to figure out how to fix it. But if you step back and say, okay, it's a bug, software has those, let's just go with that, then you can go figure out what happened, fix it, and show them the next time around. And it helps build confidence to say, hey, this is what you did, we didn't catch it, and maybe show three other flavors of that process or approach that would also have broken it. It's like, hey, look, we can now catch this and this and this, and that builds the confidence back up. Sometimes that's the best thing, because some people expect you to have 100 percent perfect software out of the gate. That's not going to happen. But most people realize there are going to be some bugs, some issues, some changes. If you can respond quickly and in a logical manner and say, hey, here's what was going on, here's what we changed.
This is how it's been improved. Then a lot of times that is going to be the big win coming out of it. Any thoughts on that before we wrap this one up? Yeah, one additional thing, especially with what you experienced: when you're working on code or tickets or requirements, there are some things to think about before you even pick up the ticket or start the work. One, do you have enough requirements? Do you have enough definition to work the problem? For instance, if it's user input, do you have the min and max? Is it alphanumeric? Is it alpha only? Does it take special characters? These are common-sense questions you want to define before you start writing, because they will impact the user experience. Another thing to think about, beyond just the user side, is the user acceptance criteria: if I complete this ticket, how do I test it? What is acceptable for completion? And then lastly, make sure you add enough debugging or comments in your code so that if an exception or a problem does occur, it gets logged properly, and you're not having to dig through the code to figure out where the heck something went wrong. That is actually a situation I've hit a lot: the customer has a problem, I can't track it down, and so we add some logging around it. Maybe it's a process we thought wasn't going to fail, or code where we said, oh, that's going to work every time. Oh, wait, no, it doesn't, because they just broke something in there. So sometimes that's where you add exception handling, logging, things like that. Anything you can use to help you with debugging is always going to be helpful, because down the road you can say, okay, we know this section of code is at least logging what it needs to for us to debug it without repeating the action ourselves, and without being on the server.
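Michael's pre-ticket questions about user input (min, max, alphanumeric, special characters) amount to pinning the rules down in one place before coding. A minimal sketch, assuming a hypothetical username field whose limits are invented examples:

```python
import re

# Hypothetical rule agreed on before any code is written:
# 3 to 20 characters, letters, digits, and underscore only.
# (The field name and limits are invented for illustration.)
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> str:
    """Reject anything outside the agreed rule with a message that names the rule."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError(
            f"username must be 3-20 alphanumeric/underscore characters, got {value!r}"
        )
    return value
```

Writing the rule down like this also answers the acceptance-criteria question: the ticket is done when inputs inside the rule pass and inputs outside it are rejected with a clear error.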
And of course, depending on your application, sometimes that's easier said than done. Sometimes you can get more information out of your server, and sometimes you can't; take that on a case-by-case basis. That being said, we're going to take this minute by minute on a case-by-case basis, and the case right now is to wrap this sucker up. We are going to come back; we're not done yet with our season. We're going to continue to talk through some of the challenges and experiences that we have each week and some of the technology we're running into. If you have any questions, comments, suggestions, or recommendations, shoot us an email at info at develpreneur.com. Check us out at develpreneur.com; we've got a comment page and a contact page. You can check us out on our YouTube channel. You can find us at Develpreneur on Twitter slash X, you name it; we're out there somewhere, and if not, let us know and we'll get out there so you can reach us. That being said, go out there and have yourself a great day, a great week, and we will talk to you next time. Thank you for listening to Building Better Developers, the Develpreneur podcast. You can subscribe on Apple Podcasts, Stitcher, Amazon, anywhere you can find podcasts; we are there. And remember, just a little bit of effort every day adds up into great momentum and great success.