🎙 Develpreneur Podcast Episode

Audio + transcript

Automating Quality: Greg Lind on AI, Testing, and Continuous Improvement

2026-02-15 • Season 26 • Episode 14 • Automating Quality with AI, Testing, and Continuous Improvement • Podcast

Summary

In this episode, we discuss the importance of automating quality in software development with guest Greg Lind. We explore the role of AI in testing and continuous improvement, and the need for transparency in the testing process.

Detailed Notes

The conversation with Greg Lind highlights the importance of automating quality in software development. He explains that AI can help with testing and continuous improvement, but it's not a replacement for human involvement. The need for transparency in the testing process is also emphasized. The use of Robot Framework for automation is discussed, as well as the benefits of test-driven development. The episode also touches on the role of product managers and the importance of involving QA in the development process.

Highlights

  • The importance of automating quality in software development
  • The role of AI in testing and continuous improvement
  • The need for transparency in the testing process
  • The use of Robot Framework for automation
  • The benefits of test-driven development

Key Takeaways

  • Automating quality in software development is crucial for success.
  • AI can play a significant role in testing and continuous improvement, but it complements human QA rather than replacing it.
  • Transparency in the testing process is essential, from pre-commit hooks through CI/CD to production checks.
  • Robot Framework, combined with tools like Selenium, can automate front-end and back-end testing.
  • Test-driven development is beneficial, even when teams settle for minimal tests written alongside each block of code.

Practical Lessons

  • Implement automation at every stage of the testing process, from pre-commit hooks to CI/CD pipelines.
  • Use AI to assist with testing, stand-up reports, and other repetitive artifacts, not to replace QA.
  • Involve QA in the development process from the start, including product discovery and requirements.
  • Emphasize transparency in the testing process so the team can see what is and is not covered.
  • Use Robot Framework, with Selenium where needed, for front-end and back-end test automation.

Strong Lines

  • "I need to think like a QA person from the very beginning."
  • "Having more eyes and ears on a problem is never a bad thing."
  • "The process should hold you accountable, but not hostage."

Blog Post Angles

  • The role of AI in software development and testing.
  • The importance of automation in the testing process.
  • The benefits of test-driven development.
  • The need for transparency in the testing process.
  • The use of Robot Framework for automation.

Keywords

  • AI
  • testing
  • continuous improvement
  • automation
  • quality

Transcript Text

Welcome to Building Better Developers, the Develpreneur podcast, where we work on getting better step by step, professionally and personally. Let's get started.

Well, hello and welcome back. We are in part two of our interview. We are continuing the season of Building Better Developers. This is the Develpreneur podcast, but it is building better foundations. And funny enough, we thought the foundations were going to be about AI, but we really haven't touched on it, and as far as you know, we might not touch on it this episode. But we are touching on another area, because we're really talking about the foundation of the team itself and how it interacts, with the CEO of Buildly, B-U-I-L-D-L-Y dot com. So check that out, plus there are going to be links in the show notes. We're going to carry this forward. First, I need to introduce myself. My name is Rob Broadhead. I'm one of the founders of Develpreneur, also the founder of RB Consulting, where we are a petite consulting firm. We help you leverage technology and create roadmaps to be more successful. Good thing and bad thing: the good thing is the fall is always interesting. I get into the season where, on a given week, there is not a ton of work, but by the time I even get started in the week, work fills things up very quickly. So I have now overbooked myself essentially every week, because I have all these plans of things that I need to get done, want to get done, working on my business instead of in my business. And then there are rabbit holes everywhere, because sometimes the business steps in the way, and sometimes, working on my business, I find new rabbit holes of, oh, I need to go deal with this, explore that. So it is the good and the bad of it: so many things that are out there, so many opportunities, so many things that I'm chasing, but the bad side is I'm overwhelmed when there are too many things to chase. So I'm going to simplify it down to passing the introduction over to Michael.

Hey everyone. My name is Michael Meloche. I'm one of the co-founders of Develpreneur. I'm also the founder of Envision QA, where we help businesses with their software problems, be it custom software, cookie-cutter software, or just any type of software that you need to help run your business. If you need to customize, or you're having problems with delivery, give us a call. We can help you build custom software or build tools to test it yourself. Check us out at EnvisionQA.com. Good thing, bad thing: the good thing is we're getting past Halloween and heading towards Thanksgiving, so I'm getting the vibe for some turkey and some pumpkin pie and those pumpkin spice lattes at your coffee shops. Bad thing, I'm really into my seasonal allergies now. While the leaves are pretty, I'm waiting for them to fall off the trees so I can get back to normal. And the normal that we're going to get back to is continuing our conversation with Gregory, picking up right where we left off.

Yeah, I see that an awful lot, especially the cherry picking. That's kind of where they go through the backlog and cherry-pick. So I want to pivot just slightly with your processes. You've touched very heavily on the agile approach with the project managers, development teams and so on. I'm interested in how you go about approaching the QA side of things. You've touched on it, but I'm a little more QA-biased.
I like test-driven development, but kind of walk me through how you handle QA through this process. How do you simplify this, and how do you show the communication going on between the development team, the project managers, and where things are?

Yeah, I think, Michael, you and I will have a very interesting conversation at some point about test-driven development. But I agree that QA often gets left until the end. Same with security, right? So unfortunately QA sort of becomes security in a lot of organizations as well. But it is part of that overall security process, making sure you have well-tested software. But it has to start from the developer. I remember the first job I ever had where there was an actual QA team. I moved from a web design organization, essentially, that was doing, you know, brochure websites, into a real software organization. And I was shocked at the fact that there were six developers and two QA team members at the same time. And I remember thinking, how do they have enough to do if they're just testing the software that I've built? There aren't going to be, you know, I document my own code, there aren't going to be any issues. And the number of times they came back to me that first week with, you need to fix this, let me show you what's happening here, I was completely shocked. But that is to say, what I learned at that point, and then started to learn down the road, is I need to think like a QA person from the very beginning. I need to write my own unit tests so that I'm actually testing my software internally. I need to do that in a smart way. I don't need to go overboard with my unit tests, because then I'm maintaining unit tests, not maintaining code. And then I also need to make sure that the QA developer understands what those requirements are, you know, the old agile flip side of the card where you would write the test on the back, and I can't remember a single place I've been where anybody actually did that in the planning poker phase of agile, until the QA person came in and said, I don't see a test on this; what am I testing for? And I think the aspect that I'm still struggling with is at what point an automation framework, let's say Robot Framework, comes in. A lot of the testers that I've been working with in the past have been using things like Robot and Selenium to automate front-end testing as well as back-end testing. So that combination of unit tests, to then regression tests, to then front-end and back-end tests, or in, let's say, a blue-green deployment process, that part of the process, you need to have QA involved from the very beginning, because they need to know what they're building. And that's product management, QA, developers, design; all need to be involved as early as possible in that process so that you can write those out. So what we do in our process is, essentially, the developers are writing their unit tests after they've finished a block of code. They're not writing them before. And I know that doesn't fit with the test-driven development process, but I will come back to that, because I do want to talk to you, Michael, about that. But what I like about that approach is we write minimal tests. In other words, we're not having to maintain the tests. We're just making sure that once that test is written, that functionality works.
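
As a quick illustration of the kind of minimal, object-level unit test described here, the sketch below uses pytest against a small hypothetical class. The OrderStore name and layout are placeholders for illustration, not anything from Buildly's codebase; the idea is one test that proves the object's basic operations work, committed alongside the code, rather than a separate test for every field.

# test_order_store.py -- run with `pytest -q` (pytest is assumed to be installed)
import pytest


class OrderStore:
    """Tiny illustrative in-memory store (a stand-in for real application code)."""

    def __init__(self):
        self._orders = {}
        self._next_id = 1

    def create(self, item):
        order_id = self._next_id
        self._orders[order_id] = {"item": item}
        self._next_id += 1
        return order_id

    def read(self, order_id):
        return self._orders[order_id]

    def update(self, order_id, item):
        self._orders[order_id]["item"] = item

    def delete(self, order_id):
        del self._orders[order_id]


def test_order_lifecycle():
    # Exercise create/read/update/delete in one pass instead of field-level tests.
    store = OrderStore()
    order_id = store.create("coffee")
    assert store.read(order_id)["item"] == "coffee"

    store.update(order_id, "tea")
    assert store.read(order_id)["item"] == "tea"

    store.delete(order_id)
    with pytest.raises(KeyError):
        store.read(order_id)
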
So I think of it more as, if you're an object-oriented programmer, for example, you're testing the object for, essentially, the CRUD operations. You're not doing field-level testing, because then every time you're going back through and rewriting tests. So I like to see that happen early on in the process, so that every time you commit your code, you're committing your unit tests as well. And then QA gets involved at the integration tests and especially at the front end, because, and I have a lot of friends that are amazing front-end developers, but I think front end is maybe the worst, and I know I'm going to get in trouble for this, at writing tests. They hate writing tests, and they almost never do. I've had so many times where I've asked to see tests and they will tell me, oh, it's all automated, I don't have to write a test because this JavaScript framework handles it. No, no, that's not how it works. So we have to go back and review that. And I think Robot Framework has helped me be more accepting of the fact that a front-end developer, even a back-end developer, can be resistant to tests up to a certain point, to where, and I can already see the face of one of my old QA leads sort of frowning at me right now as I say this, but they're covering for, let's call it, the laziness of the software developer on the other side. And then they're going back and making sure that those tests are accounted for in the code with the developer. And I don't necessarily like that process, but I've yet to come up with a better process that keeps developers moving forward without using, and this is something that I'm starting to explore as well, without using AI to write the QA tests. And I don't mean to put that out there as a replacement for QA in any way, right? We know that we need people looking at all of this, and developers need to be in the process. And I think of QA as developers as well. To me, as a QA developer, you're still a software developer; you're just helping to ensure quality. And I think they need to be involved up front; they need to be involved with the product discovery, the requirements. Oftentimes, one of the best QA leads I ever worked with was helping with API models and saying, oh, no, you need to adjust the model to handle this occasion or this use case. And so I think having more eyes and ears on a problem is never a bad thing, especially when you're building out in the early stages. So yeah, involve QA, don't go too far, especially in the early stages, and then make sure that at every step QA is in the loop on all of those pieces. And then I think writing the tests should be up to the QA developer, not the software developer. So I've never liked the flip-the-card approach, because I think developers should be focused on solving the problem, not what problems might jump up from that. They should understand what the connections are. But, and I come from the object-oriented world too, if it's all essentially encapsulated in one set of code, you shouldn't be worried about breaking downstream things, right? Because it should all be encapsulated. But sometimes it does break, and that's when QA comes in and gets involved. And they understand that more than most software developers, I think.

So with your whole process, because in the first half of this you talked about the transparency, getting the product managers and the developers involved.
So with these APIs and tools you've been putting together, how do you show the transparency, or the level of tests being added, as you build these projects out? So you know what's being tested, what's not, what needs to be covered, and also so you know, when you go to production, what you can smoke test when it goes out there and how much manual testing has to be done. How do you manage that transparency through this process of yours?

Yeah, so part of the issue for us is about documentation inside of the repository. So we have a set of tools and standards that we follow; every Docker container essentially has to have all of this already documented in it. So if we know that we're using, let's say, FastAPI, and we want to use pytest to run all of the unit tests there, we need to make sure that there are a certain number of pytest tests, essentially, for each function that we've written. And we need to make sure that those tests have been executed, with hooks, essentially, before you check in to Git. You have to make sure that these tests have already been written. The hook runs them for you; if it doesn't see a test, it pulls the commit back to make sure that you have your test in before you check in. So pre-commit hooks into GitHub are one way to manage that, and that's the way that we do it, but there are a lot of other ways to look at that as well. The other thing that we like, like I said, I'm sort of a fan of Robot Framework, is making sure that that's built into your CI/CD tool as well, so that it runs those tests. And the person responsible for that is always the developer that's checking in the code, but if he doesn't know how to write those tests or doesn't know how to use Robot Framework, which is more often than not an excuse, not really the reality, they just don't want to write the test, then they go to QA and say, can you help me write these tests, or can you get me involved in writing this test? Or, more often than not, could you write these tests for me? And that's really where I think that first level is. And that's where the CI/CD should be running those tests, making sure that that's happened before you push out code, even to your development environment, much less your integration environment or your production environment, for sure. So it should have been tested on your local machine and in CI/CD before it goes to development, and then again before it goes to production, at the very least. And then there's the last step. This is the part I still, to this day, have trouble getting developers, and myself even, I forget to do this more often than not, to do: test the code that you just deployed in production to make sure that it works. And so following up, not just with QA at each step, but also then getting QA involved in production testing, and making sure that you're aware of where they're testing and what they're testing, and that when they follow up with an issue, it shouldn't be something that could have been discovered in a development environment or an integration environment. Oftentimes it is, but that doesn't change the fact that you should be following up and making sure that those tests happened, and that somebody actually monitored that while you're doing that yourself in production.

So you mainly use some of the back-end tools and the hooks that you're talking about.
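
A rough sketch of the pre-commit idea mentioned above, written as a stand-alone Python script that could be wired in as a Git pre-commit hook: it refuses the commit when staged application code has no accompanying test changes, then runs the suite. The app/ and tests/ paths are assumptions for illustration; the actual hooks and standards used at Buildly aren't shown here.

#!/usr/bin/env python3
# check_tests_staged.py -- illustrative Git pre-commit hook (save as .git/hooks/pre-commit).
import subprocess
import sys


def staged_files():
    # Ask Git which files are staged for this commit.
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]


def main():
    files = staged_files()
    code_changed = any(f.startswith("app/") and f.endswith(".py") for f in files)
    tests_changed = any(f.startswith("tests/") or "test_" in f for f in files)

    if code_changed and not tests_changed:
        print("No test changes staged alongside application code.")
        print("Add or update a test before checking in (or use --no-verify for a true exception).")
        return 1

    # Run the unit tests locally before the commit is allowed through.
    return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode


if __name__ == "__main__":
    sys.exit(main())
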
You don't really have that in this dashboard kind of integration you have, so that the PMs can see it and say, oh, you need to think about this for testing as you're building out the tickets and that?

It does, in the sense that the GitHub repository, so we have an AI tool that looks through the history of the commits for the day and then creates a summary. So rather than having a developer write their stand-up report, which, again, can feel like a huge waste of time sometimes, but is very useful for the rest of the team, we have the AI actually write that for you. It goes back through, looks at your commit messages, and writes it from that. If it doesn't see anything about testing, and it doesn't see a test in there, then it flags it, and it says no tests were created for this. And that is usually where a QA person can then get involved and say, hey... And to me, the first job of a QA person is, again, as a software developer, but also as somewhat of a software manager, a side manager, if you will, reviewing code, making sure that the standards are met, that the lint testing all went through, that all of those lints are in place, that the linters are not being overridden in the CI/CD tool or anything like that, which is something I've done myself just because I needed to get something pushed out. So yeah, reviewing that process, as well as being involved in the code review process and the pull request process. That's really, to me, the thing that we're still missing that I want to add: bringing the pull request process into our tool, so that it's not just seeing the notification in GitHub that this pull request has been assigned to you. That gets reported into your dashboard, so that you then know, oh, I've got a pull request I need to review. You run the pull request, and any comments that you bring into that get assigned directly to that developer or to that team, so that you're then following up with it and managing it, but also seeing the comments that were made in the pull request and being able to go through them. So that, again, you can use an AI to review that process and say, you know, a number of pull requests are failing because the lint tests weren't run beforehand, or they weren't running their unit tests before it was run, or they overrode this because they really wanted to push something through. So being able to follow up, and I'm sure there are all kinds of other areas that you can review and look at just from looking at pull requests and the comments that happen. And, and I always get stuck on this, I try not to make it about blame. It's not who wrote this bad test or who didn't follow the test. It's a learning process, right? And going back through and seeing those pull requests and what was constant, you know, maybe a lead developer was always saying, do this, do this, do this, and maybe he was right and we weren't doing it, or maybe that was an unnecessary step that we could remove at some point and then help to, again, speed up our velocity a little bit.

So before I hand this back to you, Rob, just one follow-up to that. One thing to think about and consider is that a lot of your languages and tools now have test profilers that will actually scan the code and tell you what code is covered by tests and what isn't, which is something you can really utilize with AI today.
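
A minimal sketch of that coverage-profiler idea, using coverage.py's Python API; in practice many teams just run pytest with the pytest-cov plugin (pytest --cov=app --cov-report=term-missing). The "app" package name and tests/ directory are placeholders, and the resulting report is the kind of output that could be fed into an AI summary or flagged in an automated stand-up report.

# coverage_report.py -- report which lines the test suite actually exercises.
# Requires: pip install coverage pytest
import coverage
import pytest

cov = coverage.Coverage(source=["app"])  # "app" is a placeholder package name
cov.start()

# Run the test suite in-process while coverage is recording.
pytest.main(["-q", "tests/"])

cov.stop()
cov.save()

# Per-file report; show_missing lists the line numbers with no covering test.
cov.report(show_missing=True)
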
So just, if you're not looking at something like that, that's something you might want to consider with your processes.

That's a good point, though. I think that, again, as a developer, sometimes the QA process kind of goes to the back of your mind. And even when we're looking at the product backlog, we're still going back through and it's always, oh, yeah, what do we do about testing here? Or what do we do about QA here? And so, yeah, it's a good point that where we can automate things, we absolutely should. And then where we need human eyes on it is essentially, you know, following up with each step of that automation. And I want to go back to a little bit of that idea, because I don't know how many times, I'm in the same boat, that we've overridden a linter tool or, you know, just pushed something through, because, and maybe that's just me being a little lazy and being a developer, but sometimes the process gets in the way. And it's like, look, we need to get this done; the process does not push the ball forward. And it is a little bit of the agile thing where the process should hold you accountable, but not hostage, basically. The challenge is to not just knee-jerk do that. It's one thing if you do it once and, okay, I'm going to flag this and it's an exception and we're going to move forward, versus it becoming a norm. And then the next thing you know, because those have bit me, and I've seen it in other places too, this check was here, that check was there, and it just got ignored, ignored, ignored. And that was fine until it wasn't. It goes back to maybe communication of, this is why this is here. So with technical debt, everybody's favorite thing to just let keep piling up, it's the, okay, I pushed it through; however, we need to go back and fix that, address that, clean that up now rather than later. And I know that's a little bit of me getting on my soapbox, but it's also because we're a little bit out of time. Obviously we could do this for days. There's a lot of great conversation and a lot of directions we could go. And I know listeners are thinking, this guy knows his stuff, this guy's got a lot of cool stuff, I might want to go read that book. So what is the best way for people to get ahold of you, reach out, and learn a little bit more about you and the process in your book?

Yeah, for sure. So Buildly is usually the best place to go. Everything that I do is sort of linked through there. And beyond that, I have a page for my book as well as a blog that I do; it's called radicaltherapy.dev, and almost everything goes through there. And LinkedIn, of course, is where everything gets shared as well. So those are the best ways to get in touch and to follow up with these things. But yeah, I especially appreciate anyone that goes through. We have a pretty simple process to sign up for a 30-day free trial, and you can sort of see our book and everything else that we've done; it's all in our process and our tools. So you can see how we work and how we do things. And we're always open to feedback. That's the point of agile, right? That's the point of software development: you're not building things in a vacuum for yourself. You're building things for other people, right?
And so the more feedback we can get from users, the better, especially seasoned developers, but also junior developers. I think the most undervalued part of your team is the new person that's coming in. And I love to get new developers involved, junior developers, people that are just starting to look at it and say, oh, this seems upside down to me, why don't you do it this way? Right? And so, to me, that's the best way: if you've got fresh eyes and want to look at some of the problems that we're working on, I'd love to hear them.

Yeah, a lot of times those new people that come in are the ones that solve my favorite problem, the, you know, that's-the-way-we've-always-done-it kind of reply. And sometimes, I know it's frustrating, like all of us were new at some point, when you ask and you're told, okay, well, that doesn't really help me understand this. And it's even worse when maybe you realize, oh, yeah, actually, they shouldn't have done it that way. And so I've tried to keep that pain from when it was me, now that I'm on the other side of it, and keep that open mind of, okay, maybe we do need to revisit this, review it, because stuff changes, things change, things evolve. AI, for example, which we did almost no talking about in this conversation other than mentions of it, is obviously changing a lot of things that are out there. And so whatever we did, even now, the Agile Manifesto is, I don't know, now about 25-ish years old, something like that. And I remember when it first came out, it was the latest thing, and, you know, in the same line, to me, it was all in with, like, patterns and agile. These are the things, this is the future of software development. And all that stuff has grown and evolved, and everything that you did 20 years ago, probably 15 years ago, was already, like, passé, and there are better ways to do it. And then 10, and then five, and what we're doing today, I'm sure people five years from now will be like, I can't believe people are still using this archaic stuff. So, you know, it is always evolve or die, I guess, as a part of this. Oh, thank you so much for your time. I appreciate you, and the crowd, I'm sure there is a standing ovation; I'm being drowned out by the applause in the background here. This was a great conversation. This is the kind of stuff we'd love to have more of, where we can go a little deeper and talk about some things, and actually a lot of these areas I didn't even think we were going to get into, but I think they're very critical for moving forward. It really does go back to, okay, you say you're doing agile, but what really is agile? And if what you're doing is broken, and you've given a perfect example of that with Buildly, if what you're doing is broken, then embrace your inner laziness and your desire to get through these things, and automate the things that stink, and make some changes. Go out there and suggest some differences and, you know, start to evolve. You don't have to be a slave to what you think the process is. And particularly agile, it literally tells you, don't be a slave to that; adjust it to your team and your project. And too often, what I have seen now with Scrum and sprints and some of that kind of stuff is that it's too much been equated to agile, and that's how you have to do it. And there are too many examples.
I know of projects that really don't work with that, as you have obviously found out as well.

Absolutely. Yeah. And I think the laziness gene that's in all of us as developers, the AI tools, if we ever get another chance to talk about AI tools and how to use them to automate away those things, it's ridiculously easy now to build something to, say, make your own backlog sort of approach if you need to, or to get those things out of the way. The artifacts that get generated, even though agile is an anti-artifact sort of approach to software development, they still get generated and they still have to be updated. Usually it's a product manager that needs a document here or a CEO that needs something there.

Yeah. So using AI to manage that is where it shines: simple, boring, repetitive tasks. That's where it's good.

Yeah. I have found that that is an area that, very early on with AI when I started using it, I embraced. It's when you really just need the artifact, and it's really just, okay, I've got to go do the grunt work, or whatever, of generating the artifact. Sometimes it's a simple document. Sometimes it is something that, you know, there are other tools for, like having an API document, the things that Swagger and some of those tools do so well. And yeah, they require a little bit of forethought to make sure we do it in a way that, and I think that's where it's going from here, doing it in a way that AI can then easily consume and handle the stuff that we really don't want slowing us down. Like you said, you want the developers writing the code and the functionality. You don't want them stuck trying to figure out how to create tests and then fix the tests for what they're coding. It's about finding a way to get the things done, and then you use AI to sort of walk behind us and clean up the mess a little bit and say, okay, well, fine, I didn't get that done, go do this for me.

Yeah, exactly. I think that's the primary use case now, and will be for a long time, for any AI tools, development tools: it's going to be about automating those boring things away, and education, just teaching you how to do better and then following up, right? And again, I don't mean to put your job, Michael, in the line of AI, but I do think reviewing tests and writing tests, and then making sure that that code can be cleaned up, that's where AI and a good QA developer could write something really well with an AI tool and do it even faster. And I think, to me, it's about building up your velocity and creating a team that works better together. That's where the AI tools can really help.

A hundred percent. Well, that will wrap this one up. Some nice little bonus material there, as well as a couple of little suggestions at the end. We turned this around pretty quick; I think this will probably show up as early as Tuesday or Thursday of next week. We drop on Tuesdays and Thursdays. If not, it'll be the week after. I will send links out when we do, so feel free to share them wherever you want, edit them however you want. We will post them. We do a blog article on our site. It's out on YouTube.
It's out on all the different places that you can get podcasts, things like that. So feel free to share as much as you like. I have really enjoyed this, and like I said, I may reach out again sometime in the future, a few months from now, and say, all right, let's have some other conversations, because we left a lot on the table that we could have gone into. And I have a feeling that we will all have different opinions on those in three, six, and twelve months.

Yeah, absolutely. For sure.

All right. I will let you go, and thanks a lot. Thank you for your time. Appreciate it. And we'll be in touch again.

Yeah, no, thanks, guys. It was fun. Hopefully we will get a chance to hit those things later on.

Definitely. We'll be, Michael's working on his tester-driven, that's test-driven, as we speak.

Oh, good. Yeah. I need some defensive test-driven development. All right. Thanks, guys. Talk to you later.

Talk to you later. Take care.

This was sponsored by RB Consulting, your partner in building smarter, scalable tech from startups to established teams. RB Consulting helps you turn tech chaos into clarity with proven roadmaps and hands-on expertise. Visit rb-sns.com to start your next step forward. Also sponsored by Envision QA. They help businesses take control of their software by focusing on what matters most: quality, reliability, and support you can count on. Find out more at EnvisionQA.com.

Thanks for tuning in to the Develpreneur Podcast, where we're all about building better developers and better careers. I'd love to hear your thoughts and your feedback, so drop a note to info at Develpreneur.com. Be sure to subscribe on Apple Podcasts, YouTube, or wherever you listen. And remember, a little bit of effort every day adds up to great success. Keep learning, keep growing, and we'll see you in the next episode.