🎙 Develpreneur Podcast Episode

Audio + transcript

Building Better Developers - S27E27 1.mp3

In this episode of Building Better Developers, we continue our conversation on getting unstuck and moving forward. We discuss the impact of AI on development and business, and how it can help us get from loose requirements to tight requirements. Hosts Rob Broadhead and Michael Meloche, together with their guest Thanos, share insights on how to use AI to improve the development process and get better results. We also talk about the importance of changing our mindset and getting from natural language to tech language.

2026-04-25 • Season 27 • Episode 27 • AI and its impact on development and business • Podcast

Detailed Notes

The episode opens with the hosts introducing themselves and the topic: the impact of AI on development and business. Rob and Michael share how AI has helped them in their own work before rejoining the conversation with guest Thanos. The discussion covers the need to change our mindset, moving from natural language to tech language, and how AI can help turn loose requirements into tight requirements. Thanos shares how he uses AI to improve the development process, from test-first workflows to rapid prototyping, and stresses the importance of understanding the problem being solved and defining requirements clearly. The conversation also covers the benefits, challenges, and limitations of AI, including the governance and security roadblocks that slow adoption. The episode ends with the hosts thanking their guest and encouraging listeners to reach out if they want to learn more.

Highlights

  • AI doesn't fix things, it just amplifies them.
  • The problem is not the tools, but the way we use them.
  • We need to change our mindset and get from natural language to tech language.
  • AI can help us get from loose requirements to tight requirements.
  • The future of development is a blend of business and technical skills.

Key Takeaways

  • AI amplifies our efforts: good processes and inputs get better results, bad ones get worse.
  • Moving from natural language to precise tech language requires a deliberate change in mindset.
  • Iterating from loose requirements to tight requirements is where AI speeds development up the most.
  • Understanding the problem we're trying to solve and defining requirements clearly is crucial.
  • AI is not a replacement for human skills, but a complement to them.

Practical Lessons

  • Use AI to improve your development process and get better results.
  • Change your mindset and get from natural language to tech language.
  • Define your requirements clearly and get from loose to tight requirements.
  • Use AI to automate repetitive tasks and free up your time for more important things.

Strong Lines

  • AI doesn't fix things, it just amplifies them.
  • The problem is not the tools, but the way we use them.
  • We need to change our mindset and get from natural language to tech language.

Blog Post Angles

  • The impact of AI on development and business: A conversation with experts.
  • How AI can help us get from loose requirements to tight requirements.
  • The importance of changing our mindset and getting from natural language to tech language.
  • The benefits and challenges of using AI in development and business.
  • How to use AI to improve our development process and get better results.

Keywords

  • AI
  • development
  • business
  • requirements
  • process
  • productivity
  • results

Transcript Text

Welcome to Building Better Developers, the developer podcast where we work on getting better step by step, professionally and personally. Let's get started. Well, hello and welcome back. We are continuing our season where we are building better momentum. Yeah, we haven't even gotten to that yet. We're late into the season, but we're adding a few little title changes as we go. We're really talking about getting unstuck and moving forward. This started at the beginning of the year and now we're into a new quarter. We still want to keep this forward momentum. This is an episode, along with the prior one, which you can pause and go back to listen to if you haven't, where we're really talking about that forward momentum. We're talking about doing the things that are going to allow you to be positioned to be a better developer in the year ahead. If you don't, you're probably going to be in a bad situation. Before we get into that really exciting conversation of part two, we should probably introduce ourselves. This is the Develpreneur podcast, Building Better Developers. I am Rob Broadhead, one of the founders of Develpreneur. I'm also the founder of RB Consulting, where we help you use technology. Before you dive into whatever your project is, we actually put the brakes on a little bit and say, let's do a reality check and figure out what you have, and let's make sure that you define your problem properly so that you do not have it amplified by AI, which is something we'll talk about later in this episode. Good thing and bad thing. Good thing is, I'm just going to go right to the tech side of it this time. Good thing is this is a very fun period from a technology point of view. The ability for me to just knock out project after project after project that's been on my list, and some of the things where I've said, you know, if I just had this, it would get my work done faster and free me up to do more big thinking, it has done so.
It is amazing how many things have not only gotten accomplished, but how that has opened me up to think about how I want to accomplish them even better, and some other tools and automations where it's just like, oh, okay, now I'm up to the next level. So I'm going to need this information. I'm going to need that information. I'm going to need to process this and process that. That's great. This is so fun. This is one of those where you just love being able to allow the creative energies to explode and go create stuff. The downside is it is hard to keep up with all of the changes. There's so many things. I'm using the tools so much right now that it is sometimes hard to keep up with how the tools are changing. I'm working in the tool instead of on the tool, and things like that. Grab something and go, and we're trying to figure it out as we move forward. It is definitely fun times, but they're only going to get better as Michael introduces himself. Hey, everyone. My name is Michael Meloche. I'm one of the co-founders of Building Better Developers, also known as Develpreneur. I'm also the founder of EnvisionQA, where we create reliable, tailored software that helps you work smarter, scale faster, and stay in control of your business. Good thing, bad thing. Very similar to Rob, I've been playing with AI, trying to keep up with the tools. I actually used it the other day to solve a problem I've been fighting with for five weeks, dealing with a legacy application that they wanted to clone and repurpose for something else. There was so much tribal knowledge, or missing knowledge, within the company. No one knew what this did or how this worked. Ultimately, AI was able to help me decipher this complex gobbledygook of code and get it working and repurposed. Downside, like Rob, there's so many things I want to do with it. Just with this conversation, part one gave us so many ideas. It's like, okay, I want to go in this direction.
I want to do this. There's just so many things I want to start playing with and start trying. There's just not enough hours in the day. Getting there, and hopefully AI will help streamline that process for me faster and get me to where my forward momentum is going at a rate where I'm looking for the next idea instead of dealing with my backlog. That is definitely a good place to go, being up on top of the wave as opposed to being about to be crushed by the wave as it collapses over us, which too many of us have lived through too many times. Getting to Thanos, who we'll be talking to here momentarily, we're going to dive right back into the conversation. This has been a really fun one. Both Michael and I enjoyed it. Thanos obviously enjoyed it. A lot of energy in this discussion and a lot of just modern reality. This is one of those that I have a feeling will probably not last over time. It is not going to be an evergreen episode. We are very much talking about what is going on in the here and now and some of the challenges that you guys are probably seeing out there as well. Without any further ado, let's return to our conversation with Thanos. Yeah, I've kind of seen that too. In fact, I've made more of a shift as I adopt AI with assisting in code development: I actually have it start out with the requirements, define the tests, and then have it write the code to the tests. And some of the other things I've gotten better at, or it's gotten better at, is, okay, give me the happy path for these requirements, then give me the negative testing for these requirements. And that's where you start to see either the breakdown of the testing, or it really helps train the AI in how you want the tests built. After probably a couple of weeks of doing that, it's actually getting better now at defining these tests. But I love what you're saying, though, about the end-to-end tests.
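The requirements-first flow Michael describes (requirements, then happy-path and negative tests, then code) can be sketched in miniature. Everything below is hypothetical: the `transfer` requirement and its tests are invented for illustration, not taken from any project in the episode.

```python
# Hypothetical requirement: transfer(amount, balance) debits a positive
# amount no larger than the balance and returns the new balance.
# The tests come first; the implementation is what the AI would then
# be asked to write until all the tests pass.

def transfer(amount: float, balance: float) -> float:
    """Debit `amount` from `balance`, enforcing the requirement above."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Happy path: the behavior the requirement promises.
def test_happy_path():
    assert transfer(30.0, 100.0) == 70.0

# Negative tests: what must NOT work, which is exactly where loose
# requirements tend to break down.
def test_rejects_nonpositive_amount():
    try:
        transfer(0, 100.0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

def test_rejects_overdraft():
    try:
        transfer(200.0, 100.0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Asking for the happy path first and then the negative tests forces a loose requirement ("move money") into a tight one (positive amounts only, no overdrafts) before any implementation exists.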
For a decade, I was writing automation tests, and I built a test framework around Selenium where it would scan a sitemap and actually build the whole test framework for you. All you literally had to do was plug in the actions, and it would build your test cases. Six months of work in like two minutes. One of the arguments against that was that Selenium testing, contract testing like that, is too expensive because your IDs and your locators change too much and the tests break. With AI now, like you said, it is interesting to see that you can quickly spin up that full end-to-end test and make it more repairable in an acceptable way instead of guessing. So that's just one interesting thing I've seen. I want to take this a little bit further than the testing, though. Taking it from the test side of things and the requirements side, all the way back to the business side. Where do you see changes? Because I'm running into some governance issues with a couple of different projects I'm working on, which is slowing AI adoption because, of course, private information and credit card information, things like that, is still unsafe in AI. So they're afraid to adopt it, or they have to build their own systems. But I'm not seeing enough of a change, or I can't figure out how to push the business side on the requirements gathering, to build or adopt AI to help structure what we need as developers or for the application. Kind of like what you said, we go months in between these iterations; it's ridiculous. So how do you see that changing, and how can you see companies adapt that are not stuck adopting AI but are dealing with some limitations because of governance? Yeah, so I want to take one quick step back on your testing thing and edge in a thought that was super interesting. I love the idea of the end-to-end tests, the way you described them.
But one of the things that I've been hitting recently is when you set the AI to build something new, you always want to say, by the way, do not even show this to me until all the tests pass. So the AI will go in these nice little cycles, figure this out, and give you something that's really nice and working with the tests passing, and hopefully it hasn't disabled half the tests to do it. But the annoying thing is that if your end-to-end tests take half an hour, now every time it needs to do a cycle, it's half an hour. So if it has to run these so many times, it just takes days to build something that, if it could find some way to run the tests faster, would be better. So that's why I keep coming back to: we're going to figure out better ways to do this, where we can get the same level of confidence without the same level of actual expense in time and compute. So that was kind of a cool thought there. So we'll go back to your question, right? I keep coming back to... So before you jump into that, a quick answer to that question. Look at spinning up Docker containers with Selenium Grid and ask it to do that type of testing within your local Docker container instances, and that might speed up your testing. Yeah, that works, but then my machine, which is a Mac Studio, is maxed out on all cores. I'm limited because I have like six projects and each of them is working on ten different features, and it's just maxing out. So it's possible, but the compute just becomes so expensive. So I think we have to figure out a theoretical way of proving the risk in a better way. But yes, in the short term, obviously, that's kind of what we're doing. And that's really nice because we can actually get some speed, but you're kind of paying for it. But yeah, that's superb. So back to the question, the other fun question, of how do you get business to adopt this? How do they come to trust the AI?
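One possible way to frame the compute trade-off raised above, cheap checks on every AI cycle and the expensive end-to-end suite only at the end, is a tiered gate. This is a toy sketch with invented suite names and simulated timings, not a real test runner:

```python
import time

# Hypothetical suites: (name, runtime in seconds, callable returning pass/fail).
# Names, timings, and structure are invented for illustration.
FAST_SUITE = [
    ("unit_auth", 0.001, lambda: True),
    ("unit_billing", 0.001, lambda: True),
]
SLOW_SUITE = [
    ("e2e_checkout", 0.005, lambda: True),  # stands in for a 30-minute run
]

def run_suite(suite):
    """Run every test in the suite; return True only if all pass."""
    ok = True
    for name, cost, test in suite:
        time.sleep(cost)  # simulate the runtime cost of the suite
        ok = ok and test()
    return ok

def ai_iteration(final_pass: bool) -> bool:
    """Gate each AI build cycle on the cheap suite, and pay for the
    expensive end-to-end suite only when the change looks finished."""
    if not run_suite(FAST_SUITE):
        return False  # fail fast: no point running the slow suite
    return run_suite(SLOW_SUITE) if final_pass else True
```

Intermediate AI cycles pay only for the fast suite; the half-hour suite is run once, at the end, which keeps most of the confidence while cutting most of the compute.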
I think, similar to how you trust other processes, we're going to get there gradually, right? Because we saw the AI do some stupid things early on, and then we put some guardrails around it, and we started looking at it and put humans in the loop in certain places. But as you see it do the right thing over and over and over again, I think we're beginning to see more and more of the adoption here. I also think it's kind of like self-driving cars, right? People are not the best drivers. Engineers, as much as we like to think really positively about ourselves, we're not the best code reviewers. We're not the best at any number of things. So it becomes much easier to have the AI do it, and I've done this recently on a project that I'm working on that has to be HIPAA compliant. I've looked through every line of code that it is generating in the authorization and authentication area. But every week I run the AI and say, go through this and you tell me what you find. It always finds more stuff than I did. Right. So I think as we start to put these processes in place, and that culture in place, and AI reviews become part of our CI/CD, and we start giving precise instructions on the threats and things that we're looking at, I think we're going to see that it's going to be much better than us in some areas. And you can see this: you saw Anthropic came out with a new version in the last couple of days that is really good at figuring out security flaws. And there's been talk over just the last few days of how much better AI is beginning to get at figuring out and pushing PRs that fix CVEs across different open source projects. Like two months ago, people were complaining that AI is producing slop that is too hard to read. And now we're re-evaluating that, because actually now we're beginning to see some really good CVEs being fixed. And not just CVEs; CVEs are known.
We're going to need to see some zero-days being found by the AI and the fixes getting in really quickly. So I'm pretty positive here that if we keep looking at the right structures and safeguards and keep putting those in place, we're going to keep seeing these massive improvements. It's interesting, as I'm hearing the discussion on both sides here, that this actually goes back to a core thought that I have and I've said a couple of times: AI doesn't fix things. It just amplifies them. What we've done now with AI is we've taken the problem that you, Thanos, initially laid out, of business talking to developers, where it's supposed to take a week, but then it takes two weeks. And it's a back and forth and a back and forth, and it just cycles. But what AI has done is it's shrunk those cycles. So we are seeing more of those things come to the surface, and it begs the question: maybe it's not the tools. Maybe it's not even the process itself, but maybe there's something that we need to do a little more upstream to make sure that we are understanding the problem that we're defining, so that our requirements are tightly defined. So it's not just giving a happy path, as Michael is seeing, but we are also, within those requirements, including things like understanding not only what works, but what does not work. And is that something that you're seeing, where you're having to step in and almost back them up a little bit and say, wait a minute, you guys are doing a few things right, but you actually have to go back to some of the fundamentals? You need to maybe understand and define your business in a little more detail.
And as you said, as a business user, is this one of those situations like somebody describing how to tie a shoe, where they know it, but putting that into language and into details is where there's a disconnect, all the way before we even start into development and implementation? Yeah, you're exactly spot on, right? Depending on what your processes and systems are, AI will amplify them. If they're good, you'll get better. If they're bad, they'll get worse. So it is about fixing those systems. But the nice thing about AI is that now it gives us the speed to fix those systems. So a year ago, I'd go into a company and say, look, I looked over what you're doing in a week. That's great. We have a few processes we can fix really quickly by fixing things over here. And you have all this tech debt over there, and you're going to put 30% of your time into tech debt, and it's going to take us a year to do this. You'll start seeing some benefits in six weeks, two months, and so on. Now we can go in and do the same thing and say, yes, and we're going to fix maybe a lot of your tech debt in the first week, because we can set up the AI and have it get done. And now you start seeing the results a lot faster than you used to before. That's a much more palatable answer. Again, you need to be pointed in the right direction. And you mentioned requirements. I really like the idea of doing some upfront planning to figure out what your requirements are. But I talked about the humility of not knowing what you don't know up front. So what I like to do is structure these processes where you can go from loose requirements to tight requirements as you develop. And there are different ways where you do the hardest things up front, the riskiest things up front. We have all those tools to do these bits and pieces. But you usually start with English, or whatever language you speak.
But my point is, languages are imprecise, and you end up with code, and code is very precise. Code needs to know exactly what to do in every possible situation. So to go from English, just the language of choice for right now, to code, you will have to solve every detail and requirement. Literally, if you don't, you'll have something throw an exception or crash because you hadn't thought of a particular situation. So it is this iterative discovery of requirements. The faster that we can make that go, and that is made to work better if you have the right culture, the right processes, and the right technology, the faster we can get to code that works in the way that we want it to. And that's kind of what we put in place when we go into organizations: getting that process right. And we captured a lot of things before as to how we do these things, the language that we talk to people with, all the bits and pieces that we do in engineering. Yeah, so let's go on from the natural language to the tech language, essentially from whatever your natural speaking language is to tech. Yeah, I remember dealing with that back in college, having to do the technical writing classes versus your English classes. They were two totally different monsters to suss out, and then to actually write the code. Another language. Running joke: my daughter's like, well, I can speak three languages. I'm like, well, that's fine. I can code in 25. But she's like, but that's good. I'm like, it's still a language. It's just one of those misconceptions. Back to the business side of this a little bit more. So changing that mindset, getting that conversation going. We've talked about getting from the natural speaking language to the code and working more on the requirements. Garbage in, garbage out. Good practices get good results. AI just amplifies that.
Getting back again to the SDLC model, you get the requirements, you get the why, the things that need to be done. How do you go into these companies, or how would you help our listeners figure out, how do I work on that? AI is great for writing code, is great for solving these coding problems, is good for helping me figure out requirements. But how does it help me with the rest of the early process of getting ready to build the code? Yeah, so I think the code building and the requirements building share a lot of similarities. Before, companies on the bigger side might have a research group that goes into some market research and figures some things out. And then they have a product group, and they go figure out with that research what they're going to build. And then they have some designers that go and build some UI/UX, and we make some Figmas or something and show it to people, or paper prototypes back in the day. And then we might throw that to engineering, and we decide which one of these we're going to build. Well, I think that's out the door now. Right now, you can have someone, almost anyone, and maybe they can sit with an engineer, maybe with people with these kinds of expertise, sit together and say, hey, we're thinking of tackling this market segment or this area or this general feature. We have five feature ideas. Think them through. Think of ten ways of doing each of these. Pick the best based on these criteria. Think of any criteria you might have missed. Chew on it for a few hours and come back to us and make us some lists. Or even, you can say, take the top five contenders and just prototype them. And then you can actually look at things hands-on, things you can literally click on that actually work, and say, okay, yeah, we like this. We hate that one. That was a terrible idea. This is promising. Go do some more research on this. So we really compact this.
By the time we ship this to engineering, we really know we're building the right thing with much better certainty. So I have the inner loop of engineering that's just trying to build the right thing and make it work, and the outer loop is product trying to get product-market fit, to go from idea to something that sells in the market. I think these are just squished together now. You can do them both. So when you go into a reluctant company, what I like to do is run small experiments. You can show this by doing, and I've done this recently a few times, where you can go from, let's chat about it today, and literally by tomorrow, you will have this feature. I have one client where we actually did this. We had a meeting yesterday morning. We thought this was a good idea; now let's ship it. And this morning, it was actually shipped out into production. We're actually trying it out. We may throw this feature away. It may never see the light of day. I don't think it will; I think it will adapt into something useful. But showing these little successes helps build credibility and helps show how this works, and you kind of build on that success and keep going and keep going. And don't see this as just a sly move to convince people to move forward; it's also to learn about how the teams operate with each other, learn what their needs are as they're talking through this, suss out their requirements, not product requirements, but what the business team would like in terms of schedules and certainty, and what the engineering team would like, things that they understand, that they can build out without having to be yanked around by us changing things, right? They each have their needs. So if they communicate their needs well to each other, that breeds more success, as that team can push out things with better speed, with better quality, and with higher happiness across the board.
So, last question for me before I throw this back to Rob. We've talked about all the good things that AI can do and all the things that you can bring to companies to help transform their business practices with AI. Let's talk about the negative effects, the downside. Are there any instances where you have had this fail, or you've had an overwhelming challenge convincing a company or getting this to work for someone? There's a couple of buckets here, right? One is culture. You've heard the adage that culture eats strategy for breakfast, right? You might have the strategy that you adopt AI across the company, and no one wants to. So I think that's rule number one. You find a lot of people that are reluctant to adopt AI, and some have good reasons, right? There are good reasons: the last time I tried to do this, it deleted my production database. I think the answer to that is, well, you probably shouldn't have given it the keys, and here are some safeguards you can build around it. So go and try that again rather than write it off, because if you write it off, your competitors won't, and they'll eat your lunch. So that's kind of one thing, right? These are the manageable things that we as engineers should handle. There's a second thing, which is an issue of psychology and psychological safety. Not in the traditional way it's been talked about in the last few years, but the engineering profession is changing, and it's changing quickly, and it's changing a lot. So I think people are really feeling that, and they're afraid, right? Is the AI going to be doing my job? If I can see that the AI can do this better than me, what am I here to do? Am I just clicking and authorizing its bash commands every time?
And again, I think the answer to that isn't to put your head in the sand and hope it's going to turn out for the better, but to learn how to leverage it, learn how to use it, and learn how to get the speed gains. Those that figure out how to put the safety net around it and get both speed and quality at the same time are going to be the winners. So there is a delicate process of having to work a team through that to get them to adopt it. And it's changing so quickly that it's hard to give concrete advice this week versus last week about what's happening here. But that's one of the main things that I see. Another thing is, as you go to bigger companies, they have different sets of rules. Two years ago, you might be hearing, we don't trust what the IP is that is going to come out of these things. I think no one talks about that anymore; I think that's reasonably okay for now. But now you're hearing things such as security, where security teams will not let you run anything even remotely exciting in there. So if you're working in a larger company, you may be hamstrung with the one tool that the company has bought and struggling to make that work. So these are the two most common roadblocks that I'm seeing for adoption. I don't give too much credence to the AI-deleted-my-production-database one. Yes, those things will happen. Yes, it will burn something down. But establish good version control, keep good backups, run it in VMs where you need to. Don't give it the keys to production. There are good practices you can follow to mitigate these things. And occasionally it'll do something dumb, but I think the benefits that you'll see are going to be worth the trade-off. It's very funny, because that has been a mantra for my development career: don't give production keys to the development team, because we have seen human beings do the same thing.
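The safeguards listed here (don't give it the keys, put guardrails around it) can be as simple as an allowlist gate in front of anything an agent proposes to run. A minimal hypothetical sketch, not any particular tool's API; the prefixes and blocked tokens are invented examples, not a complete policy:

```python
# An illustrative gate between an AI agent and the shell. The prefixes and
# tokens below are made-up examples for this sketch.
ALLOWED_PREFIXES = ("git status", "git diff", "pytest", "ls")
BLOCKED_TOKENS = ("drop table", "rm -rf", "prod")  # clearly destructive or production-touching

def gate_command(cmd: str) -> str:
    """Classify an agent-proposed command: 'allow', 'block', or 'review'."""
    lowered = cmd.lower()
    if any(tok in lowered for tok in BLOCKED_TOKENS):
        return "block"   # never run, even with a human watching
    if any(lowered.startswith(p) for p in ALLOWED_PREFIXES):
        return "allow"   # safe, read-only style commands run unattended
    return "review"      # anything unrecognized goes to a human in the loop
```

The point is the shape, not the specific lists: destructive patterns are refused outright, a small set of known-safe commands runs unattended, and everything else defaults to a human in the loop.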
And I may have possibly, at some point in my life, done one of those, where I ran something and was like, darn, that was a mistake, and then had to restore a database or go to a backup or something along those lines. AI, once again, may be showing some of the same flaws as its, we'll call it, human masters. A closing thought, a closing question, because we've sort of touched on this a little bit. Moving forward, and I'm trying to see how to best ask this, but essentially, if you think of the two sides, the business side and the engineering side, do you see a shift where one is going to be moving more toward the other due to AI in these cycles? Where either developers are going to have to become more business savvy, or the business owners are going to become more technology savvy, or do you see a little of both? And with the caveat that this obviously could change a week from now. I think that was always going to be the case. And in my practice, it's something that I have always been advocating: both sides, and you see them as sides because they're literally separate groups, they usually have different reporting structures and so on, but they're two parts of one broader team. So these two parts of one bigger team need to understand each other. When you're developing what your roadmap is going to be for the coming future, the product side, the business side, is going to be the advocate for the features that they want and the bugs that they want fixed. The engineering or technical side is going to be the advocate for the technical debt and the investments they want to make in their platforms and systems, and for the technical risks that they have. Neither of these sides is going to be able to get into the deep guts of the other side, right?
Not every engineer needs to understand the deep market research as to why this feature needs to be built, nor does any product person need to understand what actual technical vulnerabilities this pull request is going to fix when we update this version of YAML. But they need to understand that both sides need these things, that you can't delay those things forever, and that you kind of have to work on them together. So I think it was always going to be the case that these groups need to get better at working together than they historically have been. And again, this varies across companies, etc. But now with AI, I think everyone can do each other's job much better with the help of the bots. So I, as an engineer, can say, hey, they gave me this feature and I'm not sure how the front end should work, and this doesn't feel right to me, so give me five ideas on how to make it better. And in 10 minutes, I can have a better UI, and I can ship it back to my PM and say, this now works. Is that good? And they're like, yeah, that's probably fine. So I'm bleeding into doing their work. And the PM can also grab the code base and say, hey, I'd like to see what that page would look like if you move these things around this way. And they can also, in 10 minutes, have a working prototype. So I think the roles are beginning to overlap, and we'll have to understand each other better. And I think that's actually going to be a massive driver of productivity, because when you get all the decision making and the expertise in one place, that speeds up those little loops and you can move a lot faster. So I think it's going to be a lot of fun. This, sorry, this hit me while you were talking, and this was not a self-serving softball question, but it sounds to me like you're moving from a developer and an entrepreneur, mixing them together, into a develpreneur. So there you go. Maybe that's where we're all headed, and maybe it is.
I think it's interesting, because this is where we've always thought the best developers are: those that have the business knowledge. And it's funny that it just now has hit me, as we've been talking to you, that this develpreneur thing is actually where we're going to go. Those are going to be the next rise: the people that have both sides of it enough that they can get the bots doing what they need to do, but they understand the business side, the product development side, so that they can push that through cycles very quickly and get something to market that can hopefully stick in the market for more than two weeks before AI finds a way to overcome it. Yeah, it has flown by. This has been way too little time for us to talk. This has been incredible. So before we wrap this one up, I would be remiss if I did not allow you to share: what are the best ways for people to get a hold of you or reach out to you, whether they want to talk to you just professionally or whether they want to make use of your services? Yeah, there are two ways. One is to go to my website, Cosmic Teacups.com, and sign up for my mailing list over there. I post occasionally on all sorts of subjects along the lines of what we discussed today on my mailing list. And also you can follow me on LinkedIn; you can just search me up on there and hit the follow button. I also write and post on LinkedIn a lot, and I'm happy to chat about any of these things with anyone. I find it super fascinating. Yes, it is. Obviously, that's what I think made this such a great discussion, and maybe we'll have more of these discussions down the road as well. Thank you, all of you that are listening, that have been hanging out with us for this last little bit.
You can put your pencils down now, or you can tell your AI bot to stop and go summarize what just happened for you so that your fingers are not too worn down. We do appreciate all the time and the investments you've made into listening to this, into developing your own career, and into finding ways to make yourself better. As always, go out there and have yourself a great day and a great week, and we will talk to you next time.