🎙 Develpreneur Podcast Episode

Audio + transcript

Getting Unstuck and Moving Forward with Thanos Diacakis

In this episode, we talk with Thanos Diacakis about how AI is changing the software development landscape. We discuss how AI can accelerate development, why the bottleneck is shifting from writing code to accepting changes, and why good tests and architectures are the key to taking advantage of AI.

2026-04-25 • Season 27 • Episode 26 • AI, software development, and agile methodologies • Podcast

Summary

Thanos Diacakis helps software teams, technical and non-technical, go faster with better quality without going crazy in the process. We dig into why big up-front specifications fail, why agile ceremonies like sprints deserve a fresh look, and how AI has moved the bottleneck from writing code to accepting changes. Thanos also shares concrete testing practices for the AI era, including setting one agent to write tests and another to audit them, and using contract tests to keep complex systems honest.

Detailed Notes

Thanos has been a software engineer for close to 30 years and now helps teams that build software go faster with better quality. With non-technical teams, the trouble usually starts with a language barrier: a five-page spec is handed to contractors, a month of silence follows, and what comes back is not what was wanted. The wrong reaction is a 30-page spec; the right one is a two-day build and a shared review, repeated until both sides get good at communicating. On process, Thanos argues for agile with a small "a": keep the practices that still earn their place, but question ceremonies like sprints and four-hour planning meetings whose original reasons have disappeared. Post-AI, the bottleneck has shifted from planning and typing out code to how fast a team can accept changes, which puts a premium on good tests, clean architectures, and CI/CD. Developers are moving into a reviewer and manager role over AI agents, and the testing practices we have known about for a decade or more are finally becoming cheap enough to adopt everywhere, from agent-audited test suites to contract tests that stand in for brittle end-to-end tests.

Highlights

  • AI can accelerate software development, but only as fast as teams can accept the changes
  • The bottleneck has shifted from writing code to reviewing and accepting changes
  • Good tests and architectures are what let teams take advantage of AI
  • AI can help automate testing, but it is not a replacement for good testing practices
  • The future of software development is breaking down complex systems and writing clear contracts between them

Key Takeaways

  • Replace big up-front specifications with short build-and-review cycles: build something in two days and look at it together
  • Revisit agile ceremonies such as sprints; keep the value, drop the dogma
  • Invest in tests, architecture, and CI/CD; they are the acceptance gate for AI-generated code
  • Pair a test-writing agent with a second agent that audits for bad assertions and skipped tests
  • Expect contract tests between systems to carry more of the load as end-to-end complexity grows

Practical Lessons

  • Developing good tests and architectures is essential for taking advantage of AI
  • Teams need to be adaptable and willing to accept the pace of change AI brings
  • Automating testing with AI can accelerate development, but it is not a replacement for good testing practices

Strong Lines

  • "If this software that we want to build existed, we would have just bought it, because that's way cheaper and faster."
  • "The bottleneck has clearly now shifted from planning and typing out and doing the actual coding to how fast we can actually accept the changes."
  • "Complexity is enemy number one."
  • "If you're on the leading edge, you're almost behind."

Blog Post Angles

  • How AI is changing the software development landscape
  • The importance of developing good tests and architectures to take advantage of AI
  • The need for teams to be adaptable and able to accept the changes brought about by AI
  • The future of software development: breaking down complex systems and writing clear contracts between them

Keywords

  • AI
  • software development
  • agile methodologies
  • testing
  • architectures

Transcript Text

Welcome to Building Better Developers, the developer podcast where we work on getting better step by step, professionally and personally. Let's get started.

Well, hello and welcome back. We are continuing our season. We're getting unstuck, we're moving forward, getting that forward momentum and starting a new year, even though now we're actually even starting into another quarter, because this has been one of those seasons. It has taken a while to get through here, but we are plowing forward. This episode, once again, we have a guest that we will be speaking with. I will speak about that in a little bit; don't want to do too much of a spoiler alert. Before we get there, let's talk about who we are. We are Develpreneur; we are the Building Better Developers podcast. I am Rob Broadhead, one of the founders of Develpreneur, also the founder of RB Consulting, where we help you do a technology reality check before you step into that project, before you start into that AI thing you're going to do. Let's make sure you've actually got your ducks in a row so you can take advantage of that, make the most of that investment.

Good thing and bad thing. Good thing is I'm in an area where there are a lot of different neighborhoods, and each neighborhood's got its own vibe, even its own sort of food style, all kinds of cool stuff like that, which makes it great. So on a day like today, when the sun was shining and there was a nice little breeze, it was an awesome time to be out there. Downside is that you have to take a little bit of a hike to get to some of these areas. You have to start in a car or a cab ride, because they don't have Bolt around here. So sometimes you've got to travel a little to be able to live a little, as it were. But you don't have to travel anywhere, because Michael is going to be right here and you can live it up with his introduction. Michael, go ahead.

Hey everyone. My name is Michael Meloche. I'm one of the co-founders of Building Better Developers, also known as Develpreneur. I'm also the founder of Envision QA, where we create reliable, tailored software that helps you work smarter, scale faster, and stay in control of your business. We do that through test-driven development and automation. Good thing and bad thing. Good thing, it's spring. It's warm here. Bees are out. Flowers are out. It's great. Downside, pollen is here. I'm getting into allergy season, so it's going to be fun as we do these when I'm sneezing and sniffling in the weeks to come. That always gives a little nice background noise to some of those discussions as well. And now I'd like to have our guest introduce himself. Go ahead and dive in. Welcome to the show.

Thank you. Thank you for having me on. I'm excited to be here and looking forward to the conversation. My name is Thanos Diacakis. I'm a software engineer by training; I've been doing this for coming up to about 30 years now. What I currently do is help teams that are building software, whether they're tech teams building software or non-tech teams that sort of got themselves into building something. I help them go faster, with better quality, and not go crazy in the process.

Well, I think our audience is full of people that are definitely on the latter side of that sometimes, where we love building stuff, but sometimes we go a little crazy as we get into some of those longer project cycles. So let's dive right into that.
As you said, whether they're technical teams or not technical teams. And I think we've seen some of those. We've seen more of that in recent years, where non-technical teams are diving right in, trying to take advantage of some of the latest tools. And sometimes, as they say in the movies, hilarity ensues. Sometimes it's not so nice. And sometimes it does work. So how is it that you are helping these teams? Actually, yeah, let's start with the non-technical side of it. How do you help those teams when they're stepping into a technical project?

Yeah, I think typically what I see with these teams is it's not like a complete non-engineer will just grab something and do it, though that's changing a little bit now with the whole AI revolution that is taking place. What would typically happen, maybe up to a year ago, is that they would hire a few contractors and try to self-manage those contractors. And these engineers would often not have the best communication with the business. So there'd be these mismatches of what was expected and when it was expected, and it would diverge from there. And as the requirements grew and you got into building more and more complicated software, that divergence got worse and worse. You don't have the right words to talk to each other, you don't have the right expectations set, and things get pretty bad. That's where I usually jump in and start clearing that up and figuring out what structures we need in place to make that work.

Now, you mentioned that it sort of didn't start perfect and it went off the rails as it got further into the project, which makes a lot of sense. I see that a lot, where technology and tools don't necessarily fix a problem; if it's already broken, they just make it more broken, they amplify it. So are you seeing that from the start? Is it the disconnect between the technology people and the business side, or is it something where the business side could set the table, as it were, a little better for that project moving forward? Or is it one of those where, almost from the start, the, we'll call it the language barrier, is really where the issue is?

I think the language barrier is where it starts. There is a certain skill set that is required to manage an engineering team, and when you have a non-software company that starts doing software and doesn't have that muscle, it is hard to build it from scratch. And it's not rocket science, right? It's things you can train people on, things you can explain to people, things you can help build. But if you don't know how to do it, you don't know what to look for, and it's really hard to build. To give you a classic example of this, a story that I've seen play out many, many times: you go to an engineering team and you say, this is the project that I want. And you write them like five pages of a specification and you say, now go. They look at it and say, yeah, that'll probably take us about a month. So you leave them alone. You come back after a month, they've built you something, and it's like, no, no, no, this is not what I wanted. That's not what I meant, and that's not what I meant. So they go off again. And by the way, that usually never took a month; it actually took two months, because they misunderstood something, or something didn't get communicated.
So then you go off for rev number two and rev number three, and this maybe takes six months. By the time you're done, you're super frustrated. It took way longer than you thought. These engineers are expensive, so you paid them a lot more. And the next time this comes around, you're like, OK, instead of writing this down in five pages, I'm going to write 30 pages, and I'm going to design every screen and tell you exactly where I want every button. And that's usually the wrong reaction. And you guys are engineers, so you've seen this happen before. The right reaction is: I will not let you go out for a month to build this. I will give you the rough idea, you build something in two days, and let's come back and look at it together. And then let's repeat that cycle until we get really good at communicating, until we get really good at building things together, seeing the results, building the feedback in, doing that sort of thing. It's really obvious once you state it, but most people don't know how to do that, and they get in trouble real fast with these kinds of patterns.

So, in this, you referenced a really good example of starting with what I'll call a more broad design, a five-page document, and then switching to something that's 30 pages and much more detailed. Do you find that the requirements are still there, just, for lack of a better term, hidden or assumed? Or is it a language thing, where they're just not defined to a level that translates?

Yeah, I'll get controversial here. I don't think most of the time we as business owners know what the hell we're talking about. So when we hand over that first five-page spec, we don't really know. This is new, right? If this software that we want to build existed, we would have just bought it, because that's way cheaper and faster. So we're building something completely bespoke for our needs, and these five pages are our first stab in the dark. So one of the first things I try to explain to people is: yes, I know you know your business, but you're going to have to have some humility here, because this is your first attempt and you don't know what you're doing. Most of the time, we've seen this play out: that first five pages is not what gets built. Because once you put it into software and you've played with it for five minutes, you're like, yeah, actually, that doesn't work, that doesn't work, I need this this way, that data structure doesn't work, I need to shift it that way. So what I try to explain is that we structure these as small experiments. There's a reason: that first five pages is just a stab in the dark, and we will iterate from there. Yes, some things will still be there. The vision will still be there; the principles of how we want this to work will still be there. We have to extract those and then weave our way into the implementation and get from the abstract to the concrete.

Now, I hear a lot of things in there about how some of these things actually work, as opposed to how they're formally defined.
And so I'm wondering, how does this fit into the world of agile and sprints, or maybe a rapid application development approach, or a pure test-driven development approach, or a very tight iterative approach? Because it sounds like what you're describing fits into each of those. Are they all sort of good, or is there some master plan that you work with to help them get into this, we'll call it a more iterative process, than maybe comes naturally?

Yeah, so I think all these techniques that you mentioned, sprints, scrum, test-driven development, these are all great things. In my practice, I extract elements from all of them that work best for a given situation. The one bad pattern that I see is that we have taken these frameworks and methodologies and turned them into Bibles of this is how we do things. It's like dogma, and we forget why we had these in place. So if you look back, and I like to pick on sprints because sprints are not something I typically like or do in most circumstances, we had a situation where you did releases once a year, once every three years, or twice a year, something like that. And 20 years ago we thought, well, doing sprints is a better idea, because that will make us do more frequent releases. And I think we've achieved that. We can do releases multiple times a day now. So then we start looking: is there still benefit in doing sprints if we switch to a Kanban model, where we have a pile of work and we just pick off the pile and work off the pile? Does it help us to plan things within a week? And the answer is probably no, not really, anymore. Maybe there are some circumstances where that is necessary, but most of the time it's like, we don't need this. So we switch to a slightly different model. Sprints also give protections to the engineers against things being switched mid-sprint. The idea is you commit to a sprint and then you don't change the sprint until the next sprint. But in a world where you can have an idea, have your AI build it overnight, and push it to production the next day, is there much sense in distracting the engineer, telling them mid-feature to stop this and work on something else? Probably not; that doesn't really happen anymore. So understanding some of these things helps you figure out why you're doing this and whether you still want to apply these techniques. Being agile with a small a, and following a bunch of the practices that we have in agile, still makes a lot of sense, but we have to go back and look at why we're doing them. I talk to teams that do a four-hour planning meeting on Monday mornings for sprint planning, and it's like, no, that's got to go. That's productive time that could be spent doing something else. Now, if you're actually getting some value out of that meeting, let's talk about what that specific value is and see how we can extract it and do it more efficiently, more effectively.

I love all of that. You hit on a lot of great things that we actually hit on a regular basis. I think I'll go with the big one, the elephant in the room as it is around here: you hit on the why you're doing this. What is it that you're, what is your focus?
What is it, whether it's a small release or a large release, that you're actually trying to show, prove, or provide value with within that deliverable? And part of it is because the development cycles have, as you sort of alluded to, tightened: you can go to continuous integration and deployment, you can do some things that really tighten those cycles up. Have you found that there's a sweet spot, essentially, of still providing the right amount of time for engineering to implement, but also enough time for them to get feedback from the business side, so that they're not running off too much, but they're also not being constantly poked, how you doing, how you doing, let's look at it, like somebody's almost sitting behind them watching?

Yeah, I think there's a pre-AI question and there's a post-AI question. Maybe we should just chat about the post-AI one, because that's the world we live in. I think in the past months, and I say months, not even quarters, the speed has accelerated so much in teams that are leveraging and harnessing AI in the right ways. The bottleneck has clearly now shifted from planning and typing out and doing the actual coding to how fast we can actually accept the changes. Teams that have functional test-driven development, that have good tests and good architectures where they're not dying from complexity, that have good CI/CD, can now push features out the door faster than anyone can even understand them or see them. Depending on who your audience is, there are some businesses that just cannot accept that level of change. You cannot give them five new features a day; you have to train people to understand how to use them, and so on. The gating is now happening somewhere else. Now, there are other places where the complexity is still there. You may have a team that can't go that fast because you have enough tech debt that you're still in the cleanup phase; you can clean up a lot faster now, but you still have enough of these things that you're not spitting out five features a day. You're gated a little earlier in the process, but as soon as you're done cleaning that up, your gates are going to shift forward, and your bottlenecks are now probably going to lean back into the business. Then we have teams that are even further behind: they're either not leveraging AI, or their culture, processes, and architectures are even further behind, and they're not able to take advantage of all these things. Part of the diagnostics is always going in, looking, and seeing where you're at, figuring out where the bottlenecks are, and fixing those so you can keep moving the bottlenecks further and further out to the right.
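To make the acceptance-gate idea concrete, here is a minimal sketch, not from the episode, of a behavior-pinning test in Python with pytest. The pricing function and its discount codes are hypothetical and defined inline to keep the sketch self-contained; in a real repository the function would be imported from the module an agent is asked to rewrite. The point is that a table of pinned behaviors like this is what lets a team accept an AI-generated rewrite without reading every line.

```python
# A minimal behavior-pinning test: any rewrite of apply_discount(),
# human- or agent-written, must keep these cases green before it ships.
import pytest

def apply_discount(total: float, code: str) -> float:
    """Hypothetical function an agent might be asked to rewrite."""
    rates = {"SPRING10": 0.10, "VIP20": 0.20}
    return round(total * (1 - rates.get(code, 0.0)), 2)

@pytest.mark.parametrize(
    "total, code, expected",
    [
        (100.0, "SPRING10", 90.0),   # known discount applies
        (100.0, "VIP20", 80.0),
        (100.0, "BOGUS", 100.0),     # unknown codes are ignored
        (0.0, "VIP20", 0.0),         # zero total stays zero
    ],
)
def test_apply_discount_pins_behavior(total, code, expected):
    # Pin the observable behavior; the implementation is free to change.
    assert apply_discount(total, code) == expected
```

Run in CI on every change, a suite like this shifts the review question from "is every line right?" to "does the behavior we pinned still hold?"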
It's a fascinating pace of change right now. It just drives me crazy to think about what is happening, where the planning and the doing and the writing of the code isn't really that tough anymore; it's what happens after that. It's really fun, because instead of thinking about how to hook up this API with that API, what the right variables are, and what checks I have to do to make these things happen, you can now think in higher-level terms and say, OK, I'm going to ask this AI to build this feature, and it's going to have it done in less than a day. What are the right data structures, the right checkpoints, the right tests that I can use to verify that this is correct? How do I direct it to write me code in a way that I can understand? Because I'm not going to have time to read 10,000 lines of code, but I still need the right inspection points and the right places to look to make sure that I can ship this with confidence to my users and say, this is good, you can trust this. I as an engineer can sign off on this and say, I can trust this. It moves us to work at more fun levels, on more exciting engineering tasks, rather than the micro coding tasks we used to do before.

So are you saying that those micro-level coding tasks, the things we used to do where we were worried about commits and comments and making sure that a ticket moved and things like that, are shifting more into a sort of guardrail area? Where now, instead of us doing it, we're trying to teach the AI to make sure that it is doing those things, and shifting us as developers more into a manager, senior developer, code reviewer type of role? Now we're not writing the code, we're not coding; we are leading a, we'll call it a team of AI, whether it's one agent or one tool or multiples of those, into whatever it is that we're building.

I think that is absolutely the case. And it's moving fast, because if you'd asked me six months ago, I'd tell you I cannot trust the agent to write even a hundred lines of code before I look at it, because it's just crap. I had to look at it all the time, and it wasn't good. But fast forward six months, and now I can reliably trust it, because I've read the thousand lines a few times and couldn't find a single problem, and anything I found was minor and easily fixable. You keep moving down and down the chain. And as engineers, it becomes even more than just looking at engineering tasks and figuring out those higher-layer things I mentioned. It's thinking about, OK, how do I interact with my product counterparts? How do they interact with the rest of the business? What are the other things that happen downstream of me when I ship a feature, and how do I build more context, so when the next feature comes in, I have better context to feed into the system to make better decisions as we build new things? And oftentimes we think only in terms of building new things and adding new things, but often removing things is something we have to do, because complexity is enemy number one. We can't keep adding and adding to a product, right? It will just collapse under its own weight at some point. We have to make sure we're grooming it, cleaning it out, and removing things. And it's our role as engineers to figure these things out and flag them, until the AI gets good enough to tell us by itself. But for now, we still have a role to play in making sure these systems are working great.

Now, what is it that typically is the breaking point where somebody says, oh, we need to have Thanos come in? You've described very much what I'm seeing as well. This is moving very quickly. We are seeing cycles almost disappear, and we're seeing AI very much able to step into a lot of these coding roles.
But it goes back to the foundations of making sure that you understand the problem you're solving, so that you don't end up in a situation where you've got all these little test runs and throwaway pieces and extra stuff that AI built two weeks ago, because you were playing around with an idea or hadn't really fully formed the design and the requirements for that business process you're trying to fix. Where is the first breaking point or stress point, usually, where they say, OK, we've got to get somebody to help us with this?

There are a couple of places where I usually see the trigger. The first is that teams are usually really, really busy. They have enough on their hands just doing the work; they don't have time to think about how they want to do the work. So they figure, OK, we'll reach for some outside help to structure this, because we don't have the time to do that experimentation. And a lot of times that's really valuable, because when you have a cause and effect that is obvious, you can look at it and say, this cause has this effect. It's really easy: I have this problem, it has a solution, I do it, problem solved, done. The tricky ones are where the cause and effect are not obvious. That's where you need someone with experience, who has seen this problem, learned the hard way, has the scars, and has had to fix the problem in an unintuitive way. You could probably do it yourself, but it would take you a really long time. So that's a really good entry point. Myself and others that do this sort of thing work with multiple clients, so we have the benefit of seeing different innovations in different places and cross-pollinating these cool ideas: oh, that's a really cool way these guys are doing it, and then we can tell more people about it and move the state of engineering forward in general. So that's one angle, having an outsider look at it. The second very common scenario is when this comes from the business side, and they realize they don't have the language and the framework to talk to the engineering side. That's where the frustrations come in, in terms of getting good at doing versus getting good at planning. And we can talk about doing versus planning; that's a whole different fun subject we could cover. But oftentimes the business is frustrated: I have this engineering team, I'm paying them a silly amount of money, and I'm not getting the output I'm expecting. Come help me figure this out. And then we work on the right language and words and mental models and maps for having each side understand what the other means, driving a healthy relationship between the two, and building a healthy tension, because each side has needs. You have this healthy tension, this healthy debate, that you drive to resolution, and you move forward for the better of the whole company.

So I'm going to take this in a slightly different direction, since Rob's finally letting me have a chance to get a word in edgewise. My particular area of focus is more testing, test automation, and development. But a lot of what I'm seeing in the industry is, like you said, things are going fast now with AI. Six months ago, it was like, OK, can I trust it to really build me a valid test case?
It's like, oh, build me some unit tests. And it's like, yeah, here's a bunch of unit tests, and they all literally are marked with test-pass at the end. So they're not doing anything other than saying, hey, you've got tests that don't do anything. How do you see this going? We've already talked about the changes with sprints, from the agile model of weeks to maybe Kanban. How do you see the effects on the whole software development lifecycle and the software test lifecycle in this age of AI? I'm not even going to talk pre-AI at this point. With all the methodologies, all the fundamentals that we've talked about for years, what do you see AI driving us towards? I see a lot of different things out there, and I have a lot of personal feelings on where things may go. I'm curious where you think it's going, and how you think AI is going to affect the SDLC and STLC patterns we're going to be seeing in the AI era.

Yeah. So I'm going to try to make a safe prediction, given that if there's more than a week between the time we record this and the time we air it, this will probably have changed. Here's one of the really cool things that I think is happening. I think we've known what we should have been doing with testing for a fairly long time. We've had pretty well-defined practices of test-driven development that we should all have been doing for 10 or 20 years now, maybe at least 10. But we're not doing them. Why? Because they used to take time, and they used to be really, really expensive. The benefit was there, but sometimes it was hard to pitch: put the work in now, see the benefit later. Companies that did put the work in were seeing the results, so we're seeing these practices adopted more and more across the industry, and that's a really good thing. That's what enables us to go faster. But it's not universal. I would say there are quite a few teams out there that don't even have unit tests, not to mention the harder tests, integration tests, some pattern of end-to-end tests, contract tests, and other good things you should be doing; they're not even close to having those. So I think with AI we will see a step function, where now you can have your agent work overnight: go build me some tests. And I hear you, they will build crap tests. They often do. But I think we're beginning to see much better things happening there. What I often do is set one agent to write the tests. They're happy to please, so they'll write tests that pass themselves. Then I'll set another agent with clear context: there are a bunch of problems here; go find them and analyze them across these categories. Go find all the bad assertions. Go find all the skipped tests. And they'll find them pretty well. If we do that a few rounds, and that's now getting automated in the loop, I think we're getting pretty good results.
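As a concrete illustration of what that second, auditing agent is being asked to do, here is a minimal sketch in plain Python, not from the episode: a script that walks a test directory and flags two of the failure modes named above, tests that assert nothing and tests that are skipped. The directory layout and the heuristics are assumptions for the sake of the example.

```python
# A rough audit of a pytest-style suite: flag tests with no assertions
# and tests marked skip. Heuristics only; a reviewing agent would also
# catch subtler problems, like assertions that can never fail.
import ast
import pathlib

def audit_tests(test_dir: str = "tests") -> None:
    for path in pathlib.Path(test_dir).rglob("test_*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if not (isinstance(node, ast.FunctionDef) and node.name.startswith("test")):
                continue
            has_assert = any(isinstance(n, ast.Assert) for n in ast.walk(node))
            calls_raises = any(
                isinstance(n, ast.Attribute) and n.attr == "raises"  # pytest.raises(...)
                for n in ast.walk(node)
            )
            if not (has_assert or calls_raises):
                print(f"{path}:{node.lineno} {node.name}: no assertions")
            if any("skip" in ast.dump(d) for d in node.decorator_list):
                print(f"{path}:{node.lineno} {node.name}: marked skip")

if __name__ == "__main__":
    audit_tests()
```

Run after the writer agent finishes, a report like this closes the loop described here: the writer produces, the auditor objects, and the cycle repeats until the suite actually guards something.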
So the first thing I think AI is going to do is boost us up. We're going to see much higher adoption of all these good practices that we've known about for freaking ever, and we're finally going to get them implemented. That's exciting. The second thing that I think is super exciting is that, at least in my head, there's still a bit of uncertainty about how to use these tests fully; it varies across companies. I really like end-to-end tests, because they test across your whole stack: you break anything, you're going to find out. At the same time, write more than five of these, and good luck when one of them breaks. Good luck debugging that, because you've got to know the stack end to end. So it's really tough. But it's also really tough to write contract tests across a stack so that I can guarantee the end-to-endness of it. So there are some really good questions there. But with AI, we're now going to have the bandwidth to experiment with these, find out what works, run a lot more experiments than we used to, and further refine what the right test mixes are. My money on where this is going: because complexity goes exponential when you have multiple systems talking to each other, we will figure out how to do a much better job breaking them down, writing clear contracts between them with the help of AI, and enforcing at the contract level so we can statistically guarantee the end-to-end level. Which, again, is pretty standard, but it was really hard to do until recently, when you could set your agents to do it. And then, through this experimentation, I think we're going to get way better at testing. And we have to, because you cannot read every single line of code that the AI agents write, and you cannot afford not to write that much code, because otherwise you're going to be going at the same speed as everyone else. So we've got to get better at writing these tests so we can keep this speed boost. I don't think either extreme works: we're not going to refuse to adopt AI because we've got to read every line of code, and we're not going to just go YOLO and tests be darned.
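To ground the contracts idea, here is a minimal sketch of a contract test in Python using the jsonschema library. The orders API and its fields are hypothetical, invented for illustration; the pattern is that producer and consumer each validate against one shared schema, so the seam between two systems is checked without standing up the whole stack the way an end-to-end test would.

```python
# A minimal contract test between two services: both sides validate
# against the same shared schema, so a breaking change on either side
# fails fast in that service's own test suite.
from jsonschema import validate  # pip install jsonschema

# The shared contract: what the consumer relies on the producer to send.
ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status", "total_cents"],
    "properties": {
        "id": {"type": "string"},
        "status": {"type": "string", "enum": ["pending", "paid", "shipped"]},
        "total_cents": {"type": "integer", "minimum": 0},
    },
}

def test_producer_honors_contract():
    # Producer side: a stand-in for the response a real handler builds.
    response = {"id": "ord-123", "status": "paid", "total_cents": 4599}
    validate(instance=response, schema=ORDER_SCHEMA)  # raises on violation

def test_consumer_fixture_matches_contract():
    # Consumer side: the fixture the parsing code is tested against must
    # itself satisfy the contract, so it can't silently drift.
    example = {"id": "ord-123", "status": "paid", "total_cents": 4599}
    validate(instance=example, schema=ORDER_SCHEMA)
    assert example["total_cents"] >= 0
```

Pinning each seam this way is what lets a team keep only a handful of true end-to-end tests while cheaper contract suites guard every boundary.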
And that is where we will pause. Don't worry, we still have plenty more Thanos coming ahead. And we do not talk about Marvel the whole time; we never actually address the elephant in the room, other than the gauntlet he's got behind him in his background. Do we ever talk about Marvel or his name? We don't get the chance, because it is, as you may have noticed, a really awesome discussion. There are a lot of really good AI nuggets here. As a developer, if you aren't thinking about and considering some of the issues we talk about in this episode and the next, then you are missing out, and you're going to have to catch up. Things are moving very quickly. Even within these couple of episodes, as we were talking, there are things I'm thinking we're going to have to push within our organization, with our team, to make sure our developers are right up there at what's now essentially the cutting edge, only because the bleeding edge is like two weeks ahead of the rest of it right now. If you're on the leading edge, you're almost behind. And if you're trying to be safe at this point, you're probably going to be lost. So definitely take advantage of this. If you're listening to the podcast, this is another one where I think it will be very useful to flip over and watch the YouTube episode, particularly with the bonus material coming out, because there are a couple of very valuable little mini discussions there as well that may help you with your focus. Of course, who knows; with AI changing, that may be useless by the time we get to it. That being said, thank you so much for hanging out with us. Go out there and have yourself a great day, a great week, and we will talk to you next time.

This episode was sponsored by RB Consulting, your partner in building smarter, scalable tech. From startups to established teams, RB Consulting helps you turn tech chaos into clarity with proven roadmaps and hands-on expertise. Visit rb-sns.com to start your next step forward. Also sponsored by Envision QA. They help businesses take control of their software by focusing on what matters most: quality, reliability, and support you can count on. Find out more at EnvisionQA.com.

Thanks for tuning in to the Develpreneur Podcast, where we're all about building better developers and better careers. We'd love to hear your thoughts and feedback, so drop a note to info at develpreneur.com. Be sure to subscribe on Apple Podcasts, YouTube, or wherever you listen. And remember, a little bit of effort every day adds up to great success. Keep learning, keep growing, and we'll see you in the next episode.