🎙 Develpreneur Podcast Episode


Building Better Developers with AI: Season Premiere


2026-03-15 • Season 27 • Episode 18 • AI readiness and data integration for AI initiatives • Podcast

Summary

In this episode, we discuss AI readiness and data integration for AI initiatives. Our guests, Michael Moulache and Matt Zoltow, share their insights on how to make data AI ready and the importance of data integration. We also touch on the regulatory hurdles that companies may face in the future.

Detailed Notes

The episode began with an introduction to the topic of AI readiness and data integration. Our guests, Michael Moulache and Matt Zoltow, shared their experiences and insights on how to make data AI ready. They emphasized the importance of data integration and how it powers all AI initiatives. We also discussed the regulatory hurdles that companies may face in the future, particularly with regard to data protection and AI law. The guests provided examples and analogies that made the discussion clear and concrete.

Highlights

  • Data is all around us, and especially in larger organizations, you've got the problem of a lot of different data silos.
  • Data readiness or getting data AI ready underlyingly really is making sure that the entire data can travel from any source model that you have without obviously breaking internal and legal rules to any target model as well.
  • There's various different ways on how you can get to it.
  • And what I often like doing is a little bit of a plumbing analogy.
  • The data integration altogether that really powers all of the AI.

Key Takeaways

  • Data is all around us, and companies need to make it AI ready.
  • Data integration is crucial for AI initiatives.
  • Regulatory hurdles may arise in the future.

Practical Lessons

  • Companies should invest in making their data AI ready.
  • Data integration should be a priority for AI initiatives.
  • Companies should be aware of regulatory hurdles and plan accordingly.

Strong Lines

  • Data is all around us.
  • Data readiness or getting data AI ready underlyingly really is making sure that the entire data can travel from any source model that you have without obviously breaking internal and legal rules to any target model as well.

Blog Post Angles

  • The importance of data AI readiness in AI initiatives.
  • The role of data integration in powering AI initiatives.
  • The regulatory hurdles that companies may face in the future.

Keywords

  • AI readiness
  • data integration
  • AI initiatives
  • regulatory hurdles

Transcript Text

Welcome to Building Better Developers, the Develpreneur podcast, where we work on getting better step by step, professionally and personally. Let's get started. Well, hello and welcome back. We are continuing our season where we are not just building better developers, we're building a better launch pad. Essentially, we're getting unstuck, we're moving forward, we're getting some momentum going. And this is the perfect thing you need at the start of a year. This is the Develpreneur podcast. I am Rob Brodhead, one of the founders of Develpreneur. Also the founder of RB Consulting, where we help you with a technology check to figure out, before you take that big step, whether it's a project, whether it's an AI thing, whatever it happens to be, make sure you've actually got your ducks in a row before you start. Good thing and bad thing. Good thing is, in a season that has been far rainier than it should have been, I finally have a sunny day. So I get to have natural lighting again and fun stuff like that. The bad thing is that I'm stuck inside a little bit more than I want to be, so I'm not going to enjoy it quite to the level that I would like. But I am going to allow all of us to enjoy Michael introducing himself. Hey, everyone. My name is Michael Moulache, one of the co-founders of Building Better Developers, also known as Develpreneur. I'm also the founder of Envision QA, where we build and test custom software that eliminates those bottlenecks so your business runs smoother and grows faster. Good thing, bad thing. Well, still in Tennessee. We're still having our weird seasons. We're not quite to spring yet. We're not quite out of winter yet. And heck, who knows if we're going to get snow again. But anyway, that's kind of the bad thing. Just uncertainty about the weather. We had tornadoes up north just the other day. So good thing: I get to be here with you guys, learning some new things and talking about AI and some new problems. Cool.
And he let the cat out of the bag. Yes, we will be talking about AI a little bit with yet another interview episode. And we're going to be talking with Matt today. You want to go ahead and introduce yourself, please, Mr. Matt? Yeah, Rob, thank you so much. And Michael, great to be on. Really appreciate it. My name is Matt Zoltow. I lead the international business of IntelliPaths. It's a data integration platform. So we help build and drive AI transformations. So basically connecting all sorts of systems, cleaning up the data, making sure it becomes AI ready and, obviously, compliant in the process. I can't think of any bad things here. I can only think of many good things. So basically getting everything AI ready. And yeah, looking forward to this conversation here. Thanks for having me. Excellent. Well, we'll start right with... Because we hear this now, because AI is everywhere. There's now this wave of people talking about getting their companies AI ready and particularly, like you said, getting your data AI ready. How do you see that? Maybe in layman's terms, or as a summary: what is it? What makes data AI ready? Yeah, that's a great question, Rob. Data is all around us. And especially in larger organizations, you've got the problem of a lot of different data silos. Different departments have access to different forms of information. Very often they stick to that information. They don't really want to share it with anybody. And as a result, you've got multiple different systems on the one hand, different duplications of data and systems on the other. And data readiness or getting data AI ready underlyingly really is making sure that the entire data can travel from any source model that you have without obviously breaking internal and legal rules to any target model as well. So it's all about this interconnectivity with that. There's various different ways on how you can get to it.
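The "data traveling from any source to any target without breaking internal and legal rules" idea can be sketched in a few lines of code. This is a hypothetical illustration, not IntelliPaths code: the field names, the masking rule, and the record shape are all assumptions made for the example.

```python
# Hypothetical sketch of one "AI-ready" data hop: records from a source
# system are normalized and PII is masked before reaching a target
# (e.g., an AI pipeline). All field names here are made up.

def mask_email(email):
    """Mask the local part of an email so the record stays compliant."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain if domain else "***"

def to_ai_ready(record):
    """Normalize one CRM-style record and mask or drop PII fields."""
    return {
        "customer_id": record["id"],
        "region": record.get("region", "unknown").lower(),
        "email": mask_email(record["email"]),  # masked, not dropped
        # free-text notes often contain PII, so they are excluded here
    }

source_records = [
    {"id": 7, "region": "EU", "email": "jane.doe@example.com", "notes": "..."},
]
ai_ready = [to_ai_ready(r) for r in source_records]
print(ai_ready[0])
```

The point of the sketch is the shape of the hop, not the specific rules: each source gets normalized into one common form, and compliance transformations happen in transit rather than in either endpoint system.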
And what I often like doing is a little bit of a plumbing analogy. It's the data integration altogether that really powers all of the AI. And in fact, there are some interesting stats, or predictions, that come out of Gartner that say that by the end of this year, almost 60% of all AI projects will be abandoned because of a lack of AI-ready data. So with that, there's billions and billions worth of investment that's ultimately lost as a result of it, just because the foundation isn't there. And what I'm sure many have discovered, especially if they're in some technical environments as well, is that AI tends to work pretty well in defined environments. So sandbox environments, etc. The moment you take it out of that, that's when the rubber really hits the road. And that's where lack of data, lack of access to real-time data, where the plumbing basically isn't there in order to sustain these AI initiatives, shows up. So that's really where data AI readiness comes in: making sure you've got this foundational plumbing layer that's there, ensuring that anything can be connected with everything. And I mean literally everything. The average organization has, I'm talking enterprise, 80 to 120 different systems. And that's only expanding; AI is really adding to that. So that data pipe really needs to be robust. So that brings up a good point, because I think everybody sees it: if you get into an enterprise, you're going to have more cooks in the kitchen, essentially, because as they grow, they add systems and all that kind of stuff. Do you see, sort of, the flip side of that, that it means if you're a startup, if you're only a year or two old, you're more likely to be AI ready, that you're all set and you don't have to worry about this AI readiness fear? So if you're a startup, there's probably no better time to be a startup in this environment than now, right?
Because you need to make sure that you focus. Especially in the AI world, what we see is a significant move towards best-of-breed applications. We've seen that across many, many years. Best-of-breed applications basically mean that you don't just take the large vendor that happens to cover a lot of different parts; you buy the vendor that happens to do one thing very, very well, and then assume that it connects with the entire other ecosystem. And that's where things fall over, because on the one side, it's incredibly important to be focused on one particular application that you're building. Let's pick an HR application just for argument's sake. Let's say you're building the best HR talent sourcing system that exists out there. That still needs to be connected to all sorts of legacy systems that may not be on your radar. And the decision that's going to be made on the buyer side in the end is: okay, how does this additional puzzle piece that really has an incredible value proposition, how does that really fit into the entire ecosystem? If that isn't 100% taken care of, CIOs will reject it simply because there's too much risk that comes with it. And underlyingly, again, they want to make sure that they're not creating more data silos for themselves, but ensuring that the entire data flows. Yeah, and that touched on something else that's really interesting. Because, historically, technology has sort of ebbed and flowed between best-of-breed versus all-in-one kinds of solutions. And it feels like not only have we in the last several years, I think even before AI, moved towards more best of breed, but integration and migrations and getting all of those pieces together have just grown, and systems have grown in complexity.
Do you think that AI is actually going to accelerate that even more as people, as organizations, learn how to use AI to maybe build those little custom, their own custom, best-of-breed solutions as opposed to trying to find them on a shelf somewhere? They say, you know what, I'm going to use AI and be able to push and bring back more of the custom solutions. It's an interesting conversation I actually had the other day. I mean, we've all seen the big technology vendors tank recently on the markets because of fear about whether AI is going to reduce licensing of large vendors, etc. And the honest and potentially uncomfortable truth is it certainly has the potential. And vendors, especially the larger ones who are listed, are very much trying to protect their share of the pie. That's what I mean when I say it's a fantastic opportunity to build new applications that are also highly bespoke. But again, they need to be connected to the wider ecosystem, right? Because just because AI now enables custom development much quicker, potentially better than ever before, that still needs to be connected to, in some cases, a significant legacy tech stack. So, from my point of view, we often work with enterprises, right? So we're talking very, very large companies. We've got some of the largest vehicle manufacturers out of Germany that are using us. We've got their suppliers using us. We've got G7 governments that are using us. So I'm looking at it a little bit from that lens. But even if you're looking at mid-market, you've got a significant tech stack that's there, also accrued through acquisitions, right? It's very common that companies are acquiring one another. And with that, they're taking over a legacy tech stack. I kind of think that is probably the easiest angle on it, because that is a nightmare to manage internally. It doesn't matter how big or small the company is. But again, that still needs to be integrated into something else.
Otherwise, you've got another data silo somewhere else. So anything can be possible now, which is, again, truly exciting. What I really think, though, is that companies are really looking at the bigger picture of it, especially now that they feel anything might be possible on the AI horizon, maybe not in this moment, but in three to five years' time. And that's where smaller disruptors come in. They might be disrupting today, but they've got to be careful not to get disrupted in a couple of years' time by an internal team of a 24-year-old and a laptop. And it could happen. So protecting revenue models around that is going to become really, really critical. There's a lot we can learn from the big ones. So it's funny. Hang on, Rob. I've got a quick question. It was kind of funny there. You mentioned how companies over time are doing acquisitions. You know, they acquire a company instead of spending the time to rebuild their current systems; they look for external companies to help with that. Do you see that escalating to the point where companies won't be able to do that anymore, where these bigger companies are going to start failing to, I guess, be able to stay on top of things by acquiring things? Basically, the market's going to swallow them up because people are going to be able to do it faster themselves. Whereas the larger corporations, their model of, here, I'll acquire this, or here, I'll go buy this because it's better than what I have to integrate, just isn't going to work. It's not going to scale fast enough to keep up with this boom. I mean, that's the 200, 500 billion dollar, trillion dollar question, right? And I guess that's what we've seen in fluctuations on the markets of late as well, because that fear is certainly present. I would argue anything is possible. I mean, it's an uncomfortable truth potentially.
But I mean, even right now, I could be on my laptop, run Claude Code, and reverse engineer copies of, you know, not full data layers just yet, at least not on the underlying integration layer. So that's a lucky place to be in. But if I wanted to create a new CRM system that's completely bespoke to myself, I could do that. And I know companies that have done that, and they've asked us to integrate it, because for them, it just becomes another endpoint. But what they want to make sure of is that they can connect that new system that they've created with an existing legacy tech stack. So don't underestimate legacy. There's also this stigma that everything's going to move into the cloud. Yes, a lot of things, from a scalability point of view, are really going to go there. However, there's going to be an incredible amount of lockdown happening, with data being treated by countries, by the way (more on that maybe later), in the interest of national security: personally identifiable data, obviously transaction data, all that sort of data. There are going to be countries significantly locking down on, in fact putting guardrails in place on, how enterprises (they're the easiest ones to target) are allowed to use their existing customer data, where it's stored, how it's processed, etc. We're starting to see this already in Korea, for example. They've passed the AI law; it's got a long, complicated name. Ultimately, it's the AI readiness law, they say. It's supposed to be all about transparency over there. It's really all about regulation, very tightly regulated, and it comes with a lot of penalties associated, in fact. Korea is up first; Europe famously comes up soon as well, in fact, with additional legislation here.
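The jurisdictional guardrails Matt describes, rules on where data may be stored and processed, boil down at deployment time to a residency check: does the region where processing will actually run match the region the regulation or the customer requires? A toy sketch of such a gate follows; the region names and the exact matching rule are hypothetical.

```python
# Hypothetical data-residency gate: a deployment plan is only allowed
# if every workload's processing region satisfies its requirement.

def residency_ok(deployment_region, required_region):
    """True if the planned processing region meets the requirement."""
    return deployment_region.lower() == required_region.lower()

def check_plan(plan):
    """Return the workloads that violate their residency requirement."""
    return [
        name
        for name, (deployed, required) in plan.items()
        if not residency_ok(deployed, required)
    ]

# Made-up plan: AI processing must stay in Luxembourg, storage in Ireland.
plan = {
    "ai-processing": ("ireland", "luxembourg"),  # violation
    "object-storage": ("ireland", "ireland"),    # fine
}
print(check_plan(plan))
```

Real regimes are of course richer than string equality (regional groupings, processing vs. storage, sub-processors), but the design point stands: encode the rule once and gate every deployment through it rather than auditing after the fact.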
If I were an AI startup, I would focus on these parts, because the larger, especially the mid-sized to larger, organizations will not be allowed soon, they will not be allowed to actually move much with it. So if you can provide local models around that, that stick to a certain jurisdiction, that can add certain value in a really defined niche, yes, you might be able to find somebody, the 24-year-old with a keyboard, who can replace that. But you will have expertise in a field that is going to be hard to match. So that's probably the bet that I would take when it comes to specific AI software pieces. So with companies, you mentioned going more with the private AI models. For startups and for entrepreneurs and developers, right now we're primarily using cloud-based systems, because that's the most affordable. And with RAM prices going through the roof, hard drives being consumed by companies left and right, and prices just skyrocketing, what is a recommendation or an approach you might have for a startup or an individual to try to build their own private LLM, to kind of build these models so that they don't break the bank, but can still kind of tap into these markets like you're talking about? Yeah, that's a good question. Overall flexibility, I think, is really important. What I mean by this is that there are obviously certain different standards. For example, you develop your application to allow almost a bring-your-own-AI-model approach, where customer A might be more tied towards OpenAI. So, therefore, you can use the OpenAI connector for it. It turns out Gemini, various on-premises systems, whether it's Ollama, etc., and even some of the Chinese ones are also offering connectivity via the OpenAI standard. So to be honest, I would go down that approach, because that allows you to test in any sort of environment, whichever one happens to be cheapest for you in that moment. You can test on-premises with your own environment as well.
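The bring-your-own-model approach works because many backends expose OpenAI-compatible endpoints (Ollama, for instance, serves one locally at port 11434 under `/v1`), so the same client code can target whichever provider a customer prefers just by swapping the base URL and model name. A minimal stdlib-only sketch of such a registry; the specific model names and the idea of keeping providers in one table are illustrative assumptions:

```python
# Hypothetical "bring your own AI model" registry: every backend speaks
# the OpenAI-compatible chat API, so only base_url and model differ.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    # Ollama serves an OpenAI-compatible API locally on port 11434
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def client_config(provider, api_key):
    """Return the settings an OpenAI-style client needs for this backend."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    cfg = dict(PROVIDERS[provider])  # copy so the registry stays untouched
    cfg["api_key"] = api_key  # local backends often accept any placeholder key
    return cfg

# The same application code can now target either backend:
print(client_config("ollama", "not-needed")["base_url"])
```

The payoff is exactly what Matt describes: the customer, not the vendor, picks the LLM, and switching environments (cloud, on-premises, cheapest-of-the-day) is a configuration change rather than a rewrite.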
You can test in a cloud environment, and you can tell the customer at the end of the day, look, we don't actually care which particular LLM you prefer. We're not going to tell you that you have to use our Ollama, for example, which eats up resources that the customer may not have. So instead, they get to use OpenAI, which happens to be running on the Azure cloud. I'm making that up. So if they want to test for themselves, there are obviously various different options. Do you want a thinking model? Do you not want a thinking model? Do you want to have Llama 3? Do you want to have Mistral? Flexibility, and deployment flexibility, I think are going to be really, really important, because you're going to be talking to one customer who very much wants the entire AI stack to live in their Azure environment, for example. We've got some customers who want everything in there; they don't care, because they also have contractual commitments. We've got the same side on AWS, of course. In fact, as I disclosed earlier, I spent a lot of time in Asia. We've got customers that are using Alibaba Cloud as well. So our integration platform, and I'll give one example here, is completely globally deployable, wherever customers want it to be. So that's different to how others do it. And that's really because our philosophy has always been you need to be anywhere and everywhere the customer is. We don't want to impose anything on them. So the reason I'm sharing that is I highly recommend that type of mentality, because it allows you to be as flexible as the customer requires you to be. If a customer suddenly says, we need that entire environment, all the processing, or the AI processing, to be in Luxembourg, but it turns out your server is in Ireland, well, that's all well and good, you can say, look, you know, it's the European Union, but at the end of the day, there might be a requirement: is it in Luxembourg? Yes or no.
If you don't have that flexibility, you're immediately out, and your competitor is going to be in. So having that architectural freedom to put it anywhere, I think, is going to open up a lot of doors there. And it clears all the regulatory hurdles that I predict for the wider industry. Start small; make sure it's scalable out of the box. And that is where we're going to pause for this episode. We're going to go ahead, and we are not done; we're going to continue our conversation with Matt next episode. Really good stuff. There are a lot of great conversations here. We had a fun time with him. I think he had a great time. Looking forward to it. Take notes, all that kind of good stuff next time around, and feel free to reach out to him afterwards. I think there are a lot of good opportunities there, of things that maybe we haven't thought about. And sometimes these are those conversations where just having them will suddenly spark some neat ideas in your head. That being said, we're out of here. Have yourself a great day, a great week, and we will talk to you next time.