🎙 Develpreneur Podcast Episode

Audio + transcript

Getting Started with AI in Your Business: Insights from Hunter Jensen (Part 1)

In this episode, we discuss the challenges of implementing AI in business, including common mistakes companies make and the importance of access control and data protection. We also explore the use of LLMs in business and the benefits of pilot programs for AI implementation.

2025-11-26 • Season 26 • Episode 23 • Implementing AI in business • Podcast

Detailed Notes

The conversation with Hunter Jensen highlighted the importance of access control and data protection when implementing AI in business. He noted that many companies make the mistake of assuming their AI system can safely access all of their data, when in fact the technology is not there yet. He also discussed LLM-based tools such as Microsoft 365 Copilot and Gemini, the benefits of pilot programs for AI implementation, the state of the art in AI technology, and the importance of model agnosticism in AI development. Finally, he touched on hallucinations in AI and how they can be mitigated.

Highlights

  • Common mistakes companies make when implementing AI
  • Importance of access control and data protection
  • Use of LLMs (Large Language Models) in business
  • Pilot programs for AI implementation
  • Comparison of Copilot and Gemini
  • State of the art in AI technology
  • Importance of model agnosticism in AI development
  • Hallucinations in AI and their prevention

Key Takeaways

  • Implementing AI in business requires careful consideration of access control and data protection
  • LLMs can be used to improve business processes and decision-making
  • Pilot programs can help companies test the waters before fully implementing AI
  • Model agnosticism is important in AI development to ensure flexibility and adaptability
  • Hallucinations in AI can be mitigated by cross-checking one model's response with another model

Practical Lessons

  • Companies should prioritize access control and data protection when implementing AI
  • LLMs can be used to improve business processes and decision-making, but should be used carefully and with consideration for data protection
  • Pilot programs can help companies test the waters before fully implementing AI
  • Companies should consider the importance of model agnosticism in AI development to ensure flexibility and adaptability

Strong Lines

  • AI is not a panacea for all business problems
  • Access control and data protection are essential for successful AI implementation
  • LLMs can be used to improve business processes and decision-making, but should be used carefully and with consideration for data protection

Blog Post Angles

  • The challenges of implementing AI in business: a conversation with Hunter Jensen
  • The importance of access control and data protection in AI implementation
  • Using LLMs to improve business processes and decision-making
  • The benefits of pilot programs for AI implementation
  • The importance of model agnosticism in AI development

Keywords

  • AI
  • access control
  • data protection
  • LLMs
  • Copilot
  • Gemini
  • pilot programs
  • model agnosticism
Transcript Text
Welcome to Building Better Developers, the Develpreneur podcast, where we work on getting better step by step, professionally and personally. Let's get started. Well, hello and welcome back. We are continuing our season of Building Better Foundations. This is the Building Better Developers podcast, also known as Develpreneur. I am Rob Broadhead, one of the founders of Develpreneur, also the founder of RB Consulting, where we help you assess technology and build a roadmap for success. In the world of good things and bad things, the good thing is that I live in Nashville, where the weather changes all the stinking time. So a little bit of a cold snap was followed by a warm snap, I guess. Maybe it was a light snap. So I got to have the windows open and things like that, which was great. Got some fresh air, got to air things out. The other thing I find is that sometimes rain comes with that. So as I got everything open, rain started to pour, and it's like, uh-oh, got to make sure everything's closed back down, or at least enough so I don't drift away in a flood. Someone who is grounded firmly and not about to drift away in a flood: Michael, go ahead and introduce yourself. Hey, everyone. My name is Michael Meloche. I'm one of the co-founders of Develpreneur, also known as Building Better Developers. I'm also the founder of EnVision QA, where we help businesses build reliable software through expert testing. Good thing, bad thing: good thing, I had some medical procedures recently and they all came back good. Downside, the prep for those procedures sucked. And today our guest is Hunter Jensen. And I'm not even going to try to introduce you. I want you to start with the introduction and introduce yourself to the audience. Yeah, thanks for having me on.
Hunter Jensen, founder and CEO of both Barefoot Solutions, which is a custom software development shop, as well as Barefoot Labs, which is just now rolling out a product to help mid-sized companies deploy internal AI systems to boost their employee productivity. Well, that actually leads us right into a great starting question. When a company is starting out trying to implement AI, which it feels like everybody is right now, what are some common mistakes or red flags or things you would recommend they look out for as they dive into this? Yeah, you know, a lot of mistakes are being made. This is kind of the Wild West right now. Best practices are currently in development. Right. And one of the biggest mistakes that I see, especially at the leadership level, is that CEOs have this vision of a model that knows absolutely everything about their business, that can help in every single facet of that business, because it knows all and it connects to all the systems and all the rest of it. And what they don't realize is that that's not really possible right now. And there are many reasons why that's not possible, but one of them is simply access control. How could we trust the model to not divulge information to people using it that they're not supposed to know? Right. If a model is trained on everybody's HR data, as an example, we cannot trust that model to interact with individual employees and protect other people's HR information. We're just not there yet. The technology is not there yet. The guardrails are very inconsistent at best. And so it really needs to get a little more narrow in focus and not be this one all-knowing, you know, my-business model that can help everyone with everything. That's just not really feasible right now. So that makes complete sense.
I think everybody would love to see all of our financial numbers, but then when you start saying, well, that also means you're going to have to plug in everybody's salary and things like that, then it's like, wait a minute, I'm not sure I want people to have access to that. And of course, there's a clarifying point: even if the LLM only knows averages and sums and things like that, if you're giving it the data to generate those things, then somewhere in there you've still got the data, you know, everything about your business. So with that in mind, maybe there are some CEOs out there going, oh, crud, I didn't think about that. What is a good pilot program or a good way to get started? Yeah. So, you know, a good way to just get started. I can't remember the last time I talked to a company, at least an American one, that isn't either on Microsoft or G Suite, right? Both of them have products. There's Microsoft 365 Copilot and there's Gemini for G Suite. They're not that expensive, and they give your team exposure to an LLM that's safe. Because guess what? If you don't, they're going and using ChatGPT, even if it's against your AI governance policies. If you have an AI governance policy, and you need one, by the way. Either way, we have to give tools to our team or we're putting our confidential data at risk. Period. Full stop. Right. And so, you know, firing up some licenses, some seats for those products, is a nice way to kind of dip your toe in. Now, those products have major limitations. So that's a starting point. That's not where you end up. That's not the overall solution.
But it's a nice place to, you know, get your feet wet, see what the appetite is, see what the skills of your team are, whether they're adopting this stuff, and get a sense, as an organization, for what direction you need to go in. So from there... well, actually, I will step back a little bit, because you mentioned Copilot versus Gemini, and of course there's ChatGPT and all of those. So, as a technology nerd, a curiosity question: have you found one engine better than the other? I know they have their different flavors, but particularly from a business point of view, for somebody trying to do general business work, you know, marketing, sales, that kind of AI. Yeah, you know... Copilot is better than Gemini in general. But it really just depends on your existing stack. That's not even really a decision that you're going to be making at this point, unless you're new. And that's a big decision: are we going to build this business on the Microsoft stack or on the Google stack? What model you end up using really depends on that, and Gemini versus Copilot alone is not enough to decide how you're going to build your business. There are a lot of factors that go into it. That being said, Gemini 3 came out, I think it was this week, and it's topping leaderboards all over the place. And so, you know, it's really historically been quite interesting. In 2017, it was a Google research team that kind of discovered transformer models, and they published a paper called Attention Is All You Need. And then the race was on, and Google lost that race big time. Right? ChatGPT. Do you guys remember Bard, which was the premature, oh gosh, we're getting crushed by OpenAI, let's release Bard? And it was just awful, just awful. So bad they had to rebrand it.
And so, you know, basically Microsoft, through its majority investment in OpenAI, won that race to come to market with generative pre-trained transformer models, GPT. That's what that stands for. What we've seen, though, and honestly maybe this week is when Google kind of officially caught up, is where their models are now. So now it's like, okay, new release of GPT-5, that's state of the art. Gemini 3, that's actually a little bit better, now that's state of the art. And then Claude does something, and then Mistral does something, and then DeepSeek is over here doing amazing things. So while OpenAI clearly came out way out in front, it's getting way more competitive now, and each new release is better than the best release that the other companies have put out. So it's an arms race now, which is really good for us as consumers of this technology. Right? We want it to be competitive. That brings down pricing and improves all the products. Nobody's complacent. Everybody's sprinting as fast as they can because we've got a real competition going on. And it's not just Microsoft versus Google, or OpenAI versus Google. Right? There are all these other players, Anthropic and Mistral and you name them, doing really interesting things and starting to specialize a bit. Like, Claude was really great at writing code before some of the others were. And so the landscape is just evolving so fast that it's honestly quite hard to keep track of it all. Yeah, it kind of reminds me of the early Java days, back when it was Oak and then it became Java and you didn't have any of the parsers for DOM or SAX. I mean, it was like the sky's the limit. Libraries were everywhere every other week. For those getting into AI.
So you talked a little bit about Copilot and Gemini being a good, safer way to get started and to protect your stack and your information. You mentioned the LLM. Most people just see AI as AI. It's like, oh yeah, there's some model back there, I basically ask it something and I get something out of it. For developers, can you explain what you mean by each having a better stack? How would you suggest developers or entrepreneurs, as they approach these AI models, look for the right model for what they need? Sorry, it took me a while to get there, but there's a lot in what you brought up that I want to get a little more focused on. Yeah, it can be hard. It can really be hard. You know what I would suggest? Okay, so let's take the confidential-information piece out of this. Let's say it's just individuals trying something out, not dealing in confidential client data. ChatGPT 5.1 is probably your starting point. It's the most robust, the most mature, and just generally the ChatGPT platform is the right place to start right now, I would say. Now, I haven't even evaluated Gemini 3 yet, so I might say something different next week, but that's a good starting point. What I encourage folks that are starting out to do is try out a few. I will often have open, let's call it three: maybe I've got Perplexity and I've got Claude and I've got ChatGPT open. And when I'm starting a task, I will often ask all three. I will decide which one I like the most after the first few prompts and then dive in deeper with that one, because there's a lot to be learned.
I mean, ChatGPT 5, for example, is horribly slow, because in my opinion right now, especially 5, and maybe not so much 5.1, it's overthinking everything. The amount of time it takes to do its reasoning and make all these calls is tremendous. We can't even deploy 5 for our product because of how long it takes and how many tokens it gobbles up. Right? I mean, you can wait minutes sometimes. And so I encourage switching, at least at the beginning of a particular task, to figure out which one is going to work. Now, at this point, most of them work for most things, you know what I mean? One might be the best, but do you really need the best for what you're doing today? Now, when you're making a decision about, okay, we need to pick a model to put into our product, that's important. You need to test out the different models and see which ones are working. But really, why are you building a product that can only use a single model? You should be building your product so that it's model agnostic, to a certain extent, and depending on what you're trying to accomplish, you may be using different models for different things. And you need to be building so that you can accommodate new versions of models that come out, right, and have a full test harness in place so you can evaluate new models quickly to see if they're a good fit for your product. So it's ever-changing. And so I would caution folks starting out against going all in on one model. You need to be thinking of this as a multi-model world where you're switching back and forth all the time. And this is just a little hack: sometimes it's interesting to, okay, ask ChatGPT-5 something, take both the prompt and the response that you got, plug it into a different LLM, and say, this is what I learned, or this is what GPT-5 said. What do you think about this? Right? And get it to kind of critique. Are there errors here?
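The model-agnostic product design described above, one shared interface with swappable providers, can be sketched in a few lines. This is a minimal illustration, not anyone's actual product code: the adapter and registry names are invented, and real adapters would call the provider APIs instead of returning canned strings.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Minimal interface every provider adapter must satisfy."""

    def complete(self, prompt: str) -> str: ...


class OpenAIAdapter:
    # Stand-in for a real OpenAI API call.
    def complete(self, prompt: str) -> str:
        return f"[openai] answer to: {prompt}"


class GeminiAdapter:
    # Stand-in for a real Gemini API call.
    def complete(self, prompt: str) -> str:
        return f"[gemini] answer to: {prompt}"


# A registry lets the product switch models via configuration, and makes
# it easy to drop a new adapter into a test harness when a model ships.
MODELS: dict[str, ChatModel] = {
    "openai": OpenAIAdapter(),
    "gemini": GeminiAdapter(),
}


def ask(model_name: str, prompt: str) -> str:
    """Route a prompt to whichever model the caller (or config) selects."""
    return MODELS[model_name].complete(prompt)
```

The point of the registry is that evaluating a new release becomes a one-line change: register the new adapter and run the same test harness against it.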
Is there missing information here? I really like doing that, and it can really illuminate hallucinations, which happen way less now, but they're still happening. It just gives it some oversight. Nice. I like that, because I've done that quite a bit in the coding world. Thankfully, I mainly use ChatGPT as far as code goes, to just build me a quick stub or something, the boilerplate stuff. Or if you have a problem, it's like, hey, give me this, and it gives me a solution. If I don't like it, I pop it in something else and kind of troubleshoot it. Or you can almost look at it and know it's wrong sometimes. Following on from that: at the beginning you talked about keeping information safe, protecting the access. So as we're using AI more, as we're building our tool sets with these LLMs, what are some steps people can take, as they evolve their access or their experience with AI, to keep their data secure and protect it from being misused or unnecessarily exposed to other people through the model? Yeah, so the first step is you need to pay for access to these things. If you're not paying, they're doing whatever they want with your data. Whatever they want. And so you have to actually evaluate the licenses that you pay for to make sure that they are in line with the kind of security posture that you need to have. You know, some folks, marketing folks, for example, don't really care all that much. What they make, they put out into the world on purpose, right? But then there are others, like attorneys, that absolutely cannot put confidential client information into a third-party system. Even if they're protected by the license, they often still can't. And so it's a spectrum of what your needs are.
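The cross-checking hack Hunter describes, feeding one model's prompt and response to a second model for critique, is easy to wire up. A minimal sketch, with the models stubbed as plain functions; the critique template wording is mine, not from the episode:

```python
CRITIQUE_TEMPLATE = (
    "Another assistant was asked:\n{prompt}\n\n"
    "It answered:\n{answer}\n\n"
    "Critique this answer: are there factual errors, hallucinations, "
    "or missing information?"
)


def build_critique_prompt(prompt: str, answer: str) -> str:
    """Package the original prompt and first answer for a second model to review."""
    return CRITIQUE_TEMPLATE.format(prompt=prompt, answer=answer)


def cross_check(prompt, ask_first, ask_reviewer):
    """Ask one model, then have a different model critique its answer.

    ask_first / ask_reviewer are any callables that take a prompt string
    and return a response string, e.g. wrappers around two LLM APIs.
    """
    answer = ask_first(prompt)
    critique = ask_reviewer(build_critique_prompt(prompt, answer))
    return answer, critique
```

Because the two models rarely hallucinate the same thing in the same way, the reviewer will often flag exactly the fabricated detail the first model slipped in.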
And so the first step would just be to actually read the licenses, or ask for summaries or something like that, to make sure that what you're paying for is in line with what you need from a data protection standpoint. Okay, that makes sense. To follow up on that: so now we've talked about the introduction to this, we've talked about the access. Now if I want to start building that model for my company, start building an application, or put my information in to have AI help me analyze my company, things along those lines, what's the next step in that progression, from what you see? Yeah, so this is precisely why we built our product, Compass: to be that next step. As a company, you need something that will connect to your other systems, that will leverage the existing access control that's in place already, and that you can own and host yourself, right? You know, Copilot works great if you are integrating with stuff on the Microsoft stack. But if you have other systems, it doesn't work that great. If you need to process really large files, it doesn't work at all. It actually completely falls down. And so the next step is looking at implementing some models. There's a lot available. There are open-source models, and you can also get access to closed-source models through AWS Bedrock or through Azure AI Foundry and Azure OpenAI. And so, you know, you get to a point where you can't be sending this information outside of your own firewall. It needs to remain in your network. For companies that are serious about data security or compliance or risk control, things like that, you really need to own the stack. And so I suggest deploying some of those, with an application on top for your team to be able to interact with.
And so, when I say piggyback on existing access control, I want to dig into that. What does that mean? We talked about these financial reports, right? Let's say I want to pull in financials from a third-party system. Now, if I have a system that's connected to it, I can authenticate as myself to QuickBooks, and QuickBooks will only give me what I'm allowed to see. Right? And then I can put that into the context of whatever it is I'm trying to do and say, hey, generate a report for me, or do this analysis for me, or what have you. And so by piggybacking on the existing access control, we no longer have to trust the model. The model doesn't know all and see all. The model that you're talking to right now only knows what you're allowed to know, and it can only access what you're allowed to access. And when you build it out that way... because we cannot trust the models. The LLMs are not to be trusted, right? We need our own mechanisms for protecting our business and our data. And so by architecting it that way, you kind of sidestep that problem by saying, well, we already have access control, of course, right? The CEO can access the financials, but the knowledge workers can't. And that's kind of the next phase, when you're ready for this internal AI system that's connecting to all of your other stuff. And that's when it gets really powerful. And that's where we're going to pause part one of our interview with Hunter Jensen. Great conversation. It's one of those where it seems like every time we mention AI, it sort of goes off the rails. But in this case, those were the rails. This is somebody who is the CEO of an AI-driven company. When you've got a .ai at the end of the name, hopefully that means something, and in this case, I think you see that it does. Hunter has really thought about this stuff and is a great resource. And we will continue that in the next episode.
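The piggybacking pattern outlined above, fetch data under the user's own credentials so the model's context can never contain anything the user couldn't already see, might look roughly like this. The roles, record names, and permission table are made up for illustration; a real system would authenticate to something like QuickBooks on the user's behalf rather than read from in-memory dicts.

```python
# Hypothetical per-user permissions, standing in for the access control
# a system of record (e.g. QuickBooks) already enforces when you
# authenticate as yourself.
PERMISSIONS = {
    "ceo": {"revenue", "salaries"},
    "analyst": {"revenue"},
}

# Hypothetical records held by the system of record.
RECORDS = {
    "revenue": "Q3 revenue: $1.2M",
    "salaries": "Payroll detail: ...",
}


def fetch_as_user(user: str, requested: list[str]) -> list[str]:
    """Return only the records this user is authorized to see."""
    allowed = PERMISSIONS.get(user, set())
    return [RECORDS[r] for r in requested if r in allowed]


def build_model_context(user: str, requested: list[str]) -> str:
    # The LLM prompt is assembled exclusively from data fetched under the
    # user's own authorization, so we never have to trust the model with
    # access-control decisions.
    return "\n".join(fetch_as_user(user, requested))
```

The key design choice is that authorization happens before the prompt is built: the model is simply never handed data the requesting user couldn't pull themselves.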
Thank you so much for your time. We appreciate it, and all that you guys have done, just hanging out there as we're getting towards the end of the year, trying to be in that very thankful mood. You guys are at the top of our list. Go out there and have yourself a great day, a great week, and we will talk to you next time. RB Consulting, your partner in building smarter, scalable tech. From startups to established teams, RB Consulting helps you turn tech chaos into clarity with proven roadmaps and hands-on expertise. Visit rb-sns.com to start your next step forward. Also sponsored by EnVision QA. They help businesses take control of their software by focusing on what matters most: quality, reliability, and support you can count on. Find out more at EnvisionQA.com. Thanks for tuning in to the Develpreneur Podcast, where we're all about building better developers and better careers. We'd love to hear your thoughts or feedback, so drop a note to info@develpreneur.com. Be sure to subscribe on Apple Podcasts, YouTube, or wherever you listen. And remember, a little bit of effort every day adds up to great success. Keep learning, keep growing, and we'll see you in the next episode.