Detailed Notes
How do you safely and effectively get started with AI in your business? In this episode of Building Better Developers, we talk with Hunter Jensen, founder and CEO of Barefoot Solutions and Barefoot Labs, about practical first steps for adopting AI, choosing the right models, and protecting your company’s data.
Whether you’re a business owner, technology leader, or developer exploring AI strategy, Hunter breaks down what to avoid, what to try first, and how to prepare your organization for the future of AI.
⸻
🔍 What You’ll Learn in This Episode:
• Why all-knowing AI models aren’t realistic (yet)
• The biggest mistakes companies make when adopting AI
• Safe first steps for getting started with AI in your business
• Copilot vs. Gemini vs. ChatGPT for teams
• How to evaluate AI models without locking yourself in
• Why data protection must come before experimentation
• When and why companies need internal AI systems
• How access control impacts AI use
⸻
⭐ About Hunter Jensen
https://www.linkedin.com/in/hunterjensen/
Founder & CEO of Barefoot Solutions and Barefoot Labs, specializing in custom software development and internal AI systems that help companies securely leverage artificial intelligence.
⸻
📢 Connect With Us
Website: https://Develpreneur.com
Podcast: Building Better Developers
Hosted by Rob Broadhead & Michael Meloche
Transcript Text
All right. What we do is a conversational approach. Sorry, adjusting my mic a tad. It'll take us about an hour, and we end up splitting it into two roughly 25-minute episodes. We'll start off the first one with an introduction: I'll introduce myself, Mike will introduce himself, we'll allow you to introduce yourself, and then we'll dive right in. Like I said, we keep it pretty free-form, sometimes even within the introduction. Definitely as we get into the first couple of questions, there are so many follow-ups that the next thing we know, we're all the way through the episode. And at the end, we'll ask you to provide any good links you want for reaching out, social links or anything like that. We'll also have links in the show notes, but we find that tends to be a really good way to do it. One second. All right. And we're going to talk about AI, and every time we've even started in on that conversation, it just keeps going. So I expect we'll have nothing but rabbit trails to go down, and of course everybody loves AI today, too. So, any questions before we get started? >> Yeah, just one, which is: who's the audience? How would you characterize them? >> The audience is typically people who are starting their career in technology, usually junior to mid-level technology developer types, though it could go further in. We're really focusing on the ones that have an entrepreneurial side, so they're doing side hustles and things like that. So it's really a combination of young entrepreneurs, technology-savvy entrepreneurs, and technologists.
So that's a lot of where the focus will be: how AI can be used both as an implementer of it, but also how we can use AI to help ourselves in our daily routines. >> Great. Great. >> Then we will dive right into it. Oh, by the way, it is video and audio. Some of the pre-show and post-show stuff does end up on the audio side, but then we use our specific audio cut points for the audio-only podcast. Well, hello and welcome back. We are continuing our season of Building Better Foundations. This is the Building Better Developers podcast, also known as Develpreneur. I am Rob Broadhead, one of the founders of Develpreneur, also the founder of RB Consulting, where we help you assess technology and build a roadmap for success. In the world of good things and bad things: the good thing is that I live in Nashville, where the weather changes all the stinking time. A little bit of a cold snap was followed by a warm snap, I guess. Maybe it was a light snap. So I got to have the windows open, which was great. Got some fresh air, got to air things out. The downside is that with that sometimes comes rain. So as I got everything open, the rain started to pour, and it's like, uh-oh, got to make sure everything's closed back down, or at least enough that I don't drift away in a flood. As someone who is grounded firmly and not about to drift away in a flood, Michael, go ahead and introduce yourself. >> Hey everyone, my name is Michael Meloche. I'm one of the co-founders of Develpreneur, also known as Building Better Developers. I'm also the founder of Envision QA, where we help businesses build reliable software through custom development and expert testing. Good thing, bad thing: the good thing is I had some medical procedures recently and they all came back good. The downside: the prep for those procedures sucked. And today, our guest is Hunter Jensen.
>> And I'm not even going to try to introduce you. I'm going to allow you to start with the introduction and introduce yourself to the audience. >> Yeah, thanks for having me on. Hunter Jensen, founder and CEO of both Barefoot Solutions, which is a custom software development shop, as well as Barefoot Labs, which is just now rolling out a product to help midsize companies deploy internal AI systems to boost their employee productivity. >> Well, that actually leads us right into a great starting question. When a company is starting out and trying to implement AI, which it feels like everybody is right now, what are some common mistakes or red flags they should look out for as they dive in? >> Yeah, a lot of mistakes are being made. This is kind of the Wild West right now; best practices are currently in development, right? One of the biggest mistakes that I see, especially at the leadership level, is that CEOs have this vision of a model that knows absolutely everything about their business, that can help in every single facet of that business, because it knows all and connects to all the systems and all the rest of it. What they don't realize is that that's not really possible right now. There are many reasons why it's not possible, but one of them is simply access control. How could we trust the model not to divulge information to people using it that they're not supposed to know? If a model is trained on everybody's HR data, as an example, we cannot trust that model to interact with individual employees and protect other people's HR information. We're just not there yet. The technology is not there yet. The guardrails are very inconsistent at best.
It really needs to get a little more narrow in focus and not be this one all-knowing "my business" model that can help everyone with everything. That's just not really feasible right now. >> That makes complete sense. It's like, oh, we would love to see all of our financial numbers, but then when you start saying that also means you're going to have to plug in everybody's salary and things like that, it's like, wait a minute, I'm not sure I want people to have access to that. And of course there's a clarification: even if the LLM only surfaces averages and sums, if you're giving it the data to generate those things, then somewhere in there you've still got the data. It knows everything about your business. So with that in mind, and maybe there are some CEOs out there going, "Oh, crud, I didn't think about that," what is a good pilot program or a good way to get started? >> Yeah. So, a good way to just get started: I can't remember the last time I talked to a company, at least an American one, that isn't either on Microsoft or Google Workspace, right? Both of them have products: there's Microsoft 365 Copilot and there's Gemini for Google Workspace. They're not that expensive, and they give your team exposure to an LLM that's safe, because guess what? If you don't, they are going to go use ChatGPT, even if it's against your AI governance policies. And if you don't have an AI governance policy, you need one, by the way. We have to give tools to our team, or we're putting our confidential data at risk. Period. Full stop. So firing up some licenses, some seats for those products, is a nice way to dip your toe in.
Now, those products have major limitations, so that's a starting point, not where you end up. It's not the overall solution, but it's a nice place to get your feet wet, see what the appetite is, see what the skills of your team are and whether they're adopting this stuff, and get a sense, as an organization, of what direction you need to go in. >> From there... well, actually, I'll step back a little bit, because you mentioned Copilot versus Gemini, and of course there's ChatGPT and all of those. As a technology-nerd curiosity: have you found one engine better than the others? I know they have their different flavors, but particularly from a business point of view, if somebody's trying to do general business, marketing, sales, that kind of AI. >> Yeah. Copilot's better than Gemini in general, but it really just depends on your existing stack. That's not even really a decision you're going to be making at this point unless you're new. And if you're new, that's a big decision: are we going to build this business on the Microsoft stack or on the Google stack? What model you end up using really depends on that, and Gemini versus Copilot alone is not enough to make the decision about which stack to build your business on. There are a lot of factors that go into it. That being said, Gemini 3 came out, I think it was this week, and it's topping leaderboards all over the place. It's really been quite interesting historically, right? In 2017, it was a Google research team that discovered transformer models, and they published a paper called "Attention Is All You Need." And then the race was on, and Google lost that race big time, right?
ChatGPT came out. Do you guys remember Bard, which was the premature "oh gosh, we're getting crushed by OpenAI, let's release Bard"? It was just awful. So bad they had to rebrand it. So basically Microsoft, through its majority investment in OpenAI, won that race to come to market with generative pre-trained transformer models (GPT, that's what it stands for). What we've seen, though, and honestly maybe this week is when Google officially caught up, is this back-and-forth: okay, new release of GPT-5, that's state-of-the-art. Gemini 3, that's actually a little bit better; now that's state-of-the-art. And then Claude does something, and then Mistral does something, and then DeepSeek is over here doing amazing things. So while OpenAI clearly came out way out in front, it's getting way more competitive now, and each new release is better than the best release the other companies have put out. It's an arms race now, which is really good for us as consumers of this technology. We want it to be competitive: that brings down pricing, that improves all the products, nobody's complacent, everybody's sprinting as fast as they can, because we've got a real competition going on. And it's not just Microsoft versus Google, or OpenAI-and-Microsoft versus Google; there are all these other players. Anthropic, Mistral, you name them, are doing really interesting things and starting to specialize a bit. Claude was really great at writing code before some of the others were. The landscape is just evolving so fast that it's honestly quite hard to keep track of it all. >> Yeah, it kind of reminds me of the early Java days, back when it was Oak and then it became Java, and you didn't have any of the parsers for DOM or SAX.
I mean, it was like the sky's the limit; new libraries were everywhere every other week. For those getting into AI: you talked a little bit about Copilot and Gemini being a good, safer way to get started, to protect your stack and your information, and you mentioned the LLM. For most people, AI is just AI. There's some model back there; I ask it something, I get something out of it. For developers, can you explain what you mean by each being better at different things? How would you suggest developers or entrepreneurs approach these AI models and look for the right model for what they need? Sorry, it took me a while to get there, but there's just a lot in what you brought up that I want to get a little more focused on. >> Yeah. It can be hard. It can really be hard. Here's what I would suggest. Let's take the confidential-information piece out of this. Let's say it's just individuals starting something out, not dealing in confidential client data. ChatGPT 5.1 is probably your starting point.
It's the most robust, the most mature, and generally the ChatGPT platform is the right place to start right now, I would say. Now, I haven't even evaluated Gemini 3 yet, so I might say something different next week, but that's a good starting point. What I encourage folks who are starting out to do is try out a few. I will often have, let's call it, three open: maybe I've got Perplexity, I've got Claude, and I've got ChatGPT. When I'm starting a task, I will often ask all three, decide which one I like the most after the first few prompts, and then dive in deeper with that one. There's a lot to be learned that way. For example, GPT-5 right now is, in my opinion, horribly slow (especially 5, maybe not so much 5.1); it's overthinking everything. The amount of time it takes to do its reasoning and call all these tools is tremendous. We can't even deploy 5 for our product because of how long it takes and how many tokens it gobbles up. You can wait minutes sometimes. So I encourage switching, at least at the beginning of a particular task, to figure out which one is going to work. Now, at this point, most of them work for most things. One might be the best, but do you really need the best for what you're doing today? When you're making a decision about which model to put into your product, that's important: you need to test out the different models and see which ones are working. But really, why are you building a product that can only use a single model? You should be building your product so that it's model-agnostic, to a certain extent. Depending on what you're trying to accomplish, you may be using different models for different things.
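The multi-model habit described here, one prompt fanned out to several providers before committing to one, can be sketched as a thin, provider-agnostic layer. This is a rough illustration, not anyone's actual product code; the provider names and stub responses stand in for real API clients you would wire up yourself.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Completion:
    provider: str
    text: str

class MultiModelClient:
    """Send one prompt to every registered backend and collect the answers."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        # Any callable taking a prompt and returning text can be a backend,
        # so swapping or adding vendors never touches calling code.
        self._providers[name] = complete

    def ask_all(self, prompt: str) -> List[Completion]:
        # Fan the same prompt out to each backend for side-by-side comparison.
        return [Completion(name, fn(prompt)) for name, fn in self._providers.items()]

# Stub backends standing in for real API calls (hypothetical responses).
client = MultiModelClient()
client.register("gpt", lambda p: f"[gpt] answer to: {p}")
client.register("claude", lambda p: f"[claude] answer to: {p}")

answers = client.ask_all("Summarize our Q3 marketing plan")
for a in answers:
    print(a.provider, "->", a.text)
```

Because each backend is just a callable behind a common interface, adding a new model release means registering one more function, and a test harness can run the same prompt suite against every entry.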
And you need to be building so that you can accommodate new versions of models as they come out, with a full test harness in place so you can evaluate new models quickly and see if they're a good fit for your product. It's ever-changing, so I would caution folks starting out not to go all in on one model. You need to be thinking of this as a multi-model world where you're switching back and forth all the time. And here's a little hack: sometimes it's interesting to ask GPT-5 something, take both the prompt and the response you got, plug them into a different LLM, and say, "This is what GPT-5 said. What do you think about this?" Get it to critique: are there errors here? Is there missing information here? I really like doing that, and it can really illuminate hallucinations, which still happen. Way less, but they're still happening. It just gives it some oversight. >> Nice. I like that, because I've done that quite a bit in the coding world. I mainly use ChatGPT as far as code goes, to build me a quick stub or the boilerplate stuff. Or if you have a problem, it gives me a solution; if I don't like it, I pop it into something else and troubleshoot. Or you can almost look at it and know it's wrong sometimes. >> Sometimes. >> Following that thread: at the beginning you talked about keeping the information safe, protecting the access.
So as we're using AI more, as we're building our tool sets with these LLMs, what are some steps people can take, as their experience with AI evolves, to protect themselves and their data from being misused, or from exposing unnecessary data to other people within the model? >> Yeah. The first step is: you need to pay for access to these things. If you're not paying, they're doing whatever they want with your data. Whatever they want. Then you have to actually evaluate the licenses that you pay for, to make sure they're in line with the kind of security posture you need to have. Some folks, marketing folks for example, don't really care all that much; what they make, they put out into the world on purpose. But then there are others, like attorneys, that absolutely cannot put confidential client information into a third-party system. Even if they're protected by the license, they often still can't. So it's a spectrum of what your needs are. The first step would just be to actually read the licenses, or ask for summaries, to make sure that what you're paying for is in line with what you need from a data-protection standpoint. >> Okay, that makes sense. To follow up on that: we've talked about the introduction, and we've talked about access. Now, if I want to start building for my company, building an application or putting my information in to have AI help me analyze my company, things along those lines, what's the next step in that progression, from what you see? >> Yeah. So this is precisely why we built our product, Compass: to be that next step.
As a company, you need something that will connect to your other systems, that will leverage the existing access control that's already in place, and that you can own and host yourself. Copilot works great if you're integrating with stuff on the Microsoft stack, but if you have other systems, it doesn't work that well. If you need to process really large files, it doesn't work at all; it completely falls down. So the next step is looking at some models. There are a lot of open-source models available, and you can also get access to non-open-source models through AWS Bedrock, or through Azure AI Foundry and Azure OpenAI. You get to a point where you can't be sending this information outside of your own firewall; it needs to remain in your network. For companies that are serious about data security, compliance, or risk control, you really need to own the stack. So I suggest deploying some of those models with an application on top for your team to interact with. And when I say piggyback on existing access control, I want to click in on that. What does that mean? We talked about these financial reports, right? Let's say I want to pull in financials from a third-party system. If I have a system that's connected to it, I can authenticate as myself to QuickBooks, and QuickBooks will only give me what I'm allowed to see. Then I can put that into the context of what I'm trying to do and say, "Hey, generate a report for me, or do this analysis for me," or what have you. By piggybacking on the existing access control, we no longer have to trust the model. The model doesn't know all and see all.
The model you're talking to only knows what you're allowed to know, and it can only access what you're allowed to access. You build it out that way because we cannot trust the models. The LLMs are not to be trusted, right? We need our own mechanisms for protecting our business and our data. By architecting it that way, you sidestep the problem, because we already have access control: the CEO can access the financials, but some knowledge workers can't. That's the next phase, when you're ready for an internal AI system that connects to all of your other stuff. And that's when it gets really powerful. >> And that's where we're going to pause part one of our interview with Hunter Jensen. Great conversation. It's one of those where it seems like every time we mention AI, it sort of goes off the rails, but in this case, those were the rails. This is somebody who is the CEO of an AI-driven company, and when you've got AI at the end of the name, hopefully that means something to you. In this case, I think you can see that it does. Hunter has really thought about this stuff and is a great resource, and we will continue the conversation in the next episode. Thank you so much for your time. We appreciate it and all that you guys have done, just hanging out there as we're getting toward the end of the year, trying to be in that very thankful mood, and you guys are at the top of our list. Go out there and have yourself a great day, a great week, and we will talk to you next time.
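The "piggyback on existing access control" idea above can be sketched roughly as follows: data is fetched with the signed-in user's own permissions before it ever reaches the model, so the model's context never contains anything that user couldn't already see. All names here (the record store, roles, and prompt builder) are hypothetical; a real system would authenticate to something like QuickBooks as the user instead of reading a local dictionary.

```python
from typing import Dict, List, Set

# Toy stand-in for a permissioned data source.
RECORDS: Dict[str, dict] = {
    "revenue_summary": {"allowed_roles": {"ceo", "analyst"}, "data": "Q3 revenue: $1.2M"},
    "salaries": {"allowed_roles": {"ceo"}, "data": "salary table ..."},
}

def fetch_records(user_roles: Set[str]) -> List[str]:
    """Return only the records this user's roles permit, mirroring a system
    that authenticates to the source system as the user."""
    return [r["data"] for r in RECORDS.values() if r["allowed_roles"] & user_roles]

def build_prompt(user_roles: Set[str], question: str) -> str:
    # The LLM context holds only user-visible data, so we never have to
    # trust the model itself to enforce access control.
    context = "\n".join(fetch_records(user_roles))
    return f"Context:\n{context}\n\nQuestion: {question}"

# An analyst's prompt includes revenue but never the salary table.
analyst_prompt = build_prompt({"analyst"}, "How did revenue trend this quarter?")
print(analyst_prompt)
```

The design choice is that authorization happens in ordinary application code before the model call, which is why, as Hunter puts it, the model "only knows what you're allowed to know."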
Transcript Segments
All right. Um, what we do is, um, we'll
do a we do sort of a conversational kind
of a, um, sorry, adjusting my mic a tad.
conversational approach. Uh we'll start
out with uh we do it'll take us about an
hour. We end up splitting it into two u
roughly 25 minute uh episodes. We'll
start off the first one. We'll do an
introduction. I'll introduce myself.
Mike will introduce himself. We'll allow
you to introduce yourself and then we
will uh dive right in basically. Um like
I said, we keep it keep it pretty free
form usually sometimes even within the
the introduction. Uh definitely as we
get in the first couple of questions,
there's so many follow-ups that the next
thing we know we are, you know, all the
way through the the episode basically.
And then at the end, we'll make sure
that we have a uh you ask you to just
provide any, you know, good links or
anything that you want to do for your uh
to reach out uh social links or anything
like that. We'll also have links in the
show notes, but we find that tends to be
a really good way to uh to do it. One
second.
All right. And this is
we're going to talk about AI and that
has like every time we've even started
in on that conversation, it just it
keeps going. So I I expect we will have
nothing but uh rabbit trails to go down
and of course everybody loves AI today
too. So uh any questions before we get
started?
>> Uh yeah, just one which is like who's
the audience? How would you characterize
them? uh audience is typically we're
looking at people that are starting
their career in technology or maybe you
know I guess and it could go further in
but usually in your like you know junior
to mid-level uh kind of technology
developer type along with a lot of them
we're really focusing on the ones that
are uh have an entrepreneurial side so
they're doing side hustles and things
like that. Uh so it's really a
combination of young entrepreneurs uh
technology savvy entrepreneurs and
technologists. So uh that's a lot of
where their focus where this focus will
be is how AI can be uh both as I guess
an implementer of it but also how we can
use AI to you know help ourselves in our
daily routines and things like that.
>> Great. Great.
>> Then uh we will dive right into it. Oh,
by the way, it is uh video and audio. We
have some of the pre-show and post show
stuff does end up on the u the audio
side, but then we use our specific, you
know, audio cut points for the the
podcast uh the audio only
do uno. Well, hello and welcome back. We
are continuing our season of building
better foundations. This is the building
better developer podcast, also known as
developer. I am Rob Broadhead, one of
the founders of developer, also the
founder of RB Consulting, where we help
you assess technology and build a
roadmap for success in the world of good
things and bad things. Good thing is is
that I live in Nashville where weather
changes all the stinking time. So, uh, a
little bit of a cold snap was followed
by a a warm snap, I guess. Maybe it was
a light snap. So, got to have the
windows open and things like that, which
was great. Got some fresh air, got to
air things out. Downside is is also with
that sometimes comes rain. So as I got
everything open, rain started to pour
and it's like uhoh, got to make sure
everything's closed back down or at
least enough so I don't like drift away
in a flood. Someone who has grounded
firmly and not about to drift away in a
flood. Michael, go ahead and introduce
yourself. Hey everyone, my name is
Michael Malashsh. I'm one of the
co-founders of Developer, also known as
Building Better Developers. I'm also the
founder of Envision QA where we help
businesses build reliable software
through custom development and expert
testing. Good thing, bad thing. Uh good
thing, um had some medical procedures
recently and they all came back good. Uh
downside, the prep for those procedures
sucked.
And today, our guest is Hunter Jensen.
And uh I'm not even going to try to
introduce you. I'm going allow you to
start with the introduction and uh
introduce yourselves to the the
audience.
Yeah, thanks for having me on. Uh Hunter
Jensen, uh founder CEO of uh both
Barefoot Solutions, which is a custom
software development shop, as well as uh
Barefoot Labs, which has uh is just now
rolling out a product to help um you
know, midsize companies deploy internal
AI systems to boost their employee uh
productivity.
Well, that uh actually leads us right
into a great starting question is when a
company is is starting out and trying to
implement AI, which it feels like
everybody is right now. What are some
common uh mistakes or red flags or
things that you would recommend if
they're diving into this that they
should take a look for?
>> Yeah, you know, um a lot of mistakes are
being made. This is kind of the wild
west right now. Best practices are
currently in development, right?
And one of the one of the biggest
mistakes that I see especially like at
the leadership level is you know CEOs
have this vision of
a model that knows absolutely everything
about my business that can you know help
in every single facet of that business
because it knows all and it connects to
all the systems and all the rest of it.
And
uh what they don't realize is that uh
that's not really possible right now.
And and the reason there's many reasons
why that's not possible, but one of them
is simply access control. How could we
trust the model to not divulge
information to people using it that
they're not supposed to know? Right? If
if a model is trained on everybody's HR
data, as an example, we cannot trust
that model to interact with individual
employees and protect other people's HR
information. It's just we're not there
yet. The technology is not there yet.
The guardrails are very inconsistent at
at best. Uh and so it really needs to uh
kind of get a little more narrow in
focus and not be this one all knowing
kind of you know my business model uh
that can help with with for everyone
with everything. That's just not really
feasible right now.
>> So that and that that makes complete
sense. thank everybody. It's like, oh,
we would love to see all of our, you
know, our financial numbers, but then
when you start saying, but then also
that means you're going to have to plug
in everybody's salary and stuff like
that. Then it's like, wait a minute, I'm
not sure I want access people have
access to that. And of course, there's a
clarifying. It's like, yes, even if
you're going to have, you know, it may
be that the LLM knows averages and sums
and stuff like that, but if you're
giving it the data to generate those
things, then still somewhere in there,
you've got the data. You know everything
about your business. So, with that in
mind, like what is a, you know, and
maybe there's some CEOs out there,
they're like going, "Oh, crud, I didn't
think about that." What is a good what
is a good like pilot program or a good
way to get started?
>> Yeah. So, you know, a good way to just
get started. Um, you know, I can't
remember the last time I talked to a
company, at least an American one, that
isn't either on Microsoft or G Suite,
right? Uh, both of them have products.
There's Microsoft 365 C-pilot and
there's Gemini for G Suite. It's not
that expensive and it gives a exposure
to your team uh to an LLM that's safe
because guess what? If you don't they
are going and using chat GPT even if
it's not you know against your AI
governance policies. If you have an AI
governance policy, which you need one by
the way, if you don't, uh, you we have
to give tools to our team, uh, or we're
putting our confidential data at risk.
Period. Full stop. Right. And so, you
know, firing up a license, you know,
some licenses, some seats for those
products is a nice way to kind of dip
your toe in. Now, those products have
major limitations. And so, that's a
starting point. that's not where that's
not where you end up. That's not the
overall solution. Uh but it's a nice
place to you know get your feet wet, see
what the appetite is, see what the
skills of your team are, whether they're
adopting this stuff and and you know get
a sense for as an organization what
direction you need to go in.
>> So from there, well, I guess actually I'll step back a little bit, because you mentioned Copilot versus Gemini, and of course there's ChatGPT and all of those. As a bit of a technology-nerd curiosity: have you found one engine better than the other? Particularly from a business point of view, if somebody's trying to do general business AI, you know, marketing, sales, that kind of thing. I know they have their different flavors.
>> Yeah. You know, Copilot's better than Gemini in general. But it really just depends on your existing stack. That's not even really a decision you're going to be making at this point unless you're new. And if you're new, that's a big decision: are we going to build this business on the Microsoft stack or on the Google stack? What model you end up using really depends on that, and Gemini versus Copilot alone is not enough to decide what stack you're going to build your business on, right? There's a lot of factors that go into it. That being said, Gemini 3 came out, I think it was this week, and it's topping leaderboards all over the place. And historically it's really been quite interesting, right? In 2017,
a Google research team discovered transformer models and published a paper called "Attention Is All You Need." Then the race was on, and Google lost that race big time, right? Do you guys remember Bard, which was the premature "oh gosh, we're getting crushed by OpenAI, let's release Bard"? It was just awful, just awful. So bad they had to rebrand it. So basically Microsoft, through its major investment in OpenAI, won that race to come to market with generative pre-trained transformer models; GPT, that's what that stands for. What we've seen, though, and honestly maybe this week is when Google officially caught up, is that now it's like: okay, new release of GPT-5, that's state-of-the-art. Gemini 3, that's actually a little bit better; now that's state-of-the-art. Then Claude does something, and Mistral does something, and DeepSeek is over here doing amazing things. So while OpenAI clearly came out in front, way out in front, it's getting way more competitive now. Each new release is better than the best release the other companies have put out. So, it's an arms race now.
Which is really good for us as consumers of this technology, right? We want it to be competitive. That brings down pricing and improves all the products. Nobody's complacent. Everybody's sprinting as fast as they can because we've got a real competition going on. And it's not just OpenAI-and-Microsoft versus Google; there are all these other players, Anthropic and Mistral and you name them, doing really interesting things and starting to specialize a bit. Like, Claude was really great at writing code before some of the others were. So the landscape is just evolving so fast that it's honestly quite hard to keep track of it all.
>> Yeah, it kind of reminds me of the early Java days, back when it was Oak and then it became Java, and you didn't have any of the parsers for DOM or SAX. I mean, the sky was the limit; libraries were everywhere every other week. For those getting into AI: you talked a little bit about Copilot and Gemini being a good, safer way to get started and protect your stack and your information, and you mentioned LLMs. Most people just see AI as AI. It's like, oh yeah, there's some model back there; I ask it something, I get something out of it. How would you suggest developers or entrepreneurs approach these AI models? How should they look for the right AI model for what they need? Sorry, it took me a while to get there, but there's just a lot in what you brought up that I want to get a little more focused on.
>> Yeah. Yeah. It can be hard, it can really be hard. You know what I would suggest? Okay, let's take out the confidential-information piece of this. Let's say it's just individuals starting something out; they're not dealing in confidential client data. ChatGPT 5.1 is probably your starting point. It's the most robust, the most mature, and just generally the ChatGPT platform is the right place to start right now, I would say. Now, I haven't even evaluated Gemini 3 yet, so I might say something different next week, but that's a good starting point. What I encourage folks that are starting out to do is try out a few. I will often have open, let's call it three: maybe I've got Perplexity and I've got Claude and I've got ChatGPT open. When I'm starting a task, I will often ask all three, decide which one I like the most after the first few prompts, and then dive in deeper with that one. Because there's a lot to be learned. I mean, GPT-5, for example, is horribly slow right now, in my opinion. Especially 5, maybe not so much 5.1, but it's like it's overthinking everything. The amount of time it takes to do its reasoning and to call all these tools is tremendous. We can't even deploy 5 for our product because of how long it takes and how many tokens it gobbles up, right? You can wait minutes sometimes. So I encourage switching, at least at the beginning of a particular task, to figure out which one is going to work. Now, at this point, most of them work for most things.
You know what I mean? One might be the best, but do you really need the best for what you're doing today? Now, when you're making a decision about, okay, we need to pick a model to put into our product, that's important. You need to test out the different models and see which ones are working. But really, why are you building a product that can only use a single model? You should be building your product so that it's model-agnostic, to a certain extent. Depending on what you're trying to accomplish, you may be using different models for different things, and you need to build so that you can accommodate new versions of models as they come out, right?
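Hunter's model-agnostic advice can be sketched as a thin routing layer over interchangeable providers. Everything here, the class names and the stub providers, is a hypothetical illustration, not any particular vendor's SDK:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Hypothetical sketch of a model-agnostic layer: each provider is just a
# named callable, so the product is never welded to one vendor's model.

@dataclass
class ModelProvider:
    name: str
    complete: Callable[[str], str]  # would wrap a real SDK call in practice

class ModelRouter:
    def __init__(self) -> None:
        self.providers: Dict[str, ModelProvider] = {}
        self.default: Optional[str] = None

    def register(self, provider: ModelProvider, default: bool = False) -> None:
        # First registration becomes the default unless overridden.
        self.providers[provider.name] = provider
        if default or self.default is None:
            self.default = provider.name

    def complete(self, prompt: str, model: Optional[str] = None) -> str:
        # Route to a specific model, or fall back to the default.
        return self.providers[model or self.default].complete(prompt)

# Stub providers stand in for real clients (OpenAI, Google, Anthropic, ...):
router = ModelRouter()
router.register(ModelProvider("gpt-5", lambda p: f"[gpt-5] {p}"), default=True)
router.register(ModelProvider("gemini-3", lambda p: f"[gemini-3] {p}"))
```

With a layer like this, trying the next leaderboard-topping release means registering one new provider rather than rewriting the product.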
And have a full test harness in place so you can evaluate new models quickly and see whether they're a good fit for your product.
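That "full test harness" can be as small as a fixed prompt suite with pass/fail checks that any candidate model is run through. This toy sketch is an assumption-laden illustration (the prompts, checks, and stub model are all made up); real harnesses use far larger suites and richer scoring:

```python
# Toy evaluation harness: a fixed suite of (prompt, check) pairs run against
# any candidate model, so a new release can be screened quickly. The checks
# and the stub model below are hypothetical placeholders.

TEST_SUITE = [
    ("What is 2 + 2?", lambda out: "4" in out),
    ("Name the capital of France.", lambda out: "paris" in out.lower()),
]

def evaluate(model_name, complete, suite=TEST_SUITE):
    """Run every prompt through `complete` and report the pass rate."""
    passed = sum(1 for prompt, check in suite if check(complete(prompt)))
    score = passed / len(suite)
    print(f"{model_name}: {passed}/{len(suite)} passed ({score:.0%})")
    return score

# Stub model standing in for a real SDK call:
evaluate("stub-model", lambda p: "4" if "2 + 2" in p else "Paris")
```

The same `evaluate` call works for every provider, which is what makes "re-test when a new model ships" cheap.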
It's just ever-changing. So I would caution folks starting out against going all in on one model. You need to think of this as a multi-model world where you're switching back and forth all the time. And
this is just a little hack: sometimes it's interesting to ask ChatGPT 5 something, take both the prompt and the response you got, plug them into a different LLM, and say, "This is what GPT-5 said. What do you think about this?" Right? And get it to critique: are there errors here? Is there missing information here? I really like doing that, and it can really illuminate hallucinations, which still happen. Way less, but they're still happening. It just gives it some oversight.
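That cross-model critique hack boils down to a prompt template plus two interchangeable model calls. In this sketch, the `ask_first`/`ask_second` callables are stubs standing in for real vendor SDK calls; the template is the actual idea:

```python
# Sketch of the "second opinion" hack: take a prompt and the first model's
# answer, then ask a different model to critique it.

CRITIQUE_TEMPLATE = """I asked another AI assistant this question:

QUESTION:
{question}

It answered:

ANSWER:
{answer}

Please critique this answer. Are there factual errors? Is anything important
missing? List specific problems, or say it looks correct."""

def build_critique_prompt(question: str, answer: str) -> str:
    # Package the original exchange for a second, independent model.
    return CRITIQUE_TEMPLATE.format(question=question, answer=answer)

def second_opinion(question, ask_first, ask_second):
    """Ask one model, then have a different model review its answer."""
    answer = ask_first(question)
    critique = ask_second(build_critique_prompt(question, answer))
    return answer, critique

# Stub callables in place of real API clients:
answer, critique = second_opinion(
    "When was the transformer paper published?",
    ask_first=lambda q: "2017, in 'Attention Is All You Need'.",
    ask_second=lambda p: "Looks correct: the paper was published in 2017.",
)
```

Because the two callables share a signature, the reviewing model can be whichever one you have open, which is exactly the switching habit described above.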
>> Nice. I like that, because I've done that quite a bit in the coding world. I mainly use ChatGPT as far as code goes, to just build me a quick stub or something, you know, the boilerplate stuff. Or if you have a problem, it's like, hey, give me this. It gives me a solution, and if I don't like it, I pop it into something else and kind of troubleshoot. Or you can almost look at it and know it's wrong sometimes.
>> Sometimes.
>> Following that step: at the beginning, you talked about keeping information safe, protecting access. So as we're using AI more, as we're building our tool sets with these LLMs, what are some steps people can take, as their experience with AI evolves, to protect themselves and their data from being misused, or from exposing unnecessary data to other people through the model?
>> Yeah. So the first step is you need to pay for access to these things. If you're not paying, they're doing whatever they want with your data. Whatever they want. And so you have to actually evaluate the licenses that you pay for, to make sure they're in line with the kind of security posture you need to have. Some folks, marketing folks for example, don't really care all that much; what they make, they put out into the world on purpose, right? But then there are others, like attorneys, that absolutely cannot put confidential client information into a third-party system. Even if they're protected by the license, they often still can't. So it's a spectrum of what your needs are. The first step would just be to actually read the licenses, or ask for summaries or something like that, to make sure that what you're paying for is in line with what you need from a data protection standpoint.
>> Okay, that makes sense. To follow up on that: now we've talked about the introduction to this, we've talked about access. So if I want to start building that model for my company, start building an application, or start putting my information in to have AI help me analyze my company, things along those lines, what's the next step in the progression, from what you see?
>> Yeah. So this is precisely why we built our product, Compass: to be that next step. As a company, you need something that will connect to your other systems, that will leverage the existing access control that's already in place, and that you can own, that you can host yourself, right? Copilot works great if you're integrating with stuff on the Microsoft stack, but if you have other systems, it doesn't work that well. If you need to process really large files, it doesn't work at all; it completely falls down. So the next step is looking for some models. There's a lot available that are open-source models, and you can also get access to non-open-source models through AWS Bedrock, or through Azure AI Foundry and Azure OpenAI. You get to a point where you can't be sending this information outside of your own firewall; it needs to remain in your network. For companies that are serious about data security or compliance or risk control, things like that, you really need to own the stack. So I suggest deploying some of those models with an application on top for your team to interact with. And so,
when I say piggyback on existing access control, I want to click in on that. What does that mean? We talked about these financial reports, right? Let's say I want to pull in financials from a third-party system. If I have a system that's connected to it, I can authenticate as myself to that QuickBooks, and QuickBooks will only give me what I'm allowed to see, right? Then I can put that into the context of what I'm trying to do and say, "Hey, generate a report for me, or do this analysis for me," or what have you. By piggybacking on the existing access control, we no longer have to trust the model. The model doesn't know all and see all. The model that you're talking to only knows what you're allowed to know, and it can only access what you're allowed to access. When you build it out that way, you account for the fact that we cannot trust the models. The LLMs are not to be trusted, right? We need our own mechanisms for protecting our business and our data. By architecting it that way, you sidestep that problem, because we already have access control, of course: the CEO can access the financials, but some knowledge workers can't. That's kind of the next phase, when you're ready for an internal AI system that connects to all of your other stuff. And that's when it gets really powerful.
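The piggybacking pattern described here can be sketched roughly like this. `FinanceAPI` is a hypothetical stand-in for a system like QuickBooks; the key point is that permission checks happen in the source system, as the requesting user, before anything reaches the model's prompt:

```python
# Sketch of piggybacking on existing access control. FinanceAPI is a toy
# stand-in for a third-party system such as QuickBooks (hypothetical API);
# it enforces its own per-user permissions, so the AI layer never needs an
# all-seeing credential.

class FinanceAPI:
    def __init__(self):
        # Who may read which report, enforced by the source system itself.
        self._acl = {"ceo": {"pnl", "payroll"}, "analyst": {"pnl"}}
        self._reports = {
            "pnl": "Q3 P&L: revenue up 12%",
            "payroll": "Payroll: total headcount 42",
        }

    def fetch(self, user: str, report: str) -> str:
        # "Authenticate as the requesting user": deny anything outside
        # that user's existing permissions.
        if report not in self._acl.get(user, set()):
            raise PermissionError(f"{user} may not read {report!r}")
        return self._reports[report]

def build_context(user: str, reports: list, api: FinanceAPI) -> str:
    # Only data this user is allowed to see ever reaches the prompt,
    # so the model can't leak what it was never given.
    return "\n\n".join(api.fetch(user, r) for r in reports)

api = FinanceAPI()
print(build_context("ceo", ["pnl", "payroll"], api))   # both reports
# build_context("analyst", ["payroll"], api) would raise PermissionError
```

Because the AI layer holds no all-access credential, a prompt built for the analyst physically cannot contain the payroll data; the model needs no trust at all.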
>> And that's where we're going to pause part one of our interview with Hunter Jensen. Great conversation. It's one of those where it seems like every time we mention AI, it sort of goes off the rails, but in this case, those were the rails. This is somebody who is the CEO of an AI-driven company, and when you've got AI at the heart of the business, hopefully that means something; in this case, I think you see that it does. Hunter has really thought about this stuff and is a great resource, and we will continue the conversation in the next episode. Thank you so much for your time. We appreciate it, and all of you listeners hanging out there as we get toward the end of the year: we're trying to be in that very thankful mood, and you are at the top of our list. Go out there and have yourself a great day, a great week, and we will talk to you next time.