📺 Develpreneur YouTube Episode

Video + transcript

Getting Started with AI in Your Business | Interview with Hunter Jensen (Part 1)

2025-12-16 | YouTube

Detailed Notes

How do you safely and effectively get started with AI in your business? In this episode of Building Better Developers, we talk with Hunter Jensen, founder and CEO of Barefoot Solutions and Barefoot Labs, about practical first steps for adopting AI, choosing the right models, and protecting your company’s data.

Whether you’re a business owner, technology leader, or developer exploring AI strategy, Hunter breaks down what to avoid, what to try first, and how to prepare your organization for the future of AI.

🔍 What You’ll Learn in This Episode:
• Why all-knowing AI models aren’t realistic (yet)
• The biggest mistakes companies make when adopting AI
• Safe first steps for getting started with AI in your business
• Copilot vs. Gemini vs. ChatGPT for teams
• How to evaluate AI models without locking yourself in
• Why data protection must come before experimentation
• When and why companies need internal AI systems
• How access control impacts AI use

⭐ About Hunter Jensen
https://www.linkedin.com/in/hunterjensen/
Founder & CEO of Barefoot Solutions and Barefoot Labs, specializing in custom software development and internal AI systems that help companies securely leverage artificial intelligence.

📢 Connect With Us

Website: https://Develpreneur.com
Podcast: Building Better Developers
Hosted by Rob Broadhead & Michael Meloche

Transcript Text
All right. What we do is sort of a conversational approach. Sorry, adjusting my mic a tad. It'll take us about an hour, and we end up splitting it into two roughly 25-minute episodes. We'll start off the first one with an introduction: I'll introduce myself, Mike will introduce himself, and we'll allow you to introduce yourself, and then we will dive right in. Like I said, we keep it pretty free form, sometimes even within the introduction. Definitely as we get into the first couple of questions, there are so many follow-ups that the next thing we know, we're all the way through the episode. Then at the end, we'll ask you to provide any good links, social links, or anything else you want people to use to reach out. We'll also have links in the show notes, but we find that tends to be a really good way to do it. One second.
All right. We're going to talk about AI, and every time we've even started in on that conversation, it just keeps going. So I expect we will have nothing but rabbit trails to go down, and of course everybody loves AI today too. Any questions before we get started?
>> Yeah, just one: who's the audience? How would you characterize them?
>> The audience is typically people who are starting their careers in technology, usually junior- to mid-level developer types, though it can go further in. We're really focusing on the ones that have an entrepreneurial side, so they're doing side hustles and things like that. It's really a combination of young, technology-savvy entrepreneurs and technologists. So a lot of the focus will be on AI both as something they implement and as something we can use to help ourselves in our daily routines.
>> Great. Great.
>> Then we will dive right into it. Oh, by the way, it is video and audio. Some of the pre-show and post-show stuff does end up on the audio side, but then we use our specific audio cut points for the audio-only podcast.
Well, hello and welcome back. We are continuing our season of Building Better Foundations. This is the Building Better Developers podcast, also known as Develpreneur. I am Rob Broadhead, one of the founders of Develpreneur, also the founder of RB Consulting, where we help you assess technology and build a roadmap for success. In the world of good things and bad things: the good thing is that I live in Nashville, where the weather changes all the stinking time. A little bit of a cold snap was followed by a warm snap, I guess. Maybe it was a light snap. So I got to have the windows open, which was great. Got some fresh air, got to air things out. The downside is that with that sometimes comes rain. As I got everything open, rain started to pour, and it's like, uh-oh, got to make sure everything's closed back down, or at least enough that I don't drift away in a flood. Someone who is grounded firmly and not about to drift away in a flood: Michael, go ahead and introduce yourself.
>> Hey everyone, my name is Michael Meloche. I'm one of the co-founders of Develpreneur, also known as Building Better Developers. I'm also the founder of Envision QA, where we help businesses build reliable software through custom development and expert testing. Good thing, bad thing: good thing, I had some medical procedures recently and they all came back good. Downside, the prep for those procedures sucked.
And today, our guest is Hunter Jensen. I'm not even going to try to introduce you. I'm going to allow you to start with the introduction and introduce yourself to the audience.
>> Yeah, thanks for having me on. Hunter Jensen, founder and CEO of both Barefoot Solutions, which is a custom software development shop, and Barefoot Labs, which is just now rolling out a product to help midsize companies deploy internal AI systems to boost their employees' productivity.
Well, that actually leads us right into a great starting question. When a company is starting out and trying to implement AI, which it feels like everybody is right now, what are some common mistakes or red flags they should look out for as they dive into this?
>> Yeah, you know, a lot of mistakes are being made. This is kind of the Wild West right now; best practices are currently in development, right? One of the biggest mistakes that I see, especially at the leadership level, is that CEOs have this vision of a model that knows absolutely everything about their business, that can help in every single facet of that business because it knows all, and it connects to all the systems, and all the rest of it. What they don't realize is that that's not really possible right now. There are many reasons why, but one of them is simply access control. How could we trust the model not to divulge information to people using it that they're not supposed to know? If a model is trained on everybody's HR data, as an example, we cannot trust that model to interact with individual employees and protect other people's HR information. We're just not there yet. The technology is not there yet; the guardrails are very inconsistent at best. So it really needs to get a little more narrow in focus and not be this one all-knowing model of your business that can help everyone with everything. That's just not feasible right now.
>> That makes complete sense, and I think it will to everybody. It's like, oh, we would love to see all of our financial numbers. But then when you start saying that also means you're going to have to plug in everybody's salary and things like that, it's like, wait a minute, I'm not sure I want people to have access to that. And of course there's a clarifying point: even if the LLM only knows averages and sums, if you're giving it the data to generate those things, then somewhere in there it still has the data. It knows everything about your business. So with that in mind, and maybe there are some CEOs out there going, "Oh, crud, I didn't think about that," what is a good pilot program or a good way to get started?
>> Yeah. So, a good way to just get started: I can't remember the last time I talked to a company, at least an American one, that isn't either on Microsoft or G Suite, right? Both of them have products: there's Microsoft 365 Copilot, and there's Gemini for G Suite. They're not that expensive, and they give your team exposure to an LLM that's safe, because guess what? If you don't, your team is going out and using ChatGPT, even if it's against your AI governance policies. If you have an AI governance policy, which you need, by the way. We have to give tools to our team, or we're putting our confidential data at risk. Period. Full stop. So firing up some licenses, some seats for those products, is a nice way to dip your toe in. Now, those products have major limitations, so that's a starting point; it's not where you end up, and it's not the overall solution. But it's a nice place to get your feet wet, see what the appetite is, see what the skills of your team are and whether they're adopting this stuff, and get a sense, as an organization, of what direction you need to go in.
So from there... well, actually, I will step back a little bit, because you mentioned Copilot versus Gemini, and of course there's ChatGPT and all of those. As a technology-nerd curiosity: have you found one engine better than the others? I know they have their different flavors, but particularly from a business point of view, if somebody's trying to do general business work, marketing, sales, that kind of AI.
>> Yeah. You know, Copilot's better than Gemini in general, but it really just depends on your existing stack. That's not even really a decision you're going to be making at this point unless you're new. And if you're new, that's a big decision: are we going to build this business on the Microsoft stack or on the Google stack? What model you end up using really depends on that, and Gemini versus Copilot alone is not enough to decide which stack to build your business on, right? There are a lot of factors that go into it. That being said, Gemini 3 came out, I think it was this week, and it's topping leaderboards all over the place. Historically it's been quite interesting, right? In 2017, it was a Google research team that discovered transformer models, and they published a paper called "Attention Is All You Need." Then the race was on, and Google lost that race big time. Do you guys remember Bard, the premature "oh gosh, we're getting crushed by OpenAI, let's release Bard" product? It was just awful. So bad they had to rebrand it. So basically Microsoft, through its majority investment in OpenAI, won the race to come to market with generative pre-trained transformer models; that's what GPT stands for. What we've seen, though, and honestly maybe this week is when Google officially caught up, is that now it's like: okay, new release of GPT-5, that's state of the art. Gemini 3, that's actually a little bit better; now that's state of the art. Then Claude does something, then Mistral does something, then DeepSeek is over here doing amazing things. So while OpenAI clearly came out way out in front, it's getting way more competitive now, and each new release is better than the best release the other companies have put out. It's an arms race now, which is really good for us as consumers of this technology. We want it to be competitive: that brings down pricing and improves all the products. Nobody's complacent. Everybody's sprinting as fast as they can because we've got a real competition going on, and it's not just Microsoft versus Google, or OpenAI-and-Microsoft versus Google. There are all these other players, Anthropic and Mistral and you name them, doing really interesting things, and they're starting to specialize a bit. Claude was really great at writing code before some of the others were. The landscape is just evolving so fast that it's honestly quite hard to keep track of it all.
>> Yeah, it kind of reminds me of the early Java days, back when it was Oak and then it became Java and you didn't have any of the parsers for DOM or SAX. The sky was the limit; libraries were everywhere every other week. For those getting into AI: you talked a little bit about Copilot and Gemini being a good, safer way to get started and protect your stack and your information, and you mentioned LLMs. For most people, AI is just AI. It's like, oh yeah, there's some model back there; I ask it something, I get something out of it. But you said each model is better at different things. How would you suggest developers, or entrepreneurs, approach these AI models and look for the right one for what they need? Sorry, it took me a while to get there, but there's a lot in what you brought up that I want to get a little more focused on.
>> Yeah. It can be hard; it can really be hard. You know what I would suggest? Okay, let's take the confidential-information piece out of this. Let's say it's just individuals starting something out; they're not dealing in confidential client data. ChatGPT 5.1 is probably your starting point. It's the most robust, the most mature, and just generally the ChatGPT platform is the right place to start right now, I would say. Now, I haven't even evaluated Gemini 3 yet, so I might say something different next week, but that's a good starting point. What I encourage folks that are starting out to do is try out a few. I will often have, let's call it three, open: maybe I've got Perplexity, and I've got Claude, and I've got ChatGPT. When I'm starting a task, I will often ask all three, decide which one I like the most after the first few prompts, and then dive in deeper with that one. There's a lot to be learned. For example, ChatGPT 5 is horribly slow right now, in my opinion, especially 5 and maybe not so much 5.1; it's overthinking everything. The amount of time it takes to do its reasoning and to call all these tools is tremendous. We can't even deploy 5 for our product because of how long it takes and how many tokens it gobbles up, right? You can wait minutes sometimes. So I encourage switching, at least at the beginning of a particular task, to figure out which one is going to work. Now, at this point, most of them work for most of the things.
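The workflow described here, sending the same prompt to several models and picking a favorite, can be sketched with a small provider-agnostic interface. Everything below is hypothetical: `ModelClient`, `fan_out`, and the stub backends stand in for real vendor SDK calls (OpenAI, Anthropic, Perplexity) and are not part of any actual library.

```python
from typing import Callable, Dict

# Hypothetical provider-agnostic interface: a backend is just a function
# that takes a prompt and returns a response string. In practice, each one
# would wrap a vendor's API client.
ModelClient = Callable[[str], str]

def fan_out(prompt: str, backends: Dict[str, ModelClient]) -> Dict[str, str]:
    """Send the same prompt to every registered model and collect replies."""
    return {name: ask(prompt) for name, ask in backends.items()}

def pick_preferred(responses: Dict[str, str],
                   judge: Callable[[str], int]) -> str:
    """Return the backend name whose reply scores best under a simple judge."""
    return max(responses, key=lambda name: judge(responses[name]))

if __name__ == "__main__":
    # Stub backends so the sketch runs offline, without any API keys.
    backends = {
        "model_a": lambda p: f"[A] short answer to: {p}",
        "model_b": lambda p: f"[B] a much longer, more detailed answer to: {p}",
    }
    replies = fan_out("Summarize our Q3 pipeline", backends)
    # Toy judge: prefer the longer reply. A real judge would be a human
    # skimming the first few responses, as described in the interview.
    print(pick_preferred(replies, judge=len))  # model_b wins under this judge
```

In practice the "judge" is you reading the first few prompts' worth of output; the point of the abstraction is only that switching backends costs one dictionary entry, not a rewrite.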
You know what I mean? One might be the best, but do you really need the best for what you're doing today? Now, when you're making a decision like "we need to pick a model to put into our product," that's important. You need to test out the different models and see which ones are working. But really, why are you building a product that can only use a single model? You should be building your product so that it's model-agnostic to a certain extent. Depending on what you're trying to accomplish, you may be using different models for different things, and you need to build so that you can accommodate new versions of models that come out, right? And have a full test harness in place so you can evaluate new models quickly and see if they're a good fit for your product. It's just ever-changing. So I would caution folks starting out against going all in on one model. You need to be thinking of this as a multi-model world where you're switching back and forth all the time.
You know, here's a little hack: sometimes it's interesting to ask ChatGPT 5 something, take both the prompt and the response you got, plug them into a different LLM, and say, "This is what GPT-5 said. What do you think about this?" Get it to critique: are there errors here? Is there missing information here? I really like doing that, and it can really illuminate hallucinations, which still happen. Way less, but they're still happening. It just gives it some oversight.
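That cross-checking hack is mechanical enough to script: hand one model's prompt and answer to a second model for review. This is a sketch under assumed stubs; `primary` and `reviewer` are hypothetical stand-ins for real API clients, not actual SDK functions.

```python
from typing import Callable, Dict

ModelClient = Callable[[str], str]  # hypothetical: prompt in, text out

# Template for handing one model's work to another model for critique,
# mirroring the "what do you think about this?" hack from the interview.
CRITIQUE_TEMPLATE = (
    "Another model was asked:\n{prompt}\n\n"
    "It answered:\n{answer}\n\n"
    "Are there errors or missing information in that answer? "
    "List any problems you find."
)

def cross_check(prompt: str,
                primary: ModelClient,
                reviewer: ModelClient) -> Dict[str, str]:
    """Ask the primary model, then have a second model critique its answer."""
    answer = primary(prompt)
    critique = reviewer(CRITIQUE_TEMPLATE.format(prompt=prompt, answer=answer))
    return {"answer": answer, "critique": critique}

if __name__ == "__main__":
    # Stubs so the sketch runs offline; swap in real clients in practice.
    primary = lambda p: "Paris is the capital of France, founded in 1999."
    reviewer = lambda p: "The founding date looks wrong; Paris is far older."
    result = cross_check("Tell me about Paris.", primary, reviewer)
    print(result["critique"])
```

The reviewer sees both the original question and the first model's answer, so it can flag errors or omissions rather than answering from scratch.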
>> Nice. I like that, because I've done that quite a bit in the coding world. Thankfully, I mainly use ChatGPT as far as code goes, to build me a quick stub or the boilerplate stuff. Or if I have a problem, it gives me a solution, and if I don't like it, I pop it into something else and troubleshoot. Or you can almost look at it and know it's wrong sometimes.
>> Sometimes.
>> Following up on that: at the beginning you talked about keeping information safe and protecting access. As we're using AI more and building our tool sets with these LLMs, what are some steps people can take, as their experience with AI evolves, to protect themselves and their data from being misused, or from exposing unnecessary data to other people through the model?
>> Yeah. So the first step is: you need to pay for access to these things. If you're not paying, they're doing whatever they want with your data. Whatever they want. And you have to actually evaluate the licenses that you pay for to make sure they're in line with the kind of security posture you need to have. Some folks, marketing folks for example, don't really care all that much; what they make, they put out into the world on purpose, right? But then there are others, like attorneys, who absolutely cannot put confidential client information into a third-party system. Even if they're protected by the license, they often still can't. So it's a spectrum of what your needs are. The first step would just be to actually read the licenses, or ask for summaries or something like that, to make sure that what you're paying for is in line with what you need from a data-protection standpoint.
>> Okay, that makes sense. To follow up on that: we've talked about the introduction to this, and we've talked about access. Now, if I want to start building that model for my company, start building an application, or put my information in and have AI help me analyze my company, things along those lines, what's the next step in the progression, from what you see?
>> Yeah. So this is precisely why we built our product, Compass: to be that next step. As a company, you need something that will connect to your other systems, that will leverage the existing access control you already have in place, and that you can own and host yourself, right? Copilot works great if you're integrating with stuff on the Microsoft stack, but if you have other systems, it doesn't work that well. If you need to process really large files, it doesn't work at all; it completely falls down. So the next step is looking at some models. There are a lot of open-source models available, and you can also get access to non-open-source models through AWS Bedrock, or through Azure AI Foundry and Azure OpenAI. You get to a point where you can't be sending this information outside of your own firewall; it needs to remain in your network. For companies that are serious about data security, compliance, or risk control, things like that, you really need to own the stack. So I suggest deploying some of those models with an application on top for your team to interact with. Now, when I say piggyback on existing access control, I want to click in on that. What does that mean? We talked about these financial reports, right? Let's say I want to pull in financials from a third-party system. If I have a system that's connected to it, I can authenticate as myself to, say, QuickBooks, and QuickBooks will only give me what I'm allowed to see, right? Then I can put that into the context of whatever I'm trying to do and say, "Hey, generate a report for me, or do this analysis for me," or what have you. By piggybacking on the existing access control, we no longer have to trust the model. The model doesn't know all and see all. The model that you're talking to right now only knows what you're allowed to know, and it can only access what you're allowed to access. That matters, because we cannot trust the models. The LLMs are not to be trusted, right? We need our own mechanisms for protecting our business and our data. By architecting it that way, you sidestep the problem, because we already have access control, of course: the CEO can access the financials, but some knowledge workers can't. That's kind of the next phase, when you're ready for this internal AI system that's connecting to all of your other stuff. And that's when it gets really powerful.
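The piggybacking pattern described above, fetching data with the user's own entitlements so the model only ever sees what that user may see, can be sketched like this. The role table, record store, and `fetch_financials` function are hypothetical stand-ins for a real integration such as the QuickBooks API, not actual product code.

```python
from typing import Dict, Set

# Hypothetical data store and role scopes, standing in for a real system
# (e.g., QuickBooks) that enforces per-user access on its own side.
RECORDS: Dict[str, dict] = {
    "summary": {"revenue": 1_200_000},              # visible to everyone
    "salaries": {"alice": 95_000, "bob": 88_000},   # CEO/finance only
}
ROLE_SCOPES: Dict[str, Set[str]] = {
    "ceo": {"summary", "salaries"},
    "knowledge_worker": {"summary"},
}

def fetch_financials(user_role: str) -> dict:
    """Return only the records this user's role is entitled to see."""
    allowed = ROLE_SCOPES.get(user_role, set())
    return {key: RECORDS[key] for key in allowed}

def build_llm_context(user_role: str, question: str) -> str:
    """Assemble a prompt whose data section respects the user's access.

    Because the fetch runs under the user's entitlements, the model can
    never leak data it was never given; we don't have to trust the LLM.
    """
    data = fetch_financials(user_role)
    return f"Question: {question}\nData you may use: {data}"

if __name__ == "__main__":
    # A knowledge worker's prompt never contains salary data.
    print("salaries" in build_llm_context("knowledge_worker", "Report?"))  # False
```

The design choice is the one from the conversation: access control stays in the source system, the application authenticates as the actual user, and the model's context window is the trust boundary.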
>> And that's where we're going to pause part one of our interview with Hunter Jensen. Great conversation. It's one of those where it seems like every time we mention AI, it goes off the rails, but in this case, those were the rails. This is somebody who is the CEO of an AI-driven company, and hopefully that means something to you. In this case, I think you can see that it does: Hunter has really thought about this stuff and is a great resource, and we will continue the conversation in the next episode. Thank you so much for your time; we appreciate it. And to all of you hanging out there as we get towards the end of the year: we're trying to be in a very thankful mood, and you guys are at the top of our list. Go out there and have yourself a great day and a great week, and we will talk to you next time.
Transcript Segments
27.599

All right. Um, what we do is, um, we'll

32

do a we do sort of a conversational kind

33.84

of a, um, sorry, adjusting my mic a tad.

38.719

conversational approach. Uh we'll start

40.8

out with uh we do it'll take us about an

42.8

hour. We end up splitting it into two u

46.32

roughly 25 minute uh episodes. We'll

49.52

start off the first one. We'll do an

50.879

introduction. I'll introduce myself.

52.16

Mike will introduce himself. We'll allow

54.079

you to introduce yourself and then we

56.399

will uh dive right in basically. Um like

61.039

I said, we keep it keep it pretty free

63.12

form usually sometimes even within the

65.68

the introduction. Uh definitely as we

68.24

get in the first couple of questions,

69.6

there's so many follow-ups that the next

71.68

thing we know we are, you know, all the

73.52

way through the the episode basically.

76.799

And then at the end, we'll make sure

78.64

that we have a uh you ask you to just

81.2

provide any, you know, good links or

82.88

anything that you want to do for your uh

85.04

to reach out uh social links or anything

86.96

like that. We'll also have links in the

88.56

show notes, but we find that tends to be

91.119

a really good way to uh to do it. One

94.24

second.

99.759

All right. And this is

102.64

we're going to talk about AI and that

104.159

has like every time we've even started

105.6

in on that conversation, it just it

107.6

keeps going. So I I expect we will have

110

nothing but uh rabbit trails to go down

113.92

and of course everybody loves AI today

115.92

too. So uh any questions before we get

118.56

started?

119.52

>> Uh yeah, just one which is like who's

121.84

the audience? How would you characterize

123.92

them? uh audience is typically we're

126.88

looking at people that are starting

129.28

their career in technology or maybe you

131.599

know I guess and it could go further in

133.28

but usually in your like you know junior

135.28

to mid-level uh kind of technology

137.44

developer type along with a lot of them

140.319

we're really focusing on the ones that

141.76

are uh have an entrepreneurial side so

144.48

they're doing side hustles and things

145.92

like that. Uh so it's really a

147.92

combination of young entrepreneurs uh

151.599

technology savvy entrepreneurs and

154

technologists. So uh that's a lot of

156.48

where their focus where this focus will

158.64

be is how AI can be uh both as I guess

161.84

an implementer of it but also how we can

164.08

use AI to you know help ourselves in our

166.16

daily routines and things like that.

169.04

>> Great. Great.

171.2

>> Then uh we will dive right into it. Oh,

175.12

by the way, it is uh video and audio. We

178.319

have some of the pre-show and post show

180.319

stuff does end up on the u the audio

182.959

side, but then we use our specific, you

184.959

know, audio cut points for the the

187.2

podcast uh the audio only

190.959

do uno. Well, hello and welcome back. We

195.44

are continuing our season of building

197.68

better foundations. This is the building

199.519

better developer podcast, also known as

202.239

developer. I am Rob Broadhead, one of

204.239

the founders of developer, also the

206

founder of RB Consulting, where we help

207.68

you assess technology and build a

210

roadmap for success in the world of good

212.799

things and bad things. Good thing is is

215.76

that I live in Nashville where weather

217.36

changes all the stinking time. So, uh, a

220.239

little bit of a cold snap was followed

222

by a a warm snap, I guess. Maybe it was

224.319

a light snap. So, got to have the

226.4

windows open and things like that, which

228.159

was great. Got some fresh air, got to

229.76

air things out. Downside is is also with

232.799

that sometimes comes rain. So as I got

235.12

everything open, rain started to pour

237.36

and it's like uhoh, got to make sure

238.64

everything's closed back down or at

240.239

least enough so I don't like drift away

242.56

in a flood. Someone who has grounded

246.08

firmly and not about to drift away in a

248.159

flood. Michael, go ahead and introduce

250

yourself. Hey everyone, my name is

251.68

Michael Malashsh. I'm one of the

252.56

co-founders of Developer, also known as

254.56

Building Better Developers. I'm also the

256.4

founder of Envision QA where we help

258

businesses build reliable software

259.44

through custom development and expert

261.359

testing. Good thing, bad thing. Uh good

264.479

thing, um had some medical procedures

267.12

recently and they all came back good. Uh

269.68

downside, the prep for those procedures

271.84

sucked.

274.88

And today, our guest is Hunter Jensen.

278

And uh I'm not even going to try to

279.759

introduce you. I'm going allow you to

280.8

start with the introduction and uh

282.96

introduce yourselves to the the

284.32

audience.

285.759

Yeah, thanks for having me on. Uh Hunter

288.4

Jensen, uh founder CEO of uh both

292.8

Barefoot Solutions, which is a custom

294.88

software development shop, as well as uh

297.84

Barefoot Labs, which has uh is just now

301.84

rolling out a product to help um you

305.199

know, midsize companies deploy internal

307.68

AI systems to boost their employee uh

311.44

productivity.

313.84

Well, that uh actually leads us right

316.16

into a great starting question is when a

320.56

company is is starting out and trying to

323.36

implement AI, which it feels like

324.72

everybody is right now. What are some

327.759

common uh mistakes or red flags or

331.36

things that you would recommend if

333.12

they're diving into this that they

334.32

should take a look for?

337.28

>> Yeah, you know, um a lot of mistakes are

340.56

being made. This is kind of the wild

342.24

west right now. Best practices are

344.8

currently in development, right?

348

And one of the one of the biggest

350.639

mistakes that I see especially like at

354.16

the leadership level is you know CEOs

358.8

have this vision of

361.52

a model that knows absolutely everything

364.319

about my business that can you know help

368.16

in every single facet of that business

371.28

because it knows all and it connects to

373.28

all the systems and all the rest of it.

376.08

And

377.68

uh what they don't realize is that uh

383.039

that's not really possible right now.

385.039

And and the reason there's many reasons

387.6

why that's not possible, but one of them

390.24

is simply access control. How could we

393.68

trust the model to not divulge

396.08

information to people using it that

398.96

they're not supposed to know? Right? If

400.88

if a model is trained on everybody's HR

403.36

data, as an example, we cannot trust

406.639

that model to interact with individual

408.72

employees and protect other people's HR

411.6

information. It's just we're not there

413.52

yet. The technology is not there yet.

415.28

The guardrails are very inconsistent at

418.88

at best. Uh and so it really needs to uh

424

kind of get a little more narrow in

426.319

focus and not be this one all knowing

430.16

kind of you know my business model uh

433.199

that can help with with for everyone

435.52

with everything. That's just not really

437.599

feasible right now.

439.759

>> That makes complete sense. Everybody thinks, "Oh, we would love to see all of our financial numbers." But then when you start saying that also means plugging in everybody's salary and stuff like that, it becomes, "Wait a minute, I'm not sure I want people to have access to that." And of course there's a clarification: yes, it may be that the LLM only surfaces averages and sums and things like that, but if you're giving it the data to generate those things, then somewhere in there it still has the data. It knows everything about your business. So with that in mind, and maybe there are some CEOs out there going, "Oh, crud, I didn't think about that," what is a good pilot program, or a good way to get started?

>> Yeah. So, a good way to just get started: I can't remember the last time I talked to a company, at least an American one, that isn't either on Microsoft or G Suite, right? Both of them have products. There's Microsoft 365 Copilot and there's Gemini for G Suite. They're not that expensive, and they give your team exposure to an LLM that's safe. Because guess what? If you don't, they are going out and using ChatGPT anyway, even if it's against your AI governance policies. If you have an AI governance policy, which you need, by the way. We have to give tools to our team, or we're putting our confidential data at risk. Period, full stop. So firing up some licenses, some seats, for those products is a nice way to dip your toe in. Now, those products have major limitations, so that's a starting point; it's not where you end up, and it's not the overall solution. But it's a nice place to get your feet wet, see what the appetite is, see what the skills of your team are and whether they're adopting this stuff, and get a sense, as an organization, of what direction you need to go in.

>> So from there... well, actually I'll step back a little bit, because you mentioned Copilot versus Gemini, and of course there's ChatGPT and all of those. Sort of as a technology-nerd curiosity: have you found one engine better than the others? I know they have their different flavors, but particularly from a business point of view, if somebody's trying to do sort of general business AI, marketing, sales, that kind of thing.

>> Yeah. You know, Copilot's better than Gemini in general, but it really just depends on your existing stack. That's not even really a decision you're going to be making at this point, unless you're new. And if you're new, that's a big decision: are we going to build this business on the Microsoft stack or on the Google stack? What model you end up using really depends on that, and Gemini versus Copilot alone is not enough to make the decision about which stack to build your business on. There are a lot of factors that go into it.

That being said, Gemini 3 came out, I think it was this week, and it's topping leaderboards all over the place. It's historically been quite interesting, right? In 2017, it was a Google researcher and research team that discovered transformer models, and they published a paper called "Attention Is All You Need." Then the race was on, and Google lost that race big time, right? ChatGPT took off. Do you guys remember Bard? The premature "oh gosh, we're getting crushed by OpenAI, let's release Bard," and it was just awful. Just awful. So bad they had to rebrand it. So basically Microsoft, through its majority investment in OpenAI, won the race to come to market with generative pre-trained transformer models: GPT, that's what it stands for.

What we've seen, though, and honestly maybe this week is when Google officially caught up with their model. So now it's: okay, new release of GPT-5, that's state of the art. Gemini 3, that's actually a little bit better, now that's state of the art. Then Claude does something, then Mistral does something, then DeepSeek is over here doing amazing things. So while OpenAI clearly came out in front, way out in front, it's getting way more competitive now, and each new release is better than the best release the other companies have put out. It's an arms race now, which is really good for us as consumers of this technology. We want it to be competitive: that brings down pricing, and it improves all the products. Nobody's complacent; everybody's sprinting as fast as they can because we've got real competition going on. And it's not just Microsoft versus Google, or OpenAI-and-Microsoft versus Google. There are all these other players. Anthropic and Mistral and, you name them, are doing really interesting things and are starting to specialize a bit, right? Claude was really great at writing code before some of the others were really great at writing code. The landscape is just evolving so fast that it's honestly quite hard to keep track of it all.

>> Yeah, it kind of reminds me of the early Java days, back when it was Oak and then it became Java, and you didn't have any of the parsers for DOM or SAX. It was like the sky's the limit; libraries were everywhere every other week. For those getting into AI: you talked a little bit about Copilot and Gemini being a good, safer way to get started, to protect your stacks and your information. And you mentioned the LLM. For most people, they just see AI as AI. It's like, "Oh yeah, there's some model back there; I ask it something, I get something out of it." For developers: what's a good way, or can you explain what you mean by each model being better at something? How would you suggest developers or entrepreneurs approach these AI models and look for the right one for what they need? Sorry, it took me a while to get there, but there's just a lot in what you brought up, and I want to get a little more focused on that.

>> Yeah. It can be hard; it can really be hard. You know what I would suggest? Okay, let's take out the confidential-information piece of this. Let's say it's just individuals starting something out, not dealing in confidential client data. ChatGPT 5.1 is probably your starting point. It's the most robust, the most mature, and just generally the ChatGPT platform is the right place to start right now, I would say. Now, I haven't even evaluated Gemini 3 yet, so I might say something different next week, but that's a good starting point. What I encourage folks who are starting out to do is try out a few. I will often have, let's call it three, open: maybe I've got Perplexity, and I've got Claude, and I've got ChatGPT. When I'm starting a task, I will often ask all three, decide which one I like the most after the first few prompts, and then dive in deeper with that one. There's a lot to be learned. For example, GPT-5 is horribly slow right now, in my opinion, especially 5 and maybe not so much 5.1, because it's overthinking everything. The amount of time it takes to do its reasoning and to call all these tools is tremendous. We can't even deploy 5 for our product because of how long it takes and how many tokens it gobbles up. I mean, you can wait minutes sometimes. So I encourage switching, at least at the beginning of a particular task, to figure out which one is going to work. Now, at this point, most of them work for most things.
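The "ask all three and pick one" workflow can be sketched in a few lines. This is a minimal illustration, not anything from the interview: the provider functions here are stand-in stubs, and in practice each would wrap a real vendor SDK (OpenAI, Anthropic, Perplexity, etc.).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical provider callables -- stubs standing in for real SDK calls.
def ask_chatgpt(prompt: str) -> str:
    return f"[chatgpt] answer to: {prompt}"

def ask_claude(prompt: str) -> str:
    return f"[claude] answer to: {prompt}"

def ask_perplexity(prompt: str) -> str:
    return f"[perplexity] answer to: {prompt}"

PROVIDERS = {
    "chatgpt": ask_chatgpt,
    "claude": ask_claude,
    "perplexity": ask_perplexity,
}

def fan_out(prompt: str) -> dict:
    """Send the same prompt to every provider in parallel and collect
    the answers so a human can compare them side by side."""
    with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in PROVIDERS.items()}
        return {name: f.result() for name, f in futures.items()}

answers = fan_out("Draft a one-paragraph product announcement.")
for name, text in answers.items():
    print(f"--- {name} ---\n{text}\n")
```

After a few prompts like this, you continue the task in whichever tool gave the most useful first answers.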

You know what I mean? One might be the best, but do you really need the best for what you're doing today? Now, when you're making a decision about which model to put into your product, that's important: you need to test out the different models and see which ones are working. But really, why are you building a product that can only use a single model? You should be building your product so that it's model-agnostic, to a certain extent. Depending on what you're trying to accomplish, you may be using different models for different things, and you need to build so that you can accommodate new versions of models as they come out, with a full test harness in place so you can evaluate new models quickly and see if they're a good fit for your product. It's ever-changing. So I would caution folks starting out against going all in on one model. You need to think of this as a multi-model world where you're switching back and forth all the time.
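The model-agnostic design plus test harness described above can be sketched roughly like this. All names here are illustrative assumptions: the point is the seam (the product calls a generic `complete` function, never a vendor SDK directly) and the tiny regression harness that scores any candidate model against fixed cases.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Model:
    """Provider-agnostic wrapper: the product only ever sees this interface."""
    name: str
    complete: Callable[[str], str]

def evaluate(model: Model, cases: List[Tuple[str, str]]) -> float:
    """Tiny regression harness: fraction of test cases whose expected
    keyword appears in the model's answer. Real harnesses score far
    more carefully; this just shows the shape."""
    hits = sum(
        1 for prompt, expected in cases
        if expected.lower() in model.complete(prompt).lower()
    )
    return hits / len(cases)

# Stub models standing in for real API clients.
thorough = Model("thorough", lambda p: "The capital of France is Paris.")
fast     = Model("fast",     lambda p: "I don't know.")

cases = [("Capital of France?", "paris")]
best = max([thorough, fast], key=lambda m: evaluate(m, cases))
print(best.name)
```

When a new model version ships, you wrap it in a `Model` and rerun the same cases; swapping it into the product is then a configuration change, not a rewrite.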

And, you know, here's just a little hack. Sometimes it's interesting to ask GPT-5 something, take both the prompt and the response you got, plug them into a different LLM, and say, "This is what GPT-5 said. What do you think about this?" Get it to critique: are there errors here? Is there missing information here? I really like doing that, and it can really illuminate hallucinations, which still happen; way less, but they're still happening. It just gives it some oversight.
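That cross-checking hack amounts to packaging one model's prompt and answer into a review request for a second model. A minimal sketch, with a stub reviewer standing in for the second model's API call (the wording of the review prompt is my own assumption, not Hunter's):

```python
def critique_with_second_model(original_prompt: str,
                               first_answer: str,
                               second_model) -> str:
    """Ask a second model to review another model's answer -- a cheap
    way to surface hallucinations or missing information."""
    review_prompt = (
        "Another assistant was asked the following question:\n"
        f"{original_prompt}\n\n"
        "It gave this answer:\n"
        f"{first_answer}\n\n"
        "Critique the answer: are there errors or missing information? "
        "Be specific."
    )
    return second_model(review_prompt)

# Stub reviewer -- in practice this would call a different vendor's API.
reviewer = lambda prompt: f"Review ({len(prompt)} chars of context): no obvious errors."

print(critique_with_second_model(
    "Summarize the main risks of adopting AI tools.",
    "The only risk is cost.",  # hypothetical first-model answer
    reviewer,
))
```

Because the reviewer sees both the question and the answer, it can flag gaps (here, a real reviewer would likely point out that data exposure is a missing risk).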

>> Nice, I like that, because I've done that quite a bit in the coding world. I mainly use ChatGPT as far as code goes, to just build me a quick stub or something, the boilerplate stuff. Or if you have a problem: "Hey, give me this," and it gives me a solution. If I don't like it, I pop it into something else and kind of troubleshoot. Or you can almost look at it and know it's wrong sometimes.

>> Sometimes.

>> Following that step: at the beginning, you talked about keeping information safe, protecting the access. So as we're using AI more, as we're building our toolsets with these LLMs, what are some steps people can take, as they evolve their experience with AI, to start protecting themselves and their data from being misused, or from exposing unnecessary data to other people within the model?

>> Yeah. So the first step is: you need to pay for access to these things. If you're not paying, they're doing whatever they want with your data. Whatever they want. And you have to actually evaluate the licenses that you pay for, to make sure they're in line with the kind of security posture you need to have. Some folks, marketing folks for example, don't really care all that much; what they make, they put out into the world on purpose, right? But then there are others, like attorneys, who absolutely cannot put confidential client information into a third-party system. Even if they're protected by the license, they often still can't. So it's a spectrum of what your needs are. The first step would just be: actually read the licenses, or ask for summaries or something like that, to make sure that what you're paying for is in line with what you need from a data-protection standpoint.

>> Okay, that makes sense. To follow up on that: so now we've talked about the introduction to this, and we've talked about the access. If I now want to start building that model for my company, start building an application, or putting my information in to have AI help me analyze my company, things along those lines, what's the next step in that progression, from what you see?

>> Yeah. So this is precisely why we built our product, Compass: to be that next step. As a company, you need something that will connect to your other systems, that will leverage the existing access control that's already in place, and that you can own and host yourself. Copilot works great if you're integrating with stuff on the Microsoft stack, but if you have other systems, it doesn't work that well. If you need to process really large files, it doesn't work at all; it completely falls down. So the next step is looking at implementing some models. There are a lot of open-source models available, and you can also get access to non-open-source models through AWS Bedrock, or through Azure AI Foundry and Azure OpenAI. You get to a point where you can't be sending this information outside of your own firewall; it needs to remain in your network. For companies that are serious about data security, compliance, or risk control, things like that, you really need to own the stack. So I suggest deploying some of those models, with an application on top of them for your team to be able to interact with.

And when I say piggyback on existing access control, I want to click in on that. What does that mean? We talked about these financial reports, right? Let's say I want to pull in financials from a third-party system. If I have a system that's connected to it, I can authenticate as myself to that QuickBooks, and QuickBooks will only give me what I'm allowed to see, right? Then I can put that into the context of what I'm trying to do and say, "Hey, generate a report for me, or do this analysis for me," or what have you. By piggybacking on the existing access control, we no longer have to trust the model. The model doesn't know all and see all. The model you're talking to right now only knows what you're allowed to know, and it can only access what you're allowed to access. We cannot trust the models; the LLMs are not to be trusted. We need our own mechanisms for protecting our business and our data. By architecting it that way, you sidestep that problem: we already have access control, of course. The CEO can access the financials, but some knowledge workers can't. And that's kind of the next phase, when you're ready for this internal AI system that's connecting to all of your other stuff. That's when it gets really powerful.
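The piggybacking idea can be sketched as follows: the application, not the model, enforces access control, by fetching data as the requesting user from a system of record that already has its own permissions, so the model's context only ever contains what that user is allowed to see. Everything here (the record store, roles, field names) is an illustrative toy, not Compass or the QuickBooks API.

```python
# Stand-in for a third-party system (e.g. an accounting tool) with its own ACLs.
RECORDS = {
    "revenue_summary": {"visible_to": {"ceo", "cfo", "analyst"},
                        "data": "Q3 revenue up 12%"},
    "salaries":        {"visible_to": {"ceo", "cfo"},
                        "data": "per-employee salary table"},
}

def fetch_as_user(user_role: str) -> list:
    """Authenticate as the requesting user and return only the records
    their existing permissions allow -- the model never sees the rest."""
    return [r["data"] for r in RECORDS.values() if user_role in r["visible_to"]]

def ask_with_context(user_role: str, question: str, model) -> str:
    """Build the prompt from permission-scoped data, then call the model."""
    context = "\n".join(fetch_as_user(user_role))
    return model(f"Context (scoped to this user):\n{context}\n\nQuestion: {question}")

echo = lambda prompt: prompt  # stub model that just returns its prompt
print(ask_with_context("analyst", "Summarize our finances.", echo))
```

An analyst's prompt here contains the revenue summary but never the salary table, so even an untrusted model cannot leak what the caller was never shown.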

>> And that's where we're going to pause part one of our interview with Hunter Jensen. Great conversation. It's one of those where it seems like every time we mention AI, it sort of goes off the rails, but in this case, those were the rails. This is somebody who is the CEO of an AI-driven company, and when you've got AI at the end of the name, hopefully that means something. In this case, I think you see that it does: Hunter has really thought about this stuff and is a great resource, and we will continue the conversation in the next episode. Thank you so much for your time; we appreciate it and all that you've done, just hanging out there. As we're getting toward the end of the year, we're trying to be in that very thankful mood, and you guys are at the top of our list. Go out there and have yourself a great day, a great week, and we will talk to you next time.