Kyle Kellams: Small business owners are trying to figure out how to integrate artificial intelligence into their operations, and how to do so safely.
Chris Wright, a partner and the lead security engineer at Little Rock-based Sullivan Wright Technologies, says adding AI to compete in an evolving landscape is tempting.
Wright: There’s just a ton of hype around AI. And so I think everybody feels that kind of FOMO, fear of missing out aspect of it. And so people jump into it. When I see a lot of the hype, I always look back and see who’s the creator of that hype. And it typically is someone who is a CEO or an owner of some AI startup company. And so, you know, they’ve got something invested in it. So they’re out there hyping with every breath to hope that something will catch on and they’ll become millionaires off of it.
But some of what we're starting to see now is a downturn in that. I see it continually here in Arkansas. This morning I was looking at Arkansas Business, and there was an article telling small businesses that they needed to adopt AI or get left behind. Then I immediately flipped over to some of my broader social feeds and saw that the U.S. Census Bureau had reported a drop-off in AI adoption among businesses of pretty much every size except those with one to four employees. So for anything at five employees and up, there's been a couple-month downturn.
Kellams: So when a company of any size jumps into AI too quickly, how are they using it? How are they trying to implement it into what they do?
Wright: On the small business side, a lot of them are using the generative AI, large language model type prompts. So you’ve seen these with ChatGPT, Claude, Microsoft Copilot, things like that. I would say that’s probably the majority on the small business side.
There are larger efforts where there's an attempt to get AI integrated into existing tools. There's a standard called the Model Context Protocol, and it allows you to pipeline AI into grander processes. I think the larger businesses are using it, or at least starting to use it a little more, in that way.
They’re also looking at something that’s been a longtime goal—the customer or sales pipeline chatbot. So, you know, pop into the website, you have some questions. We all see these. They pop up everywhere. Whether you’re visiting a site for the first time, whether you’re logged into a site that you use pretty frequently, wait about two seconds and then some fake chatbot personality will pop up and say, “Hey, can I help you with something?”
That's seen fairly limited success, in that a lot of people are not getting what they need out of those interactions. It just doesn't have that personal feeling. Many of us have had that experience with the little chatbot that's trying to guide you through a subscription or something. And I'll admit, I've ended up yelling at that chatbot when it doesn't quite understand. And someone might have lost a customer.
Kellams: So that’s one challenge with adopting AI too soon. Are there others?
Wright: Yeah. A lot of what we're seeing now lines up with IBM's Cost of a Data Breach report, which they've published for many, many years. We use it as a guideline and as sort of a "why we do what we do" when we present reports to clients.
What we’re seeing with AI is kind of twofold. First and foremost, the organization is really not putting up those guardrails and those controls. So there aren’t policies, there aren’t procedures, there aren’t officially adopted methodologies or education for employees. How do we use this? Why don’t we allow you to use it?
And then with that, you get end users who struggle to see the difference between home computing and business computing. Our clients are all small businesses. I come from the large-business side. I was telling my business partner earlier that the smallest company I worked for before joining up with him was 6,000 people, which is still considered pretty large. The largest, I always like to joke, is the largest organization on the planet, with infinity employees: the U.S. federal government.
So, coming from that side, it’s a little awkward for me stepping into the small business world and seeing how folks treat their business computer like their home computer. And with that, we get, “Oh, hey, I want to use this AI platform, but we don’t have it at work. My work says I can’t, so I’ll just do it anyway.”
We’ve called that shadow IT before, and now we’re calling it shadow AI. So you’re bringing in platforms you have access to because the company hasn’t put those controls in place. There aren’t policy controls to say no, there aren’t technical controls to prevent people from going out there. So someone at a medical practice can dump a bunch of patient data into ChatGPT and say, “Hey, sum this up for me somehow.” And by doing so, that employee has just created a data breach, a HIPAA breach, that’s reportable to the federal Department of Health and Human Services.
People do these kinds of things without really understanding, because the structures and guardrails aren’t in place. In other cases, businesses are almost like a dragster spinning their wheels, waiting for the green light. They really are hard charging, trying to get out there and go. And they’ll implement these things without investigating. They don’t think about, “OK, this is just another vendor.”
We investigate every other vendor. We have vendor due diligence questionnaires. We look at how the vendor protects the data we entrust to it. And for some reason, businesses skip past that with AI, because it seems like something completely different.
Kellams: What should a company that wants to integrate AI into its business model do?
Wright: Exactly that. Don't be that dragster waiting for the green light, spinning your wheels. First and foremost, you're putting the cart before the horse if you say, "I want AI," without having a reason.
Figure out what you want to use it for. There are very good use cases for this. But a lot of people just want AI to have AI. They don’t want AI to solve a problem. So what problem do you have that you’re trying to solve that you think AI is going to solve?
Then, as you define that and you’ve designed out that process and the technical aspects of it, look for the security gaps. It doesn’t hurt to call somebody like us and say, “Hey, can you help consult on that?” I’ve been in cybersecurity since the early 2000s. I’ve seen numerous iterations of how we do things, and it follows a similar path.
Have someone who knows what they’re looking for work through that process with you. Find where the security concerns are, where you need controls, where you need guardrails. Put those in. Validate that any product or service you select is going to safeguard your data as best as possible. There’s no 100% security, but we can reduce the risk by being cognizant of where that risk is, putting in mitigating controls, remediating where possible, avoiding the risk if necessary.
Make sure that risk management mindset is in the process. And once you're through with that, start slow. Don't just go in and say, "We're going to pour the entire business into this process." Pick a few sample use cases. See what those do and how it works. If it helps, by creating efficiencies or letting you do more with the resources you have, measure and evaluate. Once you've got everything in place, start the process over.
Kellams: Chris, thank you so much for your time.
Wright: Yeah, sure. Happy to do it. Let me know if you’ve got any more questions.
Chris Wright is a partner and lead security engineer at Little Rock-based Sullivan Wright Technologies. He’s interested in answering your questions about business-related AI. You can send those questions to us at info@kuaf.edu, and your question might be answered on an upcoming edition of Ozarks at Large.
Ozarks at Large transcripts are created on a rush deadline. Copy editors use AI tools to review work. KUAF does not publish content created by AI. Please reach out to kuafinfo@uark.edu to report an issue. The audio version is the authoritative record of KUAF programming.