July 6, 2024
How to buy an AI solution the right way: 7 questions new customers should consider


AI is poised to become a significant and ubiquitous presence in our lives. It holds tremendous potential value, but we cannot engage meaningfully with a technology that we do not understand.

When a user sets out to buy a new piece of technology, they’re not particularly interested in what it might be able to do somewhere down the road. A potential user needs to understand what a solution will do for them today, how it will interact with their existing technology stack, and how the current iteration of that solution will provide ongoing value to their business.

But because this is an emerging space that changes seemingly by the day, it can be hard for these potential users to know what questions they should be asking, or how to evaluate products so early in their life cycles.

With that in mind, I’ve provided a high-level guide for evaluating an AI-based solution as a potential customer — an enterprise buyer scorecard, if you will. When evaluating AI, consider the following questions.
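To make the scorecard idea concrete, here is a rough sketch of what one might look like in code. Everything in it, the criteria wording, the weights, and the 0-to-5 rating scale, is an illustrative assumption rather than a prescribed rubric:

```python
# A hypothetical "enterprise buyer scorecard": weight each evaluation
# question, rate a vendor 0-5 on each, and roll up one comparable number.
from dataclasses import dataclass

@dataclass
class Criterion:
    question: str
    weight: float  # relative importance; the weights below sum to 1.0

CRITERIA = [
    Criterion("Fixes a real business problem the builders understand", 0.30),
    Criterion("Security stack meets or exceeds our own standards", 0.30),
    Criterion("Product can genuinely improve over time", 0.25),
    Criterion("Technical team is credible in this domain", 0.15),
]

def score_vendor(ratings: dict) -> float:
    """Weighted average of 0-5 ratings keyed by question text."""
    return sum(c.weight * ratings[c.question] for c in CRITERIA)

# Example: rate a hypothetical vendor on each criterion.
ratings = {c.question: r for c, r in zip(CRITERIA, [4, 3, 5, 4])}
print(f"Vendor score: {score_vendor(ratings):.2f} / 5")
```

In practice, the value of an exercise like this lies less in the final number than in forcing the buying team to agree on the weights before vendor demos begin.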

Does the solution fix a business problem, and do the builders truly understand that problem?

Chatbots, for example, perform a very specific function that boosts individual productivity. But can the solution scale to the point where 100 or 1,000 people use it effectively?

The fundamentals of deploying enterprise software still apply: customer success, change management, and the ability to innovate within the tool are foundational requirements for delivering continuous value to the business. Don't think of AI as an incremental improvement; think of it as a little piece of magic that completely removes a pain point from your experience.

But it will only feel like magic if the solution makes that pain point disappear entirely by automating it away, which all comes back to truly understanding the business problem.

What does the security stack look like?

The data security implications of AI are significant and far outstrip the requirements we are used to. You need built-in security measures that meet or exceed your own organizational standards out of the box.


Today, data, compliance, and security are table stakes for any software, and they are even more important for AI solutions. The reason is twofold. First, machine learning models run against massive troves of data, and the consequences can be unforgiving if that data is not handled with strategic care.

Any AI-based solution, regardless of what it is meant to accomplish, is deployed with the objective of having a large impact, which means the audience experiencing the solution will also be large. When it comes to keeping data secure, how you leverage the data these expansive groups of users generate matters a great deal, as does the type of data you use.

Second, you need to ensure that whatever solution you have in place allows you to maintain control of that data to continually train the machine learning models over time. This isn’t just about creating a better experience; it’s also about ensuring that your data doesn’t leave your environment.

How do you protect and manage data, who has access to it, and how do you secure it? The ethical use of AI is already a hot topic and will remain one as regulations arrive. Any AI solution you deploy needs to have been built with an inherent understanding of this dynamic.
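As one concrete example of the kind of safeguard to probe for: scrubbing obvious personally identifiable information before text ever reaches a model. The sketch below is purely illustrative; real deployments rely on dedicated redaction tooling, and the regex patterns here are simplistic assumptions:

```python
# Minimal, illustrative PII scrubbing: replace matched spans with a typed
# placeholder before the text is sent anywhere. The patterns are crude
# demonstrations, not production-grade detectors.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach Jane at [EMAIL] or [PHONE].
```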

Is the product truly something that can improve over time?

As ML models age, they begin to drift and draw incorrect conclusions. ChatGPT, for example, was initially trained on data only through November 2021, meaning it couldn't make sense of events that occurred after that date.

Enterprise AI solutions must be optimized for change over time so they keep up with new and valuable data. In the world of finance, for example, a model may have been trained around a specific regulation that later changes with new legislation.

A security vendor may train its model to spot a specific threat, but then a new attack vector comes along. How are those changes reflected to maintain accurate results over time? When buying an AI solution, ask the vendor how they keep their models up to date, and how they think about model drift in general.
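To make model drift less abstract, one common monitoring technique is the population stability index (PSI), which compares the distribution of live inputs or scores against a training-time baseline. The sketch below is illustrative; the synthetic data and the roughly 0.2 alert threshold are assumptions, not a universal standard:

```python
# Illustrative drift check: PSI between a training baseline and live data.
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Rough PSI between a training-time baseline and live production data."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor the bin fractions so the division and log stay defined.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Stand-in data: pretend the model's input distribution shifted in production.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(0.5, 1.3, 10_000)

psi = population_stability_index(baseline, live)
# A common rule of thumb: PSI above ~0.2 signals drift worth investigating.
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```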

What does the technical team behind the product look like?


