For all of the customer service jockeys out there, did you know that, according to a recent PwC study, businesses should "take advantage of automation, but make sure customers can reach a human when one is needed. In turn, automated solutions should 'learn' from human interactions so those experiences also improve"?

Of course you did. You live and breathe customer support every day, all day. In addition, you're constantly exploring new technologies to improve your metrics as well as the customer support experience. Artificial intelligence (AI) is one of those promising technologies.

Examples of How Artificial Intelligence is Used in Customer Support

As an industry, we've talked about AI for years. In recapping the Technology & Services World conference in May, John Ragsdale, Distinguished VP, Technology Research at TSIA, stated, "I opened a recent webinar on [AI] by updating an old quote attributed to Mark Twain, 'Everybody's talking about AI, but nobody's doing anything about it.' Though Twain said 'weather,' not 'AI,' of course." He went on to say, "I feel like many CIOs went to an AI conference and came back offering budget to departments to do something cool with it, but the truth is, we are mostly in investigative mode, without a lot of real examples. Yet."

So, let's talk about some of the real-world learnings and results that we and our customers have discovered along the way.

Creating a Relevant Feature Set

When they search for an answer to their question, customers expect the right answer to appear on the first page, among the first three results. As a case in point, how often do you click through to page 2 or 3 to find a relevant answer?

Search relevancy is the sorting of search results so that the most accurate answer is the first one delivered to a user. Essentially, the relevancy model is a formula for weighing different aspects (features) of a document for a given query by scoring each search result. For example, a simple relevancy model might weigh a match in the title field as 10x more valuable than a match in the content field.
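To make this concrete, here is a minimal sketch of a hand-tuned relevancy model along those lines. The field names, weights, and sample documents are purely illustrative, not Attivio's actual scoring formula.

```python
# A minimal sketch of field-weighted relevancy scoring; the 10x title weight
# and the sample documents are illustrative assumptions, not a product formula.

FIELD_WEIGHTS = {"title": 10.0, "content": 1.0}  # a title match counts 10x a content match

def relevancy_score(doc, query_terms, weights=FIELD_WEIGHTS):
    """Score a document by counting query-term matches in each field, weighted per field."""
    score = 0.0
    for field, weight in weights.items():
        text = doc.get(field, "").lower()
        score += weight * sum(text.count(term.lower()) for term in query_terms)
    return score

docs = [
    {"title": "Billing FAQ", "content": "How to reset your password and update billing info."},
    {"title": "Reset your password", "content": "Step-by-step instructions."},
]

query_terms = ["reset", "password"]
# Sort so the most relevant result is delivered first.
ranked = sorted(docs, key=lambda d: relevancy_score(d, query_terms), reverse=True)
print([d["title"] for d in ranked])  # the title match ranks first
```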

Getting the relevancy formula right is hard. You have to analyze all users' queries and understand their behavior in order to deliver meaningful answers. This is where machine learning comes into play. When building relevancy models, the ability to automatically define the formula for a query both saves time during implementation and results in better answers and insights.

As stated in Towards Data Science, “Automated feature engineering reduced machine learning development time by 10x compared to manual feature engineering while delivering better modeling performance.”
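By way of illustration, here is a hedged sketch of what "automatically defining the formula" can look like: instead of hand-tuning the weights above, a small model learns them from click behavior. The feature names and the tiny click log are invented for the example; a real system would derive them from its own query logs.

```python
# Hedged sketch: learn field weights from click feedback instead of hand-tuning them.
# The feature values and click labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [title_match_score, content_match_score] for one (query, result) pair.
X = np.array([
    [2.0, 1.0],   # strong title match  -> clicked
    [0.0, 3.0],   # content-only match  -> skipped
    [1.0, 0.0],   # title match         -> clicked
    [0.0, 2.0],   # content-only match  -> skipped
])
y = np.array([1, 0, 1, 0])  # 1 = user clicked the result

model = LogisticRegression().fit(X, y)

# The learned coefficients play the same role as the hand-tuned 10x/1x weights.
print(dict(zip(["title", "content"], model.coef_[0].round(2))))
```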

Retraining Machine Learning Models

The ability to retrain your models is a must-have if you want to improve the results users receive from their queries. In essence, retraining machine learning models means that the models autonomously learn and adapt as new data comes in. It is also referred to as auto-adaptive learning, or continual “AutoML”.

Think about your experience as a customer support rep, or as a customer searching for something on a self-serve portal. The first time around, it may take several iterations of a question to receive the right set of answers. The next time you ask, the query should return insightful answers with fewer iterations.

Retraining machine learning models works the same way, only it is automated. Without retraining, you get the same not-so-relevant answers every time, and the model degrades. Because your data changes over time, as do your users' questions, your models should change as well. This phenomenon is known as "concept drift": when machine learning models do not change along with the data, the search results they present become less accurate and less useful.
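As a rough sketch (and not a description of any particular product), retraining can mean periodically folding fresh click feedback into the model so its learned weights follow where user behavior is heading. The weekly feedback batches below are invented for illustration.

```python
# Hedged sketch of periodic retraining: new click feedback is folded into the
# model over time, so its weights follow changing user behavior (concept drift).
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)

def retrain(model, feedback_batch):
    """Update the relevancy model in place with one batch of (features, clicked) feedback."""
    X = np.array([features for features, _ in feedback_batch])
    y = np.array([clicked for _, clicked in feedback_batch])
    model.partial_fit(X, y, classes=[0, 1])

# Week 1: users mostly click results with strong title matches.
retrain(model, [([2.0, 0.0], 1), ([0.0, 2.0], 0)])
# Week 2: behavior shifts toward content matches, and the model adapts.
retrain(model, [([0.0, 3.0], 1), ([2.0, 0.0], 0)])
print(model.coef_)
```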

Explainable AI

We can all agree that machine learning can help us build better and more accurate relevancy models in an agile manner. Explainable AI means that the decisions a model renders are interpretable and that the whole process and intention surrounding the model are transparent as well. To do this effectively, there need to be checks and balances to ensure we don't deliver bad training data to the relevancy models.

Customers expect relevancy models to be both understandable and visible. Without this “check,” we are not inclined to trust the decisions an AI system makes. We want computer systems to work as expected and produce transparent reasons for the decisions they make.

The “balance” is confirmation through an independent source. For example, at Attivio, we’ve incorporated a best practice of working with a knowledge expert to ensure results relevancy. Once verified, the model is updated accordingly.
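For a simple linear relevancy model, one hedged way to provide that transparency is to break each result's score into per-feature contributions, so a knowledge expert can see exactly which features pushed it to the top. The feature names and weights below are illustrative only.

```python
# Hedged sketch of explainability for a linear relevancy model: break a result's
# score into per-feature contributions so a reviewer can see why it ranked first.
# The weights and feature values are illustrative assumptions.

def explain_score(features, weights):
    """Return each feature's contribution to the final relevancy score, plus the total."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return contributions, sum(contributions.values())

weights = {"title_match": 10.0, "content_match": 1.0, "freshness": 0.5}
features = {"title_match": 1.0, "content_match": 3.0, "freshness": 2.0}

contributions, total = explain_score(features, weights)
print(f"relevancy score = {total:.1f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.1f}")
```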

Learn More about Artificial Intelligence and Machine Learning in Customer Support

In a recent webcast with John Ragsdale, "A Practical Guide for Machine Learning," machine learning is defined as "capturing the human experience, learning from every transaction to improve the customer experience and providing the best results, the right answers, and best recommendations every single time." Based on our experience with both large and small installations, the learnings above are capabilities organizations should seriously consider when researching insight engines.

If you're interested in learning more about machine learning and how you can improve your customer resolution rates, download the latest Gartner Magic Quadrant: Insight Engines 2019, in which Attivio was named a leader.

 


About the Author: Dorit Zilbershot

Dorit Zilbershot is Chief Product Officer at Attivio. She sets product strategy and leads development for the Attivio Platform and Elevate. Dorit brings more than 15 years of experience leading software development teams. Prior to Attivio, she led the Boston Informatics group at the VA in building a genomic research platform to advance personalized medicine. She previously served as a Major in the Israel Defense Forces. Dorit holds an MPA from Harvard University, an M.Sc. in Medical Science from Tel Aviv University, and a BA in Computer Science from the Technion.