
C is for Classifier

From A to I to Z: Jaid’s Guide to Artificial Intelligence

Classifiers are machine learning algorithms that learn the characteristics of the data they’re exposed to and organize it into predetermined categories. This makes it easier for AI to recognize inputs and respond to them appropriately.

Classifiers are the librarians of the AI world.

Just as a librarian sorts books by topic, genre, and author so it’s easier for you to find what you’re looking for, a classifier assigns data to specific categories so the AI can home in as quickly as possible on the information it needs to do its job.

Classifiers can be trained to sort data into almost any type of category.

The categories could be topics — sports, food and drink, and politics, for instance — semantic categories like sentiment and intent, or any other type of classification that is useful to the AI. It all depends on what the AI is being trained to do.

For this reason, they’re used in a wide range of machine learning applications, including image recognition, medical diagnosis, and natural language processing.

In natural language processing, classifiers help AI identify the topic of a sentence, the sentiment behind it, and even whether it’s a question, order, or request.

This makes it possible for AI to provide relevant, helpful responses to human queries.
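
To make that concrete, here is a minimal sketch of how such a text classifier might be trained, assuming scikit-learn is available; the sentences, intent labels, and the final query are invented purely for illustration.

```python
# A minimal sketch of a text classifier, assuming scikit-learn is installed.
# The example sentences and intent labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each sentence is labelled with the intent behind it.
sentences = [
    "Where is my order?",             # question
    "Please cancel my subscription",  # request
    "Send me the invoice now",        # order
    "What are your opening hours?",   # question
]
labels = ["question", "request", "order", "question"]

# Turn the text into numeric features, then fit a classifier on top of them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# The trained model assigns new, unseen sentences to one of the known categories.
print(model.predict(["Can you update me on my ticket?"]))
```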

Types of Classifiers

Logistic Regression

Despite its name, Logistic Regression is a type of classification model, not a regression model (which predicts a continuous value). It produces the probability that a data point is a positive example of a class by taking a linear combination of features and learned weights and “squashing” it into a value between zero and one using the logistic function.
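
As a rough sketch of that calculation, here is the linear combination and the logistic “squash” in NumPy; the feature values, weights, and bias below are invented for illustration.

```python
import numpy as np

def logistic(z):
    # The logistic (sigmoid) function squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: a feature vector, learned weights, and a bias term.
x = np.array([2.0, -1.0, 0.5])   # features of one data point
w = np.array([0.8, 0.4, -1.2])   # learned weights
b = 0.1                          # learned bias

# Linear combination of features and weights, squashed into a probability.
p_positive = logistic(np.dot(w, x) + b)
print(p_positive)  # probability that this point belongs to the positive class
```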

Softmax Regression

Softmax Regression is a generalisation of the standard logistic regression model to multi-class classification problems: instead of a single probability, it produces one probability for each class.
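
A quick illustration of the softmax step itself, using invented scores for three classes:

```python
import numpy as np

def softmax(scores):
    # Subtracting the maximum keeps the exponentials numerically stable.
    exp_scores = np.exp(scores - np.max(scores))
    return exp_scores / exp_scores.sum()

# Hypothetical linear scores for three classes: sports, food & drink, politics.
scores = np.array([2.1, 0.3, -1.0])
probs = softmax(scores)
print(probs)        # one probability per class
print(probs.sum())  # the probabilities sum to 1
```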

Support Vector Machines

Support Vector Machines (SVMs) use hyperplanes (decision boundaries) for classification — lines, or their higher-dimensional equivalents, that separate the data. The AI can then predict or decide what happens next based on which side of the boundary a data point falls on. This method differs from Logistic Regression in that the SVM explicitly fits the decision boundary that maximises the separation between classes.

SVMs are useful for categorizing data that would otherwise be challenging to classify, such as the various objects in an image, medical symptoms, and language sentiment — for example, is a sentence positive or negative?

The Support Vectors in SVMs are the data points closest to the decision boundary, the points that “support” the hyperplane and determine exactly where it sits.

SVMs can also be generalised, using kernel functions, to produce decision boundaries that are non-linear.
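
As a rough sketch of both ideas, assuming scikit-learn and a tiny invented dataset:

```python
# A minimal sketch using scikit-learn's SVC; the toy dataset is invented for illustration.
from sklearn.svm import SVC

# Two features per point; the two classes are roughly separable by a straight line.
X = [[0.0, 0.0], [0.2, 0.5], [1.8, 2.0], [2.0, 1.7]]
y = [0, 0, 1, 1]

# A linear kernel fits a straight-line (hyperplane) decision boundary...
linear_svm = SVC(kernel="linear").fit(X, y)

# ...while an RBF kernel lets the boundary bend around the data (a non-linear boundary).
rbf_svm = SVC(kernel="rbf").fit(X, y)

print(linear_svm.predict([[0.1, 0.2]]))  # falls on the class-0 side of the line
print(linear_svm.support_vectors_)       # the points that "support" the hyperplane
```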

Decision Trees

Decision Trees are perhaps the most intuitive classifier. As the name suggests, they use a tree-like structure to predict or decide what happens next. They also underpin a family of ensemble algorithms that includes Random Forests, gradient boosting, and XGBoost.

Decision trees are hierarchical models that classify a data point by asking successive binary questions about the features of the data, eventually coming to a decision as if following a flow chart. Random Forests and Gradient Boosted trees combine the outcomes of multiple trees to produce a final prediction. The AI can then follow the same pattern used to build the tree to make predictions or decisions about new data.
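
Here is a minimal sketch of a single tree and a random forest built from many such trees, assuming scikit-learn and an invented toy dataset:

```python
# A minimal sketch, assuming scikit-learn; the toy dataset is invented for illustration.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Each row is a data point described by two features; y holds its category.
X = [[25, 0], [40, 1], [35, 1], [22, 0], [50, 1]]
y = ["no", "yes", "yes", "no", "yes"]

# A single tree learns a flow chart of binary questions about the features.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# A random forest combines many such trees and lets them vote on the final answer.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)

print(tree.predict([[30, 1]]))
print(forest.predict([[30, 1]]))
```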

Neural Networks

Neural networks are made up of multiple layers of interconnected nodes, called neurons, loosely inspired by the structure and function of the human brain.

Each neuron receives input from the neurons in the previous layer, processes it using a set of weights (learned values that determine how much influence each input has on the end result) and produces an output that it passes on to the next layer.

Neural networks are typically used for complex classification tasks, because of the huge amounts of data they can handle and the intricate patterns they can learn from it.
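
As a sketch of how those layers pass values forward, here is a tiny two-layer network in NumPy; the weights are random stand-ins for values a real network would learn during training.

```python
import numpy as np

def relu(z):
    # A common activation: each neuron passes on only the positive part of its input.
    return np.maximum(0.0, z)

# Illustrative "learned" weights for a tiny network: 3 inputs -> 4 hidden neurons -> 2 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.2, 3.0])  # features of one data point

# Each layer weights its inputs, adds a bias, and passes the result to the next layer.
hidden = relu(W1 @ x + b1)
scores = W2 @ hidden + b2

# Softmax turns the final scores into class probabilities (as in softmax regression above).
probs = np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()
print(probs)
```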

Some facts:

AI pioneer John McCarthy coined the term “classifier” in the 1950s to describe the process of sorting and categorising data. The first known classifier, however, was developed in the 1800s by Charles Darwin: the system he used to organise and describe the plants and animals he observed on his travels.

The first machine learning classifier was a decision tree developed by Arthur Samuel, another artificial intelligence pioneer who was active in the 1950s. Samuel’s classifier taught a computer how to play checkers. Despite the limited processing power and memory available at the time, by the 1970s the program was good enough to beat a respectable amateur, making it the first computer able to beat a human at the game.

Want to know more?

Arthur Samuel was a huge advocate of teaching computers how to play games. He believed that, once machines acquired these skills, the same techniques could easily be applied more generally to solve everyday problems. This paper outlines his thinking and approach to machine learning in more detail.

If you want to get more technical, this hour-long video tutorial is a deep dive into classification, including how to train classifier algorithms and key use cases.

Jaid’s perspective

Classifiers are a critical part of natural language processing, because they enable AI to accurately understand meaning and context and address human queries in a way that is helpful and satisfying. In customer service, classifiers make it possible for AI to identify the nature of a query, ask for more information if needed, and even recognise that an inquiry is a follow-up, so it can provide an update.

Schedule a demo today, and we’ll show you how Jaid’s AI platform utilizes classifiers to interpret human communications, automate responses, extract data, organise actionable insights and integrate with your existing workflows.