Natural Language Processing

What Is Natural Language Processing (NLP)?


Natural language processing (NLP) is a branch of artificial intelligence (AI) that teaches computers to understand human language in both spoken and written forms. Natural language processing combines computational linguistics with machine learning and deep learning to process speech and text data, and this data can be used alongside other types of data to develop smart engineered systems.

How Natural Language Processing Works

Natural language processing aims to transform unstructured language data into a structured format that machines can use to interpret speech and text data, discover and visualize complex relationships in large data sets, and generate new language data.

Raw human language data can come from various sources, including audio signals, web and social media, documents, and databases. The data contains valuable information such as voice commands, public sentiment on topics, operational data, and maintenance reports. Natural language processing can combine and simplify these large sources of data, transforming them into meaningful insights with visualizations and topic models.

Speech and text data are fed to an AI model for natural language processing.

Natural language processing combines computational linguistics with AI modeling to interpret speech and text data.

To perform natural language processing on speech data, you first detect the presence of human speech in an audio segment, then transcribe the speech to text, and finally apply text mining and machine learning techniques to the derived text.

Data Preparation for Natural Language Processing

Some techniques used in natural language processing to convert text from an unstructured format to a structured format are:

Tokenization: Typically, this is the first step in text processing for natural language processing. It refers to splitting up the text into sentences or words.
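
As a minimal sketch, tokenization can be illustrated in Python with regular expressions. The splitting rules here are simplified assumptions; production tokenizers handle abbreviations, contractions, and punctuation much more carefully.

```python
import re

def tokenize(text):
    """Split text into sentences, then split each sentence into word tokens."""
    # Naive sentence boundary: whitespace after ., !, or ?
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Naive word tokens: runs of letters, digits, or apostrophes
    return [re.findall(r"[A-Za-z0-9']+", s) for s in sentences]

print(tokenize("The pump failed. Replace the seal!"))
# → [['The', 'pump', 'failed'], ['Replace', 'the', 'seal']]
```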

Stemming: This text normalization technique reduces words to their root forms by removing affixes of the words. It uses simple heuristic rules and may result in invalid dictionary words.
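
A toy suffix-stripping stemmer illustrates the heuristic nature of stemming; the suffix list below is an illustrative assumption, far simpler than real stemmers such as the Porter stemmer. Note how "studies" becomes the invalid dictionary word "studi".

```python
def stem(word):
    """Naive stemmer: strip the first matching suffix using heuristic rules."""
    for suffix in ("ing", "ly", "ed", "es", "s"):
        # Keep at least a 3-character stem to avoid over-stripping
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([stem(w) for w in ["building", "quickly", "floors", "studies"]])
# → ['build', 'quick', 'floor', 'studi']
```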

Lemmatization: This sophisticated text normalization technique uses vocabulary and morphological analysis to remove affixes of words. For example, “building has floors” reduces to “build have floor.”
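
A dictionary-based sketch shows the idea behind this example; the tiny lemma table is hypothetical, whereas real lemmatizers use full vocabularies and part-of-speech-aware morphological analysis.

```python
# Toy lemma lookup table (hypothetical); real lemmatizers consult a
# vocabulary such as WordNet and analyze each word's morphology.
LEMMAS = {"building": "build", "has": "have", "floors": "floor", "is": "be"}

def lemmatize(word):
    """Return the dictionary lemma of a word, or the word itself if unknown."""
    return LEMMAS.get(word, word)

print(" ".join(lemmatize(w) for w in "building has floors".split()))
# → build have floor
```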

Word2vec: Word2vec is the most popular of the word embedding techniques. It maps words to numerical vectors, a distributed representation that captures semantics and relationships among words.
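
Word vectors can then be compared with cosine similarity. The four-dimensional embeddings below are made-up toy values for illustration; real Word2vec vectors are learned from large corpora and typically have hundreds of dimensions.

```python
import math

# Hypothetical toy embeddings; real Word2vec vectors are learned, not chosen.
vec = {
    "king":  [0.8, 0.6, 0.1, 0.0],
    "queen": [0.7, 0.7, 0.1, 0.1],
    "pump":  [0.0, 0.1, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(vec["king"], vec["queen"]))  # high: related words
print(cosine(vec["king"], vec["pump"]))   # low: unrelated words
```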

N-gram modeling: An n-gram is a collection of n successive items in a text document that may include words, numbers, symbols, and punctuation. N-gram models can be useful in natural language processing applications where sequences of words are relevant, such as in sentiment analysis, text classification, and text generation.
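
Extracting and counting n-grams takes only a few lines; the example sentence below is made up for illustration.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return every tuple of n successive tokens in the sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the pump failed and the pump overheated".split()
bigrams = ngrams(tokens, 2)
print(Counter(bigrams).most_common(1))
# → [(('the', 'pump'), 2)]
```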

Natural Language Processing with AI

AI models trained on language data can recognize patterns and predict subsequent characters or words in a sentence. To build natural language processing models, you can use classical machine learning algorithms, such as logistic regression or decision trees, or use deep learning architectures, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and autoencoders. For example, you can use CNNs to classify text and RNNs to generate a sequence of characters.
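
The classical machine learning route can be sketched as a bag-of-words logistic regression trained with gradient descent. The four labeled reviews below are made-up toy data, and the training loop is deliberately minimal; real pipelines use larger corpora, regularization, and library implementations.

```python
import math

# Hypothetical labeled reviews: 1 = positive sentiment, 0 = negative
docs = [("good great product", 1), ("bad awful product", 0),
        ("great quality", 1), ("awful service", 0)]
vocab = sorted({w for text, _ in docs for w in text.split()})

def features(text):
    """Bag-of-words vector: count of each vocabulary word in the text."""
    words = text.split()
    return [words.count(w) for w in vocab]

# Train logistic regression with stochastic gradient descent
w = [0.0] * len(vocab)
for _ in range(200):
    for text, y in docs:
        x = features(text)
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        w = [wi + 0.5 * (y - p) * xi for wi, xi in zip(w, x)]

def predict(text):
    """Return the predicted probability that the text is positive."""
    x = features(text)
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

print(predict("great product"))  # near 1: positive
print(predict("awful product"))  # near 0: negative
```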

Transformer models (a type of deep learning model) revolutionized natural language processing, and they are the basis for large language models (LLMs) such as BERT and ChatGPT™. Transformers are designed to track relationships in sequential data. They rely on a self-attention mechanism to capture global dependencies between input and output.

In the context of natural language processing, this allows LLMs to capture long-term dependencies, complex relationships between words, and nuances present in natural language. LLMs can process all words in parallel, which speeds up training and inference.
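
The core of the self-attention mechanism can be sketched in a few lines of scaled dot-product attention. This toy version uses the inputs directly as queries, keys, and values; real transformers learn separate projection matrices and run multiple attention heads in parallel.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention with identity Q, K, V projections.

    Each output row is a weighted mix of all rows of X, where the weights
    come from the softmax of scaled dot products between rows.
    """
    d = len(X[0])
    scores = [[sum(q * k for q, k in zip(qi, ki)) / math.sqrt(d) for ki in X]
              for qi in X]
    weights = [softmax(row) for row in scores]
    return [[sum(w * v for w, v in zip(wr, col)) for col in zip(*X)]
            for wr in weights]

# A toy sequence of two 4-dimensional token vectors
X = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
print(self_attention(X))
```

Because every row attends to every other row in one matrix operation, all positions can be processed in parallel, which is the property that speeds up training and inference.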

Similar to other pretrained deep learning models, you can perform transfer learning with pretrained LLMs to solve a particular problem in natural language processing. For example, you can fine-tune a BERT model for Japanese text.

Why Natural Language Processing Matters

Natural language processing teaches machines to understand and generate human language. The applications are vast, and as AI technology evolves, the use of natural language processing will expand, from everyday tasks to advanced engineering workflows.

Common tasks in natural language processing are speech recognition, speaker recognition, speech enhancement, and named entity recognition. In a subset of natural language processing, referred to as natural language understanding (NLU), you can use syntactic and semantic analysis of speech and text to extract the meaning of a sentence. NLU tasks include document classification and sentiment analysis.

Illustration of the output of NLP tasks. On the left, five different speakers are recognized in an audio signal. On the right, classified word clouds for positive and negative words.

Speaker recognition and sentiment analysis are common tasks of natural language processing.

Another sub-area of natural language processing, referred to as natural language generation (NLG), encompasses methods computers use to produce a text response given a data input. While NLG started as template-based text generation, AI techniques have enabled dynamic text generation in real time. NLG tasks include text summarization and machine translation.

The two major areas of natural language processing (NLP) are natural language understanding (NLU) and natural language generation (NLG).

Natural language processing and its sub-areas.

Natural language processing is used in finance, manufacturing, electronics, software, information technology, and other industries for applications such as:

  • Automating the classification of reviews based on sentiment (positive or negative)
  • Counting the frequency of words or phrases in documents and performing topic modeling
  • Automating labeling and tagging of speech recordings
  • Developing predictive maintenance schedules based on sensor and text log data
  • Automating requirement formalization and compliance checking
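
As one concrete illustration of the word-frequency use case above, a handful of hypothetical maintenance reports can be counted with a few lines of Python:

```python
from collections import Counter
import re

# Made-up maintenance reports for illustration
reports = [
    "pump seal leaking, replaced seal",
    "motor overheating, pump vibration high",
    "replaced pump bearing",
]
words = [w for r in reports for w in re.findall(r"[a-z]+", r.lower())]
print(Counter(words).most_common(3))
# → [('pump', 3), ('seal', 2), ('replaced', 2)]
```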

Natural Language Processing with MATLAB

MATLAB enables you to create natural language processing pipelines from data preparation to deployment. Using Deep Learning Toolbox™ or Statistics and Machine Learning Toolbox™ with Text Analytics Toolbox™, you can perform natural language processing on text data. By also using Audio Toolbox™, you can perform natural language processing on speech data.

The complete NLP workflow includes accessing and exploring text data, preprocessing the data, developing predictive models, and sharing insights and models.

Extended workflow for natural language processing.

Data Preparation

You can use low-code apps to preprocess speech data for natural language processing. The Signal Analyzer app lets you explore and analyze your data, and the Signal Labeler app automatically labels ground-truth data. You can use the Extract Audio Features task to extract domain-specific features and perform time-frequency transformations. Then, you can transcribe speech to text by using the speech2text function.

Once you have text data for applying natural language processing, you can transform the unstructured language data to a structured format interactively and clean your data with the Preprocess Text Data Live Editor task. Alternatively, you can prepare your NLP data programmatically with built-in functions.

Using word clouds and scatter plots, you can also visualize text data and models for natural language processing.

 Illustration of cleaning text data for natural language processing. On the left: word cloud of raw data. On the right: word cloud of cleaned data.

Word clouds that illustrate word frequency analysis applied to raw and cleaned text data from factory reports.

AI Modeling

You can train many types of machine learning models for classification or regression. For example, you can create and train long short-term memory (LSTM) networks with a few lines of MATLAB code. You can also create and train deep learning models using the Deep Network Designer app and monitor model training with plots of accuracy, loss, and validation metrics.

Screenshot of the Deep Network Designer app showing a simple BiLSTM network that can be used for natural language processing.

Deep Network Designer app for interactively building, visualizing, editing, and training NLP networks.

Instead of creating a deep learning model from scratch, you can get a pretrained model that you apply directly or adapt to your natural language processing task. With MATLAB, you can access pretrained networks from the MATLAB Deep Learning Model Hub. For example, you can use the VGGish model to extract feature embeddings from audio signals, the wav2vec model for speech-to-text transcription, and the BERT model for document classification. You can also import models from TensorFlow™ or PyTorch™ by using the importNetworkFromTensorFlow or importNetworkFromPyTorch functions.
