Chapter 1

Deep Learning for Signals Basics

AI is becoming mainstream; it can be found everywhere from safety-critical automated driving systems to fraud detection to chatbots. Where once only a small group of machine learning engineers and data scientists could build AI-powered applications, that is now changing.

A growing number of signal processing engineers and domain experts are expanding their skill sets to create AI systems; this is made possible by an increase in pretrained models, existing research, and tools to synthesize and label large data sets.

Deep learning is an AI technique that is particularly well-suited to signal processing applications.

This ebook covers the basics of deep learning for signal processing and the tasks associated with preparing signal data and modeling a deep learning application, demonstrated through a speech processing example: a trigger word detector. Trigger word detection, also known as keyword recognition, is a speech processing task that typically runs embedded on a mobile device, and the techniques it demonstrates apply to a much wider class of signal processing applications.

Check out this short video to see the trigger word detection example in action.

First, a brief overview of how deep learning works with signal data.


Why Deep Learning

Deep learning is the key driving technology of AI-powered systems because it enables models to learn complex patterns and high-level abstractions from large collections of data in order to make a prediction, respond to an input, or take another action.

Signal data is often subject to wider variability—caused by wideband noise, interference, non-linear trends, jitter, phase distortion, and missing samples—compared with other data types. This makes raw signal data difficult to use directly, so it must be prepared before being fed to a deep learning model. For more information, read an in-depth blog post on this topic.
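To make the preparation step concrete, here is a minimal sketch in Python using SciPy. The sampling rate, the synthetic noise and drift, and the spectrogram parameters are all illustrative assumptions, not taken from this ebook; the same kind of detrending and time-frequency transformation is available in MATLAB's signal processing tools.

```python
import numpy as np
from scipy.signal import detrend, spectrogram

fs = 8000  # Hz; illustrative sampling rate (an assumption for this sketch)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "raw" signal: a tone corrupted by a linear drift and wideband noise,
# mimicking the kinds of variability signal data often carries
raw = np.sin(2 * np.pi * 440 * t) + 0.5 * t + 0.2 * np.random.randn(t.size)

# Remove the trend, then convert to a time-frequency representation,
# which is a far more learnable network input than raw samples
cleaned = detrend(raw)
f, seg_times, Sxx = spectrogram(cleaned, fs=fs, nperseg=256, noverlap=128)

print(Sxx.shape)  # (frequency bins, time frames)
```

A time-frequency image like this spectrogram is a common bridge between signal data and deep learning: it turns a one-dimensional waveform into a 2-D input that convolutional networks handle well.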

Two common types of deep learning algorithms, recurrent neural networks (RNNs) and convolutional neural networks (CNNs), are well suited to signal data and common signal processing use cases. You could choose a traditional machine learning model instead, but this would likely limit the complexity of the model and require expert knowledge of the features of the data.
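To illustrate why convolutions suit signals, here is a minimal NumPy sketch of a 1-D convolutional layer's forward pass. The function name and the hand-set filter values are hypothetical; in a trained CNN these weights are learned from data rather than written by hand.

```python
import numpy as np

def conv1d(signal, kernels, stride=1):
    """Valid-mode 1-D convolution: each kernel slides along the signal,
    producing one feature map per kernel, as a CNN layer does."""
    k = kernels.shape[1]
    n_out = (signal.size - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for i in range(n_out):
        window = signal[i * stride : i * stride + k]
        out[:, i] = kernels @ window  # one dot product per kernel
    return out

# Illustrative input pulse and two hand-set kernels
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
kernels = np.array([[-1.0, 1.0],   # responds to rising/falling edges
                    [0.5, 0.5]])   # local average (smoother)
features = conv1d(x, kernels)
print(features)
```

Because the same small kernel is reused at every time step, a CNN can detect a pattern wherever it occurs in the signal; an RNN instead carries a hidden state forward through time, which suits sequence data with longer-range dependencies.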


Deep Learning Workflow

There are four stages in a standard deep learning workflow: create and access data sets, preprocess and transform data, develop predictive models, and accelerate and deploy models. Typically, people do not move through these stages linearly; instead, they take an iterative approach to design, train, and optimize predictive models.

The tasks involved in each stage will vary by project and data source. Signal processing applications in particular use a wide range of data types, and the techniques to create effective learning inputs for the network vary significantly depending on the application. Data preparation and transformation are particularly important tasks in order to use signal data to train a deep learning model.



Deep Network Designer

If you are new to defining network architectures, the interactive Deep Network Designer app may be a good place to start. You can drag and drop layers from a list of available layers and explore the learnable weights that even seemingly simple networks have.

Don’t Start from Scratch

Consider starting from an architecture published in a paper that tackles a similar problem to yours. Research papers often include repositories with prebuilt networks ready to download. Keep in mind that MATLAB® can also import and export prebuilt models from other deep learning frameworks.