How is a convolutional neural network trained with supervised learning for image classification?

There are plenty of online blogs and papers introducing the basic mechanism of how a convolutional neural network processes a single image. For a classification problem, for example classifying an image as a dog or a cat, the result is normally the respective probabilities of the image belonging to the dog and cat categories.
However, I'm not clear on how such a result is compared with the ground truth and then fed back into the network to retrain it.
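(Not an answer from the thread, just a sketch of the standard mechanism.) The usual scheme is: the network's output probabilities are compared with the ground-truth label through a loss function, typically cross-entropy, and the gradient of that loss is the error signal that backpropagation sends back through the network to update the weights. A minimal NumPy illustration, with hypothetical logit values:

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability before exponentiating
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -np.log(probs[label])

# Hypothetical final-layer scores for one image, two classes: [dog, cat]
logits = np.array([2.0, 0.5])
probs = softmax(logits)        # network's predicted probabilities
label = 0                      # ground truth: this image is a dog

loss = cross_entropy(probs, label)

# For softmax + cross-entropy, the gradient of the loss w.r.t. the
# logits is simply (probs - one_hot):
one_hot = np.zeros_like(probs)
one_hot[label] = 1.0
grad_logits = probs - one_hot

# This error signal is what backpropagation pushes through the network
# to update every convolutional filter and fully connected weight.
```

Note that the gradient is negative for the true class and positive for the others, so a gradient-descent step raises the predicted probability of the correct label.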
The demo code online, whether in Python or MATLAB, normally starts by splitting the images into two groups: a training set and a testing set. Then a CNN object is built, and the training set is fed into the CNN object to build a model. How does the CNN process the images? Does it process them one by one?
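(Again a sketch, not a thread answer.) In practice the training set is usually processed in mini-batches rather than strictly one image at a time: each epoch the data are reshuffled, the network does a forward pass on a whole batch, and one weight update is made from the averaged gradient. A toy NumPy example with synthetic stand-in "images" and a tiny linear softmax classifier (the batching logic is the same for a real CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for image data: 100 "images" flattened to 20 features, 2 classes
X = rng.normal(size=(100, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)      # synthetic, linearly separable labels

W = np.zeros((20, 2))                 # weights of a tiny linear classifier
lr, batch_size = 0.5, 16

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(20):
    idx = rng.permutation(len(X))     # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        probs = softmax(X[b] @ W)     # forward pass on the whole mini-batch
        one_hot = np.eye(2)[y[b]]
        grad = X[b].T @ (probs - one_hot) / len(b)  # averaged gradient
        W -= lr * grad                # one SGD update per mini-batch

train_acc = (softmax(X @ W).argmax(axis=1) == y).mean()
```

Mini-batches are a compromise: batch size 1 gives noisy updates, the full dataset gives slow ones; frameworks such as MATLAB's Deep Learning Toolbox expose this as a `MiniBatchSize` training option.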

Answers (0)

This question is closed.

Asked: 28 Jun 2017

Closed: 20 Aug 2021
