Low training & validation accuracy when adding a new data set

7 views (last 30 days)
YA on 22 Aug 2020
Answered: Pranav Verma on 28 Aug 2020
I'm training a 2D semantic segmentation network with a U-Net architecture on 2D PNG images (which are slices of 3D NIfTI images).
I'm starting from pre-trained network weights that I obtained earlier on ~2k images (reaching ~75% validation accuracy), and I'm trying to continue training on THE SAME images plus ~2k additional images from the same source. In every training run I tried, I got very (!) low accuracy the whole time (8%), for both validation and training. I tried all kinds of parameters, checked that the pixelLabelDatastore and imageDatastore files correspond to each other (so I didn't mix them up while adding the new data set), and checked the label IDs and class names, which are fine. I also tried training only on the new data set and still got low values (8%). On top of that, I ran prediction on ~900 NEW images with the old weights I trained before and got 62% accuracy, not 8%! Does anyone have an idea why this is happening and what needs to be done to fix it?
*** Below is an example from one of my tries; I stopped after only 4 epochs (however, I also tried with many more epochs and got the same results).
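For reference, the continued-training setup looks roughly like this (folder names, class names, label IDs, and the MAT-file name below are placeholders, not my actual values):

```matlab
% Rough sketch of the setup (paths and names are placeholders).
classNames = ["background","organ"];     % placeholder class names
labelIDs   = [0 255];                    % placeholder label IDs

% Old + new slices live in the same folders after adding the new data.
imds = imageDatastore("images/");
pxds = pixelLabelDatastore("labels/", classNames, labelIDs);

% Sanity check: image and label files must line up one-to-one.
assert(numel(imds.Files) == numel(pxds.Files));

% Continue training from the previously trained U-Net.
load("pretrainedUnet.mat","net");        % placeholder MAT-file
lgraph = layerGraph(net);

opts = trainingOptions("adam", ...
    "InitialLearnRate",1e-4, ...
    "MaxEpochs",4, ...
    "Shuffle","every-epoch", ...
    "Plots","training-progress");

ds   = pixelLabelImageDatastore(imds, pxds);
net2 = trainNetwork(ds, lgraph, opts);
```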

Answers (1)

Pranav Verma on 28 Aug 2020
Hi YA,
The distribution of the dataset can also play a role in the validation accuracy of the trained model. If the dataset is not uniformly distributed, the model may perform well on the part of the data it has effectively overfitted to, but poorly on the rest.
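One way to check this is to look at the per-class pixel counts and, if the classes are heavily imbalanced, apply class weighting in the segmentation layer. A minimal sketch, assuming pxds is your pixelLabelDatastore and lgraph is the U-Net layer graph (the final layer name "Segmentation-Layer" is an assumption based on unetLayers; check lgraph.Layers to confirm it in your network):

```matlab
% Inspect the per-class pixel distribution of the training labels.
tbl = countEachLabel(pxds);          % columns: Name, PixelCount, ImagePixelCount
disp(tbl);

% Median-frequency class weights to compensate for class imbalance.
imageFreq    = tbl.PixelCount ./ tbl.ImagePixelCount;
classWeights = median(imageFreq) ./ imageFreq;

% Replace the final pixel classification layer with a weighted one.
pxLayer = pixelClassificationLayer("Name","Segmentation-Layer", ...
    "Classes",tbl.Name,"ClassWeights",classWeights);
lgraph  = replaceLayer(lgraph,"Segmentation-Layer",pxLayer);
```

With the weighted layer in place, retrain on the combined datastore and compare the per-class accuracy before and after.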

