Question about training tif images

2 views (last 30 days)
tamar wase
tamar wase on 15 Sep 2019
Answered: tamar wase on 18 Sep 2019
Hi,
I need some help. In the line:
rcnn = trainRCNNObjectDetector(AcneAndBeautySpots,cifar10Net, options,'NegativeOverlapRange', [0 0.01], 'PositiveOverlapRange',[0.011 0.1])
I get this error:
Error using vision.internal.rcnn.TrainingImageRegionDatastore (line 138)
Unable to find any region proposals to use as positive training samples. Lower the first value of PositiveOverlapRange to
increase the number of positive region proposals.
Error in rcnnObjectDetector/createTrainingDispatcher (line 933)
dispatcher = vision.internal.rcnn.TrainingImageRegionDatastore(...
Error in rcnnObjectDetector.train (line 223)
dispatcher = createTrainingDispatcher(detector, trainingData, regionProposals, opts, params.InputSize, params);
Error in trainRCNNObjectDetector (line 280)
[detector, ~, info] = rcnnObjectDetector.train(trainingData, lgraphOrLayers, options, params);
Error in MyRCNNObjectDetectionDemo (line 403)
rcnn = trainRCNNObjectDetector(AcneAndBeautySpots,cifar10Net, options,'NegativeOverlapRange', [0 0.01],
'PositiveOverlapRange',[0.011 0.1])
I changed the positive and negative overlap ranges many times, and I also removed those properties from the line entirely, but there is still an error.
Please help me!
I've been sitting on this code for almost two weeks trying all kinds of options, but no one has been able to help.
  1 comment
Walter Roberson
Walter Roberson on 15 Sep 2019
"but no one helps"
I responded to your earlier question. Unfortunately your response was too vague for me to proceed.
I doubt that anyone will be able to assist without seeing your code, and a copy of any extra data beyond cifar10Net.


Answers (1)

tamar wase
tamar wase on 18 Sep 2019
this is our code:
close all;
cifar10Data = pwd;
%addpath('C:\Users\ronip\OneDrive - Systematics LTD\Documents\MATLAB\Examples\deeplearning_shared\DeepLearningRCNNObjectDetectionExample');
%url = 'https://www.cs.toronto.edu/~kriz/cifar-10-matlab.tar.gz';
addpath('C:\Users\תמר ווסה\Desktop\study_lest_year_ba\final_project\cbct hadassah_original\3.6.19 from RAGDA\Dicom anonymous\8_convert')
% helperCIFAR10Data.download(url, cifar10Data);
% Load the CIFAR-10 training and test data.
[trainingImages, trainingLabels, testImages, testLabels] = helperCIFAR10Data.load(cifar10Data);
% In order to save time:
%load CIFAR10DatahelperCIFAR10Data
%%
% Each image is a 32x32 RGB image and there are 50,000 training samples.
size(trainingImages)
%%
% CIFAR-10 has 10 image categories. List the image categories:
numImageCategories = 10;
categories(trainingLabels)
%%
% Display a few of the training images, resizing them for display.
figure
thumbnails = trainingImages(:,:,:,1:100);
thumbnails = imresize(thumbnails, [64 64]);
montage(thumbnails)
%% Create A Convolutional Neural Network (CNN)
% A CNN is composed of a series of layers, where each layer defines a
% specific computation. The Neural Network Toolbox(TM) provides
% functionality to easily design a CNN layer-by-layer. In this example, the
% following layers are used to create a CNN:
%
% * |imageInputLayer| - Image input layer
% * |convolution2dLayer| - 2D convolution layer for Convolutional Neural Networks
% * |reluLayer| - Rectified linear unit (ReLU) layer
% * |maxPooling2dLayer| - Max pooling layer
% * |fullyConnectedLayer| - Fully connected layer
% * |softmaxLayer| - Softmax layer
% * |classificationLayer| - Classification output layer for a neural network
%
% The network defined here is similar to the one described in [4] and
% starts with an |imageInputLayer|. The input layer defines the type and
% size of data the CNN can process. In this example, the CNN is used to
% process CIFAR-10 images, which are 32x32 RGB images:
% Create the image input layer for 32x32x3 CIFAR-10 images
[height, width, numChannels, ~] = size(trainingImages);
imageSize = [height width numChannels];
inputLayer = imageInputLayer([32 32 3]);
% inputLayer = imageInputLayer([227 227 2]);
%%
% Next, define the middle layers of the network. The middle layers are made
% up of repeated blocks of convolutional, ReLU (rectified linear units),
% and pooling layers. These 3 layers form the core building blocks of
% convolutional neural networks. The convolutional layers define sets of
% filter weights, which are updated during network training. The ReLU layer
% adds non-linearity to the network, which allows the network to approximate
% non-linear functions that map image pixels to the semantic content of the
% image. The pooling layers downsample data as it flows through the
% network. In a network with lots of layers, pooling layers should be used
% sparingly to avoid downsampling the data too early in the network.
% Convolutional layer parameters
filterSize = [5 5];
numFilters = 32;
middleLayers = [
convolution2dLayer(filterSize, numFilters, 'Padding', 2)
reluLayer()
convolution2dLayer(filterSize, numFilters, 'Padding', 2)
reluLayer()
maxPooling2dLayer(3, 'Stride',2)
];
% middleLayers = [
%
% % The first convolutional layer has a bank of 32 5x5x3 filters. A
% % symmetric padding of 2 pixels is added to ensure that image borders
% % are included in the processing. This is important to avoid
% % information at the borders being washed away too early in the
% % network.
% convolution2dLayer(filterSize, numFilters, 'Padding', 2)
%
% % Note that the third dimension of the filter can be omitted because it
% % is automatically deduced based on the connectivity of the network. In
% % this case because this layer follows the image layer, the third
% % dimension must be 3 to match the number of channels in the input
% image.
%
% Next add the ReLU layer:
% reluLayer()
%
% Follow it with a max pooling layer that has a 3x3 spatial pooling area
% and a stride of 2 pixels. This down-samples the data dimensions from
% 32x32 to 15x15.
% maxPooling2dLayer(3, 'Stride', 2)
%
% Repeat the 3 core layers to complete the middle of the network.
% convolution2dLayer(filterSize, numFilters, 'Padding', 2)
% reluLayer()
% maxPooling2dLayer(3, 'Stride',2)
%
% convolution2dLayer(filterSize, 2 * numFilters, 'Padding', 2)
% reluLayer()
% maxPooling2dLayer(3, 'Stride',2)
%
% ]
%%
% A deeper network may be created by repeating these 3 basic layers.
% However, the number of pooling layers should be reduced to avoid
% downsampling the data prematurely. Downsampling early in the network
% discards image information that is useful for learning.
%
% The final layers of a CNN are typically composed of fully connected
% layers and a softmax loss layer.
finalLayers = [
% Add a fully connected layer with 64 output neurons. The output size of
% this layer will be an array with a length of 64.
fullyConnectedLayer(64)
% Add an ReLU non-linearity.
reluLayer
% Add the last fully connected layer. At this point, the network must
% produce 10 signals that can be used to measure whether the input image
% belongs to one category or another. This measurement is made using the
% subsequent loss layers.
fullyConnectedLayer(numImageCategories)
% Add the softmax loss layer and classification layer. The final layers use
% the output of the fully connected layer to compute the categorical
% probability distribution over the image classes. During the training
% process, all the network weights are tuned to minimize the loss over this
% categorical distribution.
softmaxLayer
classificationLayer
]
%%
% Combine the input, middle, and final layers to form the full network.
% (This assembly is needed before the first-layer weights below can be set.)
layers = [
inputLayer
middleLayers
finalLayers
]
% Initialize the first convolutional layer weights using normally
% distributed random numbers with standard deviation of 0.0001. This helps
% improve the convergence of training.
layers(2).Weights = 0.0001 * randn([filterSize numChannels numFilters]);
%% Train CNN Using CIFAR-10 Data
% Now that the network architecture is defined, it can be trained using the
% CIFAR-10 training data. First, set up the network training algorithm
% using the |trainingOptions| function. The network training algorithm uses
% Stochastic Gradient Descent with Momentum (SGDM) with an initial learning
% rate of 0.001. During training, the initial learning rate is reduced
% every 8 epochs (1 epoch is defined as one complete pass through the
% entire training data set). The training algorithm is run for 40 epochs.
%
% Note that the training algorithm uses a mini-batch size of 128 images.
% This size may need to be lowered when training deeper networks due to
% memory constraints on the GPU.
% Set the network training options
opts = trainingOptions('sgdm', ...
'Momentum', 0.9, ...
'InitialLearnRate', 0.001, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.1, ...
'LearnRateDropPeriod', 8, ...
'L2Regularization', 0.004, ...
'MaxEpochs', 40, ...
'MiniBatchSize', 128, ...
'Verbose', true);
%%
% Train the network using the |trainNetwork| function. This is a
% computationally intensive process that takes 20-30 minutes to complete.
% To save time while running this example, a pre-trained network is loaded
% from disk. If you wish to train the network yourself, set the
% |doTraining| variable shown below to true.
%
% Note that training a network requires a CUDA-capable NVIDIA(TM) GPU with
% compute capability 3.0 or higher.
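% A quick way to confirm the GPU requirement is met (a small sketch; it
% assumes Parallel Computing Toolbox is installed):
% gpu = gpuDevice;
% fprintf('GPU compute capability: %s\n', gpu.ComputeCapability);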
% A trained network is loaded from disk to save time when running the
% example. Set this flag to true to train the network.
doTraining = false;
if doTraining
% Train a network.
cifar10Net = trainNetwork(trainingImages, trainingLabels, layers, opts);
else
% Load pre-trained detector for the example.
load('cifar10Net.mat','cifar10Net')
end
%% Validate CIFAR-10 Network Training
% After the network is trained, it should be validated to ensure that
% training was successful. First, a quick visualization of the first
% convolutional layer's filter weights can help identify any immediate
% issues with training.
% Extract the first convolutional layer weights
w = cifar10Net.Layers(2).Weights;
% rescale and resize the weights for better visualization
w = mat2gray(w);
w = imresize(w, [100 100]);
figure
montage(w)
%%
% The first layer weights should have some well defined structure. If the
% weights still look random, then that is an indication that the network
% may require additional training. In this case, as shown above, the first
% layer filters have learned edge-like features from the CIFAR-10 training
% data.
%
% To completely validate the training results, use the CIFAR-10 test data
% to measure the classification accuracy of the network. A low accuracy
% score indicates additional training or additional training data is
% required. The goal of this example is not necessarily to achieve 100%
% accuracy on the test set, but to sufficiently train a network for use in
% training an object detector.
% Run the network on the test set.
YTest = classify(cifar10Net, testImages);
% Calculate the accuracy.
accuracy = sum(YTest == testLabels)/numel(testLabels)
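% A per-class breakdown can show whether a few categories are dragging the
% overall accuracy down (a small sketch; confusionmat requires Statistics
% and Machine Learning Toolbox):
% C = confusionmat(testLabels, YTest);
% perClassAccuracy = diag(C) ./ sum(C, 2)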
%%
% Further training will improve the accuracy, but that is not necessary for
% the purpose of training the R-CNN object detector.
%% Load Training Data
% Now that the network is working well for the CIFAR-10 classification
% task, the transfer learning approach can be used to fine-tune the network
% for stop sign detection.
%
% Start by loading the ground truth data for stop signs.
% Load the ground truth data
% We saved the ROI table as a MAT-file; load it with
% load('X.mat', '<name of the exported ROI variable>'), as below.
data = load('new6To10.mat','newSixToTen');
%data = load('mysession2.mat','aaa');
AcneAndBeautySpots = data.newSixToTen;
for i=1:size(AcneAndBeautySpots,1)
AcneAndBeautySpots.imageFilename{i}(1:45) = [];
end
AcneAndBeautySpots=[AcneAndBeautySpots(:,2),AcneAndBeautySpots(:,1)];
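% The loop above strips a hard-coded 45-character absolute-path prefix from
% each file name, and the column swap puts the file names in the first
% table variable, as trainRCNNObjectDetector expects. A less fragile
% alternative is to rebuild each path from its file name (a sketch; set
% localImageDir to wherever the images actually live):
% localImageDir = pwd;
% for i = 1:size(AcneAndBeautySpots,1)
%     [~, name, ext] = fileparts(AcneAndBeautySpots.imageFilename{i});
%     AcneAndBeautySpots.imageFilename{i} = fullfile(localImageDir, [name ext]);
% end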
%AcneAndBeautySpots = data.aaaa;
% Update the path to the image files to match the local file system
%visiondata = fullfile(toolboxdir('vision'),'visiondata');
% for i=1:size(AcneAndBeautySpots,1)
% help= split(AcneAndBeautySpots.imageFilename(i),'\');
% a= strcat('\',help(3));
% b= strcat('\',help(4));
% path= strcat(a,b);
% path= strcat(visiondata,path);
% end
% [status,message,messageId] = copyfile('Z:\matlab\acne', 'C:\Program Files\MATLAB\R2017a\toolbox\vision\visiondata');
%AcneAndBeautySpots.imageFilename = AcneAndBeautySpots.imageFilename;
% Display a summary of the ground truth data
summary(AcneAndBeautySpots)
%%
% The training data is contained within a table that contains the image
% filename and ROI labels for stop signs, car fronts, and rears. Each ROI
% label is a bounding box around objects of interest within an image. For
% training the stop sign detector, only the stop sign ROI labels are
% needed. The ROI labels for car front and rear must be removed:
% Only keep the image file names and the stop sign ROI labels
% stopSigns = stopSignsAndCars(:, {'imageFilename','stopSign'});
% Display one training image and the ground truth bounding boxes
%I = imread(AcneAndBeautySpots.imageFilename{4});
%I = insertObjectAnnotation(I, 'Rectangle', AcneAndBeautySpots.Acne{4}, 'Acne', 'LineWidth', 8);
%figure
%imshow(I)
%%
% Note that there are only 41 training images within this data set.
% Training an R-CNN object detector from scratch using only 41 images is
% not practical and would not produce a reliable stop sign detector.
% Because the stop sign detector is trained by fine-tuning a network that
% has been pre-trained on a larger dataset (CIFAR-10 has 50,000 training
% images), using a much smaller dataset is feasible.
%% Train R-CNN Stop Sign Detector
% Finally, train the R-CNN object detector using |trainRCNNObjectDetector|.
% The input to this function is the ground truth table which contains
% labeled stop sign images, the pre-trained CIFAR-10 network, and the
% training options. The training function automatically modifies the
% original CIFAR-10 network, which classified images into 10 categories,
% into a network that can classify images into 2 classes: stop signs and
% a generic background class.
%
% During training, the input network weights are fine-tuned using image
% patches extracted from the ground truth data. The 'PositiveOverlapRange'
% and 'NegativeOverlapRange' parameters control which image patches
% are used for training. Positive training samples are those that overlap
% with the ground truth boxes by 0.5 to 1.0, as measured by the bounding
% box intersection over union metric. Negative training samples are those
% that overlap by 0 to 0.3. The best values for these parameters should be
% chosen by testing the trained detector on a validation set.
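% To see what these ranges mean in practice, bboxOverlapRatio computes this
% same IoU metric. A minimal illustration (the two boxes below are made-up
% values, not taken from the training data):
gtBox       = [100 100 50 50];   % ground-truth box, [x y width height]
proposalBox = [110 110 50 50];   % a candidate region shifted by 10 pixels
iou = bboxOverlapRatio(gtBox, proposalBox)   % about 0.47
% With 'PositiveOverlapRange' set to [0.011 0.1], even this well-aligned
% proposal would be rejected as a positive sample (its IoU is too high),
% which is one way the "unable to find any positive samples" error arises;
% the default range of [0.5 1] would accept it.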
%
% For R-CNN training, *the use of a parallel pool of MATLAB workers is
% highly recommended to reduce training time*. |trainRCNNObjectDetector|
% automatically creates and uses a parallel pool based on your <http://www.mathworks.com/help/vision/gs/computer-vision-system-toolbox-preferences.html parallel preference settings>. Ensure that the use of
% the parallel pool is enabled prior to training.
%
% To save time while running this example, a pre-trained network is loaded
% from disk. If you wish to train the network yourself, set the
% |doTraining| variable shown below to true.
%
% Note that training an R-CNN detector requires a CUDA-capable NVIDIA(TM)
% GPU with compute capability 3.0 or higher.
% A trained detector is loaded from disk to save time when running the
% example. Set this flag to true to train the detector.
doTraining = true;
if doTraining
% % % Set training options
% % options = trainingOptions('sgdm', ...
% % 'MiniBatchSize', 128, ...
% % 'InitialLearnRate', 1e-3, ...
% % 'LearnRateSchedule', 'piecewise', ...
% % 'LearnRateDropFactor', 0.1, ...
% % 'LearnRateDropPeriod', 100, ...
% % 'MaxEpochs', 100, ...
% % 'Verbose', true);
% %
% %
% %
% %
% % % Options for step 1.
% % optionsStage1 = trainingOptions('sgdm', ...
% % 'MaxEpochs', 10, ...
% % 'MiniBatchSize', 1, ...
% % 'InitialLearnRate', 1e-3, ...
% % 'CheckpointPath', tempdir);
% %
% % % Options for step 2.
% % optionsStage2 = trainingOptions('sgdm', ...
% % 'MaxEpochs', 10, ...
% % 'MiniBatchSize', 1, ...
% % 'InitialLearnRate', 1e-3, ...
% % 'CheckpointPath', tempdir);
% %
% % % Options for step 3.
% % optionsStage3 = trainingOptions('sgdm', ...
% % 'MaxEpochs', 10, ...
% % 'MiniBatchSize', 1, ...
% % 'InitialLearnRate', 1e-3, ...
% % 'CheckpointPath', tempdir);
% %
% % % Options for step 4.
% % optionsStage4 = trainingOptions('sgdm', ...
% % 'MaxEpochs', 10, ...
% % 'MiniBatchSize', 1, ...
% % 'InitialLearnRate', 1e-3, ...
% % 'CheckpointPath', tempdir);
% %
% % options = [
% % optionsStage1
% % optionsStage2
% % optionsStage3
% % optionsStage4
% % ];
options = trainingOptions('sgdm', ...
'MiniBatchSize', 128, ...
'InitialLearnRate', 1e-3, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.1, ...
'LearnRateDropPeriod', 100, ...
'MaxEpochs', 100, ...
'Verbose', true);
% Train an R-CNN object detector. This will take several minutes.
% tarinImageOur=load('label_142_168_gtruth.mat');
%net=alexnet
addpath('C:\Users\תמר ווסה\Desktop\study_lest_year_ba\final_project\cbct hadassah_original\3.6.19 from RAGDA\Dicom anonymous\8_convert');
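% Before training, a quick sanity check on the ground truth can catch
% missing image files and degenerate boxes (a hedged sketch; it assumes the
% box variable holds [x y width height] matrices, as the labeler exports):
boxVar = setdiff(AcneAndBeautySpots.Properties.VariableNames, {'imageFilename'});
for k = 1:height(AcneAndBeautySpots)
    assert(exist(AcneAndBeautySpots.imageFilename{k}, 'file') == 2, ...
        'Image not found: %s', AcneAndBeautySpots.imageFilename{k});
    boxes = AcneAndBeautySpots.(boxVar{1}){k};
    assert(~isempty(boxes) && all(boxes(:,3) > 0 & boxes(:,4) > 0), ...
        'Row %d has no boxes or a box with non-positive width/height.', k);
end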
rcnn = trainRCNNObjectDetector(AcneAndBeautySpots,cifar10Net, options,'NegativeOverlapRange', [0 0.01], 'PositiveOverlapRange',[0.011 0.1])
%rcnn = trainRCNNObjectDetector(AcneAndBeautySpots,cifar10Net , options, 'NegativeOverlapRange', [0 0.3], 'PositiveOverlapRange',[0.5 1])
% rcnn = trainFasterRCNNObjectDetector(, layers, options, ...
% 'NegativeOverlapRange', [0 0.3], ...
% 'PositiveOverlapRange', [0.6 1], ...
% 'NumRegionsToSample', [256 128 256 128], ...
% 'BoxPyramidScale', [1.2]);
else
% Load pre-trained network for the example.
load('newRCNN.mat','rcnn')
end
%% Test R-CNN Stop Sign Detector
% The R-CNN object detector can now be used to detect stop signs in images.
% Try it out on a test image:
files = dir('tests');
% size(files) returns a vector, so use numel; the loop starts at index 4,
% presumably to skip '.', '..' and one other non-image entry.
for i = 4:numel(files)
filename = fullfile('tests', files(i).name);
testImage = imread(filename);
testImage=imresize(testImage,[227 227]);
figure, imshow(testImage),impixelinfo;
[bboxes, score, label] = detect(rcnn, testImage, 'MiniBatchSize', 128);
[score, idx] = max(score);
bbox = bboxes(idx, :);
annotation = sprintf('%s: (Confidence = %f)', label(idx), score);
outputImage = insertObjectAnnotation(testImage, 'rectangle', bbox, annotation);
figure,imshow(outputImage);
for j = 1:size(bboxes,1)
rectangle('Position',bboxes(j,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');
end
% load strcat('mat_files/', filename);
end
% Read test image
% Detect stop signs
%[bboxes, score, label] = detect(rcnn, testImage, 'MiniBatchSize', 128)
%%
% The R-CNN object |detect| method returns the object bounding boxes, a
% detection score, and a class label for each detection. The labels are
% useful when detecting multiple objects, e.g. stop, yield, or speed limit
% signs. The scores, which range between 0 and 1, indicate the confidence
% in the detection and can be used to ignore low scoring detections.
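% A minimal sketch of such filtering (the 0.5 threshold is an arbitrary
% starting point, not a value taken from this example):
% [bboxes, score, label] = detect(rcnn, testImage, 'MiniBatchSize', 128);
% keep   = score >= 0.5;
% bboxes = bboxes(keep, :);
% score  = score(keep);
% label  = label(keep);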
% Display the detection results
%% Summary
% This example showed how to train an R-CNN stop sign object detector using
% a network trained with CIFAR-10 data. Similar steps may be followed to
% train other object detectors using deep learning.
%% References
% [1] Girshick, Ross, et al. "Rich feature hierarchies for accurate object
% detection and semantic segmentation." Proceedings of the IEEE conference
% on computer vision and pattern recognition. 2014.
%
% [2] Deng, Jia, et al. "Imagenet: A large-scale hierarchical image
% database." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE
% Conference on. IEEE, 2009.
%
% [3] Krizhevsky, Alex, and Geoffrey Hinton. "Learning multiple layers of
% features from tiny images." (2009).
%
% [4] http://code.google.com/p/cuda-convnet/
displayEndOfDemoMessage(mfilename)
The error is in the line:
rcnn = trainRCNNObjectDetector(AcneAndBeautySpots,cifar10Net, options,'NegativeOverlapRange', [0 0.01], 'PositiveOverlapRange',[0.011 0.1])
I think MATLAB can't identify the markings in the pictures. Maybe the background should be marked as well?
