how to save SIFT feature descriptor as an one dimensional vector.
1 view (last 30 days)
Newman
on 25 Jun 2016
Commented: Newman
on 27 Jun 2016
Hello, I want to extract SIFT features from each human face. When I run the code given at the official SIFT website:
[image, descriptors, locs] = sift('1.pgm'); where 1.pgm is one image
I get three matrices:
image 58x128 double
descriptors 112x92 uint8
locs 58x4 double
What should I choose as a feature vector? And how do I convert the descriptor matrix to a 1xN matrix?
0 comments
Accepted Answer
Walter Roberson
on 25 Jun 2016
feature_vector = [size(descriptors,1); size(descriptors,2); size(locs,1); double(descriptors(:)); locs(:)];
However, it appears that the number of descriptors returned depends in part on image content, so the size of this feature vector would vary from image to image. That is not suitable: feature vectors need to be a consistent size. You will need to figure out how to deal with that.
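One common way around the variable descriptor count (not suggested in the answer above, just a sketch) is to pool the descriptors into a single fixed-length summary. Assuming `descriptors` is the N-by-128 uint8 matrix returned by sift(), where N varies per image:

```matlab
% Sketch: collapse a variable number of SIFT descriptors (N x 128) into a
% fixed-length vector by pooling, so every image yields the same size.
D = double(descriptors);              % N x 128; N differs between images
pooled = [mean(D, 1), std(D, 0, 1)];  % 1 x 256: per-column mean and std
feature_vector = pooled(:);           % 256 x 1 column vector, fixed size
```

More elaborate schemes (e.g. a bag-of-visual-words histogram over a learned codebook) also produce fixed-length vectors, but pooling is the simplest starting point.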
3 comments
Walter Roberson
on 26 Jun 2016
For neural networks, it is strictly mandatory that the total length of the (used) feature vectors from an image be the same for all images. It is not required (and would not usually be the case) that all the feature vectors for a particular image be the same size as each other. However, you are unlikely to get good computational results if, for two images, the total length of the feature vectors is the same but the individual feature vectors are not of consistent size between images.
All the feature vectors for an image put together mark a "point" in some high dimensional space. You cannot compare two points that are in spaces of different dimensions.
You would run through each image and compute the feature vectors. Then, at classification time, you would run through a subset of the images and, for each chosen image, put all of its feature vectors together into one column of a matrix; doing that for each image produces a 2D array in which different images occupy different columns. You would then train on that 2D array together with the known class information for the chosen subset. You might then use a different subset to test how well it worked, and loop, retraining, until you see good test results. Eventually you stop training and testing, and use the resulting neural network to make predictions for images whose class information you do not know.
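The workflow described above can be sketched roughly as follows. Here `extract_features` is a hypothetical helper (not from this thread) that returns one fixed-length column vector per image; `labels` would be your known class matrix:

```matlab
% Sketch: build a training matrix with one column per image, then train.
files = {'1.pgm', '2.pgm', '3.pgm'};   % example image list
X = [];
for k = 1:numel(files)
    fv = extract_features(files{k});   % M x 1, same M for every image
    X = [X, fv];                       % columns correspond to images
end
% X is M x numel(files). Train a pattern-recognition network, e.g.:
% net = patternnet(10);
% net = train(net, X, labels);         % labels: one column per image
```

The key constraint from the comment above is that every column of X must have the same length M, which is why the per-image feature vector must be fixed-size.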
More Answers (0)