Does the pca function restrict the number of components to be kept?

9 views (last 30 days)
Hi there,
I'm using the pca function to reduce the number of variables in a huge dataset. It works well, but when I try to change the number of components to keep, I can't go beyond the number of observations.
To put it differently: my dataset is 500-by-24300 and I want to reduce it to 500-by-16100. However, the function only works for at most 499 components and gives an error otherwise.
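Here is a minimal sketch of roughly what I am doing (random data standing in for my actual dataset, and the exact call may differ):

    X = randn(500, 24300);                   % stand-in for my 500-by-24300 data
    coeff = pca(X);                          % works: coeff comes back as 24300-by-499
    coeff = pca(X, 'NumComponents', 400);    % works: 400 is below the limit
    coeff = pca(X, 'NumComponents', 16100);  % errors, as described above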
I'm using MATLAB R2016a.
Does anyone have an idea? Thanks for your help.
  2 comments
John D'Errico on 26 Jun 2018
Please get used to using comments, instead of adding answers for every response you make.
elid latf on 26 Jun 2018
Yes, you're right. I hadn't realized till after posting it. Thanks.


Accepted Answer

Anton Semechko on 25 Jun 2018
The eigenvectors computed by PCA (and by its generalized version, probabilistic PCA) only span the subspace of the ambient space containing the sample data, and are therefore linear combinations of the sample data points. If N and D are the number of samples and the dimensionality of the data, respectively, then min(N-1,D) is the maximum number of principal components (PCs) you will be able to extract; the -1 comes from centering the data on its mean, which removes one degree of freedom. The number of PCs will be even smaller if the data points are linearly dependent.
In principle, you can always find the complement of the PCA subspace (i.e., the set of eigenvectors orthogonal to the PCs), but this is very rarely done in practice, especially when dealing with high-dimensional spaces like yours (i.e., 24300 dimensions).
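As a quick illustration of that limit, here is a minimal sketch using random data (not your actual dataset):

    N = 500;  D = 24300;
    X = randn(N, D);        % N observations in rows, D variables in columns
    coeff = pca(X);         % pca centers the data on its mean by default
    size(coeff, 2)          % ans = 499, i.e. min(N-1, D)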
  5 comments
Anton Semechko on 26 Jun 2018
That min(N-1,D) is the maximum number of PCs that can be extracted from an N-by-D data matrix is a "theoretical" limitation. It is simply not possible to extract more than min(N-1,D) PCs that contain ANY information whatsoever about your data.
Note that even though the maximum number of PCs may be much smaller than the dimensionality of the data, together they represent the original data with 100% accuracy. However, real data often contains noise, and the information carried by the "higher-order" PCs will be increasingly dominated by it. When performing dimensionality reduction, which is what I am assuming you want to do, you:
1) Select the first K < min(N-1,D) PCs that retain as much of the underlying structure of the data as possible; the remaining min(N-1,D)-K PCs will be dominated by noise.
2) Project the observed data (after centering it on the mean) onto the K PCs to get the so-called "feature vectors" (or scores, in the statistics literature). These K-dimensional feature vectors are low-dimensional representations of your data; a rough sketch follows below.
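Here is a sketch of these two steps in MATLAB, assuming X is your N-by-D data matrix; the choice K = 100 is purely illustrative:

    K = 100;                                % illustrative value; see below on choosing K
    [coeff, score] = pca(X, 'NumComponents', K);
    % 'score' already holds the K-dimensional feature vectors; equivalently:
    Xc   = bsxfun(@minus, X, mean(X, 1));   % center on the mean (R2016a lacks implicit expansion)
    feat = Xc * coeff;                      % N-by-K feature vectors, equal to 'score'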
Various methods have been developed to determine the optimal value of K (e.g., Horn's rule, cross-validation), but none of them works 100% of the time, because real data rarely meets the underlying assumptions of the PCA model (see [1] and [2] for details).
[1] Roweis, S. (1998). EM algorithms for PCA and SPCA. Advances in Neural Information Processing Systems 10.
[2] Tipping, M. E., & Bishop, C. M. (1999). Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B, 61(3), 611-622.
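As one simple (and, as noted, imperfect) heuristic for picking K, you could threshold the cumulative variance using the 'explained' output of pca; the 95% cutoff below is just a common convention, not one of the rules cited above:

    [coeff, score, ~, ~, explained] = pca(X);  % 'explained': percent of variance per PC
    K = find(cumsum(explained) >= 95, 1);      % smallest K retaining ~95% of the variance
    feat = score(:, 1:K);                      % K-dimensional feature vectors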
elid latf on 26 Jun 2018
Again, thank you all so much for the time you've given to my question. It was so interesting.


More Answers (0)
