
resubMargin

Margin of k-nearest neighbor classifier by resubstitution

Syntax

m = resubMargin(mdl)

Description


m = resubMargin(mdl) returns the classification margins (m) of the data used to train mdl.

m is returned as a numeric vector of length size(mdl.X,1), where mdl.X is the training data for mdl. Each entry in m represents the margin for the corresponding row of mdl.X and the corresponding true class label in mdl.Y.

Examples


Create a k-nearest neighbor classifier for the Fisher iris data, where k = 5.

Load the Fisher iris data set.

load fisheriris
X = meas;
Y = species;

Create a classifier for five nearest neighbors.

mdl = fitcknn(X,Y,'NumNeighbors',5);

Examine some statistics of the resubstitution margin of the classifier.

m = resubMargin(mdl);
[max(m) min(m) mean(m)]
ans = 1×3

    1.0000   -0.6000    0.9253

The mean margin is over 0.9, indicating fairly high classification accuracy on the training data. Because resubstitution reuses the training data, it tends to be optimistic; for a more reliable assessment of model accuracy, consider cross-validating the model and using kfoldLoss.
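Continuing the example above, a minimal sketch of that cross-validation check (crossval partitions into 10 folds by default):

```matlab
% Cross-validate the same five-nearest-neighbor model
cvmdl = crossval(fitcknn(X,Y,'NumNeighbors',5));
% Out-of-fold misclassification rate, a less optimistic
% accuracy estimate than the resubstitution margins
cvLoss = kfoldLoss(cvmdl)
```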

Input Arguments


mdl — k-nearest neighbor classifier model, specified as a ClassificationKNN object.

More About


Margin

The classification margin for each observation is the difference between the classification score for the true class and the maximal classification score for the false classes.
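This definition can be reproduced directly from the per-class scores that predict returns; a sketch, assuming a trained ClassificationKNN model mdl whose training data and labels are X and Y:

```matlab
[~,score] = predict(mdl,X);               % N-by-K matrix of class scores
[~,trueIdx] = ismember(Y,mdl.ClassNames); % column index of each true class
n = size(X,1);
lin = sub2ind(size(score),(1:n)',trueIdx);
trueScore = score(lin);                   % score of the true class
score(lin) = -Inf;                        % exclude the true class...
m = trueScore - max(score,[],2);          % ...then subtract the best false-class score
```

The resulting vector m should match the output of resubMargin(mdl).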

Score

The score of a classification is the posterior probability of the classification. The posterior probability is the number of neighbors with that classification divided by the number of neighbors. For a more detailed definition that includes weights and prior probabilities, see Posterior Probability.
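For an unweighted classifier with the default prior, this reduces to a simple neighbor count; a sketch for a single query point (assumes the training data X, labels Y, and k = 5 from the example above, and uses the Euclidean metric):

```matlab
k = 5;
x = X(1,:);                     % query point
d = pdist2(X,x);                % distances to every training point
[~,idx] = sort(d);
nbrLabels = Y(idx(1:k));        % labels of the k nearest neighbors
classes = unique(Y);
% Posterior score of each class: neighbors with that label, divided by k
scores = countcats(categorical(nbrLabels,classes)) / k
```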

Introduced in R2012a