kfoldEdge

Classification edge for cross-validated kernel classification model

Description

edge = kfoldEdge(CVMdl) returns the classification edge obtained by the cross-validated, binary kernel model (ClassificationPartitionedKernel) CVMdl. For every fold, kfoldEdge computes the classification edge for validation-fold observations using a model trained on training-fold observations.

edge = kfoldEdge(CVMdl,Name,Value) returns the classification edge with additional options specified by one or more name-value pair arguments. For example, specify the number of folds or the aggregation level.

Examples

Estimate k-Fold Cross-Validation Edge

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, which are labeled either bad ('b') or good ('g').

load ionosphere

Cross-validate a binary kernel classification model using the data.

CVMdl = fitckernel(X,Y,'Crossval','on')
CVMdl = 
  ClassificationPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 351
                  KFold: 10
              Partition: [1x1 cvpartition]
             ClassNames: {'b'  'g'}
         ScoreTransform: 'none'


CVMdl is a ClassificationPartitionedKernel model. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'Crossval'.
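
For instance, a minimal sketch (assuming the same X and Y) that requests 5-fold cross-validation instead:

CVMdl5 = fitckernel(X,Y,'KFold',5);
CVMdl5.KFold % 5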

Estimate the cross-validated classification edge.

edge = kfoldEdge(CVMdl)
edge = 
1.5585

Alternatively, you can obtain the per-fold edges by specifying the name-value pair 'Mode','individual' in kfoldEdge.
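
For example, this call (using the CVMdl created above) returns a 10-by-1 vector with one edge per fold:

edges = kfoldEdge(CVMdl,'Mode','individual')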

Feature Selection Using k-Fold Edges

Perform feature selection by comparing k-fold edges from multiple models. Based solely on this criterion, the classifier with the greatest edge is the best classifier.

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, which are labeled either bad ('b') or good ('g').

load ionosphere

Randomly choose half of the predictor variables.

rng(1); % For reproducibility
p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));

Cross-validate two binary kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.

CVMdl = fitckernel(X,Y,'CrossVal','on');
PCVMdl = fitckernel(X(:,idxPart),Y,'CrossVal','on');

CVMdl and PCVMdl are ClassificationPartitionedKernel models. By default, the software implements 10-fold cross-validation. To specify a different number of folds, use the 'KFold' name-value pair argument instead of 'CrossVal'.

Estimate the k-fold edge for each classifier.

fullEdge = kfoldEdge(CVMdl)
fullEdge = 
1.5142
partEdge = kfoldEdge(PCVMdl)
partEdge = 
1.8910

Based on the k-fold edges, the classifier that uses half of the predictors is the better model.

Input Arguments

CVMdl — Cross-validated, binary kernel classification model
ClassificationPartitionedKernel model object

Cross-validated, binary kernel classification model, specified as a ClassificationPartitionedKernel model object. You can create a ClassificationPartitionedKernel model by using fitckernel and specifying any one of the cross-validation name-value pair arguments.

To obtain estimates, kfoldEdge applies the same data used to cross-validate the kernel classification model (X and Y).

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: kfoldEdge(CVMdl,'Mode','individual') returns the classification edge for each fold.

Folds — Fold indices for prediction
1:CVMdl.KFold (default) | numeric vector of positive integers

Fold indices for prediction, specified as the comma-separated pair consisting of 'Folds' and a numeric vector of positive integers. The elements of Folds must be in the range from 1 to CVMdl.KFold.

The software uses only the folds specified in Folds for prediction.

Example: 'Folds',[1 4 10]

Data Types: single | double
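
For instance, using the cross-validated model CVMdl from the examples above, this sketch averages the edge over folds 1, 4, and 10 only:

e = kfoldEdge(CVMdl,'Folds',[1 4 10])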

Mode — Aggregation level for output
'average' (default) | 'individual'

Aggregation level for the output, specified as the comma-separated pair consisting of 'Mode' and either 'average' or 'individual'.

This table describes the values.

Value          Description
'average'      The output is a scalar average over all folds.
'individual'   The output is a vector of length k containing one value per fold, where k is the number of folds.

Example: 'Mode','individual'
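
For example, a short sketch contrasting the two modes for the 10-fold CVMdl created in the examples above:

avgEdge   = kfoldEdge(CVMdl,'Mode','average')    % scalar (default behavior)
foldEdges = kfoldEdge(CVMdl,'Mode','individual') % 10-by-1 vector, one edge per fold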

Output Arguments

edge — Classification edge
numeric scalar | numeric column vector

Classification edge, returned as a numeric scalar or numeric column vector.

If Mode is 'average', then edge is the average classification edge over all folds. Otherwise, edge is a k-by-1 numeric column vector containing the classification edge for each fold, where k is the number of folds.

More About

Classification Edge

The classification edge is the weighted mean of the classification margins.

One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
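
As an illustration of this definition, the following sketch recovers the edge from the k-fold margins; it assumes uniform observation weights, under which the weighted mean reduces to a simple mean:

m = kfoldMargin(CVMdl); % one margin per validation-fold observation
edgeManual = mean(m)    % approximately kfoldEdge(CVMdl) when weights are uniform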

Classification Margin

The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.

The software defines the classification margin for binary classification as

m = 2yf(x).

Here, x is an observation, y is 1 if the true label of x is the positive class and –1 otherwise, and f(x) is the positive-class classification score for the observation x. This definition is twice the more common definition of the margin, m = yf(x).

If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
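
A minimal sketch of the margin computation for the ionosphere example above; it assumes the positive class is the second class in CVMdl.ClassNames:

[~,score] = kfoldPredict(CVMdl);         % score columns follow CVMdl.ClassNames
y = 2*strcmp(Y,CVMdl.ClassNames{2}) - 1; % +1 for the positive class, -1 otherwise
m = 2*y.*score(:,2);                     % m = 2yf(x), one margin per observation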

Classification Score

For kernel classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by

f(x) = T(x)β + b.

  • T(·) is a transformation of an observation for feature expansion.

  • β is the estimated column vector of coefficients.

  • b is the estimated scalar bias.

The raw classification score for classifying x into the negative class is −f(x). The software classifies observations into the class that yields a positive score.

If the kernel classification model consists of logistic regression learners, then the software applies the 'logit' score transformation to the raw classification scores (see ScoreTransform).
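
For example, a sketch that trains logistic regression learners instead, so that the returned scores are posterior probabilities:

CVMdlLog = fitckernel(X,Y,'Learner','logistic','CrossVal','on');
CVMdlLog.ScoreTransform % 'logit'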

Version History

Introduced in R2018b
