
fitrkernel

Fit Gaussian kernel regression model using random feature expansion

`fitrkernel` trains or cross-validates a Gaussian kernel regression model for nonlinear regression. `fitrkernel` is more practical to use for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

`fitrkernel` maps data in a low-dimensional space into a high-dimensional space, then fits a linear model in the high-dimensional space by minimizing the regularized objective function. Obtaining the linear model in the high-dimensional space is equivalent to applying the Gaussian kernel to the model in the low-dimensional space. Available linear regression models include regularized support vector machine (SVM) and least-squares regression models.

To train a nonlinear SVM regression model on in-memory data, see `fitrsvm`.

Syntax

`Mdl = fitrkernel(X,Y)`
`Mdl = fitrkernel(X,Y,Name,Value)`
`[Mdl,FitInfo] = fitrkernel(___)`
`[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(___)`

Description

`Mdl = fitrkernel(X,Y)` returns a compact Gaussian kernel regression model trained using the predictor data in `X` and the corresponding responses in `Y`.

`Mdl = fitrkernel(X,Y,Name,Value)` returns a kernel regression model with additional options specified by one or more name-value pair arguments. For example, you can implement least-squares regression, specify the number of dimensions of the expanded space, or specify cross-validation options.

`[Mdl,FitInfo] = fitrkernel(___)` also returns the fit information in the structure array `FitInfo` using any of the input arguments in the previous syntaxes. You cannot request `FitInfo` for cross-validated models.

`[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(___)` also returns the hyperparameter optimization results when you optimize hyperparameters by using the `'OptimizeHyperparameters'` name-value pair argument.

Examples

Train a kernel regression model for a tall array by using SVM.

Create a datastore that references the folder location with the data. The data can be contained in a single file, a collection of files, or an entire folder. Treat `'NA'` values as missing data so that `datastore` replaces them with `NaN` values. Select a subset of the variables to use. Create a tall table on top of the datastore.

```
varnames = {'ArrTime','DepTime','ActualElapsedTime'};
ds = datastore('airlinesmall.csv','TreatAsMissing','NA',...
    'SelectedVariableNames',varnames);
t = tall(ds);
```

Specify `DepTime` and `ArrTime` as the predictor variables (`X`) and `ActualElapsedTime` as the response variable (`Y`). Select the observations for which `ArrTime` is later than `DepTime`.

```
daytime = t.ArrTime>t.DepTime;
Y = t.ActualElapsedTime(daytime);     % Response data
X = t{daytime,{'DepTime' 'ArrTime'}}; % Predictor data
```

Standardize the predictor variables.

`Z = zscore(X); % Standardize the data`

Train a default Gaussian kernel regression model with the standardized predictors. Extract a fit summary to determine how well the optimization algorithm fits the model to the data.

`[Mdl,FitInfo] = fitrkernel(Z,Y)`
```
Found 6 chunks.
|=========================================================================
| Solver | Iteration / | Objective    | Gradient     | Beta relative |
|        | Data Pass   |              | magnitude    | change        |
|=========================================================================
|   INIT |    0 /    1 | 4.335200e+01 | 9.821993e-02 |           NaN |
|  LBFGS |    0 /    2 | 3.693870e+01 | 1.566041e-02 |  9.988238e-01 |
|  LBFGS |    1 /    3 | 3.692143e+01 | 3.030550e-02 |  1.352488e-03 |
|  LBFGS |    2 /    4 | 3.689521e+01 | 2.919252e-02 |  1.137336e-03 |
|  LBFGS |    2 /    5 | 3.686922e+01 | 2.801905e-02 |  2.277224e-03 |
|  LBFGS |    2 /    6 | 3.681793e+01 | 2.615365e-02 |  4.564688e-03 |
|  LBFGS |    2 /    7 | 3.671782e+01 | 2.276596e-02 |  9.170612e-03 |
|  LBFGS |    2 /    8 | 3.652813e+01 | 1.868733e-02 |  1.850839e-02 |
|  LBFGS |    3 /    9 | 3.442961e+01 | 3.260732e-02 |  2.030226e-01 |
|  LBFGS |    4 /   10 | 3.473328e+01 | 8.506865e-02 |  3.309396e-01 |
|  LBFGS |    4 /   11 | 3.378744e+01 | 5.473648e-02 |  1.428247e-01 |
|  LBFGS |    5 /   12 | 3.329728e+01 | 3.922448e-02 |  1.026073e-01 |
|  LBFGS |    6 /   13 | 3.309615e+01 | 1.551459e-02 |  6.118966e-02 |
|  LBFGS |    7 /   14 | 3.300400e+01 | 1.759430e-02 |  1.918912e-02 |
|  LBFGS |    8 /   15 | 3.277892e+01 | 3.155320e-02 |  4.781893e-02 |
|  LBFGS |    9 /   16 | 3.255352e+01 | 3.435953e-02 |  4.200697e-02 |
|  LBFGS |   10 /   17 | 3.207945e+01 | 6.192847e-02 |  2.161540e-01 |
|  LBFGS |   11 /   18 | 3.171391e+01 | 3.185452e-02 |  1.204747e-01 |
|  LBFGS |   12 /   19 | 3.155433e+01 | 1.183853e-02 |  5.837098e-02 |
|  LBFGS |   13 /   20 | 3.149625e+01 | 1.132499e-02 |  2.169556e-02 |
|=========================================================================
| Solver | Iteration / | Objective    | Gradient     | Beta relative |
|        | Data Pass   |              | magnitude    | change        |
|=========================================================================
|  LBFGS |   14 /   21 | 3.136724e+01 | 1.478355e-02 |  3.132871e-02 |
|  LBFGS |   15 /   22 | 3.115575e+01 | 1.461357e-02 |  7.221907e-02 |
|  LBFGS |   16 /   23 | 3.091292e+01 | 1.900119e-02 |  1.237602e-01 |
|  LBFGS |   17 /   24 | 3.076649e+01 | 3.469328e-02 |  1.664433e-01 |
|  LBFGS |   18 /   25 | 3.104221e+01 | 1.341798e-01 |  2.831585e-02 |
|  LBFGS |   18 /   26 | 3.076703e+01 | 4.929652e-02 |  1.414956e-02 |
|  LBFGS |   18 /   27 | 3.073332e+01 | 1.434614e-02 |  7.072158e-03 |
|  LBFGS |   19 /   28 | 3.067248e+01 | 9.931353e-03 |  2.438284e-02 |
|  LBFGS |   20 /   29 | 3.063153e+01 | 6.781994e-03 |  1.606731e-02 |
|========================================================================|
```
```
Mdl = 
  RegressionKernel
            PredictorNames: {'x1'  'x2'}
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 64
               KernelScale: 1
                    Lambda: 8.5385e-06
             BoxConstraint: 1
                   Epsilon: 5.9303

  Properties, Methods
```
```
FitInfo = 
  struct with fields:
                  Solver: 'LBFGS-tall'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 8.5385e-06
           BetaTolerance: 1.0000e-03
       GradientTolerance: 1.0000e-05
          ObjectiveValue: 30.6315
       GradientMagnitude: 0.0068
    RelativeChangeInBeta: 0.0161
                 FitTime: 77.1910
                 History: [1×1 struct]
```

`Mdl` is a `RegressionKernel` model. To inspect the regression error, you can pass `Mdl` and the training data or new data to the `loss` function. Or, you can pass `Mdl` and new predictor data to the `predict` function to predict responses for new observations. You can also pass `Mdl` and the training data to the `resume` function to continue training.
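
For example, a minimal sketch of these follow-up calls, reusing `Z`, `Y`, and `Mdl` from this example:

```
YFit = predict(Mdl,Z);          % predicted responses
L = loss(Mdl,Z,Y);              % regression loss on labeled data
UpdatedMdl = resume(Mdl,Z,Y);   % continue training from the current solution
```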

`FitInfo` is a structure array containing optimization information. Use `FitInfo` to determine whether optimization termination measurements are satisfactory.

For improved accuracy, you can increase the maximum number of optimization iterations (`'IterationLimit'`) and decrease the tolerance values (`'BetaTolerance'` and `'GradientTolerance'`) by using the name-value pair arguments of `fitrkernel`. Doing so can improve measures like `ObjectiveValue` and `RelativeChangeInBeta` in `FitInfo`. You can also optimize model parameters by using the `'OptimizeHyperparameters'` name-value pair argument.
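
For example, a sketch with illustrative values (not tuned recommendations):

```
[Mdl,FitInfo] = fitrkernel(Z,Y,'IterationLimit',500,...
    'BetaTolerance',1e-6,'GradientTolerance',1e-6);  % more iterations, tighter tolerances
```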

Load the `carbig` data set.

`load carbig`

Specify the predictor variables (`X`) and the response variable (`Y`).

```
X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;
```

Delete rows of `X` and `Y` where either array has `NaN` values. Removing rows with `NaN` values before passing data to `fitrkernel` can speed up training and reduce memory usage.

```
R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);
```

Standardize the predictor variables.

`Z = zscore(X);`

Cross-validate a kernel regression model using 5-fold cross-validation.

`Mdl = fitrkernel(Z,Y,'Kfold',5)`
```
Mdl = 
  classreg.learning.partition.RegressionPartitionedKernel
    CrossValidatedModel: 'Kernel'
           ResponseName: 'Y'
        NumObservations: 392
                  KFold: 5
              Partition: [1x1 cvpartition]
      ResponseTransform: 'none'

  Properties, Methods
```
`numel(Mdl.Trained)`
```
ans = 5
```

`Mdl` is a `RegressionPartitionedKernel` model. Because `fitrkernel` implements five-fold cross-validation, `Mdl` contains five `RegressionKernel` models that the software trains on training-fold (in-fold) observations.

Examine the cross-validation loss (mean squared error) for each fold.

`kfoldLoss(Mdl,'mode','individual')`
```
ans = 5×1

   13.0610
   14.0975
   24.0104
   21.1223
   24.3979
```
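
To obtain the average loss over all folds instead, omit the `'mode'` name-value pair argument:

```
kfoldLoss(Mdl)   % average MSE over the five folds
```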

Optimize hyperparameters automatically using the `'OptimizeHyperparameters'` name-value pair argument.

Load the `carbig` data set.

`load carbig`

Specify the predictor variables (`X`) and the response variable (`Y`).

```
X = [Acceleration,Cylinders,Displacement,Horsepower,Weight];
Y = MPG;
```

Delete rows of `X` and `Y` where either array has `NaN` values. Removing rows with `NaN` values before passing data to `fitrkernel` can speed up training and reduce memory usage.

```
R = rmmissing([X Y]); % Data with missing entries removed
X = R(:,1:5);
Y = R(:,end);
```

Standardize the predictor variables.

`Z = zscore(X);`

Find hyperparameters that minimize five-fold cross-validation loss by using automatic hyperparameter optimization. Specify `'OptimizeHyperparameters'` as `'auto'` so that `fitrkernel` finds the optimal values of the `'KernelScale'`, `'Lambda'`, and `'Epsilon'` name-value pair arguments. For reproducibility, set the random seed and use the `'expected-improvement-plus'` acquisition function.

```
rng('default')
[Mdl,FitInfo,HyperparameterOptimizationResults] = fitrkernel(Z,Y,'OptimizeHyperparameters','auto',...
    'HyperparameterOptimizationOptions',struct('AcquisitionFunctionName','expected-improvement-plus'))
```

```
|====================================================================================================================|
| Iter | Eval   | Objective | Objective | BestSoFar  | BestSoFar  | KernelScale | Lambda     | Epsilon  |
|      | result |           | runtime   | (observed) | (estim.)   |             |            |          |
|====================================================================================================================|
|    1 | Best   |    4.8295 |    5.3313 |     4.8295 |     4.8295 |    0.011518 | 6.8068e-05 |  0.95918 |
|    2 | Best   |    4.1488 |   0.31098 |     4.1488 |     4.1855 |      477.57 |   0.066115 | 0.091828 |
|    3 | Accept |    4.1521 |   0.19631 |     4.1488 |     4.1747 |   0.0080478 |  0.0052867 |   520.84 |
|    4 | Accept |    4.1506 |   0.17756 |     4.1488 |     4.1488 |     0.10935 |    0.35931 | 0.013372 |
|    5 | Best   |    4.1446 |   0.26321 |     4.1446 |     4.1446 |      326.29 |     2.5457 |  0.22475 |
|    6 | Accept |    4.1521 |   0.17935 |     4.1446 |     4.1447 |      719.11 |    0.19478 |   881.84 |
|    7 | Accept |    4.1501 |    0.1607 |     4.1446 |     4.1461 |    0.052426 |     2.5402 | 0.051319 |
|    8 | Accept |    4.1521 |   0.14217 |     4.1446 |     4.1447 |      990.71 |   0.014203 |   702.34 |
|    9 | Accept |    4.1521 |   0.14291 |     4.1446 |     4.1465 |      415.85 |   0.054602 |   81.005 |
|   10 | Accept |    4.1454 |   0.13256 |     4.1446 |     4.1455 |      972.49 |     1.1601 |   1.8715 |
|   11 | Accept |    4.1495 |   0.14178 |     4.1446 |     4.1473 |      121.79 |     1.4077 | 0.061055 |
|   12 | Accept |    4.1521 |   0.13304 |     4.1446 |     4.1474 |      985.81 |    0.83297 |   213.45 |
|   13 | Best   |    4.1374 |    0.1322 |     4.1374 |     4.1441 |      167.34 |     2.5497 |   4.8997 |
|   14 | Accept |    4.1434 |   0.15036 |     4.1374 |     4.1437 |      74.527 |       2.55 |   6.1044 |
|   15 | Accept |    4.1402 |   0.13202 |     4.1374 |     4.1407 |      877.17 |     2.5391 |   2.2888 |
|   16 | Accept |    4.1436 |   0.16724 |     4.1374 |     4.1412 |   0.0010354 |   0.017613 |  0.11811 |
|   17 | Best   |    4.1346 |   0.15259 |     4.1346 |     4.1375 |   0.0010362 |   0.010401 |   8.9719 |
|   18 | Accept |    4.1521 |   0.12172 |     4.1346 |     4.1422 |   0.0010467 |  0.0094817 |   563.96 |
|   19 | Accept |    4.1508 |   0.15027 |     4.1346 |     4.1367 |      760.12 |  0.0079557 | 0.009087 |
|   20 | Accept |    4.1435 |    0.1804 |     4.1346 |      4.143 |    0.020647 |  0.0089063 |   2.3699 |
|====================================================================================================================|
| Iter | Eval   | Objective | Objective | BestSoFar  | BestSoFar  | KernelScale | Lambda     | Epsilon  |
|      | result |           | runtime   | (observed) | (estim.)   |             |            |          |
|====================================================================================================================|
|   21 | Best   |    3.7172 |    0.1653 |     3.7172 |     3.7174 |      818.08 | 2.5529e-06 |   2.1058 |
|   22 | Accept |    4.1521 |   0.12828 |     3.7172 |     3.7177 |    0.006272 | 2.5598e-06 |   93.063 |
|   23 | Accept |    4.0567 |   0.13549 |     3.7172 |     3.7176 |      940.43 | 2.6941e-06 |  0.12016 |
|   24 | Best   |    2.8979 |   0.30111 |     2.8979 |     2.8979 |      37.141 | 2.5677e-06 |     2.71 |
|   25 | Accept |    4.1521 |   0.13746 |     2.8979 |      2.898 |      13.817 | 2.5755e-06 |   863.91 |
|   26 | Best   |     2.795 |   0.28299 |      2.795 |     2.7953 |      20.022 | 2.6098e-06 |   1.6561 |
|   27 | Accept |    2.8284 |   0.30791 |      2.795 |     2.7956 |      17.252 | 2.7719e-06 |  0.82777 |
|   28 | Best   |    2.7896 |   0.29376 |     2.7896 |     2.7898 |      11.432 |  7.621e-06 |    2.094 |
|   29 | Accept |    2.8194 |   0.61993 |     2.7896 |     2.7899 |      8.5133 | 2.5872e-06 |   2.0567 |
|   30 | Accept |    2.8061 |   0.29593 |     2.7896 |     2.7968 |      15.823 | 6.1956e-06 |   2.0085 |

__________________________________________________________
Optimization completed.
MaxObjectiveEvaluations of 30 reached.
Total function evaluations: 30
Total elapsed time: 36.7332 seconds.
Total objective function evaluation time: 11.1668

Best observed feasible point:
    KernelScale     Lambda      Epsilon
    ___________    _________    _______

      11.432       7.621e-06     2.094

Observed objective function value = 2.7896
Estimated objective function value = 2.7968
Function evaluation time = 0.29376

Best estimated feasible point (according to models):
    KernelScale      Lambda      Epsilon
    ___________    __________    _______

      15.823       6.1956e-06    2.0085

Estimated objective function value = 2.7968
Estimated function evaluation time = 0.29839
```
```
Mdl = 
  RegressionKernel
              ResponseName: 'Y'
                   Learner: 'svm'
    NumExpansionDimensions: 256
               KernelScale: 15.8229
                    Lambda: 6.1956e-06
             BoxConstraint: 411.7488
                   Epsilon: 2.0085

  Properties, Methods
```
```
FitInfo = 
  struct with fields:
                  Solver: 'LBFGS-fast'
            LossFunction: 'epsiloninsensitive'
                  Lambda: 6.1956e-06
           BetaTolerance: 1.0000e-04
       GradientTolerance: 1.0000e-06
          ObjectiveValue: 1.3582
       GradientMagnitude: 0.0051
    RelativeChangeInBeta: 5.3944e-05
                 FitTime: 0.0521
                 History: []
```
```
HyperparameterOptimizationResults = 
  BayesianOptimization with properties:
                      ObjectiveFcn: @createObjFcn/inMemoryObjFcn
              VariableDescriptions: [5×1 optimizableVariable]
                           Options: [1×1 struct]
                      MinObjective: 2.7896
                   XAtMinObjective: [1×3 table]
             MinEstimatedObjective: 2.7968
          XAtMinEstimatedObjective: [1×3 table]
           NumObjectiveEvaluations: 30
                  TotalElapsedTime: 36.7332
                         NextPoint: [1×3 table]
                            XTrace: [30×3 table]
                    ObjectiveTrace: [30×1 double]
                  ConstraintsTrace: []
                     UserDataTrace: {30×1 cell}
      ObjectiveEvaluationTimeTrace: [30×1 double]
                IterationTimeTrace: [30×1 double]
                        ErrorTrace: [30×1 double]
                  FeasibilityTrace: [30×1 logical]
       FeasibilityProbabilityTrace: [30×1 double]
              IndexOfMinimumTrace: [30×1 double]
             ObjectiveMinimumTrace: [30×1 double]
    EstimatedObjectiveMinimumTrace: [30×1 double]
```

For big data, the optimization procedure can take a long time. If the data set is too large to run the optimization procedure, you can try to optimize the parameters using only partial data. Use the `datasample` function and specify `'Replace','false'` to sample data without replacement.
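
For example, a minimal sketch (the 10% subset size is illustrative only):

```
% Optimize on a random subset of the observations, sampled without replacement.
n = numel(Y);
idx = datasample(1:n,round(0.1*n),'Replace',false);  % indices of a 10% subset
MdlSub = fitrkernel(Z(idx,:),Y(idx),'OptimizeHyperparameters','auto');
```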

Input Arguments

Predictor data to which the regression model is fit, specified as an n-by-p numeric matrix, where n is the number of observations and p is the number of predictor variables.

The length of `Y` and the number of observations in `X` must be equal.

Data Types: `single` | `double`

Response data, specified as an n-dimensional numeric vector. The length of `Y` and the number of observations in `X` must be equal.

Data Types: `single` | `double`

Note

`fitrkernel` removes missing observations, that is, observations with any of these characteristics:

• `NaN` elements in the response (`Y`)

• At least one `NaN` value in a predictor observation (row in `X`)

• `NaN` value or `0` weight (`'Weights'`)

Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `Mdl = fitrkernel(X,Y,'Learner','leastsquares','NumExpansionDimensions',2^15,'KernelScale','auto')` implements least-squares regression after mapping the predictor data to the `2^15`-dimensional space using feature expansion with a kernel scale parameter selected by a heuristic procedure.

Note

You cannot use any cross-validation name-value pair argument along with the `'OptimizeHyperparameters'` name-value pair argument. You can modify the cross-validation for `'OptimizeHyperparameters'` only by using the `'HyperparameterOptimizationOptions'` name-value pair argument.

Kernel Regression Options

Box constraint, specified as the comma-separated pair consisting of `'BoxConstraint'` and a positive scalar.

This argument is valid only when `'Learner'` is `'svm'` (default) and you do not specify a value for the regularization term strength `'Lambda'`. You can specify either `'BoxConstraint'` or `'Lambda'` because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations (rows in `X`).

Example: `'BoxConstraint',100`

Data Types: `single` | `double`

Half the width of the epsilon-insensitive band, specified as the comma-separated pair consisting of `'Epsilon'` and `'auto'` or a nonnegative scalar value.

For `'auto'`, the `fitrkernel` function determines the value of `Epsilon` as `iqr(Y)/13.49`, which is an estimate of a tenth of the standard deviation using the interquartile range of the response variable `Y`. If `iqr(Y)` is equal to zero, then `fitrkernel` sets the value of `Epsilon` to 0.1.

`'Epsilon'` is valid only when `Learner` is `'svm'`.

Example: `'Epsilon',0.3`

Data Types: `single` | `double`
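
For example, this sketch reproduces the `'auto'` value for the `carbig` response variable used in the examples on this page:

```
load carbig
Y = MPG(~isnan(MPG));        % remove missing responses
epsilonAuto = iqr(Y)/13.49   % value that 'Epsilon','auto' would use
```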

Number of dimensions of the expanded space, specified as the comma-separated pair consisting of `'NumExpansionDimensions'` and `'auto'` or a positive integer. For `'auto'`, the `fitrkernel` function selects the number of dimensions using `2.^ceil(min(log2(p)+5,15))`, where `p` is the number of predictors.

Example: `'NumExpansionDimensions',2^15`

Data Types: `char` | `string` | `single` | `double`
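
For example, the `'auto'` selections for the two predictor counts used in the examples on this page:

```
p = 2;                          % predictors in the tall-array example
m2 = 2.^ceil(min(log2(p)+5,15)) % returns 64
p = 5;                          % predictors in the carbig examples
m5 = 2.^ceil(min(log2(p)+5,15)) % returns 256
```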

Kernel scale parameter, specified as the comma-separated pair consisting of `'KernelScale'` and `'auto'` or a positive scalar. MATLAB® obtains the random basis for random feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.

If you specify `'auto'`, then MATLAB selects an appropriate kernel scale parameter using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using `rng` before training.

Example: `'KernelScale','auto'`

Data Types: `char` | `string` | `single` | `double`
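
For example, a sketch (assuming in-memory `Z` and `Y`, and an arbitrary seed):

```
rng(1)   % fix the seed so the subsampling heuristic is repeatable
Mdl1 = fitrkernel(Z,Y,'KernelScale','auto');
rng(1)
Mdl2 = fitrkernel(Z,Y,'KernelScale','auto');
isequal(Mdl1.KernelScale,Mdl2.KernelScale)   % true with the same seed
```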

Regularization term strength, specified as the comma-separated pair consisting of `'Lambda'` and `'auto'` or a nonnegative scalar.

For `'auto'`, the value of `'Lambda'` is 1/n, where n is the number of observations (rows in `X`).

You can specify either `'BoxConstraint'` or `'Lambda'` because the box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn).

Example: `'Lambda',0.01`

Data Types: `char` | `string` | `single` | `double`

Linear regression model type, specified as the comma-separated pair consisting of `'Learner'` and `'svm'` or `'leastsquares'`.

In the following table, $f(x) = T(x)\beta + b$.

• x is an observation (row vector) from p predictor variables.

• $T(\cdot)$ is a transformation of an observation (row vector) for feature expansion. T(x) maps x in $\mathbb{R}^p$ to a high-dimensional space ($\mathbb{R}^m$).

• β is a vector of m coefficients.

• b is the scalar bias.

| Value | Algorithm | Response range | Loss function |
| --- | --- | --- | --- |
| `'leastsquares'` | Linear regression via ordinary least squares | y ∊ (−∞,∞) | Mean squared error (MSE): $\ell[y,f(x)] = \frac{1}{2}[y - f(x)]^2$ |
| `'svm'` | Support vector machine regression | Same as `'leastsquares'` | Epsilon-insensitive: $\ell[y,f(x)] = \max[0, \lvert y - f(x) \rvert - \varepsilon]$ |

Example: `'Learner','leastsquares'`

Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and either `0` or `1`. `Verbose` controls the amount of diagnostic information `fitrkernel` displays at the command line.

| Value | Description |
| --- | --- |
| `0` | `fitrkernel` does not display diagnostic information. |
| `1` | `fitrkernel` displays and stores the value of the objective function, gradient magnitude, and other diagnostic information. `FitInfo.History` contains the diagnostic information. |

Example: `'Verbose',1`

Data Types: `single` | `double`

Maximum amount of allocated memory (in megabytes), specified as the comma-separated pair consisting of `'BlockSize'` and a positive scalar.

If `fitrkernel` requires more memory than the value of `BlockSize` to hold the transformed predictor data, then MATLAB uses a block-wise strategy. For details about the block-wise strategy, see Algorithms.

Example: `'BlockSize',1e4`

Data Types: `single` | `double`

Random number stream for reproducibility of data transformation, specified as the comma-separated pair consisting of `'RandomStream'` and a random stream object. For details, see Random Feature Expansion.

Use `'RandomStream'` to reproduce the random basis functions that `fitrkernel` uses to transform the data in `X` to a high-dimensional space. For details, see Managing the Global Stream (MATLAB) and Creating and Controlling a Random Number Stream (MATLAB).

Example: `'RandomStream',RandStream('mlfg6331_64')`

Response transformation, specified as the comma-separated pair consisting of `'ResponseTransform'` and either `'none'` or a function handle. The default is `'none'`, which means `@(y)y`, or no transformation. For a MATLAB function or a function you define, use its function handle. The function handle must accept a vector (the original response values) and return a vector of the same size (the transformed response values).

Example: Suppose you create a function handle that applies an exponential transformation to an input vector by using `myfunction = @(y)exp(y)`. Then, you can specify the response transformation as `'ResponseTransform',myfunction`.

Data Types: `char` | `string` | `function_handle`
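
A minimal sketch of this example, assuming in-memory predictor data `Z` and responses `Y`:

```
myfunction = @(y)exp(y);   % exponential transformation
Mdl = fitrkernel(Z,Y,'ResponseTransform',myfunction);
Mdl.ResponseTransform      % the stored transformation
```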

Observation weights, specified as the comma-separated pair consisting of `'Weights'` and a numeric vector of positive values. `fitrkernel` weighs the observations in `X` with the corresponding values in `Weights`. The size of `Weights` must equal n, the number of observations (rows in `X`).

`fitrkernel` normalizes `Weights` to sum to 1.

Data Types: `double` | `single`

Cross-Validation Options

Cross-validation flag, specified as the comma-separated pair consisting of `'Crossval'` and `'on'` or `'off'`.

If you specify `'on'`, then the software implements 10-fold cross-validation.

You can override this cross-validation setting using the `CVPartition`, `Holdout`, `KFold`, or `Leaveout` name-value pair argument. You can use only one cross-validation name-value pair argument at a time to create a cross-validated model.

Example: `'Crossval','on'`

Cross-validation partition, specified as the comma-separated pair consisting of `'CVPartition'` and a `cvpartition` partition object as created by `cvpartition`. The partition object specifies the type of cross-validation and the indexing for the training and validation sets.

To create a cross-validated model, you can use one of these four name-value pair arguments only: `'CVPartition'`, `'Holdout'`, `'KFold'`, or `'Leaveout'`.

Example: Suppose you create a random partition for 5-fold cross-validation on 500 observations by using `cvp = cvpartition(500,'KFold',5)`. Then, you can specify the cross-validated model by using `'CVPartition',cvp`.
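
The same example as a code sketch, assuming `X` and `Y` contain 500 observations:

```
cvp = cvpartition(500,'KFold',5);            % random 5-fold partition
CVMdl = fitrkernel(X,Y,'CVPartition',cvp);   % cross-validated kernel model
```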

Fraction of the data used for holdout validation, specified as the comma-separated pair consisting of `'Holdout'` and a scalar value in the range (0,1). If you specify `'Holdout',p`, then the software completes these steps:

1. Randomly select and reserve `p*100`% of the data as validation data, and train the model using the rest of the data.

2. Store the compact, trained model in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

Example: `'Holdout',0.1`

Data Types: `double` | `single`

Number of folds to use in a cross-validated model, specified as the comma-separated pair consisting of `'KFold'` and a positive integer value greater than 1. If you specify `'KFold',k`, then the software completes these steps.

1. Randomly partition the data into k sets.

2. For each set, reserve the set as validation data, and train the model using the other k – 1 sets.

3. Store the `k` compact, trained models in the cells of a `k`-by-1 cell vector in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

Example: `'KFold',5`

Data Types: `single` | `double`

Leave-one-out cross-validation flag, specified as the comma-separated pair consisting of `'Leaveout'` and `'on'` or `'off'`. If you specify `'Leaveout','on'`, then, for each of the n observations (where n is the number of observations excluding missing observations), the software completes these steps:

1. Reserve the observation as validation data, and train the model using the other n – 1 observations.

2. Store the n compact, trained models in the cells of an n-by-1 cell vector in the `Trained` property of the cross-validated model.

To create a cross-validated model, you can use one of these four name-value pair arguments only: `CVPartition`, `Holdout`, `KFold`, or `Leaveout`.

Example: `'Leaveout','on'`

Convergence Controls

Relative tolerance on the linear coefficients and the bias term (intercept), specified as the comma-separated pair consisting of `'BetaTolerance'` and a nonnegative scalar.

Let $B_t = [\beta_t' \; b_t]$, that is, the vector of the coefficients and the bias term at optimization iteration t. If $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, then optimization terminates.

If you also specify `GradientTolerance`, then optimization terminates when the software satisfies either stopping criterion.

Example: `'BetaTolerance',1e-6`

Data Types: `single` | `double`

Absolute gradient tolerance, specified as the comma-separated pair consisting of `'GradientTolerance'` and a nonnegative scalar.

Let $\nabla \mathcal{L}_t$ be the gradient vector of the objective function with respect to the coefficients and bias term at optimization iteration t. If $\left\| \nabla \mathcal{L}_t \right\|_{\infty} = \max|\nabla \mathcal{L}_t| < \text{GradientTolerance}$, then optimization terminates.

If you also specify `BetaTolerance`, then optimization terminates when the software satisfies either stopping criterion.

Example: `'GradientTolerance',1e-5`

Data Types: `single` | `double`

Size of the history buffer for Hessian approximation, specified as the comma-separated pair consisting of `'HessianHistorySize'` and a positive integer. At each iteration, `fitrkernel` composes the Hessian by using statistics from the latest `HessianHistorySize` iterations.

Example: `'HessianHistorySize',10`

Data Types: `single` | `double`

Maximum number of optimization iterations, specified as the comma-separated pair consisting of `'IterationLimit'` and a positive integer.

The default value is 1000 if the transformed data fits in memory, as specified by `BlockSize`. Otherwise, the default value is 100.

Example: `'IterationLimit',500`

Data Types: `single` | `double`

Hyperparameter Optimization Options

Parameters to optimize, specified as the comma-separated pair consisting of `'OptimizeHyperparameters'` and one of these values:

• `'none'` — Do not optimize.

• `'auto'` — Use `{'KernelScale','Lambda','Epsilon'}`.

• `'all'` — Optimize all eligible parameters.

• Cell array of eligible parameter names.

• Vector of `optimizableVariable` objects, typically the output of `hyperparameters`.

The optimization attempts to minimize the cross-validation loss (error) for `fitrkernel` by varying the parameters. To control the cross-validation type and other aspects of the optimization, use the `HyperparameterOptimizationOptions` name-value pair argument.

Note

`'OptimizeHyperparameters'` values override any values you set using other name-value pair arguments. For example, setting `'OptimizeHyperparameters'` to `'auto'` causes the `'auto'` values to apply.

The eligible parameters for `fitrkernel` are:

• `Epsilon` — `fitrkernel` searches among positive values, by default log-scaled in the range `[1e-3,1e2]*iqr(Y)/1.349`.

• `KernelScale` — `fitrkernel` searches among positive values, by default log-scaled in the range `[1e-3,1e3]`.

• `Lambda` — `fitrkernel` searches among positive values, by default log-scaled in the range `[1e-3,1e3]/n`, where `n` is the number of observations.

• `Learner` — `fitrkernel` searches among `'svm'` and `'leastsquares'`.

• `NumExpansionDimensions` — `fitrkernel` searches among positive integers, by default log-scaled in the range `[100,10000]`.

Set nondefault parameters by passing a vector of `optimizableVariable` objects that have nondefault values. For example:

```
load carsmall
params = hyperparameters('fitrkernel',[Horsepower,Weight],MPG);
params(2).Range = [1e-4,1e6];
```

Pass `params` as the value of `'OptimizeHyperparameters'`.
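
For instance, a sketch of the complete call, continuing the `carsmall` code above:

```
Mdl = fitrkernel([Horsepower,Weight],MPG,'OptimizeHyperparameters',params);
```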

By default, iterative display appears at the command line, and plots appear according to the number of hyperparameters in the optimization. For the optimization and plots, the objective function is log(1 + cross-validation loss) for regression and the misclassification rate for classification. To control the iterative display, set the `Verbose` field of the `'HyperparameterOptimizationOptions'` name-value pair argument. To control the plots, set the `ShowPlots` field of the `'HyperparameterOptimizationOptions'` name-value pair argument.

For an example, see Optimize Kernel Regression.

Example: `'OptimizeHyperparameters','auto'`

Options for optimization, specified as the comma-separated pair consisting of `'HyperparameterOptimizationOptions'` and a structure. This argument modifies the effect of the `OptimizeHyperparameters` name-value pair argument. All fields in the structure are optional.

• `Optimizer` — Optimization algorithm:

  • `'bayesopt'` (default) — Use Bayesian optimization. Internally, this setting calls `bayesopt`.

  • `'gridsearch'` — Use grid search with `NumGridDivisions` values per dimension.

  • `'randomsearch'` — Search at random among `MaxObjectiveEvaluations` points.

  `'gridsearch'` searches in a random order, using uniform sampling without replacement from the grid. After optimization, you can get a table in grid order by using the command `sortrows(Mdl.HyperparameterOptimizationResults)`.

• `AcquisitionFunctionName` — Acquisition function, one of:

  • `'expected-improvement-per-second-plus'` (default)

  • `'expected-improvement'`

  • `'expected-improvement-plus'`

  • `'expected-improvement-per-second'`

  • `'lower-confidence-bound'`

  • `'probability-of-improvement'`

  For details, see the `bayesopt` `AcquisitionFunctionName` name-value pair argument, or Acquisition Function Types.

• `MaxObjectiveEvaluations` — Maximum number of objective function evaluations. The default is `30` for `'bayesopt'` or `'randomsearch'`, and the entire grid for `'gridsearch'`.

• `MaxTime` — Time limit, specified as a positive real number. The time limit is in seconds, as measured by `tic` and `toc`. Run time can exceed `MaxTime` because `MaxTime` does not interrupt function evaluations. The default is `Inf`.

• `NumGridDivisions` — For `'gridsearch'`, the number of values in each dimension. The value can be a vector of positive integers giving the number of values for each dimension, or a scalar that applies to all dimensions. This field is ignored for categorical variables. The default is `10`.

• `ShowPlots` — Logical value indicating whether to show plots. If `true` (default), this field plots the best objective function value against the iteration number. If there are one or two optimization parameters, and if `Optimizer` is `'bayesopt'`, then `ShowPlots` also plots a model of the objective function against the parameters.

• `SaveIntermediateResults` — Logical value indicating whether to save results when `Optimizer` is `'bayesopt'`. If `true`, this field overwrites a workspace variable named `'BayesoptResults'` at each iteration. The variable is a `BayesianOptimization` object. The default is `false`.

• `Verbose` — Display to the command line:

  • `0` — No iterative display

  • `1` (default) — Iterative display

  • `2` — Iterative display with extra information

  For details, see the `bayesopt` `Verbose` name-value pair argument.

• `UseParallel` — Logical value indicating whether to run Bayesian optimization in parallel, which requires Parallel Computing Toolbox™. For details, see Parallel Bayesian Optimization. The default is `false`.

• `Repartition` — Logical value indicating whether to repartition the cross-validation at every iteration. If `false` (default), the optimizer uses a single partition for the optimization. `true` usually gives the most robust results because this setting takes partitioning noise into account. However, for good results, `true` requires at least twice as many function evaluations.

Use no more than one of the following three field names. If you do not specify any cross-validation field, the default is `'Kfold',5`.

• `CVPartition` — A `cvpartition` object, as created by `cvpartition`.

• `Holdout` — A scalar in the range `(0,1)` representing the holdout fraction.

• `Kfold` — An integer greater than 1.

Example: `'HyperparameterOptimizationOptions',struct('MaxObjectiveEvaluations',60)`

Data Types: `struct`

Output Arguments

Trained kernel regression model, returned as a `RegressionKernel` model object or `RegressionPartitionedKernel` cross-validated model object.

If you set any of the name-value pair arguments `CrossVal`, `CVPartition`, `Holdout`, `KFold`, or `Leaveout`, then `Mdl` is a `RegressionPartitionedKernel` cross-validated model. Otherwise, `Mdl` is a `RegressionKernel` model.

To reference properties of `Mdl`, use dot notation. For example, enter `Mdl.NumExpansionDimensions` in the Command Window to display the number of dimensions of the expanded space.

Note

Unlike other regression models, and for economical memory usage, a `RegressionKernel` model object does not store the training data or training process details (for example, convergence history).

Optimization details, returned as a structure array including fields described in this table. The fields contain final values or name-value pair argument specifications.

| Field | Description |
| --- | --- |
| `Solver` | Objective function minimization technique: `'LBFGS-fast'`, `'LBFGS-blockwise'`, or `'LBFGS-tall'`. For details, see Algorithms. |
| `LossFunction` | Loss function. Either mean squared error (MSE) or epsilon-insensitive, depending on the type of linear regression model. See `Learner`. |
| `Lambda` | Regularization term strength. See `Lambda`. |
| `BetaTolerance` | Relative tolerance on the linear coefficients and the bias term. See `BetaTolerance`. |
| `GradientTolerance` | Absolute gradient tolerance. See `GradientTolerance`. |
| `ObjectiveValue` | Value of the objective function when optimization terminates. The regression loss plus the regularization term compose the objective function. |
| `GradientMagnitude` | Infinite norm of the gradient vector of the objective function when optimization terminates. See `GradientTolerance`. |
| `RelativeChangeInBeta` | Relative changes in the linear coefficients and the bias term when optimization terminates. See `BetaTolerance`. |
| `FitTime` | Elapsed, wall-clock time (in seconds) required to fit the model to the data. |
| `History` | History of optimization information. This field also includes the optimization information from training `Mdl`. This field is empty (`[]`) if you specify `'Verbose',0`. For details, see `Verbose` and Algorithms. |

To access fields, use dot notation. For example, to access the vector of objective function values for each iteration, enter `FitInfo.ObjectiveValue` in the Command Window.

Examine the information provided by `FitInfo` to assess whether convergence is satisfactory.

Cross-validation optimization of hyperparameters, returned as a `BayesianOptimization` object or a table of hyperparameters and associated values. The output is nonempty when the value of `'OptimizeHyperparameters'` is not `'none'`. The output value depends on the `Optimizer` field value of the `'HyperparameterOptimizationOptions'` name-value pair argument:

| Value of `Optimizer` Field | Value of `HyperparameterOptimizationResults` |
| --- | --- |
| `'bayesopt'` (default) | Object of class `BayesianOptimization` |
| `'gridsearch'` or `'randomsearch'` | Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst) |

Limitations

• `fitrkernel` does not accept initial conditions for the linear coefficients beta (β) and bias term (b) used to determine the decision function, $f(x) = T(x)\beta + b$.

• `fitrkernel` does not support standardization.

• `fitrkernel` does not accept table inputs.

More About

Random Feature Expansion

Random feature expansion, such as Random Kitchen Sinks[1] and Fastfood[2], is a scheme to approximate Gaussian kernels of the kernel regression algorithm for big data in a computationally efficient way. Random feature expansion is more practical for big data applications that have large training sets but can also be applied to smaller data sets that fit in memory.

The kernel regression algorithm searches for an optimal function that deviates from each response data point (yi) by values no greater than the epsilon margin (ε) after mapping the predictor data into a high-dimensional space.

Some regression problems cannot be described adequately using a linear model. In such cases, obtain a nonlinear regression model by replacing the dot product $x_1 x_2'$ with a nonlinear kernel function $G(x_1,x_2) = \langle \phi(x_1),\phi(x_2) \rangle$, where $x_i$ is the ith observation (row vector) and $\phi(x_i)$ is a transformation that maps $x_i$ to a high-dimensional space (called the "kernel trick"). However, evaluating $G(x_1,x_2)$, the Gram matrix, for each pair of observations is computationally expensive for a large data set (large n).

The random feature expansion scheme finds a random transformation so that its dot product approximates the Gaussian kernel. That is,

$G(x_1,x_2) = \langle \phi(x_1),\phi(x_2) \rangle \approx T(x_1)T(x_2)',$

where T(x) maps x in $\mathbb{R}^p$ to a high-dimensional space ($\mathbb{R}^m$). The Random Kitchen Sink[1] scheme uses the random transformation

$T(x) = m^{-1/2}\exp(iZx')',$

where $Z \in \mathbb{R}^{m \times p}$ is a sample drawn from $N(0,\sigma^{-2})$ and σ² is a kernel scale. This scheme requires O(mp) computation and storage. The Fastfood[2] scheme introduces another random basis V instead of Z using Hadamard matrices combined with Gaussian scaling matrices. This random basis reduces computation cost to O(m log p) and reduces storage to O(m).

You can specify values for m and σ² by using the `NumExpansionDimensions` and `KernelScale` name-value pair arguments of `fitrkernel`, respectively.
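
To see the approximation numerically, the following sketch (illustrative values only; this is not `fitrkernel` internal code) compares the random-feature dot product with the Gaussian kernel implied by the sampling distribution of Z:

```
rng(0)
p = 5; m = 1e5; sigma = 2;                % toy sizes and kernel scale (assumptions)
x1 = randn(1,p); x2 = randn(1,p);         % two observations (row vectors)
Z = randn(m,p)/sigma;                     % rows of Z drawn from N(0,sigma^(-2))
T = @(x) exp(1i*Z*x')'/sqrt(m);           % T(x) = m^(-1/2) exp(iZx')'
approxG = real(T(x1)*T(x2)');             % T(x1)T(x2)'
exactG = exp(-norm(x1-x2)^2/(2*sigma^2)); % Gaussian kernel for this sampling scheme
[approxG exactG]                          % the two values agree closely for large m
```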

The `fitrkernel` function uses the Fastfood scheme for random feature expansion and uses linear regression to train a Gaussian kernel regression model. Unlike solvers in the `fitrsvm` function, which require computation of the n-by-n Gram matrix, the solver in `fitrkernel` only needs to form a matrix of size n-by-m, with m typically much less than n for big data.

Box Constraint

A box constraint is a parameter that controls the maximum penalty imposed on observations that lie outside the epsilon margin (ε), and helps to prevent overfitting (regularization). Increasing the box constraint can lead to longer training times.

The box constraint (C) and the regularization term strength (λ) are related by C = 1/(λn), where n is the number of observations.
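
As a check, the relation reproduces the values reported by the hyperparameter optimization example on this page (`n = 392` observations, `Lambda = 6.1956e-06`):

```
n = 392;
lambda = 6.1956e-06;
C = 1/(lambda*n)   % approximately 411.75, matching BoxConstraint in the optimized model
```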

Algorithms

`fitrkernel` minimizes the regularized objective function using a Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) solver with ridge (L2) regularization. To find the type of LBFGS solver used for training, type `FitInfo.Solver` in the Command Window.

• `'LBFGS-fast'` — LBFGS solver.

• `'LBFGS-blockwise'` — LBFGS solver with a block-wise strategy. If `fitrkernel` requires more memory than the value of `BlockSize` to hold the transformed predictor data, then it uses a block-wise strategy.

• `'LBFGS-tall'` — LBFGS solver with a block-wise strategy for tall arrays.

When `fitrkernel` uses a block-wise strategy, `fitrkernel` implements LBFGS by distributing the calculation of the loss and gradient among different parts of the data at each iteration. Also, `fitrkernel` refines the initial estimates of the linear coefficients and the bias term by fitting the model locally to parts of the data and combining the coefficients by averaging. If you specify `'Verbose',1`, then `fitrkernel` displays diagnostic information for each data pass and stores the information in the `History` field of `FitInfo`.

When `fitrkernel` does not use a block-wise strategy, the initial estimates are zeros. If you specify `'Verbose',1`, then `fitrkernel` displays diagnostic information for each iteration and stores the information in the `History` field of `FitInfo`.

References

[1] Rahimi, A., and B. Recht. “Random Features for Large-Scale Kernel Machines.” Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.

[2] Le, Q., T. Sarlós, and A. Smola. “Fastfood — Approximating Kernel Expansions in Loglinear Time.” Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.

[3] Huang, P. S., H. Avron, T. N. Sainath, V. Sindhwani, and B. Ramabhadran. “Kernel methods match Deep Neural Networks on TIMIT.” 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. 2014, pp. 205–209.