# GeneralizedLinearModel

Generalized linear regression model class

## Description

`GeneralizedLinearModel` is a fitted generalized linear regression model. A generalized linear regression model is a special class of nonlinear model that describes a nonlinear relationship between a response and predictors while retaining the structure of a linear regression model. The response variable follows a normal, binomial, Poisson, gamma, or inverse Gaussian distribution with parameters that include the mean response μ. A link function f defines the relationship between μ and the linear combination of predictors.

Use the properties of a `GeneralizedLinearModel` object to investigate a fitted generalized linear regression model. The object properties include information about coefficient estimates, summary statistics, fitting method, and input data. Use the object functions to predict responses and to modify, evaluate, and visualize the model.

## Creation

Create a `GeneralizedLinearModel` object by using `fitglm` or `stepwiseglm`.

`fitglm` fits a generalized linear regression model to data using a fixed model specification. Use `addTerms`, `removeTerms`, or `step` to add or remove terms from the model. Alternatively, use `stepwiseglm` to fit a model using stepwise generalized linear regression.

## Properties


### Coefficient Estimates

Covariance matrix of coefficient estimates, specified as a p-by-p matrix of numeric values. p is the number of coefficients in the fitted model, as given by `NumCoefficients`.

For details, see Coefficient Standard Errors and Confidence Intervals.

Data Types: `single` | `double`

Coefficient names, specified as a cell array of character vectors, each containing the name of the corresponding term.

Data Types: `cell`

Coefficient values, specified as a table. `Coefficients` contains one row for each coefficient and these columns:

• `Estimate` — Estimated coefficient value

• `SE` — Standard error of the estimate

• `tStat` — t-statistic for a two-sided test with the null hypothesis that the coefficient is zero

• `pValue` — p-value for the t-statistic

Use `coefTest` to perform linear hypothesis tests on the coefficients. Use `coefCI` to find the confidence intervals of the coefficient estimates.

To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the estimated coefficient vector in the model `mdl`:

`beta = mdl.Coefficients.Estimate`

Data Types: `table`

Number of model coefficients, specified as a positive integer. `NumCoefficients` includes coefficients that are set to zero when the model terms are rank deficient.

Data Types: `double`

Number of estimated coefficients in the model, specified as a positive integer. `NumEstimatedCoefficients` does not include coefficients that are set to zero when the model terms are rank deficient. `NumEstimatedCoefficients` is the degrees of freedom for regression.

Data Types: `double`

### Summary Statistics

Deviance of the fit, specified as a numeric value. The deviance is useful for comparing two models when one model is a special case of the other model. The difference between the deviance of the two models has a chi-square distribution with degrees of freedom equal to the difference in the number of estimated parameters between the two models. For more information, see Deviance.

Data Types: `single` | `double`

Degrees of freedom for the error (residuals), equal to the number of observations minus the number of estimated coefficients, specified as a positive integer.

Data Types: `double`

Observation diagnostics, specified as a table that contains one row for each observation and the columns described in this table.

| Column | Meaning | Description |
| --- | --- | --- |
| `Leverage` | Diagonal elements of `HatMatrix` | `Leverage` for each observation indicates to what extent the fit is determined by the observed predictor values. A value close to `1` indicates that the fit is largely determined by that observation, with little contribution from the other observations. A value close to `0` indicates that the fit is largely determined by the other observations. For a model with `P` coefficients and `N` observations, the average value of `Leverage` is `P/N`. A `Leverage` value greater than `2*P/N` indicates high leverage. |
| `CooksDistance` | Cook's distance of scaled change in fitted values | `CooksDistance` is a measure of scaled change in fitted values. An observation with `CooksDistance` greater than three times the mean Cook's distance can be an outlier. |
| `HatMatrix` | Projection matrix to compute fitted from observed responses | `HatMatrix` is an `N`-by-`N` matrix such that `Fitted = HatMatrix*Y`, where `Y` is the response vector and `Fitted` is the vector of fitted response values. |

The software computes these values on the scale of the linear combination of the predictors, stored in the `LinearPredictor` field of the `Fitted` and `Residuals` properties. For example, the software computes the diagnostic values by using the fitted response and adjusted response values from the model `mdl`.

```matlab
Yfit = mdl.Fitted.LinearPredictor;
Yadjusted = mdl.Fitted.LinearPredictor + mdl.Residuals.LinearPredictor;
```

`Diagnostics` contains information that is helpful in finding outliers and influential observations. For more details, see Leverage, Cook’s Distance, and Hat Matrix.

Use `plotDiagnostics` to plot observation diagnostics.

Rows not used in the fit because of missing values (in `ObservationInfo.Missing`) or excluded values (in `ObservationInfo.Excluded`) contain `NaN` values in the `CooksDistance` column and zeros in the `Leverage` and `HatMatrix` columns.

To obtain any of these columns as an array, index into the property using dot notation. For example, obtain the hat matrix in the model `mdl`:

`HatMatrix = mdl.Diagnostics.HatMatrix;`

Data Types: `table`
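As a sketch of how these diagnostics are typically used, the following flags observations whose leverage exceeds the `2*P/N` rule of thumb described above (this assumes a fitted model `mdl`; the variable names are illustrative):

```matlab
% Sketch: flag high-leverage observations using the 2*P/N rule of thumb
P = mdl.NumCoefficients;
N = mdl.NumObservations;
highLev = find(mdl.Diagnostics.Leverage > 2*P/N);   % row indices of high-leverage points
```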

Scale factor of the variance of the response, specified as a numeric scalar.

If the `'DispersionFlag'` name-value pair argument of `fitglm` or `stepwiseglm` is `true`, then the function estimates the `Dispersion` scale factor in computing the variance of the response. The variance of the response equals the theoretical variance multiplied by the scale factor.

For example, the variance function for the binomial distribution is p(1–p)/n, where p is the probability parameter and n is the sample size parameter. If `Dispersion` is near `1`, the variance of the data appears to agree with the theoretical variance of the binomial distribution. If `Dispersion` is larger than `1`, the data set is “overdispersed” relative to the binomial distribution.

Data Types: `double`
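A minimal sketch of requesting an estimated dispersion at fit time (here `tbl` and `modelspec` are illustrative placeholders for your data and model specification):

```matlab
% Sketch: estimate the dispersion scale factor when fitting
mdl = fitglm(tbl,modelspec,'Distribution','poisson','DispersionFlag',true);
mdl.Dispersion   % values well above 1 suggest overdispersion
```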

Flag to indicate whether `fitglm` used the `Dispersion` scale factor to compute standard errors for the coefficients in `Coefficients.SE`, specified as a logical value. If `DispersionEstimated` is `false`, `fitglm` used the theoretical value of the variance.

• `DispersionEstimated` can be `false` only for the binomial and Poisson distributions.

• Set `DispersionEstimated` by setting the `'DispersionFlag'` name-value pair argument of `fitglm` or `stepwiseglm`.

Data Types: `logical`

Fitted (predicted) values based on the input data, specified as a table that contains one row for each observation and the columns described in this table.

| Column | Description |
| --- | --- |
| `Response` | Predicted values on the scale of the response |
| `LinearPredictor` | Predicted values on the scale of the linear combination of the predictors (same as the link function applied to the `Response` fitted values) |
| `Probability` | Fitted probabilities (included only with the binomial distribution) |

To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the vector `f` of fitted values on the response scale in the model `mdl`:

`f = mdl.Fitted.Response`

Use `predict` to compute predictions for other predictor values, or to compute confidence bounds on `Fitted`.

Data Types: `table`

Loglikelihood of the model distribution at the response values, specified as a numeric value. The mean is fitted from the model, and other parameters are estimated as part of the model fit.

Data Types: `single` | `double`

Criterion for model comparison, specified as a structure with these fields:

• `AIC` — Akaike information criterion. `AIC = –2*logL + 2*m`, where `logL` is the loglikelihood and `m` is the number of estimated parameters.

• `AICc` — Akaike information criterion corrected for the sample size. `AICc = AIC + (2*m*(m + 1))/(n – m – 1)`, where `n` is the number of observations.

• `BIC` — Bayesian information criterion. `BIC = –2*logL + m*log(n)`.

• `CAIC` — Consistent Akaike information criterion. `CAIC = –2*logL + m*(log(n) + 1)`.

Information criteria are model selection tools that you can use to compare multiple models fit to the same data. These criteria are likelihood-based measures of model fit that include a penalty for complexity (specifically, the number of parameters). Different information criteria are distinguished by the form of the penalty.

When you compare multiple models, the model with the lowest information criterion value is the best-fitting model. The best-fitting model can vary depending on the criterion used for model comparison.

To obtain any of the criterion values as a scalar, index into the property using dot notation. For example, obtain the AIC value `aic` in the model `mdl`:

`aic = mdl.ModelCriterion.AIC`

Data Types: `struct`
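The AIC definition above can be checked against the stored properties. Note that when the dispersion parameter is estimated, the parameter count `m` may exceed the number of estimated coefficients, so this sketch assumes the fixed-dispersion case:

```matlab
% Sketch (fixed-dispersion case): recompute AIC from stored properties
logL = mdl.LogLikelihood;
m = mdl.NumEstimatedCoefficients;   % assumption: no dispersion parameter estimated
aic = -2*logL + 2*m;                % compare with mdl.ModelCriterion.AIC
```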

Residuals for the fitted model, specified as a table that contains one row for each observation and the columns described in this table.

| Column | Description |
| --- | --- |
| `Raw` | Observed minus fitted values |
| `LinearPredictor` | Residuals on the linear predictor scale, equal to the adjusted response value minus the fitted linear combination of the predictors |
| `Pearson` | Raw residuals divided by the estimated standard deviation of the response |
| `Anscombe` | Residuals defined on transformed data with the transformation selected to remove skewness |
| `Deviance` | Residuals based on the contribution of each observation to the deviance |

Rows not used in the fit because of missing values (in `ObservationInfo.Missing`) contain `NaN` values.

To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the ordinary raw residual vector `r` in the model `mdl`:

`r = mdl.Residuals.Raw`

Data Types: `table`
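For instance, the deviance residuals are defined so that their squares sum to the model deviance; a quick consistency check (assuming a fitted model `mdl`) is:

```matlab
% Sketch: deviance residuals sum-of-squares should match the model deviance
devCheck = sum(mdl.Residuals.Deviance.^2);   % compare with mdl.Deviance
```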

R-squared value for the model, specified as a structure with five fields.

| Field | Description | Equation |
| --- | --- | --- |
| `Ordinary` | Ordinary (unadjusted) R-squared | $R_{\text{Ordinary}}^{2}=1-\frac{\text{SSE}}{\text{SST}}$, where `SSE` is the sum of squared errors and `SST` is the total sum of squared deviations of the response vector from the mean of the response vector |
| `Adjusted` | R-squared adjusted for the number of coefficients | $R_{\text{Adjusted}}^{2}=1-\frac{\text{SSE}}{\text{SST}}\cdot\frac{N-1}{\text{DFE}}$, where $N$ is the number of observations (`NumObservations`) and `DFE` is the degrees of freedom for the error (residuals) |
| `LLR` | Loglikelihood ratio | $R_{\text{LLR}}^{2}=1-\frac{L}{L_{0}}$, where $L$ is the loglikelihood of the fitted model (`LogLikelihood`) and $L_{0}$ is the loglikelihood of a model that includes only a constant term. $R_{\text{LLR}}^{2}$ is the McFadden pseudo R-squared value for logistic regression models |
| `Deviance` | Deviance R-squared | $R_{\text{Deviance}}^{2}=1-\frac{D}{D_{0}}$, where $D$ is the deviance of the fitted model (`Deviance`) and $D_{0}$ is the deviance of a model that includes only a constant term |
| `AdjGeneralized` | Adjusted generalized R-squared | $R_{\text{AdjGeneralized}}^{2}=\frac{1-\exp\left(\frac{2(L_{0}-L)}{N}\right)}{1-\exp\left(\frac{2L_{0}}{N}\right)}$, the Nagelkerke adjustment to a formula proposed by Maddala, Cox and Snell, and Magee for logistic regression models |

To obtain any of these values as a scalar, index into the property using dot notation. For example, to obtain the adjusted R-squared value in the model `mdl`, enter:

`r2 = mdl.Rsquared.Adjusted`

Data Types: `struct`
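As an illustrative sketch, the deviance R-squared can be reproduced by fitting a constant-only model to the same data (here `tbl` and the formula `'y ~ 1'` are hypothetical placeholders for your data and response name):

```matlab
% Sketch: reproduce Rsquared.Deviance from a constant-only fit
mdl0 = fitglm(tbl,'y ~ 1','Distribution','poisson');   % intercept-only model
r2dev = 1 - mdl.Deviance/mdl0.Deviance;                % compare with mdl.Rsquared.Deviance
```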

Sum of squared errors (residuals), specified as a numeric value. If the model was trained with observation weights, the sum of squares in the `SSE` calculation is the weighted sum of squares.

Data Types: `single` | `double`

Regression sum of squares, specified as a numeric value. `SSR` is equal to the sum of the squared deviations between the fitted values and the mean of the response. If the model was trained with observation weights, the sum of squares in the `SSR` calculation is the weighted sum of squares.

Data Types: `single` | `double`

Total sum of squares, specified as a numeric value. `SST` is equal to the sum of squared deviations of the response vector `y` from the `mean(y)`. If the model was trained with observation weights, the sum of squares in the `SST` calculation is the weighted sum of squares.

Data Types: `single` | `double`

### Fitting Information

Stepwise fitting information, specified as a structure with the fields described in this table.

| Field | Description |
| --- | --- |
| `Start` | Formula representing the starting model |
| `Lower` | Formula representing the lower bound model. The terms in `Lower` must remain in the model. |
| `Upper` | Formula representing the upper bound model. The model cannot contain more terms than `Upper`. |
| `Criterion` | Criterion used for the stepwise algorithm, such as `'sse'` |
| `PEnter` | Threshold for `Criterion` to add a term |
| `PRemove` | Threshold for `Criterion` to remove a term |
| `History` | Table representing the steps taken in the fit |

The `History` table contains one row for each step, including the initial fit, and the columns described in this table.

| Column | Description |
| --- | --- |
| `Action` | Action taken during the step: `'Start'` (first step), `'Add'` (a term is added), or `'Remove'` (a term is removed) |
| `TermName` | If `Action` is `'Start'`, the starting model specification; if `Action` is `'Add'` or `'Remove'`, the term added or removed in the step |
| `Terms` | Model specification in a terms matrix |
| `DF` | Regression degrees of freedom after the step |
| `delDF` | Change in regression degrees of freedom from the previous step (negative for steps that remove a term) |
| `Deviance` | Deviance (residual sum of squares) at the step |
| `FStat` | F-statistic that leads to the step |
| `PValue` | p-value of the F-statistic |

The structure is empty unless you fit the model using stepwise regression.

Data Types: `struct`
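For a model fit with `stepwiseglm`, you can inspect this information directly, for example:

```matlab
% Sketch: view the stepwise fitting history (model fit with stepwiseglm)
mdl.Steps.History      % one row per step, including the initial fit
mdl.Steps.Criterion    % criterion used by the stepwise algorithm
```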

### Input Data

Generalized distribution information, specified as a structure with the fields described in this table.

| Field | Description |
| --- | --- |
| `Name` | Name of the distribution: `'normal'`, `'binomial'`, `'poisson'`, `'gamma'`, or `'inverse gaussian'` |
| `DevianceFunction` | Function that computes the components of the deviance as a function of the fitted parameter values and the response values |
| `VarianceFunction` | Function that computes the theoretical variance for the distribution as a function of the fitted parameter values. When `DispersionEstimated` is `true`, the software multiplies the variance function by `Dispersion` in the computation of the coefficient standard errors. |

Data Types: `struct`

Model information, specified as a `LinearFormula` object.

Display the formula of the fitted model `mdl` using dot notation:

`mdl.Formula`

Number of observations the fitting function used in fitting, specified as a positive integer. `NumObservations` is the number of observations supplied in the original table, dataset, or matrix, minus any excluded rows (set with the `'Exclude'` name-value pair argument) or rows with missing values.

Data Types: `double`

Number of predictor variables used to fit the model, specified as a positive integer.

Data Types: `double`

Number of variables in the input data, specified as a positive integer. `NumVariables` is the number of variables in the original table or dataset, or the total number of columns in the predictor matrix and response vector.

`NumVariables` also includes any variables that are not used to fit the model as predictors or as the response.

Data Types: `double`

Observation information, specified as an n-by-4 table, where n is equal to the number of rows of input data. `ObservationInfo` contains the columns described in this table.

| Column | Description |
| --- | --- |
| `Weights` | Observation weights, specified as a numeric value. The default value is `1`. |
| `Excluded` | Indicator of excluded observations, specified as a logical value. The value is `true` if you exclude the observation from the fit by using the `'Exclude'` name-value pair argument. |
| `Missing` | Indicator of missing observations, specified as a logical value. The value is `true` if the observation is missing. |
| `Subset` | Indicator of whether or not the fitting function uses the observation, specified as a logical value. The value is `true` if the observation is not excluded or missing, meaning the fitting function uses the observation. |

To obtain any of these columns as a vector, index into the property using dot notation. For example, obtain the weight vector `w` of the model `mdl`:

`w = mdl.ObservationInfo.Weights`

Data Types: `table`

Observation names, specified as a cell array of character vectors containing the names of the observations used in the fit.

• If the fit is based on a table or dataset containing observation names, `ObservationNames` uses those names.

• Otherwise, `ObservationNames` is an empty cell array.

Data Types: `cell`

Offset variable, specified as a numeric vector with the same length as the number of rows in the data. `Offset` is passed from `fitglm` or `stepwiseglm` in the `'Offset'` name-value pair argument. The fitting functions use `Offset` as an additional predictor variable with a coefficient value fixed at `1`. In other words, the formula for fitting is

`f(μ) ~ Offset + (terms involving real predictors)`

where f is the link function. The `Offset` predictor has coefficient `1`.

For example, consider a Poisson regression model. Suppose the number of counts is known for theoretical reasons to be proportional to a predictor `A`. By using the log link function and by specifying `log(A)` as an offset, you can force the model to satisfy this theoretical constraint.

Data Types: `double`
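A minimal sketch of this pattern with simulated data (all names and values here are illustrative):

```matlab
% Sketch: Poisson counts proportional to a known exposure A,
% enforced with a log(A) offset (simulated, illustrative data)
rng(1)                          % for reproducibility
A = randi([10 100],200,1);      % known exposure
x = randn(200,1);
y = poissrnd(A.*exp(0.5*x));    % counts proportional to A
mdlOff = fitglm(x,y,'linear','Distribution','poisson','Offset',log(A));
% mdlOff.Offset stores log(A); its coefficient is fixed at 1 and does not
% appear in mdlOff.Coefficients
```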

Names of predictors used to fit the model, specified as a cell array of character vectors.

Data Types: `cell`

Response variable name, specified as a character vector.

Data Types: `char`

Information about variables contained in `Variables`, specified as a table with one row for each variable and the columns described in this table.

| Column | Description |
| --- | --- |
| `Class` | Variable class, specified as a cell array of character vectors, such as `'double'` and `'categorical'` |
| `Range` | Variable range, specified as a cell array of vectors: a two-element vector `[min,max]` (the minimum and maximum values) for a continuous variable, or a vector of distinct values for a categorical variable |
| `InModel` | Indicator of which variables are in the fitted model, specified as a logical vector. The value is `true` if the model includes the variable. |
| `IsCategorical` | Indicator of categorical variables, specified as a logical vector. The value is `true` if the variable is categorical. |

`VariableInfo` also includes any variables that are not used to fit the model as predictors or as the response.

Data Types: `table`

Names of variables, specified as a cell array of character vectors.

• If the fit is based on a table or dataset, this property provides the names of the variables in the table or dataset.

• If the fit is based on a predictor matrix and response vector, `VariableNames` contains the values specified by the `'VarNames'` name-value pair argument of the fitting method. The default value of `'VarNames'` is `{'x1','x2',...,'xn','y'}`.

`VariableNames` also includes any variables that are not used to fit the model as predictors or as the response.

Data Types: `cell`

Input data, specified as a table. `Variables` contains both predictor and response values. If the fit is based on a table or dataset array, `Variables` contains all the data from the table or dataset array. Otherwise, `Variables` is a table created from the input data matrix `X` and the response vector `y`.

`Variables` also includes any variables that are not used to fit the model as predictors or as the response.

Data Types: `table`

## Object Functions


| Function | Description |
| --- | --- |
| `compact` | Compact generalized linear regression model |
| `addTerms` | Add terms to generalized linear regression model |
| `removeTerms` | Remove terms from generalized linear regression model |
| `step` | Improve generalized linear regression model by adding or removing terms |
| `feval` | Predict responses of generalized linear regression model using one input for each predictor |
| `predict` | Predict responses of generalized linear regression model |
| `random` | Simulate responses with random noise for generalized linear regression model |
| `coefCI` | Confidence intervals of coefficient estimates of generalized linear regression model |
| `coefTest` | Linear hypothesis test on generalized linear regression model coefficients |
| `devianceTest` | Analysis of deviance for generalized linear regression model |
| `partialDependence` | Compute partial dependence |
| `plotDiagnostics` | Plot observation diagnostics of generalized linear regression model |
| `plotPartialDependence` | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
| `plotResiduals` | Plot residuals of generalized linear regression model |
| `plotSlice` | Plot of slices through fitted generalized linear regression surface |
| `gather` | Gather properties of Statistics and Machine Learning Toolbox object from GPU |

## Examples


Fit a logistic regression model of the probability of smoking as a function of age, weight, and sex, using a two-way interaction model.

Load the `hospital` data set.

`load hospital`

Convert the dataset array to a table.

`tbl = dataset2table(hospital);`

Specify the model using a formula that includes two-way interactions and lower-order terms.

`modelspec = 'Smoker ~ Age*Weight*Sex - Age:Weight:Sex';`

Create the generalized linear model.

`mdl = fitglm(tbl,modelspec,'Distribution','binomial')`
```
mdl = 
Generalized linear regression model:
    logit(Smoker) ~ 1 + Sex*Age + Sex*Weight + Age*Weight
    Distribution = Binomial

Estimated Coefficients:
                        Estimate         SE        tStat      pValue 
                       ___________    _________    ________    _______

    (Intercept)            -6.0492       19.749     -0.3063    0.75938
    Sex_Male               -2.2859       12.424    -0.18399    0.85402
    Age                    0.11691      0.50977     0.22934    0.81861
    Weight                0.031109      0.15208     0.20455    0.83792
    Sex_Male:Age          0.020734      0.20681     0.10025    0.92014
    Sex_Male:Weight        0.01216     0.053168     0.22871     0.8191
    Age:Weight         -0.00071959    0.0038964    -0.18468    0.85348


100 observations, 93 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 5.07, p-value = 0.535
```

The large p-value indicates that the model might not differ statistically from a constant.

Create response data using three of 20 predictor variables, and create a generalized linear model using stepwise regression from a constant model to see if `stepwiseglm` finds the correct predictors.

Generate sample data that has 20 predictor variables. Use three of the predictors to generate the Poisson response variable.

```matlab
rng default % for reproducibility
X = randn(100,20);
mu = exp(X(:,[5 10 15])*[.4;.2;.3] + 1);
y = poissrnd(mu);
```

Fit a generalized linear regression model using the Poisson distribution. Specify the starting model as a model that contains only a constant (intercept) term. Also, specify a model with an intercept and linear term for each predictor as the largest model to consider as the fit by using the `'Upper'` name-value pair argument.

`mdl = stepwiseglm(X,y,'constant','Upper','linear','Distribution','poisson')`
```
1. Adding x5, Deviance = 134.439, Chi2Stat = 52.24814, PValue = 4.891229e-13
2. Adding x15, Deviance = 106.285, Chi2Stat = 28.15393, PValue = 1.1204e-07
3. Adding x10, Deviance = 95.0207, Chi2Stat = 11.2644, PValue = 0.000790094
```
```
mdl = 
Generalized linear regression model:
    log(y) ~ 1 + x5 + x10 + x15
    Distribution = Poisson

Estimated Coefficients:
                   Estimate       SE       tStat       pValue  
                   ________    ________    ______    __________

    (Intercept)     1.0115     0.064275    15.737    8.4217e-56
    x5             0.39508     0.066665    5.9263    3.0977e-09
    x10            0.18863      0.05534    3.4085     0.0006532
    x15            0.29295     0.053269    5.4995    3.8089e-08


100 observations, 96 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 91.7, p-value = 9.61e-20
```

`stepwiseglm` finds the three correct predictors: `x5`, `x10`, and `x15`.
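To use the fitted model for new observations, a short sketch (the predictor values here are illustrative):

```matlab
% Predict the expected count at a new predictor vector (illustrative values)
Xnew = zeros(1,20);
Xnew([5 10 15]) = 1;        % set the three active predictors to 1
yhat = predict(mdl,Xnew);   % predicted mean response on the count scale
```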