Common evaluation metrics for regression models include:
- Mean Absolute Error (MAE): Measures the average magnitude of errors in a set of predictions, without considering their direction. It is the average over the test sample of the absolute differences between predictions and actual observations.
  \[
  \text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|
  \]
- Mean Squared Error (MSE): Measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual values.
  \[
  \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
  \]
- Root Mean Squared Error (RMSE): The square root of the mean of the squared errors. Because errors are squared before averaging, it gives relatively high weight to large errors.
  \[
  \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
  \]
- R-squared (Coefficient of Determination): Indicates the proportion of the variance in the dependent variable that is predictable from the independent variables. It typically ranges from 0 to 1, where 1 indicates perfect prediction; it can be negative for a model that fits worse than simply predicting the mean.
  \[
  R^2 = 1 - \frac{\text{SS}_{\text{res}}}{\text{SS}_{\text{tot}}}
  \]
  where \(\text{SS}_{\text{res}}\) is the sum of squares of residuals and \(\text{SS}_{\text{tot}}\) is the total sum of squares.
- Adjusted R-squared: Adjusts the R-squared value for the number of predictors \(p\) in the model, providing a fairer measure when comparing models with different numbers of predictors.
  \[
  \bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}
  \]
These metrics help assess the performance of regression models and guide improvements.
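As a quick sketch, the formulas above can be computed directly with NumPy. The `regression_metrics` helper and the toy `y_true`/`y_pred` arrays below are illustrative names, not part of any particular library; `n_predictors` is the \(p\) used by adjusted R-squared.

```python
import numpy as np

def regression_metrics(y_true, y_pred, n_predictors=1):
    """Compute MAE, MSE, RMSE, R-squared, and adjusted R-squared."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_true.size

    mae = np.mean(np.abs(y_true - y_pred))        # average absolute error
    mse = np.mean((y_true - y_pred) ** 2)         # average squared error
    rmse = np.sqrt(mse)                           # square root of MSE

    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    r2 = 1 - ss_res / ss_tot

    # Adjusted R-squared penalizes extra predictors (p = n_predictors)
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "AdjR2": adj_r2}

# Toy data: four observations from a hypothetical model
metrics = regression_metrics([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0])
print(metrics)
```

The same values (apart from adjusted R-squared) can be obtained from `sklearn.metrics` functions such as `mean_absolute_error`, `mean_squared_error`, and `r2_score`, which is a useful cross-check in practice.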
