Hyperparameter Tuning Strategies
1. Grid Search Cross-Validation
```python
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [5, 10, 15, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4]
}

rf_model = RandomForestClassifier(random_state=42)

grid_search = GridSearchCV(
    estimator=rf_model,
    param_grid=param_grid,
    cv=5,
    scoring='accuracy',
    n_jobs=-1
)

## X_train and y_train are assumed to be prepared in an earlier step
grid_search.fit(X_train, y_train)
best_params = grid_search.best_params_
```
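After fitting, the search object exposes the winning configuration and a refit estimator. A minimal runnable sketch, using a small synthetic dataset and a deliberately tiny grid as stand-ins for the tutorial's data and full parameter grid:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

## Synthetic stand-in for the tutorial's X_train / y_train
X_train, y_train = make_classification(n_samples=200, n_features=10, random_state=42)

## Small grid so the example runs in seconds
param_grid = {'n_estimators': [10, 50], 'max_depth': [3, None]}
grid_search = GridSearchCV(RandomForestClassifier(random_state=42),
                           param_grid, cv=3, scoring='accuracy')
grid_search.fit(X_train, y_train)

print(grid_search.best_params_)          ## best hyperparameter combination
print(grid_search.best_score_)           ## its mean cross-validated accuracy
best_model = grid_search.best_estimator_ ## already refit on all of X_train
```

`best_estimator_` is refit on the full training data by default (`refit=True`), so it can be used for prediction immediately.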
Hyperparameter Impact
| Hyperparameter | Impact on Model |
|---|---|
| n_estimators | Number of trees |
| max_depth | Tree complexity |
| min_samples_split | Prevents overfitting |
| min_samples_leaf | Reduces model variance |
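The complexity effect of `max_depth` is easy to see directly. A short sketch (synthetic data assumed) comparing mean cross-validated accuracy as tree depth grows:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=42)

## Mean CV accuracy at increasing tree complexity:
## depth 1 tends to underfit, None lets trees grow fully
scores = {}
for depth in [1, 5, None]:
    model = RandomForestClassifier(n_estimators=50, max_depth=depth,
                                   random_state=42)
    scores[depth] = cross_val_score(model, X, y, cv=3).mean()
print(scores)
```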
2. Advanced Optimization Techniques
```python
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint, uniform

random_param_dist = {
    'n_estimators': randint(50, 500),
    'max_depth': [None] + list(randint(10, 100).rvs(5)),
    'min_samples_split': randint(2, 20),
    'max_features': uniform(0.1, 0.9)
}

random_search = RandomizedSearchCV(
    estimator=rf_model,
    param_distributions=random_param_dist,
    n_iter=100,
    cv=5,
    scoring='accuracy',
    n_jobs=-1,
    random_state=42  ## makes the sampled candidates reproducible
)

random_search.fit(X_train, y_train)
```
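It helps to know exactly what these SciPy distributions sample. `randint(a, b)` draws integers in `[a, b)`, and `uniform(loc, scale)` draws floats in `[loc, loc + scale]`, so `uniform(0.1, 0.9)` covers 0.1 through 1.0. A quick check:

```python
from scipy.stats import randint, uniform

## randint(50, 500) samples integers in [50, 500)
n_estimators_draws = randint(50, 500).rvs(5, random_state=0)

## uniform(0.1, 0.9) samples floats in [0.1, 1.0]
max_features_draws = uniform(0.1, 0.9).rvs(5, random_state=0)

print(n_estimators_draws)
print(max_features_draws)
```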
```mermaid
graph TD
    A[Initial Model] --> B[Hyperparameter Tuning]
    B --> C{Performance Improved?}
    C -->|Yes| D[Validate Model]
    C -->|No| E[Adjust Strategy]
    D --> F[Deploy Model]
    E --> B
```
3. Ensemble and Boosting Techniques
```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

## Voting Classifier
from sklearn.ensemble import VotingClassifier

rf_classifier = RandomForestClassifier(random_state=42)
gb_classifier = GradientBoostingClassifier(random_state=42)

voting_classifier = VotingClassifier(
    estimators=[
        ('rf', rf_classifier),
        ('gb', gb_classifier)
    ],
    voting='soft'
)

## Cross-validation
cv_scores = cross_val_score(
    voting_classifier,
    X_train,
    y_train,
    cv=5
)
```
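End to end, the voting ensemble looks like this. A self-contained sketch on synthetic data, with smaller ensembles so it runs quickly (`'soft'` voting averages the two models' predicted class probabilities, so both base estimators must support `predict_proba`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

## Synthetic stand-in for the tutorial's training data
X_train, y_train = make_classification(n_samples=200, n_features=8,
                                       random_state=42)

voting_classifier = VotingClassifier(
    estimators=[
        ('rf', RandomForestClassifier(n_estimators=25, random_state=42)),
        ('gb', GradientBoostingClassifier(n_estimators=25, random_state=42))
    ],
    voting='soft'  ## average class probabilities rather than hard labels
)

cv_scores = cross_val_score(voting_classifier, X_train, y_train, cv=3)
print(cv_scores.mean(), cv_scores.std())
```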
Beyond hyperparameter tuning, several complementary strategies can further improve Random Forest performance:

- Feature selection
- Dimensionality reduction
- Ensemble methods
- Regularization
- Handling class imbalance
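Two of these strategies are built into scikit-learn directly. A hedged sketch on synthetic imbalanced data: `class_weight='balanced'` reweights classes inversely to their frequency, and `SelectFromModel` keeps only features whose importance exceeds the mean importance of the fitted forest:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

## Imbalanced synthetic data: roughly 90% negatives, 10% positives
X, y = make_classification(n_samples=300, n_features=20,
                           weights=[0.9, 0.1], random_state=42)

## Handle class imbalance by reweighting classes inversely to frequency
rf = RandomForestClassifier(n_estimators=50, class_weight='balanced',
                            random_state=42)
rf.fit(X, y)

## Feature selection: drop features below the mean feature importance
selector = SelectFromModel(rf, prefit=True)
X_reduced = selector.transform(X)
print(X.shape, '->', X_reduced.shape)
```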
Memory and Computational Efficiency
```python
## Use n_jobs for parallel processing
rf_model = RandomForestClassifier(
    n_estimators=100,
    n_jobs=-1,  ## Utilize all CPU cores
    random_state=42
)
```
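For large datasets, memory can be trimmed further with the `max_samples` parameter (available in scikit-learn 0.22 and later), which caps the bootstrap sample each tree trains on. A brief sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

## Each tree sees only 50% of the rows, reducing memory use and
## training time at a possible small cost in accuracy
rf_small = RandomForestClassifier(
    n_estimators=50,
    max_samples=0.5,
    n_jobs=-1,
    random_state=42
)
rf_small.fit(X, y)
print(rf_small.score(X, y))
```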
Key Optimization Metrics
| Metric | Purpose |
|---|---|
| Accuracy | Overall model performance |
| Precision | Positive prediction accuracy |
| Recall | Ability to find all positive instances |
| F1-Score | Balanced precision and recall |
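All four metrics are available in `sklearn.metrics`. A minimal sketch computing them on a held-out split of synthetic binary data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

## Evaluate each optimization metric on the held-out test set
for name, fn in [('accuracy', accuracy_score), ('precision', precision_score),
                 ('recall', recall_score), ('f1', f1_score)]:
    print(f'{name}: {fn(y_test, y_pred):.3f}')
```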
With these optimization techniques from LabEx, you can build robust and efficient Random Forest models.