Implement a grid search function for hyperparameter optimization. Grid search is a technique used to find the optimal set of hyperparameters for a machine learning model by exhaustively searching through all possible combinations of specified parameter values.
Your function should:
- Accept training data, validation data, a parameter grid (dictionary mapping parameter names to lists of values), a model function, and a scoring function
- Generate all possible combinations of hyperparameters from the parameter grid
- Evaluate each combination by training/predicting with the model function and scoring with the scoring function
- Return the best parameter combination and its corresponding score (rounded to 4 decimal places)
The model_fn signature is: model_fn(X_train, y_train, X_val, **params) -> predictions. The scoring_fn signature is: scoring_fn(y_true, y_pred) -> score (higher is better).
When multiple parameter combinations achieve the same best score, return the first one encountered (based on the order of iteration through combinations).
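Generating every combination from the parameter grid is a Cartesian product over the value lists. A minimal sketch using itertools.product (the grid below is a hypothetical two-parameter example, not part of the problem's test data):

```python
from itertools import product

# Hypothetical grid for illustration: 2 x 2 = 4 combinations.
param_grid = {'k': [1, 3], 'weights': ['uniform', 'distance']}

keys = list(param_grid.keys())
# product(*values) yields tuples in the same order as the keys,
# so zipping reconstructs one params dict per combination.
combos = [dict(zip(keys, values)) for values in product(*param_grid.values())]
print(combos)
```

Because dicts preserve insertion order in Python 3.7+, the iteration order of combinations is deterministic, which is what makes the "first one encountered" tie-breaking rule well defined.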
Examples
Example 1:
Input:
X_train = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], y_train = [0, 1, 1, 0, 0], X_val = [[1.5, 0]], y_val = [1], param_grid = {'k': [1, 3, 5]}, model_fn = knn_model, scoring_fn = accuracy
Output:
({'k': 1}, 1.0)
Explanation: Grid search evaluates all k values: k=1 predicts label 1 (correct, accuracy=1.0), k=3 predicts label 1 (correct, accuracy=1.0), k=5 predicts label 0 (wrong, accuracy=0.0). Both k=1 and k=3 achieve accuracy 1.0, so the first one (k=1) is returned as the best parameter.
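The example references knn_model and accuracy without defining them. A minimal sketch of what such functions could look like, matching the required signatures (these are illustrative assumptions, not the problem's official implementations):

```python
import numpy as np

def knn_model(X_train, y_train, X_val, k=3):
    """Predict each validation point by majority vote among its k
    nearest training points (Euclidean distance)."""
    X_train, y_train, X_val = map(np.asarray, (X_train, y_train, X_val))
    preds = []
    for x in X_val:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest_labels = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest_labels).argmax())
    return np.array(preds)

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
```

On the example data, k=1 and k=3 both find label-1 neighbors closest to [1.5, 0], while k=5 votes over all five training labels (three 0s, two 1s) and predicts 0, matching the explanation above.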
Starter Code
import numpy as np
from itertools import product
def grid_search(X_train: np.ndarray, y_train: np.ndarray,
                X_val: np.ndarray, y_val: np.ndarray,
                param_grid: dict, model_fn: callable,
                scoring_fn: callable) -> tuple:
    """
    Perform grid search to find optimal hyperparameters.

    Args:
        X_train: Training features
        y_train: Training labels
        X_val: Validation features
        y_val: Validation labels
        param_grid: Dict mapping parameter names to lists of values to try
        model_fn: Function(X_train, y_train, X_val, **params) -> predictions
        scoring_fn: Function(y_true, y_pred) -> score (higher is better)

    Returns:
        Tuple of (best_params dict, best_score rounded to 4 decimals)
    """
    pass
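One way the starter code could be completed, sketched under the stated requirements (iterate combinations in grid order, keep the first combination on score ties, round the returned score to 4 decimals):

```python
import numpy as np
from itertools import product

def grid_search(X_train, y_train, X_val, y_val,
                param_grid, model_fn, scoring_fn):
    """Exhaustively evaluate every parameter combination and return
    (best_params, best_score rounded to 4 decimals)."""
    keys = list(param_grid.keys())
    best_params, best_score = None, -np.inf
    for values in product(*param_grid.values()):
        params = dict(zip(keys, values))
        preds = model_fn(X_train, y_train, X_val, **params)
        score = scoring_fn(y_val, preds)
        # Strict '>' means ties keep the earlier combination,
        # satisfying the first-encountered tie-breaking rule.
        if score > best_score:
            best_params, best_score = params, score
    return best_params, round(best_score, 4)
```

The strict comparison is the key design choice: replacing it with `>=` would silently return the *last* tied combination instead of the first.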