
Custom Models

If you are using the AutoML provider, you can define and use custom classification and regression models. If you understand your data well and want to experiment with your own model definitions, you can manually define an additional set of models that AutoML can select when performing operations. These models must be defined in Python.

Once you have defined a model and stored it in the appropriate directory, you can refer to the model by its name in CREATE MODEL and TRAIN MODEL statements, which are executed via Embedded Python. Before running these commands with a custom model, be sure that your instance is configured to use an appropriate version of Python. Any modules that you use when building your model must be installed in the mgr/python directory within your instance.

Model Creation

Create a Python file and implement a class called IRISModel. This class has a number of properties you can use to configure the behavior of the model, as described below. See Example Models for fully coded Python examples.
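
For orientation, the following is a minimal sketch of such a file. The class and property names follow the examples in Example Models; the estimator choice and the model name are hypothetical placeholders, not a reference implementation.

 from sklearn.linear_model import LogisticRegression

 class IRISModel:
     def __init__(self, **kwargs):
         # Required: the estimator instance AutoML will train, and the name
         # used to refer to the model. The estimator here is a placeholder.
         self.model = LogisticRegression(**kwargs)
         self.name = "my_custom_model"
         # Optional: a descriptive type for the model
         self.model_type = "Logistic Regression Model"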

Required Properties

  • model: The model itself, an instance of an sklearn-compatible estimator class. Each example in Example Models assigns this property in the IRISModel constructor.

  • name: The name of the model. This is the name you use to refer to the model once it has been defined.

Optional Properties
  • time_complexity_fn: The name of a function, written in Python, that accepts one parameter, n (the population size), and returns the time complexity of your model in terms of n. When AutoML compares models against each other to select the one that is optimized for both time and accuracy, it uses this function to determine the time complexity component of the comparison. If omitted, the default returns n * log(n). (See the sketch after this list.)

    To calculate this for your model, take stock of the operations your model performs on each element of the population. For example, if it performs a uniform number of operations (not determined by the size of the population) on every element, the time complexity is n, because each element of the population is processed once. However, if it performs n operations for every element of the population, the time complexity is n², because each of the n elements is accessed n times.

  • fit_kwargs: Any additional arguments to model.fit(), not including the X and y arguments required by sklearn (X typically represents the training input samples and y the target values). These arguments should be passed as a dictionary with string-valued keys. For example, if the defined model is sklearn.linear_model.LogisticRegression and its fit method has the signature fit(X, y, sample_weight), the fit_kwargs that could be passed are {"sample_weight": <values>}.

  • fit_params: The fit_params as expected by sklearn.model_selection.cross_validate.

  • fix_matrix_type: A method that converts input data from one type to another, if such a conversion is required. If this field is left unspecified, a default method is assigned that returns the unconverted data. (See the sketch after this list.)

  • problem_type: A string, either 'Regression' or 'Classification'.

  • package: The Python package to which this estimator belongs.

  • model_type: The type of model used from within the package.
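
The sketch below illustrates the two callable properties described above, time_complexity_fn and fix_matrix_type. The signatures are inferred from the descriptions in this list, so treat them as assumptions rather than a reference implementation.

 import math
 import numpy as np
 import scipy.sparse as sp

 # Assumed signature: one parameter, n (the population size). This sketch
 # describes a model whose cost grows as n * log(n), matching the stated default.
 def time_complexity_fn(n):
     return n * math.log(n)

 # Assumed signature: takes the input data and returns the (possibly converted)
 # data. This sketch densifies sparse input, mirroring the _ensure_dense
 # helpers in the examples below.
 def fix_matrix_type(X):
     if sp.issparse(X):
         return X.toarray()
     return np.array(X)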

Once you have defined your model in a Python file, you must store this file in the appropriate directory: store classification models in the <installation-directory>/mgr/python/AutoML/Classifiers directory and regression models in the <installation-directory>/mgr/python/AutoML/Regressors directory.

Example Models

The following examples demonstrate full Python files that define custom models. The first defines a simple feedforward neural-network classifier built with TensorFlow:

 import numpy as np
 import scipy.sparse as sp
 import tensorflow as tf
 from sklearn.base import BaseEstimator, ClassifierMixin

 class IRISModel:
     def __init__(self, **kwargs):
         # Required properties: the estimator instance and the model name
         self.model = TFModelFromScratch(
             hidden_units=kwargs.get("hidden_units", 32),
             epochs=kwargs.get("epochs", 10),
             activation=kwargs.get("activation", "relu")
         )
         self.name = "own_basic_tf_model"
         self.model_type = "TensorFlow Model"
 
 
 class TFModelFromScratch(ClassifierMixin, BaseEstimator):
     def __init__(self, input_dim=None, hidden_units=32, activation="relu", epochs=10, batch_size=32, learning_rate=0.001):
         self.input_dim = input_dim
         self.hidden_units = hidden_units
         self.activation = activation
         self.epochs = epochs
         self.batch_size = batch_size
         self.learning_rate = learning_rate
         self.classes_ = None
         self.model = None
 
         if input_dim is not None:
             self._build_model()
 
     def _build_model(self):
         """Builds a simple feedforward neural network model using TensorFlow."""
         self.model = tf.keras.Sequential([
             tf.keras.layers.Dense(self.hidden_units, activation=self.activation, input_shape=(self.input_dim,)),
             tf.keras.layers.Dense(len(self.classes_) if self.classes_ is not None else 2, activation="softmax")
         ])
         self.model.compile(
             optimizer=tf.keras.optimizers.Adam(learning_rate=self.learning_rate),
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"]
         )
 
     def _ensure_dense(self, X):
         """Converts sparse input to dense if necessary."""
         if sp.issparse(X):  # Check if input is sparse
             return X.toarray()
         return np.array(X)  # Ensure it's a NumPy array
 
     def fit(self, X, y):
         """Trains the neural network on the given data."""
         X, y = self._ensure_dense(X), np.array(y)
         # Store class labels and map them to sequential indices
         self.classes_, y = np.unique(y, return_inverse=True)
  
         if self.model is None or self.model.output_shape[-1] != len(self.classes_):
             self.input_dim = X.shape[1]
             self._build_model() 
 
         self.model.fit(X, y, epochs=self.epochs, batch_size=self.batch_size, verbose=0)
         return self 
 
     def predict_proba(self, X):
         """Predicts class probabilities for given input samples."""
         X = self._ensure_dense(X)
         return self.model.predict(X, verbose=0)
 
     def predict(self, X):
         """Predicts class labels for given input samples."""
         X = self._ensure_dense(X)
         probabilities = self.model.predict(X, verbose=0)
         return self.classes_[np.argmax(probabilities, axis=1)]  # Convert back to original labels 
 
     def get_params(self, deep=True):
         """Returns a dictionary of model parameters."""
         return {
             "input_dim": self.input_dim,
             "hidden_units": self.hidden_units,
             "activation": self.activation,
             "epochs": self.epochs,
             "batch_size": self.batch_size,
             "learning_rate": self.learning_rate
         }
 
     def set_params(self, **params):
         """Sets model parameters and rebuilds the model if necessary."""
         for key, value in params.items():
             setattr(self, key, value)
         if "input_dim" in params:
             self._build_model()
         return self
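
The second example implements multiclass logistic regression from scratch, trained with batch gradient descent:
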
 import numpy as np
 import scipy.sparse
 from sklearn.base import BaseEstimator, ClassifierMixin 
 
 class IRISModel:
     def __init__(self, **kwargs): 
         self.model = LogisticRegressionScratch(
             learning_rate=kwargs.get("learning_rate", 0.01),
             n_iterations=kwargs.get("n_iterations",1000),
             fit_intercept=kwargs.get("fit_intercept",True)
         )
         self.name = "own_logistic_regression"
         self.model_type = "Logistic Rregression Model"
 
 class LogisticRegressionScratch(ClassifierMixin,BaseEstimator):
     def __init__(self, learning_rate=0.01, n_iterations=1000, fit_intercept=True):
         """
         Initialize Logistic Regression model. 
         :param learning_rate: The step size for updating weights (default: 0.01)
         :param n_iterations: Number of gradient descent iterations (default: 1000)
         :param fit_intercept: Whether to add an intercept term (default: True)
         """
         self.learning_rate = learning_rate
         self.n_iterations = n_iterations
         self.fit_intercept = fit_intercept
         self.weights = None
         self.bias = None
         self.class_labels = None  # To store the original class labels 
 
     def _sigmoid(self, z):
         """Compute the sigmoid function, handling sparse matrices properly."""
         if scipy.sparse.issparse(z):
             # Use sparse matrix element-wise operations; convert to dense here
             return 1 / (1 + np.exp(-z.todense()))
         else:
             return 1 / (1 + np.exp(-z))
 
     def _softmax(self, z):
         """Softmax activation function for multiclass classification."""
         if hasattr(z, "toarray"):  # Check if the input is a sparse matrix
             z = z.toarray()  # Convert sparse matrix to dense array
         exp_z = np.exp(z - np.max(z, axis=1, keepdims=True))  # For numerical stability
         return exp_z / np.sum(exp_z, axis=1, keepdims=True)
 
     def fit(self, X, y):
         """
         Train the logistic regression model using gradient descent. 
         :param X: Training features (numpy array of shape (n_samples, n_features))
         :param y: Target labels (numpy array of shape (n_samples,))
         """
 
         if hasattr(X, "toarray"):  # Convert X to dense array if it's sparse
             X = X.toarray() 
 
         # Store the class labels; class_labels keeps the originals for predict()
         self.classes_ = np.unique(y)
         self.class_labels = self.classes_

         num_samples, num_features = X.shape
         num_classes = len(self.classes_)
 
         # Map labels to sequential indices, then one-hot encode for multiclass
         label_map = {label: idx for idx, label in enumerate(self.classes_)}
         y = np.array([label_map[label] for label in y])

         y_one_hot = np.zeros((len(y), num_classes))
         for i in range(len(y)):
             y_one_hot[i, y[i]] = 1
 
         # Initialize weights and bias
         self.weights = np.zeros((num_features, num_classes))
         self.bias = np.zeros((1, num_classes))
 
         # Gradient descent loop
         for i in range(self.n_iterations):
             # Linear model
             linear_model = np.dot(X, self.weights) + self.bias
             y_predicted = self._softmax(linear_model)
 
             # Compute gradients
             dw = (1 / num_samples) * np.dot(X.T, (y_predicted - y_one_hot))
             db = (1 / num_samples) * np.sum(y_predicted - y_one_hot, axis=0, keepdims=True)
 
             # Update weights and bias
             self.weights -= self.learning_rate * dw
             self.bias -= self.learning_rate * db
 
         return self
 
     def predict_proba(self, X):
         """
         Compute probability estimates for each class.
         :param X: Data for prediction (numpy array of shape (n_samples, n_features))
         :return: Class probabilities (numpy array of shape (n_samples, n_classes))
         """
         if hasattr(X, "toarray"):
             X = X.toarray()
 
         # Linear model
         linear_model = np.dot(X, self.weights) + self.bias
         return self._softmax(linear_model)
 
     def predict(self, X):
         """
         :param X: Data for prediction (numpy array of shape (n_samples, n_features))
         :return: Predicted class labels (numpy array of shape (n_samples,))
         """
         if hasattr(X, "toarray"):
             X = X.toarray()
 
         # Linear model
         linear_model = np.dot(X, self.weights) + self.bias
         y_predicted_proba = self._softmax(linear_model)  # consistent with predict_proba
 
         # Get the index of the class with the highest probability for each sample
         y_predicted_class = np.argmax(y_predicted_proba, axis=1)
 
         # Map the integer predictions back to the original labels
         y_predicted_labels = self.class_labels[y_predicted_class]
 
         return y_predicted_labels
 
     def get_params(self, deep=True):
         """Get parameters of the logistic regression model."""
         return {'learning_rate': self.learning_rate, 'n_iterations': self.n_iterations,
                 'fit_intercept': self.fit_intercept}
 
     def set_params(self, **params):
         """Set parameters for the logistic regression model."""
         for param, value in params.items():
             setattr(self, param, value)
         return self
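
The third example wraps an existing sklearn estimator, KNeighborsClassifier:
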
 import numpy as np
 from sklearn.neighbors import KNeighborsClassifier
 from sklearn.base import BaseEstimator, ClassifierMixin 
 
 class IRISModel:
     def __init__(self, **kwargs):
         n_neighbors = kwargs.get('n_neighbors', 5)
         leaf_size = kwargs.get('leaf_size', 30)
         state_variables = {'n_neighbors': n_neighbors, 'weights': 'uniform',
                            'algorithm': 'auto', 'leaf_size': leaf_size,
                            'p': 2, 'metric': 'minkowski', 'n_jobs': -1}

         self.model = KNNWrapper(**state_variables)
         self.name = "wrapped_KNN_model"
         self.model_type = "KNN Model"

 class KNNWrapper(ClassifierMixin, BaseEstimator):
     def __init__(self, **kwargs):
         # Initialize the underlying KNeighborsClassifier
         self.knn = KNeighborsClassifier(**kwargs)
 
     def fit(self, X, y):
         """
         Fit the KNN model to the training data.

         :param X: ndarray or DataFrame of shape (n_samples, n_features)
         :param y: ndarray or Series of shape (n_samples,)
         :return: the fitted model instance
         """
         self.classes_ = np.unique(y)
         self.knn.fit(X, y)
         return self
 
     def predict(self, X):
         """
         Predict class labels for samples in X.
 
         :param X: ndarray or DataFrame of shape (n_samples, n_features)
         :return: predictions: ndarray of shape (n_samples,)
         """
         return self.knn.predict(X)
 
     def predict_proba(self, X):
         """
         Predict class probabilities for samples in X.
 
         :param X: ndarray or DataFrame of shape (n_samples, n_features)
         :return: probabilities: ndarray of shape (n_samples, n_classes)
         """
         return self.knn.predict_proba(X)
 
     def get_params(self, deep=True):
         """
         Get the parameters of the model.

         :param deep: If True, will return the parameters of the model and its underlying estimator.
         :return: dictionary of model parameters
         """
         return self.knn.get_params(deep)
 
     def set_params(self, **params):
         """
         Set the parameters of the model.

         :param params: dictionary of parameters to set
         :return: the model instance with updated parameters
         """
         self.knn.set_params(**params)
         return self
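
Before placing a file in the AutoML directories, you can smoke-test the wrapped estimator locally. The following sketch exercises the KNN wrapper above with sklearn's bundled iris dataset; the test harness is illustrative and is not part of the AutoML contract.

 from sklearn.datasets import load_iris
 from sklearn.model_selection import cross_validate

 # Fit and cross-validate the wrapped estimator outside of AutoML.
 X, y = load_iris(return_X_y=True)
 iris_model = IRISModel(n_neighbors=3)
 scores = cross_validate(iris_model.model, X, y, cv=5)
 print(scores["test_score"])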