Workflow automation

Unlike scikit-learn, which only provides a loose library of validation components, skpro provides an object-oriented structure that standardizes prediction workflows. The objective is to support efficient model management and fair model assessment in a unified framework: the user should only be concerned with the definition and development of models, while the tedious tasks of result aggregation are left to the framework.

Model-view-controller structure

skpro’s workflow framework is built from three fundamental components: model, controller, and view. The model object contains the actual prediction algorithm defined by the user (e.g. a probabilistic estimator object). It unifies and simplifies the management of learning algorithms: it allows the user to store information and configuration for the algorithm it contains, e.g. a name or a range of hyperparameters that should be optimized. In the future, it might also support saving trained models for later use. Secondly, a controller represents an action or task that can be performed with a model to obtain certain information. A scoring controller, for instance, might take a dataset and a loss function and return the loss score of the model on this dataset. The controller can save the obtained data for later use. Finally, a view object takes what a controller returns and presents it to the user. A scoring view, for example, could take a raw score value and format it for display. The separation of the controller and view levels is advantageous since controller tasks, such as training a model to obtain a score, can be computationally expensive; a reformatting of the output should therefore not require a re-evaluation of the task. Moreover, even if a view displays only part of the information, it is still useful to store the full information the controller returned.
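
The decoupling can be pictured with a minimal sketch (hypothetical class and method names chosen for illustration only, not skpro’s actual API): the controller performs the expensive task once and caches its raw result, while views merely re-format that result.

# Schematic sketch of the model/controller/view separation
# (hypothetical names, not skpro's actual classes)

class ScoringController:
    """Performs an expensive task for a model and caches the raw result."""

    def __init__(self, data, loss_func):
        self.data = data
        self.loss_func = loss_func
        self._cache = {}

    def run(self, model):
        if id(model) not in self._cache:
            # expensive step: fit the model and evaluate the loss
            self._cache[id(model)] = {"score": self.loss_func(model, self.data)}
        return self._cache[id(model)]


class ScoringView:
    """Turns the raw controller output into a display string; cheap to re-run."""

    def render(self, raw):
        return "loss = %.3f" % raw["score"]

Re-rendering a result in a different format then only requires a different view object; the expensive controller task does not have to be re-run.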

skpro’s workflow framework currently implements one major controller, the cross-validation controller (CV), and multiple views to display scores and model information. The CV controller encapsulates the common cross-validation procedure for assessing models out-of-sample. It takes a dataset and a loss function and returns the per-fold losses as well as the overall loss with a confidence interval for a given model. If the model specifies a range of hyperparameters for tuning, the controller automatically optimizes the hyperparameters in a nested cross-validation procedure and additionally returns the best hyperparameters found.
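
For orientation, the nested procedure corresponds conceptually to the following plain scikit-learn construction (a rough sketch only, not skpro’s internal implementation; dataset, estimator, and parameter grid are chosen arbitrarily for illustration):

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = load_diabetes(return_X_y=True)

# inner loop: tune hyperparameters on the training folds only
tuned = GridSearchCV(
    RandomForestRegressor(),
    param_grid={"n_estimators": [10, 50, 100]},
    cv=KFold(3),
)

# outer loop: assess the tuned model out-of-sample
scores = cross_val_score(tuned, X, y, cv=KFold(3))
print(scores.mean(), scores.std())

In skpro, the CV controller performs this inner tuning loop automatically and reports the best hyperparameters found alongside the outer-loop losses.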

The model-view-controller structure (MVC) thus encapsulates a fundamental procedure in machine learning: perform a certain task with a certain model and display the results. Thanks to the unified API, the MVC building blocks can easily be combined for result aggregation and comparison.

Result aggregation and comparison

At its current stage, the workflow framework supports a simple form of result aggregation and comparison, namely results tables.

Tables

A table can easily be defined by providing controller-view pairs as columns and models as rows. The framework then evaluates the table cells by running the controller task for the respective models and renders the results table using the specified views. Note that the evaluation of the controller tasks and the process of rendering the table are decoupled. It is therefore possible to access the “raw” table with all the information each controller returned, and then render the table with only the reduced information that is actually needed. Furthermore, the decoupling allows for manipulation or enhancement of the raw data before rendering; the raw table data can, for example, be sorted by the model performances.

Notably, the table supports so-called rank-sorting. Rank-sorting is, for instance, useful if models are compared on different datasets and ought to be sorted by their overall performance. In this case, it is unsuitable to simply average the performance scores across datasets, since the value ranges might differ considerably between datasets. Instead, it is useful to rank the performances on each dataset and then average each model’s ranks across the datasets to obtain an overall rank.
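
The computation behind rank-sorting can be sketched in a few lines (a generic illustration of rank averaging using the loss values from the example table that follows, not skpro’s implementation):

import numpy as np
from scipy.stats import rankdata

# rows: models, columns: datasets; entries are losses (lower is better)
losses = np.array([
    [12.0, 3.0],   # Example model 1
    [5.0, 9.0],    # Example model 2
    [28.0, 29.0],  # Example model 3
])

ranks = rankdata(losses, axis=0)   # rank models within each dataset column
avg_rank = ranks.mean(axis=1)      # average each model's ranks across datasets
order = np.argsort(avg_rank)       # best (lowest) average rank first

With these values, models 1 and 2 both obtain an average rank of 1.5 while model 3 obtains 3, which determines the ordering of the rows.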

The table below shows an example of such a rank-sorted results table as typically generated by the workflow framework: models are listed in the rows of the table, while the columns present the cross-validated performance for a certain dataset and loss function. The numbers in parentheses denote the model’s performance rank in the respective column. The models are sorted by their average rank, displaying the models with the best performances (that is, the lowest losses) at the top of the table.

# | Model           | CV(Dataset A, loss function) | CV(Dataset B, loss function)
1 | Example model 1 | (2) 12±1                     | (1) 3±2
2 | Example model 2 | (1) 5±0.5                    | (2) 9±1
3 | Example model 3 | (3) 28±3                     | (3) 29±4

Code example

The following example demonstrates a common validation workflow that compares parametric estimation models:

from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

from skpro.workflow.table import Table, IdModifier, SortModifier, RankModifier
from skpro.workflow.cross_validation import CrossValidationController, CrossValidationView
from skpro.metrics import log_loss, linearized_log_loss
from skpro.workflow import Model
from skpro.workflow.utils import InfoView, InfoController
from skpro.workflow.manager import DataManager
from skpro.parametric import ParametricEstimator
from skpro.parametric.estimators import Constant

# Load the dataset
data = DataManager('boston')

tbl = Table()

# Adding controllers displayed as columns
tbl.add(InfoController(), InfoView())

for loss_func in [linearized_log_loss, log_loss]:
    tbl.add(
        controller=CrossValidationController(data, loss_func=loss_func),
        view=CrossValidationView()
    )

# Rank results
tbl.modify(RankModifier())
# Sort by score in the last column, i.e. log_loss
tbl.modify(SortModifier(key=lambda x: x[-1]['data']['score']))
# Use ID modifier to display model numbers
tbl.modify(IdModifier())

# Compose the models displayed as rows
models = []
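# Constant is a dummy estimator that always predicts a fixed value;
# string arguments like 'mean(y)' or 'std(y)' denote the respective
# statistic of the training labels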
for point_estimator in [RandomForestRegressor(), LinearRegression(), Constant('mean(y)')]:
    for std_estimator in [Constant('std(y)'), Constant(42)]:
        model = ParametricEstimator(point=point_estimator, std=std_estimator)
        models.append(Model(model))

tbl.print(models)
# | # | Info                                                       | CrossValidation(data=boston, loss_func=linearized_log_loss, cv=KFold(3), tune=False) | CrossValidation(data=boston, loss_func=log_loss, cv=KFold(3), tune=False)
1 | 1 | Model(norm(point=RandomForestRegressor(), std=C(std(y))))  | (1) 3.32+/-0.04     | (1) 3.30+/-0.04
2 | 5 | Model(norm(point=C(mean(y)), std=C(std(y))))                | (2) 3.88+/-0.09     | (2) 3.88+/-0.09
3 | 3 | Model(norm(point=LinearRegression(), std=C(std(y))))        | (3) 3.94+/-0.13     | (3) 4.25+/-0.29
4 | 2 | Model(norm(point=RandomForestRegressor(), std=C(42)))       | (4) 4.6658+/-0.0016 | (4) 4.6650+/-0.0015
5 | 6 | Model(norm(point=C(mean(y)), std=C(42)))                    | (5) 4.688+/-0.004   | (5) 4.688+/-0.004
6 | 4 | Model(norm(point=LinearRegression(), std=C(42)))            | (6) 4.704+/-0.012   | (6) 4.704+/-0.012