Curating Models

To standardize and compare deep learning models at scale against benchmark data across a range of evaluation schemes, the models' inputs and outputs, hyperparameters, and evaluation metrics must follow a common framework. We developed the IMPROVE library to meet this need: it provides stable interfaces that standardize an otherwise heterogeneous collection of deep learning software. We refer to the process of wrapping deep learning model code with the IMPROVE library’s functionality as curating the model.
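
As a rough illustration of what curation means in practice, the sketch below shows the general shape of a curated training script: hyperparameters come in through a standard config, the model-specific code sits in the middle, and metrics go out in a standard format. All file names, parameter keys, and helper functions here are hypothetical placeholders for illustration only; they are not the IMPROVE library’s actual API, which is described in the guides linked below.

```python
"""Hypothetical sketch of a curated training script (not the IMPROVE API).

The shape is what matters: standardized hyperparameter input, model-specific
code in the middle, standardized metric output for cross-model comparison.
"""

import json
from pathlib import Path


def load_params(path: Path) -> dict:
    # Standardized hyperparameter input: a curated model reads its settings
    # from a common config format instead of hard-coding them.
    return json.loads(path.read_text())


def train(params: dict) -> dict:
    # Model-specific code goes here; it sees only the standard params dict.
    # A trivial stand-in "model" replaces real deep learning code.
    lr, epochs = params["learning_rate"], params["epochs"]
    w = 0.0
    for _ in range(epochs):               # minimize (w - 3)^2 by gradient descent
        w -= lr * 2 * (w - 3)
    return {"val_loss": (w - 3) ** 2}     # standard metric keys enable comparison


def save_scores(scores: dict, out_dir: Path) -> None:
    # Standardized metric output: every curated model writes the same layout.
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "scores.json").write_text(json.dumps(scores, indent=2))


if __name__ == "__main__":
    params = load_params(Path("model_params.json"))
    scores = train(params)
    save_scores(scores, Path(params.get("output_dir", "out")))
```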

We welcome members of the community to use the IMPROVE framework and benchmark data to standardize and compare their models.

The following guides explain the process of model curation in detail.