Curating Models
To facilitate standardized, large-scale evaluation and comparison of prediction models across benchmark datasets and evaluation schemes, a consistent framework for model input/output, hyperparameter settings, and evaluation metrics is essential. The IMPROVE library addresses this need by offering well-defined interfaces that generalize across a variety of deep learning and machine learning frameworks. The process of integrating a prediction model’s code with the IMPROVE library’s functionalities is referred to as “curating” the model.
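To illustrate what curation aims to achieve, the sketch below shows a training script organized around a standardized interface: hyperparameters come from a single configuration file, outputs follow a fixed layout, and evaluation scores are written in a machine-readable format so different models can be compared uniformly. All names here (`load_params`, `run_train`, `scores.json`, and the directory layout) are hypothetical placeholders for illustration only, not the actual IMPROVE API; please refer to the curation guides for the library’s real interfaces.

```python
"""Illustrative sketch only -- the function, file, and directory names below
are hypothetical placeholders, NOT the actual IMPROVE API."""

import json
from pathlib import Path

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score


def load_params(config_path: Path) -> dict:
    """Read hyperparameters from a single config file (hypothetical format)."""
    params = {"alpha": 1.0, "seed": 0}
    if config_path.exists():
        params.update(json.loads(config_path.read_text()))
    return params


def run_train(params: dict, input_dir: Path, output_dir: Path) -> dict:
    """Train on benchmark-style inputs and write predictions and scores
    to a fixed output layout, so results are directly comparable."""
    rng = np.random.default_rng(params["seed"])
    # Stand-in for loading preprocessed benchmark features/targets from input_dir.
    x = rng.normal(size=(200, 16))
    y = x @ rng.normal(size=16) + rng.normal(scale=0.1, size=200)

    model = Ridge(alpha=params["alpha"]).fit(x[:150], y[:150])
    preds = model.predict(x[150:])

    scores = {
        "mse": float(mean_squared_error(y[150:], preds)),
        "r2": float(r2_score(y[150:], preds)),
    }
    # Write predictions and scores in a standardized, machine-readable form.
    output_dir.mkdir(parents=True, exist_ok=True)
    np.savetxt(output_dir / "predictions.csv", preds, delimiter=",")
    (output_dir / "scores.json").write_text(json.dumps(scores, indent=2))
    return scores


if __name__ == "__main__":
    params = load_params(Path("model_params.json"))
    print(run_train(params, Path("input"), Path("output")))
```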
We invite the community to leverage the IMPROVE framework and benchmark data to standardize, evaluate, and compare their models.
The following guides provide detailed explanations of the model curation process.
If you would like to contribute a curated model to the IMPROVE project, please contact us.