MLcps: machine learning cumulative performance score for classification problems.

BORIS DOI
10.48350/190326
Date of Publication
December 28, 2022
Publication Type
Article
Division/Institute

Department for BioMedical Research

Universitätsklinik für Urologie

Author
Akshay, Akshay
Department for BioMedical Research, Forschungsgruppe Urologie
Abedi, Masoud
Shekarchizadeh, Navid
Burkhard, Fiona Christine
Universitätsklinik für Urologie
Department for BioMedical Research (DBMR)
Katoch, Mitali
Bigger-Allen, Alex
Adam, Rosalyn M
Monastyrskaya-Stäuber, Katia
Universitätsklinik für Urologie
Department for BioMedical Research, Forschungsgruppe Urologie
Department for BioMedical Research (DBMR)
Hashemi Gheinani, Ali
Universitätsklinik für Urologie
Department for BioMedical Research, Forschungsgruppe Urologie
Subject(s)

600 - Technology::610 - Medicine & health

600 - Technology::630 - Agriculture

Series
GigaScience
ISSN or ISBN (if monograph)
2047-217X
Publisher
Oxford University Press
Language
English
Publisher DOI
10.1093/gigascience/giad108
PubMed ID
38091508
Uncontrolled Keywords

Python package classi...

Description
BACKGROUND

Assessing the performance of machine learning (ML) models requires careful consideration of the evaluation metrics used. It is often necessary to utilize multiple metrics to gain a comprehensive understanding of a trained model's performance, as each metric focuses on a specific aspect. However, comparing the scores of these individual metrics for each model to determine the best-performing model can be time-consuming and susceptible to subjective user preferences, potentially introducing bias.

RESULTS

We propose the Machine Learning Cumulative Performance Score (MLcps), a novel evaluation metric for classification problems. MLcps integrates several precomputed evaluation metrics into a unified score, enabling a comprehensive assessment of the trained model's strengths and weaknesses. We tested MLcps on 4 publicly available datasets, and the results demonstrate that MLcps provides a holistic evaluation of the model's robustness, ensuring a thorough understanding of its overall performance.

CONCLUSIONS

By utilizing MLcps, researchers and practitioners no longer need to individually examine and compare multiple metrics to identify the best-performing models. Instead, they can rely on a single MLcps value to assess the overall performance of their ML models. This streamlined evaluation process saves valuable time and effort, enhancing the efficiency of model evaluation. MLcps is available as a Python package at https://pypi.org/project/MLcps/.
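
The idea of collapsing several precomputed metrics into one cumulative value can be illustrated with a minimal sketch. The Python snippet below is an illustration only, not the MLcps package API or the published MLcps formula: the function name, the chosen metrics, and the simple averaging on a common [0, 1] scale are all assumptions made for the example.

# Illustrative sketch: combine several precomputed classification metrics
# into a single cumulative score by averaging them on a [0, 1] scale.
# This is NOT the MLcps package API and NOT the exact MLcps formula.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef, roc_auc_score

def cumulative_performance_score(y_true, y_pred, y_prob):
    """Return a single score plus the individual metrics it summarizes."""
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        # MCC lies in [-1, 1]; rescale to [0, 1] before averaging
        "mcc": (matthews_corrcoef(y_true, y_pred) + 1) / 2,
        "roc_auc": roc_auc_score(y_true, y_prob),
    }
    return float(np.mean(list(metrics.values()))), metrics

# Usage (hypothetical models and data): rank candidates by the single value
# score_a, _ = cumulative_performance_score(y_test, model_a.predict(X_test),
#                                           model_a.predict_proba(X_test)[:, 1])

With such a summary in hand, models can be ranked by one number instead of comparing each metric separately, which is the workflow the conclusions describe.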
Handle
https://boris-portal.unibe.ch/handle/20.500.12422/172409
File(s)
File | File Type | Format | Size | License | Publisher/Copyright statement | Content
giad108.pdf | text | Adobe PDF | 1.5 MB | | published | Open