🔥 News

  • [May 18, 2022] Added support for loading qrels from ir-datasets in v.0.1.13.
    Usage example: Qrels.from_ir_datasets("msmarco-document/dev") for MS MARCO document retrieval dev set.
  • [May 4, 2022] Added Paired Student's t-Test in v.0.1.12.

โšก๏ธ Introduction

ranx is a library of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization. It offers a user-friendly interface to evaluate and compare Information Retrieval and Recommender Systems. Moreover, ranx allows you to perform statistical tests and export LaTeX tables for your scientific publications. ranx was featured in ECIR 2022, the 44th European Conference on Information Retrieval.

If you use ranx to evaluate results for your scientific publication, please consider citing it.

For a quick overview, follow the Usage section.

For an in-depth overview, follow the Examples section.

✨ Features

Metrics

The metrics have been tested against TREC Eval for correctness.

Statistical tests

Please, refer to Smucker et al. for additional information on statistical tests for Information Retrieval.
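For instance, the statistical test used when comparing runs can be selected explicitly (see the Compare example in the Usage section below). A minimal, self-contained sketch on toy data, assuming the stat_test parameter described in the documentation ("student" for the paired Student's t-test, "fisher" for Fisher's randomization test) and illustrative run names:

from ranx import Qrels, Run, compare

# Toy data: three queries, one relevant document each.
qrels = Qrels({"q_1": {"d_1": 1}, "q_2": {"d_2": 1}, "q_3": {"d_3": 1}})

run_a = Run({"q_1": {"d_1": 0.9, "d_4": 0.8},
             "q_2": {"d_2": 0.9, "d_5": 0.8},
             "q_3": {"d_6": 0.9, "d_3": 0.8}}, name="model_a")

run_b = Run({"q_1": {"d_4": 0.9, "d_1": 0.8},
             "q_2": {"d_5": 0.9, "d_6": 0.8, "d_2": 0.7},
             "q_3": {"d_3": 0.9, "d_6": 0.8}}, name="model_b")

# stat_test is an assumed parameter name: it selects the significance test.
report = compare(
    qrels=qrels,
    runs=[run_a, run_b],
    metrics=["ndcg@10"],
    stat_test="student",
    max_p=0.01,
)
print(report)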

Off-the-shelf qrels

You can load qrels from ir-datasets as simply as:

qrels = Qrels.from_ir_datasets("msmarco-document/dev")
A full list of the available qrels is provided here.

🔌 Installation

pip install ranx

💡 Usage

Create Qrels and Run

from ranx import Qrels, Run

qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 },
               "q_2": { "d_11": 6, "d_22": 1 } }

run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_32": 0.5, "d_35": 0.4  },
             "q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_22": 0.5, "d_35": 0.4  } }

qrels = Qrels(qrels_dict)
run = Run(run_dict)
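
Qrels and Run objects can also be created from local files. A minimal sketch, assuming TREC-style files at the hypothetical paths qrels.trec and run.trec and the from_file constructors described in the documentation:

from ranx import Qrels, Run

# Hypothetical paths; kind="trec" is assumed to select the standard
# TREC qrels/run file formats.
qrels = Qrels.from_file("qrels.trec", kind="trec")
run = Run.from_file("run.trec", kind="trec")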

Evaluate

from ranx import evaluate

# Compute score for a single metric
evaluate(qrels, run, "ndcg@5")
>>> 0.7861

# Compute scores for multiple metrics at once
evaluate(qrels, run, ["map@5", "mrr"])
>>> {"map@5": 0.6416, "mrr": 0.75}

Compare

from ranx import compare

# Compare different runs and perform statistical tests
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3, run_4, run_5],
    metrics=["map@100", "mrr@100", "ndcg@10"],
    max_p=0.01  # P-value threshold
)
print(report)

Output:

#    Model    MAP@100    MRR@100    NDCG@10
---  -------  --------   --------   ---------
a    model_1  0.320ᵇ     0.320ᵇ     0.368ᵇᶜ
b    model_2  0.233      0.234      0.239
c    model_3  0.308ᵇ     0.309ᵇ     0.330ᵇ
d    model_4  0.366ᵃᵇᶜ   0.367ᵃᵇᶜ   0.408ᵃᵇᶜ
e    model_5  0.405ᵃᵇᶜᵈ  0.406ᵃᵇᶜᵈ  0.451ᵃᵇᶜᵈ
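
The report can also be exported, e.g. as the LaTeX table mentioned in the introduction. A minimal sketch, assuming the to_latex method described in the documentation:

# Assumed method: renders the comparison report as a LaTeX table.
print(report.to_latex())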

📖 Examples

The following examples are available as Colab notebooks:

  • Overview
  • Qrels and Run
  • Evaluation
  • Comparison and Report

📚 Documentation

Browse the documentation for more details and examples.

🎓 Citation

If you use ranx to evaluate results for your scientific publication, please consider citing it:

@inproceedings{bassani2022ranx,
  author    = {Elias Bassani},
  title     = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
  booktitle = {{ECIR} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13186},
  pages     = {259--264},
  publisher = {Springer},
  year      = {2022}
}

๐ŸŽ Feature Requests

Would you like to see other features implemented? Please, open a feature request.

🤘 Want to contribute?

Would you like to contribute? Please, drop me an e-mail.

📄 License

ranx is open-source software licensed under the MIT license.