🌐 TransformerRanker ⚡️
Find the best language model for your downstream task.
Load a dataset, pick models from the 🤗 Hub, and rank them by transferability.
Developed at Humboldt University of Berlin.
📚 Load Data
Pick a dataset from the Hugging Face Hub (e.g. trec). This defines your downstream task.
⚡️ Speed mode on: tweak the downsampling ratio in Dataset Setup for quicker runs. To use the full dataset, run the framework directly.
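Downsampling simply keeps a random fraction of the dataset so ranking runs finish faster. A minimal sketch of the idea (the `downsample` helper and the toy dataset below are illustrative, not the framework's actual API):

```python
import random

def downsample(examples, ratio, seed=42):
    """Randomly keep a fraction of the examples for quicker ranking runs."""
    rng = random.Random(seed)
    k = max(1, int(len(examples) * ratio))
    return rng.sample(examples, k)

# Toy stand-in for a loaded dataset (e.g. trec question texts).
dataset = [f"question {i}" for i in range(1000)]
subset = downsample(dataset, ratio=0.2)
print(len(subset))  # 200
```

A fixed seed keeps runs reproducible; rankings on the subset are an estimate, so rerun on the full data before a final decision.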
🧠 Select Language Models
Add two or more pretrained models to compare. Stick to smaller models here since the demo runs on CPU.
🏆 Rank Models
Rank models by transferability to your task. Want more control? Tweak the transferability metric and layer aggregation in Settings.
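Layer aggregation decides how a model's per-layer hidden states are combined into one embedding before scoring. A small sketch of two common strategies, averaging across layers versus keeping the last layer (the `aggregate_layers` helper and its `mode` values are illustrative, not the framework's exact settings):

```python
import statistics

def aggregate_layers(layer_embeddings, mode="mean"):
    """Combine per-layer embeddings of one input into a single vector.
    'mean' averages each dimension across layers; 'last' keeps the final layer."""
    if mode == "last":
        return layer_embeddings[-1]
    dims = len(layer_embeddings[0])
    return [statistics.fmean(layer[d] for layer in layer_embeddings)
            for d in range(dims)]

# Three toy layers of a 2-dimensional embedding.
layers = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
print(aggregate_layers(layers))          # [2.0, 3.0]
print(aggregate_layers(layers, "last"))  # [4.0, 5.0]
```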
Leaderboard: higher score → better downstream performance.
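The leaderboard ordering itself is just a sort on the transferability scores. A hedged sketch with toy scores standing in for values a real metric (e.g. an estimability score such as LogME or H-score) would produce; the model names are examples, not recommendations:

```python
def rank_models(scores):
    """Sort models by transferability score, highest (best expected
    downstream performance) first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy scores; real values come from a transferability metric
# computed on frozen embeddings of your dataset.
scores = {
    "prajjwal1/bert-tiny": 0.61,
    "google/electra-small-discriminator": 0.74,
    "distilbert-base-uncased": 0.70,
}
for name, score in rank_models(scores):
    print(f"{score:.2f}  {name}")
```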
Note: Quick CPU-only demo.
Built by @lukasgarbas & @plonerma
Questions? Open a GitHub issue.