
relocate paragraph
ruanchaves committed Sep 11, 2023
1 parent 3d6014a commit fce937d
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -12,8 +12,6 @@ The **Napolab** is your go-to collection of Portuguese datasets with the following
* 👩‍🔧 **Human**: Expert human annotations only. No automatic or unreliable annotations.
* 🎓 **General**: No domain-specific knowledge or advanced preparation is needed to solve dataset tasks.

Napolab is structured similarly to benchmarks like GLUE and [PLUE](https://github.com/ju-resplande/PLUE). All datasets come with either two or three fields: `'sentence1', 'sentence2', 'label'` or just `'sentence1', 'label'`. To evaluate LLMs using Napolab, you simply need to design prompts to get label predictions from the model.

Napolab currently includes the following datasets:

| | | |
@@ -48,6 +46,8 @@ benchmark = napolab["datasets"]
translated_benchmark = napolab["translations"]
```

Napolab is structured similarly to benchmarks like GLUE and [PLUE](https://github.com/ju-resplande/PLUE). All datasets come with either two or three fields: `'sentence1', 'sentence2', 'label'` or just `'sentence1', 'label'`. To evaluate LLMs using Napolab, you simply need to design prompts to get label predictions from the model.

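As an illustration of that prompt-based evaluation, here is a minimal sketch (not part of the repository's own code): `build_prompt` and `query_llm` are hypothetical helpers, and the dataset/split names in the usage comment are placeholders; only the `'sentence1'`, `'sentence2'`, and `'label'` field names come from the description above.

```python
def build_prompt(example: dict) -> str:
    """Turn a Napolab example into a plain-text prompt asking for the label."""
    if "sentence2" in example:
        return (
            f"Sentence 1: {example['sentence1']}\n"
            f"Sentence 2: {example['sentence2']}\n"
            "Reply with the label only:"
        )
    return f"Sentence: {example['sentence1']}\nReply with the label only:"


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whichever LLM is being evaluated."""
    raise NotImplementedError  # replace with your model client of choice


# Hypothetical usage: iterate over one dataset's test split and compare the
# model's answers against the gold 'label' column.
# predictions = [query_llm(build_prompt(ex)) for ex in benchmark[dataset_name]["test"]]
```
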
## 🤖 Models

We've made several models, fine-tuned on this benchmark, available on Hugging Face Hub:
