readme
yuchenlin committed Jun 28, 2024
1 parent 58bbf16 commit 0b422b7
Showing 2 changed files with 42 additions and 12 deletions.
50 changes: 40 additions & 10 deletions README.md
@@ -9,11 +9,21 @@


### Evaluation Framework
![img1](docs/wb_eval.png)
<details>
<summary>Click to expand</summary>

![img1](docs/wb_eval.png)

</details>

### Dataset Overview
![img1](docs/wb_table.png)
![img1](docs/wb_stat.png)
<details>
<summary>Click to expand</summary>

![img1](docs/wb_table.png)
![img1](docs/wb_stat.png)

</details>



@@ -53,7 +63,19 @@ export HF_HOME=/net/nfs/climate/tmp_cache/
-->


### Shortcut to run a model

```bash
bash scripts/_common_vllm.sh m-a-p/neo_7b_instruct_v0.1 neo_7b_instruct_v0.1 4
# 1st arg is hf_name; 2nd is the pretty name; 3rd is the number of shards (gpus)
```
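
The argument contract documented in the comment above can be sketched as follows. This is a minimal illustration only: the real body of `scripts/_common_vllm.sh` is not shown in this diff, and `describe_run` is a hypothetical stand-in that just names the three positional arguments.

```shell
# Hypothetical sketch of the _common_vllm.sh argument contract; the real
# script body is not shown in this diff, so this only labels the arguments.
describe_run() {
  local hf_name="$1"      # Hugging Face model id, e.g. m-a-p/neo_7b_instruct_v0.1
  local pretty_name="$2"  # display name used for results and the leaderboard
  local num_shards="$3"   # number of GPUs to shard across (tensor parallelism)
  echo "model=${hf_name} name=${pretty_name} shards=${num_shards}"
}

describe_run m-a-p/neo_7b_instruct_v0.1 neo_7b_instruct_v0.1 4
```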

### Longer versions ⬇️

<details>
<summary>
<b> Case 1: Models supported by vLLM</b>
</summary>

You can use the files under `scripts` as a reference for adding a new model to the benchmark. For example, to add `Yi-1.5-9B-Chat` to the benchmark, follow these steps:
1. Create a script named "Yi-1.5-9B-Chat.sh" under the `scripts` folder.
@@ -63,22 +85,30 @@
5. Run your script to make sure it works, e.g., `bash scripts/Yi-1.5-9B-Chat.sh` from the root folder.
6. Create a PR to add your script to the benchmark.


For Steps 3-5, you can also use the shortcut command above, provided your model is supported by vLLM and has a conversation template in its Hugging Face tokenizer config.

</details>



<details>
<summary> <b>Case 2: Models that are only supported by native HuggingFace API</b> </summary>


Some new models may not yet be supported by vLLM. You can follow the same steps as above, but use `--engine hf` in the script instead, and test your script. Note that some models may need more specific configurations, and you will need to read the code and modify it accordingly. In these cases, add name-checking conditions to ensure that the model-specific changes are applied only to that specific model.
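
The name-checking idea can be sketched like this. The function and the flag below are assumptions for illustration, not the benchmark's real code: the point is that an override fires only for a matching model name and leaves every other model untouched.

```shell
# Hypothetical sketch of a name-checking condition: emit a model-specific
# override only when the model name matches, otherwise emit nothing.
model_extra_args() {
  local model="$1"
  case "$model" in
    *neo_7b*) echo "--repetition_penalty 1.1" ;;  # assumed model-specific tweak
    *)        echo "" ;;
  esac
}
```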


</details>


<details>
<summary> <b>Case 3: Private API-based Models</b> </summary>


You need to modify the code to add these APIs, for example, Gemini, Cohere, Claude, and Reka. You can refer to the `--engine openai` logic in the existing scripts to add your own API-based models. Please make sure that you do not expose your API keys in the code. If your model is hosted on the Together.AI platform, you can use the `--engine together` option to run it; see `scripts/[email protected]` for an example.
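
One way to keep keys out of the code is to read them from the environment and fail fast when they are missing. `require_key` below is a hypothetical helper showing that pattern, not something in the repository:

```shell
# Hypothetical helper: look up a key by environment-variable name so keys
# are never hard-coded or committed; fail with a clear message if unset.
require_key() {
  local var="$1"
  if [ -z "${!var:-}" ]; then
    echo "error: set ${var} before running API-based models" >&2
    return 1
  fi
}
```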


</details>


4 changes: 2 additions & 2 deletions leaderboard/show_eval.sh
@@ -2,8 +2,8 @@ MODE=$1



# margin=3;tie_margin=2;K=4;dynamic=True
# python -m leaderboard.wb_elo --K $K --margin $margin --tie_margin $tie_margin --num_rounds 100 --dynamic $dynamic
margin=3;tie_margin=2;K=4;dynamic=True
python -m leaderboard.wb_elo --K $K --margin $margin --tie_margin $tie_margin --num_rounds 100 --dynamic $dynamic

# if MODE is not score_only
if [ "$MODE" != "score_only" ];
