Commit

README
kssteven418 committed Mar 24, 2024
1 parent d5e94d1 commit 0905105
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -64,6 +64,8 @@ python run_llm_compiler.py --model_type vllm --benchmark {benchmark-name} --store {store-path} [--stream]
* `--stream`: (Optional, Recommended) Enables streaming, which improves latency by streaming tasks from the Planner to the Task Fetching Unit and Executor as soon as each task is generated, rather than blocking the Executor until the Planner has produced all of the tasks (see the example invocation after this list).
* `--react`: (Optional) Use ReAct instead of LLMCompiler for baseline evaluation.
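
For example, a streaming run might look like the following; the benchmark name and store path shown here are only illustrative placeholders, so substitute the benchmark and output path you are actually using:
```
# 'hotpotqa' and the store path below are illustrative placeholders
python run_llm_compiler.py --model_type vllm --benchmark hotpotqa --store results/hotpotqa_results.json --stream
```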

You can optionally use your Azure endpoint instead of the OpenAI endpoint with `--model_type azure`. In this case, you need to provide the associated Azure configuration through the following environment variables: `AZURE_ENDPOINT`, `AZURE_OPENAI_API_VERSION`, `AZURE_DEPLOYMENT_NAME`, and `AZURE_OPENAI_API_KEY`.
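
A minimal sketch of how these variables might be set in a shell session; every value below is a placeholder for your own Azure resource details:
```
# Placeholder values; substitute your own Azure resource, deployment, API version, and key
export AZURE_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_VERSION="<your-api-version>"
export AZURE_DEPLOYMENT_NAME="<your-deployment-name>"
export AZURE_OPENAI_API_KEY="<your-api-key>"
```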

After the run is over, you can get a summary of the results by running the following command:
```
python evaluate_results.py --file {store-path}
```
