
Commit 58099c1: Update index.html
anchen1011 committed Apr 17, 2023
1 parent 95e5c1e
Showing 1 changed file with 2 additions and 0 deletions.
docs/index.html

@@ -72,6 +72,8 @@
 <table border="0">
 </tbody>
 <tr><td class="caption"> Introducing <a href="https://github.com/cambridgeltl/visual-med-alpaca"><strong>Visual Med-Alpaca</strong></a>, an open-source, parameter-efficient biomedical foundation model that can be integrated with medical &quot;visual experts&quot; for multimodal biomedical tasks. Built upon the <a href="https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/">LLaMa-7B</a> architecture (<a href="https://arxiv.org/abs/2302.13971">Touvron et al., 2023</a>), this model is trained using an instruction set curated collaboratively by GPT-3.5-Turbo and human experts. Leveraging a few hours of instruction-tuning and the inclusion of plug-and-play visual modules, Visual Med-Alpaca can perform a diverse range of tasks, from interpreting radiological images to addressing complex clinical inquiries. The model can be replicated with ease, necessitating only a single consumer GPU. </td></tr>
+<br></br>
+<tr><td class="caption"> Refer to our <a href="https://github.com/cambridgeltl/visual-med-alpaca"><strong>Official Github Repo</strong></a> for code and data.</td></tr>
 </tbody></table>
 <br>
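The caption added above advertises parameter-efficient instruction-tuning that runs in a few hours on a single consumer GPU. As context for readers of this commit, below is a minimal, hypothetical sketch of what such tuning typically looks like with LoRA adapters via the HuggingFace transformers and peft libraries; the checkpoint name and hyperparameters are placeholders, and the project's actual training scripts in the official repo may differ.

    # Hypothetical LoRA instruction-tuning setup; not the project's actual script.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "huggyllama/llama-7b"  # placeholder LLaMA-7B checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA freezes the 7B base weights and trains only small low-rank adapter
    # matrices, which is what makes a few hours on one consumer GPU plausible.
    lora = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only ~0.1% of parameters are trainable

The adapted model is then fine-tuned on the instruction set with a standard causal-language-modeling loop; only the adapter weights need to be saved and shared.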

