Tchat is a fast, open-source online multi-turn conversational agent chatting service, giving access to a broad range of large language models (LLMs) from anywhere on the web. It includes a speech-to-text module that allows voice-based interactions.
It's built to run any LLM and ASR system, but serves as a demonstration tool for the BioMistral 7B suite of models.
- 📰 Paper: BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains (pre-print)
- 📊 Multilingual medical benchmark: BioMistral/BioInstructQA
- 👩‍💻 GitHub: BioMistral/BioMistral
This project is the result of a collaboration between:
🏛️ LIA - Avignon University (1) | 🏛️ LS2N - Nantes University (3)
🏥 Nantes University Hospital (2) | 🏢 Zenidoc (4)
Authors: Yanis LABRAK (1,4); Adrien BAZOGE (2,3); Emmanuel MORIN (3); Pierre-Antoine GOURAUD (2); Mickaël ROUVIER (1); Richard DUFOUR (3)
Caution: We recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
- Create a conda environment, e.g. `conda create --name test`
- Install the dependencies with
pip install -r requirements.txt
- Prevent errors on macOS with Apple M1 chips:
export PYTORCH_ENABLE_MPS_FALLBACK=1
- Download BioMistral using Ollama:
ollama pull cniongolo/biomistral
- Run the Ollama server in the background:
ollama serve &
- Start the web services:
python3 -m flask --app app --debug run
- Go to the demonstration page:
http://127.0.0.1:5000/
You can also access the web services using the Flask API:
- Query the LLM using HTTP POST:
$.ajax({
  url: '/api/v1/chat',
  data: data_transcript,
  processData: false,
  contentType: false,
  crossDomain: true,
  type: 'POST',
  success: function(data_llm){
    console.log(data_llm);
  }
});
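The exact shape of the request body isn't shown above. Assuming the HTTP endpoint accepts the same `{ messages, id }` structure that the socket channel uses (an assumption, not confirmed by the API), the body could be prepared like this sketch:

```javascript
// Sketch (assumed body shape): build a JSON request body for /api/v1/chat,
// appending the new user turn to the running multi-turn history.
function buildChatBody(history, userText, id) {
  const messages = history.concat([{ role: 'user', content: userText }]);
  return JSON.stringify({ messages: messages, id: id });
}

const body = buildChatBody(
  [{ role: 'assistant', content: 'Hello! How can I help?' }],
  'What is the capital of France?',
  'unique_id_1'
);
console.log(body);
```

Adjust the field names to match the server if they differ.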
- Socket for querying the LLM (emit on the channel `chat`) and listening to the output stream on the channel `chat_response`:
socket.on('chat_response', function(msg) {
  document.getElementById(msg.id).innerText = msg.data;
  chatArea.scrollTop = chatArea.scrollHeight;
});

socket.emit('chat', {
  messages: [{
    'role': 'user',
    'content': "What is the capital of France?",
  }],
  id: "unique_id_1"
});
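The `id` field links each emitted request to its streamed response: the handler writes each `chat_response` event into the element whose id matches. A minimal sketch of that round-trip, with a plain object standing in for the DOM (a hypothetical helper, not part of the Tchat API; it assumes each event carries the full text so far, as the `innerText = msg.data` assignment above suggests):

```javascript
// Placeholders keyed by request id, standing in for DOM elements.
const placeholders = {};

// Create an empty placeholder before emitting a 'chat' event with this id.
function createPlaceholder(id) {
  placeholders[id] = '';
}

// Mirrors the socket.on('chat_response', ...) handler: each event replaces
// the placeholder's content (assumed cumulative text, not a delta).
function onChatResponse(msg) {
  placeholders[msg.id] = msg.data;
}

createPlaceholder('unique_id_1');
onChatResponse({ id: 'unique_id_1', data: 'The capital of France ' });
onChatResponse({ id: 'unique_id_1', data: 'The capital of France is Paris.' });
```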
- Query the Speech-To-Text model using HTTP POST:
$.ajax({
  url: '/api/v1/transcript',
  data: fd,
  processData: false,
  contentType: false,
  crossDomain: true,
  type: 'POST',
  success: function(data_transcript){
    console.log(data_transcript);
  }
});
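The `fd` variable above is a `FormData` object carrying the recorded audio. A sketch of how it could be built (the field name `audio` and the WAV content type are assumptions; adjust them to match the server — `FormData` and `Blob` are available in browsers and in Node 18+):

```javascript
// Sketch (assumed field name): package recorded audio bytes for
// /api/v1/transcript as multipart/form-data.
function buildTranscriptForm(audioBytes) {
  const fd = new FormData();
  fd.append('audio', new Blob([audioBytes], { type: 'audio/wav' }), 'recording.wav');
  return fd;
}

// Example: wrap some raw bytes (here, just the 'RIFF' WAV magic number).
const fd = buildTranscriptForm(new Uint8Array([82, 73, 70, 70]));
```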
- Socket for querying the Speech-To-Text model and listening to an output stream:
Coming soon
- Flask
- Jinja2
- Socket.IO
- jQuery
- TailwindCSS
- Hugging Face Transformers (for DistilWhisper)
- Ollama (for LLMs)
- JavaScript