using ViLT for medical data
Updated Jan 10, 2024 · Jupyter Notebook
Visual Question Answering with ViLT
Implementation of the Vision-and-Language Transformer (ViLT) model served with FastAPI and Docker
A CLI and GUI for using the Vision-and-Language Transformer (ViLT) model for visual question answering (answering questions based on an image)
Visual question answering in real time
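The repositories above all build on ViLT for visual question answering: the model takes an image and a natural-language question and predicts an answer. A minimal sketch of that workflow, using the Hugging Face `transformers` library and the public `dandelin/vilt-b32-finetuned-vqa` checkpoint (the in-memory blank image and the question text are placeholders for illustration):

```python
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load the processor and the VQAv2-finetuned ViLT checkpoint.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Placeholder inputs: a blank RGB image stands in for a real photo or scan.
image = Image.new("RGB", (384, 384), color="white")
question = "What color is the image?"

# The processor tokenizes the question and converts the image to pixel values.
inputs = processor(image, question, return_tensors="pt")
outputs = model(**inputs)

# The highest-scoring logit indexes into the model's answer vocabulary.
answer = model.config.id2label[outputs.logits.argmax(-1).item()]
print(answer)
```

Wrapping this snippet in a FastAPI endpoint or a CLI entry point, as the projects above do, only requires swapping the placeholder image for an uploaded or user-supplied file.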
This repo contains the original implementation of VAuLT, the Vision-and-Augmented-Language Transformer. We provide instructions for downloading several multimodal social-media datasets, along with scripts for experimentation. VAuLT is a stack of Transformers: a language model such as BERT preprocesses the text input before it is passed to ViLT
Vue.js + Inertia.js + Laravel + Tailwind CSS
VILT stack admin panel
⚡ Human resource management system built with Vue.js, Inertia.js, Laravel 8, and Tailwind CSS