
Microscopic-Mamba: Revealing the Secrets of Microscopic Images with Only 4M Parameters

[Paper] [Project Page]

Abstract

In the field of medical microscopic image classification (MIC), CNN-based and Transformer-based models have been extensively studied. However, CNNs struggle with modeling long-range dependencies, limiting their ability to fully utilize semantic information in images. Conversely, Transformers are hampered by the quadratic computational complexity of self-attention. To address these challenges, we propose a model based on the Mamba architecture: Microscopic-Mamba. Specifically, we designed the Partially Selected Feed Forward Network (PSFFN) to replace the last linear layer of the Visual State Space Module (VSSM), enhancing Mamba’s local feature extraction capabilities. Additionally, we introduced the Modulation Interaction Feature Aggregation (MIFA) module to effectively modulate and dynamically aggregate global and local features. We also incorporated a parallel VSSM mechanism to improve inter-channel information interaction while reducing the number of parameters. Extensive experiments have demonstrated that our method achieves state-of-the-art performance on five public datasets.

Overview

[Figure: accuracy overview]


💎Let's Get Started!

A. Installation

Note that the code in this repo runs on Linux systems.

This repo is built on the VMamba repo, so you need to install it first. The following installation sequence is taken from the VMamba repo.

Step 1: Clone the repository:

Clone this repository and navigate to the project directory:

git clone https://github.com/zs1314/Microscopic-Mamba.git
cd Microscopic-Mamba

Step 2: Environment Setup:

It is recommended to set up a conda environment and install dependencies via pip. Use the following commands to set up your environment:

Create and activate a new conda environment

conda create -n msmamba
conda activate msmamba

Install dependencies

pip install -r requirements.txt
cd kernels/selective_scan && pip install .
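
Optional sanity check (assuming PyTorch was installed from requirements.txt and a CUDA-capable GPU is available): confirm that the environment sees the GPU before moving on.

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"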

B. Data Preparation

The five datasets RPE, MHIST, SARS, TissueMnist, and MedMf_Colon are used for the MIC experiments. Please download them and arrange them in the following folder/file structure:

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data
├── RPE
    ├── train
    │   ├── class 1
    │   │   ├──00001.png
    │   │   ├──00002.png
    │   │   ├──00003.png
    │   │   ...
    │   │
    │   ├── class 2
    │   │   ├──00001.png
    │   │   ... 
    │   │
    │   └── class n
    │       ├──00001.png 
    │       ...   
    ├── val
    │   ├── ...
    ├── test
    │   ├── ...
    │   ...
├── MHIST
├── SARS
├── TissueMnist
├── MedMf_Colon

Alternatively, you can download the datasets from here: Baidu Netdisk
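
For reference, the layout above follows the standard ImageFolder convention, so each split can be loaded with torchvision. The snippet below is only a minimal sketch; the dataset path, image size, and batch size are placeholders, and the actual data pipeline used for the experiments is defined in the repo's training script.

# Minimal sketch: load one split (e.g. RPE/train) of an ImageFolder-style dataset.
# The dataset root, image size, and batch size below are placeholders.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize to the model's expected input size
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("/home/username/data/RPE/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # e.g. torch.Size([32, 3, 224, 224]) torch.Size([32])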

C. Model Training

python train.py 

D. Model Testing

python test.py 

🐥: Before training and testing, configure the relevant parameters in the corresponding script (train.py or test.py).
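
For orientation, the sketch below shows what a plain-PyTorch evaluation loop over the test split looks like. It is purely illustrative: test.py defines its own model construction, checkpoint loading, and metrics, and `model` / `test_loader` here are assumed to be built elsewhere (e.g. as in the Data Preparation example).

# Illustrative only: compute top-1 accuracy of a trained model on a test loader.
# `model` and `test_loader` are assumed to exist; see test.py for the repo's real logic.
import torch

@torch.no_grad()
def evaluate(model, test_loader, device="cuda"):
    model.eval()
    model.to(device)
    correct, total = 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)  # predicted class per image
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total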

🤝Acknowledgments

This project is based on VMamba (paper, code). Thanks for their excellent work!

🙋Q & A

For any questions, please feel free to contact us.

📜Reference

If this code or paper contributes to your research, please consider citing our paper and giving this repo a ⭐️ 🌝


