🧠 SAM2YOLOv11-LesionPipeline

This repository provides an end-to-end pipeline for automatic skin lesion detection: it uses Meta AI's Segment Anything Model (SAM) in a zero-shot setting to generate a labeled dataset, then trains a YOLO model on that dataset.


📌 Overview

This repo enables you to:

  1. Detect lesions in raw full-body images using SAM + filtering (no training required)
  2. Generate YOLO-format datasets
  3. Train a YOLO object detector on the dataset

🗂️ Project Structure

zero-shot-lesion-pipeline/
├── data_prep/                  # Lesion detection using SAM
│   ├── detector.py             # Core pipeline
│   └── utils.py                # Filtering and helper functions
├── yolo_training/              # YOLO training pipeline
│   ├── train.py                # YOLOv11 training launcher
│   └── evaluate.py             # YOLOv11 evaluation
├── dataset/                    # Output: YOLO-style dataset
│   ├── images/
│   └── labels/
├── lesions/                    # Cropped lesion images
├── checkpoints/                # Pretrained SAM and YOLO weights go here
└── README.md

🛠️ Installation

1. Clone the repository

git clone https://github.com/rabihchamas/zero-shot-lesion-pipeline.git
cd SAM2YOLOv11-LesionPipeline

2. Create and activate a virtual environment

python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

3. Install dependencies

pip install -r requirements.txt

4. Download SAM model weights

Create a folder named checkpoints/ in the project root and download the appropriate model checkpoints from Meta AI's SAM repo:

  • sam_vit_b_01ec64.pth, or
  • sam_vit_h_4b8939.pth

Make sure that the model name (vit_b, vit_h, etc.) in your configuration matches the checkpoint filename you download.

Place them in the checkpoints/ folder:

zero-shot-lesion-pipeline/
├── checkpoints/
│   ├── sam_vit_b_01ec64.pth
│   └── sam_vit_h_4b8939.pth
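A quick sanity check for the model-name/checkpoint match mentioned above can be sketched as follows (the helper name is ours, not part of the repo):

```python
# Sketch: verify that the SAM model name (vit_b, vit_h, ...) appears in the
# checkpoint filename before loading, as the README recommends.
from pathlib import Path

def checkpoint_matches(model_name: str, checkpoint_path: str) -> bool:
    """Return True if the checkpoint filename contains the model name,
    e.g. 'vit_b' should appear in 'sam_vit_b_01ec64.pth'."""
    return model_name in Path(checkpoint_path).name

print(checkpoint_matches("vit_b", "checkpoints/sam_vit_b_01ec64.pth"))  # True
print(checkpoint_matches("vit_h", "checkpoints/sam_vit_b_01ec64.pth"))  # False
```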

🧪 Part 1: Dataset Generation (Zero-Shot Detection)

This step runs SAM over cropped regions of the input images and filters the resulting lesion candidates based on:

  • Intensity: comparison of the object's intensity against its surrounding area
  • Color: skin-tone rules
  • Geometric shape

See data_prep/utils.py for implementation details.
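To make the three filter families concrete, here is an illustrative sketch of the kinds of rules involved. This is not the repo's actual implementation (that lives in data_prep/utils.py), and all function names and thresholds here are hypothetical:

```python
# Hypothetical sketches of the filter families described above.
# Thresholds are illustrative only; the real rules are in data_prep/utils.py.
import math

def is_darker_than_surroundings(region_mean: float, surround_mean: float,
                                min_contrast: float = 10.0) -> bool:
    """Intensity filter: lesions are typically darker than the surrounding
    skin, so require the region's mean intensity to be lower by a margin."""
    return surround_mean - region_mean >= min_contrast

def looks_like_skin(r: int, g: int, b: int) -> bool:
    """Color filter: a common RGB skin-tone heuristic that could be applied
    to the area surrounding a candidate region."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def circularity(area: float, perimeter: float) -> float:
    """Shape filter: 4*pi*A / P^2 is 1.0 for a perfect circle and smaller
    for irregular shapes; lesions tend to be roughly round."""
    return 4 * math.pi * area / (perimeter ** 2)

# A circle of radius 10 has circularity 1.0:
print(round(circularity(math.pi * 100, 2 * math.pi * 10), 3))  # 1.0
```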

Run lesion detection:

python data_prep/detector.py --input_directory datasets/lesions/images/train/ --annotations_folder datasets/lesions/labels/train/ --lesions_folder lesions/ --model_name vit_b --grids 2 2 --min_area 0

Output:

  • lesions/*.jpg: Lesion crops
  • datasets/lesions/labels/train/*.txt: YOLO-format annotations
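The .txt annotations follow the standard YOLO format: one line per box, "class x_center y_center width height", with all coordinates normalized to [0, 1]. A minimal conversion helper (our own illustration, not the repo's code) looks like this:

```python
# Sketch: convert a pixel-space bounding box (x1, y1, x2, y2) into a
# YOLO-format label line with coordinates normalized by image size.
def to_yolo_line(cls: int, x1: float, y1: float, x2: float, y2: float,
                 img_w: int, img_h: int) -> str:
    xc = (x1 + x2) / 2 / img_w   # normalized box center x
    yc = (y1 + y2) / 2 / img_h   # normalized box center y
    w = (x2 - x1) / img_w        # normalized box width
    h = (y2 - y1) / img_h        # normalized box height
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line(0, 100, 200, 300, 400, 640, 640))
# -> 0 0.312500 0.468750 0.312500 0.312500
```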

🧠 Part 2: Training YOLO

Note: This project uses Ultralytics to train the YOLO model (YOLOv11). Make sure it is installed.

Once the dataset is ready, train a YOLO model.

Prerequisites:

  • Prepare your data.yaml config with dataset paths and class names
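A minimal example of an Ultralytics-style data.yaml is shown below; the paths and class name are placeholders and should match your generated dataset:

```yaml
# Example datasets/lesions.yaml (paths and class name are placeholders)
path: datasets/lesions      # dataset root directory
train: images/train         # training images, relative to path
val: images/val             # validation images, relative to path
names:
  0: lesion
```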

Launch training (YOLOv11):

cd yolo_training/
python train.py --data ../datasets/lesions.yaml --imgsz 640 --batch 8 --epochs 1 --model checkpoints/yolo11n.pt

🤝 Contributions

Pull requests are welcome! Feel free to open issues for feature requests, bugs, or ideas.


🧠 Credits


📄 License

MIT License. Use it freely; just cite this repo if it helps your research or product.
