Get started with the No-Code Classification Toolkit in minutes!
Organize your images in this structure:
```
my_dataset/
├── Training/
│   ├── cat/
│   │   ├── img1.jpg
│   │   └── img2.jpg
│   └── dog/
│       ├── img1.jpg
│       └── img2.jpg
└── Validation/
    ├── cat/
    │   └── img1.jpg
    └── dog/
        └── img1.jpg
```
Requirements:
- At least 100 images per class (configurable)
- Supported formats: JPG, JPEG, PNG, BMP
- Two folders: Training and Validation
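The layout and requirements above can be sanity-checked before launching the container. A minimal sketch (the `check_dataset` helper and its messages are illustrative, not part of the toolkit):

```python
from pathlib import Path

VALID_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def check_dataset(root, min_images=100):
    """Verify the Training/Validation layout and per-class image counts."""
    root = Path(root)
    problems = []
    for split in ("Training", "Validation"):
        split_dir = root / split
        if not split_dir.is_dir():
            problems.append(f"missing folder: {split_dir}")
            continue
        # Each subfolder of a split is treated as one class.
        for class_dir in sorted(p for p in split_dir.iterdir() if p.is_dir()):
            images = [f for f in class_dir.iterdir()
                      if f.suffix.lower() in VALID_EXTS]
            if len(images) < min_images:
                problems.append(f"{class_dir}: only {len(images)} images "
                                f"(need at least {min_images})")
    return problems

# Example:
# for issue in check_dataset("/path/to/my_dataset"):
#     print(issue)
```

An empty result means the folder structure and per-class counts look ready for training.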
For TensorFlow:
```bash
docker pull animikhaich/zero-code-classifier:tensorflow
```

For PyTorch:

```bash
docker pull animikhaich/zero-code-classifier:pytorch
```

For both:

```bash
docker pull animikhaich/zero-code-classifier:both
```

Or build locally:
```bash
git clone https://github.com/animikhaich/No-Code-Classification-Toolkit.git
cd No-Code-Classification-Toolkit
bash build-all.sh
```

Then run the container with your dataset mounted:

```bash
# Replace /path/to/my_dataset with your actual dataset path
docker run -it --gpus all --net host \
  -v /path/to/my_dataset:/data \
  animikhaich/zero-code-classifier:pytorch
```

Open your browser and go to: http://localhost:8501
- Select Framework: Choose TensorFlow or PyTorch (if using the "both" image)
- Dataset Paths:
  - Training: `/data/Training`
  - Validation: `/data/Validation`
- Model Settings:
  - Backbone: Start with `resnet50` (PyTorch) or `ResNet50` (TensorFlow)
  - Optimizer: `Adam`
  - Learning Rate: `0.001`
  - Batch Size: `16` (adjust based on GPU memory)
  - Epochs: `100`
  - Image Size: `224`
- Advanced:
  - Enable Mixed Precision for faster training (if using a modern GPU)
- Click Start Training!
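The starting settings above, gathered into one place as a plain dict (the key names here are illustrative, not the toolkit's actual configuration schema):

```python
# Suggested starting configuration from the steps above.
# Key names are illustrative, not the toolkit's real schema.
START_CONFIG = {
    "backbone": "resnet50",        # "ResNet50" on the TensorFlow side
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "batch_size": 16,              # adjust to your GPU memory
    "epochs": 100,
    "image_size": 224,
    "mixed_precision": True,       # if your GPU supports it
    "train_dir": "/data/Training",
    "val_dir": "/data/Validation",
}
```

Keeping a record like this alongside each run makes it easy to compare experiments later.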
Watch the live graphs showing:
- Training/Validation Loss
- Training/Validation Accuracy
- Progress bar for each epoch
Training will automatically:
- Save best model when validation accuracy improves
- Reduce learning rate when validation plateaus
- Stop early if no improvement for 10 epochs
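The three behaviors above follow a standard checkpoint / reduce-on-plateau / early-stopping pattern. A framework-agnostic sketch of the logic (the 10-epoch stop patience matches the text; the LR patience and reduction factor are illustrative defaults, not necessarily what the toolkit uses):

```python
class TrainingMonitor:
    """Tracks validation accuracy and decides when to checkpoint,
    reduce the learning rate, or stop early."""

    def __init__(self, lr, stop_patience=10, lr_patience=3, lr_factor=0.1):
        self.best_acc = 0.0
        self.lr = lr
        self.stop_patience = stop_patience
        self.lr_patience = lr_patience
        self.lr_factor = lr_factor
        self.epochs_since_best = 0

    def update(self, val_acc):
        actions = []
        if val_acc > self.best_acc:            # validation improved
            self.best_acc = val_acc
            self.epochs_since_best = 0
            actions.append("save_best_model")
        else:
            self.epochs_since_best += 1
            if self.epochs_since_best % self.lr_patience == 0:
                self.lr *= self.lr_factor      # plateau: shrink the LR
                actions.append("reduce_lr")
            if self.epochs_since_best >= self.stop_patience:
                actions.append("stop_early")
        return actions
```

In the real frameworks this role is played by built-in callbacks/schedulers (e.g., reduce-on-plateau and early-stopping utilities); the sketch just shows the decision logic.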
After training, copy the model from the container:

For PyTorch:

```bash
docker cp <container-id>:/app/model/weights/pytorch ./my_models/
```

For TensorFlow:

```bash
docker cp <container-id>:/app/model/weights/keras ./my_models/
```

Get TensorBoard logs:

```bash
docker cp <container-id>:/app/logs/tensorboard ./logs/
```

- Batch Size: `16`
- Learning Rate: `0.0001`
- Enable augmentation (default)
- Epochs: `50-100`
- Batch Size: `32`
- Learning Rate: `0.001`
- Enable augmentation
- Epochs: `50-100`
- Batch Size: `64-128`
- Learning Rate: `0.001-0.01`
- Enable augmentation
- Epochs: `30-50`
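The three preset tiers above can be kept as a small lookup table. The small/medium/large labels are illustrative groupings inferred from the batch sizes, not names used by the toolkit; ranges are written as (low, high) tuples:

```python
# Hyperparameter presets mirroring the three tiers above.
# Tier names are illustrative labels, not from the toolkit.
PRESETS = {
    "small":  {"batch_size": 16,        "learning_rate": 1e-4,
               "augmentation": True,    "epochs": (50, 100)},
    "medium": {"batch_size": 32,        "learning_rate": 1e-3,
               "augmentation": True,    "epochs": (50, 100)},
    "large":  {"batch_size": (64, 128), "learning_rate": (1e-3, 1e-2),
               "augmentation": True,    "epochs": (30, 50)},
}

def preset(name):
    """Return a copy of the chosen preset so callers can tweak it safely."""
    return dict(PRESETS[name])
```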
If you get out-of-memory errors:
- Reduce batch size to `8` or `4`
- Reduce image size to `192` or `128`
- Try a smaller model (e.g., `mobilenet_v2`)
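These knobs work because activation memory grows roughly linearly with batch size and with pixel count (image side squared). A back-of-envelope helper for comparing configurations (purely illustrative scaling, not a measurement of any particular model):

```python
def relative_activation_memory(batch_size, image_size,
                               ref_batch=16, ref_size=224):
    """Rough relative activation-memory cost vs. a reference config,
    assuming memory scales with batch_size * image_size**2."""
    return (batch_size / ref_batch) * (image_size / ref_size) ** 2

# Halving the batch size halves the estimate;
# shrinking 224 -> 128 cuts it by roughly 3x.
```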
PyTorch:
- Models: Use lowercase names (`resnet50`, `mobilenet_v2`)
- Optimizers: `Adam`, `SGD`, `AdamW`
- Mixed Precision: Check the "Use Mixed Precision (AMP)" box
- Best for: Research, experimentation, custom modifications
TensorFlow:
- Models: Use TitleCase names (`ResNet50`, `MobileNetV2`)
- Optimizers: `Adam`, `SGD`, `RMSprop`
- Mixed Precision: Select from dropdown (FP16 for GPU, BF16 for TPU)
- Best for: Production deployment, TPU training
- Make sure you're using the correct Docker image
- For TensorFlow: use the `:tensorflow` tag
- For PyTorch: use the `:pytorch` tag
- Check your dataset path
- Ensure the path is absolute
- Verify the directory structure (Training/Validation folders)
- Enable mixed precision training
- Increase batch size if you have GPU memory
- Use a faster backbone (e.g., MobileNet)
- Ensure you're using the GPU (check the `--gpus all` flag)
- Check your dataset quality
- Ensure labels are correct
- Try different learning rates
- Train for more epochs
- Use a larger backbone (e.g., ResNet101)
- Ensure Docker is installed
- For GPU: Install NVIDIA Container Toolkit
- Check port 8501 is not in use
- Try without `--net host`: publish the ports instead with `-p 8501:8501 -p 6006:6006`
```bash
docker run -it --gpus all \
  -p 8502:8501 \
  -p 6007:6006 \
  -v /path/to/dataset:/data \
  animikhaich/zero-code-classifier:pytorch
```

Access at: http://localhost:8502
To mount multiple datasets:

```bash
docker run -it --gpus all --net host \
  -v /path/to/dataset1:/data1 \
  -v /path/to/dataset2:/data2 \
  animikhaich/zero-code-classifier:pytorch
```

To save models directly to a host directory:

```bash
docker run -it --gpus all --net host \
  -v /path/to/dataset:/data \
  -v /path/to/output:/app/model \
  animikhaich/zero-code-classifier:pytorch
```

Models will be saved directly to /path/to/output
Open a new terminal:
```bash
docker exec -it <container-id> bash
cd /app
tensorboard --logdir logs/tensorboard --host 0.0.0.0
```

Access TensorBoard at: http://localhost:6006
- Experiment with different backbones and hyperparameters
- Compare TensorFlow vs PyTorch performance on your data
- Read the Framework Guide for details
- Check SECURITY.md for deployment best practices
- Review training logs in TensorBoard
- Deploy your trained model to production
- Documentation: Check README.md
- Issues: https://github.com/animikhaich/No-Code-Classification-Toolkit/issues
- Email: animikhaich@gmail.com
Happy Training! 🚀