This is a (Work In Progress!) teaching and research package for exploring latent generative flow matching models. (The name is inspired by "vocoder.")
This project began as a lightweight, fast (and interpretable?) upgrade to the diffusion model system Pictures of MIDI for MIDI piano roll images, but flocoder is intended to work on more general datasets too.
Head over to notebooks/SD_Flower_Flow.ipynb and run through it for a taste; it will run on Colab.
Check out the slides linked in notebooks/README.md.
The diagram above illustrates the architecture of our intended model: a VQVAE compresses MIDI data into a discrete latent space, and a flow model learns to generate new samples in the continuous latent space.
We can also flow in the continuous latent space of a VAE such as Stable Diffusion's, which may be easier to start with.
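For orientation, here is a minimal sketch of the (linear-path) flow-matching objective such a model trains on. This is illustrative only: `velocity_model` is a hypothetical stand-in, and the actual flocoder code is organized differently.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(velocity_model, z1):
    """One rectified-flow / flow-matching step on a batch of latents z1
    (e.g. codec encodings of piano roll images, shape [B, C, H, W])."""
    z0 = torch.randn_like(z1)                  # noise endpoint of each path
    t = torch.rand(z1.shape[0], device=z1.device).view(-1, 1, 1, 1)
    zt = (1 - t) * z0 + t * z1                 # point on the straight path
    v_target = z1 - z0                         # constant velocity along it
    v_pred = velocity_model(zt, t.flatten())   # model predicts the velocity
    return F.mse_loss(v_pred, v_target)
```

Training drives `velocity_model` toward the straight-line velocity field; sampling then integrates that field from noise to data.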
To install:

```bash
# Clone the repository
git clone https://github.com/drscotthawley/flocoder.git
cd flocoder
# Install uv if not already installed
# On macOS/Linux:
# curl -LsSf https://astral.sh/uv/install.sh | sh
# On Windows PowerShell:
# irm https://astral.sh/uv/install.ps1 | iex
# Create a virtual environment with uv, specifying Python 3.10
uv venv --python=python3.10
# Activate the virtual environment
# On Linux/macOS:
source .venv/bin/activate
# On Windows:
# .venv\Scripts\activate
# Install the package in editable mode (See below if you get NATTEN errors!)
uv pip install -e .
# Recommended: Install development dependencies (jupyter, others...)
uv pip install -e ".[dev]"
# Recommended: install NATTEN separately with special flags
uv pip install natten --no-build-isolation
# if that fails, see NATTEN's install instructions (https://github.com/SHI-Labs/NATTEN/blob/main/docs/install.md)
# and specify exact version number, e.g.
# uv pip install natten==0.17.5+torch260cu126 -f https://shi-labs.com/natten/wheels/
# or build from the latest source, e.g.:
# uv pip install --no-build-isolation git+https://github.com/SHI-Labs/NATTEN
```
The project is organized as follows:
- `flocoder/`: Main package code
- `scripts/`: Training and evaluation scripts
- `configs/`: Configuration files for models and training
- `notebooks/`: Jupyter notebooks for tutorials and examples
- `tests/`: Unit tests
The package includes multiple training scripts, located in the main directory.
You can skip the autoencoder/"codec" training if you'd rather use the pretrained Stable Diffusion VAE, e.g. for what follows:
```bash
export CONFIG_FILE=flowers_sd.yaml
```
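If you take the Stable Diffusion route, encoding and decoding look roughly like the following sketch using Hugging Face diffusers (the checkpoint name here is illustrative; the one flocoder actually uses is set in the config):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def encode(images):  # images: [B, 3, H, W], scaled to [-1, 1]
    return vae.encode(images).latent_dist.sample() * 0.18215

@torch.no_grad()
def decode(latents):
    return vae.decode(latents / 0.18215).sample
```

The 0.18215 scaling is Stable Diffusion's convention for keeping latents roughly unit-variance, which matters when the flow starts from standard Gaussian noise.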
You can use the Stable Diffusion VAE to get started quickly (it will auto-download). But if you want to train your own codec:
```bash
export CONFIG_FILE=flowers_vqgan.yaml
#export CONFIG_FILE=midi.yaml
./train_vqgan.py --config-name $CONFIG_FILE
```
The autoencoder, AKA the "codec" (e.g. a VQGAN), compresses piano roll images into a quantized latent representation.
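The "quantized" part means each encoder output vector gets snapped to its nearest entry in a learned codebook. A minimal sketch of that step (not the actual flocoder implementation):

```python
import torch

def vector_quantize(z, codebook):
    """z: [N, D] encoder outputs; codebook: [K, D] learned entries."""
    dists = torch.cdist(z, codebook)   # [N, K] pairwise distances
    indices = dists.argmin(dim=1)      # nearest codebook entry per latent
    z_q = codebook[indices]            # quantized latents
    z_q = z + (z_q - z).detach()       # straight-through estimator for grads
    return z_q, indices
```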
Training will save checkpoints in the checkpoints/ directory. Use that checkpoint to pre-encode your data like so (takes about 20 minutes on a single GPU):
```bash
./preencode_data.py --config-name $CONFIG_FILE
```
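Pre-encoding just runs the frozen codec over the dataset once and caches the latents, so flow training never has to touch the image pipeline. Conceptually (hypothetical names, not the script's actual interface):

```python
import torch

@torch.no_grad()
def preencode(codec, dataloader, out_path="latents.pt"):
    codec.eval()
    latents = [codec.encode(images) for images, _ in dataloader]
    torch.save(torch.cat(latents), out_path)  # flow training reads this cache
```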
Then train the flow model:
```bash
./train_flow.py --config-name $CONFIG_FILE
```
The flow model operates in the latent space created by the autoencoder.
```bash
# Generate new MIDI samples
./generate_samples.py --config-name $CONFIG_FILE
# or with optional gradio UI:
#./generate_samples.py --config-name $CONFIG_FILE +use_gradio=true
```
This generates new samples by sampling from the flow model and decoding the results through the codec (VQVAE or VAE).
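Conceptually, generation integrates the learned velocity field from noise to data and then decodes. A minimal Euler-integration sketch with hypothetical names (flocoder itself has an RK4(5) integrator; see the TODO list below):

```python
import torch

@torch.no_grad()
def sample(velocity_model, codec, shape, n_steps=50, device="cuda"):
    z = torch.randn(shape, device=device)   # start from pure noise at t=0
    dt = 1.0 / n_steps
    for i in range(n_steps):                # Euler steps along the ODE
        t = torch.full((shape[0],), i * dt, device=device)
        z = z + velocity_model(z, t) * dt
    return codec.decode(z)                  # map latents back to images
```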
Contributions are VERY welcome! See Contributing.md. Thanks in advance.
Discussions are open! Rather than starting some ad-hoc Discord server, let's share ideas, questions, insights, etc. using the Discussions tab.
- Add Discussions area
- Add Style Guide
- Replace custom config/CLI arg system with Hydra or other package
- Rename "vae"/"vqvae"/"vqgan" variable as just "codec"
- Replace class in preencode_data.py with functions as per Style Guide
- Research: Figure out why conditioning fails for latent model
- Add Standalone sampler script / Gradio demo?
- Add metrics (to wandb out) to quantify flow training progress (sinkhorn, FID)
- Add Contributing guidelines
- Try variable size scheduler
- Add audio example, e.g. using DAC
- low-priority: Make RK4(5) integrator fully CUDA-compatible
- Straighter/OT paths: Add ReFlow, Minibatch OT, Ray's Rays, Curvature penalty,...
- Add jitter / diffusion for comparison
- Add Documentation
- Improve overall introduction/orientation
- Fix "code smell" throughout -- repeated methods, hard-coded values, etc.
- Research: Figure out how to accelerate training of flows!!
- Research: Figure out how to accelerate training of vqgan
- Research: improve output quality of midi-flow (and midi-vqgan)
- Inference speedup: investigate model quantization / pruning (pytorch.ao?)
- Ops: Add tests
- Ops: Add CI
- Investigate "Mean Flows for One-step Generative Modeling"
This project is generously supported by Hyperstate Music AI.
This project is licensed under the terms of the MIT license.