MGT-python: Musical Gestures Toolbox¶
The Musical Gestures Toolbox for Python is a collection of tools for visualization and analysis of audio and video, with a focus on motion capture and musical gesture analysis.

What is MGT-python?¶
MGT-python provides researchers, artists, and developers with powerful tools to:
- Analyze motion in video recordings
- Extract audio features from multimedia files
- Generate visualizations like motiongrams, videograms, and motion history images
- Process and manipulate video content with computer vision techniques
- Integrate seamlessly with the scientific Python ecosystem (NumPy, SciPy, Matplotlib)
Key Features¶
🎥 Video Analysis¶
- Motion detection and tracking
- Optical flow analysis
- Pose estimation with automatic model download
- Frame differencing and motion history
- Video preprocessing (cropping, filtering, rotation)
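Frame differencing, listed above, works by thresholding the absolute pixel difference between consecutive frames. Here is a minimal, library-free sketch of the idea, not the toolbox's implementation (the toolbox operates on real video via FFmpeg/OpenCV; the frame format and function names here are illustrative):

```python
def frame_difference(prev_frame, curr_frame, threshold=10):
    """Binary motion image: 1 where a pixel changed by more than `threshold`.

    Frames are grayscale images given as lists of rows of intensities (0-255).
    """
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]


def quantity_of_motion(motion_image):
    """Fraction of pixels flagged as moving, a common per-frame motion feature."""
    total = sum(len(row) for row in motion_image)
    active = sum(sum(row) for row in motion_image)
    return active / total


prev_frame = [[0, 0, 0], [0, 0, 0]]
curr_frame = [[0, 0, 0], [0, 200, 0]]
motion = frame_difference(prev_frame, curr_frame)
print(quantity_of_motion(motion))  # one of six pixels changed
```

Motion history images extend this by letting recent motion pixels decay over time instead of resetting every frame.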
🎵 Audio Processing¶
- Waveform analysis and visualization
- Spectrograms and chromagrams
- Tempo and beat tracking
- Audio feature extraction
📊 Visualization Tools¶
- Motiongrams (motion over time)
- Videograms (pixel intensity over time)
- Average images and motion plots
- Interactive plotting with Matplotlib
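A motiongram reduces each motion frame to a single row (or column) by averaging across one spatial axis, then stacks those rows over time, so rows represent time and columns represent spatial position. A toy sketch of that reduction step, under the same illustrative frame format as above (not the toolbox's implementation):

```python
def motiongram_row(motion_frame):
    """Collapse a 2-D motion frame to one row by averaging each column."""
    height = len(motion_frame)
    width = len(motion_frame[0])
    return [sum(row[x] for row in motion_frame) / height for x in range(width)]


def motiongram(motion_frames):
    """Stack one reduced row per frame: rows = time, columns = horizontal position."""
    return [motiongram_row(frame) for frame in motion_frames]


frames = [
    [[0, 255], [0, 255]],   # motion on the right side of the image
    [[255, 0], [255, 0]],   # motion on the left side of the image
]
print(motiongram(frames))  # [[0.0, 255.0], [255.0, 0.0]]
```

A videogram is built the same way, but from raw pixel intensities rather than motion images.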
🔧 Utilities¶
- Video format conversion
- Batch processing capabilities
- Integration with OpenPose for pose estimation
- Export functionality for further analysis
Quick Start¶
Installation¶
musicalgestures installs its Python dependencies automatically. Install ffmpeg separately to enable video processing.
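A typical setup might look like this (assuming the package is available on PyPI as `musicalgestures`; the ffmpeg command depends on your platform):

```shell
# Install the toolbox and its Python dependencies
pip install musicalgestures

# Install ffmpeg separately (pick the command for your platform)
# macOS:   brew install ffmpeg
# Ubuntu:  sudo apt install ffmpeg
# Windows: download a build from ffmpeg.org and add it to PATH
```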
Basic Usage¶
```python
import musicalgestures as mg

# Load a video
mv = mg.MgVideo('dance.avi')

# Generate motion analysis
motion = mv.motion()

# Create visualizations
motiongram = mv.motiongrams()
average_image = mv.average()

# Audio analysis
audio = mg.MgAudio('music.wav')
spectrogram = audio.spectrogram()
```
Getting Started¶
- Installation Guide - Detailed setup instructions
- Quick Start Tutorial - Get up and running in minutes
- Examples - Sample code and use cases
- User Guide - Comprehensive documentation
Runtime Behavior¶
- `pose()` downloads missing OpenPose model weights on demand.
- In notebook and batch execution, pose weight downloads are attempted automatically instead of prompting on stdin.
- If CUDA-backed OpenCV DNN support is unavailable, `pose(device='gpu')` falls back to CPU.
- `flow.dense()`, `flow.sparse()`, and `blur_faces()` run on CPU by default (`use_gpu=False`); pass `use_gpu=True` to attempt CUDA acceleration with automatic CPU fallback.
- `get_cuda_device_count()` can be used to check CUDA visibility from OpenCV.
- `blur_faces()` consistently returns the generated `MgVideo` result, including when exporting face-coordinate data.
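The GPU behavior described above follows a try-then-fall-back pattern. Here is a minimal, library-free sketch of that policy (the function and parameter names are hypothetical, not the toolbox's API):

```python
def run_with_fallback(gpu_task, cpu_task, use_gpu=False, cuda_device_count=0):
    """Attempt the CUDA path only when requested and visible; otherwise use CPU.

    Mirrors the documented behavior: GPU use is opt-in (`use_gpu=False` by
    default), and a failure on the GPU path falls back to the CPU path.
    """
    if use_gpu and cuda_device_count > 0:
        try:
            return gpu_task()
        except RuntimeError:
            pass  # CUDA initialization or execution failed; fall back below
    return cpu_task()


# With no CUDA devices visible, the CPU path runs even when GPU is requested
result = run_with_fallback(lambda: "gpu", lambda: "cpu",
                           use_gpu=True, cuda_device_count=0)
print(result)  # cpu
```

In the toolbox itself, the device count would come from a check such as `get_cuda_device_count()` before choosing the CUDA path.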
Academic Background¶
This toolbox builds upon years of research in musical gesture analysis:
- Musical Gestures Toolbox for Max (Original)
- Musical Gestures Toolbox for Matlab (Previous version)
- MGT-python (Current version)
Support and Community¶
- Documentation: You're reading it! 📚
- Issues: GitHub Issues
- Source Code: GitHub Repository
- Research Group: fourMs Lab at RITMO
Citation¶
If you use MGT-python in your research, please cite this article:
- Laczkó, B., & Jensenius, A. R. (2021). Reflections on the Development of the Musical Gestures Toolbox for Python. Proceedings of the Nordic Sound and Music Computing Conference, Copenhagen.
@inproceedings{laczkoReflectionsDevelopmentMusical2021,
title = {Reflections on the Development of the Musical Gestures Toolbox for Python},
author = {Laczkó, Bálint and Jensenius, Alexander Refsum},
booktitle = {Proceedings of the Nordic Sound and Music Computing Conference},
year = {2021},
address = {Copenhagen},
url = {http://urn.nb.no/URN:NBN:no-91935}
}
License¶
MGT-python is released under the GNU General Public License v3 (GPLv3).
Ready to explore musical gestures? Start with our Quick Start Guide or jump into the examples!