
# MMRazor for Large Models

## Introduction
MMRazor is dedicated to developing general-purpose model compression tools. It now supports not only conventional CV model compression but also large models. This project provides examples of compressing various large models with MMRazor, including LLaMA, Stable Diffusion, and more.
Code structure overview for large models:

```
mmrazor
├── implementations            # core algorithm components
│   ├── pruning
│   └── quantization
projects
└── mmrazor_large
    ├── algorithms             # usage introduction for the algorithms
    └── examples               # examples of applying the algorithms to various models
        ├── language_models
        │   ├── LLaMA
        │   └── OPT
        └── ResNet
```
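To make the layout above concrete, the following is a minimal sketch of how the packages under `mmrazor/implementations` are typically wired into a compression script, using the ResNet example as a reference point. The compressor class and method names (`SparseGptCompressor`, `prepare`, `prune_24`, `to_static_model`) are assumptions for illustration; see `algorithms/` and `examples/` in this project for the authoritative usage.

```python
# A hedged sketch only: the class/method names below are assumptions for
# illustration; consult algorithms/ and examples/ for the real entry points.
import torch
import torchvision

from mmrazor.implementations.pruning import sparse_gpt  # core pruning components

model = torchvision.models.resnet18(pretrained=True).eval()

compressor = sparse_gpt.SparseGptCompressor()  # assumed compressor class name
compressor.prepare(model)                      # wrap prunable Linear/Conv layers

# Run a few calibration batches through the model (random data shown here;
# a real script would iterate over a calibration dataloader).
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

compressor.prune_24()                          # assumed: apply 2:4 structured sparsity
model = compressor.to_static_model(model)      # assumed: export a plain torch.nn.Module
```

The GPTQ components under `mmrazor.implementations.quantization` are expected to follow the same prepare / calibrate / compress pattern.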
## Model-Algorithm Example Matrix

|           | ResNet | OPT | LLaMA | Stable diffusion |
| :-------: | :----: | :-: | :---: | :--------------: |
| SparseGPT | ✅     | ✅  | ✅    |                  |
| GPTQ      | ✅     | ✅  | ✅    |                  |
## PaperList

We provide a paper list for researchers in the field of model compression for large models. If you want to add your paper to this list, please submit a PR.
| Paper     | Title                                                                       | Type         | MMRazor |
| :-------: | :-------------------------------------------------------------------------- | :----------: | :-----: |
| SparseGPT | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot     | Pruning      | ✅      |
| GPTQ      | GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers | Quantization | ✅      |