From 6b29727dc6b25a5214e8a59a2ce9a55dd51cf97a Mon Sep 17 00:00:00 2001
From: RE-OWOD <95522332+RE-OWOD@users.noreply.github.com>
Date: Tue, 4 Jan 2022 22:09:10 +0800
Subject: [PATCH] Add files via upload

---
 INSTALL.md | 230 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 230 insertions(+)
 create mode 100644 INSTALL.md

diff --git a/INSTALL.md b/INSTALL.md
new file mode 100644
index 0000000..5d8887b
--- /dev/null
+++ b/INSTALL.md
@@ -0,0 +1,230 @@

## Installation

Our [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
has step-by-step instructions that install detectron2.
The [Dockerfile](docker)
also installs detectron2 with a few simple commands.

### Requirements
- Linux or macOS with Python ≥ 3.6
- PyTorch ≥ 1.4 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation.
  You can install them together at [pytorch.org](https://pytorch.org) to make sure of this.
- OpenCV is optional but needed by the demo and visualization.

### Build Detectron2 from Source

gcc & g++ ≥ 5 are required. [ninja](https://ninja-build.org/) is recommended for a faster build.
After having them, run:
```
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
# (add --user if you don't have permission)

# Or, to install it from a local clone:
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

# Or, if you are on macOS:
CC=clang CXX=clang++ python -m pip install ......
```

To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the
old build first. You often need to rebuild detectron2 after reinstalling PyTorch.

### Install Pre-Built Detectron2 (Linux only)

Choose from this table to install [v0.2.1 (Aug 2020)](https://github.com/facebookresearch/detectron2/releases):

<table class="docutils"><tbody>
<th width="80"> CUDA </th><th valign="bottom" align="left" width="100">torch 1.6</th><th valign="bottom" align="left" width="100">torch 1.5</th><th valign="bottom" align="left" width="100">torch 1.4</th>
<tr><td align="left">10.2</td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.5/index.html
</code></pre></details></td>
<td align="left"></td>
</tr>
<tr><td align="left">10.1</td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.5/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.4/index.html
</code></pre></details></td>
</tr>
<tr><td align="left">10.0</td>
<td align="left"></td>
<td align="left"></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu100/torch1.4/index.html
</code></pre></details></td>
</tr>
<tr><td align="left">9.2</td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.6/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.5/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.4/index.html
</code></pre></details></td>
</tr>
<tr><td align="left">cpu</td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.6/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.5/index.html
</code></pre></details></td>
<td align="left"><details><summary> install </summary><pre><code>python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.4/index.html
</code></pre></details></td>
</tr>
</tbody></table>

Note that:
1. The pre-built package has to be used with the corresponding version of CUDA and the official PyTorch release.
   It will not work with a different version of PyTorch or a non-official build of PyTorch.
2. New packages are released every few months. Therefore, packages may not contain the latest features in the master
   branch and may not be compatible with the master branch of a research project that uses detectron2
   (e.g. those in [projects](projects)).

### Common Installation Issues

Click each issue for its solutions:

<details>
<summary>
Undefined symbols that contain TH, aten, torch, caffe2; missing torch dynamic libraries; segmentation fault immediately when using detectron2.
</summary>
<br/>

This usually happens when detectron2 or torchvision is not
compiled with the version of PyTorch you're running.

If the error comes from a pre-built torchvision, uninstall torchvision and pytorch and reinstall them
following [pytorch.org](https://pytorch.org) so the versions match.

If the error comes from a pre-built detectron2, check the [release notes](https://github.com/facebookresearch/detectron2/releases)
to see the corresponding pytorch version required for each pre-built detectron2,
or uninstall and reinstall the correct pre-built detectron2.

If the error comes from detectron2 or torchvision that you built manually from source,
remove the files you built (`build/`, `**/*.so`) and rebuild, so that it picks up the version of pytorch currently in your environment.

If you cannot resolve this problem, please include the output of `gdb -ex "r" -ex "bt" -ex "quit" --args python -m detectron2.utils.collect_env`
in your issue.

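As a concrete example, a clean reinstall cycle with pip might look like the following sketch; the torch 1.6 / CUDA 10.2 versions and wheel index here are only placeholders, so substitute the combination that matches your machine:

```
# remove the mismatched builds first
python -m pip uninstall -y detectron2 torchvision torch

# reinstall a matching torch/torchvision pair (example: torch 1.6 + CUDA 10.2)
python -m pip install torch==1.6.0 torchvision==0.7.0

# reinstall the pre-built detectron2 wheel that was built against that torch
python -m pip install detectron2 -f \
  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.6/index.html
```
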
</details>

<details>
<summary>
Undefined C++ symbols (e.g. `GLIBCXX`) or C++ symbols not found.
</summary>
<br/>

Usually it's because the library was compiled with a newer C++ compiler but is run with an old C++ runtime.

This often happens with an old anaconda installation.
Try `conda update libgcc`, then rebuild detectron2.

The fundamental solution is to run the code with a proper C++ runtime.
One way is to use `LD_PRELOAD=/path/to/libstdc++.so`.

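For instance, a sketch of the `LD_PRELOAD` workaround in a conda environment; the library path below is only an assumption, so point it at whichever sufficiently new libstdc++ you actually find on your system:

```
# locate a newer libstdc++, e.g. the one shipped with the conda environment
find "$CONDA_PREFIX" -name "libstdc++.so*"

# preload it when running detectron2 (adjust the path to what the command above prints)
LD_PRELOAD="$CONDA_PREFIX/lib/libstdc++.so.6" python -m detectron2.utils.collect_env
```
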
</details>

<details>
<summary>
"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available".
</summary>
<br/>

CUDA was not found when building detectron2.
You should make sure

```
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
```

prints `(True, a directory with cuda)` at the time you build detectron2.

Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config.

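As a sketch of one common fix, point the build at a local CUDA toolkit before rebuilding; `/usr/local/cuda-10.2` is only an example location, so use wherever your toolkit actually lives:

```
export CUDA_HOME=/usr/local/cuda-10.2    # example path to the CUDA toolkit
export PATH=$CUDA_HOME/bin:$PATH

# this should now print "True" and the CUDA_HOME directory
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'

# then clean and rebuild detectron2 from the local clone
rm -rf build/ **/*.so
python -m pip install -e .
```
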
</details>

<details>
<summary>
"invalid device function" or "no kernel image is available for execution".
</summary>
<br/>

Two possibilities:

* You built detectron2 with one version of CUDA but run it with a different version.

  To check whether this is the case,
  use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
  In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
  to contain CUDA libraries of the same version.

  When they are inconsistent,
  you need to either install a different build of PyTorch (or build it yourself)
  to match your local CUDA installation, or install a different version of CUDA to match PyTorch.

* PyTorch/torchvision/detectron2 is not built for the correct GPU architecture (i.e., compute capability).

  The architectures included by PyTorch/detectron2/torchvision are listed under the "architecture flags" in
  `python -m detectron2.utils.collect_env`. They must include
  the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).

  If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already.
  If not supported, you need to build them from source.

  When building detectron2/torchvision from source, they detect the GPU device and build only for that device.
  This means the compiled code may not work on a different GPU device.
  To recompile them for the correct architecture, remove all installed/compiled files,
  and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly.
  For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s (see the sketch below).

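A minimal sketch of that rebuild from a local clone, assuming P100/V100 GPUs; adjust the architecture list to your own GPU:

```
# check which architectures the current build was compiled for
python -m detectron2.utils.collect_env

# wipe the old build and recompile for the architectures you need
cd detectron2
rm -rf build/ **/*.so
TORCH_CUDA_ARCH_LIST="6.0;7.0" python -m pip install -e .
```
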
</details>

<details>
<summary>
Undefined CUDA symbols; cannot open libcudart.so.
</summary>
<br/>

The version of NVCC you used to build detectron2 or torchvision does
not match the version of CUDA you are running with.
This often happens when using anaconda's CUDA runtime.

Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions.
In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA"
to contain CUDA libraries of the same version.

When they are inconsistent,
you need to either install a different build of PyTorch (or build it yourself)
to match your local CUDA installation, or install a different version of CUDA to match PyTorch.

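For a quick comparison of the two versions, a sketch like this can help:

```
# the NVCC that will be used to build detectron2/torchvision
nvcc --version

# the CUDA runtime that PyTorch itself was built with
python -c 'import torch; print(torch.version.cuda)'
```
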
</details>

<details>
<summary>
C++ compilation errors from NVCC.
</summary>
<br/>

1. The NVCC version has to match the CUDA version of your PyTorch.

2. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions.
   See [here](https://gist.github.com/ax3l/9489132) for some valid combinations.

The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`.

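For example, a quick way to see the relevant versions side by side; the `gcc-7`/`g++-7` pin at the end is only an illustration of selecting a specific compiler for the build, not a required version:

```
# the compiler and CUDA version PyTorch was built with
python -c 'import torch; print(torch.__config__.show())'

# your local toolchain
nvcc --version
g++ --version

# if needed, pin a compatible compiler when rebuilding detectron2 (illustrative)
CC=gcc-7 CXX=g++-7 python -m pip install -e detectron2
```
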
</details>

<details>
<summary>
"ImportError: cannot import name '_C'".
</summary>
<br/>

Please build and install detectron2 following the instructions above.

Or, if you are running code from detectron2's root directory, `cd` to a different one.
Otherwise you may not be able to import the code that you installed.

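To check which copy of detectron2 Python actually picks up, a quick sketch:

```
# run this from outside the detectron2 source tree;
# it should point at the installed package, not at the uncompiled source checkout
cd ~ && python -c 'import detectron2; print(detectron2.__file__)'
```
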
</details>

<details>
<summary>
Any issue on Windows.
</summary>
<br/>

Detectron2 is continuously built on Windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=master).
However, we do not provide official support for it.
PRs that improve code compatibility on Windows are welcome.

</details>

<details>
<summary>
ONNX conversion segfault after some "TraceWarning".
</summary>
<br/>

The ONNX package was compiled with a compiler that is too old.
Please build and install ONNX from its source code, using a compiler
whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`).
</details>

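A sketch of the ONNX source rebuild described in the last issue above; `gcc-7`/`g++-7` are placeholders for whichever compiler is close to your PyTorch build:

```
# see which compiler PyTorch was built with
python -c 'import torch; print(torch.__config__.show())'

# rebuild onnx from source with a comparable compiler instead of using the pre-built wheel
python -m pip uninstall -y onnx
CC=gcc-7 CXX=g++-7 python -m pip install --no-binary onnx onnx
```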