diff --git a/projects/DensePose/dev/README.md b/projects/DensePose/dev/README.md
new file mode 100644
index 0000000..e3a94b6
--- /dev/null
+++ b/projects/DensePose/dev/README.md
@@ -0,0 +1,7 @@
+
+## Some scripts for developers to use:
+
+- `run_instant_tests.sh`: run training for a few iterations.
+- `run_inference_tests.sh`: run inference on a small dataset.
+- `../../dev/linter.sh`: lint the codebase before committing.
+- `../../dev/parse_results.sh`: parse results from a log file.
diff --git a/projects/DensePose/dev/run_inference_tests.sh b/projects/DensePose/dev/run_inference_tests.sh
new file mode 100644
index 0000000..34f47d5
--- /dev/null
+++ b/projects/DensePose/dev/run_inference_tests.sh
@@ -0,0 +1,33 @@
+#!/bin/bash -e
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+BIN="python train_net.py"
+OUTPUT="inference_test_output"
+NUM_GPUS=2
+IMS_PER_GPU=2
+IMS_PER_BATCH=$(( NUM_GPUS * IMS_PER_GPU ))
+
+CFG_LIST=( "${@:1}" )
+
+if [ ${#CFG_LIST[@]} -eq 0 ]; then
+ CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml )
+fi
+
+echo "========================================================================"
+echo "Configs to run:"
+echo "${CFG_LIST[@]}"
+echo "========================================================================"
+
+for cfg in "${CFG_LIST[@]}"; do
+ echo "========================================================================"
+ echo "Running $cfg ..."
+ echo "========================================================================"
+ $BIN \
+ --eval-only \
+ --num-gpus $NUM_GPUS \
+ --config-file "$cfg" \
+ OUTPUT_DIR "$OUTPUT" \
+ SOLVER.IMS_PER_BATCH $IMS_PER_BATCH
+  rm -rf "$OUTPUT"
+done
+
diff --git a/projects/DensePose/dev/run_instant_tests.sh b/projects/DensePose/dev/run_instant_tests.sh
new file mode 100644
index 0000000..a537851
--- /dev/null
+++ b/projects/DensePose/dev/run_instant_tests.sh
@@ -0,0 +1,28 @@
+#!/bin/bash -e
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+BIN="python train_net.py"
+OUTPUT="instant_test_output"
+NUM_GPUS=2
+SOLVER_IMS_PER_BATCH=$((NUM_GPUS * 2))
+
+CFG_LIST=( "${@:1}" )
+if [ ${#CFG_LIST[@]} -eq 0 ]; then
+ CFG_LIST=( ./configs/quick_schedules/*instant_test.yaml )
+fi
+
+echo "========================================================================"
+echo "Configs to run:"
+echo "${CFG_LIST[@]}"
+echo "========================================================================"
+
+for cfg in "${CFG_LIST[@]}"; do
+ echo "========================================================================"
+ echo "Running $cfg ..."
+ echo "========================================================================"
+ $BIN --num-gpus $NUM_GPUS --config-file "$cfg" \
+ SOLVER.IMS_PER_BATCH $SOLVER_IMS_PER_BATCH \
+ OUTPUT_DIR "$OUTPUT"
+ rm -rf "$OUTPUT"
+done
+
diff --git a/projects/DensePose/doc/GETTING_STARTED.md b/projects/DensePose/doc/GETTING_STARTED.md
new file mode 100644
index 0000000..a6bcbed
--- /dev/null
+++ b/projects/DensePose/doc/GETTING_STARTED.md
@@ -0,0 +1,58 @@
+# Getting Started with DensePose
+
+## Inference with Pre-trained Models
+
+1. Pick a model and its config file from the [Model Zoo](MODEL_ZOO.md), for example [densepose_rcnn_R_50_FPN_s1x.yaml](../configs/densepose_rcnn_R_50_FPN_s1x.yaml).
+2. Run the [Apply Net](TOOL_APPLY_NET.md) tool to visualize the results or save them to disk. For example, to use contour visualization for DensePose, one can run:
+```bash
+python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml densepose_rcnn_R_50_FPN_s1x.pkl image.jpg dp_contour,bbox --output image_densepose_contour.png
+```
+Please see [Apply Net](TOOL_APPLY_NET.md) for more details on the tool.
+
+## Training
+
+First, prepare the [dataset](http://densepose.org/#dataset) into the following structure under the directory from which you run the training scripts:
+
+```
+datasets/coco/
+    annotations/
+        densepose_{train,minival,valminusminival}2014.json
+        densepose_minival2014_100.json   (optional, for testing only)
+    {train,val}2014/
+        # image files that are mentioned in the corresponding json
+```
+
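+As a quick sanity check before training, one can verify that the expected annotation
+files are in place. A minimal sketch (the file list is an assumption derived from the
+layout above; adjust it to the splits you actually use):
+
+```python
+import os
+
+# Check for the annotation files from the layout sketched above.
+required = [
+    "datasets/coco/annotations/densepose_train2014.json",
+    "datasets/coco/annotations/densepose_minival2014.json",
+    "datasets/coco/annotations/densepose_valminusminival2014.json",
+]
+for path in required:
+    status = "ok" if os.path.isfile(path) else "MISSING"
+    print(f"{status:8s}{path}")
+```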
+
+To train a model, one can use the [train_net.py](../train_net.py) script.
+This script was used to train all DensePose models in the [Model Zoo](MODEL_ZOO.md).
+For example, to launch end-to-end DensePose-RCNN training with the ResNet-50 FPN backbone
+on 8 GPUs following the s1x schedule, one can run:
+```bash
+python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml --num-gpus 8
+```
+The configs are made for 8-GPU training. To train on 1 GPU, one can apply the
+[linear learning rate scaling rule](https://arxiv.org/abs/1706.02677):
+```bash
+python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \
+ SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025
+```
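+
+The rule is linear in the total batch size: when `SOLVER.IMS_PER_BATCH` changes by a
+factor k, `SOLVER.BASE_LR` is multiplied by the same factor. A minimal sketch of the
+arithmetic (the reference values are assumptions matching the commands above; read the
+actual ones from the config file):
+
+```python
+# Linear LR scaling sketch; reference values assumed from the 8-GPU example above.
+REF_IMS_PER_BATCH = 16  # 8 GPUs x 2 images per GPU (assumed)
+REF_BASE_LR = 0.02      # assumed reference LR; check SOLVER.BASE_LR in the yaml
+
+
+def scaled_lr(ims_per_batch: int) -> float:
+    """Scale the learning rate linearly with the total batch size."""
+    return REF_BASE_LR * ims_per_batch / REF_IMS_PER_BATCH
+
+
+print(scaled_lr(2))  # -> 0.0025, as in the 1-GPU command above
+```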
+
+## Evaluation
+
+Model testing can be done in the same way as training, except that one additionally
+passes the `--eval-only` flag and specifies the model location via `MODEL.WEIGHTS model.pth`
+on the command line:
+```bash
+python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \
+ --eval-only MODEL.WEIGHTS model.pth
+```
+
+## Tools
+
+We provide tools which allow one to:
+ - easily view DensePose annotated data in a dataset;
+ - perform DensePose inference on a set of images;
+ - visualize DensePose model results.
+
+`query_db` is a tool to print or visualize DensePose data in a dataset.
+Please refer to [Query DB](TOOL_QUERY_DB.md) for more details on this tool.
+
+`apply_net` is a tool to print or visualize DensePose results.
+Please refer to [Apply Net](TOOL_APPLY_NET.md) for more details on this tool.
diff --git a/projects/DensePose/doc/MODEL_ZOO.md b/projects/DensePose/doc/MODEL_ZOO.md
new file mode 100644
index 0000000..c263084
--- /dev/null
+++ b/projects/DensePose/doc/MODEL_ZOO.md
@@ -0,0 +1,277 @@
+# Model Zoo and Baselines
+
+## Introduction
+
+We provide baselines trained with Detectron2 DensePose. The corresponding
+configuration files can be found in the [configs](../configs) directory.
+All models were trained on COCO `train2014` + `valminusminival2014` and
+evaluated on COCO `minival2014`. For details on the common settings used to train
+the baselines, please refer to the [Detectron2 Model Zoo](../../../MODEL_ZOO.md).
+
+## License
+
+All models available for download through this document are licensed under the
+[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
+
+## COCO DensePose Baselines with DensePose-RCNN
+
+### Legacy Models
+
+Baselines trained using schedules from [Güler et al., 2018](https://arxiv.org/pdf/1802.00434.pdf).
+
+
+
+### Improved Baselines, Original Fully Convolutional Head
+
+These models use an improved training schedule and the Panoptic FPN head from [Kirillov et al., 2019](https://arxiv.org/abs/1901.02446).
+
+
+
+### Improved Baselines, DeepLabV3 Head
+
+These models use an improved training schedule, the Panoptic FPN head from [Kirillov et al., 2019](https://arxiv.org/abs/1901.02446) and the DeepLabV3 head from [Chen et al., 2017](https://arxiv.org/abs/1706.05587).
+
+
+
+### Baselines with Confidence Estimation
+
+These models perform additional estimation of confidence in regressed UV coordinates, along the lines of [Neverova et al., 2019](https://papers.nips.cc/paper/8378-correlated-uncertainty-for-learning-dense-correspondences-from-noisy-labels).
+
+
+
+## Old Baselines
+
+It is still possible to use some baselines from [DensePose 1](https://github.com/facebookresearch/DensePose).
+Below are evaluation metrics for the baselines recomputed in the current framework:
+
+| Model | bbox AP | AP | AP50 | AP75 | APm | APl |
+|-------|---------|----|------|------|-----|-----|
+| [`ResNet50_FPN_s1x-e2e`](https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet50_FPN_s1x-e2e.pkl) | 54.673 | 48.894 | 84.963 | 50.717 | 43.132 | 50.433 |
+| [`ResNet101_FPN_s1x-e2e`](https://dl.fbaipublicfiles.com/densepose/DensePose_ResNet101_FPN_s1x-e2e.pkl) | 56.032 | 51.088 | 86.250 | 55.057 | 46.542 | 52.563 |
+
+Note: these scores are close, but not strictly equal to the ones reported in the [DensePose 1 Model Zoo](https://github.com/facebookresearch/DensePose/blob/master/MODEL_ZOO.md),
+which is due to small incompatibilities between the frameworks.
diff --git a/projects/DensePose/doc/TOOL_APPLY_NET.md b/projects/DensePose/doc/TOOL_APPLY_NET.md
new file mode 100644
index 0000000..f62bdbc
--- /dev/null
+++ b/projects/DensePose/doc/TOOL_APPLY_NET.md
@@ -0,0 +1,131 @@
+# Apply Net
+
+`apply_net` is a tool to print or visualize DensePose results on a set of images.
+It has two modes: `dump` to save DensePose model results to a pickle file
+and `show` to visualize them on images.
+
+## Dump Mode
+
+The general command form is:
+```bash
+python apply_net.py dump [-h] [-v] [--output <dump_file>] <config> <model> <input>
+```
+
+There are three mandatory arguments:
+ - `<config>`, configuration file for a given model;
+ - `<model>`, model file with trained parameters;
+ - `<input>`, input image file name, pattern or folder.
+
+One can additionally provide the `--output` argument to define the output file name,
+which defaults to `output.pkl`.
+
+
+Examples:
+
+1. Dump results of a DensePose model with ResNet-50 FPN backbone for images
+ in a folder `images` to file `dump.pkl`:
+```bash
+python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl images --output dump.pkl -v
+```
+
+2. Dump results of a DensePose model with ResNet-50 FPN backbone for images
+ with file name matching a pattern `image*.jpg` to file `results.pkl`:
+```bash
+python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl "image*.jpg" --output results.pkl -v
+```
+
+If you want to load the pickle file generated by the above command:
+```python
+import pickle
+import sys
+
+# make sure DensePose is in your PYTHONPATH, or use the following line to add it:
+sys.path.append("/your_detectron2_path/detectron2_repo/projects/DensePose/")
+
+with open('/your_result_path/results.pkl', 'rb') as f:
+    data = pickle.load(f)
+```
+
+The file `results.pkl` contains the list of results per image; for each image the result is a dictionary:
+```
+data: [{'file_name': '/your_path/image1.jpg',
+ 'scores': tensor([0.9884]),
+ 'pred_boxes_XYXY': tensor([[ 69.6114, 0.0000, 706.9797, 706.0000]]),
+ 'pred_densepose': <DensePoseResult object>},
+ {'file_name': '/your_path/image2.jpg',
+ 'scores': tensor([0.9999, 0.5373, 0.3991]),
+ 'pred_boxes_XYXY': tensor([[ 59.5734, 7.7535, 579.9311, 932.3619],
+ [612.9418, 686.1254, 612.9999, 704.6053],
+ [164.5081, 407.4034, 598.3944, 920.4266]]),
+ 'pred_densepose': <DensePoseResult object>}]
+```
+
+We can use the following code to parse the outputs of the first
+detected instance on the first image:
+```python
+from densepose.data.structures import DensePoseResult
+img_id, instance_id = 0, 0 # Look at the first image and the first detected instance
+bbox_xyxy = data[img_id]['pred_boxes_XYXY'][instance_id]
+result_encoded = data[img_id]['pred_densepose'].results[instance_id]
+iuv_arr = DensePoseResult.decode_png_data(*result_encoded)
+```
+The array `bbox_xyxy` contains (x0, y0, x1, y1) of the bounding box.
+
+The shape of `iuv_arr` is `[3, H, W]`, where `(H, W)` is the height and width of the bounding box.
+- `iuv_arr[0,:,:]`: The patch index of image points, indicating which of the 24 surface patches the point is on.
+- `iuv_arr[1,:,:]`: The U-coordinate value of image points.
+- `iuv_arr[2,:,:]`: The V-coordinate value of image points.
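+
+For instance, one can check which body parts were detected and recover per-pixel U/V
+values. A minimal sketch, assuming `iuv_arr` is a `uint8` array with U and V stored on
+a 0-255 scale (verify the encoding for your version):
+
+```python
+import numpy as np
+
+# Continues the snippet above; `iuv_arr` has shape [3, H, W].
+i_map = iuv_arr[0]                             # part index per pixel; 0 = background
+u_map = iuv_arr[1].astype(np.float32) / 255.0  # U in [0, 1] (assumed scaling)
+v_map = iuv_arr[2].astype(np.float32) / 255.0  # V in [0, 1] (assumed scaling)
+
+print("parts present:", np.unique(i_map))
+print("mean U over foreground:", u_map[i_map > 0].mean())
+print("mean V over foreground:", v_map[i_map > 0].mean())
+```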
+
+
+## Visualization Mode
+
+The general command form is:
+```bash
+python apply_net.py show [-h] [-v] [--min_score <score>] [--nms_thresh <threshold>] [--output <image_file>] <config> <model> <input> <visualizations>
+```
+
+There are four mandatory arguments:
+ - `<config>`, configuration file for a given model;
+ - `<model>`, model file with trained parameters;
+ - `<input>`, input image file name, pattern or folder;
+ - `<visualizations>`, visualizations specifier; currently available visualizations are:
+ * `bbox` - bounding boxes of detected persons;
+ * `dp_segm` - segmentation masks for detected persons;
+ * `dp_u` - each body part is colored according to the estimated values of the
+ U coordinate in part parameterization;
+ * `dp_v` - each body part is colored according to the estimated values of the
+ V coordinate in part parameterization;
+ * `dp_contour` - plots contours with color-coded U and V coordinates.
+
+
+One can additionally provide the following optional arguments:
+ - `--min_score` to only show detections with scores not lower than the provided value;
+ - `--nms_thresh` to additionally apply non-maximum suppression to detections at the given threshold;
+ - `--output` to define the visualization file name template, which defaults to `output.png`.
+   To distinguish output file names for different images, the tool appends a 1-based entry index,
+   e.g. output.0001.png, output.0002.png, etc. (see the sketch below).
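+
+A sketch of this naming scheme (the 4-digit zero padding is inferred from the example
+names above):
+
+```python
+# Hypothetical reconstruction of the output naming described above.
+template = "output.png"
+base, ext = template.rsplit(".", 1)
+# 1-based entry index, zero-padded to 4 digits as in output.0001.png
+names = [f"{base}.{i:04d}.{ext}" for i in range(1, 4)]
+print(names)  # ['output.0001.png', 'output.0002.png', 'output.0003.png']
+```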
+
+
+The following examples show how to output results of a DensePose model
+with ResNet-50 FPN backbone using different visualizations for image `image.jpg`:
+
+1. Show bounding box and segmentation:
+```bash
+python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl image.jpg bbox,dp_segm -v
+```
+
+
+2. Show bounding box and estimated U coordinates for body parts:
+```bash
+python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl image.jpg bbox,dp_u -v
+```
+
+
+3. Show bounding box and estimated V coordinates for body parts:
+```bash
+python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl image.jpg bbox,dp_v -v
+```
+
+
+4. Show bounding box and estimated U and V coordinates via contour plots:
+```bash
+python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl image.jpg dp_contour,bbox -v
+```
+
diff --git a/projects/DensePose/doc/TOOL_QUERY_DB.md b/projects/DensePose/doc/TOOL_QUERY_DB.md
new file mode 100644
index 0000000..b0a764b
--- /dev/null
+++ b/projects/DensePose/doc/TOOL_QUERY_DB.md
@@ -0,0 +1,105 @@
+
+# Query Dataset
+
+`query_db` is a tool to print or visualize DensePose data from a dataset.
+It has two modes: `print`, which outputs dataset entries to standard output,
+and `show`, which visualizes them on images.
+
+## Print Mode
+
+The general command form is:
+```bash
+python query_db.py print [-h] [-v] [--max-entries N] <dataset> <selector>
+```
+
+There are two mandatory arguments:
+ - `<dataset>`, DensePose dataset specification from which to select
+   the entries (e.g. `densepose_coco_2014_train`);
+ - `<selector>`, dataset entry selector, which can be a single specification
+   or a comma-separated list of specifications of the form
+   `field[:type]=value` for exact match with the value,
+   or `field[:type]=min-max` for a range of values.
+
+One can additionally limit the maximum number of entries to output
+by providing the `--max-entries` argument.
+
+Examples:
+
+1. Output at most the first 10 entries from the `densepose_coco_2014_train` dataset:
+```bash
+python query_db.py print densepose_coco_2014_train \* --max-entries 10 -v
+```
+
+2. Output all entries with `file_name` equal to `COCO_train2014_000000000036.jpg`:
+```bash
+python query_db.py print densepose_coco_2014_train file_name=COCO_train2014_000000000036.jpg -v
+```
+
+3. Output all entries with `image_id` between 36 and 156:
+```bash
+python query_db.py print densepose_coco_2014_train image_id:int=36-156 -v
+```
+
+## Visualization Mode
+
+The general command form is:
+```bash
+python query_db.py show [-h] [-v] [--max-entries N] [--output <image_file>] <dataset> <selector> <visualizations>
+```
+
+There are three mandatory arguments:
+ - `<dataset>`, DensePose dataset specification from which to select
+   the entries (e.g. `densepose_coco_2014_train`);
+ - `<selector>`, dataset entry selector, which can be a single specification
+   or a comma-separated list of specifications of the form
+   `field[:type]=value` for exact match with the value,
+   or `field[:type]=min-max` for a range of values;
+ - `<visualizations>`, visualizations specifier; currently available visualizations are:
+ * `bbox` - bounding boxes of annotated persons;
+ * `dp_i` - annotated points colored according to the containing part;
+ * `dp_pts` - annotated points in green color;
+ * `dp_segm` - segmentation masks for annotated persons;
+ * `dp_u` - annotated points colored according to their U coordinate in part parameterization;
+ * `dp_v` - annotated points colored according to their V coordinate in part parameterization.
+
+One can additionally provide the following optional arguments:
+ - `--max-entries` to limit the maximum number of entries to visualize;
+ - `--output` to provide the visualization file name template, which defaults
+   to `output.png`. To distinguish file names for different dataset
+   entries, the tool appends a 1-based entry index to the output file name,
+   e.g. output.0001.png, output.0002.png, etc.
+
+The following examples show how to output different visualizations for the image with `id = 322`
+from the `densepose_coco_2014_train` dataset:
+
+1. Show bounding box and segmentation:
+```bash
+python query_db.py show densepose_coco_2014_train image_id:int=322 bbox,dp_segm -v
+```
+
+
+2. Show bounding box and points colored according to the containing part:
+```bash
+python query_db.py show densepose_coco_2014_train image_id:int=322 bbox,dp_i -v
+```
+
+
+3. Show bounding box and annotated points in green color:
+```bash
+python query_db.py show densepose_coco_2014_train image_id:int=322 bbox,dp_pts -v
+```
+
+
+4. Show bounding box and annotated points colored according to their U coordinate in part parameterization:
+```bash
+python query_db.py show densepose_coco_2014_train image_id:int=322 bbox,dp_u -v
+```
+
+
+5. Show bounding box and annotated points colored according to their V coordinate in part parameterization:
+```bash
+python query_db.py show densepose_coco_2014_train image_id:int=322 bbox,dp_v -v
+```
+
+
+
diff --git a/projects/DensePose/doc/images/res_bbox_dp_contour.jpg b/projects/DensePose/doc/images/res_bbox_dp_contour.jpg
new file mode 100644
index 0000000..8f0c195
Binary files /dev/null and b/projects/DensePose/doc/images/res_bbox_dp_contour.jpg differ
diff --git a/projects/DensePose/doc/images/res_bbox_dp_segm.jpg b/projects/DensePose/doc/images/res_bbox_dp_segm.jpg
new file mode 100644
index 0000000..855fb7f
Binary files /dev/null and b/projects/DensePose/doc/images/res_bbox_dp_segm.jpg differ
diff --git a/projects/DensePose/doc/images/res_bbox_dp_u.jpg b/projects/DensePose/doc/images/res_bbox_dp_u.jpg
new file mode 100644
index 0000000..fd4e77b
Binary files /dev/null and b/projects/DensePose/doc/images/res_bbox_dp_u.jpg differ
diff --git a/projects/DensePose/doc/images/res_bbox_dp_v.jpg b/projects/DensePose/doc/images/res_bbox_dp_v.jpg
new file mode 100644
index 0000000..09a8197
Binary files /dev/null and b/projects/DensePose/doc/images/res_bbox_dp_v.jpg differ
diff --git a/projects/DensePose/doc/images/vis_bbox_dp_i.jpg b/projects/DensePose/doc/images/vis_bbox_dp_i.jpg
new file mode 100644
index 0000000..113dd84
Binary files /dev/null and b/projects/DensePose/doc/images/vis_bbox_dp_i.jpg differ
diff --git a/projects/DensePose/doc/images/vis_bbox_dp_pts.jpg b/projects/DensePose/doc/images/vis_bbox_dp_pts.jpg
new file mode 100644
index 0000000..1a81dae
Binary files /dev/null and b/projects/DensePose/doc/images/vis_bbox_dp_pts.jpg differ
diff --git a/projects/DensePose/doc/images/vis_bbox_dp_segm.jpg b/projects/DensePose/doc/images/vis_bbox_dp_segm.jpg
new file mode 100644
index 0000000..b17f831
Binary files /dev/null and b/projects/DensePose/doc/images/vis_bbox_dp_segm.jpg differ
diff --git a/projects/DensePose/doc/images/vis_bbox_dp_u.jpg b/projects/DensePose/doc/images/vis_bbox_dp_u.jpg
new file mode 100644
index 0000000..e21be74
Binary files /dev/null and b/projects/DensePose/doc/images/vis_bbox_dp_u.jpg differ
diff --git a/projects/DensePose/doc/images/vis_bbox_dp_v.jpg b/projects/DensePose/doc/images/vis_bbox_dp_v.jpg
new file mode 100644
index 0000000..7bcab2c
Binary files /dev/null and b/projects/DensePose/doc/images/vis_bbox_dp_v.jpg differ
diff --git a/projects/DensePose/tests/common.py b/projects/DensePose/tests/common.py
new file mode 100644
index 0000000..c8a8975
--- /dev/null
+++ b/projects/DensePose/tests/common.py
@@ -0,0 +1,124 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import os
+import torch
+
+from detectron2.config import get_cfg
+from detectron2.engine import default_setup
+from detectron2.modeling import build_model
+
+from densepose import add_densepose_config
+
+_BASE_CONFIG_DIR = "configs"
+_EVOLUTION_CONFIG_SUB_DIR = "evolution"
+_HRNET_CONFIG_SUB_DIR = "HRNet"
+_QUICK_SCHEDULES_CONFIG_SUB_DIR = "quick_schedules"
+_BASE_CONFIG_FILE_PREFIX = "Base-"
+_CONFIG_FILE_EXT = ".yaml"
+
+
+def _get_base_config_dir():
+ """
+ Return the base directory for configurations
+ """
+ return os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", _BASE_CONFIG_DIR)
+
+
+def _get_evolution_config_dir():
+ """
+ Return the base directory for evolution configurations
+ """
+ return os.path.join(_get_base_config_dir(), _EVOLUTION_CONFIG_SUB_DIR)
+
+
+def _get_hrnet_config_dir():
+ """
+ Return the base directory for HRNet configurations
+ """
+ return os.path.join(_get_base_config_dir(), _HRNET_CONFIG_SUB_DIR)
+
+
+def _get_quick_schedules_config_dir():
+ """
+ Return the base directory for quick schedules configurations
+ """
+ return os.path.join(_get_base_config_dir(), _QUICK_SCHEDULES_CONFIG_SUB_DIR)
+
+
+def _collect_config_files(config_dir):
+ """
+ Collect all configuration files (i.e. densepose_*.yaml) directly in the specified directory
+ """
+ start = _get_base_config_dir()
+ results = []
+ for entry in os.listdir(config_dir):
+ path = os.path.join(config_dir, entry)
+ if not os.path.isfile(path):
+ continue
+ _, ext = os.path.splitext(entry)
+ if ext != _CONFIG_FILE_EXT:
+ continue
+ if entry.startswith(_BASE_CONFIG_FILE_PREFIX):
+ continue
+ config_file = os.path.relpath(path, start)
+ results.append(config_file)
+ return results
+
+
+def get_config_files():
+ """
+ Get all the configuration files (relative to the base configuration directory)
+ """
+ return _collect_config_files(_get_base_config_dir())
+
+
+def get_evolution_config_files():
+ """
+ Get all the evolution configuration files (relative to the base configuration directory)
+ """
+ return _collect_config_files(_get_evolution_config_dir())
+
+
+def get_hrnet_config_files():
+ """
+ Get all the HRNet configuration files (relative to the base configuration directory)
+ """
+ return _collect_config_files(_get_hrnet_config_dir())
+
+
+def get_quick_schedules_config_files():
+ """
+ Get all the quick schedules configuration files (relative to the base configuration directory)
+ """
+ return _collect_config_files(_get_quick_schedules_config_dir())
+
+
+def _get_model_config(config_file):
+ """
+ Load and return the configuration from the specified file (relative to the base configuration
+ directory)
+ """
+ cfg = get_cfg()
+ add_densepose_config(cfg)
+ path = os.path.join(_get_base_config_dir(), config_file)
+ cfg.merge_from_file(path)
+ if not torch.cuda.is_available():
+        cfg.MODEL.DEVICE = "cpu"
+ return cfg
+
+
+def get_model(config_file):
+ """
+ Get the model from the specified file (relative to the base configuration directory)
+ """
+ cfg = _get_model_config(config_file)
+ return build_model(cfg)
+
+
+def setup(config_file):
+ """
+    Set up the configuration from the specified file (relative to the base configuration directory)
+ """
+ cfg = _get_model_config(config_file)
+ cfg.freeze()
+ default_setup(cfg, {})
diff --git a/projects/DensePose/tests/test_combine_data_loader.py b/projects/DensePose/tests/test_combine_data_loader.py
new file mode 100644
index 0000000..b74aaea
--- /dev/null
+++ b/projects/DensePose/tests/test_combine_data_loader.py
@@ -0,0 +1,46 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import random
+import unittest
+from typing import Any, Iterable, Iterator, Tuple
+
+from densepose.data import CombinedDataLoader
+
+
+def _grouper(iterable: Iterable[Any], n: int, fillvalue=None) -> Iterator[Tuple[Any, ...]]:
+ """
+ Group elements of an iterable by chunks of size `n`, e.g.
+ grouper(range(9), 4) ->
+ (0, 1, 2, 3), (4, 5, 6, 7), (8, None, None, None)
+ """
+    it = iter(iterable)
+    while True:
+        values = []
+        for _ in range(n):
+            try:
+                value = next(it)
+            except StopIteration:
+                if values:
+                    # pad the final, incomplete chunk with `fillvalue`
+                    values.extend([fillvalue] * (n - len(values)))
+                    yield tuple(values)
+                return
+            values.append(value)
+        yield tuple(values)
+
+
+class TestCombinedDataLoader(unittest.TestCase):
+ def test_combine_loaders_1(self):
+ loader1 = _grouper([f"1_{i}" for i in range(10)], 2)
+ loader2 = _grouper([f"2_{i}" for i in range(11)], 3)
+ batch_size = 4
+ ratios = (0.1, 0.9)
+ random.seed(43)
+ combined = CombinedDataLoader((loader1, loader2), batch_size, ratios)
+ BATCHES_GT = [
+ ["1_0", "1_1", "2_0", "2_1"],
+ ["2_2", "2_3", "2_4", "2_5"],
+ ["1_2", "1_3", "2_6", "2_7"],
+ ["2_8", "2_9", "2_10", None],
+ ]
+ for i, batch in enumerate(combined):
+ self.assertEqual(len(batch), batch_size)
+ self.assertEqual(batch, BATCHES_GT[i])
diff --git a/projects/DensePose/tests/test_frame_selector.py b/projects/DensePose/tests/test_frame_selector.py
new file mode 100644
index 0000000..2ad6607
--- /dev/null
+++ b/projects/DensePose/tests/test_frame_selector.py
@@ -0,0 +1,60 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+import random
+import unittest
+
+from densepose.data.video import FirstKFramesSelector, LastKFramesSelector, RandomKFramesSelector
+
+
+class TestFrameSelector(unittest.TestCase):
+ def test_frame_selector_random_k_1(self):
+ _SEED = 43
+ _K = 4
+ random.seed(_SEED)
+ selector = RandomKFramesSelector(_K)
+ frame_tss = list(range(0, 20, 2))
+ _SELECTED_GT = [0, 8, 4, 6]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
+
+ def test_frame_selector_random_k_2(self):
+ _SEED = 43
+ _K = 10
+ random.seed(_SEED)
+ selector = RandomKFramesSelector(_K)
+ frame_tss = list(range(0, 6, 2))
+ _SELECTED_GT = [0, 2, 4]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
+
+ def test_frame_selector_first_k_1(self):
+ _K = 4
+ selector = FirstKFramesSelector(_K)
+ frame_tss = list(range(0, 20, 2))
+ _SELECTED_GT = frame_tss[:_K]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
+
+ def test_frame_selector_first_k_2(self):
+ _K = 10
+ selector = FirstKFramesSelector(_K)
+ frame_tss = list(range(0, 6, 2))
+ _SELECTED_GT = frame_tss[:_K]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
+
+ def test_frame_selector_last_k_1(self):
+ _K = 4
+ selector = LastKFramesSelector(_K)
+ frame_tss = list(range(0, 20, 2))
+ _SELECTED_GT = frame_tss[-_K:]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
+
+ def test_frame_selector_last_k_2(self):
+ _K = 10
+ selector = LastKFramesSelector(_K)
+ frame_tss = list(range(0, 6, 2))
+ _SELECTED_GT = frame_tss[-_K:]
+ selected = selector(frame_tss)
+ self.assertEqual(_SELECTED_GT, selected)
diff --git a/projects/DensePose/tests/test_image_resize_transform.py b/projects/DensePose/tests/test_image_resize_transform.py
new file mode 100644
index 0000000..9355f54
--- /dev/null
+++ b/projects/DensePose/tests/test_image_resize_transform.py
@@ -0,0 +1,16 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+import unittest
+import torch
+
+from densepose.data.transform import ImageResizeTransform
+
+
+class TestImageResizeTransform(unittest.TestCase):
+ def test_image_resize_1(self):
+ images_batch = torch.ones((3, 100, 100, 3), dtype=torch.uint8) * 100
+ transform = ImageResizeTransform()
+ images_transformed = transform(images_batch)
+ IMAGES_GT = torch.ones((3, 3, 800, 800), dtype=torch.float) * 100
+ self.assertEqual(images_transformed.size(), IMAGES_GT.size())
+ self.assertAlmostEqual(torch.abs(IMAGES_GT - images_transformed).max().item(), 0.0)
diff --git a/projects/DensePose/tests/test_model_e2e.py b/projects/DensePose/tests/test_model_e2e.py
new file mode 100644
index 0000000..eed1310
--- /dev/null
+++ b/projects/DensePose/tests/test_model_e2e.py
@@ -0,0 +1,43 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import unittest
+import torch
+
+from detectron2.structures import BitMasks, Boxes, Instances
+
+from .common import get_model
+
+
+# TODO(plabatut): Modularize detectron2 tests and re-use
+def make_model_inputs(image, instances=None):
+ if instances is None:
+ return {"image": image}
+
+ return {"image": image, "instances": instances}
+
+
+def make_empty_instances(h, w):
+ instances = Instances((h, w))
+ instances.gt_boxes = Boxes(torch.rand(0, 4))
+ instances.gt_classes = torch.tensor([]).to(dtype=torch.int64)
+ instances.gt_masks = BitMasks(torch.rand(0, h, w))
+ return instances
+
+
+class ModelE2ETest(unittest.TestCase):
+ CONFIG_PATH = ""
+
+ def setUp(self):
+ self.model = get_model(self.CONFIG_PATH)
+
+ def _test_eval(self, sizes):
+ inputs = [make_model_inputs(torch.rand(3, size[0], size[1])) for size in sizes]
+ self.model.eval()
+ self.model(inputs)
+
+
+class DensePoseRCNNE2ETest(ModelE2ETest):
+ CONFIG_PATH = "densepose_rcnn_R_101_FPN_s1x.yaml"
+
+ def test_empty_data(self):
+ self._test_eval([(200, 250), (200, 249)])
diff --git a/projects/DensePose/tests/test_setup.py b/projects/DensePose/tests/test_setup.py
new file mode 100644
index 0000000..a80d198
--- /dev/null
+++ b/projects/DensePose/tests/test_setup.py
@@ -0,0 +1,36 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import unittest
+
+from .common import (
+ get_config_files,
+ get_evolution_config_files,
+ get_hrnet_config_files,
+ get_quick_schedules_config_files,
+ setup,
+)
+
+
+class TestSetup(unittest.TestCase):
+ def _test_setup(self, config_file):
+ setup(config_file)
+
+ def test_setup_configs(self):
+ config_files = get_config_files()
+ for config_file in config_files:
+ self._test_setup(config_file)
+
+ def test_setup_evolution_configs(self):
+ config_files = get_evolution_config_files()
+ for config_file in config_files:
+ self._test_setup(config_file)
+
+ def test_setup_hrnet_configs(self):
+ config_files = get_hrnet_config_files()
+ for config_file in config_files:
+ self._test_setup(config_file)
+
+ def test_setup_quick_schedules_configs(self):
+ config_files = get_quick_schedules_config_files()
+ for config_file in config_files:
+ self._test_setup(config_file)
diff --git a/projects/DensePose/tests/test_structures.py b/projects/DensePose/tests/test_structures.py
new file mode 100644
index 0000000..ad97c23
--- /dev/null
+++ b/projects/DensePose/tests/test_structures.py
@@ -0,0 +1,25 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import unittest
+
+from densepose.data.structures import normalized_coords_transform
+
+
+class TestStructures(unittest.TestCase):
+ def test_normalized_coords_transform(self):
+ bbox = (32, 24, 288, 216)
+ x0, y0, w, h = bbox
+ xmin, ymin, xmax, ymax = x0, y0, x0 + w, y0 + h
+ f = normalized_coords_transform(*bbox)
+ # Top-left
+ expected_p, actual_p = (-1, -1), f((xmin, ymin))
+ self.assertEqual(expected_p, actual_p)
+ # Top-right
+ expected_p, actual_p = (1, -1), f((xmax, ymin))
+ self.assertEqual(expected_p, actual_p)
+ # Bottom-left
+ expected_p, actual_p = (-1, 1), f((xmin, ymax))
+ self.assertEqual(expected_p, actual_p)
+ # Bottom-right
+ expected_p, actual_p = (1, 1), f((xmax, ymax))
+ self.assertEqual(expected_p, actual_p)
diff --git a/projects/DensePose/tests/test_video_keyframe_dataset.py b/projects/DensePose/tests/test_video_keyframe_dataset.py
new file mode 100644
index 0000000..afcf99e
--- /dev/null
+++ b/projects/DensePose/tests/test_video_keyframe_dataset.py
@@ -0,0 +1,92 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
+
+import contextlib
+import os
+import random
+import tempfile
+import unittest
+import torch
+import torchvision.io as io
+
+from densepose.data.transform import ImageResizeTransform
+from densepose.data.video import RandomKFramesSelector, VideoKeyframeDataset
+
+try:
+ import av
+except ImportError:
+ av = None
+
+
+# copied from torchvision test/test_io.py
+def _create_video_frames(num_frames, height, width):
+ y, x = torch.meshgrid(torch.linspace(-2, 2, height), torch.linspace(-2, 2, width))
+ data = []
+ for i in range(num_frames):
+ xc = float(i) / num_frames
+ yc = 1 - float(i) / (2 * num_frames)
+ d = torch.exp(-((x - xc) ** 2 + (y - yc) ** 2) / 2) * 255
+ data.append(d.unsqueeze(2).repeat(1, 1, 3).byte())
+ return torch.stack(data, 0)
+
+
+# adapted from torchvision test/test_io.py
+@contextlib.contextmanager
+def temp_video(num_frames, height, width, fps, lossless=False, video_codec=None, options=None):
+ if lossless:
+ if video_codec is not None:
+ raise ValueError("video_codec can't be specified together with lossless")
+ if options is not None:
+ raise ValueError("options can't be specified together with lossless")
+ video_codec = "libx264rgb"
+ options = {"crf": "0"}
+ if video_codec is None:
+ video_codec = "libx264"
+ if options is None:
+ options = {}
+ data = _create_video_frames(num_frames, height, width)
+ with tempfile.NamedTemporaryFile(suffix=".mp4") as f:
+ f.close()
+ io.write_video(f.name, data, fps=fps, video_codec=video_codec, options=options)
+ yield f.name, data
+ os.unlink(f.name)
+
+
+@unittest.skipIf(av is None, "PyAV unavailable")
+class TestVideoKeyframeDataset(unittest.TestCase):
+ def test_read_keyframes_all(self):
+ with temp_video(60, 300, 300, 5, video_codec="mpeg4") as (fname, data):
+ video_list = [fname]
+ dataset = VideoKeyframeDataset(video_list)
+ self.assertEqual(len(dataset), 1)
+ data1 = dataset[0]
+ self.assertEqual(data1.shape, torch.Size((5, 300, 300, 3)))
+ self.assertEqual(data1.dtype, torch.uint8)
+
+ def test_read_keyframes_with_selector(self):
+ with temp_video(60, 300, 300, 5, video_codec="mpeg4") as (fname, data):
+ video_list = [fname]
+ random.seed(0)
+ frame_selector = RandomKFramesSelector(3)
+ dataset = VideoKeyframeDataset(video_list, frame_selector)
+ self.assertEqual(len(dataset), 1)
+ data1 = dataset[0]
+ self.assertEqual(data1.shape, torch.Size((3, 300, 300, 3)))
+ self.assertEqual(data1.dtype, torch.uint8)
+
+ def test_read_keyframes_with_selector_with_transform(self):
+ with temp_video(60, 300, 300, 5, video_codec="mpeg4") as (fname, data):
+ video_list = [fname]
+ random.seed(0)
+ frame_selector = RandomKFramesSelector(1)
+ transform = ImageResizeTransform()
+ dataset = VideoKeyframeDataset(video_list, frame_selector, transform)
+ data1 = dataset[0]
+ self.assertEqual(len(dataset), 1)
+ self.assertEqual(data1.shape, torch.Size((1, 3, 800, 800)))
+ self.assertEqual(data1.dtype, torch.float32)