Update MMSegmentation_Tutorial.ipynb (#1281)
commit f5ba2ea401 (parent 0166616d49)
@@ -33,7 +33,7 @@
 "## Install MMSegmentation\n",
 "This step may take several minutes. \n",
 "\n",
-"We use PyTorch 1.5.0 and CUDA 10.1 for this tutorial. You may install other versions by change the version number in pip install command. "
+"We use PyTorch 1.5.0 and CUDA 10.1 for this tutorial. You may install other versions by changing the version number in pip install command. "
 ]
 },
 {
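For reference, installing the pinned versions in a notebook typically looks like the sketch below. The mmcv-full find-links URL and the choice of installing MMSegmentation from GitHub are assumptions in the spirit of the tutorial, not taken from this commit:

    # Install PyTorch 1.5.0 built against CUDA 10.1 (change the version numbers for other setups)
    !pip install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
    # Install a matching prebuilt mmcv-full wheel (URL pattern assumed from OpenMMLab's wheel index)
    !pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.5.0/index.html
    # Install MMSegmentation itself from source (assumed install path)
    !pip install git+https://github.com/open-mmlab/mmsegmentation.git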
@@ -214,7 +214,7 @@
 "source": [
 "## Train a semantic segmentation model on a new dataset\n",
 "\n",
-"To train on a customized dataset, the following steps are neccessary. \n",
+"To train on a customized dataset, the following steps are necessary. \n",
 "1. Add a new dataset class. \n",
 "2. Create a config file accordingly. \n",
 "3. Perform training and evaluation. "
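Step 1 usually amounts to subclassing CustomDataset and registering it with the dataset registry. A minimal sketch, assuming mmseg 0.x import paths, the class name StanfordBackgroundDataset, and .jpg/.png file suffixes; the 8 class names come from the dataset description further down in this diff:

    from mmseg.datasets.builder import DATASETS
    from mmseg.datasets.custom import CustomDataset

    @DATASETS.register_module()
    class StanfordBackgroundDataset(CustomDataset):
        # The 8 region classes of the Stanford Background Dataset
        CLASSES = ('sky', 'tree', 'road', 'grass', 'water',
                   'building', 'mountain', 'foreground object')
        def __init__(self, **kwargs):
            # Assumed suffixes: .jpg images, .png segmentation maps
            super().__init__(img_suffix='.jpg', seg_map_suffix='.png', **kwargs)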
@@ -228,11 +228,11 @@
 "source": [
 "### Add a new dataset\n",
 "\n",
-"Datasets in MMSegmentation require image and semantic segmentation maps to be placed in folders with the same perfix. To support a new dataset, we may need to modify the original file structure. \n",
+"Datasets in MMSegmentation require image and semantic segmentation maps to be placed in folders with the same prefix. To support a new dataset, we may need to modify the original file structure. \n",
 "\n",
 "In this tutorial, we give an example of converting the dataset. You may refer to [docs](https://github.com/open-mmlab/mmsegmentation/docs/en/tutorials/new_dataset.md) for details about dataset reorganization. \n",
 "\n",
-"We use [Standord Background Dataset](http://dags.stanford.edu/projects/scenedataset.html) as an example. The dataset contains 715 images chosen from existing public datasets [LabelMe](http://labelme.csail.mit.edu), [MSRC](http://research.microsoft.com/en-us/projects/objectclassrecognition), [PASCAL VOC](http://pascallin.ecs.soton.ac.uk/challenges/VOC) and [Geometric Context](http://www.cs.illinois.edu/homes/dhoiem/). Images from these datasets are mainly outdoor scenes, each containing approximately 320-by-240 pixels. \n",
+"We use [Stanford Background Dataset](http://dags.stanford.edu/projects/scenedataset.html) as an example. The dataset contains 715 images chosen from existing public datasets [LabelMe](http://labelme.csail.mit.edu), [MSRC](http://research.microsoft.com/en-us/projects/objectclassrecognition), [PASCAL VOC](http://pascallin.ecs.soton.ac.uk/challenges/VOC) and [Geometric Context](http://www.cs.illinois.edu/homes/dhoiem/). Images from these datasets are mainly outdoor scenes, each containing approximately 320-by-240 pixels. \n",
 "In this tutorial, we use the region annotations as labels. There are 8 classes in total, i.e. sky, tree, road, grass, water, building, mountain, and foreground object. "
 ]
 },
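Converting the region annotations into per-pixel label images might look like the following; a sketch under the assumption that each labels/*.regions.txt file holds one space-separated integer class label per pixel, with the output directory name chosen here for illustration:

    import os
    import os.path as osp
    import numpy as np
    from PIL import Image

    data_root = 'iccv09Data'                      # extracted archive
    out_dir = osp.join(data_root, 'labels_png')   # hypothetical output folder
    os.makedirs(out_dir, exist_ok=True)

    for fname in os.listdir(osp.join(data_root, 'labels')):
        if not fname.endswith('.regions.txt'):
            continue
        # One integer label per pixel; -1 (unlabeled) wraps to 255 under uint8,
        # a common ignore index in segmentation pipelines
        seg_map = np.loadtxt(osp.join(data_root, 'labels', fname)).astype(np.uint8)
        Image.fromarray(seg_map).save(
            osp.join(out_dir, fname.replace('.regions.txt', '.png')))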
@@ -249,8 +249,8 @@
 "outputs": [],
 "source": [
 "# download and unzip\n",
-"!wget http://dags.stanford.edu/data/iccv09Data.tar.gz -O standford_background.tar.gz\n",
-"!tar xf standford_background.tar.gz"
+"!wget http://dags.stanford.edu/data/iccv09Data.tar.gz -O stanford_background.tar.gz\n",
+"!tar xf stanford_background.tar.gz"
 ]
 },
 {
@@ -423,7 +423,7 @@
 "id": "1y2oV5w97jQo"
 },
 "source": [
-"Since the given config is used to train PSPNet on cityscapes dataset, we need to modify it accordingly for our new dataset. "
+"Since the given config is used to train PSPNet on the cityscapes dataset, we need to modify it accordingly for our new dataset. "
 ]
 },
 {
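Such a modification is typically done by loading the config and overriding fields; a minimal sketch assuming the mmseg 0.x config layout, with the field values illustrative rather than taken from this commit:

    from mmcv import Config

    cfg = Config.fromfile('configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
    # Cityscapes has 19 classes; the Stanford Background Dataset has 8
    cfg.model.decode_head.num_classes = 8
    cfg.model.auxiliary_head.num_classes = 8
    # Point the data pipeline at the new dataset (names assumed from the sketch above)
    cfg.dataset_type = 'StanfordBackgroundDataset'
    cfg.data_root = 'iccv09Data'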