mirror of https://github.com/alibaba/EasyCV.git
README.md
## Maintain docs

1. Install the requirements needed to build the docs:

   ```shell
   # in the easycv root dir
   pip install -r requirements/docs.txt
   ```

2. Build the docs:

   ```shell
   # in the easycv/docs dir
   bash build_docs.sh
   ```
3. Docstring format

   We adopt the Google style docstring format as the standard; please refer to the following documents:

   - Google Python style guide docstring link
   - Google docstring example link
   - Sample: torch.nn.modules.conv link

   Take the ``Transformer`` class as an example:
   ```python
   class Transformer(base.Layer):
       """Transformer model from ``Attention Is All You Need``.

       Original paper: https://arxiv.org/abs/1706.03762

       Args:
           num_token (int): vocab size.
           num_layer (int): num of layers.
           num_head (int): num of attention heads.
           embedding_dim (int): embedding dimension.
           attention_head_dim (int): attention head dimension.
           feed_forward_dim (int): feed forward dimension.
           initializer: initializer type.
           activation: activation function.
           dropout (float): dropout rate (0.0 to 1.0).
           attention_dropout (float): dropout rate for the attention layer.

       Returns:
           None
       """
   ```
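As a small runnable illustration of the same format, here is a minimal sketch; the function and its behavior are hypothetical examples for this README, not part of EasyCV:

```python
def clamp_dropout(rate):
    """Clamp a dropout rate into the valid range.

    Note: hypothetical helper used only to illustrate the
    Google-style ``Args:`` / ``Returns:`` sections.

    Args:
        rate (float): requested dropout rate; may fall outside [0.0, 1.0].

    Returns:
        float: ``rate`` clamped to the interval [0.0, 1.0].
    """
    return min(max(rate, 0.0), 1.0)


# The sectioned layout keeps docstrings machine-readable,
# e.g. for rendering with sphinx.ext.napoleon.
print(clamp_dropout(1.5))  # → 1.0
```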