maintain docs

  1. install requirements needed to build docs

    # in easycv root dir
    pip install -r requirements/docs.txt
    
  2. build docs

    # in easycv/docs dir
    bash build_docs.sh
    
  3. docstring format

    We adopt the Google style docstring format as the standard; please refer to the following documents.

    1. Google Python style guide docstring link
    2. Google docstring example link
    3. sample: torch.nn.modules.conv link
    4. Transformer as an example
    class Transformer(base.Layer):
        """
            Transformer model from ``Attention Is All You Need``.
            Original paper: https://arxiv.org/abs/1706.03762
    
            Args:
                num_token (int): vocab size.
                num_layer (int): number of layers.
                num_head (int): number of attention heads.
                embedding_dim (int): embedding dimension.
                attention_head_dim (int): attention head dimension.
                feed_forward_dim (int): feed forward dimension.
                initializer: initializer type.
                activation: activation function.
                dropout (float): dropout rate (0.0 to 1.0).
                attention_dropout (float): dropout rate for attention layer.
    
            Returns: None
        """