EasyCV/docs

README.md

maintain docs

  1. install requirements needed to build docs

    # in easycv root dir
    pip install -r requirements/docs.txt
    
  2. build docs

    # in easycv/docs dir
    bash build_docs.sh
    
  3. doc string format

    We adopt the Google style docstring format as the standard. Please refer to the following documents:

    1. Google Python style guide docstring link
    2. Google docstring example link
    3. sample: torch.nn.modules.conv link
    4. Transformer as an example:
    class Transformer(base.Layer):
        """Transformer model from ``Attention Is All You Need``.

        Original paper: https://arxiv.org/abs/1706.03762

        Args:
            num_token (int): vocab size.
            num_layer (int): number of layers.
            num_head (int): number of attention heads.
            embedding_dim (int): embedding dimension.
            attention_head_dim (int): attention head dimension.
            feed_forward_dim (int): feed forward dimension.
            initializer: initializer type.
            activation: activation function.
            dropout (float): dropout rate (0.0 to 1.0).
            attention_dropout (float): dropout rate for the attention layers.
        """