README.md

maintain docs

  1. install requirements needed to build docs

    # in easycv root dir
    pip install -r requirements/docs.txt
    
  2. build docs

    # in easycv/docs dir
    bash build_docs.sh
    
  3. docstring format

    We adopt the Google style docstring format as the standard; please refer to the following documents (a function-level sketch follows the Transformer example below).

    1. Google Python style guide docstring link
    2. Google docstring example link
    3. sample: torch.nn.modules.conv link
    4. Transformer as an example:
    class Transformer(base.Layer):
        """Transformer model from ``Attention Is All You Need``.

        Original paper: https://arxiv.org/abs/1706.03762

        Args:
            num_token (int): vocab size.
            num_layer (int): number of layers.
            num_head (int): number of attention heads.
            embedding_dim (int): embedding dimension.
            attention_head_dim (int): attention head dimension.
            feed_forward_dim (int): feed-forward dimension.
            initializer: initializer type.
            activation: activation function.
            dropout (float): dropout rate (0.0 to 1.0).
            attention_dropout (float): dropout rate for the attention layers.
        """