Maintain docs

  1. Install the requirements needed to build the docs

    # in easycv root dir
    pip install -r requirements/docs.txt
    
  2. Build the docs

    # in easycv/docs dir
    bash build_docs.sh
    
  3. Docstring format

    We adopt the Google style docstring format as the standard; please refer to the following documents. A function-level example and a note on how the docs build consumes these docstrings follow after this list.

    1. Google Python style guide docstring link
    2. Google docstring example link
    3. sample: torch.nn.modules.conv link
    4. Transformer as an example:
    class Transformer(base.Layer):
        """Transformer model from ``Attention Is All You Need``.

        Original paper: https://arxiv.org/abs/1706.03762

        Args:
            num_token (int): vocab size.
            num_layer (int): number of layers.
            num_head (int): number of attention heads.
            embedding_dim (int): embedding dimension.
            attention_head_dim (int): attention head dimension.
            feed_forward_dim (int): feed forward dimension.
            initializer: initializer type.
            activation: activation function.
            dropout (float): dropout rate (0.0 to 1.0).
            attention_dropout (float): dropout rate for the attention layers.
        """