mirror of https://github.com/alibaba/EasyCV.git
# maintain docs
- install the requirements needed to build the docs

  ```shell
  # in the easycv root dir
  pip install -r requirements/docs.txt
  ```
- build the docs

  ```shell
  # in the easycv/docs dir
  bash build_docs.sh
  ```
- docstring format

  We adopt the Google style docstring format as the standard; please refer to the following documents:

  - Google Python style guide docstring link
  - Google docstring example link
  - sample: torch.nn.modules.conv link
  - Take `Transformer` as an example:

    ```python
    class Transformer(base.Layer):
        """Transformer model from ``Attention Is All You Need``.

        Original paper: https://arxiv.org/abs/1706.03762

        Args:
            num_token (int): vocab size.
            num_layer (int): num of layers.
            num_head (int): num of attention heads.
            embedding_dim (int): embedding dimension.
            attention_head_dim (int): attention head dimension.
            feed_forward_dim (int): feed forward dimension.
            initializer: initializer type.
            activation: activation function.
            dropout (float): dropout rate (0.0 to 1.0).
            attention_dropout (float): dropout rate for attention layer.

        Returns:
            None
        """
    ```
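  For a smaller, self-contained illustration of the same format, here is a minimal sketch of a Google-style docstring on a plain function (the `clip_value` function below is hypothetical, chosen only to demonstrate the `Args`/`Returns` sections):

  ```python
  def clip_value(x, low, high):
      """Clamp a number into the closed interval [low, high].

      Args:
          x (float): input value.
          low (float): lower bound of the interval.
          high (float): upper bound of the interval.

      Returns:
          float: ``low`` if ``x < low``, ``high`` if ``x > high``,
          otherwise ``x`` unchanged.
      """
      return max(low, min(x, high))
  ```

  Keeping every public function in this shape lets Sphinx (via the `napoleon` extension) render the `Args` and `Returns` sections automatically when the docs are built.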