Maintain docs

- Install the requirements needed to build the docs:

  ```shell
  # in the easycv root dir
  pip install -r requirements/docs.txt
  ```
- Build the docs:

  ```shell
  # in the easycv/docs dir
  bash build_docs.sh
  ```
- Docstring format

  We adopt the Google style docstring format as the standard; please refer to the following documents (a Sphinx configuration sketch follows this list):

  - Google Python style guide, docstring section: https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
  - Google style docstring example (sphinxcontrib-napoleon): https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html
  - Sample from the PyTorch source, torch.nn.modules.conv: https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html
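Google style docstrings are not parsed by Sphinx out of the box; they are usually enabled through the napoleon extension. Below is a minimal sketch of the relevant `conf.py` lines, assuming the docs are built with Sphinx (the file path and option values are assumptions, and the actual EasyCV configuration may differ):

```python
# Sketch of docs/source/conf.py; the path and option values are
# assumptions for illustration, not copied from the EasyCV repo.
extensions = [
    'sphinx.ext.autodoc',   # pull docstrings out of the source code
    'sphinx.ext.napoleon',  # parse Google style (and NumPy style) docstrings
]

# Restrict napoleon to Google style so sections are parsed consistently.
napoleon_google_docstring = True
napoleon_numpy_docstring = False
```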
- Transformer as an example:

  ```python
  class Transformer(base.Layer):
      """Transformer model from ``Attention Is All You Need``.

      Original paper: https://arxiv.org/abs/1706.03762

      Args:
          num_token (int): vocab size.
          num_layer (int): number of layers.
          num_head (int): number of attention heads.
          embedding_dim (int): embedding dimension.
          attention_head_dim (int): attention head dimension.
          feed_forward_dim (int): feed forward dimension.
          initializer: initializer type.
          activation: activation function.
          dropout (float): dropout rate (0.0 to 1.0).
          attention_dropout (float): dropout rate for the attention layer.

      Returns:
          None
      """
  ```
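The class example above covers the ``Args`` and ``Returns`` sections; function docstrings often also carry ``Raises`` and ``Example`` sections. Here is a short sketch in the same Google style (the function name and signature are hypothetical, invented for illustration rather than taken from EasyCV):

```python
def scaled_dot_product_attention(query, key, value, dropout=0.0):
    """Compute scaled dot-product attention over a batch of queries.

    Note: this function is a hypothetical example for docstring style only.

    Args:
        query (Tensor): query tensor of shape ``(batch, heads, seq, dim)``.
        key (Tensor): key tensor, same shape as ``query``.
        value (Tensor): value tensor, same shape as ``query``.
        dropout (float): dropout rate applied to the attention weights.

    Returns:
        Tensor: attention output, same shape as ``query``.

    Raises:
        ValueError: if ``query`` and ``key`` shapes do not match.

    Example:
        >>> out = scaled_dot_product_attention(q, k, v, dropout=0.1)
    """
```

With napoleon enabled, Sphinx renders these sections as structured parameter and return lists in the generated HTML.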