# Distributed Training
Distributed training of deep neural networks is highly efficient in PaddlePaddle and is one of its core technical advantages.
On image classification tasks, distributed training achieves an almost linear speedup.
[Fleet](https://github.com/PaddlePaddle/Fleet) is the high-level API for distributed training in PaddlePaddle.
With Fleet, a user can easily convert single-machine PaddlePaddle code to distributed code.
To support both single-machine and multi-machine training, [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) uses the Fleet API, as shown in the sketch below.
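Below is a minimal sketch of what this conversion looks like with the collective mode of the Fleet API. It assumes PaddlePaddle 2.x in dynamic-graph mode; the model and optimizer here are illustrative placeholders, not the ones PaddleClas actually builds.

```python
import paddle
import paddle.distributed.fleet as fleet

# Initialize the distributed environment in collective (multi-GPU) mode.
fleet.init(is_collective=True)

# Any paddle.nn.Layer works here; ResNet50 is just an illustrative choice.
model = paddle.vision.models.resnet50()
optimizer = paddle.optimizer.Momentum(
    learning_rate=0.1, momentum=0.9, parameters=model.parameters())

# Wrap the optimizer and model so gradients are synchronized across
# devices; the training loop itself (forward, backward, step) is unchanged.
optimizer = fleet.distributed_optimizer(optimizer)
model = fleet.distributed_model(model)

# Launch the same script on multiple GPUs with, for example:
#   python -m paddle.distributed.launch --gpus "0,1,2,3" train.py
```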
For more information about distributed training, please refer to the [Fleet API documentation](https://github.com/PaddlePaddle/Fleet/blob/develop/README.md).
# PaddleHub

[PaddleHub](https://github.com/PaddlePaddle/PaddleHub) is a pre-trained model application toolkit for PaddlePaddle.
Developers can conveniently combine high-quality pre-trained models with the Fine-tune API to quickly complete the whole workflow from model transfer to deployment.
All the pre-trained models of [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) are available through PaddleHub; a usage sketch follows.
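As a sketch of how one of these models can be used, the example below loads a PaddleClas model through PaddleHub and classifies a local image. The module name `resnet50_vd_imagenet_ssld` and the `classification()` call follow the PaddleHub 1.x image-classification interface; the exact method name may differ across PaddleHub versions, and `test.jpg` is a hypothetical input image.

```python
import cv2
import paddlehub as hub

# Load a PaddleClas pre-trained classifier published on PaddleHub.
classifier = hub.Module(name="resnet50_vd_imagenet_ssld")

# Run inference on a local image (decoded to a numpy array by OpenCV).
result = classifier.classification(images=[cv2.imread("test.jpg")])
print(result)  # top ImageNet labels with their probabilities
```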
For further details, please refer to the [PaddleHub website](https://www.paddlepaddle.org.cn/hub).