English | 简体中文
Welcome to the PaddlePaddle GitHub.
PaddlePaddle, as the only independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It is an industrial platform with advanced technologies and rich features covering core deep learning frameworks, basic model libraries, end-to-end development kits, tools & components, as well as service platforms. PaddlePaddle originates from industrial practices with dedication and commitment to industrialization. It has been widely adopted across a wide range of sectors including manufacturing, agriculture, and enterprise services, while serving more than 1.5 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.
```sh
# Linux CPU
pip install paddlepaddle
# Linux GPU cuda10cudnn7
pip install paddlepaddle-gpu
# Linux GPU cuda9cudnn7
pip install paddlepaddle-gpu==1.8.5.post97
```
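To confirm the installation succeeded, a quick sanity check can be run; this is a minimal sketch assuming the `install_check` utility shipped with the 1.x releases:

```python
import paddle.fluid as fluid

# runs a small built-in test and prints a success message
# if Paddle Fluid is installed correctly
fluid.install_check.run_check()
```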
It is recommended to read this doc on our website.
Our developers can now obtain Tesla V100 online computing resources for free. If you create a program on AI Studio, you will receive 12 hours per day to train models online. If you use it for five consecutive days, you will receive an extra 48 hours. Click here to start.
Agile Framework for Industrial Development of Deep Neural Networks
The PaddlePaddle deep learning framework facilitates development while lowering the technical burden, leveraging a programmable scheme to architect neural networks. It supports both declarative programming and imperative programming, preserving both development flexibility and high runtime performance. Neural architectures can also be designed automatically by algorithms, with performance that can surpass architectures designed by human experts.
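As an illustration of the two modes, here is a minimal sketch assuming the 1.8-era Fluid API from the install snippet above; the shapes and values are arbitrary:

```python
import numpy as np
import paddle.fluid as fluid

# Imperative (dygraph) mode: operations execute eagerly, line by line.
with fluid.dygraph.guard():
    x = fluid.dygraph.to_variable(np.ones([1, 2], dtype="float32"))
    linear = fluid.dygraph.Linear(2, 1)
    y = linear(x)  # computed immediately
    print(y.numpy())

# Declarative (static graph) mode: describe the program first, then run it.
x = fluid.data(name="x", shape=[None, 2], dtype="float32")
y = fluid.layers.fc(input=x, size=1)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={"x": np.ones([1, 2], dtype="float32")}, fetch_list=[y])
print(out)
```

The declarative form lets the framework optimize the whole graph before execution, while the imperative form keeps debugging as simple as ordinary Python.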
Support Ultra-Large-Scale Training of Deep Neural Networks
PaddlePaddle has made breakthroughs in ultra-large-scale deep neural network training. It launched the world's first large-scale open-source training platform that supports the training of deep networks with over 100 billion features and trillions of parameters using data sources distributed over hundreds of nodes. PaddlePaddle overcomes the online deep learning challenges posed by ultra-large-scale models, and further achieves real-time updating of models with more than one trillion parameters. Click here to learn more
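For multi-GPU data-parallel training, a typical entry point is the Fleet API; this is a minimal sketch assuming the `paddle.distributed.fleet` collective mode from the newer 2.x releases, with a toy model standing in for a real network:

```python
# launch with: python -m paddle.distributed.launch train.py
import paddle
import paddle.distributed.fleet as fleet

fleet.init(is_collective=True)

model = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())

# wrap the optimizer and model so gradients are synchronized across devices
opt = fleet.distributed_optimizer(opt)
model = fleet.distributed_model(model)

x = paddle.randn([32, 10])  # arbitrary toy batch
loss = model(x).mean()
loss.backward()
opt.step()
opt.clear_grad()
```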
Accelerated High-Performance Inference over Ubiquitous Deployments
PaddlePaddle is not only compatible with models trained in other open-source frameworks, but also deploys well across ubiquitous environments, from platforms to devices. More specifically, PaddlePaddle delivers significantly accelerated inference. Notably, PaddlePaddle recently achieved a breakthrough in inference speed on Huawei's Kirin NPU through hardware/software co-optimization. Click here to learn more
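As a sketch of the deployment workflow, a model saved for inference can be loaded and run with the 1.8-era Fluid API; the `my_model` directory and the input shape below are hypothetical placeholders:

```python
import numpy as np
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
# load a model previously exported with fluid.io.save_inference_model
# ("my_model" is a hypothetical directory)
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="my_model", executor=exe)

data = np.random.rand(1, 3, 224, 224).astype("float32")  # hypothetical input
results = exe.run(program,
                  feed={feed_names[0]: data},
                  fetch_list=fetch_targets)
print(results[0].shape)
```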
Industry-Oriented Models and Libraries with Open Source Repositories
PaddlePaddle includes and maintains more than 100 mainstream models that have long been practiced and polished in industry. Some of these models have won major prizes in key international competitions. In addition, PaddlePaddle offers more than 200 pre-trained models (some with source code) to facilitate the rapid development of industrial applications. Click here to learn more
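Many of these pre-trained models can be pulled down in one line through the companion PaddleHub library; this is a minimal sketch assuming PaddleHub and its `lac` lexical-analysis module:

```python
import paddlehub as hub

# download (on first use) and load the pre-trained LAC model
lac = hub.Module(name="lac")

# segment and tag a Chinese sentence
results = lac.lexical_analysis(texts=["今天是个好日子"])
print(results)
```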
You might want to start with how to implement deep learning basics with PaddlePaddle.
Once you have got the hang of the Beginner's Guide, you may wish to model practical problems and build your original networks.
Once you are familiar with Fluid, the next step is to build a more efficient model or invent your original Operator.
Our new API enables much shorter programs.
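For instance, here is a minimal sketch assuming the high-level `paddle.Model` API from the 2.x releases, training LeNet on MNIST in a handful of lines:

```python
import paddle
from paddle.vision.transforms import ToTensor

# the dataset is downloaded automatically on first use
train_ds = paddle.vision.datasets.MNIST(mode="train", transform=ToTensor())

model = paddle.Model(paddle.vision.models.LeNet())
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
              paddle.nn.CrossEntropyLoss(),
              paddle.metric.Accuracy())
model.fit(train_ds, epochs=1, batch_size=64)
```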
We appreciate your contributions!
PaddlePaddle is provided under the Apache-2.0 license.
- Code submit frequency
- Responsiveness to issues, PRs, etc.
- Well-balanced team membership and collaboration
- Recent popularity of the project
- Star counts, download counts, etc.