
Motivation

Computing resources on public clouds such as Amazon AWS and Baidu Cloud are multi-tenant. Training and serving deep learning models with elastic resources will be common on the cloud. We propose Elastic Deep Learning (EDL), which makes training and inference of deep learning models on the cloud easier and more efficient.

Now EDL is an incubation-stage project of the LF AI Foundation.

Installation

The EDL package supports Python 2.7/3.6/3.7. You can install it with pip install paddle_edl, but we highly recommend running it inside our Docker image:

docker pull hub.baidubce.com/paddle-edl/paddle_edl:latest-cuda9.0-cudnn7
nvidia-docker run --name paddle_edl -it hub.baidubce.com/paddle-edl/paddle_edl:latest-cuda9.0-cudnn7 /bin/bash

Latest Release (0.3.1)

  • Support elastic training that uses inference-type services during training, such as knowledge distillation.
  • Inference-type services are automatically registered through service discovery in EDL (a toy sketch of the idea follows this list).
  • Knowledge distillation examples in computer vision and natural language processing.
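
The sketch below is only meant to make the service-discovery idea above concrete: teacher inference services announce themselves and send heartbeats, and training jobs query a registry for the currently alive endpoints. The class and method names are illustrative placeholders, not paddle_edl's actual API, and the real system uses a shared discovery service rather than an in-process object.

```python
import time

class ToyRegistry:
    """In-process stand-in for a service-discovery registry."""

    def __init__(self, ttl=10.0):
        self.ttl = ttl            # seconds before an endpoint is considered dead
        self._endpoints = {}      # endpoint -> timestamp of last heartbeat

    def register(self, endpoint):
        self._endpoints[endpoint] = time.time()

    def heartbeat(self, endpoint):
        self._endpoints[endpoint] = time.time()

    def alive_endpoints(self):
        now = time.time()
        return [ep for ep, t in self._endpoints.items() if now - t < self.ttl]

# A teacher serving instance announces itself; a training job asks for live teachers.
registry = ToyRegistry()
registry.register("127.0.0.1:9898")
print(registry.alive_endpoints())     # ['127.0.0.1:9898']
```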

Quick start Demo

  • Install Paddle Serving
pip install paddle-serving-server-gpu
cd example/distill/resnet

  • Download the pre-trained teacher model and deploy it as an inference service (the serve command below runs it on GPU 1):
wget --no-check-certificate https://paddle-edl.bj.bcebos.com/distill_teacher_model/ResNeXt101_32x16d_wsl_model.tar.gz
tar -zxf ResNeXt101_32x16d_wsl_model.tar.gz

python -m paddle_serving_server_gpu.serve \
  --model ResNeXt101_32x16d_wsl_model \
  --mem_optim \
  --port 9898 \
  --gpu_ids 1
  • The student model is ResNet50_vd (i.e. ResNet-D in the paper). Train the student on GPU 0:
python -m paddle.distributed.launch --selected_gpus 0 \
  ./train_with_fleet.py \
  --model=ResNet50_vd \
  --data_dir=./ImageNet \
  --use_distill_service=True \
  --distill_teachers=127.0.0.1:9898
| mode | teacher resource | student resource | total batch size | acc1 | acc5 | speed (img/s) |
| --- | --- | --- | --- | --- | --- | --- |
| pure train | None | 8 * v100 | 256 | 77.1 | 93.5 | 1828 |
| teacher and student on the same GPUs | 8 * v100 | 8 * v100 | 256 | 79.0 | 94.3 | 656 |
| EDL service distill | 40 * P4 | 8 * v100 | 256 | 79.0 | 94.5 | 1514 |

About Knowledge Distillation in EDL

  • Theory: Distilling the Knowledge in a Neural Network
    • In general, knowledge distillation involves two parts: strong teacher models and weak student models.
    • The student model learns from the feed-forward results of a teacher (or a mixture of teachers) to achieve better accuracy; a minimal sketch of this soft-label loss is given after this list.
  • Application scenarios of EDL knowledge distillation
    • Teacher and student models run on the same GPU devices, so training throughput is not maximized.
    • The offline GPU cluster has limited resources, but some online GPU resources can be used during training.
    • Heterogeneous teacher models can improve the student model's performance but are hard to deploy on a single GPU card due to memory limitations.
    • The computational load of teacher and student models is hard to balance when maximizing training throughput.
  • Solution:
    • Deploy teacher models as online inference services through Paddle Serving.
    • Online inference services are elastic and are registered with EDL's service management modules.
    • Dynamically adapt the number of online teacher instances to maximize the students' training throughput and resource utilization.
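
To make the theory above concrete, here is a minimal NumPy sketch of the soft-label distillation loss: the student is trained to match the teacher's temperature-softened output distribution. The function names and the temperature value are illustrative only; this is not a paddle_edl API.

```python
import numpy as np

def softened_probs(logits, temperature):
    # Temperature-scaled softmax over the class axis.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, temperature=4.0):
    # Cross entropy between the teacher's soft labels and the student's
    # softened predictions, averaged over the batch.
    p_t = softened_probs(teacher_logits, temperature)
    p_s = softened_probs(student_logits, temperature)
    return float(-np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=-1)))

# Toy usage: a batch of 2 samples with 5 classes.
student = np.random.randn(2, 5)
teacher = np.random.randn(2, 5)
print(distill_loss(student, teacher))
```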

Release 0.2.0

Checkpoint-based elastic training on multiple GPUs

  • There are several training nodes, each running on one GPU.
  • A master node is responsible for saving checkpoints; all the other nodes are elastic nodes.
  • When elastic nodes join or leave the current training job, the training hyper-parameters are adjusted automatically.
  • Newly joined training nodes load the checkpoint from the remote file system automatically.
  • A model checkpoint is saved every N steps, where N is given by the user (see the sketch after this list).
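
As a reference for the mechanism described above, the sketch below illustrates the save-every-N-steps and resume-on-join logic in plain Python. The helper names, the pickle file format, and the is_master flag are assumptions made for illustration; they are not the actual paddle_edl implementation.

```python
import os
import pickle

def save_checkpoint(state, step, path):
    # Persist the model state together with the step it was taken at.
    with open(path, "wb") as f:
        pickle.dump({"state": state, "step": step}, f)

def load_checkpoint(path):
    # Return (state, step), or (None, 0) when no checkpoint exists yet.
    if not os.path.exists(path):
        return None, 0
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["state"], ckpt["step"]

def train(train_one_step, is_master, ckpt_path="ckpt.pkl",
          save_every=100, max_steps=1000):
    # A node that joins mid-job first resumes from the shared checkpoint.
    state, step = load_checkpoint(ckpt_path)
    while step < max_steps:
        state = train_one_step(state)
        step += 1
        # Only the master node is responsible for writing checkpoints.
        if is_master and step % save_every == 0:
            save_checkpoint(state, step, ckpt_path)

# Toy usage: "training" just counts steps; the master saves every 5 steps.
train(train_one_step=lambda s: (s or 0) + 1, is_master=True,
      save_every=5, max_steps=20)
```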

ResNet50 experiments on a single machine in Docker

  • Start a JobServer on one node, which generates the node-changing scripts.
cd example/demo/collective
node_ips="127.0.0.1"
python -u paddle_edl.demo.collective.job_server_demo \
    --node_ips ${node_ips} \
    --pod_num_of_node 8 \
    --time_interval_to_change 900 \
    --gpu_num_of_node 8
  • Start a JobClient, which controls the worker processes.
# set the ImageNet data path
export PADDLE_EDL_IMAGENET_PATH=<your path>
# set the checkpoint path
export PADDLE_EDL_FLEET_CHECKPOINT_PATH=<your path>

mkdir -p resnet50_pod
unset http_proxy https_proxy

# running under edl
export PADDLE_RUNING_ENV=PADDLE_EDL
export PADDLE_JOB_ID="test_job_id_1234"
export PADDLE_POD_ID="not set"

python -u paddle_edl.demo.collective.job_client_demo \
    --log_level 20 \
    --package_sh ./resnet50/package.sh \
    --pod_path ./resnet50_pod \
    ./train_pretrain.sh
  • Experiment results on a 2-node cluster
| model | dataset | GPU cards | total batch size | acc1 | acc5 |
| --- | --- | --- | --- | --- | --- |
| ResNet50 | ImageNet | 16 * v100 | 1024 | 75.5 | 92.8 |

The whole example is under example/demo/collective in the repository.

Community

FAQ

License

Contribution
