
Updated on 2021.12.03 DI-engine-v0.2.2 (beta)

Introduction to DI-engine (beta)

DI-engine is a generalized Decision Intelligence engine. It supports most basic deep reinforcement learning (DRL) algorithms, such as DQN, PPO, SAC, and domain-specific algorithms like QMIX in multi-agent RL, GAIL in inverse RL, and RND in exploration problems. Various training pipelines and customized decision AI applications are also supported. Have fun with exploration and exploitation.
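To give a flavor of the value-based algorithms named above, the core of a DQN-style update (minus the neural network and replay buffer) is epsilon-greedy action selection plus a one-step TD target. The sketch below is a pure-Python, tabular illustration; the names `Q`, `select_action`, `td_update`, `alpha`, and `gamma` are illustrative conventions, not DI-engine APIs:

```python
import random
from collections import defaultdict

# Tabular stand-in for DQN's Q-network: Q[state][action] -> value.
Q = defaultdict(lambda: [0.0, 0.0])  # two discrete actions, as in CartPole

def select_action(state, epsilon=0.1):
    """Epsilon-greedy exploration: random action with probability epsilon."""
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: Q[state][a])

def td_update(state, action, reward, next_state, done, alpha=0.5, gamma=0.99):
    """One-step TD target: r + gamma * max_a' Q(s', a'), zero at terminal states."""
    target = reward + (0.0 if done else gamma * max(Q[next_state]))
    Q[state][action] += alpha * (target - Q[state][action])

td_update("s0", 0, 1.0, "s1", done=True)
print(Q["s0"][0])  # -> 0.5, halfway from 0.0 toward the target 1.0
```

DQN replaces the table with a neural network and samples transitions from a replay buffer, but the target computation is the same idea.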



System Optimization and Design



You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from the opendilab channel with the following command:

conda install -c opendilab di-engine

For more information about installation, you can refer to installation.

Our DockerHub repo can be found here. We provide a base image and environment images with common RL environments:

  • base: opendilab/ding:nightly
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • smac: opendilab/ding:nightly-smac


The detailed documentation is hosted on doc (Chinese documentation: 中文文档).

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff(colab)

3 Minutes Kickoff, Chinese version (kaggle)

Bonus: Train an RL agent with a one-line command:

ding -m serial -e cartpole -p dqn -s 0




Algorithm Versatility

| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| :-: | :-: | :-: | :-: | :-: |
| 1 | DQN | discrete | DQN doc (Chinese) | `python3 -u cartpole_dqn_main.py` / `ding -m serial -c cartpole_dqn_config.py -s 0` |
| 2 | C51 | discrete | policy/c51 | `ding -m serial -c cartpole_c51_config.py -s 0` |
| 3 | QRDQN | discrete | policy/qrdqn | `ding -m serial -c cartpole_qrdqn_config.py -s 0` |
| 4 | IQN | discrete | policy/iqn | `ding -m serial -c cartpole_iqn_config.py -s 0` |
| 5 | Rainbow | discrete | policy/rainbow | `ding -m serial -c cartpole_rainbow_config.py -s 0` |
| 6 | SQL | discrete, continuous | policy/sql | `ding -m serial -c cartpole_sql_config.py -s 0` |
| 7 | R2D2 | dist, discrete | policy/r2d2 | `ding -m serial -c cartpole_r2d2_config.py -s 0` |
| 8 | A2C | discrete | policy/a2c | `ding -m serial -c cartpole_a2c_config.py -s 0` |
| 9 | PPO/MAPPO | discrete, continuous | policy/ppo | `python3 -u cartpole_ppo_main.py` / `ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0` |
| 10 | PPG | discrete | policy/ppg | `python3 -u cartpole_ppg_main.py` |
| 11 | ACER | discrete, continuous | policy/acer | `ding -m serial -c cartpole_acer_config.py -s 0` |
| 12 | IMPALA | dist, discrete | policy/impala | `ding -m serial -c cartpole_impala_config.py -s 0` |
| 13 | DDPG/PADDPG | continuous, hybrid | policy/ddpg | `ding -m serial -c pendulum_ddpg_config.py -s 0` |
| 14 | TD3 | continuous, hybrid | policy/td3 | `python3 -u pendulum_td3_main.py` / `ding -m serial -c pendulum_td3_config.py -s 0` |
| 15 | D4PG | continuous | policy/d4pg | `python3 -u pendulum_d4pg_config.py` |
| 16 | SAC | continuous | policy/sac | `ding -m serial -c pendulum_sac_config.py -s 0` |
| 17 | PDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_pdqn_config.py -s 0` |
| 18 | MPDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_mpdqn_config.py -s 0` |
| 19 | QMIX | MARL | policy/qmix | `ding -m serial -c smac_3s5z_qmix_config.py -s 0` |
| 20 | COMA | MARL | policy/coma | `ding -m serial -c smac_3s5z_coma_config.py -s 0` |
| 21 | QTran | MARL | policy/qtran | `ding -m serial -c smac_3s5z_qtran_config.py -s 0` |
| 22 | WQMIX | MARL | policy/wqmix | `ding -m serial -c smac_3s5z_wqmix_config.py -s 0` |
| 23 | CollaQ | MARL | policy/collaq | `ding -m serial -c smac_3s5z_collaq_config.py -s 0` |
| 24 | GAIL | IL | reward_model/gail | `ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0` |
| 25 | SQIL | IL | entry/sqil | `ding -m serial_sqil -c cartpole_sqil_config.py -s 0` |
| 26 | DQFD | IL | policy/dqfd | `ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0` |
| 27 | R2D3 | IL | policy/r2d3 | `python3 -u pong_r2d3_r2d2expert_config.py` |
| 28 | GCL | IL | reward_model/guided_cost | `python3 lunarlander_gcl_config.py` |
| 29 | HER | exp | reward_model/her | `python3 -u bitflip_her_dqn.py` |
| 30 | RND | exp | reward_model/rnd | `python3 -u cartpole_ppo_rnd_main.py` |
| 31 | ICM | exp | reward_model/icm | `python3 -u cartpole_ppo_icm_config.py` |
| 32 | CQL | offline | policy/cql | `python3 -u d4rl_cql_main.py` |
| 33 | TD3BC | offline | policy/td3_bc | `python3 -u mujoco_td3_bc_main.py` |
| 34 | MBPO | mbrl | model/template/model_based/mbpo | `python3 -u sac_halfcheetah_mopo_default_config.py` |
| 35 | PER | other | worker/replay_buffer | rainbow demo |
| 36 | GAE | other | rl_utils/gae | ppo demo |

  • discrete means discrete action space, used only as a label for the normal DRL algorithms (1-16)
  • continuous means continuous action space, used only as a label for the normal DRL algorithms (1-16)
  • hybrid means hybrid (discrete + continuous) action space (1-16)
  • dist means distributed training (collector-learner parallel) RL algorithm
  • MARL means multi-agent RL algorithm
  • exp means RL algorithm related to exploration and sparse reward
  • IL means Imitation Learning, including Behaviour Cloning, Inverse RL and Adversarial Structured IL
  • offline means offline RL algorithm
  • mbrl means model-based RL algorithm
  • other means other sub-direction algorithms, usually used as plugins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo
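To make entry 36 above concrete, generalized advantage estimation (GAE) reduces to a single backward pass over a trajectory. The sketch below is a pure-Python illustration of the standard GAE recursion, not DI-engine's `rl_utils/gae` implementation; the function name and the `gamma`/`lam` defaults are the common textbook choices, and terminal-state masking is omitted for brevity:

```python
def gae_advantages(rewards, values, next_value, gamma=0.99, lam=0.95):
    """Compute GAE advantages for one trajectory.

    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)   (one-step TD error)
    A_t     = delta_t + gamma * lam * A_{t+1}     (backward recursion)
    """
    values = list(values) + [next_value]  # append bootstrap value V(s_T)
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

# With lam=0, GAE collapses to the plain one-step TD error.
print(gae_advantages([1.0, 1.0], [0.5, 0.5], 0.0, gamma=1.0, lam=0.0))  # -> [1.0, 0.5]
```

Setting `lam=1` instead recovers the full Monte-Carlo advantage; intermediate values trade bias against variance.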

Environment Versatility

| No. | Environment | Label | Visualization | Code and Doc Links |
| :-: | :-: | :-: | :-: | :-: |
| 1 | atari | discrete | original | code link, env tutorial |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link |
| 3 | box2d/lunarlander | discrete | original | dizoo link |
| 4 | classic_control/cartpole | discrete | original | dizoo link |
| 5 | classic_control/pendulum | continuous | original | dizoo link |
| 6 | competitive_rl | discrete, selfplay | original | dizoo link |
| 7 | gfootball | discrete, sparse, selfplay | original | dizoo link |
| 8 | minigrid | discrete, sparse | original | dizoo link |
| 9 | mujoco | continuous | original | dizoo link |
| 10 | multiagent_particle | discrete, marl | original | dizoo link |
| 11 | overcooked | discrete, marl | original | dizoo link |
| 12 | procgen | discrete | original | dizoo link |
| 13 | pybullet | continuous | original | dizoo link |
| 14 | smac | discrete, marl, selfplay, sparse | original | dizoo link |
| 15 | d4rl | offline | original | dizoo link |
| 16 | league_demo | discrete, selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | | dizoo link |
| 18 | bsuite | discrete | original | dizoo link |
| 19 | ImageNet | IL | original | dizoo link |
| 20 | slime_volleyball | discrete, selfplay | original | dizoo link |
| 21 | gym_hybrid | hybrid | original | dizoo link |
| 22 | GoBigger | hybrid, marl, selfplay | original | opendilab link |
| 23 | gym_soccer | hybrid | original | dizoo link |

  • discrete means discrete action space
  • continuous means continuous action space
  • hybrid means hybrid (discrete + continuous) action space
  • MARL means multi-agent RL environment
  • sparse means environment related to exploration and sparse reward
  • offline means offline RL environment
  • IL means Imitation Learning or Supervised Learning dataset
  • selfplay means environment that allows agent-vs-agent battle

P.S.: some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type
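For sparse-reward environments like the bitflip demo above, Hindsight Experience Replay (HER, algorithm entry 29) turns failed episodes into useful training signal by relabeling goals. The sketch below is a minimal pure-Python illustration of the "final" relabeling strategy; `her_relabel`, `reward_fn`, and the transition layout are hypothetical names for illustration, not DI-engine's `reward_model/her` API:

```python
def her_relabel(episode, reward_fn):
    """HER ('final' strategy): replace each transition's desired goal with
    the goal actually achieved at the end of the episode, then recompute
    rewards, so even failed episodes contain successful examples.

    episode: list of (state, action, achieved_goal, desired_goal) tuples.
    reward_fn(achieved, goal) -> reward under the substituted goal.
    """
    final_achieved = episode[-1][2]  # goal reached at episode end
    relabeled = []
    for state, action, achieved, _ in episode:
        reward = reward_fn(achieved, final_achieved)
        relabeled.append((state, action, final_achieved, reward))
    return relabeled

# Bitflip-style sparse reward: 0 when the goal is reached, else -1.
reward_fn = lambda achieved, goal: 0.0 if achieved == goal else -1.0
episode = [("s0", 1, (0, 1), (1, 1)), ("s1", 0, (1, 1), (1, 1))]
print(her_relabel(episode, reward_fn))
# -> [('s0', 1, (1, 1), -1.0), ('s1', 0, (1, 1), 0.0)]
```

In a full pipeline, both the original and the relabeled transitions are stored in the replay buffer and trained on by an off-policy algorithm such as DQN.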


We appreciate all contributions that improve DI-engine, both algorithms and system designs. Please refer to CONTRIBUTING.md for more guidance. Our roadmap can be accessed via this link.

Users can join our Slack channel or our forum for more detailed discussion.

For future plans or milestones, please refer to our GitHub Projects.


Citation

@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

DI-engine is released under the Apache 2.0 license.
