[ST][MS][NET][CANN][pangu-moe-alltoall-pipeline/pangu-moe-alltoall][910A 32p]RuntimeError: Compile graph kernel_graph1 failed

TODO
Bug-Report
Created: 2024-03-21 11:56

name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior / 问题描述 (Mandatory / 必填)

pangu-moe-alltoall-pipeline/pangu-moe-alltoall training fails on 910A with 32p.

Network script path: https://e.gitee.com/mind_spore/repos/mindspore/models/tree/master/official/nlp/Pangu_alpha

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment (Ascend/GPU/CPU) / 硬件环境: Ascend 910A

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version (e.g., 1.7.0.Bxxx) :
    -- Python version (e.g., Python 3.7.5) :
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

Failing versions:
Run package: Milan_C17/20240315
MindSpore version: 2.3.0/B080 r2.3.q1_20240320000457_1b2cb8cd14

  • Execute Mode / 执行模式 (Mandatory / 必填) (PyNative/Graph): Graph (see the sketch below)
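
For reference, the reported Graph mode corresponds to the standard MindSpore context setting shown below. This is a minimal sketch; the Pangu_alpha training scripts configure the context themselves, so it is included only to make the execution mode explicit.

```python
import mindspore as ms

# Sketch only: graph (static) mode on Ascend, as used by the failing run.
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")
```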

Related testcase / 关联用例 (Mandatory / 必填)

Test case repository path: MindFormers_Test/cases/pangu/train
test_mf_pangu_1_3b_train_moe_alltoall_check_loss_910_pangudata_32p_0001
test_mf_pangu_1_3b_train_moe_alltoall_pipeline_check_loss_910_pangudata_32p_0001

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

1. Get the code from the models repository
2. cd models/official/nlp/Pangu_alpha
3. node1:
bash scripts/run_distributed_train_moe.sh ./pangu-data/pangu_30_step_bs64 ./hccl_32p.json 32 fp32 1.3B 2 2 16 0 8 4 1 1
node2:
bash scripts/run_distributed_train_moe.sh ./pangu-data/pangu_30_step_bs64 ./hccl_32p.json 32 fp32 1.3B 2 2 16 8 8 4 1 1
node3:
bash scripts/run_distributed_train_moe.sh ./pangu-data/pangu_30_step_bs64 ./hccl_32p.json 32 fp32 1.3B 2 2 16 16 8 4 1 1
node4:
bash scripts/run_distributed_train_moe.sh ./pangu-data/pangu_30_step_bs64 ./hccl_32p.json 32 fp32 1.3B 2 2 16 24 8 4 1 1
4. Verify that the network training succeeds (see the log-check sketch below)
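
Step 4 can be checked mechanically. A minimal sketch, assuming the launch script writes per-rank logs under a device*/ directory; the exact layout depends on run_distributed_train_moe.sh, so the glob pattern below is an assumption:

```python
import glob
import re

# Hypothetical log check: flag ranks whose graph compile failed and summarize
# the loss records of the ranks that did train. Adjust the glob to the real
# output layout of run_distributed_train_moe.sh.
for path in sorted(glob.glob("device*/train*.log")):
    with open(path, errors="ignore") as f:
        text = f.read()
    if "RuntimeError: Compile graph" in text:
        print(f"{path}: graph compilation failed")
        continue
    losses = re.findall(r"loss\s*[:=]\s*([0-9.]+)", text)
    last = losses[-1] if losses else "n/a"
    print(f"{path}: {len(losses)} loss records, last loss = {last}")
```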

Describe the expected behavior / 预期结果 (Mandatory / 必填)

The pangu-moe-alltoall-pipeline network trains normally, and 32p training throughput reaches 73 fps.

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

Traceback (most recent call last):
  File "/home/jenkins/workspace/TDT_deployment/MindFormers_Test/cases/pangu/train/test_mf_pangu_1_3b_train_moe_alltoall_pipeline_check_loss_910_pangudata_32p_0001/train.py", line 558, in <module>
    run_train_pipeline(opt)
  File "/home/jenkins/workspace/TDT_deployment/MindFormers_Test/cases/pangu/train/test_mf_pangu_1_3b_train_moe_alltoall_pipeline_check_loss_910_pangudata_32p_0001/train.py", line 543, in run_train_pipeline
    sink_size=callback_size, dataset_sink_mode=True)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 1074, in train
    initial_epoch=initial_epoch)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 114, in wrapper
    func(self, *args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 624, in _train
    cb_params, sink_size, initial_epoch, valid_infos)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 708, in _train_dataset_sink_process
    outputs = train_network(*inputs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 662, in __call__
    out = self.compile_and_run(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 980, in compile_and_run
    self.compile(*args, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 964, in compile
    jit_config_dict=self._jit_config_dict, **kwargs)
  File "/home/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/common/api.py", line 1583, in compile
    result = self._graph_executor.compile(obj, args, kwargs, phase, self._use_vm_mode())
RuntimeError: Compile graph kernel_graph1 failed.

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
E20007: Failed to run graph fusion pass [BatchMatMulFusionPass]. The pass type is [built-in-ai-core-graph-pass]
        Solution: 1. If the pass code is custom, check the error log and the verification logic.  2. If the pass code is not custom, perform a complete or partial dump by using npucollect.sh and then send the dump to Huawei technical support for fault locating.
        TraceBack (most recent call last):
        RemoveEdge failed because src is nullptr or run Unlink failed.[FUNC:RemoveEdge][FILE:graph_utils.cc][LINE:165]
        RemoveEdge transpose-->matmul failed.[FUNC:LinkEdge][FILE:batch_matmul_fusion_pass.cc][LINE:314]
        LinkEdge failed.[FUNC:DoTransposeFusion][FILE:batch_matmul_fusion_pass.cc][LINE:285]
        DoTransposeFusion right failed.[FUNC:CheckAndDoTransposeFusion][FILE:batch_matmul_fusion_pass.cc][LINE:160]
        op[BatchMatMul:Gradients/recompute_Default/network-PanguAlphaTrainPipelineWithLossScaleCell/network-_VirtualDatasetCell/_backbone-PipelineCell/network-MicroBatchInterleaved/network-PanGUAlphaWithLoss/network-PanguAlphaModel/backbone-PanguAlpha_Model/blocks-CellList/8-TransformerEncoderLayer/output-MoE/gradBatchMatMul-expand/BatchMatMul-op101], failed to execute TransposeFusion.[FUNC:Fusion][FILE:batch_matmul_fusion_pass.cc][LINE:66]
        Failed to run graph fusion pass [BatchMatMulFusionPass]. The pass type is [built-in-ai-core-graph-pass]
        [GraphOpt][FirstRoundFusion] Fail to run graph fusion pass[BatchMatMulFusionPass, built-in-ai-core-graph-pass]. Return value is 4294967295.[FUNC:RunOnePassFusion][FILE:graph_fusion.cc][LINE:1173]
        [GraphOpt][FirstRoundFusion] MainGraph[kernel_graph1]: RunGraphFusion unsuccessfully.[FUNC:Fusion][FILE:graph_fusion.cc][LINE:100]
        [GraphOpt][AfterFusion]Failed to do graph fusion for graph kernel_graph1. ErrNo is 4294967295.[FUNC:OptimizeOriginalGraph][FILE:fe_graph_optimizer.cc][LINE:346]
        Call OptimizeOriginalGraph failed, ret:-1, engine_name:AIcoreEngine, graph_name:kernel_graph1[FUNC:OptimizeOriginalGraph][FILE:graph_optimize.cc][LINE:174]
        [Call][PreRun] Failed, graph_id:2, session_id:0.[FUNC:CompileGraph][FILE:graph_manager.cc][LINE:4408]
        [Compile][Graph]Compile graph failed, error code:1343225857, session_id:0, graph_id:2.[FUNC:CompileGraph][FILE:ge_api.cc][LINE:1159]

(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_graph_executor.cc:972 CompileGraph
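
The failure is raised inside the CANN/GE fusion pass BatchMatMulFusionPass while optimizing kernel_graph1, i.e. below the MindSpore Python layer. As a temporary, unverified workaround sketch (not a fix for the underlying CANN issue), CANN supports a fusion-switch configuration file that can turn off individual graph fusion passes; whether MindSpore 2.3 forwards it through ascend_config/ge_options and whether the option is named ge.fusionSwitchFile are assumptions that need to be checked against the installed CANN/MindSpore documentation:

```python
import json
import mindspore as ms

# Hypothetical workaround sketch, not a fix for the CANN-side bug:
# write a CANN fusion-switch file that disables the failing pass and hand it
# to GE. Both the file schema and the "ge.fusionSwitchFile" option name
# should be verified against the CANN / MindSpore 2.3 documentation.
fusion_switch = {"Switch": {"GraphFusion": {"BatchMatMulFusionPass": "off"}}}
with open("fusion_switch.json", "w") as f:
    json.dump(fusion_switch, f)

ms.set_context(
    mode=ms.GRAPH_MODE,
    device_target="Ascend",
    ascend_config={"ge_options": {"global": {"ge.fusionSwitchFile": "./fusion_switch.json"}}},
)
```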

Special notes for this issue/备注 (Optional / 选填)

Route this to zhangyinxia.

Comments (4)

zhongjicheng created the Bug-Report and added the labels device/ascend, attr/function, stage/func-debug, kind/bug, v2.3.0, and v2.3.0.alpha.

Please assign a maintainer to check this issue.
@zhongjicheng

Thank you for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials.
  2. If you are an experienced PyTorch user, you may need the following:
     1. If you run into a dynamic-graph (PyNative) problem, you can set set_context(pynative_synchronize=True) to see the error stack and help locate the issue (see the snippet below).
     2. For model accuracy tuning, refer to the tuning guide on the official website.
  3. If you are reporting a framework bug, please confirm that the issue provides the MindSpore version, the backend type (CPU, GPU, Ascend), the environment, the official link to the training code, and the launch method that reproduces the error.
  4. If you have already located the root cause, you are welcome to submit a PR to the MindSpore open-source community, and we will review it as soon as possible.
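
The PyNative tip in point 2.1 above is a one-line context change (a minimal sketch; it does not apply to this graph-mode report and is shown only for completeness):

```python
import mindspore as ms

# Sketch: make PyNative execution synchronous so errors surface with a usable stack.
ms.set_context(mode=ms.PYNATIVE_MODE, pynative_synchronize=True)
```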
zhongjicheng edited the title and description and set the assignee to zhangyinxia.

The error comes from the BatchMatMul operator; please route it to the corresponding owner.

zhangyinxia added zhangyinxia as a collaborator and changed the assignee from zhangyinxia to zhongjicheng.
zhongjicheng changed the assignee from zhongjicheng to zyli2020, moved the milestone from B-SIG-ASCEND to B-SIG-Runtime, and added the rct/cann label.
i-robot added the dts-szv label; zyli2020 later removed it.
zyli2020 edited the title.
zhangyinxia removed zhangyinxia as a collaborator.
