
[CT][MS][ops][fftwithsize] When FFTWithSize runs with mode = "irfft" and signal_sizes is an empty tuple, Ascend reports RuntimeError: Launch kernel failed, name:Default/FFTWithSize-op2

TODO
Bug-Report
Created on 2024-05-14 18:53

name: Bug Report
about: Use this template for reporting a bug
labels: kind/bug

Describe the current behavior / 问题描述 (Mandatory / 必填)

When FFTWithSize is run with mode = "irfft" and signal_sizes is an empty tuple, the Ascend backend reports RuntimeError: Launch kernel failed, name:Default/FFTWithSize-op2.

Environment / 环境信息 (Mandatory / 必填)

  • Hardware Environment(Ascend/GPU/CPU) / 硬件环境:

Please delete the backend not involved / 请删除不涉及的后端:
/device ascend/

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version (e.g., 1.7.0.Bxxx) :
    -- Python version (e.g., Python 3.7.5) :
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04):
    -- GCC/Compiler version (if compiled from source):

  • Execute Mode / 执行模式 (Mandatory / 必填)(PyNative/Graph):

Please delete the mode not involved / 请删除不涉及的模式:
/mode pynative
/mode graph

Related testcase / 关联用例 (Mandatory / 必填)

test_p_fftwithsize_irfft

Steps to reproduce the issue / 重现步骤 (Mandatory / 必填)

  1. pytest -s -v test_fftwithsize.py::test_p_fftwithsize_irfft
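
For reference, below is a minimal standalone sketch of the failing call (not the original test_fftwithsize.py). It assumes the public ops.FFTWithSize primitive interface; the input shape and dtype follow the tensor recorded in the log below, and inverse=True, real=True, signal_sizes=() mirror the attributes passed by the test, while signal_ndim=3, norm="backward" and onesided=True are illustrative values only, since the log does not show the exact failing combination.

# Hypothetical repro sketch, assuming the documented ops.FFTWithSize attributes.
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

ms.set_context(mode=ms.PYNATIVE_MODE, device_target="Ascend")

# Input mirrors the shape/dtype reported in the traceback: complex64, last dim = 7.
rng = np.random.default_rng(0)
shape = (8, 9, 6, 10, 8, 3, 7)
x_np = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)).astype(np.complex64)
x = Tensor(x_np)

# irfft case: inverse=True, real=True; signal_sizes is left as an empty tuple,
# which is the condition reported to trigger the failure on Ascend.
irfft = ops.FFTWithSize(signal_ndim=3, inverse=True, real=True,
                        norm="backward", onesided=True, signal_sizes=())
out = irfft(x)  # RuntimeError: Launch kernel failed, name:Default/FFTWithSize-op2
print(out.shape)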

Describe the expected behavior / 预期结果 (Mandatory / 必填)

The test case should pass.
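
As a rough illustration of what a passing run would check, the onesided irfft result can be compared against numpy.fft.irfftn over the trailing signal_ndim axes. The original test compares against a separate baseline via cmp_obj, so the snippet below is only an assumed, simplified reference; the 1e-3 tolerance is the loss the test sets for the signal_ndim == 3 case.

# Illustrative cross-check only; numpy.fft.irfftn is assumed to be an acceptable
# reference for FFTWithSize(inverse=True, real=True, onesided=True).
import numpy as np

def irfft_reference(x_np, signal_ndim=3, norm="backward"):
    """Onesided N-dimensional inverse real FFT over the trailing axes."""
    axes = tuple(range(-signal_ndim, 0))
    return np.fft.irfftn(x_np, axes=axes, norm=norm)

# expected = irfft_reference(x_np)                # x_np from the repro sketch above
# np.testing.assert_allclose(out.asnumpy(), expected, rtol=1e-3, atol=1e-3)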

Related log / screenshot / 日志 / 截图 (Mandatory / 必填)

[ERROR] KERNEL(3565036,fffecb7fe070,python):2024-05-12-13:59:49.913.395 [mindspore/ccsrc/plugin/device/ascend/kernel/acl/acl_kernel_mod.cc:261] Launch] Kernel launch failed, msg: Acl compile and execute failed, op_type_:CustFFTWithSize

----------------------------------------------------
- Ascend Error Message:
----------------------------------------------------
E19014: 2024-05-12-13:59:49.909.944 Value [input __input0 shape] for Op [online_Node_Output] is invalid. Reason: contains negative or zero dimension.
        TraceBack (most recent call last):
        online_Node_Output(NetOutput) Verify failed.[FUNC:Verify][FILE:node_utils_ex.cc][LINE:127]
        Verifying online_Node_Output failed.[FUNC:InferShapeAndType][FILE:infershape_pass.cc][LINE:132]
        Call InferShapeAndType for node:online_Node_Output(NetOutput) failed[FUNC:Infer][FILE:infershape_pass.cc][LINE:120]
        process pass InferShapePass on node:online_Node_Output failed, ret:4294967295[FUNC:RunPassesOnNode][FILE:base_pass.cc][LINE:570]
        build graph failed, graph id:2, ret:1343242270[FUNC:BuildModelWithGraphId][FILE:ge_generator.cc][LINE:1615]
        [Build][SingleOpModel]call ge interface generator.BuildSingleOpModel failed. ge result = 1343242270[FUNC:ReportCallError][FILE:log_inner.cpp][LINE:161]
        [Build][Op]Fail to build op model[FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145]
        build op model failed, result = 500002[FUNC:ReportInnerError][FILE:log_inner.cpp][LINE:145]

(Please search "CANN Common Error Analysis" at https://www.mindspore.cn for error code description)

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ccsrc/transform/acl_ir/acl_utils.cc:379 Run

[ERROR] DEVICE(3565036,fffecb7fe070,python):2024-05-12-13:59:49.913.441 [mindspore/ccsrc/plugin/device/ascend/hal/hardware/ge_kernel_executor.cc:951] LaunchKernel] Launch kernel failed, kernel full name: Default/FFTWithSize-op2
FAILED

=================================== FAILURES ===================================
___________________________ test_p_fftwithsize_irfft ___________________________

    @Level2
    def test_p_fftwithsize_irfft():
        mode = "irfft"
        onesided_list = random.sample([True, True, False, False], 3)
        for i, norm_mode in enumerate(['backward', 'forward', 'ortho']):
            # choice 1/2/3 oneside True/False
            onesided = onesided_list[i]
            signal_ndim = signal_list[i]
            # choice complex64/128
            x_dtype = dtype_list[i]
            shape_rank = np.random.randint(signal_ndim, 8)
            x_shape = np.random.randint(2, 11, shape_rank)
            x, input_loss, output_loss = get_x(mode, norm_mode, x_dtype, x_shape)
            norm = get_norm_(mode, signal_ndim, norm_mode, x_shape, onesided)
            cmp_obj = get_cmp_(mode, signal_ndim, onesided)
    
            fact = FFTWithSizeMock(attributes={
                "signal_ndim": signal_ndim,
                "inverse": True,
                "real": True,
                "norm": norm_mode,
                "onesided": onesided,
                'signal_sizes': (),
            },
                inputs=[x])
    
            info_str = "signal_ndim={},norm_mode={},x_shape={},x_dtype={},cmp_obj={},loss={}".format(
                signal_ndim, norm_mode, x_shape, x_dtype, cmp_obj, [input_loss, output_loss])
            logger.info(info_str)
            # tf does not support IRFFT 3d gradient
            if signal_ndim == 3:
                fact.loss = 1e-3
>               fact.forward_cmp(cmp_obj, norm)

../test_fftwithsize.py:343: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../share/ops/primitive/fftwithsize_ops.py:173: in forward_cmp
    out_mindspore = self.forward_mindspore_impl()
../../share/ops/primitive/fftwithsize_ops.py:168: in forward_mindspore_impl
    out = net(self.input_x)
../../share/utils.py:253: in __call__
    out = super().__call__(*args, **kwargs)
/home/ci/miniconda3/envs/ci3.9/lib/python3.9/site-packages/mindspore/nn/cell.py:713: in __call__
    raise err
/home/ci/miniconda3/envs/ci3.9/lib/python3.9/site-packages/mindspore/nn/cell.py:710: in __call__
    _pynative_executor.end_graph(self, output, *args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <mindspore.common.api._PyNativeExecutor object at 0xfffe32436b80>
obj = WrapOp<>
output = Tensor(shape=[8, 9, 6, 10, 8, 3, 12], dtype=Float32, value=
[[[[[[[ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00 ....
      [ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00 ...  0.00000000e+00,  0.00000000e+00,  0.00000000e+00]]]]]]])
args = (Tensor(shape=[8, 9, 6, 10, 8, 3, 7], dtype=Complex64, value=
[[[[[[[0.848174+1.85267j, -0.303879-0.211966j, 0.290996+...42+0.577635j, 1.68219-1.23387j, -0.836732+0.131673j ... 0.795906-0.647839j, 0.705827+1.48806j, 1.6735+1.1798j]]]]]]]),)
kwargs = {}

    def end_graph(self, obj, output, *args, **kwargs):
        """
        Clean resources after building forward and backward graph.
    
        Args:
            obj (Function/Cell): The function or cell instance.
            output (Tensor/tuple/list): Function or cell output object.
            args (tuple): Function or cell input arguments.
            kwargs (dict): keyword arguments.
    
        Return:
            None.
        """
>       self._executor.end_graph(obj, output, *args, *(kwargs.values()))
E       RuntimeError: Launch kernel failed, name:Default/FFTWithSize-op2
E       
E       ----------------------------------------------------
E       - C++ Call Stack: (For framework developers)
E       ----------------------------------------------------
E       mindspore/ccsrc/runtime/pynative/op_runner.cc:632 LaunchKernels

/home/ci/miniconda3/envs/ci3.9/lib/python3.9/site-packages/mindspore/common/api.py:1303: RuntimeError

Special notes for this issue/备注 (Optional / 选填)

Comments (2)

tanxinglian created the Bug-Report
tanxinglian added the kind/bug label
tanxinglian added the sig/ops label
tanxinglian added the attr/function label
tanxinglian added the v2.3.0.rc2 label
tanxinglian added the device/ascend label

Please assign a maintainer to check this issue.
@tanxinglian

Thanks for your question. You can comment //mindspore-assistant to get help faster:

  1. If you are new to MindSpore, you may find the answer in the tutorials
  2. If you are an experienced PyTorch user, you may need:
  1. For PyNative (dynamic graph) issues, set set_context(pynative_synchronize=True) to get an error stack that helps locate the problem
  2. For model accuracy tuning, refer to the tuning guide on the official website
  3. If you are reporting a framework bug, please make sure the issue includes the MindSpore version, the backend used (CPU, GPU, Ascend), the environment, an official link to the training code, and the launch command for code that reproduces the error
  4. If you have already identified the root cause, you are welcome to submit a PR to the MindSpore open-source community; we will review it as soon as possible
fangwenyi removed the v2.3.0.rc2 label
fangwenyi added the master label
Shawny changed the milestone from B-SIG-Kit to B-SIG-Data
