What type of PR is this?

Uncomment only one `/kind <>` line, press Enter to put it on a new line, and remove the leading whitespace from that line:

/kind bug
/kind task
/kind feature

/kind bug

[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.037.046 [mindspore/ccsrc/pipeline/static_analysis/evaluator.cc:161] BroadenUndeterminedArgs] Current eval args: [const vector][AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractRef(key: AbstractRefKey(value: RefKey[global_step]) ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )]
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.037.199 [mindspore/ccsrc/pipeline/static_analysis/evaluator.cc:165] BroadenUndeterminedArgs] Joined args: [const vector][AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ),  AbstractRef(key: AbstractRefKey(value: RefKey[global_step])  ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )]
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.037.295 [mindspore/ccsrc/pipeline/static_analysis/evaluator.cc:253] Run] Evaluator basegraph_construct run for Defaultbasegraph_construct input[0] abstract value: AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )basegraph_construct input[1] abstract value: AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )basegraph_construct input[2] abstract value: AbstractRef(key: AbstractRefKey(value: RefKey[global_step]) ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ))basegraph_construct input[3] abstract value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )basegraph_construct input[4] abstract value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.049.610 [mindspore/ccsrc/pipeline/static_analysis/program_specialize.cc:615] FindUniqueArgvals] Evaluator cache has a single item, just use it.
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.049.780 [mindspore/ccsrc/pipeline/static_analysis/evaluator.cc:142] NormalizeArgs] construct original: [const vector][AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractRef(key: AbstractRefKey(value: RefKey[global_step]) ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )], broaded: [const vector][AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractRef(key: AbstractRefKey(value: AnyValue) ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue 
Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )]
...
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.049.919 [mindspore/ccsrc/pipeline/static_analysis/program_specialize.cc:394] BuildSpecializedNodeInner] Specialize function graph: construct, args: 5, graph: construct:[CNode]9{[0]: ValueNode<Primitive> return, [1]: [CNode]11}
...
[DEBUG] ANALYZER(6393,python):2020-06-30-00:38:01.050.427 [mindspore/ccsrc/pipeline/static_analysis/static_analysis.cc:176] Eval] Begin Eval NodeConfig Node: Φout, Context: {Func Graph: construct Args: [0]: AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), [1]: AbstractTensor(shape: (), element: AbstractScalar(Type: Int32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), [2]: AbstractRef(key: AbstractRefKey(value: AnyValue) ref_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ) origin_value: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad )), [3]: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), [4]: AbstractTensor(shape: (1024, 512), element: AbstractScalar(Type: Float32 Value: AnyValue Shape: NoShape sparse_grad: ), value_ptr: 0x55c765acf340, value: AnyValue sparse_grad ), Parent: { Args: }}
[ERROR] ANALYZER(6393,python):2020-06-30-00:38:01.050.445 [mindspore/ccsrc/pipeline/static_analysis/static_analysis.cc:191] Eval] Illegal AnfNode for evaluating, Φout. graph: construct

  1. The reason the abstract value of Φout cannot be found is that the args_spec for graph `construct` differs between analysis and specialization.
    In analysis:
    AbstractRef(key: AbstractRefKey(value: RefKey[global_step])
    In specialization:
    [2]: AbstractRef(key: AbstractRefKey(value: AnyValue)

The args_spec is changed in NormalizeArgs [mindspore/ccsrc/pipeline/static_analysis/evaluator.cc:142].

  2. Why it is changed:
    The graph `construct` has the flag FUNC_GRAPH_FLAG_IGNORE_VALUES set.

  3. Why this flag is set:
    In FuncGraphEvaluator::BroadenUndeterminedArgs():

if (!(joined_args_spec_list == args_spec_list)) {
  func_graph_->set_flag(FUNC_GRAPH_FLAG_IGNORE_VALUES, true);
}

Here joined_args_spec_list is compared with args_spec_list by pointer, but AbstractTensor::Join() always returns a new AbstractTensor, so the pointer comparison always fails and the flag is always set.

What does this PR do / why do we need it:

Which issue(s) this PR fixes:

Fixes #
https://gitee.com/mindspore/dashboard/programs/67813/milestones/30692?issue_id=I1M2GI
Special notes for your reviewers: