Workflow task issues

This section describes how to resolve some issues you might see when running a workflow task.

For information on viewing the failures reported by a workflow task, see Task details, status, and results.

For each issue below, the description and resolution follow the issue statement.
For a workflow that uses an HCP MQE data connection, the task reports 0 documents input.

Do one of these:

  • If you configured the data connection to read from an entire HCP system or tenant, verify that the HCP user account you specified has access permissions for all namespaces on that HCP system or tenant.
  • If you specified a folder to read from, make sure that the directory exists on HCP (one way to check both conditions is sketched after this list).
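
A quick way to check both conditions outside of the product is to query the HCP REST gateway directly. The sketch below is only an illustration: it assumes the standard HCP REST Authorization header, and the host name, account, password, and folder path are placeholders, not values from this article.

  # Check that the data connection's HCP account can reach a namespace and
  # that the folder the connection reads from exists. Placeholder values only.
  import base64
  import hashlib
  import requests

  HCP_NAMESPACE_HOST = "ns1.tenant1.hcp.example.com"  # <namespace>.<tenant>.<hcp-domain>
  HCP_USER = "dataconn-user"                          # account configured on the data connection
  HCP_PASSWORD = "secret"
  FOLDER = "/incoming/documents"                      # folder configured on the data connection

  # HCP REST authentication token: base64(username) + ":" + md5(password) as hex
  token = (
      base64.b64encode(HCP_USER.encode()).decode()
      + ":"
      + hashlib.md5(HCP_PASSWORD.encode()).hexdigest()
  )
  headers = {"Authorization": "HCP " + token}

  # HEAD the folder through the namespace REST gateway.
  # verify=False only because many HCP systems use self-signed certificates.
  resp = requests.head(
      "https://" + HCP_NAMESPACE_HOST + "/rest" + FOLDER,
      headers=headers,
      verify=False,
  )
  if resp.status_code == 200:
      print("Folder exists and the account can read it")
  elif resp.status_code == 404:
      print("Folder does not exist in this namespace")
  elif resp.status_code in (401, 403):
      print("The account cannot access this namespace")
  else:
      print("Unexpected response: " + str(resp.status_code))

Repeating the same request against each namespace the workflow should read confirms that the account has access permissions everywhere it needs them.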

A document fails with this message:

Document contains at least one immense term in field="<field-name>" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[]...', original message: bytes can be at most 32766 in length; got 953488. Perhaps the document has an indexed string field (solr.StrField) which is too large

The field value is too long to be indexed as the currently selected type.

In the Index collection schema, select a different type for the field.
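
In the product, you make this change on the Index collection schema page. Purely as an illustration of what the change amounts to at the Solr level, the sketch below uses Solr's Schema API to replace a plain string field (solr.StrField, whose single indexed term is limited to 32766 bytes) with a tokenized text type. The Solr URL, collection name, and field name are placeholders, not values from this article.

  # Replace a string field with a tokenized text type via the Solr Schema API.
  # Placeholder URL, collection, and field name; not a product procedure.
  import requests

  SOLR_URL = "http://localhost:8983/solr"
  COLLECTION = "my-index-collection"
  FIELD = "Content_Text"

  payload = {
      "replace-field": {
          "name": FIELD,
          "type": "text_general",  # tokenized TextField instead of solr.StrField
          "indexed": True,
          "stored": True,
      }
  }
  resp = requests.post(SOLR_URL + "/" + COLLECTION + "/schema", json=payload)
  resp.raise_for_status()
  print("Field type updated")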

A document fails with this message, instead of a more descriptive error message:

com.hds.ensemble.sdk.exception.PluginOperationFailedException: SolrPlugin error: Bad Request

This can occur when indexing documents to an index collection with multiple shards.

When these failures occur, check the Caused by section of the error message to see whether more details are reported.
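
For example, a few lines of scripting can pull the Caused by entries out of a long failure message so the underlying reason is easier to spot; the sample message below is invented for illustration.

  # Print only the "Caused by" lines from a failure message (sample text is invented).
  error_message = """com.hds.ensemble.sdk.exception.PluginOperationFailedException: SolrPlugin error: Bad Request
  Caused by: org.apache.solr.common.SolrException: Bad Request
  Caused by: java.lang.IllegalArgumentException: unknown field 'Example_Field'"""

  for line in error_message.splitlines():
      if line.strip().startswith("Caused by:"):
          print(line.strip())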

If you need more information about the failure:

  1. Create a new index collection with a single shard.
  2. Create a new workflow.
  3. Add your data connections and pipelines to the new workflow.
  4. Run the new workflow task.

    The task produces more descriptive indexing error messages.

  5. Make the necessary corrections to your original index collection pipeline.

A document fails with this message:

java.lang.OutOfMemoryError: Java heap space

The workflow task does not have enough memory.

Increase the Driver Heap Limit or Executor Heap Limit setting for the task.

A task halts with this message:

java.lang.OutOfMemoryError: GC overhead limit exceeded

Like the Java heap space error above, this indicates that the workflow task does not have enough memory. Increase the Driver Heap Limit or Executor Heap Limit setting for the task.

The Input value on the task Metrics page is higher than expected.

Possible explanations:

  • Your workflow uses the HCP MQE (Hitachi Content Platform Metadata Query Engine) data connection to read from an HCP namespace that has versioning enabled. The Input value includes old versions of each HCP object processed.
  • The Retry Failed Documents setting is enabled for the workflow task. The Input value increases each time a failed document is retried. (A rough illustration of both effects follows this list.)
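
As a rough illustration with invented numbers, here is how much these two factors can inflate the count:

  # Invented numbers: how versioning and retries inflate the Input value.
  live_objects = 10_000         # objects currently visible in the HCP namespace
  versions_per_object = 3       # versioning enabled: older versions are read too
  retried_failures = 250        # Retry Failed Documents re-submits each failure

  input_value = live_objects * versions_per_object + retried_failures
  print(input_value)            # 30250, well above the 10,000 live objects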

 
