Hi, I'm trying to convert the model to ONNX format using the code below.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)
df = pd.read_csv("AirPassengers.csv")
dummy_input = torch.tensor(df['#Passengers']).to("cuda")

torch.onnx.export(
    pipeline.model,
    dummy_input,
    "chronos_model.onnx",
    verbose=True,
    input_names=["input"],
    output_names=["output"],
)
I'm facing the error below:
File /libraries/gpu_py_3_11/lib/python3.11/site-packages/transformers/models/t5/modeling_t5.py:1023, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
1020 raise ValueError("You have to initialize the model with valid token embeddings")
1021 inputs_embeds = self.embed_tokens(input_ids)
-> 1023 batch_size, seq_length = input_shape
1025 # required mask seq length can be calculated via length of past
1026 mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length
ValueError: not enough values to unpack (expected 2, got 1)
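If it helps with diagnosis: I suspect the unpack failure happens because my dummy input is 1-D, while the failing line `batch_size, seq_length = input_shape` seems to expect a 2-D (batch, seq_length) input. A minimal reproduction with plain torch (no transformers involved, just illustrating the shape issue):

```python
import torch

# A 1-D tensor, like the one I pass to torch.onnx.export above
dummy_input = torch.tensor([112.0, 118.0, 132.0, 129.0])
print(dummy_input.shape)  # torch.Size([4]) -- only one dimension

# T5Stack.forward unpacks the shape into two values, which fails for 1-D input:
try:
    batch_size, seq_length = dummy_input.shape
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)

# Adding a batch dimension makes the unpack succeed
batched = dummy_input.unsqueeze(0)  # shape (1, 4)
batch_size, seq_length = batched.shape
```

I'm not sure, though, whether `pipeline.model` can be exported with raw series values at all, or whether it expects already-tokenized input_ids, so any guidance on the right dummy input would be appreciated.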
Could anyone please help?
#ONNX