
No option when loading data from Spark to Snowflake to add new columns to an existing table using overwrite mode #554

pragathisharma-8451 commented May 3, 2024

The existing table my_table in Snowflake has 10 columns, and we want to add 7 additional columns to it.

new_df holds all 17 columns:

# Overwrite my_table directly, with the staging table disabled
(new_df.write.format("net.snowflake.spark.snowflake")
    .options(**sfOptions)
    .option("dbtable", "my_table")
    .option("usestagingtable", "off")
    .mode("overwrite")
    .save())

An error occurred while calling o761.save.
: java.sql.SQLException: Status of query associated with resultSet is FAILED_WITH_ERROR. Number of columns in file (17) does not match that of the corresponding table (10), use file format option error_on_column_count_mismatch=false to ignore this error

We tried various options, such as
.option("column_mismatch_behavior", "ignore")
and
.option("error_on_column_count_mismatch", "false")

Neither worked. Kindly let me know if there are ways to handle this. As a workaround, we are looking at either altering the table to add the new columns before overwriting, or writing to a new temp table and swapping it with my_table (a sketch of both follows).
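For reference, a minimal sketch of both workarounds, assuming the connector's Utils.runQuery helper (part of spark-snowflake, reachable from PySpark through the Py4J gateway as spark._jvm) and a hypothetical new_cols mapping standing in for the 7 added columns:

# Sketch only: sfOptions is the same options dict used above.
sf_utils = spark._jvm.net.snowflake.spark.snowflake.Utils

# Workaround 1: add the missing columns first, then overwrite.
# new_cols is a hypothetical name -> Snowflake type mapping for the new columns.
new_cols = {"col11": "VARCHAR", "col12": "NUMBER"}  # ...and so on up to col17
for name, dtype in new_cols.items():
    sf_utils.runQuery(sfOptions, f"ALTER TABLE my_table ADD COLUMN {name} {dtype}")

(new_df.write.format("net.snowflake.spark.snowflake")
    .options(**sfOptions)
    .option("dbtable", "my_table")
    .mode("overwrite")
    .save())

# Workaround 2: write to a temp table, then atomically swap it in.
(new_df.write.format("net.snowflake.spark.snowflake")
    .options(**sfOptions)
    .option("dbtable", "my_table_tmp")
    .mode("overwrite")
    .save())
sf_utils.runQuery(sfOptions, "ALTER TABLE my_table_tmp SWAP WITH my_table")
sf_utils.runQuery(sfOptions, "DROP TABLE my_table_tmp")

Either approach avoids the column-count mismatch, because the table being written to already has (or is created with) the full 17-column schema.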
