
Max retry reached for the download of chunk after 6 hrs #2023

Closed
anmolEnterprises opened this issue Jan 9, 2025 · 3 comments
@anmolEnterprises

I am getting the errors below while retrieving a large dataset. The exception occurs after 6 hours every time I execute the query.
JDBC driver: snowflake-jdbc-3.21.0
OS: Windows
Java: 17

```
Jan 08, 2025 5:50:33 PM net.snowflake.client.jdbc.SnowflakeUtil logResponseDetails
SEVERE: Response status line reason: Forbidden
Jan 08, 2025 5:50:33 PM net.snowflake.client.jdbc.SnowflakeUtil logResponseDetails
SEVERE: Response content: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Request has expired</Message><X-Amz-Expires>21599</X-Amz-Expires><Expires>2025-01-08T11:58:11Z</Expires><ServerTime>2025-01-08T12:20:32Z</ServerTime><RequestId>xxxxxxxxxx</RequestId><HostId>/nCa3X0hEvZn3xxxx3xKroHlABPHebNly6NdTfpkkkkLByxV8k8+wythIcWtYKUg/5782sFMc=</HostId></Error>

net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Max retry reached for the download of chunk#168 (Total chunks: 794) retry: 7, error: net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Error encountered when downloading a result chunk: HTTP status: 403.
	at net.snowflake.client.jdbc.DefaultResultStreamProvider.getInputStream(DefaultResultStreamProvider.java:71)
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:1029)
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader$2.call(SnowflakeChunkDownloader.java:943)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)
	.
	at net.snowflake.client.jdbc.SnowflakeChunkDownloader.getNextChunkToConsume(SnowflakeChunkDownloader.java:633)
	at net.snowflake.client.core.SFResultSet.fetchNextRowUnsorted(SFResultSet.java:240)
	at net.snowflake.client.core.SFResultSet.fetchNextRow(SFResultSet.java:206)
	at net.snowflake.client.core.SFResultSet.next(SFResultSet.java:304)
	at net.snowflake.client.jdbc.SnowflakeResultSetV1.next(SnowflakeResultSetV1.java:123)
	at SnowflakeMain.main(SnowflakeMain.java:57)
```

Can you please help us tackle this issue for long-running retrievals, i.e. when reading the result set takes more than 6 hours? I have added the main program below.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.Properties;

public class SnowflakeMain {
    public static void main(String[] args) {
        String url = "jdbc:snowflake://xxxxxxx.us-east-1.snowflakecomputing.com/?db=xxxx&warehouse=xxxx&CLIENT_SESSION_KEEP_ALIVE=true&CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY=900";
        String driverClass = "net.snowflake.client.jdbc.SnowflakeDriver";
        String username = "xxxx";
        String password = "xxxx";
        String query = "select * from PUBLIC.EMPLOYEE_INFO_100M order by 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20";

        Connection connection = null;
        PreparedStatement ps = null;
        ResultSet rs = null;

        Properties props = new Properties();
        props.put("user", username);
        props.put("password", password);
        props.put("partner", "xxxx");
        props.put("JDBC_QUERY_RESULT_FORMAT", "JSON");
        props.put("CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX", "true");
        props.put("QUOTED_IDENTIFIERS_IGNORE_CASE", "true");

        try {
            Class.forName(driverClass);
            connection = DriverManager.getConnection(url, props);
            ps = connection.prepareStatement(query);
            ps.setFetchSize(1000);
            rs = ps.executeQuery();
            ResultSetMetaData rsmd = rs.getMetaData();

            // Result chunks are downloaded lazily inside rs.next().
            while (rs.next()) {
                for (int i = 0; i < rsmd.getColumnCount(); i++) {
                    rs.getString(i + 1);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Close resources in reverse order of creation.
            try { if (rs != null) rs.close(); } catch (SQLException e1) { e1.printStackTrace(); }
            try { if (ps != null) ps.close(); } catch (SQLException e1) { e1.printStackTrace(); }
            try { if (connection != null) connection.close(); } catch (SQLException e1) { e1.printStackTrace(); }
        }
    }
}
```
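For reference, here is how the fetch loop could be instrumented to confirm that the failure lines up with the roughly 6-hour expiry visible in the log (X-Amz-Expires is 21599 seconds). This is only an illustrative sketch, not part of the original program:

```java
// Illustrative sketch: the same fetch loop with elapsed-time logging, to check that the
// chunk-download failure coincides with the ~6 h expiry of the presigned chunk URLs.
long startNanos = System.nanoTime();
long rowCount = 0;
while (rs.next()) {
    for (int i = 0; i < rsmd.getColumnCount(); i++) {
        rs.getString(i + 1);
    }
    rowCount++;
    if (rowCount % 1_000_000 == 0) {
        long elapsedSeconds = (System.nanoTime() - startNanos) / 1_000_000_000L;
        System.out.printf("rows=%d, elapsed=%d s%n", rowCount, elapsedSeconds);
    }
}
```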
@sfc-gh-dszmolka self-assigned this on Jan 9, 2025
@sfc-gh-dszmolka added the question and status-triage_done labels and removed the bug label on Jan 9, 2025
@sfc-gh-dszmolka (Contributor) commented on Jan 9, 2025

hey there - you're seeing the expected and documented behaviour of Snowflake, independent of the JDBC driver. The behaviour is governed by the backend, so it's the same for every client library.

Reference: https://docs.snowflake.com/en/user-guide/querying-persisted-results, quoting:

> Note that the security token used to access large persisted query results (i.e. greater than 100 KB in size) expires after 6 hours. A new token can be retrieved to access results while they are still in cache.

As a workaround, can you try retrieving the result within 6 hours, before that token expires, and see if it helps?
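One way to keep each read inside that window is to split the scan into smaller slices and consume each slice completely before starting the next one. A minimal sketch, reusing the connection from your program; the numeric key EMPLOYEE_ID and the range bounds are assumptions for illustration, not taken from your schema:

```java
// Sketch: read the table in key ranges so each result set is consumed well within 6 hours.
// EMPLOYEE_ID and the bounds below are assumptions; adapt to the real schema and data volume.
String rangeQuery =
    "select * from PUBLIC.EMPLOYEE_INFO_100M where EMPLOYEE_ID >= ? and EMPLOYEE_ID < ? order by EMPLOYEE_ID";
long minId = 0L;            // assumed lower bound of the key (exclusive upper bound below)
long maxId = 100_000_000L;  // assumed upper bound of the key
long batchSize = 10_000_000L;

try (PreparedStatement batchPs = connection.prepareStatement(rangeQuery)) {
    batchPs.setFetchSize(1000);
    for (long lo = minId; lo < maxId; lo += batchSize) {
        batchPs.setLong(1, lo);
        batchPs.setLong(2, Math.min(lo + batchSize, maxId));
        try (ResultSet batchRs = batchPs.executeQuery()) {
            ResultSetMetaData md = batchRs.getMetaData();
            // Consume this slice fully before moving to the next range.
            while (batchRs.next()) {
                for (int i = 0; i < md.getColumnCount(); i++) {
                    batchRs.getString(i + 1);
                }
            }
        }
    }
}
```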

Edited to add: if this approach doesn't work out for you for any reason, Snowflake Support can adjust this parameter and increase the maximum to 24 hours. Since the limit was added for security reasons, it is best to find the lowest value that is still high enough rather than going straight to 24 hours.
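If neither option fits, another client-side possibility is to catch the failure and re-execute the query. This is only a sketch: it assumes the re-run can be served from the persisted result cache (per the quote above, results stay in cache), which you would need to verify for your account, and it restarts the read from the first row:

```java
// Sketch only: re-execute the query when a chunk download fails after the token expiry.
// Assumes `connection` and `query` from the program above; whether the second run is
// served from the result cache depends on the account and query, so verify before relying on it.
int maxAttempts = 2;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try (PreparedStatement stmt = connection.prepareStatement(query)) {
        stmt.setFetchSize(1000);
        try (ResultSet resultSet = stmt.executeQuery()) {
            ResultSetMetaData md = resultSet.getMetaData();
            while (resultSet.next()) {
                for (int i = 0; i < md.getColumnCount(); i++) {
                    resultSet.getString(i + 1);
                }
            }
        }
        break; // full read completed
    } catch (SQLException e) {
        System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
        if (attempt == maxAttempts) {
            throw new RuntimeException("All attempts failed", e);
        }
    }
}
```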

@sfc-gh-dszmolka added the status-information_needed label on Jan 9, 2025
@anmolEnterprises (Author)

Hey,
I appreciate the update, thank you.

@sfc-gh-dszmolka removed the status-information_needed label on Jan 10, 2025
@sfc-gh-dszmolka (Contributor)

No worries. I'm going to close this issue now since it has nothing to do with the JDBC driver and the next steps have been provided, but if you still need further help please let me know and I can reopen.

@sfc-gh-dszmolka closed this as not planned on Jan 10, 2025