
[BUG] fromScalar can fail when parsing string as timestamp #11924

Open · kuhushukla opened this issue Jan 6, 2025 · 1 comment

Labels: bug (Something isn't working)

@kuhushukla (Collaborator)
Describe the bug
A projection that contains string-to-timestamp conversions fails intermittently with the error shown below. Because it does not always fail, the issue may be data driven. The timezone was UTC / GMT. I will gather and add more information as it becomes available.

ai.rapids.cudf.CudfException: RMM failure at: <path>/target/libcudf-install/include/rmm/device_uvector.hpp:320: Attempt to access out of bounds element.
                at ai.rapids.cudf.ColumnVector.fromScalar(Native Method)
                at ai.rapids.cudf.ColumnVector.fromScalar(ColumnVector.java:433)
                at org.apache.spark.sql.rapids.GpuToTimestamp$.isTimestamp(datetimeExpressions.scala:701)
                at org.apache.spark.sql.rapids.GpuToTimestamp$.parseStringAsTimestamp(datetimeExpressions.scala:713)
                at org.apache.spark.sql.rapids.GpuToTimestamp.doColumnar(datetimeExpressions.scala:872)
                at com.nvidia.spark.rapids.GpuBinaryExpression.$anonfun$columnarEval$3(GpuExpressions.scala:318)
                at com.nvidia.spark.rapids.Arm$.withResourceIfAllowed(Arm.scala:84)
                at com.nvidia.spark.rapids.GpuBinaryExpression.$anonfun$columnarEval$2(GpuExpressions.scala:311)
                at com.nvidia.spark.rapids.Arm$.withResourceIfAllowed(Arm.scala:84)
                at com.nvidia.spark.rapids.GpuBinaryExpression.columnarEval(GpuExpressions.scala:310)
                at com.nvidia.spark.rapids.GpuBinaryExpression.columnarEval$(GpuExpressions.scala:309)
                at org.apache.spark.sql.rapids.GpuToTimestamp.columnarEval(datetimeExpressions.scala:833)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:35)
                at com.nvidia.spark.rapids.GpuUnaryExpression.columnarEval(GpuExpressions.scala:260)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:35)
                at com.nvidia.spark.rapids.GpuAlias.columnarEval(namedExpressions.scala:110)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$ReallyAGpuExpression.columnarEval(implicits.scala:35)
                at com.nvidia.spark.rapids.GpuProjectExec$.$anonfun$project$1(basicPhysicalOperators.scala:110)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1(implicits.scala:221)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.$anonfun$safeMap$1$adapted(implicits.scala:218)
                at scala.collection.immutable.List.foreach(List.scala:431)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$MapsSafely.safeMap(implicits.scala:218)
                at com.nvidia.spark.rapids.RapidsPluginImplicits$AutoCloseableProducingSeq.safeMap(implicits.scala:253)
                at com.nvidia.spark.rapids.GpuProjectExec$.project(basicPhysicalOperators.scala:110)
                at com.nvidia.spark.rapids.GpuTieredProject.$anonfun$project$2(basicPhysicalOperators.scala:615)
                at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:30)
                at com.nvidia.spark.rapids.GpuTieredProject.recurse$2(basicPhysicalOperators.scala:614)
                at com.nvidia.spark.rapids.GpuTieredProject.project(basicPhysicalOperators.scala:627)
                at com.nvidia.spark.rapids.GpuTieredProject.$anonfun$projectWithRetrySingleBatchInternal$5(basicPhysicalOperators.scala:563)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRestoreOnRetry(RmmRapidsRetryIterator.scala:272)
                at com.nvidia.spark.rapids.GpuTieredProject.$anonfun$projectWithRetrySingleBatchInternal$4(basicPhysicalOperators.scala:563)
                at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:30)
                at com.nvidia.spark.rapids.GpuTieredProject.$anonfun$projectWithRetrySingleBatchInternal$3(basicPhysicalOperators.scala:561)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$NoInputSpliterator.next(RmmRapidsRetryIterator.scala:395)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryIterator.next(RmmRapidsRetryIterator.scala:613)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$RmmRapidsRetryAutoCloseableIterator.next(RmmRapidsRetryIterator.scala:517)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.drainSingleWithVerification(RmmRapidsRetryIterator.scala:291)
                at com.nvidia.spark.rapids.RmmRapidsRetryIterator$.withRetryNoSplit(RmmRapidsRetryIterator.scala:185)
                at com.nvidia.spark.rapids.GpuTieredProject.$anonfun$projectWithRetrySingleBatchInternal$1(basicPhysicalOperators.scala:561)
                at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:39)
                at com.nvidia.spark.rapids.GpuTieredProject.projectWithRetrySingleBatchInternal(basicPhysicalOperators.scala:558)
                at com.nvidia.spark.rapids.GpuTieredProject.projectAndCloseWithRetrySingleBatch(basicPhysicalOperators.scala:597)
                at com.nvidia.spark.rapids.GpuProjectExec.$anonfun$internalDoExecuteColumnar$2(basicPhysicalOperators.scala:380)
                at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:30)
                at com.nvidia.spark.rapids.GpuProjectExec.$anonfun$internalDoExecuteColumnar$1(basicPhysicalOperators.scala:376)
                at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
                at com.nvidia.spark.rapids.CollectTimeIterator.$anonfun$next$1(GpuExec.scala:229)
                at com.nvidia.spark.rapids.Arm$.withResource(Arm.scala:30)
                at com.nvidia.spark.rapids.CollectTimeIterator.next(GpuExec.scala:228)
                at com.nvidia.spark.rapids.CloseableBufferedIterator.next(CloseableBufferedIterator.scala:65)
                at com.nvidia.spark.rapids.CloseableBufferedIterator.next(CloseableBufferedIterator.scala:32)
                at scala.collection.Iterator$$anon$10.next(Iterator.scala:461)
                at com.nvidia.spark.rapids.SplittableJoinIterator.setupNextGatherer(AbstractGpuJoinIterator.scala:222)
                at com.nvidia.spark.rapids.AbstractGpuJoinIterator.hasNext(AbstractGpuJoinIterator.scala:102)
                at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
                at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
                at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.partNextBatch(GpuShuffleExchangeExecBase.scala:332)
                at org.apache.spark.sql.rapids.execution.GpuShuffleExchangeExecBase$$anon$1.hasNext(GpuShuffleExchangeExecBase.scala:355)
                at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:179)
                at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
                at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
                at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
                at org.apache.spark.scheduler.Task.run(Task.scala:131)
                at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
                at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1476)
                at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
                at java.lang.Thread.run(Thread.java:750)
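For reference, the failing path (GpuToTimestamp.isTimestamp / parseStringAsTimestamp in the trace above) performs a string-to-timestamp parse over a projected column. The following is a plain-Python sketch of those semantics only, with an assumed format string and made-up sample rows; it is not the spark-rapids implementation and not a confirmed reproducer:

```python
# Semantic sketch of string-to-timestamp parsing over a column of strings.
# Hypothetical format and data; Spark's to_timestamp yields null (here: None)
# when a string does not match the expected pattern.
from datetime import datetime, timezone


def parse_string_as_timestamp(s, fmt="%Y-%m-%d %H:%M:%S"):
    """Parse a string into a UTC timestamp, or None if it doesn't match fmt."""
    try:
        return datetime.strptime(s, fmt).replace(tzinfo=timezone.utc)
    except ValueError:
        return None


# A mix of valid and invalid rows, mimicking data-dependent behavior.
rows = ["2025-01-06 12:34:56", "not-a-timestamp", "2025-01-06 00:00:00"]
parsed = [parse_string_as_timestamp(r) for r in rows]
print(parsed[1] is None)  # prints True: invalid strings parse to None
```

The intermittent nature of the failure suggests the GPU path may only be exercised (or only fails) for certain input batches, which a CPU-side sketch like this cannot capture.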

Steps/Code to reproduce bug
TBD
Expected behavior
No errors

Environment details (please complete the following information)

  • on-prem Spark 3.2


@kuhushukla kuhushukla added ? - Needs Triage Need team to review and classify bug Something isn't working labels Jan 6, 2025
@mattahrens (Collaborator)

Please update as more info becomes available @kuhushukla and then we'll assign to an engineer.

@mattahrens mattahrens removed the ? - Needs Triage Need team to review and classify label Jan 21, 2025