I'm trying to create a Hive table with Snappy compression via Spark 2. I need to create the Hive table from Spark SQL, stored in PARQUET format with SNAPPY compression:

    sql("CREATE TABLE parquet_table_name (x INT, y STRING) STORED AS PARQUET")
    sql("INSERT INTO parquet_table_name VALUES (1, 'test')")
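In case the defaults differ between environments, the codec can also be declared explicitly instead of relying on them. A minimal spark-shell sketch of that, assuming spark is the active SparkSession; spark.sql.parquet.compression.codec is the standard Spark setting and parquet.compression the matching Parquet table property, while the table name simply reuses parquet_table_name from above:

    // Session-wide default for Parquet writes (snappy is already the Spark 2.x default).
    spark.sql("SET spark.sql.parquet.compression.codec=snappy")

    // Or pin the codec on the table itself via a Parquet table property,
    // so it survives session-level config changes.
    spark.sql("""
      CREATE TABLE parquet_table_name (x INT, y STRING)
      STORED AS PARQUET
      TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
    """)

    spark.sql("INSERT INTO parquet_table_name VALUES (1, 'test')")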
Then I get this error:

18/04/26 21:03:44 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, .th, executor 1): org.apache.spark.SparkException: Task failed while writing rows.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:197)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:196)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused by: java.lang.UnsatisfiedLinkError: org.xerial.snappy.Snappy.maxCompressedLength(I)I
    at org.xerial.snappy.Snappy.maxCompressedLength(Native Method)
    at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:316)
    at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67)
    at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
    at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
    at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112)
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89)
    at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:153)
    at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:241)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:126)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:159)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:111)
    at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.releaseResources(FileFormatWriter.scala:405)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:396)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:269)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:267)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1411)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
18/04/26 21:03:44 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
18/04/26 21:03:44 ERROR datasources.FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, .th, executor 1): org.apache.spark.SparkException: Task failed while writing rows.
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:194)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:115)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
    at $line23.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:24)
    ...
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    ...
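The Caused by line is the telling part: the JVM could not bind the native maxCompressedLength method of snappy-java at the moment Parquet compressed its first page, which usually means either a snappy-java version clash on the classpath or a native library that failed to load. A small spark-shell sketch for narrowing that down (both checks are plain snappy-java/JVM calls, not anything taken from the original post):

    // Which jar did the Snappy class actually load from? A clash between the
    // snappy-java bundled with Spark and an older copy pulled in by Hadoop/Hive
    // shows up here as an unexpected jar path.
    println(classOf[org.xerial.snappy.Snappy].getProtectionDomain.getCodeSource.getLocation)

    // Does the native binding work at all? If the native library could not be
    // loaded (e.g. a tmp directory mounted noexec, where snappy-java extracts
    // its shared library before loading it), this call reproduces the same
    // UnsatisfiedLinkError outside of any Parquet write.
    println(org.xerial.snappy.Snappy.maxCompressedLength(1024))

If the second call throws the same UnsatisfiedLinkError on the executors, the problem is native library loading rather than the Parquet write itself.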