Open
Labels: bug (Something isn't working)
Description
Describe the bug
When using the native Parquet writer with `INSERT INTO ... SELECT` syntax, subsequent reads return empty results even though the file is written successfully. This could be due to Spark caching.
Ref https://github.com/apache/datafusion-comet/pull/3479/changes#r2804765579
```scala
withSQLConf(
  CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> "true",
  CometConf.COMET_EXEC_ENABLED.key -> "true",
  CometConf.getOperatorAllowIncompatConfigKey(classOf[DataWritingCommandExec]) -> "true") {
  sql("create table t(i boolean) using parquet")
  sql("alter table t add column s bigint default 42")
  sql("insert into t select false, default")
  spark.table("t").show() // Returns empty!
}
```
Expected: `false, 42`
Actual: `(0 rows)`
Note: This issue does NOT affect `INSERT INTO ... VALUES` syntax, only `INSERT INTO ... SELECT`.
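For reference, a minimal SQL-level sketch of the two insert paths against the table `t` from the reproduction above (the VALUES behavior restates the note in this report; it is not a separately verified result):

```sql
-- VALUES path: reads back the inserted row as expected (per the note above)
INSERT INTO t VALUES (false, DEFAULT);

-- SELECT path: file is written, but subsequent reads return 0 rows
INSERT INTO t SELECT false, DEFAULT;
```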
Steps to reproduce
No response
Expected behavior
No response
Additional context
No response