
[Native Writer] INSERT INTO ... SELECT fails due to stale catalog cache after write #3521

@Shekharrajak

Description


Describe the bug

When using the native Parquet writer with INSERT INTO ... SELECT syntax, subsequent reads return empty results even though the Parquet file is written successfully. This appears to be caused by a stale catalog cache: Spark's cached relation for the table is not invalidated after the native write.

Ref https://github.com/apache/datafusion-comet/pull/3479/changes#r2804765579

withSQLConf(
  CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> "true",
  CometConf.COMET_EXEC_ENABLED.key -> "true",
  CometConf.getOperatorAllowIncompatConfigKey(classOf[DataWritingCommandExec]) -> "true") {

  sql("create table t(i boolean) using parquet")
  sql("alter table t add column s bigint default 42")
  // The INSERT ... SELECT path triggers the bug; INSERT ... VALUES does not.
  sql("insert into t select false, default")
  spark.table("t").show()  // Returns empty!
}

Expected: false, 42

Actual: (0 rows)

Note: This issue does NOT affect INSERT INTO ... VALUES syntax, only INSERT INTO ... SELECT.
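
If the stale-cache hypothesis is right, explicitly invalidating the table's cached metadata after the insert should make the row visible again. A sketch to confirm, run in the same session as the repro above (REFRESH TABLE and Catalog.refreshTable are standard Spark APIs, not Comet-specific; untested against this bug):

// Force Spark to drop its cached relation/file listing for `t`
// before reading it back.
sql("REFRESH TABLE t")
// or equivalently via the Catalog API:
// spark.catalog.refreshTable("t")

spark.table("t").show()  // expected to print: false, 42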

Steps to reproduce

No response

Expected behavior

No response

Additional context

No response
