
ZeroCopy Conversion from Spark ColumnarBatch #3518

@tokoko

Description

What is the problem the feature request solves?

When a Spark DataSource API implementation reads data as ColumnarBatch objects, DataFusion can already do a direct columnar-to-columnar conversion. This avoids row conversion, but data still needs to be copied even when the data in the ColumnarBatch is already Arrow.

Describe the potential solution

The conversion should first check whether the vectors in the ColumnarBatch are instances of org.apache.spark.sql.vectorized.ArrowColumnVector. When that's the case, the conversion should be zero-copy and probably ignore DataFusion's maxRowsPerBatch config (whether that setting should still apply on this path is an open question).
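
A minimal sketch of that check, assuming Scala on the JVM side; `isArrowBacked` and `convert` are hypothetical names, and the two branch bodies are placeholders for whatever the actual zero-copy hand-off and the existing copying conversion are called:

```scala
import org.apache.spark.sql.vectorized.{ArrowColumnVector, ColumnarBatch}

object ArrowBatchCheck {

  // A batch is eligible for a zero-copy hand-off only if every column
  // is already backed by an ArrowColumnVector.
  def isArrowBacked(batch: ColumnarBatch): Boolean =
    (0 until batch.numCols()).forall(i => batch.column(i).isInstanceOf[ArrowColumnVector])

  // Hypothetical dispatch point in the conversion path.
  def convert(batch: ColumnarBatch): Unit = {
    if (isArrowBacked(batch)) {
      // hand the existing Arrow buffers over as-is; the batch keeps whatever
      // row count the source produced, so maxRowsPerBatch would not apply here
    } else {
      // fall back to the current copying conversion
    }
  }
}
```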

Additional context

I'm working on a spark-adbc data source that outputs ArrowColumnVectors in a ColumnarBatch. While using Comet helps avoid row conversion right after the read, there is still some penalty for copying the vectors over.
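
To illustrate what "already Arrow" means here, this is a sketch of how such a reader might wrap Arrow vectors into a ColumnarBatch without copying; the VectorSchemaRoot (`root`) standing in for an ADBC result is an assumption, not the actual spark-adbc code:

```scala
import org.apache.arrow.vector.VectorSchemaRoot
import org.apache.spark.sql.vectorized.{ArrowColumnVector, ColumnVector, ColumnarBatch}

object ArrowBatchWrapper {

  // Wrap already-materialized Arrow vectors (e.g. an ADBC result loaded into a
  // VectorSchemaRoot) into a Spark ColumnarBatch. No data is copied; each
  // column is an ArrowColumnVector view over the same underlying buffers.
  def wrap(root: VectorSchemaRoot): ColumnarBatch = {
    val columns: Array[ColumnVector] =
      (0 until root.getFieldVectors.size()).map { i =>
        new ArrowColumnVector(root.getVector(i)): ColumnVector
      }.toArray
    new ColumnarBatch(columns, root.getRowCount)
  }
}
```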
