Spark 3.5.7
298 Total
149 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
26 DocTestFailure
17 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
15 AssertionError: AnalysisException not raised
15 UnsupportedOperationException: lambda function
10 handle add artifacts
6 AssertionError: "AttributeError" does not match "
6 AssertionError: False is not true
6 UnsupportedOperationException: PlanNode::CacheTable
6 UnsupportedOperationException: function: window
4 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
4 AssertionError: Attributes of DataFrame.iloc[:, 7] (column name="8_timestamp_t") are different
4 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd() is not implemented.
4 UnsupportedOperationException: function: input_file_name
4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.7/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
3 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="time") are different
3 UnsupportedOperationException: handle analyze input files
3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
2 AnalysisException: Could not find config namespace "spark"
2 AnalysisException: Failed to coerce arguments to satisfy a call to 'approx_percentile_cont' function: coercion from Float64, List(Float64), Int32 to the signature OneOf([Exact([Int8, Float64]), Exact(...
2 AnalysisException: No table format found for: orc
2 AnalysisException: not supported: function exists
2 AnalysisException: not supported: list functions
2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
2 AssertionError
2 AssertionError: "Exception thrown when converting pandas.Series" does not match "
2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'eval' method: error" does not match " Exception: error
2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'terminate' method: terminate error" does not match " ValueError: terminate error
2 AssertionError: 0 not greater than or equal to 1
2 AssertionError: AnalysisException not raised by <lambda>
2 AssertionError: Lists differ: [Row([22 chars](key=1, value='1'), Row(key=10, value='10'), R[2402 chars]99')] != [Row([22 chars](key=0, value='0'), Row(key=1, value='1'), Row[4882 chars]99')]
2 IllegalArgumentException: expected value at line 1 column 1
2 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
2 PythonException:
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
2 UnsupportedOperationException: approx quantile
2 UnsupportedOperationException: collect metrics
2 UnsupportedOperationException: freq items
2 UnsupportedOperationException: function: from_json
2 UnsupportedOperationException: function: schema_of_csv
2 UnsupportedOperationException: handle analyze is local
2 UnsupportedOperationException: handle analyze same semantics
2 UnsupportedOperationException: pivot
2 UnsupportedOperationException: user defined data type should only exist in a field
2 UnsupportedOperationException: with watermark
2 handle artifact statuses
1 AnalysisException: Cannot cast string 'abc' to value of Float64 type
1 AnalysisException: Cannot cast value 'abc' to value of Boolean type
1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
1 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.7/lib/python3.11/site-packages/pyspark/sql/functions.py
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmp_vd156p3/
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmpixg3pcdp/text-0.text, file:///tmp/tmpixg3pcdp/text-1.text, file:///tmp/tmpixg3pcdp/text-2.text
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmplv6x5n70/
(+1) 1 AnalysisException: UNION queries have different number of columns: left has 3 columns whereas right has 2 columns
1 AnalysisException: table already exists: tbl1
1 AnalysisException: temporary view not found: tab2
1 AssertionError: "2000000" does not match "raise_error expects a single UTF-8 string argument"
1 AssertionError: "Column names of the returned pandas.DataFrame do not match specified schema. Missing: id. Unexpected: iid. PySparkRuntimeError: [RESULT_COLUMNS_MISMATCH_FOR_PANDAS_UDF] Column names o...
1 AssertionError: "Column names of the returned pandas.DataFrame do not match specified schema. Missing: mean. Unexpected: median, std. PySparkRuntimeError: [RESULT_COLUMNS_MISMATCH_FOR_PANDAS_UDF] Colu...
(+1) 1 AssertionError: "Database 'memory:a9e022a2-64a2-46c3-a719-8ea6de4ce33b' dropped." does not match "No table format found for: jdbc"
(+1) 1 AssertionError: "Database 'memory:cc975061-7d98-4aab-8d78-247986274d41' dropped." does not match "No table format found for: jdbc"
1 AssertionError: "My error" does not match "
1 AssertionError: "Number of columns of the returned pandas.DataFrame doesn't match specified schema. Expected: 2 Actual: 3 PySparkRuntimeError: [RESULT_LENGTH_MISMATCH_FOR_PANDAS_UDF] Number of columns...
1 AssertionError: "Result vector from pandas_udf was not the required length" does not match "
1 AssertionError: "Return type of the user-defined function should be pandas.DataFrame, but is int" does not match " PySparkTypeError: [UDF_RETURN_TYPE] R (truncated)"
1 AssertionError: "Return.*type.*Series" does not match "
1 AssertionError: "attribute.*missing" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
1 AssertionError: "division( or modulo)? by zero" does not match "
1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
1 AssertionError: '+---[17 chars]-----+\n| x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
1 AssertionError: '4.1.1' != '3.5.7'
1 AssertionError: 1 != 0
1 AssertionError: 2 != 6
1 AssertionError: ArrayIndexOutOfBoundsException not raised
1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="a") are different
1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="ts") are different
1 AssertionError: Exception not raised by <lambda>
1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
1 AssertionError: Lists differ: [Row(ln(id)=0.0, ln(id)=0.0, struct(id, name)=Row(id=[1232 chars]0'))] != [Row(ln(id)=4.31748811353631, ln(id)=4.31748811353631[1312 chars]4'))]
1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
1 AssertionError: Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), False)]) != Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), True)])
1 AssertionError: Struc[69 chars]e(), True), StructField('name', StringType(), True)]), True)]) != Struc[69 chars]e(), True), StructField('name', StringType(), True)]), False)])
1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
1 AssertionError: dtype('<M8[us]') != 'datetime64[ns]'
1 AttributeError: 'DataFrame' object has no attribute '_ipython_key_completions_'
1 AttributeError: 'DataFrame' object has no attribute '_joinAsOf'
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreach() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreachPartition() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] localCheckpoint() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] sparkContext() is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
1 PythonException: AttributeError: 'NoneType' object has no attribute 'partitionId'
1 SparkRuntimeException: Invalid argument error: 83.140 is too large to store in a Decimal128 of precision 4. Max is 9.999
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Int64) at column index 1
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
1 SparkRuntimeException: Parser error: Error parsing timestamp from '1997/02/28 10:30:00': error parsing date
1 SparkRuntimeException: Parser error: Error while parsing value '0
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(plus_one@5u7wzy1x1zf4loxm9n1r277lw(#9)) PART...
1 UnsupportedOperationException: Physical plan does not support logical expression AggregateFunction(AggregateFunction { func: AggregateUDF { inner: PySparkGroupAggregateUDF { signature: Signature { typ...
1 UnsupportedOperationException: PlanNode::ClearCache
1 UnsupportedOperationException: PlanNode::IsCached
1 UnsupportedOperationException: PlanNode::RecoverPartitions
1 UnsupportedOperationException: SHOW FUNCTIONS
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
1 UnsupportedOperationException: bucketing for writing listing table format
1 UnsupportedOperationException: deduplicate within watermark
1 UnsupportedOperationException: function: java_method
1 UnsupportedOperationException: function: json_tuple
1 UnsupportedOperationException: function: reflect
1 UnsupportedOperationException: function: regexp_extract_all
1 UnsupportedOperationException: function: schema_of_json
1 UnsupportedOperationException: function: sentences
1 UnsupportedOperationException: function: session_window
1 UnsupportedOperationException: function: spark_partition_id
1 UnsupportedOperationException: function: to_char
1 UnsupportedOperationException: function: to_csv
1 UnsupportedOperationException: function: to_varchar
1 UnsupportedOperationException: function: xpath
1 UnsupportedOperationException: function: xpath_boolean
1 UnsupportedOperationException: function: xpath_double
1 UnsupportedOperationException: function: xpath_float
1 UnsupportedOperationException: function: xpath_int
1 UnsupportedOperationException: function: xpath_long
1 UnsupportedOperationException: function: xpath_number
1 UnsupportedOperationException: function: xpath_short
1 UnsupportedOperationException: function: xpath_string
1 UnsupportedOperationException: handle analyze semantic hash
1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
1 UnsupportedOperationException: unknown aggregate function: grouping_id
1 UnsupportedOperationException: unknown function: distributed_sequence_id
1 UnsupportedOperationException: unknown function: product
1 ValueError: The column label 'id' is not unique.
1 ValueError: The column label 'struct' is not unique.
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpgvup_mu7/
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpl3_4hz3r/
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpnjnaeblg/text-0.text, file:///tmp/tmpnjnaeblg/text-1.text, file:///tmp/tmpnjnaeblg/text-2.text
(-1) 0 AnalysisException: UNION queries have different number of columns: left has 2 columns whereas right has 3 columns
(-1) 0 AssertionError: "Database 'memory:f5c8b558-e45e-4509-a25b-c6bce933c846' dropped." does not match "No table format found for: jdbc"
(-1) 0 AssertionError: "Database 'memory:f6330ff9-be61-406a-8706-5b2b9bff1636' dropped." does not match "No table format found for: jdbc"
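Many of the entries above of the form `AssertionError: "<pattern>" does not match "<message>"` come from `unittest.assertRaisesRegex`: the expected exception type is raised, but its message fails the regex check. A minimal sketch (the values are hypothetical, not taken from the suite) of how that failure shape is produced:

```python
import unittest

# assertRaisesRegex sees the right exception type (ValueError), but the
# message does not match the pattern, so the context manager itself raises
# an AssertionError of the form: "<pattern>" does not match "<message>".
tc = unittest.TestCase()
try:
    with tc.assertRaisesRegex(ValueError, "expected pattern"):
        raise ValueError("a different message")
except AssertionError as err:
    mismatch = str(err)

print(mismatch)
```

This is why several tallies show a pattern mismatching an empty or truncated string: the engine under test raised an exception whose message differs from the one Spark's test suite expects.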
Spark 4.1.1
1443 Total
333 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
224 PythonException:
74 AssertionError: False is not true
64 IllegalArgumentException: missing argument: Python UDTF return type
60 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 250
55 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 252
39 DocTestFailure
36 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 302
31 UnsupportedOperationException: function: parse_json
31 UnsupportedOperationException: unresolved table valued function
27 UnsupportedOperationException: lateral join
24 UnsupportedOperationException: named argument expression
23 IllegalArgumentException: expected value at line 1 column 1
22 UnsupportedOperationException: handle add artifacts
20 UnsupportedOperationException: variant data type
15 UnsupportedOperationException: lambda function
14 AssertionError: 1 != 0 : dict_keys([])
14 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
14 UnsupportedOperationException: unknown function: kll_sketch_agg_bigint
13 UnsupportedOperationException: time literal
12 PythonException: TypeError: object of type 'generator' has no len()
11 IllegalArgumentException: invalid argument: expected function for lateral table factor
11 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 216
10 AssertionError: AnalysisException not raised
10 SparkRuntimeException: Python error: [test::partitions] NotImplementedError:
9 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 251
9 UnsupportedOperationException: collect metrics
9 UnsupportedOperationException: unsupported subquery type
8 AssertionError
8 AssertionError: "UDTF_ARROW_TYPE_CONVERSION_ERROR" does not match "
8 AssertionError: "UDTF_RETURN_SCHEMA_MISMATCH" does not match "
8 UnsupportedOperationException: unknown function: kll_sketch_agg_double
8 UnsupportedOperationException: unknown function: kll_sketch_agg_float
7 AssertionError: 3 != 0 : []
7 PySparkAssertionError: [DIFFERENT_ROWS] Results do not match: ( 100.00000 % )
7 UnsupportedOperationException: function: spark_partition_id
7 UnsupportedOperationException: named function arguments
7 UnsupportedOperationException: unknown function: theta_sketch_agg
7 UnsupportedOperationException: user defined data type should only exist in a field
6 AssertionError: "AttributeError" does not match "
6 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
6 AssertionError: "UDTF_ARROW_TYPE_CAST_ERROR" does not match "
6 AssertionError: "UDTF_RETURN_NOT_ITERABLE" does not match "
6 AssertionError: 1 != 0
6 AssertionError: Exception not raised
6 IllegalArgumentException: invalid argument: found range at 40:45 expected '->', '.', '(', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|'...
6 UnsupportedOperationException: PlanNode::CacheTable
6 UnsupportedOperationException: direct shuffle partition ID expression
6 UnsupportedOperationException: function: input_file_name
6 UnsupportedOperationException: function: window
5 AssertionError: "Python worker process terminated due to idle timeout \(timeout: 1 seconds\)" does not match "
5 AssertionError: `query_context_type` is required when QueryContext exists. QueryContext: [].
4 AnalysisException: temporary view not found: t2
4 AssertionError: AnalysisException not raised by <lambda>
4 AssertionError: unexpectedly None
4 UnsupportedOperationException: approx quantile
4 UnsupportedOperationException: freq items
4 UnsupportedOperationException: transpose
4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
4 clone session
3 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
3 AnalysisException: Invalid Python user-defined table function return type. Expect a struct type, but got Int32.
3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
3 AssertionError: "(Please use a different output data type for your UDF or DataFrame|Invalid return type with Arrow-optimized Python UDF)" does not match "
3 AssertionError: 0 not greater than or equal to 1
3 AssertionError: DayTimeIntervalType(0, 3) != DayTimeIntervalType(1, 3)
3 AssertionError: Struc[49 chars]valType(0, 3), True), StructField('name', StringType(), True)]) != Struc[49 chars]valType(1, 3), True), StructField('name', StringType(), True)])
3 IllegalArgumentException: data did not match any variant of untagged enum JsonDataType
3 IllegalArgumentException: invalid argument: found PARTITION at 281:290 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=...
3 IllegalArgumentException: invalid argument: found PARTITION at 295:304 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=...
3 IllegalArgumentException: invalid argument: found PARTITION at 59:68 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=',...
3 IllegalArgumentException: invalid argument: found WITH at 171:175 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=', '!...
3 IllegalArgumentException: invalid argument: found WITH at 279:283 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=', '!...
3 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 211
3 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 215
3 UnsupportedOperationException: cached remote relation
3 UnsupportedOperationException: function: from_json
3 UnsupportedOperationException: handle analyze input files
3 UnsupportedOperationException: pivot
3 UnsupportedOperationException: table argument options in subquery expression
3 UnsupportedOperationException: unknown function: distributed_sequence_id
3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
2 AnalysisException: Failed to coerce arguments to satisfy a call to 'approx_percentile_cont' function: coercion from Float64, List(Float64), Int32 to the signature OneOf([Exact([Int8, Float64]), Exact(...
2 AnalysisException: No table format found for: orc
2 AnalysisException: ambiguous attribute: ObjectName([Identifier("id")])
2 AnalysisException: not supported: function exists
2 AnalysisException: not supported: list functions
2 AnalysisException: temporary view not found: variant_table
2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
2 AssertionError: ".*constructor has more than one argument.*" does not match "
2 AssertionError: "ARROW_TYPE_MISMATCH.*SQL_MAP_ARROW_ITER_UDF" does not match "Invalid argument error: column types must match schema types, expected Int32 but found Int64 at column index 0"
2 AssertionError: "AttributeError: 'int' object has no attribute 'corr'" does not match "
2 AssertionError: "Exception thrown when converting pandas.Series" does not match "
2 AssertionError: "NO_ACTIVE_SESSION" does not match "
2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the '__init__' method: error" does not match "
2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'eval' method: error" does not match "
2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'terminate' method: terminate error" does not match "
2 AssertionError: "eval error" does not match "
2 AssertionError: "missing a required argument" does not match "
2 AssertionError: "terminate error" does not match "
2 AssertionError: "terminate\(\) missing 1 required positional argument: 'a'" does not match "
2 AssertionError: 3 != 0 : dict_keys([])
2 AssertionError: {'foo': 'bar'} != {}
2 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
2 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 213
2 IllegalArgumentException: missing argument: Python UDF output type
2 PySparkAssertionError: [DIFFERENT_ROWS] Results do not match: ( 99.50000 % )
2 PythonException: AssertionError: assert None is not None
2 PythonException: AttributeError: 'NoneType' object has no attribute 'cpus'
2 PythonException: PySparkRuntimeError: [UDTF_EVAL_METHOD_ARGUMENTS_DO_NOT_MATCH_SIGNATURE] Failed to evaluate the user-defined table function '' because the function arguments did not match the expect...
2 SparkRuntimeException: Error during planning: Correlated scalar subquery must be aggregated to return at most one row
2 SparkRuntimeException: Python error: [TestDataSource::partitions] NotImplementedError:
2 SparkRuntimeException: Python error: [my-json::partitions] AttributeError: 'pyarrow.lib.Schema' object has no attribute 'fieldNames'
2 TypeError: 'NoneType' object is not iterable
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: mean_udf@4jfgqif5e847nsiw7pwudti7o(#3) PARTITION BY [#2] ORDER BY [#3 ASC ...
2 UnsupportedOperationException: CLUSTER BY for write
2 UnsupportedOperationException: LATERAL JOIN with criteria
2 UnsupportedOperationException: Physical plan does not support logical expression ScalarSubquery(<subquery>)
2 UnsupportedOperationException: Physical plan does not support logical expression Wildcard { qualifier: None, options: WildcardOptions { ilike: None, exclude: None, except: None, replace: None, rename:...
2 UnsupportedOperationException: cast Time64(Nanosecond) to Spark data type
2 UnsupportedOperationException: create resource profile command
2 UnsupportedOperationException: function: from_xml
2 UnsupportedOperationException: function: to_variant_object
2 UnsupportedOperationException: function: try_make_interval
2 UnsupportedOperationException: function: try_parse_json
2 UnsupportedOperationException: function: uniform
2 UnsupportedOperationException: handle analyze is local
2 UnsupportedOperationException: handle analyze same semantics
2 UnsupportedOperationException: unknown function: st_setsrid
2 UnsupportedOperationException: unknown function: st_srid
2 UnsupportedOperationException: unknown function: try_to_date
2 UnsupportedOperationException: unknown function: try_to_time
2 UnsupportedOperationException: wildcard with plan ID
2 UnsupportedOperationException: with watermark
2 handle artifact statuses
1 AnalysisException: Cannot cast string 'abc' to value of Float64 type
1 AnalysisException: Cannot cast value 'abc' to value of Boolean type
1 AnalysisException: Could not find config namespace "mapred"
1 AnalysisException: Could not find config namespace "spark"
1 AnalysisException: Error parsing timestamp from '082017' using format '%m%Y': input is not enough for unique date and time
1 AnalysisException: Error parsing timestamp from '2014-31-12' using format '%Y-%d-%pa': input contains invalid characters
1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
1 AnalysisException: Invalid partition id 2 in write result (expected < 1)
1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/sql/functions/builtin.py
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths1h_e_q3q1/text-0.text, file:///tmp/test_multi_paths1h_e_q3q1/text-1.text, file:///tmp/test_multi_paths1h_e_q3q1/te...
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmpcgd_o936/
(+1) 1 AnalysisException: No files found in the specified paths: file:///tmp/tmpl50n2qwp/
1 AnalysisException: No table format found for: xml
(+1) 1 AnalysisException: UNION queries have different number of columns: left has 3 columns whereas right has 2 columns
(+1) 1 AnalysisException: Write failed for partition 0: External error: Python error: [TestArrowWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
1 AnalysisException: Write failed for partition 0: External error: Python error: [TestJsonWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
1 AnalysisException: Write failed for partition 1: External error: Python error: [TestJsonWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
1 AnalysisException: ambiguous attribute: ObjectName([Identifier("b")])
1 AnalysisException: ambiguous attribute: ObjectName([Identifier("i")])
1 AnalysisException: cannot resolve attribute: ObjectName([Identifier("x")])
1 AnalysisException: database not found: testcat
1 AnalysisException: element_at expects List or Map type as first argument, got Null
1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#0" }), Literal(Int32(123), None)]
(+1) 1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(1934909040795122888), None)]
(+1) 1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(795921372334407837), None)]
1 AnalysisException: table already exists: tbl1
1 AnalysisException: temporary view not found: tab2
1 AnalysisException: to_time format argument 2 must be a scalar, not an array
1 AnalysisException: too big
1 AnalysisException: zero values expected: [Literal(Int32(123), None)]
(+1) 1 AssertionError: "'path' is not specified." does not match "Generic LocalFileSystem error: Unable to open file /wE6g2GPVcaKNdE2v_0.zst.parquet#1: Permission denied (os error 13)"
1 AssertionError: "ARROW_TYPE_MISMATCH.*SQL_MAP_ARROW_ITER_UDF" does not match "Invalid argument error: column types must match schema types, expected Struct("b": Int32) but found Struct("a": Int64, "b"...
1 AssertionError: "Column names of the returned pandas.DataFrame do not match specified schema. Missing: id. Unexpected: iid. PySparkRuntimeError: [RESULT_COLUMNS_MISMATCH_FOR_PANDAS_UDF] Column names o...
1 AssertionError: "Column names of the returned pandas.DataFrame do not match specified schema. Missing: mean. Unexpected: median, std. PySparkRuntimeError: [RESULT_COLUMNS_MISMATCH_FOR_PANDAS_UDF] Colu...
1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m.
1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m. TypeError: object of type 'generator' has no len()
1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m. Unexpected: v, v2.
1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m. Unexpected: v, v2. TypeError: object of type 'generator' has no len()
1 AssertionError: "Columns do not match in their data type: column 'a' \(expected int32, actual int64\)" does not match "
1 AssertionError: "Columns do not match in their data type: column 'a' \(expected int32, actual int64\)" does not match " TypeError: object of type 'generator' has no len()
1 AssertionError: "Columns do not match in their data type: column 'id' \(expected int32, actual int64\)" does not match "
1 AssertionError: "Columns do not match in their data type: column 'id' \(expected int32, actual int64\)" does not match " TypeError: object of type 'generator' has no len()
1 AssertionError: "DATA_SOURCE_EXTRANEOUS_FILTERS" does not match "Python error: [test::partitions] AssertionError: assert False
1 AssertionError: "DATA_SOURCE_PUSHDOWN_DISABLED" does not match "Python error: [<reader>::read] AssertionError: assert False
(+1) 1 AssertionError: "Database 'memory:49e77faa-b167-46ec-a755-1a11fc10dbd0' dropped." does not match "No table format found for: jdbc"
(+1) 1 AssertionError: "Database 'memory:af439ff8-526c-49e4-942c-f75977661ed9' dropped." does not match "No table format found for: jdbc"
1 AssertionError: "Invalid return type" does not match " AttributeError: 'Series' object has no attribute 'columns'
1 AssertionError: "My error" does not match "
1 AssertionError: "Number of columns of the returned pandas.DataFrame doesn't match specified schema. Expected: 2 Actual: 3 PySparkRuntimeError: [RESULT_LENGTH_MISMATCH_FOR_PANDAS_UDF] Number of columns...
1 AssertionError: "PySparkValueError: Exception thrown when converting pandas.Series \(object\) with name 'id' to Arrow Array \(int32\)\." does not match "
1 AssertionError: "Python worker process terminated due to idle timeout \(timeout: 1 seconds\)" does not match " PySparkRuntimeError: [UDTF_INVALID_OUTPUT_ROW_TYPE] The type of an individual output row ...
1 AssertionError: "Result vector from pandas_udf was not the required length" does not match "
1 AssertionError: "Return type of the user-defined function should be pandas.DataFrame, but is int" does not match " PySparkTypeError: [UDF_RETURN_TYPE] Return typ (truncated)"
1 AssertionError: "Return type of the user-defined function should be pyarrow.Table, but is tuple" does not match "
1 AssertionError: "Return type of the user-defined function should be pyarrow.Table, but is tuple" does not match " TypeError: object of type 'generator' has no len()
1 AssertionError: "Return.*type.*Series" does not match "
1 AssertionError: "UNRESOLVED_COLUMN.WITH_SUGGESTION" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
1 AssertionError: "ValueError: Exception thrown when converting pandas.Series \(object\) with name 'id' to Arrow Array \(double\). It can be caused by overflows or other unsafe conversions warned by Arr...
1 AssertionError: "ValueError: Exception thrown when converting pandas.Series \(object\) with name 'k' to Arrow Array \(double\). It can be caused by overflows or other unsafe conversions warned by Arro...
1 AssertionError: "ValueError: Exception thrown when converting pandas.Series \(object\) with name 'mean' to Arrow Array \(double\). It can be caused by overflows or other unsafe conversions warned by A...
1 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'eval' method: error" does not match " Exception: error
1 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'terminate' method: terminate error" does not match " ValueError: terminate error
1 AssertionError: "division( or modulo)? by zero" does not match "
1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
1 AssertionError: "is null" does not match " ArrowException: Invalid argument error: Column 'a' is declared as non-nullable but contains null values
1 AssertionError: "requirement failed: Cogroup keys must have same size: 2 != 1" does not match "invalid argument: child plan grouping expressions must have the same length"
1 AssertionError: "timestamp values are not equal (timestamp='1968-12-31 17:01:01': data[0][1]='1969-01-01 01:01:01')" is not None
1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
1 AssertionError: '+---[17 chars]-----+\n| x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
1 AssertionError: '+---[23 chars]---+-----+\n| 1| 1|\n+---+-----+\nonly showing top 1 row' != '+---[23 chars]---+-----+\n| 1| 1|\n+---+-----+\nonly showing top 1 row\n'
1 AssertionError: 'INVALID_CLONE_SESSION_REQUEST.TARGET_SESSION_ID_FORMAT' not found in '<_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNIMPLEMENTED\n\tdetails = "clone session"...
1 AssertionError: 'ST_INVALID_SRID_VALUE' != None : Expected error class was 'ST_INVALID_SRID_VALUE', got 'None'.
1 AssertionError: 'UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY.UNSUPPORTED_IN_EXISTS_SUBQUERY' != None : Expected error class was 'UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY.UNSUPPORTED_IN_EXISTS_SUBQUERY', ...
1 AssertionError: 'a NULL, b BOOLEAN, c BINARY' != 'a VOID,b BOOLEAN,c BINARY'
1 AssertionError: 'bytearray' != 'bytes'
1 AssertionError: 0 not greater than 0
1 AssertionError: 0.6363787615254752 != 0.9531453492357947 : Column<'rand(1)'>
1 AssertionError: 2 != 6
1 AssertionError: 6 != 0 : []
1 AssertionError: ArrayIndexOutOfBoundsException not raised
1 AssertionError: Exception not raised by <lambda>
1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
1 AssertionError: Lists differ: [Row([24 chars]2019, 1, 1, 8, 0), aware=datetime.datetime(2019, 1, 1, 16, 0))] != [Row([24 chars]2019, 1, 1, 0, 0), aware=datetime.datetime(2019, 1, 1, 16, 0))]
1 AssertionError: Lists differ: [Row([259 chars]681098, ln(id)=1.0986122886681098, struct(id, [975 chars]0'))] != [Row([259 chars]681096, ln(id)=1.0986122886681096, struct(id, [975 chars]0'))]
1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
1 AssertionError: SparkConnectGrpcException not raised
1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
1 AssertionError: True is not false : Default URL is not secure
1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
1 AssertionError: datetime.datetime(1970, 1, 1, 0, 0) != datetime.datetime(1970, 1, 1, 8, 0)
1 AttributeError: 'NoneType' object has no attribute 'extract_graph'
1 AttributeError: 'NoneType' object has no attribute 'toText'
1 FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/data/artifact-tests/junitLargeJar.jar'
(+1) 1 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnvbnhl62'
1 IllegalArgumentException: invalid argument: empty data type
1 IllegalArgumentException: invalid argument: expecting column to drop
1 IllegalArgumentException: invalid argument: field not found in input schema: col1
1 IllegalArgumentException: invalid argument: found abc at 0:3 expected something else, ';', statement, or end of input
1 IllegalArgumentException: invalid argument: found collate at 13:20 expected string, '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|',...
1 IllegalArgumentException: invalid argument: grouping sets with grouping expressions
1 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 212
1 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 214
1 IllegalArgumentException: invalid argument: invalid user-defined window function type
1 IllegalArgumentException: invalid argument: table does not exist: ObjectName([Identifier("test_table")])
(+1) 1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (027ce195-ecfb-4f5d-bb2f-ea1a17b4df39 != 4d95dacc-719a-46e8-875b-c...
(+1) 1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (5b46ab40-e544-4f68-b886-44a0be54a4c4 != 72aa72b3-a18b-415a-88b1-c...
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd is not implemented.
1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
1 PySparkTypeError: [UNSUPPORTED_DATA_TYPE_FOR_ARROW_CONVERSION] binary_view is not supported in conversion to Arrow.
1 PythonException: AttributeError: 'NoneType' object has no attribute 'partitionId'
1 PythonException: KeyError: 'a'
1 PythonException: TypeError: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row).
1 SparkRuntimeException: Assertion failed: !args.is_empty(): args should not be empty
1 SparkRuntimeException: Assertion failed: compatible: Failed due to a difference in schemas: original schema: DFSchema { inner: Schema { fields: [Field { name: "#0", data_type: Int64, nullable: true },...
1 SparkRuntimeException: Compute error: Cannot perform a binary operation on arrays of different length
1 SparkRuntimeException: Error during planning: expr type Struct("col1": Struct("a": Int64, "b": Float64)) can't cast to Struct("a": Int64, "b": Float64) in InSubquery
1 SparkRuntimeException: Exception: path is not specified
1 SparkRuntimeException: Execution error: Schema field count mismatch: expected 1 fields, got 2
1 SparkRuntimeException: Internal error: Cannot run range queries on datatype: Time64(µs).
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Int64) at column index 1
1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
1 SparkRuntimeException: Invalid argument error: must either specify a row count or at least one column
1 SparkRuntimeException: Invalid argument error: number of columns(3) must match number of fields(2) in schema
1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
1 SparkRuntimeException: Parser error: Error while parsing value '0
1 SparkRuntimeException: Python error: [TestArrowStreamWriter::writer] PySparkNotImplementedError: [NOT_IMPLEMENTED] writer is not implemented.
1 SparkRuntimeException: Python error: [TestDataSource::writer] PySparkNotImplementedError: [NOT_IMPLEMENTED] writer is not implemented.
1 SparkRuntimeException: Python error: [my-json::writer] AttributeError: 'pyarrow.lib.Schema' object has no attribute 'fieldNames'
1 SparkRuntimeException: Python error: [test::partitions] AssertionError: assert False
1 SparkRuntimeException: Python error: [testdatasourcepyarrow::partitions] PySparkNotImplementedError: [NOT_IMPLEMENTED] reader is not implemented.
1 SparkRuntimeException: Schema error: Failed to parse DDL schema 'a INT, b INT, c VARIANT, d STRUCT<v VARIANT>, e ARRAY<VARIANT>,f MAP<STRING, VARIANT>': error in SQL parser: found VARIANT at 23:30 exp...
1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: List { data_type: Int32, nullable: true }. Use PyArrow Schema for complex types.
1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: Struct { fields: Fields([Field { name: "a", data_type: Int32, nullable: true, metadata: [] }, Field { name: "b", data_type: Int32, ...
1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: Struct { fields: Fields([Field { name: "y", data_type: Int32, nullable: true, metadata: [] }]) }. Use PyArrow Schema for complex ty...
1 SparkRuntimeException: This feature is not implemented: Data type Decimal128(38, 18) not supported in row-based write path. Use DataSourceArrowWriter for full type support.
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(plus_one@8f9fwaevnfdj031wzemmraerh(#9)) PART...
1 UnsupportedOperationException: Physical plan does not support logical expression AggregateFunction(AggregateFunction { func: AggregateUDF { inner: PySparkGroupAggregateUDF { signature: Signature { typ...
1 UnsupportedOperationException: PlanNode::ClearCache
1 UnsupportedOperationException: PlanNode::IsCached
1 UnsupportedOperationException: PlanNode::RecoverPartitions
1 UnsupportedOperationException: SHOW FUNCTIONS
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
1 UnsupportedOperationException: Support for 'approx_distinct' for data type Struct("name": Utf8, "value": Int64) is not implemented
1 UnsupportedOperationException: as of join
1 UnsupportedOperationException: bucketing for writing listing table format
1 UnsupportedOperationException: deduplicate within watermark
1 UnsupportedOperationException: function: collate
1 UnsupportedOperationException: function: collation
1 UnsupportedOperationException: function: java_method
1 UnsupportedOperationException: function: json_tuple
1 UnsupportedOperationException: function: reflect
1 UnsupportedOperationException: function: regexp_extract_all
1 UnsupportedOperationException: function: schema_of_csv
1 UnsupportedOperationException: function: schema_of_json
1 UnsupportedOperationException: function: schema_of_xml
1 UnsupportedOperationException: function: sentences
1 UnsupportedOperationException: function: session_window
1 UnsupportedOperationException: function: to_char
1 UnsupportedOperationException: function: to_csv
1 UnsupportedOperationException: function: to_varchar
1 UnsupportedOperationException: function: to_xml
1 UnsupportedOperationException: function: try_reflect
1 UnsupportedOperationException: function: xpath
1 UnsupportedOperationException: function: xpath_boolean
1 UnsupportedOperationException: function: xpath_double
1 UnsupportedOperationException: function: xpath_float
1 UnsupportedOperationException: function: xpath_int
1 UnsupportedOperationException: function: xpath_long
1 UnsupportedOperationException: function: xpath_number
1 UnsupportedOperationException: function: xpath_short
1 UnsupportedOperationException: function: xpath_string
1 UnsupportedOperationException: handle analyze semantic hash
1 UnsupportedOperationException: named window function arguments
1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
1 UnsupportedOperationException: unknown aggregate function: grouping_id
1 UnsupportedOperationException: unknown function: bitmap_and_agg
1 UnsupportedOperationException: unknown function: product
1 UnsupportedOperationException: unknown function: quote
1 UnsupportedOperationException: unknown function: timestampadd
1 UnsupportedOperationException: unknown function: timestampdiff
1 UnsupportedOperationException: unknown function: unwrap_udt
1 UnsupportedOperationException: unknown window function: pd_win_max
1 ValueError: The column label 'id' is not unique.
1 ValueError: The column label 'struct' is not unique.
1 failed to decode Protobuf message: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.i...
1 handle add artifacts
(-1) 0 AnalysisException: Invalid partition id 1 in write result (expected < 1)
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths1inmqwswe/text-0.text, file:///tmp/test_multi_paths1inmqwswe/text-1.text, file:///tmp/test_multi_paths1inmqwswe/te...
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmp9eqyhxwm/
(-1) 0 AnalysisException: No files found in the specified paths: file:///tmp/tmpe1no52c2/
(-1) 0 AnalysisException: UNION queries have different number of columns: left has 2 columns whereas right has 3 columns
(-1) 0 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(1462217764685989819), None)]
(-1) 0 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(5700579565814582232), None)]
(-1) 0 AssertionError: "'path' is not specified." does not match "Generic LocalFileSystem error: Unable to open file /ba7YvLeMncAoRWHA_0.zst.parquet#1: Permission denied (os error 13)"
(-1) 0 AssertionError: "Database 'memory:299520f4-3baf-47a9-8577-c59d26f7aaba' dropped." does not match "No table format found for: jdbc"
(-1) 0 AssertionError: "Database 'memory:e8580f07-2e82-49a2-b0b2-9179aee4164c' dropped." does not match "No table format found for: jdbc"
(-1) 0 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp28h50cy9'
(-1) 0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (6af6dc92-4895-4bb4-9a8b-6dcf9a85a006 != 7a55a2b6-25b1-47ec-a39f-f...
(-1) 0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (cb722134-364c-491b-a2d2-548f66dd8854 != edcf10a6-90bc-4d78-a014-8...