
[Bug] Spark connector: when the Doris table is non-partitioned, the where clause is useless #223

YangTan09 opened this issue Aug 5, 2024 · 6 comments
YangTan09 commented Aug 5, 2024

Search before asking

  • I had searched in the issues and found no similar issues.

Version

1.2.9

What's Wrong?

When the Doris table is non-partitioned and the Spark connector is used to read it into a DataFrame, the where operator cannot be used: even when written, it has no effect, so the job ends up reading the whole table and then writing the whole table.


What You Expected?

This appears to be a bug in the component.

How to Reproduce?

1. Prepare a non-partitioned Doris table.
2. Insert 100 rows of data.
3. Read the table into a DataFrame with the Spark Doris connector.
4. Apply any where operator with a condition.
In the end, you will find that the condition has no effect.
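
The steps above can be sketched in Scala roughly as follows (a hypothetical reproduction: the table name `test_db.t_nopart`, the FE address, and the credentials are placeholders, not values from this issue):

```scala
// Hypothetical reproduction sketch; endpoint, table name and credentials are placeholders.
val df = spark.read
  .format("doris")
  .option("doris.table.identifier", "test_db.t_nopart") // non-partitioned table with 100 rows
  .option("doris.fenodes", "fe_host:8030")
  .option("user", "root")
  .option("password", "")
  .load()

// On Doris 1.2.8/1.2.9 with a non-partitioned table, this filter is reportedly ignored:
// all 100 rows come back instead of only the matching ones.
df.where("id > 50").show()
```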

Anything Else?

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct


gnehil commented Aug 6, 2024

Which Spark connector version are you using?


YangTan09 commented Aug 6, 2024

> Which Spark connector version are you using?

1.3.1

```xml
<dependency>
    <groupId>org.apache.doris</groupId>
    <artifactId>spark-doris-connector-3.2_${spark.scala.version}</artifactId>
    <version>1.3.1</version>
</dependency>
```


gnehil commented Aug 7, 2024


This is due to a query execution error in Doris, not the connector. The problem occurs in versions 1.2.8 and 1.2.9. You can upgrade to version 2.x to solve this problem.

@jianghuzai

I ran into this problem too: doris.filter.query only takes effect on partition columns.
If you append .cache() after spark.read.format..xxxx.load and then call createOrReplaceTempView,
a where executed on the view filters the data correctly.
If you remove the cache(), the where filter on the view still has no effect.
My guess is that this is caused by the where condition being pushed down to the Doris engine for execution.
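
The workaround described above might look like this in Scala (table, view, and connection values are placeholders). The idea is that .cache() makes Spark materialize the scan as an in-memory relation, so later filters are evaluated by Spark itself instead of being pushed down to Doris:

```scala
// Workaround sketch: cache the DataFrame so subsequent filters run in Spark,
// not as predicates pushed down to the affected Doris versions.
val df = spark.read
  .format("doris")
  .option("doris.table.identifier", "test_db.t_nopart") // placeholder
  .option("doris.fenodes", "fe_host:8030")
  .option("user", "root")
  .option("password", "")
  .load()
  .cache() // materialize in Spark; filters below are applied locally

df.createOrReplaceTempView("t_nopart_view")
spark.sql("SELECT * FROM t_nopart_view WHERE id > 50").show() // filter now takes effect
```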

@YangTan09 (Author)

Thanks, I tried it. Adding cache() works around the problem for now, but the real fix is still upgrading the Doris version.


gnehil commented Aug 7, 2024


That's right. The connector correctly pushes the predicate down to the scan operator, and the exception occurs on the Doris side. Caching the DataFrame returned by load is a workaround: predicates applied to the cached DataFrame are processed by Spark itself.
