[Bug] Spark connector: when the Doris table is non-partitioned, the where clause has no effect #223
Comments
Which Spark connector version are you using?
1.3.1 (org.apache.doris : spark-doris-connector-3.2_${spark.scala.version} : 1.3.1)
This is due to a query execution error in Doris, not the connector. The problem occurs in versions 1.2.8 and 1.2.9. You can upgrade to version 2.x to solve this problem.
I ran into this problem too; doris.filter.query only takes effect on partition columns.
Thanks, I gave it a try. Adding a cache works around the problem for now, but the real fix is still upgrading the Doris version.
That's right. The connector correctly pushes the predicate down to the scan operator; the exception occurs on the Doris side. Caching the DataFrame returned by load is a workaround: predicates applied to the cached DataFrame are processed by Spark instead.
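The caching workaround described above can be sketched as follows. This assumes Spark Doris Connector 1.3.x against Doris 1.2.8/1.2.9; the FE address, table identifier, credentials, and filter condition are placeholders, not values from the issue.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("doris-where-workaround").getOrCreate()

// Read the non-partitioned Doris table via the connector.
val df = spark.read
  .format("doris")
  .option("doris.fenodes", "fe_host:8030")        // placeholder FE address
  .option("doris.table.identifier", "db.table")   // placeholder table name
  .option("user", "root")
  .option("password", "")
  .load()

// cache() materializes the full scan once; filters applied afterwards are
// evaluated by Spark itself rather than pushed down to Doris, sidestepping
// the Doris-side predicate bug on non-partitioned tables.
val cached = df.cache()
val filtered = cached.where("id > 50") // "id" is an illustrative column name
```

The trade-off is that the full table is still read from Doris once; only the subsequent filtering moves to Spark.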
Search before asking
Version
1.2.9
What's Wrong?
When the Doris table is non-partitioned, after reading it into a DataFrame with the Spark connector, the where operator cannot be used. Even when a where clause is written, it has no effect: the entire table is read, and then the entire table is written.
What You Expected?
This is a component bug.
How to Reproduce?
1. Prepare a non-partitioned Doris table.
2. Insert 100 rows of data.
3. Read the table into a DataFrame with the Spark Doris connector.
4. Apply any where operator with a condition.
You will find that the condition has no effect.
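The reproduction steps above can be sketched as follows. The table name, schema, FE address, and credentials are illustrative, and the DDL in the comments is only one possible way to set up step 1.

```scala
// 1) In Doris, create a non-partitioned table and insert 100 rows, e.g.:
//    CREATE TABLE db.t_nopart (id INT, name VARCHAR(32))
//      DUPLICATE KEY(id) DISTRIBUTED BY HASH(id) BUCKETS 1
//      PROPERTIES("replication_num" = "1");
//    -- followed by 100 INSERT rows (or a single multi-row INSERT).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// 2) Read the table through the connector.
val df = spark.read
  .format("doris")
  .option("doris.fenodes", "fe_host:8030")          // placeholder
  .option("doris.table.identifier", "db.t_nopart")  // placeholder
  .option("user", "root")
  .option("password", "")
  .load()

// 3) Apply a where operator with a condition. On Doris 1.2.8/1.2.9 the
// pushed-down predicate is reportedly ignored for non-partitioned tables,
// so this count comes back as the full row count rather than the
// expected filtered count.
println(df.where("id < 10").count())
```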
Anything Else?
No response
Are you willing to submit PR?
Code of Conduct