A nonparametric test based on regression error (FIT) [paper] [python code]
A bit more fringe than KCI/KCIP but provides good simulation comparisons between all three methods plus more.
Uses a nonparametric regression (in their case, a decision tree) to examine the change in predictive power based on including versus excluding some variables Z.
Uses the mean squared error as a test statistic and an analytic Gaussian/t-test approach to compute a p-value.
Seemingly efficient for large sample sizes compared to other kernel-based approaches.
Interesting connections here, since trees/forests are adaptive kernel methods; possible extensions include forests, honesty, and leaf permutations.
In the paper they use a decision tree to compute the mean squared error, but I don't think anything stops us from using another method. Thus the implementation may want to accept a general sklearn-style regressor as input.
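A minimal sketch of the idea described above, not the authors' implementation: regress y on X alone (restricted model) and on [X, Z] (full model), then run a paired t-test on the per-sample squared test errors to get a p-value. The function name `fit_test`, the `regressor` parameter, and the depth-5 default tree are illustrative assumptions; per the note above, any sklearn-style regressor can be passed in.

```python
import numpy as np
from scipy import stats
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def fit_test(X, Z, y, regressor=None, random_state=0):
    """Test whether Z adds predictive power for y beyond X.

    Returns (t_statistic, one_sided_p_value). Small p suggests y is
    NOT conditionally independent of Z given X.
    """
    if regressor is None:
        # Illustrative default; the paper uses a decision tree.
        regressor = DecisionTreeRegressor(max_depth=5, random_state=random_state)
    Xtr, Xte, Ztr, Zte, ytr, yte = train_test_split(
        X, Z, y, random_state=random_state)
    # Restricted model: predict y from X alone.
    m0 = clone(regressor).fit(Xtr, ytr)
    # Full model: predict y from X and Z jointly.
    m1 = clone(regressor).fit(np.hstack([Xtr, Ztr]), ytr)
    e0 = (yte - m0.predict(Xte)) ** 2
    e1 = (yte - m1.predict(np.hstack([Xte, Zte]))) ** 2
    # Paired t-test on per-sample squared errors; one-sided p-value
    # for "the full model has lower error than the restricted one".
    t, p_two_sided = stats.ttest_rel(e0, e1)
    p = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    return t, p
```

Swapping in a different regressor is just `fit_test(X, Z, y, regressor=RandomForestRegressor())`, which is the flexibility the note above argues for.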
[Issue 226]