"The best way to select features?"
Comparing MDA, LIME, and SHAP feature selection methods in Machine Learning.
Ernest P. Chan and Xin Man
Feature selection in machine learning is subject to the intrinsic randomness of the feature selection algorithms themselves (e.g., the random permutations used in MDA). Stability of the selected features with respect to such randomness is essential to the human interpretability of a machine learning algorithm. The authors propose a rank-based stability metric, called the 'instability index', to compare the stabilities of three feature selection algorithms, MDA (Mean Decrease in Accuracy), LIME, and SHAP, as applied to random forests. Typically, features are selected by averaging many random iterations of a selection algorithm. Though the variability of the selected features does decrease as the number of iterations increases, it does not go to zero, and the features selected by the three algorithms do not necessarily converge to the same set. LIME and SHAP are found to be more stable than MDA, and LIME is at least as stable as SHAP for the top-ranked features. Hence, overall, LIME is best suited for human interpretability. However, the sets of features selected by all three algorithms significantly improve various predictive metrics out of sample, and their predictive performances do not differ significantly. Experiments were conducted on synthetic datasets, two public benchmark datasets, an S&P 500 dataset, and proprietary data from an active investment strategy.
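To make the stability question concrete, the following is a minimal sketch of how one might measure the rank stability of MDA (permutation importance) across random seeds on a random forest. The paper's actual instability index is not defined in this abstract, so the rank-variance proxy below is an illustrative assumption, not the authors' metric.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A toy dataset and a fixed random forest; only the permutation seed varies.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=4, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Repeat MDA with different permutation seeds and record each feature's rank.
ranks = []
for seed in range(10):
    imp = permutation_importance(rf, X, y, n_repeats=3, random_state=seed)
    order = np.argsort(-imp.importances_mean)   # rank 0 = most important
    rank_of_feature = np.empty_like(order)
    rank_of_feature[order] = np.arange(len(order))
    ranks.append(rank_of_feature)
ranks = np.array(ranks)  # shape: (n_runs, n_features)

# Illustrative instability proxy (an assumption, not the paper's index):
# the mean per-feature standard deviation of ranks across runs. Zero would
# mean every run produced an identical feature ranking.
instability = ranks.std(axis=0).mean()
print(f"rank instability (proxy): {instability:.3f}")
```

The same loop could be wrapped around LIME or SHAP attributions instead of permutation importance to compare the three methods' stability, as the paper does.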