Abstract:
Decision combination has recently become popular as a way to improve over single-learner systems. The fundamental idea behind an ensemble of classifiers is that the patterns misclassified by different classifiers are not necessarily the same, and that by suitably combining the decisions of complementary classifiers, the misclassification error can be reduced. Classifier selection differs from fusion: in classifier selection, for a given input, only one or a small number of the models in the ensemble are used, whereas in fusion, all models produce an output for a given input, and these outputs are then combined, for example by averaging, to compute the overall output. In this study, we review classifier selection methods in the literature in a comparative manner. We propose composite systems that can dynamically select the optimal subset of base classifiers from the ensemble when a test instance is given. We focus on the selection units of these systems and on their training, so that they learn each classifier's area of expertise. In the classification phase, given a data instance, the selection unit identifies the most competent classifiers so that only their decisions are taken into account. For this expertise-learning task, we try different algorithms such as decision trees, rule-based algorithms, and neural networks. On 40 datasets and 21 base learning algorithms, we observe that with a well-trained selection unit, an ensemble of experts can select the experts successfully and improve the overall accuracy. The improvement is especially significant when none of the base classifiers has high accuracy.
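To make the selection-versus-fusion distinction concrete, the following is a minimal sketch of dynamic classifier selection by local accuracy; the toy stump classifiers, the 1-D data, and the nearest-neighbor competence estimate are illustrative assumptions, not the systems studied in the paper. Given a test instance, the selection unit estimates each base classifier's accuracy on nearby validation points and uses only the decision of the locally most competent one.

```python
# Toy sketch of dynamic classifier selection by local accuracy.
# All classifiers, data, and parameters here are hypothetical examples.

def make_stump(threshold, sign):
    """Base classifier: a decision stump on a 1-D input."""
    def predict(x):
        return 1 if sign * (x - threshold) > 0 else 0
    return predict

# Two complementary base classifiers, each accurate on a different region.
clf_a = make_stump(0.5, +1)   # predicts class 1 for x > 0.5
clf_b = make_stump(0.2, -1)   # predicts class 1 for x < 0.2
ensemble = [clf_a, clf_b]

# Validation set: (x, label) pairs the selection unit uses to
# estimate each classifier's local area of expertise.
validation = [(0.1, 1), (0.15, 1), (0.3, 0), (0.6, 1), (0.8, 1), (0.4, 0)]

def select_and_predict(x, ensemble, validation, k=3):
    """Selection, not fusion: pick the base classifier with the best
    accuracy on the k validation points nearest to x, then use only
    that classifier's decision for x."""
    neighbors = sorted(validation, key=lambda p: abs(p[0] - x))[:k]
    best = max(ensemble,
               key=lambda clf: sum(clf(vx) == vy for vx, vy in neighbors))
    return best(x)
```

For instance, near x = 0.12 the selection unit finds that clf_b is correct on the surrounding validation points and delegates the decision to it, whereas near x = 0.7 it delegates to clf_a; a fusion scheme would instead combine both outputs (e.g., by averaging) regardless of the region.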