A regression model based on the nearest centroid neighborhood

Bibliographic Details
Main Author: Garcia, Vicente
Other Authors: Sánchez Garreta, Josep Salvador, Marques, Ana Isabel, Martínez-Peláez, Rafael
Format: Article
Language: en_US
Published: 2018
Online Access: https://doi.org/10.1007/s10044-018-0706-3
Description
Summary: The renowned k-nearest neighbor decision rule is widely used for classification tasks, where the label of a new sample is estimated from a similarity criterion defined by an appropriate distance function. It has also been used successfully for regression problems, where the purpose is to predict a continuous numeric label. However, some alternative neighborhood definitions, such as the surrounding neighborhood, require that the neighbors satisfy not only the proximity property but also a spatial location criterion. In this paper, we explore the use of the k-nearest centroid neighbor rule, which is based on the concept of surrounding neighborhood, for regression problems. Two support vector regression models were run as references. Experiments over a wide collection of real-world data sets, using fifteen different odd values of k, demonstrate that the regression algorithm based on the surrounding neighborhood significantly outperforms the traditional k-nearest neighbor method and also a support vector regression model with an RBF kernel.
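
As a rough illustration of the rule described in the summary, the sketch below implements k-nearest centroid neighbor (k-NCN) regression under common assumptions: Euclidean distance and an unweighted average of the selected neighbors' targets. The function and variable names are illustrative only and are not taken from the paper.

```python
import numpy as np

def k_ncn_regression(X_train, y_train, x_query, k=5):
    """Predict a continuous target for x_query with the k-NCN rule:
    neighbors are chosen one at a time so that the centroid of the
    selected set stays as close as possible to the query point."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    x_query = np.asarray(x_query, dtype=float)
    k = min(k, len(X_train))

    selected = []                          # indices of chosen neighbors
    remaining = list(range(len(X_train)))

    # First neighbor: the ordinary nearest neighbor of the query.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    first = int(np.argmin(dists))
    selected.append(first)
    remaining.remove(first)

    # Each subsequent neighbor minimizes the distance from the query
    # to the centroid of the already-selected points plus the candidate.
    for _ in range(1, k):
        current_sum = X_train[selected].sum(axis=0)
        best_idx, best_dist = None, np.inf
        for idx in remaining:
            centroid = (current_sum + X_train[idx]) / (len(selected) + 1)
            dist = np.linalg.norm(centroid - x_query)
            if dist < best_dist:
                best_idx, best_dist = idx, dist
        selected.append(best_idx)
        remaining.remove(best_idx)

    # Regression estimate: mean of the selected neighbors' targets.
    return y_train[selected].mean()

# Example usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)
print(k_ncn_regression(X, y, x_query=np.array([0.5]), k=7))
```

The only difference from plain k-NN regression is the selection step: instead of ranking points by distance alone, each new neighbor is the one whose inclusion keeps the centroid of the neighborhood closest to the query, which tends to surround the query spatially rather than cluster on one side of it.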