Crossref journal-article
Oxford University Press (OUP)
Bioinformatics (286)
Abstract

Motivation: Ranking feature sets is a key issue for classification, for instance, phenotype classification based on gene expression. Since ranking is often based on error estimation, and error estimators suffer from differing degrees of imprecision in small-sample settings, it is important to choose a computationally feasible error estimator that yields good feature-set ranking.

Results: This paper examines the feature-ranking performance of several kinds of error estimators: resubstitution, cross-validation, bootstrap and bolstered error estimation. It does so for three classification rules: linear discriminant analysis, three-nearest-neighbor classification and classification trees. Two measures of performance are considered: one counts the number of the truly best feature sets appearing among the best feature sets discovered by the error estimator, and the other computes the mean absolute error between the top ranks of the truly best feature sets and their ranks as given by the error estimator. Our results indicate that bolstering is superior to bootstrap, and bootstrap is better than cross-validation, for discovering top-performing feature sets for classification when using small samples. A key point is that bolstered error estimation is tens of times faster than bootstrap, and faster than cross-validation, and is therefore feasible for feature-set ranking when the number of feature sets is extremely large.

Availability: A companion website contains the complete set of tables and plots from the simulation study, together with a compilation of references on feature-set ranking with applications in genomics. It can be accessed at http://ee.tamu.edu/~edward/bolster_ranking

Contact: edward@ee.tamu.edu
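
The two ranking-performance measures described in the abstract are straightforward to compute once a true error and an estimated error are available for every candidate feature set. The sketch below is a minimal illustration, not the authors' code: the function names (top_k_overlap, mean_rank_deviation) and the synthetic data are assumptions made for this example. The first function counts how many of the truly best k feature sets the estimator places in its own top k; the second averages the absolute deviation between the true top ranks and the ranks assigned by the estimator.

import numpy as np

def rank_by_error(errors: np.ndarray) -> np.ndarray:
    """Indices of feature sets ordered from best (lowest error) to worst."""
    return np.argsort(errors, kind="stable")

def top_k_overlap(true_errors: np.ndarray, estimated_errors: np.ndarray, k: int) -> int:
    """Count how many of the truly best k feature sets appear among the
    best k feature sets according to the error estimator."""
    true_top = set(rank_by_error(true_errors)[:k])
    est_top = set(rank_by_error(estimated_errors)[:k])
    return len(true_top & est_top)

def mean_rank_deviation(true_errors: np.ndarray, estimated_errors: np.ndarray, k: int) -> float:
    """Mean absolute difference between the true ranks (0 = best) of the
    truly best k feature sets and the ranks the estimator assigns to them."""
    true_order = rank_by_error(true_errors)
    est_order = rank_by_error(estimated_errors)
    # Invert the estimated ordering: est_rank[i] = position of feature set i.
    est_rank = np.empty_like(est_order)
    est_rank[est_order] = np.arange(len(est_order))
    true_top = true_order[:k]
    return float(np.mean(np.abs(est_rank[true_top] - np.arange(k))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_err = rng.uniform(0.05, 0.45, size=1000)            # hypothetical true errors of 1000 feature sets
    noisy_est = true_err + rng.normal(0, 0.05, size=1000)     # an imprecise small-sample estimate
    print(top_k_overlap(true_err, noisy_est, k=20))
    print(mean_rank_deviation(true_err, noisy_est, k=20))

In the paper's setting, noisy_est would come from resubstitution, cross-validation, bootstrap or bolstered error estimation applied to a small sample, and the two measures would be compared across estimators.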

Bibliography

Sima, C., Braga-Neto, U., & Dougherty, E. R. (2004). Superior feature-set ranking for small samples using bolstered error estimation. Bioinformatics, 21(7), 1046–1054.

Authors 3
  1. Chao Sima (first)
  2. Ulisses Braga-Neto (additional)
  3. Edward R. Dougherty (additional)
References 19 Referenced 38
  1. Ambroise, C. and McLachlan, G.J. (2002) Selection bias in gene extraction on the basis of microarray gene-expression data. Proc. Natl Acad. Sci. USA, 99, 6562–6566. (10.1073/pnas.102102699)
  2. Braga-Neto, U.M. and Dougherty, E.R. (2004) Bolstered error estimation. Pattern Recogn., 37, 1267–1281.
  3. Braga-Neto, U.M. and Dougherty, E.R. (2004) Is cross-validation valid for small-sample microarray classification? Bioinformatics, 20, 374–380. (10.1093/bioinformatics/btg419)
  4. Braga-Neto, U.M., Hashimoto, R., Dougherty, E.R., Nguyen, D.V. and Carroll, R.J. (2004) Is cross-validation better than resubstitution for ranking genes? Bioinformatics, 20, 253–258. (10.1093/bioinformatics/btg399)
  5. Chen, Y., Kamat, V., Dougherty, E.R., Bittner, M.L., Meltzer, P.S. and Trent, J.M. (2002) Ratio statistics of gene expression levels and applications to microarray data analysis. Bioinformatics, 18, 1207–1215. (10.1093/bioinformatics/18.9.1207)
  6. Cover, T. and van Campenhout, J. (1977) On the possible orderings in the measurement selection problem. IEEE Trans. Systems Man Cybernet., 7, 657–661.
  7. Devroye, L., Györfi, L. and Lugosi, G. (1996) A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York. (10.1007/978-1-4612-0711-5)
  8. Dougherty, E.R. (2001) Small sample issues for microarray-based classification. Compar. Funct. Genom., 2, 28–34. (10.1002/cfg.62)
  9. Efron, B. (1979) Bootstrap methods: another look at the jackknife. Ann. Statist., 7, 1–26. (10.1214/aos/1176344552)
  10. Efron, B. (1983) Estimating the error rate of a prediction rule: improvement on cross-validation. J. Am. Statist. Assoc., 78, 316–331.
  11. Jain, A.K. and Chandrasekaran, B. (1982) Dimensionality and sample size considerations in pattern recognition practice. In Krishnaiah, P.R. and Kanal, L.N. (eds), Handbook of Statistics, Vol. II. North-Holland, Amsterdam, pp. 835–855. (10.1016/S0169-7161(82)02042-2)
  12. Jain, A.K. and Zongker, D. (1997) Feature selection: evaluation, application, and small sample performance. IEEE Trans. Pattern Anal. Machine Intell., 19, 153–158. (10.1109/34.574797)
  13. Kim, S., Dougherty, E.R., Shmulevich, I., Hess, K.R., Hamilton, S.R., Trent, J.M., Fuller, G.N. and Zhang, W. (2002) Identification of combination gene sets for glioma classification. Mol. Cancer Therap., 1, 1229–1236.
  14. Kudo, M. and Sklansky, J. (2000) Comparison of algorithms that select features for pattern classifiers. Pattern Recogn., 33, 25–41. (10.1016/S0031-3203(99)00041-2)
  15. Liu, W.-M., Mei, R., Di, X., Ryder, T.B., Hubbell, E., Dee, S., Webster, T.A., Harrington, C.A., Ho, M.-H., Baid, J. and Smeekens, S.P. (2002) Analysis of high density expression microarrays with signed-rank call algorithms. Bioinformatics, 18, 1593–1599. (10.1093/bioinformatics/18.12.1593)
  16. Raudys, S.J. and Jain, A.K. (1991) Small sample size effects in statistical pattern recognition: recommendations for practitioners. IEEE Trans. Pattern Anal. Machine Intell., 13, 252–262. (10.1109/34.75512)
  17. van de Vijver, M.J., He, Y.D., van’t Veer, L.J., Dai, H., Hart, A.A.M., Voskuil, D.W., Schreiber, G.J., Peterse, J.L., Roberts, C., Marton, M.J., et al. (2002) A gene-expression signature as a predictor of survival in breast cancer. N. Engl. J. Med., 347, 1999–2009.
  18. van’t Veer, L.J., Dai, H., van de Vijver, M.J., He, Y.D., Hart, A.A.M., Mao, M., Peterse, H.L., van der Kooy, K., Marton, M.J., Witteveen, A.T., et al. (2002) Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415, 530–536. (10.1038/415530a)
  19. Vapnik, V.N. (1998) Statistical Learning Theory. Wiley, New York, NY.
Dates
Type              When
Created           Oct. 28, 2004, 8:51 p.m.
Deposited         Jan. 31, 2023, 5:26 a.m.
Indexed           Aug. 2, 2025, 1:25 a.m.
Issued            Oct. 28, 2004
Published         Oct. 28, 2004
Published Online  Oct. 28, 2004
Published Print   April 1, 2005
Funders 0

None

@article{Sima_2004, title={Superior feature-set ranking for small samples using bolstered error estimation}, volume={21}, ISSN={1367-4803}, url={http://dx.doi.org/10.1093/bioinformatics/bti081}, DOI={10.1093/bioinformatics/bti081}, number={7}, journal={Bioinformatics}, publisher={Oxford University Press (OUP)}, author={Sima, Chao and Braga-Neto, Ulisses and Dougherty, Edward R.}, year={2004}, month=oct, pages={1046–1054} }