• Learning by extrapolation from marginal to full-multivariate probability distributions: decreasingly naive Bayesian classification

    Article details
    • Presentation date: 1392/07/24 (Solar Hijri calendar)
    • Publication date on TPBin: 1392/07/24 (Solar Hijri calendar)
    Averaged n-dependence estimators (AnDE) is an approach to probabilistic classification learning that learns by extrapolation from marginal to full-multivariate probability distributions. It utilizes a single parameter that transforms the approach between a low-variance, high-bias learner (naive Bayes) and a high-variance, low-bias learner with Bayes-optimal asymptotic error. It extends the underlying strategy of averaged one-dependence estimators (AODE), which relaxes the naive Bayes independence assumption while retaining many of naive Bayes' desirable computational and theoretical properties. AnDE further relaxes the independence assumption by generalizing AODE to higher levels of dependence. Extensive experimental evaluation shows that the bias-variance trade-off for averaged 2-dependence estimators results in strong predictive accuracy over a wide range of data sets. AnDE has training time linear in the number of examples, learns in a single pass through the training data, supports incremental learning, directly handles missing values, and is robust in the face of noise. Beyond the practical utility of its lower-dimensional variants, AnDE is of interest in that it demonstrates that it is possible to create low-bias, high-variance generative learners, and it suggests strategies for developing even more powerful classifiers.
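    To make the averaging described above concrete: writing A = {1, ..., a} for the attribute indices and x_s for the values taken by a size-n subset s of attributes, AnDE scores each class y by averaging n-dependence estimates over all such subsets. The notation below paraphrases the abstract rather than quoting the paper:

        \hat{P}_{\mathrm{AnDE}}(y, \mathbf{x}) \;\propto\; \sum_{s \in \binom{A}{n}} \hat{P}(y, x_s) \prod_{i=1}^{a} \hat{P}(x_i \mid y, x_s)

    With n = 0 the sum contains only the empty subset and the expression reduces to naive Bayes; larger n lowers bias at the cost of variance. The following is a minimal, illustrative Python sketch of this counting-based, single-pass scheme for categorical data, not the paper's reference implementation; the class name AnDE and all identifiers are hypothetical, smoothing is omitted (the paper uses m-estimation), and parent-value subsets unseen in training are skipped:

        from collections import defaultdict
        from itertools import combinations

        class AnDE:
            """Averaged n-dependence estimators over categorical attributes.

            n = 0 reduces to naive Bayes; larger n lowers bias at the cost
            of variance. Training is a single counting pass, so incremental
            learning is simply further calls to fit with new data.
            """

            def __init__(self, n=1):
                self.n = n
                self.joint = defaultdict(int)  # counts of (class, parent attribute-values)
                self.cond = defaultdict(int)   # counts of (class, parents, attribute, value)
                self.total = 0

            def fit(self, X, y):
                # Single pass: accumulate counts for every size-n attribute subset.
                for xs, label in zip(X, y):
                    self.total += 1
                    for parents in combinations(range(len(xs)), self.n):
                        pvals = tuple((p, xs[p]) for p in parents)
                        self.joint[(label, pvals)] += 1
                        for i, v in enumerate(xs):
                            self.cond[(label, pvals, i, v)] += 1
                return self

            def predict_scores(self, xs):
                # Average the n-dependence estimates over all size-n parent subsets.
                scores = defaultdict(float)
                labels = {key[0] for key in self.joint}
                for label in labels:
                    for parents in combinations(range(len(xs)), self.n):
                        pvals = tuple((p, xs[p]) for p in parents)
                        base = self.joint.get((label, pvals), 0)
                        if base == 0:
                            continue  # skip parent values unseen in training
                        est = base / self.total                 # P(y, x_s)
                        for i, v in enumerate(xs):
                            est *= self.cond.get((label, pvals, i, v), 0) / base  # P(x_i | y, x_s)
                        scores[label] += est
                return dict(scores)

            def predict(self, xs):
                scores = self.predict_scores(xs)
                return max(scores, key=scores.get) if scores else None

    A toy run, using n = 1 (which corresponds to AODE):

        X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
        y = ["no", "no", "yes", "no"]
        model = AnDE(n=1).fit(X, y)
        print(model.predict(("rain", "mild")))  # -> "yes"

    The direct handling of missing values mentioned in the abstract could be obtained by skipping affected attributes in both counting and scoring; that refinement is omitted here for brevity.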
