• Linear classifiers are nearly optimal when hidden variables have diverse effects

    Article details
    • Presentation date: 1392/07/24
    • Publication date on تی پی بین: 1392/07/24
    We analyze classification problems in which data is generated by a two-tiered random process: the class is generated first, then a layer of conditionally independent hidden variables, and finally the observed variables. For sources like this, the Bayes-optimal rule for predicting the class given the values of the observed variables is a two-layer neural network. We show that, if the hidden variables have non-negligible effects on many observed variables, a linear classifier approximates the error rate of the Bayes-optimal classifier up to lower-order terms. We also show that the hinge loss of a linear classifier is not much more than the Bayes error rate, which implies that an accurate linear classifier can be found efficiently.
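The two-tiered generative process described in the abstract can be illustrated with a toy simulation. The sketch below is only one simple instantiation of such a source (all parameter choices — the number of hidden variables, the flip probabilities, and the random parent assignment — are assumptions, not taken from the paper): a binary class generates conditionally independent ±1 hidden variables, each observed variable is a noisy copy of one hidden variable, and because each hidden variable influences several observed variables, even a crude averaging-based linear rule classifies well.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, k_hidden=5, d_obs=40, p_flip=0.2):
    """Toy two-tiered source: class -> hidden layer -> observed layer."""
    # 1) class label y in {-1, +1}
    y = rng.choice([-1, 1], size=n)
    # 2) hidden variables, conditionally independent given y:
    #    each equals y, flipped independently with probability p_flip
    h = y[:, None] * np.where(rng.random((n, k_hidden)) < p_flip, -1, 1)
    # 3) observed variables, conditionally independent given the hiddens:
    #    each copies one randomly chosen hidden variable, with noise, so
    #    every hidden variable affects many observed ones ("diverse effects")
    parents = rng.integers(0, k_hidden, size=d_obs)
    x = h[:, parents] * np.where(rng.random((n, d_obs)) < p_flip, -1, 1)
    return x.astype(float), y

X, y = sample(2000)
# A simple linear classifier: weight each feature by its empirical
# correlation with the label (an averaging / naive-Bayes-style rule).
w = (X * y[:, None]).mean(axis=0)
acc = np.mean(np.sign(X @ w) == y)
```

Under this symmetric noise model each observed variable is individually only weakly correlated with the class, yet the linear rule aggregates many weak votes and achieves accuracy well above chance — consistent with the abstract's claim that a linear classifier can come close to the Bayes-optimal (here, two-layer) predictor when hidden variables have diverse effects.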
