• Good Edit Similarity Learning by Loss Minimization

    More article details
    • Presentation date: 1392/07/24 (Iranian calendar)
    • Date published on TPBin: 1392/07/24
    Similarity functions are a fundamental component of many learning algorithms. When dealing with string or tree-structured data, measures based on the edit distance are widely used, and there exist a few methods for learning them from data. However, these methods offer no theoretical guarantee as to the generalization ability and discriminative power of the learned similarities. In this paper, we propose an approach to edit similarity learning based on loss minimization, called GESL. It is driven by the notion of (ϵ,γ,τ)-goodness, a theory that bridges the gap between the properties of a similarity function and its performance in classification. Using the notion of uniform stability, we derive generalization guarantees that hold for a large class of loss functions. We also provide experimental results on two real-world datasets which show that edit similarities learned with GESL induce more accurate and sparser classifiers than other (standard or learned) edit similarities.
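As a rough illustration of the standard edit distance that such similarity measures build on, here is a minimal dynamic-programming sketch in Python. This shows only the classic Levenshtein distance with unit costs, not the learned similarity (GESL) proposed in the paper, whose edit costs are learned from data.

```python
def edit_distance(s: str, t: str) -> int:
    """Classic Levenshtein (edit) distance with unit insertion,
    deletion, and substitution costs. Illustrative only; GESL
    learns the edit costs rather than fixing them to 1."""
    m, n = len(s), len(t)
    # prev[j] = distance between s[:i-1] and t[:j] from the previous row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(
                prev[j] + 1,        # deletion of s[i-1]
                curr[j - 1] + 1,    # insertion of t[j-1]
                prev[j - 1] + cost, # substitution (or match)
            )
        prev = curr
    return prev[n]
```

A distance like this is typically turned into a similarity, e.g. via a decreasing transform such as exp(-distance), before being plugged into a classifier.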
