• Learning Compact Markov Logic Networks with Decision Trees

    More article details
    • Presentation date: 1392/07/24 (Solar Hijri)
    • Date published on TPBin: 1392/07/24
    Statistical-relational learning combines logical syntax with probabilistic methods. Markov logic networks (MLNs) are a prominent model class that generalizes both first-order logic and undirected graphical models (Markov networks). The qualitative component of an MLN is a set of clauses, and the quantitative component is a set of clause weights. Generative MLNs model the joint distribution of relationships and attributes. A state-of-the-art structure learning method is the moralization approach: learn a set of directed Horn clauses, then convert them to conjunctions to obtain MLN clauses. The directed clauses are learned using Bayes net methods. The moralization approach takes advantage of the high-quality inference algorithms for MLNs and their ability to handle cyclic dependencies. A weakness of moralization is that it leads to an unnecessarily large number of clauses. In this paper we show that using decision trees to represent conditional probabilities in the Bayes net is an effective remedy that leads to much more compact MLN structures. In experiments on benchmark datasets, the decision trees reduce the number of clauses in the moralized MLN by a factor of 5–25, depending on the dataset. Predictive accuracy is competitive with the models obtained by standard moralization, and in many cases superior.
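The compaction described in the abstract can be illustrated with a minimal sketch. In standard moralization, a tabular conditional probability distribution yields one MLN clause per parent-value configuration, while a decision-tree CPD yields one clause per leaf, so configurations that share the same conditional probability collapse into a single clause. The predicates, tree shape, and probabilities below are hypothetical examples, not taken from the paper:

```python
from itertools import product

# Hypothetical child predicate HighGrade(S,C) with three binary parents.
parents = ["Intelligent(S)", "Difficult(C)", "Registered(S,C)"]

# Tabular CPD ("standard moralization"): one clause per row of the
# full conditional probability table, i.e. 2^3 = 8 clauses.
table_clauses = [
    " ^ ".join(f"{'' if val else '!'}{p}" for p, val in zip(parents, row))
    + " ^ HighGrade(S,C)"
    for row in product([True, False], repeat=len(parents))
]

# Decision-tree CPD (assumed tree structure): each leaf is one clause.
# The tree tests Registered(S,C) first; when it is false, the grade
# distribution is constant, so four table rows merge into one leaf.
tree_leaves = [
    # (branch conditions from root to leaf, leaf P(HighGrade))
    (["Intelligent(S)", "Difficult(C)", "Registered(S,C)"], 0.5),
    (["Intelligent(S)", "!Difficult(C)", "Registered(S,C)"], 0.9),
    (["!Intelligent(S)", "Registered(S,C)"], 0.3),  # merges 2 rows
    (["!Registered(S,C)"], 0.1),                    # merges 4 rows
]
tree_clauses = [
    " ^ ".join(conds) + " ^ HighGrade(S,C)" for conds, _ in tree_leaves
]

# In the moralization framework, each clause would be weighted by the
# log conditional probability of its row/leaf (log-linear form).
print(len(table_clauses))  # 8 clauses from the full table
print(len(tree_clauses))   # 4 clauses from the decision tree
```

With more parents the gap widens: the table grows exponentially in the number of parents, while the tree grows only with the number of distinct conditional distributions, which is the mechanism behind the reported 5–25x clause reduction.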
