<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Tabular Deep Learning on King Fox And Butterfly</title>
    <link>http://liyingbo.com/tags/tabular-deep-learning/</link>
    <description>Recent content in Tabular Deep Learning on King Fox And Butterfly</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 25 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="http://liyingbo.com/tags/tabular-deep-learning/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Tabular Deep Learning - Benchmarks</title>
      <link>http://liyingbo.com/stat/2026/04/25/tabular-deep-learning-benchmarks/</link>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>http://liyingbo.com/stat/2026/04/25/tabular-deep-learning-benchmarks/</guid>
      <description>It has been almost a decade since the introduction of modern implementations of Gradient Boosted Decision Trees (GBDT), such as XGBoost [Chen and Guestrin 2016], LightGBM [Ke et al. 2017], and CatBoost [Prokhorenkova et al. 2018].</description>
    </item>
    
  </channel>
</rss>
