Part of International Conference on Learning Representations 2025 (ICLR 2025)
Lin Long, Xijun Gu, Xinjie Sun, Wentao Ye, Haobo Wang, Sai Wu, Gang Chen, Junbo Zhao
The rise of Large Language Models (LLMs) has revolutionized numerous domains, yet these models still exhibit weaknesses in understanding structured tabular data. Although growing context windows promise to accommodate a larger volume of table content, they do not inherently improve the model's ability to understand the underlying structure and semantics of tabular data. To bridge the semantic gap between Text and Table, we propose TnT, a table-language model that features multimodal table representations to empower LLMs to effectively and efficiently abstract structure-enriched semantics from tabular data. TnT also introduces a scalable and efficient training pipeline, featuring novel self-supervised tasks, to integrate abstract tabular knowledge into the language modality. Extensive experimental results on NL2SQL demonstrate the much stronger table understanding of TnT, which achieves gains of up to 14.4 in execution accuracy over traditional text-based table representations.
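For context on the reported metric, the sketch below illustrates how execution accuracy is commonly computed for NL2SQL: a predicted query counts as correct when executing it against the database yields the same result set as executing the gold query. This is a minimal illustration of the standard metric, not the paper's evaluation harness; the function name, the SQLite backend, and the unordered-multiset comparison of result rows are illustrative assumptions.

```python
import sqlite3
from collections import Counter


def execution_accuracy(pairs, db_path):
    """Fraction of (predicted_sql, gold_sql) pairs whose execution results match.

    Result rows are compared as an unordered multiset, so row order does not
    matter but duplicate rows do. Predicted queries that fail to execute are
    counted as incorrect.
    """
    def run(conn, sql):
        try:
            return Counter(tuple(row) for row in conn.execute(sql).fetchall())
        except sqlite3.Error:
            return None  # failed execution counts as a miss

    conn = sqlite3.connect(db_path)
    correct = 0
    for pred_sql, gold_sql in pairs:
        pred_rows = run(conn, pred_sql)
        gold_rows = run(conn, gold_sql)
        if pred_rows is not None and pred_rows == gold_rows:
            correct += 1
    conn.close()
    return correct / len(pairs) if pairs else 0.0
```

Under this convention, the reported improvement corresponds to a larger fraction of predicted queries whose execution results exactly match those of the gold queries.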