no code implementations • 7 Dec 2023 • Zifan Xu, Haozhu Wang, Dmitriy Bespalov, Peter Stone, Yanjun Qi
Simultaneously, RSD learns a reasoning policy to determine the required reasoning skill for a given question.
no code implementations • 23 Oct 2023 • Ying Liu, Haozhu Wang, Huixue Zhou, Mingchen Li, Yu Hou, Sicheng Zhou, Fang Wang, Rama Hoetzlein, Rui Zhang
It has gained significant attention in the field of Natural Language Processing (NLP) due to its ability to learn optimal strategies for tasks such as dialogue systems, machine translation, and question-answering.
1 code implementation • 27 Sep 2023 • Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu
While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost.
no code implementations • 19 May 2023 • Taigao Ma, Haozhu Wang, L. Jay Guo
Deep learning-based methods have recently been established as fast and accurate surrogate simulators for optical multilayer thin film structures.
no code implementations • 20 Apr 2023 • Taigao Ma, Haozhu Wang, L. Jay Guo
Foundation models are large machine learning models that can tackle various downstream tasks once trained on diverse and large-scale data, leading research trends in natural language processing, computer vision, and reinforcement learning.
1 code implementation • 21 Jun 2020 • Haozhu Wang, Zeyu Zheng, Chengang Ji, L. Jay Guo
In this work, we frame the multi-layer optical design task as a sequence generation problem.
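As a minimal illustration of this framing (the paper's actual encoding may differ; all names and values below are hypothetical), a multilayer stack can be tokenized as a sequence of (material, thickness) choices, so that design reduces to generating tokens one layer at a time:

```python
# Hypothetical sketch: encode a multilayer optical stack as an integer
# token sequence, so design becomes a sequence generation problem.
MATERIALS = ["SiO2", "TiO2", "MgF2"]        # candidate layer materials (illustrative)
THICKNESS_BINS = list(range(10, 210, 10))   # layer thicknesses in nm, discretized

def encode_layer(material, thickness_nm):
    """Map one layer to a single integer token (material index x thickness bin)."""
    m = MATERIALS.index(material)
    t = THICKNESS_BINS.index(thickness_nm)
    return m * len(THICKNESS_BINS) + t

def decode_layer(token):
    """Invert encode_layer: recover (material, thickness) from a token."""
    m, t = divmod(token, len(THICKNESS_BINS))
    return MATERIALS[m], THICKNESS_BINS[t]

# A three-layer stack becomes a short token sequence a generative model could emit.
stack = [("SiO2", 100), ("TiO2", 50), ("SiO2", 100)]
tokens = [encode_layer(m, t) for m, t in stack]
assert [decode_layer(tok) for tok in tokens] == stack
```

Under this kind of encoding, a sequence model (e.g. trained with reinforcement learning against a simulated optical response) can propose stacks layer by layer.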
1 code implementation • ICLR 2018 • Dejiao Zhang, Haozhu Wang, Mario Figueiredo, Laura Balzano
This has motivated a large body of work on reducing the complexity of neural networks by using sparsity-inducing regularizers.
1 code implementation • 8 Nov 2017 • Jiaxuan Wang, Jeeheh Oh, Haozhu Wang, Jenna Wiens
In this work, we formally define credibility in the linear setting and focus on techniques for learning models that are both accurate and credible.