Search Results for author: Kaiyuan Wang

Found 7 papers, 1 paper with code

FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models

no code implementations19 Jan 2024 Ziqiang Yuan, Kaiyuan Wang, Shoutai Zhu, Ye Yuan, Jingya Zhou, Yanlin Zhu, Wenqi Wei

To address the limited data resources and reduce the annotation cost, we introduce FinLLMs, a method for generating financial question-answering data based on common financial formulas using Large Language Models.

Question Answering
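The core idea above — producing question-answer pairs from common financial formulas — can be illustrated with a tiny hand-rolled sketch. Everything here (the simple-interest formula choice, the `simple_interest_qa` name, and the question template) is invented for illustration; FinLLMs itself uses Large Language Models and a broader set of financial formulas.

```python
# Illustrative formula-based QA generation (a simplified analogue of the
# idea; FinLLMs generates such data with LLMs, not a fixed template).

def simple_interest_qa(principal, rate, years):
    """Build one question-answer pair from the simple-interest formula I = P * r * t."""
    question = (f"An investment of ${principal} earns simple interest at "
                f"{rate:.0%} per year. How much interest accrues in {years} years?")
    answer = principal * rate * years
    return question, answer
```

A generator like this can be looped over sampled parameter values to produce many labeled pairs without manual annotation, which is the cost-saving the excerpt describes.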

SuPerPM: A Large Deformation-Robust Surgical Perception Framework Based on Deep Point Matching Learned from Physical Constrained Simulation Data

no code implementations25 Sep 2023 Shan Lin, Albert J. Miao, Ali Alabiad, Fei Liu, Kaiyuan Wang, Jingpei Lu, Florian Richter, Michael C. Yip

Thus, for tuning the learning model, we gather endoscopic data of soft tissue being manipulated by a surgical robot and then establish correspondences between point clouds at different time points to serve as ground truth.
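Establishing correspondences between point clouds captured at different time points can be sketched with a plain nearest-neighbour baseline. This is only a generic illustration of what a correspondence is, not SuPerPM's method, which learns deep point matching; the function name and the brute-force search are assumptions.

```python
# Illustrative nearest-neighbour correspondence between two point clouds
# (a generic baseline; SuPerPM learns matching with a deep network).

def nearest_correspondences(cloud_t0, cloud_t1):
    """For each point at time t0, return the index of the closest point at time t1."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(cloud_t1)), key=lambda j: sq_dist(p, cloud_t1[j]))
            for p in cloud_t0]
```

Correspondences of exactly this shape (an index per source point) are what serve as ground truth for tuning the learned model in the excerpt above.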

AnyOKP: One-Shot and Instance-Aware Object Keypoint Extraction with Pretrained ViT

no code implementations15 Sep 2023 Fangbo Qin, Taogang Hou, Shan Lin, Kaiyuan Wang, Michael C. Yip, Shan Yu

Towards flexible object-centric visual perception, we propose AnyOKP, a one-shot, instance-aware object keypoint (OKP) extraction approach that leverages the powerful representation ability of a pretrained vision transformer (ViT) and can obtain keypoints on multiple object instances of arbitrary category after learning from a single support image.

Object
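The one-shot transfer idea — matching keypoints annotated on a support image to locations in a query image via feature similarity — can be sketched as below. The feature vectors are assumed to be precomputed per-location descriptors (in AnyOKP they come from a pretrained ViT); the dictionaries, names, and cosine-similarity matching rule are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative one-shot keypoint transfer by feature matching (generic
# sketch; AnyOKP uses pretrained ViT features, assumed here as
# precomputed per-location vectors).

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den

def transfer_keypoints(support_feats, query_feats):
    """For each support keypoint's feature, return the best-matching query location."""
    return [max(query_feats, key=lambda loc: cosine(support_feats[k], query_feats[loc]))
            for k in support_feats]
```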

A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

no code implementations25 Dec 2019 Muhammad Usman, Wenxi Wang, Kaiyuan Wang, Marko Vasic, Haris Vikalo, Sarfraz Khurshid

However, MCML metrics based on model counting show that the performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties, and the usefulness of model counting in quantifying the true performance.

BIG-bench Machine Learning
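The point of the excerpt — that a classifier which looks accurate on a test set can degrade when measured over the entire bounded input space — can be shown with a toy exhaustive evaluation. MCML computes such metrics via model counting rather than enumeration; the sorted-bit-vector property and the deliberately imperfect classifier below are invented toy examples.

```python
from itertools import product

# Toy exhaustive-evaluation metric over a bounded input space (MCML
# obtains such metrics via model counting; this enumerates instead).

def true_property(bits):
    """Ground-truth relational property: bit-vector is non-decreasing."""
    return all(a <= b for a, b in zip(bits, bits[1:]))

def learned_classifier(bits):
    """A deliberately imperfect learned approximation of the property."""
    return bits[-1] == 1 or sum(bits) == 0

def exhaustive_accuracy(n):
    """Fraction of the entire bounded space where classifier and truth agree."""
    space = list(product([0, 1], repeat=n))
    agree = sum(true_property(b) == learned_classifier(b) for b in space)
    return agree / len(space)
```

Enumeration only scales to tiny bounds, which is exactly why the paper turns to model counting to quantify true performance.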

MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees

no code implementations25 Sep 2019 Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid

We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.

Game of Go, Imitation Learning, +3
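The structure described above — a gating function that partitions the state space, routing each state to one of several decision tree experts — can be sketched minimally. The threshold gate, the depth-1 expert rules, and the action names below are all invented for illustration; they are not the paper's learned MoET model.

```python
# Minimal sketch of a mixture-of-expert-trees policy (illustrative only;
# the gate and expert rules below are invented, not the paper's MoET).

def gate(state):
    """Hard gating function: partition a toy 1-D state space at 0.5."""
    return 0 if state[0] < 0.5 else 1

def expert_left(state):
    """Depth-1 decision tree specializing on the left partition."""
    return "action_a" if state[1] < 0.0 else "action_b"

def expert_right(state):
    """Depth-1 decision tree specializing on the right partition."""
    return "action_b" if state[1] < 1.0 else "action_c"

EXPERTS = [expert_left, expert_right]

def moe_policy(state):
    """Route the state to the expert selected by the gate."""
    return EXPERTS[gate(state)](state)
```

Because both the gate and every expert are small decision rules, the composed policy stays human-readable, which is what makes the model interpretable and amenable to verification.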

MoËT: Mixture of Expert Trees and its Application to Verifiable Reinforcement Learning

2 code implementations16 Jun 2019 Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid

By training MoËT models using an imitation learning procedure on deep RL agents we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models.

Game of Go, Imitation Learning, +4
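The training recipe in the excerpt — imitation learning on a deep RL agent — amounts to behavioural cloning: roll the expert out to collect (state, action) pairs, then fit a simpler student to them. The sketch below is a generic illustration under that assumption; the nearest-neighbour student and all names are invented, whereas the paper distills into MoËT models.

```python
# Minimal behavioural-cloning sketch of distilling a policy via imitation
# learning (illustrative; the paper distills deep RL agents into MoET models).

def collect_demos(expert_policy, states):
    """Roll the expert over sample states to build a supervised dataset."""
    return [(s, expert_policy(s)) for s in states]

def fit_nearest_neighbour_student(demos):
    """Return a student policy that copies the action of the closest demo state."""
    def student(state):
        _, action = min(demos, key=lambda d: abs(d[0] - state))
        return action
    return student
```

Swapping the nearest-neighbour student for an interpretable tree-structured model is the step that yields a verifiable policy in the paper's setting.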
