no code implementations • CCL 2020 • Wangda Luo, YuHan Liu, Bin Liang, Ruifeng Xu
For the question-answering stance detection task, existing methods struggle to capture the dependency relations between question and answer texts. This paper proposes a question-answering stance analysis method based on a Recurrent Interactive Attention (RIA) network. By mimicking the way humans think during reading comprehension, the method uses an interactive attention mechanism and an iterative recurrence procedure to effectively mine stance information from the mutual relations between questions and answers. In addition, the method converts questions into declarative form, effectively resolving the problem that interrogatively phrased question texts cannot clearly express their own stance. Experimental results show that the proposed method outperforms existing models and can effectively capture the question-answer dependency relations in the question-answering stance analysis task.
no code implementations • Findings (EMNLP) 2021 • Jun Gao, YuHan Liu, Haolin Deng, Wei Wang, Yu Cao, Jiachen Du, Ruifeng Xu
The emotion cause is a stimulus for human emotions.
no code implementations • 22 May 2025 • Song Jin, Juntian Zhang, YuHan Liu, Xun Zhang, Yufei Zhang, Guojun Yin, Fei Jiang, Wei Lin, Rui Yan
To bridge this gap, we introduce RecInter, a novel agent-based simulation platform for recommender systems featuring a robust interaction mechanism.
no code implementations • 22 May 2025 • Xiaoqing Zhang, Huabin Zheng, Ang Lv, YuHan Liu, Zirui Song, Flood Sung, Xiuying Chen, Rui Yan
We hope our approach can inspire future research on using reinforcement learning to improve the generalization of LLMs.
no code implementations • 14 May 2025 • Botao Amber Hu, YuHan Liu, Helena Rong
The recent trend of self-sovereign Decentralized AI Agents (DeAgents) combines Large Language Model (LLM)-based AI agents with decentralization technologies such as blockchain smart contracts and trusted execution environments (TEEs).
no code implementations • 13 May 2025 • YuHan Liu, Yuxuan Liu, Xiaoqing Zhang, Xiuying Chen, Rui Yan
Concurrently, the InsightFlow Agents consist of two specialized sub-agents: the Synthesis Agent and the Analysis Agent.
no code implementations • 12 May 2025 • Minnie Zhu, YuHan Liu, Simon Gong
This paper investigates the impact of monetary policy surprises on U.S. Treasury bond yields and the implications for portfolio managers.
1 code implementation • CVPR 2025 • Lei Wang, Senmao Li, Fei Yang, Jianye Wang, Ziheng Zhang, YuHan Liu, Yaxing Wang, Jian Yang
Diffusion models focus on constructing basic image structures in early stages, while refined details, including local features and textures, are generated in later stages.
no code implementations • 2 May 2025 • YuHan Liu, Lin Ning, Neo Wu, Karan Singhal, Philip Andrew Mansfield, Devora Berlowitz, Sushant Prakash, Bradley Green
User sequence modeling is crucial for modern large-scale recommendation systems, as it enables the extraction of informative representations of users and items from their historical interactions.
no code implementations • 15 Apr 2025 • Jincheng Kang, Yi Cen, Yigang Cen, Ke Wang, YuHan Liu
Wood defect detection is critical for ensuring quality control in the wood processing industry.
5 code implementations • 14 Apr 2025 • Yuqian Fu, Xingyu Qiu, Bin Ren, Yanwei Fu, Radu Timofte, Nicu Sebe, Ming-Hsuan Yang, Luc van Gool, Kaijin Zhang, Qingpeng Nong, Xiugang Dong, Hong Gao, Xiangsheng Zhou, Jiancheng Pan, Yanxing Liu, Xiao He, Jiahao Li, Yuze Sun, Xiaomeng Huang, Zhenyu Zhang, Ran Ma, YuHan Liu, Zijian Zhuang, Shuai Yi, Yixiong Zou, Lingyi Hong, Mingxi Chen, Runze Li, Xingdong Sheng, Wenqiang Zhang, Weisen Chen, Yongxin Yan, Xinguo Chen, Yuanjie Shao, Zhengrong Zuo, Nong Sang, Hao Wu, Haoran Sun, Shuming Hu, Yan Zhang, Zhiguang Shi, Yu Zhang, Chao Chen, Tao Wang, Da Feng, Linhai Zhuo, Ziming Lin, Yali Huang, Jie Me, Yiming Yang, Mi Guo, Mingyuan Jiu, Mingliang Xu, Maomao Xiong, Qunshu Zhang, Xinyu Cao, Yuqing Yang, Dianmo Sheng, Xuanpu Zhao, Zhiyu Li, Xuyang Ding, Wenqian Li
Cross-Domain Few-Shot Object Detection (CD-FSOD) poses significant challenges to existing object detection and few-shot detection models when applied across domains.
Cross-Domain Few-Shot Object Detection
1 code implementation • 13 Apr 2025 • Jiahao Qiu, Yinghui He, Xinzhe Juan, Yimin Wang, YuHan Liu, Zixin Yao, Yue Wu, Xun Jiang, Ling Yang, Mengdi Wang
Experiments conducted in popular character-based chatbots show that emotionally engaging dialogues can lead to psychological deterioration in vulnerable users, with mental state deterioration in more than 34.4% of the simulations.
no code implementations • 27 Mar 2025 • YuHan Liu, Yunbo Long
While large language model (LLM)-based chatbots have been applied for effective engagement in credit dialogues, their capacity for dynamic emotional expression remains limited.
no code implementations • CVPR 2025 • YuHan Liu, Yixiong Zou, Yuhua Li, Ruixuan Li
Based on this phenomenon and interpretation, we further propose a method that includes two plug-and-play modules: one to flatten the loss landscapes for low-level features during source-domain training as a novel sharpness-aware minimization method, and the other to directly supplement target-domain information to the model during target-domain testing by low-level-based calibration.
1 code implementation • 12 Mar 2025 • Xinyu Zhang, Haonan Chang, YuHan Liu, Abdeslam Boularias
To address this, we propose Motion Blender Gaussian Splatting (MBGS), a novel framework that uses motion graphs as an explicit and sparse motion representation.
no code implementations • 3 Mar 2025 • YuYang Huang, YuHan Liu, Haryadi S. Gunawi, Beibin Li, Changho Hwang
Continual learning has become a promising solution to refine large language models incrementally by leveraging user feedback.
no code implementations • 24 Feb 2025 • YuHan Liu, Máté Kiss, Roland Tóth, Maarten Schoukens
While optimal input design for linear systems has been well-established, no systematic approach exists for nonlinear systems where robustness to extrapolation/interpolation errors is prioritized over minimizing estimated parameter variance.
no code implementations • 21 Feb 2025 • Xueran Han, YuHan Liu, Mingzhe Li, Wei Liu, Sen Hu, Rui Yan, Zhiqiang Xu, Xiuying Chen
Great novels create immersive worlds with rich character arcs, well-structured plots, and nuanced writing styles.
no code implementations • 15 Feb 2025 • Zirui Song, Bin Yan, YuHan Liu, Miao Fang, Mingzhe Li, Rui Yan, Xiuying Chen
Large Language Models (LLMs) have demonstrated remarkable success in various tasks such as natural language understanding, text summarization, and machine translation.
no code implementations • 21 Jan 2025 • Jun Xu, Zhengxue Cheng, Guangchuan Chi, YuHan Liu, Yuelin Hu, Li Song
However, this architecture exhibits insufficient RD performance due to two main drawbacks: (1) the inadequate performance of the quantizer, challenging training processes, and issues such as codebook collapse; (2) the limited representational capacity of the encoder and decoder, making it difficult to meet feature representation requirements across various bitrates.
no code implementations • 7 Jan 2025 • Xiaoqing Zhang, Ang Lv, YuHan Liu, Flood Sung, Wei Liu, Shuo Shang, Xiuying Chen, Rui Yan
Recognizing the lack of multi-task datasets with diverse many-shot distributions, we develop the Many-Shot ICL Benchmark (ICL-50), a large-scale benchmark of 50 tasks covering shot numbers from 1 to 350 within sequences of up to 8,000 tokens, for fine-tuning purposes.
no code implementations • CVPR 2025 • Kangyi Wu, Pengna Li, Jingwen Fu, Yizhe Li, Yang Wu, YuHan Liu, Jinjun Wang, Sanping Zhou
Models trained on these datasets will pay less attention to out-of-distribution events.
no code implementations • 30 Dec 2024 • Xiaoqing Zhang, YuHan Liu, Flood Sung, Xiuying Chen, Shuo Shang, Rui Yan
To further minimize test-time computation overhead, we introduce preference-driven optimization with Reinforced Self-Training (ReST), which uses exploration trajectories from ThinkCoder to guide LLM's evolution.
no code implementations • 5 Nov 2024 • YuHan Liu, YuYang Huang, Jiayi Yao, Zhuohan Gu, Kuntai Du, Hanchen Li, Yihua Cheng, Junchen Jiang, Shan Lu, Madan Musuvathi, Esha Choukse
Large Language Models (LLMs) are increasingly employed in complex workflows, where different LLMs and fine-tuned variants collaboratively address complex tasks.
no code implementations • 24 Oct 2024 • YuHan Liu, Zirui Song, Juntian Zhang, Xiaoqing Zhang, Xiuying Chen, Rui Yan
With the growing spread of misinformation online, understanding how true news evolves into fake news has become crucial for early detection and prevention.
1 code implementation • 4 Oct 2024 • Xinyu Zhang, YuHan Liu, Haonan Chang, Liam Schramm, Abdeslam Boularias
Based on CCT, we propose the Autoregressive Policy (ARP) architecture, which solves manipulation tasks by generating hybrid action sequences.
Ranked #3 on Robot Manipulation on RLBench
no code implementations • 26 Sep 2024 • Owen Xingjian Zhang, Shuyao Zhou, Jiayi Geng, YuHan Liu, Sunny Xun Liu
In response to the increasing mental health challenges faced by college students, we sought to understand their perspectives on how AI applications, particularly Large Language Models (LLMs), can be leveraged to enhance their mental well-being.
no code implementations • 10 Sep 2024 • YuHan Liu, Shahriar Negaharipour
We propose an optimization technique for 3-D underwater object modeling from 2-D forward-scan sonar images at known poses.
1 code implementation • 31 Aug 2024 • Haonan Chang, Kowndinya Boyalakuntla, YuHan Liu, Xinyu Zhang, Liam Schramm, Abdeslam Boularias
We present a novel Diffusion-based Affordance Prediction (DAP) pipeline for the multi-modal object storage problem.
no code implementations • 29 Jul 2024 • YuHan Liu, Sheng Wang, Yixuan Liu, Feifei Li, Hong Chen
To provide a rigorous DP guarantee for SVT, prior works in the literature adopt a conservative privacy analysis by assuming the direct disclosure of noisy query results as in typical private query releases.
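The conservative analysis described above concerns the classic "AboveThreshold" form of the Sparse Vector Technique. As background, a minimal textbook-style sketch of AboveThreshold (not the refined analysis this paper proposes; function names and noise calibration follow the standard presentation, not this work's): only the binary above/below answers are released, never the noisy query values themselves.

```python
# Textbook-style AboveThreshold sketch (illustrative; not this paper's method).
import random

def laplace(scale):
    # Difference of two iid exponentials is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def above_threshold(queries, threshold, epsilon, cutoff=1):
    """Answer True/False per query until `cutoff` True answers are spent."""
    noisy_t = threshold + laplace(2 / epsilon)
    answers, hits = [], 0
    for q in queries:
        if hits >= cutoff:
            break
        # Only the comparison outcome is released, not q + noise itself.
        if q + laplace(4 / epsilon) >= noisy_t:
            answers.append(True)
            hits += 1
            noisy_t = threshold + laplace(2 / epsilon)  # refresh after a hit
        else:
            answers.append(False)
    return answers
```

The key property the paper's tighter analysis exploits is visible here: the noisy query values stay internal to the mechanism, so assuming they are directly disclosed (as prior analyses do) is conservative.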
1 code implementation • 18 Jul 2024 • YuHan Liu, Qianxin Huang, Siqi Hui, Jingwen Fu, Sanping Zhou, Kangyi Wu, Pengna Li, Jinjun Wang
In our work, we seek another way to use semantic information: a semantic-aware feature representation learning framework. Based on this, we propose SRMatcher, a new detector-free feature matching method that encourages the network to learn integrated semantic feature representations. Specifically, to capture precise and rich semantics, we leverage the capabilities of recently popularized vision foundation models (VFMs) trained on extensive datasets.
1 code implementation • 22 Jun 2024 • Shangbin Feng, Taylor Sorensen, YuHan Liu, Jillian Fisher, Chan Young Park, Yejin Choi, Yulia Tsvetkov
Modular Pluralism is uniquely compatible with black-box LLMs and offers the modular control of adding new community LMs for previously underrepresented communities.
1 code implementation • 12 Jun 2024 • Xinyu Zhang, YuHan Liu, Haonan Chang, Abdeslam Boularias
Learning general-purpose models from diverse datasets has achieved great success in machine learning.
no code implementations • 4 Jun 2024 • Yixuan Liu, Li Xiong, YuHan Liu, Yujie Gu, Ruixuan Liu, Hong Chen
Third, the model is updated with the gradient reconstructed from recycled common knowledge and noisy incremental information.
2 code implementations • 26 May 2024 • Jiayi Yao, Hanchen Li, YuHan Liu, Siddhant Ray, Yihua Cheng, Qizheng Zhang, Kuntai Du, Shan Lu, Junchen Jiang
To speed up the prefill of the long LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache when the context is reused as the prefix of another LLM input.
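The prefix-reuse idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual system or API: `compute_kv` stands in for the expensive prefill pass, and the store reuses a cached prefix's "KV cache" whenever a new input begins with that prefix.

```python
# Hypothetical sketch of prefix KV-cache reuse (names are illustrative).

def compute_kv(text: str) -> list:
    # Stand-in for the expensive prefill pass building per-token KV entries.
    return [hash(tok) for tok in text.split()]

class KVStore:
    def __init__(self):
        self._cache = {}

    def prefill(self, prompt: str, prefix: str) -> list:
        # Reuse the cached KV of `prefix` when the prompt starts with it,
        # and only compute KV for the suffix tokens.
        if prompt.startswith(prefix):
            if prefix not in self._cache:
                self._cache[prefix] = compute_kv(prefix)
            suffix = prompt[len(prefix):]
            return self._cache[prefix] + compute_kv(suffix)
        return compute_kv(prompt)
```

The savings come from skipping recomputation over the shared context; the paper's contribution concerns doing this across requests that share a prefix.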
no code implementations • 16 May 2024 • YuHan Liu, Roland Tóth, Maarten Schoukens
A new weighted regularization term is added to the cost function to penalize the difference between the state and output function of the baseline physics-based and final identified model.
no code implementations • 10 May 2024 • YuHan Liu, Ke Tu
This paper introduces the Structured Similarity Index Measure for Time Series (TS3IM), a novel approach inspired by the success of the Structural Similarity Index Measure (SSIM) in image analysis, tailored to address these limitations by assessing structural similarity in time series.
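To make the SSIM analogy concrete, here is a hypothetical SSIM-style comparison for two 1-D series (the actual TS3IM formulation differs; the constants `c1`, `c2` and the factorization into luminance-like and structure-like terms are borrowed from the standard SSIM recipe, not from this paper):

```python
# Illustrative SSIM-style similarity for time series (not the real TS3IM).
from statistics import mean, pvariance

def ssim_like(x, y, c1=1e-6, c2=1e-6):
    mx, my = mean(x), mean(y)
    vx, vy = pvariance(x), pvariance(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    # Mean-level agreement, as in SSIM's luminance term.
    luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    # Shape agreement via covariance, as in SSIM's structure term.
    structure = (2 * cov + c2) / (vx + vy + c2)
    return luminance * structure
```

Identical series score close to 1, while anti-correlated series score negatively, which is the kind of structural sensitivity that pointwise distances like MSE lack.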
no code implementations • 28 Apr 2024 • YuHan Liu, Yongjian Deng, Hao Chen, Bochen Xie, Youfu Li, Zhen Yang
Moreover, given that event data can provide accurate visual references at scene edges between consecutive frames, we introduce a learned visibility map derived from event data to adaptively mitigate the occlusion problem in the warping refinement process.
1 code implementation • 14 Mar 2024 • YuHan Liu, Xiuying Chen, Xiaoqing Zhang, Xing Gao, Ji Zhang, Rui Yan
Our simulation results uncover patterns in fake news propagation related to topic relevance, and individual traits, aligning with real-world observations.
no code implementations • 8 Mar 2024 • Zinan Zeng, Sen Ye, Zijian Cai, Heng Wang, YuHan Liu, Haokai Zhang, Minnan Luo
For instance, the metadata and the corresponding user's information of a review could be helpful.
no code implementations • 23 Jan 2024 • Hanchen Li, YuHan Liu, Yihua Cheng, Siddhant Ray, Kuntai Du, Junchen Jiang
To render each generated token in real-time for users, the Large Language Model (LLM) server generates tokens one by one and streams each token (or group of a few tokens) through the network to the user right after generation, which we refer to as LLM token streaming.
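The streaming setup described above can be sketched with a toy generator (purely illustrative; the token sequence and function names are made up, not this paper's system): the server yields each token as soon as it is produced, and the client renders tokens on arrival instead of waiting for the full response.

```python
# Toy sketch of LLM token streaming (illustrative only).

def generate_tokens(prompt):
    # Stand-in for autoregressive decoding; emits one token at a time.
    for tok in ("Hello", ",", " world", "!"):
        yield tok

def stream_response(prompt):
    rendered = []
    for tok in generate_tokens(prompt):
        rendered.append(tok)  # client renders each token as it arrives
    return "".join(rendered)
```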
no code implementations • CVPR 2024 • YuHan Liu, Yongjian Deng, Hao Chen, Zhen Yang
Video Frame Interpolation (VFI) has witnessed a surge in popularity due to its abundant downstream applications.
1 code implementation • 16 Nov 2023 • YuHan Liu, Shangbin Feng, Xiaochuang Han, Vidhisha Balachandran, Chan Young Park, Sachin Kumar, Yulia Tsvetkov
In this work, we take a first step towards designing summarization systems that are faithful to the author's intent, not only the semantic content of the article.
no code implementations • 24 Oct 2023 • YuHan Liu, Pengyu Wang, Chang-Hun Lee, Roland Tóth
One major challenge for autonomous attitude takeover control for on-orbit servicing of spacecraft is that an accurate dynamic motion model of the combined vehicles is highly nonlinear, complex and often costly to identify online, which makes traditional model-based control impractical for this task.
2 code implementations • 11 Oct 2023 • YuHan Liu, Hanchen Li, Yihua Cheng, Siddhant Ray, YuYang Huang, Qizheng Zhang, Kuntai Du, Jiayi Yao, Shan Lu, Ganesh Ananthanarayanan, Michael Maire, Henry Hoffmann, Ari Holtzman, Junchen Jiang
Compared to the recent systems that reuse the KV cache, CacheGen reduces the KV cache size by 3.5-4.3x and the total delay in fetching and processing contexts by 3.2-3.7x with negligible impact on the LLM response quality.
no code implementations • 7 Oct 2023 • YuHan Liu, Chengcheng Wan, Kuntai Du, Henry Hoffmann, Junchen Jiang, Shan Lu, Michael Maire
ML APIs have greatly relieved application developers of the burden to design and train their own neural network models -- classifying objects in an image can now be as simple as one line of Python code to call an API.
no code implementations • 3 Oct 2023 • Kuntai Du, YuHan Liu, Yitian Hao, Qizheng Zhang, Haodong Wang, YuYang Huang, Ganesh Ananthanarayanan, Junchen Jiang
While the high demand for network bandwidth and GPU resources could be substantially reduced by optimally adapting the configuration knobs, such as video resolution and frame rate, current adaptation techniques fail to meet three requirements simultaneously: adapt configurations (i) with minimum extra GPU or bandwidth overhead; (ii) to reach near-optimal decisions based on how the data affects the final DNN's accuracy, and (iii) do so for a range of configuration knobs.
1 code implementation • 2 Oct 2023 • Wenxuan Ding, Shangbin Feng, YuHan Liu, Zhaoxuan Tan, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov
The novel setting of geometric knowledge reasoning necessitates new LM abilities beyond existing atomic/linear multi-hop QA, such as backtracking, verifying facts and constraints, reasoning with uncertainty, and more.
1 code implementation • 22 Sep 2023 • Xinyu Zhang, YuHan Liu, Yuting Wang, Abdeslam Boularias
We evaluate DE-ViT on few-shot, and one-shot object detection benchmarks with Pascal VOC, COCO, and LVIS.
Ranked #2 on Few-Shot Object Detection on MS-COCO (10-shot)
Binary Classification
Cross-Domain Few-Shot Object Detection
no code implementations • 21 May 2023 • Yihua Cheng, Ziyi Zhang, Hanchen Li, Anton Arapin, Yue Zhang, Qizheng Zhang, YuHan Liu, Xu Zhang, Francis Y. Yan, Amrita Mazumdar, Nick Feamster, Junchen Jiang
In real-time video communication, retransmitting lost packets over high-latency networks is not viable due to strict latency requirements.
2 code implementations • 15 May 2023 • Shangbin Feng, Chan Young Park, YuHan Liu, Yulia Tsvetkov
We focus on hate speech and misinformation detection, aiming to empirically quantify the effects of political (social, economic) biases in pretraining data on the fairness of high-stakes social-oriented tasks.
no code implementations • 11 Apr 2023 • Yixuan Liu, Suyun Zhao, Li Xiong, YuHan Liu, Hong Chen
In this work, a general framework (APES) is built up to strengthen model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
no code implementations • 20 Mar 2023 • YuHan Liu, Anna Fang, Glen Moriarty, Robert Kraut, Haiyi Zhu
Online mental health communities (OMHCs) are an effective and accessible channel to give and receive social support for individuals with mental and emotional issues.
no code implementations • 7 Nov 2022 • YuHan Liu, Pengyu Wang, Roland Tóth
Gaussian process (GP) based estimation of system models is an effective tool to learn unknown dynamics directly from input/output data.
no code implementations • 7 Nov 2022 • Jayadev Acharya, YuHan Liu, Ziteng Sun
Perhaps surprisingly, we show that in suitable parameter regimes, having $m$ samples per user is equivalent to having $m$ times more users, each with only one sample.
1 code implementation • 26 Sep 2022 • Pengzhi Yang, YuHan Liu, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov
This paper proposes a method for learning continuous control policies for active landmark localization and exploration using an information-theoretic cost.
1 code implementation • 9 Jun 2022 • Shangbin Feng, Zhaoxuan Tan, Herun Wan, Ningnan Wang, Zilong Chen, Binchi Zhang, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang, Xinshun Feng, Qingyue Zhang, Hongrui Wang, YuHan Liu, Yuyang Bai, Heng Wang, Zijian Cai, Yanbo Wang, Lijing Zheng, Zihan Ma, Jundong Li, Minnan Luo
Twitter bot detection has become an increasingly important task to combat misinformation, facilitate social media moderation, and preserve the integrity of the online discourse.
no code implementations • 7 Jun 2022 • YuHan Liu, Ananda Theertha Suresh, Wennan Zhu, Peter Kairouz, Marco Gruteser
In this scenario, the amount of noise injected into the histogram to obtain differential privacy is proportional to the maximum user contribution, which can be amplified by few outliers.
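The outlier problem can be illustrated with a minimal sketch (illustrative only, not this paper's algorithm; the truncation rule and parameter names are assumptions): bound each user's contribution before aggregating, so the Laplace noise scale depends on the clip bound rather than on the largest single contribution.

```python
# Illustrative DP histogram with clipped per-user contributions.
import random
from collections import Counter

def laplace_noise(scale):
    # Difference of two iid exponentials is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def clip_and_aggregate(user_items, clip=3):
    """Keep at most `clip` items per user, then build the histogram."""
    hist = Counter()
    for items in user_items:
        hist.update(items[:clip])
    return hist

def dp_histogram(user_items, epsilon=1.0, clip=3):
    hist = clip_and_aggregate(user_items, clip)
    # After clipping, sensitivity is `clip`, independent of outliers.
    return {k: v + laplace_noise(clip / epsilon) for k, v in hist.items()}
```

Without clipping, a single user contributing thousands of items would force noise proportional to that maximum contribution onto every histogram bin, which is exactly the amplification effect the abstract describes.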
no code implementations • 6 Jun 2022 • Zhaolin Zhang, Mingqi Song, Wugang Meng, YuHan Liu, Fengcong Li, Xiang Feng, Yinan Zhao
First, this method decomposes and reconstructs the radar signal according to the difference in the reflected echo frequency between the limbs and the trunk of the human body.
1 code implementation • 7 May 2022 • YuHan Liu, Jun Gao, Jiachen Du, Lanjun Zhou, Ruifeng Xu
The emotion-aware dialogue management contains two parts: (1) Emotion state tracking maintains the current emotion state of the user and (2) Empathetic dialogue policy selection predicts a target emotion and a user's intent based on the results of the emotion state tracking.
no code implementations • 22 Dec 2021 • YuHan Liu, Roland Tóth
In this paper, we present a novel Dual Gaussian Process (DGP) based model predictive control strategy that improves the performance of a quadcopter during trajectory tracking.
no code implementations • NeurIPS 2021 • Jayadev Acharya, Clement Canonne, YuHan Liu, Ziteng Sun, Himanshu Tyagi
We obtain tight minimax rates for the problem of distributed estimation of discrete distributions under communication constraints, where $n$ users observing $m$ samples each can broadcast only $\ell$ bits.
no code implementations • 15 Oct 2021 • Ryan Jacobs, Mingren Shen, YuHan Liu, Wei Hao, Xiaoshan Li, Ruoyu He, Jacob RC Greaves, Donglin Wang, Zeming Xie, Zitong Huang, Chao Wang, Kevin G. Field, Dane Morgan
In this work, we perform semantic segmentation of multiple defect types in electron microscopy images of irradiated FeCrAl alloys using a deep learning Mask Regional Convolutional Neural Network (Mask R-CNN) model.
no code implementations • 19 Aug 2021 • Mingren Shen, Guanzhao Li, Dongxia Wu, YuHan Liu, Jacob Greaves, Wei Hao, Nathaniel J. Krakauer, Leah Krudy, Jacob Perez, Varun Sreenivasan, Bryan Sanchez, Oigimer Torres, Wei Li, Kevin Field, Dane Morgan
Electron microscopy is widely used to explore defects in crystal structures, but human detection of defects is often time-consuming, error-prone, and unreliable, and is not scalable to large numbers of images or real-time analysis.
no code implementations • CVPR 2021 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Milos Hasan, Ravi Ramamoorthi, Manmohan Chandraker
Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.
no code implementations • 3 May 2021 • YuHan Liu, Yuhan Gao, Zhifan Nan, Long Chen
During the COVID-19 pandemic, people started to discuss pandemic-related topics on social media.
1 code implementation • 2 Feb 2021 • YuHan Liu, Saurabh Agarwal, Shivaram Venkataraman
With the rapid adoption of machine learning (ML), a number of domains now use the approach of fine tuning models which were pre-trained on a large corpus of data.
no code implementations • 18 Jan 2021 • Arjun Balasubramanian, Adarsh Kumar, YuHan Liu, Han Cao, Shivaram Venkataraman, Aditya Akella
We present the design of GATI, an end-to-end prediction serving system that incorporates learned caches for low-latency DNN inference.
no code implementations • 30 Oct 2020 • Jayadev Acharya, Peter Kairouz, YuHan Liu, Ziteng Sun
We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints.
no code implementations • 14 Oct 2020 • Yunhai Han, YuHan Liu, David Paz, Henrik Christensen
Calibration of sensors is fundamental to robust performance for intelligent vehicles.
no code implementations • NeurIPS 2020 • Yuhan Liu, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, Michael Riley
If each user has $m$ samples, we show that straightforward applications of Laplace or Gaussian mechanisms require the number of users to be $\mathcal{O}(k/(m\alpha^2) + k/\epsilon\alpha)$ to achieve an $\ell_1$ distance of $\alpha$ between the true and estimated distributions, with the privacy-induced penalty $k/\epsilon\alpha$ independent of the number of samples per user $m$.
no code implementations • 25 Jul 2020 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker
Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.
no code implementations • 3 Dec 2018 • Ching Hua Lee, Guangjie Li, YuHan Liu, Tommy Tai, Ronny Thomale, Xiao Zhang
Non-Hermitian nodal knot metals (NKMs) contain intricate complex-valued energy bands that give rise to knotted exceptional loops and new topological surface states.
Mesoscale and Nanoscale Physics • Materials Science • Other Condensed Matter • Mathematical Physics • Quantum Physics
1 code implementation • 24 Mar 2018 • Yuhan Liu, Xiao Zhang, Maciej Lewenstein, Shi-Ju Ran
In this work, we implement simple numerical experiments, related to pattern/images classification, in which we represent the classifiers by many-qubit quantum states written in the matrix product states (MPS).