Making Large Language Models Interactive: A Pioneer Study on Supporting Complex Information-Seeking Tasks with Implicit Constraints

Current interactive systems with natural language interfaces struggle to understand complex information-seeking requests that express several implicit constraints at once and for which no prior information about user preferences is available, e.g., "find hiking trails around San Francisco which are accessible with toddlers and have beautiful scenery in summer", where the expected output is a list of suggestions from which users can start their exploration. In such scenarios, the user request is issued in one shot as a long, complex query, unlike conversational and exploratory search models, where short utterances or queries are presented to the system step by step. We have designed and deployed a platform to collect data from users approaching such complex interactive systems. Moreover, despite recent advances in generative language models, these models still hallucinate when asked for accurate factual knowledge; they are trained largely on web-scraped data from the past, which is often of limited use for users' immediate needs. In this article, we propose an IA that leverages Large Language Models (LLMs) for complex request understanding and makes it interactive via reinforcement learning, allowing user requests to be iteratively refined until they are complete. This leads to better retrieval and reduces LLM hallucination for current user needs. To assess the proposed modeling paradigm, we adopt several pre-retrieval metrics that capture the extent to which guided interactions with our system yield better retrieval results. Through extensive experiments, we show that our method significantly outperforms several strong baselines.
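The abstract does not specify which pre-retrieval metrics are used, so the sketch below illustrates the general idea with one standard pre-retrieval predictor, average IDF (query specificity), computed over a toy corpus. The document collection, the raw query, and the "refined" query are all made up for illustration; the point is only to show how one can quantify whether guided refinement makes a request more discriminative before any retrieval is run.

```python
import math

# Toy document collection standing in for a retrieval corpus (illustrative only).
DOCS = [
    "easy hiking trails near san francisco with stroller access",
    "best summer hikes in the bay area with scenic ocean views",
    "san francisco museums open on weekends",
    "toddler friendly parks and short walking trails in marin",
    "backpacking routes in the sierra nevada for experienced hikers",
]

def idf(term: str, docs: list[str]) -> float:
    """Smoothed inverse document frequency of a term in the toy corpus."""
    df = sum(1 for d in docs if term in d.split())
    return math.log((len(docs) + 1) / (df + 1)) + 1.0

def avg_idf(query: str, docs: list[str]) -> float:
    """Average IDF of the query terms: a common pre-retrieval specificity
    predictor (higher values suggest a more discriminative query)."""
    terms = query.lower().split()
    return sum(idf(t, docs) for t in terms) / len(terms)

# A raw one-shot request vs. a hypothetical version refined through guided
# interaction (both queries are invented for this example).
raw_query = "hiking trails around san francisco"
refined_query = "toddler friendly scenic summer hiking trails around san francisco"

print(f"avg-IDF raw:     {avg_idf(raw_query, DOCS):.3f}")
print(f"avg-IDF refined: {avg_idf(refined_query, DOCS):.3f}")
```

In this toy setup the refined query scores higher because the added constraint terms are rarer in the corpus; the paper's evaluation presumably applies analogous pre-retrieval measures at a much larger scale.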
