no code implementations • 12 Apr 2024 • Leif Azzopardi, Vishwa Vinay
This paper introduces the concept of accessibility from the field of transportation planning and adopts it within the context of Information Retrieval (IR).
no code implementations • 4 Feb 2023 • Nihal Jain, Praneetha Vaddamanu, Paridhi Maheshwari, Vishwa Vinay, Kuldeep Kulkarni
In this work, we consider the setting where a query for similar images is derived from a collection of images.
no code implementations • 4 Nov 2022 • Gaurav Verma, Vishwa Vinay, Ryan A. Rossi, Srijan Kumar
Our work aims to highlight and encourage further research on the robustness of deep multimodal models to realistic variations, especially in human-facing societal applications.
no code implementations • 8 Jul 2022 • Rishi Agarwal, Tirupati Saketh Chandra, Vaidehi Patil, Aniruddha Mahapatra, Kuldeep Kulkarni, Vishwa Vinay
To this end, we formulate scene graph expansion as a sequential prediction task involving multiple steps of first predicting a new node and then predicting the set of relationships between the newly predicted node and previous nodes in the graph.
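The sequential formulation above — propose a node, then predict its relationships to every existing node — can be sketched as a simple loop. This is an illustrative skeleton, not the paper's model: `predict_node` and `predict_relations` are hypothetical stand-ins for the learned components.

```python
def expand_scene_graph(nodes, edges, predict_node, predict_relations, steps):
    """Grow a scene graph by alternating node and relation prediction.

    predict_node / predict_relations are placeholders for learned models."""
    for _ in range(steps):
        new_node = predict_node(nodes, edges)       # step 1: propose a new node
        nodes.append(new_node)
        for prev in nodes[:-1]:                     # step 2: relate it to prior nodes
            rel = predict_relations(new_node, prev, edges)
            if rel is not None:
                edges.append((new_node, rel, prev))
    return nodes, edges

# Toy stand-ins: cycle through candidate objects, relate everything with "near".
candidates = iter(["tree", "car", "person"])
nodes, edges = expand_scene_graph(
    ["road"], [],
    predict_node=lambda n, e: next(candidates),
    predict_relations=lambda a, b, e: "near",
    steps=2,
)
```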
no code implementations • 6 Jun 2022 • Vishwa Vinay, Manoj Kilaru, David Arbour
Search engines and recommendation systems attempt to continually improve the quality of the experience they afford to their users.
1 code implementation • 28 May 2022 • Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A. Rossi, Vishwa Vinay, Aditya Grover
Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness.
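The contrastive objective underlying CLIP-style models can be sketched in a few lines: matched image/text pairs in a batch are pulled together while all other pairings serve as negatives. A minimal numpy version of the symmetric InfoNCE loss, for illustration only (the `temperature` value is an assumption):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # scaled cosine similarities
    labels = np.arange(len(logits))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()     # diagonal = matched pairs

    # average of image->text and text->image cross-entropies
    return 0.5 * (xent(logits) + xent(logits.T))
```

Well-aligned pairs give a near-zero loss; mismatched pairs give a large one, which is what drives the representations together during training.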
1 code implementation • 28 Apr 2022 • Hansi Zeng, Hamed Zamani, Vishwa Vinay
Recent work has shown that more effective dense retrieval models can be obtained by distilling ranking knowledge from an existing base re-ranking model.
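One common distillation objective in this setting is margin-MSE: the student dense retriever learns to match the teacher re-ranker's score *margin* between a positive and a negative passage, rather than its absolute scores. The sketch below illustrates that idea; the paper's exact loss may differ.

```python
import numpy as np

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """Margin-MSE distillation loss: match the teacher's pos-neg score gap."""
    student_margin = student_pos - student_neg
    teacher_margin = teacher_pos - teacher_neg
    return float(np.mean((student_margin - teacher_margin) ** 2))
```

Matching margins rather than raw scores matters because the teacher and student produce scores on different scales; only the relative ordering signal is transferable.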
no code implementations • 5 Dec 2021 • Victor S. Bursztyn, Jennifer Healey, Vishwa Vinay
Based on recent advances in realistic language modeling (GPT-3) and cross-modal representations (CLIP), Gaudí was developed to help designers search for inspirational images using natural language.
no code implementations • 22 Sep 2021 • Paridhi Maheshwari, Nihal Jain, Praneetha Vaddamanu, Dhananjay Raut, Shraiysh Vaishay, Vishwa Vinay
While this dataset is specialized for our investigations on color, the method can be extended to other visual dimensions where composition is of interest.
no code implementations • 6 Apr 2021 • Paridhi Maheshwari, Ritwick Chaudhry, Vishwa Vinay
In this work, we employ a graph convolutional network to exploit structure in scene graphs and produce image embeddings useful for semantic image retrieval.
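A graph convolution step aggregates each node's neighbourhood before applying a shared projection; pooling the resulting node features yields a fixed-size embedding for the whole scene graph. A minimal numpy sketch of this pattern (mean aggregation with self-loops), not the paper's exact architecture:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: average neighbour (and own) features,
    then apply a shared linear map followed by ReLU."""
    A_hat = A + np.eye(len(A))                  # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H_new = (A_hat / deg) @ H @ W               # mean aggregation, then project
    return np.maximum(H_new, 0.0)               # ReLU

def graph_embedding(A, H, W):
    """Mean-pool node features into a single scene-graph embedding."""
    return gcn_layer(A, H, W).mean(axis=0)
```

For retrieval, embeddings like this can be compared with cosine similarity so that images whose scene graphs share structure land near each other.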
no code implementations • 2 Mar 2021 • Sunny Dhamnani, Ritwik Sinha, Vishwa Vinay, Lilly Kumari, Margarita Savova
Malicious bots make up about a quarter of all traffic on the web, and degrade the performance of personalization and recommendation algorithms that operate on e-commerce sites.
no code implementations • 17 Jun 2020 • Paridhi Maheshwari, Manoj Ghuhan, Vishwa Vinay
We leverage historical clickthrough data to produce a colour representation for search queries and propose a recurrent neural network architecture to encode unseen queries into colour space.
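The query-to-colour idea can be illustrated with a character-level recurrent encoder that folds a query into a hidden state and maps it to an (R, G, B) triple. This is a toy sketch with random placeholder weights; in the paper the encoder is learned from clickthrough data and its architecture may differ.

```python
import numpy as np

def encode_query_to_colour(query, Wx, Wh, Wo):
    """Character-level Elman RNN mapping a query string to RGB in [0, 1].

    Wx/Wh/Wo are placeholder weights; real ones would be trained."""
    h = np.zeros(Wh.shape[0])
    for ch in query.lower():
        x = np.zeros(Wx.shape[1])
        x[ord(ch) % Wx.shape[1]] = 1.0          # hashed one-hot character
        h = np.tanh(Wx @ x + Wh @ h)            # recurrent update
    return 1.0 / (1.0 + np.exp(-(Wo @ h)))      # sigmoid -> RGB channels

rng = np.random.default_rng(0)
Wx, Wh, Wo = (rng.normal(size=(8, 32)),
              rng.normal(size=(8, 8)),
              rng.normal(size=(3, 8)))
rgb = encode_query_to_colour("crimson dress", Wx, Wh, Wo)
```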
no code implementations • 5 Jun 2020 • Gaurav Verma, Niyati Chhaya, Vishwa Vinay
With rising concern around abusive and hateful behavior on social media platforms, we present an ensemble learning method to identify and analyze the linguistic properties of such content.
no code implementations • 2 Mar 2020 • Gaurav Verma, Vishwa Vinay, Sahil Bansal, Shashank Oberoi, Makkunda Sharma, Prakhar Gupta
Interactive search sessions often contain multiple queries, where the user submits a reformulated version of the previous query in response to the original results.
no code implementations • 18 Aug 2019 • Moumita Sinha, Vishwa Vinay, Harvineet Singh
In this paper, we use a survival analysis framework to predict the time until an email is opened once it has been received.
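A key ingredient of the survival-analysis framing is handling censoring: emails never opened during the observation window still carry information. The Kaplan-Meier estimator below illustrates this; it is a standard nonparametric baseline, not necessarily the model used in the paper.

```python
import numpy as np

def kaplan_meier(times, opened):
    """Kaplan-Meier survival curve for time-to-open.

    Emails never opened (opened=False) are treated as right-censored:
    they count as 'at risk' up to their observation time."""
    times = np.asarray(times, dtype=float)
    opened = np.asarray(opened, dtype=bool)
    surv, points = 1.0, []
    for t in np.unique(times[opened]):          # event (open) times only
        at_risk = (times >= t).sum()            # still unopened just before t
        events = ((times == t) & opened).sum()  # opened exactly at t
        surv *= 1.0 - events / at_risk
        points.append((float(t), surv))
    return points
```

The returned curve gives, for each observed open time, the estimated probability that an email remains unopened beyond that time.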
no code implementations • 27 Apr 2018 • Shuai Li, Yasin Abbasi-Yadkori, Branislav Kveton, S. Muthukrishnan, Vishwa Vinay, Zheng Wen
We analyze our estimators and prove that they are more efficient than the estimators that do not use the structure of the click model, under the assumption that the click model holds.
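To see why click-model structure helps, consider the position-based model, where P(click) = examination(position) x attractiveness(item). Dividing logged click rates by the examination probability of the logged position recovers attractiveness, which can then be re-weighted for a new position. A hedged sketch of that estimator (not the paper's exact construction):

```python
import numpy as np

def pbm_value_estimate(clicks, shown_pos, exam_prob, target_pos):
    """Estimate expected clicks at target_pos under the position-based model.

    clicks: 0/1 click outcomes from logged impressions of one item.
    shown_pos: position at which each impression was displayed.
    exam_prob: examination probability per position (assumed known)."""
    clicks = np.asarray(clicks, dtype=float)
    shown_pos = np.asarray(shown_pos)
    # IPS-style correction: de-bias clicks by the examination probability
    attractiveness = np.mean(clicks / exam_prob[shown_pos])
    return attractiveness * exam_prob[target_pos]
```

Because the correction uses the click model's factorization rather than full inverse-propensity weights over rankings, its variance is lower — which is the efficiency gain the analysis above establishes.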