1 code implementation • 14 Mar 2024 • Jennifer Hsia, Afreen Shaikh, Zhiruo Wang, Graham Neubig
RAGGED offers further insights into how LMs use retrieved contexts: we find that encoder-decoder models rely more on contexts and are thus more sensitive to retrieval quality, while decoder-only models tend to rely on knowledge memorized during training.
1 code implementation • 22 Oct 2022 • Rajdeep Mukherjee, Abhinav Bohra, Akash Banerjee, Soumya Sharma, Manjunath Hegde, Afreen Shaikh, Shivani Shrivastava, Koustuv Dasgupta, Niloy Ganguly, Saptarshi Ghosh, Pawan Goyal
Despite tremendous progress in automatic summarization, state-of-the-art methods are predominantly trained to excel in summarizing short newswire articles, or documents with strong layout biases such as scientific articles or government reports.
no code implementations • 18 Oct 2022 • Afreen Shaikh, Sharmila Botcha, Murali Krishna
In the proposed DE-Otsu algorithm, instead of passing only the fitness-function variables, the entire image is passed as input to the DE algorithm after the threshold values for the requested number of levels have been obtained from the Otsu algorithm.
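The combination described above, differential evolution searching for multilevel thresholds under Otsu's between-class-variance criterion, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact pipeline: the function names (`between_class_variance`, `de_otsu`) and the hand-rolled DE loop are assumptions for demonstration.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's criterion: weighted between-class variance of the gray-level
    classes induced by the (sorted) threshold values."""
    bins = np.arange(len(hist))
    total = hist.sum()
    cuts = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    mu_total = (hist * bins).sum() / total
    var = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = hist[lo:hi].sum() / total
        if w == 0:
            continue  # empty class contributes nothing
        mu = (hist[lo:hi] * bins[lo:hi]).sum() / hist[lo:hi].sum()
        var += w * (mu - mu_total) ** 2
    return var

def de_otsu(hist, levels, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal differential evolution over threshold vectors, maximizing
    Otsu's between-class variance (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    hi = len(hist) - 1
    X = rng.uniform(1, hi, size=(pop, levels))
    fit = np.array([between_class_variance(hist, x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # standard DE/rand/1 mutation and binomial crossover
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 1, hi)
            trial = np.where(rng.random(levels) < CR, mutant, X[i])
            f = between_class_variance(hist, trial)
            if f > fit[i]:  # greedy selection (maximization)
                X[i], fit[i] = trial, f
    best = X[fit.argmax()]
    return sorted(int(round(t)) for t in best)

# Usage: a bimodal histogram; the single optimal threshold should fall
# in the empty region between the two modes (bins 60..189).
hist = np.zeros(256)
hist[40:60] = 10.0
hist[190:210] = 10.0
thresholds = de_otsu(hist, levels=1)
```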
no code implementations • 24 Nov 2021 • Sarang Shrivastava, Afreen Shaikh, Shivani Shrivastava, Chung Ming Ho, Pradeep Reddy, Vijay Saraswat
This problem is easily solved for pages where the text is organized into a sequence of lines and vertical whitespace runs the full height of the page, producing multiple columns that can be read from left to right.