no code implementations • ACL 2022 • Shijie Geng, Zuohui Fu, Yingqiang Ge, Lei Li, Gerard de Melo, Yongfeng Zhang
In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items.
no code implementations • RANLP 2021 • Xiaoli He, Gerard de Melo
In recent years, a number of studies have used linear models for personality prediction based on text.
no code implementations • Findings (ACL) 2022 • ZeFeng Cai, LinLin Wang, Gerard de Melo, Fei Sun, Liang He
Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand the reason for receiving a specific recommendation.
no code implementations • EMNLP 2020 • Abu Awal Md Shoeb, Gerard de Melo
Given the growing ubiquity of emojis in language, there is a need for methods and resources that shed light on their meaning and communicative role.
no code implementations • EACL (LANTERN) 2021 • Sreyasi Nag Chowdhury, Rajarshi Bhowmik, Hareesh Ravi, Gerard de Melo, Simon Razniewski, Gerhard Weikum
Modern web content - news articles, blog posts, educational resources, marketing brochures - is predominantly multimodal.
no code implementations • 12 Jan 2025 • Shunfan Zheng, Xiechi Zhang, Gerard de Melo, Xiaoling Wang, LinLin Wang
To address these challenges, we introduce HDCEval, a Hierarchical Divide-and-Conquer Evaluation framework tailored for fine-grained alignment in medical evaluation.
1 code implementation • 17 Dec 2024 • Xin Yi, Shunfan Zheng, LinLin Wang, Gerard de Melo, Xiaoling Wang, Liang He
The core of our framework is first to construct a safety reference model from an initially aligned model to amplify safety-related features in neurons.
no code implementations • 16 Dec 2024 • Xiechi Zhang, Shunfan Zheng, LinLin Wang, Gerard de Melo, Zhu Cao, Xiaoling Wang, Liang He
As multimodal large language models (MLLMs) gain prominence in the medical field, the need for precise evaluation methods to assess their effectiveness has become critical.
no code implementations • 9 Dec 2024 • Roi Cohen, Konstantin Dobler, Eden Biran, Gerard de Melo
Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks.
1 code implementation • 16 Oct 2024 • Leo Kohlenberg, Leonard Horns, Frederic Sadrieh, Nils Kiele, Matthis Clausen, Konstantin Ketterer, Avetis Navasardyan, Tamara Czinczoll, Gerard de Melo, Ralf Herbrich
We also propose a new evaluation metric for this scenario, HAMS4, that can be used to compare a set of strings with multiple reference sets.
no code implementations • 8 Oct 2024 • Moritz Feuerpfeil, Marco Cipriano, Gerard de Melo
Scalable Vector Graphics (SVG) is a popular format on the web and in the design industry.
1 code implementation • 4 Oct 2024 • Zetian Ouyang, Yishuai Qiu, LinLin Wang, Gerard de Melo, Ya Zhang, Yanfeng Wang, Liang He
With the proliferation of Large Language Models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in clinical medical scenarios, where models need to be examined very thoroughly.
1 code implementation • 28 Aug 2024 • Konstantin Dobler, Gerard de Melo
We investigate continued pretraining of LLMs for language adaptation on a tight academic budget: a setting in which only a few GPUs can be used in parallel, for a heavily constrained duration.
1 code implementation • 25 Jun 2024 • Sedigheh Eslami, Gerard de Melo
We design AlignCLIP to answer these questions and, through extensive experiments, show that AlignCLIP achieves noticeable enhancements in the cross-modal alignment of the embeddings, thereby reducing the modality gap while improving performance across several zero-shot and fine-tuning downstream evaluations.
no code implementations • 3 Jun 2024 • Maryam Hosseini, Marco Cipriano, Sedigheh Eslami, Daniel Hodczak, Liu Liu, Andres Sevtsuk, Gerard de Melo
Existing Open Vocabulary Detection (OVD) models exhibit a number of challenges.
1 code implementation • 2 May 2024 • Tolga Buz, Benjamin Frost, Nikola Genchev, Moritz Schneider, Lucie-Aimée Kaffee, Gerard de Melo
Recent Large Language Models (LLMs) have shown the ability to generate content that is difficult or impossible to distinguish from human writing.
1 code implementation • 8 Mar 2024 • Maximilian Schall, Tamara Czinczoll, Gerard de Melo
Writing commit messages is a tedious daily task for many software developers, and often remains neglected.
no code implementations • 28 Feb 2024 • Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van Den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
The field of deep generative modeling has grown rapidly and consistently over the years.
1 code implementation • 27 Feb 2024 • Tamara Czinczoll, Christoph Hönes, Maximilian Schall, Gerard de Melo
While (large) language models have improved significantly in recent years, they still struggle to sensibly process long sequences found, e.g., in books, due to the quadratic scaling of the underlying attention mechanism.
1 code implementation • IEEE/CVF Winter Conference on Applications of Computer Vision 2024 • Tibor Bleidt, Sedigheh Eslami, Gerard de Melo
The task of Visual Question Answering (VQA) has been studied extensively on general-domain real-world images.
Ranked #1 on Visual Question Answering (VQA) on ArtQuest (using extra training data)
no code implementations • 20 Dec 2023 • Yan Cai, LinLin Wang, Ye Wang, Gerard de Melo, Ya Zhang, Yanfeng Wang, Liang He
The emergence of various medical large language models (LLMs) in the medical domain has highlighted the need for unified evaluation standards, as manual evaluation of LLMs proves to be time-consuming and labor-intensive.
1 code implementation • 9 Nov 2023 • Johannes Hagemann, Samuel Weinbach, Konstantin Dobler, Maximilian Schall, Gerard de Melo
In this work, we conduct a comprehensive ablation study of possible training configurations for large language models.
no code implementations • 18 Oct 2023 • Zengguang Hao, Jie Zhang, Binxia Xu, Yafang Wang, Gerard de Melo, Xiaolong Li
Intent detection and identification from multi-turn dialogue has become a widely explored technique in conversational agents such as voice assistants and intelligent customer service systems.
1 code implementation • 25 Sep 2023 • Lucas Liebe, Franz Sauerwald, Sylwester Sawicki, Matthias Schneider, Leo Schuhmann, Tolga Buz, Paul Boes, Ahmad Ahmadov, Gerard de Melo
To address this, we provide a novel framework for automatic real-time vehicle speed calculation, which copes with more diverse data from publicly available traffic cameras to achieve greater robustness.
no code implementations • 11 Aug 2023 • Jeff Z. Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, Russa Biswas, Gerard de Melo, Angela Bonifati, Edlira Vakaj, Mauro Dragoni, Damien Graux
Large Language Models (LLMs) have taken Knowledge Representation -- and the world -- by storm.
1 code implementation • 23 May 2023 • Margarita Bugueño, Gerard de Melo
Given the success of Graph Neural Networks (GNNs) for structure-aware machine learning, many studies have explored their use for text classification, but mostly in specific domains with limited data characteristics.
2 code implementations • 23 May 2023 • Konstantin Dobler, Gerard de Melo
However, if we want to use a new tokenizer specialized for the target language, we cannot transfer the source model's embedding matrix.
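This entry points at the core obstacle in tokenizer swapping: the source model's embedding matrix is indexed by the old vocabulary, so it cannot be carried over wholesale. A minimal sketch of the common overlap-copy baseline (not necessarily the paper's actual method; the function and variable names below are hypothetical) copies trained vectors for tokens shared by both vocabularies and falls back to the mean source embedding for the rest:

```python
import numpy as np

def transfer_embeddings(src_vocab, src_emb, tgt_vocab):
    """Initialize a target-vocabulary embedding matrix from a source one.

    src_vocab / tgt_vocab: dicts mapping token string -> row index.
    src_emb: array of shape (|V_src|, dim) with the trained embeddings.
    Returns the new (|V_tgt|, dim) matrix and the number of copied rows.
    """
    dim = src_emb.shape[1]
    # Fallback initialization: every target row starts as the mean
    # source embedding, a simple and widely used baseline.
    mean_vec = src_emb.mean(axis=0)
    tgt_emb = np.tile(mean_vec, (len(tgt_vocab), 1))
    copied = 0
    for tok, i in tgt_vocab.items():
        j = src_vocab.get(tok)
        if j is not None:  # token exists in both vocabularies: reuse it
            tgt_emb[i] = src_emb[j]
            copied += 1
    return tgt_emb, copied

# Toy example: only "cat" overlaps between the two vocabularies.
src_vocab = {"the": 0, "cat": 1, "##s": 2}
src_emb = np.array([[0., 0., 0., 0.],
                    [1., 1., 1., 1.],
                    [5., 5., 5., 5.]])
tgt_vocab = {"die": 0, "cat": 1, "##e": 2}
new_emb, n = transfer_embeddings(src_vocab, src_emb, tgt_vocab)
# "cat" keeps its trained vector; "die" and "##e" get the mean vector.
```

More sophisticated initializations (e.g., weighting source embeddings by subword similarity) improve on this mean-fallback baseline, which is precisely the gap such work addresses.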
1 code implementation • 16 Mar 2023 • Sepehr Janghorbani, Gerard de Melo
Recent breakthroughs in self supervised training have led to a new class of pretrained vision language models.
no code implementations • 31 Dec 2022 • Tolga Buz, Gerard de Melo
We present a data-driven empirical study of investment recommendations on WSB in comparison to recommendations made by leading investment banks, based on more than 1.6 million WSB posts published since 2018.
no code implementations • 11 Oct 2022 • Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu
We introduce ViLPAct, a novel vision-language benchmark for human activity planning.
2 code implementations • 6 Aug 2022 • Ziyi Lin, Shijie Geng, Renrui Zhang, Peng Gao, Gerard de Melo, Xiaogang Wang, Jifeng Dai, Yu Qiao, Hongsheng Li
Video recognition has been dominated by the end-to-end learning paradigm -- first initializing a video recognition model with weights of a pretrained image model and then conducting end-to-end training on videos.
Ranked #29 on Action Classification on Kinetics-400 (using extra training data)
4 code implementations • 9 Jun 2022 • Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. 
Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu
BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.
2 code implementations • 1 Mar 2022 • Xiang Hu, Haitao Mi, Liang Li, Gerard de Melo
We propose to use a top-down parser as a model-based pruning method, which also enables parallel encoding during inference.
1 code implementation • 23 Feb 2022 • Konstantin Dobler, Florian Hübscher, Jan Westphal, Alejandro Sierra-Múnera, Gerard de Melo, Ralf Krestel
Our approach is based on the StyleGAN neural network architecture, but incorporates a custom multi-conditional control mechanism that provides fine-granular control over characteristics of the generated paintings, e.g., with regard to the perceived emotion evoked in a spectator.
no code implementations • 27 Dec 2021 • Sedigheh Eslami, Gerard de Melo, Christoph Meinel
This work evaluates the effectiveness of CLIP for the task of Medical Visual Question Answering (MedVQA).
Ranked #7 on Medical Visual Question Answering on SLAKE-English
1 code implementation • 22 Dec 2021 • Honglu Zhou, Advith Chegu, Samuel S. Sohn, Zuohui Fu, Gerard de Melo, Mubbasir Kapadia
Digraph Representation Learning (DRL) aims to learn representations for directed homogeneous graphs (digraphs).
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
no code implementations • 24 Sep 2021 • Lei Shi, Kai Shuang, Shijie Geng, Peng Gao, Zuohui Fu, Gerard de Melo, Yunpeng Chen, Sen Su
To overcome these issues, we propose unbiased Dense Contrastive Visual-Linguistic Pretraining (DCVLP), which replaces the region regression and classification with cross-modality region contrastive learning that requires no annotations.
no code implementations • ACL 2021 • Xin Dong, Yaxin Zhu, Zuohui Fu, Dongkuan Xu, Gerard de Melo
Due to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages.
Cross-Lingual Natural Language Inference • Data Augmentation • +1
1 code implementation • ACL 2021 • Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo
Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.
no code implementations • 6 May 2021 • Tolga Buz, Gerard de Melo
200% over the last three years and approx.
no code implementations • EMNLP 2021 • Shahab Raji, Gerard de Melo
What do word vector representations reveal about the emotions associated with words?
no code implementations • EMNLP 2021 • Zhe Hu, Zuohui Fu, Yu Yin, Gerard de Melo
Impressive milestones have been achieved in text matching by adopting a cross-attention mechanism to capture pertinent semantic connections between two sentence representations.
1 code implementation • NAACL 2021 • Yaxin Zhu, Yikun Xian, Zuohui Fu, Gerard de Melo, Yongfeng Zhang
Knowledge graphs (KG) have become increasingly important to endow modern recommender systems with the ability to generate traceable reasoning paths to explain the recommendation process.
1 code implementation • EACL (Louhi) 2021 • Rajarshi Bhowmik, Karl Stratos, Gerard de Melo
Additionally, we modify our dual encoder model for end-to-end biomedical entity linking that performs both mention span detection and entity disambiguation and outperforms two recently proposed models.
no code implementations • ACL 2021 • Abu Awal Md Shoeb, Gerard de Melo
Emojis have become ubiquitous in digital communication, due to their visual appeal as well as their ability to vividly convey human emotion, among other factors.
no code implementations • COLING 2020 • Xiang Hu, Zujie Wen, Yafang Wang, Xiaolong Li, Gerard de Melo
In this work, we propose a reinforcement model to clarify ambiguous questions by suggesting refinements of the original query.
no code implementations • COLING 2020 • Arun Ramachandran, Gerard de Melo
Emotion lexicons provide information about associations between words and emotions.
no code implementations • COLING 2020 • Xunjie Zhu, Gerard de Melo
While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations.
no code implementations • COLING 2020 • Binxia Xu, Siyuan Qiu, Jie Zhang, Yafang Wang, Xiaoyu Shen, Gerard de Melo
Utterance classification is a key component in many conversational systems.
no code implementations • COLING 2020 • Sm Mazharul Islam, Xin Dong, Gerard de Melo
Sentiment analysis is an area of substantial relevance both in industry and in academia, including for instance in social studies.
no code implementations • COLING 2020 • Wangshu Zhang, Junhong Liu, Zujie Wen, Yafang Wang, Gerard de Melo
We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden.
2 code implementations • 13 Nov 2020 • Liqiang Wang, Xiaoyu Shen, Gerard de Melo, Gerhard Weikum
Prior work has focused on supervised learning with training data from the same domain.
1 code implementation • 29 Oct 2020 • Yikun Xian, Zuohui Fu, Handong Zhao, Yingqiang Ge, Xu Chen, Qiaoying Huang, Shijie Geng, Zhou Qin, Gerard de Melo, S. Muthukrishnan, Yongfeng Zhang
User profiles can capture prominent user behaviors from the history, and provide valuable signals about which kinds of path patterns are more likely to lead to potential items of interest for the user.
1 code implementation • 9 Oct 2020 • Honglu Zhou, Hareesh Ravi, Carlos M. Muniz, Vahid Azizi, Linda Ness, Gerard de Melo, Mubbasir Kapadia
Given its crucial role, there is a need to better understand and model the dynamics of GitHub as a social platform.
no code implementations • 21 Aug 2020 • Zuohui Fu, Yikun Xian, Yaxin Zhu, Yongfeng Zhang, Gerard de Melo
In this work, we present a new dataset for conversational recommendation over knowledge graphs in e-commerce platforms called COOKIE.
no code implementations • 29 Jul 2020 • Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen Yang, Gerard de Melo
The resulting model then serves as a teacher to induce labels for unlabeled target language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language.
no code implementations • 26 Jul 2020 • Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
We evaluate CVLP on several downstream tasks, including VQA, GQA and NLVR2, to validate the superiority of contrastive learning for multi-modality representation learning.
no code implementations • NeurIPS 2020 • Yipeng Kang, Tonghan Wang, Gerard de Melo
Emergentism and pragmatics are two research fields that study the dynamics of linguistic communication along substantially different timescales and intelligence levels.
Multi-agent Reinforcement Learning • Reinforcement Learning (RL) • +2
no code implementations • 3 Jun 2020 • Zuohui Fu, Yikun Xian, Ruoyuan Gao, Jieyu Zhao, Qiaoying Huang, Yingqiang Ge, Shuyuan Xu, Shijie Geng, Chirag Shah, Yongfeng Zhang, Gerard de Melo
There has been growing attention on fairness considerations recently, especially in the context of intelligent decision making systems.
no code implementations • 27 May 2020 • Bingchen Liu, Kunpeng Song, Yizhe Zhu, Gerard de Melo, Ahmed Elgammal
Focusing on text-to-image (T2I) generation, we propose Text and Image Mutual-Translation Adversarial Networks (TIME), a lightweight but effective model that jointly learns a T2I generator G and an image captioning discriminator D under the Generative Adversarial Network framework.
no code implementations • 9 May 2020 • Shijie Geng, Ji Zhang, Zuohui Fu, Peng Gao, Hang Zhang, Gerard de Melo
Without identifying the connection between appearing people and character names, a model is not able to obtain a genuine understanding of the plots.
no code implementations • LREC 2020 • Kshitij Shah, Gerard de Melo
In this paper, we explore the artificial generation of typographical errors based on real-world statistics.
no code implementations • 2 May 2020 • Abu Shoeb, Gerard de Melo
Given the growing ubiquity of emojis in language, there is a need for methods and resources that shed light on their meaning and communicative role.
no code implementations • LREC 2020 • Da Huo, Gerard de Melo
Given the well-established usefulness of part-of-speech tag annotations in many syntactically oriented downstream NLP tasks, the recently proposed notion of semantic tagging (Bjerva et al. 2016) aims at tagging words with tags informed by semantic distinctions, which are likely to be useful across a range of semantic tasks.
1 code implementation • 1 May 2020 • Rajarshi Bhowmik, Gerard de Melo
Despite their large-scale coverage, cross-domain knowledge graphs invariably suffer from inherent incompleteness and sparsity.
2 code implementations • 19 Apr 2020 • Honglu Zhou, Shuyuan Xu, Zuohui Fu, Gerard de Melo, Yongfeng Zhang, Mubbasir Kapadia
In this paper, we present a Hierarchical Information Diffusion (HID) framework by integrating user representation learning and multiscale modeling.
no code implementations • 9 Mar 2020 • Xunjie Zhu, Gerard de Melo
While important properties of word vector representations have been studied extensively, far less is known about the properties of sentence vector representations.
2 code implementations • 4 Mar 2020 • Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, José Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, Roberto Navigli, Axel-Cyrille Ngonga Ngomo, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, Antoine Zimmermann
In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data.
no code implementations • 2 Mar 2020 • Liang Jiang, Zujie Wen, Zhongping Liang, Yafang Wang, Gerard de Melo, Zhe Li, Liangzhuang Ma, Jiaxing Zhang, Xiaolong Li, Yuan Qi
The long-term teacher draws on snapshots from several epochs ago in order to provide steadfast guidance and to guarantee teacher-student differences, while the short-term one yields more up-to-date cues with the goal of enabling higher-quality updates.
no code implementations • 29 Jan 2020 • Zuohui Fu, Yikun Xian, Shijie Geng, Yingqiang Ge, Yuting Wang, Xin Dong, Guang Wang, Gerard de Melo
A number of cross-lingual transfer learning approaches based on neural networks have been proposed for the case when large amounts of parallel text are at our disposal.
no code implementations • 18 Dec 2019 • Xin Dong, Jingchao Ni, Wei Cheng, Zhengzhang Chen, Bo Zong, Dongjin Song, Yanchi Liu, Haifeng Chen, Gerard de Melo
In practice, however, these two sets of reviews are notably different: users' reviews reflect a variety of items that they have bought and are hence very heterogeneous in their topics, while an item's reviews pertain only to that single item and are thus topically homogeneous.
no code implementations • IJCNLP 2019 • Xin Dong, Gerard de Melo
Based on massive amounts of data, recent pretrained contextual representation models have made significant strides in advancing a number of different English NLP tasks.
2 code implementations • ICLR 2020 • Jindong Jiang, Sepehr Janghorbani, Gerard de Melo, Sungjin Ahn
Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning.
no code implementations • RANLP 2019 • Abu Awal Md Shoeb, Shahab Raji, Gerard de Melo
We evaluate the induced emotion profiles of emojis with regard to their ability to predict word affect intensities as well as sentiment scores.
1 code implementation • ACL 2019 • Zhi-Qiang Liu, Zuohui Fu, Jie Cao, Gerard de Melo, Yik-Cheung Tam, Cheng Niu, Jie Zhou
Rhetoric is a vital element in modern poetry, and plays an essential role in improving its aesthetics.
1 code implementation • 12 Jun 2019 • Yikun Xian, Zuohui Fu, S. Muthukrishnan, Gerard de Melo, Yongfeng Zhang
To this end, we propose a method called Policy-Guided Path Reasoning (PGPR), which couples recommendation and interpretability by providing actual paths in a knowledge graph.
1 code implementation • 26 May 2019 • Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard de Melo, Ahmed Elgammal
Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN).
1 code implementation • 16 Apr 2019 • Rajarshi Bhowmik, Gerard de Melo
Despite being vast repositories of factual information, cross-domain knowledge graphs, such as Wikidata and the Google Knowledge Graph, only sparsely provide short synoptic descriptions for entities.
1 code implementation • NAACL 2019 • Malihe Alikhani, Sreyasi Nag Chowdhury, Gerard de Melo, Matthew Stone
This paper presents a novel crowd-sourced resource for multimodal discourse: our resource characterizes inferences in image-text contexts in the domain of cooking recipes in the form of coherence relations.
1 code implementation • WS 2019 • Michael A. Hedderich, Andrew Yates, Dietrich Klakow, Gerard de Melo
However, they typically cannot serve as a drop-in replacement for conventional single-sense embeddings, because the correct sense vector needs to be selected for each word.
no code implementations • ACL 2018 • Xin Dong, Gerard de Melo
Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains.
Ranked #75 on Sentiment Analysis on SST-2 Binary classification
no code implementations • ACL 2018 • Xunjie Zhu, Tingfeng Li, Gerard de Melo
Neural vector representations are ubiquitous throughout all subfields of NLP.
1 code implementation • ACL 2018 • Rajarshi Bhowmik, Gerard de Melo
While large-scale knowledge graphs provide vast amounts of structured facts about entities, a short textual description can often be useful to succinctly characterize an entity and its type.
5 code implementations • CVPR 2018 • Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, Shilei Wen
In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common video classification datasets.
no code implementations • IJCNLP 2017 • Gerard de Melo
Neural vector representations are now ubiquitous in all subfields of natural language processing and text mining.
3 code implementations • 30 Jun 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo
Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals.
3 code implementations • EMNLP 2017 • Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo
In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query.
no code implementations • TACL 2018 • Xiang Long, Chuang Gan, Gerard de Melo
Recently, video captioning has been attracting an increasing amount of interest, due to its potential for improving accessibility and information retrieval.
no code implementations • 3 Oct 2016 • Sixue Liu, Gerard de Melo
We analyze to what extent the random SAT and Max-SAT problems differ in their properties.
no code implementations • LREC 2016 • Eneldo Loza Mencía, Gerard de Melo, Jinseok Nam
In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words.
no code implementations • LREC 2016 • John Philip McCrae, Christian Chiarcos, Francis Bond, Philipp Cimiano, Thierry Declerck, Gerard de Melo, Jorge Gracia, Sebastian Hellmann, Bettina Klimek, Steven Moran, Petya Osenova, Antonio Pareja-Lora, Jonathan Pool
The Open Linguistics Working Group (OWLG) brings together researchers from various fields of linguistics, natural language processing, and information technology to present and discuss principles, case studies, and best practices for representing, publishing and linking linguistic data collections.
no code implementations • TACL 2016 • E.D. Gutiérrez, Ekaterina Shutova, Patricia Lichtenstein, Gerard de Melo, Luca Gilardi
Understanding cross-cultural differences has important implications for world affairs and many aspects of the life of society.
no code implementations • LREC 2014 • Valeria de Paiva, Livy Real, Alexandre Rademaker, Gerard de Melo
This paper presents NomLex-PT, a lexical resource describing Portuguese nominalizations.
no code implementations • LREC 2014 • Gerard de Melo
Research on the history of words has led to remarkable insights about language and also about the history of human civilization more generally.
no code implementations • LREC 2014 • Lars Borin, Jens Allwood, Gerard de Melo
Evaluation of automatic language-independent methods for language technology resource creation is difficult, and confounded by a largely unknown quantity, viz.
no code implementations • TACL 2013 • Gerard de Melo, Mohit Bansal
Adjectives like good, great, and excellent are similar in meaning, but differ in intensity.
no code implementations • LREC 2012 • Gerard de Melo, Collin F. Baker, Nancy Ide, Rebecca J. Passonneau, Christiane Fellbaum
We analyze how different conceptions of lexical semantics affect sense annotations and how multiple sense inventories can be compared empirically, based on annotated text.