16 papers with code • 13 benchmarks • 10 datasets
Most implemented papers
Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing
In this paper, we challenge the prevailing assumption that domain-specific pretraining should start from a general-domain model, showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch yields substantial gains over continual pretraining of general-domain language models.
An efficient encoder-decoder architecture with top-down attention for speech separation
In addition, a large-size version of TDANet obtained SOTA results on three datasets, with MACs still only 10% of Sepformer's and CPU inference time only 24% of Sepformer's.
Gradient Gating for Deep Multi-Rate Learning on Graphs
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Dual Cross-Attention for Medical Image Segmentation
DCA addresses the semantic gap between encoder and decoder features by sequentially capturing channel and spatial dependencies across multi-scale encoder features.
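The paper's DCA module is more involved, but the sequential channel-then-spatial gating it describes can be illustrated with a minimal NumPy sketch (the sigmoid pooling gates here are a generic simplification, not the paper's exact attention blocks):

```python
import numpy as np

def channel_attention(x):
    """Channel gate: weight each channel by a sigmoid of its
    global-average-pooled activation (squeeze-and-excitation style)."""
    # x has shape (C, H, W)
    pooled = x.mean(axis=(1, 2))              # (C,) global average pool
    gate = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid gate per channel
    return x * gate[:, None, None]

def spatial_attention(x):
    """Spatial gate: weight each location by a sigmoid of the
    channel-averaged activation at that location."""
    pooled = x.mean(axis=0)                   # (H, W) average over channels
    gate = 1.0 / (1.0 + np.exp(-pooled))
    return x * gate[None, :, :]

def sequential_attention(x):
    # Channel dependencies first, then spatial dependencies,
    # mirroring the channel-then-spatial ordering described above.
    return spatial_attention(channel_attention(x))

features = np.random.default_rng(0).normal(size=(8, 16, 16))
out = sequential_attention(features)
print(out.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the module rescales features without changing their shape, so it can sit between encoder and decoder stages of a U-Net-style segmenter.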
Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation
Motivated by Leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability, we introduce a robust fine-tuning approach, namely Rein, to harness VFMs for DGSS in a parameter-efficient way.
Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers
Retrieval-Augmented Generation (RAG) is a prevalent approach to infusing a private knowledge base of documents into Large Language Models (LLMs) to build Generative Q&A (Question-Answering) systems.
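The hybrid-retrieval idea behind approaches like this can be sketched in a few lines: blend a lexical (keyword-overlap) score with a dense-similarity score and rank documents by the mix. This is a toy illustration, not the paper's pipeline; the hashed bag-of-words `embed` stands in for a real neural embedding model.

```python
import math
from collections import Counter

# Toy corpus; in a real system these are chunks from a knowledge base.
docs = [
    "retrieval augmented generation grounds LLM answers in documents",
    "semantic search uses dense vector embeddings",
    "keyword search matches exact query terms",
]

def tokens(text):
    return text.lower().split()

def lexical_score(query, doc):
    """Keyword-style score: number of query terms present in the doc."""
    return len(set(tokens(query)) & set(tokens(doc)))

def embed(text, dim=64):
    """Stand-in for a learned embedding: hashed bag-of-words vector.
    A real hybrid retriever would use a neural encoder here."""
    v = [0.0] * dim
    for tok, cnt in Counter(tokens(text)).items():
        v[hash(tok) % dim] += cnt
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, alpha=0.5):
    """Blend dense and lexical scores; alpha weights the dense side."""
    qv = embed(query)
    scored = [
        (alpha * cosine(qv, embed(doc)) + (1 - alpha) * lexical_score(query, doc), doc)
        for doc in docs
    ]
    return max(scored)[1]  # top-1 document

print(hybrid_search("dense semantic search with embeddings", docs))
```

The blending weight `alpha` is the knob such systems tune: pure lexical retrieval (`alpha=0`) wins on exact-term queries, pure dense retrieval (`alpha=1`) on paraphrased ones.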