1 code implementation • Findings (ACL) 2022 • Pritom Saha Akash, Jie Huang, Kevin Chen-Chuan Chang, Yunyao Li, Lucian Popa, ChengXiang Zhai
We propose a probabilistic approach to select a subset of target domain representative keywords from a candidate set, contrasting with a context domain.
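One common way to contrast a target domain against a context domain is to score each candidate word by the log-ratio of its smoothed probability in the two corpora. The sketch below is illustrative only (the scoring rule, toy corpora, and function names are assumptions, not the paper's actual method):

```python
from collections import Counter
import math

def keyword_scores(target_docs, context_docs, smoothing=1.0):
    """Score candidates by how much more likely they are in the
    target domain than in the context domain (smoothed log-ratio)."""
    tgt = Counter(w for d in target_docs for w in d.split())
    ctx = Counter(w for d in context_docs for w in d.split())
    tgt_total, ctx_total = sum(tgt.values()), sum(ctx.values())
    vocab = set(tgt) | set(ctx)
    scores = {}
    for w in vocab:
        p_tgt = (tgt[w] + smoothing) / (tgt_total + smoothing * len(vocab))
        p_ctx = (ctx[w] + smoothing) / (ctx_total + smoothing * len(vocab))
        scores[w] = math.log(p_tgt / p_ctx)
    return scores

# Toy corpora: "topic"/"model" should surface as target-representative
target = ["neural topic model training", "topic model inference"]
context = ["weather today training run", "morning run inference of mood"]
scores = keyword_scores(target, context)
top = sorted(scores, key=scores.get, reverse=True)[:3]
```

Words frequent in the target corpus but rare in the context corpus receive large positive scores, which is the intuition behind domain-representative keyword selection.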
no code implementations • UbiComp/ISWC '19 Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers 2019 • Md. Eusha Kadir, Pritom Saha Akash, Sadia Sharmin, Amin Ahsan Ali, Mohammad Shoyaib
Over the last two decades, increasingly complex methods have been developed to identify human activities using various types of sensors, e.g., data from motion capture, accelerometer, and gyroscope sensors.
no code implementations • 16 Mar 2022 • Pritom Saha Akash, Kevin Chen-Chuan Chang
Then, the variational graph auto-encoder is used to learn a vector representation for each method.
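A variational graph auto-encoder (VGAE, in the sense of Kipf & Welling) encodes each node with GCN layers into a Gaussian latent vector and reconstructs edges with an inner-product decoder. The NumPy forward pass below is a generic schematic of that architecture, not necessarily this paper's configuration; the toy graph, dimensions, and weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, X, W, relu=False):
    H = A_norm @ X @ W
    return np.maximum(H, 0.0) if relu else H

def vgae_encode(A, X, W0, W_mu, W_logvar):
    # Shared first GCN layer, then separate heads for mean / log-variance
    A_norm = normalize_adj(A)
    H = gcn_layer(A_norm, X, W0, relu=True)
    mu = gcn_layer(A_norm, H, W_mu)
    logvar = gcn_layer(A_norm, H, W_logvar)
    # Reparameterization trick: z = mu + sigma * eps
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    return z, mu, logvar

def decode(z):
    # Inner-product decoder: P(edge i-j) = sigmoid(z_i . z_j)
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

# Toy graph of 4 nodes ("methods") with 3-dim features each
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
W0 = rng.standard_normal((3, 8)) * 0.1
W_mu = rng.standard_normal((8, 2)) * 0.1
W_lv = rng.standard_normal((8, 2)) * 0.1

z, mu, logvar = vgae_encode(A, X, W0, W_mu, W_lv)
A_rec = decode(z)
```

The latent vectors `z` serve as the learned per-node representations; training (not shown) would combine reconstruction loss on `A_rec` with a KL term on `mu` and `logvar`.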
no code implementations • 16 Oct 2022 • Pritom Saha Akash, Jie Huang, Kevin Chen-Chuan Chang
It then uses these axes to model the corpus in an easily understandable representation.
no code implementations • 19 Jun 2023 • Lam Thanh Do, Pritom Saha Akash, Kevin Chen-Chuan Chang
To solve this problem, we propose a seq2seq model that consists of two modules, namely a phraseness module and an informativeness module, both of which can be built in an unsupervised and open-domain fashion.
no code implementations • 8 Oct 2023 • Pritom Saha Akash, Trisha Das, Kevin Chen-Chuan Chang
Topic models are popular statistical tools for detecting latent semantic topics in a text corpus.
no code implementations • 24 Oct 2023 • Pritom Saha Akash, Jie Huang, Kevin Chen-Chuan Chang
In addition, we provide a simple solution that extends a neural topic model to reduce the effect of noisy out-of-topic text generation from PLMs.
no code implementations • 15 Nov 2023 • Pritom Saha Akash, Kashob Kumar Roy, Lucian Popa, Kevin Chen-Chuan Chang
Through extensive experiments on both an open-domain and a technical-domain QA dataset, we find that our model outperforms state-of-the-art models on various textual and factual metrics for the LFQA task.