no code implementations • 16 Aug 2024 • Jinwei Hu, Yi Dong, Xiaowei Huang
Guardrails have become an integral part of Large Language Models (LLMs), moderating harmful or toxic responses in order to maintain LLMs' alignment with human expectations.
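The output-moderation pattern the abstract refers to can be illustrated with a minimal sketch. Everything below is hypothetical, not the paper's method: `BLOCKLIST`, `is_toxic`, and `guarded_generate` are toy stand-ins for a real moderation classifier wrapped around an LLM.

```python
# Minimal guardrail sketch: moderate an LLM's response before returning it.
# The blocklist check is a toy stand-in for a trained moderation classifier.

BLOCKLIST = {"build a bomb", "steal credentials"}  # hypothetical trigger phrases


def is_toxic(text: str) -> bool:
    """Toy moderation check: flag text containing any blocked phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)


def guarded_generate(generate, prompt: str,
                     refusal: str = "I can't help with that.") -> str:
    """Wrap an LLM `generate` callable and moderate its output."""
    response = generate(prompt)
    return refusal if is_toxic(response) else response


if __name__ == "__main__":
    # Dummy model standing in for a real LLM call.
    dummy_llm = lambda p: "Sure, here is how to build a bomb."
    print(guarded_generate(dummy_llm, "explosives question"))  # -> refusal
```

A production guardrail would replace the keyword check with a learned classifier and typically moderate the prompt as well as the response; the control flow, however, is the same wrap-and-filter loop shown here.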
no code implementations • 5 Jun 2024 • Jianyu Liu, Wei Chen, Yong Zhang, Zhenfeng Chen, Bin Wan, Jinwei Hu
To address the difficulty of extracting effective features and the low accuracy of sales-volume forecasts caused by complex relationships in time-series data, we propose a market sales-volume prediction method based on a combination model of Sequential General VMD and a spatial-smoothing Long Short-Term Memory neural network (SS-LSTM).
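The decompose-then-forecast structure of such combination models can be sketched briefly. This is an assumption-laden illustration, not the paper's implementation: the moving-average split below stands in for Sequential General VMD, the spatial-smoothing step is omitted, training is elided, and all hyperparameters are invented.

```python
# Sketch of a decomposition + per-component LSTM forecast pipeline.
# The moving-average decomposition is a stand-in for Sequential General VMD.
import numpy as np
import torch
import torch.nn as nn


def decompose(series: np.ndarray, window: int = 12):
    """Stand-in decomposition: split the series into trend + residual."""
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    return [trend, series - trend]


class LSTMForecaster(nn.Module):
    """One-step-ahead forecaster for a single decomposed component."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # predict the next value


def forecast(series: np.ndarray, lookback: int = 24) -> float:
    """Forecast each component separately, then sum the predictions."""
    total = 0.0
    for component in decompose(series):
        model = LSTMForecaster()         # untrained here; fitting omitted
        x = torch.tensor(component[-lookback:],
                         dtype=torch.float32).view(1, -1, 1)
        total += model(x).item()
    return total
```

The design idea is that each decomposed mode is smoother and easier to learn than the raw series, so a separate LSTM per mode, recombined by summation, can outperform a single model trained on the original signal.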
no code implementations • 3 Jun 2024 • Yi Dong, Ronghui Mu, Yanghao Zhang, Siqi Sun, Tianle Zhang, Changshun Wu, Gaojie Jin, Yi Qi, Jinwei Hu, Jie Meng, Saddek Bensalem, Xiaowei Huang
In the burgeoning field of Large Language Models (LLMs), developing a robust safety mechanism, colloquially known as "safeguards" or "guardrails", has become imperative to ensure the ethical use of LLMs within prescribed boundaries.
1 code implementation • 11 Feb 2024 • Dayou Chen, Sibo Cheng, Jinwei Hu, Matthew Kasoar, Rossella Arcucci
Wildfire prediction has become increasingly crucial due to the escalating impacts of climate change.
no code implementations • 2 Feb 2024 • Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang
As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when the risks can have profound impacts on human users and societies.