no code implementations • 13 Aug 2024 • Vladimir Cherkassky, Eng Hock Lee
However, LLM responses are synthesized by a large language model (LLM) trained on all available data.
no code implementations • 31 May 2022 • Eng Hock Lee, Vladimir Cherkassky
There has been growing interest in the generalization performance of large multilayer neural networks that can be trained to zero training error while still generalizing well on test data.
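The phenomenon referred to here, interpolating models that nonetheless generalize, can be reproduced with a small numerical experiment. The sketch below is not from the paper; it sweeps a random-features least-squares fit past the interpolation threshold, where test error typically peaks near n_features ≈ n_train and then falls again as the model becomes more overparameterized. The data-generating setup and all parameter values are illustrative assumptions.

```python
# Illustrative only: min-norm random-features regression swept past the
# interpolation threshold (not the paper's experiment or code).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 40, 500, 5
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
w_true = rng.normal(size=d)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=n_train)
y_te = X_te @ w_true + 0.5 * rng.normal(size=n_test)

for n_features in [10, 20, 40, 80, 200, 800]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)      # random projection
    Phi_tr, Phi_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)  # random nonlinear features
    beta = np.linalg.pinv(Phi_tr) @ y_tr                   # minimum-norm least-squares fit
    tr_err = np.mean((Phi_tr @ beta - y_tr) ** 2)          # ~0 once n_features >= n_train
    te_err = np.mean((Phi_te @ beta - y_te) ** 2)
    print(f"{n_features:4d} features  train MSE {tr_err:8.4f}  test MSE {te_err:8.4f}")
```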
1 code implementation • NeurIPS 2019 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah
We introduce the notion of learning from contradictions, a.k.a. Universum learning, for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
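For context, the sketch below shows the standard binary Universum SVM primal (labeled samples get the usual hinge loss; Universum samples get an epsilon-insensitive penalty pulling them toward the decision boundary), solved directly with cvxpy. It is only an illustration of learning from contradictions, not the paper's MU-SVM formulation, and the toy data, variable names, and parameter values are assumptions.

```python
# Minimal sketch of the binary Universum SVM primal (illustrative, not MU-SVM).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
# Labeled data: two Gaussian blobs with labels +1 / -1.
X_pos = rng.normal(loc=+2.0, scale=1.0, size=(20, 2))
X_neg = rng.normal(loc=-2.0, scale=1.0, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(20), -np.ones(20)])
# Universum samples: points belonging to neither class (here, near the origin).
X_u = rng.normal(loc=0.0, scale=0.5, size=(15, 2))

C, C_u, eps = 1.0, 0.5, 0.1   # assumed regularization / insensitivity parameters
w = cp.Variable(2)
b = cp.Variable()

hinge = cp.pos(1 - cp.multiply(y, X @ w + b))      # hinge loss on labeled samples
univ = cp.pos(cp.abs(X_u @ w + b) - eps)           # eps-insensitive loss on Universum samples
objective = cp.Minimize(0.5 * cp.sum_squares(w)
                        + C * cp.sum(hinge)
                        + C_u * cp.sum(univ))
cp.Problem(objective).solve()

print("w =", w.value, "b =", b.value)
```

The Universum term encodes the "contradiction" idea: samples that belong to neither class are pushed toward the margin region rather than assigned to either side.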
no code implementations • 21 Sep 2019 • Sauptik Dhar, Vladimir Cherkassky
This paper extends the idea of Universum learning [1, 2] to single-class learning problems.
1 code implementation • 23 Aug 2018 • Sauptik Dhar, Vladimir Cherkassky, Mohak Shah
We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
no code implementations • 29 Sep 2016 • Sauptik Dhar, Naveen Ramakrishnan, Vladimir Cherkassky, Mohak Shah
We introduce Universum learning for multiclass problems and propose a novel formulation for multiclass universum SVM (MU-SVM).
no code implementations • 27 May 2016 • Sauptik Dhar, Vladimir Cherkassky
This paper extends the idea of Universum learning [18, 19] to regression problems.