Models Alignment
5 papers with code • 0 benchmarks • 1 dataset
Models Alignment is the process of ensuring that the multiple models used in a machine learning system are consistent with one another and aligned with the system's goals. This involves defining clear, consistent objectives for each model; identifying and addressing inconsistencies or biases in each model's training data; testing and validating each model for accuracy; and verifying that the predictions and decisions the models make remain consistent with the overall goals of the system.
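As an illustration of the last step, the consistency check on predictions, here is a minimal sketch (not taken from any of the papers below) that measures how often two models agree on a shared evaluation set. The ThresholdModel class and agreement_rate helper are hypothetical stand-ins for any pair of models exposing a predict(X) -> labels interface.

```python
import numpy as np

# Illustrative sketch (not tied to any listed paper): a basic consistency
# check between two models that are meant to serve the same system goal.
# The "models" here are simple threshold classifiers standing in for any
# pair of models with a predict(X) -> labels method.

class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, X):
        # Classify by comparing the first feature against the threshold.
        return (X[:, 0] > self.threshold).astype(int)

def agreement_rate(model_a, model_b, X):
    """Fraction of inputs on which the two models make the same prediction."""
    return float(np.mean(model_a.predict(X) == model_b.predict(X)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_eval = rng.normal(size=(1000, 4))
    model_a, model_b = ThresholdModel(0.0), ThresholdModel(0.2)
    # A low agreement rate flags predictions to reconcile before the two
    # models are used together in one system.
    print(f"agreement: {agreement_rate(model_a, model_b, X_eval):.2%}")
```

The same check can be run between a candidate model and the model it replaces, with low agreement treated as a signal to investigate rather than as a pass/fail criterion.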
Benchmarks
These leaderboards are used to track progress in Models Alignment.
Most implemented papers
A Survey on LLM-as-a-Judge
Accurate and consistent evaluation is crucial for decision-making across numerous fields, yet it remains a challenging task due to inherent subjectivity, variability, and scale.
Answering Numerical Reasoning Questions in Table-Text Hybrid Contents with Graph-based Encoder and Tree-based Decoder
In real-world question answering scenarios, hybrid content that combines tabular and textual data has attracted increasing attention, and numerical reasoning over such content is one of the most typical and challenging problems.
Re-basin via implicit Sinkhorn differentiation
The recent emergence of new algorithms for permuting models into functionally equivalent regions of the solution space has shed some light on the complexity of error surfaces and on promising properties such as mode connectivity.
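The permutation idea behind re-basin can be seen in a few lines: relabeling the hidden units of a network gives a different point in parameter space that computes exactly the same function. The sketch below (plain NumPy, not code from the paper; the paper's contribution is learning such permutations via implicit Sinkhorn differentiation, which is not reproduced here) verifies this invariance for a small ReLU MLP.

```python
import numpy as np

# Permuting the hidden units of an MLP (rows of W1/b1 and the matching
# columns of W2) yields different parameters that compute the same function.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 5, 8, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    # Two-layer ReLU MLP.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

perm = rng.permutation(d_hidden)              # random hidden-unit relabeling
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
print("Permuted weights compute the same function.")
```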
Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment
A major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations.
VFA: Vision Frequency Analysis of Foundation Models and Human
Machine learning models often struggle with distribution shifts in real-world scenarios, whereas humans exhibit robust adaptation.