Models Alignment

3 papers with code • 0 benchmarks • 0 datasets

Models Alignment is the process of ensuring that the multiple models used in a machine learning system are consistent with one another and with the system's overall goals. In practice this involves defining clear objectives for each model, identifying and addressing inconsistencies or biases in the data used to train each model, testing and validating each model for accuracy, and checking that the models' predictions and decisions remain mutually consistent and aligned with those goals.
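As a minimal, hedged illustration of the last point, the sketch below measures how often two models in the same system agree on held-out inputs before their outputs are combined downstream. The function name and the toy prediction arrays are hypothetical placeholders, not part of any implementation listed on this page.

```python
import numpy as np

def agreement_rate(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of examples on which two models make the same prediction."""
    return float(np.mean(preds_a == preds_b))

# Dummy label predictions from two hypothetical models on the same inputs.
preds_a = np.array([0, 1, 1, 2, 0])
preds_b = np.array([0, 1, 2, 2, 0])

print(agreement_rate(preds_a, preds_b))  # 0.8 -> the models disagree on 1 of 5 inputs
```

A low agreement rate does not by itself mean either model is wrong, but it flags inputs where the system's components pull in different directions and warrant closer inspection.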

Most implemented papers

Answering Numerical Reasoning Questions in Table-Text Hybrid Contents with Graph-based Encoder and Tree-based Decoder

lfy79001/reghnt COLING 2022

In real-world question answering scenarios, hybrid forms that combine tabular and textual content have attracted increasing attention; among these, numerical reasoning is one of the most typical and challenging problems.

Re-basin via implicit Sinkhorn differentiation

fagp/sinkhorn-rebasin CVPR 2023

The recent emergence of new algorithms for permuting models into functionally equivalent regions of the solution space has shed some light on the complexity of error surfaces and revealed some promising properties, such as mode connectivity.
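The property these re-basin methods rely on is that permuting a network's hidden units, and permuting the next layer's weights to match, leaves its function unchanged. The sketch below illustrates this with a tiny two-layer MLP in NumPy; the shapes and random weights are illustrative assumptions and this is not the paper's Sinkhorn-based alignment procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

# Original model parameters (toy sizes).
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# Random permutation of the hidden units, as a permutation matrix P.
perm = rng.permutation(d_hidden)
P = np.eye(d_hidden)[perm]

# Permute the rows of the first layer and the columns of the second layer.
W1_p, b1_p = P @ W1, P @ b1
W2_p = W2 @ P.T

# The permuted model computes exactly the same function.
x = rng.normal(size=d_in)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1_p, b1_p, W2_p, b2))
```

Because many such permutations exist, two independently trained networks can sit in different but functionally equivalent regions of weight space; re-basin methods search for the permutation that brings one model's weights closest to the other's before comparing or merging them.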

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

kevinyaobytedance/llm_eval 10 Aug 2023

However, a major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations.