We propose a new shared task of semantic retrieval from legal texts, termed contract discovery: legal clauses are to be extracted from documents, given a few examples of similar clauses from other legal acts. The task differs substantially from conventional NLI and from shared tasks on legal information extraction (e.g., a text span must be identified rather than a whole document, page, or paragraph). We follow the task specification with an evaluation of multiple solutions within a unified framework proposed for this family of methods. State-of-the-art pretrained encoders fail to provide satisfactory results on the proposed task, whereas language-model-based solutions perform better, especially when unsupervised fine-tuning is applied. Beyond ablation studies, we examine how detection accuracy for relevant text fragments depends on the number of available examples. In addition to the dataset and reference results, language models specialized in the legal domain are made publicly available.
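To make the few-shot retrieval setup concrete, here is a minimal sketch: candidate spans (sentence n-grams) in a target document are ranked by embedding similarity to an example clause, k-NN style. The hash-based `embed` is a toy, deterministic stand-in for a real language-model encoder such as GPT-2; all function names here are illustrative assumptions, not the paper's code.

```python
import math
from hashlib import blake2b


def embed(text, dim=16):
    # Toy deterministic embedding (hash-based). In practice this would be
    # a neural encoder (e.g., GPT-2 activations); this stand-in only lets
    # the pipeline run end to end for illustration.
    digest = blake2b(text.encode("utf-8"), digest_size=dim).digest()
    return [b / 255.0 for b in digest]


def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def sentence_ngrams(sentences, max_n=3):
    # Candidate answers are contiguous runs of 1..max_n sentences.
    spans = []
    for i in range(len(sentences)):
        for n in range(1, max_n + 1):
            if i + n <= len(sentences):
                spans.append(" ".join(sentences[i:i + n]))
    return spans


def retrieve(example_clause, doc_sentences, k=1, max_n=3):
    # k-NN over candidate sentence n-grams: rank candidates by cosine
    # similarity to the example clause embedding, return the top-k spans.
    query = embed(example_clause)
    candidates = sentence_ngrams(doc_sentences, max_n)
    ranked = sorted(candidates,
                    key=lambda c: cosine(query, embed(c)),
                    reverse=True)
    return ranked[:k]
```

With a real encoder, the same skeleton covers the evaluated systems: only `embed` changes (Universal Sentence Encoder, Sentence-BERT, or LM embeddings).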

PDF Abstract (Findings of 2020)


Introduced in the Paper:

Contract Discovery

Used in the Paper:

MultiNLI, SNLI, ActivityNet
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Retrieval | Contract Discovery | Human baseline | Soft-F1 | 0.84 | #1 |
| Semantic Retrieval | Contract Discovery | k-NN with sentence n-grams, GPT-2 embeddings, fICA | Soft-F1 | 0.51 | #2 |
| Semantic Retrieval | Contract Discovery | LSA baseline | Soft-F1 | 0.39 | #4 |
| Semantic Retrieval | Contract Discovery | Universal Sentence Encoder | Soft-F1 | 0.38 | #5 |
| Semantic Retrieval | Contract Discovery | Sentence BERT | Soft-F1 | 0.31 | #6 |
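The Soft-F1 scores measure overlap between the returned span and the gold span rather than exact match. A minimal sketch under the assumption that it reduces to overlap F1 over character offsets (a simplification for illustration; the paper's exact definition may differ):

```python
def soft_f1(pred_span, gold_span):
    # Span-overlap F1 between a predicted and a gold span, each given as
    # (start, end) character offsets. Precision and recall are computed
    # over the sets of covered character positions.
    pred = set(range(*pred_span))
    gold = set(range(*gold_span))
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction covering half of the gold span with half of its own extent spurious scores 0.5, while an exact match scores 1.0.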