Script Generation
17 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Hierarchical Quantized Representations for Script Generation
This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting.
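A minimal sketch of the general mechanism described here: the decoder's hidden state attends over the embeddings of the discrete latent values chosen at each level of a hierarchy, so it can softly decide which levels to condition on. The module name, number of levels, codebook size, and dimensions below are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentHierarchyAttention(nn.Module):
        def __init__(self, num_levels=3, codebook_size=64, dim=256):
            super().__init__()
            # One codebook of value embeddings per level of the latent hierarchy.
            self.codebooks = nn.ModuleList(
                nn.Embedding(codebook_size, dim) for _ in range(num_levels)
            )
            self.query = nn.Linear(dim, dim)

        def forward(self, decoder_state, latent_ids):
            # decoder_state: (batch, dim); latent_ids: (batch, num_levels) chosen codes.
            values = torch.stack(
                [emb(latent_ids[:, i]) for i, emb in enumerate(self.codebooks)], dim=1
            )  # (batch, num_levels, dim)
            q = self.query(decoder_state).unsqueeze(1)           # (batch, 1, dim)
            scores = (q * values).sum(-1) / values.size(-1) ** 0.5
            weights = F.softmax(scores, dim=-1)                  # soft choice over levels
            return (weights.unsqueeze(-1) * values).sum(1)       # context vector for the decoder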
LSTM vs. GRU vs. Bidirectional RNN for script generation
This paper presents a case study of how different sequence-to-sequence deep learning models perform at generating new conversations between characters, as well as new scenarios, based on an existing script (previous conversations).
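The comparison amounts to swapping the recurrent cell in an otherwise identical encoder. A minimal PyTorch sketch of that setup is below; the paper's own framework, vocabulary, and hyperparameters may differ, and the values here are placeholders.

    import torch.nn as nn

    class ScriptEncoder(nn.Module):
        def __init__(self, kind="lstm", vocab_size=10000, emb_dim=128, hidden=256):
            super().__init__()
            # kind in {"lstm", "gru", "bi-lstm", "bi-gru"}
            rnn_cls = nn.LSTM if "lstm" in kind else nn.GRU
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = rnn_cls(emb_dim, hidden, batch_first=True,
                               bidirectional=kind.startswith("bi"))

        def forward(self, token_ids):
            outputs, _ = self.rnn(self.embed(token_ids))
            return outputs  # fed to a matching decoder in a full seq2seq model

    encoders = {k: ScriptEncoder(k) for k in ["lstm", "gru", "bi-lstm", "bi-gru"]}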
ScriptWriter: Narrative-Guided Script Generation
In dialogue systems, it would also be useful to drive dialogues by a dialogue plan.
Learning to Repair: Repairing model output errors after deployment using a dynamic memory of feedback
Our goal is for an LM to continue to improve after deployment, without retraining, using feedback from the user.
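A toy illustration of the general idea of a dynamic feedback memory: store (query, user feedback) pairs and, for a new query, prepend the feedback from the most similar past query to the prompt, so behavior improves without retraining. The word-overlap retrieval, threshold, and prompt format below are simplifying assumptions, not the paper's actual retriever or templates.

    from difflib import SequenceMatcher

    class FeedbackMemory:
        def __init__(self):
            self.entries = []  # list of (past_query, user_feedback)

        def add(self, query, feedback):
            self.entries.append((query, feedback))

        def retrieve(self, query, threshold=0.5):
            # Return feedback from the most similar past query, if similar enough.
            best = max(self.entries,
                       key=lambda e: SequenceMatcher(None, query, e[0]).ratio(),
                       default=None)
            if best and SequenceMatcher(None, query, best[0]).ratio() >= threshold:
                return best[1]
            return None

    memory = FeedbackMemory()
    memory.add("steps to bake bread", "Always include preheating the oven as a step.")

    def build_prompt(query):
        hint = memory.retrieve(query)
        prefix = f"Note from user feedback: {hint}\n" if hint else ""
        return prefix + f"Generate a script for: {query}"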
Take a Break in the Middle: Investigating Subgoals towards Hierarchical Script Generation
Goal-oriented Script Generation is a new task of generating a list of steps that fulfill a given goal.
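The hierarchical angle of this paper is to decompose the goal into subgoals before generating concrete steps. The two-stage prompting sketch below is only illustrative; generate is a hypothetical placeholder for a text-generation model, not the paper's pipeline.

    def generate(prompt: str) -> str:
        # Hypothetical placeholder: plug in your language model here.
        raise NotImplementedError

    def hierarchical_script(goal: str) -> list[str]:
        # Stage 1: decompose the goal into subgoals; Stage 2: expand each into steps.
        subgoals = generate(f"List the subgoals needed to: {goal}").splitlines()
        steps = []
        for sub in subgoals:
            steps.extend(generate(f"List concrete steps to: {sub}").splitlines())
        return steps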
ChatEDA: A Large Language Model Powered Autonomous Agent for EDA
The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers.
MULTISCRIPT: Multimodal Script Learning for Supporting Open Domain Everyday Tasks
Automatically generating scripts (i.e., sequences of key steps described in text) from video demonstrations, and reasoning about the subsequent steps, are crucial for modern AI virtual assistants that guide humans through everyday tasks, especially unfamiliar ones.
MM-VID: Advancing Video Understanding with GPT-4V(ision)
We present MM-VID, an integrated system that harnesses the capabilities of GPT-4V, combined with specialized tools in vision, audio, and speech, to facilitate advanced video understanding.
ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code
Despite Large Language Models (LLMs) like GPT-4 achieving impressive results in function-level code generation, they struggle with repository-scale code understanding (e.g., choosing the right arguments for calling routines), which requires a deeper comprehension of complex file interactions.
LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation
Since circuits can be represented in a textual format with a hardware description language (HDL), it is reasonable to ask whether LLMs can be leveraged in the EDA field to achieve fully automated chip design and to generate circuits with improved power, performance, and area (PPA).
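The basic prompting pattern behind this line of work is to describe the desired circuit in natural language and ask the model for synthesizable HDL, then hand the output to conventional tools for checking. The sketch below assumes a hypothetical llm_complete call standing in for whatever model is used; the specification and prompt wording are illustrative only.

    def llm_complete(prompt: str) -> str:
        # Hypothetical placeholder: call your LLM of choice here.
        raise NotImplementedError

    spec = "a 4-bit synchronous counter with active-high reset"
    prompt = (
        "Write synthesizable Verilog for the following circuit. "
        "Return only the module definition.\n"
        f"Specification: {spec}"
    )
    verilog_code = llm_complete(prompt)
    # The generated module can then be passed to lint/synthesis tools to evaluate PPA.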