no code implementations • 27 Mar 2025 • Rifat Mehreen Amin, Oliver Hans Kühle, Daniel Buschek, Andreas Butz
Generative AI models offer many possibilities for text creation and transformation.
no code implementations • 16 Mar 2025 • Hai Dang, Chelse Swoopes, Daniel Buschek, Elena L. Glassman
Many communities, including the scientific community, develop implicit writing norms.
no code implementations • 11 Feb 2025 • Tim Zindulka, Jannek Sekowski, Florian Lehmann, Daniel Buschek
Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface.
no code implementations • 10 Feb 2025 • Tim Zindulka, Sven Goller, Florian Lehmann, Daniel Buschek
We address this with a new UI concept called Content-Driven Local Response (CDLR), inspired by microtasking.
1 code implementation • 2 Oct 2024 • Julian Neuberger, Han van der Aa, Lars Ackermann, Daniel Buschek, Jannic Herrmann, Stefan Jablonski
Machine-learning-based generation of process models from natural language process descriptions offers a solution to the time-intensive and expensive process discovery phase.
no code implementations • 19 Sep 2024 • Lukas Mecke, Daniel Buschek, Uwe Gruenefeld, Florian Alt
Many important decisions in our everyday lives, such as authentication via biometric models, are made by Artificial Intelligence (AI) systems.
no code implementations • 27 May 2024 • Daniel Buschek
This essay proposes and explores the concept of Collage for the design of AI writing tools, transferred from avant-garde literature, with four facets: 1) fragmenting text in writing interfaces, 2) juxtaposing voices (content vs. command), 3) integrating material from multiple sources (e.g. text suggestions), and 4) shifting from manual writing to editorial and compositional decision-making, such as selecting and arranging snippets.
no code implementations • 14 Apr 2024 • Karim Benharrak, Tim Zindulka, Daniel Buschek
Large Language Models have become an integral part of new intelligent and interactive writing assistants.
no code implementations • 21 Mar 2024 • Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A. Alghamdi, Tal August, Avinash Bhat, Madiha Zahrah Choksi, Senjuti Dutta, Jin L. C. Guo, Md Naimul Hoque, Yewon Kim, Simon Knight, Seyed Parsa Neshaei, Agnia Sergeyuk, Antonette Shibani, Disha Shrivastava, Lila Shroff, Jessi Stark, Sarah Sterman, Sitong Wang, Antoine Bosselut, Daniel Buschek, Joseph Chee Chang, Sherol Chen, Max Kreminski, Joonsuk Park, Roy Pea, Eugenia H. Rho, Shannon Zejiang Shen, Pao Siangliulue
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various research communities.
no code implementations • 19 Sep 2023 • Karim Benharrak, Tim Zindulka, Florian Lehmann, Hendrik Heuer, Daniel Buschek
This is challenging, as writers may struggle to empathize with readers, get feedback in time, or gain access to the target group.
no code implementations • 6 Mar 2023 • Hai Dang, Sven Goller, Florian Lehmann, Daniel Buschek
We propose a conceptual perspective on prompts for Large Language Models (LLMs) that distinguishes between (1) diegetic prompts (part of the narrative, e.g. "Once upon a time, I saw a fox..."), and (2) non-diegetic prompts (external, e.g. "Write about the adventures of the fox.").
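The distinction is easiest to see with a concrete example. Below is a minimal sketch (not from the paper; the model choice and generation settings are placeholder assumptions) that feeds one diegetic and one non-diegetic prompt to an off-the-shelf text-generation model:

```python
# Minimal sketch illustrating diegetic vs. non-diegetic prompting.
# Model and settings are assumptions, not those used in the study.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Diegetic prompt: the instruction is implicit; the model continues the narrative itself.
diegetic = "Once upon a time, I saw a fox..."

# Non-diegetic prompt: an explicit instruction outside the narrative.
non_diegetic = "Write about the adventures of the fox."

for prompt in (diegetic, non_diegetic):
    out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(prompt, "->", out[0]["generated_text"])
```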
no code implementations • 6 Mar 2023 • Fiona Draxler, Anna Werner, Florian Lehmann, Matthias Hoppe, Albrecht Schmidt, Daniel Buschek, Robin Welsch
Participants were more likely to attribute ownership to supposedly human ghostwriters than AI ghostwriters, resulting in a higher ownership-authorship discrepancy for human ghostwriters.
no code implementations • 1 Feb 2023 • Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman
Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey.
no code implementations • 3 Sep 2022 • Hai Dang, Lukas Mecke, Florian Lehmann, Sven Goller, Daniel Buschek
Deep generative models have the potential to fundamentally change the way we create high-fidelity digital content but are often hard to control.
no code implementations • 19 Aug 2022 • Hai Dang, Karim Benharrak, Florian Lehmann, Daniel Buschek
As a key finding, the summaries gave users an external perspective on their writing and helped them to revise the content and scope of their drafted paragraphs.
no code implementations • 1 Aug 2022 • Florian Lehmann, Niklas Markert, Hai Dang, Daniel Buschek
2) Writing with suggestions: the AI suggests phrases and the user selects from a list.
no code implementations • 2 Feb 2022 • Hai Dang, Lukas Mecke, Daniel Buschek
We found that more control dimensions (sliders) significantly increase task difficulty and user actions.
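For illustration, the sketch below shows one way slider values could map to control dimensions of a generative model, by moving a latent vector along fixed directions (one per slider). The latent directions and scaling here are hypothetical assumptions, not the system studied in the paper:

```python
# Minimal sketch (assumption): sliders as control dimensions over a latent space.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 128
num_sliders = 5  # each slider is one control dimension

base_latent = rng.normal(size=latent_dim)

# One hypothetical semantic direction per slider; a real system might find
# these via PCA or supervised probing of the generator's latent space.
directions = rng.normal(size=(num_sliders, latent_dim))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def apply_sliders(slider_values, scale=3.0):
    """Map slider values in [-1, 1] to an edited latent vector."""
    offsets = scale * np.asarray(slider_values) @ directions
    return base_latent + offsets

edited = apply_sliders([0.2, -0.5, 0.0, 1.0, -0.1])
print(edited.shape)  # (128,) -- this vector would be fed to the generator
```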
no code implementations • 23 Jun 2021 • Oliver Schmitt, Daniel Buschek
We iteratively developed CharacterChat in a user-centred approach, starting with a survey on character creation with writers (N=30), followed by two qualitative user studies (N=7 and N=8).
no code implementations • 1 Apr 2021 • Daniel Buschek, Lukas Mecke, Florian Lehmann, Hai Dang
This position paper examines potential pitfalls on the way towards achieving human-AI co-creation with generative models in a way that is beneficial to the users' interests.
no code implementations • EACL (HCINLP) 2021 • Hendrik Heuer, Daniel Buschek
HCI and NLP traditionally focus on different evaluation methods.
no code implementations • 22 Jan 2021 • Daniel Buschek, Martin Zürn, Malin Eiband
We present an in-depth analysis of the impact of multi-word suggestion choices from a neural language model on user behaviour regarding input and text composition in email writing.
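As a rough illustration of such multi-word suggestions, the sketch below samples a few short continuations for a partially written email and prints them as a selectable list; the model and decoding settings are assumptions, not those of the study:

```python
# Minimal sketch (assumption): multi-word suggestions for an email draft,
# from which the user would pick one.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

draft = "Hi Anna, thanks for your message. I will"
suggestions = generator(
    draft,
    max_new_tokens=6,        # short, multi-word continuations
    num_return_sequences=3,  # offer the user several options
    do_sample=True,
    top_p=0.9,
)
for i, s in enumerate(suggestions, 1):
    continuation = s["generated_text"][len(draft):].strip()
    print(f"Suggestion {i}: {continuation}")
```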
no code implementations • 21 Sep 2017 • Piergiorgio Caramazza, Alessandro Boccolini, Daniel Buschek, Matthias Hullin, Catherine Higham, Robert Henderson, Roderick Murray-Smith, Daniele Faccio
Light scattered from multiple surfaces can be used to retrieve information about hidden environments.