Search Results for author: Robert Feldt

Found 12 papers, 3 papers with code

Domain Generalization through Meta-Learning: A Survey

no code implementations · 3 Apr 2024 · Arsham Gholamzadeh Khoee, Yinan Yu, Robert Feldt

Deep neural networks (DNNs) have revolutionized artificial intelligence but often perform poorly on out-of-distribution (OOD) data, a common scenario given the inevitable domain shifts in real-world applications.

Domain Generalization · Meta-Learning

Autonomous Large Language Model Agents Enabling Intent-Driven Mobile GUI Testing

no code implementations · 15 Nov 2023 · Juyeon Yoon, Robert Feldt, Shin Yoo

On average, DroidAgent achieved 61% activity coverage, compared to 51% for current state-of-the-art GUI testing techniques.

Language Modelling · Large Language Model

Test2Vec: An Execution Trace Embedding for Test Case Prioritization

no code implementations · 28 Jun 2022 · Emad Jabbar, Soheila Zangeneh, Hadi Hemmati, Robert Feldt

In this paper, we hypothesize that execution traces of the test cases can be a good alternative to abstract their behavior for automated testing tasks.

A Taxonomy of Information Attributes for Test Case Prioritisation: Applicability, Machine Learning

1 code implementation · 16 Jan 2022 · Aurora Ramírez, Robert Feldt, José Raúl Romero

However, the value added by machine-learning-based test case prioritisation (TCP) methods should be critically assessed against the cost of collecting the required information.

Attribute · BIG-bench Machine Learning

Automated Support for Unit Test Generation: A Tutorial Book Chapter

no code implementations · 26 Oct 2021 · Afonso Fontes, Gregory Gay, Francisco Gomes de Oliveira Neto, Robert Feldt

To illustrate how AI can support unit testing, this chapter introduces the concept of search-based unit test generation.

Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving

no code implementations · 8 Mar 2021 · Eduard Enoiu, Robert Feldt

The framework helps map the test design steps and criteria used in human test activities, and thus supports a better understanding of how effective human testers perform their tasks.

Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data: The Case of Programming Languages and Code Quality

no code implementations · 29 Jan 2021 · Carlo A. Furia, Richard Torkar, Robert Feldt

The high-level conclusion of our exercise is that Bayesian statistical techniques can be applied to software engineering data in a way that is principled and flexible, yielding convincing results that inform the state of the art while highlighting the boundaries of its validity.

Software Engineering

Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving

no code implementations · 29 May 2020 · Jinhan Kim, Jeongil Ju, Robert Feldt, Shin Yoo

The development process in use consists of multiple iterations of data collection, labelling, training, and evaluation.

Autonomous Driving · Object +2

SINVAD: Search-based Image Space Navigation for DNN Image Classifier Test Input Generation

no code implementations · 19 May 2020 · Sungmin Kang, Robert Feldt, Shin Yoo

The testing of Deep Neural Networks (DNNs) has become increasingly important as DNNs are widely adopted in safety-critical systems.

Navigate

Guiding Deep Learning System Testing using Surprise Adequacy

5 code implementations · 25 Aug 2018 · Jinhan Kim, Robert Feldt, Shin Yoo

Recently, a number of coverage criteria based on neuron activation values have been proposed.

Autonomous Driving
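One of the measures proposed in this paper, Distance-based Surprise Adequacy (DSA), rates how "surprising" a new input is to a trained DNN by comparing its neuron-activation trace against those of the training set. The sketch below is a minimal illustration of that idea, not the paper's reference implementation; the variable names and array shapes are assumptions for the example.

```python
import numpy as np

def distance_based_surprise(at_x, train_ats, train_labels, pred_label):
    """Sketch of Distance-based Surprise Adequacy (DSA).

    at_x         : activation trace of the new input, shape (D,)
    train_ats    : activation traces of the training inputs, shape (N, D)
    train_labels : classes predicted for the training inputs, shape (N,)
    pred_label   : class the DNN predicts for the new input
    """
    same = train_ats[train_labels == pred_label]  # traces of the same class
    diff = train_ats[train_labels != pred_label]  # traces of other classes
    # Find the nearest same-class neighbour of the new input's trace.
    d_same = np.linalg.norm(same - at_x, axis=1)
    ref = same[np.argmin(d_same)]
    dist_a = d_same.min()
    # Distance from that neighbour to the closest other-class trace.
    dist_b = np.linalg.norm(diff - ref, axis=1).min()
    # A high ratio means the input lies close to a class boundary
    # relative to the span of the classes, i.e. it is "surprising".
    return dist_a / dist_b
```

Inputs with high DSA can then be prioritised for labelling or testing, which is the guidance role the paper gives surprise adequacy.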
