no code implementations • 14 Dec 2023 • Audrey Chung, Francis Li, Jeremy Ward, Andrew Hryniowski, Alexander Wong
In this paper, we present the DarwinAI Visual Quality Inspection (DVQI) system, a hardware-integrated artificial intelligence system for the automated inspection of printed circuit board assembly defects in an electronics manufacturing environment.
1 code implementation • 24 Apr 2022 • Hossein Aboutalebi, Maya Pavlova, Mohammad Javad Shafiee, Adrian Florea, Andrew Hryniowski, Alexander Wong
Since the World Health Organization declared COVID-19 a pandemic in 2020, the global community has faced ongoing challenges in controlling and mitigating the transmission of the SARS-CoV-2 virus, as well as its evolving subvariants and recombinants.
no code implementations • 14 Sep 2021 • Audrey Chung, Mahmoud Famouri, Andrew Hryniowski, Alexander Wong
The COVID-19 pandemic continues to have a devastating global impact, and has placed a tremendous burden on struggling healthcare systems around the world.
no code implementations • 29 Apr 2021 • Xiaoyu Wen, Mahmoud Famouri, Andrew Hryniowski, Alexander Wong
In this study, we introduce AttendSeg, a low-precision, highly compact deep neural network tailored for on-device semantic segmentation.
no code implementations • 7 Dec 2020 • Andrew Hryniowski, Alexander Wong
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
no code implementations • 3 Nov 2020 • Alexander Wong, Andrew Hryniowski, Xiao Yu Wang
In this study, we explore the feasibility and utility of a multi-scale trust quantification strategy to gain insights into the fairness of a financial deep learning model, particularly under different scenarios at different scales.
no code implementations • 30 Sep 2020 • Andrew Hryniowski, Xiao Yu Wang, Alexander Wong
We experimentally leverage trust matrices to study several well-known deep neural network architectures for image recognition, and further study the trust density and conditional trust densities for an interesting actor-oracle answer scenario.
no code implementations • 12 Sep 2020 • Alexander Wong, Xiao Yu Wang, Andrew Hryniowski
In this study, we take a step towards simple, interpretable metrics for trust quantification by introducing a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
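The idea of scoring trustworthiness from a model's behaviour when answering questions can be sketched minimally: reward confidence on correct answers and penalize confidence on wrong ones. The function name, the exponents, and the exact formula below are illustrative assumptions, not the metrics defined in the paper.

```python
import numpy as np

def question_answer_trust(confidence, correct, alpha=1.0, beta=1.0):
    """Illustrative per-question trust score (not the paper's definition).

    A model earns trust for being confident when it is correct, and
    loses trust for being confident when it is wrong.
    """
    conf = np.asarray(confidence, dtype=float)
    ok = np.asarray(correct, dtype=bool)
    # Confident-and-correct -> high score; confident-and-wrong -> low score.
    return np.where(ok, conf ** alpha, (1.0 - conf) ** beta)

# Three answered questions: two correct, one wrong.
scores = question_answer_trust([0.9, 0.8, 0.6], [True, True, False])
overall = scores.mean()  # a crude "overall trustworthiness" summary
print(overall)
```

Averaging per-question scores into a single number mirrors the spirit of a simple, interpretable suite of trust metrics, though the actual metrics in the paper may aggregate differently.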
no code implementations • 21 Nov 2019 • Andrew Hryniowski, Alexander Wong
In this work, we present a novel approach that enables end-to-end learning of deep RBF networks with fully learnable activation basis functions in an automatic and tractable manner.
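A deep RBF network layer with learnable parameters can be sketched as follows; the class name, the Gaussian basis choice, and the plain-NumPy forward pass are assumptions for illustration, not the paper's end-to-end training scheme.

```python
import numpy as np

class RBFLayer:
    """Minimal sketch of an RBF layer whose centers, widths, and readout
    weights would all be learnable in an end-to-end setting."""

    def __init__(self, in_dim, n_centers, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(n_centers, in_dim))    # learnable centers
        self.log_widths = np.zeros(n_centers)                  # learnable widths (log-space)
        self.weights = rng.normal(size=(n_centers, out_dim))   # learnable linear readout

    def forward(self, x):
        # Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 * s_j^2))
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        widths = np.exp(self.log_widths)
        phi = np.exp(-d2 / (2.0 * widths ** 2))
        return phi @ self.weights

layer = RBFLayer(in_dim=4, n_centers=8, out_dim=3)
out = layer.forward(np.ones((2, 4)))
print(out.shape)  # (2, 3)
```

Parameterizing the widths in log-space keeps them positive under unconstrained gradient updates, which is one common way to make such basis functions tractably learnable.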
no code implementations • 15 Oct 2019 • Mohammad Javad Shafiee, Andrew Hryniowski, Francis Li, Zhong Qiu Lin, Alexander Wong
A particularly interesting class of compact architecture search algorithms are those that are guided by baseline network architectures.
no code implementations • 26 May 2019 • Andrew Hryniowski, Alexander Wong
Researchers are actively trying to gain better insights into the representational properties of convolutional neural networks for guiding better network designs and for interpreting a network's computational nature.
no code implementations • 10 Nov 2018 • Andrew Hryniowski, Alexander Wong
However, this attention has not been equally shared by one of the fundamental building blocks of a deep neural network, the neurons.