no code implementations • 20 Nov 2024 • Yijie Zhang, Luzhe Huang, Nir Pillar, Yuzhu Li, Lukasz G. Migas, Raf Van de Plas, Jeffrey M. Spraggins, Aydogan Ozcan
Imaging mass spectrometry (IMS) is a powerful tool for untargeted, highly multiplexed molecular mapping of tissue in biomedical research.
no code implementations • 18 Nov 2024 • Danny Barash, Emilie Manning, Aidan Van Vleck, Omri Hirsch, Kyi Lei Aye, Jingxi Li, Philip O. Scumpia, Aydogan Ozcan, Sumaira Aasi, Kerri E. Rieger, Kavita Y. Sarin, Oren Freifeld, Yonatan Winetraub
This is achieved without the need for model retraining or fine-tuning.
no code implementations • 26 Oct 2024 • Yijie Zhang, Luzhe Huang, Nir Pillar, Yuzhu Li, Hanlong Chen, Aydogan Ozcan
Virtual staining of tissue offers a powerful tool for transforming label-free microscopy images of unstained tissue into equivalents of histochemically stained samples.
no code implementations • 23 Oct 2024 • Shiqi Chen, Yuhang Li, Hanlong Chen, Aydogan Ozcan
Generative models cover various application areas, including image, video and music synthesis, natural language processing, and molecular design, among many others.
no code implementations • 23 Oct 2024 • Michael John Fanous, Christopher Michael Seybold, Hanlong Chen, Nir Pillar, Aydogan Ozcan
We developed a rapid scanning optical microscope, termed "BlurryScope", that leverages continuous image acquisition and deep learning to provide a cost-effective and compact solution for automated inspection and analysis of tissue sections.
no code implementations • 20 Oct 2024 • Yuhang Li, Shiqi Chen, Bijie Bai, Aydogan Ozcan
We introduce an all-optical system, termed the "lying mirror", to hide input information by transforming it into misleading, ordinary-looking patterns that effectively camouflage the underlying image data and deceive the observers.
no code implementations • 19 Oct 2024 • Yuzhu Li, Hao Li, WeiJie Chen, Keelan O'Riordan, Neha Mani, Yuxuan Qi, Tairan Liu, Sridhar Mani, Aydogan Ozcan
It blindly achieved a sensitivity of 97.92% and a specificity of 96.77% for DB10, and a sensitivity of 100% and a specificity of 97.22% for H6.
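For reference, the sensitivity and specificity figures quoted above follow directly from confusion-matrix counts. A minimal sketch, with illustrative counts chosen to reproduce the DB10 percentages (these are not the paper's actual test-set numbers):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts that happen to match the quoted DB10 values.
sens, spec = sensitivity_specificity(tp=47, fn=1, tn=30, fp=1)
print(round(sens, 4), round(spec, 4))  # 0.9792 0.9677
```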
no code implementations • 9 Sep 2024 • Yuzhu Li, Nir Pillar, Tairan Liu, Guangdong Ma, Yuxuan Qi, Kevin De Haan, Yijie Zhang, Xilin Yang, Adrian J. Correa, Guangqian Xiao, Kuang-Yu Jen, Kenneth A. Iczkowski, Yulun Wu, William Dean Wallace, Aydogan Ozcan
Here, we present a panel of virtual staining neural networks for lung and heart transplant biopsies, which digitally convert autofluorescence microscopic images of label-free tissue sections into their brightfield histologically stained counterparts, bypassing the traditional histochemical staining process.
no code implementations • 10 Aug 2024 • Guangdong Ma, Che-Yung Shen, Jingxi Li, Luzhe Huang, Cagatay Isil, Fazil Onuralp Ardic, Xilin Yang, Yuhang Li, Yuntian Wang, Md Sadman Sakib Rahman, Aydogan Ozcan
Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality imaging only in the forward direction (A->B) with high power efficiency while distorting the image formation in the backward direction (B->A) along with low power efficiency.
no code implementations • 17 Jul 2024 • Cagatay Isil, Hatice Ceylan Koydemir, Merve Eryilmaz, Kevin De Haan, Nir Pillar, Koray Mentesoglu, Aras Firat Unal, Yair Rivenson, Sukantha Chandrasekaran, Omai B. Garner, Aydogan Ozcan
Gram staining has been one of the most frequently used staining protocols in microbiology for over a century, utilized across various fields, including diagnostics, food safety, and environmental monitoring.
no code implementations • 7 Jul 2024 • Luzhe Huang, Xiongye Xiao, Shixuan Li, Jiawen Sun, Yi Huang, Aydogan Ozcan, Paul Bogdan
The advance of diffusion-based generative models in recent years has revolutionized state-of-the-art (SOTA) techniques in a wide variety of image analysis and synthesis tasks, whereas their adaptation to image restoration, particularly within computational microscopy, remains theoretically and empirically underexplored.
no code implementations • 15 Jun 2024 • Md Sadman Sakib Rahman, Aydogan Ozcan
Optical imaging and sensing systems based on diffractive elements have seen massive advances over the last several decades.
no code implementations • 12 Jun 2024 • Artem Goncharov, Zoltan Gorocs, Ridhi Pradhan, Brian Ko, Ajmal Ajmal, Andres Rodriguez, David Baum, Marcell Veszpremi, Xilin Yang, Maxime Pindrys, Tianle Zheng, Oliver Wang, Jessica C. Ramella-Roman, Michael J. McShane, Aydogan Ozcan
This compact and cost-effective PLI is designed to capture phosphorescence lifetime images of an insertable sensor through the skin, where the lifetime of the emitted phosphorescence signal is modulated by the local concentration of glucose.
no code implementations • 5 Jun 2024 • Ali Momeni, Babak Rahmani, Benjamin Scellier, Logan G. Wright, Peter L. McMahon, Clara C. Wanjura, Yuhang Li, Anas Skalli, Natalia G. Berloff, Tatsuhiro Onodera, Ilker Oguz, Francesco Morichetti, Philipp del Hougne, Manuel Le Gallo, Abu Sebastian, Azalia Mirhoseini, Cheng Zhang, Danijela Marković, Daniel Brunner, Christophe Moser, Sylvain Gigan, Florian Marquardt, Aydogan Ozcan, Julie Grollier, Andrea J. Liu, Demetri Psaltis, Andrea Alù, Romain Fleury
Research over the past few years has shown that the answer to all these questions is likely "yes, with enough research": PNNs could one day radically change what is possible and practical for AI systems.
no code implementations • 29 Apr 2024 • Luzhe Huang, Yuzhu Li, Nir Pillar, Tal Keidar Haran, William Dean Wallace, Aydogan Ozcan
Here, we present an autonomous quality and hallucination assessment method (termed AQuA), mainly designed for virtual tissue staining, while also being applicable to histochemical staining.
no code implementations • 1 Apr 2024 • Sahan Yoruc Selcuk, Xilin Yang, Bijie Bai, Yijie Zhang, Yuzhu Li, Musa Aydin, Aras Firat Unal, Aditya Gomatam, Zhen Guo, Darrow Morgan Angus, Goren Kolodney, Karine Atlan, Tal Keidar Haran, Nir Pillar, Aydogan Ozcan
Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis.
no code implementations • 21 Mar 2024 • Michael John Fanous, Paloma Casteleiro Costa, Cagatay Isil, Luzhe Huang, Aydogan Ozcan
The integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging.
no code implementations • 16 Mar 2024 • Che-Yung Shen, Jingxi Li, Tianyi Gan, Yuhang Li, Langxing Bai, Mona Jarrahi, Aydogan Ozcan
These wavelength-multiplexed patterns are projected onto a single field-of-view (FOV) at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor.
no code implementations • 14 Mar 2024 • Xilin Yang, Bijie Bai, Yijie Zhang, Musa Aydin, Sahan Yoruc Selcuk, Zhen Guo, Gregory A. Fishbein, Karine Atlan, William Dean Wallace, Nir Pillar, Aydogan Ozcan
Systemic amyloidosis is a group of diseases characterized by the deposition of misfolded proteins in various organs and tissues, leading to progressive organ dysfunction and failure.
no code implementations • 27 Feb 2024 • Hyun-June Jang, Hyou-Arm Joung, Artem Goncharov, Anastasia Gant Kanegusuku, Clarence W. Chan, Kiang-Teck Jerry Yeo, Wen Zhuang, Aydogan Ozcan, Junhong Chen
This study explores the fusion of a field-effect transistor (FET), a paper-based analytical cartridge, and the computational power of deep learning (DL) for quantitative biosensing via kinetic analyses.
no code implementations • 4 Feb 2024 • Guangdong Ma, Xilin Yang, Bijie Bai, Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Yijie Zhang, Yuzhu Li, Mona Jarrahi, Aydogan Ozcan
We demonstrated the feasibility of this reconfigurable multiplexed diffractive design by approximating 256 randomly selected permutation matrices using K=4 rotatable diffractive layers.
no code implementations • 30 Jan 2024 • Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Mona Jarrahi, Aydogan Ozcan
Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing.
no code implementations • 17 Jan 2024 • Jingtian Hu, Kun Liao, Niyazi Ulas Dinc, Carlo Gigli, Bijie Bai, Tianyi Gan, Xurong Li, Hanlong Chen, Xilin Yang, Yuhang Li, Cagatay Isil, Md Sadman Sakib Rahman, Jingxi Li, Xiaoyong Hu, Mona Jarrahi, Demetri Psaltis, Aydogan Ozcan
To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air.
no code implementations • 15 Jan 2024 • Bijie Bai, Ryan Lee, Yuhang Li, Tianyi Gan, Yuntian Wang, Mona Jarrahi, Aydogan Ozcan
This information hiding transformation is valid for infinitely many combinations of secret messages, all of which are transformed into ordinary-looking output patterns, achieved all-optically through passive light-matter interactions within the optical processor.
no code implementations • 8 Nov 2023 • Che-Yung Shen, Jingxi Li, Tianyi Gan, Mona Jarrahi, Aydogan Ozcan
Optical phase conjugation (OPC) is a nonlinear technique used for counteracting wavefront distortions, with various applications ranging from imaging to beam focusing.
no code implementations • 5 Oct 2023 • Xilin Yang, Md Sadman Sakib Rahman, Bijie Bai, Jingxi Li, Aydogan Ozcan
Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are non-negative, acting on diffraction-limited optical intensity patterns at the input field-of-view (FOV).
no code implementations • 17 Sep 2023 • Cagatay Isil, Tianyi Gan, F. Onuralp Ardic, Koray Mentesoglu, Jagrit Digani, Huseyin Karaca, Hanlong Chen, Jingxi Li, Deniz Mengu, Mona Jarrahi, Kaan Akşit, Aydogan Ozcan
Our results show that these diffractive denoisers can efficiently remove salt and pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%.
no code implementations • 29 Aug 2023 • Bijie Bai, Xilin Yang, Tianyi Gan, Jingxi Li, Deniz Mengu, Mona Jarrahi, Aydogan Ozcan
Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification.
no code implementations • 5 Aug 2023 • Che-Yung Shen, Jingxi Li, Deniz Mengu, Aydogan Ozcan
Here, we present the design of a diffractive processor that can all-optically perform multispectral quantitative phase imaging of transparent phase-only objects in a snapshot.
no code implementations • 2 Aug 2023 • Yuzhu Li, Nir Pillar, Jingxi Li, Tairan Liu, Di Wu, Songyu Sun, Guangdong Ma, Kevin De Haan, Luzhe Huang, Sepehr Hamidi, Anatoly Urisman, Tal Keidar Haran, William Dean Wallace, Jonathan E. Zuckerman, Aydogan Ozcan
Histological examination is a crucial step in an autopsy; however, the traditional histochemical staining of post-mortem samples faces multiple challenges, including the inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, as well as the resource-intensive nature of chemical staining procedures covering large tissue areas, which demand substantial labor, cost, and time.
no code implementations • 22 May 2023 • Luzhe Huang, Jianing Li, Xiaofu Ding, Yijie Zhang, Hanlong Chen, Aydogan Ozcan
Uncertainty estimation is critical for numerous applications of deep neural networks and draws growing attention from researchers.
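One common family of uncertainty estimates uses the disagreement among independently trained models: the spread of an ensemble's predictions serves as a simple uncertainty proxy. A minimal sketch of that idea (one of many approaches, not the specific method of this paper):

```python
import numpy as np

def ensemble_uncertainty(predictions):
    """predictions: (n_models, ...) array of per-model outputs.
    Returns the ensemble mean and the per-output standard deviation,
    the latter acting as a crude uncertainty estimate."""
    preds = np.asarray(predictions, dtype=float)
    return preds.mean(axis=0), preds.std(axis=0)

# Three hypothetical models scoring two classes for one input.
mean, std = ensemble_uncertainty([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
print(np.round(mean, 3))  # [0.8 0.2]
```

Higher `std` flags inputs where the models disagree, which is where a downstream user should trust the prediction least.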
no code implementations • 20 Apr 2023 • Md Sadman Sakib Rahman, Tianyi Gan, Emir Arda Deger, Cagatay Isil, Mona Jarrahi, Aydogan Ozcan
In this scheme, an electronic neural network encoder and a diffractive optical network decoder are jointly trained using deep learning to transfer the optical information or message of interest around the opaque occlusion of an arbitrary shape.
no code implementations • 12 Apr 2023 • Yuhang Li, Jingxi Li, Yifan Zhao, Tianyi Gan, Jingtian Hu, Mona Jarrahi, Aydogan Ozcan
We demonstrate universal polarization transformers based on an engineered diffractive volume, which can synthesize a large set of arbitrarily-selected, complex-valued polarization scattering matrices between the polarization states at different positions within its input and output field-of-views (FOVs).
no code implementations • 23 Mar 2023 • Md Sadman Sakib Rahman, Xilin Yang, Jingxi Li, Bijie Bai, Aydogan Ozcan
Under spatially-coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is greater than or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively.
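The parameter-count condition quoted above (N >= ~2 Ni x No phase-only features for an arbitrary complex-valued linear transformation) is easy to evaluate numerically. A back-of-the-envelope sketch, with illustrative FOV sizes:

```python
def min_diffractive_features(n_input_pixels: int, n_output_pixels: int) -> int:
    """Approximate lower bound on the number of optimizable phase-only
    diffractive features needed for an arbitrary complex-valued linear
    transformation, per the ~2 * Ni * No condition."""
    return 2 * n_input_pixels * n_output_pixels

# Example: 28x28 input and output FOVs (e.g., MNIST-sized fields).
ni = no = 28 * 28
print(min_diffractive_features(ni, no))  # 1229312
```

The quadratic scaling in Ni x No is why large-FOV all-optical transformations demand diffractive surfaces with very large numbers of features.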
no code implementations • 9 Jan 2023 • Hanlong Chen, Luzhe Huang, Tairan Liu, Aydogan Ozcan
The application of deep learning techniques has greatly enhanced holographic imaging capabilities, leading to improved phase recovery and image reconstruction.
no code implementations • 25 Dec 2022 • Bijie Bai, Heming Wei, Xilin Yang, Deniz Mengu, Aydogan Ozcan
We numerically demonstrated all-optical class-specific transformations covering A-->A, I-->I, and P-->I transformations using various image datasets.
no code implementations • 10 Dec 2022 • Deniz Mengu, Anika Tabassum, Mona Jarrahi, Aydogan Ozcan
Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands in the terahertz spectrum.
no code implementations • 5 Dec 2022 • Jingxi Li, Tianyi Gan, Yifan Zhao, Bijie Bai, Che-Yung Shen, Songyu Sun, Mona Jarrahi, Aydogan Ozcan
A unidirectional imager would only permit image formation along one direction, from an input field-of-view (FOV) A to an output FOV B, and in the reverse path, the image formation would be blocked.
no code implementations • 13 Nov 2022 • Bijie Bai, Xilin Yang, Yuzhu Li, Yijie Zhang, Nir Pillar, Aydogan Ozcan
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue.
no code implementations • 17 Sep 2022 • Luzhe Huang, Hanlong Chen, Tairan Liu, Aydogan Ozcan
Here, we report a self-supervised learning model, termed GedankenNet, that eliminates the need for labeled or experimental training data, and demonstrate its effectiveness and superior generalization on hologram reconstruction tasks.
no code implementations • 30 Aug 2022 • Yi Luo, Yijie Zhang, Tairan Liu, Alan Yu, Yichen Wu, Aydogan Ozcan
To address this need, we present a mobile and cost-effective label-free bio-aerosol sensor that takes holographic images of flowing particulate matter concentrated by a virtual impactor, which selectively slows down and guides particles larger than ~6 microns to fly through an imaging window.
no code implementations • 23 Aug 2022 • Md Sadman Sakib Rahman, Aydogan Ozcan
Here we demonstrate, for the first time, a "time-lapse" image classification scheme using a diffractive network, significantly advancing its classification accuracy and generalization performance on complex input objects by using the lateral movements of the input objects and/or the diffractive network, relative to each other.
no code implementations • 13 Aug 2022 • Jingxi Li, Bijie Bai, Yi Luo, Aydogan Ozcan
We report deep learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of arbitrarily-selected, complex-valued linear transformations between an input and output field-of-view, each with N_i and N_o pixels, respectively.
no code implementations • 8 Aug 2022 • Yi Luo, Bijie Bai, Yuhang Li, Ege Cetintas, Aydogan Ozcan
Classification of an object behind a random and unknown scattering medium sets a challenging task for computational imaging and machine vision fields.
no code implementations • 14 Jul 2022 • Xilin Yang, Bijie Bai, Yijie Zhang, Yuzhu Li, Kevin De Haan, Tairan Liu, Aydogan Ozcan
Unlike a single neural network structure which only takes one stain type as input to digitally output images of another stain type, C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E and then performs stain transfer from H&E to the domain of the other stain in a cascaded manner.
no code implementations • 6 Jul 2022 • Yijie Zhang, Luzhe Huang, Tairan Liu, Keyi Cheng, Kevin De Haan, Yuzhu Li, Bijie Bai, Aydogan Ozcan
Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving equivalent performance to virtual staining of in-focus label-free images, also saving significant imaging time by lowering the microscope's autofocusing precision.
no code implementations • 30 Jun 2022 • Tairan Liu, Yuzhu Li, Hatice Ceylan Koydemir, Yijie Zhang, Ethan Yang, Merve Eryilmaz, Hongda Wang, Jingxi Li, Bijie Bai, Guangdong Ma, Aydogan Ozcan
We also demonstrated that this data-driven plaque assay offers the capability of quantifying the infected area of the cell monolayer, performing automated counting and quantification of PFUs and virus-infected areas over a 10-fold larger dynamic range of virus concentration than standard viral plaque assays.
no code implementations • 21 Jun 2022 • Deniz Mengu, Yifan Zhao, Anika Tabassum, Mona Jarrahi, Aydogan Ozcan
Permutation matrices form an important computational building block frequently used in various fields including, e.g., communications, information security, and data processing.
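A permutation matrix reorders the elements of a vector without mixing them: it contains exactly one 1 in every row and column. A minimal numpy sketch of the operation these diffractive designs approximate optically:

```python
import numpy as np

def permutation_matrix(perm):
    """Build the permutation matrix P with P[i, perm[i]] = 1,
    so (P @ x)[i] == x[perm[i]]."""
    n = len(perm)
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), perm] = 1
    return P

perm = [2, 0, 3, 1]            # send position 2 -> 0, 0 -> 1, 3 -> 2, 1 -> 3
P = permutation_matrix(perm)
x = np.array([10, 20, 30, 40])
print(P @ x)  # [30 10 40 20]
```

Because a permutation only routes energy rather than combining it, it is a natural target transformation for a passive optical processor.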
no code implementations • 15 Jun 2022 • Cagatay Isil, Deniz Mengu, Yifan Zhao, Anika Tabassum, Jingxi Li, Yi Luo, Mona Jarrahi, Aydogan Ozcan
We report a deep learning-enabled diffractive display design that is based on a jointly-trained pair of an electronic encoder and a diffractive optical decoder to synthesize/project super-resolved images using low-resolution wavefront modulators.
no code implementations • 26 May 2022 • Bijie Bai, Yi Luo, Tianyi Gan, Jingtian Hu, Yuhang Li, Yifan Zhao, Deniz Mengu, Mona Jarrahi, Aydogan Ozcan
Here, we demonstrate a camera design that performs class-specific imaging of target objects with instantaneous all-optical erasure of other classes of objects.
no code implementations • 7 May 2022 • Yuzhu Li, Tairan Liu, Hatice Ceylan Koydemir, Hongda Wang, Keelan O'Riordan, Bijie Bai, Yuta Haga, Junji Kobashi, Hitoshi Tanaka, Takaya Tamaru, Kazunori Yamaguchi, Aydogan Ozcan
Due to the large scalability, ultra-large field-of-view, and low cost of the TFT-based image sensors, this platform can be integrated with each agar plate to be tested and disposed of after the automated CFU count.
no code implementations • 1 May 2022 • Yuhang Li, Yi Luo, Bijie Bai, Aydogan Ozcan
During its training, random diffusers with a range of correlation lengths were used to improve the diffractive network's generalization performance.
no code implementations • 22 Apr 2022 • Hanlong Chen, Luzhe Huang, Tairan Liu, Aydogan Ozcan
Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging.
no code implementations • 25 Mar 2022 • Jingxi Li, Yi-Chun Hung, Onur Kulce, Deniz Mengu, Aydogan Ozcan
The transmission layers of this polarization multiplexed diffractive network are trained and optimized via deep learning and error-backpropagation by using thousands of examples of the input/output fields corresponding to each one of the complex-valued linear transformations assigned to different input/output polarization combinations.
no code implementations • 27 Jan 2022 • Luzhe Huang, Xilin Yang, Tairan Liu, Aydogan Ozcan
Here, we demonstrate a few-shot transfer learning method that helps a holographic image reconstruction deep neural network rapidly generalize to new types of samples using small datasets.
no code implementations • 22 Jan 2022 • Deniz Mengu, Aydogan Ozcan
Quantitative phase imaging (QPI) is a label-free computational imaging technique that provides optical path length information of specimens.
no code implementations • 8 Dec 2021 • Bijie Bai, Hongda Wang, Yuzhu Li, Kevin De Haan, Francesco Colonnese, Yujie Wan, Jingyi Zuo, Ngan B. Doan, Xiaoran Zhang, Yijie Zhang, Jingxi Li, Wenjie Dong, Morgan Angus Darrow, Elham Kamangar, Han Sung Lee, Yair Rivenson, Aydogan Ozcan
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis.
no code implementations • 2 Nov 2021 • Yi Luo, Deniz Mengu, Aydogan Ozcan
Based on this architecture, we numerically optimized the design of a diffractive neural network composed of 4 passive layers to all-optically perform NAND operation using the diffraction of light, and cascaded these diffractive NAND gates to perform complex logical functions by successively feeding the output of one diffractive NAND gate into another.
no code implementations • 22 Aug 2021 • Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan
In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation.
no code implementations • 18 Aug 2021 • Deniz Mengu, Muhammed Veli, Yair Rivenson, Aydogan Ozcan
In addition to all-optical classification of overlapping phase objects, we also demonstrate the reconstruction of these phase images based on a shallow electronic neural network that uses the highly compressed output of the diffractive network as its input (with, e.g., ~20-65 times fewer pixels) to rapidly reconstruct both of the phase images, despite their spatial overlap and related phase ambiguity.
no code implementations • 31 Mar 2021 • Yi Luo, Yichen Wu, Liqiao Li, Yuening Guo, Ege Cetintas, Yifang Zhu, Aydogan Ozcan
To evaluate the effects of e-liquid composition on aerosol dynamics, we measured the volatility of the particles generated by flavorless, nicotine-free e-liquids with various PG/VG volumetric ratios, revealing a negative correlation between the particles' volatility and the volumetric ratio of VG in the e-liquid.
no code implementations • 4 Mar 2021 • Yijie Zhang, Tairan Liu, Manmohan Singh, Yilin Luo, Yair Rivenson, Kirill V. Larin, Aydogan Ozcan
Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in ~6.73 ms using a desktop computer, removing spatial aliasing artifacts due to spectral undersampling, also presenting a very good match to the images of the same samples reconstructed using the full spectral OCT data (i.e., 1280 spectral points per A-line).
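The aliasing the network removes has a simple Fourier origin: a spectral-domain OCT A-line is the FFT of the recorded spectrum, so keeping every other spectral point halves the unambiguous depth range and folds deeper reflectors to wrong depths. A toy 1-D illustration (synthetic single-reflector spectrum, not the paper's data):

```python
import numpy as np

n = 1280                                  # full spectral points per A-line
k = np.arange(n)
depth = 700                               # true depth bin of one reflector
spectrum = np.exp(2j * np.pi * depth * k / n)

full_aline = np.abs(np.fft.fft(spectrum))        # 1280-point reconstruction
sub_aline = np.abs(np.fft.fft(spectrum[::2]))    # 640 spectral points

print(int(full_aline.argmax()))  # 700  (correct depth)
print(int(sub_aline.argmax()))   # 60   (700 aliased to 700 mod 640)
```

With half the samples the depth range shrinks to 640 bins, so the reflector at bin 700 folds to bin 60; this is the spatial-aliasing artifact the learned reconstruction compensates for.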
no code implementations • 12 Feb 2021 • Luzhe Huang, Tairan Liu, Xilin Yang, Yi Luo, Yair Rivenson, Aydogan Ozcan
Digital holography is one of the most widely used label-free microscopy techniques in biomedical imaging.
no code implementations • 22 Dec 2020 • Xilin Yang, Luzhe Huang, Yilin Luo, Yichen Wu, Hongda Wang, Yair Rivenson, Aydogan Ozcan
We present a virtual image refocusing method over an extended depth of field (DOF) enabled by cascaded neural networks and a double-helix point-spread function (DH-PSF).
no code implementations • 1 Dec 2020 • Calvin Brown, Artem Goncharov, Zachary Ballard, Mason Fordham, Ashley Clemens, Yunzhe Qiu, Yair Rivenson, Aydogan Ozcan
Conventional spectrometers are limited by trade-offs set by size, cost, signal-to-noise ratio (SNR), and spectral resolution.
no code implementations • 24 Oct 2020 • Deniz Mengu, Yair Rivenson, Aydogan Ozcan
Recent research efforts in optical computing have gravitated towards developing optical neural networks that aim to benefit from the processing speed and parallelism of optics/photonics in machine learning applications.
no code implementations • 21 Oct 2020 • Luzhe Huang, Yilin Luo, Yair Rivenson, Aydogan Ozcan
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including physical, medical and life sciences.
no code implementations • 15 Sep 2020 • Md Sadman Sakib Rahman, Jingxi Li, Deniz Mengu, Yair Rivenson, Aydogan Ozcan
A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning.
no code implementations • 20 Aug 2020 • Kevin de Haan, Yijie Zhang, Jonathan E. Zuckerman, Tairan Liu, Anthony E. Sisk, Miguel F. P. Diaz, Kuang-Yu Jen, Alexander Nobori, Sofia Liou, Sarah Zhang, Rana Riahi, Yair Rivenson, W. Dean Wallace, Aydogan Ozcan
Based on evaluation by three renal pathologists, followed by adjudication by a fourth renal pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis in several non-neoplastic kidney diseases sampled from 58 unique subjects.
no code implementations • 25 Jul 2020 • Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan
Precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics.
no code implementations • 12 Jul 2020 • Zoltan Gorocs, David Baum, Fang Song, Kevin DeHaan, Hatice Ceylan Koydemir, Yunzhe Qiu, Zilin Cai, Thamira Skandakumar, Spencer Peterman, Miu Tamamitsu, Aydogan Ozcan
We report a field-portable and cost-effective imaging flow cytometer that uses deep learning to accurately detect Giardia lamblia cysts in water samples at a volumetric throughput of 100 mL/h.
no code implementations • 1 Jul 2020 • Tairan Liu, Kevin de Haan, Bijie Bai, Yair Rivenson, Yi Luo, Hongda Wang, David Karalli, Hongxiang Fu, Yibo Zhang, John FitzGerald, Aydogan Ozcan
Our analysis shows that a trained deep neural network can extract the birefringence information using both the sample specific morphological features as well as the holographic amplitude and phase distribution.
no code implementations • 30 Jun 2020 • Muhammed Veli, Deniz Mengu, Nezih T. Yardimci, Yi Luo, Jingxi Li, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan
Recent advances in deep learning have been providing non-intuitive solutions to various inverse problems in optics.
no code implementations • 23 May 2020 • Deniz Mengu, Yifan Zhao, Nezih T. Yardimci, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan
By modeling the undesired layer-to-layer misalignments in 3D as continuous random variables in the optical forward model, diffractive networks are trained to maintain their inference accuracy over a large range of misalignments; we term this diffractive network design as vaccinated D2NN (v-D2NN).
no code implementations • 15 May 2020 • Jingxi Li, Deniz Mengu, Nezih T. Yardimci, Yi Luo, Xurong Li, Muhammed Veli, Yair Rivenson, Mona Jarrahi, Aydogan Ozcan
3D engineering of matter has opened up new avenues for designing systems that can perform various computational tasks through light-matter interaction.
no code implementations • 21 Mar 2020 • Yilin Luo, Luzhe Huang, Yair Rivenson, Aydogan Ozcan
We demonstrate a deep learning-based offline autofocusing method, termed Deep-R, that is trained to rapidly and blindly autofocus a single-shot microscopy image of a specimen that is acquired at an arbitrary out-of-focus plane.
no code implementations • 29 Jan 2020 • Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu, Bijie Bai, Yibo Zhang, Yiyin Jin, Sabiha Tok, Enis Cagatay Yilmaz, Esin Gumustekin, Yair Rivenson, Aydogan Ozcan
Our experiments further confirmed that this method successfully detects 90% of bacterial colonies within 7-10 h (and >95% within 12 h) with a precision of 99.2-100%, and correctly identifies their species in 7.6-12 h with 80% accuracy.
no code implementations • 20 Jan 2020 • Yijie Zhang, Kevin De Haan, Yair Rivenson, Jingxi Li, Apostolos Delis, Aydogan Ozcan
This approach uses a single deep neural network that receives two different sources of information at its input: (1) autofluorescence images of the label-free tissue sample, and (2) a digital staining matrix which represents the desired microscopic map of different stains to be virtually generated at the same tissue section.
no code implementations • 14 Sep 2019 • Yi Luo, Deniz Mengu, Nezih T. Yardimci, Yair Rivenson, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan
We report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using deep learning.
no code implementations • 15 Jul 2019 • Tairan Liu, Zhensong Wei, Yair Rivenson, Kevin De Haan, Yibo Zhang, Yichen Wu, Aydogan Ozcan
We report a framework based on a generative adversarial network (GAN) that performs high-fidelity color image reconstruction using a single hologram of a sample that is illuminated simultaneously by light at three different wavelengths.
no code implementations • 8 Jun 2019 • Jingxi Li, Deniz Mengu, Yi Luo, Yair Rivenson, Aydogan Ozcan
Similar to ensemble methods practiced in machine learning, we also independently optimized multiple differential diffractive networks that optically project their light onto a common detector plane, and achieved testing accuracies of 98.59%, 91.06%, and 51.44% for MNIST, Fashion-MNIST, and grayscale CIFAR-10, respectively.
1 code implementation • 31 Jan 2019 • Yichen Wu, Yair Rivenson, Hongda Wang, Yilin Luo, Eyal Ben-David, Laurent A. Bentolila, Christian Pritz, Aydogan Ozcan
Three-dimensional (3D) fluorescence microscopy in general requires axial scanning to capture images of a sample at different planes.
no code implementations • 30 Jan 2019 • Kevin de Haan, Zachary S. Ballard, Yair Rivenson, Yichen Wu, Aydogan Ozcan
We report resolution enhancement in scanning electron microscopy (SEM) images using a generative adversarial network.
no code implementations • 17 Nov 2018 • Yichen Wu, Yilin Luo, Gunvant Chaudhari, Yair Rivenson, Ayfer Calis, Kevin De Haan, Aydogan Ozcan
Deep learning brings bright-field microscopy contrast to holographic images of a sample volume, bridging the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of bright-field incoherent microscopy.
no code implementations • 15 Oct 2018 • Tairan Liu, Kevin De Haan, Yair Rivenson, Zhensong Wei, Xin Zeng, Yibo Zhang, Aydogan Ozcan
We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems.
no code implementations • 10 Oct 2018 • Deniz Mengu, Yi Luo, Yair Rivenson, Xing Lin, Muhammed Veli, Aydogan Ozcan
In their Comment, Wei et al. (arXiv:1809.08360v1 [cs.LG]) claim that our original interpretation of Diffractive Deep Neural Networks (D2NN) represents a mischaracterization of the system due to linearity and passivity.
no code implementations • 3 Oct 2018 • Deniz Mengu, Yi Luo, Yair Rivenson, Aydogan Ozcan
Furthermore, we report the integration of D2NNs with electronic neural networks to create hybrid-classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths, also reducing the complexity of the successive electronic network.
no code implementations • 20 Jul 2018 • Yair Rivenson, Tairan Liu, Zhensong Wei, Yibo Zhang, Aydogan Ozcan
Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to brightfield microscopy images of the same samples that are histochemically stained.
no code implementations • 23 May 2018 • Yair Rivenson, Aydogan Ozcan
We discuss recently emerging applications of the state-of-art deep learning methods on optical microscopy and microscopic image reconstruction, which enable new transformations among different modes and modalities of microscopic imaging, driven entirely by image data.
no code implementations • 14 Apr 2018 • Xing Lin, Yair Rivenson, Nezih T. Yardimci, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan
We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively.
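In simulations of diffractive networks like the D2NN, each passive layer is modeled as an elementwise phase mask, and the light between layers is propagated through free space; the angular spectrum method is a standard choice for that propagation step. A minimal sketch of this forward model (grid size, wavelength, pitch, and layer spacing below are illustrative values, not the paper's parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a 2-D complex field over `distance` in free space
    using the angular spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # drop evanescent waves
    H = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One diffractive layer = elementwise phase modulation, then propagation.
n = 64
phase = np.random.uniform(0, 2 * np.pi, (n, n))      # trainable in practice
field_in = np.ones((n, n), dtype=complex)            # plane-wave input
field_out = angular_spectrum_propagate(field_in * np.exp(1j * phase),
                                       wavelength=0.75e-3,  # ~0.4 THz band
                                       pitch=0.4e-3,
                                       distance=3e-3)
print(field_out.shape)  # (64, 64)
```

Stacking several such modulate-then-propagate steps, with the phase values optimized by error backpropagation, is the basic computational structure behind these all-optical designs.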
no code implementations • 30 Mar 2018 • Yair Rivenson, Hongda Wang, Zhensong Wei, Yibo Zhang, Harun Gunaydin, Aydogan Ozcan
Here, we demonstrate a label-free approach to create a virtually-stained microscopic image using a single wide-field auto-fluorescence image of an unlabeled tissue sample, bypassing the standard histochemical staining process, saving time and cost.
no code implementations • 21 Mar 2018 • Yichen Wu, Yair Rivenson, Yibo Zhang, Zhensong Wei, Harun Gunaydin, Xing Lin, Aydogan Ozcan
Holography encodes the three dimensional (3D) information of a sample in the form of an intensity-only recording.
no code implementations • 12 Dec 2017 • Yair Rivenson, Hatice Ceylan Koydemir, Hongda Wang, Zhensong Wei, Zhengshuang Ren, Harun Gunaydin, Yibo Zhang, Zoltan Gorocs, Kyle Liang, Derek Tseng, Aydogan Ozcan
Mobile-phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance.
no code implementations • 12 May 2017 • Yair Rivenson, Zoltan Gorocs, Harun Gunaydin, Yibo Zhang, Hongda Wang, Aydogan Ozcan
We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field.
no code implementations • 10 May 2017 • Yair Rivenson, Yibo Zhang, Harun Gunaydin, Da Teng, Aydogan Ozcan
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography.
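As a classical point of reference for the phase-recovery problem described above, the textbook Gerchberg-Saxton iteration alternates between two planes, enforcing the measured amplitude in each while keeping the evolving phase. The sketch below is that classical baseline, not the deep-learning approach of this line of work; the array sizes and iteration count are illustrative:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=50):
    """Recover a phase mask consistent with known amplitudes in the
    near field (`source_amp`) and the far field (`target_amp`)."""
    field = source_amp * np.exp(2j * np.pi * np.random.rand(*source_amp.shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # far-field constraint
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # near-field constraint
    return np.angle(field)

rng = np.random.default_rng(0)
src = np.ones((32, 32))                                    # uniform illumination
tgt = np.abs(np.fft.fft2(np.exp(2j * np.pi * rng.random((32, 32)))))
phase = gerchberg_saxton(src, tgt)
print(phase.shape)  # (32, 32)
```

Iterative schemes like this are slow and can stagnate, which is part of the motivation for the learned, single-pass phase-recovery methods listed above.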