Author Affiliations
Abstract
1 Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, P. R. China
2 School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, P. R. China
3 Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, P. R. China
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises three tasks: estimating FFA from fundus camera images, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA, also from SLO. Although many deep learning models are available, a single model can typically perform only one or two of these tasks. To accomplish all three with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images based on a supervised generative adversarial network. The three prediction tasks proceed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model with those of pix2pix and CycleGAN, we demonstrate the clear improvement achieved by our method. Its high performance is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Fundus fluorescein angiography image; fundus structure image; image translation; unified deep learning model; generative adversarial networks
Journal of Innovative Optical Health Sciences
2024, 17(3): 2450003
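The abstract above validates the predicted FFA images with PSNR, SSIM, and MSE. As a minimal sketch (not the authors' code), the two simpler metrics can be computed with NumPy as follows; the function names and the 8-bit data range are illustrative assumptions.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two images of identical shape."""
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    return np.mean((pred - target) ** 2)

def psnr(pred, target, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    err = mse(pred, target)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / err)
```

SSIM is more involved (windowed means, variances, and covariances); in practice an implementation such as `skimage.metrics.structural_similarity` is commonly used for it.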
Author Affiliations
Abstract
School of Astronautics, Harbin Institute of Technology, Harbin, Heilongjiang 150000, P. R. China
Photoacoustic imaging (PAI) is an emerging noninvasive imaging method based on the photoacoustic effect that provides valuable assistance for medical diagnosis, offering large imaging depth and high contrast. However, limited by equipment cost and reconstruction-time requirements, existing PAI systems equipped with annular array transducers struggle to balance image quality and imaging speed. In this paper, a triple-path feature transform network (TFT-Net) for ring-array photoacoustic tomography is proposed to enhance imaging quality from limited-view and sparse measurement data. Specifically, the network combines the raw photoacoustic pressure signals and conventional linear reconstruction images as input data, and takes the photoacoustic physical model as prior information to guide the reconstruction process. In addition, to strengthen signal-feature extraction, residual blocks and squeeze-and-excitation blocks are introduced into TFT-Net. For more efficient reconstruction, the final output of photoacoustic signals uses a 'filter-then-upsample' operation with a pixel-shuffle multiplexer and a maxout module. Experimental results on simulated and in vivo data demonstrate that TFT-Net can restore target boundaries clearly, reduce background noise, and achieve fast, high-quality photoacoustic image reconstruction under limited-view, sparse sampling.
Deep learning; feature transformation; image reconstruction; limited-view measurement; photoacoustic tomography
Journal of Innovative Optical Health Sciences
2024, 17(3): 2350028
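The abstract above introduces squeeze-and-excitation blocks to strengthen feature extraction. A minimal NumPy sketch of the channel-reweighting idea follows; the feature-map layout, reduction ratio, and weight shapes are illustrative assumptions, not details of TFT-Net itself.

```python
import numpy as np

def squeeze_excitation(x, w1, b1, w2, b2):
    """Squeeze-and-excitation channel reweighting for a (C, H, W) feature map.

    x  : feature map, shape (C, H, W)
    w1 : (C, C // r) weights of the channel-reduction FC layer
    w2 : (C // r, C) weights of the channel-expansion FC layer
    """
    s = x.mean(axis=(1, 2))                   # squeeze: global average pool -> (C,)
    h = np.maximum(s @ w1 + b1, 0.0)          # excitation: FC + ReLU
    g = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # FC + sigmoid gate in (0, 1)
    return x * g[:, None, None]               # rescale each channel by its gate
```

With learned weights, informative channels receive gates near 1 and uninformative ones are suppressed, which is the mechanism the block contributes inside a CNN.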
1 Institute of Modern Optics, Nankai University, Tianjin Key Laboratory of Micro-scale Optical Information Science and Technology, Tianjin 300350, China
2 Department of Thyroid and Neck Tumor, Tianjin Medical University Cancer Institute and Hospital National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin 300060, China
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and fully restore texture in OCT images. In this paper, we propose a deep learning-based method for inpainting saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on clear–saturated phantom image pairs. The high-quality reconstructions of experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
Optical coherence tomography; saturation artifacts; deep learning; image inpainting
Journal of Innovative Optical Health Sciences
2024, 17(3): 2350026
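The abstract above builds simulated clear–saturated training pairs from the artifact's generation mechanism. A hedged sketch of one simple way to synthesize such a pair, clipping at an assumed detector ceiling and recording a mask of the pixels the inpainting network should restore, is shown below; the exact simulation in the paper may differ.

```python
import numpy as np

def make_saturated_pair(clean, ceiling):
    """Synthesize a saturated training image from a clean one.

    Values above the detector ceiling are clipped flat, mimicking the
    saturation artifacts seen in highly scattering media; the boolean
    mask marks the clipped pixels to be inpainted.
    """
    saturated = np.minimum(clean, ceiling)
    mask = clean > ceiling
    return saturated, mask
```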
1 Zhejiang Laboratory, Research Center for Frontier Fundamental Studies, Hangzhou, China
2 Zhejiang University, College of Optical Science and Engineering, State Key Laboratory of Extreme Photonics and Instrumentation, Hangzhou, China
3 ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, China
4 Shanghai Jiao Tong University, Chip Hub for Integrated Photonics Xplore (CHIPX), Wuxi, China
With the rapid development of sensor networks, machine vision faces the problem of storing and computing massive amounts of data. The human visual system senses and computes information with remarkable efficiency, which offers instructive insights for addressing these problems in machine vision. This review comprehensively summarizes the latest advances in bio-inspired image sensors that can improve machine-vision processing efficiency. After a brief introduction to the research background, the relevant mechanisms of visual information processing in the human visual system are discussed, including layer-by-layer processing, sparse coding, and neural adaptation. Subsequently, image sensors corresponding to these bio-inspired mechanisms, and their performance, are introduced. Finally, the challenges and prospects of implementing bio-inspired image sensors for efficient machine vision are discussed.
Bio-inspired image sensor; machine vision; layer-by-layer processing; sparse coding; neural adaptation
Advanced Photonics
2024, 6(2): 024001
1 School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430080, Hubei, China
2 Engineering Research Center of Metallurgical Automation and Measurement Technology of the Ministry of Education, Wuhan University of Science and Technology, Wuhan 430080, Hubei, China
To address the high complexity, numerous redundant feature points, and poor real-time performance of the traditional SIFT matching algorithm, this paper proposes a fast SIFT image-matching algorithm with a locally adaptive threshold. First, building on the SIFT algorithm, the constructed Gaussian pyramid is optimized: redundant feature points are eliminated by reducing the number of pyramid layers to improve detection efficiency, and the threshold of the FAST algorithm is adaptively derived from local image contrast to achieve high-quality feature-point detection, so that robust feature points are selected for more accurate matching. Second, a Gaussian circular window is used to build a 32-dimensional reduced feature vector, improving runtime efficiency. Finally, feature points are purified according to the geometric consistency between matched point pairs, effectively reducing mismatches. Experimental results show that the proposed method outperforms SIFT and the other comparison algorithms in both matching accuracy and computational efficiency: compared with traditional SIFT, matching accuracy improves by about 10% and runtime is reduced by about 49%. Under scale, rotation, and illumination changes, the correct matching rate exceeds 93%.
SIFT algorithm; Gaussian pyramid; adaptive threshold; feature descriptor; image matching
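The abstract above derives the FAST detection threshold adaptively from local image contrast. A minimal sketch of one such mapping is shown below, using the standard deviation of a local patch as the contrast measure; the scale factor and clamping bounds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def adaptive_fast_threshold(patch, k=0.15, t_min=5.0, t_max=60.0):
    """Derive a FAST corner threshold from the local contrast of a patch.

    Low-contrast regions get a low threshold (so weak corners are still
    found), while high-contrast regions get a higher one (so only robust
    corners survive), clamped to [t_min, t_max].
    """
    contrast = float(np.std(patch.astype(np.float64)))
    return float(np.clip(k * contrast + t_min, t_min, t_max))
```

The returned value could then be passed per-region to a FAST detector (e.g. OpenCV's `cv2.FastFeatureDetector_create(threshold=...)`).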
1 School of Software, Liaoning Technical University, Huludao 125105, Liaoning, China
2 Department of Computer Science, Shantou Polytechnic, Shantou 515071, Guangdong, China
Existing hierarchical text-to-image generation methods use only upsampling for feature extraction in the initial image-generation stage. Upsampling is essentially a convolution operation, and the limitations of convolution cause global information to be ignored and prevent long-range semantic interaction. Although some methods add a self-attention mechanism to the model, problems such as missing image details and structural errors in the generated images remain. To address these problems, a generative adversarial network model based on self-supervised attention and image feature fusion, SAF-GAN, is proposed. A CotNet-based self-supervised module is added to the initial feature-generation stage, using an attention mechanism for autonomous mapping learning between image features; the contextual relationships among features guide a dynamic attention matrix, tightly combining context mining with self-attention learning and improving the quality of the generated low-resolution features, after which alternating training of the networks at different stages refines the generation of high-resolution images. A feature-fusion enhancement module is also added: by fusing the low-resolution features of the previous stage with the features of the current stage, the generator can fully exploit the high-level semantic information of low-level features and the high-resolution information of high-level features, better preserving the semantic consistency of feature maps at different resolutions and thereby generating realistic high-resolution images. Experimental results show that, compared with the baseline model (AttnGAN), SAF-GAN improves both IS and FID: on the CUB dataset the IS score increases by 0.31 and FID decreases by 3.45; on the COCO dataset the IS score increases by 2.68 and FID decreases by 5.18. SAF-GAN can effectively generate more realistic images, demonstrating the effectiveness of the method.
Computer vision; generative adversarial networks; text-to-image; CotNet; image feature fusion
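The abstract above fuses the previous stage's low-resolution features with the current stage's features. A generic NumPy sketch of such multi-scale fusion, nearest-neighbor upsampling followed by channel concatenation, is shown below; it illustrates the idea only and is not the SAF-GAN module, whose fusion is learned.

```python
import numpy as np

def fuse_stage_features(low, high):
    """Fuse a previous-stage low-resolution feature map with the current
    stage's higher-resolution map.

    low  : (C1, H, W)   previous-stage features
    high : (C2, 2H, 2W) current-stage features
    Returns a (C1 + C2, 2H, 2W) fused map that keeps the semantics of the
    low-level features alongside the detail of the high-level ones.
    """
    up = low.repeat(2, axis=1).repeat(2, axis=2)  # 2x nearest-neighbor upsample
    return np.concatenate([up, high], axis=0)     # channel-wise concatenation
```

In a real generator the concatenated map would then pass through learned convolutions that mix the two feature sources.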
1 School of Instrument Science and Opto-Electronics Engineering, Beijing Information Science and Technology University, Beijing 102206, China
2 Key Laboratory of Optoelectronic Measurement Technology and Instrument of the Ministry of Education, Beijing Information Science and Technology University, Beijing 102206, China
Optical coherence tomography (OCT) is an optical imaging method with high spatial resolution that enables non-contact, label-free two-dimensional cross-sectional and three-dimensional volumetric imaging of biological tissue, providing image information of important reference value for clinical diagnosis. In a conventional benchtop OCT system, the scanning probe is fixed on a stage; the probe is bulky and inflexible, which makes it unsuitable for imaging deep inside narrow cavities or for bedside examination. Our team designed a video-guided handheld high-speed OCT system whose handheld probe is compact and small, easy to grip and to insert into narrow cavities. A camera is integrated into the probe, so a video image of the imaged region can be obtained in real time to guide OCT imaging. The A-line scan rate of the system reaches 200 kHz. To overcome jitter during imaging, we propose an automatic image-registration algorithm that significantly improves image quality. Ex vivo porcine corneas and teeth were imaged to validate the system's performance; the results show that the system can acquire high-resolution tissue images at high speed.
Medical optics; optical coherence tomography; handheld probe; image registration
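The abstract above uses automatic image registration to compensate for hand-motion jitter between frames. A generic sketch of one standard registration primitive, estimating an integer translation between two frames by phase correlation, is shown below; it is not the paper's algorithm, just an illustration of the registration idea.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift mapping `ref` onto `mov`.

    The cross-power spectrum of the two frames is normalized to keep only
    phase; its inverse FFT is a delta-like peak at the relative shift.
    """
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    R /= np.abs(R) + 1e-12                     # keep phase only
    corr = np.fft.ifft2(R).real                # peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p - n) if p > n // 2 else int(p)   # wrap to signed shifts
                 for p, n in zip(peak, corr.shape))
```

The recovered shift can then be undone (e.g. by rolling or resampling the moving frame) before averaging or mosaicking B-scans.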