Advanced Photonics, 2024, 6 (1): 010501, Published Online: Jan. 22, 2024  

OAM-based diffractive all-optical classification

Md Sadman Sakib Rahman, Aydogan Ozcan

Author Affiliations
1 University of California, Electrical and Computer Engineering Department, Los Angeles, California, United States
2 University of California, Bioengineering Department, Los Angeles, California, United States
3 University of California, California NanoSystems Institute (CNSI), Los Angeles, California, United States
Abstract
This article comments on a recent framework for all-optical object classification using orbital-angular-momentum-encoded diffractive networks.

Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-hungry, and the throughput is limited. The existing von Neumann digital computing paradigm is also ill-suited to the implementation of highly parallel neural network architectures.1 Diffractive deep neural networks (D2NNs), also known as diffractive optical networks or diffractive networks, form an all-optical visual processor with the potential to address some of these issues, especially when the information of interest is already represented in the analog domain.2 Comprising a set of spatially engineered thin surfaces that successively modulate the incident light, a diffractive network performs information processing in the analog optical domain, completing its task as the light is transmitted through a passive thin optical volume. This all-optical computing architecture can benefit from the numerous degrees of freedom of light, such as the spectrum, polarization, phase, and amplitude, achieving improved parallelism and performance.3–7 As another degree of freedom, the orbital angular momentum has emerged as an important feature for enhancing the information capacity of optical systems.8 A helical wavefront exp(ilφ), where φ is the azimuthal angle and l is the helical mode index, carries a form of angular momentum known as the orbital angular momentum (OAM), which bears the potential to multiplex large amounts of optical information within the same physical space due to the theoretically unbounded mode index l.9
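
As a toy numerical illustration of the helical wavefront exp(ilφ) described above (not code from any of the cited works; the grid size, sampling radius, and mode index l = 3 are arbitrary choices), the following NumPy sketch builds an OAM phase mask and recovers l as the winding number of the phase around the beam axis:

```python
import numpy as np

# Illustrative sketch: sample a helical wavefront exp(i*l*phi) with an
# arbitrarily chosen mode index l = 3 on a Cartesian grid.
N, l = 256, 3
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
phi = np.arctan2(Y, X)            # azimuthal angle at each pixel
field = np.exp(1j * l * phi)      # unit-amplitude OAM phase mask

# Recover l as the winding number of the phase around a circle about the
# beam axis: the phase advances by 2*pi*l over one full loop.
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
ix = np.clip(np.round((0.5 * np.cos(theta) + 1.0) / 2.0 * (N - 1)).astype(int), 0, N - 1)
iy = np.clip(np.round((0.5 * np.sin(theta) + 1.0) / 2.0 * (N - 1)).astype(int), 0, N - 1)
ring = field[iy, ix]                         # field sampled along the loop
steps = np.angle(np.roll(ring, -1) / ring)   # wrapped phase increments
recovered_l = int(round(steps.sum() / (2.0 * np.pi)))
print(recovered_l)                # → 3
```

Because each wrapped phase increment along the loop is far smaller than π, their sum equals 2πl exactly, regardless of the pixelation of the mask.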

In their recently published paper, Zhang et al. bring the power of OAM mode multiplexing to all-optical object classification using diffractive optical networks.10 In their framework, information about the input object class is carried by the OAM distribution of the light. The beam that illuminates the object comprises several OAM mode components l, one corresponding to each object class, as shown in Fig. 1. The diffractive network is optimized in a data-driven manner using deep learning to reinforce the OAM component corresponding to the class of the input object at the expense of the others. The authors report a blind testing accuracy of 85.49% on the MNIST handwritten digit test dataset. While higher inference accuracies have been achieved on the same dataset using diffractive optical classifiers, this first attempt at OAM-based all-optical classification represents a highly promising approach.
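
The readout in such a scheme amounts to measuring the OAM spectrum of the output light and selecting the strongest component. A minimal sketch of that decision step follows; the mode-to-class mapping l = k + 1 and the toy output field are illustrative assumptions, not details from Ref. 10:

```python
import numpy as np

def oam_spectrum(ring_field, l_values):
    """Magnitude of the projection of a field, sampled uniformly on a
    circle, onto the azimuthal harmonics exp(i*l*theta)."""
    n = ring_field.size
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.array([abs(np.sum(ring_field * np.exp(-1j * l * theta))) / n
                     for l in l_values])

# Hypothetical encoding: digit class k is carried by mode l = k + 1.
l_values = np.arange(1, 11)
theta = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
# Toy output field: the network has reinforced l = 4 (class "3") while a
# competing mode (l = 7) is suppressed but not eliminated.
output = np.exp(1j * 4 * theta) + 0.2 * np.exp(1j * 7 * theta)
spectrum = oam_spectrum(output, l_values)
predicted = int(np.argmax(spectrum))   # index into l_values
print(predicted)                       # → 3, i.e., the l = 4 mode wins
```

With uniform sampling, the projection onto each integer harmonic is exact (a geometric sum of roots of unity), so the cross terms between distinct modes vanish identically.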

Fig. 1. OAM-based all-optical classification schemes with diffractive deep neural networks (D2NNs). (a) A single classification task with a single detector OAM-encoded D2NN. (b) Multiple classification tasks with a single detector OAM-encoded D2NN. (c) Multiple classification tasks with a multidetector OAM-encoded D2NN. Adapted from Ref. 10.

The authors proceed to demonstrate the multitasking capability of this framework, a capability shown earlier using phase-encoding schemes.11 In one realization, shown in Fig. 1(b), two different MNIST digits on the input plane are classified using a single detector, i.e., an OAM spectrum analyzer. The two strongest OAM components in the output light represent the classes of the input digits. This single-detector OAM-encoded D2NN for two-digit classification achieved a blind testing accuracy of 64.13%. However, this scheme fails for repeating classes, i.e., when both digits belong to the same class. This can be solved using phase encoding, as demonstrated earlier, even when there is spatial overlap between the input objects.11 As an alternative approach, M objects, possibly belonging to the same class, located at distinct positions on the input aperture can be classified by placing M detectors at the network output, as shown in Fig. 1(c). For example, the reported blind testing accuracy for M handwritten digits on the input plane with M detectors is 70.94% for M=2 and 40.13% for M=4.10
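
A toy numerical sketch of this single-detector, two-object readout (the spectrum values are invented, with index k standing for class k) also makes the repeating-class failure concrete:

```python
import numpy as np

# Invented OAM spectrum measured for two input digits; entry k is the
# power in the mode encoding class k.
spectrum = np.array([0.05, 0.90, 0.10, 0.08, 0.75,
                     0.04, 0.03, 0.06, 0.02, 0.07])
top2 = np.argsort(spectrum)[-2:]       # indices of the two strongest modes
print(sorted(top2.tolist()))           # → [1, 4]: the two predicted classes

# If both digits belonged to class 1, their energy would pile into a
# single mode, yet the readout must still return two distinct indices --
# the repeated class cannot be reported. The M-detector variant of
# Fig. 1(c) sidesteps this by giving each object its own OAM spectrum.
```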

Zhang et al. also showed that OAM-encoded D2NNs are resilient to misalignment errors, which is a major advantage in practical settings. By multiplexing a multitude of OAM modes on the input beam, it is possible to utilize OAM-encoded D2NNs to distinguish among a large number of input object classes. For example, earlier work4 reported the use of spectral encoding (with >50 wavelengths) to increase the number of classes that can be processed by a single-pixel diffractive optical network. Following a similar design strategy, it might also be possible to assign a separate task to each OAM mode for the same input scene. However, this would require advanced OAM mode multiplexing techniques, and its success would depend on the precision with which the different modes can be distinguished. The ability to generate OAM combs,12 together with the advancement of fabrication techniques for diffractive networks, could inspire novel OAM-encoded D2NNs for diverse applications such as sensing, imaging, and communication, taking this exciting work of Zhang et al. into new frontiers.

References

[1] H. Amrouch, et al. Towards reliable in-memory computing: from emerging devices to post-von-Neumann architectures. IFIP/IEEE 29th Int. Conf. Very Large Scale Integr. (VLSI-SoC), 2021: 1–6.

[2] X. Lin, et al. All-optical machine learning using diffractive deep neural networks. Science, 2018, 361(6406): 1004–1008.

[3] D. Mengu, et al. Analysis of diffractive optical neural networks and their integration with electronic neural networks. IEEE J. Sel. Top. Quantum Electron., 2020, 26(1): 3700114.

[4] J. Li, et al. Spectrally encoded single-pixel machine vision using diffractive networks. Sci. Adv., 2021, 7(13): eabd7690.

[5] Y. Luo, et al. Computational imaging without a computer: seeing through random diffusers at the speed of light. eLight, 2022, 2(1): 4.

[6] M. S. S. Rahman, A. Ozcan. Time-lapse image classification using a diffractive neural network. Adv. Intell. Syst., 2023, 5(5): 2200387.

[7] Y. Wang, et al. Matrix diffractive deep neural networks merging polarization into meta-devices. Laser Photonics Rev., 2023: 2300903.

[8] A. M. Yao, M. J. Padgett. Orbital angular momentum: origins, behavior and applications. Adv. Opt. Photonics, 2011, 3(2): 161–204.

[9] X. Fang, H. Ren, M. Gu. Orbital angular momentum holography for high-security encryption. Nat. Photonics, 2020, 14(2): 102–108.

[10] K. Zhang, et al. Advanced all-optical classification using orbital-angular-momentum-encoded diffractive networks. Adv. Photonics Nexus, 2023, 2(6): 066006.

[11] D. Mengu, et al. Classification and reconstruction of spatially overlapping phase images using diffractive optical networks. Sci. Rep., 2022, 12(1): 8446.

[12] S. Fu, et al. Orbital angular momentum comb generation from azimuthal binary phases. Adv. Photonics Nexus, 2022, 1(1): 016003.

Md Sadman Sakib Rahman, Aydogan Ozcan. OAM-based diffractive all-optical classification. Advanced Photonics, 2024, 6(1): 010501.
