Optica Publishing Group

Generative-adversarial-network-based dimensional measurement of optical waveguides

Open Access

Abstract

We propose a high-throughput and precise waveguide-dimensional-measurement method consisting of a generative adversarial network (GAN) and a curve-fitting-based dimensional calculator using sidewall functions. The GAN can learn the differences between low-magnification (LM) and high-magnification (HM) optical microscope images taken with different objective lenses at different magnifications over the same area. The LM and HM images of the waveguides are captured using an optical microscope at magnifications of 500× and 2000×, respectively. We obtained a standard deviation of the waveguide-width errors of approximately 0.8 pixels (∼42 nm) and confirmed precise width measurement using super-resolution images at the same imaging throughput as with an LM microscope.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Years of progress in semiconductor-manufacturing technologies have reduced the wiring dimensions of integrated circuits to nanoscale. As optical integrated circuits become more advanced, a nanoscale error in an optical integrated circuit can affect the velocities of optical signals, which impacts the wavelength dependence of optical transmittance [1]. Thus, a precise dimensional-measurement method for evaluating nanoscale circuit patterns is required to improve manufacturing yield.

A silica-based planar lightwave circuit (PLC) is one practical option for optical integrated circuits. PLC components are widely used as functional devices in fiber-optic communication systems because of their compactness, reliability, and high functionality [2–4]. A PLC is composed of stripe-shaped silica patterns fabricated on a silicon wafer, and optical signals propagate along these patterns. As the performance of PLC components improves, their wiring patterns become more complex [5,6], and a precise dimensional-measurement technique is required. The most straightforward high-precision approach is to use high-resolution microscopy such as scanning electron microscopy. However, because of its small field of view, it is difficult to inspect the fabricated patterns over a large area with feasible throughput. Therefore, a precise and high-throughput dimensional-measurement method is required.

Optical microscopy, one of the most ubiquitous observation methods, has been used in biology, surgery, engineering, and quality control in manufacturing processes. However, despite advances in semiconductor fabrication process technology, measurements using traditional optical microscopy often suffer from lateral-resolution limitations [7]. The resolution d of an optical microscope is defined as follows:

$$d = \frac{\lambda }{{2n\sin \theta }}$$
where n is the refractive index of the imaging medium and λ and θ are the wavelength and incident angle of the irradiating light, respectively. In a common dry objective lens, when λ is 400 nm, d is in the range of 200 to 400 nm. This makes it difficult to precisely measure errors in nanoscale waveguide dimensions using traditional optical microscopy. A possible solution is to process low-resolution (LR) microscope images using machine-learning-based super-resolution (SR) techniques.
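As a quick numerical check of the Abbe limit above, the sketch below evaluates d for a dry objective (n = 1.0) at λ = 400 nm; the two numerical apertures used (0.5 and 1.0) are illustrative choices that bracket the 200–400 nm range stated in the text, not values from the paper.

```python
import math

def diffraction_limit(wavelength_nm: float, n: float, theta_rad: float) -> float:
    """Abbe diffraction limit d = lambda / (2 * n * sin(theta)), in nm."""
    return wavelength_nm / (2.0 * n * math.sin(theta_rad))

# Dry objective (n = 1.0), 400 nm illumination.
# NA = n*sin(theta) values of 0.5 and 1.0 bracket the 200-400 nm range.
d_low_na = diffraction_limit(400.0, 1.0, math.asin(0.5))   # 400 nm
d_high_na = diffraction_limit(400.0, 1.0, math.asin(1.0))  # 200 nm
```

Either way, d stays well above the nanoscale width deviations of interest, which is what motivates the SR approach.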

Image processing with deep learning models has been rapidly developing. Generative adversarial networks (GANs) proposed by Goodfellow et al. [8] have received considerable attention due to their high performance such as unsupervised representation learning [9], precise anomaly detection [10], and high-quality SR [11,12]. However, there are a limited number of studies on precise dimensional measurements using microscopy. Grant-Jacob et al. investigated SR biological imaging using a conditional GAN [13]. In their study, a low-magnification (LM) image was converted into a high-magnification (HM) equivalent image with a conversion error of several pixels because the conditional GAN requires the elements of the image to be precisely labeled for generalization. Zhang et al. applied the GAN to optical microscopy to achieve single-image SR (SISR) over a large field of view [14]. With the SISR method, original microscope images are input to the discriminator, and down-sampled images generated from the original microscope images are input to the generator; thus, a simple deblurring effect can be expected. They proposed an effective image down-sampling model to generate LR images for training and demonstrated SR image generation from LR images. However, their resolution was microscale and insufficient to achieve nanoscale dimensional measurement.

Precise measurements that require high-magnification (HM) images over a wafer are time consuming. The circuit dimensions must be precisely measured with feasible throughput in microscope-based visual inspections. We propose a high-throughput and precise waveguide-dimensional measurement method consisting of the GAN and a curve-fitting-based dimensional calculator using sidewall functions based on the multiplication of the Gaussian and sigmoid functions. With the proposed method, LM images are input to a generator and HM images are input to a discriminator. The generator can learn the differences between LM and HM images taken with different objective lenses at different magnifications over the same area. Toward the establishment of a sub-pixel-width measurement method using a sidewall function, we demonstrated the effectiveness of the proposed method for the HM-equivalent actual inspection of PLC component manufacturing at the same imaging throughput as an LM microscope.

2. Target

The sharp increase in the demand for Internet services is driving the rapid development of high-speed and high-capacity optical networking such as the wavelength division multiplexing (WDM) system [15]. WDM is a high-speed and high-capacity information transmission method that simultaneously transmits optical signals of different optical frequencies through a single optical fiber cable. A multi/demultiplexer (MUX/DEMUX) is an indispensable device in WDM systems, and various types of MUX/DEMUX devices have been studied [16]. Among these, the arrayed waveguide grating (AWG) is one of the most complex PLC components used for WDM systems [17].

Figure 1 shows an optical microscope image of a fabricated AWG. The AWG performs multiplexing and demultiplexing of wavelength signals on the basis of diffraction-grating spectroscopy, in which waveguides are arranged in an array. The AWG contains bending waveguides with a constant path-length difference between neighboring waveguides. The central optical frequency ${f_0}$ of the light output from the AWG is a typical feature that determines the performance of the AWG and is calculated as

$${f_0} = \frac{{mc}}{{{n_{\textrm{eff}}}\Delta L}},$$
where ${n_{\textrm{eff}}}$ is the effective refractive index of the arrayed waveguide, $\Delta L$ is the constant path-length difference between the neighboring waveguides, m is an integer, and c is the speed of light. In the fabricated AWG, the deviation of ${n_{\textrm{eff}}}$ degrades the stability of ${f_0}$. Here, ${n_{\textrm{eff}}}$ is determined by the refractive index of the waveguide material and dimensions of the waveguide [18]. The nanoscale deviation of the waveguide width impacts ${f_0}$ [1], so careful visual inspection and precise measurement are indispensable for efficient manufacturing. The waveguide-width deviation is defined as the difference between the nominal width and actual waveguide width.
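The sketch below evaluates Eq. (2) numerically. The parameter values are illustrative assumptions only (not taken from the paper): n_eff ≈ 1.45 for a silica waveguide, grating order m = 30, and a path-length difference of about 32 µm, which lands f0 near the 193 THz C-band region used by DWDM systems.

```python
C = 299_792_458.0  # speed of light, m/s

def awg_center_frequency(m: int, n_eff: float, delta_L_m: float) -> float:
    """Eq. (2): f0 = m * c / (n_eff * delta_L), in Hz."""
    return m * C / (n_eff * delta_L_m)

# Illustrative values only: silica n_eff, m = 30, delta_L ~ 32.1 um.
f0 = awg_center_frequency(m=30, n_eff=1.45, delta_L_m=32.1e-6)  # ~193 THz
```

Because f0 scales as 1/n_eff, any deviation of n_eff (and hence of the waveguide width that determines it) shifts the center frequency directly.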

Fig. 1. Optical microscope image of AWG (20× magnification). Inset waveguide is at 500× magnification.

The stability of ${f_0}$ is crucial for WDM systems. For example, according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), the center-wavelength deviation should not exceed 1/10th of the optical channel spacing [19]. Therefore, in a dense WDM (DWDM) system with a channel spacing of 100 GHz [20], the center-wavelength-deviation limit is assumed to be 10 GHz (${f_0} \pm 5$ GHz). As shown in Fig. 2, we calculated the waveguide-width dependence of ${f_0}$ (0.074 GHz/nm) using Eq. (2) and Marcatili's method [21]. We determined that the waveguide-width deviation should be within ±70 nm to satisfy the requirement (±5 GHz) for ${f_0}$ stability of the DWDM system (100 GHz spacing). Thus, a waveguide-width measurement method that can stably discriminate ±70 nm is needed.
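The ±70 nm budget follows directly from the sensitivity quoted above; a minimal arithmetic check:

```python
# Sensitivity from Fig. 2 of the text: 0.074 GHz of f0 shift per nm of width.
sensitivity_ghz_per_nm = 0.074

# The 10 GHz deviation limit (1/10th of the 100 GHz DWDM channel spacing)
# splits into +/-5 GHz around f0.
f0_tolerance_ghz = 5.0

# Allowed one-sided width deviation: ~67.6 nm, rounded to a +/-70 nm budget.
width_tolerance_nm = f0_tolerance_ghz / sensitivity_ghz_per_nm
```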

Fig. 2. Waveguide-width dependence of outputted central optical frequency of AWG.

3. Method

As mentioned previously, the waveguide dimensions must be precisely measured with feasible throughput in microscope-based visual inspections. However, precise measurements that require HM images over a wafer are time consuming. One possible countermeasure is to process LM images using SR methods. Figure 3 shows a schematic illustration of optical microscopy. Microscope images are obtained with a resolution on the same scale as the diffraction limit of the incident light. Therefore, this is not a simple deblurring problem, because the reflected-light-intensity distribution, expressed as the superposition of distorted Gaussian beams, exists in real space, as shown in Fig. 4. Figure 4 shows the brightness-value distributions of the LM and HM images. The HM objective lens produces a dark and sharp image. Because the distortion factor of the brightness-value distribution depends on the lens type and individual shape differences, an approach using a deep learning model trained for each microscope would be highly effective.

Fig. 3. Schematic illustration of optical microscopy for silica-based waveguide imaging.

Fig. 4. Brightness-value distribution in direction of waveguide cross section.

We combine a simple vanilla GAN with a sidewall detector that uses a function based on the multiplication of the Gaussian and sigmoid functions. GANs have received considerable attention for their SR quality, as they make it possible to learn generative models that produce more detailed, realistic images than other SR methods [8]. Radford et al. investigated deep convolutional GANs and showed that GANs are capable of capturing semantic image content, enabling vector operations on visual concepts [9]. Ledig et al. devised a GAN for image SR, the first framework capable of inferring photo-realistic natural images for high upscaling factors [11]. We applied and extended the GAN to high-throughput and precise dimensional measurement.

Figure 5(a) shows the GAN training phase. LM and HM data are generated by aligning LM and HM images by pattern matching and then trimming the regions containing the arrayed waveguides. The LM and HM data are input to the generator and discriminator, respectively. We followed the network architectural guidelines described by Ledig et al. [11]. The objective of the discriminator is to differentiate real HM images from generated SR images, whereas the generator is trained to confuse the discriminator as much as possible by generating more realistic SR images. The GAN is trained using a perceptual-loss-based function [22]. The generator, which is trained by referring to the actual acquired LM and HM images, outputs a shape-informative SR image.

Fig. 5. Architecture of waveguide-width measurement method. (a) GAN training phase and (b) operation phase of proposed method.

We trained the generator using a feed-forward convolutional neural network (CNN). In the CNN, the weights and biases of a deep network are used as parameters. A perceptual loss, the weighted sum of a content loss and an adversarial loss, is used for training the generator. We minimized the adversarial loss $l_{Gen}^{SR}$, which is defined as

$$l_{Gen}^{SR} = \mathop \sum \limits_{i = 1}^N - \log {D_{{\theta _D}}}({{G_{{\theta_G}}}({{I^{LM}}} )} ),$$
where ${I^{LM}}$ is the LM image and ${D_{{\theta _D}}}({{G_{{\theta_G}}}({{I^{LM}}} )} )$ is the probability that the generated image ${G_{{\theta _G}}}({{I^{LM}}} )$ is an HM image. As the content loss, we use the Visual Geometry Group (VGG)-network-based perceptual loss (VGG loss), computed on the rectified-linear-unit activation layers of the pre-trained 16-layer VGG network [23]. The VGG loss is defined using the feature map obtained by the 9th convolution of this pre-trained model. As mentioned above, the perceptual loss is the weighted sum of the content loss and adversarial loss, weighted 1:0.001, respectively. The trained generator converts LM images into SR images with HM-image-equivalent resolution.
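The loss bookkeeping of Eq. (3) and the 1:0.001 weighting can be sketched as follows. This is a toy NumPy illustration only: the array of discriminator probabilities stands in for a real discriminator network, and `perceptual_loss` takes precomputed scalar losses rather than computing VGG features.

```python
import numpy as np

def adversarial_loss(d_probs: np.ndarray) -> float:
    """Eq. (3): sum over the batch of -log D(G(I_LM)).

    d_probs holds the discriminator's probability that each generated SR
    image is a real HM image; pushing these toward 1 drives the loss toward
    0, which is the generator's training objective.
    """
    eps = 1e-12  # guard against log(0)
    return float(-np.sum(np.log(d_probs + eps)))

def perceptual_loss(content_loss: float, adv_loss: float) -> float:
    """Weighted sum from the text: VGG content loss + 0.001 * adversarial loss."""
    return content_loss + 1e-3 * adv_loss
```

For example, a batch the discriminator fully accepts (`d_probs` all 1.0) contributes essentially zero adversarial loss, while rejected samples contribute −log of their acceptance probability.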

The sidewalls of the waveguides are identified by convoluting a sidewall function into the microscope images. The waveguide widths can be calculated using the distance between the sidewalls. We formulate a sidewall function as

$$V(x )= \frac{{A\textrm{exp}\left[ { - \frac{{{{({x - {x_0}} )}^2}}}{{{w^2}}}} \right]}}{{1 + \textrm{exp}[{B({x - {x_0}} )} ]}} + C({x - {x_0}} )+ D,$$
where $V(x )$ is the brightness distribution of the microscope image in the direction of the waveguide cross section, A (usually negative) is the amplitude of the Gaussian distribution, B and C are parameters related to the distortion factor, D is an intercept, w is the width of the Gaussian distribution, and ${x_0}$ is regarded as the center of the sidewall.

Equation (4) is based on the multiplication of the Gaussian and sigmoid functions. The waveguide sidewalls, which can be fitted using Eq. (4) in the SR image, are detected with high probability. Here, ${x_0}$ in Eq. (4) corresponds to the sub-pixel coordinates where the sidewall exists, and the waveguide width is calculated using the coordinates.
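A minimal sketch of sub-pixel sidewall localization with Eq. (4) is shown below. All parameter values are synthetic, and the estimation step is deliberately simplified: it scans candidate ${x_0}$ values on a fine grid with the other parameters held fixed, as a stand-in for the full nonlinear curve fitting used in the paper.

```python
import numpy as np

def sidewall(x, A, B, C, D, w, x0):
    """Eq. (4): Gaussian multiplied by a sigmoid, plus a linear background."""
    return (A * np.exp(-((x - x0) ** 2) / w ** 2)
            / (1.0 + np.exp(B * (x - x0)))
            + C * (x - x0) + D)

# Synthetic brightness profile with a known sub-pixel sidewall center.
x = np.arange(32.0)  # pixel coordinates across the waveguide cross section
true = dict(A=-80.0, B=0.8, C=0.2, D=120.0, w=3.0, x0=10.3)
profile = sidewall(x, **true)

# Grid-search x0 to 0.01-pixel resolution (other parameters held at truth).
candidates = np.arange(9.0, 12.0, 0.01)
errors = [np.sum((profile - sidewall(x, true["A"], true["B"], true["C"],
                                     true["D"], true["w"], c)) ** 2)
          for c in candidates]
x0_est = candidates[int(np.argmin(errors))]  # recovers ~10.3 (sub-pixel)
```

Two such ${x_0}$ estimates, one per sidewall, give the waveguide width as their difference, which is how sub-pixel width measurement follows from pixel-resolution images.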

4. Results

First, we evaluated the fitting accuracy of Eq. (4). Commonly used polynomial curve fitting is inadequate for comparison because it requires additional processing to locate the sidewall position, and the resulting error propagation degrades the width-measurement results. Therefore, we compared Eq. (4) with simple weighted fitting using the Gaussian function.

Figure 6 shows the fitting results of Eq. (4) and the Gaussian function for a randomly selected grayscale waveguide sidewall. For 10 randomly selected sidewalls, the root-mean-square errors of Eq. (4) and the Gaussian function were 3.10 and 6.12, respectively. As shown in Fig. 6, the bottom of the fitted Gaussian function tended to shift away from the sidewall.

Fig. 6. Fitting results of Eq. (4) and Gaussian function for randomly selected grayscale waveguide sidewalls.

Next, we present quantitative results of waveguide-width measurement using the proposed method. With the SISR method, a common SR method, high-resolution (HR) images were input to a discriminator, and LR images (usually downsampled HR images) were input to a generator [24]. We compared the width-measurement results of SR images generated with the SISR-based GAN with those generated with the proposed method.

The training and testing images were microscope images obtained from AWGs manufactured on different days. The LM and HM images of the waveguides were captured using an optical microscope at magnifications of 500× and 2000×, respectively. We fabricated 800 AWGs in the semiconductor manufacturing process and acquired 8000 training images (4000 LM and 4000 HM) using the optical microscope. For demonstration, 50 AWGs and 500 testing images (250 LM and 250 HM) were randomly extracted from testing AWG sets manufactured on different days. Throughout the capturing process, the same individual LM or HM objective lens was always used. The extracted LM images were input to the generator and converted to SR images. Figures 7(a)–(d) show the evaluated microscope images. We trained all networks on an NVIDIA GeForce RTX 3090 GPU and generated SR images. The networks were trained for 2500 epochs with a batch size of 8, an input image size of 224 × 224, and a learning rate of 0.01. The training time was approximately 30 minutes.

Fig. 7. Microscope images of (a) interpolated LM image using bilinear interpolation, (b) HM image, (c) SR image using conventional SISR method, and (d) shape-informative SR image using proposed method. Waveguide width error histograms of (e) conventional SISR and (f) proposed methods. Histogram obtained using waveguide width errors of interpolated LM images is shown for comparison. Single pixel corresponds to approximately 50 nm. Bootstrap resampled average width error distributions of (g) conventional SISR and (h) proposed method (1000 resamples). 95% confidence interval of average width error is approximately -0.10 to 0.10 pixels for conventional SISR images and -0.08 to 0.08 pixels for proposed shape-informative SR images.

The structural similarity index measure (SSIM) is commonly used to evaluate image quality, including structural similarity [25]. To evaluate the structural similarity of the waveguides in the LM, SISR, and proposed shape-informative SR images to the HM images, we calculated their average SSIM values, which were 0.834, 0.838, and 0.922, respectively.

The waveguide sidewalls in all the testing images were fitted using Eq. (4), and the waveguide widths were calculated from the difference of the ${x_0}$ values obtained for each sidewall. Figures 7(e) and 7(f) show histograms of the resulting width measurements of the SR images generated using the SISR and proposed methods, respectively. For comparison, a histogram obtained using the waveguide-width errors of the interpolated LM images is also shown. The width error was calculated by subtracting the width measured using the HM image from the measurement result of the interpolated LM image or SR image at the same position. The standard deviation of the width errors using the interpolated LM images was approximately 1.6 pixels, and that using the SR images generated with the SISR method was approximately 1.3 pixels. With the proposed method, we obtained a standard deviation of the waveguide-width errors of approximately 0.8 pixels (∼42 nm in real space), which is 62% of that of the SISR method.

For unseen AWG images (250 LM and 250 HM) produced on a different day from the training data, we resampled the average width errors from the 250 images (approximately 2500 waveguides) 1000 times to produce the bootstrap average-width-error distributions shown in Figs. 7(g) and 7(h). The 95% confidence interval of the average width error is approximately −0.10 to 0.10 pixels for conventional SISR images and −0.08 to 0.08 pixels for the proposed shape-informative SR images. These results show that the generator, which is trained by referring to the actual acquired LM and HM images, outputs a shape-informative SR image.
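The bootstrap procedure above can be sketched in a few lines. The per-image width errors here are synthetic stand-ins (a zero-mean normal sample with an assumed spread), since the paper's measured errors are not published; only the resampling mechanics mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-image average width errors in pixels (illustrative only;
# the paper uses ~250 real test images).
errs = rng.normal(loc=0.0, scale=0.6, size=250)

# Bootstrap the mean: resample the 250 values with replacement 1000 times.
boot_means = np.array([
    rng.choice(errs, size=errs.size, replace=True).mean()
    for _ in range(1000)
])

# Percentile-based 95% confidence interval for the average width error.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

A narrow interval straddling zero, as reported in the text, indicates no systematic width bias in the generated SR images.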

The measurement precision of the proposed method cannot be improved beyond that of the HM images in principle. The standard deviation of HM images is approximately 0.6 pixels, and this value is included in the width errors in Fig. 7. Therefore, for further precision improvement, a higher-resolution microscope image should be prepared as input to the discriminator. Variations in the results of the waveguide-width measurement can be further reduced by averaging the results along the longitudinal direction of the waveguide.

5. Conclusion

We proposed a waveguide-dimensional-measurement method that uses shape-informative SR microscope images and consists of a GAN and a curve-fitting-based dimensional calculator using sidewall functions. The GAN could learn the differences between LM and HM images taken with different objective lenses at different magnifications over a large area. We demonstrated SR-image generation and width measurement of silica-based optical waveguides at the same imaging throughput as an LM microscope. The LM and HM images of the waveguides were obtained using an optical microscope at magnifications of 500× and 2000×, respectively. We obtained a standard deviation of the waveguide-width errors of approximately 0.8 pixels (∼42 nm), which is 62% of that of the SISR method. We verified that our method can be used for high-throughput and high-precision dimensional measurement, especially in PLC-component manufacturing.

Disclosures

MO, KY: NTT Corporation (E, P). KS: NTT Corporation (E, P), Photonics Electronics Technology Research Association (E).

Data Availability

Data underlying the results presented in this paper are not publicly available.

References

1. T. Goh, S. Suzuki, and A. Sugita, “Estimation of Waveguide Phase Error in Silica-based Waveguides,” J. Lightwave Technol. 15(11), 2107–2113 (1997). [CrossRef]  

2. A. Kaneko, T. Goh, H. Yamada, T. Tanaka, and I. Ogawa, “Design and applications of silica-based planar lightwave circuits,” IEEE J. Select. Topics Quantum Electron 5(5), 1227–1236 (1999). [CrossRef]  

3. K. Suzuki, K. Seno, and Y. Ikuma, “Application of Wave-guide/Free-Space Optics Hybrid to ROADM Device,” J. Lightwave Technol. 35(4), 596–606 (2017). [CrossRef]  

4. K. Yamaguchi, Y. Ikuma, M. Nakajima, K. Suzuki, M. Itoh, and T. Hashimoto, “M × N Wavelength Selective Switches Using Beam Splitting by Space Light Modulators,” IEEE Photonics J. 8(2), 1–9 (2016). [CrossRef]  

5. Y. Ma, K. Suzuki, I. Clarke, A. Yanagihara, P. Wong, T. Saida, and S. Camatel, “Novel CDC ROADM Architecture Utilizing Low Loss WSS and MCS without Necessity of Inline Amplifier and Filter,” in Optical Fiber Communication Conference (OFC) 2019, OSA Technical Digest (Optical Society of America, 2019), paper M1A.3.

6. O. Moriwaki and K. Suzuki, “Fast Switching of 84 µs for Silica-based PLC Switch,” in Optical Fiber Communication Conference (OFC) 2020, OSA Technical Digest (Optical Society of America, 2020), paper Th3B.5.

7. S. S. Kaderuppan, E. W. L. Wong, A. Sharma, and W. L. Woo, “Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy,” IEEE Access 8, 214801–214831 (2020). [CrossRef]  

8. I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Proceedings of Advances in Neural Information Processing Systems, 2672–2680 (2014).

9. A. Radford, L. Metz, and S. Chintala, “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks,” arXiv:1511.06434 (2015).

10. T. Schlegl, P. Seebock, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, “Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery,” In Proceedings of International Conference on Information Processing in Medical Imaging, 146–157 (2017).

11. C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 4681–4690 (2017).

12. X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. Change Loy, “ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks,” In Proceedings of the European Conference on Computer Vision Workshops, (2018).

13. J. A. Grant-Jacob, B. S. Mackay, J. A. G. Baker, Y. Xie, D. Heath, M. Loxham, R. Eason, and B. Mills, “A neural lens for super-resolution biological imaging,” J. Phys. Commun. 3(6), 065004 (2019).

14. H. Zhang, F. Chunyu, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019). [CrossRef]  

15. K. Iwatsuki and J. Kani, “Applications and Technical Issues of Wavelength-Division Multiplexing Passive Optical Networks with Colorless Optical Network Units,” J. Opt. Commun. Netw. 1(4), C17–C24 (2009). [CrossRef]  

16. A. Himeno, K. Kato, and T. Miya, “Silica-based Planar Lightwave Circuits,” IEEE J. Select. Topics Quantum Electron 4(6), 913–924 (1998). [CrossRef]  

17. Y. Hibino, “Recent advances in high-density and large-scale AWG multi/demultiplexers with higher index-contrast silica-based PLCs,” IEEE J. Select. Topics Quantum Electron 8(6), 1090–1101 (2002). [CrossRef]  

18. E. A. J. Marcatili, “Slab-coupled waveguides,” Bell Syst. Tech. J. 53(4), 645–674 (1974). [CrossRef]  

19. Recommendation ITU-T G. 692, “Optical interfaces for multi-channel systems with optical amplifiers,” https://www.itu.int/rec/T-REC-G.692-199810-I (1998), Accessed: 2021-11-01.

20. Recommendation ITU-T G. 694.1, “Spectral grids for WDM applications: DWDM frequency grid,” https://www.itu.int/rec/T-REC-G.694.1 (2020), Accessed: 2021-11-01.

21. C. Dong, C. C. Loy, and X. Tang, “Accelerating the Super-Resolution Convolutional Neural Network,” in Proceedings of European Conference on Computer Vision, 391–407 (2016).

22. C. Y. Yang, C. Ma, and M. H. Yang, “Single-image super-resolution: A benchmark,” in Proceedings of European Conference on Computer Vision, 372–386 (2014).

23. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556 (2014).

24. W. Yang, X. Zhang, Y. Tian, W. Wang, J. H. Xue, and Q. Liao, “Deep Learning for Single Image Super-Resolution: A Brief Review,” IEEE Trans. Multimedia 21(12), 3106–3121 (2019). [CrossRef]  

25. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]



