Optica Publishing Group

Block-based compressed sensing for fast optic fiber bundle imaging with high spatial resolution

Open Access

Abstract

The resolution of traditional fiber bundle imaging is usually limited by the density and the diameter of the fiber cores. To improve the resolution, compressed sensing was introduced to resolve multiple pixels from a single fiber core, but current methods have the drawbacks of excessive sampling and long reconstruction time. In this paper, we present, what we believe to be, a novel block-based compressed sensing scheme for fast realization of high-resolution optic fiber bundle imaging. In this method, the target image is segmented into multiple small blocks, each of which covers the projection area of one fiber core. All block images are independently and simultaneously sampled, and the intensities are recorded by a two-dimensional detector after they are collected and transmitted through the corresponding fiber cores. Because the size of the sampling patterns and the number of samplings are greatly reduced, the reconstruction complexity and reconstruction time are also decreased. According to the simulation analysis, our method is 23 times faster than current compressed sensing optical fiber imaging for reconstructing a fiber image of 128 × 128 pixels, while the sampling number is only 0.39% of that of the latter. Experimental results demonstrate that the method is also effective for reconstructing large target images and that the number of samplings does not increase with the size of the image. Our findings may provide a new route to high-resolution real-time imaging with fiber bundle endoscopes.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Endoscopy has been widely applied in medical inspection and industrial detection. In the medical field especially, endoscopic imaging technology assists the investigation of the health status inside the human body and meets the needs of cell biopsy and in vivo tissue observation [1–3]. Recently, in order to observe organs or tissues in a non-invasive or minimally invasive manner, the application of optic fiber imaging in endoscopic technology has attracted extensive attention [4]. Compared to the traditional electronic endoscope, the optic fiber probe has a smaller diameter and higher flexibility. However, due to the structural features of the optical fiber, the visual quality of images transmitted directly through the optical fiber is poor.

Research has been carried out to improve the quality of fiber imaging. Various strategies have been proposed for single multimode fiber imaging configurations, such as wavefront shaping [5–10], deep learning [11,12], and speckle analysis [13,14]. Although imaging quality can be improved, these methods face the challenges of the time cost of distal scanning, vulnerability to fiber bending, repeated calibration, iterative optimization, or the large datasets required for model training. Moreover, a substantial breakthrough is still needed to overcome the bend sensitivity of fibers. Therefore, fiber bundles and multi-core fibers remain the main forms of fiber endoscopes in current clinical applications, such as optical coherence tomography (OCT) [15], confocal endomicroscopy [16–18], multi-photon endomicroscopy [19,20] and structured illumination endomicroscopy [21,22]. However, in traditional fiber bundle endoscopy, each core in a fiber bundle resolves only a single pixel [23], so the spatial resolution is limited. Resolution enhancement of fiber bundles can be achieved by deep learning methods [24], but training the network requires a comparatively high computational cost.

In order to improve the image resolution and enhance the robustness of the imaging, compressed sensing (CS) has been introduced into fiber optic imaging [25–30]. The resolution of CS imaging depends on the size of the sensing masks, so the single-mode limitation of the fiber cores in conventional fiber bundle imaging can be bypassed. The method of capturing a sparse matrix by sequentially lighting the cores of a multi-core fiber has proven to be insensitive to fiber bending in confocal fluorescence imaging endoscopes [31]. However, it requires measuring the sensing masks in advance. A computational strategy has been proposed that utilizes sparse reconstruction algorithms to resolve multiple pixels within each core of the bundle, enabling resolution enhancement of fiber bundle imaging [32]. However, the complex calculation procedures are time-consuming for large targets.

Block-based compressed sensing (BCS) [33] can greatly reduce the computational complexity of sparse reconstruction by sampling the target in blocks and reducing the number of samplings. Different from traditional CS, in which the target is sampled by patterns of the same size as the target, BCS divides the target into multiple blocks and samples the blocks sequentially with patterns of the same size as the blocks. Because the blocks are much smaller than the whole target, the number of variables to be solved in the reconstruction algorithm is much smaller than in CS, and consequently the calculation time is greatly reduced. Compared to the traditional CS method, the sensing matrix of BCS is more flexible and the reconstruction of large objects by BCS is more efficient [34,35].

In this paper, we propose a novel fiber bundle imaging method based on BCS. In this method, the target area is segmented into multiple blocks, each of which covers the projection area of one fiber core. These blocks are independently and simultaneously illuminated by sampling patterns. Because the sampling patterns can be identical across all the blocks in each sampling run, the size of the sampling patterns is greatly reduced from the full image size to the block size, and the number of samplings is correspondingly reduced by the same factor. The target area overlaid with the sampling patterns is then projected onto the entrance end of the fiber bundle, so that each fiber core collects and transmits the intensity of one block. By utilizing the transmission independence between different fiber cores, the intensity signals of all the fiber cores, i.e. all the blocks, can be recorded by a two-dimensional camera simultaneously. In this way, the spatial scanning over the blocks in the original BCS is avoided. Because of the great decrease in the sampling mask size and the number of samplings, the reconstruction complexity and reconstruction time are enormously reduced. Compared to CS, we experimentally show that the number of samplings is reduced by roughly a factor of one thousand and the reconstruction time is more than twenty times shorter, while the reconstructed image quality is similar. Moreover, compared with the traditional optic fiber bundle imaging method, the imaging resolution of each core is no longer limited to a single mode but is instead determined by the size of the sampling pattern. To the best of our knowledge, this is the first time the BCS method has been used for optical fiber imaging.

2. Principle

Traditionally, an optic fiber bundle can be viewed as a close-packed arrangement of multiple single-mode fibers with incoherent illumination, i.e. each core is treated as one pixel. The gaps between the cores cannot transmit signals. Therefore, image transmission through the bundle is discretized. The image sampling and modulation by a single core can be expressed as:

$${X_{sc\_i}} = {B_i} \cdot ({{K_g} \ast {X_i}} )$$
where ${X_{sc\_i}}$ is the output intensity distribution matrix of the i-th core of the bundle, ${K_g}$ is the convolution kernel of a single core corresponding to the modulation effect of the fiber transmission, * denotes convolution, ${X_i}$ is the sub-image of the original whole image X sampled by the i-th core of the bundle, and the dot ${\cdot}$ denotes elementwise multiplication. ${B_i}$ is a binary matrix of the i-th core, in which elements with value ‘1’ represent the core area and ‘0’ represents the gap area. The size of image X is set to I × I and the dimensions of ${X_{sc}}$, ${X_i}$ and ${B_i}$ are all set to n × n. The number of cores c determines the number of sub-images; usually $c < {I^2}/{n^2}$. If $R_{i = 1}^c[ \cdots ]$ means splicing all sub-images according to the core positions, the output image of the whole fiber bundle can be expressed as:
$${X_{fb}} = R_{i = 1}^c[{{X_{sc\_i}}} ]$$
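Equations (1) and (2) can be sketched numerically. The following is a minimal NumPy illustration (not the authors' code; the kernel width, core size, and function names are assumptions made for this sketch):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    # Convolution kernel K_g modelling the blur of a single core (assumed Gaussian).
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def convolve2d_same(img, k):
    # Minimal zero-padded 'same'-size 2-D convolution.
    n, m = img.shape
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    kf = k[::-1, ::-1]
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kf)
    return out

def circular_mask(n):
    # Binary matrix B_i: 1 inside the circular core, 0 in the gap area.
    y, x = np.ogrid[:n, :n]
    c = (n - 1) / 2
    return ((x - c)**2 + (y - c)**2 <= (n / 2 - 0.5)**2).astype(float)

def core_output(X_i, B_i, K_g):
    # Eq. (1): X_sc_i = B_i . (K_g * X_i)
    return B_i * convolve2d_same(X_i, K_g)
```

Splicing all `core_output` results at their core positions then gives the bundle output $X_{fb}$ of Eq. (2).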

When the fiber core has no spatial resolution, the output intensity ${X_{sc}}$ can be approximated as a Gaussian distribution. Therefore, the resolution of the transmitted image ${X_{fb}}$ is limited by the diameter and the number of the fiber cores. To recover the spatial resolution of a single core from ${X_{sc}}$, the BCS method is introduced into the image reconstruction process. If the sampling area of each core is treated as a block, the sampling of each block can be expressed as:

$${Y_{sc}} = ({{B_i} \cdot M} )\odot {X_i} + nois{e_i}$$
where a series of n × n sensing masks M is used to sample the sub-image ${X_i}$, ${Y_{sc}}$ is the observation matrix with size $m \times n\textrm{ }({m \ll n} )$, and ${\odot}$ represents matrix multiplication. The sub-image ${X_i}$ can then be restored according to Eq. (3), and the original image X can be reconstructed by splicing all the restored sub-images, which can be expressed as $X = R_{i = 1}^c[{{X_i}} ]$. Equation (3) can be rewritten in vector form as:
$${y_{sc}} = \bar{M} \cdot {b_i} \odot {x_i} + nois{e_i}$$
where ${y_{sc}}$ is the vectorized observation, ${x_i}$ is the vectorized object and ${b_i}$ is the vectorized binary mask, so the size of ${y_{sc}}$ is ${m^2} \times 1$ and the sizes of ${x_i}$ and ${b_i}$ are ${n^2} \times 1$. $\bar{M}$ is an ${m^2} \times {n^2}$ sensing matrix. Each row vector of $\bar{M} \cdot {b_i}$ is a vectorized mask of ${B_i}$ multiplied by one mask M. The j-th observation ${y_{sc\_j}}$ of the i-th core is the sum of the grayscale values of the sub-image ${X_i}$ multiplied by the j-th sensing mask ${M_j}$ and modulated by the i-th core, which can be expressed as:
$${y_{sc\_j}} = \sum\limits_{p = 1}^n {\sum\limits_{q = 1}^n {{K_g} \ast ({{M_j} \cdot {X_{sc\_p,q}}} )\cdot {B_i}} } + nois{e_i}$$

In this paper, the observations ${y_{sc\_j}}$ of all the cores are collected by a two-dimensional detector at the same time, so the total number of samplings is $m^2$.
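The per-block measurement of Eq. (4) can be sketched as follows. This is an illustrative NumPy sketch; `bcs_measure` is a hypothetical helper name, and the mask series is left to the caller:

```python
import numpy as np

def bcs_measure(X_i, B_i, masks):
    # One block of Eq. (4): stack the vectorized sensing masks, weight them
    # by the vectorized core mask b_i, and apply them to the vectorized
    # sub-image x_i. Each entry of y is one detected intensity value.
    x = X_i.ravel()
    b = B_i.ravel()
    Phi = np.stack([M.ravel() for M in masks]) * b  # sensing matrix M_bar . b_i
    return Phi @ x, Phi
```

Because the same mask series drives every block, all cores can be measured in parallel by the two-dimensional detector; the per-block sampling count equals `len(masks)`.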

For compressed sampling, the Hadamard transformation matrix is chosen as the sensing basis. The Hadamard matrix is a completely orthogonal square matrix of size $2^k \times 2^k$ (k = 2, 3, 4, …). The Hadamard basis is given by:

$${\hat{H}_{{2^k}}} = {\hat{H}_2} \otimes {\hat{H}_{{2^{k - 1}}}} = \left[ {\begin{array}{cc} {{{\hat{H}}_{{2^{k - 1}}}}}&{{{\hat{H}}_{{2^{k - 1}}}}}\\ {{{\hat{H}}_{{2^{k - 1}}}}}&{ - {{\hat{H}}_{{2^{k - 1}}}}} \end{array}} \right]$$
where ${\otimes}$ denotes the Kronecker product, k is a positive integer and ${\hat{H}_2} = \left[ {\begin{array}{cc} 1&1\\ 1&{ - 1} \end{array}} \right]$. The elements of the Hadamard transformation matrix are only ±1. However, the digital micromirror device (DMD) used in this paper can only load binary patterns, so the mask M is loaded by the difference method, in which the Hadamard matrix $\hat{H}$ is decomposed into two complementary matrices ${\hat{H}^ + } = {{({1 + \hat{H}} )} / 2}$ and ${\hat{H}^ - } = {{({1 - \hat{H}} )} / 2}$ that are projected in turn; subtracting the two corresponding detection values then recovers the ±1 measurement.
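The Sylvester recursion of Eq. (6) and the differential decomposition for a binary-only DMD can be sketched as follows (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def hadamard(k):
    # Sylvester construction of Eq. (6): H_{2^k} = H_2 (kron) H_{2^(k-1)}.
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(k):
        H = np.kron(H2, H)
    return H

def split_for_dmd(H):
    # Difference method for a binary-only DMD:
    # H+ = (1 + H)/2 and H- = (1 - H)/2, so that H = H+ - H-.
    return (1 + H) / 2, (1 - H) / 2
```

Projecting `H+` and `H-` in turn and subtracting the two detected values reproduces the ±1 Hadamard measurement.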

Equations (5) and (6) describe the generation of the detected values and the composition of the sensing mask in Eq. (4), respectively. Since $m \ll n$, Eq. (4) is an underdetermined problem. In this paper, total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) [36] is used to solve this problem and reconstruct the sub-image ${X_i}$. TVAL3 includes two parts: 1. defining the optimization problem by total variation minimization, which can be described as:

$$\min {\sum\nolimits_t {||{{\omega_t}} ||} _2},\textrm{ s}\textrm{.t}\textrm{. }\bar{M} \cdot {b_i} \odot {x_i} + nois{e_i} = {y_{sc}}\&{D_t}{x_i} = {\omega _t}\forall t$$
where ${\omega _t} = {D_t}{x_i} \in {{\mathbb R}^{2 \times 1}}$ is the discrete gradient of ${x_i}$ at position t, and ${||\cdots ||_2}$ is the ${\ell _2}$-norm; 2. applying the augmented Lagrangian multiplier method to the objective function to solve the optimization problem in Eq. (7):
$$\begin{aligned} \min {L_A}({{\omega_t},{x_i}} )&= \mathop {\min }\limits_{{\omega _t},{x_i}} \sum\limits_t {\left( {{{\|{{\omega_t}} \|}_2} - v_t^T({{D_t}{x_i} - {\omega_t}} )+ \frac{{{\beta_t}}}{2}\|{{D_t}{x_i} - {\omega_t}} \|_2^2} \right)} \\& \quad - {\lambda ^T}({\Phi \odot {x_i} - {y_{sc}}} )+ \frac{\mu }{2}\|{\Phi \odot {x_i} - {y_{sc}}} \|_2^2 \end{aligned}$$
where ${v_t} \in {{\mathbb R}^{2 \times 1}}$ and $\lambda \in {{\mathbb R}^{n \times 1}}$ are Lagrange multipliers and $\beta ,\mu \in {\mathbb R}$ are regularization parameters associated with the penalty terms of each constraint. $\Phi = \bar{M} \cdot {b_i}$ is the sensing matrix and ${(\cdots )^T}$ stands for transpose. By reconstructing, reshaping and stitching all the ${x_i}$, the recovered image X is obtained. The numbers of Lagrange multipliers ${v_t}$ and $\lambda$ in the CS method are ${I^2}/{n^2}$ times those in the BCS method, so the data size per iteration in the CS method is far larger than in the BCS method. Consequently, the BCS method requires much less computation, which reduces the complexity of image reconstruction.
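The total-variation objective of Eq. (7) can be illustrated through its discrete gradient operator ${D_t}$. A minimal NumPy sketch, assuming forward differences with replicated boundaries (the TVAL3 solver itself is not reproduced here):

```python
import numpy as np

def discrete_gradient(X):
    # D_t x: forward differences in the two directions; the pair
    # (gx[t], gy[t]) is the 2-vector omega_t = D_t x at pixel t.
    gx = np.diff(X, axis=1, append=X[:, -1:])
    gy = np.diff(X, axis=0, append=X[-1:, :])
    return gx, gy

def total_variation(X):
    # Objective of Eq. (7): sum over all t of the l2-norm of omega_t.
    gx, gy = discrete_gradient(X)
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```

A constant image has zero TV, while edges contribute in proportion to their length, which is why minimizing TV under the measurement constraint favors piecewise-smooth reconstructions.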

3. Simulation

The feasibility of the proposed method was first tested by simulation. We simulated image reconstruction by the BCS and CS methods respectively. The modulation effect of a fiber bundle on the image was simulated by adding Gaussian blur and random noise to the sub-image in each core. The size of the target image is 128 × 128 pixels (the units of all images are pixels unless specified otherwise). The Hadamard transformation matrix was used as the sensing mask. The Hadamard matrix was generated by the function hadamard.m in MATLAB, and the Hadamard basis patterns were obtained by multiplying each column with each row of this Hadamard matrix (vector outer products). In this way, an N × N Hadamard matrix produces $N^2$ orthogonal patterns of size N × N. By sequentially using different Hadamard patterns to sample the target, single-pixel signal values are obtained. In order to improve the reconstruction quality at low sampling numbers, we chose the cake-cutting order (cc-order) [37,38] to set the priority of the generated Hadamard basis patterns in the sampling series. In cc-order, the Hadamard basis patterns are sorted according to the number of connected regions in each pattern. If a sampling pattern contains fewer connected regions, it has a higher possibility of sampling more details in the object. Therefore cc-order can effectively sample more details of the object with fewer samplings. The size of the block mask used in BCS is 8 × 8 pixels, and the size of the full mask used in CS is 128 × 128 pixels. Thus the full sampling number for BCS is 64 (128 when using the difference method for DMD projection) and for CS is 16384 (128 × 128). The processes of sampling the target image by block masks and full-size masks are shown in Fig. 1(a) and (b) respectively. In the CS method, the observations were accumulated over the pixels of the entire observed image. In BCS, the observations for each block were accumulated over the pixels in each core.
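The pattern generation and cc-order described above can be sketched as follows: build the Hadamard basis patterns from column-by-row outer products, then sort them by the number of 4-connected constant-sign regions. This is an illustrative NumPy implementation, not the authors' MATLAB code:

```python
import numpy as np
from collections import deque

def sylvester_hadamard(k):
    # Hadamard matrix of size 2^k x 2^k (as produced by MATLAB's hadamard.m).
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H

def count_regions(P):
    # Number of 4-connected constant-sign regions in a +/-1 pattern (BFS flood fill).
    seen = np.zeros(P.shape, dtype=bool)
    regions = 0
    for start in np.ndindex(P.shape):
        if seen[start]:
            continue
        regions += 1
        seen[start] = True
        q = deque([start])
        while q:
            i, j = q.popleft()
            for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if (0 <= a < P.shape[0] and 0 <= b < P.shape[1]
                        and not seen[a, b] and P[a, b] == P[i, j]):
                    seen[a, b] = True
                    q.append((a, b))
    return regions

def cake_cutting_order(H):
    # Basis patterns from column-by-row outer products, sorted so that
    # patterns with fewer connected regions are used first.
    N = H.shape[0]
    pats = [np.outer(H[:, i], H[j, :]) for i in range(N) for j in range(N)]
    return sorted(pats, key=count_regions)
```

Truncating the sorted series at the desired compression ratio gives the sampling pattern sequence for each block.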


Fig. 1. Sampling the target image with CS method and BCS method. (a) is the sampling process of CS method, in which the full-scale Hadamard matrices are superimposed on the mask of the whole fiber bundle to sample the target image. (b) is the sampling process of BCS method, in which the block Hadamard matrices are superimposed on each core of the bundle to sample the target image (the masks for different fiber cores are identical).


TVAL3 was used in the reconstruction process. The data processing in this paper was performed with MATLAB R2021a on an 11th Gen Intel Core i5-11400H @ 2.70 GHz with 16 GB RAM. The simulation results are shown in Fig. 2.


Fig. 2. Simulation for image reconstruction: (a) Target image. (b) Image with traditional optic fiber bundle imaging. (c) Reconstructed image by BCS method. (d) Reconstructed image by CS method. (e) Line profiles across a single fiber along the row marked with a red line in the zoom-in images in (a-d).


Figure 2(a) is the original image. Figure 2(b) shows the output image of traditional fiber bundle imaging. Because the fiber cores have no spatial resolution, each core has a Gaussian intensity distribution plus noise. Figure 2(c) shows the reconstruction result with the BCS method at a 30% compression ratio, i.e. 19 samplings. The compression ratio is the number of samplings over the total number of pixels (per block for BCS, per image for CS). The number of blocks is 248, consistent with the number of cores. The reconstruction process took 0.306 s. Figure 2(d) shows the reconstruction result with the CS method at a 30% compression ratio, i.e. 4915 samplings. The reconstruction process took 7.026 s. The difference method for the DMD was not used in the simulation. The pixel value profiles across the rows marked with red lines in the zoomed images are shown in Fig. 2(e). From Fig. 2, the contour of the image passed through the optic fiber cannot be distinguished with traditional fiber bundle imaging, but can be resolved by both the BCS and CS methods. In this numerical simulation, each fiber core contains 52 pixels, so the CS and BCS methods improve the image reconstruction resolution by about 50 times in terms of pixel numbers, compared with traditional optical fiber imaging. Moreover, BCS is 23 times faster than CS in image reconstruction by TVAL3 and its number of samplings is only 0.39% of that of CS. We have also tested other sparse algorithms for compressed imaging reconstruction; see section 2 in Supplement 1.

Figure 3 shows the reconstruction performance of the CS and BCS methods when the compression ratio is increased from 10% to 100%. As shown in Fig. 3(a), when the compression ratio is greater than 30%, both methods can reconstruct high-quality images, and the SSIM and PSNR values are also similar, as shown in Fig. 3(b). In Fig. 3(c), as the compression ratio increases, the operation time of the CS method greatly increases, while the operation time of the BCS method does not increase significantly. This proves that the BCS method has high reconstruction efficiency at different compression ratios.


Fig. 3. Comparison of the CS method and BCS method when the compression ratio increases from 10% to 100%. (a) Images reconstructed by the BCS method and CS method at different compression ratios. (b) Comparison of PSNR and SSIM between the CS and BCS methods. (c) Comparison of the reconstruction times of CS and BCS.


A comparison of the simulation results is shown in Table 1. After overlaying Fig. 2(a) with the corresponding binary mask, a numerical comparison is made with Fig. 2(c) and (d). In terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), the reconstruction results of the BCS method and the CS method are close, which proves that the BCS and CS methods have similar signal reconstruction ability in optic fiber bundle imaging. We also performed the same simulation analysis on eight different standard test images from the Digital Image Processing database [39], calculated the averages of the PSNR values, the SSIM values, and the computational time over the eight tests, and found similar results (see section 1 in Supplement 1).


Table 1. Comparison of the performance between BCS method and CS method

To compare the imaging resolution of the CS method, the BCS method and the conventional fiber bundle imaging method, the modulation transfer function (MTF) of the three methods was calculated based on Fig. 2(c) and Fig. 2(d) and is shown in Fig. 4. The MTF is defined as the contrast of the reconstructed image over that of the original image at different frequencies; a higher MTF value therefore means less contrast loss, and the maximum value is 1. The MTF is usually used as a measure of the spatial resolution of an optical imaging system, so in this paper we calculated the MTF to compare the imaging resolution of the different methods (the calculation of the MTF is detailed in Supplement 1). The 12 groups of line pairs in Fig. 2 represent 12 spatial frequencies, so 12 MTF values were calculated corresponding to these frequencies and the MTF curves were drawn with smooth lines. The unit of spatial frequency in the MTF figures was converted to lp/128 pixels instead of lp/mm for convenience of comparison, because the image size was 128 × 128 pixels. It can be seen from Fig. 4 that traditional fiber bundle imaging (orange dotted lines) has no spatial resolution, while the BCS method (red diamond line) and CS method (blue square line) have MTF values higher than 0.4, or even 0.6, at all frequencies except 14 and 16. In Fig. 4(a), the MTF value at the spatial frequency of 16 is only 0.124, which may be due to insufficient samplings in this block resulting in distortion of the reconstructed signal. It can also be noticed that BCS and CS have different resolution advantages at different frequencies.
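The contrast-ratio definition of the MTF used above can be sketched as follows. Michelson contrast is assumed here for illustration, since the exact contrast measure is detailed in Supplement 1:

```python
import numpy as np

def michelson_contrast(profile):
    # Contrast of an intensity profile across one group of line pairs.
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

def mtf(recon_profile, target_profile):
    # MTF at one spatial frequency: contrast of the reconstruction
    # over that of the original image (maximum value 1).
    return michelson_contrast(recon_profile) / michelson_contrast(target_profile)
```

Evaluating `mtf` on the profiles of each of the 12 line-pair groups yields one MTF value per spatial frequency.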


Fig. 4. (a) MTF calculated from horizontal line pairs. (b) MTF calculated from vertical line pairs.


4. Experiment

4.1 Experimental setup

The sketch of the experimental setup is shown in Fig. 5 (a photograph of the optical system is in Supplement 1). The target was illuminated by an LED light source (MCWHL7, Thorlabs) through a collimating lens, Lens1 (SM1U25-A, Thorlabs), and was imaged by Lens2 onto a digital micromirror device (DMD, DLP VisionFLY6500), where it was sampled by the modulation masks loaded on the DMD. When imaging targets of different sizes, different objective lenses (the parameters of the objectives are given in section 4.4) and imaging lenses in Lens2 were chosen to image the target onto the DMD. When the DMD loads a matrix with all elements equal to 1, the ground truth image of the target can be taken by Detector1 by rotating the BS by 90°, because the detector must be fixed and we have no extra detector. The BS was rotated by 90° to switch between ground truth image recording and calibration (the system needs only one calibration and the rotation of the BS does not influence the transmitted light). Then both the object image and the masks were projected at approximately 10× demagnification onto the distal face of the fiber bundle (Schott image bundle) through a microscope objective, Obj1 (NA: 0.25, DHC, GCO-2112). The distal face of the fiber was observed through Lens3 and Detector1 (CCD, MV-CB120_10UM-B) for calibration. In this paper, the masks were resized to 16 × 16 by using 2 × 2 DMD mirrors for one effective element. The bundle contains 18000 cores with a core diameter of 7.6 µm. Each fiber core received approximately 46-52 effective mask pixels. The microscope objective Obj2 (NA: 0.25, DHC, GCO-2112) imaged the proximal end of the fiber onto Detector2 (CCD, MV-CE060_10UM). A data acquisition card (HK_USB6203) was used to synchronize the mask loading of the DMD and the signal acquisition of Detector2. The switching speed of the DMD and the response speed of Detector2 were both set to 30 fps. This process was controlled by a customized program.


Fig. 5. Experimental setup. LED light source (350-700 nm); Lens1 for light beam collimation, f = 100 mm; Lens2 consists of an objective lens (Olympus) and an industrial camera lens (Microvision BT-MPX); DMD, available resolution 1024 × 768; BS: beam splitter; Lens3: f = 75 mm; Obj1 and Obj2: 10× microscope objectives; OFB: optic fiber bundle (length = 760 mm); Detector1 and Detector2 are CCDs (charge-coupled devices); DAQ: data acquisition card (200 kHz, 8 channels; purple wire for GPIO, blue wire for Opto_Out and yellow wire for Opto_In).


4.2 Calibration

The mask loaded on the DMD is the superposition of the fiber core binary matrix and the Hadamard matrix. Therefore, a calibration of the relative position between the DMD elements and the fiber cores was needed. We first loaded an all-ones matrix on the DMD and recorded the reflected image of the distal face of the fiber bundle with Detector1, as shown in Fig. 6(a). Then, an affine transformation was applied to adjust the image size to match the size of the matrix loaded on the DMD; the result is shown in Fig. 6(b). Afterwards, the positions of the fiber cores were extracted with a circular detection method, as shown in Fig. 6(c). Finally, the binary mask that corresponds to the fiber cores was obtained by thresholding the extracted fiber core image. The block Hadamard matrices are arranged according to the center position of each core. Since the fiber cores are independent, the sub-matrices of the mask array on the fiber cores are identical.


Fig. 6. (a) Reflected image of the illuminated area of the fiber bundle (captured by Detector1). (b) Calibration image calculated from (a) using affine transformation. (c) Fiber cores extracted from (b) with circular detection method. (d) Binary mask of the fiber core obtained by thresholding the image in (b) and some cores on the edges are removed to simplify the calculation.


4.3 Mask and noise test

Because the masks in each block occupy a very small area on the DMD, the light intensity, especially the intensity change introduced by different masks, is low and may be buried in noise after transmission through the fiber core. Therefore, we first measured the fluctuation range and the standard deviation of the detected values of the mask series in a single block. Figure 7(a) shows the fluctuations of the detected values when using Hadamard and random matrices respectively. Negative values exist because of the difference method. The red squares represent the detected values of the Hadamard matrices, the yellow diamonds represent the detected values of the random matrices, and the blue triangles represent the detected values of the system noise without masks. Note that the first Hadamard mask is an all-ones matrix and its detection value is extremely high, so it is excluded from the calculation of the standard deviation. Figure 7(b) shows the standard deviation of the detection values for different sizes of Hadamard and random matrices. The standard deviation of the detection signal is less than 5 times that of the noise when the mask size is smaller than 5 × 5 pixels; in this case, the reconstructed image will be overwhelmed by noise. According to our experimental experience, the reconstructed image was not obviously influenced by the system noise only when the standard deviation of the mask signal was greater than 10 times the standard deviation of the noise in our experimental setup.
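The empirical 10× noise criterion can be expressed as a simple check. This is a sketch: the function name is ours, and the exclusion of the all-ones mask follows the description above:

```python
import numpy as np

def mask_signal_usable(detections, noise, factor=10.0, skip_first=True):
    # Empirical criterion from the experiment: the reconstruction is judged
    # robust only when the standard deviation of the detected mask values
    # exceeds `factor` times the standard deviation of the system noise.
    # The first Hadamard mask (all ones) yields an outlier value and is excluded.
    d = np.asarray(detections, dtype=float)
    if skip_first:
        d = d[1:]
    return float(np.std(d)) > factor * float(np.std(np.asarray(noise, dtype=float)))
```

In practice this check determines the smallest usable mask size for a given fiber core and illumination level.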


Fig. 7. (a) Detected output values from a single core with the change of 8 × 8 Hadamard matrices and random matrices. (b) Standard deviation of detected values for Hadamard and random matrices with different sizes. The orange horizontal line marks the value that is 5 times the standard deviation of the system noise.


4.4 Results

First, a positive 1951 USAF resolution test chart (R3L3S1P, Thorlabs) was used as the target to evaluate the performance of the BCS method. A 10× objective (NA: 0.25, Olympus, PLN10×) and a 100 mm focal length camera lens were used to project the image of the resolution target onto the DMD. The ground truth image was taken by Detector1 by rotating the BS by 90°. Figure 8(a) is the ground truth image of the resolution test chart (256 × 256 pixels). Figures 8(b)-(d) are the output images of the conventional fiber bundle imaging method, the BCS method and the CS method respectively. The compression ratio in both the BCS method and the CS method was 30%. Compared to conventional fiber bundle imaging, the BCS method and CS method resolve more pixels and details per fiber core.

To improve the visual effect, it is necessary to remove the fiber bundle artifacts. Several approaches have been studied, such as frequency domain filtering [40,41], interpolation [42,43], mosaicking [44–46], image superposition [47,48], and transmission matrix [49] methods. The interpolation method is relatively simple in calculation, so the reconstructed images were interpolated to remove the filling gaps between the cores. In the interpolation, we first filled the opaque areas in the image with the nearest-neighbor interpolation method in the horizontal and vertical directions; then the results from the horizontal and vertical interpolations were averaged. The pixelation artifact removal results obtained by this method for Fig. 8(b), (c) and (d) are shown in Fig. 8(e), (f), and (g). The lines in the resolution target are difficult to discern in Fig. 8(e), the conventional fiber bundle imaging result, while the lines of group 7 can be clearly observed in Fig. 8(f) and Fig. 8(g), the results of the BCS and CS methods respectively. Figure 8(h) shows the zoom-in images of Fig. 8(a) and Fig. 8(e)-(g). The resolution improvement of the BCS and CS methods over the conventional method is clear. Note that the interpolation only improves the visual effect; it does not recover the occluded information.
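The gap-filling interpolation described above (nearest-neighbor filling along rows and columns, then averaging) can be sketched in NumPy as follows; the function names are ours:

```python
import numpy as np

def fill_1d(v, valid):
    # Replace invalid entries of a 1-D line with the nearest valid value.
    idx = np.flatnonzero(valid)
    if idx.size == 0:
        return v.copy()
    pos = np.arange(v.size)
    nearest = idx[np.argmin(np.abs(pos[:, None] - idx[None, :]), axis=1)]
    out = v.copy()
    out[~valid] = v[nearest[~valid]]
    return out

def degap(img, core_mask):
    # Fill the opaque gap pixels by nearest-neighbour interpolation along
    # rows and along columns, then average the two filled results.
    h = np.array([fill_1d(r, m) for r, m in zip(img, core_mask)])
    v = np.array([fill_1d(c, m) for c, m in zip(img.T, core_mask.T)]).T
    return (h + v) / 2
```

As noted above, this only improves the visual effect: the values written into the gaps are interpolated, not recovered.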


Fig. 8. (a) Ground truth image of the resolution target. (b) Image acquired with conventional fiber bundle configuration. (c) Reconstructed image by BCS with 30% differential observations (38 times) and the image contains 722 fiber cores. (d) Reconstructed image by CS with 30% differential observations (39320 times). (e), (f) and (g) The interpolated images of (b), (c) and (d). (h) From left to right, the zoom-in view of the areas in the red boxes in (a), (e), (f) and (g).


The output images of the BCS method and CS method, i.e. Fig. 8(f) and Fig. 8(g), look similar. To further compare the performance of the two methods, a numerical comparison is shown in Table 2. It can be seen that the sampling time and the reconstruction time of the CS method are much longer than those of the BCS method, while the image quality is similar, which is consistent with the simulation results.


Table 2. Comparison of BCS method and CS method for fiber bundle imaging

Then, an antistatic-fiber section and a colon stained section (JUEQI Biological Microscope Slides, Qianjiang Teaching Instrument Factory) were used as the targets. These two targets have different sizes. The compression ratio in these experiments was 40%. Figure 9(a) is the ground truth image of the antistatic fiber; the size of the image is 128 × 128 pixels. A 4× objective (NA: 0.1, Olympus, PLN4×) and a camera lens of 150 mm focal length were used to project the image onto the DMD. Figure 9(f) is the ground truth image of the colon stained section (256 × 256 pixels). A 10× objective and a 100 mm focal length camera lens were used to project the image of the colon gland onto the DMD. The glands of the colon section can be seen in Fig. 9(f).


Fig. 9. (a) and (f): Ground truth images of the sections. (b) and (g): Images acquired with the conventional fiber bundle configuration. (c) and (h): The interpolated images from (b) and (g). (d) and (i): Reconstructed images by BCS with 40% differential observations (52 times), containing 248 and 1056 cores respectively. (e) and (j): The interpolated images from (d) and (i). (k) From left to right, the zoom-in view of the areas in the red boxes in (a), (c), and (e). (l) From left to right, the zoom-in view of the areas in the red boxes in (f), (h), and (j). The images were adjusted to the same size for convenience of presentation.


The output images of the target sections through the conventional fiber bundle configuration are shown in Fig. 9(b) and (g); the details of the fibers cannot be resolved. Images reconstructed by the BCS method are shown in Fig. 9(d) and (i). Compared to conventional fiber bundle imaging, the BCS method resolves more pixels and details per fiber core. The sampling process took 1.73 s regardless of the original image size, with 52 differential observations (26 Hadamard masks). The reconstruction of Fig. 9(d) and (i) by BCS took 1.21 s (248 blocks) and 2.19 s (1056 blocks), respectively. The pixelation-artifact-removal results obtained by interpolation for Fig. 9(b), (g), (d) and (i) are shown in Fig. 9(c), (h), (e) and (j). The details of the fibers are difficult to discern in the zoom-in view of (c), the conventional fiber bundle image, while the outlines of overlapping fibers can be clearly observed in the zoom-in view of (e), the result of BCS fiber bundle imaging. The interpolated images show streak noise due to the missing signal and the interpolation itself, similar to the block artifacts in ordinary BCS imaging.
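The per-core sampling and reconstruction described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: `hadamard`, `sample_block` and `reconstruct_block` are hypothetical helper names, the block is 8 × 8, and with full sampling the least-squares recovery is exact. The compressed case would keep only a subset of rows and use a sparsity-promoting solver such as TVAL3, as in the paper.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: H_{2k} = [[H_k, H_k], [H_k, -H_k]]
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sample_block(block, H, m):
    """Differential sampling of one core's block with the first m Hadamard rows.

    Each +1/-1 row is split into two binary masks (H+ and H-); their
    intensity difference gives one differential observation, so m rows
    cost 2*m mask projections on the DMD.
    """
    x = block.ravel()
    Hp = (H[:m] + 1) / 2            # +1 entries -> open micromirrors
    Hm = (1 - H[:m]) / 2            # -1 entries
    return Hp @ x - Hm @ x          # differential observations (= H[:m] @ x)

def reconstruct_block(y, H, m, side):
    # Least-norm solution; with m == side**2 the recovery is exact
    # because the Hadamard matrix is orthogonal (up to scaling).
    x_hat = np.linalg.pinv(H[:m]) @ y
    return x_hat.reshape(side, side)

rng = np.random.default_rng(0)
side = 8
H = hadamard(side * side)
block = rng.random((side, side))
y = sample_block(block, H, side * side)      # full sampling for this check
rec = reconstruct_block(y, H, side * side, side)
```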

The advantage of the method proposed in this paper is that it remains efficient when reconstructing images with a large number of cores. Figure 10(a) is the original image of a plasmodesmata section (JUEQI Biological Microscope Slides, Qianjiang Teaching Instrument Factory) with a size of 384 × 256 pixels. A 20× objective (NA: 0.4, Olympus, PLN20×) and a 150 mm focal length camera lens were used to project the center of the section onto the DMD. The BCS method spent 3.13 s reconstructing 1595 blocks, and the result is shown in Fig. 10(c). After image interpolation, the plasmodesmata between cells can be observed in Fig. 10(e).


Fig. 10. (a) Section image. (b) Conventional fiber bundle imaging of (a). (c) Reconstructed image by BCS with 40% differential observations (52 times) and the image contains 1592 cores. (d) and (e) Interpolated images of (b) and (c) respectively with nearest neighbor interpolation in vertical and horizontal directions. (f) Stitched image after six shifts. (g) From left to right, the zoom-in view of the areas in the red boxes in (a), (d), (e) and (f).


The interpolation method uses the reconstructed signal from adjacent fiber cores to fill in the signal missing due to the opaque areas of the bundle; in essence, it improves the visual effect by pixel prediction rather than by restoring the actual image signal. Therefore, line-shaped artifacts remain in the zoom-in view of Fig. 10(e), which blurs the details of the plasmodesmata.
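A crude stand-in for this gap-filling step is nearest-neighbor prediction: every cladding pixel copies the value of the closest reconstructed core pixel. The sketch below (assumed names, brute-force distance search, edge effects ignored) illustrates why the result is a prediction rather than a restoration of the missing signal.

```python
import numpy as np

def fill_gaps_nearest(img, mask):
    """Fill pixels where mask is False with the value of the nearest
    measured (core) pixel -- pixel prediction, not signal restoration."""
    ys, xs = np.nonzero(mask)                  # coordinates of measured pixels
    vals = img[mask]
    out = img.copy()
    for y, x in zip(*np.nonzero(~mask)):
        d2 = (ys - y) ** 2 + (xs - x) ** 2     # squared distances to all cores
        out[y, x] = vals[np.argmin(d2)]
    return out

# Toy example: a smooth gradient measured only on a sparse lattice of "cores".
target = np.linspace(0, 1, 16 * 16).reshape(16, 16)
mask = np.zeros((16, 16), dtype=bool)
mask[::4, ::4] = True                           # one "core" every 4 pixels
filled = fill_gaps_nearest(np.where(mask, target, 0.0), mask)
```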

To observe the plasmodesmata more clearly, we used image stitching to remove the pixelation artifacts of the bundle, inspired by [43,47]. The stitching method is similar to scanning the target: by using the BCS method to sample and reconstruct the target at different locations, the signal missing due to occlusion by the fiber cladding can be recollected after displacement. In this experiment, a four-dimensional translation stage moved the fiber bundle twice in the horizontal and vertical directions and once in each of the two diagonal directions; the displacement distance was half the core diameter. To maintain alignment between the fiber cores and the masks, the masks loaded on the DMD were also moved 3 pixels in the same direction as the translation stage. The target was reconstructed after each displacement, and the six reconstructed images were then averaged and smoothed; the result is shown in Fig. 10(f). From Fig. 10(g), we can see that the cell plasmodesmata in the zoom-in view are clearly visible. In Table 3, the PSNR and SSIM values show that the quality of the reconstructed images is effectively improved by stitching.
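The shift-and-average idea can be illustrated with a toy simulation (hypothetical function name; each "acquisition" is simply the target seen through the shifted core mask, and periodic `np.roll` shifts stand in for the translation stage):

```python
import numpy as np

def stitch_shifts(target, core_mask, shifts):
    """Average acquisitions taken at several bundle positions.

    Pixels occluded by cladding at one position are covered at another,
    so the averaged mosaic recovers the signal missing from any single view.
    """
    acc = np.zeros_like(target, dtype=float)
    cnt = np.zeros_like(target, dtype=float)
    for dy, dx in shifts:
        m = np.roll(core_mask, (dy, dx), axis=(0, 1))
        acc += np.where(m, target, 0.0)          # signal collected at this shift
        cnt += m
    return acc / np.maximum(cnt, 1)              # per-pixel average over visits

rng = np.random.default_rng(1)
target = rng.random((8, 8))
core_mask = np.zeros((8, 8), dtype=bool)
core_mask[::2, ::2] = True                       # 25% of pixels visible at once
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]        # together they tile the plane
mosaic = stitch_shifts(target, core_mask, shifts)
```

With these four shifts every pixel is visited exactly once, so the mosaic equals the target; in the real experiment the shifted reconstructions overlap and are additionally smoothed.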


Table 3. Comparison of the results of the interpolation method and the splicing method
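For reference, PSNR and a simplified global SSIM (a single window over the whole image, without the Gaussian sliding window of the full index; constants from the standard SSIM definition) can be computed as:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window SSIM; the full index averages this over local windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
noisy = np.clip(ref + 0.05 * rng.standard_normal((32, 32)), 0, 1)
```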

5. Conclusion and discussion

In conclusion, we proposed a fast strategy to improve the resolution of fiber bundle imaging using the BCS method. Simulation analysis shows that the proposed method is 23 times faster than the CS method in reconstructing target images of 128 × 128 pixels, while its number of samples is only 0.39% of that of the CS method. Experiments were conducted on targets of different sizes: 128 × 128, 256 × 256 and 384 × 256 pixels. The imaging resolution is no longer limited to a single pixel per fiber core, as it is in conventional fiber bundle imaging, and the reconstruction takes only several seconds, which can be further shortened. After removing the pixelation artifacts through image processing, the visual quality of the BCS method is greatly improved compared with traditional fiber bundle imaging. Our strategy may provide new ideas for improving the efficiency of fiber-optic endoscopy.

However, placing a DMD at the distal face of the fiber bundle to load the masks is not applicable to an endoscope system. In addition, because the BCS method must project the sampling masks onto each fiber core, accurate calibration is necessary, which also increases the complexity of practical applications. For endoscope application, one solution is to install a chrome-etched mask with a miniature actuator at the distal end of the fiber [32]. Another possible solution is to install micro-grating and micro-lens arrays at the tip of the fiber, with a variable-wavelength coherent light source for illumination at the incident end; a variable sensing mask is then obtained by varying the wavelength. Similar methods have been initially investigated in multimode fiber imaging [28].

In terms of reconstruction time, because each block must be reconstructed in turn, the image reconstruction time of the BCS method increases with the number of cores. However, this also means that parallel computing can easily and greatly improve the reconstruction speed. Besides, the DAQ in our experiment uses only 3 input channels; if the full 8-channel data input were used, the data-processing speed would be at least doubled. In terms of sampling, a high-speed camera would allow sampling to be completed in a short time. In addition, although the BCS method improves the overall imaging speed, it cannot image moving objects. Existing methods for real-time fiber bundle video [44] and single-pixel imaging of moving objects [50] may provide a reference for realizing real-time fiber bundle imaging of moving objects. We expect that BCS can provide a new approach for fiber-optic endoscope imaging technology.
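The parallelization argument can be sketched with a thread pool over the independent blocks (hypothetical names; numpy's matrix multiply releases the GIL, so even threads can give a speedup, and a process pool or GPU batching would go further). Full sampling is used here so the recovery is exact and easy to verify:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def hadamard(n):
    # Sylvester construction: H_{2k} = [[H_k, H_k], [H_k, -H_k]]
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def reconstruct_blocks_parallel(ys, H):
    """Reconstruct all core blocks concurrently; each block is independent."""
    pinv = np.linalg.pinv(H)                   # shared across all blocks
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda y: pinv @ y, ys))

H = hadamard(64)
rng = np.random.default_rng(3)
blocks = [rng.random(64) for _ in range(16)]   # 16 cores, 8x8 pixels each
ys = [H @ b for b in blocks]                   # fully sampled observations
recs = reconstruct_blocks_parallel(ys, H)
```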

Funding

National Key Research and Development Program of China (2018YFB0504400); Tianjin Science and Technology Program (22JCYBJC01310); National Natural Science Foundation of China (61605092, 62075106).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. W. Jung, D. Mccormick, Y. Ahn, A. Sepehr, M. Brenner, B. Wong, N. Tien, and Z. Chen, “In vivo three-dimensional spectral domain endoscopic optical coherence tomography using a microelectromechanical system mirror,” Opt. Lett. 32(22), 3239–3241 (2007). [CrossRef]  

2. F. A. Narter, “Re: Optical Endomicroscopy and the Road to Real-Time, in vivo Pathology: Present and Future,” Journal of Urological Surgery 2 (2015).

3. G. E. Tontini, G. Manfredi, S. Orlando, H. Neumann, M. Vecchi, E. Buscarini, and L. Elli, “Endoscopic ultrasonography and small-bowel endoscopy: Present and future,” Digestive Endoscopy 31(6), 627–643 (2019). [CrossRef]  

4. B. A. Flusberg, E. D. Cocker, W. Piyawattanametha, J. C. Jung, E. Cheung, and M. J. Schnitzer, “Fiber-optic fluorescence imaging,” Nat. Methods 2(12), 941–950 (2005). [CrossRef]  

5. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, “Focusing and scanning light through a multimode optical fiber using digital phase conjugation,” Opt. Express 20(10), 10583–10590 (2012). [CrossRef]  

6. I. N. Papadopoulos, S. Farahi, C. Moser, and D. Psaltis, “High-resolution, lensless endoscope based on digital scanning through a multimode optical fiber,” Biomed. Opt. Express 4(2), 260–270 (2013). [CrossRef]  

7. T. Čižmár and K. Dholakia, “Shaping the light transmission through a multimode optical fibre: complex transformation analysis and applications in biophotonics,” Opt. Express 19(20), 18871 (2011). [CrossRef]  

8. M. C. Velsink, L. V. Amitonova, and P. W. H. Pinkse, “Spatiotemporal focusing through a multimode fiber via time-domain wavefront shaping,” Opt. Express 29(1), 272–290 (2021). [CrossRef]  

9. N. G. Moussa, T. B. Norris, M. Eric, and N. R. Rao, “Mode control in a multimode fiber through acquiring its transmission matrix from a reference-less optical system,” Opt. Lett. 43(3), 419 (2018). [CrossRef]  

10. S. Turtaev, I. T. Leite, T. Altwegg-Boussac, J. M. P. Pakan, N. L. Rochefort, and T. Čižmár, “High-fidelity multimode fibre-based endoscopy for deep brain in vivo imaging,” Light: Sci. Appl. 7(1), 92 (2018). [CrossRef]  

11. J. Zhao, Y. Sun, H. Zhu, Z. Zhu, J. E. Antonio-Lopez, R. A. Correa, S. Pang, and A. Schülzgen, “Deep-learning cell imaging through Anderson localizing optical fiber,” Adv. Photon. 1(06), 1 (2019). [CrossRef]  

12. B. Song, C. Jin, J. Wu, W. Lin, B. Liu, W. Huang, and S. Chen, “Deep learning image transmission through a multimode fiber based on a small training dataset,” Opt. Express 30(4), 5657–5672 (2022). [CrossRef]  

13. F.-W. Sheu and J.-Y. Chen, “Fiber cross-sectional imaging by manually controlled low coherence light sources,” Opt. Express 16(26), 22113–22118 (2008). [CrossRef]  

14. M. Lan, Y. Xiang, J. Li, L. Gao, Y. Liu, Z. Wang, S. Yu, G. Wu, and J. Ma, “Averaging speckle patterns to improve the robustness of compressive multimode fiber imaging against fiber bend,” Opt. Express 28(9), 13662–13669 (2020). [CrossRef]  

15. A. M. Zysk, F. T. Nguyen, A. L. Oldenburg, D. L. Marks, and S. A. Boppart, “Optical coherence tomography: a review of clinical development from bench to bedside,” J. Biomed. Opt. 12(5), 051403 (2007). [CrossRef]  

16. H. Inoue, S. E. Kudo, and A. Shiokawa, “Technology Insight: laser-scanning confocal microscopy and endocytoscopy for cellular observation of the gastrointestinal tract,” Nat. Clin. Pract. Gastroenterol. Hepatol. 2(1), 31–37 (2005). [CrossRef]  

17. M. Hughes, T. P. Chang, and G. Z. Yang, “Fiber bundle endocytoscopy,” Biomed. Opt. Express 4(12), 2781–2794 (2013). [CrossRef]  

18. A. Meining, “Confocal Endomicroscopy,” Gastrointestinal Endoscopy Clinics of North America 19(4), 629–635 (2009). [CrossRef]  

19. W. Göbel, J. N. D. Kerr, A. Nimmerjahn, and F. Helmchen, “Miniaturized two-photon microscope based on a flexible coherent fiber bundle and a gradient-index lens objective,” Opt. Lett. 29(21), 2521–2523 (2004). [CrossRef]  

20. M. Sato, Y. Motegi, S. Yagi, K. Gengyo-Ando, and J. Nakai, “Fast varifocal two-photon microendoscope for imaging neuronal activity in the deep brain,” Biomed. Opt. Express 8(9), 4049 (2017). [CrossRef]  

21. A. Thrapp and M. Hughes, Motion compensation in structured illumination fluorescence endomicroscopy, SPIE BiOS (SPIE, 2019), Vol. 10854.

22. Z. Meng, M. Qiao, J. Ma, Z. Yu, K. Xu, and X. Yuan, “Snapshot multispectral endomicroscopy,” Opt. Lett. 45(14), 3897–3900 (2020). [CrossRef]  

23. J. Wang and S. K. Nadkarni, “The influence of optical fiber bundle parameters on the transmission of laser speckle patterns,” Opt. Express 22(8), 8908 (2014). [CrossRef]  

24. A. Hussain, C.-L. Liu, B. Luo, J. Ren, H. Zhao, X. Zhao, and J. Zheng, eds., Advances in Brain Inspired Cognitive Systems: 9th International Conference, BICS 2018, Xi'an, China, July 7-8, 2018, Proceedings, Lecture Notes in Artificial Intelligence 10989 (Springer, Cham, 2018).

25. L. V. Amitonova and J. Boer, “Compressive imaging through a multimode fiber,” Opt. Lett. 43(21), 5427 (2018). [CrossRef]  

26. M. Lan, D. Guan, L. Gao, J. Li, S. Yu, and G. Wu, “Robust compressive multimode fiber imaging against bending with enhanced depth of field,” Opt. Express 27(9), 12957–12962 (2019). [CrossRef]  

27. D. Yang, M. Hao, G. Wu, C. Chang, B. Luo, and L. Yin, “Single multimode fiber imaging based on low-rank recovery,” Optics and Lasers in Engineering 149, 106827 (2022). [CrossRef]  

28. J. Shin, B. T. Bosworth, and M. A. Foster, “Single-pixel imaging using compressed sensing and wavelength-dependent scattering,” Opt. Lett. 41(5), 886 (2016). [CrossRef]  

29. T. Okubo, T. Katagiri, and Y. Matsuura, “Compact fluorescence endoscope with speckle-generating fiber probe,” in 2019 24th Microoptics Conference (MOC), (2019), 124–125.

30. J. Dumas, M. Lodhi, W. Bajwa, and M. Pierce, A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy, SPIE BiOS (SPIE, 2018), Vol. 10470.

31. J. Shin, B. T. Bosworth, and M. A. Foster, “Compressive fluorescence imaging using a multi-core fiber and spatially dependent scattering,” Opt. Lett. 42(1), 109–112 (2017). [CrossRef]  

32. J. P. Dumas, M. A. Lodhi, B. A. Taki, W. U. Bajwa, and M. C. Pierce, “Computational endoscopy—a framework for improving spatial resolution in fiber bundle imaging,” Opt. Lett. 44(16), 3968–3971 (2019). [CrossRef]  

33. G. Lu, “Block Compressed Sensing of Natural Images,” in 2007 15th International Conference on Digital Signal Processing, (2007), 403–406.

34. J. Ke and E. Y. Lam, “Object reconstruction in block-based compressive imaging,” Opt. Express 20(20), 22102–22117 (2012). [CrossRef]  

35. L. Sun, X. Wen, M. Lei, H. Xu, J. Zhu, and Y. Wei, “Signal Reconstruction Based on Block Compressed Sensing,” in Artificial Intelligence and Computational Intelligence, (Springer Berlin Heidelberg, 2011), 312–319.

36. C. Li, W. Yin, H. Jiang, and Y. Zhang, “An efficient augmented Lagrangian method with applications to total variation minimization,” Computational Optimization and Applications 56(3), 507–530 (2013). [CrossRef]  

37. W. K. Yu, “Super sub-Nyquist single-pixel imaging by means of cake-cutting Hadamard basis sort,” Sensors 19(19), 4122 (2019). [CrossRef]  

38. P. G. Vaz, D. Amaral, L. Ferreira, A. Morgado, and J. Cardoso, “Image quality of compressive single-pixel imaging using different Hadamard orderings,” Opt. Express 28(8), 11666–11681 (2020). [CrossRef]  

39. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, “Standard test images” (2008), retrieved https://www.imageprocessingplace.com/root_files_V3/image_databases.htm.

40. M. M. Dickens, M. P. Houlne, S. Mitra, and D. J. Bornhop, “Method for depixelating micro-endoscopic images,” Opt. Eng. 38(11), 1836–1842 (1999). [CrossRef]  

41. J. H. Han, J. Lee, and J. U. Kang, “Pixelation effect removal from fiber bundle probe based optical coherence tomography imaging,” Opt. Express 18(7), 7427–7439 (2010). [CrossRef]  

42. P. Wang, G. Turcatel, C. Arnesano, D. Warburton, S. E. Fraser, and F. Cutrale, “Fiber pattern removal and image reconstruction method for snapshot mosaic hyperspectral endoscopic images,” Biomed. Opt. Express 9(2), 780–790 (2018). [CrossRef]  

43. S. Rupp, C. Winter, and M. Elter, “Evaluation of spatial interpolation strategies for the removal of comb-structure in fiber-optic images,” in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, (2009), 3677–3680.

44. N. Bedard, T. Quang, K. Schmeler, R. Richards-Kortum, and T. S. Tkaczyk, “Real-time video mosaicing with a high-resolution microendoscope,” Biomed. Opt. Express 3(10), 2428–2435 (2012). [CrossRef]  

45. C. Renteria, J. Suarez, A. Licudine, and S. A. Boppart, “Depixelation and enhancement of fiber bundle images by bundle rotation,” Appl. Opt. 59(2), 536–544 (2020). [CrossRef]  

46. K. Vyas, M. Hughes, B. G. Rosa, and G.-Z. Yang, “Fiber bundle shifting endomicroscopy for high-resolution imaging,” Biomed. Opt. Express 9(10), 4649–4664 (2018). [CrossRef]  

47. S. Rupp, M. Elter, and C. Winter, “Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution,” in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, (2007), 6565–6571.

48. G. W. Cheon, J. Cha, and J. Kang, Spatial compound imaging for fiber-bundle optic microscopy, SPIE BiOS (SPIE, 2014), Vol. 8938.

49. D. Kim, J. Moon, M. Kim, T. D. Yang, J. Kim, E. Chung, and W. Choi, “Toward a miniature endomicroscope: pixelation-free and diffraction-limited imaging through a fiber bundle,” Opt. Lett. 39(7), 1921–1924 (2014). [CrossRef]  

50. S. Li, Y. Cai, Y. Wang, X. R. Yao, and Q. Zhao, “Single-pixel imaging of a translational object,” Opt. Express 31(4), 5547–5560 (2023). [CrossRef]  

Supplementary Material (1)

Supplement 1: Includes the technical details of simulation and experiment in the paper.




Figures (10)

Fig. 1. Sampling the target image with the CS method and the BCS method. (a) The sampling process of the CS method, in which full-scale Hadamard matrices are superimposed on the mask of the whole fiber bundle to sample the target image. (b) The sampling process of the BCS method, in which block Hadamard matrices are superimposed on each core of the bundle to sample the target image (the masks for different fiber cores are identical).

Fig. 2. Simulation for image reconstruction: (a) Target image. (b) Image with traditional optic fiber bundle imaging. (c) Reconstructed image by the BCS method. (d) Reconstructed image by the CS method. (e) Line profiles across a single fiber along the row marked with a red line in the zoom-in images in (a-d).

Fig. 3. Comparison of the CS method and the BCS method as the compression ratio increases from 10% to 100%. (a) Images reconstructed by the BCS method and the CS method with different compression ratios. (b) Comparison of PSNR and SSIM between the CS and BCS methods. (c) Comparison of the reconstruction time of CS and BCS.

Fig. 4. (a) MTF calculated from horizontal line pairs. (b) MTF calculated from vertical line pairs.

Fig. 5. Experimental setup. LED light source (350-700 nm); Lens1 for light beam collimation, f = 100 mm; Lens2 consists of an objective lens (Olympus) and an industrial camera lens (Microvision BT-MPX); DMD, available resolution 1024 × 768; BS: beam splitter; Lens3: f = 75 mm; Obj1 and Obj2: 10× microscope objectives; OFB: optic fiber bundle (length = 760 mm); Detector1 and Detector2 are CCDs (charge-coupled devices); DAQ: data acquisition card (200 kHz, 8 channels, purple wire for GPIO, blue wire for Opto_Out and yellow wire for Opto_In).

Fig. 6. (a) Reflected image of the illuminated area of the fiber bundle (captured by Detector1). (b) Calibration image calculated from (a) using affine transformation. (c) Fiber cores extracted from (b) with the circular detection method. (d) Binary mask of the fiber cores obtained by thresholding the image in (b); some cores on the edges are removed to simplify the calculation.

Fig. 7. (a) Detected output values from a single core as the 8 × 8 Hadamard matrices and random matrices change. (b) Standard deviation of detected values for Hadamard and random matrices of different sizes. The orange horizontal line marks the value that is 5 times the standard deviation of the system noise.

Tables (3)

Table 1. Comparison of the performance between BCS method and CS method

Table 2. Comparison of BCS method and CS method for fiber bundle imaging

Table 3. Comparison of the results of the interpolation method and the splicing method

Equations (8)

$$X_{sc\_i} = B_i \left( K_g \ast X_i \right) \tag{1}$$

$$X_{fb} = R \sum_{i=1}^{c} \left[ X_{sc\_i} \right] \tag{2}$$

$$Y_{sc} = (B_i M) X_i + \mathrm{noise}_i \tag{3}$$

$$y_{sc} = \bar{M}_{b_i} x_i + \mathrm{noise}_i \tag{4}$$

$$y_{sc\_j} = \sum_{p=1}^{n} \sum_{q=1}^{n} K_g \left( M_j X_{sc\_p,q} \right) B_i + \mathrm{noise}_i \tag{5}$$

$$\hat{H}_{2^k} = \hat{H}_2 \otimes \hat{H}_{2^{k-1}} = \begin{bmatrix} \hat{H}_{2^{k-1}} & \hat{H}_{2^{k-1}} \\ \hat{H}_{2^{k-1}} & -\hat{H}_{2^{k-1}} \end{bmatrix} \tag{6}$$

$$\min \sum_{t} \left\| \omega_t \right\|_2, \quad \text{s.t.} \quad \bar{M}_{b_i} x_i + \mathrm{noise}_i = y_{sc} \;\; \& \;\; D_t x_i = \omega_t \;\; \forall t \tag{7}$$

$$\min L_A(\omega_t, x_i) = \min_{\omega_t, x_i} \sum_{t} \left( \left\| \omega_t \right\|_2 - v_t^{T} \left( D_t x_i - \omega_t \right) + \frac{\beta_t}{2} \left\| D_t x_i - \omega_t \right\|_2^2 \right) - \lambda^{T} \left( \Phi x_i - y_{sfb} \right) + \frac{\mu}{2} \left\| \Phi x_i - y_{sfb} \right\|_2^2 \tag{8}$$