
SmartOCT: smartphone-integrated optical coherence tomography

Open Access

Abstract

Smartphone devices have seen unprecedented technical innovation in computational power and optical imaging capabilities, making them potentially invaluable tools in scientific imaging applications. The smartphone’s compact form factor and broad accessibility have motivated researchers to develop smartphone-integrated imaging systems for a wide array of applications. Optical coherence tomography (OCT) is one such technique that could benefit from smartphone integration. Here, we demonstrate smartOCT, a smartphone-integrated OCT system that leverages built-in components of a smartphone for detection, processing and display of OCT data. SmartOCT uses a broadband visible-light source and a line-field OCT design that enables snapshot 2D cross-sectional imaging. Furthermore, we describe methods for processing smartphone data acquired in a RAW data format for scientific applications, which improve the quality of OCT images. The results presented here demonstrate the potential of smartphone-integrated OCT systems for low-resource environments.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In 2021, there were an estimated 6.2 billion smartphone users across the globe [1]. The extreme popularity of smartphone devices has placed them at the center of technical innovation: modern smartphones are equipped with high-resolution camera systems, state-of-the-art computational and graphical processors, a wide array of electrical and mechanical sensors, powerful wireless communication capabilities and a variety of software development packages. Not surprisingly, smartphones feature widely in many contexts, including for clinical and scientific purposes, and several researchers have sought to integrate smartphone cameras into scientific imaging systems [2,3]. For example, commercial microscopes outfitted with smartphone cameras circumvent the need for expensive scientific cameras [4,5]. Some researchers have developed standalone devices, such as otoscopes, confocal and fluorescent microscopes and endoscopes, that leverage the portability and compact nature of the smartphone for low-resource applications [6–11]. Still others have used the smartphone camera for multispectral or true spectroscopic imaging and analysis in advanced biosensing applications [12–15]. A key benefit of smartphone integration is the ability to create more portable and affordable systems.

As with the aforementioned scientific applications, optical coherence tomography (OCT) is a platform technology for bioimaging that could benefit from the capabilities provided by smartphones. Recent attempts to integrate smartphones into OCT data collection and processing pipelines have focused only on using the native computational and wireless connectivity capabilities of the smartphone to process or transmit data collected by a separate, more traditional OCT system. For example, one group demonstrated web-based interactive control of an OCT system [16], showing that remote access to OCT imaging could enable advanced telemedicine evaluation of remote patient data. Another group used the smartphone as a mobile computational platform to perform deep learning-based image processing that can analyze and display key diagnostic features from standard clinical OCT images [17], showing that smartphone integration can reduce the need for bulky computers for processing. Neither of these demonstrations has shown integration of the smartphone camera for OCT data collection.

Here, we introduce smartOCT, the first smartphone-integrated OCT system to leverage the built-in components of smartphones for detection, processing and display of OCT data. We demonstrate a proof-of-concept system showing the use of a smartphone camera to capture interferometric OCT data at visible wavelengths, which overlap with the wavelength sensitivity of high-speed commercial smartphone sensors and thus can be detected without tampering with the embedded color filters. Importantly, visible-wavelength OCT is a field of growing clinical significance that lacks low-cost and small form-factor options, of which smartOCT may be a promising implementation [18–21]. Using a combination of custom and existing smartphone applications, we perform real-time visualization of OCT B-scans and image processing directly on the smartphone.

With future improvements to system design and OCT technology, we believe this scheme could result in cheaper and more portable OCT devices at visible and near infrared wavelengths that can be used for clinical diagnostics in primary care suites, satellite clinics and low-resource environments.

2. Methods

2.1 OCT system design

The smartOCT system design employed a line-field OCT (LF-OCT) configuration that used the full 2D smartphone sensor to capture 2D cross-sectional images in a single frame [22–24]. The use of a line-field configuration removes the need for mechanical scanners and allows single-shot B-scan imaging. Similar to other visible-light OCT systems, we used a supercontinuum laser source (EUL-10, NKT Photonics) filtered to yield visible light with a 100-nm bandwidth centered at 570 nm. Figure 1(a) and (b) show a schematic of the optical design and photograph of the benchtop system, respectively.

Fig. 1. SmartOCT optical schematic (a) with a representative color interferogram showing the 2D spectrum from a mirror sample (top left inset). The blue box around the reference objective and mirror indicates that these components translate together. (b) Photograph of the benchtop smartOCT system.

The supercontinuum output was first collimated to a 4-mm beam diameter using a reflective collimator (RC04APC-P01, Thorlabs) and focused along the y-axis using a 50-mm achromatic cylindrical lens (68-161, Edmund Optics). The beam was then split into the sample and reference arms using a 50:50 beamsplitter (CCM5-BS016, Thorlabs) and focused along the x-axis of the sample and reference mirror, respectively, using identical 45-mm 4x objective lenses (RMS4X, Thorlabs). The use of commercial objective lenses helped to reduce the chromatic aberration in the system, which was helpful given the broad bandwidth. The return light was sent through a unit-magnification relay using two 50-mm lenses (AC254-050-A, Thorlabs) with a 50-µm slit aperture placed in the intermediate image plane (IP1, Fig. 1), conjugate to the sample and reference image planes. The slit aperture was used primarily to block extraneous reflections from lens surfaces and stray light. The relayed light was then spectrally dispersed using a 900-lpmm transmissive diffraction grating (Wasatch Photonics) with the focused line oriented orthogonal to the holographic features of the grating. The dispersed beam was focused using a 25-mm focal length lens group at intermediate image plane 2 (IP2, Fig. 1). The 2D spectrum formed at IP2 was relayed to the smartphone sensor using a 4-f unit-magnification relay consisting of two identical smartphone lenses, symmetric about intermediate image plane 3 (IP3, Fig. 1). This setup, termed a reverse-lens configuration, has been shown to reduce distortion and minimize aberrations while imaging through native smartphone lenses [25]. For prototyping, the smartOCT system was aligned on a 12”x12” optical breadboard using commercially available optomechanics. To aid alignment and stability, a 3D printed mount was designed for mounting the smartphone to the breadboard and spectrometer optics. We found that mounting the smartphone in this way helped to reduce any potential misalignment and vibrations caused when the smartphone was in use.

2.2 Smartphone selection and specifications

A key consideration for this work was to couple the smartphone camera unit to the spectrometer optics in its native condition, without tampering (i.e., without removing components such as the lens or sensor filters or otherwise modifying the smartphone), as this would be helpful for future deployment in real-world environments. The main hardware considerations for smartphone selection were the number of sensor pixels, the pixel size and the exposure time, which impact the imaging depth, spectral sampling density and susceptibility to motion and fringe washout, respectively.
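For context, the role of the spectral pixel count can be made explicit with the standard spectral-domain OCT depth relation (a general expression; the exact number of illuminated spectral pixels in smartOCT is not specified here). For a center wavelength λ0 and a total sampled bandwidth Δλ spread over N spectral pixels, the maximum imaging depth is

z_{\max} = \frac{\lambda_0^2}{4\,\delta\lambda} = \frac{N\,\lambda_0^2}{4\,\Delta\lambda},

where δλ = Δλ/N is the spectral sampling interval, so more (and finer) pixels across the spectrum directly extend the usable depth range.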

We chose to use the Samsung Galaxy S10 smartphone largely because of its processing capabilities, capacity for low exposure times and availability of versatile data formats. Its Sony ISOCELL 2L4 sensor features a 4032 × 3024 (width × height) RGB color pixel layout with a pixel size of 1.4 µm. The S10 camera unit enabled image acquisition at 30 fps at full resolution with a tunable exposure time from 33.3 ms to 40 µs (30 Hz to 24 kHz) per frame. In software, the native camera app enables “pro” picture and video modes that provide access to tuning of camera features (i.e., ISO, exposure time, frame size, etc.). Notably, the usability of various features through the native camera app during video-mode acquisition is somewhat limited, and the user can only tailor certain sensor settings under predetermined modes.

In fact, many commercial smartphone camera systems prioritize simplicity (for the user) over custom setting controls. This makes it difficult to control camera settings and access direct, unprocessed sensor data, as one would when using a scientific camera. Moreover, photos and recorded videos captured with smartphones are subject to several proprietary internal processing steps, such as color-space linearization and dynamic non-linear color tuning, which are intended to make photographic pictures look better and are not representative of the true color and/or intensity of the incident light [26,27]. In addition, images acquired through native software are compressed when saved, which can further impact the fidelity of scientific images. Fortunately, smartphones are now a major technical platform for professional media creation, which has motivated making unprocessed image data accessible for custom image processing. The S10 enables RAW data capture for pictures, and community-designed open-source apps have made it possible to capture RAW video data, which we leverage in this work. RAW data can be understood as any image file that contains an uncompressed image of direct sensor counts per pixel together with meta-information about the image collected from the sensor. Often the meta-information contains the sensor model, color space specifications, preset calibration values (such as white balance multipliers), active area image width and height, etc. While many proprietary commercial variations of RAW data files are used, the Digital Negative (DNG) file format has become an industry standard, and several software packages are available to convert proprietary file types into DNG format. The RAW sensor data from the S10 are output as a DNG image type. For the rest of this work, we use the capitalized term ‘RAW’ when referring to the DNG file type. In Section 3.1, we detail the importance of RAW data processing and its impact on OCT data.
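As an illustration of how such a file can be handled, the following minimal MATLAB sketch reads one DNG frame and splits its Bayer mosaic into color sub-images. It assumes the Image Processing Toolbox functions rawinfo and rawread (R2021a or later) and an RGGB mosaic layout; the file name is a placeholder, and none of this code is taken from the smartOCT implementation.

% Read RAW meta-information and the uncompressed Bayer mosaic (assumed file name)
info = rawinfo('frame_0001.dng');   % sensor model, color space, active area, etc.
cfa  = rawread('frame_0001.dng');   % direct sensor counts, one value per photosite

% Split the mosaic into its color sub-images, assuming an RGGB layout
R  = cfa(1:2:end, 1:2:end);
G1 = cfa(1:2:end, 2:2:end);
G2 = cfa(2:2:end, 1:2:end);
B  = cfa(2:2:end, 2:2:end);
G  = (double(G1) + double(G2)) / 2; % average the two green photosites per super-pixel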

2.3 Smartphone software

We implemented a software pipeline for real-time preview, RAW video capture and OCT data processing. The first component, real-time preview, was developed as a custom app using MATLAB Simulink and Android Studio. The app enables live visualization of the OCT spectrum and processed B-scans for optimization and alignment of sample images. The second component leveraged an open-source app, MotionCam, for RAW video capture. For the third component, we implemented a processing pipeline to load and process OCT data directly on the smartphone using a commercial app from MathWorks. Figure 2 shows screenshots for the apps associated with each component.

Fig. 2. Screenshots of the smartOCT apps in action. Three apps are used for (a) real-time display, (b) RAW video capture and (c) RAW data processing.

2.3.1 Real-time preview

The real-time preview app (shown in Fig. 2(a)) was designed to grab live image data from the native camera system, perform basic OCT processing and display a 2D B-scan to the user. The preview app was built in Simulink and deployed through Android Studio. On opening the app, the user could choose to view the direct sensor output (2D spectra) or a processed B-scan by swiping left or right on the image. During app use, the sensor data (OCT spectra) were continuously read into the app back-end as three 8-bit RGB mp4 frames, merged into a full-color image (2280 × 1080 pixels) using the phone’s internal visualization process within Simulink and displayed as a full-color image. In this app, mp4 data were used instead of RAW data because neither Simulink nor the S10 camera app natively supports RAW video capture (although the S10 natively supports RAW picture capture). Nonetheless, the quality of the processed mp4 frames was sufficient for sample alignment and focus adjustment.

When visualizing OCT data, the user had the option to first capture a background image to be used for background subtraction; if no image was selected, no subtraction was performed. When the app was swiped to the B-scan view, an OCT processing algorithm was run that began by subtracting the background image and separating the green channel data from the red and blue channels. The red and blue channels were then omitted from further processing to reduce computational load. We found that omitting these color channels had minimal effect on the preview quality, since the red and blue spectra were heavily attenuated in the selected wavelength range by the Bayer filter. The green channel data were then resampled to be linear with respect to wavenumber using a calibrated polynomial function (the polynomial parameters can be adjusted within the app if a new calibration is performed). Finally, the fast Fourier transform was performed, and the log of the 2D B-scan was displayed on the main UI.
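A minimal MATLAB sketch of this preview-path processing is shown below. The actual app was implemented in Simulink, so this is only a stand-in, and the function and variable names (previewProcess, pWave, etc.) are placeholders rather than names from the smartOCT code.

function bscan = previewProcess(frameRGB, background, pWave)
    % Keep only the green channel; red and blue are heavily attenuated in this band
    spec = double(frameRGB(:,:,2));
    if ~isempty(background)
        spec = spec - double(background(:,:,2));    % optional background subtraction
    end
    [nK, nX] = size(spec);
    lambda   = polyval(pWave, (1:nK)');             % calibrated pixel-to-wavelength mapping
    k        = 2*pi ./ lambda;                      % wavenumber, nonuniformly sampled
    [kS, ix] = sort(k);                             % ascending order for interpolation
    kLin     = linspace(kS(1), kS(end), nK)';       % uniform wavenumber grid
    specK    = zeros(nK, nX);
    for col = 1:nX                                  % resample each lateral position to linear k
        specK(:, col) = interp1(kS, spec(ix, col), kLin, 'linear', 0);
    end
    bscan = 20*log10(abs(fft(specK, [], 1)) + eps); % FFT along depth and log compression
    bscan = bscan(1:floor(nK/2), :);                % keep positive depths only
end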

2.3.2 RAW video capture

While RAW data photography was a capability of the native S10 camera app, the app did not support RAW video capture. Thus, a freely available app called MotionCam was used for acquisition of 10-bit RAW videos of the 2D interferogram (Fig. 2(b)). The MotionCam app enabled simple tuning of camera settings such as exposure time, ISO and field-of-view (FOV) cropping. Data acquisition was initiated by physical touch of the record button or by voice command. Once captured, the recorded data were saved to the smartphone device and/or external memory directly for processing. Switching between the apps was done by navigating to a shortcut menu on the smartphone homepage.

2.3.3 RAW data processing

Processing of the acquired RAW OCT interferograms was done using the MATLAB Mobile app, which enables MATLAB code to be run directly on the smartphone hardware (Fig. 2(c)). The processing pipeline is shown in Fig. 3, and it differs from the real-time preview pipeline in that additional steps are taken for intensity correction and distortion correction prior to OCT processing.

  • 1. Load OCT spectrum: On startup of the app, the processing script prompted the user to select the RAW dataset of interest from a folder in the smartphone’s local memory. The data were then loaded into the app as a 4032 × 1908 × N pixel (spectrum × position × frame) RGB-mosaicked image stack. Note that the image size was automatically cropped relative to the full sensor size (4032 × 3024) when loaded to remove the inactive pixels specified in the RAW meta-information.
  • 2. Scale RGB pixel intensity: The intensity of each RGB pixel was then scaled to compensate for the non-uniform spectral attenuation of the Bayer filter. This intensity correction was accomplished by dividing each R, G and B pixel of the RAW OCT spectrum by an intensity value derived from a color-specific, normalized spectral attenuation function (Fig. 4(a)). The attenuation functions were measured experimentally following the methods in Ref. [28]. The result of this operation was a spectral reshaping that compensates for the spectral attenuation induced by the Bayer filter (a minimal code sketch of this scaling and of the OCT processing in step 4 follows this list). Figure 4(b) and (c) show three 1D RGB plots of a representative interferogram taken from the center of the FOV of a mirror sample before and after intensity correction.
  • 3. Correct distortion: The intensity-corrected data were then sent through a custom distortion-correction algorithm, described below, that compensates for the distortions caused by the smartOCT imaging optics, including the additional optics associated with the OCT interferometer. In brief, a B-spline unwarping transform was used to apply the correction.
  • 4. Process OCT image: The corrected spectral image was then processed using traditional OCT methods. Background subtraction was performed, followed by resampling of the spectral data to be linear with respect to wavenumber using a polynomial function obtained via pixel-to-wavelength calibration of the spectrometer (Section 2.5). Next, the resampled spectrum was multiplied by a Hann window, and system dispersion was corrected using previously described methods [29]. Finally, the fast Fourier transform was performed, and the log of the transformed data was displayed on screen.
  • 5. Save and transfer data: The processed data could then be stored locally in the smartphone’s internal memory or on a local machine through a wired USB-C connection. Using the MATLAB app or the smartphone’s native file system, the user could also transfer data wirelessly to any local or remote device.
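The sketch below illustrates steps 2 and 4 in MATLAB under stated assumptions: an RGGB Bayer layout, placeholder attenuation curves attR/attG/attB, placeholder dispersion coefficients a2/a3 (following the phase-polynomial idea of Ref. [29]), and a placeholder helper resampleToLinearK that performs the same k-linearization shown in the preview sketch above. It assumes the Signal Processing Toolbox for hann and is not the authors’ MATLAB Mobile code.

% Step 2: divide each Bayer photosite by its normalized attenuation value.
% The spectral axis runs along the rows; attR/attG/attB are column vectors
% over that axis with values in (0, 1].
cfa = double(cfaRaw);
cfa(1:2:end, 1:2:end) = cfa(1:2:end, 1:2:end) ./ attR(1:2:end);  % red photosites
cfa(1:2:end, 2:2:end) = cfa(1:2:end, 2:2:end) ./ attG(1:2:end);  % green photosites (odd rows)
cfa(2:2:end, 1:2:end) = cfa(2:2:end, 1:2:end) ./ attG(2:2:end);  % green photosites (even rows)
cfa(2:2:end, 2:2:end) = cfa(2:2:end, 2:2:end) ./ attB(2:2:end);  % blue photosites

% Step 4: standard spectral-domain OCT processing on the corrected spectrum
spec   = specCorrected - backgroundSpec;          % background subtraction
[specK, kLin] = resampleToLinearK(spec, pWave);   % k-linearization (placeholder helper)
nK     = size(specK, 1);
win    = hann(nK);                                % Hann window along the spectral axis
kC     = kLin - mean(kLin);
phase  = exp(-1i*(a2*kC.^2 + a3*kC.^3));          % numerical dispersion compensation term
bscan  = 20*log10(abs(fft(specK .* win .* phase, [], 1)) + eps);
bscan  = bscan(1:floor(nK/2), :);                 % log-scaled B-scan, positive depths only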

Fig. 3. Flow diagram for RAW data processing. First, the RAW OCT spectrum was loaded into the processing app. Second, RGB pixel values were scaled to compensate for the Bayer filter attenuation, yielding an intensity-corrected OCT spectrum. Third, the spectrum was sent through a distortion-correction algorithm. Finally, the corrected spectral data were run through the OCT processing pipeline, consisting of background subtraction, k-space linearization, dispersion compensation, Fourier transformation and log compression, before being stored in local memory and/or transferred to a local or remote machine.

Fig. 4. Intensity scaling of RGB pixels. (a) The value of each RGB pixel of the RAW OCT spectrum was scaled by dividing it by the corresponding attenuation function. Representative OCT interferogram taken from the center of the FOV of a mirror sample (b) before and (c) after intensity scaling.

2.4 Distortion correction

The distortion-correction coefficients only needed to be extracted once for a given imaging configuration. The distortion-correction method involved imaging a grid chart of known spacing in the sample plane and using a B-spline unwarping transform to register the measured grid with a synthesized ground-truth image of the same grid [30]. The grid target (R1L3S3P, Thorlabs) had a 500-µm spacing at the focus of the sample arm. Note that because our system was designed for line imaging, a single point on the sample illumination line that was incident on a grid line resulted in a single spectral line on the sensor. To increase the contrast between the spectrum and grid lines, the grid target was placed slightly out of focus, which resulted in dark lines on the spectrum, as shown in Fig. 5(a) and (b).

Fig. 5. Distortion correction of a smartOCT spectrum. (a) Unprocessed, distorted RGB spectrum of a grid chart with 0.5-mm spacing captured on the smartphone. (b) Distortion-corrected spectrum. (c) Source (white) and target (red) points used for establishing the unwarping transform. (d) Original distorted OCT B-scan of Scotch tape and (e) distortion-corrected OCT B-scan. The white and yellow dotted boxes correspond to the regions used as signal and background in the SCR calculation, respectively. The blue and magenta boxes represent the regions that were averaged and plotted in panel (f), which shows an averaged A-scan. Scale bars are 250 µm along the positional axis (horizontal) and 50 µm along the depth axis (vertical).

The resulting 2D spectrum was processed by first segmenting and binarizing the individual grid lines. Then, 10 lateral positions on each binarized line, spaced 100 pixels apart, were selected as “source” point coordinates, which resulted in 70 source points (white circles, Fig. 5(c)). Target “ground-truth” points (red circles, Fig. 5(c)) were identified by first selecting the centermost source coordinate (at the center of the field-of-view) and calculating the λ- and y-axis pixel offset to the next closest source point. These offsets were used as the target point spacings to form a uniform grid with the same number of target points as source points. Note that this method only accounts for distortions along the y-axis (i.e., spatial distortions) since spectral distortions are compensated for during k-space linearization. Following point identification, the source points were registered to the target points using a non-linear unwarping transform (bUnwarpJ, FIJI). Next, the raw transform coefficients were saved to the calibration file and used as inputs in the main processing code to unwarp each 2D spectral frame prior to OCT processing.
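For illustration, the following MATLAB sketch applies an equivalent point-based unwarping. The paper uses the B-spline transform from bUnwarpJ in FIJI; here MATLAB’s fitgeotrans/imwarp polynomial transform (Image Processing Toolbox) stands in for it, and srcPts/tgtPts are assumed to hold the 70 source and target coordinates described above (all names are placeholders).

% Fit a 2-D geometric transform that maps the measured (distorted) grid points
% onto the synthesized uniform grid, then apply it to each spectral frame.
tform = fitgeotrans(srcPts, tgtPts, 'polynomial', 3);   % stand-in for the B-spline unwarp
ref   = imref2d(size(distortedSpec));                   % keep the original pixel grid
correctedSpec = imwarp(distortedSpec, tform, 'OutputView', ref);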

Figure 5(d) and (e) show a representative B-scan image of Scotch tape before and after the correction. Notably, the surface of the tape looks similar in the central portion of the FOV, where there are minimal distortions. Toward the outer edges of the FOV (left and right of center), the surface of the tape in Fig. 5(d) is significantly blurred when compared to the same region in the corrected image. To illustrate this point, the data within the magenta and blue boxes of the distorted and corrected B-scans, respectively, were averaged along the lateral (position) axis to enhance contrast and plotted in Fig. 5(f). The plots show a sharpened surface peak around the 20-µm depth position with a 3-dB SNR improvement (boxed inset in Fig. 5(f)) and overall improved contrast between tape layers. Quantitatively, we calculated a speckle contrast ratio (SCR) between the second tape layer and the tape gap for both images (shown as white and yellow boxes, respectively, in Fig. 5(d) and (e)), which resulted in an SCR of 1.52 and 1.66 for the distorted and corrected B-scans, respectively.

2.5 Spectrometer calibration

Spectrometer calibration was performed by leveraging the wavelength tunability of the supercontinuum laser source and filter unit. Using the NKT control software, the wavelength output of the source was set to a 10-nm bandwidth (the minimum bandwidth of this unit) centered at 520 nm. The source was then swept across each 10-nm sub-band in steps of 10 nm, and a RAW video (frames were averaged in processing to reduce noise) of the 2D spectrum was captured at each of 11 sequential wavelength values from 520 to 620 nm. To extract the pixel associated with each wavelength, each 2D sub-band spectrum was corrected for distortion and then fit to a Gaussian profile along the spectral axis. The pixel value corresponding to the peak location of the fit was identified and estimated as the center wavelength of that sub-band. Since the output of each filtered sub-band was inherently Gaussian, this method produced a reliable and repeatable calibration. A third-order polynomial fit was then calculated to provide a pixel-to-wavelength mapping function for each row of the OCT spectral data. Notably, the mapping was not the same for each row, which relates to distortion along the spectral axis.
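A minimal MATLAB sketch of this per-row calibration is given below, assuming the Curve Fitting Toolbox for the Gaussian fit; subBandSpectrum (one distortion-corrected 1D spectrum per sub-band, for a single sensor row) and all other names are placeholders rather than the authors’ variables.

centerWave = 520:10:620;                          % known center wavelengths (nm), 11 sub-bands
peakPix = zeros(size(centerWave));
for n = 1:numel(centerWave)
    s = double(subBandSpectrum(:, n));            % 1-D sub-band spectrum along the spectral axis
    g = fit((1:numel(s))', s, 'gauss1');          % Gaussian fit of the filtered sub-band
    peakPix(n) = g.b1;                            % fitted peak location in pixels
end
pWave      = polyfit(peakPix, centerWave, 3);     % third-order pixel-to-wavelength polynomial
lambdaAxis = polyval(pWave, 1:size(subBandSpectrum, 1));  % wavelength of every spectral pixel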

3. Results

3.1 Data type analysis: MP4 vs RAW

To evaluate the difference between mp4 and RAW data processing on the smartOCT system, we analyzed 2D interferograms of a mirror sample saved as RAW (10-bit) and mp4 (8-bit) data types. Each image was acquired at an exposure time of 1/8000 s, an ISO of 50 and 1× magnification. The smartphone’s autofocus feature was disabled and set to a consistent value for all acquisitions. Figures 6(a) and (b) show the RGB components from a row at the center of the FOV of the OCT interferogram for the two data types, respectively, with the black dotted box showing a zoom-in of the blue and red channels.

Fig. 6. Plots obtained from the central line of each RGB channel of the (a) RAW and (b) mp4 interferograms. Zoom-in regions of the blue and red channels in the dotted black box of each data type show artificial cropping of the mp4 data at zero intensity due to the smartphone’s internal processing. (c-d) OCT B-scans of the mirror sample from the full RAW data and mp4 data. (e) A-scans from the central line of the RAW and mp4 B-scans (magenta and blue dotted lines, respectively) showing the presence of artifacts through the full depth of the A-scan. Scale bars are 100 µm along the positional axis (horizontal) and 25 µm along the depth axis (vertical).

The zoomed-in regions show a significant difference in spectral shape and intensity values between the two data types. Importantly, the mp4 spectra contain zero-valued data points where the interferogram was effectively cut off by the smartphone’s internal processing. This occurs because the internal processing imparts a non-linear color scaling that adjusts colors to fit the color space of commercial displays and makes colors more aesthetically pleasing to the human eye. For scientific data, however, this scaling can lead to incorrect image content or misinterpretation of data. When processed as OCT data, the zeroed regions of the spectrum result in ringing artifacts akin to saturation artifacts commonly seen in OCT data. To highlight these effects, Fig. 6(c) and (d) show processed B-scans of the mirror sample from the RAW and mp4 data, respectively. The RAW B-scan shows a typical OCT signal from a mirror peak, including a single sharp peak and a uniform speckle background, while the mp4 data contain significant artifacts throughout the full depth of the B-scan. Figure 6(e) shows a comparative A-scan plot taken from the magenta and blue dotted lines of the RAW and mp4 B-scans, respectively. In our experimentation, the artifacts seen in the mp4 B-scans were more pronounced in highly reflective samples, but were present in most test cases, including scattering samples.
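The effect can be reproduced qualitatively with a simple MATLAB simulation (an illustrative toy model, not the paper’s data): clipping part of a fringe to zero, as the tone mapping effectively does, introduces harmonics and ringing throughout the depth profile.

k      = linspace(0, 1, 2048)';                       % normalized spectral axis
env    = exp(-((k - 0.5) / 0.15).^2);                 % Gaussian source envelope
fringe = env .* (1 + 0.8*cos(2*pi*120*k));            % interferogram from a single reflector
clipped = max(fringe - 0.3*max(fringe), 0);           % crude stand-in for nonlinear tone mapping
aRaw = 20*log10(abs(fft(fringe  - mean(fringe)))  + eps);
aMp4 = 20*log10(abs(fft(clipped - mean(clipped))) + eps);
plot(aRaw(1:1024)); hold on; plot(aMp4(1:1024)); hold off;
legend('unclipped (RAW-like)', 'clipped (mp4-like)');  % clipped trace shows extra peaks/ringing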

3.2 Resolution and sensitivity characterization

We characterized the performance of the smartOCT system by measuring its sensitivity, SNR falloff, and lateral and axial resolutions. The system sensitivity was measured by illuminating a mirror placed in the sample arm with 10 mW of power spread laterally across 1000 pixels. The sample illumination was then attenuated using an OD-2 absorptive neutral density filter. Considering the Gaussian intensity profile created by the cylindrical lens, we estimated the peak intensity to be 40 µW at the central field point. Using an exposure time of 1.25 ms, the theoretical sensitivity was 93 dB, and the measured peak sensitivity was 84 dB [31].

Next, the sensitivity falloff was evaluated by translating the reference mirror over a depth of 500 µm in 50-µm increments. The measured 6-dB falloff point was ∼260 µm, as shown in Fig. 7(a). The axial resolution was measured to be 2.2 µm using a mirror peak at a depth of roughly 100 µm (Fig. 7(b)). The 6-dB falloff point and axial resolution are worse than their theoretical values of 843 µm and 1.43 µm, respectively. We believe this may be due to aberrations induced by the native smartphone optics, specifically chromatic aberration, that can significantly reduce the achievable spectral resolution. Moreover, chromatic aberration has been demonstrated as a source of axial blurring in other visible-light OCT systems [21].
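The quoted theoretical axial resolution follows from the standard coherence-length expression for a Gaussian spectrum (in air); a worked instance with the source parameters used here (λ0 = 570 nm, Δλ = 100 nm) is

\delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^2}{\Delta\lambda}
         = \frac{2\ln 2}{\pi}\,\frac{(570\ \mathrm{nm})^2}{100\ \mathrm{nm}} \approx 1.43\ \mu\mathrm{m}.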

Fig. 7. SmartOCT performance characterization. (a) SNR falloff, (b) axial resolution and (c) USAF chart group 7 and group 6 element 1 zoom-in with a corresponding maximum-intensity projection of 40 adjacent B-scans and a cross-sectional plot showing that group 6 element 1 is resolved.

Finally, the lateral resolution was measured by imaging a USAF-1951 chrome negative resolution chart (38-256, Edmund Optics). Figure 7(c) shows a microscope image of group 7 and group 6 element 1 of the resolution chart (R3L1S4N, Thorlabs) and a corresponding maximum-intensity projection of 40 adjacent B-scans and cross-sectional plot showing that group 6 element 1 (15.8 µm) is clearly resolved. The measured resolution was coarser than the theoretical diffraction-limited spot size of 6 µm; we attribute the degradation to unknown aberrations imparted on the transmitted spectrum by imaging through the reverse lens and the native smartphone camera system.

3.3 Sample imaging

To demonstrate the imaging capability of smartOCT, we successfully imaged two scattering samples: Scotch tape and cucumber (Fig. 8). The data were acquired using 16 mW of extended illumination on the sample and a 5-ms exposure time.

Fig. 8. Sample imaging with the smartOCT system. (a) and (b) Raw spectral interferograms of tape and cucumber and (c) and (d) the corresponding B-scans, respectively. Scale bars are 150 µm along the y-axis (horizontal) and 50 µm along the z-axis (vertical).

Figure 8(a) and (b) show representative single-frame raw spectra from a roll of tape and from cucumber, and Fig. 8(c) and (d) show 10- and 20-frame-averaged B-scans of the same samples, respectively. The image of tape shows six layers with clear differentiation over a depth of ∼300 µm. The image of cucumber reveals clear cell structures. The full lateral FOV is 3.5 mm; however, there is notable signal reduction toward the edges of the FOV that results from the Gaussian illumination profile of the cylindrical lens and vignetting in the reverse phone-lens relay.

To further demonstrate the utility of our system, we imaged the anterior segment of an ex vivo porcine eye (Fig. 9). The data were acquired using the same illumination power as for the previous samples with an exposure time of 1.25 ms. Figure 9(a) shows a photograph of the eye with the red line indicating the location where the B-scan was acquired. Figure 9(b) and (c) show a representative single-frame raw spectrum and a 10-frame averaged B-scan from the corneal limbus of the porcine eye, respectively.

Fig. 9. Sample image of an ex vivo porcine anterior segment. (a) Photograph of the anterior segment of the eye with the red line showing the location of the B-scan. (b) Raw spectrum and (c) 10-frame averaged B-scan of the corneal limbus. Scale bars are 150 µm along the y-axis (horizontal) and 50 µm along the z-axis (vertical).

4. Discussion

In this work, we developed the first OCT system to integrate the native smartphone optics along with custom software to visualize and acquire 2D OCT B-scans in real time. In doing so, we demonstrate the potential utility of smartphones to replace some of the costly components (e.g., camera, scanner, computer, display) of OCT systems. In addition, we developed an image processing pipeline that improves imaging performance through native smartphone optics and enables high-performance scientific imaging that may be tailored for OCT or other imaging science applications. We also demonstrated the importance of using RAW rather than mp4 data to yield accurate, high-quality images. The smartOCT system provides several advantages compared to traditional OCT systems. Mainly, the use of a smartphone integrates several components (camera, PC, display) that are normally separate entities into a single compact device. As such, the cost is lower (<$6,000) than other comparable visible-light OCT systems, including the phone (market value <$300) and excluding the light source. Smartphones are at the center of innovation for small form-factor computational and graphical processing units, which can be leveraged for improved on-board image processing methods, including machine learning algorithms and data visualization. Smartphones also provide simple and efficient connectivity to Wi-Fi and cellular networks that can be used for telemedicine applications. Lastly, the ubiquity of smartphones has led to the development of countless first- and third-party software tools that make custom application development more accessible than other portable PC or microcontroller options.

The current design is a proof-of-concept benchtop system that we believe can be improved to provide a portable all-in-one smartOCT system. For example, a major limitation of this work is the use of a supercontinuum laser source, which is a common source for visible-light OCT and was helpful to ensure sufficient power for imaging. Recently, there has been progress on using broadband LED sources for visible-light OCT [32]. With additional improvements to the technology in this space, LED light sources may be viable for future smartOCT designs. That said, one limiting factor is the power throughput of the spectrometer (roughly 30%) and of the smartphone camera system. As discussed in the Methods, the smartphone sensor uses a Bayer filter that significantly attenuates wavelengths outside of the bandpass of each RGB filter. Considering that at 550 nm the red, green and blue filters transmit 5%, 90% and 10% of the incident light, respectively, an RGB super-pixel comprising one red, two green and one blue sub-pixel receives <50% of the incident light, before even considering the responsivity of the sensor itself. While it is possible to remove the Bayer filter, tampering with internal phone optics may be undesirable for deployment in certain contexts.
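As a quick check of that figure, averaging the quoted transmissions over one 2 × 2 super-pixel (assuming equal weighting of the four sub-pixels) gives

T_{\mathrm{avg}} = \frac{0.05 + 2(0.90) + 0.10}{4} \approx 0.49,

i.e., just under half of the incident light at 550 nm reaches the photosites.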

Alignment of the smartOCT system was another challenge, made difficult by the small size and limited degrees of freedom of the smartphone optics. Here, we used custom-designed 3D-printed parts to mount the smartphone and various components to standard optomechanical mounts; however, the number of optomechanical components contributes to the overall bulk of the system. Moving forward, one could use custom-machined mounts and scaffolds to reduce bulk and improve alignment sensitivity. The popularity of commercial rapid prototyping has made such components much more accessible than in previous years.

Another potential improvement relates to the generation of 3D datasets. Our design removed the traditional mechanical scanning mirrors that conventional OCT systems use to enable B-scans and volumetric imaging. In future iterations, we plan to integrate a more compact optical system into an ergonomic design that would allow manual scanning of samples, which has been demonstrated in previous handheld OCT designs [33]. Moreover, additional sensors on the smartphone, such as the gyroscope and accelerometer, could provide useful tools to monitor the motion of the system and assist with image registration.

Overall, the smartOCT system compares favorably in SNR and resolution to other published visible-light line-field OCT systems that use traditional cameras and spectrometer designs [32,34]. We believe the work presented here can be used as a foundation for future development of smartphone-integrated OCT systems. The ubiquity of smartphones, along with the continually advancing technology and their compact design offer a unique opportunity for developing OCT systems for low-resource settings.

Funding

Congressionally Directed Medical Research Programs (W81CXWH2010938); National Institutes of Health (1RO1-EY032670-01); Dorothy J. Wingfield Phillips Chancellor Faculty Fellowship.

Acknowledgments

The authors would like to thank Xiao Tang for his assistance with features of the smartphone application.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data can be made available upon request to the authors.

References

1. “Smartphone subscriptions worldwide 2027 | Statista,” https://www.statista.com/statistics/330695/number-of-smartphone-users-worldwide/ (accessed Feb. 08, 2023).

2. B. Hunt, A. J. Ruiz, and B. W. Pogue, “Smartphone-based imaging systems for medical applications: a critical review,” J. Biomed. Opt. 26(04), 040902 (2021). [CrossRef]  

3. A. K. Bowden and I. Hussain, “Smartphone-based optical spectroscopic platforms for biomedical applications: a review [Invited],” Biomed. Opt. Express 12(4), 1974–1998 (2021). [CrossRef]  

4. L. Bellina and E. Missoni, “Mobile cell-phones (M-phones) in telemicroscopy: Increasing connectivity of isolated laboratories,” Diagn. Pathol. 4(1), 1–4 (2009). [CrossRef]  

5. S. Roy, L. Pantanowitz, M. Amin, et al., “Smartphone adapters for digital photomicrography,” J. Pathol. Inform. 5(1), 24 (2014). [CrossRef]  

6. T. C. Cavalcanti, S. Kim, K. Lee, S. Y. Lee, M. K. Park, and J. Y. Hwang, “Smartphone-based spectral imaging otoscope: System development and preliminary study for evaluation of its potential as a mobile diagnostic tool,” J. Biophotonics 13(6), e2452 (2020). [CrossRef]  

7. B. Dai, Z. Jiao, L. Zheng, et al., “Colour compound lenses for a portable fluorescence microscope,” Light: Sci. Appl. 8(1), 1–13 (2019). [CrossRef]  

8. A. Semeere, A. Semeere, H. Osman, et al., “Smartphone confocal microscopy for imaging cellular structures in human skin in vivo,” Biomed. Opt. Express 9(4), 1906–1915 (2018). [CrossRef]  

9. J. C. Teichman, K. Baig, and I. I. K. Ahmed, “Simple technique to measure toric intraocular lens alignment and stability using a smartphone,” J. Cataract Refract. Surg. 40(12), 1949–1952 (2014). [CrossRef]  

10. D. N. Breslauer, R. N. Maamari, N. A. Switz, W. A. Lam, and D. A. Fletcher, “Mobile Phone Based Clinical Microscopy for Global Health Applications,” PLoS One 4(7), e6320 (2009). [CrossRef]  

11. K. C. Lee, K. Lee, J. Jung, S. H. Lee, D. Kim, and S. A. Lee, “A smartphone-based Fourier ptychographic microscope using the display screen for illumination,” ACS Photonics 8(5), 1307–1315 (2021). [CrossRef]  

12. D. Zhang and Q. Liu, “Biosensors and bioelectronics on smartphone for portable biochemical detection,” Biosens. Bioelectron. 75, 273–284 (2016). [CrossRef]  

13. X. Huang, D. Xu, J. Chen, et al., “Smartphone-based analytical biosensors,” Analyst 143(22), 5339–5351 (2018). [CrossRef]  

14. Q. He, R. Wang, and R. Wang, “Hyperspectral imaging enabled by an unmodified smartphone for analyzing skin morphological features and monitoring hemodynamics,” Biomed. Opt. Express 11(2), 895–910 (2020). [CrossRef]  

15. R. D. Uthoff, B. Song, M. Maarouf, V. Y. Shi, and R. Liang, “Point-of-care, multispectral, smartphone-based dermascopes for dermal lesion screening and erythema monitoring,” J. Biomed. Opt. 25(06), 1 (2020). [CrossRef]  

16. R. Mehta, D. Nankivi, D. J. Zielinski, et al., “Wireless, Web-Based Interactive Control of Optical Coherence Tomography with Mobile Devices,” Transl. Vis. Sci. Technol. 6(1), 5 (2017). [CrossRef]  

17. A. Rao and H. A. Fishman, “OCTAI: Smartphone-based Optical Coherence Tomography Image Analysis System,” 2021 IEEE World AI IoT Congr. AIIoT 2021, pp. 72–76, May 2021.

18. S. Pi, T. T. Hormel, X. Wei, W. Cepurna, J. C. Morrison, and Y. Jia, “Imaging retinal structures at cellular-level resolution by visible-light optical coherence tomography,” Opt. Lett. 45(7), 2107 (2020). [CrossRef]  

19. X. Shu, L. Beckmann, Y. Wang, et al., “Designing visible-light optical coherence tomography towards clinics,” Quant. Imaging Med. Surg. 9(5), 769–781 (2019). [CrossRef]  

20. J. Yi, S. Chen, X. Shu, A. A. Fawzi, and H. F. Zhang, “Human retinal imaging using visible-light optical coherence tomography guided by scanning laser ophthalmoscopy,” Biomed. Opt. Express 6(10), 3701 (2015). [CrossRef]  

21. S. P. Chong, T. Zhang, A. Kho, M. T. Bernucci, A. Dubra, and V. J. Srinivasan, “Ultrahigh resolution retinal imaging by visible light OCT with longitudinal achromatization,” Biomed. Opt. Express 9(4), 1477 (2018). [CrossRef]  

22. Y. Nakamura, S. Makita, M. Yamanari, M. Itoh, T. Yatagai, and Y. Yasuno, “High-speed three-dimensional human retinal imaging by line-field spectral domain optical coherence tomography,” Opt. Express 15(12), 7103 (2007). [CrossRef]  

23. D. J. Fechtig, T. Schmoll, B. Grajciar, W. Drexler, and R. A. Leitgeb, “Line-field parallel swept source interferometric imaging at up to 1 MHz,” Opt. Lett. 39(18), 5333 (2014). [CrossRef]  

24. L. Han, B. Tan, Z. Hosseinaee, et al., “Line-scanning SD-OCT for in-vivo, non-contact, volumetric, cellular resolution imaging of the human cornea and limbus,” Biomed. Opt. Express 13(7), 4007–4020 (2022). [CrossRef]  

25. N. A. Switz, M. V Ambrosio, and D. A. Fletcher, “Low-Cost Mobile Phone Microscopy with a Reversed Mobile Phone Camera Lens,” PLoS One 9(5), e95330 (2014). [CrossRef]  

26. C. Morikawa, M. Kobayashi, M. Satoh, et al., “Image and video processing on mobile devices: a survey,” Vis. Comput. 37(12), 2931–2949 (2021). [CrossRef]  

27. V. Blahnik and O. Schindelbeck, “Smartphone imaging technology and its applications,” Adv. Opt. Technol. 10(3), 145–232 (2021). [CrossRef]  

28. S. Pascual, N. Schmidt, J. Zamorano, et al., “Standardized spectral and radiometric calibration of consumer cameras,” Opt. Express 27(14), 19075–19101 (2019). [CrossRef]  

29. M. Wojtkowski, V. J. Srinivasan, T. H. Ko, et al., “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12(11), 2404–2422 (2004). [CrossRef]  

30. I. Arganda-Carreras, C. O. S. Sorzano, R. Marabini, J. M. Carazo, C. Ortiz-De-Solorzano, and J. Kybic, “Consistent and elastic registration of histological sections using vector-spline regularization,” Lect. Notes Comput. Sci. 4241, 85–95 (2006). [CrossRef]  

31. M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of swept source and Fourier domain optical coherence tomography,” Opt. Express 11(18), 2183 (2003). [CrossRef]  

32. Y. Wang and X. Liu, “Line field Fourier domain optical coherence tomography based on a spatial light modulator,” Appl. Opt. 60(4), 985–992 (2021). [CrossRef]  

33. R. Dsouza, J. Won, G. L. Monroy, D. R. Spillman, and S. A. Boppart, “Economical and compact briefcase spectral-domain optical coherence tomography system for primary care and point-of-care applications,” J. Biomed. Opt. 23(09), 1 (2018). [CrossRef]  

34. F. Xing, F. Xing, J.-H. Lee, C. Polucha, J. Lee, and J. Lee, “Design and optimization of line-field optical coherence tomography at visible wavebands,” Biomed. Opt. Express 12(3), 1351–1365 (2021). [CrossRef]  
