
Systematic meta-analysis of computer-aided detection to detect early esophageal cancer using hyperspectral imaging

Open Access

Abstract

Esophageal cancer (EC) is one of the leading causes of cancer deaths because identifying it at an early stage is challenging. Computer-aided diagnosis (CAD) methods that can detect the early stages of EC have been developed in recent years. Therefore, this study presents a complete meta-analysis of selected studies that use only hyperspectral imaging to detect EC, evaluated in terms of their diagnostic test accuracy (DTA). Eight studies were chosen based on the QUADAS-2 tool results for systematic DTA analysis, and each of the methods developed in these studies was classified based on the nationality of the data, the artificial intelligence used, the type of image, the type of cancer detected, and the year of publication. Deeks’ funnel plots, forest plots, and accuracy charts were generated. The methods studied in these articles show that the automatic diagnosis of EC has high accuracy, but external validation, which is a prerequisite for real-time clinical applications, is lacking.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Cancer is projected to surpass all other noncommunicable diseases as the foremost cause of mortality worldwide [1]. Esophageal cancer (EC) is the eighth most prevalent of the roughly 200 types of cancer globally. It is associated with a bleak prognosis, rendering it one of the primary contributors to cancer-related mortality [2]. Esophageal squamous cell carcinoma (ESCC) is the predominant form of EC and has been its most frequently occurring type over the last four decades [3]. ESCC is particularly prevalent in developing nations, especially in Africa and Asia; in Japan, approximately 90% of esophageal cancer cases are attributed to ESCC [4]. Smoking is a prevalent risk factor for the development of ESCC [5]. Moreover, alcohol consumption is the most potent risk factor for ESCC among individuals diagnosed with squamous cell carcinoma of the head and neck [6]. In contrast, developed nations such as those in America and Europe exhibit a higher prevalence of esophageal adenocarcinoma (EAC), whose risk factors include smoking, obesity, and gastroesophageal reflux disease [7]. EAC is also more prevalent in males than in females. In recent years, deaths associated with esophageal complications have increased significantly, with an estimated annual toll of over 400,000 [8]. According to previous studies, the five-year survival rate for EC is between 15% and 20%; however, if the cancer is detected in its early stages, the survival rate can increase to 80% [9]. Odynophagia and dysphagia are typically identified only during the later stages of the disease, so esophageal cancer is predominantly diagnosed at an advanced stage [10].

Given the concerning mortality rate associated with EC, the development of technologies for its early and efficient detection is a top priority for researchers across various disciplines; such advances are expected to significantly improve the survival rate [11]. Numerous scholars are presently developing computer-aided diagnosis (CAD) models for the early detection of cancer [12–43]. CAD models have made a remarkable contribution to aiding endoscopists in the detection of EC [44]. Wang et al. devised a deep convolutional neural network (CNN) integrated with a single-shot multibox detector (SSD) to detect early EC from narrow-band imaging (NBI) and white light imaging (WLI) results [10]. Li and colleagues investigated the feasibility of serum surface-enhanced Raman spectroscopy with silver nanoparticles (Ag NPs) in conjunction with a support vector machine (SVM) to differentiate between patients with EC and those without cancer, achieving a diagnostic accuracy of 85.2% [45]. Semantic segmentation has also been investigated as a means for the early detection and classification of esophageal cancer [15]. Biosensors have been demonstrated to be efficacious in the detection of cancer, as evidenced by prior research [46–69], and have been suggested as a cost-efficient alternative to CAD for the timely detection of cancer [70]. Tseng and colleagues fabricated a photoelectrochemical (PEC) biosensor featuring a p-n heterojunction, incorporating a well-crystallized Cu2O/ZnO structure that endows the device with visible-light photoresponse and favorable electrical characteristics. The biosensor's configuration facilitates the identification of two distinct levels of cancerous esophageal cells, namely a normative cell type derived from a Caucasian male (OE21) and a highly invasive cancer cell variant prevalent among Caucasian males (OE21-1) [63]. Wu et al. integrated a p-type Cu2O film with n-type ZnO nanorods to fabricate a photoelectrochemical biosensor designed to detect human EC cells without external bias; the biosensor exhibited an enhanced photocurrent signal, facilitating the detection of severe EC cells [67]. Wang et al. analyzed EC cells using a microchip with dielectrophoretic impedance measurement for early treatment and diagnosis. This biosensor technology helpfully distinguished different stages of ESCC and provided findings consistent with those obtained using hyperspectral imaging (HSI) technology [65].
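As a hedged illustration of the SVM-based spectral classification described above, the following Python sketch trains a linear SVM to separate synthetic "normal" from "cancerous" spectra. The band count, class separation, and scikit-learn pipeline are assumptions made for this sketch, not the actual setup of Li et al.

```python
# Illustrative sketch only: a linear SVM separating "normal" vs "cancerous"
# spectra. The synthetic spectra, band count, and class mean shift are
# placeholders, not data from any study reviewed here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 64                                        # hypothetical band count
normal = rng.normal(0.0, 1.0, size=(200, n_bands))
cancer = rng.normal(0.4, 1.0, size=(200, n_bands))  # small mean shift per band
X = np.vstack([normal, cancer])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```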

Despite the capabilities of CAD combined with RGB image processing and of biosensors in providing quality information for cancer detection, these approaches have limitations. Traditional CAD requires a large amount of training data and computational power to achieve good machine-learning performance because it relies on only three color channels (red, green, and blue) [71]. The efficacy of CAD models in detecting cancer is compromised by this technological limitation, as seen in a recent CAD model for colonoscopy that employed a restricted dataset of only 6,000 images for machine learning [72]. The detection of tumor markers through biosensors is feasible owing to the optical, magnetic, and other distinctive characteristics of nanomaterials, which enable the identification of biomarkers and tumor vasculature with a high degree of specificity; nevertheless, the adaptability of these nanomaterials to the environment remains a significant obstacle [73,74]. One method that could overcome these challenges is the combination of HSI with CAD methodologies. Capturing images across a wide spectrum of light ranging from the UV to the far infrared, as opposed to the conventional RGB color model, has the potential to enhance cancer detection performance. This noninvasive approach provides more comprehensive information about the subject, as evidenced by research findings [75].

HSI follows the principle of optical sensing technology for imaging and spectroscopy [76]. It provides both spatial and spectral information about the subject in a noninvasive way [77]. A hyperspectral image is formed when the spectral data of each pixel in a 2D scene are detected; because both spatial and spectral information are obtained, the origin of each spectrum within the subject can be identified [78]. Compared with the naked eye, which has very limited capability to distinguish objects across the electromagnetic spectrum, HSI can capture far more spectral information than RGB imaging [79]. HSI has three common acquisition techniques, namely, push broom, filter based, and whisk broom, each of which results in a 3D hypercube comprising one spectral and two spatial axes [80]. Push broom provides data at high spectral and spatial resolution, but it can cause complexities during post-processing because it records only one line of spectral data per exposure [81]. Whisk broom records lateral and spectral information pixel by pixel, making it more time consuming than push broom [82]. The filter-based technique is wavelength-coded imaging that uses optical filters, such as RGB narrowband filters, to capture spectral data [83,84]. Various improvements in HSI have been made in the past 30 years, and it has contributed effectively to many fields, including the environment, medicine, astronomy, security, archaeology, agriculture, art conservation, military defense, and food quality [85–181]. HSI technology is beneficial in the field of medicine, especially in various cancer detection and diagnosis methods. Table 1 shows research studies of cancer detection and diagnosis using HSI technology in recent years.
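To make the hypercube structure concrete, the short numpy sketch below builds a placeholder cube with two spatial axes and one spectral axis, then extracts a single-pixel spectrum and a single-band image. The dimensions and random reflectance values are assumptions for illustration only.

```python
# Minimal sketch of the 3D hypercube produced by push-broom, whisk-broom, or
# filter-based HSI: two spatial axes plus one spectral axis. The cube here is
# random data with hypothetical dimensions, standing in for a real capture.
import numpy as np

height, width, n_bands = 256, 320, 150          # hypothetical sensor geometry
wavelengths = np.linspace(380, 780, n_bands)    # visible band, in nm
cube = np.random.rand(height, width, n_bands)   # placeholder reflectance cube

# Every pixel carries a full spectrum, so the tissue origin of each spectrum
# can be localized spatially:
spectrum = cube[120, 200, :]                    # spectrum at pixel (120, 200)
band_image = cube[:, :, np.argmin(np.abs(wavelengths - 415))]  # ~415 nm slice
print(spectrum.shape, band_image.shape)         # (150,), (256, 320)
```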


Table 1. Studies of HSI in other medical applications, including other cancer detection

Table 2 briefly describes the commercially available endoscopic machines that use CAD. FujiFilm, known for technologies converging from analog photography and digital diagnostic systems through medical and healthcare technologies, developed an endoscopic machine with a new artificial intelligence (AI) called CAD EYE, which uses WLI, bioluminescence imaging, and linked color imaging [187]. Yoshida et al. evaluated the CAD EYE AI and reported impressive results, helpful for recognizing lesions with CADe and optically diagnosing colorectal polyps with CADx at an accuracy of 87.6% [188]. CAD EYE can also function as a real-time colorectal polyp detector [189]. GI Genius is a medical device trained to process colonoscopic images (CADe) and display suspicious lesions in real time using NBI, with an accuracy of 99.7% [190]. Brand et al. suggested that the AI used in GI Genius (CADe) has great potential in real-time endoscopic detection [191]. Odin Vision currently offers machine learning that aids device decision making in real-time polyp detection, which can influence the clinician’s interpretation and diagnosis [192]; Odin Vision uses NBI endoscopic imaging. EVIS X1 is a third-generation chromoendoscopic technology developed by Olympus to address endoscopists’ need for better detection and diagnosis of lesions in real time using NBI, red dichromatic imaging, and texture and color enhancement imaging [193]. Tang et al. used different endoscopes from Olympus, including variations of EVIS, to better predict gastric cancer, which resulted in a better diagnostic performance that helped endoscopists [187,194]. Pentax was also mentioned in another study by Milluzzo et al. that reviewed several AIs for colonoscopy; they found that Pentax Discovery can support endoscopists in the real-time detection of polyps and lesions with an accuracy of 90% [195].


Table 2. Endoscopic machines using CAD from different companies

This study analyzes recent studies on EC detection and diagnosis using HSI technology in combination with CAD methodology. It evaluates the diagnostic performance of the CAD + HSI algorithms used for EC detection and diagnosis in terms of sensitivity, specificity, accuracy, and area under the curve (AUC). The review briefly explains the studies and presents recommendations according to the meta-analysis of the different CAD + HSI methods involved.

2. Materials and methods

This section discusses the process of acquiring studies relevant to this review, specifically, studies related to EC detection and diagnosis using HSI technology, and presents the inclusion and exclusion criteria used to select appropriate studies.

2.1 Study selection criteria

The purpose of this review is to provide an understanding of the advances in EC detection and diagnosis using HSI and to exhibit the strengths and weaknesses of such systems in terms of EC detection. This review focuses on studies that meet the established inclusion criteria: (1) studies with definitive numerical results such as dataset size, sensitivity, accuracy, precision, and AUC; (2) studies based on HSI dealing with EC detection; (3) studies published in the last six years; (4) studies whose publication journal has an H-index greater than 50 and is in the first quartile (Q1); (5) studies with a prospective or retrospective design; and (6) studies written in English. Additionally, this review disregards studies that fall under the exclusion criteria: (1) studies with insufficient data; (2) narrative reviews, systematic reviews, and meta-analyses; (3) comments, proceedings, or study protocols; and (4) conference papers. A sketch of this screening step appears below. The Quality Assessment of Diagnostic Accuracy Studies Version 2 (QUADAS-2) tool was used by the two authors to assess the quality of the methodologies of the articles reviewed. QUADAS-2 assesses bias in patient selection and during the index test; it also assesses the reference standard and the risk of bias in flow and timing as domains. The applicability assessment was likewise performed alongside the bias assessment by the two authors [196].
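As a minimal illustration of the screening just described, the sketch below encodes the inclusion criteria as a simple filter. The study records, field names, and values are hypothetical and exist only for this example.

```python
# Hedged illustration of the inclusion screen; field names and candidate
# records are assumptions for the sketch, not the authors' actual pipeline.
from dataclasses import dataclass

@dataclass
class Study:
    year: int
    h_index: int
    quartile: str
    uses_hsi: bool
    has_metrics: bool
    language: str
    kind: str           # "original", "review", "conference", ...

def included(s: Study, current_year: int = 2023) -> bool:
    return (s.has_metrics
            and s.uses_hsi
            and current_year - s.year <= 6          # last six years
            and s.h_index > 50 and s.quartile == "Q1"
            and s.language == "English"
            and s.kind == "original")               # excludes reviews etc.

candidates = [Study(2022, 120, "Q1", True, True, "English", "original"),
              Study(2015, 90, "Q1", True, True, "English", "original"),   # too old
              Study(2021, 40, "Q2", True, True, "English", "original")]   # journal fails
print([included(s) for s in candidates])            # [True, False, False]
```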

2.2 QUADAS-2 result

Table 3 summarizes the QUADAS-2 results of the eight studies included in this review. It contains the applicability concerns and the level of risk of bias of the studies. Each study was reviewed based on flow and timing, patient selection, reference standard, and index test for risk of bias, and on patient selection, reference standard, and index test for applicability concerns.


Table 3. QUADAS-2 Summary

3. Results

This section presents the results of the review, including the clinical features observed, a brief explanation of each study, and the numerical results gathered from each study. It compares the results in terms of sensitivity, specificity, accuracy, and AUC, and incorporates the tabulated results together with the summary receiver operating characteristic (SROC) curve.

3.1 Clinical features observed in the studies

The studies selected for this analysis examine the performance of various CAD methods for EC detection and diagnosis. The studies included in this review are explained briefly, highlighting their objectives, the CAD algorithms used, and their results. Moreover, the accuracy, sensitivity, specificity, and AUC in detecting and classifying EC lesions and neoplasms from each article were recorded using subgrouping and meta-analysis. These indices were assessed and compared across the different CAD methods used in the articles.

Tsai et al. used HSI to identify EC stages and mark their locations using the CNN of the SSD model. A total of 155 WLI and 153 NBI images from 1,232 endoscopic images were used for dataset training (AI-HSI) in the visible band between 380 and 780 nm. The results showed that the use of HSI increased the accuracy of detecting cancer lesions by 5% in both WLI and NBI images [202]. In another study, Maktabi et al. used ex vivo specimen images from 11 patients to determine the performance of HSI in intraoperative analysis during esophageal resection. The dataset was trained and classified with four different methods, namely, SVM, k-nearest neighbors (k-NN), multilayer perceptron (MLP), and random forest (RF). The results showed that the use of HSI in classifying esophagogastric resectates achieved a sensitivity of 63% and a specificity of 69% [203].

Hohmann et al. presented the results of an endoscopy of the stomach and esophagus for early cancer detection using Gaussian- and linear-kernel SVM, RobustBoost (RB), AdaBoost (AB), and RF applied with HSI in the visible band between 400 nm and 650 nm. The results indicated that RB was the best of the five classifiers, with a sensitivity of 63%, a specificity of 65%, and an accuracy of 64% [186]. Nakano et al. proposed the use of low-concentration Lugol stain and narrowband illumination, a highly sensitive yet less invasive cancer detection technique in the visible band between 420 nm and 720 nm. The proposed illumination (MBSI) was compared with WLI and NBI in differentiating normal and cancerous elements. The proposed illuminations, namely, MBSI-6 and MBSI-9 with Lugol staining, showed a performance better than 90% [182]. Grigoroiu et al. tested a five-layer CNN system, established under a standard Macbeth color-classification method, for analyzing endoscopic HSI images in real time, with data acquired at center wavelengths of 450, 550, and 650 nm. The CNN was applied to ex vivo biopsy data from 12 human esophagi and obtained an average consistency of 86.9% in classifying tissue biopsies into four types labeled normal, adenocarcinoma, Barrett’s esophagus, and squamous epithelium [204].

A study by Maktabi et al. used a pixel-wise classification method to differentiate HSI image classes such as EAC, tumor stroma, and squamous epithelium cells, gathered from 95 patients undergoing oncologic esophagectomy, in the visible band between 500 nm and 750 nm. Logistic regression, MLP, SVM, and RF were the learning methods used for analysis. The study obtained an average accuracy of 78% and a specificity of 84% for discriminating tumor stroma from EAC; for squamous epithelium cells, the overall average was 81%, and MLP was the best classifier [205]. Wu et al. proposed an early detection method for esophageal cancerous lesions based on HSI and endoscopy. This study presented the use of HSI and principal component analysis as an optical detection approach for esophageal cancerous lesions in the visible band between 380 nm and 780 nm, which can help physicians identify cancer lesions early [206]. Collins et al. examined the possibility of expanding training datasets to overcome a key limitation of machine learning, namely its need for large datasets. A total of 22 images spanning healthy to cancerous colon and esophagogastric tissues were used in the visible and NIR bands between 550 nm and 1000 nm. The results showed that the 3D-CNN model was more accurate than traditional learning models, and MLP obtained better results than RBF-SVM [207].

Table 4 shows the observed clinical characteristics of the studies involved. Studies involving CAD algorithms for EC detection can be classified as image-based analyses [197,203] or patient/specimen-based analyses [198–202,182]; these studies differ in how the needed images were gathered. The study by Wu et al. showcased both patient/specimen-based and image-based analyses [203]. The two image-based analyses together comprised 1,452 images: 260 normal, 276 low-grade dysplasia, 425 high-grade dysplasia, and 491 EC images. Moreover, 414 patients were involved across all of the studies. Three studies used Asian datasets representing Asian populations [197,200,203], and 5 of the 8 studies represented European populations [198,199,201,202,182]. CNN [197,201,203,182] and SVM [198–200,182] were the most common CAD algorithms used in the studies, and two studies used MLP [198,202]. Most studies used WLI images as the default endoscopic image type [197,199–202], whereas one study introduced a new type of endoscopic image [200]. The prediction of the severity of EC was addressed in five studies [197,200–202,182], of which two used a CNN algorithm [197,201], one used SVM [200], one used MLP [202], and another combined CNN, SVM, and MLP algorithms [182]. These studies categorized the severity of EC lesions, whether ESCC or EAC, from normal to severe.


Table 4. Clinical features of the studies considered, including nationality, method, lighting, accuracy, sensitivity, specificity, number of images, and AUC

3.2 Meta-analysis of the studies

Table 5 shows the meta-analysis and subgroup analysis for EC diagnosis. Among the 8 studies included in this review, the average accuracy, sensitivity, specificity, and AUC were 79.55% (58.2%–89.5%), 71.15% (28%–90%), 79.64% (61%–89%), and 80.21% (70%–91%), respectively. Studies on Asian populations showed a much higher accuracy of 84%, sensitivity of 83.22%, and specificity of 86.71% compared with studies on European populations, which achieved an accuracy of 76.7%, sensitivity of 76.7%, and specificity of 74.33%. One factor favoring the Asian studies was the number of images involved: Tsai et al. used 1,232 WLI and NBI images and achieved 89.5% accuracy, 89.4% sensitivity, and 89.15% specificity, a dataset considerably larger than that of the European study by Maktabi et al., which used only 95 specimens and achieved 85% accuracy, 73.7% sensitivity, and 77.3% specificity. This finding supports the observation that a larger training dataset for CAD algorithms yields a better diagnostic performance [204].
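The subgroup averages in Table 5 are simple pooled means over the per-study metrics. The sketch below reproduces that computation, using two of the studies quoted above as illustrative records; a full reproduction would list all eight extracted studies.

```python
# Sketch of the subgroup averaging reported in Table 5. The records below are
# illustrative stand-ins (two studies only), not the full extracted dataset.
from statistics import mean

studies = [
    {"region": "Asian",    "acc": 89.5, "sens": 89.4, "spec": 89.15},  # Tsai et al.
    {"region": "European", "acc": 85.0, "sens": 73.7, "spec": 77.3},   # Maktabi et al.
    # ... the remaining six studies would be listed here
]

for region in ("Asian", "European"):
    sub = [s for s in studies if s["region"] == region]
    print(region,
          f"acc={mean(s['acc'] for s in sub):.1f}%",
          f"sens={mean(s['sens'] for s in sub):.1f}%",
          f"spec={mean(s['spec'] for s in sub):.1f}%")
```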


Table 5. Subgroup and meta-analysis of diagnostic test accuracy, with data classified by nationality, machine learning model, endoscopic image type, esophageal cancer type, and publication date

The employment of deep learning for image recognition in medicine has been a trend in recent years [205]. The meta-analysis suggests that CNN was the leading CAD for EC detection and diagnosis, supported by an accuracy, sensitivity, specificity, and AUC of 88.2%, 83.77%, 90.35%, and 91%, respectively. CNN had the best performance compared with other machine/deep learning methods such as SVM (68.33% accuracy, 63.85% sensitivity, 90.35% specificity, and 74.5% AUC) and MLP (68.33% accuracy, 47.85% sensitivity, 76.15% specificity, and 83.65% AUC). Although CNN and SVM are both AIs used in EC image recognition, they exhibit different strengths: CNN can recognize overlooked biological features and predict an individual’s risk of certain diseases [206], whereas SVM can suppress data noise owing to its sparsity property [207]. Nevertheless, CNN was more effective in EC detection based on the data obtained in this meta-analysis.

Studies that used NBI endoscopic images presented the best results (83% accuracy, 83.59% sensitivity, and 85.87% specificity) compared with studies that used WLI endoscopic images (78.48% accuracy, 78.28% sensitivity, and 76.56% specificity), supporting the observation by Tsai et al. that NBI is more sensitive than WLI. One benefit of NBI is that it prevents small cancerous lesions from being missed during detection [208]. NBI performs better than WLI in cancer detection because it uses narrow-bandwidth filters with center wavelengths of 415 nm and 540 nm [209]. Furthermore, the 415 nm light is strongly absorbed by hemoglobin, which enables the microvascular structure of an organ to be distinctly recognized [210]. Another type of image, multiband switching illumination (MBSI), was proposed in the study of Nakano et al. MBSI was designed using multiple three-band illuminations inside the endoscope; each three-band illumination (3-BI) was composed of RGB LEDs combined with the characteristics of narrowband light emission. MBSI showed encouraging results (81.65% accuracy, 86.55% sensitivity, and 81% specificity) [200].

EAC and ESCC are the two most prevalent histological types of EC [211]. They share common risk factors such as smoking, alcohol consumption, and an increased prevalence in men and in older patients [212]. Despite these shared risk factors, some risks are specific to one type: for example, Barrett’s esophagus is the main risk factor for EAC, whereas achalasia is the main risk factor for ESCC [213]. Endoscopic images of EAC were more commonly used by the studies in this review (76.7% accuracy, 73.4% sensitivity, 75.07% specificity, and 78.65% AUC), considering that most studies used European data, where EAC is more prominent than ESCC [214]. However, the data for ESCC were insufficient because most studies did not specify the type of EC used; Maktabi et al. used both ESCC and EAC data, with results of 28.2% sensitivity and 72.13% specificity.

Furthermore, an increasing trend in accuracy, sensitivity, specificity, and AUC was observed when the studies were grouped by year of publication: for example, from 58.2% accuracy, 59.6% sensitivity, 61% specificity, and 70% AUC for a study published before 2018 to 87.25% accuracy, 81.68% sensitivity, 83.23% specificity, and 84.33% AUC for more recent studies. This result reflects researchers’ increasing attention to HSI [215]. Hence, continued research on HSI technology can be advantageous for improving its overall performance in cancer recognition and diagnosis [216].

3.3 SROC curve and subgroup meta-analysis

An SROC curve was created for EC detection with endoscopic images. The average sensitivities and specificities of the studies were plotted to better visualize and compare their performances [217]. Studies with CAD methods that achieved a better performance are plotted in the upper-left part of the graph, approaching 100% in both sensitivity and specificity, whereas methods near the origin are considered to have low performance. The study by Tsai et al., using CNN as the CAD method, obtained the highest sensitivity of 91.8%; the study by Maktabi et al., using RF as the CAD method, generated the lowest sensitivity of 22%. Figure 1 shows the SROC based on the sensitivity and specificity of the CAD methods. The SROC analysis yielded a p value of 0.13, suggesting that no statistically significant heterogeneity was present among the studies involved.
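For readers who wish to reproduce this kind of plot, the sketch below scatters (1 − specificity, sensitivity) pairs in ROC space and labels each point with a study index, as in Fig. 1. The pairs are placeholders, and a full SROC fit (e.g., a bivariate model) would be layered on top of this scatter.

```python
# Illustrative SROC-style scatter: each study's (1 - specificity, sensitivity)
# pair in ROC space. The values below are placeholders, not the extracted data.
import matplotlib.pyplot as plt

sens = [0.918, 0.63, 0.596, 0.22]      # illustrative per-method sensitivities
spec = [0.89, 0.69, 0.61, 0.77]        # matching illustrative specificities

fpr = [1 - s for s in spec]            # false-positive rate axis
plt.scatter(fpr, sens)
for i, (x, y) in enumerate(zip(fpr, sens), start=1):
    plt.annotate(str(i), (x, y))       # index each study, as in Fig. 1
plt.xlabel("1 - specificity")
plt.ylabel("sensitivity")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.title("SROC scatter (illustrative)")
plt.show()
```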


Fig. 1. SROC curve based on the sensitivity and specificity of the CAD methods. Each number indicates the study index (see Table 5). AUC = 0.686.


Additionally, an accuracy graph was generated to visually compare the CAD methods used in the studies based on their accuracy in EC detection. Figure 2 shows the overall accuracy of the different CAD methods. The most commonly used method was CNN, together with several types of SVM, and the use of CNN in EC detection was dominant among the CAD methods in the different studies. Consequently, the highest accuracy of 89.5% was obtained with CNN in the investigation by Tsai et al. By contrast, Hohmann et al. achieved the lowest accuracy of 55% in EC detection using AB and RF as the CAD methods.


Fig. 2. Overall accuracy performance of the CAD methods. Different colors represent the methods used in each study.


Furthermore, forest plots for the sensitivity and specificity of each CAD method and of each study involved were generated at the 95% confidence level. In the classification by CAD method, CNN provided the best performance, with 85% sensitivity, and carried the largest overall weight; in the classification by study, the study by Tsai et al. likewise showed the best performance, with 90% sensitivity. In contrast, the studies by Maktabi et al. (2022) and Wu et al., and CAD methods such as RobustBoost and AdaBoost, provided limited data; thus, their upper and lower confidence limits could not be determined. Furthermore, a meta-regression analysis was performed to compare the sensitivities and specificities of the data according to nationality, image type, AI, esophageal cancer type, and publication date. Figure 3 shows the univariable meta-regression of sensitivity and specificity at the 95% confidence level; it yielded narrow confidence limits for the sensitivities of the different classifications, with the exception of publication year. Finally, Deeks’ funnel plots were produced for the different classifications (CAD method, nationality, and image type) and for all classifications combined. A Deeks’ funnel plot relates the diagnostic odds ratio of each study to the inverse of the square root of its effective sample size [218,219]. The Deeks’ funnel plots obtained in this study provided no indication of publication bias [study (p = 0.20), image type (p = 0.78), CAD method (p = 0.11), all classifications (p = 0.43)]. Figure 4 shows the funnel plot of the studies involved; the studies were found to correlate with each other, with a regression line of 36.69.
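The quantities behind a Deeks' funnel plot can be computed directly from each study's 2×2 confusion counts, as in the sketch below. The counts shown are invented for illustration; the asymmetry test itself would then regress lnDOR on 1/√ESS.

```python
# Sketch of the quantities behind a Deeks' funnel plot: the log diagnostic
# odds ratio (lnDOR) of each study against 1 / sqrt(effective sample size).
# The 2x2 counts below are invented for illustration only.
import math

def deeks_point(tp, fp, fn, tn):
    # 0.5 continuity correction guards against zero cells
    dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))
    n_pos, n_neg = tp + fn, fp + tn
    ess = 4 * n_pos * n_neg / (n_pos + n_neg)   # effective sample size
    return math.log(dor), 1 / math.sqrt(ess)

for counts in [(90, 10, 12, 88), (45, 20, 25, 60), (30, 8, 40, 70)]:
    ln_dor, x = deeks_point(*counts)
    print(f"lnDOR={ln_dor:.2f}, 1/sqrt(ESS)={x:.3f}")
```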


Fig. 3. Univariable meta-regression of the different subgroup analyses, including nationality, image type, AI, EC type, and publication year: (a) sensitivity, (b) specificity.



Fig. 4. Deeks’ funnel plot of the studies used in the meta-analysis.


4. Discussion

Studies on the detection and diagnosis of EC using HSI technology have yielded a diagnostic accuracy deemed “very good” according to Youden’s index, as determined by the area under the curve (AUC) [220]. This finding implies that the CAD algorithms are appropriate for clinical diagnosis, particularly the diagnosis of EC, in compliance with the DTA standard [221]. Several recent studies on the use of CAD algorithms for cancer detection have demonstrated their excellent performance as computer-based aids in diagnosing and identifying EC.

While HSI technology shows promise in the detection of esophageal cancer, current clinical guidelines for managing and diagnosing this condition continue to depend heavily on the judgment and perspective of the endoscopist [222]. The integration of AI into intraoperative analysis has led to only gradual adoption of automated EC detection, with a relatively slow transition among endoscopists; it is anticipated that HSI technology for EC detection will gain greater acceptance in the near future. Whatever the endoscopists’ response to this technology, CAD algorithms appear to offer advantageous diagnostic capabilities regardless of the endoscopist’s level of fatigue, making an increase in the identification of lesions highly probable. This study demonstrated the efficacy of HSI technology in detecting cancer at an early stage. The challenge of identifying EC during its initial phases was established in a previous study [217], and numerous studies examined in this review concentrated on detecting early-stage EC and yielded noteworthy outcomes, indicating that early diagnosis of EC would furnish insights for prompt treatment and improved patient prognosis.

The American Society for Gastrointestinal Endoscopy has implemented the Preservation and Incorporation of Valuable Endoscopic Innovation (PIVI) initiative, which establishes a performance threshold of 90% sensitivity and 80% specificity per patient for image-enhanced endoscopy in the detection of high-grade dysplasia and EAC [223]. The studies under consideration met the PIVI criteria in terms of their overall specificity; however, their overall sensitivity did not. The sensitivity exhibited a wide range of values, from 28% to 90%, owing to the disparate outcomes reported by the various studies: some reported suboptimal outcomes, whereas others demonstrated exceptional performance. Furthermore, the efficacy of a CAD algorithm may vary depending on the specific characteristics of the endoscopic images employed; the performance of CAD utilizing NBI images was superior to that of CAD utilizing WLI images. The diagnostic efficacy was found to be broadly comparable across the various CAD models employed, namely CNN, SVM, and MLP.

Despite the robust diagnostic performance, various limitations were identified in the studies. One limitation is the restricted patient sample size: this review involved 414 patients, which is notably smaller than the study conducted by Bang et al., which included up to 2,102 patients [224]. A further constraint pertains to the inadequate involvement of patients diagnosed with ESCC. The restricted number of participants in the aforementioned studies may have undermined the validity of the findings; thus, it is advisable to include a larger sample size in future investigations. The second issue pertains to the restricted size of the training datasets. A limited quantity of training data can impede the learning process of artificial intelligence, thereby affecting its diagnostic efficacy. Maktabi et al. employed a training dataset comprising 94 of 95 total specimens, which was relatively limited compared with the 39,662 of 67,740 training images utilized by Garcia et al. in their investigation of diagnostic accuracy [225]. Moreover, none of the aforementioned studies included a forecast of the invasion depth of EC, although Tokai et al. demonstrated the diagnostic potential of CNN in accurately and sensitively measuring the depth of invasion in ESCC, reporting an accuracy of 80.9% and a sensitivity of 84.1% [226]. It would have been advantageous to observe and analyze the depth of EC invasion in order to expand the scope of CAD beyond merely automated EC detection. Future research may explore HSI technology incorporating a greater number of CAD algorithms, with a heightened emphasis on the invasion depth of EC. The efficacy of CAD algorithms is contingent upon either the extent of the training data or the fidelity with which the dataset represents the underlying distribution. Furthermore, the studies included in this review did not furnish a comparison between CAD-based diagnosis and that of conventional endoscopists.

5. Conclusion

Recent advances in HSI for EC detection show promising performance. With further research, the dilemma of early EC detection will gradually be overcome, facilitating a decrease in EC mortality. Evidently, as studies of HSI in EC accumulate, the performance achieved improves, and appropriate CAD algorithms can strongly support diagnostic accuracy for EC; in this study, certain CAD methods were found to be particularly well suited to the use of HSI in EC detection. The effectiveness of HSI in EC detection also depended on the type of image (NBI or WLI) used in the diagnosis. However, the limitations of the included studies, such as insufficient patient participation and the limited endoscopic images and training datasets involved in diagnostics, should be addressed to improve the utility of HSI technology in EC diagnosis.

Funding

National Taiwan University Hospital (112-N0033); Kaohsiung Armed Forces General Hospital (110-015); National Science and Technology Council (NSTC 111-2221-E-194-007).

Acknowledgments

Author contributions: conceptualization, C.S., A.M., and H.-C.W.; methodology, C.S. and A.M.; software, A.M.; validation, A.M. and H.-C.W.; formal analysis, A.M.; investigation, A.M.; resources, H.-C.W.; data curation, A.M.; writing—original draft preparation, C.S. and A.M.; writing—review and editing, A.M.; supervision, A.M. and H.-C.W.; project administration, H.-C.W. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of National Taiwan University Hospital (202301103RIND) and Kaohsiung Armed Forces General Hospital (KAFGHIRB 112-018).

Informed Consent Statement: Written informed consent was waived in this study because of the retrospective, anonymized nature of the study design.

Disclosures

The authors declare no conflicts of interest.

Data availability

The data presented in this study are available in this article.

Supplemental document

See Supplement 1 for supporting content.

References

1. J. Ferlay, M. Colombet, I. Soerjomataram, C. Mathers, D. M. Parkin, M. Pineros, A. Znaor, and F. Bray, “Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods,” Int. J. Cancer 144(8), 1941–1953 (2019). [CrossRef]  

2. R. E. Melhado, D. Alderson, and O. Tucker, “The changing face of esophageal cancer,” Cancers 2(3), 1379–1404 (2010). [CrossRef]  

3. E. O. Then, M. Lopez, S. Saleem, V. Gayam, T. Sunkara, A. Culliford, and V. Gaduputi, “Esophageal cancer: an updated surveillance epidemiology and end results database analysis,” World J. Oncol. 11(2), 55–64 (2020). [CrossRef]  

4. N. Ishimura, E. Okimoto, K. Shibagaki, and S. Ishihara, “Endoscopic diagnosis and screening of Barrett's esophagus: Inconsistency of diagnostic criteria between Japan and Western countries,” DEN Open 2, e73 (2022). [CrossRef]  

5. K. J. Napier, M. Scheerer, and S. Misra, “Esophageal cancer: A Review of epidemiology, pathogenesis, staging workup and treatment modalities,” World J Gastrointest Oncol 6(5), 112–120 (2014). [CrossRef]  

6. Y. K. Wang, Y. S. Chuang, T. S. Wu, K. W. Lee, C. W. Wu, H. C. Wang, C. T. Kuo, C. H. Lee, W. R. Kuo, C. H. Chen, D. C. Wu, and I. C. Wu, “Endoscopic screening for synchronous esophageal neoplasia among patients with incident head and neck cancer: Prevalence, risk factors, and outcomes,” Int. J. Cancer 141(10), 1987–1996 (2017). [CrossRef]  

7. M. Rao, “Esophageal Cancer,” Mount Sinai Expert Guides: Oncology (Wiley, 2019), pp. 139–152.

8. C. Mariette, S. R. Markar, T. S. Dabakuyo-Yonli, B. Meunier, D. Pezet, D. Collet, X. B. D’Journo, C. Brigand, T. Perniceni, N. Carrere, J. Y. Mabrut, S. Msika, F. Peschaud, M. Prudhomme, F. Bonnetain, G. Piessen, C. Federation de Recherche en, and G. French Eso-Gastric Tumors Working, “Hybrid Minimally Invasive Esophagectomy for Esophageal Cancer,” N Engl J Med 380(2), 152–162 (2019). [CrossRef]  

9. T. W. Rice, H. Ishwaran, W. L. Hofstetter, D. P. Kelsen, C. Apperson-Hansen, E. H. Blackstone, and I. Worldwide Esophageal Cancer Collaboration, “Recommendations for pathologic staging (pTNM) of cancer of the esophagus and esophagogastric junction for the 8th edition AJCC/UICC staging manuals,” Dis Esophagus 29(8), 897–905 (2016). [CrossRef]  

10. Y. K. Wang, H. Y. Syu, Y. H. Chen, C. S. Chung, Y. S. Tseng, S. Y. Ho, C. W. Huang, I. C. Wu, and H. C. Wang, “Endoscopic images by a single-shot multibox detector for the identification of early cancerous lesions in the esophagus: a pilot study,” Cancers 13(2), 321 (2021). [CrossRef]  

11. Y. Zhang, “Epidemiology of esophageal cancer,” World J Gastroenterol 19(34), 5598–5606 (2013). [CrossRef]  

12. H. Akbari, L. V. Halig, D. M. Schuster, A. Osunkoya, V. Master, P. T. Nieh, G. Z. Chen, and B. Fei, “Hyperspectral imaging and quantitative analysis for prostate cancer detection,” J. Biomed. Opt. 17(7), 0760051 (2012). [CrossRef]  

13. H. Chung, G. Lu, Z. Tian, D. Wang, Z. G. Chen, and B. Fei, “Superpixel-based spectral classification for the detection of head and neck cancer with hyperspectral imaging,” Proc SPIE 9788, 978813 (2016). [CrossRef]  

14. H. Fabelo, S. Ortega, D. Ravi, et al., “Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations,” PLoS One 13(3), e0193721 (2018). [CrossRef]  

15. Y. J. Fang, A. Mukundan, Y. M. Tsao, C. W. Huang, and H. C. Wang, “Identification of early esophageal cancer by semantic segmentation,” J. Pers. Med. 12(8), 1204 (2022). [CrossRef]  

16. B. Fei, G. Lu, X. Wang, H. Zhang, J. V. Little, M. R. Patel, C. C. Griffith, M. W. El-Diery, and A. Y. Chen, “Label-free reflectance hyperspectral imaging for tumor margin assessment: a pilot study on surgical specimens of cancer patients,” J. Biomed. Opt. 22(08), 1–7 (2017). [CrossRef]  

17. M. Halicek, J. D. Dormer, J. V. Little, A. Y. Chen, and B. Fei, “Tumor detection of the thyroid and salivary glands using hyperspectral imaging and deep learning,” Biomed. Opt. Express 11(3), 1383–1400 (2020). [CrossRef]  

18. M. Halicek, J. D. Dormer, J. V. Little, A. Y. Chen, L. Myers, B. D. Sumer, and B. Fei, “Hyperspectral Imaging of Head and Neck Squamous Cell Carcinoma for Cancer Margin Detection in Surgical Specimens from 102 Patients Using Deep Learning,” Cancers 11(9), 1367 (2019). [CrossRef]  

19. M. Halicek, H. Fabelo, S. Ortega, G. M. Callico, and B. Fei, “In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer,” Cancers 11(6), 756 (2019). [CrossRef]  

20. M. Halicek, J. V. Little, X. Wang, A. Y. Chen, and B. Fei, “Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks,” J. Biomed. Opt. 24(03), 1–9 (2019). [CrossRef]  

21. H. Akbari, L. V. Halig, H. Zhang, D. Wang, Z. G. Chen, and B. Fei, “Detection of cancer metastasis using a novel macroscopic hyperspectral method,” Proc. SPIE 8317, 831711 (2012). [CrossRef]  

22. D. Haunschild and A. Eikhof, “Understanding in multinational organizations,” J. Organ. Behav 28, 303–325 (2007). [CrossRef]  

23. B. Jansen-Winkeln, M. Barberio, C. Chalopin, K. Schierle, M. Diana, H. Kohler, I. Gockel, and M. Maktabi, “Feedforward Artificial Neural Network-Based Colorectal Cancer Detection Using Hyperspectral Imaging: A Step towards Automatic Optical Biopsy,” Cancers 13(5), 967 (2021). [CrossRef]  

24. P. R. Jeyaraj and E. R. Samuel Nadar, “Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm,” J. Cancer Res. Clin. Oncol. 145(4), 829–837 (2019). [CrossRef]  

25. S. Kiyotoki, J. Nishikawa, T. Okamoto, K. Hamabe, M. Saito, A. Goto, Y. Fujita, Y. Hamamoto, Y. Takeuchi, S. Satori, and I. Sakaida, “New method for detection of gastric cancer by hyperspectral imaging: a pilot study,” J. Biomed. Opt. 18(2), 026010 (2013). [CrossRef]  

26. R. Leon, H. Fabelo, S. Ortega, J. F. Pineiro, A. Szolna, M. Hernandez, C. Espino, A. J. O’Shanahan, D. Carrera, S. Bisshopp, C. Sosa, M. Marquez, J. Morera, B. Clavo, and G. M. Callico, “VNIR-NIR hyperspectral imaging fusion targeting intraoperative brain cancer detection,” Sci. Rep. 11(1), 19696 (2021). [CrossRef]  

27. R. Leon, B. Martinez-Vega, H. Fabelo, S. Ortega, V. Melian, I. Castano, G. Carretero, P. Almeida, A. Garcia, E. Quevedo, J. A. Hernandez, B. Clavo, and M. C. Griffith, “Non-Invasive Skin Cancer Diagnosis Using Hyperspectral Imaging for In-Situ Clinical Support,” J. Clin. Med. 9(6), 1662 (2020). [CrossRef]  

28. G. Lu, J. V. Little, X. Wang, H. Zhang, M. R. Patel, C. C. Griffith, M. W. El-Deiry, A. Y. Chen, and B. Fei, “Detection of Head and Neck Cancer in Surgical Specimens Using Quantitative Hyperspectral Imaging,” Clin Cancer Res 23(18), 5426–5436 (2017). [CrossRef]  

29. G. Lu, X. Qin, D. Wang, Z. G. Chen, and B. Fei, “Quantitative Wavelength Analysis and Image Classification for Intraoperative Cancer Diagnosis with Hyperspectral Imaging,” Proc. SPIE 9415, 94151B (2015). [CrossRef]  

30. G. Lu, D. Wang, X. Qin, L. Halig, S. Muller, H. Zhang, A. Chen, B. W. Pogue, Z. G. Chen, and B. Fei, “Framework for hyperspectral image processing and quantification for cancer detection during animal tumor surgery,” J. Biomed. Opt. 20(12), 126012 (2015). [CrossRef]  

31. L. Ma, G. Lu, D. Wang, X. Qin, Z. G. Chen, and B. Fei, “Adaptive deep learning for head and neck cancer detection using hyperspectral imaging,” Vis. Comput. Ind. Biomed. Art 2(1), 18 (2019). [CrossRef]  

32. B. Martinez, R. Leon, H. Fabelo, S. Ortega, J. F. Pineiro, A. Szolna, M. Hernandez, C. Espino, J. O. S. A. D. Carrera, S. Bisshopp, C. Sosa, M. Marquez, R. Camacho, M. L. Plaza, J. Morera, and M. G. Callico, “Most Relevant Spectral Bands Identification for Brain Cancer Detection Using Hyperspectral Imaging,” Sensors 19(24), 5481 (2019). [CrossRef]  

33. B. Regeling, W. Laffers, A. O. H. Gerstner, S. Westermann, N. A. Müller, K. Schmidt, J. Bendix, and B. Thies, “Development of an image pre-processor for operational hyperspectral laryngeal cancer detection,” J. Biophotonics 9(3), 235–245 (2016). [CrossRef]  

34. B. Regeling, B. Thies, A. O. Gerstner, S. Westermann, N. A. Muller, J. Bendix, and W. Laffers, “Hyperspectral Imaging Using Flexible Endoscopy for Laryngeal Cancer Detection,” Sensors 16(8), 1288 (2016). [CrossRef]  

35. A. M. Siddiqi, H. Li, F. Faruque, W. Williams, K. Lai, M. Hughson, S. Bigler, J. Beach, and W. Johnson, “Use of hyperspectral imaging to distinguish normal, precancerous, and cancerous cells,” Cancer 114(1), 13–21 (2008). [CrossRef]  

36. E. Torti, G. Florimbi, F. Castelli, S. Ortega, H. Fabelo, G. Callicó, M. Marrero-Martin, and F. Leporati, “Parallel K-Means Clustering for Brain Cancer Detection Using Hyperspectral Images,” Electronics 7(11), 283 (2018). [CrossRef]  

37. T. J. Tsai, A. Mukundan, Y. S. Chi, Y. M. Tsao, Y. K. Wang, T. H. Chen, I. C. Wu, C. W. Huang, and H. C. Wang, “Intelligent Identification of Early Esophageal Cancer by Band-Selective Hyperspectral Imaging,” Cancers 14(17), 4292 (2022). [CrossRef]  

38. H.-C. Wang, “Cancerous lesion identifying method via hyper-spectral imaging technique,” patent (2018).

39. X. Yuan, D. Zhang, C. Wang, B. Dai, M. Zhao, and B. Li, “Hyperspectral Imaging and SPA–LDA Quantitative Analysis for Detection of Colon Cancer Tissue,” J. Appl. Spectrosc. 85(2), 307–312 (2018). [CrossRef]  

40. Y. Zhang, X. Wu, L. He, C. Meng, S. Du, J. Bao, and Y. Zheng, “Applications of hyperspectral imaging in the detection and diagnosis of solid tumors,” Transl Cancer Res 9(2), 1265–1277 (2020). [CrossRef]  

41. Y.-H. Zhang, L.-J. Guo, X.-L. Yuan, and B. Hu, “Artificial intelligence-assisted esophageal cancer management: Now and future,” World J. Gastroenterol. 26(35), 5256–5271 (2020). [CrossRef]  

42. H.-Y. Huang, Y.-P. Hsiao, A. Mukundan, Y.-M. Tsao, W.-Y. Chang, and H.-C. Wang, “Classification of Skin Cancer Using Novel Hyperspectral Imaging Engineering via YOLOv5,” J. Clin. Med. 12(3), 1134 (2023). [CrossRef]  

43. C.-Y. Wang, A. Mukundan, Y.-S. Liu, Y.-M. Tsao, F.-C. Lin, W.-S. Fan, and H.-C. Wang, “Optical Identification of Diabetic Retinopathy Using Hyperspectral Imaging,” J. Pers. Med. 13(6), 939 (2023). [CrossRef]  

44. Y. Mori, S. E. Kudo, H. E. N. Mohmed, M. Misawa, N. Ogata, H. Itoh, M. Oda, and K. Mori, “Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective,” Dig Endosc 31(4), 378–388 (2019). [CrossRef]  

45. S.-X. Li, Q.-Y. Zeng, L.-F. Li, Y.-J. Zhang, M.-M. Wan, Z.-M. Liu, H.-L. Xiong, Z.-Y. Guo, and S.-H. Liu, “Study of support vector machine and serum surface-enhanced Raman spectroscopy for noninvasive esophageal cancer detection,” J. Biomed. Opt. 18(2), 027008 (2013). [CrossRef]  

46. S. Augustine, P. Kumar, and B. D. Malhotra, “Amine-Functionalized MoO3@RGO Nanohybrid-Based Biosensor for Breast Cancer Detection,” ACS Appl. Bio Mater. 2(12), 5366–5378 (2019). [CrossRef]  

47. L. L. Chan, S. L. Gosangari, K. L. Watkin, and B. T. Cunningham, “A label-free photonic crystal biosensor imaging method for detection of cancer cell cytotoxicity and proliferation,” Apoptosis 12(6), 1061–1068 (2007). [CrossRef]  

48. S. Chupradit, S. Ashfaq, D. Bokov, W. Suksatan, A. T. Jalil, A. M. Alanazi, and M. Sillanpaa, “Ultra-Sensitive Biosensor with Simultaneous Detection (of Cancer and Diabetes) and Analysis of Deformation Effects on Dielectric Rods in Optical Microstructure,” Coatings 11(12), 1564 (2021). [CrossRef]  

49. R. D’Agata, M. C. Giuffrida, and G. Spoto, “Peptide Nucleic Acid-Based Biosensors for Cancer Diagnosis,” Molecules 22(11), 1951 (2017). [CrossRef]  

50. R. Geetha Bai, K. Muthoosamy, R. Tuvikene, H. Nay Ming, and S. Manickam, “Highly Sensitive Electrochemical Biosensor Using Folic Acid-Modified Reduced Graphene Oxide for the Detection of Cancer Biomarker,” Nanomaterials 11(5), 1272 (2021). [CrossRef]  

51. A. S. Ghrera, C. M. Pandey, M. A. Ali, and B. D. Malhotra, “Quantum dot-based microfluidic biosensor for cancer detection,” Appl. Phys. Lett. 106(19), 193703 (2015). [CrossRef]  

52. Y. P. Hsiao, A. Mukundan, W. C. Chen, M. T. Wu, S. C. Hsieh, and H. C. Wang, “Design of a Lab-On-Chip for Cancer Cell Detection through Impedance and Photoelectrochemical Response Analysis,” Biosensors 12(6), 405 (2022). [CrossRef]  

53. W.-C. Law, K.-T. Yong, A. Baev, and P. N. Prasad, “Sensitivity Improved Surface Plasmon Resonance Biosensor for Cancer Biomarker Detection Based on Plasmonic Enhancement,” ACS Nano 5(6), 4858–4864 (2011). [CrossRef]  

54. J. H. Leung, H. T. Nguyen, S. W. Feng, S. B. Artemkina, V. E. Fedorov, S. C. Hsieh, and H. C. Wang, “Characteristics of P-Type and N-Type Photoelectrochemical Biosensors: A Case Study for Esophageal Cancer Detection,” Nanomaterials 11(5), 1065 (2021). [CrossRef]  

55. K. C. Li, M. Y. Lu, H. T. Nguyen, S. W. Feng, S. B. Artemkina, V. E. Fedorov, and H. C. Wang, “Intelligent Identification of MoS(2) Nanostructures with Hyperspectral Imaging by 3D-CNN,” Nanomaterials 10(6), 1161 (2020). [CrossRef]  

56. G. Liu, X. Mao, J. A. Phillips, H. Xu, W. Tan, and L. Zeng, “Aptamer-nanoparticle strip biosensor for sensitive detection of cancer cells,” Anal. Chem. 81(24), 10013–10018 (2009). [CrossRef]  

57. S. Lu and Y. Wang, “Fluorescence resonance energy transfer biosensors for cancer detection and evaluation of drug efficacy,” Clin. Cancer Res. 16(15), 3822–3824 (2010). [CrossRef]  

58. A. Mukundan, S. W. Feng, Y. H. Weng, Y. M. Tsao, S. B. Artemkina, V. E. Fedorov, Y. S. Lin, Y. C. Huang, and H. C. Wang, “Optical and Material Characteristics of MoS(2)/Cu(2)O Sensor for Detection of Lung Cancer Cell Types in Hydroplegia,” Int. J. Mol. Sci. 23(9), 4745 (2022). [CrossRef]  

59. A. Mukundan, Y. M. Tsao, S. B. Artemkina, V. E. Fedorov, and H. C. Wang, “Growth Mechanism of Periodic-Structured MoS(2) by Transmission Electron Microscopy,” Nanomaterials 12(1), 135 (2021). [CrossRef]  

60. S. Myung, A. Solanki, C. Kim, J. Park, K. S. Kim, and K. B. Lee, “Graphene-encapsulated nanoparticle-based biosensor for the selective detection of cancer biomarkers,” Adv. Mater. 23(19), 2221–2225 (2011). [CrossRef]  

61. J. C. Nicole, W. H. Scott, A. Mehrdad, A. G. Frederick, W. L. Ira, and A. E. Eric, “Evidence of a heterogeneous tissue oxygenation: renal ischemia/reperfusion injury in a large animal model,” J. Biomed. Opt. 18(3), 035001 (2013). [CrossRef]  

62. T. Pasinszki, M. Krebsz, T. T. Tung, and D. Losic, “Carbon Nanomaterial Based Biosensors for Non-Invasive Detection of Cancer and Disease Biomarkers for Clinical Diagnosis,” Sensors 17(8), 1919 (2017). [CrossRef]  

63. K. W. Tseng, Y. P. Hsiao, C. P. Jen, T. S. Chang, and H. C. Wang, “Cu(2)O/PEDOT:PSS/ZnO Nanocomposite Material Biosensor for Esophageal Cancer Detection,” Sensors 20(9), 2455 (2020). [CrossRef]  

64. Y. Uludag, F. Narter, E. Sağlam, G. Köktürk, M. Y. Gök, M. Akgün, S. Barut, and S. Budak, “An integrated lab-on-a-chip-based electrochemical biosensor for rapid and sensitive detection of cancer biomarkers,” Anal. Bioanal. Chem. 408(27), 7775–7783 (2016). [CrossRef]  

65. H. C. Wang, N. V. Nguyen, R. Y. Lin, and C. P. Jen, “Characterizing Esophageal Cancerous Cells at Different Stages Using the Dielectrophoretic Impedance Measurement Method in a Microchip,” Sensors 17(5), 1053 (2017). [CrossRef]  

66. L. Wang, Y. Wang, J. I. Wong, T. Palacios, J. Kong, and H. Y. Yang, “Functionalized MoS(2) nanosheet-based field-effect biosensor for label-free sensitive detection of cancer marker proteins in solution,” Small 10(6), 1101–1105 (2014). [CrossRef]  

67. I. C. Wu, Y.-H. Weng, M.-Y. Lu, C.-P. Jen, V. E. Fedorov, W. C. Chen, M. T. Wu, C.-T. Kuo, and H.-C. Wang, “Nano-structure ZnO/Cu2O photoelectrochemical and self-powered biosensor for esophageal cancer cell detection,” Opt. Express 25(7), 7689–7706 (2017). [CrossRef]  

68. T. Xu, Y. Song, W. Gao, T. Wu, L. P. Xu, X. Zhang, and S. Wang, “Superwettable Electrochemical Biosensor toward Detection of Cancer Biomarkers,” ACS Sens. 3(1), 72–78 (2018). [CrossRef]  

69. A. Yasli, “Cancer Detection with Surface Plasmon Resonance-Based Photonic Crystal Fiber Biosensor,” Plasmonics 16(5), 1605–1612 (2021). [CrossRef]  

70. L. Wang, “Screening and Biosensor-Based Approaches for Lung Cancer Detection,” Sensors 17(10), 2420 (2017). [CrossRef]  

71. S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan, “Medical Image Analysis using Convolutional Neural Networks: A Review,” J. Med. Syst. 42(11), 226 (2018). [CrossRef]  

72. Y. Mori, S. E. Kudo, T. M. Berzin, M. Misawa, and K. Takeda, “Computer-aided diagnosis for colonoscopy,” Endoscopy 49(08), 813–819 (2017). [CrossRef]  

73. L. Chao, Y. Liang, X. Hu, H. Shi, T. Xia, H. Zhang, and H. Xia, “Recent advances in field effect transistor biosensor technology for cancer detection: a mini review,” J. Phys. D: Appl. Phys. 55(15), 153001 (2022). [CrossRef]  

74. R. Goldoni, A. Scolaro, E. Boccalari, C. Dolci, A. Scarano, F. Inchingolo, P. Ravazzani, P. Muti, and G. Tartaglia, “Malignancies and Biosensors: A Focus on Oral Cancer Detection through Salivary Biomarkers,” Biosensors 11(10), 396 (2021). [CrossRef]  

75. Y. Li, H. Zhang, and Q. Shen, “Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network,” Remote Sensing 9(12), 1330 (2017). [CrossRef]  

76. J. Ma, D.-W. Sun, H. Pu, Q. Wei, and X. Wang, “Protein content evaluation of processed pork meats based on a novel single shot (snapshot) hyperspectral imaging sensor,” J. Food Eng. 240, 207–213 (2019). [CrossRef]  

77. S.-C. Chang, H.-Y. Syu, Y.-L. Wang, C.-J. Lai, S.-Y. Huang, and H.-C. Wang, “Identifying the incidence level of periodontal disease through hyperspectral imaging,” Opt. Quantum Electron. 50(11), 409 (2018). [CrossRef]  

78. A. Signoroni, M. Savardi, A. Baronio, and S. Benini, “Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review,” J. Imaging 5(5), 52 (2019). [CrossRef]  

79. M. J. Khan, H. S. Khan, A. Yousaf, K. Khurshid, and A. Abbas, “Modern Trends in Hyperspectral Image Analysis: A Review,” IEEE Access 6, 14118–14129 (2018). [CrossRef]  

80. S. Paulus and A.-K. Mahlein, “Technical workflows for hyperspectral plant image assessment and processing on the greenhouse and laboratory scale,” GigaScience 9(8), giaa090 (2020). [CrossRef]  

81. M. B. Stuart, A. J. S. McGonigle, and J. R. Willmott, “Hyperspectral Imaging in Environmental Monitoring: A Review of Recent Developments and Technological Advances in Compact Field Deployable Systems,” Sensors 19(14), 3071 (2019). [CrossRef]  

82. M. C. Bassler, M. Stefanakis, I. Sequeira, E. Ostertag, A. Wagner, J. W. Bartsch, M. Roeßler, R. Mandic, E. F. Reddmann, A. Lorenz, K. Rebner, and M. Brecht, “Comparison of Whiskbroom and Pushbroom darkfield elastic light scattering spectroscopic imaging for head and neck cancer identification in a mouse model,” Anal. Bioanal. Chem. 413(30), 7363–7383 (2021). [CrossRef]  

83. L. Huang, R. Luo, X. Liu, and X. Hao, “Spectral imaging with deep learning,” Light: Sci. Appl. 11(1), 61 (2022). [CrossRef]  

84. N. Mehta, S. Shaik, R. Devireddy, and M. R. Gartia, “Single-Cell Analysis Using Hyperspectral Imaging Modalities,” J. Biomech. Eng. 140(2), 0208021 (2018). [CrossRef]  

85. Environmental Security and Ecoterrorism, 1 ed., NATO Science for Peace and Security Series C: Environmental Security (Springer), pp. XIV, 188.

86. doi:10.25165/ijabe.v9i2.2137.

87. Brown, “Hyperspectral imaging spectroscopy,” (2005).

88. D. Yudovsky, A. Nouvong, and L. Pilon, “Hyperspectral imaging in diabetic foot wound care,” J. Diabetes Sci. Technol. 4(5), 1099–1113 (2010). [CrossRef]  

89. Hyperspectral Imaging Technology in Food and Agriculture, 1 ed. (Springer, 2015), Vol. 1.

90. Advances in Crop Environment Interaction, 1 ed. (Springer, 2018), pp. XIII, 437.

91. J. M. Amigo, “Practical issues of hyperspectral imaging analysis of solid dosage forms,” Anal. Bioanal. Chem. 398(1), 93–109 (2010). [CrossRef]  

92. I. Aneece and P. S. Thenkabail, “Classifying Crop Types Using Two Generations of Hyperspectral Sensors (Hyperion and DESIS) with Machine Learning on the Cloud,” Remote Sens. 13(22), 4704 (2021). [CrossRef]  

93. A. Luukanen, A. J. Miller, and E. N. Grossman, “Passive hyperspectral terahertz imagery for security screening using a cryogenic microbolometer,” in Proc. SPIE, (2005), 127–134.

94. J. Y. Barnaby, T. D. Huggins, H. Lee, A. M. McClung, S. R. M. Pinson, M. Oh, G. R. Bauchan, L. Tarpley, K. Lee, M. S. Kim, and J. D. Edwards, “Vis/NIR hyperspectral imaging distinguishes sub-population, production environment, and physicochemical grain properties in rice,” Sci. Rep. 10(1), 9284 (2020). [CrossRef]  

95. V. Bayarri, M. A. Sebastián, and S. Ripoll, “Hyperspectral Imaging Techniques for the Study, Conservation and Management of Rock Art,” Appl. Sci. 9(23), 5011 (2019). [CrossRef]

96. V. Bayarri, E. Castillo, S. Ripoll, and M. A. Sebastián, “Improved Application of Hyperspectral Analysis to Rock Art Panels from El Castillo Cave (Spain),” Appl. Sci. 11(3), 1292 (2021). [CrossRef]  

97. J. Behmann, K. Acebron, D. Emin, S. Bennertz, S. Matsubara, S. Thomas, D. Bohnenkamp, M. T. Kuska, J. Jussila, H. Salo, A. K. Mahlein, and U. Rascher, “Specim IQ: Evaluation of a New, Miniaturized Handheld Hyperspectral Camera and Its Application for Plant Phenotyping and Disease Detection,” Sensors 18(2), 441 (2018). [CrossRef]  

98. C. H. Bock, G. H. Poole, P. E. Parker, and T. R. Gottwald, “Plant Disease Severity Estimated Visually, by Digital Photography and Image Analysis, and by Hyperspectral Imaging,” Critical Reviews in Plant Sciences 29(2), 59–107 (2010). [CrossRef]  

99. C. M. Chance, N. C. Coops, A. A. Plowright, T. R. Tooke, A. Christen, and N. Aven, “Invasive Shrub Mapping in an Urban Environment from Hyperspectral and LiDAR-Derived Attributes,” Front. Plant Sci. 7, 528 (2016). [CrossRef]

100. C. C. Trees, P. W. Bissett, H. Dierssen, D. D. R. Kohler, M. A. Moline, J. L. Mueller, R. E. Pieper, M. S. Twardowski, and J. R. V. Zaneveld, “Monitoring water transparency and diver visibility in ports and harbors using aircraft hyperspectral remote sensing,” Proc. SPIE 5780, 91–98 (2005). [CrossRef]

101. C.-W. Chen, Y.-S. Tseng, A. Mukundan, and H.-C. Wang, “Air Pollution: Sensitive Detection of PM2.5 and PM10 Concentration Using Hyperspectral Imaging,” Appl. Sci. 11(10), 4543 (2021). [CrossRef]  

102. H. Chen, Z. Liu, C. Tanougast, and J. Ding, “Optical Hyperspectral Image Cryptosystem Based on Affine Transform and Fractional Fourier Transform,” Appl. Sci. 9(2), 330 (2019). [CrossRef]  

103. C. Cheng and B. Zhao, “Prospect of Application of Hyperspectral Imaging Technology in Public Security,” in International Conference on Applications and Techniques in Cyber Security and Intelligence ATCI 2018, (Springer International Publishing, 2019), 299–304.

104. C. Cucci, J. K. Delaney, and M. Picollo, “Reflectance Hyperspectral Imaging for Investigation of Works of Art: Old Master Paintings and Illuminated Manuscripts,” Acc. Chem. Res. 49(10), 2070–2079 (2016). [CrossRef]  

105. I. Dhau, E. Adam, O. Mutanga, and K. K. Ayisi, “Detecting the severity of maize streak virus infestations in maize crop using in situ hyperspectral data,” Trans. R. Soc. S. Afr. 73(1), 8–15 (2018). [CrossRef]  

106. P. Duan, Z. Xie, X. Kang, and S. Li, “Self-supervised learning-based oil spill detection of hyperspectral images,” Sci. China: Technol. Sci. 65(4), 793–801 (2022). [CrossRef]  

107. E. A. Ashton, B. D. Wemett, R. A. Leathers, and T. V. Downes, “A novel method for illumination suppression in hyperspectral images,” Proc. SPIE 6966, 69660C (2008). [CrossRef]

108. S. A. El Rahman, “Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image,” Int. J. Adv. Comput. Sci. Appl. 7(5), 1 (2016). [CrossRef]  

109. G. Elmasry, D. F. Barbin, D.-W. Sun, and P. Allen, “Meat Quality Evaluation by Hyperspectral Imaging Technique: An Overview,” Crit. Rev. Food Sci. Nutr. 52(8), 689–711 (2012). [CrossRef]  

110. R. Ennis, F. Schiller, M. Toscani, and K. R. Gegenfurtner, “Hyperspectral database of fruits and vegetables,” J. Opt. Soc. Am. A 35(4), B256–B266 (2018). [CrossRef]  

111. Y. Fu, C. Zhao, J. Wang, X. Jia, G. Yang, X. Song, and H. Feng, “An Improved Combination of Spectral and Spatial Features for Vegetation Classification in Hyperspectral Images,” Remote Sens. 9(3), 261 (2017). [CrossRef]  

112. X. Ge, J. Ding, X. Jin, J. Wang, X. Chen, X. Li, J. Liu, and B. Xie, “Estimating Agricultural Soil Moisture Content through UAV-Based Hyperspectral Images in the Arid Region,” Remote Sens. 13(8), 1562 (2021). [CrossRef]  

113. A. Habib, Y. Han, W. Xiong, F. He, Z. Zhang, and M. Crawford, “Automated Ortho-Rectification of UAV-Based Hyperspectral Data over an Agricultural Field Using Frame RGB Imagery,” Remote Sens. 8(10), 796 (2016). [CrossRef]  

114. E. K. Hege, D. O'Connell, W. Johnson, S. Basty, and E. L. Dereniak, “Hyperspectral imaging for astronomy and space surveillance,” Proc. SPIE 5159, 380–391 (2004). [CrossRef]

115. W. D. Hively, G. W. McCarty, J. B. Reeves, M. W. Lang, R. A. Oesterling, and S. R. Delwiche, “Use of Airborne Hyperspectral Imagery to Map Soil Properties in Tilled Agricultural Fields,” Applied and Environmental Soil Science 2011, 1–13 (2011). [CrossRef]  

116. Y. Inoue and J. Peñuelas, “An AOTF-based hyperspectral imaging system for field use in ecophysiological and agricultural applications,” Int. J. Remote Sens. 22(18), 3883–3888 (2001). [CrossRef]  

117. S. D. M. Jacques, C. K. Egan, M. D. Wilson, M. C. Veale, P. Seller, and R. J. Cernik, “A laboratory system for element specific hyperspectral X-ray imaging,” Analyst 138(3), 755–759 (2013). [CrossRef]  

118. W. Jiang, J. Li, X. Yao, E. Forsberg, and S. He, “Fluorescence Hyperspectral Imaging of Oil Samples and Its Quantitative Applications in Component Analysis and Thickness Estimation,” Sensors 18(12), 4415 (2018). [CrossRef]  

119. J. Fisher, J. A. Antoniades, C. Rollins, and L. Xiang, “Hyperspectral imaging sensor for the coastal environment,” Proc. SPIE 3482, 179–186 (1998). [CrossRef]

120. K. Yao, “Nondestructive detection for egg freshness grade based on hyperspectral imaging technology,” J. Food Process Eng. 43(7), e13422 (2020). [CrossRef]

121. C. H. Lee, A. Mukundan, S. C. Chang, Y. L. Wang, S. H. Lu, Y. C. Huang, and H. C. Wang, “Comparative Analysis of Stress and Deformation between One-Fenced and Three-Fenced Dental Implants Using Finite Element Analysis,” J. Clin. Med. 10(17), 3986 (2021). [CrossRef]  

122. H. Liang, “Advances in multispectral and hyperspectral imaging for archaeology and art conservation,” Appl. Phys. A 106(2), 309–323 (2012). [CrossRef]  

123. B. Liu, Y. Li, G. Li, and A. Liu, “A Spectral Feature Based Convolutional Neural Network for Classification of Sea Surface Oil Spill,” ISPRS International Journal of Geo-Information 8(4), 160 (2019). [CrossRef]  

124. L. Liu and M. O. Ngadi, “Detecting Fertility and Early Embryo Development of Chicken Eggs Using Near-Infrared Hyperspectral Imaging,” Food Bioprocess Technol. 6(9), 2503–2513 (2013). [CrossRef]  

125. V. Lodhi, D. Chakravarty, and P. Mitra, “Hyperspectral Imaging for Earth Observation: Platforms and Instruments,” Journal of the Indian Institute of Science 98(4), 429–443 (2018). [CrossRef]  

126. A. Lowe, N. Harrison, and A. P. French, “Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress,” Plant Methods 13(1), 80 (2017). [CrossRef]  

127. S. Lu, K. Oki, Y. Shimizu, and K. Omasa, “Comparison between several feature extraction/classification methods for mapping complicated agricultural land use patches using airborne hyperspectral data,” Int. J. Remote Sens. 28(5), 963–984 (2007). [CrossRef]  

128. D. MacLennan, K. Trentelman, Y. Szafran, A. T. Woollett, J. K. Delaney, K. Janssens, and J. Dik, “Rembrandt's An Old Man in Military Costume: Combining hyperspectral and MA-XRF imaging to understand how two paintings were painted on a single panel,” J. Am. Inst. Conserv. 58(1-2), 54–68 (2019). [CrossRef]  

129. T. Martin, L. Drissen, and G. Joncas, “ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM,” in Astronomical Data Analysis Software and Systems XXIV (ADASS XXIV) 495 (2015).

130. R. Milestad and I. Darnhofer, “Building Farm Resilience: The Prospects and Challenges of Organic Farming,” Journal of Sustainable Agriculture 22(3), 81–97 (2003). [CrossRef]  

131. G. T. Miyoshi, N. N. Imai, A. M. G. Tommaselli, E. Honkavaara, R. Näsi, and É. A. S. Moriya, “Radiometric block adjustment of hyperspectral image blocks in the Brazilian environment,” Int. J. Remote Sens. 39(15-16), 4910–4930 (2018). [CrossRef]  

132. G. Mozgeris, V. Juodkienė, D. Jonikavičius, L. Straigytė, S. Gadal, and W. Ouerghemmi, “Ultra-Light Aircraft-Based Hyperspectral and Colour-Infrared Imaging to Identify Deciduous Tree Species in an Urban Environment,” Remote Sens. 10(10), 1668 (2018). [CrossRef]  

133. M. E. Murphy, B. Boruff, J. N. Callow, and K. C. Flower, “Detecting Frost Stress in Wheat: A Controlled Environment Hyperspectral Study on Wheat Plant Components and Implications for Multispectral Field Sensing,” Remote Sens. 12(3), 477 (2020). [CrossRef]  

134. R. J. Murphy, B. Whelan, A. Chlingaryan, and S. Sukkarieh, “Quantifying leaf-scale variations in water absorption in lettuce from hyperspectral imagery: a laboratory study with implications for measuring leaf water content in the context of precision agriculture,” Precision Agric. 20(4), 767–787 (2019). [CrossRef]  

135. W. Nie, B. Zhang, and S. Zhao, “Discriminative Local Feature for Hyperspectral Hand Biometrics by Adjusting Image Acutance,” Appl. Sci. 9(19), 4178 (2019). [CrossRef]  

136. H. Okamoto, Y. Suzuki, T. Kataoka, and K. Sakai, “Unified hyperspectral imaging methodology for agricultural sensing using software framework,” in International Society for Horticultural Science (ISHS), (Leuven, 2009), 49–56.

137. O. Klueva, M. P. Nelson, C. W. Gardner, and N. R. Gomer, “Advanced shortwave infrared and Raman hyperspectral sensors for homeland security and law enforcement operations,” Proc. SPIE 9455, 94550O (2015). [CrossRef]

138. P. Pandey, “High throughput in vivo analysis of plant leaf chemical properties using hyperspectral imaging,” Front. Plant Sci. 8, 1348 (2017). [CrossRef]  

139. S. Pascucci, S. Pignatti, R. Casa, R. Darvishzadeh, and W. Huang, “Special Issue “Hyperspectral Remote Sensing of Agriculture and Vegetation”,” Remote Sens. 12(21), 3665 (2020). [CrossRef]  

140. P. S. Thenkabail, “Evaluation of Narrowband and Broadband Vegetation Indices for Determining Optimal Hyperspectral Wavebands for Agricultural Crop Characterization,” PE&RS 68(6), 607–621 (2002).

141. S. Primpke, M. Godejohann, and G. Gerdts, “Rapid Identification and Quantification of Microplastics in the Environment by Quantum Cascade Laser-Based Hyperspectral Infrared Chemical Imaging,” Environ. Sci. Technol. 54(24), 15893–15903 (2020). [CrossRef]  

142. Z. Qiu, J. Chen, Y. Zhao, S. Zhu, Y. He, and C. Zhang, “Variety Identification of Single Rice Seed Using Hyperspectral Imaging Combined with Convolutional Neural Network,” Appl. Sci. 8(2), 212 (2018). [CrossRef]  

143. R. Qureshi, M. Uzair, K. Khurshid, and H. Yan, “Hyperspectral document image processing: Applications, challenges and future prospects,” Pattern Recognition 90, 12–22 (2019). [CrossRef]  

144. N. R. Rao, “Development of a crop-specific spectral library and discrimination of various agricultural crop varieties using hyperspectral imagery,” Int. J. Remote Sens. 29(1), 131–144 (2008). [CrossRef]  

145. N. R. Rao, P. K. Garg, and S. K. Ghosh, “Development of an agricultural crops spectral library and classification of crops at cultivar level using hyperspectral data,” Precision Agric. 8(4-5), 173–185 (2007). [CrossRef]  

146. F. A. Rodrigues Jr., G. Blasch, P. Defourny, J. I. Ortiz-Monasterio, U. Schulthess, P. J. Zarco-Tejada, J. A. Taylor, and B. Gerard, “Multi-Temporal and Spectral Analysis of High-Resolution Hyperspectral Airborne Imagery for Precision Agriculture: Assessment of Wheat Grain Yield and Grain Protein Content,” Remote Sens. 10(6), 930 (2018). [CrossRef]

147. J. Rubio-Delgado, C. J. Pérez, and M. A. Vega-Rodríguez, “Predicting leaf nitrogen content in olive trees using hyperspectral data for precision agriculture,” Precision Agric. 22(1), 1–21 (2021). [CrossRef]  

148. H. Saari, A. Akujärvi, C. Holmlund, H. Ojanen, J. Kaivosoja, A. Nissinen, and O. Niemeläinen, “Visible, Very Near IR and Short Wave IR Hyperspectral Drone Imaging System for Agriculture and Natural Water Applications,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W3, 165–170 (2017).

149. S. van der Linden, A. Janz, B. Waske, M. Eiden, and P. Hostert, “Classifying segmented hyperspectral data from a heterogeneous urban environment using support vector machines,” J. Appl. Remote Sens. 1(1), 013543 (2007). [CrossRef]

150. K. Sendin, P. J. Williams, and M. Manley, “Near infrared hyperspectral imaging in quality and safety evaluation of cereals,” Crit. Rev. Food Sci. Nutr. 58(4), 575–590 (2018). [CrossRef]  

151. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. 109(26), E1679–E1687 (2012). [CrossRef]

152. M. Taghizadeh, A. Gowen, and C. P. O’Donnell, “Prediction of white button mushroom (Agaricus bisporus) moisture content using hyperspectral imaging,” Sens. Instrumen. Food Qual. 3(4), 219–226 (2009). [CrossRef]  

153. K. Tan, W. Ma, F. Wu, and Q. Du, “Random forest–based estimation of heavy metal concentration in agricultural soils with hyperspectral sensor data,” Environ. Monit. Assess. 191(7), 446 (2019). [CrossRef]  

154. P. S. Thenkabail, “Optimal hyperspectral narrowbands for discriminating agricultural crops,” Remote Sensing Reviews 20(4), 257–291 (2001). [CrossRef]  

155. J. Transon, R. d’Andrimont, A. Maugnard, and P. Defourny, “Survey of Hyperspectral Earth Observation Applications from Space in the Sentinel-2 Context,” Remote Sens. 10(3), 157 (2018). [CrossRef]  

156. F. Viallefont-Robinet, L. Roupioz, K. Caillault, and P.-Y. Foucher, “Remote sensing of marine oil slicks with hyperspectral camera and an extended database,” J. Appl. Rem. Sens. 15(02), 024504 (2021). [CrossRef]  

157. M. Voss and R. Sugumaran, “Seasonal Effect on Tree Species Classification in an Urban Environment Using Hyperspectral Data, LiDAR, and an Object-Oriented Approach,” Sensors 8(5), 3020–3036 (2008). [CrossRef]

158. J. Ward, M. Farries, C. Pannell, and E. Wachman, “An acousto-optic based hyperspectral imaging camera for security and defence applications,” Proc. SPIE 7835, 78350U (2010). [CrossRef]  

159. W. R. Johnson, D. W. Wilson, W. Fink, M. S. Humayun, and G. Bearman, “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt. 12(1), 014036 (2007). [CrossRef]

160. H. Y. Yao, K. W. Tseng, H. T. Nguyen, C. T. Kuo, and H. C. Wang, “Hyperspectral Ophthalmoscope Images for the Diagnosis of Diabetic Retinopathy Stage,” J. Clin. Med. 9(6), 1613 (2020). [CrossRef]  

161. L. Yi, J. M. Chen, G. Zhang, X. Xu, X. Ming, and W. Guo, “Seamless Mosaicking of UAV-Based Push-Broom Hyperspectral Images for Environment Monitoring,” Remote Sens. 13(22), 4720 (2021). [CrossRef]  

162. R. Zhang, Y. Ying, X. Rao, and J. Li, “Quality and safety assessment of food and agricultural products by hyperspectral fluorescence imaging,” J. Sci. Food Agric. 92(12), 2397–2408 (2012). [CrossRef]  

163. T. Zhang, B. Wang, P. Yan, K. Wang, X. Zhang, H. Wang, and Y. Lv, “Nondestructive Identification of Salmon Adulteration with Water Based on Hyperspectral Data,” Journal of Food Quality 2018, 1809297 (2018). [CrossRef]  

164. R. Zhao, Z. Shi, Z. Zou, and Z. Zhang, “Ensemble-Based Cascaded Constrained Energy Minimization for Hyperspectral Target Detection,” Remote Sens. 11(11), 1310 (2019). [CrossRef]  

165. Z. Pan, G. Healey, M. Prasad, and B. Tromberg, “Hyperspectral face recognition for homeland security,” Proc. SPIE 5074, 767–776 (2003). [CrossRef]

166. W. Zhou, J. Zhang, M. Zou, X. Liu, X. Du, Q. Wang, Y. Liu, Y. Liu, and J. Li, “Feasibility of Using Rice Leaves Hyperspectral Data to Estimate CaCl2-extractable Concentrations of Heavy Metals in Agricultural Soil,” Sci. Rep. 9(1), 16084 (2019). [CrossRef]

167. W. Zhou, J. Zhang, M. Zou, X. Liu, X. Du, Q. Wang, Y. Liu, Y. Liu, and J. Li, “Prediction of cadmium concentration in brown rice before harvest by hyperspectral remote sensing,” Environ. Sci. Pollut. Res. 26(2), 1848–1856 (2019). [CrossRef]  

168. A. Mukundan, C.-C. Huang, T.-C. Men, F.-C. Lin, and H.-C. Wang, “Air Pollution Detection Using a Novel Snap-Shot Hyperspectral Imaging Technique,” Sensors 22(16), 6231 (2022). [CrossRef]  

169. A. Mukundan, Y.-M. Tsao, W.-M. Cheng, F.-C. Lin, and H.-C. Wang, “Automatic Counterfeit Currency Detection Using a Novel Snapshot Hyperspectral Imaging Algorithm,” Sensors 23(4), 2026 (2023). [CrossRef]  

170. A. Mukundan and H.-C. Wang, “The Brahmavarta Initiative: A Roadmap for the First Self-Sustaining City-State on Mars,” Universe 8(11), 550 (2022). [CrossRef]  

171. A. Mukundan, N. Hong-Thai, and H.-C. Wang, “Detection of PM 2.5 Particulates using a Snap-Shot Hyperspectral Imaging Technology,” in Proceedings of the 2022 Conference on Lasers and Electro-Optics Pacific Rim, Technical Digest Series (Optica Publishing Group, 2022), CPDP_08.

172. A. Mukundan, A. Patel, B. Shastri, H. Bhatt, A. Phen, and H.-C. Wang, “The Dvaraka Initiative: Mars’s First Permanent Human Settlement Capable of Self-Sustenance,” Aerospace 10(3), 265 (2023). [CrossRef]  

173. A. Mukundan, A. Patel, K. D. Saraswat, A. Tomar, and T. Kuhn, “Kalam Rover,” in AIAA SCITECH 2022 Forum, (2022), 1047.

174. A. Mukundan, A. Patel, K. D. Saraswat, A. Tomar, and H.-C. Wang, “Novel Design of a Sweeping 6-Degree of Freedom Lunar Penetrating Radar,” in AIAA AVIATION 2023 Forum, (2023), 4124.

175. A. Mukundan, H.-C. Wang, and Y.-M. Tsao, “A Novel Multipurpose Snapshot Hyperspectral Imager used to Verify Security Hologram,” in 2022 International Conference on Engineering and Emerging Technologies (ICEET), (IEEE, 2022), 1–3.

176. A. Mukundan, Y.-M. Tsao, F.-C. Lin, and H.-C. Wang, “Portable and low-cost hologram verification module using a snapshot-based hyperspectral imaging algorithm,” Sci. Rep. 12(1), 18475 (2022). [CrossRef]  

177. S.-Y. Huang, A. Mukundan, Y.-M. Tsao, Y. Kim, F.-C. Lin, and H.-C. Wang, “Recent Advances in Counterfeit Art, Document, Photo, Hologram, and Currency Detection Using Hyperspectral Imaging,” Sensors 22(19), 7308 (2022). [CrossRef]  

178. A. Mukundan and H.-C. Wang, “Simplified Approach to Detect Satellite Maneuvers Using TLE Data and Simplified Perturbation Model Utilizing Orbital Element Variation,” Appl. Sci. 11(21), 10181 (2021). [CrossRef]  

179. A. Mukundan and H.-C. Wang, “The Space Logistics needs will be necessary for Sustainable Space Activities Horizon 2030,” in AIAA SCITECH 2023 Forum, (2023), 1603.

180. A. Mukundan, A. Patel, K. D. Saraswat, A. Tomar, and H.-C. Wang, “Spiralift Mechanism Based Drill for Deep Subsurface Lunar Exploration,” in AIAA AVIATION 2023 Forum, (2023), 4123.

181. A. Reddy and A. Mukundan, “The Strategic Needs Necessary for Sustainable Marine Ecology Horizon 2030,” International Journal of Innovative Research in Science Engineering and Technology 5(8), 1141–1147 (2020). [CrossRef]  

182. T. Collins, M. Maktabi, M. Barberio, V. Bencteux, B. Jansen-Winkeln, C. Chalopin, J. Marescaux, A. Hostettler, M. Diana, and I. Gockel, “Automatic recognition of colon and esophagogastric cancer with machine learning and hyperspectral imaging,” Diagnostics 11(10), 1810 (2021). [CrossRef]  

183. N. Liu, Y. Guo, H. Jiang, and W. Yi, “Gastric cancer diagnosis using hyperspectral imaging with principal component analysis and spectral angle mapper,” J. Biomed. Opt. 25(06), 066005 (2020). [CrossRef]  

184. I. H. Aboughaleb, M. H. Aref, and Y. H. El-Sharkawy, “Hyperspectral imaging for diagnosis and detection of ex-vivo breast cancer,” Photodiagn. Photodyn. Ther. 31, 101922 (2020). [CrossRef]  

185. L. A. Courtenay, D. González-Aguilera, S. Lagüela, S. del Pozo, C. Ruiz-Mendez, I. Barbero-García, C. Román-Curto, J. Cañueto, C. Santos-Durán, M. E. Cardeñoso-Álvarez, M. Roncero-Riesco, D. Hernandez-Lopez, D. Guerrero-Sevilla, and P. Rodríguez-Gonzalvez, “Hyperspectral imaging and robust statistics in non-melanoma skin cancer analysis,” Biomed. Opt. Express 12(8), 5107–5127 (2021). [CrossRef]  

186. D. Eggert, M. Bengs, S. Westermann, N. Gessert, A. O. H. Gerstner, N. A. Mueller, J. Bewarder, A. Schlaefer, C. Betz, and W. Laffers, “In vivo detection of head and neck tumors by hyperspectral imaging combined with deep learning methods,” J. Biophotonics 15, e202100167 (2022). [CrossRef]  

187. M. Kodama and T. Shibata, “Research Article,” Know. Process Mgmt. 23(4), 274–292 (2016). [CrossRef]  

188. N. Yoshida, K. Inoue, Y. Tomita, R. Kobayashi, H. Hashimoto, S. Sugino, R. Hirose, O. Dohi, H. Yasuda, Y. Morinaga, Y. Inada, T. Murakami, X. Zhu, and Y. Itoh, “An analysis about the function of a new artificial intelligence, CAD EYE with the lesion recognition and diagnosis for colorectal polyps in clinical practice,” Int. J. Colorectal. Dis. 36(10), 2237–2245 (2021). [CrossRef]  

189. H. Neumann, A. Kreft, V. Sivanathan, F. Rahman, and P. R. Galle, “Evaluation of novel LCI CAD EYE system for real time detection of colon polyps,” PLoS One 16(8), e0255955 (2021). [CrossRef]  

190. A. Repici, M. Badalamenti, R. Maselli, et al., “Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial,” Gastroenterology 159(2), 512–520.e7 (2020). [CrossRef]  

191. M. Brand, J. Troya, A. Krenzer, Z. Sassmannshausen, W. G. Zoller, A. Meining, T. J. Lux, and A. Hann, “Development and evaluation of a deep learning model to improve the usability of polyp detection systems during interventions,” UEG Journal 10(5), 477–484 (2022). [CrossRef]  

192. Data Study Group team, “Data Study Group Final Report: SenSat” (The Alan Turing Institute, 2020).

193. Z. He, P. Wang, Y. Liang, Z. Fu, and X. Ye, “Clinically available optical imaging technologies in endoscopic lesion detection: current status and future perspective,” Journal of Healthcare Engineering 2021, 7594513 (2021). [CrossRef]  

194. D. Tang, J. Zhou, L. Wang, M. Ni, M. Chen, S. Hassan, R. Luo, X. Chen, X. He, L. Zhang, X. Ding, H. Yu, G. Xu, and X. Zou, “A Novel Model Based on Deep Convolutional Neural Network Improves Diagnostic Accuracy of Intramucosal Gastric Cancer (With Video),” Front. Oncol. 11, 622827 (2021). [CrossRef]  

195. S. M. Milluzzo, P. Cesaro, L. M. Grazioli, N. Olivari, and C. Spada, “Artificial Intelligence in Lower Gastrointestinal Endoscopy: The Current Status and Future Perspective,” Clin. Endosc. 54(3), 329–339 (2021). [CrossRef]  

196. P. F. Whiting, A. W. S. Rutjes, M. E. Westwood, S. Mallett, J. J. Deeks, J. B. Reitsma, M. M. G. Leeflang, J. A. C. Sterne, P. M. M. Bossuyt, and the QUADAS-2 Group, “QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies,” Ann. Intern. Med. 155(8), 529–536 (2011). [CrossRef]

197. C.-L. Tsai, A. Mukundan, C.-S. Chung, Y.-H. Chen, Y.-K. Wang, T.-H. Chen, Y.-S. Tseng, C.-W. Huang, I. C. Wu, and H.-C. Wang, “Hyperspectral imaging combined with artificial intelligence in the early detection of esophageal cancer,” Cancers 13(18), 4593 (2021). [CrossRef]  

198. M. Maktabi, H. Köhler, M. Ivanova, B. Jansen-Winkeln, J. Takoh, S. Niebisch, S. M. Rabe, T. Neumuth, I. Gockel, and C. Chalopin, “Tissue classification of oncologic esophageal resectates based on hyperspectral data,” International Journal of Computer Assisted Radiology and Surgery 14(10), 1651–1661 (2019). [CrossRef]  

199. M. Hohmann, R. Kanawade, F. Klampfl, A. Douplik, J. Mudter, M. F. Neurath, and H. Albrecht, “In-vivo multispectral video endoscopy towards in-vivo hyperspectral video endoscopy,” J. Biophotonics 10(4), 553–564 (2017). [CrossRef]

200. K. Nakano, Y. Saito, Y. Kurabuchi, T. Ohnishi, S. Ota, M. Uesato, and H. Haneishi, “Design of multiband switching illumination with low-concentration lugol stain for esophageal cancer detection,” IEEE Access 8, 216043–216054 (2020). [CrossRef]  

201. A. Grigoroiu, J. Yoon, and S. E. Bohndiek, “Deep learning applied to hyperspectral endoscopy for online spectral classification,” Sci. Rep. 10(1), 3947 (2020). [CrossRef]  

202. M. Maktabi, Y. Wichmann, H. Köhler, H. Ahle, D. Lorenz, M. Bange, S. Braun, I. Gockel, C. Chalopin, and R. Thieme, “Tumor cell identification and classification in esophageal adenocarcinoma specimens by hyperspectral imaging,” Sci. Rep. 12(1), 4508 (2022). [CrossRef]  

203. I. C. Wu, H.-Y. Syu, C.-P. Jen, M.-Y. Lu, Y.-T. Chen, M.-T. Wu, C.-T. Kuo, Y.-Y. Tsai, and H.-C. Wang, “Early identification of esophageal squamous neoplasm by hyperspectral endoscopic imaging,” Sci. Rep. 8(1), 13797 (2018). [CrossRef]  

204. A. Rácz, D. Bajusz, and K. Héberger, “Effect of Dataset Size and Train/Test Split Ratios in QSAR/QSPR Multiclass Classification,” Molecules 26(4), 1111 (2021). [CrossRef]  

205. M. Takeuchi, T. Seto, M. Hashimoto, N. Ichihara, Y. Morimoto, H. Kawakubo, T. Suzuki, M. Jinzaki, Y. Kitagawa, H. Miyata, and Y. Sakakibara, “Performance of a deep learning-based identification system for esophageal cancer from CT images,” Esophagus 18(3), 612–620 (2021). [CrossRef]  

206. C.-K. Yang, J. C. Yeh, W.-H. Yu, L.-I. Chien, K.-H. Lin, W.-S. Huang, and P.-K. Hsu, “Deep Convolutional Neural Network-Based Positron Emission Tomography Analysis Predicts Esophageal Cancer Outcome,” J. Clin. Med. 8(6), 844 (2019). [CrossRef]  

207. X. Jin, X. Zheng, D. Chen, J. Jin, G. Zhu, X. Deng, C. Han, C. Gong, Y. Zhou, C. Liu, and C. Xie, “Prediction of response after chemoradiation for esophageal cancer using a combination of dosimetry and CT radiomics,” Eur. Radiol. 29(11), 6080–6088 (2019). [CrossRef]  

208. K. Gono, T. Obi, M. Yamaguchi, N. Ohyama, H. Machida, Y. Sano, S. Yoshida, Y. Hamamoto, and T. Endo, “Appearance of enhanced tissue features in narrow-band endoscopic imaging,” J. Biomed. Opt. 9(3), 568–577 (2004). [CrossRef]

209. K. Gono, K. Yamazaki, N. Doguchi, T. Nonami, T. Obi, M. Yamaguchi, N. Ohyama, H. Machida, Y. Sano, S. Yoshida, Y. Hamamoto, and T. Endo, “Endoscopic Observation of Tissue by Narrowband Illumination,” Opt. Rev. 10(4), 211–215 (2003). [CrossRef]  

210. H. Ikematsu, Y. Saito, S. Tanaka, T. Uraoka, Y. Sano, T. Horimatsu, T. Matsuda, S. Oka, R. Higashi, H. Ishikawa, and K. Kaneko, “The impact of narrow band imaging for colon polyp detection: a multicenter randomized controlled trial by tandem colonoscopy,” J. Gastroenterol. 47(10), 1099–1107 (2012). [CrossRef]  

211. T. Chen, H. Cheng, X. Chen, Z. Yuan, X. Yang, M. Zhuang, M. Lu, L. Jin, and W. Ye, “Family history of esophageal cancer increases the risk of esophageal squamous cell carcinoma,” Sci. Rep. 5(1), 16038 (2015). [CrossRef]  

212. H. Zeng, R. Zheng, S. Zhang, T. Zuo, C. Xia, X. Zou, and W. Chen, “Esophageal cancer statistics in China, 2011: Estimates based on 177 cancer registries,” Thorac. Cancer 7(2), 232–237 (2016). [CrossRef]  

213. M. Torres-Aguilera and J. M. Remes Troche, “Achalasia and esophageal cancer: risks and links,” Clin. Exp. Gastroenterol. 11, 309–316 (2018). [CrossRef]  

214. M. E. Salem, A. Puccini, J. Xiu, D. Raghavan, H. J. Lenz, W. M. Korn, A. F. Shields, P. A. Philip, J. L. Marshall, and R. M. Goldberg, “Comparative molecular analyses of esophageal squamous cell carcinoma, esophageal adenocarcinoma, and gastric adenocarcinoma,” The Oncologist 23(11), 1319–1327 (2018). [CrossRef]  

215. R. Li, S. Zheng, C. Duan, Y. Yang, and X. Wang, “Classification of hyperspectral image based on double-branch dual-attention mechanism network,” Remote Sens. 12(3), 582 (2020). [CrossRef]  

216. H. Gao, Y. Yang, C. Li, H. Zhou, and X. Qu, “Joint alternate small convolution and feature reuse for hyperspectral image classification,” ISPRS International Journal of Geo-Information 7(9), 349 (2018). [CrossRef]  

217. J. Kim and I. C. Hwang, “Drawing guidelines for receiver operating characteristic curve in preparation of manuscripts,” J. Korean Med. Sci. 35(24), e171 (2020). [CrossRef]

218. A. S. Glas, J. G. Lijmer, M. H. Prins, G. J. Bonsel, and P. M. M. Bossuyt, “The diagnostic odds ratio: a single indicator of test performance,” Journal of Clinical Epidemiology 56(11), 1129–1135 (2003). [CrossRef]  

219. W. Zhu, N. Zeng, and N. Wang, “Sensitivity, specificity, accuracy, associated confidence interval and ROC analysis with practical SAS implementations,” NESUG proceedings: health care and life sciences 19, 67 (2010).

220. U. Okeh and C. Okoro, “Evaluating measures of indicators of diagnostic test performance: fundamental meanings and formulars,” J Biom Biostat 03(01), 2 (2012). [CrossRef]  

221. J. J. Deeks, “Systematic reviews of evaluations of diagnostic and screening tests,” BMJ 323(7305), 157–162 (2001). [CrossRef]

222. D. C. Whiteman, M. Appleyard, F. F. Bahin, Y. V. Bobryshev, M. J. Bourke, I. Brown, A. Chung, A. Clouston, E. Dickins, and J. Emery, “Australian clinical practice guidelines for the diagnosis and management of Barrett's esophagus and early esophageal adenocarcinoma,” J. Gastroenterol. Hepatol. 30(5), 804–820 (2015). [CrossRef]

223. L.-M. Huang, W.-J. Yang, Z.-Y. Huang, C.-W. Tang, and J. Li, “Artificial intelligence technique in detection of early esophageal cancer,” World J. Gastroenterol. 26(39), 5959–5969 (2020). [CrossRef]  

224. C. S. Bang, J. J. Lee, and G. H. Baik, “Computer-aided diagnosis of esophageal cancer and neoplasms in endoscopic images: a systematic review and meta-analysis of diagnostic test accuracy,” Gastrointestinal Endoscopy 93(5), 1006–1015.e13 (2021). [CrossRef]  

225. L. C. García-Peraza-Herrera, M. Everson, L. Lovat, H.-P. Wang, W. L. Wang, R. Haidry, D. Stoyanov, S. Ourselin, and T. Vercauteren, “Intrapapillary capillary loop classification in magnification endoscopy: open dataset and baseline methodology,” International Journal of Computer Assisted Radiology and Surgery 15(4), 651–659 (2020). [CrossRef]  

226. Y. Tokai, T. Yoshio, K. Aoyama, Y. Horie, S. Yoshimizu, Y. Horiuchi, A. Ishiyama, T. Tsuchida, T. Hirasawa, and Y. Sakakibara, “Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma,” Esophagus 17(3), 250–256 (2020). [CrossRef]  

Supplementary Material (1)

Supplement 1: Supplemental document

Data availability

The data presented in this study are available in this article.

Figures (4)

Fig. 1. SROC curve based on the sensitivity and specificity of the CAD methods (AUC = 0.686). Each number denotes the study index listed in Table 5.
Fig. 2. Overall accuracy of the CAD methods. Different colors represent the methods used in each study.
Fig. 3. Univariable meta-regression of the subgroup analyses, including nationality, image type, AI model, EC type, and publication year: (a) sensitivity; (b) specificity.
Fig. 4. Deeks’ funnel plot of the studies included in the meta-analysis.

Tables (5)

Table 1. Studies of HSI in other medical applications, including other cancer detection

Table 2. Endoscopic machines using CAD from different companies

Table 3. QUADAS-2 Summary

Table 4. Clinical features of the included studies: nationality, method, lighting, accuracy, sensitivity, specificity, number of images, and AUC

Table 5. Subgroup and meta-analysis of diagnostic test accuracy, with data classified by nationality, machine learning model, endoscopic image type, esophageal cancer type, and publication year
