Optica Publishing Group

Computer-vision–based intelligent adaptive transmission for optical wireless communication

Open Access

Abstract

Optical wireless communication (OWC) has been presented as a promising candidate for future space-air-ground-ocean-integrated communication. However, OWC is highly sensitive to variations of the channel transmission characteristics. Light-beam absorption and scattering in the transmission media affect not only the channel features but also the imaging quality, so there is an inherent relationship between OWC performance and optical imaging quality. Based on this consideration, we first present the idea of introducing computer vision mechanisms into OWC systems, and then propose a computer-vision-based multi-domain cooperative adjustment (CV-MDCA) mechanism to realize intelligent adaptive transmission in OWC systems. The functional modules of the CV-MDCA mechanism are specifically designed, with emphasis on how to quantitatively determine the exact on-line channel quality from the captured images using effective computer vision schemes. Two groups of experiments, an indoor simulated underwater visible light communication system and an outdoor practical atmospheric free-space optics system, are implemented to evaluate the performance of the presented CV-MDCA mechanism. The results not only validate the feasibility of determining the channel quality from the captured channel images, but also reveal the limitations of the three presented computer-vision-based criteria.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical fiber transmission has achieved tremendous success over the past decades as the fundamental technology for long-haul terrestrial communication. However, it now faces great challenges as the geographic scope of communication increasingly expands to deep space and the deep ocean, where wired fiber cannot be easily deployed. To solve this problem, optical wireless communication (OWC) has been presented as a promising candidate for future space-air-ground-ocean (SAGO)-integrated communication, since it combines the large bandwidth of optical communication with the ubiquitous availability of wireless communication [1,2]. There are basically two categories of OWC. One is free-space optics (FSO), which utilizes the coherent lightwave of a laser diode (LD) to transfer information over long-distance free space; the other is visible light communication (VLC), which utilizes the incoherent illumination lightwave of a white light-emitting diode (LED) to transfer information over a short-distance wireless channel.

However, the OWC system is sensitive to variations of the open-channel transmission characteristics [3]. Light beams transmitted in the atmospheric channel and the underwater channel, the two typical scenarios in SAGO-integrated communication, suffer from complex physical interactions with the transmission media, such as intrinsic absorption, impurity absorption, Mie scattering, Rayleigh scattering, and geometric scattering. Therefore, compensating for the effects of media absorption and scattering is a vital challenge in OWC system design.

Adaptive transmission is the widely deployed mechanism in current optical wireless communication systems to address the time-varying channel problem: the system flexibly adjusts the power, modulation, and encoding formats according to the channel quality to maximize the transmission efficiency [4,5]. However, in conventional adaptive transmission mechanisms, the channel quality is always estimated from training sequences or pilot information over feedback channels, without specific analysis of the physical causes of channel degradation, leading to inadequate intelligence. Moreover, the insertion of training sequences or pilot information occupies part of the transmission bandwidth, which decreases the spectral efficiency of the whole system. Deep-learning-based artificial intelligence has already been introduced into optical communication systems to analyze performance degradation due to fiber impairments [6], as well as to realize intelligent dimming in VLC systems [7].

Besides machine learning, computer vision is another major category of artificial intelligence, which has attracted increasing attention due to the wide deployment of cameras in video surveillance applications. However, to the best of our knowledge, there has been no research on introducing computer vision techniques into optical communication systems. The working principle of a camera is based on optical imaging theory. Therefore, if we embed a camera into the communication system to monitor the channel status, the absorption and scattering of the light beams in the transmission media affect not only the channel features but also the imaging quality, and thus there is an inherent relationship between the optical communication performance and the optical imaging quality. Based on this consideration, we first present the idea of introducing computer vision mechanisms into OWC systems, and propose a computer-vision-based multi-domain cooperative adjustment (CV-MDCA) mechanism to realize intelligent adaptive transmission in OWC systems.

2. Principle and implementation of CV-MDCA mechanism

Figure 1 presents the block diagram of the proposed CV-MDCA mechanism. The camera embedded in the transmitter captures on-line images of the communication channel. These images are pre-processed for noise elimination and field rectification, and then sent to the feature extraction module, in which the channel transmission characteristics are extracted using computer vision algorithms such as image recognition, segmentation, and restoration. The extracted features are then compared with the standard channel sample attributes stored in the reference database, and the channel quality index (CQI) for the current moment is determined. These judgment operations can also be assisted by machine learning techniques. The cooperative adjustment controller then generates the corresponding adjustment instructions for the transmission layer according to the CQI value.

Fig. 1 Block diagram of the CV-MDCA mechanism.

We design the multi-domain cooperative adjustment scheme in the presented CV-MDCA mechanism to optimize the system transmission performance, in which the transmission-layer parameters within the information domain, the communication domain, the power domain, and the antenna domain are cooperatively adjusted according to the dynamic CQI variation.

In the information domain, the original data from the user should be formatted according to the CQI level. Take the video service as an example: if the CQI is in the top range, which suggests the channel quality is good enough to support high-data-rate transmission, high-definition video at 1080p and above can be transmitted; if the CQI is in the middle range, standard-definition video below 720p can be transferred; if the CQI is in the bottom range, fluent video transmission cannot be guaranteed, and only discrete sampled images may be transferred.

In the communication domain, the modulation order, encoding rate, and the series number for equalization [8] can be adaptively adjusted according to the CQI level, which has already been widely implemented in current optical and mobile communication systems.

In the power domain, not only the electrical signal power but also the optical power of the transmitters (LD or LED) is dynamically adjusted according to the CQI variation, in order to improve the signal-to-noise ratio (SNR) at the receiver end. Such power adjustment should be limited to a certain range, tracking the small continuous CQI variation within a given level, considering the stable working scope and the linear range of the electrical and optical devices.

In the antenna domain, a set of optical lenses embedded in the transmitter can be mechanically adjusted by an electric controller. The focal distance of the lenses should be dynamically adjusted in accordance with the relative movements of the receiver, along with the light-beam divergence angle, to make a tradeoff between the convergence power and the coverage scale.

The practical adjustments among these domains should not be implemented separately, but instead, be cooperatively scheduled in order to optimize the performance of the whole system. The adjustments in the information domain and communication domain are discrete, whereas the adjustments in the power domain and antenna domain are continuous in certain ranges. Therefore, in the presented CV-MDCA mechanism, we divide the CQI parameter into a series of successive levels. The modulation order, encoding rate, along with the information format will be combined into different groups corresponding to certain CQI levels, whereas within each level, the power and the focal distance of the optical antenna will be specifically adjusted.
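To make the cooperative scheduling concrete, the level-based grouping described above can be sketched as a simple lookup table: discrete parameters (information format, modulation, code rate) switch per CQI level, while a continuous power trim is applied inside each level. All level boundaries and parameter groups below are hypothetical placeholders, not values from the paper.

```python
# Hypothetical CQI-level table: (cqi_low, cqi_high, format, modulation, code_rate).
CQI_LEVELS = [
    (0.8, 1.0, "1080p", "64QAM", 5 / 6),
    (0.5, 0.8, "720p", "16QAM", 3 / 4),
    (0.0, 0.5, "sampled-images", "4QAM", 1 / 2),
]

def select_transmission_group(cqi):
    """Map a normalized CQI value to a discrete parameter group.

    Within the matched level, a continuous power scale (0..1) is derived
    from the CQI's position inside the level, mimicking the continuous
    power-domain adjustment described in the text.
    """
    for low, high, fmt, mod, rate in CQI_LEVELS:
        if low <= cqi <= high:
            power = (high - cqi) / (high - low)
            return {"format": fmt, "modulation": mod,
                    "code_rate": rate, "power_scale": power}
    raise ValueError("CQI out of range")

print(select_transmission_group(0.9)["modulation"])  # 64QAM
```

In a real controller the level boundaries would be calibrated from the measured variation curves of the image parameters, as Section 3 does.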

The vital part of the CV-MDCA mechanism lies in the feasibility of quantitatively determining the exact on-line channel quality from the captured images using effective image processing schemes. Here we present three typical mechanisms based on different computer vision algorithms.

The first mechanism is based on the image visibility calculation, which is a basic analysis tool in the image enhancement scenarios [9]. The image visibility represents the clarity of the gray-scale for a given image, which is defined as:

V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} \qquad (1)

in which I_max and I_min refer to the maximum and minimum gray intensities among all the pixels, respectively. If the image is captured in clear weather or a pure-water environment, adequate gray-level information is preserved in the image; thus I_max is much larger than I_min, and the visibility is close to 1. However, if the atmospheric or underwater impurity density increases, the image visibility decreases; for example, an image captured in fog or haze always looks gray. Therefore, we use the degradation of the image visibility as the first criterion for the CQI.
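As an illustration, the visibility criterion of Eq. (1) can be computed as follows; the averaging over the 100 brightest and darkest pixels anticipates the noise-suppression step used in the experiments of Section 3, and the synthetic test image is our own.

```python
import numpy as np

def image_visibility(gray, k=100):
    """Visibility V = (Imax - Imin) / (Imax + Imin), Eq. (1).

    To suppress random noise, Imax and Imin are taken as the mean of the
    k brightest and k darkest pixels rather than single extreme values.
    """
    flat = np.sort(gray.astype(np.float64), axis=None)
    i_min = flat[:k].mean()
    i_max = flat[-k:].mean()
    return (i_max - i_min) / (i_max + i_min)

# Synthetic high-contrast image: visibility approaches 1.
img = np.zeros((200, 200), dtype=np.uint8)
img[:100, :] = 250
img[100:, :] = 5
print(round(image_visibility(img), 3))  # 0.961
```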

The second mechanism is based on image edge detection, an efficient tool in image segmentation scenarios [10]. If the image is captured in a clear channel environment, all the objects in the image show clear contours which can easily be detected by standard edge detection algorithms. However, if the imaging process is affected by impurity absorption and scattering, the object contours are blurred and thus lost in the edge detection procedure. A binary image is typically used to present the edge detection results, in which the edge pixels are white and the others are black. Therefore, we use the change in the number of edge pixels as the second criterion for the CQI.
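A minimal sketch of the edge-pixel-count criterion is given below. The paper's experiments use the Canny operator; for a self-contained example we substitute a plain Sobel gradient-magnitude threshold, which likewise yields a binary edge map whose white-pixel count shrinks as scattering blurs the contours.

```python
import numpy as np

def edge_pixel_count(gray, thresh=100.0):
    """Count white pixels in a binary edge map.

    A 3x3 Sobel gradient magnitude is computed on the image interior and
    thresholded; the resulting count stands in for the Canny-based edge
    statistic used as the second CQI criterion.
    """
    g = gray.astype(np.float64)
    gx = (g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[1:-1, :-2] - g[2:, :-2])
    gy = (g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:]
          - g[:-2, :-2] - 2 * g[:-2, 1:-1] - g[:-2, 2:])
    binary = np.hypot(gx, gy) > thresh
    return int(binary.sum())

# A sharp vertical step produces a narrow band of edge pixels;
# a featureless (or fully blurred) image produces none.
img = np.zeros((64, 64))
img[:, 32:] = 255.0
print(edge_pixel_count(img))  # 124
```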

The third mechanism is based on the atmospheric imaging model, a fundamental analysis tool in image restoration scenarios [11]. According to atmospheric imaging theory, if the impurity in the channel is evenly distributed and isotropic, the light intensity I_R received at the camera can be represented as:

I_R = O e^{-\beta d} + E \left( 1 - e^{-\beta d} \right) \qquad (2)

in which O refers to the reflected light intensity of a certain object in the field of view, E refers to the average ambient light intensity in the environment, d refers to the depth of field, and β refers to the channel transmission attenuation factor. The parameter β represents the absorption and scattering effects of the transmission media, and thus we use it as the third criterion for the CQI:

\beta = \frac{1}{d} \ln \left( \frac{O - E}{I_R - E} \right) \qquad (3)

To derive β, the reflected light intensity O is obtained from the original image captured in a clear channel environment and stored in the reference database, while I_R is obtained from the current image captured in the degraded channel. The depth of field d should be measured in advance, and the ambient light intensity E is obtained from the reference database, measured off-line in different weather conditions in advance.
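Given the pre-measured O, E, and d, the inversion of Eq. (2) into Eq. (3) can be checked with a short round-trip computation; all numeric values below are illustrative.

```python
import math

def attenuation_factor(i_r, o, e, d):
    """beta = (1/d) * ln((O - E) / (I_R - E)), the inversion of Eq. (2).

    o  : reflected object intensity from the clear-channel reference image
    e  : average ambient light intensity (measured off-line)
    d  : depth of field, measured in advance
    i_r: intensity of the same object in the currently captured image
    """
    return math.log((o - e) / (i_r - e)) / d

# Forward model Eq. (2), then recover beta with Eq. (3).
o, e, d, beta = 200.0, 40.0, 100.0, 0.02
i_r = o * math.exp(-beta * d) + e * (1 - math.exp(-beta * d))
print(round(attenuation_factor(i_r, o, e, d), 6))  # 0.02
```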

The above three criteria establish quantitative relations between the channel quality and the image characteristics based on three different evaluation models. However, they have different limitations. The image visibility and edge detection mechanisms are both based on statistical analysis of the image spatial-domain characteristics, so their performance is largely affected by the amount of detail in the captured images: if there is little color or gray-scale information in the original field of view, the first criterion becomes invalid, and if there is little edge information, the second criterion becomes invalid. The third criterion is based on physical imaging theory and thus places few requirements on the imaging environment; however, its derivation requires two important pre-determined parameters, which can only be obtained by additional measurement of the communication scenario. On the other hand, all three criteria require standard reference sample images captured under clear channel conditions, which makes it possible to integrate them with popular machine learning methods. Compared with traditional mechanisms based on training sequences or pilot information, the presented CV-MDCA mechanism focuses on the physical causes of channel degradation, and thus can realize intelligent adaptive transmission.

3. Experiments and results analysis

In this section, we implement a series of experiments to evaluate the performance of the presented mechanisms and analyze the results. Since adaptive transmission has already been widely adopted in current communication systems, the experiments focus mainly on the feasibility of the presented three computer-vision-based criteria for quantitative channel quality judgment. FSO and VLC are the two typical categories of OWC, and the atmospheric channel and the underwater channel are the two typical channels for OWC-based SAGO-integrated networks. Therefore, we select two representative scenarios for experimental evaluation: underwater VLC and atmospheric FSO.

The first group of experiments simulates the underwater VLC system in the laboratory, as shown in Fig. 2. The arbitrary waveform generator (AWG, Tektronix AWG5012) generates the orthogonal-frequency-division-multiplexing (OFDM) baseband signals, in which different quadrature-amplitude-modulation (QAM) orders, including 4-QAM, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, are designed in Matlab for OFDM modulation. The FFT size is 512, of which 16 points are used for the cyclic prefix (CP), and the effective sub-carrier number is 128. In our experiment, the computer vision mechanism is deployed for adaptive transmission, so no pilot is used in the OFDM. The AWG operates at a sampling rate of 20 MHz. The OFDM signals are superimposed on a DC bias to modulate the white-light LED lamp (Cree PLCC4). The underwater environment is simulated by a water tank of 1.5 m × 0.5 m × 0.5 m. After the underwater transmission, a commercially available APD (Hamamatsu S8664-20K) detects the optical signals after a blue-light filter; the received signal is amplified by a low-noise power amplifier (Mini-Circuits ZFL-1000LN+) and then sent to the oscilloscope (OSC, LeCroy 735Zi, 50 Ω) for transmission performance analysis. A twelve-megapixel CMOS-based camera is deployed at the transmitter end to capture the channel images, and a reference object with a black-and-white grid pattern is placed at the receiver end.

Fig. 2 Block diagram and experiment setup for indoor simulated visible light communication.

In the experiments, pure water is first added into the tank until the water height reaches 0.4 m, giving altogether 300 liters of water. We then add 1.2 ml of Mg(OH)2 into the water at a time to simulate the impurity. We measure the system transmission performance at different impurity concentrations, and on-line channel images are also captured during the experiments. Figure 3 shows the channel images captured at seven successive impurity concentrations, with which the three computer-vision judgment criteria can be implemented. For the first criterion, to eliminate the uncertainties caused by random noise, the average intensity of the 100 brightest pixels is taken as I_max, and the average intensity of the 100 darkest pixels is taken as I_min. For the second criterion, there are many popular edge detection operators, such as the Roberts, Sobel, LoG, and Canny operators [10]; we select the Canny operator in our experiments, and Fig. 3 also shows the edge detection results. For the third criterion, light transmission suffers from similar influences in the atmospheric and underwater channels, and therefore Eq. (3) can also be used in underwater scenarios [12,13]. We calculate the value of β at ten reference points of the black-and-white grid pattern using Eq. (3) and use the average value as the attenuation index.

Fig. 3 Captured channel images and the edge detection results for indoor simulated underwater VLC.

Figure 4(a) presents the system bit-error-rate (BER) at successive impurity concentrations. Here we use 3.8×10⁻³ as the BER threshold for effective transmission, which is the typical limit for the efficient use of forward error correction (FEC) codes. The results show that when the impurity concentration reaches 7×10⁻⁶, 64QAM becomes invalid; at 1.6×10⁻⁵, 32QAM becomes invalid; at 1.9×10⁻⁵, 16QAM becomes invalid; and at 2.4×10⁻⁵, the system BER with 8QAM modulation is 0.0013, which is still below the threshold. The BER of 4QAM is always far below the FEC limit, so it is not presented in the figure. Impurity could be added continually to measure the performance thresholds of 8QAM and 4QAM; however, the edge detection algorithm has already become invalid when the impurity concentration reaches 2.4×10⁻⁵, so we stop the experiments at this point. This phenomenon reveals the limitation of the second criterion. Figure 4(b) presents the variation curves of the three typical computer-vision judgment parameters in normalized representation.

Fig. 4 (a) Underwater VLC system BER in different impurity concentration; (b) Variation curves of normalized image parameters in different impurity concentration.

For the variation curve of the first criterion, the visibility of the channel image captured in the pure-water case is normalized to 1 (the real value is 0.82), and the image visibility values in the following cases are all normalized to this standard. For the variation curve of the second criterion, the number of edge pixels within the channel image captured in the pure-water case is normalized to 1 (the real value is 27024), and the edge pixel numbers in the following cases are all normalized to this standard. For the variation curve of the third criterion, the attenuation factor β increases as impurity is added, and therefore we show the variation curve of its reciprocal 1/β in the figure. Figure 4(b) shows that the image visibility and the edge pixel number both degrade exponentially with increasing impurity concentration, whereas the attenuation factor β varies approximately linearly with the impurity concentration; thus the third criterion is more suitable for practical judgment. Comparing the system BER performance in Fig. 4(a) with the variation curves in Fig. 4(b), the relationship between the channel image quality and the transmission performance is validated.

The second group of experiments evaluates the performance of the presented mechanism in a practical atmospheric FSO system. The AWG generates the baseband signals, in which different pulse-amplitude-modulation (PAM) orders, including OOK, 4PAM, and 8PAM, are designed in Matlab. The AWG operates at a sampling rate of 300 MHz. The M-PAM signals are amplified and then fed to the FSO transmitter (GIOC LD100), which generates 1550 nm laser beams with a divergence angle of 2 mrad to carry the signal. The transmitter and the receiver are placed in two research buildings on our campus, with a line-of-sight distance of 893 m between them. We measure the FSO transmission performance of the three modulation orders in six different stable (no turbulence) weather conditions. The on-line measured PM2.5 density indices in these six cases are 28, 51, 112, 146, 243, and 377, respectively, corresponding to different air quality levels. Figure 5 presents the channel images captured in the six weather cases, along with the edge detection results. The channel images are captured in practical natural scenarios, which contain far more detail and objects at different depths of field than those in the first group of experiments. The results in Fig. 5 show that, due to the haze effects, the Canny-operator-based edge detection criterion becomes invalid in the latter four weather cases, since the edge information of the distant objects is totally lost.

Fig. 5 Captured channel images and the edge detection results for outdoor atmospheric FSO.

Figure 6(a) presents the FSO system performance in the six weather cases. Using the FEC limit of 3.8×10⁻³ as the BER threshold, the experimental results show that 8PAM is invalid in the latter three cases, and 4PAM is invalid in the last case. Figure 6(b) presents the variation curves of the three computer-vision judgment parameters in normalized representation. For the first criterion, Fig. 6(b) shows that the visibility of the whole image (the black line) does not degrade monotonically as the haze aggravates. The reason is that the visibility is statistically calculated over the whole image, but Fig. 5 shows that the haze affects distant objects much more seriously than nearby ones. Therefore, it is more suitable to concentrate on the distant-object imaging to characterize the haze effects. We re-calculate the visibility excluding the pixels of objects less than 90 m (10% of the distance between the transmitter and the receiver) from the camera, and Fig. 6(b) shows (the red line) that this masked visibility degrades approximately exponentially. For the second criterion, Fig. 6(b) shows that the edge pixel number also does not degrade monotonically as the haze aggravates, since in the latter four weather cases the edge information of the distant objects is totally lost and much noise appears; such results reveal that the edge detection mechanism is inefficient in long-distance scenarios. For the third criterion, the attenuation factor β still varies approximately linearly as the haze aggravates, and thus the third criterion is more suitable for practical judgment.
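The distance-masked visibility re-calculation described above can be sketched as follows; the per-pixel depth map is an assumed input, since the paper does not specify how the near region is segmented.

```python
import numpy as np

def masked_visibility(gray, depth_map, min_depth=90.0, k=100):
    """Visibility evaluated only on pixels farther than min_depth.

    Near objects are barely affected by haze, so pixels closer than
    min_depth (90 m, i.e. 10% of the 893 m link) are excluded before
    applying Eq. (1) with the k-pixel averaging of Imax and Imin.
    """
    far = np.sort(gray[depth_map >= min_depth].astype(np.float64))
    i_min, i_max = far[:k].mean(), far[-k:].mean()
    return (i_max - i_min) / (i_max + i_min)

# Synthetic scene: the near half is flat gray; the distant half
# keeps a 60..200 intensity ramp that haze would compress.
gray = np.full((100, 100), 128.0)
gray[:, 50:] = np.tile(np.linspace(60.0, 200.0, 50), (100, 1))
depth = np.zeros((100, 100))
depth[:, 50:] = 500.0
v = masked_visibility(gray, depth)
print(round(v, 3))  # 0.538
```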

Fig. 6 (a) Atmospheric FSO system BER in different weather cases; (b) Variation curves of normalized image parameters in different weather cases.

The above two groups of experiments validate the relationship between the channel image quality and the OWC transmission performance, and therefore the CV-MDCA mechanism is feasible for use in OWC systems. For practical implementation of the presented CV-MDCA mechanism, we use the variation curves of the typical image parameters to divide the CQI parameter into a series of successive levels. The modulation order and encoding rate, along with the information format, can be combined into different groups corresponding to certain CQI levels, whereas within each level, the power and the focal distance of the optical antenna are specifically adjusted.

4. Discussion

The above experiments not only validate the feasibility of the CV-MDCA mechanism, but also reveal its limitations. Firstly, current commercial cameras are almost all based on visible-light imaging, so the implementation of the presented mechanism has strict requirements on the environmental illumination; for example, it becomes invalid at night in outdoor environments. Secondly, accurate quantitative judgment of the channel quality by the computer vision mechanisms requires statistical analysis of the captured images, and therefore the camera resolution greatly affects the performance of the proposed mechanism. Thirdly, all of the above three criteria require sample images captured in a clear channel environment as the reference standard, so off-line channel measurement must be implemented in advance, and the completeness of the sample database greatly affects the accuracy of the CQI decision. Lastly, the CV-MDCA mechanism shows its advantages in representing channel absorption and scattering effects, but there are other important channel transmission characteristics, such as turbulence and strong ambient light, that are difficult to analyze quantitatively using the above three computer vision algorithms.

To overcome these limitations and improve the performance of the presented mechanism, the following two issues are the primary interests of our future research. Firstly, an infrared-band imaging camera is a powerful complement for realizing full-time imaging, and thus the corresponding algorithms for infrared image processing should be integrated. Secondly, many other advanced computer vision algorithms should be further studied and introduced into the presented system for performance enhancement, along with the representation of other channel transmission characteristics. For example, if a mobile object blocks the channel, the original pilot-based mechanism can only detect the service interruption, but if image recognition and tracking functions are embedded in the computer vision module, such channel blocking can be predicted, and the position of the transmitter can be adjusted in advance to avoid the moving obstacle. Furthermore, all of the computer vision algorithms can be conveniently integrated with machine learning technology to further enhance the system intelligence.

5. Conclusions

A computer-vision-based multi-domain cooperative adjustment (CV-MDCA) mechanism is presented in this paper, which realizes intelligent adaptive transmission by introducing computer vision mechanisms into OWC systems. Three computer-vision-based criteria, the image visibility, the edge detection, and the atmospheric imaging model, are designed to quantitatively determine the exact on-line channel quality from the captured images. The experimental results validate the feasibility of the CV-MDCA mechanism, but also reveal its limitations. Addressing these limitations, such as integrating infrared image processing and evaluating further computer vision and machine learning mechanisms, will be the primary interest of our future research.

Funding

National Key R&D Program of China (2017YFB0403605); National Natural Science Foundation of China (61801165); National 973 Program of China (2013CB329205).

References

1. Z. Huang, Z. Wang, M. Huang, W. Li, T. Lin, P. He, and Y. Ji, “Hybrid optical wireless network for future SAGO-integrated communication based on FSO/VLC heterogeneous interconnection,” IEEE Photonics J. 9(2), 1 (2017). [CrossRef]  

2. Y. Ji, J. Zhang, X. Wang, and H. Yu, “Towards converged, collaborative and co-automatic (3C) optical networks,” Sci. China Inf. Sci. 61(12), 121301 (2018). [CrossRef]  

3. M. A. Khalighi and M. Uysal, “Survey on free space optical communication: a communication theory perspective,” IEEE Comm. Surv. and Tutor. 16(4), 2231–2258 (2014). [CrossRef]  

4. M. Z. Hassan, M. J. Hossain, J. Cheng, and V. Leung, “Adaptive transmission for coherent OWC with multiple parallel optical beams,” IEEE Photonics Technol. Lett. 30(12), 1119–1122 (2018). [CrossRef]  

5. I. B. Djordjevic, “Adaptive modulation and coding for free-space optical channels,” J. Opt. Commun. Netw. 2(5), 221–229 (2010). [CrossRef]  

6. B. Karanov, M. Chagnon, F. Thouin, T. A. Eriksson, H. Bülow, D. Lavery, P. Bayvel, and L. Schmalen, “End-to-end deep learning of optical fiber communications,” J. Lightw. Tech. 36(20), 4843–4855 (2018). [CrossRef]  

7. H. Lee, I. Lee, T. Q. S. Quek, and S. H. Lee, “Binary signaling design for visible light communication: a deep learning framework,” Opt. Express 26(14), 18131–18142 (2018). [CrossRef]   [PubMed]  

8. J. Li, Z. Huang, X. Liu, and Y. Ji, “Hybrid time-frequency domain equalization for LED nonlinearity mitigation in OFDM-based VLC systems,” Opt. Express 23(1), 611–619 (2015). [CrossRef]   [PubMed]  

9. J. P. Oakley and B. L. Satherley, “Improving image quality in poor visibility conditions using a physical model for contrast degradation,” IEEE Trans. Image Process. 7(2), 167–179 (1998). [CrossRef]   [PubMed]  

10. G. Papari and N. Petkov, “Edge and line oriented contour detection: State of the art,” Image Vis. Comput. 29(2-3), 79–103 (2011). [CrossRef]  

11. S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003). [CrossRef]  

12. Y. T. Peng and P. C. Cosman, “Underwater Image Restoration Based on Image Blurriness and Light Absorption,” IEEE Trans. Image Process. 26(4), 1579–1594 (2017). [CrossRef]   [PubMed]  

13. K. O. Amer, M. Elbouz, A. Alfalou, C. Brosseau, and J. Hajjami, “Enhancing underwater optical imaging by using a low-pass polarization filter,” Opt. Express 27(2), 621–643 (2019). [CrossRef]   [PubMed]  
