Abstract

Most head-mounted displays adopt active-matrix organic light-emitting diode (AMOLED) panels as their primary displays because of their superior display characteristics. However, AMOLED displays remain power-hungry components. To reduce their power consumption, the proposed dynamic lightness adjustment algorithm dims the input image pixel-wise by combining the lightness of each pixel with the depth information extracted from the stereoscopic images, which serves as a saliency cue. The experiments reveal that the proposed method achieves a power-saving rate approximately as high as that of the existing methods with lower computational overhead.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The active-matrix organic light-emitting diode (AMOLED) is an emerging display technology that has been applied to a broad range of consumer electronics, including televisions, smartphones, and, in particular, head-mounted displays (HMDs) [1,2]. Despite its superiority in contrast ratio and display brightness compared to other display technologies, the AMOLED display remains a highly energy-demanding component whose power consumption scales linearly with pixel intensity. Consequently, reducing the power consumption of AMOLED displays remains an essential issue.

Several state-of-the-art methods have attempted to address this issue. For instance, histogram modification (HM) techniques adjust the image histogram to reduce power consumption by raising the black level [3,4]. Despite their power savings, these methods are prone to image over-enhancement and demand high computational cost. To preserve more perceptual quality, Chang et al. [5] reduce the image intensities pixel-wise based on a quality-constrained function. In addition, Chondro et al. [6] proposed an improved pixel dimming algorithm that suppresses overexposed regions and the power-inefficient blue color spectrum. Owing to their high computational overhead, however, these algorithms are not suitable for real-time applications. Consequently, a power-saving scheme [7] was specifically designed for video playback on AMOLEDs. Although [7] reduces power with low computation, the scheme is prone to pixel degradation at lower intensities and may produce inter-frame edge artifacts.

Based on the merits and drawbacks of the prior arts, a trade-off between image quality and algorithm complexity is apparent. For AMOLED-based HMDs, image quality and system latency work hand in hand to provide an immersive user experience (UX); any display latency in an HMD can induce motion sickness that decimates the UX. According to Lang et al. [8], the depth cue in stereoscopic images for HMDs relates directly to human saliency and supports formal models of human saliency [9]. In this study, a low-latency depth-based pixel dimming algorithm is proposed that gradually reduces the pixel intensities from foreground to background based on the depth information obtained from stereoscopic frames on AMOLED-based HMDs. The proposed method exploits the limits of human salience to reduce the pixel intensities of unimportant areas based solely on the depth cue.

 

Fig. 1 The flow diagram of the proposed power-saving algorithm.


2. Dynamic lightness adjustment

The flow diagram of the proposed power-saving method for AMOLED-based HMDs is shown in Fig. 1. The development aims to provide the following contributions:

  • Dynamic lightness adjustment that preserves both image quality (i.e. contrast, lightness) and depth information with significant power-saving rate on AMOLED-based HMDs.
  • Low-latency power-saving technique that reduces the possibility of motion sickness during the user experience in the virtual reality environment.

The details of the proposed power-saving method are elaborated in the following subsections.

2.1. Lightness calculation

The human visual system (HVS) is a complex model that requires a sophisticated formula to represent accurately. To fit human visual perception, the CIELAB color space [10] is utilized, as it provides the most perceptually accurate lightness channel compared to the other color spaces [11,12]. In the CIELAB color space, a pixel is specified as {L, a, b}, where L represents the lightness and the a, b metrics carry color information independent of lightness. A numerical change in the lightness value corresponds to an equal visually perceived change. Moreover, the CIELAB color space defines colors independently of how they are displayed; that is, it is device-independent. Each pixel of the input image, I(θ) = {R_I(θ), G_I(θ), B_I(θ)}, is converted from the RGB to the CIELAB color space, I(θ) = {L_I(θ), a_I(θ), b_I(θ)}, where θ denotes the pixel position in the image.
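For illustration, the standard sRGB-to-CIELAB conversion under a D65 white point can be sketched in Python; this is a generic textbook conversion, not the authors' implementation:

```python
import math

def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 white point).
    A generic reference conversion, not the paper's code."""
    # 1) undo the sRGB gamma to obtain linear light in [0, 1]
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2) linear RGB -> CIE XYZ (sRGB primaries, D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # 3) XYZ -> CIELAB relative to the D65 white point
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1.0 / 3.0) if t > (6 / 29) ** 3 \
            else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

For example, pure white (255, 255, 255) maps to L ≈ 100 with a ≈ b ≈ 0, and pure black maps to L = 0 with a = b = 0.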

2.2. Depth map calculation

The depth map is also essential information for the proposed power-saving algorithm. The HMD exploits stereoscopic images to provide the VR experience, letting the user perceive depth from the RGB image pairs. A depth estimation algorithm takes the left input image, I_left(θ) = {R_I,left(θ), G_I,left(θ), B_I,left(θ)}, and the right input image, I_right(θ) = {R_I,right(θ), G_I,right(θ), B_I,right(θ)}, and generates the depth map D_I(θ) ∈ (0, 255). Furthermore, to reduce the computational overhead, the block matching algorithm [13] is empirically utilized to generate the depth map, since experimental results show that it is faster than most existing depth estimation algorithms while delivering decent visual disparity estimates.
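A minimal sketch of SAD-based block matching along one scanline is shown below; the matcher in [13] is considerably more elaborate, and the window size, search range, and synthetic pixel values here are illustrative only:

```python
def block_match_row(left, right, win=1, max_d=4):
    """Per-pixel disparity along one scanline via sum-of-absolute-
    differences (SAD) block matching; a didactic sketch, not [13]."""
    w = len(left)
    disparity = [0] * w
    for x in range(win, w - win):
        patch = left[x - win: x + win + 1]  # window in the left view
        best_cost, best_d = float("inf"), 0
        # search candidate shifts in the right view (bounded by the border)
        for d in range(0, min(max_d, x - win) + 1):
            cand = right[x - d - win: x - d + win + 1]
            cost = sum(abs(p - q) for p, q in zip(patch, cand))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparity[x] = best_d
    return disparity
```

For a bright block that sits two pixels further left in the right view, the recovered disparity at that block is 2; a larger disparity indicates a nearer, more salient pixel.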

2.3. Lightness adjustment ratio

To reduce the power consumption of AMOLED-based HMDs, a dynamic lightness adjustment technique has been designed to specifically modify the lightness, expressed as follows:

L_O(θ) = γ(θ) × L_I(θ),  a_O(θ) = a_I(θ),  b_O(θ) = b_I(θ),  (1)
where γ(θ) (0 < γ(θ) ≤ 1) represents the proposed lightness adjustment ratio that dims the input image. To prevent color distortion in the output image, the a_O(θ) and b_O(θ) metrics remain unchanged, and L_O(θ) denotes the resultant lightness of the output image. In this paper, L_I(θ) and D_I(θ) are the two crucial parameters that adaptively tune γ(θ) pixel-wise. For instance, in Fig. 2, the yellow color marks pixels that are assigned a higher γ because their low D_I(θ) draws more human attention, so the image quality there must be preserved. Conversely, the blue color indicates pixels with a lower γ, which opens the possibility of saving more power since their salience is insignificant. In either case, a lower L_I(θ) enables a higher γ.

 

Fig. 2 The distributions of the γ(θ) based on LI(θ) and DI(θ).


2.4. Non-linear weight transformation

Although in the HVS near objects are more salient than far ones, human saliency perception is not linear in the depth value. Consequently, the D_I(θ) value cannot be used directly to tune the γ ratio. To overcome this drawback, this study proposes a non-linear weighted transformation that defines γ to fit human saliency perception. Initially, because L_I(θ) and D_I(θ) use different numerical ranges, both are normalized by respective normalization coefficients, formulated as:

L̄_I(θ) = L_I(θ) / ρ,  (2)
and
D̄_I(θ) = D_I(θ) / σ,  (3)
where ρ and σ represent the lightness normalization coefficient and the depth normalization coefficient, empirically set to 10 and 25.5, respectively.

Next, both L̄_I(θ) and D̄_I(θ) are transformed into the lightness weighting factor, ω_L(θ), and the depth weighting factor, ω_D(θ), respectively, via the proposed weighting functions, which can be mathematically expressed as:

ω_L(θ) = exp[(2^b − 1)/ρ − L̄_I(θ)] / (1 + exp[(2^b − 1)/ρ − L̄_I(θ)]) + δ_L,  (4)
and
ω_D(θ) = exp[(2^b − 1)/σ − D̄_I(θ)] / (1 + exp[(2^b − 1)/σ − D̄_I(θ)]) + δ_D,  (5)
where δ_L and δ_D denote constants empirically set to 0.25 and 0.45, respectively. These constants prevent zero weighting factors, which would lead to a zero γ and produce black pixels as well as undesired image distortions. The b in Eqs. (4) and (5) represents the number of bits used to represent an image, commonly 8, depending on the image type. After the transformation, γ is obtained by multiplying the two weighting factors, expressed as:
γ(θ) = ω_L(θ) × ω_D(θ).  (6)
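Under our reading of Eqs. (2)–(6), the per-pixel ratio can be sketched as follows. Three points are assumptions on our part: the lightness is taken as an 8-bit code (0–255, so that L̄_I spans 0–25.5 and matches the (2^b − 1)/ρ term), the exponent is (2^b − 1)/ρ − L̄_I(θ) (respectively σ and D̄_I), and the product is clamped so that the stated range 0 < γ(θ) ≤ 1 holds:

```python
import math

B = 8                          # bits per code value
RHO, SIGMA = 10.0, 25.5        # normalization coefficients (Sec. 2.4)
DELTA_L, DELTA_D = 0.25, 0.45  # floor constants preventing zero weights

def _logistic(x):
    return math.exp(x) / (1.0 + math.exp(x))

def weight(value, norm, delta):
    """Non-linear weighting factor, Eqs. (4)/(5) as we read them."""
    return _logistic((2 ** B - 1) / norm - value / norm) + delta

def gamma(lightness, depth):
    """Pixel-wise lightness adjustment ratio, Eq. (6); the clamp to
    gamma <= 1 is our addition to honor the range 0 < gamma <= 1."""
    w_l = weight(lightness, RHO, DELTA_L)  # darker pixel  -> larger weight
    w_d = weight(depth, SIGMA, DELTA_D)    # nearer pixel  -> larger weight
    return min(1.0, w_l * w_d)
```

For example, a bright far pixel gets γ = 0.75 × 0.95 ≈ 0.71, while the same bright pixel in the foreground keeps γ = 1, matching the behavior described around Fig. 2.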

 

Fig. 3 Illustration of (a) the experiment setup and (b) diagram of power measurement.


3. Experimental results

3.1. Power measurement setup

To appropriately benchmark the power-saving rates of all prior and proposed methods, power measurements were performed on a 5.5-inch full-HD AMOLED display module (AUO H546DLB01.1) mounted in an HMD case, as shown in Fig. 3(a). The stereoscopic images were processed on an identical personal computer and uploaded to the display module through a data-only connection. A power meter continuously monitored the power consumed by the display module and reported the final reading after a steady state had been reached for each measurement. Moreover, to retrieve the power consumption of the AMOLED display module correctly, the supplementary power generated to drive the AMOLED module had been subtracted from each power measurement. The setup in Fig. 3(b) thus isolates the displaying power of the AMOLED display. The power-saving rate of the AMOLED display module can be expressed as follows:

P_saving% = (P_original − P_resultant) / P_original × 100%,  (7)
where Poriginal and Presultant denote the power consumption while displaying the original and the resultant stereoscopic images, respectively.
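Equation (7) amounts to the fractional drop in display power after the supplementary driving power has been removed; the wattage readings below are illustrative only:

```python
def display_power(p_total, p_supplementary):
    """Displaying power only: the supplementary driving power is
    subtracted from the meter reading, as in the setup of Fig. 3(b)."""
    return p_total - p_supplementary

def power_saving_rate(p_original, p_resultant):
    """Power-saving percentage of Eq. (7)."""
    return (p_original - p_resultant) / p_original * 100.0
```

For instance, with illustrative readings of 2.0 W before and 1.5 W after dimming, power_saving_rate(2.0, 1.5) gives 25.0.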


Table 1. The image specifications of the benchmark datasets

3.2. Datasets and image quality assessment tools

All prior methods and the proposed method were benchmarked on identical RGB stereoscopic images, obtained from publicly available datasets: a) the Stereo Tracking Dataset for a Person Following Robot (STD) [14,15] and b) the MPI Sintel Flow Dataset (SFD) [16]. These datasets provide different objects (i.e., types and poses) and various perceptual environment constraints (i.e., lighting, aspect ratio) to comprehensively challenge the benchmarked power-saving methods. The detailed specifications of the images from these datasets are presented in Table 1.

To evaluate the performance of the proposed algorithm, three image quality assessment (IQA) algorithms are utilized. The visual saliency-induced index (VSI) [17] assesses the quality of the resultant frames by comparing the perceptual similarities between the reference and tested frames. The SSIM [18] serves as an auxiliary IQA; the SSIM index is a perception-based model that treats image degradation as a perceived change in structural information. To obtain a single overall quality measurement for an entire frame, the mean SSIM index is employed. In addition, the feature-similarity index (FSIMc) [19] assesses frame quality based on phase congruency and gradient magnitude as the primary and complementary features, respectively, including the chroma information.
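To make the SSIM comparison concrete, a single-window (global) variant of the index can be sketched as follows; the standard SSIM of [18] averages this statistic over local Gaussian-weighted windows, and the constants follow the usual K1 = 0.01, K2 = 0.03 convention:

```python
def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equal-length pixel lists.
    A didactic global variant; the standard index [18] averages
    the same statistic over local windows."""
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((p - mx) ** 2 for p in x) / (n - 1)
    vy = sum((q - my) ** 2 for q in y) / (n - 1)
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1, and uniformly dimming one copy lowers the score.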


Table 2. IQA and power-saving comparisons of the prior and proposed method

3.3. Quantitative evaluations

Table 2 presents quantitative data comparing the prior and proposed power-saving methods on AMOLED-based HMDs in terms of perceptual quality and power-saving rate. Ideally, a benchmarked method should achieve both high perceptual quality and a high power-saving rate. According to the VSI scores in Table 2, the proposed method and the prior methods all achieved similarly high VSI scores above 0.95 on both datasets, which suggests that, in terms of visual saliency, the proposed method preserves the salient information even after the stereoscopic images are dimmed.

In terms of image structure, the SSIM scores in Table 2 suggest that the proposed method preserves more image structure than the prior methods, with average SSIM indices of 0.732 and 0.759 on the STD and SFD datasets, respectively. Because the proposed method gradually dims the pixel intensities based on the depth cue, the edge information between foreground(s) and background(s) remains firm, preserving the image structure as characterized by image contrast. As for the color-based feature similarity (FSIMc) in Table 2, the proposed method introduces a few minor chroma degradations compared to the prior methods, owing to the color alterations over the blue spectrum made to achieve a higher power-saving rate. This condition could be addressed in future development by employing a quality-constrained function that controls the adjustment of the γ value for blue-colored pixels.

The power-saving rates in Table 2 indicate that the proposed method achieved the highest rate among the benchmarked methods. Over both datasets, the proposed method achieved an average Psaving% of 36.32%, which is 0.14% higher than [7], 4.39% higher than [6], and 16.96% higher than [5]. The significant improvement over [5,6] indicates that the proposed method minimizes the trade-off between image quality and power-saving rate. Although the difference in average Psaving% between [7] and the proposed method in Table 2 is marginal, the proposed method is better at preserving the image details and lightness.

Figure 4 shows several montages of original and resultant stereoscopic images from the prior methods [5,6,7] and the proposed method. Notice that the resultant images of [7] and the proposed method in Fig. 4 differ subtly in total reproduced lightness: the proposed method preserves more image brightness at a power-saving rate at least as high as that of [7]. In terms of color reproduction, the proposed method increases the chroma saturation to its limit without the extreme color alteration that would decimate overall performance and applicability.

 

Fig. 4 Montages of stereoscopic images that were processed with prior and proposed method.



Table 3. The computational performance of the proposed algorithm compared with [7]

3.4. Computational performance

As mentioned above, the computational complexity of a power-saving algorithm is a vital aspect for AMOLED-based HMDs, as it largely determines the user experience in the virtual reality environment. Any significant latency would cause motion sickness, which would rule out deploying the power-saving algorithm on AMOLED-based HMDs. To characterize the computational complexity of the proposed method and the state of the art [7], a series of complexity measurements was conducted on an identical personal computer with a 3.2 GHz AMD Ryzen 5 1600 CPU and 16 GB of RAM. In this study, the look-up table (LUT) technique is utilized to accelerate the proposed non-linear weighted transformation, since that transformation would otherwise account for most of the computational overhead.
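The LUT acceleration can be sketched as follows, with the same caveats as our reading of Eqs. (4)–(6) (the exponent's sign, the 8-bit coding of the lightness, and the clamp to γ ≤ 1 are assumptions): since 8-bit lightness and depth codes take only 256 values each, the two per-pixel exponentials reduce to two table lookups and one multiply.

```python
import math

B, RHO, SIGMA = 8, 10.0, 25.5
DELTA_L, DELTA_D = 0.25, 0.45

def _weight(value, norm, delta):
    """Non-linear weighting factor of Eqs. (4)/(5) as we read them."""
    x = (2 ** B - 1) / norm - value / norm
    return math.exp(x) / (1.0 + math.exp(x)) + delta

# Built once at start-up: one 256-entry table per weighting factor.
LUT_L = [_weight(v, RHO, DELTA_L) for v in range(2 ** B)]
LUT_D = [_weight(v, SIGMA, DELTA_D) for v in range(2 ** B)]

def gamma_lut(lightness, depth):
    """Per-pixel ratio with no transcendental math in the pixel loop;
    the clamp keeps gamma within the stated range 0 < gamma <= 1."""
    return min(1.0, LUT_L[lightness] * LUT_D[depth])
```

A full 256 × 256 table of γ itself would trade 64K entries of memory for one fewer multiply per pixel; with two 256-entry tables, the memory cost is negligible.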

In Table 3, the time cost of the proposed method to process a stereoscopic image was 37.53 ms on the STD dataset and 71.72 ms on the SFD dataset. On average, the proposed algorithm is 5.34 times faster than [7], which also reduces the power consumption on AMOLED displays, especially for video applications, while achieving a comparable power-saving rate per Table 2. Consequently, the computational results show that the proposed algorithm can be applied to AMOLED-based HMDs at limited image resolutions.

4. Conclusion

This study employs lightness and depth information to design a dynamic lightness adjustment scheme that reduces the lightness of input images and thereby the power consumption of AMOLED displays. First, the depth map is computed from the input stereoscopic images by a block matching algorithm. After normalizing the lightness and depth values of the input images, a non-linear weighted transformation combines the two to determine the lightness adjustment ratio according to human saliency perception. The experimental results show that the proposed method achieves average SSIM and VSI scores of 0.756 and 0.962, respectively, and a power-saving rate of up to 44.26%. In addition, the computational measurements show that the proposed power-saving algorithm is on average 5.35 times faster than the existing method while maintaining a considerably high power-saving rate.

References

1. K.-D. Chang, C.-Y. Li, J.-W. Pan, and K.-Y. Cheng, “A hybrid simulated method for analyzing the optical efficiency of a head-mounted display with a quasi-crystal oled panel,” Optics Express 22, A567–A576 (2014). [CrossRef]   [PubMed]  

2. C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017). [CrossRef]  

3. C. Lee, C. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for oled displays based on histogram equalization,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, (IEEE, 2010), pp. 1689–1692.

4. L.-M. Jan, F.-C. Cheng, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “A power-saving histogram adjustment algorithm for oled-oriented contrast enhancement,” Journal of Display Technology 12, 368–375 (2016). [CrossRef]  

5. T.-C. Chang, S. S.-D. Xu, and S.-F. Su, “Ssim-based quality-on-demand energy-saving schemes for oled displays,” IEEE Trans. Systems, Man, and Cybernetics: Systems 46, 623–635 (2016). [CrossRef]  

6. P. Chondro and S.-J. Ruan, “Perceptually hue-oriented power-saving scheme with overexposure corrector for amoled displays,” Journal of Display Technology 12, 791–800 (2016). [CrossRef]  

7. P. Chondro, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “Advanced multimedia power saving method using dynamic pixel dimmer on amoled displays,” IEEE Transactions on Circuits and Systems for Video Technology 28, 2200–2209 (2017). [CrossRef]  

8. C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

9. R. Cong, J. Lei, C. Zhang, Q. Huang, X. Cao, and C. Hou, “Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion,” IEEE Signal Processing Letters 23, 819–823 (2016). [CrossRef]  

10. J. Schwiegerling, Colorimetry: CIELAB Color Space(SPIE Press, 2004), Chap. 4, pp. 77–78.

11. B. Hill, T. Roger, and F. Vorhagen, “Comparative analysis of the quantization of color spaces on the basis of the cielab color-difference formula,” ACM Transactions on Graphics 16, 109–154 (1997). [CrossRef]  

12. M. Safda, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Optics Express 25, 15131–15151 (2017). [CrossRef]  

13. A. Kaehler and G. Bradski, Projection and Three-Dimensional Vision (O’Reilly Media, Inc., 2016), Chap. 19, pp. 737–761.

14. B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Person following robot using selected online ada-boosting with stereo camera,” in Computer and Robot Vision (CRV), 2017 14th Conference on, (IEEE, 2017), pp. 48–55.

15. B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Integrating stereo vision with a cnn tracker for a person-following robot,” in International Conference on Computer Vision Systems, (Springer, 2017), pp. 300–313. [CrossRef]  

16. D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A naturalistic open source movie for optical flow evaluation,” in European Conference on Computer Vision, (Springer, 2012), pp. 611–625.

17. L. Zhang, Y. Shen, and H. Li, “Vsi: A visual saliency-induced index for perceptual image quality assessment,” IEEE Transactions on Image Processing 23, 4270–4281 (2014). [CrossRef]   [PubMed]  

18. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing 13, 600–612 (2004). [CrossRef]   [PubMed]  

19. L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011). [CrossRef]   [PubMed]  


Katti, H.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

Kim, C.-S.

C. Lee, C. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for oled displays based on histogram equalization,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, (IEEE, 2010), pp. 1689–1692.

Kim, Y. J.

M. Safda, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Optics Express 25, 15131–15151 (2017).
[Crossref]

Lang, C.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

Lee, C.

C. Lee, C. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for oled displays based on histogram equalization,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, (IEEE, 2010), pp. 1689–1692.

C. Lee, C. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for oled displays based on histogram equalization,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, (IEEE, 2010), pp. 1689–1692.

Lei, J.

R. Cong, J. Lei, C. Zhang, Q. Huang, X. Cao, and C. Hou, “Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion,” IEEE Signal Processing Letters 23, 819–823 (2016).
[Crossref]

Li, C.-C.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Li, C.-Y.

K.-D. Chang, C.-Y. Li, J.-W. Pan, and K.-Y. Cheng, “A hybrid simulated method for analyzing the optical efficiency of a head-mounted display with a quasi-crystal oled panel,” Optics Express 22, A567–A576 (2014).
[Crossref] [PubMed]

Li, H.

L. Zhang, Y. Shen, and H. Li, “Vsi: A visual saliency-induced index for perceptual image quality assessment,” IEEE Transactions on Image Processing 23, 4270–4281 (2014).
[Crossref] [PubMed]

Liao, H.-C.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Lin, S.-A.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Lin, T.-H.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Luo, M. R.

M. Safda, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Optics Express 25, 15131–15151 (2017).
[Crossref]

Mou, X.

L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011).
[Crossref] [PubMed]

Nguyen, T.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

Pan, J.-W.

K.-D. Chang, C.-Y. Li, J.-W. Pan, and K.-Y. Cheng, “A hybrid simulated method for analyzing the optical efficiency of a head-mounted display with a quasi-crystal oled panel,” Optics Express 22, A567–A576 (2014).
[Crossref] [PubMed]

Roger, T.

B. Hill, T. Roger, and F. Vorhagen, “Comparative analysis of the quantization of color spaces on the basis of the cielab color-difference formula,” ACM Transactions on Graphics 16, 109–154 (1997).
[Crossref]

Ruan, S.-J.

P. Chondro, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “Advanced multimedia power saving method using dynamic pixel dimmer on amoled displays,” IEEE Transactions on Circuits and Systems for Video Technology 28, 2200–2209 (2017).
[Crossref]

P. Chondro and S.-J. Ruan, “Perceptually hue-oriented power-saving scheme with overexposure corrector for amoled displays,” Journal of Display Technology 12, 791–800 (2016).
[Crossref]

L.-M. Jan, F.-C. Cheng, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “A power-saving histogram adjustment algorithm for oled-oriented contrast enhancement,” Journal of Display Technology 12, 368–375 (2016).
[Crossref]

Safda, M.

M. Safda, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Optics Express 25, 15131–15151 (2017).
[Crossref]

Sahdev, R.

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Person following robot using selected online ada-boosting with stereo camera,” in Computer and Robot Vision (CRV), 2017 14th Conference on, (IEEE, 2017), pp. 48–55.

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Integrating stereo vision with a cnn tracker for a person-following robot,” in International Conference on Computer Vision Systems, (Springer, 2017), pp. 300–313.
[Crossref]

Schwiegerling, J.

J. Schwiegerling, Colorimetry: CIELAB Color Space(SPIE Press, 2004), Chap. 4, pp. 77–78.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing 13, 600–612 (2004).
[Crossref] [PubMed]

Shen, C.-A.

P. Chondro, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “Advanced multimedia power saving method using dynamic pixel dimmer on amoled displays,” IEEE Transactions on Circuits and Systems for Video Technology 28, 2200–2209 (2017).
[Crossref]

L.-M. Jan, F.-C. Cheng, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “A power-saving histogram adjustment algorithm for oled-oriented contrast enhancement,” Journal of Display Technology 12, 368–375 (2016).
[Crossref]

Shen, Y.

L. Zhang, Y. Shen, and H. Li, “Vsi: A visual saliency-induced index for perceptual image quality assessment,” IEEE Transactions on Image Processing 23, 4270–4281 (2014).
[Crossref] [PubMed]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing 13, 600–612 (2004).
[Crossref] [PubMed]

Stanley, G. B.

D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A naturalistic open source movie for optical flow evaluation,” in European Conference on Computer Vision, (Springer, 2012), pp. 611–625.

Su, S.-F.

T.-C. Chang, S. S.-D. Xu, and S.-F. Su, “Ssim-based quality-on-demand energy-saving schemes for oled displays,” IEEE Trans. Systems, Man, and Cybernetics: Systems 46, 623–635 (2016).
[Crossref]

Tseng, H.-Y.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Tsotsos, J. K.

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Integrating stereo vision with a cnn tracker for a person-following robot,” in International Conference on Computer Vision Systems, (Springer, 2017), pp. 300–313.
[Crossref]

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Person following robot using selected online ada-boosting with stereo camera,” in Computer and Robot Vision (CRV), 2017 14th Conference on, (IEEE, 2017), pp. 48–55.

Vorhagen, F.

B. Hill, T. Roger, and F. Vorhagen, “Comparative analysis of the quantization of color spaces on the basis of the cielab color-difference formula,” ACM Transactions on Graphics 16, 109–154 (1997).
[Crossref]

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing 13, 600–612 (2004).
[Crossref] [PubMed]

Wu, Y.-C.

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

Wulff, J.

D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A naturalistic open source movie for optical flow evaluation,” in European Conference on Computer Vision, (Springer, 2012), pp. 611–625.

Xu, S. S.-D.

T.-C. Chang, S. S.-D. Xu, and S.-F. Su, “Ssim-based quality-on-demand energy-saving schemes for oled displays,” IEEE Trans. Systems, Man, and Cybernetics: Systems 46, 623–635 (2016).
[Crossref]

Yadati, K.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

Yan, S.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

Zhang, C.

R. Cong, J. Lei, C. Zhang, Q. Huang, X. Cao, and C. Hou, “Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion,” IEEE Signal Processing Letters 23, 819–823 (2016).
[Crossref]

Zhang, D.

L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011).
[Crossref] [PubMed]

Zhang, L.

L. Zhang, Y. Shen, and H. Li, “Vsi: A visual saliency-induced index for perceptual image quality assessment,” IEEE Transactions on Image Processing 23, 4270–4281 (2014).
[Crossref] [PubMed]

L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011).
[Crossref] [PubMed]

L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011).
[Crossref] [PubMed]

ACM Transactions on Graphics (1)

B. Hill, T. Roger, and F. Vorhagen, “Comparative analysis of the quantization of color spaces on the basis of the cielab color-difference formula,” ACM Transactions on Graphics 16, 109–154 (1997).
[Crossref]

IEEE Signal Processing Letters (1)

R. Cong, J. Lei, C. Zhang, Q. Huang, X. Cao, and C. Hou, “Saliency detection for stereoscopic images based on depth confidence analysis and multiple cues fusion,” IEEE Signal Processing Letters 23, 819–823 (2016).
[Crossref]

IEEE Trans. Systems, Man, and Cybernetics: Systems (1)

T.-C. Chang, S. S.-D. Xu, and S.-F. Su, “Ssim-based quality-on-demand energy-saving schemes for oled displays,” IEEE Trans. Systems, Man, and Cybernetics: Systems 46, 623–635 (2016).
[Crossref]

IEEE Transactions on Circuits and Systems for Video Technology (1)

P. Chondro, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “Advanced multimedia power saving method using dynamic pixel dimmer on amoled displays,” IEEE Transactions on Circuits and Systems for Video Technology 28, 2200–2209 (2017).
[Crossref]

IEEE Transactions on Image Processing (3)

L. Zhang, Y. Shen, and H. Li, “Vsi: A visual saliency-induced index for perceptual image quality assessment,” IEEE Transactions on Image Processing 23, 4270–4281 (2014).
[Crossref] [PubMed]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing 13, 600–612 (2004).
[Crossref] [PubMed]

L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing 20, 2378–2386 (2011).
[Crossref] [PubMed]

Journal of Display Technology (2)

L.-M. Jan, F.-C. Cheng, C.-H. Chang, S.-J. Ruan, and C.-A. Shen, “A power-saving histogram adjustment algorithm for oled-oriented contrast enhancement,” Journal of Display Technology 12, 368–375 (2016).
[Crossref]

P. Chondro and S.-J. Ruan, “Perceptually hue-oriented power-saving scheme with overexposure corrector for amoled displays,” Journal of Display Technology 12, 791–800 (2016).
[Crossref]

Optics Express (3)

K.-D. Chang, C.-Y. Li, J.-W. Pan, and K.-Y. Cheng, “A hybrid simulated method for analyzing the optical efficiency of a head-mounted display with a quasi-crystal oled panel,” Optics Express 22, A567–A576 (2014).
[Crossref] [PubMed]

C.-C. Li, H.-Y. Tseng, H.-C. Liao, H.-M. Chen, T. Hsieh, S.-A. Lin, H.-C. Jau, Y.-C. Wu, Y.-L. Hsu, W.-H. Hsu, and T.-H. Lin, “Enhanced image quality of oled transparent display by cholesteric liquid crystal back-panel,” Optics Express 25, 29199–29206 (2017).
[Crossref]

M. Safda, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Optics Express 25, 15131–15151 (2017).
[Crossref]

Other (7)

A. Kaehler and G. Bradski, Projection and Three-Dimensional Vision (O’Reilly Media, Inc., 2016), Chap. 19, pp. 737–761.

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Person following robot using selected online ada-boosting with stereo camera,” in Computer and Robot Vision (CRV), 2017 14th Conference on, (IEEE, 2017), pp. 48–55.

B. X. Chen, R. Sahdev, and J. K. Tsotsos, “Integrating stereo vision with a cnn tracker for a person-following robot,” in International Conference on Computer Vision Systems, (Springer, 2017), pp. 300–313.
[Crossref]

D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, “A naturalistic open source movie for optical flow evaluation,” in European Conference on Computer Vision, (Springer, 2012), pp. 611–625.

C. Lee, C. Lee, and C.-S. Kim, “Power-constrained contrast enhancement for oled displays based on histogram equalization,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, (IEEE, 2010), pp. 1689–1692.

C. Lang, T. Nguyen, H. Katti, K. Yadati, M. Kankanhalli, and S. Yan, “Depth matters: Influence of depth cues on visual saliency,” in European Conference on Computer Vision (ECCV), (Springer, 2012), pp. 101–115.

J. Schwiegerling, Colorimetry: CIELAB Color Space(SPIE Press, 2004), Chap. 4, pp. 77–78.


Figures (4)

Fig. 1: The flow diagram of the proposed power-saving algorithm.
Fig. 2: The distributions of γ(θ) based on L_I(θ) and D_I(θ).
Fig. 3: Illustration of (a) the experiment setup and (b) the diagram of the power measurement.
Fig. 4: Montages of stereoscopic images processed with the prior and proposed methods.

Tables (3)

Table 1: The image specifications of the benchmark datasets.
Table 2: IQA and power-saving comparisons of the prior and proposed methods.
Table 3: The computational performance of the proposed algorithm compared with [7].

Equations (7)

$$\begin{cases} L_O(\theta) = \gamma(\theta) \times L_I(\theta), \\ a_O(\theta) = a_I(\theta), \\ b_O(\theta) = b_I(\theta), \end{cases}$$

$$\bar{L}_I(\theta) = \frac{L_I(\theta)}{\rho},$$

$$\bar{D}_I(\theta) = \frac{D_I(\theta)}{\sigma},$$

$$\omega_L(\theta) = \frac{e^{2 b_1 \rho \bar{L}_I(\theta)}}{1 + e^{2 b_1 \rho \bar{L}_I(\theta)}} + \delta_L,$$

$$\omega_D(\theta) = \frac{e^{2 b_1 \sigma \bar{D}_I(\theta)}}{1 + e^{2 b_1 \sigma \bar{D}_I(\theta)}} + \delta_D,$$

$$\gamma(\theta) = \omega_L(\theta) \times \omega_D(\theta).$$

$$P_{\text{saving}}\% = \frac{P_{\text{original}} - P_{\text{resultant}}}{P_{\text{original}}} \times 100\%,$$
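As one plausible reading of the equations above, the per-pixel gain γ(θ) is the product of two logistic weights, one over the normalized lightness and one over the normalized depth (which carries the saliency cue), and the suppressed output lightness is simply γ(θ)·L_I(θ). The sketch below is illustrative only: the function names and every parameter value (b1, rho, sigma, delta_L, delta_D) are assumptions rather than the authors' settings, and the logistic is applied to the normalized values for numerical stability.

```python
import numpy as np

def _sigmoid(x):
    """Logistic function used for both weighting terms."""
    return 1.0 / (1.0 + np.exp(-x))

def dimming_gain(L, D, rho=100.0, sigma=255.0, b1=2.0,
                 delta_L=0.05, delta_D=0.05):
    """Per-pixel dimming gain gamma(theta) from CIELAB lightness L and a
    depth map D, following the weight-product structure of the equations
    above. All parameter values here are illustrative assumptions."""
    L_bar = np.asarray(L, dtype=float) / rho    # normalized lightness
    D_bar = np.asarray(D, dtype=float) / sigma  # normalized depth
    w_L = _sigmoid(2.0 * b1 * L_bar) + delta_L  # lightness weight
    w_D = _sigmoid(2.0 * b1 * D_bar) + delta_D  # depth (saliency) weight
    # Combined gain, clipped so the dimmed lightness never exceeds the input.
    return np.clip(w_L * w_D, 0.0, 1.0)

def power_saving_pct(p_original, p_resultant):
    """Power-saving rate in percent, as in the last equation."""
    return (p_original - p_resultant) / p_original * 100.0
```

Applying the gain to the lightness channel only (L_O = γ·L_I, with a and b untouched) mirrors the first equation; pixels with low lightness and low depth saliency receive smaller gains and are therefore dimmed more aggressively.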
