Abstract

This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast-mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field, which is then reintegrated to form the output. Constraints on output colors are provided by an initial RGB rendering. We first motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, and then expand it into a more rigorous theorem that incorporates the color constraints. The solution to these constrained optimizations is closed form, yielding simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can serve a variety of applications with the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color with near-infrared or clear-filter images, multilighting imaging, dark flash photography, and color visualization of diffusion-tensor magnetic resonance imaging.
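The key object in the abstract is the structure tensor of a multichannel gradient: for an N-channel image, the per-pixel 2×2 tensor Z = J^T J (with J the Jacobian stacking per-channel x- and y-derivatives) summarizes local contrast in all channels at once, and its leading eigenvalue gives the Di Zenzo contrast magnitude that the method seeks to reproduce in the color output. As a minimal illustration (not the paper's full algorithm, and with function names of our own choosing), this can be sketched in NumPy:

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel Di Zenzo structure tensor of an H x W x N image.

    Returns a (H, W, 2, 2) array Z = J^T J, where J stacks the
    per-channel x- and y-derivatives at each pixel.
    """
    gy, gx = np.gradient(img, axis=(0, 1))      # per-channel derivatives
    zxx = (gx * gx).sum(axis=-1)                # sum over channels
    zxy = (gx * gy).sum(axis=-1)
    zyy = (gy * gy).sum(axis=-1)
    return np.stack([np.stack([zxx, zxy], axis=-1),
                     np.stack([zxy, zyy], axis=-1)], axis=-2)

def dizenzo_contrast(Z):
    """Contrast magnitude: sqrt of the leading eigenvalue of Z."""
    return np.sqrt(np.linalg.eigvalsh(Z)[..., -1])

# A horizontal ramp in one channel has unit gradient everywhere,
# so the contrast magnitude is 1 at every pixel.
img = np.tile(np.arange(8.0), (8, 1))[..., None]
c = dizenzo_contrast(structure_tensor(img))
```

The fusion methods discussed here go one step further: they construct a 3-channel (color) gradient field whose structure tensor matches this N-channel tensor, then reintegrate that field to obtain the displayed image.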

© 2015 Optical Society of America



2015 (1)

S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58, 81–91 (2015).
[Crossref]

2011 (3)

G. D. Finlayson, D. Connah, and M. S. Drew, “Lookup-table-based gradient field reconstruction,” IEEE Trans. Image Process. 20, 2827–2836 (2011).
[Crossref]

M. Drew and G. Finlayson, “Improvement of colorization realism via the structure tensor,” Int. J. Image Graph. 11, 589–609 (2011).
[Crossref]

G. Hamarneh, C. McIntosh, and M. S. Drew, “Perception-based visualization of manifold-valued medical images using distance-preserving dimensionality reduction,” IEEE Trans. Med. Imaging 30, 1314–1327 (2011).
[Crossref]

2010 (3)

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).
[Crossref]

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).
[Crossref]

K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Trans. Geosci. Remote Sens. 48, 2308–2316 (2010).
[Crossref]

2009 (2)

G. Piella, “Image fusion for enhanced visualization: a variational approach,” Int. J. Comput. Vis. 83, 1–11 (2009).
[Crossref]

D. Krishnan and R. Fergus, “Dark flash photography,” ACM Trans. Graph. 28, 96 (2009).
[Crossref]

2007 (2)

N. Jacobson, M. Gupta, and J. Cole, “Linear fusion of image sets for display,” IEEE Trans. Geosci. Remote Sens. 45, 3277–3288 (2007).
[Crossref]

J. Davis and V. Sharma, “Background-subtraction using contour-based fusion of thermal and visible imagery,” Comput. Vis. Image Unders. 106, 162–182 (2007).
[Crossref]

2006 (1)

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus,” Mag. Res. Med. 56, 411–421 (2006).
[Crossref]

2003 (3)

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).
[Crossref]

A. Toet, “Natural colour mapping for multiband nightvision imagery,” Inf. Fusion 4, 155–166 (2003).
[Crossref]

P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph. 22, 313–318 (2003).
[Crossref]

2002 (3)

R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Trans. Graph. 21, 249–256 (2002).
[Crossref]

D. Socolinsky and L. Wolff, “Multispectral image visualization through first-order fusion,” IEEE Trans. Image Process. 11, 923–931 (2002).
[Crossref]

P. Scheunders, “A multivalued image wavelet representation based on multiscale fundamental forms,” IEEE Trans. Image Process. 11, 568–575 (2002).
[Crossref]

1998 (1)

C. Pohl and J. L. V. Genderen, “Multisensor image fusion in remote sensing: concepts, methods and applications,” Int. J. Remote Sens. 19, 823–854 (1998).
[Crossref]

1997 (1)

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

1995 (1)

H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).
[Crossref]

1989 (1)

A. Toet, J. J. V. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng. 28, 287789 (1989).
[Crossref]

1988 (1)

R. T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 439–451 (1988).
[Crossref]

1986 (1)

S. Di Zenzo, “A note on the gradient of a multi-image,” Comp. Vis. Graph. Image Process. 33, 116–125 (1986).
[Crossref]

1985 (1)

P. J. Burt and E. H. Adelson, “Merging images through pattern decomposition,” Proc. SPIE 0575, 173–181 (1985).

Adelson, E. H.

P. J. Burt and E. H. Adelson, “Merging images through pattern decomposition,” Proc. SPIE 0575, 173–181 (1985).

Agrawal, A.

J. Tumblin, A. Agrawal, and R. Raskar, “Why I want a gradient camera,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, California (2005), pp. 103–110.

A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in International Conference on Computer Vision (2005), pp. 174–181.

Aguilar, M.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Arsigny, V.

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus,” Mag. Res. Med. 56, 411–421 (2006).
[Crossref]

Ayache, N.

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus,” Mag. Res. Med. 56, 411–421 (2006).
[Crossref]

Barbuscia, N.

C. Fredembach, N. Barbuscia, and S. Süsstrunk, “Combining visible and near-infrared images for realistic skin smoothing,” in IS&T/SID 17th Color Imaging Conference (2009).

Bhat, P.

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).
[Crossref]

Blake, A.

P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph. 22, 313–318 (2003).
[Crossref]

Braun, M. I.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Bull, D.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

Burt, P. J.

P. J. Burt and E. H. Adelson, “Merging images through pattern decomposition,” Proc. SPIE 0575, 173–181 (1985).

Campbell, J. B.

J. B. Campbell and H. Wynne, Introduction to Remote Sensing, 5th ed. (Guilford, 2011).

Canagarajah, C.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

Carrick, J.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Chaudhuri, S.

K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Trans. Geosci. Remote Sens. 48, 2308–2316 (2010).
[Crossref]

Chellappa, R.

R. T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 439–451 (1988).
[Crossref]

A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in International Conference on Computer Vision (2005), pp. 174–181.

Cohen, M.

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).
[Crossref]

Cole, J.

N. Jacobson, M. Gupta, and J. Cole, “Linear fusion of image sets for display,” IEEE Trans. Geosci. Remote Sens. 45, 3277–3288 (2007).
[Crossref]

Connah, D.

G. D. Finlayson, D. Connah, and M. S. Drew, “Lookup-table-based gradient field reconstruction,” IEEE Trans. Image Process. 20, 2827–2836 (2011).
[Crossref]

G. Finlayson, D. Connah, and M. Drew, “Image reconstruction method and system,” U.S. patentUS20120263377 A1 (October18, 2012).

D. Connah, M. Drew, and G. Finlayson, “Method and system for generating accented image data,” U.S. patent8682093 and UK patent GB0914982.4 (March25, 2014).

D. Connah, M. S. Drew, and G. D. Finlayson, “Spectral edge image fusion: theory and applications,” in European Conference on Computer Vision (2014), pp. 65–80.

Cornsweet, T.

T. Cornsweet, Visual Perception (Academic, 1970).

Cui, M.

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).
[Crossref]

Curless, B.

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).
[Crossref]

Davis, J.

J. Davis and V. Sharma, “Background-subtraction using contour-based fusion of thermal and visible imagery,” Comput. Vis. Image Unders. 106, 162–182 (2007).
[Crossref]

Di Zenzo, S.

S. Di Zenzo, “A note on the gradient of a multi-image,” Comp. Vis. Graph. Image Process. 33, 116–125 (1986).
[Crossref]

Diersen, D.

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).
[Crossref]

Drew, M.

M. Drew and G. Finlayson, “Improvement of colorization realism via the structure tensor,” Int. J. Image Graph. 11, 589–609 (2011).
[Crossref]

D. Connah, M. Drew, and G. Finlayson, “Method and system for generating accented image data,” U.S. patent8682093 and UK patent GB0914982.4 (March25, 2014).

G. Finlayson, D. Connah, and M. Drew, “Image reconstruction method and system,” U.S. patentUS20120263377 A1 (October18, 2012).

Drew, M. S.

G. D. Finlayson, D. Connah, and M. S. Drew, “Lookup-table-based gradient field reconstruction,” IEEE Trans. Image Process. 20, 2827–2836 (2011).
[Crossref]

G. Hamarneh, C. McIntosh, and M. S. Drew, “Perception-based visualization of manifold-valued medical images using distance-preserving dimensionality reduction,” IEEE Trans. Med. Imaging 30, 1314–1327 (2011).
[Crossref]

D. Connah, M. S. Drew, and G. D. Finlayson, “Spectral edge image fusion: theory and applications,” in European Conference on Computer Vision (2014), pp. 65–80.

Fattal, R.

R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Trans. Graph. 21, 249–256 (2002).
[Crossref]

Fay, D.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Fergus, R.

D. Krishnan and R. Fergus, “Dark flash photography,” ACM Trans. Graph. 28, 96 (2009).
[Crossref]

Fillard, P.

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus,” Mag. Res. Med. 56, 411–421 (2006).
[Crossref]

Finlayson, G.

M. Drew and G. Finlayson, “Improvement of colorization realism via the structure tensor,” Int. J. Image Graph. 11, 589–609 (2011).
[Crossref]

D. Connah, M. Drew, and G. Finlayson, “Method and system for generating accented image data,” U.S. patent8682093 and UK patent GB0914982.4 (March25, 2014).

G. Finlayson, D. Connah, and M. Drew, “Image reconstruction method and system,” U.S. patentUS20120263377 A1 (October18, 2012).

Finlayson, G. D.

G. D. Finlayson, D. Connah, and M. S. Drew, “Lookup-table-based gradient field reconstruction,” IEEE Trans. Image Process. 20, 2827–2836 (2011).
[Crossref]

D. Connah, M. S. Drew, and G. D. Finlayson, “Spectral edge image fusion: theory and applications,” in European Conference on Computer Vision (2014), pp. 65–80.

Frankot, R. T.

R. T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 439–451 (1988).
[Crossref]

Fredembach, C.

C. Fredembach, N. Barbuscia, and S. Süsstrunk, “Combining visible and near-infrared images for realistic skin smoothing,” in IS&T/SID 17th Color Imaging Conference (2009).

L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt (2009), pp. 1629–1632.

Gangnet, M.

P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph. 22, 313–318 (2003).
[Crossref]

Genderen, J. L. V.

C. Pohl and J. L. V. Genderen, “Multisensor image fusion in remote sensing: concepts, methods and applications,” Int. J. Remote Sens. 19, 823–854 (1998).
[Crossref]

Golub, G.

G. Golub and C. van Loan, Matrix Computations (Johns Hopkins University, 1983).

Gove, A.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Gupta, M.

N. Jacobson, M. Gupta, and J. Cole, “Linear fusion of image sets for display,” IEEE Trans. Geosci. Remote Sens. 45, 3277–3288 (2007).
[Crossref]

Hamarneh, G.

G. Hamarneh, C. McIntosh, and M. S. Drew, “Perception-based visualization of manifold-valued medical images using distance-preserving dimensionality reduction,” IEEE Trans. Med. Imaging 30, 1314–1327 (2011).
[Crossref]

Hasinoff, S. W.

S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58, 81–91 (2015).
[Crossref]

Heidrich, W.

C. Lau, W. Heidrich, and R. Mantiuk, “Cluster-based color space optimizations,” in International Conference on Computer Vision (2011), pp. 1172–1179.

Hu, J.

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).
[Crossref]

Ireland, D.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Jacobson, N.

N. Jacobson, M. Gupta, and J. Cole, “Linear fusion of image sets for display,” IEEE Trans. Geosci. Remote Sens. 45, 3277–3288 (2007).
[Crossref]

Kautz, J.

S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58, 81–91 (2015).
[Crossref]

Konsolakis, A.

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).
[Crossref]

Kotwal, K.

K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Trans. Geosci. Remote Sens. 48, 2308–2316 (2010).
[Crossref]

Krishnan, D.

D. Krishnan and R. Fergus, “Dark flash photography,” ACM Trans. Graph. 28, 96 (2009).
[Crossref]

Lau, C.

C. Lau, W. Heidrich, and R. Mantiuk, “Cluster-based color space optimizations,” in International Conference on Computer Vision (2011), pp. 1172–1179.

Lewis, J.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

Li, H.

H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).
[Crossref]

Lischinski, D.

R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Trans. Graph. 21, 249–256 (2002).
[Crossref]

Manjunath, B.

H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).
[Crossref]

Mantiuk, R.

C. Lau, W. Heidrich, and R. Mantiuk, “Cluster-based color space optimizations,” in International Conference on Computer Vision (2011), pp. 1172–1179.

McIntosh, C.

G. Hamarneh, C. McIntosh, and M. S. Drew, “Perception-based visualization of manifold-valued medical images using distance-preserving dimensionality reduction,” IEEE Trans. Med. Imaging 30, 1314–1327 (2011).
[Crossref]

Mitra, S.

H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).
[Crossref]

Morovic, J.

J. Morovic, Color Gamut Mapping (Wiley, 2008).

Nikolov, S.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

O’Callaghan, R.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

Olsen, R.

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).
[Crossref]

Paris, S.

S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58, 81–91 (2015).
[Crossref]

Pennec, X.

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus,” Mag. Res. Med. 56, 411–421 (2006).
[Crossref]

Pérez, P.

P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph. 22, 313–318 (2003).
[Crossref]

Piella, G.

G. Piella, “Image fusion for enhanced visualization: a variational approach,” Int. J. Comput. Vis. 83, 1–11 (2009).
[Crossref]

Pohl, C.

C. Pohl and J. L. V. Genderen, “Multisensor image fusion in remote sensing: concepts, methods and applications,” Int. J. Remote Sens. 19, 823–854 (1998).
[Crossref]

Racamato, J.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Racamoto, J.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Raskar, R.

J. Tumblin, A. Agrawal, and R. Raskar, “Why I want a gradient camera,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, California (2005), pp. 103–110.

A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in International Conference on Computer Vision (2005), pp. 174–181.

Razdan, A.

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).
[Crossref]

Ross, W.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Roth, G.

G. Roth, Handbook of Practical Astronomy (Springer, 2009).

Ruyven, J. J. V.

A. Toet, J. J. V. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng. 28, 287789 (1989).
[Crossref]

Savoye, E.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Schaul, L.

L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt (2009), pp. 1629–1632.

Scheunders, P.

P. Scheunders, “A multivalued image wavelet representation based on multiscale fundamental forms,” IEEE Trans. Image Process. 11, 568–575 (2002).
[Crossref]

Seibert, M.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Sharma, V.

J. Davis and V. Sharma, “Background-subtraction using contour-based fusion of thermal and visible imagery,” Comput. Vis. Image Unders. 106, 162–182 (2007).
[Crossref]

Socolinsky, D.

D. Socolinsky and L. Wolff, “Multispectral image visualization through first-order fusion,” IEEE Trans. Image Process. 11, 923–931 (2002).
[Crossref]

Streilein, W. W.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Süsstrunk, S.

C. Fredembach, N. Barbuscia, and S. Süsstrunk, “Combining visible and near-infrared images for realistic skin smoothing,” in IS&T/SID 17th Color Imaging Conference (2009).

L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt (2009), pp. 1629–1632.

Toet, A.

A. Toet, “Natural colour mapping for multiband nightvision imagery,” Inf. Fusion 4, 155–166 (2003).
[Crossref]

A. Toet, J. J. V. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng. 28, 287789 (1989).
[Crossref]

Tumblin, J.

J. Tumblin, A. Agrawal, and R. Raskar, “Why I want a gradient camera,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, California (2005), pp. 103–110.

Tyo, J.

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).
[Crossref]

Valeton, J. M.

A. Toet, J. J. V. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng. 28, 287789 (1989).
[Crossref]

van Loan, C.

G. Golub and C. van Loan, Matrix Computations (Johns Hopkins University, 1983).

Waxman, A.

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Waxman, A. M.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

Werman, M.

R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Trans. Graph. 21, 249–256 (2002).
[Crossref]

Wolff, L.

D. Socolinsky and L. Wolff, “Multispectral image visualization through first-order fusion,” IEEE Trans. Image Process. 11, 923–931 (2002).
[Crossref]

Wonka, P.

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).
[Crossref]

Wynne, H.

J. B. Campbell and H. Wynne, Introduction to Remote Sensing, 5th ed. (Guilford, 2011).

Zitnick, C. L.

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).
[Crossref]

ACM Trans. Graph. (4)

R. Fattal, D. Lischinski, and M. Werman, “Gradient domain high dynamic range compression,” ACM Trans. Graph. 21, 249–256 (2002).

P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph. 22, 313–318 (2003).

P. Bhat, C. L. Zitnick, M. Cohen, and B. Curless, “Gradientshop: a gradient-domain optimization framework for image and video filtering,” ACM Trans. Graph. 29, 1–14 (2010).

D. Krishnan and R. Fergus, “Dark flash photography,” ACM Trans. Graph. 28, 96 (2009).

Commun. ACM (1)

S. Paris, S. W. Hasinoff, and J. Kautz, “Local Laplacian filters: edge-aware image processing with a Laplacian pyramid,” Commun. ACM 58, 81–91 (2015).

Comput. Vis. Graph. Image Process. (1)

S. Di Zenzo, “A note on the gradient of a multi-image,” Comput. Vis. Graph. Image Process. 33, 116–125 (1986).

Comput. Vis. Image Underst. (1)

J. Davis and V. Sharma, “Background-subtraction using contour-based fusion of thermal and visible imagery,” Comput. Vis. Image Underst. 106, 162–182 (2007).

Graphical Models Image Process. (1)

H. Li, B. Manjunath, and S. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models Image Process. 57, 235–245 (1995).

IEEE Trans. Geosci. Remote Sens. (3)

N. Jacobson, M. Gupta, and J. Cole, “Linear fusion of image sets for display,” IEEE Trans. Geosci. Remote Sens. 45, 3277–3288 (2007).

J. Tyo, A. Konsolakis, D. Diersen, and R. Olsen, “Principal-components-based display strategy for spectral imagery,” IEEE Trans. Geosci. Remote Sens. 41, 708–718 (2003).

K. Kotwal and S. Chaudhuri, “Visualization of hyperspectral images using bilateral filtering,” IEEE Trans. Geosci. Remote Sens. 48, 2308–2316 (2010).

IEEE Trans. Image Process. (3)

P. Scheunders, “A multivalued image wavelet representation based on multiscale fundamental forms,” IEEE Trans. Image Process. 11, 568–575 (2002).

G. D. Finlayson, D. Connah, and M. S. Drew, “Lookup-table-based gradient field reconstruction,” IEEE Trans. Image Process. 20, 2827–2836 (2011).

D. Socolinsky and L. Wolff, “Multispectral image visualization through first-order fusion,” IEEE Trans. Image Process. 11, 923–931 (2002).

IEEE Trans. Med. Imaging (1)

G. Hamarneh, C. McIntosh, and M. S. Drew, “Perception-based visualization of manifold-valued medical images using distance-preserving dimensionality reduction,” IEEE Trans. Med. Imaging 30, 1314–1327 (2011).

IEEE Trans. Pattern Anal. Mach. Intell. (1)

R. T. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 10, 439–451 (1988).

Inf. Fusion (1)

A. Toet, “Natural colour mapping for multiband nightvision imagery,” Inf. Fusion 4, 155–166 (2003).

Int. J. Comput. Vis. (1)

G. Piella, “Image fusion for enhanced visualization: a variational approach,” Int. J. Comput. Vis. 83, 1–11 (2009).

Int. J. Image Graph. (1)

M. Drew and G. Finlayson, “Improvement of colorization realism via the structure tensor,” Int. J. Image Graph. 11, 589–609 (2011).

Int. J. Remote Sens. (1)

C. Pohl and J. L. V. Genderen, “Multisensor image fusion in remote sensing: concepts, methods and applications,” Int. J. Remote Sens. 19, 823–854 (1998).

Magn. Reson. Med. (1)

V. Arsigny, P. Fillard, X. Pennec, and N. Ayache, “Log-Euclidean metrics for fast and simple calculus on diffusion tensors,” Magn. Reson. Med. 56, 411–421 (2006).

Neural Networks (1)

A. Waxman, A. Gove, D. Fay, J. Racamoto, J. Carrick, M. Seibert, and E. Savoye, “Color night vision: opponent processing in the fusion of visible and IR imagery,” Neural Networks 10, 1–6 (1997).

Opt. Eng. (1)

A. Toet, J. J. V. Ruyven, and J. M. Valeton, “Merging thermal and visual images by a contrast pyramid,” Opt. Eng. 28, 287789 (1989).

Proc. SPIE (1)

P. J. Burt and E. H. Adelson, “Merging images through pattern decomposition,” Proc. SPIE 0575, 173–181 (1985).

Vis. Comput. (1)

M. Cui, J. Hu, A. Razdan, and P. Wonka, “Color to gray conversion using ISOMAP,” Vis. Comput. 26, 1349–1360 (2010).

Other (18)

J. B. Campbell and H. Wynne, Introduction to Remote Sensing, 5th ed. (Guilford, 2011).

L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing using the near-infrared,” in 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt (2009), pp. 1629–1632.

T. Cornsweet, Visual Perception (Academic, 1970).

J. Tumblin, A. Agrawal, and R. Raskar, “Why I want a gradient camera,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, California (2005), pp. 103–110.

J. Lewis, R. O’Callaghan, S. Nikolov, D. Bull, and C. Canagarajah, “Region-based image fusion using complex wavelets,” in 7th International Conference on Information Fusion (2004), Vol. 1, pp. 555–562.

D. Connah, M. Drew, and G. Finlayson, “Method and system for generating accented image data,” U.S. patent 8,682,093 and UK patent GB0914982.4 (March 25, 2014).

D. Connah, M. S. Drew, and G. D. Finlayson, “Spectral edge image fusion: theory and applications,” in European Conference on Computer Vision (2014), pp. 65–80.

D. Fay, A. M. Waxman, M. Aguilar, D. Ireland, J. Racamato, W. Ross, W. W. Streilein, and M. I. Braun, “Fusion of multi-sensor imagery for night vision: color visualization, target learning and search,” in 3rd International Conference on Information Fusion (2000), pp. 215–219.

C. Lau, W. Heidrich, and R. Mantiuk, “Cluster-based color space optimizations,” in International Conference on Computer Vision (2011), pp. 1172–1179.

G. Golub and C. van Loan, Matrix Computations (Johns Hopkins University, 1983).

A. Agrawal, R. Chellappa, and R. Raskar, “An algebraic approach to surface reconstruction from gradient fields,” in International Conference on Computer Vision (2005), pp. 174–181.

G. Finlayson, D. Connah, and M. Drew, “Image reconstruction method and system,” U.S. patent US20120263377 A1 (October 18, 2012).

J. Morovic, Color Gamut Mapping (Wiley, 2008).

NASA, “Landsat imagery,” (2013), http://glcf.umd.edu/data/gls/.

NASA, “AVIRIS: Airborne Visible/Infrared Imaging Spectrometer,” (2013), http://aviris.jpl.nasa.gov/.

M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, “A standard default color space for the Internet—sRGB,” (1996), http://www.w3.org/Graphics/Color/sRGB.

G. Roth, Handbook of Practical Astronomy (Springer, 2009).

C. Fredembach, N. Barbuscia, and S. Süsstrunk, “Combining visible and near-infrared images for realistic skin smoothing,” in IS&T/SID 17th Color Imaging Conference (2009).


Figures (12)

Fig. 1. Remote sensing example using images from Landsat [37] (see text for details). (a) RGB (putative), (b) SWIR, and (c) SpE output.

Fig. 2. Second image from the Landsat database, showing results for the bilateral filtering approach of [23] and the proposed approach. (a) RGB (putative), (b) bilateral filter (pseudo-color), (c) ansatz, (d) MWIR, (e) luminance replacement (“natural” color), and (f) SpE.

Fig. 3. Example of hyperspectral image fusion; images taken from the AVIRIS dataset [38]. In (b), the largely blue output arises because most of the energy in each pixel spectrum resides in the visible band, at the short-wavelength end of the full measured spectrum, which extends from 370.5 to 2507.6 nm. (a) RGB (putative), (b) stretched RGB [3], (c) SWIR, and (d) SpE output.

Fig. 4. Comparison of SpE with other methods for an RGB+NIR fusion application. (a) RGB (putative), (b) NIR, (c) alpha-blend with α = 0.5, (d) max wavelet, (e) Lau et al., and (f) the SpE approach.

Fig. 5. RGB+NIR fusion application (data taken from http://ivrl.epfl.ch/research/infrared/skinsmoothing and used with permission). (a) RGB (putative), (b) result from [42], and (c) the SpE approach.

Fig. 6. Master of the Retable of the Reyes Católicos, Spanish, active 15th century. The Annunciation, late 15th century. Oil on wood panel, 60 3/8 × 37 in. The Fine Arts Museums of San Francisco, gift of the Samuel H. Kress Foundation, 61.44.2. (a), (b) Courtesy of Cultural Heritage Imaging and Fine Arts Museums of San Francisco. (a) RGB (putative), (b) NIR (1000 nm), (c) contrast mapping ansatz, and (d) SpE output.

Fig. 7. Example of thermal+RGB fusion; images taken from the OTCBVS dataset [43]. (a) Thermal (7–14 μm), (b) RGB (putative), and (c) SpE output.

Fig. 8. Synthetic RGBC image. (a) RGB, (b) clear, and (c) SpE.

Fig. 9. Visualization of a multilighting image of the Archimedes Palimpsest. (a) RGB, (b) ansatz, and (c) SpE.

Fig. 10. Application to mapping multispectral images from multi-illuminant capture setups. Parts (a) and (b) show images captured under illuminants I1 and I2, respectively, where the image under I1 is used to constrain output colors; the original images are taken from Lau et al. [29]. (a) RGB under I1 (putative), (b) RGB under I2, (c) Lau et al., and (d) SpE output.

Fig. 11. Application of our method to “dark flash” photography (input images taken from [30]).

Fig. 12. Visualization of 6D DTMRI data. (a), (b) PCA approach. (c), (d) MDS method.

Equations (24)

(1) \nabla C = \begin{bmatrix} C^1_{,x} & C^1_{,y} \\ \vdots & \vdots \\ C^N_{,x} & C^N_{,y} \end{bmatrix}

(2) m^2 = d^T (\nabla C)^T (\nabla C)\, d

(3) Z_C = \begin{pmatrix} \sum_k C^k_{,x} C^k_{,x} & \sum_k C^k_{,x} C^k_{,y} \\ \sum_k C^k_{,x} C^k_{,y} & \sum_k C^k_{,y} C^k_{,y} \end{pmatrix}

(4) \nabla C = U \Lambda V^T

(5) Z_C = (\nabla C)^T \nabla C = V \Lambda^T U^T U \Lambda V^T = V \Lambda^2 V^T

(6) \nabla R = \begin{pmatrix} R_{,x} & R_{,y} \\ G_{,x} & G_{,y} \\ B_{,x} & B_{,y} \end{pmatrix}

(7) Z_H = (\nabla H)^T (\nabla H), \quad Z_R = (\nabla R)^T (\nabla R), \quad \tilde{Z}_R = (\nabla \tilde{R})^T (\nabla \tilde{R})

(8) \nabla \tilde{R} = \tilde{U}_R \tilde{\Lambda}_R \tilde{V}_R^T, \quad \nabla H = U_H \Lambda_H V_H^T

(9) \nabla R \equiv \tilde{U}_R \Lambda_H V_H^T

(10) Z_R = (\nabla R)^T \nabla R = V_H \Lambda_H^2 V_H^T = Z_H

(11) \nabla R = \nabla \tilde{R}\, A

(12) Z_R \equiv Z_H \;\Rightarrow\; Z_R = (\nabla R)^T \nabla R = A^T (\nabla \tilde{R})^T (\nabla \tilde{R}) A \equiv Z_H \;\Rightarrow\; A^T \tilde{Z}_R A \equiv Z_H

(13) A = (\tilde{Z}_R^{1/2})^+ Z_H^{1/2}

(14) A^T \tilde{Z}_R A = \big(Z_H^{1/2} (\tilde{Z}_R^{1/2})^+\big)\, \tilde{Z}_R \,\big((\tilde{Z}_R^{1/2})^+ Z_H^{1/2}\big) = Z_H

(15) A = (\tilde{Z}_R^{1/2})^+ O\, Z_H^{1/2}, \quad O^T O = I_2

(16) A^T \tilde{Z}_R A = \big(Z_H^{1/2} O^T (\tilde{Z}_R^{1/2})^+\big)\, \tilde{Z}_R \,\big((\tilde{Z}_R^{1/2})^+ O Z_H^{1/2}\big) = Z_H

(17) \nabla R \simeq \nabla \tilde{R} \;\Leftrightarrow\; \nabla \tilde{R} A \simeq \nabla \tilde{R} \;\Leftrightarrow\; A \simeq I_2 \;\Leftrightarrow\; (\tilde{Z}_R^{1/2})^+ O Z_H^{1/2} \simeq I_2 \;\Leftrightarrow\; O Z_H^{1/2} \simeq \tilde{Z}_R^{1/2}

(18) \tilde{Z}_R^{1/2} (Z_H^{1/2})^T = D \Gamma E^T

(19) O = D E^T

(20) \nabla R_{\mathrm{ansatz}} \equiv \tilde{U}_R \Lambda_H V_H^T

(21) \nabla R_{\mathrm{ansatz}} = [\tilde{U}_R][\Lambda_H V_H^T], \quad \nabla R_{\mathrm{SpE}} = [\tilde{U}_R](\tilde{V}_R^T O V_H)[\Lambda_H V_H^T]

(22) \tilde{Z}_R^{1/2} = \tilde{V}_R \tilde{\Lambda}_R \tilde{V}_R^T \;\Rightarrow\; (\tilde{Z}_R^{1/2})^+ = \tilde{V}_R \tilde{\Lambda}_R^+ \tilde{V}_R^T

(23) \nabla R = \nabla \tilde{R} A = (\tilde{U}_R \tilde{\Lambda}_R \tilde{V}_R^T)\big[(\tilde{Z}_R^{1/2})^+ O Z_H^{1/2}\big] = [\tilde{U}_R](\tilde{\Lambda}_R \tilde{V}_R^T)(\tilde{V}_R \tilde{\Lambda}_R^+ \tilde{V}_R^T)\, O\, [V_H \Lambda_H V_H^T] = [\tilde{U}_R](\tilde{V}_R^T O V_H)[\Lambda_H V_H^T]

(24) R_i = \arg\min_I \| P_i - \nabla I \|_n
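The closed-form contrast mapping above is compact enough to sketch directly. The following NumPy snippet is an illustrative per-pixel sketch, not the authors' implementation; all function and variable names are our own. It builds the 2×2 structure tensors from the high-dimensional Jacobian and the putative RGB Jacobian, finds the orthogonal factor O by the Procrustes SVD of Z̃_R^{1/2}(Z_H^{1/2})^T, and forms A = (Z̃_R^{1/2})^+ O Z_H^{1/2}, so that the output Jacobian ∇R = ∇R̃ A reproduces the high-D structure tensor exactly:

```python
import numpy as np

def sqrtm_psd(Z):
    """Symmetric square root of a 2x2 positive-semidefinite matrix."""
    w, V = np.linalg.eigh(Z)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def spectral_edge_gradient(grad_H, grad_Rt):
    """Map the high-D structure tensor onto an RGB gradient at one pixel.

    grad_H  : (N, 2) Jacobian of the N-D input at this pixel
    grad_Rt : (3, 2) Jacobian of the putative RGB rendering at this pixel
    returns : (3, 2) output Jacobian whose structure tensor equals Z_H
    """
    Z_H = grad_H.T @ grad_H      # 2x2 target structure tensor
    Z_Rt = grad_Rt.T @ grad_Rt   # 2x2 structure tensor of putative RGB

    S_H = sqrtm_psd(Z_H)         # Z_H^{1/2}
    S_Rt = sqrtm_psd(Z_Rt)       # Z~_R^{1/2}

    # Orthogonal Procrustes step: O = D E^T from the SVD
    # Z~_R^{1/2} (Z_H^{1/2})^T = D Gamma E^T, keeping A close to I_2.
    D, _, Et = np.linalg.svd(S_Rt @ S_H.T)
    O = D @ Et

    # Constrained contrast map A = (Z~_R^{1/2})^+ O Z_H^{1/2},
    # which satisfies A^T Z~_R A = Z_H.
    A = np.linalg.pinv(S_Rt) @ O @ S_H
    return grad_Rt @ A
```

Applying this map at every pixel yields a target gradient field per output channel, which is then reintegrated by the least-squares step of the final equation above to produce the displayable image.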
