
Virtual image determination for mirrored surfaces


Abstract

An object viewed via reflection from a mirrored surface is often perceived by the observer to be located behind the mirror’s surface. The image of this object behind the mirror is known as its virtual image. Conventional methods for determining the location and shape of a virtual image for non-planar mirrors are complex and impractical unless both the observer and object are near the optical axis. We have developed a technique designed to be simple and practical for determining the location of a virtual image in a non-planar mirror far from the optical axis. Results using this technique were compared with known results from geometric optics for an object point on the optical axis of a parabola and for an object point imaged off the optical axis of a spherical mirror. These results were also in agreement with experimental measurements for a hemispherical mirror viewed at large angles with respect to its optical axis. This technique has applications for display devices or imaging tools utilizing curved, mirrored surfaces.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Mirrored surfaces are used in a wide variety of applications, including anamorphic art [1], panoramic imaging tools [2], Pepper’s Ghost-based displays [3,4], and collimated flight simulator displays [5]. In all of these applications, objects are viewed through a mirrored surface. The image of this object when it appears in front of or behind the mirror is known as its real or virtual image, respectively. A proper method for determining the shape and location of such images allows mirrors and/or other components of an imaging system to be designed to produce a desired real or virtual image; however, there remains a lack of techniques suitable for this purpose. Though the ideas presented in this paper readily apply to both real and virtual images in mirrors, only the more common case of virtual images will be referred to for the sake of simplicity.

A common misconception about virtual images is that they are located at a fixed position in space, independent of the observer’s position. This misconception may come from the widespread use of the Gaussian optics formalism, which gives a fixed optical axis and a fixed image position for any given optical system. However, it is often observed that different positions on a lens can focus an object point to different image points, a phenomenon usually associated with optical aberrations.

The minimization of these optical aberrations represents the main body of work in optical engineering. In this field, a common practice is to choose an existing optical design as a starting point, and to optimize the materials, distances, and curvatures of components with ray-trace simulations [6,7]. This process usually aims at producing a high-quality image on a flat plane such as the surface of a CCD sensor.

This viewpoint is logical as long as the object and observer are near the optical axis of the mirror or lens; however, when these conditions are not met, more general methods are required. The problem lies in the fact that in such cases the object point is not imaged sharply at any point, and so it becomes incorrect to say that the object point is imaged at one particular location. What one would normally do is use geometric optics methods to determine the virtual locus of the principal centers of curvature of the reflected wavefront, the wave caustic, in order to estimate the imaged spot of an object. This is a rigorous and perfectly valid method, but it is difficult to interpret for the average display designer, and it does not account for the observer location. The technique described in this paper seeks to simplify the problem by describing a method to estimate a single spot where a human observer will perceive the image point to be. Doing this properly, however, requires understanding how humans perceive depth.

The human brain uses a wide variety of both psychological and physiological depth cues to determine the depth of an object. The depth cues affected by a mirror’s shape are the physiological cues, including accommodation, convergence, and binocular and monocular parallax. Accommodation arises from the need to bend or relax the lens of the eye in order to bring the image of an object into focus. The degree of force placed on the lens necessary to bring the image into focus provides the cue to the rough distance of the image from the eye. Accommodation contributes to perceived depth for objects closer than about 2 m from the observer [8]. The convergence cue comes from the need for the eyes to be directed at slightly different angles to see an image clearly. The angle between the eyes is used as a cue for depth, though this cue is only effective to distances of approximately 10 m [9]. The final two cues, binocular and monocular parallax, arise from viewing an object from at least two different viewpoints. Binocular parallax arises from the brain seeing an image from both eyes simultaneously, while monocular parallax cues come from the brain seeing an image from different locations at different times. In both cases, the brain is able to infer depth based on the relative movement of objects viewed from the two or more different locations. Binocular parallax is an effective depth cue up to a distance of about 20 m [10]. The strength of the monocular parallax cue depends on the degree of movement of the head, with movements of only a few millimeters sufficient to contribute to perceived depth [11].

For physical objects viewed directly, these cues will give depth estimates that are in agreement. Geometric optics shows, however, that a bundle of rays reflecting from a small portion of a curved mirrored surface has a non-simple divergence without rotational symmetry. The result is that different depth cues used by the brain to infer depth in these cases can provide conflicting results. For instance, the degree of bending of an eye’s lens necessary to see an object in focus can differ from where the observer perceives the image to be via stereoscopic cues, a problem sometimes referred to as the vergence-accommodation conflict [12] (a problem familiar to those working in the field of virtual reality). In another example, the perceived depth of a virtual image can change when an observer tilts their head [13].

The issue of conflicting depth cues can be dealt with by making some simple, practical assumptions. First, we assume that binocular parallax and horizontal motion parallax depth cues are the dominant cues used for depth perception. Often an observer is stationary, or only moves their head to the left and/or right, so this method for determining depth is all that is necessary for many situations. This method could be extended to include the accommodation cue and other parallax cues by properly estimating and properly weighting these cues together [14]. However, these estimates are often not needed and so this paper will focus on methods to determine the binocular and horizontal motion parallax cues.

2. Method

2.1 Existing methods for determining virtual surfaces

Existing methods for determining virtual surfaces for non-ideal mirrored surfaces are sparse and in some cases incorrect. In one instance, it was noted that there was a general agreement on an incorrect method for the virtual image in a conical mirror when reflecting an anamorphic image [15]. In that work, a proper method for determining the virtual image for such a mirror was detailed. However, even this technique was only applicable to a conical mirror viewed on its optical axis, and with the observer far from the conical mirror.

A major reason for the lack of existing methods is the limitation depicted in Fig. 1(a) and Fig. 1(b). In the example shown in this figure, it would be incorrect to say that the object point is imaged to one particular image point. Instead, a caustic is needed to describe the imaged point. Nevertheless, if one were to view a display reflecting from a large spherical mirror, one would still clearly see an image of the reflected display appearing behind the mirror.


Fig. 1 Rays from an object point reflecting from a spherical mirror. Reflected rays are back-propagated into the sphere, creating a virtual ray caustic indicated by dotted curves. In (a) and (b), rays from a distant and nearby object point, respectively, reflect from a spherical surface. In (c) and (d), two different observers perceive the virtual image point to be at two different locations lying on different portions of the ray caustic.


The fact that the image point is described by a caustic means that there is no singular virtual image for a given mirror and object; the image is dependent on the location of the observer. The method described in this paper seeks to provide an outline of how to determine the location and shape of the image this observer perceives.

2.2 Virtual image points in two dimensions

The method detailed in this paper can be used to determine a virtual image for an object reflecting from a mirror with a complex shape by assuming two things: first, it assumes a fixed observer location rather than providing a general solution, and second, it assumes binocular depth cues and motion parallax over small distances are the cues primarily used to perceive depth. With these two assumptions, it becomes valid to estimate a specific image point, as seen in Fig. 1(c) and Fig. 1(d). These estimates become even more valid as the observer is placed further from the mirror or, equivalently, the two eye points are brought closer together. Focusing on a particular observation location, and a particular portion of the mirror reflecting object light to the observer, can drastically simplify the problem of determining an image point.

To walk through a simple example, take an object that is viewed reflecting from a two-dimensional parabolic mirror governed by the equation y = ax². The thin lens equation is sufficient to determine the virtual image of this object when the observer and object are near the optical axis; however, when the observer views the object reflecting from the parabolic mirror at large angles relative to this axis, this formula breaks down. One can try to deal with this breakdown by accounting for a series of optical aberrations, but a much simpler approach is to take into account stereoscopic viewing from an observer at a fixed location. In this case, the light reflecting from the object to the viewer reflects only from a very small portion of the parabola. Therefore, one can treat the portion of the parabola from which the observer views the object as its own separate mirror in order to determine the virtual image point.

To clarify, every piece of the parabola can itself be thought of as its own small mirror, with its own curvature. In mathematics, the circle that best fits a curve at a point is known as the osculating circle. For a two-dimensional parameterized curve, the radius of this osculating circle, the radius of curvature, is:

$$R = \frac{\left((x')^2 + (y')^2\right)^{3/2}}{\left| x'\,y'' - y'\,x'' \right|} \tag{1}$$
where x = x(t) and y = y(t) are the Cartesian coordinates of a parametric curve with parameter t, and the primes denote differentiation with respect to t. For the parabola, the radius of curvature at any point is then:

$$R = \frac{\left(1 + 4a^2 x^2\right)^{3/2}}{2a} \tag{2}$$
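
As a quick numerical illustration (not part of the original paper; the values of a and t below are made up), Eqs. (1) and (2) can be checked against each other for the parabola y = ax², parameterized as x(t) = t, y(t) = at²:

```python
# Minimal sketch: evaluating Eq. (1) numerically for the parabola y = a*x^2
# (parameterized as x = t, y = a*t^2) and comparing with the closed form, Eq. (2).
import numpy as np

def osculating_radius(xp, yp, xpp, ypp):
    """Radius of curvature from first and second derivatives w.r.t. t (Eq. (1))."""
    return (xp**2 + yp**2) ** 1.5 / abs(xp * ypp - yp * xpp)

a = 0.25                 # y = a*x^2 has focal length 1/(4a) = 1 m, as in Fig. 3
t = 1.7                  # evaluate well off the optical axis, at x = 1.7 m

R_numeric = osculating_radius(xp=1.0, yp=2 * a * t, xpp=0.0, ypp=2 * a)
R_closed = (1 + 4 * a**2 * t**2) ** 1.5 / (2 * a)   # Eq. (2)

assert np.isclose(R_numeric, R_closed)
print(R_numeric)         # ~4.5 m here, compared with R = 1/(2a) = 2 m at the vertex
```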

Using Eq. (2), every infinitesimal portion of the parabola will be approximated as a circular mirror. What remains is to determine the image point for an object viewed by an observer through a circular mirror.

To determine this image point, we consider two nearly parallel rays starting from an object point and reflecting from a mirrored surface to an observer. The observer can be represented as a small aperture, with one of the rays, the primary ray, reflecting to the center of this aperture and the secondary ray reflecting to the edge of this small aperture. Back-propagating these rays into the mirror and solving for their intersection gives us the position where the object’s reflection appears to the observer. The solution of this intersection can be performed numerically (and this will be done in the following section), or it can be done by solving for the angle between these rays as they reflect from a spherical surface, which is a readily available result in geometric optics [16,17]. Though this result already exists, a derivation is shown here in a particular form designed to be useful for virtual surface computations.

A diagram of two such rays is shown in Fig. 2, where Do is the distance from the object point, O, to the mirror along the primary ray, θ is the angle between the primary ray and the surface’s normal vector $\hat{n}_1$, Di is the distance from the mirror surface to the virtual image point V, R is the radius of curvature of the osculating circle at the mirror surface, dβ is the angle between the two rays originating from the object, and dγ is the angle subtended at the center of the osculating circle between the points where the two rays reflect from the mirrored surface.


Fig. 2 Two light rays depicted reflecting from the osculating circle used to approximate the surface curvature of a mirror (a). In (b), a closer look at the relevant angles made between lines connecting the object point, virtual image point, and points of reflection on the mirror’s surface.


The total distance of the virtual image point from an observer, L, can be determined from the angle between these two reflected rays, 2dγ + dβ, and the distance between the rays at the observer, E (equal to the distance between two human eyes for stereoscopic depth):

$$L = \frac{E}{2\tan\!\left(d\gamma + \frac{d\beta}{2}\right)} \tag{3}$$

If the angle 2dγ + dβ is small, then a small-angle approximation on the tangent can be used. This is accurate to within 1% when 2dγ + dβ is less than about 0.35 radians (20°). For an observer determining distance based on their binocular vision, this angle corresponds to a distance of 0.19 m. For an observer viewing the two rays with a single eye (iris size approx. 4 mm), this corresponds to a distance of 0.012 m. For the following derivation, we will assume we are dealing with virtual image points that are further from the observer than these distances, and so we will be able to assume that the angle between the two rays in Fig. 2, 2dγ + dβ, is small.
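
As a rough check of the binocular case (assuming a typical interocular separation of roughly 67 mm, a value not stated explicitly in the text), Eq. (3) evaluated at the 0.35 rad limit gives:

$$L = \frac{E}{2\tan(0.175\ \text{rad})} \approx \frac{0.067\ \text{m}}{0.354} \approx 0.19\ \text{m}$$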

Taking the triangle made by the image point and the points of reflection of the two rays, making use of the law of sines, and using the fact that 2dγ + dβ is small (and consequently dγ and dβ individually) gives the following relation:

$$D_i = \frac{R\, d\gamma \cos\!\left(\theta + \tfrac{3}{2}\,d\gamma + d\beta\right)}{2\,d\gamma + d\beta} \tag{4}$$

Making use of the angle-addition formula for cosine and retaining only the first-order terms of the expansion:

$$D_i = \frac{R\, d\gamma \cos(\theta)}{2\,d\gamma + d\beta} \tag{5}$$

The triangle made by the object point and the two points of intersection of the two rays with the mirror has the properties shown in Fig. 2(b). Once again making use of the law of sines, the angle-addition property of cosines, and the assumption that dγ and dβ are small:

$$d\beta = \frac{R}{D_o}\, d\gamma \cos(\theta) \tag{6}$$

Keeping only the first order of the expansion, and combining Eq. (5) and Eq. (6) gives the final result:

$$D_i = \frac{D_o R \cos(\theta)}{2 D_o + R \cos(\theta)} \tag{7}$$

For an object and observer both near the optical axis (θ ≈ 0), this equation simplifies to the paraxial mirror equation:

$$\frac{1}{D_i} = \frac{2}{R} + \frac{1}{D_o} \tag{8}$$
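
The limiting behavior is easy to verify numerically. The short sketch below (illustrative values only, not from the paper) evaluates Eq. (7) at θ = 0, where it reproduces the paraxial result of Eq. (8), and at a large off-axis angle, where the two differ:

```python
# Minimal sketch: Eq. (7) for the image distance from a locally circular mirror
# patch, compared with the paraxial mirror equation, Eq. (8), at theta = 0.
import numpy as np

def image_distance(D_o, R, theta):
    """Virtual image distance D_i behind the mirror surface, Eq. (7)."""
    return D_o * R * np.cos(theta) / (2 * D_o + R * np.cos(theta))

D_o, R = 3.0, 2.0                                 # illustrative distances (m)
D_i_paraxial = 1.0 / (2.0 / R + 1.0 / D_o)        # Eq. (8)

print(image_distance(D_o, R, theta=0.0), D_i_paraxial)   # both 0.75 m
print(image_distance(D_o, R, theta=np.radians(40)))      # ~0.61 m off axis
```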

However, for objects that are large with respect to the focal length of the parabola, or when the viewer observes an object reflecting from the mirror far from the parabola’s optical axis, these results differ from those of the paraxial mirror equation. In Fig. 3, the results of a simulation of the reflection of a vertical line object from a 2D parabolic mirror, viewed from two different locations, are shown. Typical ray tracing centers on the paraxial domain, which consists of rays nearly parallel, and very close, to the optical axis. It is usually not emphasized that the position of the observer has a significant impact on the position of the virtual point V.


Fig. 3 Ray-tracing simulation depicting the virtual image of an object reflecting from a parabolic mirror with a focal length of 1 m. Near the optical axis, depicted in (a), the virtual image can be readily predicted with Gaussian optics (illustrated using traditional rays taught in introductory physics courses); however, observed far from the optical axis, depicted in (b), the virtual image deviates from these predictions.


2.3 Virtual image points in three dimensions

In two dimensions, the angle between two rays reflecting from a mirror can be used to trace back to a virtual image point. In three dimensions, the situation is more complicated. Two nearly parallel rays will diverge at different angles depending on the plane in which they intersect the mirror. For example, two rays that strike the mirror in the plane of the object and the mirror’s optical axis (known as the tangential plane) will diverge at different angles than two rays that strike the mirror in the plane orthogonal to this plane (known as the sagittal plane). The solution to simplify this problem is to determine which plane is primarily used to determine depth, and work out the angle of divergence of two rays striking the mirror in this plane. This can be done by first deriving the angle of divergence for two rays striking the mirror in an arbitrary plane, and then by determining the plane used for monocular parallax cues, assuming the observer moves horizontally along the line connecting the observer’s eyes (which, as already mentioned, will be consistent with the binocular parallax and convergence cues).

The equation for the distance from the mirror’s surface to the virtual image point, in the tangential plane, is equivalent to that found for rays in two dimensions:

$$D_i = \frac{D_o R \cos(\theta)}{2 D_o + R \cos(\theta)} \tag{9}$$

For three dimensions, the secondary ray in Fig. 2 is not necessarily in the same plane as the mirror’s normal vector and the primary ray. In the plane of the primary and secondary rays from the object, the primary ray makes an angle α with the mirror’s normal vector (not to be confused with the angle θ, which is the angle between the primary ray and normal vector specifically in the tangential plane).

One can use the results from Eq. (9) to determine the virtual image depth in this plane, provided the following substitutions are made:

$$\cos(\theta) \rightarrow \cos(\alpha) \tag{10}$$
$$D_o \rightarrow D_o\,\frac{\cos(\theta)}{\cos(\alpha)} \tag{11}$$
$$D_i \rightarrow D_i\,\frac{\cos(\theta)}{\cos(\alpha)} \tag{12}$$

Incorporating these substitutions yields the following result:

$$D_i = \frac{D_o R \cos^2(\alpha)}{2 D_o \cos(\theta) + R \cos^2(\alpha)} \tag{13}$$

Setting α equal to θ, and α equal to zero gives the results for the virtual image depth in the tangential and sagittal planes, respectively. These planes and the corresponding lengths and angles are depicted in Fig. 4.


Fig. 4 Primary ray reflecting from a mirrored surface to an observer, depicted in the tangential plane, sagittal plane, and an arbitrary plane

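The following sketch (illustrative values only, not from the paper) evaluates Eq. (13) in the tangential plane, where it reduces to Eq. (9), and in the sagittal plane, showing the astigmatic pair of image distances for a single object point:

```python
# Minimal sketch: Eq. (13) evaluated in the tangential (alpha = theta) and
# sagittal (alpha = 0) planes for one off-axis reflection.
import numpy as np

def image_distance_3d(D_o, R, theta, alpha):
    """Virtual image distance D_i along the primary ray, Eq. (13)."""
    return D_o * R * np.cos(alpha)**2 / (2 * D_o * np.cos(theta) + R * np.cos(alpha)**2)

D_o, R, theta = 3.0, 2.0, np.radians(40)          # illustrative values
D_tangential = image_distance_3d(D_o, R, theta, alpha=theta)   # reduces to Eq. (9)
D_sagittal = image_distance_3d(D_o, R, theta, alpha=0.0)
print(D_tangential, D_sagittal)                   # ~0.61 m vs ~0.91 m
```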

The final step to determine the virtual image point location is to determine the plane the eyes use to interpret depth via parallax. This can be done by projecting the interocular vector (a unit vector directed from one eye to the other), $\hat{E}$, onto the plane tangent to the mirror’s surface. The resulting vector, $\hat{e}$, is:

$$\hat{e} = \frac{(\hat{E}\cdot\hat{u}_1)\,\hat{u}_1 + (\hat{E}\cdot\hat{u}_2)\,\hat{u}_2}{\sqrt{(\hat{E}\cdot\hat{u}_1)^2 + (\hat{E}\cdot\hat{u}_2)^2}} \tag{14}$$

The angle α is:

$$\alpha = \tan^{-1}\!\left(\frac{\vec{D}_o \cdot \hat{e}}{D_o \cos(\theta)}\right) \tag{15}$$
where $\hat{u}_1$ and $\hat{u}_2$ are unit vectors that define the plane tangent to the mirror’s surface. Together, the mirror’s normal vector and $\hat{e}$ define the plane primarily used for perceiving depth via binocular parallax, convergence, and monocular parallax cues when the observer moves their head horizontally.
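
A minimal sketch of the projection in Eq. (14) is shown below; the tangent-plane vectors $\hat{u}_1$, $\hat{u}_2$ and the interocular direction are made-up inputs for illustration:

```python
# Minimal sketch: projecting the interocular direction onto the tangent plane
# of the mirror at the reflection point, Eq. (14). u1 and u2 must be two
# orthonormal vectors spanning that tangent plane.
import numpy as np

def project_interocular(E_hat, u1, u2):
    """Unit vector e_hat in the tangent plane along which parallax is judged."""
    c1, c2 = np.dot(E_hat, u1), np.dot(E_hat, u2)
    return (c1 * u1 + c2 * u2) / np.hypot(c1, c2)

E_hat = np.array([1.0, 0.0, 0.0])                  # horizontal interocular direction
u1 = np.array([0.0, np.cos(0.3), np.sin(0.3)])     # tangent plane tilted about x
u2 = np.array([1.0, 0.0, 0.0])
print(project_interocular(E_hat, u1, u2))          # [1, 0, 0]: already in the plane
```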

The virtual image point can then be found using these equations by partitioning the mirror into pieces, and determining the orientation and radii of curvature of these partitions. An example calculation with experimental verification will next be described for a hemisphere, but in principle this technique could be extended to an arbitrarily curved surface. This requires determining the effective radius of curvature along different planes of the mirror’s surface and then replacing R in Eq. (13) with a radius of curvature that depends on the relevant plane, R(α).

An alternative way to write Eq. (13) is in terms of the object distance:

$$D_o = \frac{D_i R \cos^2(\alpha)}{R \cos^2(\alpha) - 2 D_i \cos(\theta)} \tag{16}$$

This formula could be applied when designing a front or rear projection screen in a collimated display. In these displays, an observer views a curved front or rear projection screen reflecting from a spherical mirror. Given a desired virtual image, and consequently the image distance, Di, one can use Eq. (16) to solve for the object distance from the mirror to the projection screen, Do. A projection screen can hence be shaped to produce a virtual image suited to a specific application of the display.
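
Since Eq. (16) is simply Eq. (13) solved for the object distance, a designer can verify a layout by round-tripping between the two, as sketched below with made-up values loosely based on the hemisphere used later in this paper:

```python
# Minimal sketch: recovering the object (screen) distance that produces a
# target image distance, Eq. (16), and checking consistency with Eq. (13).
import numpy as np

def image_distance_3d(D_o, R, theta, alpha):       # Eq. (13)
    return D_o * R * np.cos(alpha)**2 / (2 * D_o * np.cos(theta) + R * np.cos(alpha)**2)

def object_distance(D_i, R, theta, alpha):         # Eq. (16)
    return D_i * R * np.cos(alpha)**2 / (R * np.cos(alpha)**2 - 2 * D_i * np.cos(theta))

R, theta, alpha = 0.229, np.radians(30), np.radians(30)   # ~18" hemisphere patch
D_o = 0.40
D_i = image_distance_3d(D_o, R, theta, alpha)
assert np.isclose(object_distance(D_i, R, theta, alpha), D_o)
print(D_i)                                          # ~0.08 m behind the surface
```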

These results were checked with known results from geometric optics. First, for an object point on the optical axis of a paraboloid, one finds the image points to be equivalent to those found using the paraxial mirror equation by setting θ (and consequently α) to 0. Second, the two image points of a point object viewed reflected from a spherical mirrored surface in the tangential and sagittal planes can be found by setting α equal to 0 and θ, respectively. These results are equivalent to the astigmatic image equation for spherical mirrors found using equivalent geometric optics derivations [18].

3. Virtual surface of a hemisphere

The virtual surface for a planar display, viewed by an observer reflecting from a hemisphere, was determined using two techniques: one based on the equations derived in the previous section, and one based on numerical ray tracing in Zemax.

The first process involved finding the point of reflection on the mirror for a given observer position and a given object point. This was done by iteratively minimizing the residual from the law of reflection using the normal vector to the mirror at this point, the ray from the object to this point, and the ray from this point to the observer. The point of reflection on the mirror was then modelled as a small mirror with an optical axis parallel to its normal vector. The angle between this axis and the primary ray was θ. Utilizing a spherical coordinate system, unit vectors along the polar and azimuthal directions were used for $\hat{u}_1$ and $\hat{u}_2$, respectively. Using Eq. (13), the value of Di was determined, and the primary ray was then traced from the observer and past the mirror’s surface a distance Di to determine the virtual image point location. This process was done for every pixel on a planar screen to construct the corresponding virtual image, depicted in red in Fig. 5.
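
A minimal sketch of this process is given below. It is not the authors' implementation: the hemisphere parameterization, the Nelder-Mead search used to zero the law-of-reflection residual, and all coordinates are assumptions made for illustration, and for brevity only the tangential-plane distance (α = θ) is evaluated rather than the α given by Eq. (15).

```python
# Minimal sketch: find the reflection point on a hemispherical mirror for one
# object point and one eye point by minimizing a law-of-reflection residual,
# then place the virtual image point a distance D_i past the surface along the
# back-propagated primary ray.
import numpy as np
from scipy.optimize import minimize

R_sphere = 0.229                                   # hemisphere radius (m), ~18" diameter
obj = np.array([0.05, 0.10, 0.30])                 # object (display pixel) position
eye = np.array([0.00, 0.05, 0.686])                # observer position

def surface_point(angles):
    polar, azim = angles                           # spherical parameterization
    return R_sphere * np.array([np.sin(polar) * np.cos(azim),
                                np.cos(polar),
                                np.sin(polar) * np.sin(azim)])

def residual(angles):
    p = surface_point(angles)
    n = p / np.linalg.norm(p)                      # outward normal of the sphere
    d_in = (p - obj) / np.linalg.norm(p - obj)     # object -> surface direction
    d_ref = d_in - 2 * np.dot(d_in, n) * n         # specularly reflected direction
    d_out = (eye - p) / np.linalg.norm(eye - p)    # surface -> observer direction
    return 1.0 - np.dot(d_ref, d_out)              # zero when the law of reflection holds

sol = minimize(residual, x0=np.array([0.5, 1.0]), method="Nelder-Mead")
p = surface_point(sol.x)
n = p / np.linalg.norm(p)
D_o = np.linalg.norm(obj - p)
theta = np.arccos(abs(np.dot((obj - p) / D_o, n)))

# tangential-plane image distance: Eq. (13) with alpha = theta (i.e. Eq. (9))
D_i = D_o * R_sphere * np.cos(theta) / (2 * D_o + R_sphere * np.cos(theta))

view_dir = (p - eye) / np.linalg.norm(p - eye)     # primary ray, eye -> surface
virtual_point = p + D_i * view_dir                 # back-propagated past the surface
print(virtual_point)
```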


Fig. 5 Virtual surface of a planar screen reflecting from a hemispherical mirror. In blue, virtual image points calculated based on ray-tracing performed in Zemax. In red, simulated virtual image points based on the semi-analytical ray-tracing method. Points where a solution was not found in Zemax are not shown.


The second process was performed in the optical design program Zemax. This process involved defining two small apertures to represent two viewpoints for an observer, separated by 54 mm, a distance chosen to match the experimental setup described in the following section (and which also approximates the interocular distance between human eyes). Primary rays were defined with respect to the left eye, ranging in horizontal angles from −12° to 12° in increments of 0.4°, and vertical angles ranging from −10° to 4° in increments of 0.5°, for a total of 1,769 rays. Depth measurements were obtained using secondary rays, originating from the right eye, that were initially set parallel to their corresponding primary ray. A gaze direction was defined for each eye point to be directed along the z-direction, and the gazing plane of the eyes was defined to lie on the xz-plane (Fig. 5 depicts this coordinate system). The ray direction of the secondary ray was then iterated until the secondary and primary rays converged to a point on the object screen. The difference in angle between these rays in the gazing plane is known as the convergence angle, while the angle between the two eyes in the orthogonal plane is known as the dipvergence angle. The convergence angle was used to triangulate the depth relative to the observer using the following equation [19]:

$$L = \frac{B}{\tan(\theta + \phi) - \tan(\theta)} \tag{17}$$
where L is the distance of the object point from the left eye, B is the distance between the two eye points, θ is the angle made between gaze direction and the primary ray in the gazing plane, and φ is the convergence angle between the two rays. This virtual image is also depicted in Fig. 5, in blue.
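
For reference, Eq. (17) is straightforward to apply; the sketch below uses the 54 mm eye separation from this setup and made-up angles:

```python
# Minimal sketch: depth from the convergence angle between the left- and
# right-eye rays in the gazing plane, Eq. (17).
import numpy as np

def depth_from_convergence(B, theta, phi):
    """Distance L of the imaged point from the left eye point (Eq. (17))."""
    return B / (np.tan(theta + phi) - np.tan(theta))

# 54 mm eye separation, primary ray 5 deg off the gaze direction, 0.5 deg convergence
print(depth_from_convergence(B=0.054, theta=np.radians(5.0), phi=np.radians(0.5)))  # ~6.1 m
```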

For a quantitative comparison, these techniques were repeated for select object points. These results are shown in Fig. 6. The mean distance between corresponding points using the two techniques was about 600 µm, indicating that both methods provide nearly identical results. The difference between these two techniques lies primarily in how they can be applied. The semi-analytical ray-tracing technique described in the previous section lends itself well to optimization algorithms and optical design for specific virtual surfaces when the analytical properties of the surface are known, while numerical ray tracing provides a method to model a virtual surface for more difficult mirrored surfaces, provided they can be modeled in a program such as Zemax.


Fig. 6 Object points, depicted by green squares, create virtual image points when viewed reflected from a spherical mirror. Virtual image points determined using a semi-analytical ray tracing solution are shown by small red points. Nearly identical virtual image points, depicted by blue circles, were found using numerical ray-tracing performed in Zemax.


4. Experimental verification

These results were then verified experimentally by measuring the depth of image points reflecting from a mirrored surface using two cameras. A silver-coated acrylic hemisphere, 18” in diameter, was used as the hemispherical mirror. A 55” LG OLED HD television was placed 15 mm above this hemisphere, and two Point Grey BlackFly cameras were mounted on an optics table 686 mm from the center of the hemisphere. A custom 3D-printed part was used to keep the cameras at a fixed separation of 54 mm and to ensure the cameras were parallel to one another.

The distances of objects relative to the two cameras were determined via triangulation, based on a technique often used in computer vision [20], depicted in Fig. 7. Based on the distance between the two cameras, B, the focal lengths of the cameras, fL and fR, and the measured locations of the object in the two images, xL and xR (with respect to their respective origins OL and OR), the depths were determined using the following formulas:


Fig. 7 Stereoscopic measurements for pinhole cameras.


$$\frac{x_L}{f_L} = \frac{B_L}{L}, \qquad \frac{x_R}{f_R} = \frac{B_R}{L} \tag{18}$$
$$B_L - B_R = B = L\left(\frac{x_R}{f_R} - \frac{x_L}{f_L}\right) \tag{19}$$
$$L = B\left(\frac{x_R}{f_R} - \frac{x_L}{f_L}\right)^{-1} \tag{20}$$
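
A minimal sketch of this triangulation is given below; the focal length, pixel pitch, and camera separation are those of the calibration described next, the pixel disparities are made up, and the sign convention follows Eq. (20):

```python
# Minimal sketch: depth from the pixel disparity between the two camera images,
# Eq. (20), after converting pixel coordinates to metric sensor coordinates.
def depth_from_disparity(x_L_px, x_R_px, f=8.8e-3, pitch=2.8e-6, B=0.054):
    x_L, x_R = x_L_px * pitch, x_R_px * pitch      # pixel -> metres on the sensor
    return B / (x_R / f - x_L / f)                 # Eq. (20)

print(depth_from_disparity(x_L_px=-120, x_R_px=120))   # ~0.71 m
```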

The cameras’ focal lengths and orientations were calibrated in order to be able to convert a pixel location in an image to an angle from the camera’s optical axis. The focal lengths of the cameras were calibrated by taking an image of graph paper marked with a grid of evenly spaced lines, centered between the two cameras at a distance of 432 mm. The spacing between the grid lines was 10 mm. The spacing between these grid lines in the captured images was found to be 73 +/− 1 pixels for each camera. The CCD pixel pitch in each of these cameras was 2.8 µm. Using Eq. (18), the focal lengths for both cameras were then determined to be 8.8 +/− 0.1 mm. A point at a height of 84.5 mm and at equal distance from both cameras was marked with a pen. Photos were then taken with the cameras, and the image of this point was measured to be a few pixels below and above the center of the images in the photo, indicating a slight vertical offset in the orientations of the cameras, equal to 1.6° below the horizontal for the left camera and 1.2° below the horizontal for the right camera.
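
For reference, the similar-triangles relation behind this focal-length calibration works out to:

$$f \approx \frac{(73\ \text{px})(2.8\ \mu\text{m/px})}{10\ \text{mm}} \times 432\ \text{mm} \approx 8.8\ \text{mm}$$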

To complete the calibration, an image was taken of an array of evenly spaced points at known locations, and stereoscopic depth measurements were determined based on these pictures to ensure the accuracy of the measurement process. The pixel locations of these points in the images were determined using Microsoft Paint. The depth and locations of these points were then determined using triangulation based on Eq. (20). A slight error in the depth measurements was found that increased radially with distance from the center of the image. The physical locations of these points were measured, and the differences between the calculated and measured locations for each point were squared and added together. Radial distortion coefficients were determined by minimizing this sum of squared differences, and a radial distortion based on the Brown distortion model [21] was found to minimize this error. This lens distortion correction was applied to all subsequent measurements:

$$r' = r\left(1 + a r^2 + b r^4 + c r^6 + d r^8\right) \tag{21}$$
where r is the actual radial distance from the center of the sensor to a given pixel, in units of pixels and r' is the expected radial distance. A least squares regression was applied to find the parameters a, b, c, and d using Excel’s Gsolver add-in. These values were fitted for the left and right cameras separately and were found to be equal to the values given in Table 1:


Table 1. Radial distortion coefficients for camera calibration
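
A minimal sketch of this fit is shown below; the calibration-point radii are placeholders (the real values come from the measured grid images), the radii are normalized to avoid ill-conditioning of the high-order terms, and scipy is used in place of the Excel solver mentioned above:

```python
# Minimal sketch: least-squares fit of the Brown radial-distortion coefficients
# of Eq. (21) from measured vs. expected radial distances of calibration points.
import numpy as np
from scipy.optimize import least_squares

r_measured = np.array([150.0, 420.0, 760.0, 1100.0]) / 1000.0   # placeholder data, normalized
r_expected = np.array([150.2, 421.5, 764.0, 1108.0]) / 1000.0

def residuals(coeffs):
    a, b, c, d = coeffs
    r = r_measured
    r_model = r * (1 + a * r**2 + b * r**4 + c * r**6 + d * r**8)   # Eq. (21)
    return r_model - r_expected

fit = least_squares(residuals, x0=np.zeros(4))
print(fit.x)   # fitted a, b, c, d (in normalized-radius units) for one camera
```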

Next, a picture composed of a black background and a grid of single green pixels was displayed on the OLED screen. These points were set to be approximately evenly spaced when viewed reflecting from the spherical surface. A picture of this screen reflecting from the hemispherical mirror was taken by each camera. The picture taken from one of these cameras is shown in Fig. 8.


Fig. 8 Image of display screen reflecting from the spherical mirror in a dark room (a) and in a well-lit room (b). The display screen is displaying a black image with an array of lighted, green pixels. The contrast was increased by the same amount in both pictures to enhance the visibility of the pixels.


The pixel location of each of these points was measured by hand using Microsoft Paint. The error in determining the location of each pixel was determined to be +/− one pixel, based on repeating the photograph and measurement procedure five different times. The previously described lens distortion correction was applied and, based on these pixel locations, the depth of each virtual pixel was determined using Eq. (20). The x and y locations were determined via triangulation. Finally, a coordinate transformation was performed to correct for the slight vertical misalignment of the cameras. The results are depicted in Fig. 9.


Fig. 9 Side (a) and front (b) views of measured virtual surface points (purple) along with corresponding simulated virtual surface points (red). Only a portion of the hemispherical mirror is shown for clarity.


The uncertainty in the experimental measurements was primarily due to three factors. The first was the uncertainty in determining the pixel location of every point in the image. Based on repeated photographs and measurements, this was done with a precision of +/− 1 pixel. The resulting error in depth calculations associated with one pixel was +/− 2 mm. The second major source of error in the measurements was in determining the location of the aperture of the camera with respect to the center of the hemisphere, which contributed an additional error of +/− 1 mm. Finally, there was error associated with the physical location of the lighted pixels on the OLED screen. The error in determining the exact position of the physical pixels resulted in an additional uncertainty in the measurement of +/− 2 mm. The total estimated error in the data was determined by adding these three errors in quadrature, and was equal to +/− 3 mm.
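
Explicitly, the quadrature sum of the three contributions is:

$$\sigma_{\text{total}} = \sqrt{(2\ \text{mm})^2 + (1\ \text{mm})^2 + (2\ \text{mm})^2} = 3\ \text{mm}$$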

The chi-square statistic between the experimentally measured virtual image point locations and the numerically calculated image point locations based on this estimated error was 227 for the 238 data points, with a corresponding p-value of 0.83. This indicates the results of the semi-analytical model presented in this paper were consistent with the experimental measurements. As the experimental error was much greater than even the largest deviations between the two virtual images depicted in Fig. 6, no chi-square test was deemed necessary between the virtual image determined using Zemax and the experimental data.

5. Conclusion

We have described a method that can be used to determine the virtual image of an object viewed via reflection from a mirrored surface. This method relies on the idea that in many cases, an observer views object light reflected from only a small portion of the mirror. By focusing on only this small portion of the mirror, an object that would otherwise be described by a complex wave caustic appears as a sharp image point to the observer.

This idea was implemented in two ways: a semi-analytical ray-tracing technique suitable for fast calculations and virtual image shape optimization, and a numerical ray-tracing technique performed using Zemax. These results were compared with experimental measurements of an OLED display viewed reflected from a hemispherical mirror, and found to be consistent with a p-value of 0.83.

The ideas outlined in this article can be used to improve the design of mirror-based displays and imaging technologies, as well as to shed light on virtual images themselves.

References and links

1. J. Baltrušaitis, Anamorphic Art (Harry N. Abrams, Inc. Publishers, 1977).

2. O. Faugeras, Panoramic Vision: Sensors, Theory, and Applications, Ryad Benosman, and Sing Bing Kang, eds. (Springer Science & Business Media, 2013).

3. D. Zhao, B. Su, G. Chen, and H. Liao, “360 degree viewable floating autostereoscopic display using integral photography and multiple semitransparent mirrors,” Opt. Express 23(8), 9812–9823 (2015). [CrossRef]   [PubMed]  

4. G. Chen, C. Ma, D. Zhao, Z. Fan, and H. Liao, “Crosstalk-free 360-Degree Viewable 3D Display based on Pyramidal Mirrors and Diaphragms,” in Digital Holography and Three Dimensional Imaging (Optical Society of America, 2015), paper DW3A–7.

5. “Flight simulator visual display system.” U.S. Patent 3,904,289, issued September 9, 1975.

6. W. J. Smith, Modern Optical Engineering (McGraw-Hill, 2008), Chap. 16.

7. R. E. Fischer, Optical System Design (McGraw-Hill, 2000), Chap. 9.

8. M. Teittinen, “Depth Cues in the Human Visual System,” http://www.hitl.washington.edu/research/knowledge_base/virtual-worlds/EVE/III.A.1.c.DepthCues.html

9. T. Okoshi, Three-dimensional imaging techniques (Elsevier, 2012).

10. C. A. Levin and R. N. Haber, “Visual angle as a determinant of perceived interobject distance,” Percept. Psychophys. 54(2), 250–259 (1993). [CrossRef]   [PubMed]  

11. S. J. Watt, K. Akeley, M. O. Ernst, and M. S. Banks, “Focus cues affect perceived depth,” J. Vision 5(10), 834–862 (2005). [CrossRef]   [PubMed]  

12. D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks, “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue,” J. Vision 8(3), 1–30 (2008). [CrossRef]   [PubMed]  

13. J. L. Hunt, B. G. Nickel, and C. Gigault, “Anamorphic Images,” Am. J. Phys. 68(3), 232–237 (2000). [CrossRef]  

14. M. O. Ernst and M. S. Banks, “Humans integrate visual and haptic information in a statistically optimal fashion,” Nature 415(6870), 429–433 (2002). [CrossRef]   [PubMed]  

15. D. Murra and P. Di Lazzaro, “Analytical treatment and experiments of the virtual image of cone mirrors,” Appl. Phys. B 117(1), 145–150 (2014). [CrossRef]  

16. G. Monk, Light: Principles and Experiments, (McGraw Hill Book Company Inc., 1937) Appendix III.

17. M. Herzberger, Modern Geometrical Optics, (Robert E. Krieger Publishing Company, 1980), Chap. 2.

18. F. Jenkins and H. White, Fundamentals of Optics, 4th Edition (McGraw-Hill, 1976), Chap. 6.

19. J. Mrovlje and D. Vrančić, “Distance measuring based on stereoscopic pictures,” In Proceedings of the 9th International PhD Workshop on Systems and Control, M. Gašperin and B. Pregelj, ed. (Institut Jožef Stefan, 2008), pp.1–6.

20. E. Trucco and A. Verri, Introductory Techniques for 3-D computer vision (Prentice Hall, 1998).

21. J. G. Fryer and D. C. Brown, “Lens distortion for close-range photogrammetry,” Photogramm. Eng. Remote Sensing 52(1), 51–58 (1986).
