## Abstract

Huygens’ principle is thoroughly investigated under scalar theory. Rigorous expressions of Huygens’ principle must be independent of ∂*u*/∂*n*, and their boundaries can only be spherical or flat; three cases therefore follow. An extended version of Huygens’ principle is proposed to cover these cases, and its rigorous expressions are given in this paper. Specifically, when the radius of the spherical boundary approaches infinity, the corresponding expressions reduce to those for the flat boundary. The expressions with a spherical boundary scale the area and average intensity of the small-angle diffraction pattern proportionally, thus providing a promising mathematical tool for the design of curved imaging systems.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

The most fundamental physical fact about freely propagating light is that its space-time distribution is entirely determined by the light sources. Inspired by Hooke’s view that the wavefront originating from a point is spherical, Huygens bypassed the light sources and proposed a dynamic construction to depict the free propagation of light, now called Huygens’ principle, which can be stated as follows: treat the locus of the light disturbance at a definite instant $t_0$ as the primary wavefront, and regard each point on the primary wavefront as a secondary source emitting a spherical wave whose radius is proportional to $(t_1-t_0)$ at any subsequent instant $t_1$; the envelope of these spherical waves then constitutes the new wavefront [1–3]. In short, as Courant said, the propagation of the subsequent light is determined by the light at the boundary, not by the light inside [4].

In his book *Treatise on Light*, Huygens demonstrated his principle geometrically: draw arcs, each centered at a point on the primary wavefront; the common tangent of these arcs then forms the new wavefront. We find that Huygens’ geometric method requires that the primary wavefront not be arbitrary. For example, under vacuum conditions it can only be spherical or flat, as illustrated in Fig. 1.

Suppose the shape of the primary wavefront is arbitrary, e.g., a wavefront with bulges and depressions. The new wavefront constructed by Huygens’ geometric method is shown in Fig. 2. On the new wavefront, the bulges are proportionally enlarged, while the depressions are proportionally reduced until they disappear. The real physical scenario is different: after propagating far enough in vacuum, both the bulges and the depressions shrink until they ultimately vanish, and any wavefront becomes a perfectly smooth spherical or planar surface. Only if the primary wavefront is restricted to a flat or spherical surface does Huygens’ geometric method lead to the correct conclusion.

However, the original Huygens’ principle is only a geometrical theory; it thus has many limitations and cannot give quantitative analytical results [5]. Thanks to the electromagnetic wave nature of light identified by Maxwell, researchers realized that the free propagation of light can be represented by the scalar wave equation under vacuum conditions. Kirchhoff proposed the first wave-equation-based expression of Huygens’ principle, i.e., the generalized Kirchhoff theory (Eq. (1)) [6]:

Theoretically, Poincaré and Sommerfeld proved that $\partial u/\partial n$ can cause a mathematical paradox [3,7]. Besides, $\partial u/\partial n$ introduces an additional degree of freedom on $u( P_{0},t-r_{01}/v )$, turning the one-to-one correspondence between $u( P_{0},t-r_{01}/v )$ and $u(P_1,t)$ into a many-to-one correspondence. This is like many different objects generating the same image, whereas the one-to-one correspondence between object and image is one of the most important properties of imaging systems.

Technically, by placing detectors at the boundary one can at most obtain the amplitude and phase of $u$ [8,9]. To apply Eq. (1), detectors must also be placed outside the boundary to compute $\partial u/\partial n$; measuring the value at the boundary alone is thus not enough to calculate the value we care about. Besides, any numerical method for simulating electromagnetic wave propagation requires ansatz wave functions, i.e., planar or spherical wave functions, to obtain the value of $\partial u/\partial n$ at the boundary [10–13]. This means that the light field is predetermined not only at the boundary but throughout space. Moreover, any imaging system records only the $u$ value of the image, so $\partial u/\partial n$ is redundant. On the whole, the presence of $\partial u/\partial n$ fails to satisfy the requirement that the primary wavefront determine the subsequent light propagation.

These theoretical and technical issues indicate that a rigorous expression of Huygens’ principle should avoid the term $\partial u/\partial n$. However, over the past one hundred years, only Sommerfeld succeeded in removing it. By applying the method of images to Kirchhoff’s deduction [14], he obtained the general form of the Rayleigh-Sommerfeld diffraction formula (RSDF) [3]:

On the other hand, on the right side of these expressions the time variable $t$ always takes the form $(t-r_{01}/v)$, whose physical meaning is clear: the light at $P_1$ at instant $t$ is produced by the light at $P_0$ at instant $(t-r_{01}/v)$ after it propagates the distance $r_{01}$; owing to the constant speed of light, the time difference is $r_{01}/v$. In other words, light produced at $P_0$ at any instant other than $(t-r_{01}/v)$ makes no contribution to the light at $P_1$ at instant $t$. This effect is known in physics as the retarded potential. Mathematically, for all solutions of the wave equation the time variable must take the form $(t\pm r_{01}/v)$ [17], where $(t+r_{01}/v)$ represents the advanced potential in electromagnetics and is usually discarded because it conflicts both with experience and with elementary notions of macroscopic causality [18].
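The retarded form can be verified symbolically. A minimal SymPy sketch (our own check, using the spherically symmetric form of the 3-D wave equation) confirms that $u=f(t-r/v)/r$ satisfies the equation for an arbitrary waveform $f$; replacing $t-r/v$ by $t+r/v$ gives the advanced solution, which passes the same check.

```python
import sympy as sp

r, t, v = sp.symbols('r t v', positive=True)
f = sp.Function('f')

u = f(t - r/v)/r                       # retarded spherical wave
lhs = sp.diff(u, t, 2)                 # u_tt
rhs = v**2/r*sp.diff(r*u, r, 2)        # v^2 * Laplacian (spherical symmetry)
print(sp.simplify(lhs - rhs))          # 0
```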

The geometric construction of the RSDF is shown on the left of Fig. 1, while the right of Fig. 1 indicates that rigorous expressions of Huygens’ principle should also be valid for a spherical boundary. Coincidentally, J. Hadamard intuitively proposed such a version of Huygens’ principle, which describes the propagation of a given spherical light wave in free space [19]. The geometric construction of Hadamard’s version is illustrated in Fig. 3(a): the light produced by a point source $O$ first reaches surface $S_0$ at instant $t_0$ and then reaches surface $S_1$ at instant $t_1$. Similar to Courant’s opinion, Hadamard considers the physical essence of Huygens’ principle to be that the effect of the light from $O$ on $S_1$ can be replaced by the effect of $S_0$ on $S_1$. For convenience, the abbreviation HP for Huygens’ principle refers to Hadamard’s version in this paper.

Fascinatingly, the geometric constructions of the HP and the holographic principle are complementary with respect to the spherical surface (Fig. 3). The holographic principle states that all the information in a three-dimensional space is encoded on the two-dimensional surface wrapping it [20,21]. That is, the information at the boundary of a region has a one-to-one correspondence with the events inside it. Therefore, we can safely state the holographic principle of light (HPL): the light within a vacuum region surrounded by a spherical boundary has a one-to-one correspondence with the light at the boundary. A vacuum region in this paper means a region without light sources or matter, in which light propagates absolutely freely; correspondingly, the source region contains the light sources and matter.

The above discussion leads to the core concept of Huygens’ principle: the light on a certain boundary has a one-to-one correspondence with the light after free propagation. Therefore, the HPL should be regarded as a special case of Huygens’ principle. It is easy to transform the geometric constructions of the RSDF, the HP and the HPL into one algebraic problem:

In the first section of this paper, the rigorous expressions of cases (ii) and (iii) are derived by applying a set of compatible mathematical tools within the framework of scalar theory. In the second section, to give readers a more concrete understanding of the new expressions, we compare aperture diffraction between the RSDF and the HP. The results show that changing the receiving screen from flat to concave enlarges the area and the average intensity of the small-angle diffraction pattern by $M^2$ and $(M+1)^2/4$ times, respectively, while changing it from flat to convex reduces them by $M^2$ and $(1/M+1)^2/4$ times, respectively. These laws also offer a quantitative interpretation of the imaging of concave and convex mirrors from the perspective of wave optics. In the conclusion section, we integrate the RSDF, the HP and the HPL into an extended version of Huygens’ principle: if the boundary of a vacuum region is spherical or flat, the light in it has a one-to-one correspondence with the light on the boundary. Finally, compared with existing methods, the new expressions derived in this paper promise to be a more useful tool for the design of curved-surface imaging systems.

## 2. Solutions of the mathematical model of the Huygens’ principle

This paper mainly aims to derive the rigorous expressions of cases (ii) and (iii) by solving the wave equation. First, since any light disturbance at a point can be represented as a superposition of monochromatic components, the propagation problem for the light disturbance can be transformed into a stationary-state problem for the complex amplitude. Then, two new Green’s functions are proposed, based on the method of images, to solve the steady-state problems of the HP and the HPL, respectively. Finally, the desired expression pair is obtained via the clockwise-type Fourier transform.

#### 2.1 Complex amplitude and Fourier transform

The monochromatic light disturbance is in essence a simple harmonic wave, which at a point $P$ can be expressed as a sine or cosine function:

where $A$, $f$ and $\phi$ are the amplitude, frequency and initial phase, respectively. Substituting Euler’s formula into Eq. (4) and introducing a Hermitian function $U(P,f )= \textrm {i} A(P, f) \exp \left [ -\textrm {i} \phi (P,f) \right ]/2$, where $A(P,-f )=-A(P,f )$ and $\phi (P,-f )=-\phi (P,f )$, we have:

According to the linear superposition principle, if each clockwise rotation satisfies the wave equation, then $u$ satisfies it too. Replacing $u$ by $U\exp ( -\textrm {i}\omega t)$ in the wave equation yields the famous Helmholtz equation:

where $k$ is the wave number of the monochromatic light and $k=\omega /v$. Because $U(P,\omega )$ represents a complex scalar field distributed throughout the whole vacuum region, the Helmholtz equation actually governs the stationary-state problem. Given the compatibility between $U$ and $u$, once the complex-amplitude relationship between $P_0$ and $P_1$ is derived from the Helmholtz equation, the expressions of Huygens’ principle can be deduced by the clockwise-type Fourier transform. The goal hereafter is therefore to solve the Helmholtz equation.

#### 2.2 Green’s function and method of images

The analytical solution of the Helmholtz equation is usually obtained through Green’s theorem: if $U$ and $G$ are both single-valued and twice continuously differentiable functions of position in a simply connected domain $V$ with boundary $S$, then Green’s second identity holds:

Referring to the initial version of Huygens’ principle, in which each point on the wavefront emits a spherical wave, the Green’s function for Huygens’ principle should be expressed as:
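As a quick consistency check (ours, not the paper’s), the free-space spherical wavelet $\exp (\textrm {i}kr)/r$ can be verified symbolically to satisfy the Helmholtz equation away from the source point, using the radial Laplacian for spherically symmetric fields:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)
U = sp.exp(sp.I*k*r)/r                 # spherical wavelet exp(ikr)/r
lap = sp.diff(r*U, r, 2)/r             # Laplacian of a spherically symmetric field
print(sp.simplify(lap + k**2*U))       # 0, i.e. (lap + k^2) U = 0 for r > 0
```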

In geometry, for any two distinct points in space, the set of points whose distances to the two points are in a fixed ratio $M$ forms a surface, and the two points are known as mirror points about that surface. In particular, if $M$ equals one, the surface is an infinite plane; if not, it is a spherical surface (the Apollonius sphere). Sommerfeld employed the former; the latter is adopted in this paper.

To eliminate $\partial u/\partial n$, Sommerfeld added a negative image source to Kirchhoff’s Green’s function:
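The effect of the image term can be seen numerically: on the mirror plane the two distances coincide, so the Green’s function $G=\exp (\textrm {i}kr_{01})/r_{01}-\exp (\textrm {i}k\tilde {r}_{01})/\tilde {r}_{01}$ vanishes identically, which is what allows the $\partial u/\partial n$ term to be dropped. A small sketch with arbitrary illustrative coordinates:

```python
import numpy as np

k = 2*np.pi/500e-9                        # wave number for a 500 nm wave
P1 = np.array([0.3, -0.2, 0.7])           # observation point, z > 0
P1m = P1*np.array([1, 1, -1])             # image point mirrored across z = 0

rng = np.random.default_rng(2)
P0 = rng.uniform(-1, 1, size=(1000, 3))
P0[:, 2] = 0.0                            # sample points on the boundary plane

r1 = np.linalg.norm(P0 - P1, axis=1)
r2 = np.linalg.norm(P0 - P1m, axis=1)
G = np.exp(1j*k*r1)/r1 - np.exp(1j*k*r2)/r2
print(np.abs(G).max())                    # 0: G vanishes on the whole plane
```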

#### 2.3 Expression for the HP

Before giving the Green’s function, a spherical coordinate system is constructed in Fig. 4.

For the geometric construction of the HP, the vacuum region is $V_1$ and the source region is $V_2$. For any point $P_1(c,0,0)$, it is always possible to find its mirror point $P_2(a,0,0)$ about $S_0$, where $a$, $b$, $c$ satisfy $0<a<b<c$ and $ac=b^2$. Because the whole structure is rotationally symmetric, in what follows it is more convenient to display only one slice through the $Oz$ axis. Based on the properties of the Apollonius sphere, denoting the distance from $P_0$ to $P_2$ as $r_{02}$, it holds that [22]:
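The Apollonius relation can be confirmed numerically: with $ac=b^2$, the ratio $r_{01}/r_{02}$ equals $c/b$ for every $P_0$ on the sphere of radius $b$, consistent with a constant ratio $M>1$. A small sketch with illustrative values:

```python
import numpy as np

b, c = 2.0, 5.0
a = b**2/c                  # mirror point position: a*c = b^2, 0 < a < b < c
P1 = np.array([c, 0.0, 0.0])
P2 = np.array([a, 0.0, 0.0])

rng = np.random.default_rng(1)
u = rng.normal(size=(1000, 3))
P0 = b*u/np.linalg.norm(u, axis=1, keepdims=True)   # random points on S0

r01 = np.linalg.norm(P0 - P1, axis=1)
r02 = np.linalg.norm(P0 - P2, axis=1)
print((r01/r02).min(), (r01/r02).max())   # both equal c/b = 2.5
```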

Therefore, the $M$ in this paper is always larger than one. We can now give the dedicated Green’s function $G_1$ for the HP:

Substituting Eq. (7) and Eq. (14) into the left-hand side of Eq. (8) gives:

Eq. (19) is the solution of the Helmholtz equation and is thus a stationary-state expression. The solution of the wave equation can be obtained via the clockwise-type Fourier transform; substituting Eq. (19) into Eq. (6) leads to the following equation:

#### 2.4 Expression for the HPL

For the HPL, the source region is $V_1$ and the vacuum region is $V_2$. If $P_2$ is taken as the point to be calculated, then $u(P_2,t)$ can be derived in the same way. In this situation the coordinate system is unchanged, but the Green’s function changes to Eq. (22), which is illustrated in Fig. 7.

## 3. Aperture diffraction

Although diffraction and Huygens’ principle are two different physical concepts, they share one application scenario, viz. aperture diffraction. Investigating what new variations the HP and HPL expressions bring to aperture diffraction may help readers further understand Huygens’ principle.

For the RSDF, the common mode of aperture diffraction consists of a beam of light incident on an aperture in an infinite flat diffraction screen, with a flat receiving screen placed parallel to the diffraction screen to record the diffraction pattern. According to Kirchhoff’s boundary conditions, the opaque part of the diffraction screen must completely absorb the light touching it, while the light at the aperture remains unchanged. As a result, the mathematical model of aperture diffraction is equivalent to the combination of Huygens’ principle and Kirchhoff’s boundary conditions.
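To make this model concrete, the sketch below numerically evaluates the first Rayleigh-Sommerfeld integral (the standard textbook form, with Kirchhoff’s boundary conditions applied to a circular aperture under unit-amplitude normal plane-wave illumination) at an on-axis point, and compares it with the well-known closed-form on-axis result; all parameter values are illustrative.

```python
import numpy as np

lam = 500e-9                 # wavelength [m]
k = 2*np.pi/lam
a = 0.5e-3                   # aperture radius [m]
z = 0.2                      # on-axis observation distance [m]

# Midpoint-rule quadrature over the aperture radius; the full RS kernel
# exp(ikr) (1/r - ik) (z/r) / r is used, not the large-kr approximation.
N = 4000
rho = (np.arange(N) + 0.5)*a/N
r = np.sqrt(rho**2 + z**2)
kern = np.exp(1j*k*r)*(1/r - 1j*k)*(z/r)/r
U = (1/(2*np.pi))*np.sum(kern*2*np.pi*rho)*(a/N)

# Closed-form on-axis field behind a circular aperture
R = np.sqrt(z**2 + a**2)
U_exact = np.exp(1j*k*z) - (z/R)*np.exp(1j*k*R)
print(abs(U - U_exact))      # small quadrature error
```

Sweeping the observation point off axis in the same way yields the familiar Fresnel diffraction pattern of the aperture.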

For the HP and the HPL, the first difference relative to the RSDF is that the flat diffraction screen becomes spherical, and the Kirchhoff boundary conditions and the integration surface change accordingly. The diffraction screens of the RSDF and the HP are compared in Fig. 8. The second difference is the diffraction pattern. The patterns of greatest concern are those in the small-angle diffraction zone, e.g., the Fresnel zone and the Fraunhofer zone. Considering that the geometric constructions of the HP and the HPL are complementary, for convenience we compare only the small-angle aperture diffraction of the RSDF and the HP in what follows.

Figure 8 shows that $S_0$ and $S'_0$ are so close that the integral elements and bounds of the RSDF and the HP are almost identical. In addition, for the same light field, the disturbance distributions ($A$ and $\varphi$) on the aperture are also similar, which allows us to compare the integrands directly. We temporarily omit the coefficients of the integrands and consider a monochromatic component of the light:

Figure 9 shows that, for the RSDF, $r'_{01}$ can be expressed in polar parameters:

Given the curved diffraction screen, we intuitively introduce the third difference: the receiving screen is also curved so as to be parallel to the curved diffraction screen. For the HP the receiving screen is concave, while for the HPL it is convex. Figure 10 shows the coordinate system for the HP; $r_{01}$ can similarly be expressed in geodesic polar parameters:

It is now time to compare their coefficients. The coefficient of the RSDF can be written as:

while the coefficient of the HP is:

The result shows that the coefficient of the HP is $(M+1)/2$ times larger than that of the RSDF, which means the amplitude of the disturbance is magnified. This is expected, because the concave receiving screen is on average closer to the aperture than the flat one. Therefore, the average intensity of the small-angle diffraction of the HP is $(M+1)^2/4$ times larger than that of the RSDF.

A meaningful conclusion follows: for the HP, the concave receiving screen enlarges the area and the average intensity of the small-angle diffraction pattern by $M^2$ and $(M+1)^2/4$ times, respectively, relative to the flat screen; for the HPL, similar steps verify that the convex receiving screen reduces the area and the average intensity of the small-angle diffraction pattern by $M^2$ and $(1/M+1)^2/4$ times, respectively. Note that for the HPL, the center of the spherical diffraction screen must be placed behind the point $Z$; otherwise, no light from the aperture will reach the receiving screen, or there will be mechanical interference between the receiving screen and the spherical diffraction screen. These laws also offer a quantitative explanation of the imaging of concave and convex mirrors from the perspective of wave optics.

Moreover, if $b\rightarrow \infty$, $M$ will become one even without the small angle approximation. At this point, Eq. (28) will become Eq. (27), which means Eq. (21) will become Eq. (2). Similarly, when $b\rightarrow \infty$, Eq. (23) will become Eq. (2). This is consistent with our hypothesis in the introduction part.
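This limit can be confirmed symbolically. Assuming $M=c/b$ with a fixed gap $z=c-b$ between boundary and observation point (an illustrative parametrization, not taken from the paper’s equations), $M$ and both enlargement factors tend to one as $b\rightarrow \infty$:

```python
import sympy as sp

b, z = sp.symbols('b z', positive=True)
M = (b + z)/b                                 # assumed ratio M = c/b with c = b + z
print(sp.limit(M, b, sp.oo))                  # 1
print(sp.limit(M**2, b, sp.oo))               # 1: area factor -> 1
print(sp.limit((M + 1)**2/4, b, sp.oo))       # 1: intensity factor -> 1
```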

## 4. Conclusions and discussion

Given that Huygens’ principle was proposed to depict the free propagation of light, rigorous expressions based on the scalar wave equation are the best way to represent it. Notably, the concept of the wavefront actually belongs to geometrical optics and is applicable only to a single point source (for a flat wavefront, the point source is at infinity) [25,26]. During the deduction, however, we simply divide the whole space by $S_0$ into two parts, the source region and the vacuum region; in the source region, the number, location, shape, size and magnitude of the light sources are arbitrary. Thus, $S_0$ is just an interface. The general pictures of the HP, the RSDF and the HPL are illustrated in Fig. 11.

Figure 11 shows that, except for the radius of $S_0$, the three diagrams are basically the same. For example, if the radius $b\rightarrow \infty$, $S_0$ becomes an infinite plane and $M$ tends to one; both the HP and the HPL then reduce to the RSDF. Therefore, they can be combined into an extended version of Huygens’ principle: if the boundary of a vacuum region is spherical or flat, the light in it has a one-to-one correspondence with the light on the boundary, and the corresponding relationships are represented by Eq. (2), Eq. (21) and Eq. (23). Strictly speaking, the initial version of Huygens’ principle is a special case of this extended version for a single point source, i.e., if $S_0$ is a spherical wavefront, the point source is at its centre; if $S_0$ is a flat wavefront, the point source is at infinity.

From this new version, we can conclude that ideal 2-D display screens, images and sensors should all be flat or spherical. Specifically, the RSDF represents flat-surface imaging systems, while the HP and the HPL relate to curved-surface imaging systems. Owing to the limitations of lens imaging theory and production technology, most practical sensors, display screens and other imaging devices are flat or spherical; thus, the proposed theory can be widely adopted.

In addition, it is generally believed that curved sensors have inherent advantages over flat ones [27,28]; e.g., studies show that the retina of the vertebrate eye evolved from flat to hemispheric [29,30]. Nowadays, the Kepler telescope is equipped with a curved CCD detector, and groups such as Sony, Sarnoff and CEA-LETI have studied curved sensors for decades [31–35]. In the future, curved sensors are highly likely to replace flat ones; to this end, the matched design of the curved sensor and the lens system is indispensable. According to the published literature, three kinds of design methods appear to be available: 1) ray-tracing software based on geometrical optics, though there seems to be a clear divergence between simulated data and prototypes [36]; 2) Seidel aberration optimization based on Gaussian optics, whose effect on the off-axis part is limited [37,38]; 3) direct reference to the structural parameters of eyes [39]. In short, the current design approaches are all flawed. On the other hand, referring to the conclusions of the aperture-diffraction section, we can give a quantitative explanation, from the wave-optics perspective, of why a curved sensor is better than a flat one. Taking the lens of an imaging system as an example, its ideal imaging zone is the Fresnel zone [40]; the transverse area and the average intensity of this Fresnel zone can be magnified by adopting a concave sensor. In this way, the ideal image is enhanced and the off-axis aberrations are suppressed. The expressions for the HP and the HPL contain the position and curvature of the sensor, i.e., $a$, $b$ and $c$; therefore, they can replace the RSDF in the literature and automatically generate a set of mathematical tools for designing curved-surface imaging systems.

## Funding

National Natural Science Foundation of China (11605112, 60906053, 61204069, 61274118, 61306144).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **E. T. Whittaker, “The theory of the aether to the death of Newton,” in * A History of the Theories of Aether and Electricity-The Classical Theories*, (Thomas Nelson and Sons Ltd, 1951), pp. 23–28.

**2. **M. Born and E. Wolf, “Foundations of geometrical optics,” in * Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light*, (Pergamon, 1980), pp. 116–141, 7th ed.

**3. **J. W. Goodman, “Foundations of scalar diffraction theory,” in * Introduction to Fourier Optics*, (McGraw-Hill, 1996), pp. 32–55, 2nd ed.

**4. **R. Courant and D. Hilbert, “The generalized Huygens principle,” in * Methods of Mathematical Physics: Volume II*, (Wiley, 1989), pp. 735–736.

**5. **B. B. Baker and E. T. Copson, “The analytical representation of Huygens’ principle,” in * The Mathematical Theory of Huygens’ Principle*, (Oxford, 1939), pp. 3–4.

**6. **M. Born and E. Wolf, “Elements of the theory of diffraction,” in * Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light*, (Pergamon, 1980), pp. 412–514, 7th ed.

**7. **A. Sommerfeld, “The theory of diffraction,” in * Optics, Lecture on Theoretical Physics*, (Academic, 1954), pp. 179–266.

**8. **G. Koppelmann and M. Totzeck, “Diffraction near field of small phase objects: comparison of 3-cm wave measurements with moment-method calculations,” J. Opt. Soc. Am. A **8**(3), 554–558 (1991). [CrossRef]

**9. **M. Totzeck, “Validity of the scalar Kirchhoff and Rayleigh-Sommerfeld diffraction theories in near field of small phase object,” J. Opt. Soc. Am. A **8**(1), 27–32 (1991). [CrossRef]

**10. **D. W. Prather, M. S. Mirotznik, and J. N. Mait, “Boundary element method for vector modelling diffractive optical elements,” Proc. SPIE **2404**, 28–39 (1995). [CrossRef]

**11. **J. M. Bendickson, E. N. Glytsis, and T. K. Gaylord, “Scalar integral diffraction methods: unification, accuracy, and comparison with a rigorous boundary element method with application to diffractive cylindrical lenses,” J. Opt. Soc. Am. A **15**(7), 1822–1837 (1998). [CrossRef]

**12. **F. M. Kahnert, “Numerical methods in electromagnetic scattering theory,” J. Quant. Spectrosc. Radiat. Transfer **79-80**, 775–824 (2003). [CrossRef]

**13. **S. Teng, G. Li, C. Zhang, and D. Liu, “The diffraction by a small aperture,” Optik **124**(16), 2507–2510 (2013). [CrossRef]

**14. **A. Sommerfeld, “Introduction,” in * Mathematical Theory of Diffraction*, (Springer Science+Business Media, 2004), pp. 3–6.

**15. **A. Walther, “Waves in homogeneous,” in * The Ray and Wave Theory of Lenses*, (Cambridge University, 1995), pp. 125–165.

**16. **J. W. Goodman, “Fresnel and Fraunhofer diffraction,” in * Introduction to Fourier Optics*, (McGraw-Hill, 1996), pp. 63–95, 2nd ed.

**17. **R. Haberman, “Infinite space Green’s function for the three-dimensional wave equation (Huygens’ principle),” in * Applied Partial Differential Equations: With Fourier Series and Boundary Value Problems*, (Pearson Education, 2004), pp. 518–520, 4th ed.

**18. **J. A. Wheeler and R. P. Feynman, “Classical electrodynamics in terms of direct interparticle action,” Rev. Mod. Phys. **21**(3), 425–433 (1949). [CrossRef]

**19. **J. Hadamard, “The fundamental formula and the elementary solution,” in * Lectures on Cauchy’s Problem in Linear Partial Differential Equations*, (Yale University, 1923), pp. 53–57.

**20. **G. ’t Hooft, “Dimensional reduction in quantum gravity,” arXiv:gr-qc/9310026.

**21. **R. Bousso, “The holographic principle,” Rev. Mod. Phys. **74**(3), 825–874 (2002). [CrossRef]

**22. **R. Haberman, “Green’s function for time-independent problems,” in * Applied Partial Differential Equations: with Fourier Series and Boundary Value Problems*, (Pearson Education, 2004), pp. 380–443, 4th ed.

**23. **A. Sommerfeld, “Infinite domains and continuous spectra of eigen values. The condition of radiation,” in * Partial Differential Equations in Physics*, (Academic, 1949), pp. 188–200.

**24. **A. Sommerfeld, “A fundamental theorem of vector analysis,” in * Mechanics of Deformable Bodies, Lecture on Theoretical Physics*, (Academic, 1950), pp. 36–40.

**25. **G. R. Lemaitre, “Introduction to optics and elasticity,” in * Astronomical Optics and Elasticity Theory*, (Springer-Verlag Berlin Heidelberg, 2009), pp. 1–130.

**26. **W. T. Welford, “Rays and geometrical wavefront,” in * Aberrations of Optical Systems*, (Adam Hilger, 1986), pp. 10–12.

**27. **S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” Opt. Express **16**(7), 4965–4971 (2008). [CrossRef]

**28. **C. Gaschet, B. Chambion, S. Gétin, G. Moulin, A. Vandeneynde, S. Caplet, D. Henry, E. Hugot, W. Jahn, T. Behaghel, S. Lombardo, M. Roulet, E. Muslimov, and M. Ferrari, “Curved sensors for compact high-resolution wide field designs,” Proc. SPIE **10376**, 1037603 (2017). [CrossRef]

**29. **D. Nilsson and S. Pelger, “A pessimistic estimate of the time required for an eye to evolve,” Proc. R. Soc. London, Ser. B **256**(1345), 53–58 (1994). [CrossRef]

**30. **M. F. Land and D. Nilsson, “The origin of vision,” in * Animal Eyes*, (Oxford university, 2012), pp. 1–21, 2nd ed.

**31. **K. Itonaga, T. Arimura, K. Matsumoto, G. Kondo, K. Terahata, S. Makimoto, M. Baba, Y. Honda, S. Bori, T. Kai, K. Kasahara, M. Nagano, M. Kimura, Y. Kinoshita, E. Kishida, T. Baba, S. Baba, Y. Nomura, N. Tanabe, N. Kimizuka, Y. Matoba, T. Takachi, E. Takagi, T. Haruta, N. Ikebe, K. Matsuda, T. Niimi, T. Ezaki, and T. Hirayama, “A novel curved CMOS image sensor integrated with imaging system,” in * 2014 Symposium on VLSI Technology (VLSI-Technology): Digest of Technical Papers*, (2014), pp. 1–2.

**32. **P. Swain, D. Channin, G. Taylor, S. Lipp, and D. Mark, “Curved CCDs and their application with astronomical telescopes and stereo panoramic cameras,” in * Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V*, vol. 5301, N. Sampat, R. J. Motta, and M. M. Blouke, eds., International Society for Optics and Photonics (SPIE, 2004), pp. 109–129.

**33. **E. Hugot, W. Jahn, B. Chambion, L. Nikitushkina, C. Gaschet, D. Henry, S. Getin, G. Moulin, M. Ferrari, and Y. Gaeremynck, “Flexible focal plane arrays for UVOIR wide field instrumentation,” in * High Energy, Optical, and Infrared Detectors for Astronomy VII*, vol. 9915, A. D. Holland and J. Beletic, eds., International Society for Optics and Photonics (SPIE, 2016), pp. 514–522.

**34. **E. Hugot, S. Lombardo, T. Behaghel, B. Chambion, W. Jahn, C. Gaschet, S. Hugot, J. L. Gach, M. Ferrari, and D. Henry, “Curved sensors: experimental performance of CMOS prototypes and wide field related imagers,” in * International Conference on Space Optics-ICSO 2018*, vol. 11180, Z. Sodnik, N. Karafolas, and B. Cugny, eds., International Society for Optics and Photonics (SPIE, 2019), pp. 1127–1133.

**35. **B. Guenter, N. Joshi, R. Stoakley, A. Keefe, K. Geary, R. Freeman, J. Hundley, P. Patterson, D. Hammon, G. Herrera, E. Sherman, A. Nowak, R. Schubert, P. Brewer, L. Yang, R. Mott, and G. McKnight, “Highly curved image sensors: a practical approach for improved optical performance,” Opt. Express **25**(12), 13010–13023 (2017). [CrossRef]

**36. **H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes, J. Xiao, S. Wang, Y. Huang, and J. Roger, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature **454**(7205), 748–753 (2008). [CrossRef]

**37. **C. Gaschet, W. Jahn, B. Chambion, E. Hugot, T. Behaghel, S. Lombardo, S. Lemared, M. Ferrari, S. Caplet, S. Gétin, A. Vandeneynde, and D. Henry, “Methodology to design optical systems with curved sensors,” Appl. Opt. **58**(4), 973–978 (2019). [CrossRef]

**38. **F. Barrière, G. Druart, N. Guérineau, and J. Taboury, “Design strategies to simplify and miniaturize imaging systems,” Appl. Opt. **50**(6), 943–951 (2011). [CrossRef]

**39. **L. Gu, S. Poddar, Y. Lin, Z. Long, D. Zhang, Q. Zhang, L. Shu, X. Qiu, M. Kam, A. Javey, and Z. Fan, “A biomimetic eye with a hemispherical perovskite nanowire array retina,” Nature **581**(7808), 278–282 (2020). [CrossRef]

**40. **J. W. Goodman, “Wave-optics analysis of coherent optical systems,” in * Introduction to Fourier Optics*, (McGraw-Hill, 1996), pp. 96–120, 2nd ed.