
Quantum imaging and metrology with undetected photons: tutorial

Open Access

Abstract

We present a tutorial on the phenomenon of induced coherence without induced emission, and specifically its application to imaging and metrology. It is based on a striking effect where two nonlinear crystals, by sharing a coherent pump and one or two output beams, can induce coherence between the other two output beams. This can be thought of as a type of quantum-erasure effect, where the “welcher-weg” (which-way), or in this case, “which-source,” information is erased when the shared beams are aligned. With the correct geometry, this effect can allow an object to be imaged using only photons that have never interacted with the object—in other words, the image is formed using undetected photons. Interest in this and related setups has been accelerating in recent years due to a number of desirable properties, mostly centered around the fact that the detected field and the field that probes the object are separate and may therefore have different optical properties, entailing significant advantages for various applications. The purpose of this tutorial is to introduce researchers to this area of research, to provide practical tools for setting up experiments as well as understanding the underlying theory, and also to provide a comprehensive overview of the sub-field as a whole.

© 2022 Optica Publishing Group

1. INTRODUCTION

One thousand years ago, in 1021, Ḥasan Ibn al-Haytham (a.k.a. Alhazen) completed the Book of Optics, in which he laid the foundations of modern optics and detailed the apparatus that Kepler later dubbed camera obscura (pinhole camera). In the intervening 10 centuries, imaging technology has progressed immeasurably—and up to the present day, developing ever-better imaging and sensing devices remains an extremely active field of research. Evolving approaches to imaging technology have enabled new capabilities such as extreme sensitivity and resolution, imaging at non-visible wavelengths, and many others too diverse to fully enumerate.

One of the very latest areas of research is a suite of new technologies enabled by the development of quantum theory, which contains the most accurate description of optical fields. Devices that take advantage of the so-called “quantum nature of light” include new imaging schemes that may beat the classical limits of sensitivity [1–4], spatial resolution [5–10], and phase metrology [11–13]. Quantum optics has also enabled the emergence of imaging schemes where the light that interacts with the sample is not captured by the pixelated detector/camera, e.g., interaction-free imaging [14], ghost imaging [15–21], and quantum imaging with undetected photons (QIUP) [22–24].

With ever more sophisticated camera and photon-source technologies emerging in recent years, quantum imaging and quantum-inspired imaging techniques have become even more promising avenues for research and development—and will likely be key components of the 21st century quantum revolution, alongside quantum computing, quantum communication, and quantum metrology.

Here we focus on QIUP and its spin-offs. In these experiments, coherence is induced between light produced in two twin-photon sources placed within a nonlinear interferometer. Quantum imaging with nonlinear interferometers was introduced in [22], and applications of this method to bioimaging, spectroscopy, optical coherence tomography (OCT), and moving images were first proposed in [25,26]. The most important advantage of this kind of technique is that one can obtain information about an object probed by a light beam of one wavelength by detecting only a separate light field at a different wavelength. The light field that illuminates the sample is not detected at all. This is especially useful when the illumination wavelength is one for which detectors are not available or unsatisfactory, and in the case of delicate samples that require low intensity illumination.

How to read this tutorial. In this tutorial, we will prepare you both theoretically and experimentally to investigate a few methods of quantum imaging and metrology with undetected photons, i.e., quantum-imaging-based technologies using induced coherence without induced emission (ICWIE) within a nonlinear interferometer in the low-gain regime. We will assume basic knowledge of the quantum optics formalism and familiarity with spontaneous parametric downconversion (SPDC).

In Section 2, we give an extended introduction to basic interferometric devices and the effect of ICWIE, including the basic state-vector description and conceptual overview. In Section 3, we present an introduction to one of the main applications of interferometry: phase metrology, and show how ICWIE may be exploited in this field, yielding a number of advantages and technological prospects. In Section 4, we provide the theoretical description for a multi-mode nonlinear interferometer and its application to phase and absorption imaging. In Section 5, we describe OCT, holography, and spectral imaging with undetected photons. Sections 6 and 7 are aimed at giving experimental guidelines to researchers who are building a Zou–Wang–Mandel interferometer (ZWMI) or a SU(1,1) interferometer in the low-gain regime. In Section 8, we give an outlook of some interesting research directions one could explore. Appendix A is dedicated to a basic overview of the main quantum optics states and operators, for those who have not encountered quantum optics formalism before or would like a recap.

This tutorial does not have to be read in a linear fashion, but if this is your first encounter with imaging and metrology using ICWIE, we suggest you start by reading Section 2. Those who would like to revisit key points of basic quantum optics theory might want to read Appendix A at that point, before proceeding on. Basic theoretical background on quantum optics and quantization of electromagnetic fields is required for understanding Sections 3 and 4. Good accounts of these topics can be found in standard textbooks on quantum optics (e.g., [27,28]). Also, readers with different interests might jump to different sections. In particular, readers interested in the application of ICWIE to phase metrology can learn about it in Section 3, which can be skipped by those not interested in phase metrology. Researchers aiming at understanding the rigorous theory behind imaging with undetected photons should refer to Section 4, which can be skipped by experimentalists who are not planning on doing detailed calculations. On the other hand, researchers who are not planning to build a setup in the laboratory might want to skip Sections 6 and 7. Please note that background knowledge in elementary Fourier optics and multi-mode theory of SPDC will be useful for a thorough understanding of imaging, particularly of the theory developed in Section 4. An excellent account of Fourier optics can be found in [29], and a multi-mode description of the theory of SPDC can be found in [30].


Fig. 1. (a) Mach–Zehnder interferometer. A light field is split at the 50:50 beam splitter, ${{\rm{BS}}_1}$, and is recombined at ${{\rm{BS}}_2}$. A phase shifter is placed in path $A$ and an object with complex field transmittance $T$ placed in path $B$. Interference is analyzed at the detector. (b) Two-photon interferometer. A pump is split into paths ${P_1}$ and ${P_2}$ at a 50:50 beam splitter (${{\rm{BS}}_1}$) and illuminates two nonlinear sources, ${Q_1}$ and ${Q_2}$, producing correlated photon pairs. When a pair is produced in source ${Q_1}$ (${Q_2}$), the so-called signal photon is emitted into path ${S_1}$ (${S_2}$) and the so-called idler photon is emitted into path ${I_1}$ (${I_2}$). Signal paths ${S_1}$ and ${S_2}$ are combined at ${{\rm{BS}}_2}$, and idler paths ${I_1}$ and ${I_2}$ are combined at ${{\rm{BS}}_3}$. No interference is observed in the signal intensity at the detector because which-way information is in principle obtainable. One can observe interference only using post-selection, i.e., by detecting idlers at one output of ${{\rm{BS}}_3}$ in coincidence with signals at one output of ${{\rm{BS}}_2}$.


2. INDUCED COHERENCE WITHOUT INDUCED EMISSION

Richard Feynman considered quantum interference the biggest mystery in quantum mechanics [31]. Quantum interference is observed at a detector if and only if it is impossible to associate each detected quantum (e.g., photon, electron, atom, molecule) with a particular path among the two or more paths connecting the source to that detection apparatus [31,32]. Moreover, the fringe visibility in a two-way interferometer gives an upper bound on the available which-way (welcher-weg) information [33]. The quintessential example is the double-slit experiment, where no interference is observed if the alternative paths between the source and the detector are distinguishable, and interference is observed if those paths are indistinguishable [31,34]. Other important examples in optics are the Michelson, Mach–Zehnder, and Sagnac interferometers [35–37].

When setting up any interferometer in the laboratory, to observe good interference visibility, one aligns the beams incoming to the detector and adjusts the path lengths of the interferometer so that they differ by not more than the coherence length of the quanta. In the language of quantum information, the alignment and the length adjustment of the interferometer paths amount to ensuring indistinguishability between quanta arriving at the detector, thus enabling interference [32].

A. Mach–Zehnder Interferometer

Let us consider what happens to light in a Mach–Zehnder interferometer (MZI), illustrated in Fig. 1. The incoming light field is split at the first beam splitter (BS) ${{\rm{BS}}_1}$ and recombined at a second BS, ${{\rm{BS}}_2}$. For interference to be seen, the optical path lengths of paths $A$ and $B$ between the two BSs must be equal to within the coherence length of the light field, such that each photon is described as being in a superposition, $({|1{\rangle _A} + {e^{- i{\phi _A}}}|1{\rangle _B}})/\sqrt 2$, where $|1{\rangle _x}$ denotes a photon in path $x$. The phase ${\phi _A}$ is associated with relative optical delays between the two modes of the interferometer and can be adjusted by slightly shifting a mirror or BS or by inserting a slab of a transparent material, such as silica. The count rate at a detector placed after ${{\rm{BS}}_2}$ in path $A$ (or $B$) is given by

$${{\cal R}_{A/B}} = \frac{{\left({1 + |T{|^2}} \right) \pm 2|T|\cos ({\phi _A} - \gamma)}}{4},$$
where $T = |T|{e^{{i\gamma}}}$, with $0 \le |T| \le 1$ and $0 \le \gamma \lt 2\pi$, is the complex field transmittance of an object placed in path $B$. The interference visibility obtained by scanning ${\phi _A}$ is defined as
$${\cal V} = \frac{{{{\cal R}_{{\max}}} - {{\cal R}_{{\min}}}}}{{{{\cal R}_{{\max}}} + {{\cal R}_{{\min}}}}}.$$

Assuming equal optical path lengths, the interference visibility of the MZI is thus

$${{\cal V}_{{MZ}}} = \frac{{2|T|}}{{1 + |T{|^2}}}.$$

In the language of quantum information, the reduction of visibility for $|T| \lt 1$ is due to the path distinguishability introduced by the object. One application of the MZI is interaction-free measurements [38,39], which can be used for imaging [14]. An object is placed in one arm of a MZI where photons are sent one at a time. The object affects the interference pattern at the output and, in a fraction of the experimental runs, the presence of the object can be deduced without the photon having interacted with it.
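As a quick numerical check of Eqs. (1)–(3), one can scan ${\phi _A}$ and extract the visibility. The transmittance value below is an arbitrary illustrative choice, not taken from the text:

```python
import numpy as np

def mzi_rates(T, phi_a):
    """Count rates at the two MZI outputs, Eq. (1)."""
    mag, gamma = abs(T), np.angle(T)
    r_a = (1 + mag**2 + 2 * mag * np.cos(phi_a - gamma)) / 4
    r_b = (1 + mag**2 - 2 * mag * np.cos(phi_a - gamma)) / 4
    return r_a, r_b

def visibility(rates):
    """Interference visibility from a phase scan, Eq. (2)."""
    return (rates.max() - rates.min()) / (rates.max() + rates.min())

T = 0.7 * np.exp(1j * 0.4)              # illustrative object transmittance
phi = np.linspace(0, 2 * np.pi, 1000)   # scan the interferometric phase
r_a, r_b = mzi_rates(T, phi)

v_scan = visibility(r_a)
v_formula = 2 * abs(T) / (1 + abs(T)**2)  # Eq. (3)
```

The scan recovers the closed-form visibility, and the two output rates sum to the total transmitted power $(1 + |T{|^2})/2$, as expected for a lossless recombination.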

B. Two-Photon Interferometer

Now let us consider an interferometer that uses two identical sources of photon pairs, e.g., nonlinear crystals that can generate photon pairs, commonly referred to as signal and idler, through SPDC. The sources are prepared such that the biphoton fields emerging from them are mutually coherent. These crystals are weakly pumped by mutually coherent laser beams, for example, generated by splitting a laser beam into two as shown in Fig. 1(b). Let us denote the two sources as ${Q_1}$ and ${Q_2}$, and the emitted beams will be referred to as signal beams (${S_1}$ and ${S_2}$) and idler beams (${I_1}$ and ${I_2}$). Beams ${S_1}$ and ${S_2}$ are combined at a BS, the outputs of which are sent to detectors.

A few simple calculations can help us understand what is going on. In a first approach to this problem, it is instructive to write down the photon states in the device as relatively simple state vectors. In this picture, the action of a nonlinear source is simply to add one photon each to the appropriate modes. For example, in reference to Fig. 1(b), the action of source ${Q_j}$, with $j = \{1,2\}$, is modeled as taking the vacuum input in modes ${S_j}$ and ${I_j}$ and transforming them as $|0{\rangle _{{S_j}}}|0{\rangle _{{I_j}}} \to |1{\rangle _{{S_j}}}|1{\rangle _{{I_j}}}$, where $|1{\rangle _{{S_j}}},\;|1{\rangle _{{I_j}}}$ represent a photon occupying mode ${S_j},\;{I_j}$, respectively. Because the sources are weakly pumped, it is also very unlikely that both nonlinear sources, ${Q_1}$ and ${Q_2}$, emit a pair simultaneously. Please note that here we keep the kets as single-photon number states so as to maintain the same notation as the rest of the paper, but since first quantization does not use the Fock basis (all modes are assumed to have only one photon) we could equally well omit the occupation number—that is, $|1{\rangle _I} = |I\rangle$, for example.

Let us first consider that $T = 1$ in Fig. 1(b), which is equivalent to removing that object/sample. Assuming both sources are identical and their emissions are coherent, the state just before the BS ${{\rm BS}_2}$ will be the superposition

$$\frac{{|1{\rangle _{{S_1}}}|1{\rangle _{{I_1}}} + {e^{- i\phi}}|1{\rangle _{{I_2}}}|1{\rangle _{{S_2}}}}}{{\sqrt 2}}.$$

Now let us consider the more general case. We will see in Section 4 that the object/sample in the idler path is modeled as a BS with complex field transmittance function $T = |T|{e^{{i\gamma}}}$, with $0 \le |T| \le 1$ and $0 \le \gamma \lt 2\pi$. In this picture, the action of the object in Fig. 1(b) is to transform the state produced in source ${Q_1}$ as

$$|1{\rangle _{{S_1}}}|1{\rangle _{{I_1}}} \to |1{\rangle _{{S_1}}}\left({T|1{\rangle _{{I_1}}} + i\sqrt {1 - |T{|^2}} |1{\rangle _0}} \right),$$
where $|1{\rangle _0}$ represents a photon absorbed or scattered by the object. Hence, when we apply this to the interferometer we are considering, Eq. (4) is replaced by
$$\frac{{\left({T|1{\rangle _{{I_1}}} + i\sqrt {1 - |T{|^2}} |1{\rangle _0}} \right)|1{\rangle _{{S_1}}} + {e^{- i\phi}}|1{\rangle _{{I_2}}}|1{\rangle _{{S_2}}}}}{{\sqrt 2}}.$$

The action of a BS is described in Appendix A [Eq. (A21)]. Essentially, ${{\rm BS}_2}$ combines the signal modes as $|1{\rangle _{{S_1}}} \to ({|1{\rangle _{{S_1}}} + i|1{\rangle _{{S_2}}}})/\sqrt 2$ and $|1{\rangle _{{S_2}}} \to ({|1{\rangle _{{S_2}}} + i|1{\rangle _{{S_1}}}})/\sqrt 2$. Similarly, ${{\rm BS}_3}$ combines the idler modes, such that the two-photon state after ${{\rm{BS}}_2}$ and ${{\rm{BS}}_3}$ in Fig. 1(b) is

$$\begin{split}\!\!\!&\left({\frac{{{f_ -}|1{\rangle _{{I_1}}} + i{f_ +}|1{\rangle _{{I_2}}} + i\sqrt {2(1 - |T{|^2})} |1{\rangle _0}}}{{2\sqrt 2}}} \right)|1{\rangle _{{S_1}}} \\&\quad+ \left({\frac{{i{f_ +}|1{\rangle _{{I_1}}} - {f_ -}|1{\rangle _{{I_2}}} - \sqrt {2(1 - |T{|^2})} |1{\rangle _0}}}{{2\sqrt 2}}} \right)|1{\rangle _{{S_2}}},\!\end{split}$$
where ${f_ \pm} \equiv ({T \pm {e^{- i\phi}}})$.

If the idler photons are not detected, mathematically we perform a partial trace over the idler modes and obtain a constant ($= 1/2$) signal photon counting rate at either output of ${{\rm{BS}}_2}$. In other words, interference is not directly observed in intensity measurements at the signal detector without any post-selection (coincidence detection). An intuitive explanation for this is that which-source information is retrievable in principle; for example, one could (hypothetically) add a fourth BS combining the two idler outputs of ${{\rm BS}_3}$, which could reveal which idler came from which source. In that case, effectively, spatial modes ${I_1}$ and ${I_2}$ would be inputs to a MZI [Fig. 1(a)] defined by ${{\rm BS}_3}$ and (hypothetical) ${{\rm BS}_4}$, such that the output of ${{\rm BS}_4}$ could be set to be the original ${I_1}$ and ${I_2}$ fields. The astonishing thing is that even if one does not actually perform that measurement, by leaving the idlers undetected, one leaves the possibility, in principle, of using these idlers to extract which-source information of the signal photons. That mere possibility, even if it is not actually realized, is enough to inhibit signal intensity modulation due to interference.
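This can be verified directly from the amplitudes of Eq. (7). The sketch below (illustrative values $|T| = 0.6$, $\gamma = 0.3$) uses the convention ${f_\pm} = T \pm e^{-i\phi}$; an overall phase does not affect count rates. Tracing out the idlers gives a flat signal singles rate of $1/2$, while post-selecting on one idler output recovers fringes with visibility $2|T|/(1 + |T{|^2})$, as quoted later for the coincidence measurement:

```python
import numpy as np

def amplitudes(T, phi):
    """Amplitudes of the two-photon state in Eq. (7).
    Rows: signal output S1/S2; columns: idler output I1/I2/loss mode 0."""
    fm = T - np.exp(-1j * phi)
    fp = T + np.exp(-1j * phi)
    loss = np.sqrt(2 * (1 - abs(T)**2))
    return np.array([[fm, 1j * fp, 1j * loss],
                     [1j * fp, -fm, -loss]]) / (2 * np.sqrt(2))

T = 0.6 * np.exp(1j * 0.3)              # illustrative object transmittance
phis = np.linspace(0, 2 * np.pi, 500)

singles_s1 = []    # idler modes traced out (summed over)
coinc_s1_i1 = []   # post-selected: signal at S1 AND idler at I1
for phi in phis:
    a = amplitudes(T, phi)
    singles_s1.append(np.sum(abs(a[0])**2))
    coinc_s1_i1.append(abs(a[0, 0])**2)

singles_s1 = np.array(singles_s1)
coinc = np.array(coinc_s1_i1)
vis_coinc = (coinc.max() - coinc.min()) / (coinc.max() + coinc.min())
```

The flat singles rate makes the which-source argument quantitative: without post-selection, no phase dependence survives the partial trace.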


Fig. 2. Three architectures of the Zou–Wang–Mandel interferometer. (a), (b) The idler paths produced in sources ${Q_1}$ and ${Q_2}$ are aligned, and a 50:50 beam splitter, ${{\rm{BS}}_2}$, combines signal paths ${S_1}$ and ${S_2}$; which-path information is erased, and single-photon interference can be observed in the detector, even though idler photons are not detected at all. The field transmittance function, $T$, of an object illuminated by the idler field $I$ can be observed in the interference pattern of the signal beams at the detector, even though signal photons have not interacted with that object. The phase $\phi$ can be tuned by adjusting signal, idler, or pump optical path lengths. (b), (c) The signal and idler are emitted in the same direction as the pump (collinear emission), and if they are all at distinct frequencies, dichroic mirrors can be used to separate them. In (b), the dichroic mirror ${{\rm{DM}}_1}$ reflects idler photons and transmits signal wavelength, whereas ${{\rm{DM}}_2}$ transmits the pump and reflects idler photons. In (c), a single crystal is pumped from both sides. ${{\rm{DM}}_1}$ reflects the pump and transmits both signal and idler, whereas ${{\rm{DM}}_2}$ reflects idler and transmits the signal. Finally, ${{\rm{DM}}_3}$ reflects the signal while transmitting the idler and pump. Notice that in this architecture, undetected light goes twice through the same sample ($T^\prime = {T^2}$).


Note that if one detects one output of ${{\rm{BS}}_2}$ in coincidence with an output of ${{\rm{BS}}_3}$, i.e., by using post-selection, it is possible to observe interference [40,41], and the visibility is ${\cal V} = 2|T|/(|T{|^2} + 1)$.

C. Zou–Wang–Mandel Interferometer

In the MZI, as well as in the Michelson and Sagnac interferometers, classical wave models, including classical electromagnetism, can describe the interference visibility due to (mis)alignment, path length difference, and/or an object placed in a path of the interferometer. Seeking to unravel the connection between quantum indistinguishability and interference, in 1991, Zou, Wang, and Mandel, with essential insight from Jeff Ou, created an interferometer that cannot be described by classical wave optics models [42,43]. The difference between this interferometer and the two-photon interferometer described above is that now both sources emit idler beams into the same spatial mode, $I$. Considering that the idler photons are not detected at all, do you expect that interference fringes can be observed in the detected signal outputs? Why or why not?

Let us take a look at the state of a photon pair just before ${{\rm{BS}}_2}$ in Fig. 2:

$$\frac{{\left({T|1{\rangle _I} + i\sqrt {1 - |T{|^2}} |1{\rangle _0}} \right)|1{\rangle _{{S_1}}} + {e^{- i\phi}}|1{\rangle _I}|1{\rangle _{{S_2}}}}}{{\sqrt 2}},$$
where $T = |T|{e^{{i\gamma}}}$ is the complex field transmittance of the object/sample placed in the idler path, $0 \le \phi \lt 2\pi$ is an adjustable interferometric phase, and $|1{\rangle _0}$ represents a photon absorbed or scattered by the object, i.e., mode 0 corresponds to the “loss mode.” Note that mode $I$ does not acquire a second photon: in the weak-pump regime at most one pair is emitted, so the single photon in that mode came from one crystal or the other. At this point, the origin of this photon could be determined by seeing which of modes ${S_1}$ and ${S_2}$ contains a photon.

The final BS ${{\rm{BS}}_2}$ combines the signal fields, after which the state of the twin photons can be written as

$$\begin{split}\!\!\!|{\psi _f}\rangle & = \frac{1}{2}\left({({T + i{e^{- i\phi}}} )|1{\rangle _I} + i\sqrt {1 - |T{|^2}} |1{\rangle _0}} \right)|1{\rangle _{{S_1}}}\\&\quad + \frac{1}{2}\left({({{e^{- i\phi}} + iT} )|1{\rangle _I} + \sqrt {1 - |T{|^2}} |1{\rangle _0}} \right)|1{\rangle _{{S_2}}}.\!\end{split}$$

By tracing out the idler mode $I$ and the loss mode 0, we can obtain the count rate at a detector placed at either output of ${{\rm{BS}}_2}$:

$${{\cal R}_{{S_1}({S_2})}} = \frac{{1 \pm |T|\cos (\phi + \gamma)}}{2},$$
where—strikingly—an interference pattern modulated by the object $T$ can now be observed in the signal photon counting rate, despite the fact that neither ${S_1}$ nor ${S_2}$ interacted with that object. Coherence is thus “induced” between the two signal modes as a result of aligning the shared idler mode as precisely as possible. We explain in Section 7 how to perform this alignment in the laboratory. Idler mode $I$, which on its own carries no phase information, is typically discarded.
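As a numerical sanity check (illustrative values $|T| = 0.5$, $\gamma = 0.7$), one can compute the signal detection probabilities directly from the state in Eq. (9). Up to a constant phase offset fixed by the beam splitter convention, the fringe of Eq. (10) is recovered: the two outputs oscillate in anti-phase and the visibility equals $|T|$:

```python
import numpy as np

def zwm_probs(T, phi):
    """Signal detection probabilities from the state in Eq. (9),
    after tracing out idler mode I and loss mode 0."""
    loss = np.sqrt(1 - abs(T)**2)
    a_s1 = np.array([T + 1j * np.exp(-1j * phi), 1j * loss]) / 2  # (S1,I), (S1,0)
    a_s2 = np.array([np.exp(-1j * phi) + 1j * T, loss]) / 2       # (S2,I), (S2,0)
    return np.sum(abs(a_s1)**2), np.sum(abs(a_s2)**2)

T = 0.5 * np.exp(1j * 0.7)              # illustrative object transmittance
phis = np.linspace(0, 2 * np.pi, 1000)
p1 = np.array([zwm_probs(T, p)[0] for p in phis])
p2 = np.array([zwm_probs(T, p)[1] for p in phis])
vis_zwm = (p1.max() - p1.min()) / (p1.max() + p1.min())
```

Note that `p1 + p2 = 1` for every phase: unlike in the two-photon interferometer above, the signal fringes survive the partial trace over the undetected idler.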

In the ZWMI that we have just described, the interference visibility is directly proportional to the absolute value of the field transmission coefficient:

$${{\cal V}_{{\rm{ZWM}}}} \propto |T|.$$

This relationship holds even if the intensities of the signal beams are not equal. In fact, any photon loss in the idler arm results in a reduction of the total visibility, as it introduces partial path distinguishability (welcher-weg information).

Let us compare this with the nonlinear effect of loss ($|T| \lt 1$) in an arm of a MZI [Eq. (3)]. The linear relation between interference visibility and loss in the undetected idler photon path between the two sources is a distinguishing feature of the ZWMI. It is shown in [42,44] that this characterizes the non-classicality of induced coherence without induced (stimulated) emission. This is a very important point: stimulated emission at the second source ${Q_2}$ due to the input idler field $I$ is not necessary for induced coherence (interference), a fact that highlights the non-classicality of the phenomenon [45]. In cases where stimulated emission is not negligible in ${Q_2}$, the interference visibility is a nonlinear function of the field transmittance $|T|$ [44,46]. This regime can be achieved using very high gain sources (e.g., using very high pump power), or by seeding ${Q_1}$ and ${Q_2}$ via mode $I$ with a coherent state (laser beam) with the idler beam wavelength [40,47].
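The contrast between the two scalings can be seen in a few lines. For any given $|T| \lt 1$, the MZI visibility of Eq. (3) exceeds the linear ZWMI visibility, and the two coincide only at $|T| = 0$ and $|T| = 1$:

```python
import numpy as np

t = np.linspace(0, 1, 101)       # |T|, the field transmittance modulus
v_zwm = t                        # ZWMI, Eq. (11): linear in the loss
v_mz = 2 * t / (1 + t**2)        # MZI, Eq. (3): nonlinear in the loss
```

The linear dependence (rather than the curve $2|T|/(1 + |T{|^2})$) is the experimentally testable signature of induced coherence without induced emission in the low-gain regime.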


Fig. 3. Three architectures for the SU(1,1) interferometer. (a) Both signal and idler paths $S$ and $I$ from the first nonlinear source (${Q_1}$) are aligned with signal and idler paths originating in the second nonlinear source (${Q_2}$). (b), (c) The laser pumps a crystal and is reflected back through the same crystal. The undetected light traverses twice through the same sample ($T^\prime = {T^2}$). In (c), the pump, signal, and idler leave the crystal collinear with each other, and if they are all at different frequencies, they can be separated using three dichroic mirrors: ${{\rm{DM}}_1}$ transmits the pump but reflects signal and idler photons; ${{\rm{DM}}_2}$ transmits the pump and idler photons, but reflects signal photons; ${{\rm{DM}}_3}$ transmits the pump and signal photons, but reflects idler photons. In all three architectures, single-photon interference is seen in signal and idler outputs without post-selection (coincidence detection). The field transmittance function of an object placed in either path $S$ or $I$ can be seen in the interference pattern appearing in a camera placed in either output path.


The ZWMI was generalized to many spatial modes using spatially correlated photon pairs to produce QIUP [22,23]. Quantum interference and spatial correlations between signal and idler photons [30] together produce, in the detected signal photons, images of an object placed in the idler beam [23,24,48,49]. Both the modulus and the phase of the object’s spatially varying field transmittance function can be observed: the modulus $|T|$ is encoded in the interference visibility, and the phase $\gamma$ appears as an interferometric phase.

D. SU(1,1) Interferometer

Let us now consider that both signal and idler photons from source ${Q_1}$ are fed into source ${Q_2}$ [50,51], as shown in Fig. 3. Here we will refer to this interferometer as an “SU(1,1) interferometer,” also known as a “nonlinear Mach–Zehnder” [50,5254]. This experiment can be thought of as a nonlinear adaptation of the MZI, which has two BSs, whereas the SU(1,1) has instead two nonlinear media, ${Q_1}$ and ${Q_2}$. The object with transmission function $T = |T|{e^{{i\gamma}}}$ is placed in the idler mode between the crystals. At the output modes, the two-photon state can be written as

$$|\psi \rangle = \frac{{(|T|{e^{i({\gamma + \phi})}} + 1)|1{\rangle _S}|1{\rangle _I} + \sqrt {1 - |T{|^2}} |1{\rangle _S}|1{\rangle _0}}}{2}.$$

The (singles) counting rate at detectors placed on either output path is therefore

$${{\cal R}_{S/I}} = \frac{{1 + |T|\cos (\phi + \gamma)}}{2},$$
giving the interference visibility ${{\cal V}_Y} = |T|$, just as in the case of the ZWMI [Eq. (11)]. As the optical path of the signal, idler, or pump is adjusted, both signal and idler count rates oscillate, a clear manifestation of interference. A unique feature of this particular interferometer is that the single-photon output ports are in phase with each other, though out of phase with the laser output port. That means that if maximum (minimum) counts are observed in output mode $S$, maximum (minimum) counts are simultaneously observed in output mode $I$. This leads to the curious phenomenon of frustrated downconversion, analyzed in [51]. If one introduces which-path information in the signal or idler paths between the crystals, for example, by misaligning the modes, interference is reduced or even lost in both signal and idler modes, showing complementarity between welcher-weg (which-path) information and interference visibility.
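A short sketch (illustrative values $|T| = 0.4$, $\gamma = 0.2$) computes both singles rates from the unnormalized state of Eq. (12) and confirms that the signal visibility equals $|T|$ and that the signal and idler fringes peak at the same phase:

```python
import numpy as np

def su11_rates(T, phi):
    """Signal and idler singles rates from the (unnormalized) state in Eq. (12).
    The norm itself oscillates with phi: total pair emission is enhanced or
    frustrated depending on the phase (frustrated downconversion)."""
    a_pair = (T * np.exp(1j * phi) + 1) / 2    # amplitude of |1>_S |1>_I
    a_loss = np.sqrt(1 - abs(T)**2) / 2        # amplitude of |1>_S |1>_0
    r_s = abs(a_pair)**2 + abs(a_loss)**2      # signal: both terms contain |1>_S
    r_i = abs(a_pair)**2                       # idler: only the pair term
    return r_s, r_i

T = 0.4 * np.exp(1j * 0.2)              # illustrative object transmittance
phis = np.linspace(0, 2 * np.pi, 1000)
r_s = np.array([su11_rates(T, p)[0] for p in phis])
r_i = np.array([su11_rates(T, p)[1] for p in phis])
vis_s = (r_s.max() - r_s.min()) / (r_s.max() + r_s.min())
```

Both `r_s` and `r_i` reach their maxima at the same phase, illustrating the in-phase behavior of the two single-photon output ports described above.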

The experiments described above and illustrated in Figs. 2 and 3 can be viewed as “quantum eraser” experiments [55,56]. In the ZWMI (Fig. 2), after the crystals but before the final BS, no interference would appear in either of the detection modes since the path itself marks which crystal experienced the photon-generating downconversion; however, after the final BS, this information is erased (a photon in either mode could have come from either crystal), and thus interference appears. In the SU(1,1) interferometer (Fig. 3) which-source information is erased at the second crystal. Also note that, unlike interferometers such as Mach–Zehnder, Michelson, and Sagnac, where only a single phase shift is possible, in the nonlinear interferometers we discuss here, one can independently change the phases of the pump, of the signal field, and of the idler fields, and usually these have different frequencies.

Notice that in the setup in Fig. 4(b), the signal and idler go through the object, and in Fig. 4(a), all three fields, signal, idler, and pump, go through the imaged object (labeled each time with $T$). In that case, the equations in the theory sections must be adapted accordingly. In addition, we show in this tutorial the imaging due to light transmitted through an object, but it is trivial to adapt the theory to the case of a reflective object, which is the case of the OCT setup [Fig. 13(b)], described in Section 5.B.


Fig. 4. These are two alternative versions of the SU(1,1) interferometer. (a) The same crystal acts as sources ${Q_1}$ and ${Q_2}$, as pump, signal, and idler are reflected back into that crystal. The dichroic mirror (DM) reflects only the signal field, and the long pass (LP) reflects only the pump. (b) Setup for collinear non-degenerate downconversion, where both signal and idler pass through the sample. Dichroic mirror ${{\rm DM}_1}$ separates the signal $S$ from the other fields, and dichroic mirror ${{\rm DM}_2}$ separates the idler $I$ from the pump.


It turns out that, in addition to the ZWMI and the SU(1,1) configurations, there is a whole range of interferometric architectures that involve two or more nonlinear sources [50,52]. These “nonlinear interferometers” have proved to be interesting and useful for imaging, spectroscopy, metrology, and other applications [54,57,58,60–66]. In this tutorial, we will explain how to build and model some of them.

In Section 3, we will introduce the theoretical underpinnings of phase metrology (one of the most common applications of interferometers), and in Section 4, we will show a generalization of the single-mode description to a multi-mode one, allowing the description of imaging with undetected photons. Those familiar with the basics of quantum optics and the main statistical states of light (coherent, number, etc.) can proceed to those sections. Those wishing an introduction or refresher can see Appendix A at the end of this paper. Experimentalists who do not plan on performing detailed calculations can skip ahead to Section 5.

We point out that in our theoretical expositions in this tutorial, we use the language of squeezed correlated fields, such as states produced in parametric downconversion. However, the interferometers we analyze and the mathematical formalism we introduce can also be applied to a variety of other systems, such as atomic spin waves [67], superconducting microwave cavities [68,69], and four wave mixing [70].

3. THEORY OF PHASE METROLOGY WITH UNDETECTED PHOTONS

A typical task in interferometry is to determine the relative phase delay between two (or more) modes of the device. For example, in a MZI (Fig. 1), we can measure the delay between the two modes, which yields information about the difference in path length between modes $A$ and $B$ (or, perhaps, the index of refraction of some intervening transparent material, or a similar quantity). In the simplest case, this information can be abstracted out as a phase shift, and in fundamental studies of interferometers, the actual mechanism of the shift is typically ignored. Devices are then characterized by the minimum phase shift that can be observed—corresponding to the most sensitive configuration.

The fundamental limit for classical interferometers (typically classified as those that use coherent light only) is called the “standard quantum limit” (SQL), which itself is the combination of two effects.

The first is radiation-pressure noise, which results from the light beam imparting a fluctuating momentum to the mirrors. The imparted momentum causes “jitter,” fouling up the very sensitive phase measurements an interferometer might otherwise perform. This noise increases as the power of the light increases. Conversely, it decreases as the mass of the mirror increases. Much of the research into reducing radiation-pressure noise is concerned with mirror stabilization.

The second is shot noise, which is a result of the photon number fluctuations from “shot to shot.” The relative shot noise decreases as the intensity increases, in contrast to radiation-pressure noise. This is the noise source that is typically the limiting factor. In principle, the mass of the mirrors in an interferometer may be made very large, such that radiation pressure becomes small when compared to shot noise. Though in practice this may be very difficult, there are a number of systems where the dominant source of noise is indeed shot noise, and it is often the case that the terms SQL and shot noise limit (SNL) are used interchangeably.

By using simple arguments about the statistics of coherent states, the limiting case for classical interferometry may be found to be

$$\Delta {\phi _{{\min}}} = \frac{1}{{\sqrt {\bar n}}},$$
which is the SNL on the minimum detectable phase shift. This is the best that can be done classically. Here, $\Delta {\phi _{{\min}}}$ and $\bar n$ are the minimum detectable phase shift and the average photon number, respectively.
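The $1/\sqrt {\bar n}$ scaling can be illustrated with a short Monte Carlo sketch (our own illustration, not part of the tutorial's calculations): we assume Poissonian counts at the two output ports of a balanced MZI and estimate the phase from the normalized intensity difference.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_error(n_bar, phi=np.pi / 2, trials=20000):
    """Monte Carlo spread of the phase estimate for a balanced MZI fed
    with a coherent state of mean photon number n_bar."""
    # Poissonian counts at the two output ports of the interferometer
    n_a = rng.poisson(n_bar * np.sin(phi / 2) ** 2, trials)
    n_b = rng.poisson(n_bar * np.cos(phi / 2) ** 2, trials)
    # The normalized intensity difference encodes cos(phi)
    d = (n_b - n_a) / np.maximum(n_a + n_b, 1)
    phi_est = np.arccos(np.clip(d, -1.0, 1.0))
    return phi_est.std()

for n_bar in (10**2, 10**4, 10**6):
    print(f"n_bar = {n_bar:>7}:  dphi = {phase_error(n_bar):.2e}, "
          f"SNL = {1 / np.sqrt(n_bar):.2e}")
```

At the most sensitive operating point ($\phi = \pi/2$), the spread of the estimates tracks the SNL across several orders of magnitude in $\bar n$.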

Now the obvious questions are, “Can this be improved upon using quantum resources? And, if so, what new fundamental limit constrains quantum devices?” The answers to these questions are commonly agreed to be “yes” and “the Heisenberg limit,” respectively.

The Heisenberg limit is presumed to be the absolute limit on the phase sensitivity of an interferometer. Unlike the SNL in Eq. (14), it makes no assumptions about the specific kind of light being used. Instead, the Heisenberg limit draws upon the fundamental laws of quantum mechanics to place a bound on how accurately we may measure a light field’s (or matter wave’s) phase. However, it must be noted that it is not straightforward to provide a rigorous, universally accepted derivation of the Heisenberg limit for phase measurement. The root of the problem lies in the definition of the phase operator. Below we provide a brief description of the issue.

A phase operator was originally introduced by Dirac in his celebrated paper on quantum theory of radiation [71]. To understand Dirac’s approach, let us take the annihilation operator acting on a coherent state:

$$\hat a|\alpha \rangle = \alpha |\alpha \rangle = {e^{{i\phi}}}|\alpha ||\alpha \rangle = {e^{{i\phi}}}\sqrt {\bar n} |\alpha \rangle .$$

The annihilation operator is a purely quantum mechanical object with no classical analog. However, it can be decomposed into quantities that are familiar in classical optics: the average photon number (intensity), $\bar n$, and phase, $\phi$, of an optical field. Dirac took this to mean that the creation and annihilation operators could be factored into Hermitian observables as

$$\hat a = {e^{i\hat \phi}}\sqrt {\hat n} ,$$
$${\hat a^\dagger} = \sqrt {\hat n} {e^{- i\hat \phi}},$$
where the second equation is obtained by simply taking the conjugate transpose of the first. He thus defined the phase operator $\hat \phi$. This combined with the commutation relation $[\hat a,{\hat a^\dagger}] = 1$ yields $[\hat n,\hat \phi] = i$.

Dirac thus concluded that the photon number and phase are conjugate (canonical) observables. Therefore, to become more certain about one, we must become less certain about the other. This relationship can be made quantitative by employing the generalized Heisenberg uncertainty principle for non-commuting operators: $\Delta A\Delta B \ge \frac{1}{2}|\langle [\hat A,\hat B]\rangle |$. Using this, we have $\Delta n\Delta \phi \ge 1/2$. So to know as much as we can about the phase, we must reduce by as much as possible our knowledge of the number of photons in the field. Since we are concerned with fundamental limits, we will take the case where the photon number is as uncertain as possible: when $\Delta n = \bar n/2$. The uncertainty cannot be made any larger than this because then there would be a non-zero probability of detecting a negative number of photons in the field (which is physically meaningless). Therefore, we get the following expression for the Heisenberg limit:

$$\Delta {\phi _{{\min}}} = \frac{1}{{\bar n}}.$$

However, there is a flaw in this argument. The problem lies in the definition of the phase operator, which was later shown to be non-Hermitian [72,73]. In fact, if we attempt to write down an eigenstate for this phase operator, we can visualize the issue.

Eigenstates of the number operator (representing a light field with an exactly known number of photons but a completely undefined phase) are well defined: in quadrature space, they form a ring whose radius is set by the field amplitude, enclosing a finite area (and thus finite energy); see Fig. 14. A phase eigenstate, however, is a wedge in quadrature space radiating out from the origin to infinity (all of the space that exists at a particular angle). Such a state has infinite area and thus infinite energy, and thus is not normalizable (see Fig. 14 again).

Since the problem with Dirac’s phase operator was pointed out, there have been numerous discussions and proposals on this issue (see, for example, [73–76]). Furthermore, there exists another approach to understanding the achievable precision of phase measurement from the perspective of quantum estimation theory [77,78]. Nevertheless, the Heisenberg limit ($\Delta {\phi _{{\min}}} = 1/\bar n$) is widely used and commonly regarded as an approximate bound in the limit of high photon numbers [72,76]. The Heisenberg limit remains a useful and common goalpost for studies in interferometry.

Now suppose we wish to consider a specific device that probes the abstract phase shift by imprinting it on some measurable quantity. Mathematically, this means we have some quantity that is a function of the phase, $M(\phi)$. So we ask the question, “Given that we are measuring $M$, what is the smallest change we can detect in $\phi$?” To answer this question, start with the Taylor series expansion of the function $M(\phi)$ about a point ${\phi _0}$:

$$M(\phi) = M({\phi _0}) + (\phi - {\phi _0}){\left. {\frac{{\partial M}}{{\partial \phi}}} \right|_{\phi \to {\phi _0}}} + \quad ...,$$
where $\partial (\phi - {\phi _0}) = \partial \phi$ since ${\phi _0}$ is a constant. The smallest detectable phase shift would be equal to the smallest we could make $\phi - {\phi _0}$. Since we are considering this quantity to be very small, we can truncate the series after the second term and recast the above as
$$\frac{{M(\phi) - M({\phi _0})}}{{{{\left. {\frac{{\partial M}}{{\partial \phi}}} \right|}_{\phi \to {\phi _0}}}}} = \phi - {\phi _0} = \Delta {\phi _{{\min}}}.$$

If we take ${\phi _0}$ to be the average value of the phase, then $M(\phi) - M({\phi _0})$ gains the interpretation of being the deviation of $M$ in a single measurement, as $M(\langle \phi \rangle) = \langle M(\phi)\rangle$, and the derivative becomes the derivative of $\langle M\rangle$ with respect to the phase. However, we want the statistically averaged deviation for a series of measurements of $M$, so we take ${(M - \langle M\rangle)^2}$ and average it, which yields the variance (the standard deviation squared), $\langle {M^2}\rangle - {\langle M\rangle ^2}$.

Then, squaring both sides of Eq. (19), making the substitutions above, and identifying $\langle {M^2}\rangle - {\langle M\rangle ^2}$ as the variance ${(\Delta \hat M)^2}$, we arrive at

$$\Delta {\phi _{{\min}}} = \frac{{\Delta \hat M}}{{\left| {\frac{{\partial \langle \hat M\rangle}}{{\partial \phi}}} \right|}},$$
where we have promoted $M$ to a quantum mechanical observable and taken the square root of both sides. This is the minimum detectable phase shift. To calculate it, we need both a choice of measurement operator and the quantum-mechanical state that the operator acts on (to take the expectation values).

Now, recall the limit of a standard MZI with coherent light input—Eq. (14). We wish to use this new formula to calculate the sensitivity of this device to changes in the abstract phase $\phi$. We need the relationship between the input modes and output modes given by Eq. (A21). These will allow us to write the operators at the detection end of the MZI in terms of the operators at the input end. We then take the expectation values of these operators at the input end.

The most important question, with regard to our sensitivity formula and the MZI, is the choice of the detection scheme $\hat M$, which for a MZI is the difference of the intensities at the bright port and dark port:

$$\hat M = \hat b_f^\dagger {\hat b_f} - \hat a_f^\dagger {\hat a_f},$$
where the subscript indicates that these operators act on the final state. This corresponds to an intensity difference measurement between modes A and B in Fig. 1. Using this information and Eq. (20), we find for this setup $\Delta {\phi _{{\min}}} = 1/|\alpha | = 1/\sqrt {\bar n}$, which is, unsurprisingly, the SNL in Eq. (14).
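This calculation can be reproduced symbolically. The sketch below is an illustration: the moments $\langle \hat M\rangle = \bar n\cos\phi$ and $(\Delta \hat M)^2 = \bar n$ are the standard coherent-state MZI results assumed here, and Eq. (20) then recovers the SNL at the optimal operating point.

```python
import sympy as sp

nbar, phi = sp.symbols('nbar phi', positive=True)

# Assumed coherent-state MZI statistics for the intensity-difference
# signal: mean <M> = nbar*cos(phi), variance (Delta M)^2 = nbar
M_mean = nbar * sp.cos(phi)
dM = sp.sqrt(nbar)

# Eq. (20): Delta phi_min = Delta M / |d<M>/dphi|
dphi_min = sp.simplify(dM / sp.Abs(sp.diff(M_mean, phi)))
snl = dphi_min.subs(phi, sp.pi / 2)   # most sensitive operating point
print(dphi_min, snl)
```

At $\phi = \pi/2$ the result reduces to $1/\sqrt{\bar n}$, in agreement with Eq. (14).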

An analysis of the phase sensitivity has also been recently performed for the ZWMI by some of us and others [79]. Several interesting effects are found. First, as might be expected for a “highly quantum” device, the minimum detectable phase shift reaches below the classical bound of the SNL, meaning that the device is “super-sensitive.” This effect is maintained regardless of gain regime (at least in principle). Furthermore, when the initial crystal ${Q_1}$ is seeded with a strong coherent (laser) beam, the sensitivity is further increased (“boosted”) into the bright-light regime while still maintaining some aspects of super-sensitive scaling. Though the general equations produced by this calculation are very large, a simple case of the minimum detectable phase shift (squared) can be presented. We take coherent light injected into one of the modes (in this case, the one that does not pass through the sample, corresponding to mode ${S_1}$ in Fig. 2), an intensity difference measurement between the two detected modes (modes ${S_1}$ and ${S_2}$ in the same figure), gains that are very large (and equal to each other), and the probe phase (and all other phases) set to zero:

$$\Delta \phi _{{\min}}^2 = \frac{{{e^{- 2r}}}}{{4(1 + {\beta ^2})}}.$$

Note, as a reminder, that $r$ represents the modulus of the squeezing parameter $\xi$ ($\xi = r{e^{{i\theta}}}$) and $\beta$ the amplitude of the injected coherent field. Also, this equation is not optimal; rather, it is presented because of its tractability (for detailed discussion, see [79]). From this equation, it is clear that both the squeezing due to the nonlinearities and the coherent light injection improve the sensitivity. The improvement is exponential in the gain and inverse-squared in the coherent injection amplitude.
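Equation (22) is simple enough to explore numerically; the values of $r$ and $\beta$ below are illustrative only.

```python
import numpy as np

# Direct evaluation of Eq. (22) for the boosted ZWMI: the squeezing r
# improves the sensitivity exponentially, and the injected coherent
# amplitude beta improves it as 1/(1 + beta^2).
def dphi2_min(r, beta):
    return np.exp(-2 * r) / (4 * (1 + beta**2))

for r in (0.0, 1.0, 2.0):
    for beta in (0.0, 10.0):
        print(f"r = {r}, beta = {beta:>4}: "
              f"dphi_min^2 = {dphi2_min(r, beta):.3e}")
```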

The SU(1,1) interferometer, Fig. 3, has become a commonly studied device [50,51,54,61,80] due to the fact that it allows super-sensitive detection with bright light. Recently, these interferometers have also been modified so that a MZI is nested inside [81], which has some conceptual similarity to the ZWMI configuration in the sense that both BS and squeezing operations are performed.

An argument against the use of all nonlinear interferometers could be paraphrased as, “If we need a bright laser to pump the nonlinearity, would it not just be better to use that bright light in a MZI?” To confront this criticism, any light that is used to pump a nonlinear source is added to the MZI “light budget,” making the comparison “fair.” In Fig. 5, the phase sensitivities of the ZWMI, SU(1,1), and MZI configurations are compared. This graph uses the concept of a “fair comparison state.” We see that, though the SU(1,1) configuration performs best, the ZWMI type also displays super-sensitivity, beating the MZI even after a modest nonlinear gain.


Fig. 5. Metrology with undetected photons. Figure taken from [79]. The minimum detectable phase-shift squared of several fair comparison interferometric setups and detection schemes as a function of gain [of the first crystal for ZWMI and of both crystals for SU(1,1)]. Here we display the boosted ZWMI setup with intensity detection at mode $B$ (green), intensity difference detection between modes $B$ and $C$ (brown), the boosted SU(1,1) setup (black), and a standard coherent-light-seeded MZI with the extra light needed to create the aforementioned squeezings added to the initial input (red). The latter is equivalent to the shot-noise limit. All other parameters are numerically optimized at each point. The circular points (upper set) represent injected coherent light of about the same intensity as would be needed for a high-gain nonlinearity, and the square points (lower set) represent a much brighter coherent input.


Surprisingly, this super-sensitive phase detection is available regardless of which of the two initial input modes is “boosted” into the bright-light regime (corresponding to injecting coherent light either into mode ${S_1}$ or into mode $I$ in Fig. 2). Therefore, one can choose not to shine the extra laser through the sample and still achieve the same sensitivity increase as if it had been. Likewise, one can shine the light through the sample into the mode that is discarded and thus avoid hitting the detectors. This technique could prove very useful in cases where either the detectors or the sample is sensitive to bright coherent light. Furthermore, the SU(1,1) configuration requires adaptive intensity measurements (where the signal is produced by summing the intensities of the two output modes) or homodyning for detection, whereas the ZWMI geometry uses intensity subtraction, making it more stable and thus, at least in some cases, more experimentally desirable.

The ZWMI has also been studied from the perspective of the signal-to-noise ratio (SNR). In [82], the authors theoretically study a ZWMI with a variable-field transmittance BS inserted in the idler path between two nonlinear sources, with different pump intensities for the nonlinear sources as well. They look at the SNR and visibility of the output as a function of the gain in the nonlinearities and the field transmittance of the aforementioned BS, paying special attention to the qualitative and quantitative difference between various regimes of gain. They also find that the visibility of the system may be optimized by proper choice of the field transmittance of the BS.

So far we have examined only “single-mode” descriptions of the ZWMI and the SU(1,1). That is, we have not taken into account the real momentum and frequency distributions; we have assumed that fields can differ only in path. In the following section, we build on the previous material to create a more complete picture, especially as it relates to imaging.

4. THEORY OF IMAGING WITH UNDETECTED PHOTONS

Amplitude and phase imaging using a ZWMI was introduced in [22] using collinear SPDC, as shown in Fig. 2(b). Two absorptive objects (a cardboard cut-out and an etched silicon plate) and a phase object (an etched fused silica plate) were placed in the undetected idler arm with wavelength 1550 nm. The images of these objects were retrieved in the interference pattern of the combined 810 nm signal field using a low-light sensitive camera, although there is no requirement of single-photon detection.

In this section, we present a rigorous theoretical description of the image formation in the ZWMI. To have a thorough idea of the imaging, we must consider the multi-mode structure of optical fields.

QIUP relies on transverse spatial correlations between signal and idler photons. Currently, there exist two complementary setups. In one of them, lenses in the idler path between the crystals are used such that the object is in the far field relative to ${Q_1}$ and ${Q_2}$ [22,23,64,65,83]. The object is imaged onto the camera also using lenses in the signal fields. In that case, the imaging is enabled by momentum correlation between the twin photons. In the other case, the object and camera can be placed in the near field (source plane) relative to the twin photon sources and, in this case, the imaging is enabled by the position correlation between twin photons [24,84,85]. (Evanescent fields play no role in this configuration, which therefore must not be confused with conventional near-field imaging.) Images generated in these two cases have distinct features. We discuss the two configurations separately in Sections 4.B and 4.C. We stress here that the theory is not restricted to twin photons generated by SPDC; it applies to spatially correlated twin photons generated by any source, provided one considers only pure states.

A. Multi-mode Twin-Photon States

Throughout the analysis, we assume that photons propagate as paraxial beams and are always incident normally on both the object and the detector. Under these assumptions, the two-photon quantum state can be written as (see, for example, [30])

$$|\tilde \psi \rangle = \int {\rm d}{{\textbf{q}}_s}\;d{{\textbf{q}}_I}\;C({{\textbf{q}}_s},{{\textbf{q}}_I})|{{\textbf{q}}_s}{\rangle _s}|{{\textbf{q}}_I}{\rangle _I},$$
where $|{{\textbf{q}}_s}{\rangle _s} \equiv \hat a_s^\dagger ({{\textbf{q}}_s})|vac\rangle$ denotes a signal photon Fock state labeled by the transverse component ${{\textbf{q}}_s}$ of the wave vector ${{\textbf{k}}_s}$. Similarly, $|{{\textbf{q}}_I}{\rangle _I} \equiv \hat a_I^\dagger ({{\textbf{q}}_I})|vac\rangle$ denotes an idler photon Fock state labeled by the transverse component ${{\textbf{q}}_I}$ of the wave vector ${{\textbf{k}}_I}$. The complex quantity $C({{\textbf{q}}_s},{{\textbf{q}}_I})$ ensures that $|\tilde \psi \rangle$ is normalized, i.e.,
$$\int {\rm d}{{\textbf{q}}_s}d{{\textbf{q}}_I}|C({{\textbf{q}}_s},{{\textbf{q}}_I}{)|^2} = 1.$$

Imaging enabled by both momentum correlation (Section 4.B) and position correlation (Section 4.C) can be described by the quantum state given by Eq. (23). Such a quantum state is usually generated by SPDC at a nonlinear crystal. However, the theoretical analysis applies to any source that can generate such a state.
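As a concrete (one-dimensional, purely illustrative) instance of Eqs. (23) and (24), one can model $C({q_s},{q_I})$ as a double Gaussian: a narrow envelope in ${q_s} + {q_I}$ (set by the pump) and a broad one in ${q_s} - {q_I}$ (set by phase matching). This is a common approximation for SPDC and yields the expected momentum anticorrelation; the widths below are assumed values.

```python
import numpy as np

q = np.linspace(-10, 10, 401)
qs, qi = np.meshgrid(q, q, indexing='ij')
dq = q[1] - q[0]
sig_plus, sig_minus = 0.2, 3.0   # assumed pump and phase-matching widths

# Double-Gaussian amplitude: narrow in q_s + q_I, broad in q_s - q_I
C = np.exp(-(qs + qi)**2 / (4 * sig_plus**2)
           - (qs - qi)**2 / (4 * sig_minus**2))
C = C / np.sqrt(np.sum(np.abs(C)**2) * dq**2)   # enforce Eq. (24)

P = np.abs(C)**2                         # joint density |C|^2
norm = np.sum(P) * dq**2                 # should be 1
mean_qsqi = np.sum(qs * qi * P) * dq**2  # negative: anticorrelated momenta
print(norm, mean_qsqi)
```

The negative $\langle {q_s}{q_I}\rangle$ reflects the anticorrelation ${q_I} \approx - {q_s}$ that will reappear in the discussion of SPDC below.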

B. Imaging Enabled by Momentum Correlation

1. General Theory

It is evident from Eq. (23) that the joint probability density of detecting a signal photon with transverse momentum $\hbar {{\textbf{q}}_s}$ and idler photon with transverse momentum $\hbar {{\textbf{q}}_I}$ is given by

$$P({{\textbf{q}}_s},{{\textbf{q}}_I}) \propto |C({{\textbf{q}}_s},{{\textbf{q}}_I}{)|^2}.$$

This probability density characterizes the momentum correlation between the twin photons. We now show that in the far-field configuration, this momentum correlation enables image formation.

The experimental setup is illustrated in Fig. 6. There are two sources, ${Q_1}$ and ${Q_2}$, each of which can emit a photon pair. ${Q_1}$ emits the signal and idler photons into beams ${S_1}$ and ${I_1}$, respectively. Likewise, ${S_2}$ and ${I_2}$ represent the beams into which the signal and idler photons are emitted by ${Q_2}$. The two sources (${Q_1}$ and ${Q_2}$) almost never emit simultaneously and almost never produce more than two photons individually. Furthermore, the two sources emit coherently. Under these circumstances, the quantum state of light generated by the two sources is given by the superposition of the states generated by them individually, i.e., by

$$\begin{split}|\psi \rangle &= \int {\rm d}{{\textbf{q}}_s}d{{\textbf{q}}_I} C({{\textbf{q}}_s},{{\textbf{q}}_I}) \\&\quad\times \left[{{\alpha _1}\hat a_{{s_1}}^\dagger ({{\textbf{q}}_s})\hat a_{{I_1}}^\dagger ({{\textbf{q}}_I}) + {\alpha _2}\hat a_{{s_2}}^\dagger ({{\textbf{q}}_s})\hat a_{{I_2}}^\dagger ({{\textbf{q}}_I})} \right]|vac\rangle,\end{split}$$
where ${\alpha _1}$ and ${\alpha _2}$ are complex numbers satisfying the condition $|{\alpha _1}{|^2} + |{\alpha _2}{|^2} = 1$, and $|vac\rangle$ represents the vacuum state.

Fig. 6. Quantum imaging with undetected photons. (a) In a collinear non-degenerate ZWMI [86], non-degenerate photon pairs are emitted along the propagation axis of a laser at each source. Wave plates and a polarizing beam splitter (PBS) in the pump are used to control the relative phases and amplitudes of the two-photon states generated in sources ${Q_1}$ and ${Q_2}$. Dichroic mirrors or long pass filters can be used to separate the pump from the daughter fields after each crystal. A lens ${L_0}$ is used to control the pump waist at the crystals, which affects the twin photon transverse momentum correlations, which in turn affects the image resolution, as shown in Section 4.B.3. Imaging system $A$ (e.g., a lens or lens system) ensures a good overlap of the combined signal fields, and optical system $C$ (again a lens or lens system) is used to image the plane of the object with spatial features $T(\boldsymbol\rho)$ onto the plane of the camera or scanning detector. (b) In the case of imaging enabled by momentum correlation, optical systems $B$ and $B^\prime $ (e.g., also a lens or lens system) guarantee that the object $T(\boldsymbol\rho)$ is at the Fourier plane of sources ${Q_1}$ and ${Q_2}$. An effective positive lens with focal length ${f_c}$ associates the plane on the camera with the Fourier plane of sources ${Q_1}$ and ${Q_2}$. A plane wave vector ${q_s}$ makes an angle $\theta$ with the optical axis and is focused along a circle of radius $|{\boldsymbol\rho _c}|$. (c) In the case of imaging enabled by position correlation, optical systems $B$ and $B^\prime $ are imaging systems; a point ${\boldsymbol\rho _s}$ on ${Q_j}(j = 1,2)$ is imaged onto ${\boldsymbol\rho _c} = {M_s}{\boldsymbol\rho _s}$ on the camera by $A$.


As shown in Fig. 6, lens systems are used to place both the object and the camera in the far field (Fourier plane) of the sources. These lens systems must also ensure that source ${Q_1}$ is imaged onto ${Q_2}$ and the camera is on the image plane of the object. For the best possible alignment of idler beams, it is important that ${Q_1}$ is imaged onto ${Q_2}$ with unit magnification. Here we present a generic treatment that does not consider any specific lens systems and provides a full understanding of the imaging mechanism. The theory can certainly be tailored to specific systems, and this may result in only incremental differences such as a different sign of image magnification.

A lens system $B$ that places the object in the Fourier plane of ${Q_1}$ can be effectively modeled by a single positive lens, where ${Q_1}$ and the object are located on the back and front focal planes of this positive lens, respectively. Similarly, the lens system $B^\prime $ that places ${Q_2}$ on the Fourier plane of the object can be modeled by another positive lens. For ${Q_1}$ to be imaged onto ${Q_2}$ with unit magnification, the focal lengths of these two lenses must be equal; we denote this common value the effective focal length, ${f_I}$. Since a transverse wave vector ${{\textbf{q}}_I}$ is focused on a point ${\boldsymbol\rho _o}$ on the object, within the paraxial approximation, we obtain from lens rules

$${\boldsymbol\rho _o} = \frac{{{\lambda _I}{f_I}}}{{2\pi}}{{\textbf{q}}_I},$$
where ${\lambda _I}$ is the mean wavelength of the idler photon.

The interaction of the idler field with the object can be represented by the transformation made by a BS, and we, therefore, have the following expression [23]:

$${\hat a_{{I_2}}}({{\textbf{q}}_I}) = {e^{i{\phi _I}}}\left[{T({\boldsymbol\rho _o}){{\hat a}_{{I_1}}}({{\textbf{q}}_I}) + R({\boldsymbol\rho _o}){{\hat a}_0}({{\textbf{q}}_I})} \right],$$
where ${\phi _I}$ is the phase due to propagation of the idler beam from ${Q_1}$ to ${Q_2}$, operator ${\hat a_0}$ represents the vacuum field at the unused port of the BS (object), $T({\boldsymbol\rho _o})$ is the amplitude transmission coefficient of the object at a point ${\boldsymbol\rho _o}$ that is related to ${{\textbf{q}}_I}$ by Eq. (27), and $|T({\boldsymbol\rho _o}{)|^2} + |R({\boldsymbol\rho _o}{)|^2} = 1$. The quantity, $R({\boldsymbol\rho _o})$, can be interpreted as the amplitude reflection coefficient at the same point while illuminated from the other side.

The quantum state of light generated by the system is obtained by combining Eqs. (26) and (28). We first determine $\hat a_{{I_2}}^\dagger ({{\textbf{q}}_I})$ (Hermitian conjugate of ${\hat a_{{I_2}}}({{\textbf{q}}_I})$) from Eq. (28). We then substitute for $\hat a_{{I_2}}^\dagger ({{\textbf{q}}_I})$ into Eq. (26). Finally, we use the relations

$$\hat a_{{s_{\!j}}}^\dagger ({{\textbf{q}}_s})\hat a_{{I_1}}^\dagger ({{\textbf{q}}_I})|vac\rangle = |{{\textbf{q}}_s}{\rangle _{{s_j}}}|{{\textbf{q}}_I}{\rangle _{{I_1}}},\quad j = 1,2,$$
$$\hat a_0^\dagger ({{\textbf{q}}_I})|vac\rangle = |{{\textbf{q}}_I}{\rangle _0},$$
and find that the quantum state of light generated by the system is given by
$$\begin{split} |\Psi \rangle & = \int {\rm d}{{\textbf{q}}_s}d{{\textbf{q}}_I} C({{\textbf{q}}_s},{{\textbf{q}}_I})[{\alpha _1}|{{\textbf{q}}_s}{\rangle _{{s_1}}} + {e^{- i{\phi _I}}}{\alpha _2}{T^ *}({\boldsymbol\rho _o})|{{\textbf{q}}_s}{\rangle _{{s_2}}}]|{{\textbf{q}}_I}{\rangle _{{I_1}}}\\& \quad+ \int {\rm d}{{\textbf{q}}_s}d{{\textbf{q}}_I} C({{\textbf{q}}_s},{{\textbf{q}}_I}){e^{- i{\phi _I}}}{\alpha _2}{R^ *}({\boldsymbol\rho _o})|{{\textbf{q}}_s}{\rangle _{{s_2}}}|{{\textbf{q}}_I}{\rangle _0}.\end{split}$$

Since the camera is placed in the combined signal field, at the far field relative to the sources, we can once again use the concept of an effective positive lens, as shown in Fig. 6(b). Suppose that the focal length of this lens is denoted by ${f_c}$. Following an argument similar to the one used to obtain Eq. (27), we find that point ${\boldsymbol\rho _c}$ on the camera is related to the transverse signal wave vector ${{\textbf{q}}_s}$ by the following formula:

$${\boldsymbol\rho _c} = \frac{{{\lambda _s}{f_c}}}{{2\pi}}{{\textbf{q}}_s},$$
where ${\lambda _s}$ is the mean wavelength of the signal photon. The quantized field at point ${\boldsymbol\rho _c}$ on the camera plane can now be represented by
$$\hat E_s^{(+)}({\boldsymbol\rho _c}) \propto {e^{i{\phi _{{s_1}}}}}{\hat a_{{s_1}}}({{\textbf{q}}_s}) + i{e^{i{\phi _{{s_2}}}}}{\hat a_{{s_2}}}({{\textbf{q}}_s}),$$
where ${\phi _{{s_1}}}$ and ${\phi _{{s_2}}}$ are phases due to propagation of the signal beams from ${Q_1}$ and ${Q_2}$, respectively, to the camera. The single-photon counting rate (intensity) at point ${\boldsymbol\rho _c}$ on the camera can be determined by the standard formula ${\cal R}({\boldsymbol\rho _c}) \propto \langle \Psi |\hat E_s^{(-)}({\boldsymbol\rho _c})\hat E_s^{(+)}({\boldsymbol\rho _c})|\Psi \rangle$. It now follows from Eqs. (30) and (32) that
$${\cal R}({\boldsymbol\rho _c}) \propto \int {\rm d}{{\textbf{q}}_I} P({{\textbf{q}}_s},{{\textbf{q}}_I})\big[{1 + | {T({\boldsymbol\rho _o})} |\cos ({\phi _{{in}}} - {\arg}\{T({\boldsymbol\rho _o})\})} \big],$$
where ${\phi _{{in}}} = {\phi _{{s_2}}} - {\phi _{{s_1}}} - {\phi _I} + {\arg}\{{\alpha _2}\} - {\arg}\{{\alpha _1}\} + \pi /2$, arg represents the argument of a complex number, and ${\boldsymbol\rho _o}$, ${{\textbf{q}}_I}$ and ${\boldsymbol\rho _c}$, ${{\textbf{q}}_s}$ are related by Eqs. (27) and (31), respectively, and we have assumed $|{\alpha _1}| = |{\alpha _2}|$ for simplicity.

It is evident from Eq. (33) that the information about the object (both magnitude and phase of the amplitude transmission coefficient) appears in the interference pattern observed on the camera, even though the photons interacting with the object are not detected by the camera. The presence of $P({{\textbf{q}}_s},{{\textbf{q}}_I})$ in Eq. (33) shows that the momentum correlation between the twin photons enables image acquisition. For example, when there is no correlation between the momenta, $P({{\textbf{q}}_s},{{\textbf{q}}_I})$ can be expressed in the product form $P({{\textbf{q}}_s},{{\textbf{q}}_I}) = {P_s}({{\textbf{q}}_s}){P_I}({{\textbf{q}}_I})$. It can be checked from Eq. (33) that in this case, no interference pattern will be observed and the information about the object will be absent from the photon counting rate measured by the camera. Therefore, for the imaging scheme to work, there must be some correlation between the momenta of the twin photons. Furthermore, the momentum correlation also determines image quality. In fact, we will see in Section 4.B.3 that this momentum correlation limits the image resolution.
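This requirement can be checked with a small one-dimensional simulation (an illustration with an assumed Gaussian $P$ and a half-plane object, not a model of any specific experiment): when $P({q_s},{q_I})$ factorizes, the position-dependent visibility is flat and carries no image, whereas strong anticorrelation reproduces the object.

```python
import numpy as np

q = np.linspace(-6, 6, 601)
qs, qi = np.meshgrid(q, q, indexing='ij')
T = (qi < 0).astype(float)        # object blocks idler modes with q_I > 0

def visibility(sig_plus, sig_minus):
    # Gaussian joint momentum density; equal widths make it factorize
    P = np.exp(-(qs + qi)**2 / (2 * sig_plus**2)
               - (qs - qi)**2 / (2 * sig_minus**2))
    r_max = np.sum(P * (1 + T), axis=1)   # Eq. (33) with cos = +1
    r_min = np.sum(P * (1 - T), axis=1)   # Eq. (33) with cos = -1
    return (r_max - r_min) / (r_max + r_min)

v_corr = visibility(0.1, 3.0)     # strong anticorrelation q_I ~ -q_s
v_flat = visibility(2.0, 2.0)     # equal widths: P factorizes
print(v_corr[400], v_corr[200], v_flat[400], v_flat[200])
```

In the correlated case, the visibility at $q_s = \pm 2$ mirrors the object (near 1 on one side, near 0 on the other); in the factorized case it is the same constant everywhere, so no image can be extracted.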

In the ideal scenario, when the momenta of the twin photons are perfectly correlated, the probability density $P({{\textbf{q}}_s},{{\textbf{q}}_I})$ can be effectively replaced by a Dirac delta function. Consequently, it follows from Eq. (33) that

$${\cal R}({\boldsymbol\rho _c}) \propto 1 + \left| {T\left({{\boldsymbol\rho _o}} \right)} \right|\cos [{\phi _{{in}}} - {\arg}\{T({\boldsymbol\rho _o})\}].$$

The phase ${\phi _{{in}}}$ is varied experimentally, and consequently, an interference pattern is observed at each point ${\boldsymbol\rho _c}$ on the camera. It is evident from Eq. (34) that the information about a point (${\boldsymbol\rho _o}$) on the object appears in the interference pattern observed at a point (${\boldsymbol\rho _c}$) on the camera. Extraction of this information results in imaging.
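The extraction step can be sketched numerically. Assuming the ideal rate of Eq. (34) at one camera point with a hypothetical complex transmission $T$, stepping ${\phi _{{in}}}$ and projecting onto the cosine and sine components recovers both $|T|$ and ${\arg}\{T\}$:

```python
import numpy as np

# Phase-stepping retrieval at a single camera point, Eq. (34):
# R = 1 + |T| cos(phi_in - arg T).  T below is a hypothetical value.
T = 0.8 * np.exp(1j * 0.6)
phases = np.linspace(0, 2 * np.pi, 64, endpoint=False)
R = 1 + np.abs(T) * np.cos(phases - np.angle(T))

# Fourier projection onto cos and sin recovers the fringe amplitude
b = 2 * np.mean(R * np.cos(phases))   # = |T| cos(arg T)
c = 2 * np.mean(R * np.sin(phases))   # = |T| sin(arg T)
T_abs = np.hypot(b, c)
T_arg = np.arctan2(c, b)
print(T_abs, T_arg)
```

Repeating this at every camera point yields both the absorption and the phase image, even though no detected photon interacted with the object.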

We illustrate the imaging by first considering an absorptive object for which we can set ${\arg}\{T({\boldsymbol\rho _o})\} = 0$. It is seen from Eqs. (2) and (34) that the visibility of the single-photon interference pattern at a point $({\boldsymbol\rho _c})$ on the camera is given by [24]

$${\cal V}({\boldsymbol\rho _c}) = \left| {T({\boldsymbol\rho _o})} \right|.$$

Clearly, the spatially dependent visibility provides an image of the object.

Alternatively, one can acquire the image of an absorptive object by subtracting the minimum intensity from the maximum intensity, i.e., by determining the quantity

$$G({\boldsymbol\rho _c}) = {{\cal R}_{{\max}}}({\boldsymbol\rho _c}) - {{\cal R}_{{\min}}}({\boldsymbol\rho _c}).$$

In [22], images were acquired using this method. In Figs. 7-IA and 7-IB, interference is seen in the body of the cat, corresponding to regions of the idler field that are transmitted through a cardboard cutout. No interference is seen outside the cat, because the corresponding idler modes are blocked by the cardboard. If one sums the two outputs, the cat disappears (Fig. 7-ID), and the Gaussian profile of the signal field is seen. This shows that the total signal field intensity is not affected by the absorptive object. Subtracting the two outputs results in a high-contrast absorption image of the sample (Figs. 7-IC and 7-IIC).
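The behavior described for Fig. 7 follows directly from the idealized two-output model (a sketch under the perfect-correlation assumption, with complementary outputs $R_\pm \propto 1 \pm |T|\cos {\phi _{{in}}}$):

```python
import numpy as np

# Two complementary signal outputs of the interferometer (idealized)
T_abs = np.linspace(0.0, 1.0, 5)   # hypothetical transmission values
phi_in = 0.0                        # phase set for maximum contrast
R_plus = 1 + T_abs * np.cos(phi_in)
R_minus = 1 - T_abs * np.cos(phi_in)

print(R_plus + R_minus)   # flat: the sum carries no object information
print(R_plus - R_minus)   # 2|T|: the difference is the absorption image
```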


Fig. 7. Absorption and phase imaging enabled by momentum correlations. I(A–D) and II(A-B) have been adapted from [86], which used the setup in Fig. 6(a) and realized imaging enabled by momentum correlations. In IA and IB are shown two intensity signal outputs of a collinear non-degenerate ZWMI. The detection wavelength is $810 \pm 1.5\;{\rm{nm}}$; the sample is a cardboard cutout placed at the Fourier plane of the sources and illuminated by an idler beam with wavelength centered at 1550 nm. The difference (sum) of those two outputs is shown in IC (ID). Phase imaging of an etched silica plate using the same setup is shown in IIA and IIB. Momentum correlation enabled absorption (IIC) and phase (IID) images (adapted from [64]) from a sample of a mouse heart. The setup is shown in Fig. 3(c) with the addition of lenses and an off-axis parabolic mirror. The detection and illumination central wavelengths are 0.8 µm and 3.8 µm, respectively.


We call $G({\boldsymbol\rho _c})$ the image function. It can be readily checked from Eq. (34) that when the momenta of the twin photons are maximally correlated, $G({\boldsymbol\rho _c}) \propto | {T({\boldsymbol\rho _o})} |$.

Since ${\arg}\{T({\boldsymbol\rho _o})\}$ appears in Eq. (34), phase imaging is also possible using this scheme (Figs. 7-IIA, 7-IIB, and 7-IID). For objects with relatively simple phase distributions, such as the ones considered in [22], the image can be obtained by intensity subtraction.

2. Image Magnification

It follows from Eqs. (34) and (35) that point ${\boldsymbol\rho _o}$ on the object is imaged at point ${\boldsymbol\rho _c}$ on the camera. Therefore, the image magnification ($M$) is equal to the ratio $|{\boldsymbol\rho _c}|/|{\boldsymbol\rho _o}|$. It now follows from Eqs. (27) and (31) that

$$M \equiv \frac{{|{\boldsymbol\rho _c}|}}{{|{\boldsymbol\rho _o}|}} = \frac{{{f_c}{\lambda _s}|{{\textbf{q}}_s}|}}{{{f_I}{\lambda _I}|{{\textbf{q}}_I}|}}.$$

We now note that image blurring must be neglected to define the magnification. Therefore, we must consider only the ideal case in which the momenta of the twin photons are perfectly correlated. As mentioned above, in this case, the probability density governing the momentum correlation can be effectively replaced by a Dirac delta function. In fact, Eqs. (34) and (35) are obtained with this condition. In particular, we consider twin photons generated by SPDC, for which $P({{\textbf{q}}_s},{{\textbf{q}}_I}) \propto \delta ({{\textbf{q}}_s} + {{\textbf{q}}_I})$. Consequently, to determine the image magnification, we need to use the condition ${{\textbf{q}}_I} = - {{\textbf{q}}_s}$, which implies $|{{\textbf{q}}_s}| = |{{\textbf{q}}_I}|$. Applying this condition to Eq. (37), we find that the image magnification is given by

$$M = \frac{{{f_c}{\lambda _s}}}{{{f_I}{\lambda _I}}}.$$
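For concreteness, Eq. (38) can be evaluated numerically. The focal lengths below are hypothetical example values, not taken from any cited experiment:

```python
# Evaluate Eq. (38): M = (f_c * lambda_s) / (f_I * lambda_I).
# The focal lengths are hypothetical example values.
def magnification(f_c, lam_s, f_I, lam_I):
    """Wavelength-dependent image magnification of far-field QIUP, Eq. (38)."""
    return (f_c * lam_s) / (f_I * lam_I)

# Detected (signal) 810 nm, undetected (idler) 1550 nm, equal focal lengths:
M = magnification(f_c=0.10, lam_s=810e-9, f_I=0.10, lam_I=1550e-9)
print(f"M = {M:.3f}")  # M = 0.523: the image is demagnified since lambda_s < lambda_I
```

With equal focal lengths, the magnification reduces to the wavelength ratio ${\lambda _s}/{\lambda _I}$, which is the dependence illustrated in Fig. 8.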

An interesting feature of QIUP in the far-field configuration (i.e., enabled by momentum correlation) is that image magnification depends on the wavelengths of twin photons. This fact is illustrated by Fig. 8, which shows experimental observations presented in [83]. The wavelength dependence of magnification is also observed in the experiments reported in [22,64,65,83].


Fig. 8. Image magnification in momentum correlation enabled QIUP. The same object is imaged for two sets of values of ${\lambda _s}$ and ${\lambda _I}$, while other parameters such as focal lengths and distances are unchanged. A higher value of the ratio ${\lambda _s}/{\lambda _I}$ results in a larger image magnification (right). (Adapted from Fig. 2 of [83].)


Thus far, we have not considered the sign of the magnification. This is because the sign will depend on the details of the lens systems used in the setup. For example, if the lens system is exactly as chosen in [22], the image magnification will have a positive sign, i.e., the image will be erect. A detailed analysis of the image magnification for this case is presented in [23].

3. Spatial Resolution

The resolution limit of momentum correlation enabled QIUP can be studied by applying the theory discussed in Section 4.B.1. A detailed description of this topic can be found in [83]. Here we discuss one resolution measure, namely, the edge-spread function (ESF). We will consider another resolution measure in Section 4.C.3, where we discuss the resolution limit of position correlation enabled QIUP.

We pointed out in Section 4.B.1 that the image of an absorptive object can be obtained by determining the image function, $G({\boldsymbol\rho _c})$. It follows from Eqs. (33) and (36) that

$$G({\boldsymbol\rho _c}) \propto \int {\rm d}{{\textbf{q}}_I} P({{\textbf{q}}_s},{{\textbf{q}}_I})\left| {T({\boldsymbol\rho _o})} \right|,$$
where ${\boldsymbol\rho _o}$, ${{\textbf{q}}_I}$ and ${\boldsymbol\rho _c}$, ${{\textbf{q}}_s}$ are related by Eqs. (27) and (31), respectively. We will use the image function to determine the resolution because the mathematical analysis becomes simpler. We stress that the results remain the same if one obtains the image from visibility.

It is evident from Eq. (39) that when the momenta of the twin photons are not perfectly correlated, information about a range of points on the object plane appears at a single point on the camera. The broader the probability distribution $P({{\textbf{q}}_s},{{\textbf{q}}_I})$, the larger the range of the points on the object plane. Therefore, it can be readily guessed that a weaker momentum correlation results in reduced resolution.

To study the resolution quantitatively, we need to know the form of the probability density function, $P({{\textbf{q}}_s},{{\textbf{q}}_I})$. To this end, we consider twin photons generated through SPDC and assume that the pump beam has a Gaussian profile. In this case, the probability density function can be approximated in the following form (see, for example, [30,87]):

$$P({{\textbf{q}}_s},{{\textbf{q}}_I}) \propto {\exp}\! \left({- \frac{1}{2}|{{\textbf{q}}_s} + {{\textbf{q}}_I}{|^2}w_p^2} \right),$$
where ${w_p}$ represents the waist of the Gaussian pump beam. Clearly, the standard deviation of the probability distribution is inversely proportional to the pump waist. Consequently, a larger pump waist (${w_p}$) results in a narrower probability distribution, i.e., enhanced momentum correlation.

To determine the ESF, a knife edge can be used as an object. The image, which turns out to be a blurred edge, effectively represents the ESF. Without any loss of generality, we assume that the knife edge is placed parallel to the ${y_o}$ axis and along the line ${x_o} = x_0^\prime $, such that the idler field is blocked for ${x_o} \le {x_0^{\prime}}$. Therefore, we can write

$$T({\boldsymbol\rho _o}) \equiv T({x_o},{y_o}) = \left\{{\begin{array}{*{20}{l}}0&\,\,\,{{x_o} \le {{x^\prime_0}},}\\1&\,\,\,{{x_o} \gt {{x^\prime_0}},}\end{array}} \right.\quad \forall \;{y_o}.$$

It now follows from Eqs. (39)–(41) that the ESF is given by

$${\rm{ESF}}({x_c}) \propto G({\boldsymbol\rho _c}) \propto {\rm{Erfc}}\left({\frac{{\sqrt 2 \pi {w_p}}}{{{f_c}{\lambda _s}}}\left({{x_c} - Mx_0^\prime} \right)} \right),$$
where Erfc is the complementary error function, and $M$ is the wavelength-dependent image magnification given by Eq. (38). We stress that to determine the ESF, one can also measure the position-dependent visibility instead of the image function.

Figure 9(a) shows an experimentally observed image of a knife edge [83]. The experimental results are in full accordance with the theoretical predictions made by Eq. (42). A measure of image blurring is how steeply the complementary error function representing the ESF rises. A sharper rise means less blurring. Mathematically, the blurring can be quantified by the inverse of the coefficient of ${x_c}$ inside the Erfc in Eq. (42), i.e., by

$$\sigma = \frac{{{f_c}{\lambda _s}}}{{\sqrt 2 \pi {w_p}}}.$$

Fig. 9. Edge-spread function (ESF) and resolution. (a) The image of a knife edge is obtained by measuring the position-dependent visibility on the camera (left). The visibility measured along an axis (${x_c}$) is fitted with an error function to experimentally determine the edge-spread function (right). The blurring ($\sigma$) is determined from the ESF. (b) Experimentally measured values (data points) of $\sigma$ are compared with theoretical prediction (solid lines) for two sets of wavelengths, ${\lambda _I} = 1550 \;{\rm{nm}}$, ${\lambda _s} = 810 \;{\rm{nm}}$ (red) and ${\lambda _I} = 780 \;{\rm{nm}}$, ${\lambda _s} = 842\; {\rm{nm}}$ (blue). Since the detected wavelengths are close to each other, the blurring appears to be almost equal despite a wide difference between the illuminating (undetected) wavelengths. (c) The resolution ($\sigma /M$) is measured experimentally (data points) and compared with theoretical results (solid curves) for the same sets of wavelengths. A shorter illumination wavelength results in higher resolution. The resolution improves with increasing pump waist (${w_p}$), i.e., with stronger momentum correlation between twin photons. (Adapted from Fig. 4 of [83].)


This quantity can be determined from the experimentally obtained ESF [Fig. 9(a)]: one can check from the properties of the complementary error function that $\sigma$ is the distance for which the value of ESF rises from 24% to 76% of the maximum attainable value.
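The 24%-to-76% property follows from the complementary error function alone and can be checked with Python's standard library. Here the edge position and $\sigma$ are normalized to 0 and 1, so the two probe points sit half a $\sigma$ on either side of the edge:

```python
import math

# Eq. (42): ESF(x_c) ∝ erfc((x_c - M*x0')/sigma), with the maximum attainable
# value erfc(-inf) = 2. Check that the ESF passes between 24% and 76% of that
# maximum over a distance sigma centered on the (magnified) edge position.
def esf_fraction(x, x0=0.0, sigma=1.0):
    """ESF normalized to its maximum attainable value."""
    return math.erfc((x - x0) / sigma) / 2.0

hi = esf_fraction(-0.5)  # half a sigma on the transmitting side of the edge
lo = esf_fraction(+0.5)  # half a sigma on the blocked side
print(f"{hi:.3f}, {lo:.3f}")  # 0.760, 0.240
```

The direction in which the ESF rises depends only on which side of the edge transmits; the 24%–76% span over a distance $\sigma$ is orientation independent.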

Equation (43) shows how the resolution depends on the momentum correlation between the twin photons. As mentioned below Eq. (40), a larger value of the pump waist (${w_p}$) implies a stronger momentum correlation between the twin photons. It follows from Eq. (43) that a larger value of ${w_p}$ results in a smaller value of $\sigma$, i.e., less blurring and hence higher resolution. Figure 9(b) shows the experimentally measured values of $\sigma$ for two experimental setups [83]. The solid lines represent theoretical predictions made from Eq. (43). Figure 10(a) demonstrates how the image of a collection of three slits gets blurred when the momentum correlation between twin photons is reduced.


Fig. 10. Resolution of momentum correlation enabled QIUP. (a) Resolution improves as the momentum correlation becomes stronger. A set of slits is imaged for five values of pump waist (${w_p}$) in decreasing order (left to right). A larger value of ${w_p}$ means a stronger momentum correlation between the twin photons, which results in higher resolution. (Wavelengths are kept the same for each measurement.) (b) A smaller value of undetected wavelength (${\lambda _I}$) results in better resolution. The same set of slits is imaged for ${\lambda _I} = 1550 \;{\rm{nm}}$ (left) and ${\lambda _I} = 780 \;{\rm{nm}}$ (right), while the pump waist is kept the same. (Adapted from Figs. 3b and 5b of [83].)


We now discuss the wavelength dependence of the resolution. Equation (43) shows that $\sigma$ does not depend on the wavelength (${\lambda _I}$) of the undetected photon that interacts with the object; it instead depends on the wavelength (${\lambda _s}$) of the detected photon that never interacts with the object. However, it must not be concluded from this observation that the resolution depends on the detected wavelength. This is because $\sigma$ is the blurring measured in the camera coordinates, whereas the resolution is essentially the minimum resolvable distance on the object plane. Therefore, a more accurate measure of resolution is obtained if one divides $\sigma$ by the magnification, $M$, which amounts to expressing $\sigma$ in object coordinates. The division by magnification is essential for understanding the wavelength dependence of the resolution because in this imaging configuration, magnification depends on wavelength. It now follows from Eqs. (38) and (43) that

$${\rm{res}} = \frac{\sigma}{M} = \frac{{{f_I}{\lambda _I}}}{{\sqrt 2 \pi {w_p}}},$$
which is a measure of resolution of this imaging scheme. It follows from Eq. (44) that the resolution depends only on the undetected wavelength, i.e., the wavelength that probes the object. Here we note that the resolution depends on the momentum correlation in the same way $\sigma$ does. Therefore, our conclusions regarding the dependence of resolution on the momentum correlation remain unchanged.
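To see the wavelength scaling of Eq. (44) numerically, consider the two illumination wavelengths used in [83] with a hypothetical focal length and pump waist (their particular values cancel in the ratio):

```python
import math

# Eq. (44): res = f_I * lambda_I / (sqrt(2) * pi * w_p).
# f_I and w_p below are hypothetical example values; the point is that
# the resolution scales with the undetected (illumination) wavelength only.
def resolution(lam_I, f_I=0.10, w_p=1.0e-3):
    return f_I * lam_I / (math.sqrt(2) * math.pi * w_p)

res_1550 = resolution(1550e-9)  # setup with lambda_I = 1550 nm
res_780 = resolution(780e-9)    # setup with lambda_I = 780 nm
# The two minimum resolvable distances differ by exactly the wavelength ratio:
print(f"{res_1550 / res_780:.2f}")  # 1.99
```

This is the behavior seen in Fig. 9(c): nearly equal $\sigma$ (set by the close detected wavelengths) but roughly a factor-of-two difference in $\sigma/M$.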

The wavelength dependence of resolution has been verified experimentally by building two experimental setups for which the values of detected wavelength are very close (810 nm and 842 nm), whereas the undetected wavelengths are widely separated (1550 nm and 780 nm) [83]. The experimental results are displayed in Fig. 9(b), which confirms the prediction made by Eq. (43): the values of $\sigma$ for the two setups are very close to each other because the values of detected wavelength are also very close. However, the experimentally obtained values of $\sigma /M$ show that the resolutions of the two setups are significantly different [Fig. 9(c)]. This is because the undetected wavelengths for the two setups are widely separated. The wavelength dependence of the resolution has also been experimentally tested by using a 1951 USAF resolution test chart, and it has been found that the results match accurately with theoretical predictions [83]. In Fig. 10(b), we show images of the same collection of slits for two values of undetected wavelength while the pump waist (i.e., momentum correlation) is kept fixed. It is evident from this figure that the resolution is higher for a shorter undetected wavelength.

In [64,65], the same imaging resolution [Eq. (44)] was verified for the $SU(1,1)$ interferometer with illumination photons in the mid-infrared ($3\;{{\unicode{x00B5}{\rm m}}}$) and a detection wavelength suitable for silicon-based cameras.

4. Resolution and Field of View

A useful parameter to calculate is the number of spatial modes per direction, also sometimes referred to as “number of spatial modes,” which is typically estimated by the field of view (FoV) divided by the spatial resolution. The FoV is straightforwardly given by the emission angle of the downconverted idler light that defines the size of the illuminating area:

$${{\rm{FOV}}_{{\rm{MC}}}} = 2{f_I}\tan ({\theta _I}) \approx 2{f_I}{\theta _I},$$
where ${f_I}$ denotes the focal length of the collimating optical element adjacent to the crystal, and ${\theta _I}$ is the idler divergence angle. Defined as half-width at half-maximum (HWHM), it is given by
$${\theta _I} = {\lambda _I}\sqrt {\frac{{2.78{n_s}{n_I}}}{{\pi L({n_s}{\lambda _I} + {n_I}{\lambda _s})}}} ,$$
where ${n_s}\!({n_I})$ is the index of refraction of the signal (idler) field in the crystal (see supplementary material in [64]).

The number of spatial modes per direction can therefore be estimated as

$${m_{{MC}}} = \frac{{{{\rm{FOV}}_{{\rm{MC}}}}}}{{{\rm{re}}{{\rm{s}}^{{\rm{FWHM}}}}}} \propto {w_p}\sqrt {\frac{n}{{L({\lambda _I} + {\lambda _s})}}} ,$$
where ${\rm{re}}{{\rm{s}}^{{\rm{FWHM}}}} = 2\sqrt {\ln 2} \;{\rm{res}}$ and $n = {n_s} \approx {n_I}$. Unsurprisingly, the number of spatial modes per direction does not depend on ${f_I}$ or the magnification ($M$)—provided that the intermediate optics features a sufficiently large numerical aperture.
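A rough numerical sketch of Eqs. (44)–(47) follows. All parameter values are hypothetical, chosen only to be of realistic order of magnitude:

```python
import math

# Estimate the number of spatial modes per direction, Eqs. (45)-(47).
# All parameter values below are hypothetical examples, not taken from [64].
lam_s, lam_I = 810e-9, 1550e-9   # signal / idler wavelengths (m)
n_s = n_I = 2.2                  # crystal refractive indices (assumed equal)
L = 2e-3                         # crystal length (m)
f_I = 0.10                       # collimating focal length (m)
w_p = 1e-3                       # pump waist (m)

# Idler divergence half-angle (HWHM), Eq. (46):
theta_I = lam_I * math.sqrt(2.78 * n_s * n_I /
                            (math.pi * L * (n_s * lam_I + n_I * lam_s)))
fov = 2 * f_I * theta_I                               # Eq. (45)
res = f_I * lam_I / (math.sqrt(2) * math.pi * w_p)    # Eq. (44)
res_fwhm = 2 * math.sqrt(math.log(2)) * res
m_mc = fov / res_fwhm                                 # Eq. (47)
print(f"theta_I = {theta_I*1e3:.1f} mrad, FOV = {fov*1e3:.1f} mm, modes = {m_mc:.0f}")
```

Doubling $f_I$ here doubles both the FoV and the blurring in object coordinates, leaving the mode count unchanged, as stated above.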

We conclude this section by summarizing the key features of the resolution of momentum correlation enabled QIUP. The resolution improves as the momentum correlation between twin photons becomes stronger. For photons generated by SPDC, the resolution is linearly proportional to the standard deviation of the probability distribution that governs the momentum correlation. The resolution is also linearly proportional to the wavelength of the undetected photon (i.e., the photon that probes the object). Therefore, a shorter undetected wavelength results in a higher resolution.

Finally, the method described in this section applies to any twin-photon state. Here we considered a Gaussian probability distribution. For other forms of probability distribution, an exact mathematical expression may not be obtained, and resorting to numerical simulation may be necessary.

C. Imaging Enabled by Position Correlation

1. General Theory

We now analyze the complementary scenario in which both the object and the camera are placed in the near field relative to the source. In this case, the imaging is enabled by position correlation between the twin photons. We stress that we do not use evanescent fields for image acquisition, and therefore, our method is not to be confused with conventional near-field imaging. We simply place the object and the camera at separate image planes of the sources.

The joint probability density of detecting signal and idler photons at positions (transverse coordinates) ${\boldsymbol\rho _s}$ and ${\boldsymbol\rho _I}$, respectively, on the source plane is given by [30]

$$P({\boldsymbol\rho _s},{\boldsymbol\rho _I}) \propto {\left| {\int {\rm d}{{\textbf{q}}_s}\;{\rm d}{{\textbf{q}}_I}\;C({{\textbf{q}}_s},{{\textbf{q}}_I})\;{e^{i({{\textbf{q}}_s} \cdot {\boldsymbol\rho _s} + {{\textbf{q}}_I} \cdot {\boldsymbol\rho _I})}}} \right|^2},$$
where $C({{\textbf{q}}_s},{{\textbf{q}}_I})$ is introduced in Eq. (23). The position correlation between the two photons is governed by this joint probability density. If $P({\boldsymbol\rho _s},{\boldsymbol\rho _I})$ can be expressed as a product of a function of ${\boldsymbol\rho _s}$ and a function of ${\boldsymbol\rho _I}$, there is no position correlation. In the other extreme case, when the positions of the two photons are maximally correlated, the joint probability density is proportional to a Dirac delta function.

The schematic of the imaging setup is given in Fig. 6. In contrast to the case of momentum correlation enabled QIUP, in this case, the propagating part of the source field is recreated on the object by an imaging system.

As usual, beam ${I_1}$ from source ${Q_1}$ illuminates the object, passes through source ${Q_2}$, and gets perfectly aligned with beam ${I_2}$. An imaging system, $B$, is placed between source ${Q_1}$ and the object ($O$) such that the idler field at ${Q_1}$ is imaged onto the object with magnification ${M_I}$. Another imaging system, $B^\prime $, images the idler field at the object onto source ${Q_2}$ with magnification $1/{M_I}$ (i.e., demagnified by an equal amount). These two imaging systems also ensure that ${Q_2}$ lies on the image plane of ${Q_1}$. For simplicity, we have assumed that the magnifications of $B$ and $B^\prime $ have the same sign. To obtain the best possible alignment of beams ${I_1}$ and ${I_2}$, it is essential that the magnitude of the total magnification due to the combined effect of $B$ and $B^\prime $ is one.

The object is once again characterized by its complex amplitude transmission coefficient $T({\boldsymbol\rho _o})$, where ${\boldsymbol\rho _o}$ represents a point on the object plane. Since the object is treated like a BS, the quantum field associated with the idler photon at ${Q_2}$ is related to that at ${Q_1}$ by the following formula [24]:

$$\hat E_{{I_2}}^{(+)}({\boldsymbol\rho _I}) = {e^{i{{\phi ^\prime_I}}({\boldsymbol\rho _I})}}\left[{{e^{i{\phi _I}({\boldsymbol\rho _o})}}T({\boldsymbol\rho _o})\hat E_{{I_1}}^{(+)}({\boldsymbol\rho _I}) + R({\boldsymbol\rho _o})\hat E_0^{(+)}({\boldsymbol\rho _I})} \right],$$
where $\hat E_0^{(+)}({\boldsymbol\rho _I})$ is the corresponding vacuum field, $|T({\boldsymbol\rho _o}{)|^2} + |R({\boldsymbol\rho _o}{)|^2} = 1$, phases ${\phi _I}({\boldsymbol\rho _o})$ and ${\phi ^\prime _I}({\boldsymbol\rho _I})$ are introduced by imaging systems $B$ and $B^\prime $, respectively, and due to the presence of these imaging systems, ${\boldsymbol\rho _o}$ and ${\boldsymbol\rho _I}$ are related by the formula
$${\boldsymbol\rho _o} = {M_I}{\boldsymbol\rho _I}.$$

It follows from Eq. (49) that (see [24] for a detailed proof)

$$\begin{split}{{\hat a}_{{I_2}}}({{\textbf{q}}_I}) & = \int {\rm d}{\textbf{q}}_I^\prime \frac{1}{{M_I^2}}\left[\tilde T^\prime \left({\frac{{{{\textbf{q}}_I} - {\textbf{q}}_I^\prime}}{{{M_I}}}} \right)\;{{\hat a}_{{I_1}}}({\textbf{q}}_I^\prime)\right.\\& \quad+ \left.\tilde R^\prime \left({\frac{{{{\textbf{q}}_I} - {\textbf{q}}_I^\prime}}{{{M_I}}}} \right)\;{{\hat a}_0}({\textbf{q}}_I^\prime)\right],\end{split}$$
where $\tilde T^\prime ({{\textbf{q}}_I}/{M_I})$ and $\tilde R^\prime ({{\textbf{q}}_I}/{M_I})$ are the Fourier transforms of ${\exp}[i\{{\phi _I}({M_I}{\boldsymbol\rho _I}) + {\phi ^\prime _I}({\boldsymbol\rho _I})\}]T({M_I}{\boldsymbol\rho _I})$ and ${\exp}[i{\phi ^\prime _I}({\boldsymbol\rho _I})]R({M_I}{\boldsymbol\rho _I})$, respectively. We encourage readers to convince themselves that ${\hat a_0}$ is related to $\hat E_0^{(+)}$ in the same way ${\hat a_{{I_j}}}$ is related to $\hat E_{{I_j}}^{(+)}$.

It becomes evident by comparing Eq. (51) with Eq. (28) that the conditions due to the alignment of the idler beams are significantly different in near- and far-field QIUP. Although the initial quantum state generated by the two sources is once again given by Eq. (26), the difference between the alignment conditions ensures that the final quantum state of light generated by the imaging system is distinct in the two configurations. In the present scenario, the quantum state generated by the system is obtained by combining Eqs. (26) and (51) and is given by [24]

$$\begin{split}|\psi \rangle &= {\alpha _1}\int {\rm d}{{\textbf{q}}_{{I_1}}}\;d{{\textbf{q}}_{{s_1}}}\;C({{\textbf{q}}_{{I_1}}},{{\textbf{q}}_{{s_1}}})\;|{{\textbf{q}}_{{I_1}}}{\rangle _{{I_1}}}|{{\textbf{q}}_{{s_1}}}{\rangle _{{s_1}}}\\& \quad+ {\alpha _2}\int {\rm d}{{\textbf{q}}_{{I_2}}}\;d{{\textbf{q}}_{{s_2}}}\;d{\textbf{q}}_I^\prime \;C({{\textbf{q}}_{{I_2}}},{{\textbf{q}}_{{s_2}}})\\&\quad \times \frac{1}{{M_I^2}}{\left[\tilde T^{\prime *}\left({\frac{{{{\textbf{q}}_{{I_2}}} - {\textbf{q}}_I^\prime}}{{{M_I}}}} \right)\;|{\textbf{q}}_I^\prime \rangle _{{I_1}}\right.}\\&\quad + \left.\tilde R^{\prime *}\left({\frac{{{{\textbf{q}}_{{I_2}}} - {\textbf{q}}_I^\prime}}{{{M_I}}}} \right)\;|{\textbf{q}}_I^\prime {\rangle _0}\right]|{{\textbf{q}}_{{s_2}}}{\rangle _{{s_2}}},\end{split}$$
where $|{\textbf{q}}{\rangle _0} = \hat a_0^\dagger ({\textbf{q}})|vac\rangle$.

The two signal beams (${S_1}$ and ${S_2}$) are superposed by a 50:50 BS, and one of the outputs of BS is detected by a camera. An imaging system ($A$) with magnification ${M_S}$ ensures that the signal field at the sources is imaged onto the camera [Figs. 6(a) and 6(b)]. If we represent the signal field at each source ($z = 0$) by its angular spectrum ([27], Sec. 3.2; see also [87]), the positive frequency part of the total signal field at point ${\boldsymbol\rho _c}$ on the camera is given by [24]

$$\hat E_s^{(+)}({\boldsymbol\rho _c}) \propto \int {\rm d}{{\textbf{q}}_s}\left[{\;{{\hat a}_{{s_1}}}\!({{\textbf{q}}_s}) + i{e^{i[{\phi _{s0}} + {\phi _s}({\boldsymbol\rho _c})]}}\;{{\hat a}_{{s_2}}}({{\textbf{q}}_s})\;} \right]{e^{i{{\textbf{q}}_s} \cdot {\boldsymbol\rho _s}}},$$
where the phase difference between the two signal fields is expressed as a sum of ${\phi _{s0}}$ and ${\phi _s}({\boldsymbol\rho _c})$; the former is a spatially independent phase that can be varied to obtain interference patterns, and the latter is a spatially dependent phase that can arise due to the presence of imaging system $A$. The presence of this imaging system results in the following relationship between the coordinates:
$${\boldsymbol\rho _c} = {M_s}{\boldsymbol\rho _s}.$$

The photon counting rate at point ${\boldsymbol\rho _c}$ on the camera is determined by the standard formula ${\cal R}({\boldsymbol\rho _c}) \propto \langle \psi |\hat E_s^{(-)}({\boldsymbol\rho _c})\hat E_s^{(+)}({\boldsymbol\rho _c})|\psi \rangle ,$ where $\hat E_s^{(-)}({\boldsymbol\rho _c}) = [\hat E_s^{(+)}({\boldsymbol\rho _c}{)]^\dagger}$. Using Eqs. (48), (50), and (52)–(54), we find that [24]

$$\begin{split}{\cal R}({\boldsymbol\rho _c}) &\propto \int {\rm d}{\boldsymbol\rho _I}P({\boldsymbol\rho _s},{\boldsymbol\rho _I})\left(\vphantom{\left({\frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \right)}1 + |T({\boldsymbol\rho _o})| {\cos}\left[\vphantom{\left({\frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \right)}{\phi _{{in}}} + {\phi _s}({\boldsymbol\rho _c}) \right.\right.\\&\quad- {\phi _I}({\boldsymbol\rho _o})- \left.\left.{ {{\phi ^\prime_I}}\left({\frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \right) - {\phi _T}({\boldsymbol\rho _o})} \right] \right),\end{split}$$
where ${\phi _{{in}}} = {\phi _{s0}} + {\arg}\{{\alpha _2}\} - {\arg}\{{\alpha _1}\}$, and we have assumed $|{\alpha _1}| = |{\alpha _2}| = 1/\sqrt 2$ for simplicity.

It follows from Eq. (55) that the information about the object (both magnitude and phase of the amplitude transmission coefficient) appears in the interference pattern observed on the camera, even though the photons probing the object are not detected by the camera. The presence of $P({\boldsymbol\rho _s},{\boldsymbol\rho _I})$ in Eq. (55) shows that the position correlation between the twin photons enables image acquisition. For example, when there is no correlation between the positions, $P({\boldsymbol\rho _s},{\boldsymbol\rho _I})$ can be expressed in the product form $P({\boldsymbol\rho _s},{\boldsymbol\rho _I}) = {P_s}({\boldsymbol\rho _s}){P_I}({\boldsymbol\rho _I})$. It can be checked from Eq. (55) that, in this case, no interference pattern will be observed, and the information about the object will be absent in the photon counting rate measured by the camera. Therefore, for the imaging scheme to work, there must be some correlation between the positions of the twin photons. Furthermore, the position correlation also determines the image resolution; we will elaborate on this in Section 4.C.3.

To illustrate the image formation, we consider the special case in which the positions of the photon pair are maximally correlated, i.e., $P({\boldsymbol\rho _s},{\boldsymbol\rho _I}) \propto \delta ({\boldsymbol\rho _s} - {\boldsymbol\rho _I})$. We now have from Eq. (55) that [24]

$${\cal R}({\boldsymbol\rho _c}) \propto 1 + \left| {T\left({{\boldsymbol\rho _o}} \right)} \right|\cos\!\left[{{\phi _{{in}}} - {\phi _T}({\boldsymbol\rho _o})} \right],$$
where, for simplicity, we have assumed that the phases introduced by the imaging systems are spatially independent and have included them in ${\phi _{{in}}}$. It is evident that if ${\phi _{{in}}}$ is varied, the photon counting rate (intensity) at each point on the camera varies sinusoidally, i.e., a single-photon interference pattern is observed at each point on the camera.

The image is acquired from these interference patterns in the same way as discussed in Section 4.B.1. For example, when the object is purely absorptive, i.e., when $T({\boldsymbol\rho _o}) = |T({\boldsymbol\rho _o})|$, the image is given by the spatially dependent visibility measured on the camera [24]:

$${\cal V}({\boldsymbol\rho _c}) = \left| {T({\boldsymbol\rho _o})} \right|.$$

Alternatively, the image can also be obtained by measuring the image function given by Eq. (36). It can be verified from Eq. (55) that both the visibility and image function are independent of the phases introduced by the imaging systems. Therefore, the image of an absorptive object can be acquired using this method even if the phases introduced by the imaging systems are spatially dependent. However, image acquisition of a phase object may require information about these phases.
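As a concrete illustration of Eq. (56), the modulus and phase of $T$ at a single camera pixel can be recovered by stepping ${\phi _{in}}$ through four values. This four-step phase-shifting recipe is a standard interferometric technique, not a procedure spelled out in the text (which works with the visibility and the image function); the sketch below assumes maximal position correlation and a noiseless detector:

```python
import math

# Recover |T| and phi_T at one camera pixel from Eq. (56),
# R(phi_in) = 1 + |T| cos(phi_in - phi_T), via four-step phase shifting.
def recover(T_abs, phi_T):
    # Simulated (noiseless) counting rates at phi_in = 0, pi/2, pi, 3pi/2:
    I = [1 + T_abs * math.cos(phi - phi_T)
         for phi in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
    amp = 0.5 * math.hypot(I[0] - I[2], I[1] - I[3])  # = |T|
    phase = math.atan2(I[1] - I[3], I[0] - I[2])      # = phi_T
    return amp, phase

amp, phase = recover(T_abs=0.6, phi_T=0.4)
print(f"{amp:.6f} {phase:.6f}")  # 0.600000 0.400000
```

Since $I(0)-I(\pi) = 2|T|\cos{\phi _T}$ and $I(\pi/2)-I(3\pi/2) = 2|T|\sin{\phi _T}$, both the absorption and the phase image follow from the same four frames.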

2. Image Magnification

To determine the magnification, we once again need to neglect image blurring. Therefore, we must consider the case in which the twin photons are maximally position correlated, i.e., $P({\boldsymbol\rho _s},{\boldsymbol\rho _I}) \propto \delta ({\boldsymbol\rho _s} - {\boldsymbol\rho _I})$. It now follows from Eqs. (50) and (54) that the image magnification is given by [24]

$$M = \frac{{{M_s}}}{{{M_I}}}.$$

Clearly, the magnification does not have any explicit dependence on the wavelengths of the photons. This fact marks one important distinction between momentum correlation enabled and position correlation enabled imaging.

3. Spatial Resolution

We note from Eq. (55) that information about a range of points on the object plane, averaged by the joint probability density $P({\boldsymbol\rho _s},{\boldsymbol\rho _I})$, appears at a single point on the camera. Since this probability distribution characterizes the position correlation between the twin photons [see Eq. (48)], it becomes evident that the position correlation plays the key role in the image formation in this case.

The resolution of position correlation enabled QIUP will be studied following the method described in Section 4.B.3. However, we choose a different resolution measure, namely, the minimum resolvable distance. In this case, we consider two radially opposite points, separated by a distance $d$ and located on an axis (say, ${X_o}$) on the object plane. These two points can be represented by the amplitude transmission coefficient

$$T({\boldsymbol\rho _o}) \equiv T({x_o},{y_o}) \propto \delta ({y_o})[\delta ({x_o} - d/2) + \delta ({x_o} + d/2)],$$
where ${x_o}$ and ${y_o}$ represent the position along two mutually orthogonal Cartesian coordinate axes ${X_o}$ and ${Y_o}$, respectively.

The image of the pair of points can be obtained by determining the image function. It follows from Eqs. (36) and (55) that for position correlation enabled QIUP, the image function takes the form

$$G({\boldsymbol\rho _c}) \propto \int {\rm d}{\boldsymbol\rho _o}P\left({\frac{{{\boldsymbol\rho _c}}}{{{M_s}}},\frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \right)|T({\boldsymbol\rho _o})|.$$

To assume a form of the joint probability density, we once again assume that the twin photons are generated through SPDC. In this case, one can represent the probability density function in the following form [84]:

$$\!\!\!P\left({\frac{{{\boldsymbol\rho _c}}}{{{M_s}}},\frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \!\right) \propto \exp \left[\!{- \frac{{4\pi}}{{L({\lambda _I} + {\lambda _s})}}{{\left| {\frac{{{\boldsymbol\rho _c}}}{{{M_s}}} - \frac{{{\boldsymbol\rho _o}}}{{{M_I}}}} \right|}^2}} \right],\!$$
where $L$ represents the length of the nonlinear crystal. The standard deviation of this probability distribution is linearly proportional to $\sqrt L$. Therefore, a shorter crystal generates stronger position correlation between twin photons.
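Reading Eq. (61) as a Gaussian in each transverse component gives a per-component standard deviation $\sqrt {L({\lambda _I} + {\lambda _s})/(8\pi)}$. A minimal numerical sketch (wavelengths chosen to match the examples above):

```python
import math

# Transverse position-correlation width implied by Eq. (61):
# P ∝ exp[-4*pi/(L*(lam_I+lam_s)) * |dr|^2] is Gaussian in each component
# with standard deviation sqrt(L*(lam_I+lam_s)/(8*pi)), i.e. ∝ sqrt(L).
def corr_width(L, lam_s=810e-9, lam_I=1550e-9):
    return math.sqrt(L * (lam_I + lam_s) / (8 * math.pi))

w_2mm = corr_width(2e-3)
w_8mm = corr_width(8e-3)
print(f"{w_2mm*1e6:.1f} um -> {w_8mm*1e6:.1f} um")
# Quadrupling the crystal length doubles the width (weaker correlation):
print(f"{w_8mm / w_2mm:.1f}")  # 2.0
```

The √L scaling is exactly why the resolution formulas below, Eqs. (64) and (65), involve the square root of the crystal length.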

Using Eqs. (59)–(61), we find that the image of the two points is given by the image function

$$\begin{array}{*{20}{l}}G({\boldsymbol\rho _c}) &\propto \exp \left[{- \frac{{4\pi y_c^2}}{{M_s^2L({\lambda _I} + {\lambda _s})}}} \right] \times \left\{{\exp \left[{- \frac{{4\pi}}{{L({\lambda _I} + {\lambda _s})}}{{\left({\frac{{{x_c}}}{{{M_s}}} - \frac{d}{{2{M_I}}}} \right)}^2}} \right]} \right.\\[5pt]&\,+\left. { \exp \left[{- \frac{{4\pi}}{{L({\lambda _I} + {\lambda _s})}}{{\left({\frac{{{x_c}}}{{{M_s}}} + \frac{d}{{2{M_I}}}} \right)}^2}} \right]} \right\}.\end{array}$$

Figure 11(a) displays a simulated image of a pair of points separated by 70 µm for $L = 2 \;{\rm{mm}}$. The other parameters are given in the figure caption. To quantify how resolved the two points are, we consider the function $G({x_c},0)$, which is obtained by setting ${y_c} = 0$ in the image function. If we plot $G({x_c},0)$ against ${x_c}$, we get a double-humped curve, which is illustrated by Fig. 11(b). A measure of how well two points are resolved can be given by the ratio ($\beta$) of the value of $G$ at the dip (${G_{{\rm{dip}}}}$) to that at one of the peaks (${G_{{\rm{peak}}}}$), i.e.,

$$\beta \equiv \frac{{{G_{{\rm{dip}}}}}}{{{G_{{\rm{peak}}}}}}.$$

Fig. 11. Resolution and position correlation between twin photons. (a) Simulated camera image of two points separated by a distance of $d = 70\;{{\unicode{x00B5}{\rm m}}}$ for the following choice of parameters: $L = 2 \;{\rm{mm}}$, ${\lambda _s} = 810 \;{\rm{nm}}$, ${\lambda _I} = 1550 \;{\rm{nm}}$, and ${M_s} = {M_I} = 1$. (b) Image function, $G({x_c},0)$, plotted against ${x_c}$ for the same set of parameters. The ratio ($\beta$) of its value at the dip to that at one of the peaks is $\beta \approx 0.08$. (c) Minimum resolvable distance (${d_{{\min}}}$) plotted against crystal length ($L$) for ${M_I} = 1$ and ${M_I} = 2$ using Eq. (64) (solid lines). The filled circles represent simulated data points for a pair of square pinholes with side length 1 µm. The minimum resolvable distance increases (i.e., resolution reduces) as the position correlation becomes weaker. The resolution also decreases as the imaging magnification, ${M_I}$, from the source to the object increases. [Remaining parameters are the same as in (a) and (b).] (Adapted from Figs. 3c, 3d, and 4c of [84].)


The lower the value of $\beta$, the better resolved the two points are.
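Using the parameters quoted in the caption of Fig. 11, Eq. (62) can be evaluated directly to reproduce the quoted value $\beta \approx 0.08$; a minimal sketch:

```python
import math

# Evaluate beta = G_dip / G_peak from Eq. (62) for the parameters of Fig. 11:
# d = 70 um, L = 2 mm, lam_s = 810 nm, lam_I = 1550 nm, M_s = M_I = 1.
lam_s, lam_I = 810e-9, 1550e-9
L, d = 2e-3, 70e-6
k = 4 * math.pi / (L * (lam_I + lam_s))

def G(x):
    """Image function G(x_c, 0) of Eq. (62) for M_s = M_I = 1."""
    return math.exp(-k * (x - d / 2) ** 2) + math.exp(-k * (x + d / 2) ** 2)

xs = [i * d / 2000 for i in range(-2000, 2001)]  # grid spanning [-d, d]
beta = G(0.0) / max(G(x) for x in xs)
print(f"beta = {beta:.2f}")  # ≈ 0.08, as quoted in the caption of Fig. 11(b)
```

The dip sits at ${x_c} = 0$ by symmetry; for this well-separated pair, the peaks lie essentially at ${x_c} = \pm d/2$.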

The two points cannot be resolved once $\beta$ exceeds a certain value, say, ${\beta _{{\max}}}$. We say that the two points are just resolved when $\beta = {\beta _{{\max}}}$; in this case, the separation between the two points becomes the minimum resolvable distance (i.e., $d = {d_{{\min}}}$). It follows from Eqs. (62) and (63) that ${d_{{\min}}} \propto {M_I}\sqrt {L({\lambda _I} + {\lambda _s})}$. (A detailed explanation is provided in [84].) To obtain a specific value of ${d_{{\min}}}$, one needs to specify the value of ${\beta _{{\max}}}$. There is no strict rule for choosing the value of ${\beta _{{\max}}}$. For the purpose of illustration, we choose ${\beta _{{\max}}} = 0.81$, which appears in the study of the fine structure of spectral lines with a Fabry–Perot interferometer ([88], Sec. 7.6.3). In this case, we numerically obtain the value of the proportionality constant and find it to be approximately 0.53. That is, the minimum resolvable distance defined by setting ${\beta _{{\max}}} = 0.81$ is given by the formula

$${d_{{\min}}} \approx 0.53{M_I}\sqrt {L({\lambda _I} + {\lambda _s})} .$$
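The proportionality constant can be reproduced numerically by bisecting for the separation at which $\beta = 0.81$; a minimal sketch (with ${M_I} = 1$, so the coefficient is ${d_{\min}}/\sqrt {L({\lambda _I} + {\lambda _s})}$):

```python
import math

# Numerically recover the proportionality constant in Eq. (64): find the
# separation d at which beta (Eq. 63) equals beta_max = 0.81, then divide
# by sqrt(L*(lam_I+lam_s)). Here M_I = 1.
lam_s, lam_I, L = 810e-9, 1550e-9, 2e-3
k = 4 * math.pi / (L * (lam_I + lam_s))

def beta_of(d):
    G = lambda x: (math.exp(-k * (x - d / 2) ** 2) +
                   math.exp(-k * (x + d / 2) ** 2))
    peak = max(G(i * d / 2000) for i in range(2001))  # scan [0, d]
    return G(0.0) / peak

# beta decreases monotonically with d; bisect for beta = 0.81:
lo, hi = 1e-6, 100e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if beta_of(mid) > 0.81 else (lo, mid)
d_min = 0.5 * (lo + hi)
coeff = d_min / math.sqrt(L * (lam_I + lam_s))
print(f"coefficient = {coeff:.2f}")  # 0.53
```

Note that at this separation the two humps overlap substantially, so the peak of $G$ is found numerically rather than assumed to lie at $\pm d/2$.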

To test the formula for the minimum resolvable distance, we consider a pair of identical square apertures, each with side length 1 µm, placed radially opposite on the ${X_o}$ axis (object plane). We choose nine values of the crystal length ($L$), and for each crystal length (i.e., fixed amount of position correlation), we choose two values of ${M_I}$. In each case, we numerically determine the distance between the centers of the apertures at which $\beta = {\beta _{{\max}}} = 0.81$. In Fig. 11(c), we compare these numerically simulated distances (data points represented by filled circles) with the minimum resolvable distances (solid curves) predicted by Eq. (64). Clearly, the simulated data are in excellent agreement with the theoretical prediction.

In [85], the same resolution was found by employing an SU(1,1) interferometer. Instead of the minimum resolvable distance [see Eq. (64)], they define the resolution through the corresponding full width half-maximum (FWHM) of the edge spread function on the camera ($\sigma _{{\rm PC}}^{{\rm{FWHM}}}$):

$${\rm{res}}_{{\rm{PC}}}^{{\rm{FWHM}}} = \frac{{\sigma _{{\rm{PC}}}^{{\rm{FWHM}}}}}{M} = 0.44{M_I}\sqrt {\frac{{L({\lambda _I} + {\lambda _s})}}{n}} ,$$
where we assumed the same index of refraction, $n$, for the signal and idler fields in the crystal. Equations (64) and (65) reveal important features of the resolution limit of position correlation enabled QIUP. We find that the resolution scales with the square root of the crystal length. Since a shorter crystal implies a stronger position correlation between the twin photons, it becomes evident that a stronger position correlation results in a higher spatial resolution.

Furthermore, the resolution is linearly proportional to the magnification (${M_I}$) of the imaging system, $B$, placed in the path of the undetected photon. Therefore, if the cross section of the undetected beam (at the source) is demagnified while illuminating the object, the spatial resolution improves, i.e., resolution can be gained at the cost of the FoV. Note that a smaller value of ${M_I}$ results in a higher magnification of the imaging system [see Eq. (58)]. Therefore, if the image magnification is increased using optical components placed in the path of the undetected photon, the resolution also improves. However, if the magnification is increased using optical elements placed in the path of the detected photon, the resolution does not change.

4. Resolution and Field of View

For a Gaussian pump beam, the FoV is a Gaussian distribution with a FWHM given by [85]

$${{\rm FOV}_{{\rm PC}}} = \sqrt {2\ln 2} \,{M_I}{w_p}.$$

The ratio of the FoV and ${\rm{res}}_{{\rm{PC}}}^{{\rm{FWHM}}}$ approximates the number of spatial modes per direction:

$${m_{{\rm PC}}} = \frac{{{{\rm FOV}_{{\rm PC}}}}}{{{\rm{res}}_{{\rm{PC}}}^{{\rm{FWHM}}}}} \propto {w_p}\sqrt {\frac{n}{{L({\lambda _I} + {\lambda _s})}}} .$$
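To get a feel for these numbers, Eqs. (65)–(67) can be combined in a short sketch. The pump waist ${w_p} = 1\;{\rm mm}$ and refractive index $n = 1.8$ used below are illustrative assumptions, not values from the text:

```python
import math

def res_fwhm(M_I, L, lam_s, lam_I, n):
    # Eq. (65): FWHM resolution of position correlation enabled QIUP (lengths in meters)
    return 0.44 * M_I * math.sqrt(L * (lam_I + lam_s) / n)

def fov_fwhm(M_I, w_p):
    # Eq. (66): FWHM field of view for a Gaussian pump with waist w_p
    return math.sqrt(2 * math.log(2)) * M_I * w_p

# Assumed illustrative parameters: w_p = 1 mm, n = 1.8; remaining values as in Fig. 11
L, lam_s, lam_I, M_I = 2e-3, 810e-9, 1550e-9, 1.0
r = res_fwhm(M_I, L, lam_s, lam_I, 1.8)
f = fov_fwhm(M_I, 1e-3)
print(f"res = {r * 1e6:.1f} um, FoV = {f * 1e3:.2f} mm, modes/direction = {f / r:.0f}")
```

Note that ${M_I}$ cancels in the ratio $f/r$, consistent with Eq. (67): demagnifying the undetected beam improves the resolution but shrinks the FoV by the same factor.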

We conclude this section by summarizing the main results relating to the resolution of position correlation enabled QIUP. The resolution improves as the position correlation between the twin photons becomes stronger. For photons generated by SPDC, the resolution scales with the square root of the crystal length, which in turn is proportional to the standard deviation of the probability distribution characterizing the position correlation. The detected and undetected wavelengths play symmetric roles in determining the resolution, and the resolution can be enhanced at the cost of the FoV.


Table 1. Experimental and Theoretical Comparison between QIUP Enabled by Momentum and Position Correlations

The method described in this section applies to any twin-photon state with an idealized angular distribution, such as the Gaussian probability distribution considered here. For other forms of the probability distribution (see, for example, [89]), an exact mathematical expression may not be obtainable, and resorting to numerical simulation may be necessary.

D. Comparison between Momentum and Position Correlation Enabled QIUP

Equations (47) and (67) show that the number of spatial modes per direction has the same dependence on experimental parameters in position correlation and momentum correlation enabled QIUP. However, these results were obtained considering the Gaussian approximation for the ${\sin}{{\rm{c}}^2}$-shaped angular emission probability, which for the case of standard collinear SPDC leads to significant deviations from the exact number of spatial modes per direction [85]. This is particularly noteworthy in the case of position correlation enabled imaging, where the number of spatial modes per direction is significantly reduced compared to imaging via momentum correlation [85].

In Table 1, we summarize the comparison between QIUP schemes that utilize momentum correlation and position correlation between twin photons.

E. Imaging without Entanglement

The state in Eq. (23) can be entangled, i.e., $C({{\textbf{q}}_s},{{\textbf{q}}_I}) \ne {C_s}({{\textbf{q}}_s}){C_I}({{\textbf{q}}_I})$. Of course, a pure state such as Eq. (23) cannot present spatial correlations without being spatially entangled. However, one could consider a more general case in which the initial state produced at each source is not pure but a mixed transverse spatial state; in that case, it is possible to have spatial correlations without transverse spatial entanglement. This case highlights that spatial entanglement is not necessary for the general imaging scheme.

To illustrate this further, let us consider two separate SU(1,1)s (or two ZWMIs), as in Fig. 12(a). One interferometer is placed close to the other such that a single camera can simultaneously capture the signal outputs of both interferometers. In the top SU(1,1), one places a sample with field transmittance ${T_1}$, and in the other, a sample with field transmittance ${T_2}$. Even though the two interferometers share no entanglement, an image of ${T_1}$ and ${T_2}$, and their spatial separation, can be observed on the camera. This shows that spatial entanglement is not necessary for QIUP. This is equivalent to thinking of the initial downconverted state as a mixed state and using a scanning detector to obtain the image pixel by pixel.


Fig. 12. Classical transverse spatial correlations suffice for imaging. (a) Two spatially separated interferometers can together produce an image of two objects with transmissions ${T_1}$ and ${T_2}$ on a camera. Transverse spatial entanglement is not necessary for QIUP. (b) In classical imaging with undetected light [90], a seed laser together with the pump laser in a nonlinear interferometer can produce an image on the camera at the unseeded wavelength. While that scheme does not have all the properties of its quantum-light version, it shows that transverse spatial entanglement is not strictly necessary for the phenomenon of imaging with undetected light.


A direct experimental result that proves that imaging in a nonlinear interferometer is possible without spatial entanglement is described in [90]. In that experiment, a seeding laser was injected through the first pass of an SU(1,1) [Fig. 12(b)], where it stimulated emission into the signal mode. This laser then went through the object, was reflected on the mirror, and passed again through the crystal, stimulating into the same signal mode. A picture of the object was seen on the camera. The signal and idler fields in this case are not entangled, but present classical transverse spatial correlations [87].

5. FURTHER APPLICATIONS

A main theme of applications for nonlinear interferometers is to enable measurements with high-performance silicon-based cameras or line cameras (arrays) in wavelength regions, mainly in the infrared, where such detectors, and often also suitable sources, are not easily available. The applications can be divided into those using spatial correlations, spectral correlations, or both.

Using spatial correlations enables imaging techniques such as phase imaging and amplitude imaging, as shown in the previous section; these can also be extended to microscopy and to holography, which combines phase and amplitude imaging. Spectral correlations enable spectroscopy and OCT. Applying nonlinear interferometers to hyperspectral imaging combines the use of spatial and spectral correlations. Note that for spectral measurements using nonlinear interferometers based on SPDC, very large spectral bandwidths can be provided with specially designed crystals [91]. Such large bandwidths are typically complex and costly to achieve otherwise.

Applications accessing the THz region are also possible, mainly OCT-type measurements [92] with a single spatial mode, since multi-spatial-mode correlation is extremely challenging to achieve there.

Another special class of applications is refractometry in not easily accessible wavelength regions [93]. Here the reflectivity and/or field transmittance of a dielectric medium can be measured (wavelength dependently), allowing its (wavelength-dependent) refractive index to be inferred.

A. Holography and 3D Imaging

The first quantitative phase imaging with undetected photons, i.e., extraction of the value of phase at each pixel of the image, was realized in [83] and later in [94]. Simple subtraction of the outputs is not enough to realize quantitative phase imaging. Fortunately, the interferogram described by Eqs. (34) and (56) has a form very similar to standard interferograms, and so the well-known digital holography methods can be used to extract the phase of the object ${\arg}\{T({\boldsymbol\rho _o})\}$.

The phase stepping method requires recording at least three interference patterns with different values of ${\phi _{{in}}}$ given by $\phi _{{in}}^j = \frac{{2\pi j}}{K}$, where $K$ is the number of phase steps. The interferograms obtained for different phases need to be added with appropriately chosen complex prefactors. The resultant complex-valued function of position contains the phase information as its argument.
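As a concrete illustration, here is a minimal numpy sketch of this reconstruction, assuming the standard interferogram model $I_j = A + B\cos(\phi + 2\pi j/K)$. This is generic phase-shifting holography, not the specific processing code of [83,94]:

```python
import numpy as np

def phase_step_reconstruct(frames):
    """Recover the phase map from K >= 3 equally stepped interferograms
    I_j = A + B*cos(phi + 2*pi*j/K).  The complex sum  sum_j I_j * exp(-2i*pi*j/K)
    equals (K*B/2) * exp(i*phi), so its argument is the sought phase phi."""
    K = len(frames)
    weights = np.exp(-2j * np.pi * np.arange(K) / K)
    return np.angle(np.tensordot(weights, np.asarray(frames), axes=1))

# Simulate a smooth test-object phase and K = 4 interferograms
x = np.linspace(-1, 1, 128)
phi_true = 1.2 * np.exp(-x**2 / 0.2)   # illustrative phase profile within (-pi, pi)
frames = [3.0 + 0.8 * np.cos(phi_true + 2 * np.pi * j / 4) for j in range(4)]
phi_rec = phase_step_reconstruct(frames)
assert np.allclose(np.angle(np.exp(1j * (phi_rec - phi_true))), 0, atol=1e-8)
```

The comparison modulo $2\pi$ in the final line accounts for the fact that the reconstructed phase is only defined on $(-\pi, \pi]$.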

Disadvantages of phase stepping include the necessity of precisely controlling ${\phi _{{in}}}$ and recording multiple images from which a single-phase image is reconstructed. The sample and the setup must not change during the acquisition of images.

The Fourier off-axis holography method dispenses with the need to acquire multiple frames at the cost of slightly more complex image processing and a small modification to the experimental setup. Here an extra tilt, or equivalently, a linear phase ${\phi _{{in}}}$, is introduced between the interfering beams at the object plane. This linear phase ${\phi _{{in}}}({\boldsymbol\rho _o}) = a{\boldsymbol\rho _o}$ enables the isolation of the second term in Eqs. (34) and (56), from which the phase can be extracted by taking its argument. The isolation is performed by filtering the interferogram in the Fourier domain (FD). The linear phase has to be subtracted from the reconstructed phase, and therefore, it has to be pre-calibrated.
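The Fourier filtering step can be illustrated with a hedged one-dimensional numpy sketch; the carrier frequency, object phase, and filter width below are arbitrary illustrative choices:

```python
import numpy as np

# 1D sketch of off-axis (Fourier) holography: a linear carrier phase a*x shifts the
# interference term away from DC in the Fourier domain, where it can be isolated.
N = 1024
x = np.linspace(-1, 1, N, endpoint=False)
phi_obj = 0.9 * np.exp(-x**2 / 0.05)      # object phase to recover (illustrative)
a = 2 * np.pi * 100                        # carrier (tilt) spatial frequency, illustrative
hologram = 2.0 + np.cos(a * x + phi_obj)   # recorded intensity pattern

# Isolate the +carrier sideband with a window in the Fourier domain
F = np.fft.fftshift(np.fft.fft(hologram))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))   # cycles per unit x
carrier = a / (2 * np.pi)
mask = np.abs(freqs - carrier) < carrier / 2
sideband = np.fft.ifft(np.fft.ifftshift(F * mask))

# Subtract the (pre-calibrated) linear carrier phase to obtain the object phase
phi_rec = np.angle(sideband * np.exp(-1j * a * x))
assert np.max(np.abs(phi_rec - phi_obj)) < 1e-6
```

The mask rejects both the DC term and the conjugate sideband at $-a$, leaving only the term whose argument carries the object phase.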

Interestingly, digital holography can enable three-dimensional (3D) imaging. Multiple phase images, taken at different illumination angles, can be combined algorithmically to obtain a single 3D image [95].

B. Optical Coherence Tomography

Optical coherence tomography (OCT) is an interferometric technique for 3D imaging. It has a host of applications in non-destructive testing as well as medical imaging, e.g., in ophthalmology. In its most common modality, Fourier-domain optical coherence tomography (FD-OCT), spectral phases imprinted on broadband light, proportional to the depth of reflections from inside a sample, can be read out after interference with a reference arm using a fast grating spectrometer. Fourier-transforming the spectra then yields an axial depth profile of the sample, which after $x - y$ scanning of the sample and/or of the probing beam can be used to reconstruct a full 3D image. Since nonlinear interferometers by definition measure interferometrically encoded information, they are naturally suited to implementing OCT, in both the time domain [58,59] and the frequency domain [57]. The working principle of the latter is shown in Fig. 13.


Fig. 13. Fourier domain optical coherence tomography with undetected photons. (a) Working principle of traditional Fourier domain optical coherence tomography (OCT) using a broadband light source. Spectral phases proportional to the depth of reflections from inside a sample can be read out after interference with a reference arm using a fast grating spectrometer. The sample is such that the light can penetrate variable depths. (b) Adaptation of Fourier domain optical coherence tomography using a nonlinear interferometer, as reported in [57]. In this case, the wavelength for probing the sample can be very different from the wavelength for which suitable spectrometers are available. Optical coherence tomography using nonlinear interferometers in the time domain is reported in [58,59].


As with imaging applications, the main advantage of OCT with undetected light is that one can illuminate the sample with light of a wavelength for which the detectors are not suitable. For example, infrared illumination can be desirable for highly scattering (but water-free) samples, because scattering is strongly suppressed at longer wavelengths, enabling high penetration depths. In the practical implementation of OCT with nonlinear interferometers, one of the crucial goals is often to reach a high axial resolution $\Delta z$, which is directly related to the idler spectral bandwidth $\Delta \lambda$. For a Gaussian spectrum, it is given by

$$\Delta z = \frac{{0.44\lambda _I^2}}{{\Delta \lambda}},$$
which will be slightly modified for differently shaped spectra. Formulated in the frequency domain, as a rule of thumb, one can estimate that a 10 THz spectral bandwidth corresponds to about 20 µm axial resolution. Typical non-degenerate SPDC sources, however, feature bandwidths of the order of 1 THz, resulting in an axial resolution $\Delta z \gt 100\;{{\unicode{x00B5}{\rm m}}}$, which is not sufficient for most real-world applications. Nevertheless, there are several strategies to engineer sources with much broader spectra.
  • Choose crystals with a short length, $L$, because the bandwidth normally scales as $1/L$. Note, though, that this severely reduces the total brightness, which scales as ${L^2}$. Thus a 10-fold improvement in axial resolution would result in a 100-fold reduction in brightness, making this a sub-optimal design strategy.
  • Choose chirped poling periods, which can lead to drastically increased bandwidth. Here again, the penalty to pay is in spectral brightness compared to a crystal of the same length. Nevertheless, this reduction in brightness is not as severe as for short crystals, making this a possible option, especially for reaching ultra-broad bandwidths.
  • Use signal–idler group-velocity matched phase matching [91], which yields ultra-broadband spectra without sacrificing spectral brightness. The bandwidth and total brightness still depend on the crystal length, but the trade-off is different here: as a special trait of this type of phase matching, the bandwidth scales only as $1/\sqrt L$, whereas the spectral brightness still scales as ${L^2}$ and therefore the total brightness as ${L^{3/2}}$. Thus, trading off a factor of two in axial resolution (with a 4x longer crystal) results in 8x more photons per pump power.
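A sketch of Eq. (68) and the scaling arguments from the list above; the 3.3 µm idler wavelength and 36 nm bandwidth (roughly 1 THz) are illustrative values, not from the text:

```python
def axial_resolution(lambda_I, dlambda):
    # Eq. (68): FWHM axial resolution for a Gaussian spectrum (same length units in/out)
    return 0.44 * lambda_I**2 / dlambda

# Scaling with crystal length L, relative to a reference crystal, per the list above:
#   standard phase matching:  bandwidth ~ 1/L,        total brightness ~ L**2
#   group-velocity matched:   bandwidth ~ 1/sqrt(L),  total brightness ~ L**(3/2)
def scale_standard(length_ratio):
    return {"bandwidth": 1 / length_ratio, "brightness": length_ratio**2}

def scale_gvm(length_ratio):
    return {"bandwidth": length_ratio**-0.5, "brightness": length_ratio**1.5}

# Example: mid-IR idler at 3.3 um with 36 nm bandwidth (roughly 1 THz)
print(f"dz = {axial_resolution(3.3e-6, 36e-9) * 1e6:.0f} um")   # > 100 um, as stated above

# A 4x longer GVM crystal: 2x narrower bandwidth (2x worse dz) but 8x the photon rate
s = scale_gvm(4)
print(s)
```

The same helper confirms the short-crystal trade-off: a 10x shorter standard crystal gives a 10x larger bandwidth but a 100x lower brightness.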

Other relevant parameters for implementing FD-OCT with nonlinear interferometers are the sensitivity (SNR), imaging depth, SNR roll-off, and speed. These are highly analogous to their counterparts in conventional FD-OCT, for which there is extensive introductory and overview literature, e.g., [96].

A special feature of FD-OCT with undetected photons is that SPDC has no spectral fluctuations beyond shot noise, as it is seeded by the vacuum, whose spectrum is constant in time. Moreover, intensity noise of the pump laser does not reduce the sensitivity, because it affects only the absolute intensity of the whole spectrum. Frequency noise of the pump laser, in turn, would need to be as large as the spectrometer resolution to have an effect on the sensitivity. Thus, the shot-noise limit (SNL) for the sensitivity of FD-OCT can be reached quite straightforwardly [91]. Note that by harnessing the high-gain regime of SPDC, it is even possible to surpass the SNL in OCT [97].

For practical implementations of FD-OCT with or without a nonlinear interferometer, the large bandwidths used mandate careful dispersion management. This can be done numerically, but in practice, it is better to physically compensate for as much dispersion as possible directly in the setup. Interestingly and sometimes usefully, the dispersion in nonlinear interferometers can be compensated for in the idler arm and/or the signal arm. Note that due to the phase mismatch away from the center of the SPDC bandwidth, there is an additional dispersion term from the SPDC crystal to consider, in addition to the dispersion of the optical components (filters, lenses, dichroic mirrors) that the signal and idler photons pass through.

Another practical requirement for implementation is the use of a suitably narrowband pump laser: in order not to affect the imaging depth and sensitivity roll-off, it should be at least a factor of two narrower than the resolution of the spectrometer used to acquire the signal spectra.

C. Spectral Imaging

One of the first applications of sensing with nonlinear interferometers was the pioneering work of Kalashnikov et al. [62]. They used a highly multimode geometry, where a large, collimated laser pumps two relatively thin ppLN crystals in a gas cell. In between the crystals, the generated mid-IR light is partially absorbed by pressure-controlled ${{\rm{CO}}_2}$ gas. Leveraging the spectral correlations between the visible and mid-IR light and the correlations between angle and wavelength in the SPDC process, the absorption spectrum of ${{\rm{CO}}_2}$ at 4.3 µm as well as the wavelength-dependent refractive index could be deduced from the non-trivial circular interference fringes imaged on a CCD camera. A sensitivity of around ${10^{- 5}}$ for measuring $(n - 1)$ and $0.1\;{\rm{cm}}^{- 1}$ for the absorption coefficient could be reached with this method.

Extending imaging to spectral imaging faces the “challenge of dimensionality” of the final sensor. As imaging information is typically 2D and spectral data 1D, a 3D sensor array would be required to achieve a single-shot wide-field spectral imaging instrument. Such sensors unfortunately do not exist, but there are multiple possibilities to address this challenge. The most obvious one is a scanning approach, where a single-mode nonlinear interferometer is used to measure the spectral properties, and the focused idler beam is scanned over the sample for spatial information. This approach is relatively simple and robust; scanning can be fast and is routinely used in scanning confocal/fluorescence microscopes. Another approach is to sequentially take wide-field images at different wavelengths by either spectral filtering [64] or tuning the emission wavelength of the SPDC, e.g., by temperature tuning the crystal [65]. In principle, other approaches are also possible and potentially useful, depending on the application. These include FD spectroscopy imaging approaches or hybrid schemes (for example, one spatial and one spectral dimension on a CMOS camera, with the remaining spatial dimension scanned by moving the sample).

6. DESIGNING A SETUP FOR A SPECIFIC APPLICATION

As discussed in Section 4, sensing based on ICWIE relies on photon correlations. These (classical and quantum) correlations depend on the physical properties of the experimental setup. In the case of SPDC, for example, the bandwidth of the pump laser together with the length of the nonlinear crystal determines the spectral entanglement of the twin photons. Similarly, the transverse spatial properties of the pump beam together with the dimensions of the nonlinear crystal determine the spatial entanglement. We emphasize, however, that spectral and spatial entanglement are not necessary for imaging with undetected photons. Classical correlations are enough.

A. Wavelength Considerations

Before getting started, the frequency ranges for probing the sample and for detection need to be chosen, while also considering the resulting pump frequency given by energy conservation. It is useful to consider a number of trade-offs and boundary conditions in this multi-parameter selection.

For the selection of the probing wavelength, a range of special interest is the mid-IR region. Here, ro-vibrational molecular absorption lines, which are both strong and very specific to the type of molecule being probed, are excellently suited for (bio-)chemical analysis by spectroscopic measurements. As a consequence of the transparency range of commonly used nonlinear crystals such as periodically poled potassium titanyl phosphate (ppKTP), lithium niobate (ppLN), or stoichiometric lithium tantalate (ppSLT), one can access wavelengths up to ${\sim}5\;{{\unicode{x00B5}{\rm m}}}$, which contains important wavelengths used for gas sensing as well as the CH-stretch region around 3.45 µm, which is useful for the identification and analysis of organic compounds, including (micro)plastics, lipids in tissue, or collagen. Using nonlinear IR crystals such as silver thiogallate (${{\rm{AgGaS}}_2}$ or AGS) or orientation-patterned gallium phosphide (opGaP), longer wavelengths in the so-called “fingerprint region” (up to 12 µm) can be generated and used.

Longer wavelengths scatter much less than shorter wavelengths. This is useful, for example, for OCT performing 3D imaging into otherwise strongly scattering media such as ceramics [57]. Here the specific wavelength is less important than the bandwidth, which defines the axial resolution. Depending on the specific application, a large bandwidth is also relevant for spectroscopy or spectral imaging. Large bandwidths are achieved if one matches the group indices of the probing and detection wavelengths in the nonlinear crystal [91]. Interestingly, this can typically be achieved in ppKTP, ppLN, and ppSLT for a sensing wavelength in the mid-IR group-velocity matched with a wavelength in the NIR or short-wave infrared (SWIR) [91]. In practice, photon generation rates of the order of ${10^9}/{\rm{s}}$ over the whole spectrum were demonstrated, e.g., for a 2 mm long ppKTP crystal [85,91] at 500 mW of pump power. This corresponds to a spectral brightness, typical for type-I phase-matched periodically poled crystals, of the order of 50,000 generated pairs per mW per nm bandwidth.

The strategy of matching signal and idler group velocities can in principle also be applied to IR crystals such as AGS (via angle phase-matching) or opGaP. This allows a certain design freedom to also accommodate other criteria in the wavelength choices, such as specific pump wavelengths, water-absorption windows, or the availability or cost of necessary optical elements.

Another possibly interesting choice for the sensing wavelength is the UV. This can increase the maximal resolution by leveraging the lower Abbe limit for UV photons, while detecting in the technologically easier VIS/NIR region (see Sections 4.B.3 and 4.C.3). Spectral imaging in the UV also has applications in bio-imaging. The challenge here is the generation of photon pairs: in SPDC, the pump wavelength needs to be shorter than the sensing wavelength, which runs into the limited transparency ranges of dielectric nonlinear crystals. A possible route around this is to use spontaneous four-wave mixing in gas-filled hollow-core fibers [98].

For detection with Si-based CCD and CMOS sensors and cameras, the detection wavelength should be chosen below ${\sim}900\;{\rm{nm}}$. Above this wavelength, the quantum efficiency of Si-based sensors typically drops quickly and vanishes beyond the Si bandgap (${\sim}1100\;{\rm{nm}}$).

B. Pump Lasers

Having selected the probing and detection wavelengths, the pump wavelength is automatically determined via energy conservation. Selecting a suitable pump laser is highly important, with availability, maximum power, coherence length, beam quality, and cost as the main selection criteria. Especially when using spatial entanglement, the pump beam needs to be of high quality. For spectroscopic applications, the laser bandwidth (associated with the coherence length) needs to be below the target spectral resolution one wants to achieve. The same is true (in the time domain) for OCT applications.

1. Spatial Properties

Perfect momentum correlation would be obtained for a plane-wave pump, which obviously cannot be realized experimentally. However, this fact is an excellent guideline to the spatial properties of the pump beam that matter for imaging applications: ideally, the pump beam is as spatially coherent as possible, with a flat wavefront and only a slowly varying amplitude.

Lasers differ strongly in their beam quality depending on their type. Typical gas, solid-state, and fiber lasers have very good beam quality, whereas most diode lasers (popular for their compactness and efficiency) have a far less ideal beam quality. This can be remedied by spatially filtering the pump laser before it pumps the nonlinear interferometer: either with a 4f system with a pinhole in the Fourier plane, or by coupling the laser beam into a single-mode fiber. If the filter parameters are optimally matched to the incoming beam, only the higher-order modes will be removed, and the fundamental mode will be transmitted. Clearly, the losses in both methods will depend on the initial beam quality. These losses should be taken into account when estimating the required laser power.
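For sizing the pinhole of a 4f spatial filter, the Gaussian-beam focal spot in the Fourier plane is a useful guide. A small sketch follows; the wavelength, focal length, and input waist are assumed example values, and the 1.5x pinhole rule is a common lab heuristic rather than a prescription from this tutorial:

```python
import math

def fourier_plane_waist(wavelength, focal_length, input_waist):
    """1/e^2 waist of a Gaussian beam focused by a lens of the given focal length,
    w_f = lambda * f / (pi * w_0).  Useful for sizing the pinhole of a 4f spatial filter."""
    return wavelength * focal_length / (math.pi * input_waist)

# Illustrative (assumed) values: 405 nm pump, f = 100 mm lens, 1 mm input waist
w_f = fourier_plane_waist(405e-9, 100e-3, 1e-3)
# Common heuristic (not from the tutorial): pinhole diameter ~ 1.5x the 1/e^2 spot diameter
pinhole = 1.5 * 2 * w_f
print(f"focal spot waist = {w_f * 1e6:.1f} um, pinhole diameter = {pinhole * 1e6:.0f} um")
```

A larger input waist or shorter focal length shrinks the focal spot, so the pinhole must be resized whenever the 4f geometry changes.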

2. Spectral Properties

The spectral properties of the pump laser are essential for spectroscopy, OCT, and spectral imaging, but it should be kept in mind that the coherence length of the pump can also affect applications that are not directly related to spectral measurements (see Sections 7.A and 7.B). Interestingly, because of the typical spectral–angular correlations in SPDC from a bulk crystal, the spectral properties of the pump laser can affect the spatial properties of the SPDC. Therefore, it is advisable to choose a laser with a coherence length as large as possible, whenever available, and to carefully consider the effects of using lasers with a shorter coherence length. Another important feature can be wavelength stability, which, if insufficient, can affect the spectral/axial calibration of the spectroscopy/OCT data.

C. Transverse Spatial Correlations

The crucial element in the design of the interferometer for a given application is to provide correlations in the degree of freedom of interest. Spatial entanglement is in general not necessary for the applications of ICWIE that we discuss in this tutorial (please see the explanation in Section 4.A and Fig. 12). As shown in Sections 4.B.3 and 4.C.3, the resolution of the image is directly affected by the spatial correlations at the plane in which the sample is placed with respect to the crystal(s).

The transverse momentum correlations in SPDC can be thought of as a consequence of transverse momentum conservation in this process [30]. The sum of the momenta of the signal and idler photons equals the momentum of the pump photon. As a consequence, if the nonlinear crystal were pumped with a plane wave, which has a well-defined momentum, we would obtain perfect momentum correlation of the photon pair: knowing the momentum of the signal photon, we could predict the momentum of the idler photon. Physically realizable beams have a finite spatial extent, and as a consequence, their momentum distribution has a finite width. This uncertainty in the momentum of the pump beam translates into imperfect correlations between the signal and idler beams: knowing the momentum of the signal photon, we can only predict a range of idler photon momenta. The better the pump beam can be approximated by a plane wave, the sharper the momentum correlation between the idler and signal photons. Therefore, focusing the pump onto the crystal will tend to reduce the photon-pair momentum correlation. In [48], one can find a detailed description of how a lens focusing the pump onto the crystal affects the momentum correlations, and in [99], a detailed theoretical and experimental analysis of its effect on the resolution of QIUP. Note that a lens focusing the pump onto the crystal can reduce the spatial entanglement of the downconverted beams when the depth of focus of the laser becomes shorter than the length of the crystal [30].

In SPDC, transverse position correlations are quite sensitive to the crystal length [30]. The shorter the crystal, the sharper the transverse position correlations between signal and idler. Transverse position correlations are not affected by (dichroic) mirrors and lenses in the downconverted beams. However, the narrower the bandpass filter in front of the detector, the sharper the spatial resolution of the image [64].

D. Nonlinear Crystals

There is a wide range of nonlinear crystals that have been used for SPDC and therefore are in principle suited for nonlinear interferometers, such as ppKTP, ppLN, ppSLT, barium borate (BBO), bismuth borate (BiBO), and AGS, to name just a few. However, when choosing a crystal optimally suited for the targeted application, there are a number of important criteria to consider:

  • achievable phase matching: without phase matching, by either angle/birefringent or quasi-phase-matching, no SPDC can be generated efficiently (except in ultra-thin crystals);
  • suitable transparency range: the crystal should be highly transparent at the wavelengths used;
  • brightness (nonlinear coefficient): high brightness (i.e., a large photon rate) leads to desirably short measurement times;
  • large bandwidth: the bandwidth determines both the spectral range in spectroscopy applications and the axial resolution in OCT, and should therefore be as large as possible for those;
  • crystal length and aperture: the maximum length co-determines the brightness trade-off with the bandwidth; the aperture size (together with the selected wavelengths) defines the number of spatial modes that can be generated and used for imaging for a crystal of a given length [85];
  • general commercial availability and cost: an important criterion for real-world applications; typically, however, the pump laser and final sensor will have a stronger influence on the cost.

E. Cameras

Typically, Si-based detectors are used to detect the signal photons, as the detection wavelength is often purposefully chosen to lie in their sensitivity range. If cameras are used, sCMOS cameras are often preferable to EMCCD cameras: a photon flux of more than 10–100 photons/s per pixel can typically be achieved easily, and at such illumination levels sCMOS cameras feature a better SNR than EMCCDs and can have more and smaller pixels. In general, the number of illuminated pixels (magnification onto the camera) should be chosen such that one spatially resolved element corresponds to two to three pixels. Having more pixels per resolved element brings no benefit; on the contrary, it increases the total readout and dark noise from the camera and therefore lowers the SNR.
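The sampling guideline above can be turned into a small helper; the resolved-element size and pixel pitch below are assumed example values:

```python
def camera_magnification(resolution_on_object, pixel_pitch, pixels_per_element=2.5):
    """Magnification from the object plane onto the camera such that one spatially
    resolved element spans the given number of pixels (the text suggests 2-3)."""
    return pixels_per_element * pixel_pitch / resolution_on_object

# Illustrative (assumed) values: 22.5 um resolved element, 6.5 um sCMOS pixel pitch
M = camera_magnification(22.5e-6, 6.5e-6)
print(f"magnification onto camera = {M:.2f}")
```

Choosing a larger `pixels_per_element` than needed only spreads the same photon flux over more pixels, adding readout and dark noise, as noted above.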

7. EXPERIMENTAL GUIDELINES: TIPS AND TRICKS

Most of the tips and tricks we provide below for the ZWM setup also apply to the SU(1,1) interferometer in the low-gain regime, such as those illustrated in Figs. 3, 4, and 13(b). Notice that in the setup of Fig. 4(b), all three fields (signal, idler, and pump) go through the imaged object, which leads to further effects not described in this tutorial.

Setting up a ZWMI can be challenging because the alignment needs to be performed with SPDC light, not with bright laser beams. Moreover, correlations required for multimode sensing applications can result in a low degree of spatial and spectral coherence of individual photon beams. As a consequence, the visibility of interference depends critically on transverse and longitudinal shifts between interfering beams [100,101]. In this section, we give some tips and tricks for building and aligning a ZWMI, such as those in Figs. 2 and 6.

We start this section by discussing the conditions for the temporal alignment of the ZWMI. We explain the origin of these conditions and ways to fulfill them in the experiment. We move on to instructions for spatial alignment, where after sketching a basic alignment procedure, we briefly discuss the practical consequences of using thick lenses. We stress the importance of the spatial partial coherence of signal and idler beams and describe how to use an auxiliary stimulating laser beam overlapped with signal or idler beams. At the end of the section, we describe a convenient polarization-based method of controlling the phase in the ZWMI and the effect of losses in the idler arm, which are important factors limiting the maximal achievable interference visibility.

A. Temporal Alignment

1. From Indistinguishability to Zero-Delay Requirements

In building any interferometer, it is necessary to adjust the combined path lengths to within the coherence length of the detected light to observe interference. In the language of quantum information, this condition is due to the requirement of path indistinguishability [33]. Let us consider the ZWMI shown in Fig. 2, and denote ${l_{{P_1}}}$ and ${l_{{P_2}}}$ as the optical path lengths of the pump from ${{\rm BS}_1}$ to nonlinear sources ${Q_1}$ and ${Q_2}$, respectively. We will denote ${l_{{I_1}}}$ as the idler optical path in between the two sources and label the signal ${S_1}$ (${S_2}$) optical path length from ${Q_1}$ (${Q_2}$) to ${{\rm BS}_2}$ as ${l_{{S_1}}}$ (${l_{{S_2}}}$). Two path length requirements can be formulated [102]:

$$\delta {l_1} = {l_{{P_1}}} + {l_{{I_1}}} - {l_{{P_2}}} \lt {l_{\rm coh - pump}};$$
$$\delta {l_2} = {l_{{I_1}}} + {l_{{S_2}}} - {l_{{S_1}}} \lt {l_{\rm coh - signal}},$$
where ${l_{\rm coh - pump}}$ and ${l_{\rm coh - signal}}$ are the pump and signal field coherence lengths, respectively. The coherence length of the downconverted pair depends on the coherence length of the pump and is not affected by linear optical systems before or after the crystal (e.g., lenses, mirrors, free-space propagation). The narrower the bandpass filter used in front of the detector, the longer the coherence length of the detected signal photons. This is an important point when considering the ease (or difficulty) of fulfilling the second path length requirement, while the first requirement is affected only by the properties of the pump.
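As a numerical companion to the two path-length conditions above, the following sketch evaluates both differences against the respective coherence lengths. All lengths are purely illustrative (in millimetres), and `zwmi_conditions` is our own helper, not notation from the tutorial.

```python
# A numerical companion to the two path-length conditions above.  All
# lengths are in millimetres and purely illustrative; `zwmi_conditions`
# is our own helper, not notation from the tutorial.

def zwmi_conditions(l_P1, l_P2, l_I1, l_S1, l_S2, l_coh_pump, l_coh_signal):
    """Return the two path-length differences and whether each condition holds."""
    delta_l1 = l_P1 + l_I1 - l_P2   # pump/idler condition
    delta_l2 = l_I1 + l_S2 - l_S1   # signal condition
    return (delta_l1, delta_l2,
            abs(delta_l1) < l_coh_pump,
            abs(delta_l2) < l_coh_signal)

# Example: a CW pump with a metre-scale coherence length easily satisfies the
# first condition, while the second depends on the bandpass filter bandwidth.
d1, d2, ok1, ok2 = zwmi_conditions(
    l_P1=300.0, l_P2=500.0, l_I1=200.2, l_S1=400.0, l_S2=199.9,
    l_coh_pump=1000.0,   # CW pump
    l_coh_signal=0.5)    # narrow bandpass filter: sub-mm coherence length
print(ok1, ok2)          # True True for these numbers
```

Note that the CW pump makes the first condition easy to satisfy, mirroring the remark in the text that only the second condition is sensitive to the detection filter.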

The conditions above can be understood intuitively by analyzing how the two interfering alternatives (emissions from the two crystals) can be distinguished. To find the first condition [Eq. (69)], we consider a pulsed excitation of the two crystals. We require the idler photon generated in the first crystal to arrive at the second crystal together with the pump pulse, so that the idler photons generated in the two crystals overlap in time. Strictly speaking, it is the coherence length of the pump laser, not the pulse length, that governs this condition; Eq. (69) therefore remains valid for excitation with so-called continuous wave (CW) lasers. To find the second condition [Eq. (70)], let us consider a single pair of photons generated in one of the crystals. If we were to detect both photons, we could measure the time difference between their arrival at the detectors, and if this time difference were not the same for pairs from the two crystals, we could distinguish between the two alternatives. Note, however, that even if we do not detect idler photons, just the possibility of obtaining welcher-weg information is enough to inhibit interference in the signal.

We stress again that it is not relevant in this reasoning if we actually perform the measurements required to distinguish the alternatives or not. The mere possibility of performing such measurements precludes the observation of interference or in the intermediate case reduces the visibility of interference fringes. For a detailed theoretical treatment, please refer to [43,102,103].

2. Finding the Zero Delay Using Fringe Visibility or Spectral Interference

To adjust the path lengths to satisfy the conditions expressed above, one can maximize the visibility of interference fringes as a function of an optical delay added in the signal, idler, and/or pump modes. Alternatively, spectral interference of the signal beams from the two crystals can be used to determine the delay that needs to be compensated for. We recommend the latter strategy if interference is not immediately seen on the detector/camera. A non-zero path length difference $\delta l$ (corresponding to a non-zero optical delay difference $\tau$ in the interferometer) results in spectral interference fringes with a period in angular frequency given by $\frac{{2\pi}}{\tau}$. The spectrometer acts as if one had many bandpass filters, one for each wavelength, each as narrow as the spectral resolution of the equipment. In other words, the effective coherence length of the fields in the equations above is larger than most bandpass filters can provide; therefore, one sees interference in the spectrometer before one can observe it at the detector. If the resolution of the spectrometer is high enough to resolve the fringes for a given delay, it is straightforward to minimize the optical path length difference by maximizing the period of the spectral fringes, up to the point where the entire spectrum is in phase. In other words, as the interferometer path lengths are adjusted, the interference fringes become spectrally wider; in fact, their width is inversely proportional to the path length difference $\delta l$. Of course, a broader spectrum results in a shorter coherence length and thus in more stringent requirements for the observation of interference. Common options for the delay lines needed to align the path length difference are a pair of matching, translatable wedges or a retro-reflector in a trombone configuration.
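The relation between a residual path-length difference and the spectral fringe period can be made concrete with a short calculation. In the sketch below (the helper name and example values are ours), the angular-frequency period $2\pi/\tau$ is also expressed as a wavelength spacing of approximately $\lambda_0^2/\delta l$ near a center wavelength $\lambda_0$.

```python
import math

# A short calculation relating a residual path-length difference to the
# spectral fringe period.  The helper and example values are ours; near a
# centre wavelength lam0 the angular-frequency period 2*pi/tau corresponds
# to a wavelength spacing of approximately lam0**2 / delta_l.

c = 299_792_458.0  # speed of light, m/s

def fringe_period(delta_l, lam0):
    """Spectral fringe period for a path-length difference delta_l (metres)."""
    tau = delta_l / c              # optical delay corresponding to delta_l
    domega = 2.0 * math.pi / tau   # fringe period in angular frequency
    dlam = lam0 ** 2 / delta_l     # the same period expressed in wavelength
    return domega, dlam

# 1 mm of uncompensated path, signal centred at 810 nm:
domega, dlam = fringe_period(1e-3, 810e-9)
print(dlam * 1e9)   # ~0.66 nm fringe spacing, resolvable on a good spectrometer
```

As the delay is compensated, `dlam` grows until the fringes are wider than the spectrum itself, which is the "entire spectrum in phase" criterion described above.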

3. Spectral Bandpass Filters for Initial Alignment

Because SPDC in general produces broadband fields, it is useful to use narrow bandpass filters at the detectors to have a workable signal coherence length. The narrower the interference filter in front of the camera, the easier the alignment. Nevertheless, the final alignment needs to be performed with a spectral bandwidth broad enough for the final application.

B. Spatial Alignment

1. Alignment of Imaging Systems

Imaging requires multiple imaging and/or Fourier transform optical systems (Fig. 6). Each of these lens or off-axis parabolic mirror systems has to be aligned separately. With perfectly aligned imaging systems in the interferometer, we should obtain an unmodulated intensity profile at the camera. Misalignments in the setup can in turn lead to the appearance of intensity fringes at the camera plane, which were studied in [104] for the momentum-correlation-enabled QIUP setup. In the ZWMI, misalignment of the signal field can, to a certain extent, be "compensated for" by a misalignment of the idler field. This is quite unfortunate, because one can end up with sub-optimal interference visibility. In practice, with careful and optimized alignment, visibilities above 70% are commonly achievable.

Typically, dichroic mirrors and BSs allow one to perform the initial alignment of the setup using pump beams. In this case, the alignment is not different from that of the standard MZI. Later, the idler beams need to be overlapped to ensure the indistinguishability of the idler photons generated in the two crystals. It is convenient to overlap the two beams by monitoring them in two complementary bases, for example, at an image plane of the crystal and at a Fourier plane of the crystals. For this alignment, it is very helpful to have two lenses on flipper mounts just in front of the camera, one that images onto the camera the crystal plane and the other performing the Fourier transform of the electric field at the crystal plane.

In certain designs (see, for example, [85]), beams with very different wavelengths pass through certain imaging components. In such cases, chromatic aberrations need to be minimized by using achromatic lenses or replacing the lenses with off-axis parabolic mirrors.

2. Coupling between Spatial and Temporal Alignment by Thick Lenses

We often describe optical setups using thin lens approximation. In practice, when we want to minimize the distances in the setup, we use lenses with short focal lengths. Such lenses, because of their thickness, introduce a significant delay that depends on the distance from the optical axis. This can lead to complications when aligning the setup if one is not careful to always keep the optical axis of the propagating fields aligned with the center of the lenses.

Standard lens alignment procedures are helpful here: prepare an auxiliary laser beam along the propagation direction of the beam of interest, then check that the beam pointing direction is not modified when the lens is introduced and, at the same time, that the backreflection from the lens surface returns very close to the source. If backreflections from both surfaces of a lens are visible, it suffices to overlap them.

3. Spatial Partial Incoherence of Signal and Idler Beams

In the case of spatially multimode SPDC, which is necessary for wide-field imaging, individual signal and idler beams are spatially partially incoherent. This has profound implications for the design of the experimental setup, as partially incoherent light propagates differently from perfectly coherent light. In particular, one should expect a much larger illumination area on the object (FoV) than if these were propagating coherent Gaussian fields. In other words, multimode SPDC light does not form a beam that can be treated as a laser beam, the latter being fully spatially coherent.

4. Using an Auxiliary Laser Beam to Stimulate Emission into Idler and Signal Beams for Easier Alignment

Parametric downconversion can be induced (stimulated) by aligning an auxiliary laser source that shares the same wavelength and polarization with at least one of the downconverted beams generated in the crystals. The process of stimulated parametric downconversion is a useful tool for the alignment of the ZWM or SU(1,1) interferometer [Fig. 12(b)] because it significantly increases the intensity of the detected signal and allows one to increase the coherence length of the downconverted light. Moreover, if the stimulating laser has a larger coherence length than the bandpass filter used in front of the camera, the resulting longer coherence length of the stimulated beams allows the experimentalist to focus on the spatial part of alignment before dealing with the fine adjustment of the optical path lengths (temporal alignment).

Stimulated emission inherits also the spatial coherence properties of the stimulating beam; therefore, the stimulation with coherent laser beams results in a downconverted beam with a narrower angular spread than in the case of spontaneously emitted light. When spots from the two crystals are overlapped at the camera, the smaller spread translates to a possibility of more precise alignment. Therefore, it is particularly useful to combine the stimulated emission with monitoring the Fourier and image planes.

The “trick” of using a seed laser can also be used for imaging via induced coherence with induced emission, as was demonstrated in [90]. Their setup is illustrated in Fig. 12(b). The sample is illuminated with the seeding laser. Detection, pumping, and seeding wavelengths are chosen according to one’s needs, taking into consideration the availability of lasers, crystals, and detectors. Notice that cheaper cameras can be used, as the signal is much brighter than in the case of spontaneous downconversion.

C. Controlling the Relative Phase and Power between Pair Emissions at Two Sources

In principle, a 50:50 BS could be used to split the pump beam in the ZWMI, and the phase between the interferometric arms could be scanned by shifting this BS or one of the mirrors, as shown in Fig. 2. In practice, however, it is advantageous to have fine-tuned control of the relative pump power illuminating the crystals. To do this, we recommend using a half-wave plate (HWP) (${{\rm HWP}_1}$) followed by a polarizing BS (PBS) instead of ${{\rm BS}_1}$, as shown in Fig. 6. Rotating ${{\rm HWP}_1}$ not only enables finding the optimal ratio of powers for the maximal visibility, but also can be used to aid alignment, by directing all the pump power to one of the crystals. This can be helpful, for example, when overlapping the SPDC beams from both crystals. Figure 6 also shows another HWP (${{\rm HWP}_2}$) at 45º placed in one of the pump paths, which guarantees that both crystals are pumped with the same polarization.

It is also useful to be able to shift the interferometric phase without touching any mirrors or BSs. For that, one can use a single quarter-wave plate (QWP) set at zero degrees before the PBS and rotated around its vertical axis. Alternatively, one can place before the PBS a combination of another HWP sandwiched between two QWPs at 45º. The interferometric phase is then tuned by rotating the sandwiched HWP.
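The HWP1 + PBS power splitter can be modeled with elementary Jones calculus. The sketch below uses our own sign and angle conventions, which may differ from a particular experimental setup; the PBS is taken to transmit horizontal (H) and reflect vertical (V) polarization.

```python
import numpy as np

# A Jones-calculus sketch of the HWP + PBS pump splitter described above.
# Sign and angle conventions are our own and may differ from a particular
# experimental setup; the PBS is taken to transmit H and reflect V.

def hwp(theta):
    """Jones matrix of a half-wave plate with its fast axis at angle theta."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return np.array([[c, s], [s, -c]])

h_pol = np.array([1.0, 0.0])   # horizontally polarized pump

def pump_powers(theta):
    """Pump power sent to each crystal for HWP1 at angle theta."""
    out = hwp(theta) @ h_pol
    return out[0] ** 2, out[1] ** 2   # (transmitted to Q1, reflected to Q2)

p1, p2 = pump_powers(np.pi / 8.0)     # HWP1 at 22.5 degrees
print(p1, p2)                         # both ~0.5: a balanced split
q1, q2 = pump_powers(0.0)             # HWP1 at 0: all power to one crystal
print(q1, q2)
```

Rotating the wave plate away from 22.5° smoothly redistributes the power, which is exactly the fine-tuning (and the all-power-to-one-crystal alignment aid) described in the text.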

D. Loss within the Interferometer

In a standard interferometer, loss in one of the arms can be compensated for by introducing an equal loss in the other arm. It is a distinguishing feature of the ZWMI and other interferometers based on ICWIE that losses in the idler arm between the two crystals lead to a reduction in the mutual coherence of the two signal beams and therefore cannot be compensated for. As a consequence, it is essential to minimize the losses in the path between the two crystals (or the equivalent paths in designs different from ZWM), as the transmission of this arm sets the upper bound for the interference fringe visibility. Common sources of loss, which also point at potential ways to reduce them, are: sub-optimal AR coatings of optical components [including the nonlinear crystal(s)], mirror coatings, partially non-transparent materials (lenses, crystals), non-100% transmitting filters, as well as dust and other residues on optical surfaces.

To sum up, in the ZWM experiment, losses in the idler path between the crystal (refer to Fig. 1) control the mutual coherence between the signal beams and cannot be compensated for, whereas losses in other arms do not affect the coherence and can be balanced to recover maximal visibility.
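The sketch below encodes the commonly quoted low-gain result that the fringe visibility is limited by the amplitude transmission $|t| = \sqrt{T}$ of the idler path between the crystals, where $T$ is the intensity transmission (see the references cited in this section). The function name and exact form are our own shorthand, not notation from the tutorial.

```python
import math

# A sketch encoding the commonly quoted low-gain bound from the discussion
# above: the visibility is limited by the amplitude transmission
# |t| = sqrt(T) of the idler path between the crystals, with T the
# intensity transmission.  The function name and exact form are our own
# shorthand (an assumption, not notation from the tutorial).

def visibility_bound(T):
    """Upper bound on fringe visibility for idler intensity transmission T."""
    if not 0.0 <= T <= 1.0:
        raise ValueError("intensity transmission must lie in [0, 1]")
    return math.sqrt(T)

# Even a modest 19% intensity loss in the idler path caps the visibility:
print(visibility_bound(0.81))   # ~0.9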

8. OUTLOOK

One of the most important ways to categorize various types of interferometers (both linear and nonlinear) is how they measure up against specific limitations, and whether or not they exceed them. Depending on the intended application, a device can be measured against one or more of several types of limits on its ability to perform specific tasks, typically related to sensitivity, SNR, and/or resolution.

With this in mind, there are a number of interesting directions in which the field is headed. First, it will have a major impact when experimental techniques improve sufficiently to robustly enter the high-gain regime. The focus will then be on sub-shot-noise performance in different sensing arrangements, on compensating for SNR reductions due to losses, and in general on much higher photon rates leading to shorter measurement times, as well as on extending the covered wavelengths into the fingerprint region beyond 6 µm and on further improving techniques in the THz region.

Moreover, a number of further interferometric sensing and imaging techniques could be adapted for nonlinear interferometers, such as tomographic holography for 3D imaging, a combination of coherence gating (with a broadband source) and holography to achieve optical sectioning, as well as aperture synthesis techniques such as Fourier ptychography. Finally, combining nonlinear interferometry with super-resolution techniques might lead to novel imaging modalities to be explored.

In addition to applications to imaging and metrology, the study of ZWM-type nonlinear interferometers has had important implications for fundamental quantum physics. The ZWMI has already been used to study quantum complementarity and its connection to spatial coherence [32,42] and partial polarization [105,106]. Similar setups have been recently used to measure twin-photon correlations [48,49] and entanglement [107,108]. Generalizations of the ZWM experiment have led to the discovery of novel methods of generating multi-photon and/or high-dimensional entangled states [109–111]. Therefore, further studies of such nonlinear interferometers will potentially result in new research directions in fundamental quantum physics and quantum information science.

APPENDIX A: BRIEF INTRODUCTION TO THE MAIN QUANTUM OPTICS STATES AND OPERATORS

The basic description of interferometric devices is given in terms of the so-called creation and annihilation operators of the quantum-optical field, which are the ladder operators that work on the photon number degree of freedom as

$$|n\rangle = \frac{{{{({{\hat a}^\dagger})}^n}}}{{\sqrt {n!}}}|0\rangle ,$$
where $|0\rangle$ has the standard definition of the vacuum state {see Eq. (2.40) in [28]}. A mode in a vacuum state represents a physical electromagnetic mode with only the zero-point energy.
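The ladder-operator construction above is easy to verify numerically in a truncated Fock space; the sketch below builds $|n\rangle$ by repeated application of a matrix representation of $\hat a^\dagger$ (the dimension and helper names are our own illustrative choices).

```python
import math
import numpy as np

# Numerical check of the ladder-operator construction of |n> above, in a
# truncated Fock space.  The dimension and helper names are our own
# illustrative choices.

dim = 12
adag = np.diag(np.sqrt(np.arange(1, dim)), k=-1)   # adag|n> = sqrt(n+1)|n+1>
vac = np.zeros(dim); vac[0] = 1.0                  # the vacuum state |0>

def fock(n):
    """Build |n> = (adag)^n / sqrt(n!) |0>."""
    state = vac.copy()
    for _ in range(n):
        state = adag @ state
    return state / math.sqrt(math.factorial(n))

print(fock(3))   # unit vector with its single 1 at index 3
```

The $1/\sqrt{n!}$ factor exactly cancels the $\sqrt{1}\sqrt{2}\cdots\sqrt{n}$ accumulated by the repeated applications of $\hat a^\dagger$, so the result is normalized.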

A monochromatic light field in a pure state (light is most commonly found in the form of pure states, as mixed states are difficult to produce with objects that, like photons, do not typically interact with each other) may exist in a quantum mechanical superposition of different photon numbers. When a photon-number measurement is performed on such a state, it is found to contain a specific number of photons. Mathematically, this may be expressed, for some state $|\psi \rangle$, as

$$|\psi \rangle = \sum\limits_{n = 0}^\infty {p_n}|n\rangle ,$$
where the ${p_n}$'s are the probability amplitudes of the various definite photon number states. Any quantum state of the field may be decomposed in the number basis in this way. Moreover, since the definite photon number states (also called Fock states, after the Soviet physicist Vladimir Fock) are orthonormal, and orthonormality implies linear independence, they constitute a true and complete mathematical basis.

One can write the operator corresponding to the electric field in the following simplified form {see Eq. (2.51) in [28]}:

$$\hat E(t,z) = {E_0}\sin (kz)[\hat a{e^{- i\omega t}} + {\hat a^\dagger}{e^{i\omega t}}].$$

We can rewrite this in terms of two dimensionless quantities called quadrature operators defined as

$$\hat q = \frac{1}{2}(\hat a + {\hat a^\dagger}),$$
$$\hat p = \frac{1}{{2i}}(\hat a - {\hat a^\dagger}).$$

Inverting these equations and substituting into Eq. (A3) yields

$$\hat E(t,z) = 2{E_0}\sin (kz)\left[{\hat q\cos (\omega t) + \hat p\sin (\omega t)} \right].$$

From this, it is clear that the two quadrature operators are always $\pi /2$ out of phase, and thus always in different quadratures (hence the name).

It is standard to define an uncertainty relation using the generalized Heisenberg uncertainty principle for non-commuting operators $\Delta \hat A\,\Delta \hat B \ge \frac{1}{2}|\langle [\hat A,\hat B]\rangle |$. This yields

$$\Delta \hat q\Delta \hat p \ge \frac{1}{4}.$$

If the equality in this expression is obtained, we have a so-called “minimum uncertainty state” {see Eq. (2.56) in [28]}.
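Both the commutator underlying the uncertainty relation above and the fact that the vacuum saturates it can be checked with small matrices; in a truncated Fock space the truncation only disturbs the last diagonal entry of the commutator. The dimension below is our own choice.

```python
import numpy as np

# Numerical check of the quadrature definitions and the uncertainty
# relation above, in a truncated Fock space (truncation only disturbs
# the last diagonal entry of the commutator).

dim = 30
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T

q = 0.5 * (a + adag)
p = (a - adag) / 2j

comm = q @ p - p @ q                 # (i/2) * identity away from the cutoff
print(comm[0, 0])                    # 0.5j

vac = np.zeros(dim); vac[0] = 1.0    # the vacuum state, where <q> = <p> = 0
var_q = float(vac @ (q @ q) @ vac)
var_p = float((vac @ (p @ p) @ vac).real)
print(np.sqrt(var_q * var_p))        # 0.25: the vacuum saturates the bound
```

The product $\Delta\hat q\,\Delta\hat p = 1/4$ for the vacuum confirms that it is a minimum uncertainty state, as stated above.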

It is helpful to represent quantum-optical states as shapes in the quadrature space defined by Eqs. (A4) and (A5). Specific single-mode states are represented in this diagram by their variances (physical extent and shape of the states) and by their average values (coordinate position of the states). The diagram can also be interpreted as an intensity-phase plot using the circular-polar coordinate system (with radial distance as intensity, and angle as phase). States rotate about the origin as they evolve with time. So, for example, the coherent state will trace out sinusoids in its quadrature values.

A number of such visualizations can be seen in Fig. 14. The shapes represent the quadrature variances as defined in Eq. (A7); technically, they can be thought of as a slice through the Wigner distribution at half-maximum. The states themselves will be briefly described, and referenced, in the rest of this section and the next.

First, there is the vacuum state, representing an optical mode unoccupied by photons. Since in the quantum regime the zero-point energy is always present, this state still has finite quadrature variances. Second, and vital for the description of interferometers, are the coherent and squeezed states. Coherent states can be defined in three different ways: as the state that obtains the equality in the uncertainty relation with $\Delta p = \Delta q$, as a displaced vacuum state with the displacement operator defined as

$$\hat D(\alpha) \equiv {e^{(\alpha {{\hat a}^\dagger} - {\alpha ^*}\hat a)}},$$
which displaces the state it operates on in quadrature space by an amount $\alpha$ and generates a coherent state as $\hat D(\alpha)|0\rangle = |\alpha \rangle$ {see Fig. 14, and Eqs. (3.30)–(3.31) in [28]}, and as eigenstates of the annihilation operator $\hat a|\alpha \rangle = \alpha |\alpha \rangle$. For optical fields, the definitions are equivalent. Coherent states are also the most “classical” in the sense that their electric fields have a coherent waveform resembling a classical harmonic oscillator and that they are unaffected by the removal of a quantum of light (i.e., that they are eigenstates of annihilation).
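The equivalence of these definitions can be probed numerically. In the sketch below we construct $|\alpha\rangle$ from its well-known Fock-basis expansion $e^{-|\alpha|^2/2}\sum_n (\alpha^n/\sqrt{n!})\,|n\rangle$ (used here in place of applying $\hat D(\alpha)$ directly) and verify the annihilation-eigenstate property; the dimension and the value of $\alpha$ are illustrative.

```python
import math
import numpy as np

# A numerical cross-check of the coherent-state definitions above.  We
# build |alpha> from its well-known Fock expansion,
# e^{-|alpha|^2/2} sum_n alpha^n/sqrt(n!) |n>, rather than applying the
# displacement operator, and verify the annihilation-eigenstate property.
# The dimension and alpha are illustrative choices.

dim, alpha = 40, 0.8
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator

coh = np.array([math.exp(-abs(alpha) ** 2 / 2) * alpha ** n
                / math.sqrt(math.factorial(n)) for n in range(dim)])

residual = a @ coh - alpha * coh
print(np.linalg.norm(residual))   # ~0: |alpha> is an eigenstate of a
```

The residual is limited only by the Fock-space cutoff, which for $\alpha = 0.8$ and 40 levels is negligible.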

Fig. 14. Quadrature diagram: a configuration space defined by the two quantum-optical quadratures $\hat p$ and $\hat q$. Six states are shown: the fundamental vacuum state (gray); a displaced vacuum state/coherent state (red), both symmetric minimum uncertainty states; a number state (green), with completely defined intensity and completely undefined phase; a phase state with the opposite uncertainties from the number state (orange-brown; note that in principle this state should be infinitesimally thin, but then it would not be visible); and two squeezed states, a vacuum state squeezed in the quadrature direction (blue) and a coherent state squeezed in the phase direction (purple).


The squeezed states take their name from the uncertainty relation shown above. If $p$ and $q$ are associated with coordinates in quadrature space, then $\Delta p$ and $\Delta q$ can be thought of as distances and the product $\Delta p\Delta q$ as an area. If equality is obtained in the uncertainty relation, this sets a specific minimum area of uncertainty in quadrature space. However, though this area may not be reduced below this minimum, its shape may be altered, allowing a reduction of the uncertainty along one quadrature at the cost of increasing it along another. This "squeezing" of the area of uncertainty gives squeezed states their name. Quantum-optical states may also be squeezed along other bases (for example, photon number and phase). In Fig. 14, two squeezed states are displayed: one is a vacuum state that has been quadrature squeezed, and the other is a coherent state that has been phase squeezed.

Mathematically the squeezing operation on a single quantum-optical mode may be described as

$$\hat S(\xi) = {\exp}\left[{\frac{1}{2}\left({{\xi ^*}{{\hat a}^2} - \xi {{\hat a}^{\dagger 2}}} \right)} \right],$$
where $\xi$ is known as the squeezing parameter and quantifies the amount of squeezing. Acting on a mode operator, this produces
$${\hat S^\dagger}(\xi)\hat a\hat S(\xi) = \hat a\cosh (r) - {\hat a^\dagger}{e^{{i\theta}}}\sinh (r),$$
$${\hat S^\dagger}(\xi){\hat a^\dagger}\hat S(\xi) = {\hat a^\dagger}\cosh (r) - \hat a{e^{- i\theta}}\sinh (r),$$
where we have used the polar decomposition $\xi = r{e^{{i\theta}}}$ {see Eqs. (7.11)–(7.12) in [28]}. Equivalently, the squeezing operator can act on a state {see Eq. (7.68) in [28]}; acting on the vacuum, it yields
$$\hat S(\xi)|0\rangle \equiv |\xi \rangle = \frac{1}{{\sqrt {\cosh (r)}}}\sum\limits_{n = 0}^\infty {(- 1)^n}\frac{{\sqrt {(2n)!}}}{{{2^n}n!}}{e^{{in\theta}}}{[\tanh (r)]^n}|2n\rangle .$$

So the squeezing operation adds a superposition of even numbers of photons to the vacuum.
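This even-photon-number structure can be read off the squeezed-vacuum expansion above, whose amplitudes give $P(2n) = \frac{(2n)!}{(2^n n!)^2}\,\frac{\tanh^{2n}r}{\cosh r}$. The sketch below (helper name and cutoff are our own choices) tabulates these probabilities and checks their normalization.

```python
import math

# Photon-number probabilities of the squeezed vacuum, read off from the
# expansion above: only even photon numbers 2n appear, with probability
# P(2n) = (2n)! / (2^n n!)^2 * tanh(r)^(2n) / cosh(r).
# The helper name and cutoff are our own choices.

def squeezed_vacuum_probs(r, n_max=200):
    probs = {}
    for n in range(n_max + 1):
        probs[2 * n] = (math.factorial(2 * n) / (2 ** n * math.factorial(n)) ** 2
                        * math.tanh(r) ** (2 * n) / math.cosh(r))
    return probs

p = squeezed_vacuum_probs(r=1.0)
print(sum(p.values()))   # ~1: the distribution is normalized
print(p[0])              # 1/cosh(1): the vacuum term still dominates
```

Only even photon numbers carry weight, in line with the picture of photons being added to the vacuum in pairs.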

Since squeezed light is a fundamental resource in nonlinear interferometry, it is important to see how it is produced physically. Inside of certain materials, where light fields spatially coincide, their component electric (and magnetic) fields may no longer simply add as vectors as they do in free space. Consider the expansion of the polarization of a general dielectric material {see Eq. (1.1.2) and Sec. (1.3) of [112]}:

$$\frac{{{P}_{i}}}{{{\epsilon}_{o}}}=\sum\limits_{j}{}\chi _{{ij}}^{(1)}{{E}_{j}}+\sum\limits_{{jk}}{}\chi _{{ijk}}^{(2)}{{E}_{j}}{{E}_{k}}+\sum\limits_{{jkl}}{}\chi _{{ijkl}}^{(3)}{{E}_{j}}{{E}_{k}}{{E}_{l}}+\ldots.$$

The polarization $P$ represents how a dielectric material reacts to the presence of electric fields. The index $i$ runs over the three vector components. The constant $\chi _{{ij}}^{(1)}$ is called the first order (or linear) susceptibility; it is a complex tensor constant. Likewise, $\chi _{{ijk}}^{(2)}$ is the second order susceptibility (also a tensor constant), and so on. Most materials have only a nonnegligible ${\chi ^{(1)}}$. In this case, as an electric field interacts with the material, it induces an oscillating dipole moment, which in turn creates an oscillating electric field, and so on. Therefore, the light propagates through the material with a dispersion and absorption determined by the real and imaginary parts of ${\chi ^{(1)}}$, respectively.

However, now consider when the other terms in this series are nonnegligible. Take the case of a material with a large ${\chi ^{(2)}}$ for example. Such materials are those that do not display inversion symmetry (non-centrosymmetric crystals and chiral molecules); see [112]. A commonly used material in experiments is $\beta$-barium borate. Take an electric field that has two separate frequency components,

$$E(t) = {E_1}\left({{e^{i{\omega _1}t}} + {e^{- i{\omega _1}t}}} \right) + {E_2}\left({{e^{i{\omega _2}t}} + {e^{- i{\omega _2}t}}} \right),$$
and input it into the second term of the polarization, ignoring the tensor nature of ${\chi ^{(2)}}$, i.e., assuming co-linearity {see [112] Eqs. (1.2.3)–(1.2.7)}:
$$\begin{split} {{P}^{(2)}}(t) &= {{\epsilon}_{o}}{{\chi}^{(2)}}\left[E_{1}^{2}\left({{e}^{2i{{\omega}_{1}}t}}+{{e}^{-2i{{\omega}_{1}}t}} \right) \right.+E_{2}^{2}\left({{e}^{2i{{\omega}_{2}}t}}+{{e}^{-2i{{\omega}_{2}}t}} \right) \\&\quad +2{{E}_{1}}{{E}_{2}}\left({{e}^{i({{\omega}_{1}}+{{\omega}_{2}})t}}+{{e}^{-i({{\omega}_{1}}+{{\omega}_{2}})t}} \right)\\&\quad\left. +2{{E}_{1}}{{E}_{2}}\left({{e}^{i({{\omega}_{1}}-{{\omega}_{2}})t}}+{{e}^{-i({{\omega}_{1}}-{{\omega}_{2}})t}} \right)+2E_{1}^{2}+2E_{2}^{2} \right].\end{split}$$

Here we have assumed that the electric amplitudes are real. Equation (A15) gives rise to several interesting phenomena, but we will be interested in the effect caused by the terms that oscillate as ${\omega _1} - {\omega _2}$. These tell us that if two light beams of different frequencies, ${\omega _1}$ and ${\omega _2}$, pump a material with a large second order susceptibility ${\chi ^{(2)}}$, a new light beam at the frequency ${\omega _3} = {\omega _1} - {\omega _2}$ is generated. This is called "difference frequency generation."
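The frequency components predicted by the second-order polarization above can be seen directly by squaring a sampled two-color field and inspecting its spectrum. In the sketch below, the frequencies are in arbitrary units and the amplitudes are set to 1 for simplicity.

```python
import numpy as np

# Numerical illustration of the second-order polarization above:
# squaring a sampled two-colour field produces components at 2*w1, 2*w2,
# w1 + w2, w1 - w2, and DC.  Frequencies are in arbitrary units and the
# amplitudes are set to 1 for simplicity.

w1, w2 = 50.0, 30.0
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
E = np.cos(w1 * t) + np.cos(w2 * t)               # real two-colour field

spectrum = np.abs(np.fft.rfft(E ** 2)) / len(t)   # normalized magnitudes
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0]) * 2.0 * np.pi

peaks = sorted(int(round(float(f))) for f in freqs[spectrum > 0.05])
print(peaks)   # [0, 20, 60, 80, 100]: DC, w1-w2, 2*w2, w1+w2, 2*w1
```

The peak at $\omega_1 - \omega_2 = 20$ is the difference-frequency component discussed in the text; the others are second harmonics, sum frequency, and optical rectification (DC).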

Quantum mechanically, the ${\omega _2}$ mode need not be populated by photons (that is, it may be in the vacuum state) for the process to occur. In this case, the interpretation is that a photon from a strong beam (called the pump) splits into two daughter photons inside the optical nonlinearity. This can be understood in the energy-level picture as a single electronic excitation to a higher virtual level, followed by the emission of two photons as that state relaxes back to the ground state via an intermediary virtual level, such that the total energy of the two later transitions equals the energy of the first. If the two daughter photons are in the same spatial and spectral modes, then we can write the interaction Hamiltonian for this process as

$${\hat H_I} = i\hbar {\chi ^{(2)}}\left({{{\hat a}^2}{{\hat b}^\dagger} - {{\hat a}^{\dagger 2}}\hat b} \right),$$
where $\hat b$ represents the pump mode, and $\hat a$ represents the mode of the daughter photons. The second term expresses one photon being transformed into two, and the first term is present because the Hamiltonian must be Hermitian {see Sec. (7.2) of [28]}. Suppose the pump beam is in a coherent state (this is called the parametric approximation, meaning that the pump is undepleted: no photons are lost); then we can write for the daughter fields alone
$$\begin{split} \langle \beta |{{\hat H}_I}|\beta \rangle & = i\hbar {\chi ^{(2)}}|\beta |\left({{{\hat a}^2}{e^{i{\omega _1}t}} - {{\hat a}^{\dagger 2}}{e^{- i{\omega _1}t}}} \right)\\& = i\hbar {\chi ^{(2)}}|\beta |\left({{{\hat a}^2}{e^{i({\omega _1} - 2{\omega _2})t}} - {{\hat a}^{\dagger 2}}{e^{- i({\omega _1} - 2{\omega _2})t}}} \right).\end{split}$$
where in the second line the time dependence of $\hat a$ and ${\hat a^\dagger}$ has been made explicit. Here we choose ${\omega _1} = 2{\omega _2}$, so the Hamiltonian is in fact time independent, and we can write the time evolution operator for the system simply as
$$\hat U(t) = {e^{- i{{\hat H}_I}t/\hbar}} = {e^{{\chi ^{(2)}}t|\beta |\left({{{\hat a}^2} - {{\hat a}^{\dagger 2}}} \right)}}.$$

Now compare this to Eq. (A9), and we see that we have the single-mode squeezing operator, with $\xi = 2{\chi ^{(2)}}t|\beta |$. Furthermore, we could get two-mode squeezing for the case where the daughter photons are not in the same modes. So we have a way to physically squeeze vacuum (and other) states.

This is the process that occurs in both Figs. 2 and 3 in the nonlinear crystals, creating squeezed light between the two non-pump output modes, typically called “signal” and “idler” for historical reasons not relevant here.

It should be remarked that in realistic settings, the Hamiltonian at different times will not commute since frequency values are not completely sharp and have some distribution. Also, higher order terms do indeed contribute. So the above is an approximation, though one that is almost universally standard and in good agreement with experiments involving two-photon coincidence measurements.

Now, if we take $\hat U(t)$ acting on the vacuum and expand it as a power series (omitting signs and numerical prefactors),

$$\hat U(t)|0\rangle = |0\rangle + {\chi ^{(2)}}t|\beta ||2\rangle + {({\chi ^{(2)}}t|\beta |)^2}|4\rangle + \ldots.$$

For a strong coherent pump (large $|\beta |$), several of these terms can contribute. In this case, one pump photon can split into two (second term), two pump photons can combine and then split into four (third term), and so on. However, if the pump is not very strong, only the first two terms will be nonnegligible. In this case, we get the state vector for SPDC.

Now we move away from the description of the states created inside nonlinear interferometers and look at the other components. The transformation that represents a phase shift is

$${\hat a_2} = {e^{{i\phi}}}{\hat a_1}.$$

The theoretical description also needs the operator transformations for BSs, which are given as

$${\hat a_2} = \frac{1}{{\sqrt 2}}\left[{{{\hat a}_1} + i{{\hat b}_1}} \right],\quad {\hat b_2} = \frac{1}{{\sqrt 2}}\left[{{{\hat b}_1} + i{{\hat a}_1}} \right],$$
where these transformations give the two output fields of a 50:50 BS with two input fields. Field operators $\hat a$ and $\hat b$ represent the two modes (both input and output), and subscripts 1 and 2 denote before and after the BS (input and output), respectively. The $i$ factor on the opposite mode results from the phase shift the mode acquires under reflection.
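Because the phase-shift and BS transformations above are linear in the mode operators, each element acts as a small matrix on the operator pair $(\hat a, \hat b)$; checking that the composed matrix is unitary confirms that the bosonic commutation relations are preserved. This is a minimal sketch with our own names and an arbitrary example phase.

```python
import numpy as np

# The phase-shift and BS transformations above are linear in the mode
# operators, so each element acts as a 2x2 matrix on (a, b).  Unitarity
# of the composed matrix guarantees that the bosonic commutation
# relations are preserved.  Names and the example phase are ours.

bs = np.array([[1.0, 1.0j],
               [1.0j, 1.0]]) / np.sqrt(2.0)   # 50:50 BS acting on (a, b)

def phase(phi):
    """Phase shift phi applied to mode a only."""
    return np.diag([np.exp(1.0j * phi), 1.0])

U = bs @ phase(0.7) @ bs                      # a Mach-Zehnder-like sequence
print(np.allclose(U @ U.conj().T, np.eye(2))) # True: the composition is unitary
```

Composing such matrices is exactly the bookkeeping used when propagating operators through a device, as described in the following paragraphs.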

Given these transformations, the propagation of the quantum state through the device, from input to output, is straightforward, if sometimes somewhat burdensome. There are two basic methods to analyze the optical modes. The first is to take the second term in Eq. (A19)—that is, to assume spontaneous parametric downconversion. The mode operators acting on the vacuum are then transformed according to the relations above, and the nonlinearities act on the overall state by adding creation operators to the relevant modes.

This approach is only an approximation, since the full description of the nonlinearities is given by all terms in Eq. (A19) together with the action of the operators representing all optical elements in the device. However, performing these operations on the full state is highly non-trivial. It is much more efficient to start with a target detection operator and propagate it backwards through the device to the coherent or vacuum input states. All of the elements of the optical device then become linear transformations on the set of creation and annihilation operators.
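As a sketch of this bookkeeping applied to the low-gain ZWM interferometer of Fig. 2, the code below (our own illustration; the mode labels and the restriction to a single-pair state are ours) tracks the joint amplitudes over (signal output port, idler mode), traces over the undetected idler, and recovers a signal fringe whose visibility equals the idler transmission $|T|$. (The phase offset of the fringe depends on the BS convention chosen above.)

```python
import numpy as np

def zwm_signal_intensity(phi, T):
    """Signal intensity at output D1 of a low-gain Zou-Wang-Mandel
    interferometer with idler transmission T through the object.
    Single-pair amplitudes are tracked over (signal port, idler mode)."""
    t = complex(T)
    loss = np.sqrt(max(0.0, 1.0 - abs(t) ** 2))
    amp = {}  # (signal output port, idler mode) -> joint amplitude
    # Pair from Q1 (amplitude 1/sqrt 2): signal S1 -> (D1 + i D2)/sqrt 2,
    # idler emitted into the common aligned mode "I".
    amp[("D1", "I")] = 0.5
    amp[("D2", "I")] = 0.5j
    # Pair from Q2 (amplitude e^{i phi}/sqrt 2): signal S2 -> (D2 + i D1)/sqrt 2;
    # the idler is transmitted (t) into "I" or scattered (loss) into "lost",
    # which carries which-source information whenever |T| < 1.
    a2 = np.exp(1j * phi) / 2
    amp[("D1", "I")] += a2 * t * 1j
    amp[("D2", "I")] += a2 * t
    amp[("D1", "lost")] = a2 * loss * 1j
    amp[("D2", "lost")] = a2 * loss
    # The idler is never detected: sum |amplitude|^2 over its modes at D1.
    return sum(abs(a) ** 2 for (port, _), a in amp.items() if port == "D1")

def fringe_visibility(T, n=801):
    phis = np.linspace(0.0, 2 * np.pi, n)
    I = np.array([zwm_signal_intensity(p, T) for p in phis])
    return (I.max() - I.min()) / (I.max() + I.min())
```

With perfect alignment and no object ($T = 1$) the fringe has unit visibility even though the idler is never detected; blocking the idler ($T = 0$) destroys the interference entirely, which is precisely the induced-coherence effect described in the tutorial.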

With these basic building blocks, we can understand how the fields, represented by operators or states or diagrams, propagate through the device, and we can use these as tools to understand the various effects present.

Funding

Brazilian Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro—FAPERJ; Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES—Brasil) (Finance Code 001); Fundacja na rzecz Nauki Polskiej; European Regional Development Fund (POIR.04.04.00-00-3004/17-00); Deutsche Forschungsgemeinschaft (RA 2842/1-1); Bundesministerium für Bildung und Forschung (BMBF) (QUIN (13N15402), SimQPla (13N15944)).

Acknowledgment

We thank Inna Kviatkovsky, Helen M Chrzanowski, Victoria Borish, Jorge Fuenzalida, Armin Hochrainer, and Anton Zeilinger for numerous discussions. G.B.L. acknowledges support from the Brazilian National Council for Scientific and Technological Development (CNPq), from the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro—FAPERJ, and from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES—Brasil). R.L. acknowledges the funding by the Foundation for Polish Science within the FIRST TEAM project “Spatiotemporal photon correlation measurements for quantum metrology and super-resolution microscopy,” co-financed by the European Union under the European Regional Development Fund. S.R. acknowledges funding from Deutsche Forschungsgemeinschaft as well as from Bundesministerium für Bildung und Forschung (BMBF) within projects QUIN (13N15402) and SimQPla (13N15944).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. G. Brida, M. Genovese, and I. R. Berchera, “Experimental realization of sub-shot-noise quantum imaging,” Nat. Photonics 4, 227–230 (2010). [CrossRef]  

2. J. Sabines-Chesterking, A. McMillan, P. Moreau, S. Joshi, S. Knauer, E. Johnston, J. Rarity, and J. Matthews, “Twin-beam sub-shot-noise raster-scanning microscope,” Opt. Express 27, 30810–30818 (2019). [CrossRef]  

3. G. Triginer Garces, H. M. Chrzanowski, S. Daryanoosh, V. Thiel, A. L. Marchant, R. B. Patel, P. C. Humphreys, A. Datta, and I. A. Walmsley, “Quantum-enhanced stimulated emission detection for label-free microscopy,” Appl. Phys. Lett. 117, 024002 (2020). [CrossRef]  

4. C. Casacio, L. Madsen, A. Terrasson, M. Waleed, K. Barnscheidt, B. Hage, M. Taylor, and W. Bowen, “Quantum-enhanced nonlinear microscopy,” Nature 594, 201–206 (2021). [CrossRef]  

5. I. F. Santos, M. A. Sagioro, C. H. Monken, and S. Pádua, “Resolution and apodization in images generated by twin photons,” Phys. Rev. A 67, 033812 (2003). [CrossRef]  

6. I. F. Santos, L. Neves, G. Lima, C. Monken, and S. Pádua, “Generation and detection of magnified images via illumination by entangled photon pairs,” Phys. Rev. A 72, 033802 (2005). [CrossRef]  

7. O. Schwartz, J. M. Levitt, R. Tenne, S. Itzhakov, Z. Deutsch, and D. Oron, “Superresolution microscopy with quantum emitters,” Nano Lett. 13, 5832–5836 (2013). [CrossRef]  

8. A. Classen, J. von Zanthier, M. O. Scully, and G. S. Agarwal, “Superresolution via structured illumination quantum correlation microscopy,” Optica 4, 580–587 (2017). [CrossRef]  

9. M. Unternährer, B. Bessire, L. Gasparini, M. Perenzoni, and A. Stefanov, “Super-resolution quantum imaging at the Heisenberg limit,” Optica 5, 1150–1154 (2018). [CrossRef]  

10. R. Tenne, U. Rossman, B. Rephael, Y. Israel, A. Krupinski-Ptaszek, R. Lapkiewicz, Y. Silberberg, and D. Oron, “Super-resolution enhancement by quantum image scanning microscopy,” Nat. Photonics 13, 116–122 (2019). [CrossRef]  

11. V. Giovannetti, S. Lloyd, and L. Maccone, “Advances in quantum metrology,” Nat. Photonics 5, 222–229 (2011). [CrossRef]  

12. E. Polino, M. Valeri, N. Spagnolo, and F. Sciarrino, “Photonic quantum metrology,” AVS Quantum Sci. 2, 024703 (2020). [CrossRef]  

13. G. Tóth and I. Apellaniz, “Quantum metrology from a quantum information science perspective,” J. Phys. A 47, 424006 (2014). [CrossRef]  

14. A. G. White, J. R. Mitchell, O. Nairz, and P. G. Kwiat, ““Interaction-free” imaging,” Phys. Rev. A 58, 605–613 (1998). [CrossRef]  

15. D. Klyshko, “Effect of focusing on photon correlation in parametric light scattering,” Zh. Eksp. Teor. Fiz. 94, 82–90 (1988).

16. A. Belinskii and D. Klyshko, “Two-photon optics: diffraction, holography, and transformation of two-dimensional signals,” Sov. J. Exp. Theor. Phys. 78, 259–262 (1994).

17. T. Pittman, Y. Shih, D. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995). [CrossRef]  

18. A. Gatti, E. Brambilla, and L. Lugiato, “Quantum imaging,” Prog. Opt. 51, 251–348 (2008). [CrossRef]  

19. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “Two-color ghost imaging,” Phys. Rev. A 79, 033808 (2009). [CrossRef]  

20. R. S. Aspden, D. S. Tasca, R. W. Boyd, and M. J. Padgett, “EPR-based ghost imaging using a single-photon-sensitive camera,” New J. Phys. 15, 073032 (2013). [CrossRef]  

21. P.-A. Moreau, E. Toninelli, T. Gregory, and M. J. Padgett, “Imaging with quantum states of light,” Nat. Rev. Phys. 1, 367–380 (2019). [CrossRef]  

22. G. B. Lemos, V. Borish, G. D. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons,” Nature 512, 409–412 (2014). [CrossRef]  

23. M. Lahiri, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Theory of quantum imaging with undetected photons,” Phys. Rev. A 92, 013832 (2015). [CrossRef]  

24. B. Viswanathan, G. B. Lemos, and M. Lahiri, “Position correlation enabled quantum imaging with undetected photons,” Opt. Lett. 46, 3496–3499 (2021). [CrossRef]  

25. G. B. Lemos, V. Borish, S. Ramelow, R. Lapkiewicz, G. Cole, and A. Zeilinger, “Quantum imaging with undetected photons,” US Patent 9,557,262 B2 (January 31, 2017).

26. G. B. Lemos, V. Borish, S. Ramelow, R. Lapkiewicz, G. Cole, and A. Zeilinger, “Quantum imaging with undetected photons,” European Union Patent 2 887 137 B1 (October 31, 2018).

27. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995).

28. C. Gerry and P. Knight, Introductory Quantum Optics (Cambridge University, 2005).

29. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (W. H. Freeman, 2005).

30. S. P. Walborn, C. Monken, S. Pádua, and P. S. Ribeiro, “Spatial correlations in parametric down-conversion,” Phys. Rep. 495, 87–139 (2010). [CrossRef]  

31. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics (Addison-Wesley, 1964), Vol. III.

32. L. Mandel, “Coherence and indistinguishability,” Opt. Lett. 16, 1882–1883 (1991). [CrossRef]  

33. B.-G. Englert, “Fringe visibility and which-way information: an inequality,” Phys. Rev. Lett. 77, 2154–2157 (1996). [CrossRef]  

34. E. C. G. Sudarshan and T. Rothman, “The two-slit interferometer reexamined,” Am. J. Phys. 59, 592–595 (1991). [CrossRef]  

35. K. P. Zetie, S. F. Adams, and R. M. Tocknell, “How does a Mach-Zehnder interferometer work?” Phys. Educ. 35, 46–48 (2000). [CrossRef]  

36. H. Rauch and S. A. Werner, Neutron Interferometry: Lessons in Experimental Quantum Mechanics, Wave-Particle Duality, and Entanglement (Oxford University, 2015), Vol. 12.

37. H.-A. Bachor, T. C. Ralph, S. Lucia, and T. C. Ralph, A Guide to Experiments in Quantum Optics (Wiley, 2004), Vol. 1.

38. A. C. Elitzur and L. Vaidman, “Quantum mechanical interaction-free measurements,” Found. Phys. 23, 987–997 (1993). [CrossRef]  

39. P. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. A. Kasevich, “Interaction-free measurement,” Phys. Rev. Lett. 74, 4763–4766 (1995). [CrossRef]  

40. Z. Ou, L. Wang, X. Zou, and L. Mandel, “Coherence in two-photon down-conversion induced by a laser,” Phys. Rev. A 41, 1597–1601 (1990). [CrossRef]  

41. M. A. Horne, A. Shimony, and A. Zeilinger, “Two-particle interferometry,” Phys. Rev. Lett. 62, 2209–2212 (1989). [CrossRef]  

42. X. Y. Zou, L. J. Wang, and L. Mandel, “Induced coherence and indistinguishability in optical interference,” Phys. Rev. Lett. 67, 318–321 (1991). [CrossRef]  

43. L. Wang, X. Zou, and L. Mandel, “Induced coherence without induced emission,” Phys. Rev. A 44, 4614–4622 (1991). [CrossRef]  

44. H. M. Wiseman and K. Molmer, “Induced coherence with and without induced emission,” Phys. Lett. A 270, 245–248 (2000). [CrossRef]  

45. M. Lahiri, A. Hochrainer, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Nonclassicality of induced coherence without induced emission,” Phys. Rev. A 100, 053839 (2019). [CrossRef]  

46. M. I. Kolobov, E. Giese, S. Lemieux, R. Fickler, and R. W. Boyd, “Controlling induced coherence for quantum imaging,” J. Opt. 19, 054003 (2017). [CrossRef]  

47. L. Wang, X. Zou, and L. Mandel, “Observation of induced coherence in two-photon downconversion,” J. Opt. Soc. Am. B 8, 978–980 (1991). [CrossRef]  

48. A. Hochrainer, M. Lahiri, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Quantifying the momentum correlation between two light beams by detecting one,” Proc. Natl. Acad. Sci. USA 114, 1508–1511 (2016). [CrossRef]  

49. M. Lahiri, A. Hochrainer, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Twin photon correlations in single-photon interference,” Phys. Rev. A 96, 013822 (2017). [CrossRef]  

50. B. Yurke, S. L. McCall, and J. R. Klauder, “SU(2) and SU(1,1) interferometers,” Phys. Rev. A 33, 4033–4054 (1986). [CrossRef]  

51. T. J. Herzog, J. G. Rarity, H. Weinfurter, and A. Zeilinger, “Frustrated two-photon creation via interference,” Phys. Rev. Lett. 72, 629–632 (1994). [CrossRef]  

52. M. Chekhova and Z. Ou, “Nonlinear interferometers in quantum optics,” Adv. Opt. Photon. 8, 104–155 (2016). [CrossRef]  

53. A. Burlakov, M. Chekhova, D. Klyshko, S. Kulik, A. Penin, Y. Shih, and D. Strekalov, “Interference effects in spontaneous two-photon parametric scattering from two macroscopic regions,” Phys. Rev. A 56, 3214–3225 (1997). [CrossRef]  

54. Z. Ou and X. Li, “Quantum SU(1,1) interferometers: basic principles and applications,” APL Photon. 5, 080902 (2020). [CrossRef]  

55. A. Zajonc, L. Wang, X. Zou, and L. Mandel, “Quantum eraser,” Nature 353, 507–508 (1991). [CrossRef]  

56. P. G. Kwiat, A. M. Steinberg, and R. Y. Chiao, “Three proposed “quantum erasers”,” Phys. Rev. A 49, 61–68 (1994). [CrossRef]  

57. A. Vanselow, P. Kaufmann, I. Zorin, B. Heise, H. M. Chrzanowski, and S. Ramelow, “Frequency-domain optical coherence tomography with undetected mid-infrared photons,” Optica 7, 1729–1736 (2020). [CrossRef]  

58. A. Vallés, G. Jiménez, L. J. Salazar-Serrano, and J. P. Torres, “Optical sectioning in induced coherence tomography with frequency-entangled photons,” Phys. Rev. A 97, 023824 (2018). [CrossRef]  

59. A. V. Paterova, H. Yang, C. An, D. A. Kalashnikov, and L. A. Krivitsky, “Tunable optical coherence tomography in the infrared range using visible photons,” Quantum Sci. Technol. 3, 025008 (2018). [CrossRef]  

60. L. Cui, J. Su, J. Li, Y. Liu, X. Li, and Z. Y. Ou, “Quantum state engineering by nonlinear quantum interference,” Phys. Rev. A 102, 033718 (2020). [CrossRef]  

61. F. Hudelist, J. Kong, C. Liu, J. Jing, Z. Y. Ou, and W. Zhang, “Quantum metrology with parametric amplifier-based photon correlation interferometers,” Nat. Commun. 5, 3049 (2014). [CrossRef]  

62. D. A. Kalashnikov, A. V. Paterova, S. P. Kulik, and L. A. Krivitsky, “Infrared spectroscopy with visible light,” Nat. Photonics 10, 98–101 (2016). [CrossRef]  

63. T. S. Iskhakov, S. Lemieux, A. Perez, R. W. Boyd, G. Leuchs, and M. V. Chekhova, “Nonlinear interferometer for tailoring the frequency spectrum of bright squeezed vacuum,” J. Mod. Opt. 63, 64–70 (2015). [CrossRef]  

64. I. Kviatkovsky, H. M. Chrzanowski, E. G. Avery, H. Bartolomaeus, and S. Ramelow, “Microscopy with undetected photons in the mid-infrared,” Sci. Adv. 6, eabd0264 (2020). [CrossRef]  

65. A. V. Paterova, S. M. Maniam, H. Yang, G. Grenci, and L. A. Krivitsky, “Hyperspectral infrared microscopy with visible light,” Sci. Adv. 6, eabd0460 (2020). [CrossRef]  

66. A. Rojas-Santana, G. J. Machado, D. Lopez-Mago, and J. P. Torres, “Frequency-correlation requirements on the biphoton wave function in an induced-coherence experiment between separate sources,” Phys. Rev. A 102, 053711 (2020). [CrossRef]  

67. B. Chen, C. Qiu, S. Chen, J. Guo, L. Q. Chen, Z. Y. Ou, and W. Zhang, “Atom-light hybrid interferometer,” Phys. Rev. Lett. 115, 043602 (2015). [CrossRef]  

68. P. Lähteenmäki, G. S. Paraoanu, J. Hassel, and P. J. Hakonen, “Coherence and multimode correlations from vacuum fluctuations in a microwave superconducting cavity,” Nat. Commun. 7, 12548 (2016). [CrossRef]  

69. D. E. Bruschi, C. Sabín, and G. S. Paraoanu, “Entanglement, coherence, and redistribution of quantum resources in double spontaneous down-conversion processes,” Phys. Rev. A 95, 062324 (2017). [CrossRef]  

70. Z. J. Ou, Quantum Optics for Experimentalists (World Scientific, 2017).

71. P. A. M. Dirac, “The quantum theory of the emission and absorption of radiation,” Proc. R. Soc. London 114, 243–265 (1927). [CrossRef]  

72. W. H. Louisell, “Amplitude and phase uncertainty relations,” Phys. Lett. 7, 60–61 (1963). [CrossRef]  

73. L. Susskind and J. Glogower, “Quantum mechanical phase and time operator,” Phys. Phys. Fiz. 1, 49 (1964). [CrossRef]  

74. S. Barnett and D. Pegg, “On the Hermitian optical phase operator,” J. Mod. Opt. 36, 7–19 (1989). [CrossRef]  

75. J. W. Noh, A. Fougères, and L. Mandel, “Measurement of the quantum phase by photon counting,” Phys. Rev. Lett. 67, 1426–1429 (1991). [CrossRef]  

76. Z. Ou, “Fundamental quantum limit in precision phase measurement,” Phys. Rev. A 55, 2598–2609 (1997). [CrossRef]  

77. S. Pirandola, B. R. Bardhan, T. Gehring, C. Weedbrook, and S. Lloyd, “Advances in photonic quantum sensing,” Nat. Photonics 12, 724–733 (2018). [CrossRef]  

78. D. Braun, G. Adesso, F. Benatti, R. Floreanini, U. Marzolino, M. W. Mitchell, and S. Pirandola, “Quantum-enhanced measurements without entanglement,” Rev. Mod. Phys. 90, 035006 (2018). [CrossRef]  

79. N. R. Miller, S. Ramelow, and W. N. Plick, “Versatile super-sensitive metrology using induced coherence,” Quantum 5, 458 (2021). [CrossRef]  

80. W. N. Plick, J. P. Dowling, and G. S. Agarwal, “Coherent-light-boosted, sub-shot noise, quantum interferometry,” New J. Phys. 12, 083014 (2010). [CrossRef]  

81. W. Du, J. Kong, J. Jia, S. Ming, C.-H. Yuan, J. Chen, Z. Ou, M. W. Mitchell, and W. Zhang, “SU(2)-in-SU(1,1) nested interferometer for high sensitivity, loss-tolerant quantum metrology,” Phys. Rev. Lett. 128, 033601 (2022). [CrossRef]  

82. E. Giese, S. Lemieux, M. Manceau, R. Fickler, and R. W. Boyd, “Phase sensitivity of gain-unbalanced nonlinear interferometers,” Phys. Rev. A 96, 053863 (2017). [CrossRef]  

83. J. Fuenzalida, A. Hochrainer, G. B. Lemos, E. Ortega, R. Lapkiewicz, M. Lahiri, and A. Zeilinger, “Resolution of quantum imaging with undetected photons,” Quantum 6, 646 (2022). [CrossRef]  

84. B. Viswanathan, G. B. Lemos, and M. Lahiri, “Resolution limit in quantum imaging with undetected photons using position correlations,” Opt. Express 29, 38185–38198 (2021). [CrossRef]  

85. I. Kviatkovsky, H. M. Chrzanowski, and S. Ramelow, “Mid-infrared microscopy via position correlations of undetected photons,” Opt. Express 30, 5916–5925 (2022). [CrossRef]  

86. G. B. Lemos, P. H. S. Ribeiro, and S. P. Walborn, “Optical integration of a real-valued function by measurement of a Stokes parameter,” J. Opt. Soc. Am. A 31, 704–707 (2014). [CrossRef]  

87. C. H. Monken, P. S. Ribeiro, and S. Pádua, “Transfer of angular spectrum and image formation in spontaneous parametric down-conversion,” Phys. Rev. A 57, 3123–3126 (1998). [CrossRef]  

88. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University, 1999).

89. M. Reichert, X. Sun, and J. W. Fleischer, “Quality of spatial entanglement propagation,” Phys. Rev. A 95, 063836 (2017). [CrossRef]  

90. A. C. Cardoso, L. Berruezo, D. F. Ávila, G. B. Lemos, W. M. Pimenta, C. H. Monken, P. L. Saldanha, and S. Pádua, “Classical imaging with undetected light,” Phys. Rev. A 97, 033827 (2018). [CrossRef]  

91. A. Vanselow, P. Kaufmann, H. M. Chrzanowski, and S. Ramelow, “Ultra-broadband SPDC for spectrally far separated photon pairs,” Opt. Lett. 44, 4638–4641 (2019). [CrossRef]  

92. M. Kutas, B. Haase, P. Bickert, F. Riexinger, D. Molter, and G. von Freymann, “Terahertz quantum sensing,” Sci. Adv. 6, eaaz8065 (2020). [CrossRef]  

93. A. Paterova, H. Yang, C. An, D. Kalashnikov, and L. Krivitsky, “Measurement of infrared optical constants with visible photons,” New J. Phys. 20, 043015 (2018). [CrossRef]  

94. S. Töpfer, M. Gilaberte Basset, J. Fuenzalida, F. Steinlechner, J. P. Torres, and M. Gräfe, “Quantum holography with undetected light,” Sci. Adv. 8, eabl4301 (2022). [CrossRef]  

95. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4, 717–719 (2007). [CrossRef]  

96. J. Fujimoto and W. Drexler, “Introduction to optical coherence tomography,” in Optical Coherence Tomography: Technology and Applications, W. Drexler and J. G. Fujimoto, eds., Biological and Medical Physics, Biomedical Engineering (Springer, 2008), pp. 1–45.

97. G. J. Machado, G. Frascella, J. P. Torres, and M. V. Chekhova, “Optical coherence tomography with a nonlinear interferometer in the high parametric gain regime,” Appl. Phys. Lett. 117, 094002 (2020). [CrossRef]  

98. S. Lopez-Huidobro, M. Lippl, N. Y. Joly, and M. V. Chekhova, “Fiber-based biphoton source with ultrabroad frequency tunability,” Opt. Lett. 46, 4033–4036 (2021). [CrossRef]  

99. J. Fuenzalida, A. Hochrainer, G. B. Lemos, M. Lahiri, and A. Zeilinger, “Resolution in quantum imaging with undetected photons,” in Frontiers in Optics + Laser Science APS/DLS (Optical Society of America, 2019), paper JW3A.103.

100. G. Barbosa, “Degree of visibility in experiments of induced coherence without induced emission: a heuristic approach,” Phys. Rev. A 48, 4730–4734 (1993). [CrossRef]  

101. T. Grayson and G. Barbosa, “Spatial properties of spontaneous parametric down-conversion and their effect on induced coherence without induced emission,” Phys. Rev. A 49, 2948–2961 (1994). [CrossRef]  

102. L. Wang, “Investigation of induced coherence with and without induced emission,” Ph.D. thesis (The University of Rochester, 1992).

103. X. Zou, “Nonclassical interference effects of down-converted photons,” Ph.D. thesis (The University of Rochester, 1993).

104. A. Hochrainer, M. Lahiri, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Interference fringes controlled by noninterfering photons,” Optica 4, 341–344 (2017). [CrossRef]  

105. M. Lahiri, “Wave-particle duality and polarization properties of light in single-photon interference experiments,” Phys. Rev. A 83, 045803 (2011). [CrossRef]  

106. M. Lahiri, A. Hochrainer, R. Lapkiewicz, G. B. Lemos, and A. Zeilinger, “Partial polarization by quantum distinguishability,” Phys. Rev. A 95, 033816 (2017). [CrossRef]  

107. M. Lahiri, R. Lapkiewicz, A. Hochrainer, G. B. Lemos, and A. Zeilinger, “Characterizing mixed-state entanglement through single-photon interference,” Phys. Rev. A 104, 013704 (2021). [CrossRef]  

108. G. B. Lemos, R. Lapkiewicz, A. Hochrainer, M. Lahiri, and A. Zeilinger, “Measuring mixed state entanglement through single-photon interference,” arXiv preprint arXiv:2009.02851 (2020).

109. M. Krenn, A. Hochrainer, M. Lahiri, and A. Zeilinger, “Entanglement by path identity,” Phys. Rev. Lett. 118, 080401 (2017). [CrossRef]  

110. M. Lahiri, “Many-particle interferometry and entanglement by path identity,” Phys. Rev. A 98, 033822 (2018). [CrossRef]  

111. J. Kysela, M. Erhard, A. Hochrainer, M. Krenn, and A. Zeilinger, “Path identity as a source of high-dimensional entanglement,” Proc. Natl. Acad. Sci. USA 117, 26118–26122 (2020). [CrossRef]  

112. R. W. Boyd, Nonlinear Optics (Academic, 2020).


Figures (14)

Fig. 1. (a) Mach–Zehnder interferometer. A light field is split at the 50:50 beam splitter, ${{\rm{BS}}_1}$, and is recombined at ${{\rm{BS}}_2}$. A phase shifter is placed in path $A$, and an object with complex field transmittance $T$ is placed in path $B$. Interference is analyzed at the detector. (b) Two-photon interferometer. A pump is split into paths ${P_1}$ and ${P_2}$ at a 50:50 beam splitter (${{\rm{BS}}_1}$) and illuminates two nonlinear sources, ${Q_1}$ and ${Q_2}$, producing correlated photon pairs. When a pair is produced in source ${Q_1}$ (${Q_2}$), the so-called signal photon is emitted into path ${S_1}$ (${S_2}$) and the so-called idler photon is emitted into path ${I_1}$ (${I_2}$). Signal paths ${S_1}$ and ${S_2}$ are combined at ${{\rm{BS}}_2}$, and idler paths ${I_1}$ and ${I_2}$ are combined at ${{\rm{BS}}_3}$. No interference is observed in the signal intensity at the detector because which-way information is in principle obtainable. One can observe interference only using post-selection, i.e., by detecting idlers at one output of ${{\rm{BS}}_3}$ in coincidence with signals at one output of ${{\rm{BS}}_2}$.
Fig. 2. Three architectures of the Zou–Wang–Mandel interferometer. (a), (b) The idler paths produced in sources ${Q_1}$ and ${Q_2}$ are aligned, and a 50:50 beam splitter, ${{\rm{BS}}_2}$, combines signal paths ${S_1}$ and ${S_2}$; which-path information is erased, and single-photon interference can be observed in the detector, even though idler photons are not detected at all. The field transmittance function, $T$, of an object illuminated by the idler field $I$ can be observed in the interference pattern of the signal beams at the detector, even though signal photons have not interacted with that object. The phase $\phi$ can be tuned by adjusting signal, idler, or pump optical path lengths. (b), (c) The signal and idler are emitted in the same direction as the pump (collinear emission), and if they are all at distinct frequencies, dichroic mirrors can be used to separate them. In (b), the dichroic mirror ${{\rm{DM}}_1}$ reflects idler photons and transmits signal wavelength, whereas ${{\rm{DM}}_2}$ transmits the pump and reflects idler photons. In (c), a single crystal is pumped from both sides. ${{\rm{DM}}_1}$ reflects the pump and transmits both signal and idler, whereas ${{\rm{DM}}_2}$ reflects idler and transmits the signal. Finally, ${{\rm{DM}}_3}$ reflects the signal while transmitting the idler and pump. Notice that in this architecture, undetected light goes twice through the same sample ($T^\prime = {T^2}$).
Fig. 3. Three architectures for the SU(1,1) interferometer. (a) Both signal and idler paths $S$ and $I$ from the first nonlinear source (${Q_1}$) are aligned with signal and idler paths originating in the second nonlinear source (${Q_2}$). (b), (c) The laser pumps a crystal and is reflected back through the same crystal. The undetected light traverses twice through the same sample ($T^\prime = {T^2}$). In (c), the pump, signal, and idler leave the crystal collinear with each other, and if they are all at different frequencies, they can be separated using three dichroic mirrors: ${{\rm{DM}}_1}$ transmits the pump but reflects signal and idler photons; ${{\rm{DM}}_2}$ transmits the pump and idler photons, but reflects signal photons; ${{\rm{DM}}_3}$ transmits the pump and signal photons, but reflects idler photons. In all three architectures, single-photon interference is seen in signal and idler outputs without post-selection (coincidence detection). The field transmittance function of an object placed in either path $S$ or $I$ can be seen in the interference pattern appearing in a camera placed in either output path.
Fig. 4. Two alternative versions of the SU(1,1) interferometer. (a) The same crystal acts as both sources ${Q_1}$ and ${Q_2}$, as the pump, signal, and idler are reflected back into the crystal. The dichroic mirror (DM) reflects only the signal field, and the long-pass mirror (LP) reflects only the pump. (b) Setup for collinear non-degenerate downconversion, where both signal and idler pass through the sample. Dichroic mirror ${{\rm DM}_1}$ separates the signal $S$ from the other fields, and dichroic mirror ${{\rm DM}_2}$ separates the idler $I$ from the pump.
Fig. 5. Metrology with undetected photons. Figure taken from [79]. The minimum detectable phase-shift squared of several fair comparison interferometric setups and detection schemes as a function of gain [of the first crystal for ZWMI and of both crystals for SU(1,1)]. Here we display the boosted ZWMI setup with intensity detection at mode $B$ (green), intensity difference detection between modes $B$ and $C$ (brown), the boosted SU(1,1) setup (black), and a standard coherent-light-seeded MZI with the extra light needed to create the aforementioned squeezings added to the initial input (red). The latter is equivalent to the shot-noise limit. All other parameters are numerically optimized at each point. The circular points (upper set) represent injected coherent light of about the same intensity as would be needed for a high-gain nonlinearity, and the square points (lower set) represent a much brighter coherent input.
Fig. 6. Quantum imaging with undetected photons. (a) In a collinear non-degenerate ZWMI [86], non-degenerate photon pairs are emitted along the propagation axis of a laser at each source. Wave plates and a polarizing beam splitter (PBS) in the pump are used to control the relative phases and amplitudes of the two-photon states generated in sources ${Q_1}$ and ${Q_2}$. Dichroic mirrors or long pass filters can be used to separate the pump from the daughter fields after each crystal. A lens ${L_0}$ is used to control the pump waist at the crystals, which affects the transverse momentum correlations of the twin photons and, in turn, the image resolution, as shown in Section 4.B.3. Imaging system $A$ (e.g., a lens or lens system) ensures a good overlap of the combined signal fields, and optical system $C$ (again a lens or lens system) is used to image the plane of the object with spatial features $T(\boldsymbol\rho)$ onto the plane of the camera or scanning detector. (b) In the case of imaging enabled by momentum correlation, optical systems $B$ and $B^\prime $ (e.g., also a lens or lens system) guarantee that the object $T(\boldsymbol\rho)$ is at the Fourier plane of sources ${Q_1}$ and ${Q_2}$. An effective positive lens with focal length ${f_c}$ associates the plane on the camera with the Fourier plane of sources ${Q_1}$ and ${Q_2}$. A plane wave with wave vector ${q_s}$ makes an angle $\theta$ with the optical axis and is focused along a circle of radius $|{\boldsymbol\rho _c}|$. (c) In the case of imaging enabled by position correlation, optical systems $B$ and $B^\prime $ are imaging systems; a point ${\boldsymbol\rho _s}$ on ${Q_j}(j = 1,2)$ is imaged onto ${\boldsymbol\rho _c} = {M_s}{\boldsymbol\rho _s}$ on the camera by $A$.
Fig. 7. Absorption and phase imaging enabled by momentum correlations. I(A–D) and II(A–B) are adapted from [86], which used the setup in Fig. 6(a) and realized imaging enabled by momentum correlations. IA and IB show two signal intensity outputs of a collinear non-degenerate ZWMI. The detection wavelength is $810 \pm 1.5\;{\rm{nm}}$; the sample is a cardboard cutout placed at the Fourier plane of the sources and illuminated by an idler beam with wavelength centered at 1550 nm. The difference (sum) of those two outputs is shown in IC (ID). Phase imaging of an etched silica plate using the same setup is shown in IIA and IIB. Momentum correlation enabled absorption (IIC) and phase (IID) images (adapted from [64]) of a mouse heart sample. The setup is shown in Fig. 3(c) with the addition of lenses and an off-axis parabolic mirror. The detection and illumination central wavelengths are 0.8 µm and 3.8 µm, respectively.
Fig. 8. Image magnification in momentum correlation enabled QIUP. The same object is imaged for two sets of values of ${\lambda _s}$ and ${\lambda _I}$, while other parameters such as focal lengths and distances are unchanged. A higher value of the ratio ${\lambda _s}/{\lambda _I}$ results in larger image magnification (right). (Adapted from Fig. 2 of [83].)
Fig. 9. Edge-spread function (ESF) and resolution. (a) The image of a knife edge is obtained by measuring the position-dependent visibility on the camera (left). The visibility measured along an axis (${x_c}$) is fitted with an error function to experimentally determine the edge-spread function (right). The blurring ($\sigma$) is determined from the ESF. (b) Experimentally measured values (data points) of $\sigma$ are compared with the theoretical prediction (solid lines) for two sets of wavelengths: ${\lambda _I} = 1550 \;{\rm{nm}}$, ${\lambda _s} = 810 \;{\rm{nm}}$ (red) and ${\lambda _I} = 780 \;{\rm{nm}}$, ${\lambda _s} = 842\; {\rm{nm}}$ (blue). Since the detected wavelengths are close to each other, the blurring is almost equal despite the wide difference between the illuminating (undetected) wavelengths. (c) The resolution ($\sigma /M$) is measured experimentally (data points) and compared with theoretical results (solid curves) for the same sets of wavelengths. A shorter illumination wavelength results in higher resolution. The resolution improves with increasing pump waist (${w_p}$), i.e., with stronger momentum correlation between the twin photons. (Adapted from Fig. 4 of [83].)
Fig. 10. Resolution of momentum correlation enabled QIUP. (a) Resolution improves as the momentum correlation becomes stronger. A set of slits is imaged for five values of the pump waist (${w_p}$) in decreasing order (left to right). A larger value of ${w_p}$ means a stronger momentum correlation between the twin photons, which results in higher resolution. (Wavelengths are kept the same for each measurement.) (b) A smaller undetected wavelength (${\lambda _I}$) results in better resolution. The same set of slits is imaged for ${\lambda _I} = 1550 \;{\rm{nm}}$ (left) and ${\lambda _I} = 780 \;{\rm{nm}}$ (right), while the pump waist is kept the same. (Adapted from Figs. 3b and 5b of [83].)
Fig. 11. Resolution and position correlation between twin photons. (a) Simulated camera image of two points separated by a distance of $d = 70\;{{\unicode{x00B5}{\rm m}}}$ for the following choice of parameters: $L = 2 \;{\rm{mm}}$, ${\lambda _s} = 810 \;{\rm{nm}}$, ${\lambda _I} = 1550 \;{\rm{nm}}$, and ${M_s} = {M_I} = 1$. (b) Image function, $G({x_c},0)$, plotted against ${x_c}$ for the same set of parameters. The ratio ($\beta$) of its value at the dip to that at one of the peaks is $\beta \approx 0.08$. (c) Minimum resolvable distance (${d_{{\min}}}$) plotted against crystal length ($L$) for ${M_I} = 1$ and ${M_I} = 2$ using Eq. (64) (solid lines). The filled circles represent simulated data points for a pair of square pinholes with side length 1 µm. The minimum resolvable distance increases (i.e., the resolution degrades) as the position correlation becomes weaker. The resolution also decreases as the imaging magnification, ${M_I}$, from the source to the object increases. [The remaining parameters are the same as in (a) and (b).] (Adapted from Figs. 3c, 3d, and 4c of [84].)
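The ${d_{\min}}$ curves of Fig. 11(c) can be reproduced point by point from the closed-form expression ${d_{\min}} \approx 0.53\,{M_I}\sqrt {L({\lambda _I} + {\lambda _s})}$ given in the tutorial. A minimal check in Python (the function name is a choice made here):

```python
# Hedged sketch: minimum resolvable two-point separation in
# position-correlation QIUP, d_min ~ 0.53 * M_I * sqrt(L * (lam_I + lam_s)).
import math

def d_min(L, lam_I, lam_s, M_I=1.0):
    """Minimum resolvable separation (same length units as sqrt(L*lambda))."""
    return 0.53 * M_I * math.sqrt(L * (lam_I + lam_s))

# Parameters of Fig. 11(a): L = 2 mm, lambda_s = 810 nm, lambda_I = 1550 nm.
d = d_min(2e-3, 1550e-9, 810e-9, M_I=1.0)
print(f"{d * 1e6:.1f} um")  # ~36 um, so the 70 um pair of Fig. 11(a) is resolved
```

This also makes the trends in the caption explicit: ${d_{\min}}$ grows with the crystal length $L$ (weaker position correlation) and linearly with the source-to-object magnification ${M_I}$.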
Fig. 12. Classical transverse spatial correlations suffice for imaging. (a) Two spatially separated interferometers can together produce an image of two objects with transmissions ${T_1}$ and ${T_2}$ on a camera; transverse spatial entanglement is not necessary for QIUP. (b) In classical imaging with undetected light [90], a seed laser together with the pump laser in a nonlinear interferometer produces an image on the camera at the unseeded wavelength. While that scheme does not share all the properties of its quantum counterpart, it shows that transverse spatial entanglement is not strictly necessary for imaging with undetected light.
Fig. 13. Fourier-domain optical coherence tomography with undetected photons. (a) Working principle of traditional Fourier-domain optical coherence tomography (OCT) using a broadband light source. Spectral phases proportional to the depth of reflections from inside a sample are read out, after interference with a reference arm, using a fast grating spectrometer. The sample is partially transparent, so the light penetrates to varying depths. (b) Adaptation of Fourier-domain OCT using a nonlinear interferometer, as reported in [57]. In this case, the wavelength probing the sample can be very different from the wavelength for which suitable spectrometers are available. OCT with nonlinear interferometers in the time domain is reported in [58,59].
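The spectral read-out sketched in Fig. 13(a) amounts to a Fourier transform: a reflector at depth $z$ imprints a fringe $\cos (2kz)$ on the recorded spectrum, and transforming over the wavenumber axis recovers $z$. A toy single-reflector simulation in Python (all parameter values are illustrative; the wavenumber sweep is chosen around 810 nm):

```python
# Hedged sketch of Fourier-domain OCT depth retrieval: FFT of the
# spectral interferogram over wavenumber k locates the reflector depth.
import numpy as np

z_true = 150e-6                          # reflector depth (assumed), metres
k = np.linspace(7.0e6, 8.0e6, 4096)      # wavenumber sweep near 810 nm, 1/m
spectrum = 1.0 + np.cos(2 * k * z_true)  # reference + sample interference

# A fringe cos(2*k*z) has frequency z/pi in cycles per unit k, so the
# FFT frequency axis maps to depth via z = pi * f.
dk = k[1] - k[0]
depth_axis = np.fft.rfftfreq(k.size, d=dk) * np.pi
a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
z_est = depth_axis[np.argmax(a_scan)]
print(f"{z_est * 1e6:.0f} um")           # close to z_true
```

In the nonlinear-interferometer version of Fig. 13(b), the same transform is applied, but the spectrometer records the detected (signal) wavelength while the fringes encode depths probed at the undetected (idler) wavelength.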
Fig. 14. Quadrature diagram: the configuration space defined by the two quantum-optical quadratures $\hat p$ and $\hat q$. Six states are shown: the vacuum state (gray); a displaced vacuum state, i.e., a coherent state (red), both symmetric minimum-uncertainty states; a number state (green) with completely defined intensity and completely undefined phase; a phase state (orange-brown) with the opposite uncertainties to the number state (in principle, this state should be infinitesimally thin, but then it would not be visible); and two squeezed states: a squeezed vacuum state (blue) and a coherent state squeezed in the phase direction (purple).
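The quadrature uncertainties illustrated in Fig. 14 can be checked numerically in a truncated Fock basis, using the conventions $\hat q = (\hat a + {\hat a^\dagger})/2$ and $\hat p = (\hat a - {\hat a^\dagger})/(2i)$ together with the Fock-basis expansion of the squeezed vacuum. For squeezing parameter $r$ (with $\theta = 0$), one expects $\Delta q = {e^{- r}}/2$ and $\Delta p = {e^r}/2$, saturating $\Delta q\,\Delta p \ge 1/4$. A sketch in Python (the cutoff $N$ and the value of $r$ are illustrative):

```python
# Hedged numerical check of the squeezed-vacuum quadrature widths in
# Fig. 14, with q = (a + a^dag)/2 and p = (a - a^dag)/(2i).
import math
import numpy as np

N, r = 60, 0.5                              # Fock cutoff, squeezing parameter
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation operator, Fock basis
q = (a + a.conj().T) / 2
p = (a - a.conj().T) / 2j

# Squeezed vacuum (theta = 0) in the Fock basis:
# |xi> = cosh(r)^(-1/2) sum_n (-1)^n [sqrt((2n)!)/(2^n n!)] tanh(r)^n |2n>
psi = np.zeros(N, dtype=complex)
for n in range(N // 2):
    psi[2 * n] = ((-1) ** n * math.sqrt(math.factorial(2 * n))
                  / (2 ** n * math.factorial(n))) * math.tanh(r) ** n
psi /= math.sqrt(math.cosh(r))

def std(op, state):
    """Standard deviation of a Hermitian operator in a pure state."""
    mean = (state.conj() @ op @ state).real
    return math.sqrt((state.conj() @ op @ op @ state).real - mean ** 2)

dq, dp = std(q, psi), std(p, psi)
print(dq, dp, dq * dp)   # ~e^(-r)/2, ~e^(r)/2, ~1/4
```

The product $\Delta q\,\Delta p$ stays at the minimum-uncertainty value $1/4$, matching the picture of the blue ellipse in Fig. 14: squeezing narrows one quadrature only at the expense of the other.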

Table 1. Experimental and Theoretical Comparison between QIUP Enabled by Momentum and Position Correlations
