
Light in quantum computing and simulation: perspective

Open Access

Abstract

A summary is given of recent progress in photonic quantum simulation and computation. Non-error-corrected machines performing specialised tasks have already demonstrated a quantum advantage over the best algorithms running on conventional computers, and practical applications for such machines are being explored. Meanwhile, designs for error-corrected fault-tolerant quantum computers based on light are reducing the performance requirements for individual components and systems, although the engineering challenges are severe. Light also plays a central role in other platforms for quantum computing and simulation, from control of individual atomic qubits to remote entanglement of separate processing nodes, along with an important role in communications and other long-distance networks.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

Light is a unique medium for quantum technologies: it exhibits quantum features, such as entanglement and noise below the familiar shot noise, even in ambient conditions, and its large bandwidth provides high capacity for encoding and manipulating information. For these reasons optical systems are a critical element of future quantum technologies, from obvious applications such as imaging and remote sensing, to optical communications and even to computing and quantum simulation.

It is in these latter areas that recent progress has been significant, achieving clear quantum advantage in some limited processing tasks as well as opening the door to new algorithms relevant to real-world applications. Further, design of increasingly loss-tolerant schemes for producing large-scale entangled states necessary for quantum computing, along with rapid progress in the performance of components needed to realise them, has spurred large investment in a business sector dedicated to constructing a future fault-tolerant, scalable photonic quantum computer.

Quantum advantage in computing—the ability of a quantum machine to perform a task more rapidly than a conventional computer (optimally, and in principle, exponentially faster)—is a benchmark indicating that machines achieving it can add real value to information processing tasks. It is not a hard limit, but is contingent on the task being performed. Recently such advantage has been achieved in quantum simulators. These are devices that seek to encode a particular problem directly in the architecture of the computing machine. The problem may be a representation of a physical system whose dynamics or structure is sought, or it may be a more abstract calculation.

Two recent demonstrations use optical implementations of a version of Boson Sampling (BS). Boson Sampling is an algorithm that seeks to sample the measurement statistics of a multi-particle, multi-mode Bosonic quantum state. If such a state has been prepared by colliding a set of independent bosons with one another, then it turns out that even approximating the distribution of bosons at the output modes is likely a computationally hard task; that is, one that cannot be efficiently undertaken by a classical computer [1]. Therefore, direct measurements are the most efficient means to draw samples from the distribution.

A Boson Sampling device using light, as shown in Fig. 1(a), can be built by taking a large collection of single photons, each in a separate mode so as to form a set of independent particles, mixing the modes on a set of beamsplitters—a linear optical network—and then counting the number of photons that emerge from each port of this network. Under specific, achievable circumstances this implements the Aaronson–Arkhipov protocol [1] and allows a test of whether sampling using such a quantum device is more rapid than calculation on a supercomputer to estimate the output distribution from the network design.

 figure: Fig. 1.

Fig. 1. (a) A linear quantum optical network operating as a Boson Sampling machine. Individual photons are inserted at the input nodes. The photons scatter from effective internal beamsplitters, and the number of photons at output nodes of the network are counted. Quantum interference of the Bosonic fields occupied by the photons means that it is computationally hard to calculate the output photon distribution if the network is randomly chosen. (b) A Gaussian Boson Sampling device consisting of a set of squeezed (S) or classical (laser) light (D) sources, a linear optical network, and a set of photon-number-resolving detectors. The resulting joint photocount distribution across all channel combinations is also hard to estimate using known algorithms running on conventional computers. (c) Laboratory implementation of a 100-mode GBS machine using bulk optics to minimise mode-coupling losses. (Photograph courtesy of Chaoyang Lu, USTC, China)


The physical origin of the hardness of this problem arises from quantum interference. Consider a simple 2 × 2 network—a single 50:50 beamsplitter—into which one photon enters at each port. A well-known result is that if the photons are in a separable state in identical modes then they will both exit the same port. The probability that they exit through different ports is zero [2]. This probability is given by the squared modulus of the Permanent [3] of the network's transfer matrix, which in this case is zero. Calculating this function is straightforward for a small network. However, the known algorithms to calculate the Permanent [4] do not scale favorably with the size of the network and the number of photons. (That is, the time they take to run increases more rapidly than a polynomial function of these parameters.) Therefore, when there are enough photons and a big enough network, it takes longer to calculate the output photon distribution on a supercomputer than to record a sufficient number of samples in a physical device to approximate the distribution.
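To make the role of the Permanent concrete, here is a minimal sketch (not part of the original article; the beamsplitter matrix uses one common phase convention) that evaluates the Permanent by brute force and applies it to the 50:50 example above. The coincidence amplitude, and hence the probability of one photon exiting each port, vanishes, and the factorial cost of the sum over permutations illustrates why large networks are classically expensive.

```python
import itertools
import numpy as np

def permanent(M: np.ndarray) -> complex:
    """Permanent via the naive sum over permutations: perm(M) = sum_s prod_i M[i, s(i)]."""
    n = M.shape[0]
    return sum(
        np.prod([M[i, s[i]] for i in range(n)])
        for s in itertools.permutations(range(n))
    )

# Transfer matrix of a 50:50 beamsplitter (one common phase convention).
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

# One photon into each input port, asking for one photon at each output port:
# the relevant submatrix is U itself, and its Permanent is zero.
amp = permanent(U)
print("coincidence amplitude:", amp)            # ~0: the photons bunch
print("coincidence probability:", abs(amp)**2)  # ~0
```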

There are challenges to building such a system, of course, and there is also the difficulty of knowing the best algorithm that can run on a conventional computer, against which the quantum device should be benchmarked. First, it is important that each input photon is the same as all the others in its spatio-temporal shape (and polarization), and it is important that as few as possible are lost in propagating through the network. The photons must then be detected with high efficiency. Imperfections in any of these respects make calculating the output photon statistics easier, since they mean that the system operates partly in a classical regime. In parallel, algorithms that exploit these imperfections to shorten the run time for estimating the output photon distribution on a classical computer are being developed.

A more recent version of a Boson Sampler uses not individual photons, but another form of non-classical light, primarily squeezed states [5]. These are states for which the noise in the two quadratures of the optical field is not equal, as it is in, say, laser light, but is lower than shot noise in one quadrature. Correspondingly it must be larger than shot noise in the other. A squeezed state may be characterized by the joint distribution of its quadrature amplitudes, which turns out to be Gaussian. Since a linear optical network preserves this Gaussian character, the protocol is called Gaussian Boson Sampling (GBS), as illustrated in Fig. 1(b). It nevertheless still relies on counting photons at the output, where the joint photon number distribution is specified by evaluating a different function of the machine’s properties (the strength of the squeezed light sources and the optical circuit transfer function parameters)—a so-called Hafnian. There is no known efficient classical algorithm for evaluating this function either, and arguments similar to those for BS show that it is therefore also hard to sample efficiently from the output photon number distribution.
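As a hedged illustration of the quantity involved (again, not from the article), the Hafnian of a symmetric matrix can be computed by brute force as a sum over perfect matchings; in GBS the probability of a given photocount pattern is proportional to the squared modulus of the Hafnian of a submatrix of a matrix built from the squeezing parameters and the circuit transfer matrix. The (2n-1)!! growth of the matching sum is what makes large instances classically costly.

```python
import numpy as np

def hafnian(A: np.ndarray) -> complex:
    """Hafnian via recursion over the perfect matchings of a 2n x 2n symmetric matrix."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # an odd number of modes has no perfect matching
    total = 0.0
    rest = list(range(1, n))
    for k, j in enumerate(rest):
        # Pair mode 0 with mode j, then recurse on the modes that remain.
        remaining = rest[:k] + rest[k + 1:]
        total += A[0, j] * hafnian(A[np.ix_(remaining, remaining)])
    return total

# The complete graph on four vertices has three perfect matchings,
# so the Hafnian of the 4 x 4 all-ones matrix is 3.
print(hafnian(np.ones((4, 4))))  # 3.0
```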

Interestingly, the use of photon counting as a measurement scheme is critical for GBS, since it turns out that direct measurements of the field quadrature amplitudes at the output of the machine would render the problem amenable to efficient emulation on a classical computer. The quality of the input light for GBS is specified by how close it is to a single-mode squeezed state—this is often called the state purity—as well as how similar the light pulses are in spatial, temporal, and spectral shape, and whether this changes from run to run of the machine—this is often called the indistinguishability.

An important question is exactly how big a network must be constructed to demonstrate operation beyond that of a classical computer and, therefore, how many quantum light sources and how many detectors are needed. This is a bit of a moving target, since there is no formal definition of “advantage.” Further, supercomputers, and better algorithms for them, continue to improve at a rapid pace. Nevertheless, there are some reasonable expectations that at a scale of many tens of photons and similarly sized networks, it is possible to do better than the best current and foreseeable next generation of supercomputers. Two experiments in the past two years have implemented GBS in different ways and have shown this advantage. They illustrate well two very different physical arrangements, both of which suggest directions of improvement for the future.

First, a group at USTC in China built a machine consisting of 100 free-space optical beams connected by a bulk-glass monolithic beamsplitter array, together with squeezed light sources based on parametric downconverters [6]. The output beams were measured using photodetectors that registered only whether a mode contained zero photons or one or more photons. This machine, named Jiǔzhāng, acquired samples of its photocount distribution in 200 seconds, whereas simulations were estimated to take more than 500 million years on a current supercomputer. This result stimulated the exploration of new algorithms that could exploit imperfections in the machine (losses, distinguishability) to simplify the calculations. These reduced by many orders of magnitude the time required for a supercomputer to estimate the photon distributions [7]. Nevertheless, the claim of advantage remains.

A second group, at Xanadu in Canada, built a time-multiplexed machine using optical fibers to form a time-bin-encoded network [8]. This machine, called Borealis, has the advantage of reconfigurability, as well as requiring only a single light source. It does, however, require photon-number-resolving detectors fast enough to distinguish the time bins.

While sampling tasks based on Hafnians and Permanents are notable chiefly for enabling the demonstration of a quantum advantage in photonic simulators, they also map to some interesting and potentially useful problems, particularly in the analysis of graphs [9]. A specific task is the identification of densely connected subgraphs of a larger graph [10]. This is related to search and optimization problems in logistics and in molecular simulation, for instance. It turns out that the number of perfect matchings of a subgraph is given by the Hafnian of its adjacency matrix, and dense subgraphs contain many perfect matchings. By constructing a GBS machine with the appropriate squeezing and unitary circuits, it is therefore possible to sample dense subgraphs of different sizes with enhanced probability. The performance is better than a random sampling of the graph to locate the subgraphs [11]. This has recently been successfully applied to the problem of finding the optimal docking structures for molecules in target receptors, which may benefit research in biomedicine.
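For orientation, a brute-force classical baseline for the dense-subgraph task might look like the following sketch (illustrative only; the toy graph and parameters are invented, and this is not the procedure of Refs. [10,11]). It enumerates all k-vertex subsets and scores them by internal edge count, the search that GBS sampling is intended to accelerate by biasing samples towards dense regions of the graph.

```python
import itertools
import numpy as np

def densest_subgraph(adj: np.ndarray, k: int):
    """Return the k-vertex subset with the most internal edges (exhaustive search)."""
    n = adj.shape[0]
    best_nodes, best_edges = None, -1
    for nodes in itertools.combinations(range(n), k):
        sub = adj[np.ix_(nodes, nodes)]
        edges = int(sub.sum()) // 2  # the adjacency matrix is symmetric
        if edges > best_edges:
            best_nodes, best_edges = nodes, edges
    return best_nodes, best_edges

# Toy graph: vertices 0-3 form a clique, with vertices 4 and 5 attached in a tail.
adj = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]:
    adj[i, j] = adj[j, i] = 1

print(densest_subgraph(adj, 4))  # ((0, 1, 2, 3), 6): the embedded clique
```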

The BS and GBS protocols have shown that quantum machines can perform certain computational tasks better than conventional computers running the best known algorithms. The combination of these protocols (e.g., using single photons as well as squeezers and classical light) enables simulation of some physical problems and, therefore, opens the door to new applications. How extensive these may be is yet to be determined, and there is much territory to be explored. Further, the development of more efficient quantum light sources (such as deterministic single-photon emitters and waveguide-cavity-based squeezers), lower-loss reconfigurable optical networks, and photon-counting detectors (such as integrated, higher-temperature superconducting devices) will enable machines of even larger scale.

In the long term a quantum simulator for larger size problems and more general tasks will be needed. This will be a fault-tolerant, scalable quantum computer, universally programmable to execute all quantum algorithms. There are still several platforms in contention for the prize of reaching this objective. All-optical schemes are among these. Other platforms, such as ions held in an electric trap, often use light for control of the processing elements. Another very promising approach uses superconducting circuits, in which different matter processing units (transmons) are connected by means of a microwave signal.

Optical systems have the very important feature that they can operate largely in ambient conditions, can be easily integrated to achieve large scale, and can be operated at high speed. They have the considerable drawback that there are few currently available means by which single photons can interact with one another. Two approaches show promise, but are not yet viable for scaling. These are cavity-based single-atom switches and interactions between photon-excited atomic pairs. In the first of these approaches, the transition of an atom from one state to another by absorption of a single photon can shift the cavity resonance so that a second photon is reflected with a phase shift compared to a reflection from the “normal” cavity [12]. In the second approach, the excitation of a pair of atoms by two photons causes each to experience a phase shift that is different than when a single photon excites the same pair of atoms. This is because the doubly excited state of the atomic pair has a lower energy, due to the atomic dipole interaction, than the sum of the energies of the two singly excited atoms individually [13,14].

The lack of a robust single-photon conditional gate means that there is no simple gate model for a quantum computer. Further, photons can easily be lost through scattering or absorption, thus erasing the information being processed. Nevertheless, means to overcome these limitations have been proposed, and the main challenge at present is getting device performance to the point where it can enable a fully scalable quantum computer.

The lack of a conditional interaction between single photons is in contrast to the matter-based schemes, where there is either a direct electromagnetic interaction, or one mediated by radiation, between stable material qubits which form the register. In optics it turns out that it is possible to replace such interactions with quantum interference and measurement [15]. The key idea is that if there are multiple routes by which a photon may arrive at a detector then it is unknown which photon and which path was actually registered. Since quantum mechanics demands that the probability amplitudes for all possible pathways leading to a particular event be added, detection of a photon requires the superposition of all the routes by which all input photons could lead to a single event at the photodetector. This leaves the remaining photons in a superposition of paths that do not lead to a photodetection event. So photons that do not physically encounter one another can become entangled by erasing the information about where they started from [16]. An example of this is shown in Fig. 2(a), where mixing the paths by which each of two input photons may arrive at detector A or $\rm A^\prime$, yet registering only one photon, leaves the other in a superposition of paths.

 figure: Fig. 2.

Fig. 2. Schemes for generating multiphoton entangled states from sets of single independent photons. These superpositions of correlated multiphoton states are a resource for quantum computing. The scheme works by erasing the information about which photon and which path leads to a heralding photo-detection event at the banks (An, Bm) of detectors. Designs are shown for (a) coherent superposition of a single photon in two modes—a one-photon state; (b) coherent superposition of two photons in four modes—a Bell state; (c) coherent superposition of three photons in six modes—a 3-GHZ state. The scheme is extensible to produce more complex entangled states (e.g., cluster states) by combining GHZ states using related path-erasure and detection methods, a process known as “fusion.”


These approaches are necessarily probabilistic, since it is not always the case that only detector A or $\rm A^\prime$ records a photon. Sometimes both do, and sometimes neither do so. These events are discarded, and another trial is run. In the example of Fig. 2(a), the desired output (a single photon superposed in the two output modes) occurs only half the time. The very indeterminacy that enables coherent superpositions is just that which renders the state preparation random.

This general approach can be extended to many photons, enabling large entangled states to be constructed from a collection of independent input photons. Two examples are shown in Figs. 2(b) and 2(c), where the outputs are a so-called Bell state—an entangled state of two photons in four modes, and a Greenberger–Horne–Zeilinger (GHZ) state—an entangled state of three photons in six modes. In each case, the probability of a successful outcome decreases as the number of photons increases, so that, for example, the circuit in Fig. 2(b) only generates a Bell pair in 4/16 of the trials.

Nevertheless, the concept of trial and error makes it possible to overcome the barrier of indeterminacy. If an event is random, but you know when it happens, then storing the outcome of successful trials (or running many trials at once and keeping the successful ones) means that for a machine of sufficient scale it is guaranteed that there will be an appropriate state available when needed.
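As a back-of-envelope sketch of this repeat-until-success strategy (the numbers are illustrative, taken from the Bell-pair example above rather than from a specific proposal): if each heralded preparation attempt succeeds with probability p, then N attempts run in parallel succeed with probability 1 - (1 - p)^N, and the number of attempts needed for near-deterministic operation grows only logarithmically with the acceptable failure probability.

```python
import math

def attempts_needed(p: float, target: float) -> int:
    """Smallest N such that 1 - (1 - p)**N >= target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

p = 4 / 16  # the Bell-pair success probability quoted for the circuit of Fig. 2(b)
for target in (0.9, 0.99, 0.999):
    print(f"success >= {target}: {attempts_needed(p, target)} parallel attempts")
```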

There are proposals for producing some types of cluster states on demand, based on the emission of single photons from atoms or excitons (in cavities or waveguides) in which an internal metastable state of the atom is changed upon emission. By manipulating this state (in effect creating a superposition of atomic states analogous to the function of the beamsplitters in Fig. 2) and sequentially generating photons from this superposition, a linear cluster state can be prepared. In the long run, this may prove a more resource efficient way to initiate the preparation of cluster states for photonic quantum computers.

The loss of photons compounds the problem of indeterminacy, both because it reduces the rate at which successful operations occur and, more seriously, because it can leave the prepared state different from the desired one. However, again with sufficient scale it is possible to construct entangled states with the appropriate degree and connectivity to be useful.

The original approach to a linear optical quantum computer envisioned a gate-based architecture, in which the quantum state of one photon could be conditioned on the state of another. However, this is not the optimal route, since the low success probability of gates built from the interference-and-measurement concepts outlined previously demands too great an overhead of resources. Rather, current approaches to photonic quantum computers make use of measurement-based architectures, in which a large-scale entangled state is prepared, measurements are made according to the algorithm being run, and feedforward from the outputs of these measurements informs the next set of measurements to be made on another part of the state [16].

The kind of state that must be prepared is a so-called three-dimensional cluster state, in which photons are entangled with several neighbours using interference and detection. An example of such a state is represented in Fig. 3(a), showing a group of photons each entangled with four of its neighbours (in the X–Y plane) as well as with temporally earlier and later photons (in the T (time) direction). This approach is well suited to optics, both because of the means by which states are prepared and because the construction of such states is sequential, with different “layers” of the state being prepared at different times.

 figure: Fig. 3.

Fig. 3. (a) A 3D entangled state (a “cluster state”) represented as an FCC lattice. Each vertex of the lattice represents an entangled photonic resource state as suggested in (b), consisting of multiple photons and created by fusing GHZ states using methods similar to those of Fig. 2. Measurements are made on the photons at lattice vertices, which enables both error correction and readout of the state of the logical qubit. The algorithms that run on this type of quantum computer work by specifying the sequence in which measurements are to be made on different “layers” of the lattice, conditioned on the outcome of measurements on previous layers. (Adapted from Ref. [17].) (c) A ${10}\times{10}$ mode SiN photonic integrated circuit forming a mesh of beamsplitters that can implement any operation between input and output mode combinations. Such chips enable the generation of cluster states by means of the circuits shown in Fig. 2. (Photograph courtesy of S. Yu, Imperial College London)


The cluster state is generally prepared using GHZ states, and the number of these required depends on the losses in the optical system, along with other imperfections, such as distinguishability of the photons. The best versions of this approach require end-to-end efficiencies (including detection efficiencies) of well above 90%, and require millions of photons per encoded qubit in the cluster [17]. The demands on individual components are therefore severe.

The need for a large number of photon sources, interferometers, and detectors lends itself to integration, since photonic integrated circuits can manage and route light using waveguides. Each waveguide has a cross-section of about one micron per mode, which means that a standard Si wafer can accommodate about 300,000 modes; this is not yet sufficient for a scalable photonic quantum computer. The integrated approach has demanded innovations in light sources, using novel designs for four-wave mixing; in the control of photon routing on the chips, since standard methods tend to introduce unacceptable losses; and in new detector designs, which utilise superconducting materials. The latter means that the entire machine, which should ideally fit on a photonic integrated circuit, must be kept at low temperatures.
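One way to arrive at that order of magnitude is simply to divide the wafer dimension by the mode pitch (the numbers below are assumptions consistent with the figures quoted above, not values given in the article):

```python
wafer_diameter_mm = 300   # assumed standard silicon wafer diameter
mode_pitch_um = 1.0       # assumed ~1 micron of cross-section per waveguide mode

modes = wafer_diameter_mm * 1000 / mode_pitch_um
print(f"~{modes:,.0f} modes")  # ~300,000
```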

Other approaches leverage telecommunications waveguide devices and materials. These will necessarily be physically larger than an all-integrated approach and must overcome the losses of interfacing between components. Nevertheless, the larger scale means that the feedforward required for measurement configuration is more straightforward, and the majority of the machine, excepting the detectors, can operate in ambient conditions. Of course, the larger component scale also necessarily means a larger machine, and the trade-offs inherent in these two approaches have yet to be fully understood.

More generally, it is increasingly likely that future quantum computers will be networked. This is driven in part by the need for significant scale, and by the challenge, common to all potential quantum computing platforms, of building a single monolithic device with a sufficient number of logical qubits, each of which uses multiple physical qubits to protect against errors.

The leading gate-based architectures for quantum computing include trapped ions, which use hyperfine states as qubits connected by the vibrational motion of the trap; superconducting circuits, which use transmons as qubits connected by microwaves; and trapped neutral atoms, which use excited electronic states as qubits connected by dipolar interactions. These platforms utilise large arrays of cooled ions or atoms in free-space traps, or transmons in superconducting electronic circuits. Scaling these machines to the numbers of qubits needed for a fully fault-tolerant machine remains a challenge because of the difficult engineering needed to prepare, control, and correct the qubits, especially within a single device. For this reason, there are a number of proposals to connect smaller-scale devices in a network to enable modular architectures for a fully scalable quantum computer [18]. For this task, light is an obvious choice.

Efforts to develop efficient, low-noise interfaces between different physical types of qubits and photons are therefore underway. These include microwave-to-optical photon interconversion as well as optical wavelength conversion into the telecommunications bands near 1550 nm. These devices will link several quantum computing processors together, including those of different types, and would enable a large-scale computer to operate across a multi-node network.

Indeed, the network could also link conventional high-performance computers to quantum processors, allowing specialist tasks to be devolved to appropriate machines, optimising the utility of the network to capitalise on the capabilities of each class of processor. Such a hybrid network may well be among the first to actually achieve quantum advantage in scalable computing.

Light has a bright future as an enabler of quantum computers, quantum simulators and quantum computing networks, deriving from its unique features as a quantum medium.

Funding

European Union Horizon 2020; UK Engineering and Physical Sciences Research Council.

Acknowledgment

IAW received funding for research on the topics of this article from UKRI (EPSRC, through the Quantum Computing and Simulation Hub), and the European Union Horizon 2020 Marie Skłodowska-Curie Actions, QuantERA, and FET programmes.

Disclosures

IAW is Provost of Imperial College London and Chair and co-founder of ORCA Computing (I, C, P).

References

1. S. Aaronson and A. Arkhipov, “The computational complexity of linear optics,” Proc. STOC, June 2011, pp. 333–342.

2. C. K. Hong, Z. Y. Ou, and L. Mandel, “Measurement of subpicosecond time intervals between two photons by interference,” Phys. Rev. Lett. 59(18), 2044–2046 (1987). [CrossRef]  

3. The Permanent of a matrix is similar in form to the better-known Determinant, as a sum of the products of matrix entries from separate rows and columns. The difference is in the weighting of each of these products. This also allows for many variations in between the two functions.

4. L. Valiant, “The complexity of computing the permanent,” Theor. Comp. Sci. 8(2), 189–201 (1979). [CrossRef]  

5. C. S. Hamilton, R. Kruse, L. Sansoni, et al., “Gaussian boson sampling,” Phys. Rev. Lett. 119(17), 170501 (2017). [CrossRef]  

6. H. S. Zhong, H. Wang, Y. H. Deng, et al., “Quantum computational advantage using photons,” Science 370(6523), 1460–1463 (2020). [CrossRef]  

7. J. F. F. Bulmer, B. A. Bell, R. S. Chadwick, et al., “The boundary for quantum advantage in Gaussian boson sampling,” Sci. Adv. 8(4), eabl9236 (2022). [CrossRef]  

8. L. Madsen, F. Laudenbach, M. F. Askarani, et al., “Quantum computational advantage with a programmable photonic processor,” Nature 606(7912), 75–81 (2022). [CrossRef]  

9. A graph is a mathematical structure consisting of a set of objects in which some pairs of the objects are related. It is specified as a set of “Vertices” representing the objects and “Edges” which connect the objects in a uni- or bi-directional sense. The properties of a graph may be encoded into the parameters of a GBS machine.

10. J. M. Arrazola and T. R. Bromley, “Using Gaussian Boson Sampling to find dense subgraphs,” Phys. Rev. Lett. 121(3), 030503 (2018). [CrossRef]  

11. S. Sempere-Llagostera, R. B. Patel, I. A. Walmsley, et al., “Experimentally finding dense subgraphs using a time-bin encoded Gaussian Boson Sampling device,” Phys. Rev. X 12(3), 031045 (2022). [CrossRef]  

12. S. Sun, H. Kim, Z. Luo, et al., “A single-photon switch and transistor enabled by a solid-state quantum memory,” Science 361(6397), 57–60 (2018). [CrossRef]  

13. S. Baur, D. Tiarks, G. Rempe, et al., “Single-photon switch based on Rydberg blockade,” Phys. Rev. Lett. 112(7), 073901 (2014). [CrossRef]  

14. P. Kok, W. J. Munro, K. Nemoto, et al., “Linear optical quantum computing with photonic qubits,” Rev. Mod. Phys. 79(1), 135–174 (2007). [CrossRef]  

15. D. E. Browne and T. Rudolph, “Resource-efficient linear optical quantum computation,” Phys. Rev. Lett. 95(1), 010501 (2005). [CrossRef]  

16. S. Bartolucci, P. Birchall, H. Bombín, et al., “Fusion-based quantum computation,” Nat. Commun. 14(1), 912 (2023). [CrossRef]  

17. B. Pankovich, A. Kan, K. H. Wan, et al., “High photon-loss threshold quantum computing using GHZ-state measurements,” arXiv, arXiv:2308.04192 (2023). [CrossRef]  

18. C. Monroe and J. Kim, “Scaling the ion trap quantum processor,” Science 339(6124), 1164–1169 (2013). [CrossRef]  
