Reference-enhanced x-ray single-particle imaging


Abstract

X-ray single-particle imaging involves the measurement of a large number of noisy diffraction patterns of isolated objects in random orientations. The information missing from these patterns is then computationally recovered in order to obtain the 3D structure of the particle. While the method has promised to deliver room-temperature structures at near-atomic resolution, there have been significant experimental hurdles in collecting data of sufficient quality and quantity to achieve this goal. This paper describes two ways to modify the conventional methodology that significantly ease the experimental challenges, at the cost of additional computational complexity in the reconstruction procedure. Both methods involve the use of holographic reference objects, whose structure can be described with only a few parameters, in close proximity to the sample of interest. A reconstruction algorithm for recovering the unknown degrees of freedom is also proposed and tested with toy model simulations. The techniques proposed here enable 3D imaging of biomolecules that is not possible with conventional methods and open up a new family of methods for recovering structures from datasets with a variety of hidden parameters.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Single-particle imaging (SPI) at x-ray free-electron lasers (XFELs) should, in principle, be able to image the structure and dynamics of biomolecules at near-atomic resolution and subpicosecond time scales [1]. Challenges still remain in collecting a sufficient number of high-quality diffraction patterns, where “high-quality” refers to patterns with low background and enough signal to enable orientation determination and merging of individual patterns into a 3D structure. Various studies have examined the minimum pattern quality that is still tolerable [2–5], and they conclude that single proteins can be imaged with currently available XFEL sources as long as the background scattering is significantly less than the scattered signal from the particle and ${10^5} - {10^6}$ patterns from identical objects can be collected. Most experimental work [6–8] has focused on method development with much larger particles that scatter strongly enough to lie comfortably above these theoretical limits.

Various techniques have been employed to deliver the samples into the x-ray focus. Aerosol methods have the lowest background [9,10], but the particle densities are often so low as to make collection of a large number of patterns infeasible. One can collect more patterns by using a larger x-ray focus, but this proportionally reduces the scattered signal per pattern, so the signal integrated over the whole data set stays constant for a given measurement time.

Alternatively, one can use a carrier medium for the particles, which can significantly increase the data collection rate. This medium can either be a liquid jet [11,12] or a solid substrate that is scanned in the x-ray focus [13–15]. Unfortunately, the scattering from the carrier medium overpowers the signal from the particle, usually making even hit detection impossible for single biomolecules. This can, in principle, be improved by reducing the focus size significantly, such that it almost matches the particle size. In that case, only a very small volume of the carrier medium would be illuminated, which should make the signal from the particle detectable. However, x-ray optics capable of such small foci and high flux densities at XFELs do not exist yet.

In this paper, we discuss two alternative strategies for obtaining high-quality diffraction patterns with minimal modifications to currently available sample preparation and delivery technologies. The general principle for both strategies is to gain signal-to-noise by including scattering from a strongly scattering reference [16,17]. This is, of course, the holographic principle that has already been applied in diffractive imaging settings, notably in the form of Fourier transform holography [18] or as “free-flying” holography [19]. In both of those cases, the stated goal has been to recover the structure of the particle in single shots without the need for phase retrieval. In contrast, the objective here is to recover the full 3D structure of a mostly reproducible object from a large number of patterns of composite structures consisting of the target object as well as a reference.

The first composite object we consider is one where a gold nanoparticle (preferably a sphere) is chemically attached to the target object in an aerosol imaging setup. The second system is one where a 2D crystal is placed in the beam path with a unit cell comparable to the target object size. This can be achieved on a substrate in a straightforward manner by placing the 2D crystal on one side of the substrate and the sample on the other.

The common feature of these methods is that they add heterogeneity to the dataset, since the diffraction patterns vary not only in the orientation of the particles in the beam but also in the properties and relative position of the reference. Composite objects like those we will discuss in Section 2 have been proposed before [20], but this structural variability has been ignored, and the reference and the target needed to be separated by a distance larger than the size of either, which is not the case here.

As we will show, the methods proposed here gain experimental efficiency at the cost of computational complexity. In the next sections, we will discuss the two types of systems in detail. We will also describe a reconstruction algorithm that recovers the structure of the sample from these holographic patterns by treating the additional latent variables in the same way as the unknown orientations are treated in conventional SPI. For the nanoparticle reference case, we will also present 2D simulations of a toy model to demonstrate the efficacy of the algorithm.

In the following discussion, for convenience we refer to an identical or reproducible target object. One should note that exact, atomic-resolution reproducibility is not required. The problem of conformational variability is the same one faced by conventional SPI, and the techniques being developed to deal with structural variability should also be applicable to the imaging methods described here.

2. SINGLE-PARTICLE REFERENCE

For the first holographic system, we consider a situation where the unknown target particle is attached to a single reference structure, specifically a spherical gold nanoparticle (AuNP). Owing to its high scattering cross section, this reference alleviates the problem of finding hits over the background, enabling the use of smaller particles than could be used in conventional SPI. Second, due to the high density, the acceleration of the particles in the flow field is lower and the density of particles in the aerosol stream is higher, increasing the hit rate, i.e., the fraction of pulses for which a particle is in the x-ray focus. Finally, these spherical references have just a single parameter describing their structure: the radius. Gold nanospheres of a wide range of sizes are relatively easy to produce and are even commercially available. Various methods for linking them to proteins and DNA have also been extensively studied [21–23]. The best involve linkers where one end attaches site-specifically to certain residues/bases on the biomolecule and the other to the surface of the AuNP. However, these experimental benefits come at the cost of increased heterogeneity.

In addition to the inherent structural variability of the target, we have to solve for the relative position of the reference with respect to the unknown object and for the size of the reference. If the reference were anisotropic rather than a sphere, one would also have to contend with the relative orientation of the two objects, making spheres even more desirable. Thus, we have four additional degrees of freedom for a spherical reference, and more for an arbitrary one. We should note, however, that not all of these degrees of freedom need be independent. Since the spheres are linked to points on the surface, there is a strong correlation between the position of the center and the size. Nevertheless, there is a substantial increase in the phase space of parameters that must be solved for each diffraction pattern.

If one were performing a conventional SPI experiment with such samples, the data collection process would be considerably eased by the experimental benefits described above, but one would need to find a subset of patterns corresponding to the same composite object in order to retrieve a 3D structure. This means throwing away a lot of data to find this subset. The holographic approach is instead to decompose the composite object as the sum of the density of the spherical AuNP ${\rho _s}({\bf r},d)$ and that of the unknown object ${\rho _o}({\bf r})$, where $d$ represents the diameter of the sphere. The total electron density is

$$\rho ({\bf r}) = {\rho _o}({\bf r}) + {\rho _s}({\bf r} - {\bf t},d),$$
where ${\bf t}$ is the relative shift of the centers of the two objects. The 3D intensity distribution of this object sampled in a single shot then becomes
$$I({\bf q},d,{\bf t}) = {\left| {{F_o}({\bf q}) + {F_s}({\bf q},d){e^{2\pi i{\bf q.\bf t}}}} \right|^2},$$
where the $F$ terms represent the Fourier transform of the densities and the shift of the sphere becomes a phase ramp in 3D. The Fourier transform of a sphere is straightforward to calculate analytically:
$${F_s}({\bf q},d) \propto {d^3}\left({\frac{{\sin (s) - s\cos (s)}}{{{s^{\!3}}}}} \right),$$
with $s = \pi\! d|{\bf q}|$. This is illustrated in Fig. 1, where one can see the effect on the intensity distributions due to the addition of a spherical AuNP on a randomly generated organic-like cluster.

Fig. 1. Random sphere cluster used as the test object for illustration. (a) Intensity distribution of the test object shown on a logarithmic scale. Inset shows the projected electron density on a linear scale. (b) The same test object with a strongly scattering reference sphere attached. The main figure again shows the log-scale intensity distribution, while the inset shows the projected electron density. The size of the intensity image is ${185} \times {185}$ pixels, while the inset is ${50} \times {50}$ pixels.

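To make the forward model concrete, the following numpy sketch (not the released code at [36]) evaluates the sphere transform of Eq. (3) and the composite intensity of Eq. (2) on a 2D grid with $q$ in cycles per pixel. The function names, the grid convention, and the relative contrast factor (set to the roughly 11× gold-to-protein ratio used for the simulations in Section 2.C) are illustrative assumptions, and the absolute scaling between the two terms is not calibrated.

```python
import numpy as np

def sphere_ft(qmag, d):
    # Eq. (3): F_s(q, d) proportional to d^3 (sin s - s cos s) / s^3, with s = pi d |q|
    s = np.pi * d * qmag
    out = np.full_like(s, 1.0 / 3.0)          # limit of (sin s - s cos s)/s^3 as s -> 0
    big = s > 1e-6
    out[big] = (np.sin(s[big]) - s[big] * np.cos(s[big])) / s[big]**3
    return d**3 * out

def holographic_intensity(rho_obj, d, shift, rel_contrast=11.0):
    # Eq. (2): I(q, d, t) = |F_o(q) + F_s(q, d) exp(2 pi i q.t)|^2
    n = rho_obj.shape[0]
    f_obj = np.fft.fftn(rho_obj)
    qx, qy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing='ij')
    ramp = np.exp(2j * np.pi * (qx * shift[0] + qy * shift[1]))   # shift t in pixels
    f_sph = rel_contrast * sphere_ft(np.hypot(qx, qy), d)
    return np.abs(f_obj + f_sph * ramp)**2
```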

Equation (2) makes it explicit that one must solve for the diameter and relative shift for each pattern in order to recover the structure of the common object. Unlike the subset-selection approach described above, all diffraction patterns contribute to the structure, increasing the signal-to-noise ratio (SNR) and yielding a higher resolution structure. Of course, the best-case scenario would still be one where the parameters $d$ and ${\bf t}$ have very narrow distributions, but this method effectively makes the experiment more tolerant to variations in the attachment process while still benefiting from the experimental advantages of the gold reference.

For a single shot, if photons can be reliably counted, the Poisson noise at a given pixel is the square root of the measured intensity, which is the sum of the expected intensity from Eq. (2) and a background term $B({\bf q})$. The signal is the measured intensity minus the contributions of the sphere alone and the background. Keeping the convention of a positive value, the SNR can be written as

$$\text{SNR}({\bf q}) = \left| \frac{I({\bf q},d,{\bf t}) - |{F_s}({\bf q},d)|^2}{\sqrt{I({\bf q},d,{\bf t}) + B({\bf q})}} \right|.$$

In the absence of a reference, the SNR simplifies to

$$\frac{|{F_o}({\bf q})|}{\sqrt{1 + B({\bf q})/|{F_o}({\bf q})|^2}}.$$

In the limit where the sphere signal ${F_s}({\bf q})$ is much larger than that of the object, the SNR can be approximated as

$$\frac{2|{F_o}({\bf q})||\cos(2\pi {\bf q.\bf t} + {\phi _o})|}{\sqrt{1 + B({\bf q})/|{F_s}({\bf q},d)|^2}}.$$

Two points can be noted here regarding the SNR in these two limits. The first is that the detrimental effect of a given background is lower in the case with a strong reference. Thus, even though the noise increases in absolute terms with a strong reference, the signal becomes more background tolerant.

In the second expression, the cosine term represents the fluctuation of the signal as the reference is translated with respect to the object. The amplitude of this fluctuation is the term relevant to determining whether one can solve for the relative positions from the patterns and recover the complex structure factor ${F_o}({\bf q})$. Due to the coherent holographic addition, this SNR amplitude is double what one would obtain if there was no reference.
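As a purely numerical illustration of the two limits above, consider a pixel where the object amplitude is small and the background dominates. All numbers below are arbitrary assumptions, and the factor $2/\pi$ comes from averaging the $|\cos|$ modulation over a uniformly distributed phase, which is an illustrative choice rather than something specified in the text.

```python
import numpy as np

F_o = 1.0      # object amplitude |F_o(q)| at this pixel, in sqrt(photons) (assumed)
F_s = 30.0     # sphere amplitude |F_s(q, d)| at the same pixel (assumed)
B = 100.0      # background photons at this pixel (assumed)

# No-reference limit above: ~0.10 for these numbers
snr_no_ref = F_o / np.sqrt(1.0 + B / F_o**2)
# Strong-reference limit above, with |cos| replaced by its mean 2/pi: ~1.2
snr_with_ref = 2 * F_o * (2 / np.pi) / np.sqrt(1.0 + B / F_s**2)

print(snr_no_ref, snr_with_ref)
```

With these assumed values, the same weak object signal goes from being buried in the background to being clearly measurable, which is the background tolerance referred to above.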

A. Reconstruction Algorithm

The data set described above consists of diffraction patterns that are noisy Ewald-sphere slices through the 3D intensities described by Eq. (2), each at a random, unknown orientation and with an unknown scale factor due to variations in the incident fluence. A reconstruction algorithm to recover the parameters of each pattern and the structure of the object is described in this section. The pseudocode for a single iteration of the procedure is given in Algorithm 1, with some details regarding scaling omitted for clarity. Note that the functions calc_prob and update_intensities in the algorithm are identical to those in the standard Expand-Maximize-Compress (EMC) algorithm described elsewhere [2,24].


Algorithm 1. Pseudocode for the Reconstruction Algorithm with Variably Attached Spheres

The EMC algorithm [2], used widely in conventional SPI [5–8], is composed of three steps in each iteration: expand, maximize, and compress. The goal in each iteration is to find a model that has a higher likelihood of generating the data measured on the detector. The expand step is a transformation from model space to detector space for a given set of sampled hidden parameters. In the standard use case, the model is a grid of 3D intensities and the hidden parameter is the orientation. In the expand step, one therefore interpolates the 3D intensities along an Ewald-sphere surface rotated by the given orientation and then applies standard polarization and solid-angle corrections to produce the predicted intensities on the detector.

The maximize step finds an update to each of these detector views using the expectation-maximization procedure, given a noise model. Usually, one also needs to find the maximum-likelihood fluence factors. The result is a set of updated views that together have a higher likelihood but are not necessarily consistent with a single 3D intensity. At the end of each iteration, this consistency is enforced in the compress step. The straightforward solution is to reinterpolate the detector views into the 3D model after undoing the detector corrections. Once the 3D intensity has converged, standard iterative phase retrieval algorithms are used to obtain the electron density.

In the holographic case, the maximize step is left unchanged, since the objective is still to find the best possible predictions for the intensity at each detector pixel. The common 3D model is now not the 3D intensity of the whole object but the complex Fourier transform of the unknown target, ${F_o}({\bf q})$. In the expand step, one now interpolates the complex values along the Ewald sphere as before, but then converts them to detector intensities according to Eq. (2) before applying detector corrections. As stated before, the predicted detector intensities depend upon the orientation, sphere diameter, and relative shift.
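A minimal sketch of this modified expand step is shown below, assuming the standard EMC interpolation of the complex model at the rotated Ewald-sphere q-vectors has already been done. The function names are assumptions, and sphere_ft refers to the sphere transform sketched earlier in this section.

```python
import numpy as np

def expand_view(f_obj_slice, q_pix, d, shift, rel_contrast=11.0):
    """Predicted detector intensities for one sampled (orientation, d, t) tuple.

    f_obj_slice : complex F_o interpolated at the rotated Ewald-sphere q-vector
                  of each detector pixel (the standard EMC expand interpolation)
    q_pix       : those q-vectors, shape (npix, 3), crystallographic convention
    """
    qmag = np.linalg.norm(q_pix, axis=1)
    ramp = np.exp(2j * np.pi * (q_pix @ np.asarray(shift)))  # phase ramp from the shift t
    f_sph = rel_contrast * sphere_ft(qmag, d)                # sphere transform, Eq. (3)
    return np.abs(f_obj_slice + f_sph * ramp)**2             # Eq. (2), before detector corrections
```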

The compress step, though, is not so straightforward, since determining the optimal ${F_o}({\bf q})$ from many different detector intensities is effectively a phase retrieval problem. The first part is to recover the 3D intensities for a given set of diameter and shift parameters, $d$ and ${\bf t}$. This can be accomplished simply by interpolation as before. One is then left with many 3D intensity volumes, each corresponding to a different realization of Eq. (2), from which a single complex ${F_o}({\bf q})$ must be determined. A divide and concur difference map approach [25,26] will be used here to solve this problem.

Iterative projection algorithms such as the difference map [27] and hybrid input–output [28] are used to solve constraint satisfaction problems like phase retrieval by searching for the intersection of two constraint sets in a high-dimensional space. The update rules in these methods are composed of projections, which map any given point in this space to the closest point in a constraint set. The divide and concur method extends these algorithms to an arbitrary number of constraint sets by expanding the state vector: if there are $N$ constraints to satisfy, the new state vector is $N$ copies of the original one. In the divide projection, each of the copies is projected to one of the constraint sets. The concur projection enforces consistency by replacing each copy with the average over all of them.

As applied to the compress step here, the divide projection will be a standard modulus projection from phase retrieval for each of the 3D intensity volumes. If the $n$th intensity is ${I_{\text{obs},n}}({\bf q})$, the divide projection for that copy ${F_{o,n}}({\bf q})$ will be

$${{\cal P}_D}[{F_{o,n}}({\bf q})] = \sqrt{\frac{{I_{\text{obs},n}}({\bf q})}{{I_{\text{calc},n}}({\bf q})}}\,{F_{\text{calc},n}}({\bf q}) - {F_s}({\bf q},{d_n}){e^{2\pi i{\bf q.{\bf t}_n}}},$$
where ${F_{\text{calc},n}}({\bf q}) = {F_{o,n}}({\bf q}) + {F_s}({\bf q},{d_n}){e^{2\pi i{\bf q.{\bf t}_n}}}$ and ${I_{\text{calc},n}}({\bf q}) = |{F_{\text{calc},n}}({\bf q})|^2 = I({\bf q},{d_n},{{\bf t}_n})$ from Eq. (2). The concur projection sets each copy equal to the average over all of them. In addition, one can impose real-space constraints such as positivity or a bounded support, in which case the averaged copy is projected onto those constraints. This is especially helpful at low resolution, where the phase ramps can be small over the range of translations. After convergence, the solution passed on to the next iteration is taken to be the concur projection, which is just the average over all copies with the real-space constraints applied.
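A sketch of this compress step is given below for the difference map with $\beta = 1$, in which case the update reduces to $x \rightarrow x + {\cal P}_C(2{\cal P}_D(x) - x) - {\cal P}_D(x)$. The array layout, the handling of the real-space constraint as a user-supplied hook, and the choice of returned estimate are implementation assumptions rather than details taken from Algorithm 1.

```python
import numpy as np

def proj_divide(copies, i_obs, f_sphere, eps=1e-12):
    # Divide projection above: rescale F_calc = F_o + F_s e^{2 pi i q.t} to the observed modulus
    f_calc = copies + f_sphere
    scale = np.sqrt(i_obs / (np.abs(f_calc)**2 + eps))
    return scale * f_calc - f_sphere

def proj_concur(copies, realspace_proj=None):
    # Concur projection: replace every copy with the (optionally constrained) average
    avg = copies.mean(axis=0)
    if realspace_proj is not None:        # e.g., support/positivity applied to IFT(avg)
        avg = realspace_proj(avg)
    return np.broadcast_to(avg, copies.shape).copy()

def compress(copies, i_obs, f_sphere, n_iter=50, realspace_proj=None):
    # copies:   (N, *grid) complex estimates of F_o, one per (d_n, t_n) volume
    # i_obs:    (N, *grid) interpolated 3D intensity volumes from the maximize step
    # f_sphere: (N, *grid) precomputed F_s(q, d_n) exp(2 pi i q.t_n) for each volume
    x = copies
    for _ in range(n_iter):
        pd = proj_divide(x, i_obs, f_sphere)
        x = x + proj_concur(2 * pd - x, realspace_proj) - pd
    return proj_concur(x, realspace_proj)[0]   # concurred average as the next F_o estimate
```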

Fig. 2. Illustration of the forward calculation, used both to generate data and in the expand step. (a) Poisson-sampled photon counts of the intensity distribution in Fig. 1(b) shown on a logarithmic scale. Almost all the photons are concentrated at a low resolution, as is expected from the Fourier transform of a compact object. The actual data will be a randomly rotated version of this pattern. (b) Virtual powder pattern, or integrated image for 10,000 iterations of this process with different sphere diameters, positions, and in-plane rotations. The innermost region and the corners of the detector were masked out.


B. Practical Concerns

When reconstructing experimental data, it may often be the case that the sphere diameter can be determined from single shots to a precision better than the diameter sampling used in the EMC reconstruction. This is because the diameter can be estimated from the azimuthally averaged intensity $I(|{\bf q}|)$, which has a relatively good SNR even with only a few hundred scattered photons. In this situation, the maximize step can be simplified to calculate the probabilities only over the shift parameters rather than over all diameters for every pattern.
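A sketch of such a per-shot estimate is shown below: the sphere-dominated radial profile is fit by a 1D scan over candidate diameters using the analytic form of Eq. (3). The pixel-to-q conversion, the scan range, and the function names are assumptions; sphere_ft is the form factor sketched earlier.

```python
import numpy as np

def radial_average(frame, center):
    y, x = np.indices(frame.shape)
    r = np.rint(np.hypot(x - center[1], y - center[0])).astype(int)
    counts = np.bincount(r.ravel(), weights=frame.ravel())
    npix = np.bincount(r.ravel())
    return counts / np.maximum(npix, 1)

def estimate_diameter(frame, center, q_per_pixel, d_grid):
    rad = radial_average(frame, center)
    q = np.arange(rad.size) * q_per_pixel
    best_d, best_err = None, np.inf
    for d in d_grid:
        model = sphere_ft(q, d)**2                       # sphere-only radial intensity shape
        scale = (rad * model).sum() / (model**2).sum()   # least-squares amplitude
        err = ((rad - scale * model)**2).sum()
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# e.g., d_est = estimate_diameter(frame, center=(92, 92), q_per_pixel=1/185,
#                                 d_grid=np.arange(4, 12, 0.25))   # values assumed
```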

In general, this reconstruction strategy lends itself readily to refinement, i.e., to systematically increasing the sampling rate with increasing resolution. At low resolution, where the Ewald-sphere curvature can be neglected, only the in-plane position of the reference need be recovered; put another way, only one out-of-plane position need be sampled. As data at higher scattering angles are included, one should sample more finely in the neighborhood of the most likely positions for each pattern. This approach works because the intensity dependence on translation seen in Eq. (2) is smooth and the error metric near the solution is convex. A similar refinement strategy is used in single-particle cryo-electron microscopy, where in-plane translations and per-pattern contrast transfer functions (corresponding to out-of-plane translations) need to be solved [29]. At resolutions finer than about 1 nm, truly spherical reference objects do not exist, which means that three additional orientation parameters would have to be solved for, at the same angular precision as for the objects themselves.

Regarding the sampling rate for translations, the primary term of interest is the holographic cross term in Eq. (2), which is written out explicitly as follows:

$${I_{\text{cross}}}({\bf q},d,{\bf t}) = 2|{F_o}({\bf q})||{F_s}({\bf q},d)|\cos (2\pi {\bf q.\bf t} + {\phi _o}({\bf q})),$$
where ${F_o}({\bf q}) = |{F_o}({\bf q})|{e^{i{\phi _o}({\bf q})}}$. The cosine term covers one full period when the component of ${\bf t}$ along ${\bf q}$ changes by $1/|{\bf q}|$, since ${\bf q}$ has been defined using the crystallographic convention. We speculate that a translational sampling interval somewhat finer than this period at the highest resolution of interest should be sufficient to place patterns in the correct bin during refinement, although this must be tested in simulations.

The computational complexity for the conventional EMC algorithm with a Poisson noise model is determined by the expectation step, and it scales with the number of orientations times the number of photon-containing pixels in the data set. Here, this would be multiplied by the number of sampled states. Naturally, both the orientations and the number of sampled states will be much lower in a refinement iteration than for a global search.
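For a rough, purely illustrative sense of scale (all numbers below are assumptions, not values from the text), the per-pattern cost of the expectation step is the product of the sampled orientations, the sampled reference states, and the photon-containing pixels:

```python
n_orient = 3600        # e.g., 0.1 deg in-plane sampling (assumed)
n_states = 10 * 25     # e.g., 10 diameters x 25 shift vectors (assumed)
n_pix = 1000           # photon-containing pixels per pattern (assumed)
print(f"{n_orient * n_states * n_pix:.1e} likelihood terms per pattern per iteration")  # 9.0e+08
```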

C. 2D Simulations

Simulations have been performed to illustrate the data produced and to demonstrate the reconstruction algorithm. For simplicity, a 2D toy model that is rotated in-plane has been used, similar to previous experiments testing the performance of the EMC algorithm with sparse data [4,30]. There is one parameter for the angle and there are two for the shift, but the qualitative structure of the problem remains the same. The test object, representing the projected density of a random agglomeration of spheres, is shown along with its Fourier intensity in Fig. 1(a).

In order to generate the holographic data, the density of the sphere was added to that of the test object, with the sphere center and diameter randomly sampled from normal distributions of given widths. The result of one instance of this is shown in Fig. 1(b), which also shows the intensity distribution of the composite object. These intensities were then Poisson sampled to generate photon counts per pixel [Fig. 2(a)] and rotated in-plane by a random angle. For this simulation, 10,000 patterns with ${10^5}$ photons/frame were generated. The sum of all the patterns, showing azimuthal symmetry due to the random in-plane rotations, is shown in Fig. 2(b). The electron density of the sphere was chosen to be around 11 times that of the object, corresponding to the scattering factor ratio between gold and a protein-like material.

The sphere diameters for each shot were randomly sampled from a normal distribution with a mean of 7 pixels and a standard deviation of 1 pixel. For comparison, the test object image in the inset of Fig. 1(a) is ${50} \times {50}$ pixels in size. The shift of the sphere center was randomly sampled from a 2D normal distribution with a standard deviation of 1 pixel. For these simulations, all of these parameters were independently generated, but as mentioned earlier, it is quite possible that the sphere diameter and center positions are correlated. The reconstruction algorithm could be made more efficient if these correlations were known.
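A minimal sketch of this data generation loop (not the released code at [36]) is given below, reusing the holographic_intensity function sketched earlier in Section 2. The parameter values follow the text, while the function names, the rotation order, and the normalization to a fixed photon count are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def make_pattern(rho_obj, n_photons=1e5, rel_contrast=11.0):
    d = rng.normal(7.0, 1.0)                  # sphere diameter in pixels
    shift = rng.normal(0.0, 1.0, size=2)      # sphere-center shift in pixels
    angle = rng.uniform(0.0, 360.0)           # in-plane rotation in degrees
    intens = holographic_intensity(rho_obj, d, shift, rel_contrast)     # composite Eq. (2)
    intens = rotate(np.fft.fftshift(intens), angle, reshape=False, order=1)
    intens = np.clip(intens, 0.0, None)
    intens *= n_photons / intens.sum()        # scale to the target mean photon count
    return rng.poisson(intens), d, shift, angle
```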

The initial guess for the iterate, ${F_o}({\bf q})$, is a set of random complex numbers. The reconstruction proceeds iteratively as described in Section 2.A, with the main difference that the object is 2D and there is only one degree of freedom for the orientations, namely the in-plane angle. Additionally, a support constraint is applied in conjunction with the concur projection. The initial support is taken to be a ${37} \times {37}$ pixel square region centered in the field of view. The support is updated every five iterations using a shrink-wrap-like [31] update rule where the current iterate is convolved with a Gaussian kernel with a standard deviation of 2 pixels and thresholded such that 2050 pixels are inside the support. Fifty iterations of the divide and concur difference map were applied for every EMC iteration with the $\beta$ parameter set to 1.
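The support update described above can be sketched as follows, assuming the iterate has already been transformed to real space; the function name and the handling of ties at the threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_support(rho, n_keep=2050, sigma=2.0):
    # Shrink-wrap-like rule: blur with a sigma-pixel Gaussian and keep the
    # n_keep brightest pixels (plus any ties) as the new support mask
    blurred = gaussian_filter(np.abs(rho), sigma)
    threshold = np.sort(blurred.ravel())[-n_keep]
    return blurred >= threshold
```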

The results for a typical run are shown in Fig. 3. Figure 3(a) shows the concur projection of the current iterate after every five iterations. These images were rotated by ${-}{15^ \circ}$ to align with the true solution to make visual identification of features easier. The reconstruction will have, in general, a random rotational offset with respect to the ground truth. One can see that most of the structure of the test object has been recovered, but some additional density is also present. This can probably be optimized by modifying the phase retrieval parameters, especially those related to the support update. After every iteration, the 2D detector intensities were reconstructed for every set of sphere diameter and shift parameters by averaging over all the in-plane rotations. One of these is shown for the final iteration in Fig. 3(b). This can be compared with the true intensities with similar parameters shown in Fig. 1(b).


Fig. 3. Single-particle reference simulation results. (a) Reconstructed iterates after every five iterations. The reconstructions were rotated by ${-}{15^ \circ}$ to facilitate comparison with the original image. (b) Intensity reconstruction of the final iteration with a sphere of diameter 7 pixels and shifts of ${+}{0.5}$ pixels in both the X and Y directions shown on a logarithmic scale.


Figure 4(a) shows the Fourier ring correlation (FRC) metric [32] comparing the reconstructions for a few iterations to the ground truth. The vertical dashed line indicates the edge of the “detector,” corresponding to a full-period resolution of 1 pixel, and the horizontal dashed line indicates the somewhat arbitrary $\text{FRC} = 0.5$ cutoff. The final plot [Fig. 4(b)] shows the convergence of the most likely parameters (diameter, position, orientation) for each pattern as the iterations proceed. This convergence plot is the same one used in the Dragonfly [24] software and shows how after around 10 iterations the most likely parameters are already mostly converged.


Fig. 4. Single-particle reference simulation metrics. (a) Fourier ring correlation between reconstructions and ground truth as a function of iteration number. The oversampling ratio is close to 4 for these simulations, and the vertical dashed line corresponds to a resolution of 1 real-space pixel. (b) Convergence plot of most likely parameters for each frame as a function of iteration.


3. REFERENCE LATTICE

The second method we will discuss to provide a holographic reference is to utilize a 2D crystal, either patterned onto a chip or as a self-assembled colloidal crystal [33]. An illustration of the experimental data for this is shown in Fig. 5. One way to get such data is to have the 2D crystal on one side of a substrate and the target samples randomly dispersed on the other side. Such fixed target scanning geometries have been used for SPI of gold clusters [14] as well as 2D crystallography [13] and fiber diffraction [15]. As before, one would have to solve for additional parameters on top of the object orientation, namely, the position of the object’s center within the unit cell as well as variations in the separation between the lattice and object along the beam direction.


Fig. 5. 2D schematic showing the diffraction from a 2D crystal made up of spheres in a triangular lattice with the same cluster test object used in Section 2.C. (a) The projected electron density showing the lattice, the target, and the probe, which here had a full width at half-maximum of 5 unit cells. (b) The expected intensity distribution from such a composite object on a logarithmic scale. The peak intensities are modulated by the orientation and position of the target object. One can also see the weak diffuse scattering from the molecular transform of the target object itself, but this will likely be drowned in the background scattering from the substrate. Note that the superlattice peaks visible along the horizontal axis are due to interpolation artifacts not expected in the real data.


However, the big advantage of using a lattice reference rather than directly putting the sample on the substrate is the extreme gain in background tolerance obtained by using integrated Bragg peak intensities. Since experimental background scattering from the substrate and other beamline components is slowly varying, it is often straightforward to determine the integrated peak intensities, as is standard in crystallography. In contrast to the single-particle reference discussed in Section 2, the 2D crystal is prepared separately from the target sample and the relative positions and orientations of the two systems should be uniformly distributed.

Let the electron density of the unit cell be ${\rho _c}({\bf r})$ and that of the unknown object be ${\rho _o}({\bf r})$, as before. Let the unit cell be larger than the object, with the illuminated region represented by a probe function $P({\bf r})$ that is significantly larger than both. The first condition can be relaxed somewhat but is convenient for sufficient sampling, especially at low resolution, as will soon be evident. The second condition is necessary to avoid going into the regime of ptychography, where one would have to recover the shot-by-shot probe profile [34,35].

The 2D crystal can be represented as the unit cell convolved with a grid of Dirac delta functions,

$${\rho _L}({\bf r}) = {\rho _c}({\bf r})*\sum\limits_i \delta ({\bf r} - {{\bf r}_i}),$$
where the $*$ symbol represents convolution. The scattering contrast is the sum of the electron densities of the crystal and the rotated and translated object multiplied by the probe,
$$\rho ({\bf r}) = \left[{{\rho _L}({\bf r}) + {\rho _o}({\mathbb R.\bf r} - {\bf t})} \right] \cdot P({\bf r}),$$
where ${\mathbb R}$ and ${\bf t}$ are rotations and translations of the object with respect to a canonical configuration.
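The following sketch builds such a composite density, like the one shown in Fig. 5(a), by convolving an assumed unit-cell density with a triangular grid of delta functions, adding the rotated and shifted object, and multiplying by a Gaussian probe. The lattice constant, probe width, grid size, and the use of scipy.ndimage in place of exact rotation and shift operators are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift as nd_shift

def embed(small, n):
    """Place a small array at the center of an n x n canvas."""
    out = np.zeros((n, n))
    r0, c0 = (n - small.shape[0]) // 2, (n - small.shape[1]) // 2
    out[r0:r0 + small.shape[0], c0:c0 + small.shape[1]] = small
    return out

def composite_density(rho_cell, rho_obj, n=512, a=50, probe_fwhm=250,
                      angle_deg=0.0, t=(0.0, 0.0)):
    # Triangular grid of delta functions with lattice constant a (pixels)
    grid = np.zeros((n, n))
    for i in range(-n // a, n // a + 1):
        for j in range(-n // a, n // a + 1):
            px = n // 2 + int(round(i * a + 0.5 * a * j))
            py = n // 2 + int(round(j * a * np.sqrt(3) / 2))
            if 0 <= px < n and 0 <= py < n:
                grid[py, px] = 1.0
    # rho_L: unit cell convolved with the grid (circular convolution via FFTs)
    rho_L = np.real(np.fft.ifft2(np.fft.fft2(grid) *
                                 np.fft.fft2(np.fft.ifftshift(embed(rho_cell, n)))))
    # Rotated, shifted object plus the lattice, multiplied by a Gaussian probe
    obj = nd_shift(rotate(embed(rho_obj, n), angle_deg, reshape=False, order=1), t)
    y, x = np.indices((n, n)) - n // 2
    sigma = probe_fwhm / 2.355
    probe = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (rho_L + obj) * probe
```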

The far-field diffraction pattern is the Fourier transform of $\rho ({\bf r})$ sampled along the Ewald sphere. Using the convolution theorem, we get

$$F({\bf q}) = \sum\limits_i {F_c}({{\bf q}_i}){F_P}({\bf q} - {{\bf q}_i}) + {F_o}({\mathbb R.\bf q}){e^{2\pi i{\bf q.\bf t}}},$$
where the ${F_\_}({\bf q})$ terms represent the Fourier transforms of the corresponding real-space quantities. Since the spot size is assumed to be large compared to the object, ${F_P}({\bf q})$ is narrow in reciprocal space and the effect of convolving it with ${F_o}({\bf q})$ is neglected. The first term represents a reciprocal lattice of broad Bragg peaks whose shape is given by the probe Fourier transform and whose height is given by the magnitude of the unit cell transform at the center of the Bragg peak. The diffracted intensities are given by
$$\begin{split} I({\bf q}) &= F({\bf q}){F^*}({\bf q})\\ &= \sum\limits_i {\left| {{F_c}({{\bf q}_i}){F_P}({\bf q} - {{\bf q}_i})} \right|^2} + {\left| {{F_o}({\mathbb R.\bf q})} \right|^2}\\ &\quad + {F_o}({\mathbb R.\bf q}){e^{2\pi i{\bf q.\bf t}}}\sum\limits_i F_c^*({{\bf q}_i})F_P^*({\bf q} - {{\bf q}_i}) + \text{c.c.}, \end{split}$$
where c.c. refers to the complex conjugate of the previous term. The first term is simplified by the assumption that the probe is much larger than the unit cell, and thus the width of a Bragg peak is much less than the reciprocal lattice constant. In practice, there will be background scattering from various components in the beamline added to the intensities. This background is measurably higher than in the aerosol-based sample delivery method discussed in Section 2. As in serial crystallography, this can be mitigated by working with the integrated intensity of each Bragg peak at ${{\bf q}_i}$. The relatively slowly varying $|{F_o}{|^2}$ term is assumed to be lost in the background. Also, if the probe is much larger than a unit cell, as assumed, we would expect the Bragg peaks to be much brighter than the diffuse molecular transform of the target object. If the integral of the probe function ${F_P}({\bf q} - {{\bf q}_i})$ in the neighborhood of the peak is $N$, the integrated peak intensities are given by
$$\begin{split} {I_{\text{obs}}}({\bf q}) &= {\left| {N{F_c}({{\bf q}_i})} \right|^2}\\ &\quad + 2N\left| {{F_o}({\mathbb R.{{\bf q}_i}})} \right|\left| {{F_c}({{\bf q}_i})} \right|\cos ({\phi _o} + 2\pi {{\bf q}_i.\bf t} - {\phi _c}), \end{split}$$
where the ${\phi _\_}$ terms represent the phases of the Fourier transform terms. With the choice of a simple object for the unit cell, ${F_c}({\bf q})$ can be precalculated or measured beforehand.
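A sketch of this integrated-peak forward model is given below: with the unit-cell transform precalculated at the reciprocal lattice points ${{\bf q}_i}$ and the current object transform sampled at ${\mathbb R}.{{\bf q}_i}$, the expected integrated intensities follow directly. The integrated probe factor $N$ and the function names are assumptions.

```python
import numpy as np

def bragg_intensities(f_cell, f_obj_rot, q_peaks, t, N=100.0):
    # f_cell:    complex F_c(q_i) at each Bragg peak, shape (n_peaks,)
    # f_obj_rot: complex F_o(R.q_i) for the sampled orientation, shape (n_peaks,)
    # q_peaks:   in-plane peak positions q_i, shape (n_peaks, 2)
    # t:         object shift within the unit cell
    phase = np.angle(f_obj_rot) + 2 * np.pi * (q_peaks @ np.asarray(t)) - np.angle(f_cell)
    cross = 2 * N * np.abs(f_obj_rot) * np.abs(f_cell) * np.cos(phase)
    return np.abs(N * f_cell)**2 + cross
```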

A reconstruction approach very similar to that in Section 2.A is applicable, except that Eq. (2) is replaced by the integrated peak intensity expression above and the intensities are only sampled at the reciprocal lattice points. Depending on the relative sizes of the object and the unit cell, a worry might be that the sampling rate of the Bragg peaks may be insufficient to determine the structure ab initio. However, with random orientations, the sampling provided by ${\mathbb R.{\bf q}_i}$ will be sufficient beyond the first few $hk$ orders. Nevertheless, for completeness at low resolutions, a unit cell larger than the object would be preferable.

The other experimental parameter that requires some consideration is the size of the beam focus $P({\bf r})$ compared to the lattice constant. The biggest challenge in determining ${F_o}({\bf q})$ is the determination of the translation and orientation parameters for each diffraction pattern. For variable translations, the integrated peak intensity can be seen as a constant plus a scaled cosine as a function of $({\bf q.\bf t})$. As in the SNR discussion in Section 2 and Eq. (4), the amplitude of the cosine term is the signal relevant to determining the translation ${\bf t}$. The noise in the Poissonian photon counting regime is the square root of ${I_{\text{obs}}}({\bf q})$, which is approximately just the square root of the first term, $N|{F_c}({{\bf q}_i})|$. Thus, the SNR is

$$\frac{{2N|{F_o}({\mathbb R.{\bf q}_i})||{F_c}({{\bf q}_i})|}}{{N|{F_c}({{\bf q}_i})|}} = 2|{F_o}({\mathbb R.{\bf q}_i})|,$$
which is independent of $N$. Since background subtraction during peak integration is an additional source of noise, $N$ should be as large as possible. However, detectors lose the ability to count individual photons if the signal is too high, either due to saturation or due to switching to a lower gain mode. The noise in the measurement would then be higher than $\sqrt {{I_{\text{obs}}}}$ because of the additional uncertainty about how many photons were measured. Thus, the optimal probe size in the absence of background would be the largest $N$ where the detector can still count photons. This optimum would shift to a larger $N$ when there is significant background, which would likely be the limiting experimental factor, especially at high resolutions.

4. DISCUSSION

X-ray SPI remains an experimentally demanding method for determining the structure of uncrystallized single biomolecules. Obtaining sufficient data of high quality is still a challenge, and questions persist regarding the feasibility of transitioning to smaller particles.

Two methodologies have been proposed here, both of which improve experimental efficiency by incorporating strongly scattering holographic references, but they also add complexity because the composite object is not necessarily reproducible. A reconstruction algorithm involving a modification of the EMC algorithm is proposed for recovering the additional degrees of freedom. The key insight is to separate the reference and the object in the forward model, as shown in Eq. (2) for the nanoparticle case and in the integrated peak intensities for the lattice case, and to explicitly sample the different degrees of freedom introduced by the addition of the reference. These methods also differ from other commonly used holographic methods like Fourier transform holography or in-flight holography, where the reference is separated from the object far enough that one can perform single-shot imaging without the need for phase retrieval.

The first reference proposed is one where a strong scatterer like a gold nanosphere is chemically attached to the target object in an aerosol imaging setup. The size and relative position of the sphere are assumed to vary from shot to shot within some range. The reference makes hit detection easier and improves the hit rate, since the composite objects are denser, and hence slower, in the aerosol stream. 2D simulations were performed showing the reconstruction process and the ability to determine the unknown degrees of freedom (sphere size, position, and object orientation).

The second geometry uses a 2D crystal reference in a scanning fixed-target sample geometry. High hit rates can be achieved by controlling the density of particles deposited on the surface. The lattice reference produces Bragg peaks in the diffraction pattern that are much more robust to background, which is usually a limiting factor due to the presence of a substrate in the beam path. The integrated peak intensity contains information about the structure of the target object as well as its position relative to the lattice unit cell. The gain in background tolerance may also enable sample preparation methods that are either easier or that leave the biomolecule in a closer-to-native state, like liquid cells or graphene sandwiches.

Further work is required to test the limits of the method in terms of minimum target object size with currently available XFEL parameters. The author also hopes that these ideas will be tested experimentally in the near future, potentially opening up a new dimension in optimizing experiments to achieve the goal of atomic-resolution structure and dynamics of uncrystallized biomolecules.

The data generation and reconstruction code for the 2D simulations shown here are available at [36].

Funding

Max Planck Society.

Acknowledgment

The author acknowledges the extremely valuable discussions with Henry Chapman during the preparation of this manuscript.

Disclosures

The author declares no conflict of interest.

REFERENCES

1. A. Aquila, A. Barty, C. Bostedt, S. Boutet, G. Carini, D. DePonte, P. Drell, S. Doniach, K. Downing, T. Earnest, H. Elmlund, V. Elser, M. Gühr, J. Hajdu, J. Hastings, S. Hau-Riege, Z. Huang, E. Lattman, F. Maia, S. Marchesini, A. Ourmazd, C. Pellegrini, R. Santra, I. Schlichting, C. Schroer, J. Spence, I. Vartanyants, S. Wakatsuki, W. Weis, and G. Williams, “The linac coherent light source single particle imaging road map,” Struct. Dyn. 2, 041701 (2015). [CrossRef]  

2. N.-T. D. Loh and V. Elser, “Reconstruction algorithm for single-particle diffraction imaging experiments,” Phys. Rev. E 80, 026705 (2009). [CrossRef]  

3. K. Ayyer, G. Geloni, V. Kocharyan, E. Saldin, S. Serkez, O. Yefanov, and I. Zagorodnov, “Perspectives for imaging single protein molecules with the present design of the European XFEL,” Struct. Dyn. 2, 041702 (2015). [CrossRef]  

4. K. Giewekemeyer, A. Aquila, N.-T. D. Loh, Y. Chushkin, K. S. Shanks, J. Weiss, M. W. Tate, H. T. Philipp, S. Stern, P. Vagovic, M. Mehrjoo, C. Teo, M. Barthelmess, F. Zontone, C. Chang, R. C. Tiberio, A. Sakdinawat, G. J. Williams, S. M. Gruner, and A. P. Mancuso, “Experimental 3D coherent diffractive imaging from photon-sparse random projections,” IUCrJ 6, 357–365 (2019). [CrossRef]  

5. K. Ayyer, A. J. Morgan, A. Aquila, H. DeMirci, B. G. Hogue, R. A. Kirian, P. L. Xavier, C. H. Yoon, H. N. Chapman, and A. Barty, “Low-signal limit of x-ray single particle diffractive imaging,” Opt. Express 27, 37816–37833 (2019). [CrossRef]  

6. T. Ekeberg, M. Svenda, C. Abergel, F. R. N. C. Maia, V. Seltzer, J.-M. Claverie, M. Hantke, O. Jönsson, C. Nettelblad, G. van der Schot, M. Liang, D. P. DePonte, A. Barty, M. M. Seibert, B. Iwan, I. Andersson, N. D. Loh, A. V. Martin, H. Chapman, C. Bostedt, J. D. Bozek, K. R. Ferguson, J. Krzywinski, S. W. Epp, D. Rolles, A. Rudenko, R. Hartmann, N. Kimmel, and J. Hajdu, “Three-dimensional reconstruction of the giant mimivirus particle with an x-ray free-electron laser,” Phys. Rev. Lett. 114, 098102 (2015). [CrossRef]  

7. M. Rose, S. Bobkov, K. Ayyer, R. P. Kurta, D. Dzhigaev, Y. Y. Kim, A. J. Morgan, C. H. Yoon, D. Westphal, J. Bielecki, J. A. Sellberg, G. Williams, F. R. Maia, O. M. Yefanov, V. Ilyin, A. P. Mancuso, H. N. Chapman, B. G. Hogue, A. Aquila, A. Barty, and I. A. Vartanyants, “Single-particle imaging without symmetry constraints at an x-ray free-electron laser,” IUCrJ 5, 727–736 (2018). [CrossRef]  

8. I. V. Lundholm, J. A. Sellberg, T. Ekeberg, M. F. Hantke, K. Okamoto, G. van der Schot, J. Andreasson, A. Barty, J. Bielecki, P. Bruza, M. Bucher, S. Carron, B. J. Daurer, K. Ferguson, D. Hasse, J. Krzywinski, D. S. D. Larsson, A. Morgan, K. Mühlig, M. Müller, C. Nettelblad, A. Pietrini, H. K. N. Reddy, D. Rupp, M. Sauppe, M. Seibert, M. Svenda, M. Swiggers, N. Timneanu, A. Ulmer, D. Westphal, G. Williams, A. Zani, G. Faigel, H. N. Chapman, T. Möller, C. Bostedt, J. Hajdu, T. Gorkhover, and F. R. N. C. Maia, “Considerations for three-dimensional image reconstruction from experimental data in coherent diffractive imaging,” IUCrJ 5, 531–541 (2018). [CrossRef]  

9. A. Munke, J. Andreasson, A. Aquila, S. Awel, K. Ayyer, A. Barty, R. J. Bean, P. Berntsen, J. Bielecki, S. Boutet, M. Bucher, H. N. Chapman, B. J. Daurer, H. DeMirci, V. Elser, P. Fromme, J. Hajdu, M. F. Hantke, A. Higashiura, B. G. Hogue, A. Hosseinizadeh, Y. Kim, R. A. Kirian, H. K. N. Reddy, T.-Y. Lan, D. S. D. Larsson, H. Liu, N. D. Loh, F. R. N. C. Maia, A. P. Mancuso, K. Mühlig, A. Nakagawa, D. Nam, G. Nelson, C. Nettelblad, K. Okamoto, A. Ourmazd, M. Rose, G. van der Schot, P. Schwander, M. M. Seibert, J. A. Sellberg, R. G. Sierra, C. Song, M. Svenda, N. Timneanu, I. A. Vartanyants, D. Westphal, M. O. Wiedorn, G. J. Williams, P. L. Xavier, C. H. Yoon, and J. Zook, “Coherent diffraction of single rice dwarf virus particles using hard x-rays at the linac coherent light source,” Sci. Data 3, 160064 (2016). [CrossRef]  

10. J. Bielecki, M. F. Hantke, B. J. Daurer, H. K. N. Reddy, D. Hasse, D. S. D. Larsson, L. H. Gunn, M. Svenda, A. Munke, J. A. Sellberg, L. Flueckiger, A. Pietrini, C. Nettelblad, I. Lundholm, G. Carlsson, K. Okamoto, N. Timneanu, D. Westphal, O. Kulyk, A. Higashiura, G. van der Schot, N.-T. D. Loh, T. E. Wysong, C. Bostedt, T. Gorkhover, B. Iwan, M. M. Seibert, T. Osipov, P. Walter, P. Hart, M. Bucher, A. Ulmer, D. Ray, G. Carini, K. R. Ferguson, I. Andersson, J. Andreasson, J. Hajdu, and F. R. N. C. Maia, “Electrospray sample injection for single-particle imaging with x-ray lasers,” Sci. Adv. 5, eaav8801 (2019). [CrossRef]  

11. H. N. Chapman, P. Fromme, A. Barty, T. A. White, R. A. Kirian, A. Aquila, M. S. Hunter, J. Schulz, D. P. DePonte, U. Weierstall, R. B. Doak, F. R. N. C. Maia, A. V. Martin, I. Schlichting, L. Lomb, N. Coppola, R. L. Shoeman, S. W. Epp, R. Hartmann, D. Rolles, A. Rudenko, L. Foucar, N. Kimmel, G. Weidenspointner, P. Holl, M. Liang, M. Barthelmess, C. Caleman, S. Boutet, M. J. Bogan, J. Krzywinski, C. Bostedt, S. Bajt, L. Gumprecht, B. Rudek, B. Erk, C. Schmidt, A. Hömke, C. Reich, D. Pietschner, L. Strüder, G. Hauser, H. Gorke, J. Ullrich, S. Herrmann, G. Schaller, F. Schopper, H. Soltau, K.-U. Kühnel, M. Messerschmidt, J. D. Bozek, S. P. Hau-Riege, M. Frank, C. Y. Hampton, R. G. Sierra, D. Starodub, G. J. Williams, J. Hajdu, N. Timneanu, M. M. Seibert, J. Andreasson, A. Rocker, O. Jönsson, M. Svenda, S. Stern, K. Nass, R. Andritschke, C.-D. Schröter, F. Krasniqi, M. Bott, K. E. Schmidt, X. Wang, I. Grotjohann, J. M. Holton, T. R. M. Barends, R. Neutze, S. Marchesini, R. Fromme, S. Schorb, D. Rupp, M. Adolph, T. Gorkhover, I. Andersson, H. Hirsemann, G. Potdevin, H. Graafsma, B. Nilsson, and J. C. H. Spence, “Femtosecond x-ray protein nanocrystallography,” Nature 470, 73–77 (2011). [CrossRef]  

12. R. G. Sierra, H. Laksmono, J. Kern, R. Tran, J. Hattne, R. Alonso-Mori, B. Lassalle-Kaiser, C. Glöckner, J. Hellmich, D. W. Schafer, N. Echols, R. J. Gildea, R. W. Grosse-Kunstleve, J. Sellberg, T. A. McQueen, A. R. Fry, M. M. Messerschmidt, A. Miahnahri, M. M. Seibert, C. Y. Hampton, D. Starodub, N. D. Loh, D. Sokaras, T.-C. Weng, P. H. Zwart, P. Glatzel, D. Milathianaki, W. E. White, P. D. Adams, G. J. Williams, S. Boutet, A. Zouni, J. Messinger, N. K. Sauter, U. Bergmann, J. Yano, V. K. Yachandra, and M. J. Bogan, “Nanoflow electrospinning serial femtosecond crystallography,” Acta Crystallogr. Sect. D 68, 1584–1587 (2012). [CrossRef]  

13. M. S. Hunter, B. Segelke, M. Messerschmidt, G. J. Williams, N. A. Zatsepin, A. Barty, W. H. Benner, D. B. Carlson, M. Coleman, A. Graf, S. P. Hau-Riege, T. Pardini, M. M. Siebert, J. Evans, S. Boutet, and M. Frank, “Fixed-target protein serial microcrystallography with an x-ray free electron laser,” Sci. Rep. 4, 6026 (2014). [CrossRef]  

14. D. Nam, C. Kim, Y. Kim, T. Ebisu, M. Gallagher-Jones, J. Park, S. Kim, S. Kim, K. Tono, M. Yabashi, T. Ishikawa, and C. Song, “Fixed target single-shot imaging of nanostructures using thin solid membranes at SACLA,” J. Phys. B 49, 034008 (2016). [CrossRef]  

15. C. Seuring, K. Ayyer, E. Filippaki, M. Barthelmess, J.-N. Longchamp, P. Ringler, T. Pardini, D. H. Wojtas, M. A. Coleman, K. Dörner, S. Fuglerud, G. Hammarin, B. Habenstein, A. E. Langkilde, A. Loquet, A. Meents, R. Riek, H. Stahlberg, S. Boutet, M. S. Hunter, J. Koglin, M. Liang, H. M. Ginn, R. P. Millane, M. Frank, A. Barty, and H. N. Chapman, “Femtosecond x-ray coherent diffraction of aligned amyloid fibrils on low background graphene,” Nat. Commun. 9, 1836 (2018). [CrossRef]  

16. S. Boutet, M. J. Bogan, A. Barty, M. Frank, W. H. Benner, S. Marchesini, M. M. Seibert, J. Hajdu, and H. N. Chapman, “Ultrafast soft x-ray scattering and reference-enhanced diffractive imaging of weakly scattering nanoparticles,” J. Electron Spectrosc. Relat. Phenom. 166, 65–73 (2008). [CrossRef]  

17. T.-Y. Lan, P.-N. Li, and T.-K. Lee, “Method to enhance the resolution of x-ray coherent diffraction imaging for non-crystalline bio-samples,” New J. Phys. 16, 033016 (2014). [CrossRef]  

18. I. McNulty, J. Kirz, C. Jacobsen, E. H. Anderson, M. R. Howells, and D. P. Kern, “High-resolution imaging by Fourier transform x-ray holography,” Science 256, 1009–1012 (1992). [CrossRef]  

19. T. Gorkhover, A. Ulmer, K. Ferguson, M. Bucher, F. R. Maia, J. Bielecki, T. Ekeberg, M. F. Hantke, B. J. Daurer, C. Nettelblad, J. Andreasson, A. Barty, P. Bruza, S. Carron, D. Hasse, J. Krzywinski, D. S. Larsson, A. Morgan, K. Mühlig, M. Müller, K. Okamoto, A. Pietrini, D. Rupp, M. Sauppe, G. V. D. Schot, M. Seibert, J. A. Sellberg, M. Svenda, M. Swiggers, N. Timneanu, D. Westphal, G. Williams, A. Zani, H. N. Chapman, G. Faigel, T. Möller, J. Hajdu, and C. Bostedt, “Femtosecond x-ray Fourier holography imaging of free-flying nanoparticles,” Nat. Photonics 12, 150–153 (2018). [CrossRef]  

20. T. Shintake, “Possibility of single biomolecule imaging with coherent amplification of weak scattering x-ray photons,” Phys. Rev. E 78, 041906 (2008). [CrossRef]  

21. M.-E. Aubin-Tam and K. Hamad-Schifferli, “Structure and function of nanoparticle–protein conjugates,” Biomed. Mater. 3, 034001 (2008). [CrossRef]  

22. C. A. Mirkin, R. L. Letsinger, R. C. Mucic, and J. J. Storhoff, “A DNA-based method for rationally assembling nanoparticles into macroscopic materials,” Nature 382, 607–609 (1996). [CrossRef]  

23. M. R. Jones, N. C. Seeman, and C. A. Mirkin, “Programmable materials and the nature of the DNA bond,” Science 347, 1260901 (2015). [CrossRef]  

24. K. Ayyer, T.-Y. Lan, V. Elser, and N. D. Loh, “Dragonfly: an implementation of the expand–maximize–compress algorithm for single-particle imaging,” J. Appl. Crystallogr. 49, 1320–1335 (2016). [CrossRef]  

25. S. Gravel and V. Elser, “Divide and concur: a general approach to constraint satisfaction,” Phys. Rev. E 78, 036706 (2008). [CrossRef]  

26. V. Elser, I. Rankenburg, and P. Thibault, “Searching with iterated maps,” Proc. Natl. Acad. Sci. USA 104, 418–423 (2007). [CrossRef]  

27. V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A 20, 40–55 (2003). [CrossRef]  

28. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3, 27–29 (1978). [CrossRef]  

29. S. H. Scheres, M. Valle, and J.-M. Carazo, “Fast maximum-likelihood refinement of electron microscopy images,” Bioinformatics 21, ii243–ii244 (2005). [CrossRef]  

30. H. T. Philipp, K. Ayyer, M. W. Tate, V. Elser, and S. M. Gruner, “Solving structure with sparse, randomly-oriented x-ray data,” Opt. Express 20, 13129–13137 (2012). [CrossRef]  

31. S. Marchesini, H. He, H. N. Chapman, S. P. Hau-Riege, A. Noy, M. R. Howells, U. Weierstall, and J. C. Spence, “X-ray image reconstruction from a diffraction pattern alone,” Phys. Rev. B 68, 140101 (2003). [CrossRef]  

32. W. O. Saxton and W. Baumeister, “The correlation averaging of a regularly arranged bacterial cell envelope protein,” J. Microsc. 127, 127–138 (1982). [CrossRef]  

33. N. Vogel, M. Retsch, C.-A. Fustin, A. del Campo, and U. Jonas, “Advances in colloidal assembly: the design of structure and hierarchy in two and three dimensions,” Chem. Rev. 115, 6265–6311 (2015). [CrossRef]  

34. Y. Liu, M. Seaberg, D. Zhu, J. Krzywinski, F. Seiboth, C. Hardin, D. Cocco, A. Aquila, B. Nagler, H. J. Lee, S. Boutet, Y. Feng, Y. Ding, G. Marcus, and A. Sakdinawat, “High-accuracy wavefront sensing for x-ray free electron lasers,” Optica 5, 967–975 (2018). [CrossRef]  

35. S. Sala, B. Daurer, M. Hantke, T. Ekeberg, N. Loh, F. R. Maia, and P. Thibault, “Ptychographic imaging for the characterization of x-ray free-electron laser beams,” J. P. Conf. Ser. 849, 012032 (2017). [CrossRef]  

36. github.com/kartikayyer/Ref-EMC
