Footnotes: B. Carragher, personal communication. The abbreviations used are: SPA, single-particle analysis; FSC, Fourier shell correlation coefficient; CTF, contrast transfer function; SBDD, structure-based drug design; HA, hemagglutinin; NRAMM, National Resource for Automated Molecular Microscopy; DQE, detective quantum efficiency.
Cryogenic electron microscopy (cryo-EM) enables structure determination of macromolecular objects and their assemblies. Although the techniques have been developing for nearly four decades, they have gained widespread attention in recent years due to technical advances on numerous fronts, enabling traditional microscopists to break into the world of molecular structural biology. Many samples can now be routinely analyzed at near-atomic resolution using standard imaging and image analysis techniques. However, numerous challenges to conventional workflows remain, and continued technical advances open entirely novel opportunities for discovery and exploration. Here, I will review some of the main methods surrounding cryo-EM with an emphasis specifically on single-particle analysis, and I will highlight challenges, open questions, and opportunities for methodology development.
Cryo-EM enables structure determination of monodisperse macromolecular assemblies imaged at cryogenic temperatures in a transmission electron microscope. Cryo-EM techniques have been rapidly developing over the last several years and have become standard tools in the structural biologist’s toolkit (
), all coupled to systematic improvements in auxiliary methodologies surrounding microscope operation and general cryo-EM workflows, have enabled the wide adoption of the technique for many structural biology applications. In many instances, cryo-EM is now the first go-to method for structural analysis of specific biological samples (
). For these and other reasons, the 2017 Nobel prize in Chemistry was awarded to Jacques Dubochet, Richard Henderson, and Joachim Frank for “developing cryo-EM for the high-resolution structure determination of biomolecules in solution” (
Beyond simply extending capabilities to samples that cannot be crystallized, cryo-EM techniques open entirely new questions and raise novel challenges, especially pertaining to dynamic and heterogeneous assemblies (
). Multiple good recent reviews and perspectives cover the history and development of the field, as well as applications to macromolecular structural biology, and the reader is directed to them for further details (
). Here, I will focus on what I believe are the current bottlenecks to streamlined and automated workflows specific to SPA of purified macromolecular samples within the confines of a generalized workflow (Fig. 1), highlighting some of the current technical limitations, open questions, and exciting areas of development.
Macromolecular specimen isolation and purification
Biological macromolecules and macromolecular assemblies are characterized by complex three-dimensional architectures with precisely defined local environments, both of which have been fine-tuned over millions of years of evolution. Macromolecular structure is crucial to macromolecular function, and deciphering the structure/function relationship—the central goal in the field of molecular structural biology—has illuminated the molecular world. Most current structural biology experiments begin by defining a question with respect to a macromolecular object of interest and subsequently isolating and purifying the sample from its cellular context (for the purpose of this review, in situ structural biology approaches will not be discussed). Single-particle cryo-EM techniques of purified specimens have facilitated defining molecular structures for samples that were not amenable to conventional crystallographic approaches. For example, structures of mitochondrial ribosomes (
). A purified sample should have a reasonable degree of stability and homogeneity. Typically, an SDS-polyacrylamide gel and a size-exclusion peak from gel filtration should inform the researcher of the relative sample purity and whether there are contaminating bands or peaks that would impede structural studies. For most samples, these two biochemical assessments are a minimal requirement prior to initiating cryo-EM analysis. Concentrations in the micromolar range can produce well-distributed and monodisperse particles on holey cryo-EM grids (individual particles are distributed within holes etched into a carbon or gold support film). Higher concentrations may require the use of surfactants, such as detergents, to avoid oversaturating the field of view (
). However, in many instances, especially with larger and less abundant macromolecular assemblies, gel filtration is not an option, as the sample is too scarce. In such cases, an SDS-polyacrylamide gel followed by silver staining or Western blotting may suffice, but it would be of benefit to perform preliminary data analysis to look for homogeneous particles (see sections below), either using negative stain or a vitrified specimen, which can guide optimization of the purification protocol. In addition to changing the buffer conditions, the presence/absence of surfactants (for vitrification purposes), and general biochemical procedures, there are specific tools available for screening and evaluating the stability of macromolecular assemblies (e.g. differential scanning calorimetry (DSC), differential scanning fluorimetry (DSF), ProteoPlex (
)). Some laboratories have found that the gradient fixation (GraFix) approach—wherein macromolecules undergo weak, intramolecular chemical cross-linking while being purified by density gradient ultracentrifugation (
). As with any cross-linking method, there is always the potential to induce artifacts caused by chemical fixation. However, the argument is that the cross-links will be randomly dispersed throughout the molecular assembly and thus will be averaged out during image analysis. Biochemical sample preparation and optimization can be iterative processes, often guided by and benefiting from multiple rounds of data collection and analysis.
The goal of a single-particle imaging experiment is to capture all relevant structural states through classification, an idea that will be elaborated upon under “Computational image analysis” below. Numerous studies have taken advantage of this concept and demonstrated the utility of computational classification approaches, following data collection, to successfully “purify” complexes in silico, in cases where traditional biochemical approaches proved insufficient to isolate the specimen of interest free of undesired contaminating particles (
). However, this comes with the significant drawback of requiring larger datasets, because the final resolution for any individual map is directly related to the number of particles from which it is derived (
), although its success will likely vary depending on the properties of the sample.
As the tools become more developed, and the downstream protocols become more automated, we will want to examine more challenging samples, including those that may be less abundant, that contain transiently interacting factors, and that generally represent highly dynamic and heterogeneous assemblies. It should be possible to perform relatively crude purifications, even starting from cell lysate (
). Accomplishing such a task for higher-resolution studies would build on developments in all the downstream steps, especially as pertains to specimen vitrification (which can disrupt macromolecular integrity through destructive forces at the air–water interface (
)), and image analysis (which can become complex for highly heterogeneous cases). Although in silico purification approaches will be useful, the rarer and more challenging the sample, the more necessary it will be to optimize sample purity biochemically; and as always, the adage “garbage in, garbage out” will apply. In this regard, one particularly attractive approach is the use of affinity grids for on-grid specimen purification (
). It seems possible that a highly specific and well-conjugated tag, coupled to a rigorous on-grid purification protocol, has the potential to provide a powerful means to isolate and explore rare biological assemblies with interesting functional properties.
Sample preparation for imaging in a transmission electron microscope
Once the sample has been purified and verified biochemically, it is applied onto grids for screening and data collection. There are two major methods for sample preparation for imaging in an electron microscope: negative staining and vitrification. Negative staining constitutes the application of a heavy metal stain (e.g. uranyl acetate/formate, ammonium molybdate, methylamine tungstate, etc.) to the sample (
). The process effectively dehydrates the sample, and the grids can be stored for long periods of time at room temperature. Grids are imaged at room temperature, and the contrast is generated by the heavy metal atoms surrounding the molecule of interest. Vitrification involves rapidly freezing the sample at liquid nitrogen temperatures (typically in a liquid ethane medium). The process, which was in part influenced by earlier reports with 2D crystals (
). The grids must be stored under cryogenic conditions. Images are acquired at liquid nitrogen temperatures, and the contrast is generated by electron scattering from the atoms within the sample itself.
Negative staining is a powerful technique that has been used for many successful structural studies at low resolution (
). However, several things are important to note. First, because the contrast is generated by the heavy metal atoms, the approach generates an outline of the particle, whereas all of the internal information is lost. Second, the stain dehydrates and flattens the object, which can be readily observed when the grid is imaged at a tilt angle (
). Third, there is an additional layer of carbon, which increases background noise. All these factors, as well as the grain size of the stain, fundamentally limit the resolution and the information that can be obtained. However, negative stain can also provide the experimentalist with a quick and meaningful understanding of the quality of the sample (
). For this reason, a high-throughput approach, e.g. for rapid sample screening and optimization, can be useful and time-saving for difficult samples or for identifying optimal buffer conditions. One such approach is currently being developed at the National Resource for Automated Molecular Microscopy (NRAMM).
When initiating cryo-EM experiments after successful results from a negative stain, several problems may be encountered. First, negative staining typically requires ∼1 order of magnitude lower sample concentration, because the particles adhere to a thin carbon support film, to which they have a high affinity and preference over empty holes. Second, the carbon support film can induce severe and/or altered preferential orientations, as compared with the air–water interface in conventional holey grids (
). For this reason, the negative staining protocol is distinct from cryo-EM vitrification, and structures obtained by negative staining may not immediately translate into high-resolution structures by cryo-EM. As a result, some groups, including my own, have in many cases omitted negative staining altogether.
Vitrification can be performed using numerous devices. A conventional manual plunge freezer has been around since the 1980s and has worked astonishingly well for vitrifying many different samples (
). Remarkably, a large fraction of users at the microscope facility shared by researchers from the Salk Institute and The Scripps Research Institute still prefer the same manual plungers over robotic vitrification instruments, such as the ThermoFisher Scientific (formerly FEI) Vitrobot, Gatan Cryoplunge 3 System, or the Leica EM GP. For the most part, these instruments perform the same standard procedure as one would carry out by hand, using filter paper to blot off excess buffer and plunge-freeze the grid into liquid ethane (
). They provide the user with the ability to, for example, reproducibly vary the blotting time, specify single- or double-sided blotting, set the angle of the filter paper, etc. For lower-abundance samples (sub-micromolar quantities), there is a high likelihood that the particles will not go into the grid holes. In this case, support films can be floated onto the grids, and the sample adheres to the film (
) have also been employed and are currently being optimized for routine use.
There are multiple disadvantages to any vitrification approach that uses filter paper for sample blotting. First, the majority of the sample is discarded, and only nanoliters of material remain vitrified on the grid. Second, the user is limited to the application of a single sample on a grid, whereas only a small fraction of the grid is actually required for a high-resolution dataset. Third, the ice thickness often varies from one square to another. There have also been discussions that ions (like calcium) can potentially leach from the filter paper into the sample, thus disrupting the integrity or influencing the structural properties of ion channels, for example. Finally, in almost all instances, the sample appears to be adsorbed to one of two air–water interfaces (
). The last problem is particularly severe and stems from the sample encountering the air–water interface on a timescale that is orders of magnitude shorter than the time it takes to plunge the grid into the ethane medium. Every time the particles hit the air–water interfaces, they tend to stick, and consequently, the vast majority of the particles end up adsorbing to one of two interfaces, at the top and bottom of the grid (
). This is a major problem, as it causes “preferred specimen orientation,” which results in resolution anisotropy within the final map (Fig. 2A, and see “Microscopy and data collection” and “Cryo-EM map and atomic model validation” below) and can also lead to protein denaturation at the site of contact (
To overcome some of the problems with conventional blotting and vitrification, the instrument Spotiton has been developed at NRAMM, which uses inkjet printing heads with picoliter dispensing capabilities to spot samples onto grids (
). Initially, the instrument was developed to reproducibly spot small volumes of sample onto cryo-EM grids and generate “perfectly thin ice,” with additional capabilities of multiplexing the spotting process and vitrifying multiple samples on the grid (
). These advances addressed the first three issues described above (namely, sample waste, single-sample application, and large areas of ice thickness variation). Unexpectedly, the developers also noticed that it was possible to reduce specimen adherence to the air–water interface, and therefore preferred particle orientation and resulting directional resolution anisotropy, by minimizing the time between spotting and plunging (
). However, even the fastest spot-to-plunge times do not completely overcome specimen adherence to the air–water interface, and further speedups are being developed within in-house and commercial instruments. Low spot-to-plunge times may therefore have major benefits for routine and automated sample preparation in SPA and may be applicable to other instruments that aim to automate and improve conventional blotting methods. It is worth noting that, under some circumstances, one may actually want to exploit sample adherence to the air–water interface to the experimenter’s advantage, for example for concentrating rare samples (
). Whenever the specimen adheres to the air–water interface, there is the possibility of partial sample denaturation, but if this is the only mechanism by which to distribute particles within holes and away from the grainy and noisy carbon, it may remain the best strategy, at least for some samples. One can then use tilted data collection strategies to overcome preferred orientation (Fig. 2B), as discussed below.
The basic method for vitrification has changed little in the last 40 years and works remarkably well. But it is nonetheless limited. The ability to reproducibly titrate the thickness of the ice and to ensure even ice across the entirety of the grid will dramatically speed up workflows. Multiplexing capabilities, coupled with the ability to load and screen multiple samples during a microscope session, should essentially ensure that a good grid can be obtained every time a sample is purified, at least for “well-behaved” and abundant samples. Solving the air–water interface problem, either by reducing the spot-to-plunge time (
), should reproducibly and routinely diminish the orientation bias and the resulting resolution anisotropy.
Microscopy and data collection
Data collection procedures for single-particle cryo-EM have become more standardized in recent years. They have increasingly relied on automated software, such as the Leginon system that pioneered automated data collection methodologies (
). Multiple user facilities, for example at the New York Structural Biology Center, Janelia Research Campus, the recently established national centers, and many places around the world, have standardized their procedures for data acquisition, at least within each facility. Following the development of direct electron detectors (
), which can directly count each incident electron on a camera pixel, much of the field has migrated toward them due to the improved detective quantum efficiency (DQE, a measure of how well the detector preserves the signal-to-noise ratio at different spatial frequencies) (
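In equation form, the DQE is simply the ratio of squared output to input signal-to-noise ratios at each spatial frequency. The sketch below is a minimal illustration of that definition only; real DQE measurements follow standardized experimental protocols:

```python
import numpy as np

def dqe(snr_out: np.ndarray, snr_in: np.ndarray) -> np.ndarray:
    """Detective quantum efficiency across spatial frequencies.

    DQE(w) = SNR_out(w)**2 / SNR_in(w)**2; a perfect detector would
    preserve all of the incoming signal-to-noise ratio (DQE = 1.0),
    and counting detectors approach this at low frequencies.
    """
    return (np.asarray(snr_out) / np.asarray(snr_in)) ** 2
```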
A common problem that is encountered with many samples is preferential specimen orientation through adherence to the air–water interface, which leads to anisotropic resolution in the map. The user collects a dataset, spends weeks processing the data to obtain a map, only to find out that it is smeared in the Z direction and difficult to interpret. The user can easily spend months trying out and screening different substrate supports, surfactants, surface treatment strategies, etc., to ameliorate the preferential orientation problem and alter the orientation distribution (
). However, none of these are generalizable. Because of the geometry of the imaging experiment, simply tilting the specimen can largely alleviate the problem in a generally applicable manner (Fig. 2B) (
). Using tilts during data collection results in more even coverage of Fourier space voxels and a corresponding improvement in the reconstructed volume. Previously, we compared reconstructions of the hemagglutinin (HA) trimer, which is oriented in predominantly top views (
) from data collected at different tilt angles. Reconstructions from tilted images show less stretching, better-defined features, and fewer problems caused by misalignment of orientation (streaking evident within top views, caused by iterative refinement in the context of uneven orientation sampling) (Fig. 3A). Although some amount of anisotropy will likely remain even after tilting (indicated by the 3D FSCs in Fig. 3B), as only a uniform orientation distribution results in a completely isotropic map, in practice, tilting is sufficient to solve many of the problems and has been successfully applied to multiple specimens by numerous laboratories (
). The approach does not require any modification to the data collection strategy other than setting the tilt angle during image acquisition (and potentially using a higher frame rate to account for increased beam-induced movement). The practical disadvantages are that the sample exhibits more beam-induced movement at tilt, the focus gradient needs to be properly estimated, and the ice is inherently thicker due to geometry. We believe that the first two problems will be addressed with improved computational methods, whereas increased ice thickness is unavoidable, and its effects must be experimentally determined. Until the preferred orientation problem is completely eliminated, the tilting strategy remains a robust technique for solving the anisotropy problem. It should nonetheless be noted that tilting will not address sample denaturation at the air–water interface (
), which must be done through chemical or other means.
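To make the focus-gradient point above concrete: a particle whose projected distance from the tilt axis is d sits at a height d·tan(θ) out of the plane of the tilt axis, and its defocus is offset accordingly. A minimal sketch under those geometric assumptions (the function name and parameters are illustrative, not any package's API):

```python
import numpy as np

def per_particle_defocus(dz0_um, xy_px, center_px, tilt_axis_deg,
                         tilt_deg, pixel_size_A):
    """Defocus of each particle on a micrograph tilted by tilt_deg.

    A particle whose projected distance from the tilt axis is d sits
    at height d * tan(tilt) out of the plane of the tilt axis, so its
    defocus (underfocus positive, micrometers) is offset by that
    height. xy_px is an (N, 2) array of particle coordinates in pixels.
    """
    xy = np.asarray(xy_px, dtype=float)
    axis = np.deg2rad(tilt_axis_deg)
    # signed perpendicular distance from the tilt axis, in pixels
    d_px = (-(xy[:, 0] - center_px[0]) * np.sin(axis)
            + (xy[:, 1] - center_px[1]) * np.cos(axis))
    d_um = d_px * pixel_size_A * 1e-4        # Angstrom pixels -> micrometers
    return dz0_um + d_um * np.tan(np.deg2rad(tilt_deg))
```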
It is worth emphasizing that resolutions still rarely break 3 Å, even though, in principle, there may not be a theoretical barrier to achieving this (Fig. 4, A and B). Thus, users often want to optimize their collection strategies for their particular sample. For example, one may vary the dose, the amount of underfocus used for imaging, or whether image shift or stage position is used for targeting (
). Although this can be successful, with the constant frame rates in most current-generation detectors, the strategy has the drawback that a large movie must be recorded to compensate for movement that occurs largely in the first few frames; effectively, the majority of the movie becomes redundant. To account for this, variable frame rates have been introduced in the latest-generation K3 detectors (Gatan) and should become more standard, especially as we learn more about the mechanisms of beam-induced movement. Other aspects, such as dose rate, total dose, magnification (and the balance between a smaller field of view versus improved low-frequency DQE (
)), etc., would be relevant to obtain a quantitative understanding of the current bottlenecks within the structure determination pipeline.
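One established way to quantify such bottlenecks is the Rosenthal–Henderson B-factor, obtained by fitting the logarithm of the particle number against the inverse-squared resolution of reconstructions from data subsets. A minimal sketch of that fit, with made-up numbers for illustration:

```python
import numpy as np

def estimate_b_factor(n_particles, resolutions_A):
    """Rosenthal-Henderson B-factor from a resolution-vs-particle series.

    Fits ln(N) = (B/2) * (1/d**2) + c, so the slope of ln(N) against
    1/d**2 is B/2 (in A**2); a smaller B means fewer particles are
    needed to reach a given resolution.
    """
    inv_d2 = 1.0 / np.asarray(resolutions_A, dtype=float) ** 2
    slope, _intercept = np.polyfit(inv_d2, np.log(n_particles), 1)
    return 2.0 * slope

# illustrative (made-up) numbers: resolutions reached with data subsets
print(estimate_b_factor([20_000, 50_000, 150_000], [4.2, 3.7, 3.3]))
```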
The selection of a cost-effective microscope for routine high-resolution molecular structural biology remains an open question. The vast majority of high-resolution structures deposited into the EM data bank (
) have been collected on a 300-kV microscope, typically a Titan Krios manufactured by ThermoFisher Scientific (formerly FEI company)—the most expensive electron microscope on the market. The higher accelerating voltage reduces the amount of inelastic electron scattering (low-dose images are primarily formed by elastically scattered electrons) and specimen charging (
), but this microscope may not be the most cost-effective solution for individual institutions. Recently, cheaper 200-kV microscopes have been shown to be amenable to high-resolution structural biology, including at sub-4 Å resolution (
). Atomic cross-sections for elastically (and inelastically) scattered electrons increase at lower microscope accelerating voltages, which leads to more low-resolution contrast within the images, albeit at the expense of a dampened envelope and increased inelastic scattering (
). Although it is still not clear whether current resolutions on a 200-kV system are limited by the two-condenser lens system of the Talos Arctica—the mid-range microscope manufactured by ThermoFisher Scientific—and whether the improvements arise from the increased contrast of lower kV instruments, or simply general improvements in data collection and analysis protocols, these preliminary results are exciting, because they demonstrate that significantly cheaper microscopes can be used for routine high-resolution data collection. It will be particularly interesting to watch for developments with even lower voltage (e.g. 100 kV) microscopes as, for example, proposed by Vinothkumar and Henderson (
). Efforts toward quantifying the energy dependence of contrast and radiation damage in cryo-EM are underway and will help guide microscope developments and their application to molecular structural biology (
). It will be interesting to see how data collection strategies will be transformed over the ensuing years, and whether lower accelerating voltage microscopes gain popularity for routine single-particle work.
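For context, the accelerating voltage sets the relativistic electron wavelength, which in turn enters the scattering cross-sections and contrast transfer discussed above. A short calculation using standard physical constants (values rounded):

```python
import numpy as np

H  = 6.62607015e-34    # Planck constant, J*s
M0 = 9.1093837015e-31  # electron rest mass, kg
E  = 1.602176634e-19   # elementary charge, C
C  = 2.99792458e8      # speed of light, m/s

def electron_wavelength_A(kilovolts):
    """Relativistic electron wavelength (Angstroms) at a given
    accelerating voltage."""
    v = kilovolts * 1e3
    lam = H / np.sqrt(2 * M0 * E * v * (1 + E * v / (2 * M0 * C**2)))
    return lam * 1e10

for kv in (100, 200, 300):
    print(f"{kv} kV: {electron_wavelength_A(kv):.4f} A")
# ~0.0370 A (100 kV), ~0.0251 A (200 kV), ~0.0197 A (300 kV)
```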
Although cryo-EM capabilities are fast approaching those of X-ray crystallography (
), data collection time and efficiency still lag far behind by several orders of magnitude. One interesting approach to speed up data collection is to use the electron beam-shift (instead of the stage) for moving to different areas of the grid, while compensating for the introduced beam tilt using the microscope’s deflection coils (
), but further studies will need to be conducted to explore the benefits of each approach. Both can provide severalfold speedup within a data collection session, potentially without compromising image quality. Larger fields of view on the detector will provide further gains. Such improvements would be broadly beneficial to the structural biology community, but might have particular implications for the pharmaceutical industry, which will benefit from rapidly defining the footprints of small molecules on macromolecular targets of interest. Many classes of proteins simply cannot be routinely crystallized (for example, membrane proteins), and therefore, cryo-EM represents an important alternative to traditional crystallographic structure-based drug design (SBDD) strategies. However, current improvements will still be insufficient for the throughput necessary for routine SBDD, due to the requirement for solving many structures bound to different small molecules, in parallel and in an iterative manner (
). An alternative may be to simply maintain “microscope farms” within a facility, all devoted to automated and high-throughput data collection for structure determination. While not necessarily the most elegant solution, this may be the best medium-term approach for routine sub-2 Å resolution drug studies.
Computational image analysis
Once the data are collected, it is necessary to analyze the images to come up with one or several reconstructions representing the imaged object. Not too long ago, image processing involved many independent, time-consuming, and often experimental steps. Today, much of it, at least for “easy” samples, is more automated within consolidated workflows. There is a large variety of software available for image analysis, developed over the last ∼4 decades (
). Generally speaking, single-particle–specific software is designed to take raw cryo-EM micrographs, select particles, perform 2D and 3D alignments and classifications, assign or refine angular orientations (rotations and translations, either ab initio or using a model), and reconstruct the object(s). Many other features and functionalities are often built into different software, and “wrapper” packages are often employed to get the best procedures from each (
). An experienced user can often quickly generate high-resolution reconstructions even with more challenging samples, although this requires training in the field and intimate familiarity with the pitfalls of SPA. There is every reason to believe that the trend toward more automated workflows will continue, and automation will take over many aspects of single-particle analysis and reconstruction, much like in the X-ray crystallography field.
Size limits in cryo-EM also continue to decrease. Even a few years ago, it was difficult to imagine obtaining a near-atomic resolution structure for complexes or samples that are less than ∼100 kDa (
). The challenge arises due to the mechanism of image formation in a microscope characterized by an approximately sinusoidal contrast transfer function (CTF), which results in poor low-resolution contrast for weak-phase objects. The smaller the object, the more difficult it is to distinguish it from background noise and thus the more difficult to computationally analyze (
). One idea was that phase plates, which introduce a phase shift (ideally π/2) between the scattered and unscattered waves inside the microscope, thus producing a cosine-type CTF and improving low-resolution contrast, could more readily address size limitations (
), and small complexes have since been solved both with and without the use of phase plates. There is no silver bullet to achieving this, but it seems that a combination of higher magnification, larger dataset sizes, careful image analysis, and possibly lower accelerating voltages (and/or phase plates) facilitates reaching higher resolutions. Presumably, small particles will become more routine for structural studies with better detectors and continued software advances, reaching their predicted limit speculated some time ago (
)). Such comparative studies will be helpful to definitively define phase plate utility for routine SPA workflows. It will also be interesting to follow the advances of phase plate technology, especially as the next generation of laser phase plates (
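To make the image-formation argument above concrete, the sketch below evaluates a standard weak-phase-object CTF model and shows how an ideal π/2 phase-plate shift turns the sine-like transfer function into a cosine-like one with strong low-frequency contrast. This is a simplified 1D model (no envelope functions or astigmatism); the defaults (2.7-mm Cs, 7% amplitude contrast, 300-kV wavelength) are typical values, not tied to any particular instrument:

```python
import numpy as np

def ctf_1d(k_invA, defocus_um, lambda_A=0.0197, cs_mm=2.7,
           amp_contrast=0.07, phase_shift_rad=0.0):
    """1D contrast transfer function (underfocus positive).

    chi(k) = pi*lambda*dz*k**2 - (pi/2)*Cs*lambda**3*k**4
    CTF(k) = -sin(chi(k) + arcsin(A) + phase_shift)

    lambda_A defaults to ~0.0197 A (300 kV); a phase plate adds
    phase_shift_rad (ideally pi/2), turning the sine into a cosine
    and boosting low-frequency contrast for small particles.
    """
    dz_A = defocus_um * 1e4          # micrometers -> Angstroms
    cs_A = cs_mm * 1e7               # millimeters -> Angstroms
    k = np.asarray(k_invA, dtype=float)
    chi = np.pi * lambda_A * dz_A * k**2 - 0.5 * np.pi * cs_A * lambda_A**3 * k**4
    return -np.sin(chi + np.arcsin(amp_contrast) + phase_shift_rad)

# compare conventional vs. phase-plate contrast at low frequency:
k = np.linspace(0.001, 0.2, 400)     # spatial frequency, 1/A
conventional = ctf_1d(k, defocus_um=1.5)
phase_plate  = ctf_1d(k, defocus_um=0.0, phase_shift_rad=np.pi / 2)
```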
Although cryo-EM resolutions have consistently improved, it is worth emphasizing that the first near-atomic structures from a single-particle experiment were published 10 years ago, independently by three groups (Fig. 4, C and D) (
). However, the above studies were limited to a select few cases of icosahedral viruses. Since then, many different aspects of the methodologies have improved, such that a steady stream of smaller and generally more challenging structures could be obtained at increasingly higher resolutions (
). We are now on the cusp of breaking into true atomic resolution, where carbon–carbon and perhaps soon even carbon–hydrogen bonds can be distinguished (Fig. 5). Interestingly, the majority of the highest-resolution information suffers radiation damage within the first several e−/Å2 (
). It seems likely that, unless we account for the beam-induced movement to make all particles “shiny” (the term was once coined during a session at the annual 3D Electron Microscopy Gordon Research Conference), resolutions will continue to lag behind microscope capabilities. The development of general supports for cryo-EM (
), coupled with continued improvements in software to correct for residual beam-induced movement, suggests that this may be possible in the near future, and it is likely that we may see an ∼1-Å, or even sub-Å, structure within a few years.
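The radiation-damage observation above is what motivates exposure ("dose") weighting during movie-frame averaging. A minimal sketch using the empirical critical-exposure fit of Grant and Grigorieff (2015); the constants come from that work, and the function form here is a simplified illustration rather than a full frame-weighting implementation:

```python
import numpy as np

def dose_weight(k_invA, exposure_eA2):
    """Exposure-dependent weight for a movie frame at frequency k.

    Uses the empirical critical-exposure curve of Grant & Grigorieff
    (2015), Ne(k) = 0.245 * k**-1.665 + 2.81 (in e-/A^2): signal at
    high k decays fastest, so later frames are down-weighted there.
    """
    k = np.asarray(k_invA, dtype=float)
    ne = 0.245 * np.power(k, -1.665) + 2.81
    return np.exp(-np.asarray(exposure_eA2) / (2.0 * ne))

# e.g. remaining weight of the signal after 30 e-/A^2 of exposure:
print(dose_weight(np.array([0.1, 0.25, 0.5]), 30.0))
```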
Some of the most interesting data sets, arguably, are those that exhibit an extensive amount of structural heterogeneity. Heterogeneous datasets can provide insight into mechanisms of assembly or complex function and opportunities for discovering novel functionally relevant factors (
). Automated approaches are still underdeveloped for their analysis. To decipher structural heterogeneity, it is necessary to classify the particles in 3D (older successful analyses have also been performed in 2D (
). The utility of different classification approaches will vary, as will the accuracy with which they can identify distinct and especially sub-stoichiometric populations of particles. The analysis of heterogeneous structures can also be taken to its extreme. For example, it is possible to essentially take lysate from cells, put it on a grid, and obtain three-dimensional structures of select specimens (
). However, the current approaches are still limited to complexes that are highly abundant and homogeneous. For any practitioner of single-particle analysis, it is well known that even crudely purified macromolecular assemblies may present significant challenges to computational image analysis, and it is almost always advisable to improve the purity of the sample as a first step when troubleshooting a difficult specimen. In practice, it is not clear how much impurity can be tolerated. Nonetheless, such “lysate-to-structure” methods represent the first steps toward the cryo-EM version of structural (or visual) proteomics (
), and with the right approach (and perhaps mild biochemical enrichment), one can envision the possibility of taking relatively crude material and determining structures of many, or at least some, core macromolecules or macromolecular assemblies.
We often think of macromolecular dynamics in terms of discrete conformational or compositional states, such as those that characterize distinct enzymatic states or allosteric activators. However, many macromolecules are continuously or quasi-continuously dynamic. Several approaches have explicitly attempted to deal with the continuous flexibility problem, including normal mode analysis (
). This is an ongoing field of development, which has the potential to define entire energy landscapes associated with continuous movement in addition to discrete structural states. When the complex has multiple moving parts, it is also possible to break them up into independently defined (and often continuously mobile) rigid bodies and treat them separately within individual refinement protocols (
). Presumably, with increases in data size, computational power, and algorithmic capabilities, we will more actively apply continuous and quasi-continuous conformational analyses to macromolecular objects, at least on a focused regional basis.
With improved methods for classification, it is also possible to build upon old ideas pertaining to time-resolved structural studies. Over the years, different methods of fast specimen preparation have been introduced to capture transient structural states (
). Early pioneering work showed that one could capture the response of the acetylcholine receptor to its substrate, with millisecond reaction times, by spraying acetylcholine onto grids coated with the receptor prior to cryo-EM structural analysis (
) and the redistribution of domains into distinct states.
When analyzing heterogeneous particles, the following questions are often encountered and represent current challenges. How does one decide on the proper number of classes with which to represent the data? What is the right classification approach? What determines a significant difference between any two structures? Finally, and perhaps most importantly, what are the biological implications of each structural state? Arguably, some of the most interesting biology exists within structurally heterogeneous datasets, and single-particle cryo-EM is, at its core, a single-molecule technique with a unique capability to make sense of such data. A comprehensive approach will be required as the questions diverge from understanding static structures to studying dynamic assemblies.
Derivation of an atomic model
Many of the individual steps pertaining to atomic model building and refinement have relied upon the wealth of knowledge in physical and protein chemistry, as well as existing tools for crystallographic model refinement. Most of the first atomic models were obtained by manual building and real-space refinement in programs such as Coot (
). Such approaches were logical extensions of existing workflows, as modeling packages lagged behind the rapid improvements in cryo-EM resolution that suddenly necessitated deriving atomic models. Over the last several years, many of the gaps have been closed, and there are now numerous available packages, both standalone and as part of existing suites, which are designed to perform many aspects of model building into real space cryo-EM maps (
). Automated model building tools (phenix.autobuild, Rosetta, ARP/wARP, MAINMAST, Buccaneer) have also been adapted from X-ray crystallography to work with cryo-EM maps.
Moving forward, there are concrete differences unique to cryo-EM that distinguish refinement of models into cryo-EM maps from X-ray maps. The atomic form factors in cryo-EM maps represent the electrostatic potential of the atoms, whereas in X-ray maps, they represent electron density. This means that cryo-EM maps also contain information about charge states in the macromolecule, which are not detected in an X-ray experiment. Cryo-EM maps contain information about the phases of the Fourier transform of the imaged object, whereas the phases must be experimentally recovered in an X-ray scattering experiment. The presence of experimental phases also allows for easier interpretation of cryo-EM maps at a lower resolution than is typically accepted for X-ray crystallographic experiments. These and other factors (
) mean that cryo-EM refinements cannot simply borrow concepts from the crystallographic community, and some procedures need to be uniquely developed within the cryo-EM community. Understanding these fundamental differences will better help to derive atomic models from cryo-EM reconstructions, especially at true atomic resolution.
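Schematically (ignoring scale factors, the CTF, and solvent contributions), the phase distinction can be written as:

```latex
% X-ray: only amplitudes |F| are measured; phases must be recovered
\rho_{e}(\mathbf{r}) = \mathcal{F}^{-1}\!\left[\,\lvert F(\mathbf{k})\rvert\,
  e^{i\varphi(\mathbf{k})}\right], \qquad \varphi(\mathbf{k})\ \text{unknown a priori}

% Cryo-EM: images supply complex Fourier components of the potential
V(\mathbf{r}) = \mathcal{F}^{-1}\!\left[F_{\mathrm{EM}}(\mathbf{k})\right],
  \qquad F_{\mathrm{EM}}(\mathbf{k}) \in \mathbb{C}\ \text{measured directly}
```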
Cryo-EM map and atomic model validation
There is a common understanding that the most powerful, but also the most dangerous, aspect of single-particle cryo-EM is that a map will always emerge at the end of any workflow. Ensuring that the map, and subsequently the model, correctly represents the data to the best possible agreement is arguably the most important aspect of the experiment and will help to avoid serious mistakes and misinterpretations of the data (
). Validation measures have seen an extensive amount of development over the last several years, but they are not as standardized as in the X-ray crystallographic community. They will continue to evolve as resolutions improve and as heterogeneous data sets become more complicated. I have summarized some of the important questions that must be asked when evaluating the quality of a map and corresponding atomic model, both qualitatively and quantitatively (Fig. 6). There are numerous good reviews on validation topics that go into much more detail (
). Furthermore, with the recent “Frontiers in Cryo-EM Validation” meeting held in Hinxton, United Kingdom, in January 2019, we can soon expect a timely update to the established standards set several years back (
). Finally, in addition to analyzing the standard validation metrics, as described below, it is always advisable that the reader exercise her/his own judgment in determining whether the structural data presented in a paper justify the conclusions.
The standard metric for evaluating a reconstructed cryo-EM map is the FSC curve (
), which describes the correlation between two “half-maps,” each reconstructed from a randomly selected half-subset of the data, as a function of spatial frequency. The nominal resolution value can then be obtained by applying a threshold to the curve, typically at 0.143 (
). The FSC is a requirement for all publications and map depositions.
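For illustration, a bare-bones FSC between two half-maps can be computed in a few lines. This sketch assumes cubic maps on the same grid; production implementations additionally apply masking and corrections (e.g. noise substitution) that are omitted here:

```python
import numpy as np

def fsc(half_map1, half_map2):
    """Fourier shell correlation between two cubic half-maps.

    FSC(s) = Re[ sum(F1 * conj(F2)) ] / sqrt( sum|F1|^2 * sum|F2|^2 ),
    evaluated over each spherical shell s in Fourier space; the global
    resolution is commonly read off where the curve crosses 0.143.
    Shell s corresponds to a resolution of (n * voxel_size) / s.
    """
    f1, f2 = np.fft.fftn(half_map1), np.fft.fftn(half_map2)
    n = half_map1.shape[0]
    freq = np.fft.fftfreq(n)                           # cycles/voxel
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    shells = np.round(np.sqrt(kx**2 + ky**2 + kz**2) * n).astype(int)
    curve = np.zeros(n // 2)
    for s in range(1, n // 2):
        sel = shells == s
        num = np.real(np.sum(f1[sel] * np.conj(f2[sel])))
        den = np.sqrt(np.sum(np.abs(f1[sel])**2) * np.sum(np.abs(f2[sel])**2))
        curve[s] = num / den if den > 0 else 0.0
    return curve
```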
Although the evaluation of the global map resolution is critical, it often belies some of the most interesting features of the map, particularly those that exhibit structural heterogeneity. For this reason, local resolution analyses, typically computed in patches across the map, have become increasingly important to describe the quality of different regions of the reconstructed object (
). The most common observation is that core regions of a map display higher resolution, whereas outer regions display lower resolution. Importantly, it is also possible to filter the map by local resolution, such that heterogeneous regions are filtered to a lower resolution than homogeneous regions. Often, especially for large assemblies, some of the most interesting biology occurs in the outer segments of a map, whereby auxiliary (often sub-stoichiometrically occupied) components or associated protein factors relay a signal to the catalytic core (
). Analyses of directional resolution anisotropy can reveal a lot about pathologies in the map and serve as a quantitative complement to conventional qualitative Euler angle distribution profiles. This is an important validation measure, which stems from the problems associated with vitrification causing preferential specimen orientation of the imaged object (see “Sample preparation for imaging in a transmission electron microscope” above), resulting in nonuniform resolution in different directions. Because of the geometry of the imaging experiment, preferential specimen orientation typically manifests itself as poor resolution in the Z direction at the expense of better resolution in the X and Y directions. This effectively results in stretching of density features along the Z axis. In many cases, anisotropy may not cause much of a problem, either for model building/refinement or interpretation, but in more serious cases, it can severely deform the map and compromise its interpretability altogether; if not properly accounted for, even minor density elongation could pose problems during atomic model refinement and lead to inappropriate conclusions, especially if the biological interpretations are founded on subtle structural changes. We have recently proposed that 3D FSCs, which quantitatively describe orientation anisotropy (
), become standard tools for validation of any single-particle reconstruction.
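As a companion to the global FSC sketch above, a simplified directional variant (in the spirit of the 3D FSC, though not the published implementation) restricts each Fourier shell to a cone about a chosen axis; for a preferentially oriented dataset, the curves along X, Y, and Z cross the threshold at visibly different shells:

```python
import numpy as np

def directional_fsc(half_map1, half_map2, axis, half_angle_deg=20.0):
    """FSC restricted to a cone of Fourier voxels around `axis`.

    Computing this along e.g. x, y, and z exposes sampling anisotropy:
    a preferentially oriented dataset gives curves that cross the
    threshold at different shells in different directions.
    """
    f1, f2 = np.fft.fftn(half_map1), np.fft.fftn(half_map2)
    n = half_map1.shape[0]
    freq = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2)
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    with np.errstate(divide="ignore", invalid="ignore"):
        cos_ang = np.abs(kx * a[0] + ky * a[1] + kz * a[2]) / r
        in_cone = cos_ang >= np.cos(np.deg2rad(half_angle_deg))
    shells = np.round(r * n).astype(int)
    curve = np.zeros(n // 2)
    for s in range(1, n // 2):
        sel = (shells == s) & in_cone
        num = np.real(np.sum(f1[sel] * np.conj(f2[sel])))
        den = np.sqrt(np.sum(np.abs(f1[sel])**2) * np.sum(np.abs(f2[sel])**2))
        curve[s] = num / den if den > 0 else 0.0
    return curve
```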
The relationship between anisotropy and resolution is not fully understood. Although the sampling distribution is largely independent of other factors attenuating image quality, distinct sampling distributions seem to affect the nominal resolution of a reconstructed map (
). However, a direct relationship has not been established and requires further work. Whereas tilts can ameliorate resolution anisotropy, this comes at the expense of an increased B-factor that attenuates global resolution (minimally caused by inherent increases in ice thickness within tilted images, but possibly also by other factors (
)). Ideally, one would be able to explicitly define the optimal tilt angle for a dataset given a sampling distribution, which can be deduced within a few hours of data collection.
The FSC can also be used to compute the correlation of the map to the atomic model. Such map-to-model FSCs describe the fit of the model to the experimental reconstruction, which should numerically correspond to the map-to-map resolution between half-sets of the data. If the two do not (approximately) match, this is a sign of problems that should be corrected. Furthermore, the map-to-model resolution analysis can be dynamically incorporated into atomic model refinement protocols, as demonstrated with independent half-map–based refinements (
). The disadvantage of the approach is that only one-half of the data are used for model refinement, which by definition lowers the quality and resolution of the working map. Nonetheless, independent half-map–based refinements would, at a minimum, provide an external measure of model improvement. Whether the approach becomes more widely accepted remains to be seen.
Atomic models must also be validated using numerous external metrics. Model validation builds upon decades of work in the crystallographic community and is typically reported in table format as supporting information. Importantly, the proper way to validate any model is to use only metrics that have not been relied upon during model refinement. For example, if Ramachandran restraints were used during refinement, they cannot be used to validate the model. The same logic applies to many other restraints, such as bond distances, dihedral angles, planarity, etc. Numerous complementary approaches have been developed, such as the Cα-based low-resolution annotation method CaBLAM (
), alongside the widely used MolProbity package, which provides an “all-atom contact analysis” and produces a variety of numerical scores that summarize the quality of the final model.
Some important questions remain to be addressed or accepted as standards in the field. For example, how does one know when the resolution is sufficient to start modeling? Which areas of the map should and should not be modeled? To what extent is the model, and its interpretability, affected by anisotropy? Validation measures also have to be further developed and standardized. As was recently highlighted, there are many issues still remaining (
). For example, in many of the current models derived from cryo-EM maps, experimental temperature factors lie outside of the expected range of values given the nominal map resolution, waters are not accounted for or lie outside of density, geometry is poor with many interatomic clashes, and there are no standard methods for sharpening or thresholding the maps (
), but in practice, the user typically sharpens the map to different extents and visually and qualitatively selects the optimal sharpening amount. Furthermore, sharpening is performed slightly differently within cryo-EM packages (
). It will be necessary to systematically establish working criteria that the field can generally agree upon, both for maps and for models, and continue to define what should and shouldn’t be deposited into the PDB/EMDB.
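As one example of the ambiguity, global B-factor sharpening itself is straightforward to express; what is not standardized is how B is chosen. A minimal sketch (global, isotropic sharpening only, assuming a cubic map; negative B sharpens):

```python
import numpy as np

def sharpen(volume, b_factor_A2, voxel_size_A):
    """Scale a map's Fourier amplitudes by exp(-B * s^2 / 4).

    s is spatial frequency in 1/A, matching the crystallographic
    temperature-factor convention; a negative B boosts high-resolution
    amplitudes (sharpens), a positive B dampens them (blurs).
    """
    f = np.fft.fftn(volume)
    n = volume.shape[0]
    freq = np.fft.fftfreq(n, d=voxel_size_A)           # 1/Angstrom
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    s2 = kx**2 + ky**2 + kz**2
    return np.real(np.fft.ifftn(f * np.exp(-b_factor_A2 * s2 / 4.0)))
```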
In light of the systematic improvements in resolution, one interesting observation is the appearance of hydrogen atoms at apparently lower resolutions than those typically required for their observation in X-ray diffraction experiments (
). This is exciting because it demonstrates some of the fundamental differences between X-ray electron density and cryo-EM coulomb potential maps (see “Derivation of an atomic model” above). The latter are less straightforward to interpret (
). One of the observations from high-resolution electron crystallography experiments is that the R-factors derived from an atomic model seem to be higher than R-factors from X-ray maps at comparable resolutions, implying slight deviations in interatomic distances from those known through physical chemistry (
). One explanation for this phenomenon is that electron scattering factors for proteins are not properly taken into account during refinement of the atomic model (scattering factors for proteins and amino acids differ from scattering factors calculated for gas-phase electron diffraction from neutral atoms (
)). Other factors may also contribute. Because single-particle images already contain experimental phases, and because resolutions are quickly approaching those from micro-electron diffraction (micro-ED), careful measurements can be performed to define and utilize the correct scattering factors for atomic model refinement, and such analyses will have implications for the types of details that can be observed at true atomic resolution. Biologically, this may have implications for enzyme mechanisms, which are often governed by subtle structural changes, coupled with charge state, within the local chemical environment.
Cryo-EM methods have come a long way and are now opening opportunities to explore the complexity of macromolecular structural biology in previously inconceivable ways. The numerous current challenges should be worked out over the ensuing years to establish routine workflows, such that, upon sample purification (perhaps even in relatively crude form), a structure could be readily obtained. In contrast to crystallography, where sample purity and its (in)ability to crystallize can stifle or completely impede progress, cryo-EM is much less, if at all, constrained by these factors. Thus, perhaps we can speculate about the future of cryo-EM development by looking toward the sequencing community, where rapid progress has resulted in the establishment of core sequencing centers within many institutions, and broad developments have completely transformed biological sciences. Analogously, cryo-EM may provide the opportunity for structural biology to evolve from a relatively niche field to a fundamental component intrinsic to any biological study. Although I did not address any methods or applications relevant to cryo-electron tomography or in situ cellular structural biology (
), it is important to note that many of the same tools that have been developed for SPA can be retooled or in some cases directly applied to the analysis of tomograms or sub-tomogram averages. There are even greater challenges to overcome in cellular cryo-EM imaging, but the possibilities offer an opportunity to chart out the molecular organization of the cell with unprecedented detail, level of understanding, and resolution.
I thank Nikolaus Grigorieff, Peter Rosenthal, Yong Zi Tan, as well as members of my lab for critical reading of the manuscript.
This work was supported by National Institutes of Health Grants DP5 OD021396, R01 AI136680, and U54 GM103368 (to D. Lyumkis).