4.9.2. Principal Component Analysis (PCA) — MDAnalysis.analysis.pca

Authors: John Detlefs
Year: 2016
Copyright: GNU Public License v3

New in version 0.16.0.

This module contains the linear dimensionality reduction method Principal Component Analysis (PCA). PCA decomposes a simulation into 3N directions of descending variance, with N being the number of atoms; these directions are called the principal components. The dimensionality of the data is reduced by looking only at the projections onto the first few principal components. To learn how to run a Principal Component Analysis, please refer to the PCA Tutorial.

The PCA problem is solved via the eigenvalue problem of the covariance matrix, a \(3N \times 3N\) matrix whose element \((i, j)\) is the covariance between coordinates \(i\) and \(j\). The principal components are the eigenvectors of this matrix.
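
As an illustration (not the MDAnalysis implementation), a minimal numpy sketch of this eigenvalue problem might look as follows, using a random array as a stand-in for the \(n_{\text{frames}} \times 3N\) coordinate matrix:

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> coordinates = rng.standard_normal((100, 30))    # stand-in for an (n_frames, 3N) array
>>> X = coordinates - coordinates.mean(axis=0)      # centre on the mean structure
>>> cov = np.cov(X, rowvar=False)                   # (3N, 3N) covariance matrix
>>> eigvals, eigvecs = np.linalg.eigh(cov)          # symmetric eigendecomposition
>>> order = np.argsort(eigvals)[::-1]               # sort by descending variance
>>> variance, p_components = eigvals[order], eigvecs[:, order]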

For each eigenvector, its eigenvalue is the variance that the eigenvector explains. The attribute PCA.cumulated_variance stores, at each index \(i\), the ratio of the total variance explained by the eigenvectors up to and including index \(i\), so it is easy to determine how many principal components are needed to explain a given amount of variance. For most data, PCA.cumulated_variance will be approximately equal to one for some \(n\) that is significantly smaller than the total number of components; these first \(n\) components are the components of interest given by Principal Component Analysis.
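
For example, given a hypothetical set of eigenvalues, the cumulated variance is simply the normalised cumulative sum, from which the number of components needed for a chosen threshold can be read off:

>>> import numpy as np
>>> eigenvalues = np.array([5.0, 3.0, 1.5, 0.4, 0.1])   # hypothetical eigenvalues (variances)
>>> cumulated = np.cumsum(eigenvalues) / eigenvalues.sum()
>>> cumulated
array([0.5 , 0.8 , 0.95, 0.99, 1.  ])
>>> int(np.argmax(cumulated >= 0.95)) + 1   # components needed for >= 95% of the variance
3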

From here, we can project a trajectory onto these principal components and attempt to retrieve some structure from our high-dimensional data.
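
Continuing the numpy sketch above (again purely illustrative), such a projection is just the dot product of the centred coordinates with the first few eigenvectors:

>>> n = 3
>>> projection = X @ p_components[:, :n]    # shape (n_frames, n)
>>> projection.shape
(100, 3)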

For a basic introduction to the module, the PCA Tutorial shows how to perform Principal Component Analysis.

4.9.2.1. PCA Tutorial

The example uses files provided as part of the MDAnalysis test suite (in the variables PSF and DCD). This tutorial shows how to use the PCA class.

First load all modules and the test data:

>>> import numpy as np
>>> import MDAnalysis as mda
>>> import MDAnalysis.analysis.pca as pca
>>> from MDAnalysis.tests.datafiles import PSF, DCD

Given a universe containing trajectory data, we can perform Principal Component Analysis by using the PCA class and retrieving the principal components.

>>> u = mda.Universe(PSF, DCD)
>>> PSF_pca = pca.PCA(u, select='backbone')
>>> PSF_pca.run()

Inspect the components to determine how many principal components you would like to retain. The choice is arbitrary, but I will stop when 95 percent of the variance is explained by the components. The cumulated variance of the components is conveniently stored in the one-dimensional array attribute cumulated_variance. The value at the ith index of cumulated_variance is the fraction of the total variance explained by components 0 through i.

>>> n_pcs = np.where(PSF_pca.cumulated_variance > 0.95)[0][0]
>>> atomgroup = u.select_atoms('backbone')
>>> pca_space = PSF_pca.transform(atomgroup, n_components=n_pcs)

From here, inspection of the pca_space and conclusions to be drawn from the data are left to the user.
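
As one possible (optional) starting point, the first two projections can be plotted against each other, coloured by frame number; this sketch assumes that at least two components were retained and that matplotlib, which MDAnalysis does not require, is installed:

>>> import matplotlib.pyplot as plt
>>> plt.scatter(pca_space[:, 0], pca_space[:, 1],
...             c=np.arange(len(pca_space)), cmap='viridis')
>>> plt.xlabel('PC1')
>>> plt.ylabel('PC2')
>>> plt.colorbar(label='frame')
>>> plt.show()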

4.9.2.2. Classes and Functions

class MDAnalysis.analysis.pca.PCA(universe, select='all', align=False, mean=None, n_components=None, **kwargs)[source]

Principal component analysis on an MD trajectory.

After initializing the class with a universe or an atom group and calling its run() method, the principal components (the directions of decreasing variance in the atom coordinate data) will be available for analysis. As an example:

>>> pca = PCA(universe, select='backbone').run()
>>> pca_space = pca.transform(universe.select_atoms('backbone'), 3)

generates the principal components of the backbone atoms and then projects the coordinates of that atomgroup onto the first three principal components. Please refer to the PCA Tutorial for more detailed instructions.

p_components

The principal components of the feature space, representing the directions of maximum variance in the data. The column vector p_components[:, i] is the eigenvector corresponding to variance[i].

Type: array, (n_atoms * 3, n_components)
variance

The raw variance explained by each eigenvector of the covariance matrix.

Type: array, (n_components, )
cumulated_variance

The cumulative fraction of the total variance explained: the value at index i is the variance explained by component i together with all components preceding it. If a subset of components is not chosen, then all components are stored and the cumulated variance converges to 1.

Type: array, (n_components, )
mean_atoms

After running PCA.run(), the mean positions of all the atoms used to build the covariance matrix are stored here.

Type: MDAnalysis AtomGroup
transform(atomgroup, n_components=None)[source]

Take an atomgroup or universe with the same number of atoms as was used for the calculation in PCA.run(), and project it onto the principal components.
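
Continuing the example from the class docstring above, a quick sketch of the expected result (the projection has one row per frame and one column per requested component):

>>> projected = pca.transform(universe.select_atoms('backbone'), n_components=3)
>>> projected.shape == (universe.trajectory.n_frames, 3)
True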

Notes

Computation can be sped up by supplying a precalculated mean structure.

Changed in version 1.0.0: n_components now limits the correct axis of p_components. cumulated_variance now accurately represents the contribution of each principal component and does not change when n_components is given. If n_components is not None or is less than the number of p_components, cumulated_variance will not sum to 1. align=True now correctly aligns the trajectory and computes the correct means and covariance matrix.

Changed in version 0.19.0: The start frame is used when performing selections and calculating mean positions. Previously the 0th frame was always used.

Parameters:
  • universe (Universe) – The Universe containing the trajectory to be analyzed.
  • select (string, optional) – A valid selection statement for choosing a subset of atoms from the atomgroup.
  • align (boolean, optional) – If True, the trajectory will be aligned to a reference structure.
  • mean (MDAnalysis atomgroup, optional) – An optional reference structure to be used as the mean of the covariance matrix.
  • n_components (int, optional) – The number of principal components to be saved; by default all principal components are saved. Default: None
  • verbose (bool (optional)) – Show detailed progress of the calculation if set to True; the default is False.
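
A hypothetical sketch combining some of these parameters, for example aligning the trajectory and retaining only the first ten principal components:

>>> pca10 = PCA(universe, select='backbone', align=True, n_components=10).run()
>>> # pca10.p_components now has shape (n_backbone_atoms * 3, 10)
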
MDAnalysis.analysis.pca.cosine_content(pca_space, i)[source]

Measure the cosine content of the PCA projection.

The cosine content of PCA projections can be used as an indicator of whether a simulation is converged. Values close to 1 indicate that the simulation is not converged. For values below 0.7 no statement can be made. If you use this function please cite [BerkHess1].

Parameters:
  • pca_space (array, shape (number of frames, number of components)) – The PCA space to be analyzed.
  • i (int) – The index of the PCA projection to be analyzed.
Returns:

  A float reflecting the cosine content of the ith projection in the PCA space. The output is bounded by 0 and 1, with 1 reflecting an agreement with cosine while 0 reflects complete disagreement.
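
A brief usage sketch, continuing the PCA Tutorial above:

>>> from MDAnalysis.analysis.pca import cosine_content
>>> for i in range(pca_space.shape[1]):
...     print("Cosine content of PC{}: {:.2f}".format(i + 1, cosine_content(pca_space, i)))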

References

[BerkHess1] Berk Hess. Convergence of sampling in protein simulations. Phys. Rev. E 65, 031910 (2002).