4.9.2. Principal Component Analysis (PCA) — MDAnalysis.analysis.pca
Authors: John Detlefs
Year: 2016
Copyright: GNU Public License v3
New in version 0.16.0.
This module contains the linear dimensionality reduction method Principal Component Analysis (PCA). PCA sorts a simulation into 3N directions of descending variance, with N being the number of atoms. These directions are called the principal components. The dimensionality of the data is reduced by considering only the projections onto the first few principal components. To learn how to run a Principal Component Analysis, please refer to the PCA Tutorial.
PCA is performed by solving the eigenvalue problem of the covariance matrix, a \(3N \times 3N\) matrix whose element \((i, j)\) is the covariance between coordinates \(i\) and \(j\). The principal components are the eigenvectors of this matrix.
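As a rough illustration of this eigenvalue problem (a minimal NumPy sketch on synthetic data, not the implementation used by this module), the covariance matrix of the flattened coordinates can be diagonalised directly:
import numpy as np

# synthetic stand-in for a trajectory: n_frames flattened coordinate vectors of length 3N
n_frames, n_atoms = 100, 10
coords = np.random.rand(n_frames, 3 * n_atoms)

centered = coords - coords.mean(axis=0)         # remove the mean structure
cov = np.cov(centered, rowvar=False)            # (3N, 3N) covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)          # eigendecomposition of the symmetric matrix
order = np.argsort(eigvals)[::-1]               # sort by descending variance
variance = eigvals[order]                       # variance explained by each component
p_components = eigvecs[:, order]                # principal components as columns
cumulated_variance = np.cumsum(variance) / variance.sum()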
For each eigenvector, its eigenvalue is the variance that the eigenvector explains. Stored in PCA.results.cumulated_variance, a ratio for each number of eigenvectors up to index \(i\) is provided to quickly find out how many principal components are needed to explain the amount of variance reflected by those \(i\) eigenvectors. For most data, PCA.results.cumulated_variance will be approximately equal to one for some \(n\) that is significantly smaller than the total number of components. These are the components of interest given by Principal Component Analysis.
From here, we can project a trajectory onto these principal components and attempt to retrieve some structure from our high dimensional data.
For a basic introduction to the module, the PCA Tutorial shows how to perform Principal Component Analysis.
4.9.2.1. PCA Tutorial
The example uses files provided as part of the MDAnalysis test suite (in the variables PSF and DCD). This tutorial shows how to use the PCA class.
First load all modules and test data:
import numpy as np  # used below to pick the number of components

import MDAnalysis as mda
import MDAnalysis.analysis.pca as pca
from MDAnalysis.tests.datafiles import PSF, DCD
Given a universe containing trajectory data, we can perform Principal Component Analysis by using the class PCA and retrieving the principal components:
u = mda.Universe(PSF, DCD)
PSF_pca = pca.PCA(u, select='backbone')
PSF_pca.run()
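After the run, the fitted quantities are available on the results attribute. A quick way to check them (using only the attributes documented below) might be:
print(PSF_pca.results.variance[:5])            # raw variance of the first five components
print(PSF_pca.results.cumulated_variance[:5])  # cumulative fraction of the total variance
print(PSF_pca.results.p_components.shape)      # (n_atoms * 3, n_components)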
Inspect the components to determine the principal components you would like to retain. The choice is arbitrary, but I will stop when 95 percent of the variance is explained by the components. The cumulated variance of the components is conveniently stored in the one-dimensional array attribute PCA.results.cumulated_variance. The value at the ith index of PCA.results.cumulated_variance is the fraction of the total variance explained by components 0 through i:
# index of the first component at which more than 95% of the variance is explained
n_pcs = np.where(PSF_pca.results.cumulated_variance > 0.95)[0][0]
atomgroup = u.select_atoms('backbone')
pca_space = PSF_pca.transform(atomgroup, n_components=n_pcs)
From here, inspection of the pca_space and conclusions to be drawn from the data are left to the user.
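As one possible follow-up (a sketch only; it assumes matplotlib is installed and that at least one component was retained), the cumulated variance and the projection onto the first component can be plotted:
import matplotlib.pyplot as plt  # assumed available for this sketch

# cumulative fraction of variance explained by the leading components
plt.plot(PSF_pca.results.cumulated_variance[:10], 'o-')
plt.xlabel('number of components')
plt.ylabel('cumulated variance')
plt.show()

# time series of the projection onto the first principal component
plt.plot(pca_space[:, 0])
plt.xlabel('frame')
plt.ylabel('projection on PC1')
plt.show()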
4.9.2.2. Classes and Functions
class MDAnalysis.analysis.pca.PCA(universe, select='all', align=False, mean=None, n_components=None, **kwargs)[source]
Principal component analysis on an MD trajectory.
After initializing and calling the run() method with a universe or an atom group, principal components ordering the atom coordinate data by decreasing variance will be available for analysis. As an example:
pca = PCA(universe, select='backbone').run()
pca_space = pca.transform(universe.select_atoms('backbone'), 3)
generates the principal components of the backbone of the atomgroup and then transforms those atomgroup coordinates by the direction of those variances. Please refer to the PCA Tutorial for more detailed instructions.
Parameters:
- universe (Universe) – Universe
- select (string, optional) – A valid selection statement for choosing a subset of atoms from the atomgroup.
- align (boolean, optional) – If True, the trajectory will be aligned to a reference structure (see the sketch after this parameter list).
- mean (array_like, optional) – Optional reference positions to be used as the mean of the covariance matrix.
- n_components (int, optional) – The number of principal components to be saved; the default saves all principal components.
- verbose (bool, optional) – Show detailed progress of the calculation if set to True.
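A minimal sketch showing the optional arguments in use (the selection string and component count are arbitrary choices for illustration, and u and pca reuse names from the tutorial above):
pca_ca = pca.PCA(u, select='name CA', align=True, n_components=10)
pca_ca.run(verbose=True)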
results.p_components
Principal components of the feature space, representing the directions of maximum variance in the data. The column vector p_components[:, i] is the eigenvector corresponding to variance[i].
New in version 2.0.0.
Type: array, (n_atoms * 3, n_components)
p_components
Alias to results.p_components.
Deprecated since version 2.0.0: Will be removed in MDAnalysis 3.0.0. Please use results.p_components instead.
Type: array, (n_atoms * 3, n_components)
results.variance
Raw variance explained by each eigenvector of the covariance matrix.
New in version 2.0.0.
Type: array (n_components, )
variance
Alias to results.variance.
Deprecated since version 2.0.0: Will be removed in MDAnalysis 3.0.0. Please use results.variance instead.
Type: array (n_components, )
results.cumulated_variance
Percentage of variance explained by the selected components and the sum of the components preceding it. If a subset of components is not chosen then all components are stored and the cumulated variance will converge to 1.
New in version 2.0.0.
Type: array, (n_components, )
cumulated_variance
Alias to results.cumulated_variance.
Deprecated since version 2.0.0: Will be removed in MDAnalysis 3.0.0. Please use results.cumulated_variance instead.
Type: array, (n_components, )
transform(atomgroup, n_components=None)[source]
Take an atomgroup or universe with the same number of atoms as was used for the calculation in PCA.run(), and project it onto the principal components.
Notes
Computation can be sped up by supplying precalculated mean positions.
Changed in version 0.19.0: The start frame is used when performing selections and calculating mean positions. Previously the 0th frame was always used.
Changed in version 1.0.0: n_components now limits the correct axis of p_components. cumulated_variance now accurately represents the contribution of each principal component and does not change when n_components is given. If n_components is not None or is less than the number of p_components, cumulated_variance will not sum to 1. align=True now correctly aligns the trajectory and computes the correct means and covariance matrix.
Changed in version 2.0.0: mean_atoms removed, as this did not reliably contain the mean positions. mean input now accepts coordinate arrays instead of an atomgroup. p_components, variance and cumulated_variance are now stored in a MDAnalysis.analysis.base.Results instance.
cumulative_overlap(other, i=0, n_components=None)[source]
Compute the cumulative overlap of a vector in a subspace.
This is not symmetric. The cumulative overlap measures the overlap of the chosen vector in this instance, in the other subspace.
Please cite [Yang2008] if you use this function.
Parameters:
- other (PCA) – Another PCA instance that has already been run; its principal components define the subspace.
- i (int, optional) – The index of the eigenvector of this instance to be analysed.
- n_components (int, optional) – Number of components in other over which to compute the cumulative overlap. None computes all of them.
Returns: Cumulative overlap of the chosen vector in this instance to the other subspace. 0 indicates that they are mutually orthogonal, whereas 1 indicates that they are identical.
Return type: float
See also
New in version 1.0.0.
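For instance, a possible comparison of two parts of a trajectory (a sketch; the frame ranges and component count are arbitrary, and u is the universe from the tutorial above):
pca_first = pca.PCA(u, select='backbone').run(stop=49)
pca_second = pca.PCA(u, select='backbone').run(start=49)
# overlap of the first principal component of the first part with the
# subspace spanned by the second part
overlap = pca_first.cumulative_overlap(pca_second, i=0, n_components=10)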
rmsip(other, n_components=None)[source]
Compute the root mean square inner product between subspaces.
This is only symmetric if the number of components is the same for both instances. The RMSIP effectively measures how correlated the vectors of this instance are to those of other.
Please cite [Amadei1999] and [Leo-Macias2004] if you use this function.
Parameters:
- other (PCA) – Another PCA instance that has already been run.
- n_components (int or tuple of ints, optional) – Number of components to compute for the inner products. None computes all of them.
Returns: Root mean square inner product of the selected subspaces. 0 indicates that they are mutually orthogonal, whereas 1 indicates that they are identical.
Return type: float
See also
New in version 1.0.0.
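Continuing the sketch above, the similarity of the two subspaces could then be summarised as (component count again arbitrary):
similarity = pca_first.rmsip(pca_second, n_components=3)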
run(start=None, stop=None, step=None, verbose=None)
Perform the calculation
Parameters:
- start (int, optional) – start frame of analysis
- stop (int, optional) – stop frame of analysis
- step (int, optional) – number of frames to skip between each analysed frame
- verbose (bool, optional) – Turn on verbosity
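For example, restricting the analysis to part of the trajectory (a sketch; the frame values are arbitrary, with u and pca from the tutorial above):
pca_sub = pca.PCA(u, select='backbone')
pca_sub.run(start=10, stop=90, step=2, verbose=True)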
transform(atomgroup, n_components=None, start=None, stop=None, step=None)[source]
Apply the dimensionality reduction on a trajectory
Parameters:
- atomgroup (AtomGroup or Universe) – The AtomGroup or Universe containing atoms to be PCA transformed.
- n_components (int, optional) – The number of components to be projected onto. The default None maps onto all components.
- start (int, optional) – The frame to start on for the PCA transform. The default None becomes 0, the first frame index.
- stop (int, optional) – Frame index to stop the PCA transform. The default None becomes the total number of frames in the trajectory, so the trajectory is read until the end. Iteration stops before this frame number.
- step (int, optional) – Include every step frames in the PCA transform. If set to None (the default) then every frame is analyzed (i.e., same as step=1).
Returns: pca_space
Return type: array, shape (n_frames, n_components)
Changed in version 0.19.0: Transform now requires that run() has been called before, otherwise a ValueError is raised.
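A small sketch of a call (frame values and component count are arbitrary; u and PSF_pca reuse names from the tutorial above):
backbone = u.select_atoms('backbone')
projection = PSF_pca.transform(backbone, n_components=3, start=0, stop=50, step=2)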
MDAnalysis.analysis.pca.cosine_content(pca_space, i)[source]
Measure the cosine content of the PCA projection.
The cosine content of PCA projections can be used as an indicator of whether a simulation is converged. Values close to 1 indicate that the simulation is not converged. For values below 0.7 no statement can be made. If you use this function please cite [BerkHess1].
Parameters:
- pca_space (array, shape (number of frames, number of components)) – The PCA space to be analyzed.
- i (int) – The index of the pca_component projection to be analyzed.
Returns: A float reflecting the cosine content of the ith projection in the PCA space. The output is bounded by 0 and 1, with 1 reflecting perfect agreement with a cosine while 0 reflects complete disagreement.
References
[BerkHess1] Berk Hess. Convergence of sampling in protein simulations. Phys. Rev. E 65, 031910 (2002).
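Using the pca_space from the tutorial above, a quick convergence check might look like this (a sketch only):
# cosine content of the projection onto the first principal component;
# values close to 1 suggest insufficient sampling
cc = pca.cosine_content(pca_space, 0)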
MDAnalysis.analysis.pca.rmsip(a, b, n_components=None)[source]
Compute the root mean square inner product between subspaces.
This is only symmetric if the number of components is the same for a and b. The RMSIP effectively measures how correlated the vectors of a are to those of b.
Please cite [Amadei1999] and [Leo-Macias2004] if you use this function.
Parameters:
- a (array, shape (n_components, n_features)) – The first subspace. Must have the same number of features as b.
- b (array, shape (n_components, n_features)) – The second subspace. Must have the same number of features as a.
- n_components (int or tuple of ints, optional) – number of components to compute for the inner products. None computes all of them.
Returns: Root mean square inner product of the selected subspaces. 0 indicates that they are mutually orthogonal, whereas 1 indicates that they are identical.
Return type: float
New in version 1.0.0.
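A possible call using component arrays from two PCA runs (a sketch; pca_first and pca_second are assumed from the earlier sketches, and the transpose is needed because p_components stores components as columns while rmsip() expects rows of shape (n_components, n_features)):
a = pca_first.results.p_components[:, :3].T   # first three components as rows
b = pca_second.results.p_components[:, :3].T
value = pca.rmsip(a, b)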
MDAnalysis.analysis.pca.cumulative_overlap(a, b, i=0, n_components=None)[source]
Compute the cumulative overlap of a vector in a subspace.
This is not symmetric. The cumulative overlap measures the overlap of the chosen vector in a, in the b subspace.
Please cite [Yang2008] if you use this function.
Parameters:
- a (array, shape (n_components, n_features) or vector, length n_features) – The first subspace containing the vector of interest. Alternatively, the actual vector. Must have the same number of features as b.
- b (array, shape (n_components, n_features)) – The second subspace. Must have the same number of features as a.
- i (int, optional) – The index of the eigenvector to be analysed.
- n_components (int, optional) – number of components in b to compute for the cumulative overlap. None computes all of them.
Returns: Cumulative overlap of the chosen vector in a to the b subspace. 0 indicates that they are mutually orthogonal, whereas 1 indicates that they are identical.
Return type: float
New in version 1.0.0.
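The same quantity can be computed from raw arrays, for example passing an eigenvector explicitly (a sketch reusing pca_first and pca_second from the earlier sketches; the component count is arbitrary):
vec = pca_first.results.p_components[:, 0]    # first eigenvector as a plain vector
subspace = pca_second.results.p_components.T  # components as rows
value = pca.cumulative_overlap(vec, subspace, n_components=10)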