deepfold.losses package

Submodules

deepfold.losses.auxillary module

deepfold.losses.auxillary.chain_centre_mass_loss(pred_atom_positions: Tensor, true_atom_positions: Tensor, atom_mask: Tensor, asym_id: Tensor, eps: float = 1e-10) Tensor
deepfold.losses.auxillary.experimentally_resolved_loss(logits: Tensor, atom37_atom_exists: Tensor, all_atom_mask: Tensor, resolution: Tensor, min_resolution: float, max_resolution: float, eps: float = 1e-08) Tensor

Predicts if an atom is experimentally resolved in a high-res structure.

Jumper et al. (2021) Suppl. Sec. 1.9.10 ‘“Experimentally resolved” prediction’

Parameters:
  • logits – logits of shape [*, N_res, 37]. Log probability that an atom is resolved in atom37 representation; can be converted to a probability by applying a sigmoid.

  • atom37_atom_exists – labels of shape [*, N_res, 37]

  • all_atom_mask – mask of shape [*, N_res, 37]

  • resolution – resolution of each example of shape [*]

Note

This loss is used during fine-tuning on high-resolution X-ray crystal and cryo-EM structures with resolution better than 0.3 nm. NMR and distillation examples have zero resolution.
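A minimal sketch of this computation, following the AlphaFold supplement: a per-atom sigmoid cross-entropy against the resolved-atom labels, averaged over atoms that exist, and gated on the resolution window. The function name and exact reductions below are illustrative assumptions, not the package's code.

import torch

def experimentally_resolved_loss_sketch(
    logits, atom37_atom_exists, all_atom_mask, resolution,
    min_resolution=0.1, max_resolution=3.0, eps=1e-8,
):
    # Per-atom binary cross-entropy; the label is whether the atom was resolved.
    errors = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, all_atom_mask.to(logits.dtype), reduction="none"
    )
    # Average over atoms that exist for each residue type.
    loss = (errors * atom37_atom_exists).sum(dim=(-1, -2))
    loss = loss / (eps + atom37_atom_exists.sum(dim=(-1, -2)))
    # Apply only to examples whose resolution falls in the allowed window.
    in_range = (resolution >= min_resolution) & (resolution <= max_resolution)
    return loss * in_range.to(loss.dtype)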

deepfold.losses.auxillary.get_asym_mask(asym_id)

Get the mask for each asym_id. [*, NR] -> [*, NC, NR]
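A short sketch of what such a mask computation might look like, assuming integer chain ids starting at 1 (the exact id convention is an assumption):

import torch

def get_asym_mask_sketch(asym_id: torch.Tensor) -> torch.Tensor:
    # asym_id: [*, NR] integer chain ids; returns [*, NC, NR] per-chain boolean masks.
    chain_ids = torch.arange(1, int(asym_id.max()) + 1, device=asym_id.device)
    return asym_id[..., None, :] == chain_ids[:, None]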

deepfold.losses.auxillary.repr_norm_loss(msa_norm: Tensor, pair_norm: Tensor, msa_mask: Tensor, pseudo_beta_mask: Tensor, eps=1e-05, tolerance=0.0) Tensor

Representation norm loss of Uni-Fold.

deepfold.losses.confidence module

deepfold.losses.confidence.compute_plddt(logits: Tensor) Tensor
deepfold.losses.confidence.compute_predicted_aligned_error(logits: Tensor, max_bin: int = 31, num_bins: int = 64) Dict[str, Tensor]

Computes aligned confidence metrics from logits.

Parameters:
  • logits – [*, num_res, num_res, num_bins] the logits output from PredictedAlignedErrorHead.

  • max_bin – Maximum bin value

  • num_bins – Number of bins

Returns:

  • ‘aligned_confidence_probs’: [*, num_res, num_res, num_bins] the predicted aligned error probabilities over bins for each residue pair.

  • ‘predicted_aligned_error’: [*, num_res, num_res] the expected aligned distance error for each pair of residues.

  • ‘max_predicted_aligned_error’: [*] the maximum predicted error possible.

Return type:

Dict containing
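The expected error is the probability-weighted average of the bin centers. A minimal sketch of that computation (helper name is an assumption):

import torch

def predicted_aligned_error_sketch(logits, max_bin=31, num_bins=64):
    # Bin boundaries span [0, max_bin]; centers sit half a step above each boundary.
    boundaries = torch.linspace(0, max_bin, steps=num_bins - 1, device=logits.device)
    step = boundaries[1] - boundaries[0]
    bin_centers = torch.cat([boundaries + step / 2, boundaries[-1:] + 3 * step / 2])
    probs = torch.softmax(logits, dim=-1)               # [*, num_res, num_res, num_bins]
    expected_error = (probs * bin_centers).sum(dim=-1)  # [*, num_res, num_res]
    return {
        "aligned_confidence_probs": probs,
        "predicted_aligned_error": expected_error,
        "max_predicted_aligned_error": bin_centers[-1],
    }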

deepfold.losses.confidence.compute_tm(logits: Tensor, residue_weights: Tensor | None = None, asym_id: Tensor | None = None, interface: bool = False, max_bin: int = 31, num_bins: int = 64, eps: float = 1e-08) Tensor

Compute TM score from logits.

deepfold.losses.confidence.lddt(all_atom_pred_pos: Tensor, all_atom_positions: Tensor, all_atom_mask: Tensor, cutoff: float = 15.0, eps: float = 1e-10, per_residue: bool = True) Tensor

Calculate lDDT score.
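For reference, lDDT compares all pairwise distances below the inclusion cutoff in the true structure against the prediction, scoring each pair by how many of the standard 0.5/1/2/4 Å tolerances it satisfies. A self-contained sketch (not the package's implementation):

import torch

def lddt_sketch(pred_pos, true_pos, mask, cutoff=15.0, eps=1e-10):
    # pred_pos, true_pos: [*, N, 3]; mask: [*, N]
    d_true = torch.cdist(true_pos, true_pos)
    d_pred = torch.cdist(pred_pos, pred_pos)
    n = d_true.shape[-1]
    pair_mask = mask[..., :, None] * mask[..., None, :]
    # Score only pairs within the inclusion radius of the true structure, excluding self-pairs.
    incl = (d_true < cutoff).float() * pair_mask * (1.0 - torch.eye(n, device=d_true.device))
    delta = (d_true - d_pred).abs()
    score = 0.25 * sum((delta < t).float() for t in (0.5, 1.0, 2.0, 4.0))
    # Per-residue lDDT in [0, 1].
    return (incl * score).sum(dim=-1) / (eps + incl.sum(dim=-1))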

deepfold.losses.confidence.lddt_ca(all_atom_pred_pos: Tensor, all_atom_positions: Tensor, all_atom_mask: Tensor, cutoff: float = 15.0, eps: float = 1e-10, per_residue: bool = True) Tensor

Calculate lDDT score using only alpha-carbon atoms.

deepfold.losses.confidence.plddt_loss(logits: Tensor, all_atom_pred_pos: Tensor, all_atom_positions: Tensor, all_atom_mask: Tensor, resolution: Tensor, cutoff: float = 15.0, num_bins: int = 50, min_resolution: float = 0.1, max_resolution: float = 3.0, eps: float = 1e-10) Tensor

Calculate pLDDT loss.

deepfold.losses.confidence.tm_loss(logits: Tensor, final_affine_tensor: Tensor, backbone_rigid_tensor: Tensor, backbone_rigid_mask: Tensor, resolution: Tensor, max_bin: int = 31, num_bins: int = 64, min_resolution: float = 0.1, max_resolution: float = 3.0, eps: float = 1e-08) Tensor

deepfold.losses.geometry module

deepfold.losses.geometry.backbone_loss(backbone_rigid_tensor: Tensor, backbone_rigid_mask: Tensor, traj: Tensor, use_clamped_fape: Tensor | None = None, clamp_distance: float = 10.0, loss_unit_distance: float = 10.0, eps: float = 0.0001) Tensor
deepfold.losses.geometry.compute_distogram(positions: Tensor, mask: Tensor, min_bin: float = 2.3125, max_bin: float = 21.6875, num_bins: int = 64) Tuple[Tensor, Tensor]
deepfold.losses.geometry.compute_fape(pred_frames: Rigid, target_frames: Rigid, frames_mask: Tensor, pred_positions: Tensor, target_positions: Tensor, positions_mask: Tensor, length_scale: float, l1_clamp_distance: float | None = None, eps: float = 1e-08) Tensor

Computes FAPE loss.

Parameters:
  • pred_frames – Rigid object of predicted frames. [*, N_frames]

  • target_frames – Rigid object of ground truth frames. [*, N_frames]

  • frames_mask – Binary mask for the frames. [*, N_frames]

  • pred_positions – Predicted atom positions. [*, N_pts, 3]

  • target_positions – Ground truth positions. [*, N_pts, 3]

  • positions_mask – Positions mask. [*, N_pts]

  • length_scale – Length scale by which the loss is divided.

  • l1_clamp_distance – Cutoff above which distance errors are disregarded.

  • eps – Small value used to regularize denominators.

Returns:

FAPE loss tensor.
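FAPE expresses every atom in every local frame and averages the clamped pointwise errors. A simplified sketch using explicit rotation matrices and translations in place of the package's Rigid objects (signature and helper names are assumptions):

import torch

def fape_sketch(R_pred, t_pred, R_true, t_true, frames_mask,
                x_pred, x_true, positions_mask,
                length_scale=10.0, clamp_distance=10.0, eps=1e-8):
    # R_*: [*, F, 3, 3] frame rotations; t_*: [*, F, 3]; x_*: [*, P, 3] positions.
    def to_local(R, t, x):
        # Inverse frame transform of each point: x_local = R^T (x - t).
        return torch.einsum("...fij,...fpi->...fpj", R, x[..., None, :, :] - t[..., :, None, :])
    err = torch.sqrt(((to_local(R_pred, t_pred, x_pred)
                       - to_local(R_true, t_true, x_true)) ** 2).sum(-1) + eps)
    err = err.clamp(max=clamp_distance) / length_scale          # [*, F, P]
    mask = frames_mask[..., None] * positions_mask[..., None, :]
    return (err * mask).sum((-1, -2)) / (eps + mask.sum((-1, -2)))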

deepfold.losses.geometry.compute_renamed_ground_truth(batch: Dict[str, Tensor], atom14_pred_positions: Tensor, eps=1e-10) Dict[str, Tensor]
deepfold.losses.geometry.distogram_loss(logits: Tensor, pseudo_beta: Tensor, pseudo_beta_mask: Tensor, min_bin: float = 2.3125, max_bin: float = 21.6875, num_bins: int = 64, eps: float = 1e-06) Tensor
deepfold.losses.geometry.fape_loss(outputs: Dict[str, Tensor], batch: Dict[str, Tensor], backbone_clamp_distance: float, backbone_loss_unit_distance: float, backbone_weight: float, sidechain_clamp_distance: float, sidechain_length_scale: float, sidechain_weight: float, eps: float = 0.0001) Tensor
deepfold.losses.geometry.get_optimal_transform(src_atoms: Tensor, tgt_atoms: Tensor, mask: Tensor | None = None) Tuple[Tensor, Tensor]

Calculate the optimal superimposition.

deepfold.losses.geometry.kabsch_rmsd(true_atom_pos: Tensor, pred_atom_pos: Tensor, atom_mask: Tensor | None = None) Tensor
deepfold.losses.geometry.sidechain_loss(sidechain_frames: Tensor, sidechain_atom_pos: Tensor, rigidgroups_gt_frames: Tensor, rigidgroups_alt_gt_frames: Tensor, rigidgroups_gt_exists: Tensor, renamed_atom14_gt_positions: Tensor, renamed_atom14_gt_exists: Tensor, alt_naming_is_better: Tensor, clamp_distance: float = 10.0, length_scale: float = 10.0, eps: float = 0.0001) Tensor
deepfold.losses.geometry.superimpose(src_atoms: Tensor, tgt_atoms: Tensor, mask: Tensor) Tuple[Tensor, Tensor]

Superimposes src_atoms coordinates onto tgt_atoms by minimizing RMSD using SVD.

Parameters:
  • src_atoms – reference tensor shaped [*, N, 3]

  • tgt_atoms – target tensor shaped [*, N, 3]

  • mask – mask tensor shaped [*, N]

Returns:

superimposed coords [*, N, 3] and final RMSDs [*]

Return type:

(superimposed, rmsds)

deepfold.losses.geometry.supervised_chi_loss(angles_sin_cos: Tensor, unnormalized_angles_sin_cos: Tensor, aatype: Tensor, seq_mask: Tensor, chi_mask: Tensor, chi_angles_sin_cos: Tensor, chi_weight: float, angle_norm_weight: float, eps: float = 1e-06) Tensor

Torsion Angle Loss.

Supplementary ‘1.9.1 Side chain and backbone torsion angle loss’: Algorithm 27 Side chain and backbone torsion angle loss.

Parameters:
  • angles_sin_cos – Predicted angles. [*, N, 7, 2]

  • unnormalized_angles_sin_cos – [*, N, 7, 2] The same angles, but unnormalized.

  • aatype – Residue indices. [*, N]

  • seq_mask – Sequence mask. [*, N]

  • chi_mask – Angle mask. [*, N, 7]

  • chi_angles_sin_cos – Ground truth angles. [*, N, 7, 2]

  • chi_weight – Weight for the angle component of the loss.

  • angle_norm_weight – Weight for the normalization component of the loss.

Returns:

Torsion angle loss tensor.
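The two components can be sketched as a squared error on the (sin, cos) pairs, taking the minimum over the pi-shifted ground truth for pi-periodic chi angles, plus a penalty keeping the unnormalized predictions near the unit circle. All names and shapes below are illustrative assumptions, not the package's code:

import torch

def chi_loss_sketch(pred_sin_cos, unnormalized_sin_cos, gt_sin_cos, chi_mask,
                    pi_periodic, chi_weight=0.5, angle_norm_weight=0.01, eps=1e-6):
    # pred_sin_cos, gt_sin_cos: [..., 2] matching (sin, cos) pairs;
    # pi_periodic: boolean flags for chi angles that are pi-periodic (e.g. ASP chi2).
    gt_shifted = -gt_sin_cos  # adding pi flips both sin and cos
    sq_err = ((pred_sin_cos - gt_sin_cos) ** 2).sum(-1)
    sq_err_shifted = ((pred_sin_cos - gt_shifted) ** 2).sum(-1)
    sq_chi = torch.where(pi_periodic, torch.minimum(sq_err, sq_err_shifted), sq_err)
    chi_loss = (sq_chi * chi_mask).sum((-1, -2)) / (eps + chi_mask.sum((-1, -2)))
    # Penalize deviation of the unnormalized (sin, cos) norm from 1.
    norm = torch.sqrt((unnormalized_sin_cos ** 2).sum(-1) + eps)
    angle_norm_loss = (norm - 1.0).abs().mean((-1, -2))
    return chi_weight * chi_loss + angle_norm_weight * angle_norm_loss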

deepfold.losses.masked_msa module

deepfold.losses.masked_msa.masked_msa_loss(logits: Tensor, true_msa: Tensor, bert_mask: Tensor, eps: float = 1e-08) Tensor

Computes BERT-style masked MSA loss.

Supplementary ‘1.9.9 Masked MSA prediction’.

Parameters:
  • logits – [*, N_seq, N_res, 23] predicted residue distribution

  • true_msa – [*, N_seq, N_res] true MSA

  • bert_mask – [*, N_seq, N_res] MSA mask

Returns:

Masked MSA loss
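A minimal sketch of the computation: the negative log-likelihood of the true residue class at each BERT-masked position, averaged over the mask (names are assumptions):

import torch

def masked_msa_loss_sketch(logits, true_msa, bert_mask, eps=1e-8):
    # logits: [*, N_seq, N_res, 23]; true_msa: [*, N_seq, N_res] integer classes.
    log_probs = torch.log_softmax(logits, dim=-1)
    nll = -torch.gather(log_probs, -1, true_msa[..., None].long()).squeeze(-1)
    return (nll * bert_mask).sum((-1, -2)) / (eps + bert_mask.sum((-1, -2)))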

deepfold.losses.procrustes module

class deepfold.losses.procrustes.Procrustes(*args, **kwargs)

Bases: Function

static backward(ctx, grad_r, grad_ds)

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, m: Tensor, force_rotation: bool, regularization: bool, gradient_eps: float)

Define the forward of the custom autograd Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

  • See the PyTorch documentation on combining forward and ctx for more details

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See the PyTorch documentation on extending autograd for more details

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

deepfold.losses.procrustes.flatten_batch_dims(tensor: Tensor, end_dim: int) Tuple[Tensor, Size]
deepfold.losses.procrustes.kabsch(x: Tensor, y: Tensor, weights: Tensor | None = None, compute_scaling: bool = False) Tuple[Tensor, Tensor, float | None]

Returns the rigid transformation and the optimal scaling that best align an input list of points x to a target list of points y, by minimizing the sum of squared distances.

Parameters:
  • x – […, N, D] list of N points of dimension D.

  • y – […, N, D] list of corresponding target points.

  • weights – […, N] optional list of weights associated to each point.

Returns:

a triplet (R, t, s) consisting of a rotation matrix R, a translation vector t, and a scaling factor s (the latter only if compute_scaling is true).
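For reference, the classic (unscaled) Kabsch solution: center both point sets, take the SVD of the weighted covariance, and correct the sign so the result is a proper rotation. A sketch under those assumptions, with the optional scaling omitted:

import torch

def kabsch_sketch(x, y, weights=None):
    # x, y: [..., N, D]; returns (R, t) with y ≈ x @ R.T + t in the least-squares sense.
    if weights is None:
        weights = x.new_ones(x.shape[:-1])
    w = (weights / weights.sum(-1, keepdim=True))[..., None]   # [..., N, 1]
    x_c = (w * x).sum(-2, keepdim=True)                        # weighted centroids
    y_c = (w * y).sum(-2, keepdim=True)
    cov = (w * (y - y_c)).transpose(-1, -2) @ (x - x_c)        # [..., D, D]
    U, S, Vh = torch.linalg.svd(cov)
    # Flip the smallest singular direction if needed so det(R) = +1.
    det = torch.linalg.det(U @ Vh)
    flip = torch.cat([torch.ones_like(S[..., :-1]), det[..., None]], dim=-1)
    R = U @ (flip[..., None] * Vh)
    t = y_c.squeeze(-2) - torch.einsum("...ij,...j->...i", R, x_c.squeeze(-2))
    return R, t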

deepfold.losses.procrustes.procrustes(m: Tensor, force_rotation: bool = False, regularization: float = 0.0, gradient_eps: float = 1e-05, return_singular_values: bool = False) Tuple[Tensor, Tensor | None]

Returns the orthonormal matrix closest to m in Frobenius norm.

Parameters:
  • m – […, N, N] batch of square matrices.

  • force_rotation – if true, forces the output to be a rotation matrix.

  • regularization – weight of a regularization term added to the gradient.

  • gradient_eps – small value used to enforce numerical stability during backpropagation.

Returns:

batch of orthonormal matrices […, N, N] and optional singular values.
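The closed-form solution takes the SVD of m and discards the singular values; forcing a rotation flips the smallest singular direction whenever the determinant would be negative. A brief sketch (not the package's autograd-aware implementation):

import torch

def procrustes_sketch(m, force_rotation=False):
    # Orthonormal matrix closest to m in Frobenius norm.
    U, S, Vh = torch.linalg.svd(m)
    if force_rotation:
        det = torch.linalg.det(U @ Vh)
        flip = torch.cat([torch.ones_like(S[..., :-1]), det[..., None]], dim=-1)
        return U @ (flip[..., None] * Vh)
    return U @ Vh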

deepfold.losses.procrustes.rigid_points_registration(x: Tensor, y: Tensor, weights: Tensor | None = None, compute_scaling: bool = False) Tuple[Tensor, Tensor, float | None]

Returns the rigid transformation and the optimal scaling that best align an input list of points x to a target list of points y, by minimizing the sum of squared distances.

Parameters:
  • x – […, N, D] list of N points of dimension D.

  • y – […, N, D] list of corresponding target points.

  • weights – […, N] optional list of weights associated to each point.

Returns:

a triplet (R, t, s) consisting of a rotation matrix R, a translation vector t, and a scaling factor s (the latter only if compute_scaling is true).

deepfold.losses.procrustes.rigid_vectors_registration(x: Tensor, y: Tensor, weights: Tensor | None = None, compute_scaling: bool = False) Tuple[Tensor, Tensor | None]
deepfold.losses.procrustes.speical_procrustes(m: Tensor, regularization: float = 0.0, gradient_eps: float = 1e-05, return_singular_values: bool = False) Tuple[Tensor, Tensor | None]
deepfold.losses.procrustes.svd(m: Tensor) Tuple[Tensor, Tensor, Tensor]

Singular value decomposition.

Parameters:

m – [B, M, N] batch of real matrices.

Returns:

decomposition such that m = u @ diag(d) @ v.T

Return type:

u, d, v

deepfold.losses.procrustes.unflatten_batch_dims(tensor: Tensor, batch_shape: Size) Tensor

deepfold.losses.utils module

deepfold.losses.utils.calculate_bin_centers(boundaries: Tensor) Tensor
deepfold.losses.utils.sigmoid_cross_entropy(logits: Tensor, labels: Tensor) Tensor
deepfold.losses.utils.softmax_cross_entropy(logits: Tensor, labels: Tensor) Tensor

Softmax cross entropy.
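Both helpers are standard cross-entropies on logits. Likely shapes of the implementations, written here as assumptions rather than the package's exact code:

import torch

def softmax_cross_entropy_sketch(logits, labels):
    # labels is a probability distribution over the last dimension (e.g. one-hot).
    return -(labels * torch.log_softmax(logits, dim=-1)).sum(dim=-1)

def sigmoid_cross_entropy_sketch(logits, labels):
    # Elementwise, numerically stable binary cross-entropy on logits.
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")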

deepfold.losses.violation module

deepfold.losses.violation.between_residue_bond_loss(pred_atom_positions: Tensor, pred_atom_mask: Tensor, residue_index: Tensor, aatype: Tensor, tolerance_factor_soft: float = 12.0, tolerance_factor_hard: float = 12.0, eps: float = 1e-06) Dict[str, Tensor]

Flat-bottom loss to penalize structural violations between residues.

This is a loss penalizing any violation of the geometry around the peptide bond between consecutive amino acids. This loss corresponds to equations 44 and 45 (Supplementary ‘1.9.11 Structural violations’).

Parameters:
  • pred_atom_positions – Atom positions in atom37/14 representation

  • pred_atom_mask – Atom mask in atom37/14 representation

  • residue_index – Residue index for given amino acid; this is assumed to be monotonically increasing.

  • aatype – Amino acid type of given residue

  • tolerance_factor_soft – soft tolerance factor measured in standard deviations of pdb distributions

  • tolerance_factor_hard – hard tolerance factor measured in standard deviations of pdb distributions

Returns:

  • ‘c_n_loss_mean’: Loss for peptide bond length violations

  • ‘ca_c_n_loss_mean’: Loss for violations of bond angle around C spanned by CA, C, N

  • ‘c_n_ca_loss_mean’: Loss for violations of bond angle around N spanned by C, N, CA

  • ‘per_residue_loss_sum’: Sum of all losses for each residue

  • ‘per_residue_violation_mask’: Mask denoting all residues with a violation present

Return type:

Dict containing
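Each term is a flat-bottom (hinge) penalty: zero inside the literature mean plus or minus a tolerance measured in standard deviations, growing linearly outside. A one-line sketch of the per-bond term (names are illustrative):

import torch

def flat_bottom_sketch(dist, gt_mean, gt_stddev, tolerance_factor=12.0):
    # Zero loss inside mean ± tolerance_factor * stddev; linear beyond it.
    return torch.relu((dist - gt_mean).abs() - tolerance_factor * gt_stddev)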

deepfold.losses.violation.between_residue_clash_loss(atom14_pred_positions: Tensor, atom14_atom_exists: Tensor, atom14_atom_radius: Tensor, residue_index: Tensor, overlap_tolerance_soft: float = 1.5, overlap_tolerance_hard: float = 1.5, eps: float = 1e-10) Dict[str, Tensor]

Loss to penalize steric clashes between residues.

This is a loss penalizing any steric clashes due to non-bonded atoms in different peptides coming too close. This loss corresponds to the inter-residue part of equation 46 (Supplementary ‘1.9.11 Structural violations’).

Parameters:
  • atom14_pred_positions – Predicted positions of atoms in global prediction frame.

  • atom14_atom_exists – Mask denoting whether atom at positions exists for given amino acid type.

  • atom14_atom_radius – Van der Waals radius for each atom.

  • residue_index – Residue index for given amino acid.

  • overlap_tolerance_soft – Soft tolerance factor.

  • overlap_tolerance_hard – Hard tolerance factor.

Returns:

  • ‘mean_loss’: average clash loss

  • ‘per_atom_loss_sum’: sum of all clash losses per atom, shape (N, 14)

  • ‘per_atom_clash_mask’: mask whether atom clashes with any other atom, shape (N, 14)

Return type:

Dict containing

deepfold.losses.violation.compute_violation_metrics(batch: Dict[str, Tensor], atom14_pred_positions: Tensor, violations: Dict[str, Tensor]) Dict[str, Tensor]

Compute several metrics to assess the structural violations.

deepfold.losses.violation.compute_violation_metrics_np(batch: Dict[str, ndarray], atom14_pred_positions: ndarray, violations: Dict[str, ndarray]) Dict[str, ndarray]
deepfold.losses.violation.extreme_ca_ca_distance_violations(pred_atom_positions: Tensor, pred_atom_mask: Tensor, residue_index: Tensor, max_angstrom_tolerance: float = 1.5, eps: float = 1e-06) Tensor

Counts residues whose CA is a large distance from its neighbour.

Measures the fraction of CA-CA pairs between consecutive amino acids that are more than ‘max_angstrom_tolerance’ apart.

Parameters:
  • pred_atom_positions – Atom positions in atom37/14 representation

  • pred_atom_mask – Atom mask in atom37/14 representation

  • residue_index – Residue index for given amino acid; this is assumed to be monotonically increasing.

  • max_angstrom_tolerance – Maximum distance allowed to not count as violation.

Returns:

Fraction of consecutive CA-CA pairs with violation.
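A minimal sketch of this metric, assuming a literature CA-CA distance of about 3.8 Å between consecutive residues (the constant and names are assumptions):

import torch

def ca_ca_violation_fraction_sketch(ca_pos, ca_mask, residue_index,
                                    max_angstrom_tolerance=1.5, eps=1e-6):
    # ca_pos: [*, N, 3] alpha-carbon positions; ca_mask: [*, N]; residue_index: [*, N].
    d = torch.sqrt(((ca_pos[..., 1:, :] - ca_pos[..., :-1, :]) ** 2).sum(-1) + eps)
    # Score only truly consecutive residue pairs with both CA atoms present.
    consecutive = ((residue_index[..., 1:] - residue_index[..., :-1]) == 1).float()
    mask = ca_mask[..., 1:] * ca_mask[..., :-1] * consecutive
    violations = (d > 3.8 + max_angstrom_tolerance).float()
    return (violations * mask).sum(-1) / (eps + mask.sum(-1))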

deepfold.losses.violation.find_structural_violations(batch: Dict[str, Tensor], atom14_pred_positions: Tensor, violation_tolerance_factor: float, clash_overlap_tolerance: float) Dict[str, Tensor]

Computes several checks for structural violations.

deepfold.losses.violation.find_structural_violations_np(batch: Dict[str, ndarray], atom14_pred_positions: ndarray, violation_tolerance_factor: float, clash_overlap_tolerance: float) Dict[str, ndarray]
deepfold.losses.violation.violation_loss(violations: Dict[str, Tensor], atom14_atom_exists: Tensor, eps: float = 1e-06) Tensor
deepfold.losses.violation.within_residue_violations(atom14_pred_positions: Tensor, atom14_atom_exists: Tensor, atom14_dists_lower_bound: Tensor, atom14_dists_upper_bound: Tensor, tighten_bounds_for_loss: float = 0.0, eps: float = 1e-10) Dict[str, Tensor]

Loss to penalize steric clashes within residues.

This is a loss penalizing any steric violations or clashes of non-bonded atoms in a given peptide. This loss corresponds to the intra-residue part of equation 46 (Supplementary ‘1.9.11 Structural violations’).

Parameters:
  • atom14_pred_positions ([*, N, 14, 3]) – Predicted positions of atoms in global prediction frame.

  • atom14_atom_exists ([*, N, 14]) – Mask denoting whether atom at positions exists for given amino acid type

  • atom14_dists_lower_bound ([*, N, 14]) – Lower bound on allowed distances.

  • atom14_dists_upper_bound ([*, N, 14]) – Upper bound on allowed distances

  • tighten_bounds_for_loss ([*, N]) – Extra factor to tighten loss

Returns:

  • ‘per_atom_loss_sum’ ([*, N, 14]): sum of all clash losses per atom

  • ‘per_atom_clash_mask’ ([*, N, 14]): mask whether atom clashes with any other atom

Return type:

Dict containing

Module contents