deepfold.train package¶
Submodules¶
deepfold.train.gradient_clipping module¶
- class deepfold.train.gradient_clipping.AsyncGradientClipping(device: device, comm_group: ProcessGroup | None = None, norm_type: float = 2.0)¶
Bases: object
- get_clip_scale(max_norm: float, eps: float = 1e-06) → float¶
- deepfold.train.gradient_clipping.update_norm_from_buckets(state: AsyncGradientClipping, bucket: GradBucket) → Future[Tensor]¶
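The entries above list only signatures. As a rough illustration of what a clip-scale helper like get_clip_scale presumably computes, here is the standard global-norm clipping rule (scale gradients so their total norm does not exceed max_norm). This is a sketch with plain Python floats, not the package's tensor- and bucket-based implementation; the exact formula and the way the accumulated norm is held in AsyncGradientClipping state are assumptions.

```python
import math

def get_clip_scale(grad_norm: float, max_norm: float, eps: float = 1e-6) -> float:
    """Return the factor by which gradients are multiplied so that the
    global gradient norm is capped at max_norm. A value of 1.0 means
    the gradients are already within bounds and no clipping occurs."""
    return min(1.0, max_norm / (grad_norm + eps))

def global_grad_norm(grads: list[list[float]], norm_type: float = 2.0) -> float:
    """Global p-norm over all gradient values (here: nested lists of floats)."""
    total = sum(abs(v) ** norm_type for g in grads for v in g)
    return total ** (1.0 / norm_type)

# Example: two gradient "tensors" whose combined 2-norm is 5.0
grads = [[3.0, 0.0], [0.0, 4.0]]
norm = global_grad_norm(grads)          # 5.0
scale = get_clip_scale(norm, max_norm=1.0)  # ≈ 0.2, shrinks the norm to ~1.0
```

In the package, the norm is accumulated asynchronously across DDP gradient buckets (see update_norm_from_buckets, which hooks into GradBucket futures) before the scale is applied.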
deepfold.train.lr_scheduler module¶
- class deepfold.train.lr_scheduler.AlphaFoldLRScheduler(init_lr: float, final_lr: float, warmup_lr_length: int, init_lr_length: int, optimizer: Optimizer)¶
Bases: object
AlphaFold learning rate schedule, as described in the AlphaFold Supplementary Information, Section 1.11.3, "Optimization details".
- step(iteration: int) → None¶
- class deepfold.train.lr_scheduler.OpenFoldBenchmarkLRScheduler(base_lr: float, warmup_lr_init: float, warmup_lr_iters: int, optimizer: Optimizer)¶
Bases: object
- deepfold.train.lr_scheduler.get_learning_rate(optimizer: Optimizer) → float¶
- deepfold.train.lr_scheduler.set_learning_rate(optimizer: Optimizer, lr_value: float) → None¶
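Reading from AlphaFoldLRScheduler's parameter names, a plausible shape for the schedule is: linear warmup to init_lr over warmup_lr_length iterations, a plateau at init_lr until init_lr_length, then a drop to final_lr. The sketch below illustrates that shape; the default values and the exact plateau/decay behavior are assumptions, not taken from the package.

```python
def alphafold_lr(iteration: int,
                 init_lr: float = 1e-3,
                 final_lr: float = 5e-5,
                 warmup_lr_length: int = 1000,
                 init_lr_length: int = 50_000) -> float:
    """Hypothetical per-iteration learning rate mirroring the documented
    constructor parameters: warm up linearly, hold, then decay."""
    if iteration < warmup_lr_length:
        # Linear warmup from ~0 up to init_lr.
        return init_lr * (iteration + 1) / warmup_lr_length
    if iteration < init_lr_length:
        # Plateau at the peak learning rate.
        return init_lr
    # After the plateau, use the reduced final rate.
    return final_lr
```

In the actual class, step(iteration) would compute such a value and write it into the wrapped Optimizer, presumably via set_learning_rate (which, by its signature, assigns lr_value to the optimizer's parameter groups).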
deepfold.train.validation_metrics module¶
- deepfold.train.validation_metrics.compute_validation_metrics(predicted_atom_positions: Tensor, target_atom_positions: Tensor, atom_mask: Tensor, metrics_names: Set[str]) → Dict[str, Tensor]¶
- deepfold.train.validation_metrics.drmsd(structure_1: Tensor, structure_2: Tensor, mask: Tensor | None = None) → Tensor¶
- deepfold.train.validation_metrics.gdt(p1: Tensor, p2: Tensor, mask: Tensor, cutoffs: List[float]) → Tensor¶
- deepfold.train.validation_metrics.gdt_ha(p1: Tensor, p2: Tensor, mask: Tensor) → Tensor¶
- deepfold.train.validation_metrics.gdt_ts(p1: Tensor, p2: Tensor, mask: Tensor) → Tensor¶