deepfold.distributed.model_parallel.Disable

class deepfold.distributed.model_parallel.Disable(*args, **kwargs)
__init__(*args, **kwargs)

Methods

__init__(*args, **kwargs)

apply(*args, **kwargs)
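
apply is how a custom autograd Function is invoked; subclasses are never instantiated and called directly. A minimal usage sketch, assuming Disable takes a single tensor argument (the actual call signature is not shown on this page):

    import torch
    from deepfold.distributed.model_parallel import Disable

    x = torch.randn(4, 8, requires_grad=True)
    # Custom autograd Functions are executed through .apply, which runs
    # forward() and records the node in the autograd graph.
    y = Disable.apply(x)  # single-tensor input is an assumption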

backward(ctx)

Define a formula for differentiating the operation with backward mode automatic differentiation.

forward(ctx)

Define the forward of the custom autograd Function.
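
forward computes the operation and backward returns one gradient per forward input. A generic, self-contained sketch of the classic combined style (an illustration of the base API, not DeepFold's Disable implementation):

    import torch

    class Scale(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, factor):
            ctx.factor = factor  # non-tensor state can live on ctx directly
            return x * factor

        @staticmethod
        def backward(ctx, grad_output):
            # Return one gradient per forward input; non-tensor inputs get None.
            return grad_output * ctx.factor, None

    x = torch.randn(3, requires_grad=True)
    Scale.apply(x, 2.0).sum().backward()
    # x.grad is now tensor([2., 2., 2.])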

jvp(ctx, *grad_inputs)

Define a formula for differentiating the operation with forward mode automatic differentiation.
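
jvp receives one tangent per forward input and returns one tangent per output; it is only exercised under forward-mode AD. A minimal sketch using torch.autograd.forward_ad:

    import torch
    import torch.autograd.forward_ad as fwAD

    class Exp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            result = torch.exp(x)
            ctx.result = result  # reused by the forward-mode rule below
            return result

        @staticmethod
        def jvp(ctx, x_tangent):
            # d(exp(x)) = exp(x) * dx
            return x_tangent * ctx.result

    with fwAD.dual_level():
        dual = fwAD.make_dual(torch.randn(3), torch.ones(3))
        out = fwAD.unpack_dual(Exp.apply(dual))
        # out.tangent equals exp(primal)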

mark_dirty(*args)

Mark given tensors as modified in an in-place operation.
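
mark_dirty must be called on every input that forward modifies in place, so autograd can invalidate stale references to the old values. A minimal sketch:

    import torch

    class AddOneInplace(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            x.add_(1.0)
            ctx.mark_dirty(x)  # declare the in-place modification
            return x

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output  # d(x + 1)/dx = 1

    base = torch.randn(3, requires_grad=True)
    x = base.clone()  # leaf tensors requiring grad cannot be modified in place
    AddOneInplace.apply(x).sum().backward()
    # base.grad is tensor([1., 1., 1.])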

mark_non_differentiable(*args)

Mark outputs as non-differentiable.
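
mark_non_differentiable is typically used for integer-valued outputs such as indices; backward must still accept one gradient slot per output. A sketch modeled on a 1-D sort:

    import torch

    class Sort(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            values, indices = torch.sort(x)
            ctx.mark_non_differentiable(indices)  # indices carry no gradient
            ctx.save_for_backward(indices)
            return values, indices

        @staticmethod
        def backward(ctx, grad_values, grad_indices):
            # grad_indices is never meaningful; route grad_values back
            # to the original (unsorted) positions.
            (indices,) = ctx.saved_tensors
            grad_x = torch.zeros_like(grad_values)
            grad_x.index_add_(0, indices, grad_values)
            return grad_x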

mark_shared_storage(*pairs)

Deprecated; tensors that share storage are now tracked automatically.

maybe_clear_saved_tensors

name

Return the name of this node.

register_hook

Register a backward hook on this node.

register_prehook

Register a backward pre-hook on this node.

save_for_backward(*tensors)

Save given tensors for a future call to backward().
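
Tensors needed by backward should go through save_for_backward rather than plain ctx attributes, so autograd can detect in-place modification of saved values and support double backward. A minimal sketch:

    import torch

    class Mul(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, y):
            ctx.save_for_backward(x, y)  # retrieved via ctx.saved_tensors
            return x * y

        @staticmethod
        def backward(ctx, grad_output):
            x, y = ctx.saved_tensors
            return grad_output * y, grad_output * x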

save_for_forward(*tensors)

Save given tensors for a future call to jvp().
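
save_for_forward is the forward-mode counterpart of save_for_backward: tensors saved here come back as ctx.saved_tensors inside jvp(). A minimal sketch:

    import torch
    import torch.autograd.forward_ad as fwAD

    class Square(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_forward(x)  # feeds jvp(), not backward()
            return x * x

        @staticmethod
        def jvp(ctx, x_tangent):
            (x,) = ctx.saved_tensors
            return 2.0 * x * x_tangent

    with fwAD.dual_level():
        dual = fwAD.make_dual(torch.ones(2), torch.full((2,), 3.0))
        tangent = fwAD.unpack_dual(Square.apply(dual)).tangent  # 2 * 1 * 3 = 6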

set_materialize_grads(value)

Set whether to materialize grad tensors.
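
By default, undefined output gradients are materialized as zero tensors before backward is called; disabling this lets backward receive None and skip the corresponding work. A sketch:

    import torch

    class TwoCopies(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.set_materialize_grads(False)  # undefined grads arrive as None
            return x.clone(), x.clone()

        @staticmethod
        def backward(ctx, g1, g2):
            # With materialization off, each incoming grad may be None
            # and must be checked before use.
            grad = None
            for g in (g1, g2):
                if g is not None:
                    grad = g if grad is None else grad + g
            return grad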

setup_context(ctx, inputs, output)

There are two ways to define the forward pass of an autograd.Function.
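
In the second style, forward takes no ctx; saving state moves into setup_context instead. This split is what allows the Function to compose with torch.func transforms such as vmap and grad. A sketch:

    import torch

    class Mul(torch.autograd.Function):
        @staticmethod
        def forward(x, y):  # no ctx in this style
            return x * y

        @staticmethod
        def setup_context(ctx, inputs, output):
            x, y = inputs
            ctx.save_for_backward(x, y)

        @staticmethod
        def backward(ctx, grad_output):
            x, y = ctx.saved_tensors
            return grad_output * y, grad_output * x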

vjp(ctx, *grad_outputs)

Define a formula for differentiating the operation with backward mode automatic differentiation.

vmap(info, in_dims, *args)

Define the behavior for this autograd.Function underneath torch.vmap().
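
A custom vmap staticmethod receives the batch info, the per-input batch dimensions, and the batched arguments, and returns the batched outputs together with their output dimensions; this requires the setup_context style shown above. Alternatively, setting generate_vmap_rule = True asks PyTorch to derive the rule automatically. A sketch for an elementwise op:

    import torch

    class Double(torch.autograd.Function):
        @staticmethod
        def forward(x):
            return x * 2

        @staticmethod
        def setup_context(ctx, inputs, output):
            pass  # nothing to save

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output * 2

        @staticmethod
        def vmap(info, in_dims, x):
            # Elementwise op: the batched tensor passes straight through,
            # and the output batch dim mirrors the input batch dim.
            return x * 2, in_dims[0]

    y = torch.vmap(Double.apply)(torch.randn(5, 3))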

Attributes

dirty_tensors

generate_vmap_rule

materialize_grads

metadata

needs_input_grad

next_functions

non_differentiable

requires_grad

saved_for_forward

saved_tensors

saved_variables

to_save
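
Most of these attributes are read, not set, inside backward; needs_input_grad in particular is a tuple of booleans, one per forward input, that lets backward skip gradients nobody asked for. A sketch combining it with saved_tensors:

    import torch

    class MatMul(torch.autograd.Function):
        @staticmethod
        def forward(ctx, a, b):
            ctx.save_for_backward(a, b)
            return a @ b

        @staticmethod
        def backward(ctx, grad_output):
            a, b = ctx.saved_tensors
            grad_a = grad_b = None
            # Skip gradient computation for inputs that do not require it.
            if ctx.needs_input_grad[0]:
                grad_a = grad_output @ b.t()
            if ctx.needs_input_grad[1]:
                grad_b = a.t() @ grad_output
            return grad_a, grad_b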