gd_dl.rerank_model.Weight_CGConv
- class gd_dl.rerank_model.Weight_CGConv(channels: int | Tuple[int, int], dim: int = 0, aggr: str = 'add', batch_norm: bool = False, bias: bool = True, **kwargs)
The crystal graph convolutional operator from the “Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties” paper.
- __init__(channels: int | Tuple[int, int], dim: int = 0, aggr: str = 'add', batch_norm: bool = False, bias: bool = True, **kwargs)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
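A minimal construction sketch, assuming Weight_CGConv follows the constructor documented above (and the conventions of torch_geometric.nn.CGConv): channels is the node feature dimensionality (or an (in, out) tuple for bipartite graphs), dim is the edge feature dimensionality, and aggr selects the neighborhood aggregation. The dimensions 64 and 8 are illustrative only:

    from gd_dl.rerank_model import Weight_CGConv

    # 64-dimensional node features, 8-dimensional edge features,
    # sum aggregation followed by batch normalization of the output.
    conv = Weight_CGConv(channels=64, dim=8, aggr='add', batch_norm=True)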
Methods
- __init__(channels[, dim, aggr, batch_norm, bias]): Initialize internal Module state, shared by both nn.Module and ScriptModule.
- add_module(name, module): Add a child module to the current module.
- aggregate(inputs, index[, ptr, dim_size]): Aggregates messages from neighbors as \(\bigoplus_{j \in \mathcal{N}(i)}\).
- apply(fn): Apply fn recursively to every submodule (as returned by .children()) as well as self.
- bfloat16(): Casts all floating point parameters and buffers to bfloat16 datatype.
- buffers([recurse]): Return an iterator over module buffers.
- children(): Return an iterator over immediate children modules.
- compile(*args, **kwargs): Compile this Module's forward using torch.compile().
- cpu(): Move all model parameters and buffers to the CPU.
- cuda([device]): Move all model parameters and buffers to the GPU.
- double(): Casts all floating point parameters and buffers to double datatype.
- edge_update(): Computes or updates features for each edge in the graph.
- edge_updater(edge_index[, size]): The initial call to compute or update features for each edge in the graph.
- eval(): Set the module in evaluation mode.
- explain_message(inputs, dim_size)
- extra_repr(): Set the extra representation of the module.
- float(): Casts all floating point parameters and buffers to float datatype.
- forward(x, edge_index[, edge_attr, edge_weight]): Run the forward pass of the convolution (a usage sketch follows this list).
- get_buffer(target): Return the buffer given by target if it exists, otherwise throw an error.
- get_extra_state(): Return any extra state to include in the module's state_dict.
- get_parameter(target): Return the parameter given by target if it exists, otherwise throw an error.
- get_submodule(target): Return the submodule given by target if it exists, otherwise throw an error.
- half(): Casts all floating point parameters and buffers to half datatype.
- ipu([device]): Move all model parameters and buffers to the IPU.
- jittable([typing]): Analyzes the MessagePassing instance and produces a new jittable module that can be used in combination with torch.jit.script().
- load_state_dict(state_dict[, strict, assign]): Copy parameters and buffers from state_dict into this module and its descendants.
- message(x_i, x_j, edge_attr): Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index.
- message_and_aggregate(edge_index): Fuses computations of message() and aggregate() into a single function.
- modules(): Return an iterator over all modules in the network.
- named_buffers([prefix, recurse, ...]): Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- named_children(): Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- named_modules([memo, prefix, remove_duplicate]): Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- named_parameters([prefix, recurse, ...]): Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- parameters([recurse]): Return an iterator over module parameters.
- propagate(edge_index[, size]): The initial call to start propagating messages.
- register_aggregate_forward_hook(hook): Registers a forward hook on the module.
- register_aggregate_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_backward_hook(hook): Register a backward hook on the module.
- register_buffer(name, tensor[, persistent]): Add a buffer to the module.
- register_edge_update_forward_hook(hook): Registers a forward hook on the module.
- register_edge_update_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_forward_hook(hook, *[, prepend, ...]): Register a forward hook on the module.
- register_forward_pre_hook(hook, *[, ...]): Register a forward pre-hook on the module.
- register_full_backward_hook(hook[, prepend]): Register a backward hook on the module.
- register_full_backward_pre_hook(hook[, prepend]): Register a backward pre-hook on the module.
- register_load_state_dict_post_hook(hook): Register a post hook to be run after module's load_state_dict is called.
- register_message_and_aggregate_forward_hook(hook): Registers a forward hook on the module.
- register_message_and_aggregate_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_message_forward_hook(hook): Registers a forward hook on the module.
- register_message_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_module(name, module): Alias for add_module().
- register_parameter(name, param): Add a parameter to the module.
- register_propagate_forward_hook(hook): Registers a forward hook on the module.
- register_propagate_forward_pre_hook(hook): Registers a forward pre-hook on the module.
- register_state_dict_pre_hook(hook): Register a pre-hook for the state_dict() method.
- requires_grad_([requires_grad]): Change if autograd should record operations on parameters in this module.
- reset_parameters(): Resets all learnable parameters of the module.
- set_extra_state(state): Set extra state contained in the loaded state_dict.
- share_memory(): See torch.Tensor.share_memory_().
- state_dict(*args[, destination, prefix, ...]): Return a dictionary containing references to the whole state of the module.
- to(*args, **kwargs): Move and/or cast the parameters and buffers.
- to_empty(*, device[, recurse]): Move the parameters and buffers to the specified device without copying storage.
- train([mode]): Set the module in training mode.
- type(dst_type): Casts all parameters and buffers to dst_type.
- update(inputs): Updates node embeddings in analogy to \(\gamma_{\mathbf{\Theta}}\) for each node \(i \in \mathcal{V}\).
- xpu([device]): Move all model parameters and buffers to the XPU.
- zero_grad([set_to_none]): Reset gradients of all model parameters.
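A hedged forward-pass sketch on a toy graph. The edge_weight argument in forward(x, edge_index[, edge_attr, edge_weight]) is assumed to be a per-edge scalar that weights the corresponding message (the apparent extension over the stock CGConv, whose forward takes only x, edge_index, and edge_attr); all tensor shapes below are illustrative:

    import torch
    from gd_dl.rerank_model import Weight_CGConv

    conv = Weight_CGConv(channels=64, dim=8)

    # Toy graph: 4 nodes, 3 directed edges.
    x = torch.randn(4, 64)                  # node features [num_nodes, channels]
    edge_index = torch.tensor([[0, 1, 2],
                               [1, 2, 3]])  # connectivity [2, num_edges]
    edge_attr = torch.randn(3, 8)           # edge features [num_edges, dim]
    edge_weight = torch.rand(3)             # assumed per-edge scalar weights [num_edges]

    out = conv(x, edge_index, edge_attr=edge_attr, edge_weight=edge_weight)
    print(out.shape)                        # CGConv-style layers preserve node feature size: [4, 64]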
Attributes
- SUPPORTS_FUSED_EDGE_INDEX
- T_destination
- call_super_init
- decomposed_layers
- dump_patches
- explain
- special_args
- training