gd_dl package¶
Submodules¶
gd_dl.lib_pdb_mol2 module¶
- class gd_dl.lib_pdb_mol2.MOL2(mol2_fn)¶
Bases: object
- read(read_end=None, read_hydrogen=False)¶
- write(model_index_start=0, model_index_end=None)¶
- class gd_dl.lib_pdb_mol2.MOL2_UNIT(model_no)¶
Bases: object
- add_hydrogen_index(index)¶
- append_atom_index(index)¶
- append_atom_mol2_type(mol2_type)¶
- append_coordinates(tmp_crd_list)¶
- get_atom_index_list()¶
- get_atom_mol2_type_list()¶
- get_bond_dict()¶
- get_coordinates_np_array()¶
- get_hydrogen_set()¶
- read_line(line)¶
- update_bond(start, end, bond_type)¶
- write()¶
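MOL2_UNIT accumulates per-model atom data (atom indices, mol2 atom types, coordinates, bonds) as read_line() consumes each record. A minimal sketch of parsing a Tripos ATOM record, illustrating the kind of fields involved — the helper name and layout here are hypothetical, not the actual gd_dl implementation:

```python
# Hypothetical sketch of Tripos MOL2 ATOM-record parsing. Fields follow the
# Tripos format: atom_id atom_name x y z atom_type [subst_id subst_name charge]
def parse_mol2_atom_line(line):
    fields = line.split()
    index = int(fields[0])                                   # atom index
    coords = [float(fields[2]), float(fields[3]), float(fields[4])]
    mol2_type = fields[5]                                    # e.g. "N.3", "C.ar"
    return index, mol2_type, coords

index, mol2_type, coords = parse_mol2_atom_line(
    "      1 N1          2.0550    1.2270    0.0030 N.3     1  LIG1       -0.5000"
)
```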
- class gd_dl.lib_pdb_mol2.Model(model_no=0)¶
Bases: object
- append(X)¶
- get_residue_lines(res_range=[])¶
- get_residues(res_range=[])¶
- index(key)¶
- write(exclude_remark=False, exclude_symm=False, exclude_missing_bb=False, exclude_nucl=False, exclude_SSbond=False, remark_s=[], chain_id=None)¶
- class gd_dl.lib_pdb_mol2.PDB(pdb_fn, read=True, read_het=True)¶
Bases: object
- read(read_het=True)¶
- write(exclude_remark=True, exclude_symm=False, exclude_missing_bb=True, model_index=[], remark_s=[])¶
- class gd_dl.lib_pdb_mol2.PDBline(line)¶
Bases: object
- isAtom()¶
- isHetatm()¶
- isResidue()¶
- startswith(key)¶
- class gd_dl.lib_pdb_mol2.Residue(line)¶
Bases: object
- R(atmName=None, atmIndex=None)¶
- append(line)¶
- atmIndex(atmName)¶
- atmName()¶
- chainID()¶
- check_bb()¶
- exists(atmName)¶
- get_CB()¶
- get_backbone()¶
- get_heavy()¶
- get_sc()¶
- i_atm(atmName=None, atmIndex=None)¶
- isAtom()¶
- isHetatm()¶
- isResidue()¶
- resName()¶
- resNo()¶
- resNo_char()¶
- write()¶
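Accessors such as atmName(), resName(), chainID(), and resNo() imply fixed-column parsing of PDB ATOM records. A sketch of that parsing using the standard PDB column layout — the helper is hypothetical and is not the gd_dl code itself:

```python
# Hypothetical sketch of fixed-column PDB ATOM parsing. Column ranges follow
# the wwPDB format specification (1-based columns noted in comments).
def parse_pdb_atom_line(line):
    return {
        "atmName": line[12:16].strip(),   # columns 13-16: atom name
        "resName": line[17:20].strip(),   # columns 18-20: residue name
        "chainID": line[21],              # column  22:    chain identifier
        "resNo":   int(line[22:26]),      # columns 23-26: residue sequence number
        "xyz": (float(line[30:38]),       # columns 31-54: orthogonal coordinates
                float(line[38:46]),
                float(line[46:54])),
    }

rec = parse_pdb_atom_line(
    "ATOM      2  CA  ALA A   1      11.104   6.134  -6.504  1.00  0.00           C"
)
```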
gd_dl.models module¶
- class gd_dl.models.Cgc_block(node_dim=32, edge_dim=16)¶
Bases: PatchedModule
- forward(args)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
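The note above — call the module instance, not forward() directly — can be illustrated with a torch-free sketch of the dispatch pattern. MiniModule is a hypothetical stand-in; torch.nn.Module does the same with more machinery:

```python
# Sketch: why m(x) differs from m.forward(x). __call__ runs forward() and
# then every registered hook; calling forward() directly skips the hooks.
class MiniModule:
    def __init__(self):
        self.hooks = []                  # forward hooks registered on this module

    def register_forward_hook(self, fn):
        self.hooks.append(fn)

    def forward(self, x):
        return x * 2

    def __call__(self, x):
        out = self.forward(x)
        for hook in self.hooks:          # hooks only fire on instance calls
            hook(self, x, out)
        return out

calls = []
m = MiniModule()
m.register_forward_hook(lambda mod, inp, out: calls.append(out))
y1 = m(3)          # hook fires
y2 = m.forward(3)  # hook silently skipped
```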
- class gd_dl.models.Rerank_model(node_dim_in=29, node_dim_hidden=64, edge_dim_in=17, edge_dim_hidden=32, ligand_only=True, readout='mean')¶
Bases: PatchedModule
- forward(G, n_atom)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class gd_dl.models.Weight_CGConv(channels: int | Tuple[int, int], dim: int = 0, aggr: str = 'add', batch_norm: bool = False, bias: bool = True, **kwargs)¶
Bases: MessagePassing
The crystal graph convolutional operator from the “Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties” paper. We modified the crystal graph convolutional operator to incorporate manual edge weights.
- forward(x: Tensor | Tuple[Tensor, Tensor], edge_index: Tensor | SparseTensor, edge_attr: Tensor | None = None, edge_weight: Tensor | None = None) Tensor ¶
Runs the forward pass of the module.
- message(x_i, x_j, edge_attr: Tensor | None, edge_weight: Tensor | None) Tensor ¶
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
- reset_parameters()¶
Resets all learnable parameters of the module.
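For reference, the crystal graph convolution from the cited paper updates node features roughly as below; placing the manual edge weight \(w_{i,j}\) as a scalar factor on each message is an assumption about how the modification described above enters the update:

```latex
\mathbf{x}_i' = \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} w_{i,j} \cdot
  \sigma\!\left(\mathbf{z}_{i,j}\,\mathbf{W}_f + \mathbf{b}_f\right) \odot
  g\!\left(\mathbf{z}_{i,j}\,\mathbf{W}_s + \mathbf{b}_s\right),
\qquad
\mathbf{z}_{i,j} = \left[\,\mathbf{x}_i \,\|\, \mathbf{x}_j \,\|\, \mathbf{e}_{i,j}\,\right]
```

Here \(\sigma\) is a sigmoid gate, \(g\) a softplus-style activation, and \(\mathbf{z}_{i,j}\) the concatenation of the two endpoint node features with the edge features \(\mathbf{e}_{i,j}\).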
gd_dl.path_setting module¶
gd_dl.rerank_model module¶
- class gd_dl.rerank_model.Cgc_block(node_dim=32, edge_dim=16)¶
Bases: PatchedModule
- forward(args)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class gd_dl.rerank_model.Rerank_model(node_dim_in=29, node_dim_hidden=32, edge_dim_in=17, edge_dim_hidden=16, ligand_only=True, readout='mean')¶
Bases: PatchedModule
- forward(G, n_atom)¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class gd_dl.rerank_model.Weight_CGConv(channels: int | Tuple[int, int], dim: int = 0, aggr: str = 'add', batch_norm: bool = False, bias: bool = True, **kwargs)¶
Bases: MessagePassing
The crystal graph convolutional operator from the “Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties” paper
- forward(x: Tensor | Tuple[Tensor, Tensor], edge_index: Tensor | SparseTensor, edge_attr: Tensor | None = None, edge_weight: Tensor | None = None) Tensor ¶
- message(x_i, x_j, edge_attr: Tensor | None) Tensor ¶
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
- reset_parameters()¶
Resets all learnable parameters of the module.
gd_dl.utils module¶
- gd_dl.utils.set_prep(mode, valid_idx, train_val_ratio)¶
- gd_dl.utils.str2bool(v)¶
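str2bool is the usual argparse-style string-to-boolean converter. A representative sketch — the accepted token set is an assumption, not necessarily the exact gd_dl implementation:

```python
# Hypothetical str2bool: map common truthy/falsy strings to bool, pass
# booleans through unchanged, and reject anything unrecognized.
def str2bool(v):
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise ValueError(f"Boolean value expected, got {v!r}")
```

A converter like this is typically passed as `type=str2bool` to `argparse.ArgumentParser.add_argument`, since `type=bool` would treat any non-empty string (including "False") as true.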