siml.networks.pyg package¶
Submodules¶
siml.networks.pyg.abstract_pyg_gcn module¶
- class siml.networks.pyg.abstract_pyg_gcn.AbstractPyGGCN(block_setting, *, create_subchain=True, residual=False, multiple_networks=None)¶
Bases: AbstractGCN
- block_setting: BlockSetting¶
- training: bool¶
siml.networks.pyg.cluster_gcn module¶
- class siml.networks.pyg.cluster_gcn.ClusterGCN(block_setting)¶
Bases: AbstractPyGGCN
Cluster-GCN based on https://arxiv.org/abs/1905.07953.
- block_setting: BlockSetting¶
- static get_name()¶
Abstract method to be overridden by the subclass. It should return a str indicating its name.
- training: bool¶
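As a rough illustration of how such a block is constructed (the same pattern applies to the other blocks in this package, e.g. GCNII and GIN below), here is a minimal sketch; the BlockSetting field names (type, nodes, activations) and the type string 'cluster_gcn' are assumptions about siml.setting.BlockSetting, not verified against the library:

    # Minimal sketch, not verified against siml: the BlockSetting fields
    # (type, nodes, activations) and the "cluster_gcn" type string are
    # assumptions; the type string is presumed to match ClusterGCN.get_name().
    import siml.setting as setting
    from siml.networks.pyg.cluster_gcn import ClusterGCN

    block_setting = setting.BlockSetting(
        type='cluster_gcn',            # assumed block type identifier
        nodes=[16, 16, 16],            # assumed layer widths
        activations=['tanh', 'tanh'])  # assumed per-layer activations
    block = ClusterGCN(block_setting)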
siml.networks.pyg.gcnii module¶
- class siml.networks.pyg.gcnii.GCNII(block_setting)¶
Bases: AbstractPyGGCN
GCNII based on https://arxiv.org/abs/2007.02133.
- block_setting: BlockSetting¶
- static get_name()¶
Abstract method to be overridden by the subclass. It should return a str indicating its name.
- training: bool¶
siml.networks.pyg.gcnii_pytorch_geometric module¶
- class siml.networks.pyg.gcnii_pytorch_geometric.GCN2Conv(channels: int, alpha: float, theta: float | None = None, layer: int | None = None, shared_weights: bool = True, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, **kwargs)¶
Bases: MessagePassing
The graph convolutional operator with initial residual connections and identity mapping (GCNII) from the “Simple and Deep Graph Convolutional Networks” paper:

\[\mathbf{X}^{\prime} = \left( (1 - \alpha) \mathbf{\hat{P}}\mathbf{X} + \alpha \mathbf{X}^{(0)}\right) \left( (1 - \beta) \mathbf{I} + \beta \mathbf{\Theta} \right)\]

with \(\mathbf{\hat{P}} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\), where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix, and \(\mathbf{X}^{(0)}\) the initial feature representation. Here, \(\alpha\) models the strength of the initial residual connection, while \(\beta\) models the strength of the identity mapping.
- Parameters:
channels (int) – Size of each input and output sample.
alpha (float) – The strength of the initial residual connection \(\alpha\).
theta (float, optional) – The hyperparameter \(\theta\) to compute the strength of the identity mapping \(\beta = \log \left( \frac{\theta}{\ell} + 1 \right)\). (default: None)
layer (int, optional) – The layer \(\ell\) in which this module is executed. (default: None)
shared_weights (bool, optional) – If set to False, will use different weight matrices for the smoothed representation and the initial residual (“GCNII*”). (default: True)
cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
normalize (bool, optional) – Whether to add self-loops and apply symmetric normalization. (default: True)
add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- forward(*args: Any, **kwargs: Any) Any ¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- message(x_j: Tensor, edge_weight: Tensor) Tensor¶
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
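To make the suffix convention concrete, the toy operator below (not part of siml or torch_geometric) receives both endpoint features in message() simply by naming its arguments x_i and x_j:

    # Toy operator illustrating the _i/_j suffix convention of MessagePassing.
    import torch
    from torch_geometric.nn import MessagePassing

    class DiffConv(MessagePassing):
        def __init__(self):
            super().__init__(aggr='mean')

        def forward(self, x, edge_index):
            # x is handed to propagate(); PyG derives x_i and x_j from it.
            return self.propagate(edge_index, x=x)

        def message(self, x_i, x_j):
            # x_i: target-node features, x_j: source-node features, per edge.
            return x_j - x_i

    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
    out = DiffConv()(x, edge_index)  # mean of (x_j - x_i) over incoming edges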
- message_and_aggregate(adj_t: SparseTensor, x: Tensor) Tensor¶
Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called in case it is implemented and propagation takes place based on a torch_sparse.SparseTensor or a torch.sparse.Tensor.
- reset_parameters()¶
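A usage sketch for this operator, assuming the vendored class keeps the upstream torch_geometric call signature forward(x, x_0, edge_index, edge_weight=None); the numbers are illustrative only:

    # Sketch assuming the upstream torch_geometric signature
    # forward(x, x_0, edge_index, edge_weight=None) is preserved here.
    import math
    import torch
    from siml.networks.pyg.gcnii_pytorch_geometric import GCN2Conv

    x0 = torch.randn(4, 16)                  # initial features X^(0)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])

    conv = GCN2Conv(channels=16, alpha=0.1, theta=0.5, layer=1)
    x = conv(x0, x0, edge_index)             # in the first layer, X = X^(0)

    # With theta = 0.5 and layer l = 1, the identity-mapping strength is
    # beta = log(theta / l + 1) = log(1.5) ≈ 0.405.
    beta = math.log(0.5 / 1 + 1)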
- siml.networks.pyg.gcnii_pytorch_geometric.glorot(tensor)¶
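Glorot (Xavier) uniform initialization draws weights from \(U(-a, a)\) with \(a = \sqrt{6 / (\mathrm{fan\_in} + \mathrm{fan\_out})}\). The helper's exact body is not shown in this reference, so the following is a sketch of the common formulation, not a quote of the vendored code:

    # Sketch of a Glorot/Xavier uniform initializer; the vendored helper's
    # exact body is an assumption based on the standard formulation.
    import math
    import torch

    def glorot_sketch(tensor: torch.Tensor) -> None:
        if tensor is not None:
            bound = math.sqrt(6.0 / (tensor.size(-2) + tensor.size(-1)))
            tensor.data.uniform_(-bound, bound)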
siml.networks.pyg.gin module¶
- class siml.networks.pyg.gin.GIN(block_setting)¶
Bases: AbstractPyGGCN
Graph Isomorphism Network based on https://arxiv.org/abs/1810.00826.
- block_setting: BlockSetting¶
- static get_name()¶
Abstract method to be overridden by the subclass. It should return a str indicating its name.
- training: bool¶