Classic Graph Convolutional Networks (GCNs) learn node representations holistically, ignoring the distinct impact of different neighbors when aggregating their features to update a node's representation. Disentangled GCNs have been proposed to divide each node's representation into several feature channels. However, current disentangling methods do not determine how many latent factors the model should assign in order to extract the best representation of each node.
To address this problem, a research team led by Chuliang WENG published their new research on 15 Jan 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposed a novel disentangled graph convolutional network, D2-GCN, which dynamically adjusts the number of each node's disentangled feature channels during training, yielding adaptive node representations across different datasets.
In the research, they design a two-level disentangling mechanism that integrates epoch-level and layer-level disentanglement during training. With this mechanism, D2-GCN can capture nuanced changes in node representations on graphs of varying topological complexity. They further leverage the expectation-maximization (EM) algorithm to demonstrate the convergence of the proposed dynamic disentanglement. Finally, they define an information-entropy-based evaluation metric to characterize the convergence speed of the disentangling process.
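The release does not give D2-GCN's algorithm, but the general idea of disentangled feature channels can be illustrated with a minimal NumPy sketch in the style of earlier disentangled GCNs: each node embedding is split into K channels, and every neighbor is softly routed to the channel it matches best. The function name, the fixed iteration count, and the returned mean assignment entropy (echoing, but not reproducing, the paper's entropy-based convergence metric) are all assumptions for illustration only.

```python
import numpy as np

def disentangled_aggregate(x, neighbors, num_channels, iters=3):
    """Illustrative sketch of neighbor routing over K feature channels.

    x: (N, d) node features with d divisible by num_channels.
    neighbors: dict mapping node id -> list of neighbor ids.
    Returns updated embeddings and the mean entropy of the final soft
    neighbor-to-channel assignments (lower entropy = crisper routing).
    NOTE: a generic disentangling sketch, not the authors' D2-GCN.
    """
    N, d = x.shape
    K = num_channels
    c = d // K
    # Split each embedding into K channel slices and L2-normalize each slice.
    z = x.reshape(N, K, c)
    z = z / (np.linalg.norm(z, axis=2, keepdims=True) + 1e-9)
    out = z.copy()
    entropies = []
    for _ in range(iters):
        new = z.copy()
        entropies = []  # keep only the last iteration's assignment entropies
        for u in range(N):
            nbrs = neighbors.get(u, [])
            if not nbrs:
                continue
            zv = z[nbrs]                                    # (m, K, c)
            # Similarity of each neighbor's channels to u's current channels.
            logits = np.einsum('mkc,kc->mk', zv, out[u])
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)               # soft assignment
            entropies.extend((-(p * np.log(p + 1e-12)).sum(axis=1)).tolist())
            # Aggregate neighbors into the channels they were routed to.
            agg = z[u] + np.einsum('mk,mkc->kc', p, zv)
            new[u] = agg / (np.linalg.norm(agg, axis=1, keepdims=True) + 1e-9)
        out = new
    mean_entropy = float(np.mean(entropies)) if entropies else 0.0
    return out.reshape(N, d), mean_entropy
```

In a dynamic scheme like the one the paper describes, `num_channels` would be adjusted during training rather than fixed, and a falling mean entropy would indicate that the disentangling process is converging.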
Experiments show that their model outperforms all baselines in both single- and multi-label node classification tasks in terms of test accuracy. Visualization results also indicate that D2-GCN produces clearer classification boundaries and higher intra-class similarity than the baseline disentangled methods.
Future work could incorporate subgraph theory to improve the model's performance, enabling more accurate local-global disentanglement and capturing long-distance dependencies in a graph.
DOI: 10.1007/s11704-023-3339-7