#network-science/graph-embedding

Graph embedding maps a network into a vector space, which lets us explore the relationships between nodes using simple vector operations, or feed the node vectors into machine learning algorithms.

Spectral embedding

Spectral embedding is a technique that generates an embedding by decomposing a matrix that represents a network. This matrix can take various forms, such as an adjacency matrix, a Laplacian matrix, or a random walk transition matrix. Spectral embedding is rooted in the principles of linear algebra and its properties have been extensively studied using tools from random matrix theory.
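As a minimal sketch, the Laplacian variant can be computed with plain NumPy: build the symmetric normalized Laplacian of a small graph (here, two triangles joined by a single edge; the graph is illustrative) and use its low-frequency eigenvectors as node coordinates.

```python
import numpy as np

# Toy spectral embedding via the normalized Laplacian
# L = I - D^{-1/2} A D^{-1/2}. Graph: two triangles {0,1,2} and {3,4,5}
# joined by the edge (2, 3). All values are illustrative.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# Eigenvectors sorted by eigenvalue; skip the trivial first one.
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]  # a 2-dimensional vector per node
```

The second eigenvector (the Fiedler vector) already separates the two triangles by sign, which is why spectral embeddings are closely related to community detection.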

Neural graph embedding

Neural graph embedding is a technique that utilizes neural networks. It can be traced back to two distinct branches: one originating from neural networks used in natural language processing, and the other from those used in image processing.

Embedding based on language models

This branch originates from neural language models used in natural language processing. Language models are designed to analyze text, which is inherently sequential data. Although networks are not inherently sequential, language models can be applied to them by transforming the network into sequences of nodes, for example by performing random walks.
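The random-walk idea can be sketched in a few lines (in the spirit of DeepWalk; the graph and walk parameters below are illustrative): walks over the graph become the "sentences" that a word2vec-style model would then embed.

```python
import random

# Toy graph as an adjacency dict; node names are illustrative.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(start, length, rng):
    """Generate one walk of `length` nodes, starting from `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
# Three walks per node form the "corpus" of node sentences.
corpus = [random_walk(node, 5, rng) for node in graph for _ in range(3)]
# Feeding `corpus` to a word2vec-style model (e.g. gensim's Word2Vec)
# would then yield one vector per node.
```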

Embedding based on image processing

Graph convolution is a technique that originated from image processing. An image can be represented as a regular graph, in which each pixel is a node connected to its adjacent pixels, with the pixel's RGB color as the node's feature vector. Convolution, the basic operation, aggregates information from neighboring pixels and is used to identify edges, shapes, and objects.
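A minimal sketch of this view: a grayscale image is a grid graph, and a 3x3 averaging kernel is just an aggregation over each pixel's neighborhood (the image values and kernel are illustrative).

```python
import numpy as np

# A tiny grayscale "image": each entry is a node on a 4x4 grid graph.
img = np.arange(16, dtype=float).reshape(4, 4)

kernel = np.ones((3, 3)) / 9.0  # uniform neighborhood aggregation

def convolve2d(x, k):
    """Naive 2D convolution with zero padding (correlation, symmetric kernel)."""
    kh, kw = k.shape
    padded = np.pad(x, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * k).sum()
    return out

smoothed = convolve2d(img, kernel)  # each pixel averaged with its neighbors
```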

The concept of convolution has been extended to irregular graphs, such as social networks. This extension allows convolutional neural networks to be applied to graphs using fast localized spectral filtering.
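A single layer of the widely used simplification of this idea (the propagation rule popularized by Kipf & Welling, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W)) can be sketched with NumPy; the graph, feature dimension, and weights below are illustrative, not learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# A path graph on 3 nodes; features H and weights W are random stand-ins.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))   # node feature matrix
W = rng.normal(size=(4, 2))   # layer weight matrix

A_hat = A + np.eye(3)                      # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H_next = np.maximum(A_norm @ H @ W, 0.0)   # ReLU activation
```

Each node's new representation mixes its own features with those of its neighbors, weighted by degree; stacking such layers widens the receptive field one hop at a time.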

In addition to graph convolution, different aggregation strategies can be adopted, including concatenation and pooling.
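The two strategies can be contrasted in a toy sketch (in the spirit of GraphSAGE; the features and neighborhoods below are illustrative): mean pooling collapses the neighborhood into one vector, while concatenation keeps the node's own features separate from the aggregated ones.

```python
import numpy as np

# Illustrative node features and adjacency lists.
features = {
    0: np.array([1.0, 0.0]),
    1: np.array([0.0, 1.0]),
    2: np.array([1.0, 1.0]),
}
neighbors = {0: [1, 2], 1: [0], 2: [0]}

def mean_pool(v):
    """Aggregate a node's neighborhood by averaging neighbor features."""
    return np.mean([features[u] for u in neighbors[v]], axis=0)

def concat(v):
    """Concatenate the node's own features with the pooled neighborhood."""
    return np.concatenate([features[v], mean_pool(v)])
```

Note that concatenation doubles the dimensionality, letting the model distinguish "self" from "neighborhood" information, whereas pooling keeps the dimension fixed.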

For scalable computation, the network can be partitioned into mini-batches so that each training step touches only a small subgraph. This approach is discussed in the paper "Cluster-GCN".
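The core idea can be sketched as follows: partition the nodes into clusters and train on the subgraph induced by each cluster, so only a small adjacency block is in memory per step. Here the partition is a random split for simplicity; Cluster-GCN itself uses a graph clustering algorithm such as METIS.

```python
import random

# A 6-node cycle as an illustrative graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
nodes = list(range(6))

rng = random.Random(0)
rng.shuffle(nodes)
clusters = [nodes[:3], nodes[3:]]  # two mini-batch clusters

batches = []
for cluster in clusters:
    members = set(cluster)
    # Keep only edges internal to the cluster: the induced subgraph.
    sub_edges = [(u, v) for u, v in edges if u in members and v in members]
    batches.append((sorted(cluster), sub_edges))
    # A GCN training step would now run on this subgraph only.
```

The trade-off is that edges crossing cluster boundaries are dropped, which is why the quality of the partition matters.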

Modeling networks with embedding

The growth of networks can be modeled by a geometric branching process, in which a parent node splits into multiple offspring that inherit the properties of the parent. This process regenerates numerous ubiquitous network features, including degree heterogeneity and stable community structure, and is the inverse transformation of geometric renormalization.
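A toy simulation of the inheritance idea (the seed graph, parameters, and inheritance rule below are illustrative, not the model from the literature): at each step a random parent spawns one offspring that copies the parent's connections, a duplication-style mechanism associated with heavy-tailed degree distributions.

```python
import random

def grow(steps, rng):
    """Grow a network by node duplication: the child inherits the
    parent's neighbors and also links to the parent itself."""
    neighbors = {0: {1}, 1: {0}}  # seed: a single edge
    for _ in range(steps):
        parent = rng.choice(list(neighbors))
        child = len(neighbors)
        neighbors[child] = set(neighbors[parent]) | {parent}
        for u in neighbors[child]:
            neighbors[u].add(child)  # keep adjacency symmetric
    return neighbors

net = grow(200, random.Random(42))
degrees = sorted((len(v) for v in net.values()), reverse=True)
```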
