Hello Medium World,
“Graphs are everywhere!”
“Yes, they are. But can you let me know how I can use them for machine learning?”
I have been exploring graph neural networks (GNNs) for the past couple of months. Along the way, I have kept a list of the resources that helped me build a clearer understanding.
Videos:
Intro to graph neural networks (ML Tech Talks) — TensorFlow
Books:
Introduction to Graph Neural Networks — Zhiyuan Liu and Jie Zhou
Python modules:
Spektral (based on TensorFlow/Keras and can be integrated with it)
StellarGraph (built on TensorFlow/Keras; can be integrated with standard Keras layers)
PyTorch Geometric (based on PyTorch and can be integrated with it)
Data Representation:
Having mostly worked with other types of data (in Euclidean space), I found the data representation for GNNs complicated at first, but it’s not.
Graphs have nodes and edges, and we need to represent them mathematically.
For data representation, we need node features, edge information (how the nodes are connected), and edge features.
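To make that concrete, here is a minimal sketch (plain NumPy, with made-up numbers) of those three pieces for a tiny four-node graph:

```python
import numpy as np

# Node features: one row per node, one column per feature, shape (N, F).
x = np.array([[0.1, 1.0],
              [0.5, 0.0],
              [0.9, 0.3],
              [0.2, 0.7]])

# Edge information as an adjacency matrix, shape (N, N):
# a[i, j] = 1 means node i and node j are connected.
a = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Edge features: one row per edge, shape (n_edges, S), e.g. an edge weight.
e = np.array([[0.5], [1.2], [0.8], [0.3]])

print(x.shape, a.shape, e.shape)  # (4, 2) (4, 4) (4, 1)
```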
The data modes in Spektral (single, disjoint, batch, and mixed) help in understanding this from a development perspective; they mainly differ in how many graphs you pass to a layer at once and how their matrices are stacked.
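As a minimal sketch, assuming Spektral’s Graph container and made-up numbers, a single graph (single mode) can be wrapped like this:

```python
import numpy as np
from scipy import sparse
from spektral.data import Graph

# Toy numbers: 4 nodes with 2 features each, shape (N, F).
x = np.random.rand(4, 2).astype("float32")

# Adjacency as a sparse matrix; each nonzero entry is one (directed) edge.
a = sparse.csr_matrix(np.array([[0, 1, 1, 0],
                                [1, 0, 1, 0],
                                [1, 1, 0, 1],
                                [0, 0, 1, 0]], dtype="float32"))

# One row of edge features per nonzero entry of the adjacency, shape (n_edges, S).
e = np.random.rand(a.nnz, 1).astype("float32")

g = Graph(x=x, a=a, e=e)
print(g.n_nodes, g.n_node_features, g.n_edge_features)  # 4 2 1
```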
If you are interested in how GNNs actually take a look at the node features and their neighborhood: under the hood it’s matrix multiplication, and we aggregate each node’s features with its neighborhood’s features by some function. More on this in the next article! Stay tuned.
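As a small preview (a sketch using the toy graph from above, with plain summation standing in for “some function”), multiplying the adjacency matrix by the node-feature matrix aggregates every node’s neighbor features in one step:

```python
import numpy as np

# Toy graph: 4 nodes, 2 features each, same adjacency as before.
x = np.array([[0.1, 1.0],
              [0.5, 0.0],
              [0.9, 0.3],
              [0.2, 0.7]])
a = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# One "message passing" step: each node's new features are the sum of
# its neighbors' features -- plain matrix multiplication.
neighborhood_sum = a @ x

# Many GNN layers also keep the node's own features by adding self-loops
# (a + I) before multiplying.
with_self = (a + np.eye(4)) @ x

print(neighborhood_sum[0])  # neighbors of node 0 are nodes 1 and 2 -> [1.4, 0.3]
print(with_self[0])         # plus node 0's own features -> [1.5, 1.3]
```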