Recent advances in information network embedding
Many sources of our informational landscape can be formalized as networks of intertwined documents and authors. For a long time, the textual content of documents and the structure describing how documents and authors relate to each other were considered separately. Recently, document network embedding has been proposed to learn representations that take both content and structure into account. The resulting space can then be used for downstream tasks such as classification or link prediction. In this talk, I will give an overview of recent methods that aim at building such embedding spaces. In particular, I will focus on several models that were recently proposed in the ERIC Lab [1,2,3,4].
[1] R. Brochier, A. Guille and J. Velcin. Global Vectors for Node Representation. Proceedings of The Web Conference (WWW), 2019.
[2] A. Gourru, J. Velcin, J. Jacques and A. Guille. Document Network Projection in Pretrained Word Embedding Space. Proceedings of ECIR, 2020.
[3] R. Brochier, A. Guille and J. Velcin. Inductive Document Network Embedding with Topic-Word Attention. Proceedings of ECIR, 2020.
[4] A. Gourru, J. Velcin and J. Jacques. Gaussian Embedding of Linked Documents from a Pretrained Semantic Space. Proceedings of IJCAI, 2020.