NS Seminar

Date and Location

Feb 09, 2018 - 3:00pm to 4:00pm
NS Lab, Bldg 434, room 122

Abstract

Deep Neural Networks for Learning Graph Representations (presented by Ashwini Patil, Computer Science)

Cao, S., Lu, W., & Xu, Q. (2016, February). Deep Neural Networks for Learning Graph Representations. In AAAI (pp. 1145-1152).

In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theoretical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach, which involves the use of SVD for finding the low-dimensional projections from the PMI matrix, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other state-of-the-art models in such tasks.
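As a rough illustration of the two ingredients the abstract names, the sketch below computes a co-occurrence matrix by random surfing with restart and then its positive PMI. This is a hypothetical reconstruction, not the paper's code: the function names, the step count, and the restart parameter `alpha` are illustrative assumptions.

```python
import numpy as np

def random_surf(adj, steps=4, alpha=0.98):
    """Accumulate transition probabilities by random surfing with restart.

    adj:   row-stochastic transition matrix of the graph, shape (n, n).
    alpha: probability of continuing the surf rather than restarting.
    """
    n = adj.shape[0]
    p = np.eye(n)          # p[i] = distribution over vertices after k steps from i
    m = np.zeros((n, n))   # accumulated co-occurrence statistics
    for _ in range(steps):
        p = alpha * (p @ adj) + (1 - alpha) * np.eye(n)
        m += p
    return m

def ppmi(m):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = m.sum()
    row = m.sum(axis=1, keepdims=True)
    col = m.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((m * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0   # zero out entries that came from log(0)
    return np.maximum(pmi, 0.0)
```

In the model described above, the resulting PPMI matrix is fed to a stacked denoising autoencoder rather than to SVD; that stage is omitted here.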

 

metapath2vec: Scalable Representation Learning for Heterogeneous Networks (presented by Isaac Mackey, Computer Science)

Dong, Y., Chawla, N. V., & Swami, A. (2017, August). metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 135-144). ACM.

We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.
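To make the idea of a meta-path-based random walk concrete, here is a minimal sketch: the walk only moves to neighbors whose type matches the next position of the meta path (e.g. "APA" for author-paper-author). The graph representation and function name are illustrative assumptions, not the authors' implementation.

```python
import random

def metapath_walk(neighbors, node_types, start, metapath, length):
    """Generate one meta-path-guided random walk.

    neighbors:  dict mapping each node to its list of neighbors.
    node_types: dict mapping each node to a type label, e.g. "A" or "P".
    metapath:   type sequence such as "APA" (author-paper-author).
    """
    # For a symmetric meta path like "APA", cycle through "AP".
    pattern = metapath[:-1] if metapath[0] == metapath[-1] else metapath
    walk = [start]
    while len(walk) < length:
        current = walk[-1]
        wanted = pattern[len(walk) % len(pattern)]
        candidates = [v for v in neighbors[current] if node_types[v] == wanted]
        if not candidates:
            break  # dead end: no neighbor of the required type
        walk.append(random.choice(candidates))
    return walk
```

The walks produced this way would then feed a heterogeneous skip-gram model; that training stage is not sketched here.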

 

Co-domain Embedding using Deep Quadruplet Networks for Unseen Traffic Sign Recognition (presented by James Bird, Computer Science)

Kim, J., Lee, S., Oh, T. H., & Kweon, I. S. (2017). Co-domain Embedding using Deep Quadruplet Networks for Unseen Traffic Sign Recognition. arXiv preprint arXiv:1712.01907.

Recent advances in visual recognition show overarching success by virtue of large amounts of supervised data. However, the acquisition of a large supervised dataset is often challenging. This is also true for intelligent transportation applications, i.e., traffic sign recognition. For example, a model trained with data of one country may not be easily generalized to another country without much data. We propose a novel feature embedding scheme for unseen class classification when the representative class template is given. Traffic signs, unlike other objects, have official images. We perform co-domain embedding using a quadruple relationship from real and synthetic domains. Our quadruplet network fully utilizes the explicit pairwise similarity relationships among samples from different domains. We validate our method on three datasets with two experiments involving one-shot classification and feature generalization. The results show that the proposed method outperforms competing approaches on both seen and unseen classes.
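The gist of the approach can be sketched as follows: a hinge-style loss over quadruplets that span the real and synthetic domains, plus one-shot classification by nearest template in the shared embedding space. This is a simplified illustration under assumed distance and loss choices, not the paper's actual formulation.

```python
import numpy as np

def quadruplet_loss(anchor, template, neg_real, neg_template, margin=1.0):
    """Hinge-style loss over a quadruplet spanning both domains.

    anchor:       embedding of a real traffic-sign image.
    template:     embedding of the matching synthetic (official) template.
    neg_real:     embedding of a real image from a different class.
    neg_template: embedding of a template from a different class.
    """
    d = lambda x, y: float(np.sum((x - y) ** 2))  # squared Euclidean distance
    return (max(0.0, margin + d(anchor, template) - d(anchor, neg_real))
            + max(0.0, margin + d(anchor, template) - d(anchor, neg_template)))

def classify_by_template(query, template_embs, labels):
    """One-shot classification: nearest template in the shared embedding space."""
    dists = np.linalg.norm(template_embs - query, axis=1)
    return labels[int(np.argmin(dists))]
```

Because classification only needs a class's official template image, unseen classes can be handled without any real training examples for them.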