Faculty Contact: Ambuj Singh
Abstract: Consider a dynamic undirected network in which every timestamp shares the same vertex set but carries time-varying edge weights. On each vertex i at time point t, a model (parameterized by x_it) is learned to fit the observed variable y_it with respect to a predefined convex loss function f_it. Because observations at each vertex and timestamp are sparse, my task is to find jointly optimal models for all vertices in the dynamic network through an optimization approach, regularized by both the network structure and the temporal evolution. To encourage adjacent nodes' variables to take similar values, a network penalty term penalizes the difference between every pair of connected node variables, weighted by the edge weight w_tjk. To enforce temporal smoothness, differences between node variables at adjacent timestamps are weighted by w_tj, which reflects how node values tend to vary between t and t+1. Furthermore, unlike most graph-based approaches, which apply temporal regularization to all vertices uniformly, I introduce a tolerance variable β_it on each node between adjacent timestamps: if the optimal solution indicates that x_it should lie far from x_i,t+1 in the vector space, then β_it increases to compensate. An additional regularizer Φ on β avoids overfitting. By allowing such heterogeneous temporal variation, this framework should capture complex temporal behaviors on dynamic networks more accurately.
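For concreteness, one plausible written-out form of the objective described above is sketched below. The exact penalty shapes (squared network differences, a hinge-shaped temporal term with tolerance β, and a quadratic Φ) are illustrative assumptions consistent with the description, chosen so the problem is convex in x for fixed β and convex in β for fixed x:

```latex
\min_{x,\;\beta \ge 0}\;
\sum_{i,t} f_{it}(x_{it};\, y_{it})
\;+\; \lambda_1 \sum_{t} \sum_{(j,k)\in E_t} w_{tjk}\,\lVert x_{jt} - x_{kt}\rVert_2^2
\;+\; \lambda_2 \sum_{t} \sum_{j} w_{tj}\,\bigl(\lVert x_{jt} - x_{j,t+1}\rVert_2^2 - \beta_{jt}\bigr)_+
\;+\; \Phi(\beta),
\qquad \text{e.g. } \Phi(\beta) = \lambda_3 \sum_{j,t} \beta_{jt}^2 .
```

Under this choice the hinge term vanishes whenever β_jt absorbs the squared temporal difference, so nodes with genuinely large changes between t and t+1 can pay a fixed β cost instead of the full smoothness penalty.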
Formulating the problem as shown above has three main advantages: (1) it overcomes the sparsity issue in both the spatial and temporal dimensions, (2) it ensures temporal consistency, and (3) it enables heterogeneous temporal variation. This formulation lets the model at each node borrow strength from the fact that spatially and temporally neighboring nodes should have similar, or even identical, models. Therefore, even if node i has no observed value at time point t, it can still learn a model close to those of its neighbors. The temporal penalty ensures temporal consistency, the idea that, most of the time, adjacent timestamps should yield very similar estimates of the network. Lastly, adding flexibility to the temporal penalty term (through the tolerance variable β_it) allows me to capture different types of temporal variation arising from external and internal effects.
Proposed methodology: Since the described problem is computationally expensive, one needs a scalable algorithm in which each node performs its own optimization task simultaneously and communicates with the rest of the network. I therefore plan to derive a message-passing algorithm that simultaneously updates the variables x and β. Note that the objective function is biconvex (in x and β), so an alternating convex search (ACS) algorithm can be applied and will converge to a local optimum. Since the objective is also jointly convex, the local optimum obtained by ACS is also the global optimum. Each ACS iteration has two main steps: an x-update with β fixed and a β-update with x fixed. For the β-update, I plan to employ an efficient convex solver such as CVXPY. For the x-update, I will investigate distributed convex optimization algorithms such as the Alternating Direction Method of Multipliers (ADMM), as well as other machine learning algorithms that can be scaled on parallel architectures such as Google's MapReduce.
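The ACS loop above can be sketched on a toy instantiation. Everything specific here is an assumption for illustration: a squared loss, a scalar variable per node, the hinge-shaped temporal penalty with tolerance β and Φ(β) = λ·Σβ², and plain gradient steps standing in for the distributed ADMM x-update. The β-update happens to have a closed form in this instantiation, so no external solver is needed:

```python
import numpy as np

def objective(x, beta, y, edges, w_net, w_tmp, lam):
    """Toy objective: squared loss + network penalty + hinged temporal
    penalty with tolerance beta + quadratic regularizer on beta."""
    loss = np.sum((x - y) ** 2)                      # f_it: squared loss
    net = 0.0
    for t in range(x.shape[0]):
        for j, k in edges:                           # network penalty per edge
            net += w_net * (x[t, j] - x[t, k]) ** 2
    slack = np.maximum((x[:-1] - x[1:]) ** 2 - beta, 0.0)  # hinge term
    return loss + net + w_tmp * np.sum(slack) + lam * np.sum(beta ** 2)

def acs(y, edges, w_net=1.0, w_tmp=1.0, lam=0.5, iters=30, x_steps=40, lr=0.05):
    """Alternating convex search: exact beta-update, gradient-descent x-update.
    y has shape (T, n): one scalar observation per node per timestamp."""
    T, n = y.shape
    x, beta = y.copy(), np.zeros((T - 1, n))
    for _ in range(iters):
        # beta-update (x fixed): per-entry closed form of
        #   min_{b>=0} w_tmp*(d^2 - b)_+ + lam*b^2  =>  b* = min(d^2, w_tmp/(2*lam))
        d2 = (x[:-1] - x[1:]) ** 2
        beta = np.minimum(d2, w_tmp / (2.0 * lam))
        # x-update (beta fixed): (sub)gradient descent on the convex subproblem
        for _ in range(x_steps):
            g = 2.0 * (x - y)                        # loss gradient
            for t in range(T):
                for j, k in edges:                   # network penalty gradient
                    d = 2.0 * w_net * (x[t, j] - x[t, k])
                    g[t, j] += d
                    g[t, k] -= d
            diff = x[:-1] - x[1:]
            active = (diff ** 2 > beta).astype(float)  # hinge active?
            gt = 2.0 * w_tmp * active * diff
            g[:-1] += gt
            g[1:] -= gt
            x = x - lr * g
    return x, beta
```

In a real implementation the x-update would be replaced by ADMM-style message passing so each node solves only its local subproblem and exchanges dual variables with its neighbors; the alternating structure stays the same.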
Broader Impact: A domain-independent framework that learns accurate models from time-varying networks and predicts future timestamps will have broad impact across many scientific fields, given the interdisciplinary nature of the problem. For example, an engineer working on road network planning could employ this framework to anticipate upcoming traffic trends and plan future road extensions and construction accordingly. Similarly, a researcher studying brain networks could use it to examine the differences between abnormal and healthy neurological regions in the brain and propose region-specific treatments.
- Fall 2017: Furkan Kocayusufoglu