Accepted Papers
Discovering Models from Structural and Behavioral Brain Imaging Data
Zilong Bai (University of California, Davis); Buyue Qian (Department of Computer Science, Xi'an Jiaotong University); Ian Davidson (University of California, Davis)
Block models of graphs are used in a wide variety of domains because they find not only clusters (the blocks) but also the interactions within and between the blocks. However, existing approaches focus primarily on either structural graphs (e.g., from MRI scans) or behavioral graphs (e.g., from fMRI scans). In both cases the block model's interaction or mixing matrix is useful for understanding potential interaction (for structural graphs) and actual interaction (for behavioral graphs) between the blocks. In this paper we explore finding block models when there is both a structural network and multiple behavioral graphs. This poses significant modeling challenges: consider, for example, the case where there is strong behavioral connectivity but no structural connectivity between two nodes. We show why existing multi-graph settings such as multi-view learning are insufficient, and instead propose a novel model to address the problem. Our method not only learns structurally and behaviorally cohesive blocks of nodes but also finds structurally and behaviorally feasible block interactions. Numerical evaluations on synthetic data show that our method outperforms baseline approaches in recovering the ground-truth factor matrices in increasingly complex settings. We further apply our method to real-world datasets from two domains: (1) brain imaging data (a multi-cohort fMRI study) and, to demonstrate its versatility, (2) Twitter (a following network with retweet behavior), gaining insights into the information flow and the underlying generating mechanisms of these complex data.
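To make the block-model terminology in the abstract concrete, the sketch below (not the authors' method) computes a block "mixing" matrix for a single graph: given a hard assignment of nodes to blocks, entry (p, q) is the edge density within or between blocks. The function and variable names (`block_mixing_matrix`, `adjacency`, `labels`) are illustrative assumptions, not from the paper.

```python
# Minimal NumPy sketch of a block mixing matrix for one graph.
import numpy as np

def block_mixing_matrix(adjacency: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return the k x k matrix of edge densities within/between blocks."""
    k = labels.max() + 1
    F = np.eye(k)[labels]                      # one-hot block-membership matrix (n x k)
    sizes = F.sum(axis=0)                      # nodes per block
    edge_counts = F.T @ adjacency @ F          # total edge weight between each pair of blocks
    pair_counts = np.outer(sizes, sizes)       # possible node pairs between blocks
    np.fill_diagonal(pair_counts, sizes * (sizes - 1))  # exclude self-pairs within a block
    return edge_counts / np.maximum(pair_counts, 1)

# Toy example: two well-separated blocks of three nodes each.
A = np.zeros((6, 6))
A[:3, :3] = 1 - np.eye(3)   # dense connectivity inside block 0
A[3:, 3:] = 1 - np.eye(3)   # dense connectivity inside block 1
labels = np.array([0, 0, 0, 1, 1, 1])
print(block_mixing_matrix(A, labels))  # high on-diagonal, zero off-diagonal
```

The paper's setting differs in that a single set of blocks must explain one structural graph and multiple behavioral graphs jointly, so both the blocks and the mixing matrices are learned rather than computed from a fixed partition as above.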