Segmentation of medical images has become critical to building an understanding of biological structure-function relationships. Atlas registration and label transfer provide a fully automated approach to deriving segmentations from atlas training data. When multiple atlases are used, statistical label fusion techniques have been shown to dramatically improve segmentation accuracy. However, these techniques have had limited success with complex structures and with atlases whose similarity to the target data varies. Previous approaches have parameterized each rater by a single confusion matrix, so that spatial variation in an individual rater's performance is neglected. Herein, we reformulate the statistical fusion model to describe raters by regional confusion matrices, so that co-registered atlas labels can be fused in an optimal, spatially varying manner, yielding improved label fusion estimates with heterogeneous atlases. The advantages of this approach are characterized in a simulation and in an empirical whole-brain labeling task.
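For intuition, the sketch below contrasts a conventional STAPLE-style E-step with the regional reformulation described above. The notation is illustrative rather than the paper's own: $\theta_{j s' s}$ denotes the probability that rater $j$ reports label $s'$ when the true label is $s$, $\delta_{ij}$ is the label rater $j$ observed at voxel $i$, $f(T_i = s)$ is the label prior, and $B(i)$ is an assumed mapping from voxel $i$ to its region.

```latex
% Conventional statistical fusion (STAPLE-style) E-step: the probability
% that the true label T_i at voxel i equals s, given R raters with a
% single, globally shared confusion matrix \theta_j per rater:
\[
  W_{s i} \;=\;
  \frac{f(T_i = s)\,\prod_{j=1}^{R} \theta_{j\,\delta_{i j}\,s}}
       {\sum_{s'} f(T_i = s')\,\prod_{j=1}^{R} \theta_{j\,\delta_{i j}\,s'}}
\]
% Spatially varying reformulation: each rater carries one confusion
% matrix per region, and the matrix indexed by the region B(i)
% containing voxel i replaces the global \theta_j:
\[
  W_{s i} \;=\;
  \frac{f(T_i = s)\,\prod_{j=1}^{R} \theta_{j B(i)\,\delta_{i j}\,s}}
       {\sum_{s'} f(T_i = s')\,\prod_{j=1}^{R} \theta_{j B(i)\,\delta_{i j}\,s'}}
\]
```

Under this sketch, a rater (co-registered atlas) that is accurate in one region but poorly registered in another contributes strongly only where its regional confusion matrix indicates reliable performance, which is the mechanism by which heterogeneous atlases are handled.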