Point cloud (PC), a collection of discrete geometric samples of a 3D object's surface, is typically large, which makes subsequent operations expensive. Thus, PC sub-sampling is of practical importance. Previous model-based sub-sampling schemes are ad hoc in design and do not preserve the overall shape sufficiently well, while previous data-driven schemes are trained for specific pre-determined input PC sizes and sub-sampling rates and thus do not generalize well. Leveraging advances in graph sampling, we propose a fast PC sub-sampling algorithm of linear time complexity that chooses a 3D point subset while minimizing a global reconstruction error. Specifically, to articulate a sampling objective, we first assume a super-resolution (SR) method based on feature graph Laplacian regularization (FGLR) that reconstructs the original high-resolution PC, given points chosen by a sampling matrix H. We prove that minimizing a worst-case SR reconstruction error is equivalent to maximizing the smallest eigenvalue λmin of the matrix HᵀH + μL, where L is a symmetric, positive semi-definite matrix derived from a neighborhood graph connecting the 3D points. To arrive at a fast algorithm, instead of maximizing λmin directly, we maximize a lower bound λ⁻min(HᵀH + μL) via selection of H; this translates to a graph sampling problem for a signed graph G with self-loops, specified by graph Laplacian L. We tackle this general graph sampling problem in three steps. First, we approximate G with a balanced graph GB, specified by Laplacian LB. Second, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we perform a similarity transform Lp = S LB S⁻¹, so that all Gershgorin disc left-ends of Lp are aligned exactly at λmin(LB). Finally, we choose samples on GB using a previous graph sampling algorithm to maximize λ⁻min(HᵀH + μLp) in linear time.
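The GDPA alignment step can be illustrated numerically on a toy example (all particulars here are illustrative assumptions, not the paper's setup): a 4-node path graph with unit edge weights, a hypothetical sample set {0, 2}, and μ = 0.5. The plain Gershgorin bound on HᵀH + μL is loose, but after a similarity transform with S = diag(1/v), where v is the first eigenvector, every disc left-end coincides with λmin. Note that a full eigendecomposition is used here only to verify the alignment; a fast method would compute the first eigenvector iteratively.

```python
import numpy as np

# Toy 4-node path graph Laplacian with unit edge weights (illustrative only).
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

# Hypothetical sampling of nodes {0, 2}: H^T H is a 0/1 diagonal matrix.
HtH = np.diag([1., 0., 1., 0.])
mu = 0.5
M = HtH + mu * L  # the matrix H^T H + mu*L from the abstract

def disc_left_ends(A):
    """Gershgorin disc left-ends: A_ii minus the sum of |off-diagonals| in row i."""
    off = A - np.diag(np.diag(A))
    return np.diag(A) - np.abs(off).sum(axis=1)

# Naive Gershgorin lower bound on lambda_min(M): typically loose.
naive_bound = disc_left_ends(M).min()

# GDPA-style scaling: S = diag(1/v), with v the eigenvector of lambda_min(M).
# For a connected graph M has nonpositive off-diagonals, so v can be taken positive.
lam, V = np.linalg.eigh(M)
v = V[:, 0]
v = v if v[0] > 0 else -v          # fix the sign so all entries are positive
Lp = np.diag(1.0 / v) @ M @ np.diag(v)  # similarity transform S M S^{-1}
left_ends = disc_left_ends(Lp)

print("naive bound:", naive_bound)
print("lambda_min :", lam[0])
print("aligned left-ends:", left_ends)  # all equal to lambda_min
```

In this sketch all four disc left-ends of the transformed matrix agree with λmin, so the Gershgorin lower bound becomes tight after scaling, which is what makes λ⁻min a usable proxy objective for sample selection.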
Experimental results show that PCs sub-sampled by our algorithm outperformed those chosen by competing schemes, both numerically and visually, in reconstruction quality.