Anatomical landmarks on 3-D human body scans play key roles in shape-related applications, including consistent parameterization, body-measurement extraction, segmentation, and mesh retargeting. Manually locating landmarks is tedious and time-consuming for large-scale 3-D anthropometric surveys. To automate the landmarking process, we propose a data-driven approach that learns from known landmark locations on a dataset of 3-D scans and predicts their locations on new scans. More specifically, we adopt a coarse-to-fine strategy: a deep regression neural network is first trained to estimate the locations of all landmarks jointly, and then an individual deep classification neural network is trained for each landmark to refine its location. As input to the neural networks, we compare three types of image renderings computed from a frontal view: gray-scale appearance images, range (depth) images, and curvature-mapped images. Among these, curvature-mapped images yield the best empirical accuracy for the deep regression network, whereas depth images lead to higher accuracy for most landmarks with the per-landmark classification networks. The proposed approach outperforms the state of the art in locating most landmarks. This simple yet effective approach can be extended to automatically locate landmarks in large-scale 3-D scan datasets.
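To make the "range (depth) image" input concrete, the sketch below renders an orthographic frontal-view depth image from a raw 3-D point cloud using a simple z-buffer. This is a minimal illustration under assumed conventions (camera looking along -z, body normalized to fill the image plane); the function name `render_depth_image` and all parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def render_depth_image(points, resolution=64):
    """Render a frontal-view depth image from an (N, 3) point cloud.

    Hypothetical helper illustrating one of the three input renderings;
    larger z is treated as closer to the viewer, and the closest point
    per pixel wins (a basic z-buffer). Background pixels stay at 0.
    """
    pts = np.asarray(points, dtype=float)
    # Normalize x and y into [0, 1] so the scanned body fills the image.
    mins = pts[:, :2].min(axis=0)
    spans = pts[:, :2].max(axis=0) - mins
    spans[spans == 0] = 1.0  # guard against degenerate (flat) extents
    uv = (pts[:, :2] - mins) / spans
    # Map normalized coordinates to pixel indices (row 0 = top of image).
    cols = np.clip((uv[:, 0] * resolution).astype(int), 0, resolution - 1)
    rows = np.clip(((1.0 - uv[:, 1]) * resolution).astype(int), 0, resolution - 1)
    # Normalize depth values into [0, 1] for rendering.
    z = pts[:, 2]
    z_norm = (z - z.min()) / (z.max() - z.min() + 1e-9)
    depth = np.zeros((resolution, resolution))
    for r, c, d in zip(rows, cols, z_norm):
        if d > depth[r, c]:  # keep the closest point per pixel
            depth[r, c] = d
    return depth
```

A gray-scale appearance or curvature-mapped rendering would follow the same projection, substituting per-point intensity or curvature values for the normalized depth.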