Global optimization of a non-convex objective function often appears in large-scale machine learning and artificial intelligence applications. Recently, consensus-based optimization (CBO) methods have been introduced as a class of gradient-free optimization methods. In this paper, we provide a convergence analysis for the first-order CBO method proposed in [J. A. Carrillo, S. Jin, L. Li and Y. Zhu, A consensus-based global optimization method for high dimensional machine learning problems, https://arxiv.org/abs/1909.09249v1]. Prior to this work, convergence studies of CBO methods were carried out on the corresponding mean-field limit, a Fokker-Planck equation, which does not imply convergence of the CBO method itself. Based on a consensus estimate performed directly on the first-order CBO model, we provide a convergence analysis of this method without resorting to the corresponding mean-field model. Our convergence analysis consists of two steps. In the first step, we show that the CBO model reaches a global consensus asymptotically for any initial data, and in the second step, we provide a sufficient condition on the system parameters - which is dimension-independent - and on the initial data that guarantees the converged consensus state lies in a small neighborhood of the global minimum almost surely.
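For readers unfamiliar with CBO, the first-order dynamics discussed above can be sketched as a simple interacting-particle scheme: each particle drifts toward a Gibbs-weighted consensus point and is perturbed by noise scaled by its distance to that point. The following is a minimal Euler-Maruyama sketch under illustrative assumptions; the parameter values, the anisotropic (component-wise) noise, and the function names are choices made here for illustration, not the exact scheme of the cited paper.

```python
import numpy as np

def cbo_step(X, f, lam=1.0, sigma=0.5, beta=30.0, dt=0.01, rng=None):
    """One illustrative Euler-Maruyama step of a first-order CBO sketch.

    X   : (N, d) array of particle positions.
    f   : objective function mapping a (d,) array to a float.
    lam : drift strength toward the consensus point.
    sigma, beta, dt : noise scale, Gibbs weight parameter, time step.
    """
    rng = np.random.default_rng() if rng is None else rng
    vals = np.array([f(x) for x in X])
    # Gibbs weights exp(-beta * f) concentrate mass on the current best
    # particle as beta grows; subtracting the min avoids underflow.
    w = np.exp(-beta * (vals - vals.min()))
    xbar = (w[:, None] * X).sum(axis=0) / w.sum()  # weighted consensus point
    drift = -lam * (X - xbar) * dt
    # component-wise noise, proportional to the distance from consensus,
    # so fluctuations vanish as the particles reach consensus
    noise = sigma * (X - xbar) * np.sqrt(dt) * rng.standard_normal(X.shape)
    return X + drift + noise

# Illustrative run on a convex test objective (not from the paper):
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 2))
for _ in range(500):
    X = cbo_step(X, lambda x: np.sum(x**2), rng=rng)
```

In this sketch the two steps of the analysis are visible: the drift term contracts the particles toward the consensus point (consensus formation), while the Gibbs weights bias that consensus point toward low values of the objective (proximity to the global minimum).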