Objective. Accurate modeling of retinal information processing remains a major challenge in retinal physiology, with applications in visual rehabilitation and prosthetics. Most current artificial retinas are fed with static, frame-based information, thereby losing the fundamental asynchronous features of biological vision. The objective of this work is to reproduce the spatial and temporal properties of the majority of ganglion cell (GC) types in the mammalian retina.

Approach. Here, we combined an asynchronous event-based light sensor with a model pooling nonlinear subunits to reproduce the parallel filtering and temporal coding occurring in the retina. We fitted our model to physiological data and were able to reconstruct the spatio-temporal responses of the majority of GC types previously described in the mammalian retina (Roska et al 2006 J. Neurophysiol. 95 3810-22).

Main results. Fits of the temporal and spatial components of the response were achieved with high coefficients of determination (median R² = 0.972 and R² = 0.903, respectively). Our model achieves high temporal precision, with a spike-timing reliability of only a few milliseconds (peak of the distribution at 5 ms), similar to biological retinas (Berry et al 1997 Proc. Natl Acad. Sci. USA 94 5411-16; Gollisch and Meister 2008 Science 319 1108-11). The spiking statistics of the model also matched physiological measurements (Fano factor: 0.331).

Significance. This new asynchronous retinal model therefore opens new perspectives in the development of artificial visual systems and visual prosthetic devices.
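To make the class of model the abstract describes more concrete, the sketch below shows a generic nonlinear-subunit cascade driven by asynchronous events: per-subunit event trains pass through a linear temporal filter, a rectifying nonlinearity, and a pooling stage, and spikes are drawn from the pooled rate. Every parameter here (subunit count, kernel time constant, threshold, peak rate) and the Poisson spike generator are illustrative assumptions for exposition, not the authors' implementation or their fitted values; the Fano factor computation merely illustrates the kind of spiking statistic reported.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 16 subunits, 2 s of activity at 1 ms resolution.
n_subunits, n_steps, dt = 16, 2000, 1e-3

# Asynchronous input sketched as sparse per-subunit binary event trains.
events = (rng.random((n_subunits, n_steps)) < 0.05).astype(float)

# Linear stage: convolve each subunit's events with a temporal kernel
# (10 ms exponential decay, an illustrative choice).
t = np.arange(0.0, 0.05, dt)
kernel = np.exp(-t / 0.01)
drive = np.array([np.convolve(e, kernel)[:n_steps] for e in events])

# Nonlinear stage: half-wave rectification per subunit, then pooling.
pooled = np.maximum(drive - 0.5, 0.0).sum(axis=0)

# Spike generation: inhomogeneous Poisson draw from the pooled rate,
# normalized to an arbitrary ~50 Hz peak.
rate = 50.0 * pooled / (pooled.max() + 1e-12)
spikes = rng.random(n_steps) < rate * dt

# Spiking statistics: Fano factor of spike counts in 100 ms windows.
counts = spikes[: n_steps - n_steps % 100].reshape(-1, 100).sum(axis=1)
fano = counts.var() / counts.mean()
print(f"{int(spikes.sum())} spikes, Fano factor ≈ {fano:.2f}")
```

The point of the cascade structure is that rectification happens before pooling, so the GC output is sensitive to spatial structure finer than its receptive field, which a purely linear model would average away.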