Neuromorphic circuits were pioneered by Carver Mead in the late 1980s, following a synthetic biology approach also introduced by his Caltech colleagues R. Feynman and J. Hopfield in their famous Physics of Computation course. Extrapolating the doubling in computer performance that occurred every 18 months (Moore's Law), Mead correctly predicted in 1990 that present-day computers would use ten million times more energy per instruction than the brain uses per synaptic activation (Mead, 1990). He proposed to close this gap by building neuromorphic circuits: a class of hybrid analog/digital circuits that implement hardware models of biological systems. These circuits can be used to develop a new generation of computing technologies that compete with and complement standard VLSI approaches. Indeed, by using massively parallel arrays of computing elements, exploiting redundancy to achieve fault tolerance, and emulating the neural style of computation, neuromorphic VLSI architectures can fully exploit the features of advanced scaled VLSI processes and future emerging technologies, naturally coping with the problems that characterize them, such as device inhomogeneities and imperfections.