This paper describes a method for building visual “maps” from video data using quantised descriptions of motion, enabling unsupervised classification of scene regions based on the motion patterns observed within them. Our aim is to recognise generic places using a qualitative representation of the spatial layout of regions with common motion patterns. Such places are characterised by the distribution of these motion patterns rather than by static appearance, and could include locations such as train platforms, bus stops, and park benches. Motion descriptions are obtained by tracking image features over a temporal window; each feature’s gross motion is then normalised and thresholded to give a quantised representation. The input video is divided spatially into N × N pixel blocks, and a histogram of the frequency of occurrence of each quantised motion vector is built for each block. These histograms characterise the dominant patterns of motion within each block, and blocks are then grouped on the basis of both proximity and local motion similarity to define regions with particular motion characteristics. Moving up a level, we consider the relationships between motion in adjacent regions, characterising the dominant patterns of motion expected in a particular part of the scene over time. This differs from previous work, which has largely been based on the paths of moving agents and is therefore restricted to scenes in which such paths are identifiable. We demonstrate our method on three very different scenes: an indoor room with multiple chairs and unpredictable, unconstrained motion; an underground station featuring regions where motion is constrained (the train tracks) and regions with complicated motion and difficult occlusion relationships (the platform); and an outdoor scene with challenging camera motion and partially overlapping video streams.
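The quantisation and per-block histogram steps above can be sketched as follows. This is an illustrative reconstruction only: the block size, the eight-direction motion codebook, the speed threshold, and all function names are assumptions for the sketch, not values specified by the paper.

```python
import numpy as np

N = 16               # spatial block size in pixels (assumed)
SPEED_THRESH = 1.0   # minimum displacement per window to count as moving (assumed)

def quantise_motion(dx, dy):
    """Map a feature's gross 2-D displacement to a discrete motion symbol.

    Symbol 0 = 'static'; symbols 1..8 = one of eight compass directions
    (an assumed codebook for illustration).
    """
    speed = np.hypot(dx, dy)
    if speed < SPEED_THRESH:              # thresholding: slow motion -> static
        return 0
    angle = np.arctan2(dy, dx)            # direction of gross motion
    sector = int(((angle + 2 * np.pi) % (2 * np.pi)) / (np.pi / 4)) % 8
    return 1 + sector

def block_histograms(tracks, frame_shape, n_symbols=9):
    """Accumulate a histogram of quantised motion symbols per N x N block.

    `tracks` is a list of (x, y, dx, dy): a tracked feature's position and
    its gross displacement over the temporal window.
    """
    h, w = frame_shape[0] // N, frame_shape[1] // N
    hist = np.zeros((h, w, n_symbols))
    for x, y, dx, dy in tracks:
        bx, by = int(x) // N, int(y) // N
        if 0 <= by < h and 0 <= bx < w:
            hist[by, bx, quantise_motion(dx, dy)] += 1
    return hist

# Example: two rightward-moving features in one block, one static feature elsewhere.
tracks = [(10, 10, 5.0, 0.0), (12, 11, 4.0, 0.5), (200, 40, 0.1, 0.0)]
hist = block_histograms(tracks, frame_shape=(240, 320))
```

Grouping blocks into regions would then compare neighbouring blocks' histograms (e.g. by a histogram distance) and merge those that are both adjacent and similar, which is the proximity-plus-motion-similarity criterion described above.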