The past several decades have seen an exponential increase in the volume of available seismic data, and with it has come the need to develop fast, automatic earthquake detection and location algorithms. Some of the most recent and promising tools come from the field of machine learning. In this study, we combine a recent seismic detection and location method with neural network classification and analyze 4 months of continuous data recorded by a network of 76 stations in northern California. While these approaches have been used separately, our implementation is unique in that it is not constrained by source templates and avoids user‐defined detection thresholds. In particular, we partition our data set into 234,240 time windows, each 3 min long with 75% overlap. For each time window, we create a 3D image that captures information about the coherence of the seismic wavefield. We then devise four features and use them as input to train a neural network classifier that predicts which time windows in the data set are likely to contain regional seismic events. These features include the second and fourth Hu image moments computed from 2D cross sections of our 3D images and statistical p values that quantify the probability of observing network‐wide power‐spectral density values at 0.2 and 0.5 s. Our neural network model predicts that 2,522 time windows contain seismic events, from which we locate 1,192 unique events.
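As a rough illustration of the windowing and Hu‐moment features described above, the following Python sketch partitions a continuous trace into 3‐min windows with 75% overlap and computes the second and fourth Hu moments of a 2D cross section. The helper names, the use of scikit‐image, and the synthetic input are assumptions made for illustration only, not the authors' implementation.

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

def window_starts(n_samples, fs, win_s=180.0, overlap=0.75):
    """Start indices for fixed-length windows with fractional overlap.

    A 180-s window with 75% overlap advances by a 45-s hop.
    """
    win = int(win_s * fs)
    step = int(win_s * fs * (1.0 - overlap))
    return np.arange(0, n_samples - win + 1, step), win

def hu_features(cross_section):
    """Second and fourth Hu moments of a 2D cross section (hypothetical feature pair)."""
    mu = moments_central(cross_section)       # central moments up to order 3
    nu = moments_normalized(mu)                # scale-normalized moments
    hu = moments_hu(nu)                        # seven Hu invariant moments
    return hu[1], hu[3]                        # indices 1 and 3 = 2nd and 4th moments

# Hypothetical usage on a synthetic 2D cross section of a 3D coherence image
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(hu_features(img))

# Hypothetical windowing of one day of 100-Hz data
starts, win = window_starts(n_samples=24 * 3600 * 100, fs=100.0)
print(len(starts), "windows of", win, "samples")
```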