Many music composition algorithms attempt to compose music in a particular style. The resulting music is often impressive and indistinguishable from the style of the training data, but it tends to lack significant innovation. In an effort to increase innovation in the selection of pitches and note durations, we present a system that discovers musical motifs by coupling machine-learning techniques with an inspirational component. Unlike many generative models, the inspirational component allows the composition process to originate outside of what is learned from the training data. Candidate motifs are extracted from non-musical data such as audio, images, and sleep signals. Machine-learning algorithms select the motifs that most resemble the training data. We find that the inspirational motif discovery process is more efficient than random generation. We also extract motifs from real music scores, identify themes in each piece according to a theme database, and measure the probability of discovering thematic motifs versus non-thematic motifs. We examine the information content of the motifs by comparing the entropy of the discovered motifs, candidate motifs, and training data. We measure innovation by comparing the probability of the training data with the probability of the discovered motifs given the model.
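The entropy comparison mentioned above can be sketched in a few lines. This is an illustrative example only, not the paper's implementation: it computes the Shannon entropy of hypothetical pitch sequences (MIDI note numbers), where a candidate drawn from arbitrary non-musical data would typically spread its pitches more uniformly and therefore score higher than stylistically constrained training data.

```python
from collections import Counter
from math import log2

def entropy(sequence):
    """Shannon entropy (bits per symbol) of a discrete sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical example sequences (MIDI note numbers).
training_pitches = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
candidate_pitches = [60, 71, 45, 80, 52, 67, 49, 78, 61, 55, 72]

# The repetitive, scale-bound training sequence carries less
# information per note than the near-uniform candidate sequence.
print(entropy(training_pitches) < entropy(candidate_pitches))  # True
```

A lower entropy for discovered motifs than for raw candidates would suggest the selection step pulls candidates toward the regularities of the training corpus.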