Robots that interact with humans must adapt to diverse user preferences. Learned representations of robot behaviors can enable user-driven customization, but machine learning techniques typically require large amounts of manually labeled data, which is difficult to obtain because users are often unmotivated to engage in monotonous labeling. In this work, we observe that users learning to operate a new robot naturally engage in exploratory search, generating data that can stand in for manual labels. We propose Contrastive Learning from Exploratory Actions (CLEA), a method that leverages this exploratory search data to learn representations of robot behaviors that support user-driven customization. We show that CLEA representations satisfy four criteria of effective robot representations (completeness, simplicity, minimality, and interpretability) and outperform self-supervised representations on all four.
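To make the core idea concrete, the following is a minimal sketch of a contrastive objective over exploratory-action data. It is an illustration, not the paper's implementation: the assumption that behaviors a user viewed within the same exploratory search session form positive pairs (with behaviors from other sessions as negatives), the embedding dimension, and the InfoNCE loss form are all hypothetical choices for this sketch.

```python
# Sketch: contrastive (InfoNCE-style) loss where co-explored behaviors act as
# positive pairs, standing in for manually labeled similarity data.
# All names and parameters here are illustrative assumptions, not the paper's API.
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one (anchor, positive) pair against a set of negatives."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    # Positive similarity goes in slot 0; negatives fill the rest.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # lower loss = anchor closer to its positive

# Toy data: a behavior embedding, a co-explored (similar) behavior, and
# embeddings of behaviors from other users' sessions as negatives.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(5)]
loss = info_nce_loss(anchor, positive, negatives)
```

Minimizing such a loss pulls behaviors that users explored together close in the learned space, which is one way exploratory search data can substitute for explicit similarity labels.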