Machine learning (ML) software is increasingly being deployed in high-stakes and sensitive applications. This raises important challenges related to safety, privacy, and non-discrimination. As a consequence, research in ML verification has rapidly gained popularity, and the demand for interpretable ML models has become increasingly pronounced. Interpretability and verification are typically treated as orthogonal concerns: research in ML interpretability is predominantly carried out in the ML community, while research in ML verification is almost exclusively carried out in the formal methods community, with little communication or exchange between the two areas. In this talk, we advocate for closing this gap by highlighting the synergies between interpretability and verification in ML software. We will survey our recent and ongoing work, and discuss open research questions and avenues for future work in this context.