The main theme of my professional career has been making unnecessarily difficult things easier so that practitioners can focus on the hard problems they're actually good at. My PhD research produced static program analyses that help software engineers exploit hardware parallelism, along with a framework that makes it easier to develop static analyses in the first place. In industry, I drew on my program-analysis expertise to build a distributed configuration management service based on a semantic model of system configuration. I have spent most of the last decade helping people make sense of data at scale, working in the distributed data-processing and machine learning communities and developing tools that make it easier to build and maintain machine learning systems.
For a more specific sense of what I've been interested in lately, check out this IEEE Software article and these conference talks:
- "Band-aids don't fix bullet holes: repairing the broken promises of ubiquitous machine learning"
- "Cloud-native machine learning systems at day two and beyond" (with Sophie Watson)
- "Building machine learning algorithms on Apache Spark: scaling out and up"
I also have a weblog.