Kieran A Murphy

Visualizing the structure of information with machine learning.


Hello and welcome to my site!

My research has evolved from physics toward machine learning and information theory over the years, always taking an experimental approach toward understanding complex systems.

In recent years, I’ve grown especially excited about how deep learning can unlock new kinds of information-theoretic analyses of real-world data. “Information theory” can sound intimidating and dry, but I’m interested in juicy questions like:

  • Where is the information in this room about whether my next coin toss will be heads or tails?
  • How does chaos destroy information?
  • What combination of diagnostics tells you the most about a hospital patient’s outcome?

With the methods I’ve been developing, you gain a powerful window into data while treating deep learning as a black box: playing to its strengths without needing to understand its inner workings.

If you’d like to chat, please reach out!


A highly compressed view of my research trajectory:

  • University of Pennsylvania: Postdoc, 2021–
  • Google Research: AI Resident, 2019–2021
  • University of Chicago: PhD (Physics), 2013–2019
  • UC Berkeley: BA (Physics, Computer Science), 2009–2013
  • Lawrence Berkeley National Lab: Research Assistant, 2012–2013


I enjoy visualizing minimal examples of complex systems as a route to building intuition about how they work. Below is a visualization of a randomly initialized neural network that warps two-dimensional space. The input starts as a square; what you’re seeing is that square after it passes through the network. Try varying the number of layers (64 units each) and the activation function!
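The warp above can be sketched in a few lines of NumPy: a stack of randomly initialized dense layers maps each 2D point to a new 2D point. This is a minimal illustration, not the code behind the demo; the function name, initialization scales, and seed handling are my own choices.

```python
import numpy as np

def random_mlp_warp(points, n_layers=3, width=64, activation=np.tanh, seed=0):
    """Warp 2D points through a randomly initialized MLP mapping R^2 -> R^2.

    points: array of shape (n_points, 2).
    Hidden layers have `width` units each; a final linear layer projects
    back to 2D so the warped square can be plotted.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(points, dtype=float)
    dims = [2] + [width] * n_layers + [2]
    for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
        # Scale weights by 1/sqrt(fan-in) so activations stay well-behaved.
        W = rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_out))
        b = rng.normal(scale=0.1, size=d_out)
        x = x @ W + b
        if i < len(dims) - 2:  # nonlinearity on hidden layers only
            x = activation(x)
    return x

# Trace the boundary of the unit square and warp it.
t = np.linspace(0.0, 1.0, 100)
zeros, ones = np.zeros_like(t), np.ones_like(t)
square = np.concatenate([
    np.stack([t, zeros], axis=1),   # bottom edge
    np.stack([ones, t], axis=1),    # right edge
    np.stack([t[::-1], ones], axis=1),  # top edge
    np.stack([zeros, t[::-1]], axis=1), # left edge
])
warped = random_mlp_warp(square, n_layers=4)
```

Plotting `warped[:, 0]` against `warped[:, 1]` (e.g. with matplotlib) shows the distorted square; swapping `activation` for `np.maximum(0, ·)`-style ReLU gives a piecewise-linear warp instead of a smooth one.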




[Interactive demo controls: number of layers; nonlinearity: tanh, relu, hard_sigmoid, elu, softsign]