Speaker: Dr. James Sully of the University of British Columbia
Abstract: We find ourselves in the middle of a revolution in machine learning, driven by the astonishing success of deep neural networks and abundant computing power. But as we make these great technological advances with ever larger and more complicated neural networks, our theoretical understanding has not kept pace. Can we demystify the workings of deep neural networks? Tools and ideas from physics may form part of the solution. I will discuss how deep neural networks admit simpler effective descriptions that explain their remarkable successes while bypassing their vastly more complicated microscopic specifications.