Statistics seminar.
Título: Large-time asymptotics in deep learning
Speaker: Borjan Geshkovski, University of Deusto and Universidad Autónoma de Madrid
Date and time: Friday, 23 October, at 10:30
Abstract: It is by now well known that practical deep supervised learning can roughly be cast as an optimal control problem for a specific discrete-time, nonlinear dynamical system called an artificial neural network. In this talk, we consider the continuous-time formulation of the deep supervised learning problem. Using an analytical approach, we study the behaviour of this problem as the final time horizon increases, which in the neural network setting can be interpreted as increasing the number of layers. We show qualitative and quantitative estimates of the convergence to the zero training error regime, depending on the functional to be minimised.
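To make the layers-as-time correspondence from the abstract concrete, here is a minimal sketch (an illustration for this announcement, not the speaker's code) of a residual network read as a forward Euler discretisation of continuous-time dynamics x'(t) = f(x(t), θ(t)) on [0, T]: with step size Δt = T / (number of layers), each residual block computes x_{k+1} = x_k + Δt·f(x_k, θ_k), so increasing the time horizon T at fixed Δt corresponds to adding layers. The tanh vector field and random parameters below are assumptions chosen for the sketch.

```python
import numpy as np

def f(x, W, b):
    # Vector field of one "layer": tanh activation (a common, illustrative choice)
    return np.tanh(W @ x + b)

def forward_euler(x0, params, T):
    # Integrate x' = f(x, theta(t)) on [0, T] with one Euler step per layer.
    # Each step is one residual block: x_{k+1} = x_k + dt * f(x_k, W_k, b_k),
    # so len(params) plays the role of network depth and T the time horizon.
    dt = T / len(params)
    x = x0
    for W, b in params:
        x = x + dt * f(x, W, b)
    return x

rng = np.random.default_rng(0)
d, T, n_layers = 3, 10.0, 50  # state dimension, time horizon, depth
params = [(0.1 * rng.standard_normal((d, d)), np.zeros(d)) for _ in range(n_layers)]
x0 = rng.standard_normal(d)
xT = forward_euler(x0, params, T)
print(xT.shape)
```

Doubling n_layers while keeping T fixed refines the discretisation of the same continuous-time flow, which is the viewpoint behind studying the large-time limit analytically.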
Teams link:
https://teams.microsoft.com/l/channel/19%3a0d91b40bd3754bab980f363f3ca16d3b%40thread.tacv2/General?groupId=6a0b708e-988d-4808-99fc-8b6171b98d43&tenantId=fc6602ef-8e88-4f1d-a206-e14a3bc19af2