Exploring Deep Learning from Feedforward Neural Network Experiments
Speaker: Prof. Qin SHENG
Professor
Department of Mathematics
Baylor University
USA
Date & Time: 23 Sep 2019 (Monday) 15:00 - 16:00
Venue: E11-1006
Organized by: Department of Mathematics

Abstract

Neural-network-based deep learning has been used for solving high-dimensional partial differential equations, where the number of dimensions may exceed several hundred. But what is a neural network, and how can it be used to approximate, or predict, natural phenomena? In this exploration, we consider approximations of multivariate data through an explicit feedforward neural network (FNN), the basic type of neural network. The easy-to-follow formula provides a piecewise constant function for the data. Our FNN architecture has two hidden layers, where the weights and thresholds are explicitly defined, so no numerical optimization is required for training. The explicit algorithm does not rely on any tensor structure in multiple dimensions. Instead, it automatically creates a Voronoi tessellation of the domain based on the given data and generates a reliable piecewise constant approximation of the target function. These features make the construction practical for many applications.
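The abstract does not give the explicit FNN formula, but the approximation it describes assigns to each point the data value of its Voronoi cell, i.e. the value at the nearest data site. A minimal nearest-neighbor sketch of that piecewise constant approximation (the function name and the sample data are illustrative, not from the talk):

```python
import numpy as np

def piecewise_constant_approx(centers, values, query):
    """Voronoi-based piecewise constant approximation.

    Each query point inherits the value of the nearest data site,
    i.e. it takes the constant value on its Voronoi cell.
    """
    centers = np.asarray(centers, dtype=float)             # (n, d) data sites
    values = np.asarray(values, dtype=float)               # (n,) target values
    query = np.atleast_2d(np.asarray(query, dtype=float))  # (m, d) query points
    # Squared distances from each query point to every data site
    d2 = ((query[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Nearest site per query point determines the constant value
    return values[d2.argmin(axis=1)]

# Three sites in 2-D: the plane is split into three Voronoi cells.
centers = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
values = [10.0, 20.0, 30.0]
print(piecewise_constant_approx(centers, values, [[0.1, 0.1], [0.9, 0.1]]))
# -> [10. 20.]
```

In the talk's construction this nearest-site selection is realized by an explicit two-hidden-layer network rather than a distance search, which avoids any tensor-product grid structure in higher dimensions.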

Biography

Prof. Sheng is a professor in the Department of Mathematics at Baylor University, USA. He received his PhD from the University of Cambridge in 1990 under the supervision of Professor Arieh Iserles. His research areas include splitting and adaptive numerical methods for solving linear and nonlinear partial differential equations.