Deep neural networks have been used to solve high-dimensional partial differential equations, where the number of dimensions may reach several hundred. But what is a neural network, and how can it be used to approximate, or predict, natural phenomena? In this exploration, we consider approximations of multivariate data through an explicit feedforward neural network (FNN), the basic type of neural network. An easy-to-follow formula provides a piecewise constant function for the data. Our FNN architecture has two hidden layers, where the weights and thresholds are explicitly defined, so no numerical optimization is required for training. The explicit algorithm does not rely on any tensor structure in multiple dimensions. Instead, it automatically creates a Voronoi tessellation of the domain based on the given data and generates a reliable piecewise constant approximation of the target function. These features make the construction practical for a wide range of applications.
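To make the idea concrete, here is a minimal sketch in Python of the *output* the abstract describes: a function that is constant on each Voronoi cell induced by the data sites, implemented via nearest-neighbor lookup. This is only an illustration of piecewise constant approximation on a Voronoi tessellation; the explicit two-hidden-layer FNN construction itself, with its particular weights and thresholds, is not reproduced here, and the function and variable names are my own.

```python
import numpy as np

def voronoi_piecewise_constant(points, values):
    """Return f(x) that is constant on each Voronoi cell of `points`.

    points : (n, d) array of sample locations in d dimensions
    values : (n,) array of target values at those locations
    """
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)

    def f(x):
        x = np.atleast_2d(x)                            # (m, d) query points
        # Squared distances from each query to every sample point -> (m, n)
        d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
        nearest = d2.argmin(axis=1)                     # nearest data site
        return values[nearest]                          # constant per cell

    return f

# Illustration: approximate g(x, y) = x + y from 5 scattered samples in 2-D.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = pts.sum(axis=1)
f = voronoi_piecewise_constant(pts, vals)
print(f([[0.1, 0.1], [0.9, 0.9]]))  # nearest sites are (0,0) and (1,1)
```

Note that, as the abstract emphasizes, this approximation requires no training: it interpolates the data exactly at each sample point and extends it constantly over the surrounding Voronoi cell, without any tensor-product grid structure in the domain.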
Prof. Sheng is a professor in the Department of Mathematics at Baylor University, USA. He received his PhD from the University of Cambridge in 1990 under the supervision of Professor Arieh Iserles. His research areas include splitting and adaptive numerical methods for solving linear and nonlinear partial differential equations.