On Tuesday, the \(16^{\text{th}}\) of June, Patrick Rubin-Delanchy presented the paper "Adaptive Approximation and Estimation of Deep Neural Network to Intrinsic Dimensionality". Its abstract is given below.
We prove that the performance of deep neural networks (DNNs) is mainly determined by an intrinsic low-dimensionality of covariates. DNNs have shown outstanding empirical performance; hence, their theoretical properties are actively investigated to understand this mechanism. In particular, the behavior of DNNs on high-dimensional data is one of the most important concerns. However, this issue has not been sufficiently investigated from the perspective of covariates, even though high-dimensional data often have low intrinsic dimensionality in practice. In this paper, we derive bounds on the approximation error and the estimation error (i.e., the generalization error) of DNNs with intrinsically low-dimensional covariates. To this end, we utilize the notion of the Minkowski dimension and develop a novel proof technique. Consequently, we show that the convergence rates of these errors do not depend on the nominal high dimensionality of the data, but on its lower intrinsic dimension. We also show that the rate is optimal in the minimax sense. We identify an advantage of DNNs by showing that they can handle a broader class of intrinsically low-dimensional data than other adaptive estimators. Finally, we conduct a numerical simulation to validate the theoretical results.
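To make the claim about convergence rates concrete: for a \(\beta\)-Hölder target function whose covariates are supported on a set of upper Minkowski dimension \(d\) inside the ambient space \(\mathbb{R}^D\), the paper's bounds take roughly the following form, up to logarithmic factors (the notation here, with \(\hat{f}_n\) the DNN estimator from \(n\) samples and \(f^*\) the true regression function, is chosen for illustration; the precise statement and assumptions are in the paper):

\[
\mathbb{E}\!\left[\|\hat{f}_n - f^*\|_{L^2}^2\right] = \tilde{O}\!\left(n^{-\frac{2\beta}{2\beta + d}}\right), \qquad d \ll D,
\]

in contrast to the classical nonparametric rate \(n^{-2\beta/(2\beta + D)}\), which deteriorates as the nominal dimension \(D\) grows.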
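The intrinsic dimension in question is the Minkowski (box-counting) dimension of the covariate support. As a rough illustration, not taken from the paper, the following NumPy sketch estimates the box-counting dimension of a point cloud that lives in \(\mathbb{R}^{10}\) but is supported on a one-dimensional curve; the curve, sample size, and grid scales are made-up choices.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the Minkowski (box-counting) dimension of a point cloud:
    count the occupied grid cells N(eps) at each scale eps, then fit the
    slope of log N(eps) against log(1/eps)."""
    counts = []
    for eps in epsilons:
        # Assign each point to a grid cell of side eps and count distinct cells.
        cells = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
# Covariates nominally in R^10 but supported on a one-dimensional curve.
t = rng.uniform(0.0, 1.0, size=5000)
X = np.stack([np.cos(2 * np.pi * k * t) for k in range(10)], axis=1)

epsilons = [0.5, 0.25, 0.125, 0.0625]
print(box_counting_dimension(X, epsilons))  # roughly 1, far below the nominal 10
```

The fitted slope is the estimated intrinsic dimension, and it is this quantity, rather than the ambient dimension \(D = 10\), that governs the rates above.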