A novel one-layer recurrent neural network for the l1-regularized least square problem

Majid Mohammadi*, Yao Hua Tan, Wout Hofman, S. Hamid Mousavi

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

9 Citations (Scopus)
21 Downloads (Pure)

Abstract

The l1-regularized least squares problem arises in diverse fields. However, finding its solution is challenging because its objective function is not differentiable. In this paper, we propose a new one-layer recurrent neural network to find the optimal solution of the l1-regularized least squares problem. To solve the problem, we first convert it into a smooth quadratic minimization by splitting the desired variable into its positive and negative parts. A novel neural network is then proposed to solve the resulting problem and is guaranteed to converge to its optimal solution. Furthermore, the rate of convergence depends on a scaling parameter rather than on the size of the dataset. The proposed neural network is further adjusted to accommodate total variation regularization. Extensive experiments on l1- and total variation-regularized problems illustrate the reasonable performance of the proposed neural network.
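The abstract gives no implementation details, but the variable-splitting step it describes can be sketched: writing x = u − v with u, v ≥ 0 turns the non-differentiable l1 term into a linear one, so the problem becomes a smooth quadratic program over the nonnegative orthant, which a one-layer projection-type recurrent network can solve. The Python sketch below is a minimal illustration under these assumptions; the function name lasso_pnn, the scaling parameter eta, and the forward-Euler integration of generic projection-network dynamics are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def lasso_pnn(A, b, lam, eta=10.0, dt=0.05, n_steps=2000):
    """Sketch: solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by splitting
    x = u - v (u, v >= 0) and integrating one-layer projection-network
    dynamics with forward Euler. The exact dynamics in the paper may
    differ; eta plays the role of the scaling parameter that governs
    the convergence rate."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    # Smooth QP in z = [u; v] >= 0:
    #   min 0.5 z^T Q z + c^T z,
    #   Q = [[AtA, -AtA], [-AtA, AtA]],  c = [lam*1 - Atb; lam*1 + Atb]
    Q = np.block([[AtA, -AtA], [-AtA, AtA]])
    c = np.concatenate([lam - Atb, lam + Atb])
    alpha = 1.0 / np.linalg.norm(Q, 2)      # internal gradient step for stability
    relu = lambda s: np.maximum(s, 0.0)     # projection onto the nonnegative orthant
    z = np.zeros(2 * n)
    for _ in range(n_steps):
        # One-layer recurrent dynamics: dz/dt = eta * (P_+(z - alpha*(Qz + c)) - z)
        z = z + dt * eta * (relu(z - alpha * (Q @ z + c)) - z)
    u, v = z[:n], z[n:]
    return u - v                             # recover x from its positive/negative parts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = lasso_pnn(A, b, lam=0.1)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because Q is positive semidefinite and alpha is set to the reciprocal of its spectral norm, the projected map is nonexpansive and the discretized dynamics form a damped fixed-point iteration, which is why the sketch converges for any dt*eta ≤ 1; the continuous-time network in the paper is analyzed with Lyapunov arguments instead.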

Original language: English
Journal: Neurocomputing
DOIs
Publication status: Published - 2018

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the Taverne project 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • Convex
  • l1-regularization
  • Least squares
  • Lyapunov
  • Recurrent neural network
  • Total variation

