Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

**Supervised scale-regularized linear convolutionary filters.** / Loog, Marco; Lauze, François.

Loog, M & Lauze, F 2017, Supervised scale-regularized linear convolutionary filters. in *British Machine Vision Conference 2017, BMVC 2017.* BMVA Press, 28th British Machine Vision Conference, BMVC 2017, London, United Kingdom, 4/09/17.

Loog, M., & Lauze, F. (2017). Supervised scale-regularized linear convolutionary filters. In *British Machine Vision Conference 2017, BMVC 2017*. BMVA Press.

Loog M, Lauze F. Supervised scale-regularized linear convolutionary filters. In British Machine Vision Conference 2017, BMVC 2017. BMVA Press; 2017.

@inproceedings{30bab0e429b0414ca8600650da0fd9e9,

title = "Supervised scale-regularized linear convolutionary filters",

abstract = "We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, firstly, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and, secondly, arguing that the problem of scale has not been taken care of in a very satisfactory manner, we come to a combined resolution of both of these shortcomings by proposing a technique that we coin scale regularization. This regularization problem can also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic filter. In particular, we demonstrate that it clearly outperforms the de facto standard Tikhonov regularization, which is the one employed in ridge regression or Wiener filtering.",

author = "Marco Loog and Fran{\cc}ois Lauze",

year = "2017",

language = "English",

booktitle = "British Machine Vision Conference 2017, BMVC 2017",

publisher = "BMVA Press",

}
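The abstract contrasts the proposed scale regularization with the de facto standard Tikhonov regularization, as used in ridge regression. As a hedged illustration of that baseline only (not the paper's scale-regularized method, whose objective is not given in this record), the sketch below learns a small linear filter from an input/output image pair by closed-form ridge regression; the filter size, toy data, and regularization weight `lam` are invented for the example.

```python
import numpy as np

def extract_patches(img, k):
    """Collect all k-by-k patches of img as rows (valid positions only)."""
    h, w = img.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(img[i:i + k, j:j + k].ravel())
    return np.array(rows)

def learn_filter(x_img, y_img, k=3, lam=1e-3):
    """Ridge (Tikhonov) estimate of a k-by-k filter mapping x_img to y_img.

    Each valid patch of x_img becomes a row of the design matrix X; the
    output pixel at the patch center is the regression target.
    """
    X = extract_patches(x_img, k)
    c = k // 2
    y = y_img[c:x_img.shape[0] - c, c:x_img.shape[1] - c].ravel()
    # Closed-form Tikhonov solution: w = (X^T X + lam * I)^{-1} X^T y.
    A = X.T @ X + lam * np.eye(k * k)
    return np.linalg.solve(A, X.T @ y).reshape(k, k)

# Toy check: recover a known 3x3 averaging filter from synthetic data.
rng = np.random.default_rng(0)
true_w = np.full((3, 3), 1.0 / 9.0)
x = rng.normal(size=(32, 32))
# Build the target image by applying the true filter to all valid patches.
y_img = np.zeros_like(x)
y_img[1:-1, 1:-1] = (extract_patches(x, 3) @ true_w.ravel()).reshape(30, 30)
w_hat = learn_filter(x, y_img, k=3, lam=1e-6)
```

With noiseless targets and a small `lam`, the estimate `w_hat` recovers the averaging filter almost exactly; the paper's point is that for larger, noisier problems the isotropic Tikhonov penalty ignores filter scale, which their scale-regularization term addresses.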

TY - GEN

T1 - Supervised scale-regularized linear convolutionary filters

AU - Loog, Marco

AU - Lauze, François

PY - 2017

Y1 - 2017

N2 - We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, firstly, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and, secondly, arguing that the problem of scale has not been taken care of in a very satisfactory manner, we come to a combined resolution of both of these shortcomings by proposing a technique that we coin scale regularization. This regularization problem can also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic filter. In particular, we demonstrate that it clearly outperforms the de facto standard Tikhonov regularization, which is the one employed in ridge regression or Wiener filtering.

AB - We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, firstly, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and, secondly, arguing that the problem of scale has not been taken care of in a very satisfactory manner, we come to a combined resolution of both of these shortcomings by proposing a technique that we coin scale regularization. This regularization problem can also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic filter. In particular, we demonstrate that it clearly outperforms the de facto standard Tikhonov regularization, which is the one employed in ridge regression or Wiener filtering.

UR - http://www.scopus.com/inward/record.url?scp=85072408297&partnerID=8YFLogxK

M3 - Conference contribution

BT - British Machine Vision Conference 2017, BMVC 2017

PB - BMVA Press

ER -

ID: 67289350