Abstract
We start by demonstrating that an elementary learning task, learning a linear filter from training data by means of regression, can be solved very efficiently for feature spaces of very high dimensionality. In a second step, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization, and arguing that the problem of scale has so far not been handled satisfactorily, we address both shortcomings jointly by proposing a technique that we coin scale regularization. This regularization problem can also be solved relatively efficiently. In short, the idea is to properly control the scale of a trained filter, which we achieve by introducing a specific regularization term into the overall objective function. On an artificial filter-learning problem, we demonstrate the capabilities of our basic filter; in particular, it clearly outperforms the de facto standard Tikhonov regularization, which is the one employed in ridge regression and Wiener filtering.
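The Tikhonov-regularized baseline mentioned above can be sketched as follows. This is a minimal illustration of standard ridge regression for learning a linear filter, not the paper's scale-regularization method; the function name `ridge_filter` and all parameter values are our own assumptions for the example.

```python
import numpy as np

def ridge_filter(X, y, lam=1.0):
    # Closed-form Tikhonov/ridge solution: w = (X^T X + lam * I)^{-1} X^T y.
    # lam controls the strength of the l2 penalty on the filter weights.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy setup (our assumption): recover a known 16-tap filter from
# noisy linear observations, analogous to an artificial filter-learning task.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(16)
X = rng.standard_normal((200, 16))
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = ridge_filter(X, y, lam=0.1)
```

Note that the plain l2 penalty shrinks all coefficients uniformly and says nothing about the spatial scale of the filter, which is the gap the proposed scale-regularization term is designed to fill.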
| Original language | English |
|---|---|
| Title of host publication | British Machine Vision Conference 2017, BMVC 2017 |
| Publisher | BMVA Press |
| Number of pages | 11 |
| ISBN (Electronic) | 190172560X, 9781901725605 |
| Publication status | Published - 2017 |
| Event | 28th British Machine Vision Conference, BMVC 2017 - London, United Kingdom, 4 Sept 2017 → 7 Sept 2017 |
Conference
| Conference | 28th British Machine Vision Conference, BMVC 2017 |
|---|---|
| Country/Territory | United Kingdom |
| City | London |
| Period | 4/09/17 → 7/09/17 |