2018

  1. Experience selection in deep reinforcement learning for control

    de Bruin, T., Kober, J., Tuyls, K. & Babuška, R., 2018, In: Journal of Machine Learning Research, 19, 9, 56 p.

    Research output: Contribution to journal › Article › Scientific › peer-review

  2. Integrating State Representation Learning Into Deep Reinforcement Learning

    de Bruin, T., Kober, J., Tuyls, K. & Babuška, R., 2018, In: IEEE Robotics and Automation Letters, 3(3), p. 1394-1401.

    Research output: Contribution to journal › Article › Scientific › peer-review

2016

  3. Bayesian inference in dynamic domains using Logical OR gates

    Claessens, R., de Waal, A., de Villiers, P., Penders, A., Pavlin, G. & Tuyls, K., 2016, Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016). Hammoudi, S., Maciaszek, L., Missikoff, M. M., Camp, O. & Cordeiro, J. (eds.). Vol. 2, p. 134-142.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

  4. Improved deep reinforcement learning for robotics through distribution-based experience retention

    de Bruin, T., Kober, J., Tuyls, K. & Babuška, R., 2016, Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016). Kwon, D-S., Kang, C-G. & Suh, I. H. (eds.). Piscataway, NJ, USA: IEEE, p. 3947-3952.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review
