Data source: CROSBI

Market Making With Signals Through Deep Reinforcement Learning (CROSBI ID 297972)

Journal article | original scientific paper | international peer review

Gašperov, Bruno ; Kostanjčar, Zvonko. Market Making With Signals Through Deep Reinforcement Learning // IEEE Access, 9 (2021), 61611-61622. doi: 10.1109/ACCESS.2021.3074782

Authorship information

Gašperov, Bruno ; Kostanjčar, Zvonko

English

Market Making With Signals Through Deep Reinforcement Learning

Deep reinforcement learning has recently been successfully applied to a plethora of diverse and difficult sequential decision-making tasks, ranging from Atari games to robotic motion control. Among the foremost such tasks in quantitative finance is the problem of optimal market making. Market making is the process of simultaneously quoting limit orders on both sides of the limit order book of a security with the goal of repeatedly capturing the quoted spread while minimizing the inventory risk. Most of the existing analytical approaches to market making tend to be predicated on a set of strong, naïve assumptions, whereas current machine learning-based approaches either resort to crudely discretized quotes or fail to incorporate additional predictive signals. In this paper, we present a novel framework for market making with signals based on model-free deep reinforcement learning, addressing these shortcomings. A new state space formulation incorporating outputs from standalone signal generating units, as well as a novel action space and reward function formulation, are introduced. The framework is underpinned by both ideas from adversarial reinforcement learning and neuroevolution. Experimental results on historical data demonstrate the superior reward-to-risk performance of the proposed framework over several standard market making benchmarks. More specifically, the resulting reinforcement learning agent achieves 20% to 30% higher terminal wealth than the benchmarks while being exposed to only around 60% of their inventory risk. Finally, an insight into its policy is provided for the sake of interpretability.
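The abstract describes the agent's objective (repeatedly capturing the quoted spread while limiting inventory risk) and a state space extended with outputs from standalone signal units. The following Python fragment is a minimal illustrative sketch of how such a reward and state could be wired together; it is not the authors' implementation, and the names build_state, step, and inventory_penalty are hypothetical:

import numpy as np

# Illustrative sketch only: a toy market-making step in which the agent
# quotes a bid and an ask around the mid-price, earns the quoted spread
# when both sides are filled, and is penalized for holding inventory.
# Signal outputs are simply appended to the state vector.

def build_state(inventory, spread, signal_outputs):
    """State = agent/market variables plus standalone signal outputs (assumed layout)."""
    return np.concatenate(([inventory, spread], signal_outputs))

def step(mid_price, inventory, bid_offset, ask_offset,
         fill_bid, fill_ask, inventory_penalty=0.01):
    """One toy step: PnL from filled quotes minus a quadratic inventory penalty."""
    pnl = 0.0
    if fill_bid:                       # bought at mid_price - bid_offset
        inventory += 1
        pnl -= mid_price - bid_offset
    if fill_ask:                       # sold at mid_price + ask_offset
        inventory -= 1
        pnl += mid_price + ask_offset
    # Reward trades off the captured spread against inventory risk.
    reward = pnl - inventory_penalty * inventory ** 2
    return inventory, reward

if __name__ == "__main__":
    inv, r = step(mid_price=100.0, inventory=0,
                  bid_offset=0.05, ask_offset=0.05,
                  fill_bid=True, fill_ask=True)
    print(build_state(inv, 0.10, np.array([0.2, -0.1])), r)

With symmetric offsets of 0.05 and both quotes filled, the toy step returns the full quoted spread of 0.10 as reward and a flat inventory, mirroring the "capture the spread, penalize inventory" trade-off outlined in the abstract.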

Deep reinforcement learning ; Genetic algorithms ; High-frequency trading ; Machine learning ; Market making ; Stochastic control

not recorded

not recorded

not recorded

not recorded

not recorded

not recorded

Publication details

Volume: 9

Year: 2021

Pages: 61611-61622

Status: published

ISSN: 2169-3536

DOI: 10.1109/ACCESS.2021.3074782

Related fields

Computer science

Links
Indexing