Neural Dueling Bandits: Preference-Based Optimization with Human Feedback
Arun Verma and 4 other authors
Abstract: The contextual dueling bandit framework models problems in which a learner aims to find the best arm for a given context using noisy human preference feedback observed over arms selected in past contexts. However, existing algorithms assume a linear reward function, whereas in many real-life applications, such as online recommendations or ranking web search results, the reward function can be complex and non-linear. To overcome this challenge, we use a neural network to estimate the reward function from preference feedback on previously selected arms. We propose upper confidence bound- and Thompson sampling-based algorithms with sub-linear regret guarantees that efficiently select arms in each round. We also extend our theoretical results to contextual bandit problems with binary feedback, which is in itself a non-trivial contribution. Experimental results on problem instances derived from synthetic datasets corroborate our theoretical results.
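The approach the abstract describes can be sketched roughly as follows. This is a minimal numpy illustration, not the paper's algorithm: the one-hidden-layer network, the Bradley-Terry feedback model, the gradient-feature exploration bonus, and all sizes and coefficients (hidden width `m`, bonus weight `beta`, learning rate `lr`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 16  # context dimension and hidden width (illustrative sizes)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical non-linear reward, unknown to the learner and only
# observable through noisy pairwise preferences.
def true_reward(x):
    return np.cos(3.0 * (0.7 * x[0] - 0.3 * x[1]))

# One-hidden-layer ReLU network as the reward estimator.
W1 = rng.normal(0, 1 / np.sqrt(d), (m, d))
w2 = rng.normal(0, 1 / np.sqrt(m), m)

def f(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def grad(x):
    # Gradient of f(x) w.r.t. all parameters, flattened; serves as a
    # feature map for the UCB-style exploration bonus.
    pre = W1 @ x
    mask = (pre > 0).astype(float)
    return np.concatenate([np.outer(w2 * mask, x).ravel(),
                           np.maximum(pre, 0.0)])

p = m * d + m
V = np.eye(p)          # design matrix accumulating duel gradients
beta, lr, T, K = 1.0, 0.05, 50, 5

for t in range(T):
    arms = rng.uniform(-1, 1, (K, d))   # context-dependent arm set
    Vinv = np.linalg.inv(V)
    # UCB score: estimated reward plus gradient-feature bonus.
    scores = [f(x) + beta * np.sqrt(grad(x) @ Vinv @ grad(x)) for x in arms]
    i = int(np.argmax(scores))
    j = int(np.argmax([s if k != i else -np.inf
                       for k, s in enumerate(scores)]))
    # Simulated Bernoulli preference feedback (Bradley-Terry model).
    pref = rng.random() < sigmoid(true_reward(arms[i]) - true_reward(arms[j]))
    winner, loser = (arms[i], arms[j]) if pref else (arms[j], arms[i])
    # One logistic-loss gradient step on the observed duel outcome.
    g = grad(winner) - grad(loser)
    coef = lr * (1.0 - sigmoid(f(winner) - f(loser)))
    W1 += coef * g[:m * d].reshape(m, d)
    w2 += coef * g[m * d:]
    V += np.outer(g, g)
```

A Thompson-sampling variant would replace the deterministic bonus with a draw from a Gaussian whose covariance is derived from the same design matrix `V`.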
Submission history
From: Arun Verma
[v1] Wed, 24 Jul 2024 09:23:22 UTC (15,238 KB)
[v2] Wed, 16 Apr 2025 11:44:53 UTC (15,301 KB)