Potential Field Based Deep Metric Learning
Shubhang Bhatnagar and 1 other authors
Abstract: Deep metric learning (DML) involves training a network to learn a semantically meaningful representation space. Many current approaches mine n-tuples of examples and model interactions within each tuple. We present a novel, compositional DML model that, instead of operating on tuples, represents the influence of each example (embedding) by a continuous potential field and superposes these fields to obtain their combined global potential field. We use attractive and repulsive potential fields to represent interactions among embeddings from images of the same and different classes, respectively. Contrary to typical learning methods, where the mutual influence of samples is proportional to their distance, we enforce a reduction of this influence with distance, leading to a decaying field. We show that such decay helps improve performance on real-world datasets with large intra-class variations and label noise. Like other proxy-based methods, we also use proxies to succinctly represent sub-populations of examples. We evaluate our method on three standard DML benchmarks, Cars-196, CUB-200-2011, and SOP, where it outperforms state-of-the-art baselines.
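To make the field formulation concrete, the following is a minimal, hypothetical PyTorch sketch (our own illustration, not the paper's exact loss; the function name, the 1/distance potential form, and the eps parameter are assumptions). Each proxy exerts an attractive potential on same-class embeddings and a repulsive potential on different-class embeddings, the resulting gradient magnitude decays with distance, and the per-proxy potentials are superposed by averaging.

```python
import torch
import torch.nn.functional as F

def potential_field_loss(embeddings, labels, proxies, proxy_labels, eps=1e-2):
    # embeddings: (B, D) image embeddings, labels: (B,) class ids
    # proxies: (P, D) learnable proxy embeddings, proxy_labels: (P,) class ids
    emb = F.normalize(embeddings, dim=1)
    prx = F.normalize(proxies, dim=1)
    dist = torch.cdist(emb, prx) + eps                        # (B, P); eps avoids 1/0 blow-up
    same = labels.unsqueeze(1) == proxy_labels.unsqueeze(0)   # (B, P) class-match mask
    # Attractive potential -1/d toward same-class proxies, repulsive +1/d otherwise.
    # Minimizing these potentials gives forces ~ 1/d^2, i.e. influence that decays
    # with distance; averaging superposes the per-proxy fields into a global one.
    potential = torch.where(same, -1.0 / dist, 1.0 / dist)
    return potential.mean()

# Toy usage: 2 classes, one proxy per class
emb = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 2, (8,))
proxies = torch.randn(2, 128, requires_grad=True)
loss = potential_field_loss(emb, labels, proxies, torch.arange(2))
loss.backward()
```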
Submission history
From: Shubhang Bhatnagar
[v1] Tue, 28 May 2024 20:10:06 UTC (17,011 KB)
[v2] Sun, 1 Dec 2024 05:22:22 UTC (16,172 KB)
[v3] Thu, 10 Apr 2025 04:49:39 UTC (17,928 KB)