View a PDF of the paper titled Feature-Based Lie Group Transformer for Real-World Applications, by Takayuki Komatsu and 3 other authors
Abstract: The main goal of representation learning is to acquire meaningful representations from real-world sensory inputs without supervision. Representation learning also explains some aspects of human development. Various neural network (NN) models have been proposed that acquire empirically good representations; however, no formal definition of a good representation has been established. We recently proposed a method for categorizing changes between a pair of sensory inputs. A unique feature of this approach is that transformations between the two inputs are learned to satisfy algebraic structural constraints. Conventional representation learning often assumes that disentangled, independent feature axes constitute a good representation; however, we found that such a representation cannot account for conditional independence. To overcome this problem, we proposed a new method based on group decomposition in Galois algebra theory. Although this method is promising for defining a more general representation, it assumes pixel-to-pixel translation without feature extraction and can only process low-resolution images with no background, which prevents real-world application. In this study, we provide a simple method for applying our group decomposition theory to a more realistic scenario by combining feature extraction and object segmentation. We replace pixel translation with feature translation and formulate object segmentation as the grouping of features under the same transformation. We validated the proposed method on a practical dataset containing both real-world objects and backgrounds. We believe that our model will lead to a better understanding of how humans develop object recognition in the real world.
Submission history
From: Takayuki Komatsu [view email]
[v1] Thu, 5 Jun 2025 06:30:11 UTC (1,429 KB)
[v2] Fri, 6 Jun 2025 03:48:26 UTC (1,429 KB)
[v3] Mon, 9 Jun 2025 12:10:31 UTC (1,429 KB)