Abstract
(1) Background: Multimodal tactile cognition is paramount for robotic dexterity, yet its advancement is constrained by the limited realism of existing texture datasets and the difficulty of effectively fusing heterogeneous signals. This study introduces a comprehensive framework that overcomes these limitations by integrating a parametrically designed dataset with a novel fusion architecture. (2) Methods: To address the limited realism of existing datasets, we developed a universal texture dataset that leverages information entropy and Perlin noise to simulate a wide spectrum of surfaces. To tackle the difficulty of signal fusion, we designed the Multimodal Fusion Attention Transformer Network (MFT-Net). The architecture combines a Convolutional Neural Network (CNN) for local feature extraction with a Transformer for capturing global dependencies, and uses a Squeeze-and-Excitation (SE) attention module for adaptive cross-modal weighting. (3) Results: Evaluated on our custom-designed dataset, MFT-Net achieved a classification accuracy of 86.66%, exceeding traditional baselines by more than 21.99%. Furthermore, an information-theoretic analysis confirmed the dataset's efficacy, revealing a strong positive correlation between the textures' physical information content and the model's recognition performance. (4) Conclusions: Our work establishes a novel design-verification paradigm that directly links physical information with machine perception. This approach provides a quantifiable methodology for improving the generalization of tactile models, paving the way for enhanced robotic dexterity in complex, real-world environments.
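To make the dataset-generation idea concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation: it synthesizes a Perlin-noise surface and scores its complexity with Shannon entropy over an intensity histogram. The function names (`perlin_2d`, `shannon_entropy`), the lattice resolution, and the 256-bin histogram are all illustrative assumptions.

```python
import numpy as np

def perlin_2d(shape, res, rng):
    """Illustrative 2D Perlin-noise texture of size `shape`, with a
    gradient lattice of `res` cells per axis (shape divisible by res)."""
    def fade(t):
        # Smoothstep-style interpolant used in classic Perlin noise.
        return 6 * t**5 - 15 * t**4 + 10 * t**3

    d = (shape[0] // res[0], shape[1] // res[1])
    # Fractional (x, y) offset of every pixel inside its lattice cell.
    grid = np.mgrid[0:res[0]:1/d[0], 0:res[1]:1/d[1]].transpose(1, 2, 0) % 1
    # Random unit gradients at the lattice corners.
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Dot products between per-pixel offsets and the corner gradients.
    n00 = np.sum(np.dstack((grid[..., 0], grid[..., 1])) * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    # Bilinear interpolation with the fade weights.
    t = fade(grid)
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of the texture's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
texture = perlin_2d((256, 256), (8, 8), rng)  # coarser res -> smoother surface
print(f"entropy = {shannon_entropy(texture):.2f} bits")
```

Varying the lattice resolution (and, in richer variants, summing several octaves) sweeps the surface from smooth to highly irregular, which is one way an entropy measure could parameterize a spectrum of textures as the abstract describes.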
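Likewise, the following hypothetical PyTorch sketch illustrates the fusion pattern the abstract names: CNN stems for local feature extraction, an SE block for adaptive cross-modal channel weighting, and a Transformer encoder for global dependencies. The class names (`SEBlock`, `FusionSketch`), the choice of a visual plus a tactile input, and every dimension are assumptions for illustration, not MFT-Net's actual configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-average-pool the feature map, then
    learn per-channel weights so the network can emphasize one modality's
    channels over the other's."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze -> (B, C)
        return x * w[:, :, None, None]         # excite: channel reweighting

class FusionSketch(nn.Module):
    """Hypothetical stand-in for MFT-Net: per-modality CNN stems (local
    features), SE attention (cross-modal weighting), and a Transformer
    encoder over spatial tokens (global dependencies)."""
    def __init__(self, classes=10, dim=64):
        super().__init__()
        stem = lambda c_in: nn.Sequential(
            nn.Conv2d(c_in, dim // 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim // 2, dim // 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.visual_stem = stem(3)    # assumed RGB view of the surface
        self.tactile_stem = stem(1)   # assumed single-channel tactile map
        self.se = SEBlock(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, classes)

    def forward(self, visual, tactile):
        f = torch.cat([self.visual_stem(visual),
                       self.tactile_stem(tactile)], dim=1)  # (B, dim, H', W')
        f = self.se(f)                          # adaptive cross-modal weighting
        tokens = f.flatten(2).transpose(1, 2)   # (B, H'*W', dim) patch tokens
        tokens = self.transformer(tokens)       # global dependencies
        return self.head(tokens.mean(dim=1))    # pooled classification logits

model = FusionSketch()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```

The design choice this sketch captures is the division of labor stated in the abstract: convolutions handle local texture statistics cheaply, the SE gate decides how much each modality's channels contribute before fusion, and self-attention then relates distant regions of the fused map.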