Author
Listed:
- Amin Amiri
(Department of Computer Science and Engineering, University of Tennessee at Chattanooga (UTC), 615 McCallie Ave, Chattanooga, TN 37403, USA)
- Alireza Ghaffarnia
(Department of Computer Science and Engineering, University of Tennessee at Chattanooga (UTC), 615 McCallie Ave, Chattanooga, TN 37403, USA)
- Nafiseh Ghaffar Nia
(Department of Electrical and Computer Engineering, Northwestern University, 633 Clark Street, Evanston, IL 60208, USA
Feinberg School of Medicine, Division of Cardiac Surgery, Northwestern University, 633 Clark Street, Evanston, IL 60208, USA
Center for Artificial Intelligence, Bluhm Cardiovascular Institute, Northwestern Medicine, 633 Clark Street, Evanston, IL 60208, USA)
- Dalei Wu
(Department of Computer Science and Engineering, University of Tennessee at Chattanooga (UTC), 615 McCallie Ave, Chattanooga, TN 37403, USA)
- Yu Liang
(Department of Computer Science and Engineering, University of Tennessee at Chattanooga (UTC), 615 McCallie Ave, Chattanooga, TN 37403, USA)
Abstract
This paper introduces Harmonizer, a universal framework designed for tokenizing heterogeneous input signals, including text, audio, and video, to enable seamless integration into multimodal large language models (LLMs). Harmonizer employs a unified approach to convert diverse, non-linguistic signals into discrete tokens via its FusionQuantizer architecture, built on FluxFormer, to efficiently capture essential signal features while minimizing complexity. We enhance features through STFT-based spectral decomposition, Hilbert transform analytic signal extraction, and SCLAHE spectrogram contrast optimization, and train using a composite loss function to produce reliable embeddings and construct a robust vector vocabulary. Experimental validation on music datasets such as E-GMD v1.0.0, Maestro v3.0.0, and GTZAN demonstrates high fidelity across 288 s of vocal signals (MSE = 0.0037, CC = 0.9282, Cosine Sim. = 0.9278, DTW = 12.12, MFCC Sim. = 0.9997, Spectral Conv. = 0.2485). Preliminary tests on text reconstruction and UCF-101 video clips further confirm Harmonizer’s applicability across discrete and spatiotemporal modalities. Rooted in the universality of wave phenomena and Fourier theory, Harmonizer offers a physics-inspired, modality-agnostic fusion mechanism via wave superposition and interference principles. In summary, Harmonizer integrates natural language processing and signal processing into a coherent tokenization paradigm for efficient, interpretable multimodal learning.
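The abstract names three feature-enhancement stages: STFT-based spectral decomposition, Hilbert-transform analytic-signal extraction, and SCLAHE spectrogram contrast optimization. As a rough illustration of how such a pipeline fits together, the sketch below chains an STFT and a Hilbert transform from SciPy; the contrast step is a simple local-normalization stand-in, since the paper's SCLAHE operator is not reproduced here. All function names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an STFT -> Hilbert -> contrast-enhancement pipeline,
# loosely following the stages named in the Harmonizer abstract.
import numpy as np
from scipy.signal import stft, hilbert

def enhance_features(x, fs=16000, nperseg=512):
    """Illustrative feature-enhancement stages (not the paper's code)."""
    # 1) STFT: decompose the raw signal into a time-frequency spectrogram.
    freqs, times, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)

    # 2) Hilbert transform: the analytic signal yields the instantaneous
    #    amplitude envelope and unwrapped instantaneous phase.
    analytic = hilbert(x)
    envelope = np.abs(analytic)
    inst_phase = np.unwrap(np.angle(analytic))

    # 3) Contrast step (stand-in for SCLAHE, which is specific to the paper):
    #    log-compress the magnitude spectrogram, then normalize each
    #    frequency band to zero mean and unit variance over time.
    log_mag = np.log1p(mag)
    band_mean = log_mag.mean(axis=1, keepdims=True)
    band_std = log_mag.std(axis=1, keepdims=True) + 1e-8
    spec_enhanced = (log_mag - band_mean) / band_std

    return spec_enhanced, envelope, inst_phase

# Usage on a synthetic 1-second 440 Hz tone:
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
spec, env, phase = enhance_features(x, fs=fs)
```

For a pure sine of unit amplitude, the Hilbert envelope is approximately 1 away from the signal edges, which is a quick sanity check that the analytic-signal stage behaves as expected.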
Suggested Citation
Amin Amiri & Alireza Ghaffarnia & Nafiseh Ghaffar Nia & Dalei Wu & Yu Liang, 2025.
"Harmonizer: A Universal Signal Tokenization Framework for Multimodal Large Language Models,"
Mathematics, MDPI, vol. 13(11), pages 1-44, May.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:11:p:1819-:d:1667602
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:11:p:1819-:d:1667602. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.