Author
Listed:
- Tianci Gao
(Department IU-1 “Automatic Control Systems”, Bauman Moscow State Technical University, Moscow 105005, Russia)
- Konstantin A. Neusypin
(Department IU-1 “Automatic Control Systems”, Bauman Moscow State Technical University, Moscow 105005, Russia)
- Dmitry D. Dmitriev
(Department IU-1 “Automatic Control Systems”, Bauman Moscow State Technical University, Moscow 105005, Russia)
- Bo Yang
(Department IU-1 “Automatic Control Systems”, Bauman Moscow State Technical University, Moscow 105005, Russia)
- Shengren Rao
(Department IU-1 “Automatic Control Systems”, Bauman Moscow State Technical University, Moscow 105005, Russia)
Abstract
Learning from demonstration with multiple executions must contend with time warping, sensor noise, and alternating quasi-stationary and transition phases. We propose a label-free pipeline that couples unsupervised segmentation, duration-explicit alignment, and probabilistic encoding. A dimensionless multi-feature saliency (velocity, acceleration, curvature, direction-change rate) yields scale-robust keyframes via persistent peak–valley pairs and non-maximum suppression. A hidden semi-Markov model (HSMM) with explicit duration distributions is jointly trained across demonstrations to align trajectories on a shared semantic time base. Segment-level probabilistic motion models (GMM/GMR or ProMP, optionally combined with DMP) produce mean trajectories with calibrated covariances, directly interfacing with constrained planners. Feature weights are tuned without labels by minimizing cross-demonstration structural dispersion on the simplex via CMA-ES. Across UAV flight, autonomous driving, and robotic manipulation, the method reduces phase-boundary dispersion by 31% on UAV-Sim and by 30–36% under monotone time warps, noise, and missing data (vs. HMM); improves the sparsity–fidelity trade-off (higher time compression at comparable reconstruction error) with lower jerk; and attains nominal 2σ coverage (94–96%), indicating well-calibrated uncertainty. Ablations attribute the gains to persistence plus NMS, weight self-calibration, and duration-explicit alignment. The framework is scale-aware and computationally practical, and its uncertainty outputs feed directly into MPC/OMPL for risk-aware execution.
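The following is a minimal, illustrative sketch (not the authors' code) of the keyframe-extraction step described in the abstract: per-frame velocity, acceleration, curvature, and direction-change rate are made dimensionless, mixed with simplex weights, and filtered by peak persistence and non-maximum suppression. The function name, the median-based normalization, and the use of SciPy's peak prominence/distance as stand-ins for persistence filtering and NMS are assumptions for illustration only.

```python
# Illustrative sketch of multi-feature saliency keyframes for one demonstration.
# Normalization scheme and SciPy prominence/distance as persistence/NMS proxies
# are assumptions, not the published implementation.
import numpy as np
from scipy.signal import find_peaks

def saliency_keyframes(pos, dt, weights=(0.25, 0.25, 0.25, 0.25),
                       prominence=0.5, min_separation=10):
    """pos: (T, D) positions (D = 2 or 3) sampled at fixed step dt.
    Returns keyframe indices and the saliency signal."""
    vel = np.gradient(pos, dt, axis=0)                 # velocity
    acc = np.gradient(vel, dt, axis=0)                 # acceleration
    speed = np.linalg.norm(vel, axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)

    # Curvature |v x a| / |v|^3 (cross-product norm, 2D or 3D).
    if pos.shape[1] == 2:
        cross = np.abs(vel[:, 0] * acc[:, 1] - vel[:, 1] * acc[:, 0])
    else:
        cross = np.linalg.norm(np.cross(vel, acc), axis=1)
    curvature = cross / np.maximum(speed, 1e-8) ** 3

    # Direction-change rate: angle between consecutive unit velocities per step.
    unit_v = vel / np.maximum(speed[:, None], 1e-8)
    cosang = np.clip(np.einsum('ij,ij->i', unit_v[:-1], unit_v[1:]), -1.0, 1.0)
    dir_rate = np.concatenate([[0.0], np.arccos(cosang) / dt])

    # Make each channel dimensionless via a robust scale, then mix with
    # simplex weights (in the paper these weights are tuned by CMA-ES).
    feats = [speed, acc_mag, curvature, dir_rate]
    feats = [f / max(np.median(np.abs(f)), 1e-8) for f in feats]
    s = sum(w * f for w, f in zip(weights, feats))

    # Prominence keeps only persistent peak-valley pairs; `distance` enforces
    # a minimum spacing between keyframes, playing the role of NMS.
    peaks, _ = find_peaks(s, prominence=prominence, distance=min_separation)
    return peaks, s
```

A typical call would pass one demonstration's position trace and its sampling step, e.g. `keys, s = saliency_keyframes(traj, dt=0.02)`; the resulting keyframes would then seed segment boundaries for the duration-explicit HSMM alignment stage.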
Suggested Citation
Tianci Gao & Konstantin A. Neusypin & Dmitry D. Dmitriev & Bo Yang & Shengren Rao, 2025.
"Unsupervised Segmentation and Alignment of Multi-Demonstration Trajectories via Multi-Feature Saliency and Duration-Explicit HSMMs,"
Mathematics, MDPI, vol. 13(19), pages 1-29, September.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:19:p:3057-:d:1756255
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:19:p:3057-:d:1756255. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.