
A Generative AI approach to improve in-situ vision tool wear monitoring with scarce data

Authors

Listed:
  • Alberto Garcia-Perez

    (ITP Aero)

  • Maria Jose Gomez-Silva

    (Univ. Complutense de Madrid)

  • Arturo de la Escalera-Hueso

    (Univ. Carlos III de Madrid)

Abstract

Most aerospace turbine casings are machined on a vertical lathe. This paper presents a tool wear monitoring system using computer vision that analyses tool inserts once the machining process has been completed. By installing a camera in the robot magazine room and a tool cleaning device that removes chips and coolant residue, a clean tool image can be acquired. A subsequent Deep Learning (DL) model classifies the tool as acceptable or not, avoiding the drawbacks of alternative computer vision algorithms based on edges, dedicated features, etc. The model was trained with a significantly reduced number of images, to minimise the costly process of acquiring and classifying images during production. This was achieved by introducing a special lens and generative Artificial Intelligence (AI) models. This paper proposes two novel architectures to artificially increase the number of images available to train the DL model: SCWGAN-GP, a Scalable Conditional Wasserstein Generative Adversarial Network (WGAN) with Gradient Penalty, and the Focal Stable Diffusion (FSD) model, with class injection and a dedicated loss function. In addition, a K|Lens special optics was used to obtain multiple views of the vertical lathe inserts, providing further data augmentation in hardware from a reduced number of production samples. Given an initial dataset, the classification accuracy was increased from 80.0 % to 96.0 % using the FSD model. Using only 100 original images per insert type and wear condition, our methodology achieves 93.3 % accuracy, rising to 94.6 % when 200 images are used. This accuracy is considered to be within human inspector uncertainty for this use-case.
Fine-tuning the FSD model, with nearly 1 billion training parameters, showed superior performance compared to the SCWGAN-GP model, with only 80 million parameters, while requiring fewer training samples to produce higher-quality output images. Furthermore, visualisation of the output activation mapping confirms that the model bases its decisions on the right image features. Time to create the dataset was reduced from 3 months to 2 days using generative AI. Our approach thus enables the creation of industrial datasets with minimal effort and a significant speed-up compared with the conventional approach of acquiring the large number of images that DL models usually require to avoid over-fitting. Despite the good results, this methodology is only applicable to relatively simple cases, such as our inserts, where the images are not complex.
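The SCWGAN-GP architecture named in the abstract builds on the standard WGAN gradient-penalty term, lam * E[(||grad_x D(x_hat)||_2 - 1)^2], evaluated at points interpolated between real and generated samples. The sketch below is not the paper's implementation; it is a minimal illustration of that penalty using a toy linear critic D(x) = w . x, whose input-gradient is simply w, so the norm can be computed analytically. All names (gradient_penalty, lam, critic_weights) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(critic_weights, real, fake, lam=10.0):
    """Toy WGAN-GP penalty lam * E[(||grad_x D(x_hat)||_2 - 1)^2]
    for a linear critic D(x) = w . x (input-gradient is constant = w)."""
    # Interpolate between real and fake samples, as in WGAN-GP:
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    # For this linear critic the gradient at every x_hat is just w:
    grad_norms = np.full(x_hat.shape[0], np.linalg.norm(critic_weights))
    return lam * np.mean((grad_norms - 1.0) ** 2)

real = rng.normal(size=(8, 4))   # stand-ins for real insert images
fake = rng.normal(size=(8, 4))   # stand-ins for generated images

w_unit = np.array([1.0, 0.0, 0.0, 0.0])  # unit-norm critic: zero penalty
w_big = np.array([3.0, 0.0, 0.0, 0.0])   # norm 3: penalty 10 * (3-1)^2 = 40
print(gradient_penalty(w_unit, real, fake))  # 0.0
print(gradient_penalty(w_big, real, fake))   # 40.0
```

In a real critic network the gradient is not constant, so it would be obtained by automatic differentiation at each interpolated point; the penalty pushes the critic toward being 1-Lipschitz, which is what makes Wasserstein GAN training stable on small datasets such as the one described here.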

Suggested Citation

  • Alberto Garcia-Perez & Maria Jose Gomez-Silva & Arturo de la Escalera-Hueso, 2025. "A Generative AI approach to improve in-situ vision tool wear monitoring with scarce data," Journal of Intelligent Manufacturing, Springer, vol. 36(5), pages 3165-3183, June.
  • Handle: RePEc:spr:joinma:v:36:y:2025:i:5:d:10.1007_s10845-024-02379-2
    DOI: 10.1007/s10845-024-02379-2

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10845-024-02379-2
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10845-024-02379-2?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:joinma:v:36:y:2025:i:5:d:10.1007_s10845-024-02379-2. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.