Abstract
Welding, one of the most important joining techniques, has been greatly automated. However, in typical automated welding applications the welding parameters are preset and are not adjusted adaptively to counteract unpredicted disturbances. This limitation cannot meet the increasing requirements of the welding/manufacturing industry for quality, efficiency, and customization. Combining information sensing/processing with traditional welding manufacturing techniques has therefore become a major direction for revolutionizing the welding industry. In practical welding, the weld penetration, as measured by the back-side bead width, is a critical factor determining the integrity of the weld produced. However, the back-side bead width is difficult to monitor directly during manufacturing because it forms on the back side of the welded workpiece. Predicting the back-side bead width from conveniently sensed information from the welding process thus becomes a fundamental issue in intelligent welding.
An end-to-end, data-driven prediction system is proposed to predict the weld penetration status from top-side images during welding. In this method, a passive vision sensing system with two cameras is developed to monitor the top-side weld pool-arc region and the back-side bead simultaneously. The weld joints are then classified into three classes, i.e., under, desirable, and excessive penetration, according to the back-side bead width. Taking the weld pool-arc images as inputs and the corresponding penetration statuses as labels, an end-to-end convolutional neural network (CNN) is designed and trained in which the features are defined and extracted automatically. To increase the accuracy and training speed, a transfer learning approach based on the residual neural network (ResNet) is developed. This ResNet-based model is pre-trained on the ImageNet dataset to provide better feature extraction ability, and its fully connected layers are modified for our own dataset. The experiments show that this transfer learning approach decreases the training time. Furthermore, this paper proposes to fuse the present weld pool-arc image with two previous images, acquired 1/6 s and 2/6 s earlier, into a single image that reflects the dynamic welding phenomena; by fusing this temporal information into the input layer of the CNN (early fusion), the prediction accuracy is significantly improved. Given the critical role of weld penetration and the negligible impact on system implementation, this method represents major progress in the important field of weld penetration monitoring and is expected to provide even greater improvements in pulsed-current welding, where the process becomes highly dynamic.
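As a rough illustration of the two ideas described above, the following sketch shows (1) transfer learning from an ImageNet-pretrained ResNet whose fully connected head is replaced for the three penetration classes, and (2) early fusion of three consecutive weld pool-arc frames (current, 1/6 s earlier, 2/6 s earlier) into one network input. This is a minimal sketch, not the authors' released code; the PyTorch framework, the ResNet depth, and the channel-stacking form of the fusion are assumptions.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # under, desirable, excessive penetration

# (1) ImageNet-pretrained ResNet; only the classifier head is redefined.
#     ResNet-34 is an assumed depth for illustration.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

def early_fusion(frame_t, frame_t_minus_1, frame_t_minus_2):
    """Stack three grayscale frames (H x W tensors) as the three input
    channels of one image so the CNN input carries short-term dynamics."""
    return torch.stack([frame_t, frame_t_minus_1, frame_t_minus_2], dim=0)

# Example forward pass on one fused sample (batch of 1).
h, w = 224, 224
frames = [torch.rand(h, w) for _ in range(3)]  # placeholder grayscale frames
fused = early_fusion(*frames).unsqueeze(0)     # shape: (1, 3, 224, 224)
logits = model(fused)                          # shape: (1, 3)

In this reading, early fusion needs no architectural change: the three temporal frames simply occupy the three input channels the pretrained network already expects, while only the new fully connected layer is trained from scratch.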