Vision-based damage localization method for the robotic laser cladding process
At present, damage identification and localization in remanufacturing is a manual visual task: it takes time and effort and may lead to inaccurate repairs. To alleviate this problem, this paper proposes a vision-based automatic damage localization method that integrates a camera into the robotic laser cladding repair cell. Two case studies were conducted to analyze different configurations of Faster Region-based Convolutional Neural Networks (Faster R-CNN). The purpose of this study was to select the most suitable configuration for locating wear on damaged fixed bends. The collected images were used to train and test the Faster R-CNN models. The results show that both training and validation losses trended downward, and the mean average precision (mAP) reached 88.7%.
1. Introduction
Laser cladding (LC), also known as laser-based direct metal deposition (LMD), is an attractive additive manufacturing technology that has aroused great interest in aerospace, oil and gas, and mechanical engineering applications. This mature industrial process focuses a high-power laser beam to produce a melt pool on the substrate, while material is continuously fed into the pool through a coaxial nozzle and solidifies there. Compared with traditional technologies such as casting, forging, and machining, this layer-by-layer manufacturing approach can improve time and cost efficiency.
In robotic laser cladding applications, the inspection of wear areas is currently a manual process. The operator visually locates the damage and then uses a laser scanner to capture the surface geometry of the defect. The information gathered in this process is used to generate a repair strategy for the part. As part size increases, the process becomes more time-consuming, error-prone, and labor-intensive.
In this paper, we first propose integrating a vision sensor into the repair cell to record image data of damaged parts. Two case studies are then conducted on two different datasets and analyzed to compare the feasibility, accuracy, and time efficiency of common feature extractors for damage detection. Finally, a suitable model configuration is selected based on the results, and an evaluation is provided.
2. Methods
This study focuses on damage identification and localization on cylindrical fixed bends, more specifically on the damage and pad regions of the fixed bend. These are machine parts used in the oil and gas industry. For a worn fixed bend, the position of the pad must be distinguished, because the pad is the area that suffers the greatest damage and must be repaired.
2.1 Vision-based robotic laser cladding repair cell (RLCRC)
The robotic arm used is a Fanuc R-1000iA/80F, a high-speed handling robot for medium payloads. The camera is a UVC-G3-Bullet/UVC-G3-AF. Figure 1 shows a schematic of the cell setup.
Fig. 1 RLCRC setup.
3. Results and discussion
3.1 Case study 1
To develop a database of damaged fixed bend images, 72 images (resolution 1920 x 1080 pixels) of 8 different fixed bends were collected. Faster R-CNN needs a large amount of training data to generate high-performance models. This can be a burdensome requirement, because obtaining large amounts of data is expensive and the data is often not easily accessible. To overcome this problem, data augmentation is a widely accepted practice. In this study, geometric (horizontal and vertical flipping) and photometric (grayscale, hue, and exposure) augmentation techniques were applied to make the trained model more robust to changes in lighting and camera settings. Figure 2 shows sample images from the augmented dataset, which grew from 72 to 221 images.
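The flipping and exposure augmentations described above can be sketched in plain Python on a toy grayscale image. This is a minimal illustration of the general technique, not the tooling used in the study; the function names and the 2x2 example image are hypothetical.

```python
# Sketch of geometric (flips) and photometric (exposure) augmentation on a
# toy grayscale image stored as a nested list of 8-bit pixel intensities.
# Function names are illustrative, not taken from the paper.

def horizontal_flip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def vertical_flip(img):
    """Mirror the row order top-to-bottom."""
    return img[::-1]

def adjust_exposure(img, factor):
    """Scale pixel intensities, clamping to the 8-bit range [0, 255]."""
    return [[min(255, max(0, int(p * factor))) for p in row] for row in img]

image = [[10, 20],
         [30, 40]]

augmented = [
    horizontal_flip(image),       # [[20, 10], [40, 30]]
    vertical_flip(image),         # [[30, 40], [10, 20]]
    adjust_exposure(image, 1.5),  # [[15, 30], [45, 60]]
]
```

Each original image yields several such variants, which is how a small collection like the 72-image set here can grow to 221 training images without new photography.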
Fig. 2 Sample augmented images from the training dataset.
3.2 Comparative analysis and results
This study mainly evaluates mAP rather than object-proposal proxy metrics, because mAP is the most widely used metric for object detection.
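As a rough illustration of what sits behind an mAP score, the sketch below computes average precision (AP) for a single class from a confidence-sorted list of detections marked as true or false positives; mAP is then the mean of the per-class APs. The all-point interpolation used here is one common convention, not necessarily the exact evaluation code of the study.

```python
def average_precision(tp_flags, num_gt):
    """AP for one class. tp_flags: detections sorted by confidence, with 1
    marking a true positive and 0 a false positive. num_gt: number of
    ground-truth boxes for the class."""
    tps, fps = 0, 0
    precisions, recalls = [], []
    for flag in tp_flags:
        tps += flag
        fps += 1 - flag
        precisions.append(tps / (tps + fps))
        recalls.append(tps / num_gt)
    # Make the precision envelope monotonically non-increasing, then
    # integrate precision over recall (all-point interpolation).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Three of four detections are correct, against three ground-truth boxes.
print(round(average_precision([1, 1, 0, 1], num_gt=3), 4))  # 0.9167
```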
As can be seen in Figure 2, the two labels share similar features and constantly overlap in the images. These factors are assumed to cause the deviation and variation in the models, resulting in a relatively low mAP score. The training and validation losses of ResNet50 with one label are shown in Figure 3. Inference was run with the two model configurations, and the bounding-box predictions are shown in Figure 4.
Fig. 3 Training and validation losses versus the number of steps.
Fig. 4 Test dataset images with bounding-box outputs.
3.3 Case Study 2
3.3.1 Datasets
For autonomous damage detection, the camera position in the RLCRC remains fixed, which means the workstation camera images are always taken from the same setup. A new dataset was formed, comprising 437 original images of four different fixed bends (resolution: 1920 x 1080 pixels). As with the first dataset, the images were augmented, growing to 1049 images (see Figure 5).
Fig. 5 Sample augmented images from the new training dataset, which contains only images with identical camera settings.
3.3.2 Comparative analysis and results
Figure 6 illustrates the result metrics obtained from the ResNet50 model. As shown in Figure 6 (c, d), both training and validation losses fall to a stable point, which indicates no overfitting. The model is evaluated over multiple IoU thresholds (IoU = 0.50:0.05:0.95), which means it must perform well at every threshold to obtain a high mAP score.
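The IoU underlying these thresholds is the ratio of the overlap area between a predicted and a ground-truth box to the area of their union. A minimal sketch, with box format and example coordinates assumed for illustration rather than taken from the paper:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# COCO-style threshold sweep 0.50:0.05:0.95 - at each threshold a prediction
# counts as correct only if its IoU with a ground-truth box reaches it.
thresholds = [0.50 + 0.05 * k for k in range(10)]
pred, truth = (0, 0, 100, 100), (10, 10, 110, 110)
score = iou(pred, truth)                    # about 0.68
hits = sum(score >= t for t in thresholds)  # passes the first few thresholds
```

A detector that only just overlaps the ground truth scores well at IoU = 0.50 but contributes little at the stricter thresholds, which is why the averaged metric rewards tight localization.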
Figure 6 (b) shows the mAP at an IoU of 0.50, which reaches 100%.
Fig. 6 Result metric graphs: (a) mAP@0.5:0.95 IoU and (b) mAP@0.5 IoU; (c) validation loss and (d) training loss.
3.4 Discussion and Limitations
The purpose of this study is to develop an intelligent vision system that can identify and locate damaged areas. Localization is performed with a fixed camera orientation, meaning the camera's view remains unchanged throughout the process and across different parts. It is therefore more valuable to have a specialized model that locates the "pad" surface with high accuracy than a more general model with much lower accuracy.
The first case study was conducted on a relatively small dataset of 72 original images of eight fixed bends, while the second used 437 original images of four fixed bends.
The results of the second case study are more favorable because the goal is a well-trained, specialized model that detects damage in a specific environment. The larger dataset enables Faster R-CNN to learn the features of the damage and pad more accurately and yields a more robust, higher-performance model.
4. Conclusion and future work
Damage identification and localization in remanufacturing is a manual visual task. It can be time-consuming, tedious, and error-prone. With recent advances in computer vision, computing power, and access to large amounts of data, it is now worth exploring the use of this technology in remanufacturing. In this paper, a machine-learning-based method for automatic visual detection and localization of damage in a robotic laser cladding repair cell is proposed. To achieve this, two Faster R-CNN configurations combined with transfer learning were used. Two case studies were conducted on different datasets: the second used a larger and more homogeneous image set than the first. Their performance was compared and analyzed. The promising results of this research demonstrate the potential of vision-based Faster R-CNN techniques in the field of maintenance and remanufacturing.
It should be noted that the scope of this paper is to find the best model for fixed bend damage detection. In future work, the proposed method will be extended to work with a depth sensor in order to obtain the volumetric information needed to repair parts.
Source: Peer-review under responsibility of the scientific committee of the 54th CIRP Conference on Manufacturing Systems, 10.1016/j.procir.2021.11.139
- 2023-08-30