Smart Glasses Aid Inspection of Automotive Wire Harnesses
Automotive wire harnesses contain a variety of components, including connectors, relays, sensors and even small controllers. Inspecting them is a challenge. For example, relays for different models often share identical dimensions and can be installed in the same location. Attention to detail is critical, and the risk of errors is high.
Harness assemblers would like to automate the process, but technology is lacking. Inspecting harnesses with conventional vision systems is problematic. The camera must be stationary or mounted to a robotic arm. That’s a problem, because the system must work in tandem with the movements of assemblers. In complex assembly scenarios, image acquisition can suffer from motion blur and scale variation, both of which degrade accuracy.
Wearable image acquisition devices are a better option, because they are more flexible and adaptable. To that end, we have developed smart glasses that use machine vision, artificial intelligence and augmented reality to help assemblers inspect wire harnesses. Our system is specifically designed to check various parts of a harness, including the serial number, relay box labeling, relay types, the number of relays, and the size and color of the relays, based on features such as alphanumeric characters, colors and shapes.
The authors’ system consists of smart glasses, a wireless router, a server, a client computer, a display, and manufacturing execution system (MES) software. Illustration courtesy Guangzhou Industrial Intelligence Research Institute
For our system, we chose YOLOv5s AI-based vision software from Ultralytics Inc. YOLOv5 is in the You Only Look Once (YOLO) family of computer vision models and is commonly used for object detection. YOLOv5 comes in four main versions: small (s), medium (m), large (l), and extra large (x). Each offers progressively higher accuracy, but also takes progressively longer to train and run.
Compared with YOLOv4, YOLOv5 has a smaller weight file, shorter training and inference time, and performs well in terms of average detection accuracy. Compared with other AI-based vision software, YOLOv5s has significant advantages in handling complex motion and dynamic multi-object detection.
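As a baseline illustration, the stock YOLOv5s model (not the modified network described below) can be loaded and run on a single frame in a few lines of Python via PyTorch Hub; the image path here is a placeholder:

```python
import torch

# Load the small (s) variant of YOLOv5 from PyTorch Hub.
# This is the stock Ultralytics model, not the modified network described below.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on a single image; the path is a placeholder.
results = model('harness_frame.jpg')

# Print detected classes, confidences, and bounding boxes.
results.print()
print(results.pandas().xyxy[0])  # one row per detection
```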
To address issues specific to automotive wire harness assembly, such as image motion blur and variations in target image sizes, key modules of YOLOv5s were optimized. First, the Detect module was integrated with the Adaptive Spatial Feature Fusion Network (ASFF), so features at different scales could be effectively fused. This integration better captures the intricate details of harness components.
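As a simplified sketch of the adaptive fusion idea (not the full ASFF module, which also resamples the pyramid levels to a common size), each spatial location learns its own mixing weights over the three scales:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFSketch(nn.Module):
    """Simplified adaptive spatial feature fusion over three scales.

    Assumes the three inputs have already been resized to the same
    height, width, and channel count; the published ASFF module also
    performs that resampling internally.
    """
    def __init__(self, channels):
        super().__init__()
        # One 1x1 conv per scale produces a single-channel weight map.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(3)]
        )

    def forward(self, feats):  # feats: list of 3 tensors, each (N, C, H, W)
        # Stack per-scale weight maps and normalize across scales,
        # so every spatial location learns its own fusion ratio.
        weights = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1
        )                                  # (N, 3, H, W)
        weights = F.softmax(weights, dim=1)
        # Weighted sum of the three scales at each pixel.
        return sum(weights[:, i:i + 1] * feats[i] for i in range(3))
```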
Second, the C3 module incorporates the Global Context Network (GCNet), which draws on the strengths of non-local networks and squeeze-and-excitation networks (SENet). This enhancement bolsters the model’s understanding of global relationships, expands its perceptual range, and adapts more effectively to changes in scale.
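The paper describes the architecture at a high level; as a rough sketch of the global context idea (simplified from the published GCNet block, with illustrative layer sizes), the whole feature map is pooled with learned attention weights and the result is broadcast back to every position:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextSketch(nn.Module):
    """Simplified GCNet-style global context block.

    Pools the whole feature map with learned attention weights
    (the non-local idea), then transforms the pooled vector through
    a small bottleneck (the squeeze-and-excitation idea).
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # attention logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        n, c, h, w = x.shape
        # Attention-weighted global pooling over all H*W positions.
        weights = F.softmax(self.attn(x).view(n, 1, h * w), dim=2)   # (N, 1, HW)
        context = torch.bmm(weights, x.view(n, c, h * w).transpose(1, 2))
        context = context.transpose(1, 2).view(n, c, 1, 1)           # (N, C, 1, 1)
        # Broadcast-add the transformed context back to every position.
        return x + self.transform(context)
```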
Finally, to balance the overall performance of the system, Global Sparse Convolution (GSConv) replaces traditional convolution modules, reducing the model’s computational and parameter requirements. Additionally, data augmentation techniques were employed to increase the diversity of training samples, further enhancing the network’s generalization capabilities. Through these improvements, the overall detection effectiveness of the YOLOv5s system for wiring harness relay assembly was significantly enhanced.
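Again as a simplified sketch rather than the authors’ exact module, a GSConv-style layer pairs a standard convolution with a cheap depthwise convolution and a channel shuffle (assuming an even output channel count):

```python
import torch
import torch.nn as nn

class GSConvSketch(nn.Module):
    """Simplified GSConv-style lightweight convolution.

    Half the output channels come from a standard convolution and half
    from a cheap depthwise convolution; a channel shuffle then mixes
    the two groups. This cuts parameters and computation versus a
    full standard convolution of the same width.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        half = out_ch // 2
        pad = kernel_size // 2
        self.dense = nn.Conv2d(in_ch, half, kernel_size, stride, pad)
        self.cheap = nn.Conv2d(half, half, kernel_size, 1, pad, groups=half)

    def forward(self, x):
        dense = self.dense(x)
        cheap = self.cheap(dense)
        out = torch.cat([dense, cheap], dim=1)        # (N, out_ch, H, W)
        # Channel shuffle: interleave the dense and cheap halves.
        n, c, h, w = out.shape
        return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```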
In use, our system demonstrated excellent performance. With our system, assemblers were able to install relays in a harness in 43 seconds, on average, compared with 55 seconds without vision assistance, a reduction in cycle time of nearly 22 percent.
The smart glasses weigh 120 grams, including the camera and battery. Photo courtesy Guangzhou Industrial Intelligence Research Institute
The Glasses
We chose smart glasses for our system because they would not interfere with an assembler’s normal workflow. This allows workers to focus on their assembly tasks, while delegating the more detailed inspection tasks to the “electronic eyes.” Using the assembler’s first-person perspective, the system captures images of the assembly, actions and materials. These images are then transmitted wirelessly to a server for processing and analysis.
Ultimately, the system outputs assembly prompts and quality assessment results, verifying whether the worker’s operations conform to the technical requirements. The worker can then use these results to make corrections, ensuring an error-free assembly.
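The paper summarized here does not spell out the transmission protocol. As a purely hypothetical illustration, a client process on the glasses could stream JPEG-encoded frames to the inspection server over HTTP; the endpoint and camera index below are made up:

```python
import cv2
import requests

SERVER_URL = "http://192.168.1.10:8000/inspect"  # hypothetical server endpoint

cap = cv2.VideoCapture(0)  # first-person camera on the glasses
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG-encode the frame and send it to the server for analysis.
    ok, buf = cv2.imencode(".jpg", frame)
    resp = requests.post(SERVER_URL, data=buf.tobytes(),
                         headers={"Content-Type": "image/jpeg"})
    # The server replies with assembly prompts or warnings for the display.
    print(resp.json())
cap.release()
```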
The smart glasses weigh 120 grams, including the camera and battery. They can be worn in conjunction with other gear, such as a baseball cap, safety helmet or safety goggles. They are comfortable enough to wear all day and adaptable to the needs of various work environments.
The glasses are powered by a swappable lithium-ion battery that lasts 24 hours on a charge. A 12-megapixel autofocus camera supplies 4K streaming video at 30 frames per second. With an eight-core, 2.52-gigahertz CPU, 64 gigabytes of storage and 6 gigabytes of RAM, the glasses can handle the high-performance processing demands of industrial scenarios. A 28-degree field of view and two-axis vision adjustment let the camera keep the assembler’s work area in view. A WiFi connection, light waveguide screens, and voice, touchpad and button controls make the device easy to use.
Through a user-friendly interface, the assembler can easily operate the system, view the inspection results, and receive warning messages. Photo courtesy Guangzhou Industrial Intelligence Research Institute
The server exchanges data with the MES software. It is responsible for receiving, processing and storing product data from the MES and image data from the smart glasses. Through a user-friendly interface, the assembler can easily operate the system, view the inspection results, and receive warning messages.
Peripheral hardware includes a wireless router and an industrial power supply. The router handles the transmission and reception of wireless signals between the glasses and the server.
To train the AI vision system, the authors first needed to create an image data set. While wearing the glasses, the staff took video of the entire assembly process. Photo courtesy Guangzhou Industrial Intelligence Research Institute
Training the AI
To train the AI vision system, we first needed to create an image data set. While wearing the glasses, staff members recorded video of the entire assembly process. The glasses captured images with an original size of 600 by 600 pixels at 30 frames per second. Frames were then selected from the video, screened for quality, and labeled to produce the data set.
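A data set like this can be assembled by sampling frames from the recorded video, for example with OpenCV; the file paths and sampling interval below are illustrative:

```python
import os
import cv2

VIDEO_PATH = "assembly_run.mp4"   # illustrative path to a recorded session
OUT_DIR = "dataset/images"
SAMPLE_EVERY = 5                  # keep every 5th frame, for example

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % SAMPLE_EVERY == 0:
        # Saved frames are then screened for quality and labeled by hand.
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:06d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
```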
The images contain the five items to be inspected, including labels, alphanumeric characters and colors. Software was used to enhance the images, for example, by removing blur or adjusting brightness.
Ultimately, our data set consisted of 83,788 images, of which 90 percent were used as the training set and 10 percent as the validation set. A batch of 40 samples was fed to the network in each training iteration, and 300 rounds of training were performed.
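A minimal sketch of that split in Python, with the training invocation shown as a comment (the dataset configuration file name is hypothetical):

```python
import random

# Split the labeled image list 90/10 into training and validation sets.
random.seed(0)
images = [f"frame_{i:06d}.jpg" for i in range(83_788)]
random.shuffle(images)
cut = int(0.9 * len(images))
train_set, val_set = images[:cut], images[cut:]
print(len(train_set), len(val_set))  # 75409 8379

# The YOLOv5 repository is then trained with, for example:
#   python train.py --data harness.yaml --weights yolov5s.pt \
#       --batch-size 40 --epochs 300
# ("harness.yaml" is a hypothetical dataset config file.)
```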
This series of images shows how the authors’ refinement of the AI algorithms increased the ability of the vision system to detect specific features of a wire harness. Photo courtesy Guangzhou Industrial Intelligence Research Institute
To measure the effectiveness of the AI, we used three metrics: precision, recall and mean average precision. Precision (P) is the proportion of correctly predicted positive samples (true positives) out of all predicted positive samples (true positives plus false positives). Recall (R) is the proportion of true positives out of all labeled samples (true positives plus false negatives). Mean average precision (mAP) combines both: it is the average, across all categories, of each category’s average precision.
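In code, precision and recall reduce to simple ratios over detection counts; the counts below are illustrative, and computing mAP additionally requires sweeping the confidence threshold to trace each category’s precision-recall curve:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of labeled positives that were found."""
    return tp / (tp + fn) if tp + fn else 0.0

# Illustrative counts for one category (not the authors' data):
tp, fp, fn = 95, 3, 2
print(f"P = {precision(tp, fp):.3f}, R = {recall(tp, fn):.3f}")
# mAP averages, over all categories, the area under each category's
# precision-recall curve as the confidence threshold is varied.
```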
Our AI system achieved an mAP of 99.3 percent.
Editor’s note: This article is a summary of a research paper co-authored by Shuo Li, Wenhong Wang, Feidao Cao, Yuhang Zhang and Xiangpu Meng of the Guangzhou Industrial Intelligence Research Institute, as well as Hongyan Shi of Shenyang University of Chemical Technology in Shenyang, China. To read the entire paper, click here.
ASSEMBLY ONLINE
For more information on virtual and augmented reality, read these articles:
The Reality of Augmented Reality
Virtual Reality Systems Streamline Prototyping at CNH Global
Virtual Reality Aids Design of Assembly Lines