In the ever-evolving landscape of workplace safety, ensuring compliance with Personal Protective Equipment (PPE) regulations stands as a cornerstone in safeguarding your workforce. We understand the challenges posed by manual monitoring methods and the critical need for a more efficient and reliable solution. That’s why we’re thrilled to introduce our groundbreaking system, poised to revolutionise how you approach PPE compliance monitoring.

At the heart of our innovation lies the fusion of cutting-edge artificial intelligence (AI) technology with the imperative of workplace safety. We’ve developed a sophisticated system that automates the identification of PPE usage among your employees, transcending the limitations of traditional monitoring techniques.

Our system leverages advanced object detection models, including renowned technologies such as the YOLO (You Only Look Once) series and EfficientDet. These models have been meticulously trained on vast datasets, enabling them to accurately recognize and classify PPE items in real-time. From helmets to safety glasses and high-visibility vests, our system ensures unparalleled accuracy in assessing compliance.

The convolutional neural network (CNN) model was created by applying transfer learning to a base version of the YOLO deep learning network. The model predicts compliance across the categories NOT SAFE, SAFE, NoHardHat, NoGloves, NoChinstrap, NoSafetyShoes and NoJacket, accounting for the presence of safety jackets and hardhats. To train the model, a collection of 7,000 photos was gathered from real-time CCTV video recordings of many building sites. On the test data set, the model yielded an F1 score of 0.97, with average precision and recall of 97%. To facilitate real-time integration and adoption on building sites, the model triggers an alarm and generates a time-stamped report upon detection of a non-"SAFE" category.
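The alarm-and-report behaviour described above can be sketched roughly as follows. This is a minimal illustration, not the production code: the class names come from the list above, while the report fields and `site_id` parameter are assumptions.

```python
from datetime import datetime, timezone

SAFE_CLASSES = {"SAFE"}  # every other predicted class is treated as a violation

def handle_detection(predicted_class: str, site_id: str):
    """Raise an alarm and build a time-stamped report entry for any
    non-SAFE prediction; return None when the frame is compliant."""
    if predicted_class in SAFE_CLASSES:
        return None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "site": site_id,
        "violation": predicted_class,
        "alarm": True,
    }

entry = handle_detection("NoHardHat", site_id="site-7")
print(entry["violation"], entry["alarm"])  # NoHardHat True
```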

Dedicated hardware was used to test real-time processing of the algorithm on field data. We understand that deploying such technology in real-world scenarios can be daunting, particularly in environments where connectivity is limited or latency issues are prevalent. That's where our innovative approach comes into play: edge computing. By harnessing the power of Edge GPU hardware, such as the NVIDIA Jetson Nano and Jetson Orin, we've brought processing capabilities directly to your plant floor.

The YOLO model is used for transfer learning in the Keras framework. The final output layer is modified, by changing its filter sizes, to output the target classes (NOTSAFE, SAFE, NoJacket, NoHelmet and the other requirements). The trained YOLO weights are used as the initial set of weights for the CNN, and the convolutional and fully connected layers are all opened up for training with data from different sites. In addition, code to generate alarms and reports in cases of non-compliance was developed, to increase the utility of the algorithm on different sites.
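As a rough illustration of why the final layer's filter sizes must change with the class list: in YOLO-style detectors the last convolutional layer typically has `num_anchors * (num_classes + 5)` filters, since each anchor predicts four box coordinates, one objectness score, and one score per class. This is a general property of YOLO heads, not a detail taken from the model described here:

```python
def yolo_head_filters(num_classes: int, num_anchors: int = 3) -> int:
    """Filters in the final YOLO conv layer: each anchor predicts
    4 box coordinates + 1 objectness score + `num_classes` class scores."""
    return num_anchors * (num_classes + 5)

# e.g. the four main classes above (NOTSAFE, SAFE, NoJacket, NoHelmet)
print(yolo_head_filters(4))  # 3 * (4 + 5) = 27
```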

This means real-time performance without the need for extensive bandwidth or reliance on remote servers. With our system, you can rest assured that PPE compliance monitoring is not only efficient but also respects the privacy of your data.

An important part of training the machine learning algorithm was the collection and preparation of data to support validation of the model. Preparing the dataset is the most time-consuming and critical component, as it enables efficient training and accurate detection by the algorithm. Data was collected both manually and by scraping images online. Firstly, for manual collection, data was gathered from multiple locations and sites where videos of ongoing works were recorded. Frames were later extracted from the videos as images, at an interval of 3 seconds. The purpose of this data is to closely approximate the CCTV video data the algorithm uses to predict non-compliance in real time.
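Sampling one frame every 3 seconds reduces to simple index arithmetic on the video's frame rate. A minimal sketch (the 25 fps figure is an assumed example; a real pipeline would read the frames with a library such as OpenCV):

```python
def frame_indices(total_frames: int, fps: float, interval_s: float = 3.0) -> list:
    """Indices of the frames to keep when sampling one image every
    `interval_s` seconds from a video running at `fps` frames per second."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

# a 30-second clip at 25 fps, sampled every 3 s -> 10 images
idx = frame_indices(750, fps=25)
print(len(idx), idx[:3])  # 10 [0, 75, 150]
```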

Secondly, images were scraped from the internet using web crawlers developed in Python with the google_images_download library. The images were then manually checked for relevance to the study; this filtering discarded images with watermarks and synthetically generated images. Data augmentation was performed on 60% of the collected images using standard augmentations such as flipping and rotating 30 degrees to the right and to the left. Once the dataset was collected, the images were labeled using a graphical image annotation tool, and the annotations were saved as XML files.
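Graphical annotation tools of this kind commonly emit Pascal VOC-style XML; the exact schema used here is an assumption. Parsing one such file into (label, bounding box) pairs might look like this, using only the standard library:

```python
import xml.etree.ElementTree as ET

def parse_annotation(xml_text: str) -> list:
    """Read Pascal VOC-style XML and return (label, (xmin, ymin, xmax, ymax)) pairs."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        box = obj.find("bndbox")
        coords = tuple(int(box.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((label, coords))
    return objects

sample = """<annotation>
  <object><name>NoHelmet</name>
    <bndbox><xmin>48</xmin><ymin>30</ymin><xmax>120</xmax><ymax>96</ymax></bndbox>
  </object>
</annotation>"""
print(parse_annotation(sample))  # [('NoHelmet', (48, 30, 120, 96))]
```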

We adopted a random 90:8:2 train-validation-test split, and ensured that each class was adequately represented in all three subsets. The generated datasets had an XML annotation file for each image. The XML files were finally collated into a single text file in a code-readable format for training and validation purposes.
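A random 90:8:2 split can be sketched as follows; the fixed seed and the plain (non-stratified) shuffle are simplifying assumptions for illustration:

```python
import random

def split_dataset(items: list, ratios=(0.90, 0.08, 0.02), seed: int = 42):
    """Shuffle items and split them into train/validation/test by the given ratios."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_val = int(len(shuffled) * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(7000)))
print(len(train), len(val), len(test))  # 6300 560 140
```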

Upon completion of the three training phases, the network's final loss was 12.06. The dataset was tested and validated using a confusion matrix, and the accuracy of the model was determined by dividing the number of correct predictions by the total number of predictions. Furthermore, an entirely new dataset with a range of settings and backdrops was created from video footage of individuals wearing PPE. The algorithm's performance was then evaluated by testing the trained model on both image and video data.

Model performance was evaluated using precision, recall, and the F1 score, which is the harmonic mean of the two: F1 = 2 × (Precision × Recall) / (Precision + Recall). Where:

Precision is the number of true positive predictions divided by the total number of positive predictions made (true positive + false positive).

Recall is the number of true positive predictions divided by the total number of actual positive instances in the data (true positive + false negative).
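The two definitions above, together with the F1 score, can be computed directly from confusion-matrix counts. The counts below are illustrative only, not taken from the study's confusion matrix:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)            # TP / (TP + FP)
    recall = tp / (tp + fn)               # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# illustrative counts only
p, r, f1 = precision_recall_f1(tp=97, fp=3, fn=3)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.97 0.97 0.97
```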

The model has a 96.8% accuracy rate, an F1 score of 0.97, and average precision and recall of 0.97. These findings show that the model predicted with 96.8% accuracy on the test data set. The model also operated consistently, achieving the same 96.8% overall accuracy on the validation data set as on the test set.

AI Visual Inspection Solutions

A PPE compliance dashboard with charts is a visual tool used to track and monitor adherence to personal protective equipment (PPE) guidelines within a workplace or organization. Here’s a simple description of its key features:

  • Interactive Charts: various charts and graphs that visually represent compliance data, such as:
      • Bar charts showing compliance rates by department, team, or individual.
      • Line graphs illustrating trends in compliance over time.
      • Pie charts displaying the distribution of compliance across different types of PPE.
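The per-department compliance rates behind a bar chart like the one described above reduce to a simple aggregation. A sketch with made-up detection records; the field names (`department`, `status`) are assumptions, not the dashboard's actual schema:

```python
from collections import defaultdict

def compliance_by_department(records: list) -> dict:
    """Fraction of detections classified SAFE, grouped by department."""
    totals = defaultdict(int)
    safe = defaultdict(int)
    for rec in records:
        totals[rec["department"]] += 1
        if rec["status"] == "SAFE":
            safe[rec["department"]] += 1
    return {dept: safe[dept] / totals[dept] for dept in totals}

records = [
    {"department": "welding", "status": "SAFE"},
    {"department": "welding", "status": "NoHelmet"},
    {"department": "paint", "status": "SAFE"},
]
print(compliance_by_department(records))  # {'welding': 0.5, 'paint': 1.0}
```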

Real-Time Updates: The dashboard offers real-time updates, ensuring that users have access to the most current compliance information.

Drill-Down Capability: Users can drill down into specific data points within the charts to gain deeper insights into compliance patterns and outliers.

Many more interactive features let users drill down to the actual frames captured in real time from the CCTV cameras.

Our dashboard includes a natural-language interface (NLI) chat feature, integrated with our elsAI platform, for easy interaction with the captured data. Users can query the data with a single one-line question.

Examples: "What does the data for the last 24 hours show?" "Which PPE item is violated most often?"

In conclusion, our system represents a paradigm shift in how you approach PPE compliance monitoring. With unparalleled accuracy, real-time performance, and enhanced data privacy, we’re confident that our solution will redefine workplace safety standards in your plant.

If you’re ready to take the next step towards a safer, more efficient workplace, we invite you to reach out to us for a personalized demonstration of our system.
