000 | 03128aab a2200253 4500 | ||
---|---|---|---|
008 | 240216b20232023|||mr||| |||| 00| 0 eng d | ||
022 | _a0733-9364 | ||
100 | _aXiahou, Xiaer _9880985 | ||
700 | _aLi, Zirui _9880986 | ||
700 | _aXia, Jikang _9880987 | ||
700 | _aZhou, Zhipeng _9880988 | ||
700 | _aLi, Qiming _9880989 | ||
245 | _aA Feature-Level Fusion-Based Multimodal Analysis of Recognition and Classification of Awkward Working Postures in Construction | ||
300 | _a1-17 p. | ||
520 | _aDeveloping approaches for the recognition and classification of awkward working postures is of great significance for the proactive management of safety risks and work-related musculoskeletal disorders (WMSDs) in construction. Previous efforts have concentrated on wearable sensor-based or computer vision-based monitoring, but both have limitations that warrant further investigation. First, wearable sensor-based studies lack reliability because they are vulnerable to environmental interference. Second, conventional computer vision-based recognition suffers from classification inaccuracy under adverse environmental conditions, such as insufficient illumination and occlusion. To address these limitations, this study presents an innovative, automated approach for recognizing and classifying awkward working postures. The approach leverages multimodal data collected from various sensors and apparatuses, allowing for a comprehensive analysis across modalities. A feature-level fusion strategy is employed to train deep learning networks, including a multilayer perceptron (MLP), a recurrent neural network (RNN), and a long short-term memory (LSTM) network. Among these, the LSTM model achieves the best performance, with an accuracy of 99.6% and an F1-score of 99.7%. A comparison of metrics between single-modality and multimodal-fused training demonstrates that multimodal fusion significantly enhances classification performance. Furthermore, the study examines the performance of the LSTM network under adverse environmental conditions: its accuracy remains consistently above 90%, indicating that the multimodal fusion strategy improves the model's generalizability. In conclusion, this study contributes to the body of knowledge on proactive prevention of safety and health risks in the construction industry by offering an automated approach with excellent adaptability to adverse conditions, and its integration of diverse data through multimodal fusion may inspire future studies. (A minimal code sketch of the feature-level fusion strategy follows this record.) | ||
650 | _aAwkward Working Postures _9880990 | ||
650 | _aWearable Sensors _9878376 | ||
650 | _aMultimodal Fusion _9720247 | ||
650 | _aDeep Learning _9166900 | ||
650 | _aRisk Management | ||
773 0 | _dReston, Virginia, U.S.A. : American Society of Civil Engineers / American Concrete Institute _x0733-9364 _tASCE: Journal of Construction Engineering and Management | ||
856 | _uhttps://doi.org/10.1061/JCEMD4.COENG-13795 | ||
942 | _2ddc _n0 _cART _o14993 _pMr. Muhammad Rafique Al Haj Rajab Ali (Late) | ||
999 | _c814992 _d814992 | ||
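
The 520 abstract above describes a feature-level fusion strategy in which features from multiple modalities are concatenated before being fed to deep networks such as an LSTM. The following is a minimal PyTorch sketch of that general idea only; the modality names (IMU and skeleton features), dimensions, class count, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of feature-level multimodal fusion with an LSTM classifier.
# All sizes and modality names below are hypothetical, not values from the paper.
import torch
import torch.nn as nn

class FusionLSTMClassifier(nn.Module):
    """Concatenates per-timestep features from two modalities (feature-level
    fusion) and classifies the fused sequence with an LSTM."""
    def __init__(self, imu_dim=6, skeleton_dim=34, hidden_dim=64, num_classes=5):
        super().__init__()
        # The LSTM consumes the concatenated (fused) feature vector per timestep.
        self.lstm = nn.LSTM(imu_dim + skeleton_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, imu_seq, skeleton_seq):
        # imu_seq: (batch, time, imu_dim); skeleton_seq: (batch, time, skeleton_dim)
        fused = torch.cat([imu_seq, skeleton_seq], dim=-1)  # feature-level fusion
        _, (h_n, _) = self.lstm(fused)  # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])       # class logits per sequence

# Usage with random stand-in data: batch of 8 sequences, 100 timesteps each.
model = FusionLSTMClassifier()
logits = model(torch.randn(8, 100, 6), torch.randn(8, 100, 34))
posture_class = logits.argmax(dim=-1)  # predicted posture category per sample
```

Fusing at the feature level, i.e., concatenating modality features before the recurrent layer, lets the LSTM learn cross-modal temporal correlations; this is one plausible reading of why the abstract reports better robustness for multimodal-fused training than for single-modality training.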