Few-label learning for fault diagnosis based on contrastive representations
It is a common scenario in industrial applications that, although a large amount of monitoring data from mechanical machines is available, only a small fraction is labeled, owing to the scarcity of expert knowledge and labor. This hinders the development of powerful supervised fault diagnosis methods, which require a relatively large, fully labeled dataset containing monitoring data collected under healthy and various faulty states. To address this issue, a novel few-label learning method for fault diagnosis is proposed in this work. It first learns useful representations from a large amount of unlabeled data with the help of a contrastive learning technique, and then constructs a fault diagnosis model on top of these representations with the support of only a few labeled data. The proposed method is applied to a benchmark bearing fault diagnosis dataset to validate its effectiveness in few-label scenarios. Results show that the proposed method achieves higher accuracy than other state-of-the-art methods.
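To illustrate the kind of contrastive objective such a pretraining stage typically relies on (the abstract does not specify the exact loss, so this is a hedged sketch assuming an NT-Xent-style formulation over two augmented views of each unlabeled sample):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, d) arrays of embeddings for two augmented views of the same
    N unlabeled samples. View i in z1 and view i in z2 form a positive pair;
    all other pairs in the batch act as negatives. Lower loss means the two
    views of each sample are embedded closer together than unrelated samples.
    """
    z = np.concatenate([z1, z2], axis=0)                  # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize
    sim = z @ z.T / temperature                           # cosine similarities
    n = z1.shape[0]
    sim[np.eye(2 * n, dtype=bool)] = -np.inf              # mask self-similarity
    # The positive partner of row i is row i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())

# Toy check: embeddings of well-aligned views score a lower loss than
# embeddings of unrelated views, which is what drives representation learning.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(z1, rng.normal(size=(8, 16)))
```

In a full pipeline this loss would train an encoder on the unlabeled monitoring signals; the few labeled samples are then used only to fit a lightweight classifier on the frozen representations.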
fault diagnosis, few-label scenario, contrastive learning, residual network