Abstract—The scarcity of fault samples limits cross-domain knowledge transfer, training dominated by healthy samples biases decision boundaries, and distribution imbalance further aggravates negative transfer. To address these issues, a Multi-Source Contrastive Learning model with Pseudo-Label Self-Correction and Weight Adaptation (MSCL-PLWA) is proposed. Synthetic fault samples are first generated with a Wasserstein Generative Adversarial Network (WGAN). Contrastive learning then trains the model by evaluating the similarity between augmented instances and minimizing the distance between matched pairs of synthetic and real samples. A prototype-based pseudo-labeling algorithm subsequently performs pseudo-label self-correction, and a multi-pseudo-label-guided Local Maximum Mean Discrepancy (LMMD) strategy strengthens subdomain alignment. Furthermore, an adaptive weighting mechanism assigns higher weights to source domains that are more relevant to the target domain, reducing the influence of less relevant domains and mitigating negative transfer. Experimental results on two bearing test platforms demonstrate that MSCL-PLWA effectively suppresses negative transfer and achieves strong cross-domain fault diagnosis performance.
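The pseudo-label-guided LMMD mentioned above can be illustrated with a minimal sketch: class-conditional MMD computed per class, where source samples contribute one-hot label weights and target samples contribute pseudo-label probabilities. The RBF bandwidth, feature shapes, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2.0 * sigma ** 2))

def lmmd(xs, ys, xt, yt_prob, num_classes, sigma=1.0):
    """Local MMD: class-conditional MMD averaged over classes.

    xs, xt      : (n_s, d), (n_t, d) feature matrices (source / target)
    ys          : (n_s,) hard source labels
    yt_prob     : (n_t, num_classes) target pseudo-label probabilities
    """
    ws = np.eye(num_classes)[ys]              # one-hot source weights
    ws = ws / np.maximum(ws.sum(0), 1e-8)     # normalize per class
    wt = yt_prob / np.maximum(yt_prob.sum(0), 1e-8)
    Kss = gaussian_kernel(xs, xs, sigma)
    Ktt = gaussian_kernel(xt, xt, sigma)
    Kst = gaussian_kernel(xs, xt, sigma)
    loss = 0.0
    for c in range(num_classes):
        a = ws[:, c:c + 1]                    # per-class source weights
        b = wt[:, c:c + 1]                    # per-class target weights
        # Weighted-mean discrepancy in RKHS for subdomain (class) c.
        loss += float(a.T @ Kss @ a + b.T @ Ktt @ b - 2.0 * a.T @ Kst @ b)
    return loss / num_classes
```

When the target features and pseudo-labels coincide with the source, the per-class discrepancy vanishes; shifting the target features increases the loss, which is the signal the subdomain-alignment term minimizes during training.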