Fault diagnosis of analog circuits is essential for guaranteeing the reliability and maintainability of electronic systems. In this paper, a novel analog circuit fault diagnosis approach is proposed based on greedy kernel principal component analysis (GKPCA) and the one-against-all support vector machine (OAASVM). To build a successful SVM-based fault classifier, it is essential to eliminate noise and extract fault features. Because KPCA outperforms linear PCA at extracting nonlinear fault features and eliminating noise, it is adopted in the proposed approach. However, when KPCA is applied to extract fault features of analog circuits, the storage required for the kernel matrix grows quadratically, and the cost of its eigendecomposition grows cubically, with the number of training samples. Therefore, GKPCA, which approximates KPCA with a small representation error, is introduced to enhance computational efficiency. Grounded in statistical learning theory and the structural risk minimization principle, SVM offers good classification accuracy and generalization performance. The extracted fault features are then used as the inputs of the OAASVM to solve the fault diagnosis problem. The effectiveness of the proposed approach is verified by experimental results.
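To illustrate why a greedy approximation of the kernel matrix pays off, the sketch below builds a low-rank factorization of an RBF kernel matrix by greedily selecting pivot samples (pivoted incomplete Cholesky, one common realization of greedy kernel approximation; the paper's exact GKPCA procedure may differ in detail). The data, kernel width, and rank are synthetic and purely illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_kernel_approx(X, rank, gamma=1.0):
    """Greedy low-rank approximation K ~= G @ G.T via pivoted incomplete
    Cholesky: at each step pick the sample with the largest residual
    diagonal, so only `rank` kernel columns are ever formed
    (O(n * rank) storage instead of the full O(n^2) kernel matrix)."""
    n = X.shape[0]
    G = np.zeros((n, rank))
    diag = np.ones(n)          # k(x, x) = 1 for the RBF kernel
    pivots = []
    for j in range(rank):
        i = int(np.argmax(diag))           # greedy pivot choice
        pivots.append(i)
        col = rbf_kernel(X, X[i:i + 1], gamma).ravel()
        G[:, j] = (col - G[:, :j] @ G[i, :j]) / np.sqrt(diag[i])
        diag = np.clip(diag - G[:, j] ** 2, 0.0, None)
    return G, pivots

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # synthetic stand-in for fault-feature data
K = rbf_kernel(X, X)                # full kernel matrix, built only to measure error
G, pivots = greedy_kernel_approx(X, rank=40)
rel_err = np.linalg.norm(K - G @ G.T) / np.linalg.norm(K)
# rel_err is the small representation error of the greedy approximation
```

Note that the greedy loop never needs the full kernel matrix: each iteration forms a single kernel column, which is exactly what makes the approach attractive when the number of training samples grows.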
Because the solution of the least squares support vector regression machine (LS-SVRM) is not sparse, prediction is slow, which limits its applications. The existing adaptive pruning algorithm for LS-SVRM suffers from slow training and unsatisfactory generalization performance, especially on large-scale problems. Hence, an improved algorithm is proposed. To accelerate training, the pruned data point and a fast leave-one-out error are employed to validate the temporary model obtained after decremental learning. To improve generalization performance, a novel objective function in the termination condition, which involves the constraints generated by all training data points, is combined with three pruning strategies. The effectiveness of the proposed algorithm is tested on six benchmark datasets. The resulting sparse LS-SVRM model has a faster training speed and better generalization performance.
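The non-sparsity that motivates pruning can be seen directly from the LS-SVRM training equations: the model is obtained by solving a linear (KKT) system in which every training point receives a coefficient. The sketch below, on synthetic data with illustrative hyperparameters, fits an LS-SVRM and then runs a naive prune-and-refit loop; the paper's algorithm instead uses decremental learning and a fast leave-one-out error to avoid full retraining, which this simple loop does not attempt to reproduce.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, gamma=1.0):
    """Solve the LS-SVRM KKT linear system
        [0   1^T     ] [b    ]   [0]
        [1   K + I/C ] [alpha] = [y]
    Every training point gets a coefficient alpha_i, so the solution
    is dense -- the very defect that pruning algorithms address."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, coefficients alpha

def lssvr_predict(X_train, alpha, b, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(100, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=100)

b, alpha = lssvr_fit(X, y)
frac_nonzero = np.mean(np.abs(alpha) > 1e-6)   # density of the solution

# Naive pruning loop: repeatedly drop the point with the smallest |alpha|
# and refit from scratch (a full retrain per step; the improved algorithm
# in the paper replaces this with cheap decremental updates).
Xp, yp, ap, bp = X.copy(), y.copy(), alpha, b
for _ in range(50):
    drop = int(np.argmin(np.abs(ap)))
    Xp = np.delete(Xp, drop, axis=0)
    yp = np.delete(yp, drop)
    bp, ap = lssvr_fit(Xp, yp)

# Accuracy of the pruned (half-size) model on the full training set
rmse_pruned = np.sqrt(np.mean((lssvr_predict(Xp, ap, bp, X) - y) ** 2))
```

Even this crude strategy halves the number of support vectors while keeping the fit close, which is the basic trade-off the pruning literature refines.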