To resolve the decorrelation problem for coherent noncircular signals, a new eigenspace algorithm (eigenspace direction of arrival, ES-DOA) is proposed. By exploiting the noncircularity of the sources, the number of array elements is virtually extended, doubling the array information relative to the original array, so the number of estimable sources breaks the M-1 limit (where M is the number of array elements). The covariance matrix carrying the doubled information is then reconstructed, and a new eigenspace algorithm is applied for decorrelation, making maximal use of both the noise-subspace and signal-subspace information while avoiding the array-aperture loss of spatial smoothing and the heavy computational cost of maximum-likelihood algorithms. The method also estimates the source powers, which raises the probability of successfully estimating low-power signals. Simulation results show that the method is highly robust for direction-of-arrival estimation.
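The virtual array extension described above can be sketched as follows. This is a minimal illustration, not the ES-DOA algorithm itself: the array geometry, source angles, SNR, and snapshot count are all hypothetical, and only the covariance construction step is shown. Stacking the received data with its conjugate exploits the fact that strictly noncircular (here, real-valued BPSK-like) sources carry information in both the data and its conjugate, doubling the effective sensor count from M to 2M.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200                      # sensors and snapshots (hypothetical values)
angles = np.deg2rad([10.0, 35.0])  # two assumed source directions

# Steering matrix of a half-wavelength uniform linear array
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(angles)))

# Strictly noncircular (real-valued, BPSK-like) source waveforms
S = rng.choice([-1.0, 1.0], size=(len(angles), N))
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = A @ S + 0.1 * noise

# Virtual array extension: stacking the data with its conjugate
# doubles the effective number of sensors from M to 2M
Y = np.vstack([X, X.conj()])
R_ext = Y @ Y.conj().T / N
print(R_ext.shape)  # (8, 8): twice the dimension of the M x M covariance
```

Subspace processing (e.g. an eigendecomposition of `R_ext`) then operates on the 2M-dimensional extended covariance, which is what lifts the M-1 limit on the number of resolvable sources.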
To solve discrete multi-objective optimization problems, a non-dominated sorting quantum particle swarm optimization (NSQPSO) algorithm, combining non-dominated sorting with quantum particle swarm optimization, is proposed, and its performance is evaluated on five classical benchmark functions. Quantum particle swarm optimization (QPSO) applies quantum computing theory to particle swarm optimization and thus inherits the advantages of both, yielding a faster convergence rate and more accurate convergence values; it is therefore used as the evolutionary engine of NSQPSO. NSQPSO is then applied to the cognitive radio spectrum allocation problem. Previous spectrum allocation methods consider only a single objective, i.e., network utilization or fairness, whereas NSQPSO considers both simultaneously by obtaining a set of Pareto-front solutions. A cognitive radio system can select one solution from the Pareto front according to the weights assigned to network reward and fairness; if one weight is one and the other is zero, the problem reduces to single-objective optimization, so NSQPSO has a much wider range of application. Experimental results show that in small dimensions NSQPSO obtains the same non-dominated solutions as exhaustive search in far less time, while in large dimensions, where exhaustive search is infeasible, NSQPSO still solves the problem, which demonstrates its effectiveness.
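The non-dominated sorting step at the heart of NSQPSO can be sketched for the bi-objective case described above, where each candidate allocation is scored by a (network utilization, fairness) pair and both objectives are maximized. This is a generic Pareto-front extraction, not the paper's implementation; the sample points are illustrative.

```python
def dominates(q, p):
    """q dominates p if q is no worse in both objectives and strictly
    better in at least one (both objectives maximized)."""
    return q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])

def pareto_front(points):
    """Indices of non-dominated points, i.e. the first Pareto front."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Hypothetical (utilization, fairness) scores for four candidate allocations
pts = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.0, 1.0)]
print(pareto_front(pts))  # [0, 1, 2] -- (2.0, 1.0) is dominated by (2.0, 2.0)
```

In the full algorithm this sorting ranks the swarm each generation, and the retained front is what lets the system later pick a single operating point by weighting network reward against fairness.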
To solve combinatorial optimization problems effectively, a membrane-inspired quantum bee colony optimization (MQBCO) algorithm is proposed for scientific computing and engineering applications. MQBCO applies membrane computing theory to quantum bee colony optimization (QBCO), an effective discrete optimization algorithm. The global convergence of MQBCO is proved by Markov theory, and its validity is verified on classical benchmark functions. MQBCO is then used to solve decision-engine problems in cognitive radio systems. By hybridizing QBCO with membrane computing theory, the quantum states and observation states of the quantum bees evolve within the membrane structure. Simulation results for a cognitive radio system show that the proposed decision-engine method is superior to traditional intelligent decision-engine algorithms in convergence, precision, and stability. Simulation experiments under different communication scenarios illustrate that the balance among the three objective functions under the adapted parameter configuration is consistent with the weights of the three normalized objective functions.
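The quantum-state and observation-state evolution mentioned above can be illustrated with a single Q-bit, the basic encoding unit in quantum-inspired algorithms such as QBCO. This is a generic sketch, not the MQBCO update rule: the rotation step size and the angle encoding are assumptions. A Q-bit angle theta encodes a probability, observation collapses it to a classical bit, and a rotation nudges the angle toward the corresponding bit of the best solution found so far.

```python
import math
import random

def observe(theta):
    """Collapse a Q-bit to a classical bit: P(bit = 1) = sin^2(theta)."""
    return 1 if random.random() < math.sin(theta) ** 2 else 0

def rotate(theta, best_bit, delta=0.05 * math.pi):
    """Rotate the Q-bit angle toward the best solution's bit.
    delta is an assumed step size; real variants adapt it."""
    target = math.pi / 2 if best_bit == 1 else 0.0
    step = min(delta, abs(target - theta))
    return theta + math.copysign(step, target - theta)

# One evolution step: observe, then rotate toward a (hypothetical) best bit
theta = 0.3
bit = observe(theta)
theta = rotate(theta, best_bit=1)   # angle moves toward pi/2, raising P(1)
```

In a membrane-inspired variant, populations of such Q-bit strings evolve inside separate membranes, with membrane communication rules exchanging the best observed solutions between regions.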
Massive MIMO is one of the key technologies of future 5G communications, satisfying the requirements of high speed and large capacity. This paper considers joint antenna selection and power allocation design to promote energy conservation while providing good quality of service (QoS) for the whole massive MIMO uplink network. Unlike previous related works, hardware impairments, transmission efficiency, and the energy consumed by the circuits and antennas are all taken into account. To guarantee QoS, minimum-rate constraints are imposed on each user and on the system as a whole, which increases the complexity of the power allocation problem of maximizing energy and spectral efficiency. To this end, a quantum-inspired social emotional optimization (QSEO) algorithm is proposed to obtain the optimal power control strategy for massive MIMO uplink networks. Simulation results demonstrate the advantages of QSEO over previous strategies.
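The fitness that such a power allocation search optimizes can be sketched as an energy-efficiency objective: sum rate divided by total consumed power, including a fixed circuit term. This is a simplified, interference-free model, not the paper's system model; the function name, the per-user gains, and the circuit-power constant are all illustrative assumptions.

```python
import numpy as np

def energy_efficiency(p, g, sigma2=1.0, p_circuit=1.0):
    """Bits per unit power for an interference-free uplink sketch.
    p: per-user transmit powers, g: per-user channel gains (assumed model);
    p_circuit models the fixed circuit/antenna energy consumption."""
    p, g = np.asarray(p, float), np.asarray(g, float)
    rates = np.log2(1.0 + p * g / sigma2)        # per-user Shannon rates
    return rates.sum() / (p.sum() + p_circuit)   # sum rate over total power

# A swarm-based optimizer such as QSEO would score each candidate power
# vector with this fitness, rejecting those that violate minimum-rate
# constraints per user and for the whole system.
ee = energy_efficiency([1.0, 1.0], [1.0, 3.0])
print(ee)  # 3 bits/s/Hz total over 3 units of power -> 1.0
```

The minimum-rate constraints mentioned in the abstract would enter as penalty terms or feasibility checks on `rates` before a candidate is accepted.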