
Time Series Forecasting | MATLAB Implementation of QPSO-BiLSTM, PSO-BiLSTM and BiLSTM Time Series Prediction

Contents

Results at a Glance
Basic Description
Program Design
References

Results at a Glance

Basic Description

1. MATLAB implementation of QPSO-BiLSTM, PSO-BiLSTM and BiLSTM neural networks for time series prediction;

2. The input is univariate (one-dimensional) time series data;

3. Runs in MATLAB. Execute Main1BiLSTMTS, Main2PSOBiLSTMTS, Main3QPSOBiLSTMTS and Main4CDM in order; the remaining files are function files and do not need to be run. Keep all programs in one folder; data is the dataset. The experiment compares a BiLSTM (bidirectional long short-term memory network) against a PSO-optimized BiLSTM (PSO-BiLSTM) and a quantum-behaved-PSO-optimized BiLSTM (QPSO-BiLSTM), and can be applied to wind power, photovoltaic and other load forecasting tasks. The data is single-input single-output; PSO and QPSO optimize four hyperparameters: the number of nodes in hidden layer 1, the number of nodes in hidden layer 2, the maximum number of training epochs, and the learning rate (an illustrative bounds setup is sketched in the Program Design section below).

4. The command window prints MAE, MAPE, RMSE and R2 (a minimal computation sketch follows this list);
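As a minimal sketch of item 4 (the variable names T_test and T_sim are assumptions and are not taken from the toolbox files), the four reported metrics can be computed from the true and predicted test sequences like this:

% Minimal sketch (assumed variable names): error metrics printed to the command window
% T_test : true test-set values, T_sim : model predictions, both 1-by-N vectors
mae  = mean(abs(T_sim - T_test));                                        % MAE
mape = mean(abs((T_sim - T_test) ./ T_test)) * 100;                      % MAPE (%)
rmse = sqrt(mean((T_sim - T_test).^2));                                  % RMSE
r2   = 1 - sum((T_test - T_sim).^2) / sum((T_test - mean(T_test)).^2);   % R2
disp(['MAE  = ', num2str(mae)])
disp(['MAPE = ', num2str(mape), ' %'])
disp(['RMSE = ', num2str(rmse)])
disp(['R2   = ', num2str(r2)])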

Program Design

Complete program and data download: send the blogger a private message with the keyword "QPSO-BiLSTM, PSO-BiLSTM and BiLSTM time series prediction".
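The optimization loops below assume lower and upper bound vectors xmin/xmax over the four optimized hyperparameters, with the last dimension (j == dim) treated as the floating-point learning rate and the other dimensions rounded to integers. The concrete values in this sketch are illustrative assumptions only, not taken from the original files:

% Illustrative assumption only -- search-space bounds for the four optimized hyperparameters.
% Dimensions 1-3 are integers (hidden layer 1 nodes, hidden layer 2 nodes, max epochs);
% dimension 4 (the last one) is the floating-point learning rate, matching the rounding logic below.
dim     = 4;                       % number of optimized hyperparameters
xmin    = [10,  10,  50,  0.001];  % lower bounds (example values)
xmax    = [200, 200, 300, 0.01 ];  % upper bounds (example values)
PopNum  = 20;                      % swarm size (example value)
Maxstep = 30;                      % number of QPSO iterations (example value)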

% ===== QPSO-BiLSTM: quantum-behaved PSO search over the hyperparameters (excerpt) =====
for i = 1:PopNum                    % randomly initialize positions
    for j = 1:dim
        if j == dim                 % hidden-layer node counts and epochs are integers; the learning rate (last dimension) is a float
            pop(i,j) = (xmax(j)-xmin(j))*rand + xmin(j);
        else
            pop(i,j) = round((xmax(j)-xmin(j))*rand + xmin(j));
        end
    end
end
% calculate the fitness value of the initial population
pbest = pop;
gbest = zeros(1,dim);
data1 = zeros(Maxstep,PopNum,dim);
data2 = zeros(Maxstep,PopNum);
for i = 1:PopNum
    fit(i) = fitness(pop(i,:),p_train,t_train,p_test,t_test);
    f_pbest(i) = fit(i);
end
g = min(find(f_pbest == min(f_pbest(1:PopNum))));
gbest = pbest(g,:);
f_gbest = f_pbest(g);
% -------- main QPSO loop --------
for step = 1:Maxstep
    mbest = sum(pbest(:))/PopNum;   % mean best (note: standard QPSO averages per dimension, e.g. mean(pbest,1))
    b = 1 - step/Maxstep*0.5;       % linearly decreasing contraction-expansion factor
    data1(step,:,:) = pop;
    data2(step,:) = fit;
    for i = 1:PopNum
        a = rand(1,dim);
        u = rand(1,dim);
        p = a.*pbest(i,:) + (1-a).*gbest;
        pop(i,:) = p + b*abs(mbest-pop(i,:)).*log(1./u).*(1-2*(u >= 0.5));
        % boundary detection
        for j = 1:dim
            if j == dim
                if pop(i,j) > xmax(j) || pop(i,j) < xmin(j)
                    pop(i,j) = (xmax(j)-xmin(j))*rand + xmin(j);
                end
            else
                pop(i,j) = round(pop(i,j));
                if pop(i,j) > xmax(j) || pop(i,j) < xmin(j)
                    pop(i,j) = round((xmax(j)-xmin(j))*rand + xmin(j));
                end
            end
        end
        fit(i) = fitness(pop(i,:),p_train,t_train,p_test,t_test);
        if fit(i) < f_pbest(i)
            pbest(i,:) = pop(i,:);
            f_pbest(i) = fit(i);
        end
        if f_pbest(i) < f_gbest
            gbest = pbest(i,:);
            f_gbest = f_pbest(i);
        end
    end
    trace(step) = f_gbest;
    step, f_gbest, gbest            % display progress
    result(step,:) = gbest;
end

% ===== PSO-BiLSTM: standard PSO search over the same hyperparameters (excerpt) =====
for i = 1:N                         % randomly initialize velocities and positions
    for j = 1:D
        if j == D                   % hidden-layer node counts and epochs are integers; the learning rate is a float
            x(i,j) = (xmax(j)-xmin(j))*rand + xmin(j);
        else
            x(i,j) = round((xmax(j)-xmin(j))*rand + xmin(j));
        end
    end
    v(i,:) = rand(1,D);
end
% ------ evaluate each particle's fitness and initialize Pi and Pg ------
for i = 1:N
    p(i) = fitness(x(i,:),p_train,t_train,p_test,t_test);
    y(i,:) = x(i,:);
end
[fg,index] = min(p);
pg = x(index,:);                    % pg is the global best
% ------ main loop: iterate according to the PSO update equations ------
for t = 1:M
    for i = 1:N
        v(i,:) = w*v(i,:) + c1*rand*(y(i,:)-x(i,:)) + c2*rand*(pg-x(i,:));
        x(i,:) = x(i,:) + v(i,:);
        for j = 1:D
            if j ~= D
                x(i,j) = round(x(i,j));
            end
            if x(i,j) > xmax(j) || x(i,j) < xmin(j)
                if j == D
                    x(i,j) = (xmax(j)-xmin(j))*rand + xmin(j);
                else
                    x(i,j) = round((xmax(j)-xmin(j))*rand + xmin(j));
                end
            end
        end
        temp = fitness(x(i,:),p_train,t_train,p_test,t_test);
        if temp < p(i)
            p(i) = temp;
            y(i,:) = x(i,:);
        end
        if p(i) < fg
            pg = y(i,:);
            fg = p(i);
        end
    end
    trace(t) = fg;
    result(t,:) = pg;
end                                 % closing end restored (truncated in the original post)
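Both loops repeatedly call a fitness function that lives in a separate function file and is not reproduced in the post. Purely as a hedged sketch (the actual toolbox file may differ), such a function could decode the four-dimensional particle into the two hidden-layer sizes, the maximum number of epochs and the learning rate, train a two-layer BiLSTM with the Deep Learning Toolbox, and return the test-set error as the fitness value:

function err = fitness(particle, p_train, t_train, p_test, t_test)
% Sketch only -- assumed decoding: [hidden1, hidden2, maxEpochs, learnRate].
% The exact data format of p_train/t_train (cell arrays vs. numeric sequences)
% depends on how the main scripts prepare the data.
hidden1   = round(particle(1));
hidden2   = round(particle(2));
maxEpochs = round(particle(3));
learnRate = particle(4);

layers = [ ...
    sequenceInputLayer(1)                          % univariate input
    bilstmLayer(hidden1,'OutputMode','sequence')   % hidden layer 1
    bilstmLayer(hidden2,'OutputMode','last')       % hidden layer 2
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'InitialLearnRate',learnRate, ...
    'Verbose',false);

net   = trainNetwork(p_train, t_train, layers, options);
t_sim = predict(net, p_test);

err = sqrt(mean((t_sim(:) - t_test(:)).^2));       % RMSE as the fitness value
end

Because the swarm minimizes this returned value, a lower RMSE on the held-out set directly drives the search toward better hyperparameter combinations.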

