Views: 1124 | Replies: 0
草龙
Registered: 2004-12-17
[MT4 Indicator] BP Neural Network EA
OP, posted 2014-02-10 16:58
A BP neural network EA. I stumbled across it by chance; it is well worth studying in detail.

//+--------------------------------------------------------------------------------------+
//|                                                                    BPNN Predictor.mq4 |
//|                                                                Copyright © 2009, gpwr |
//|                                                                   [email protected] |
//+--------------------------------------------------------------------------------------+
#property copyright "Copyright © 2009, gpwr"
#property indicator_chart_window
#property indicator_buffers 3
#property indicator_color1 Red
#property indicator_width1 2
#property indicator_color2 Blue
#property indicator_width2 2
#property indicator_color3 Black
#property indicator_width3 2
//======================================= DLL ============================================
#import "BPNN.dll"
string Train(
   double inpTrain[],  // Input training data (1D array carrying 2D data, oldest first)
   double outTarget[], // Output target data for training (2D data as 1D array, oldest first)
   double outTrain[],  // Output 1D array to hold net outputs from training
   int    ntr,         // # of training sets
   int    UEW,         // Use external weights for initialization (1=use extInitWt, 0=random)
   double extInitWt[], // Input 1D array holding a 3D array of external initial weights
   double trainedWt[], // Output 1D array to hold a 3D array of trained weights
   int    numLayers,   // # of layers including input, hidden and output
   int    lSz[],       // # of neurons per layer; lSz[0] is the # of net inputs
   int    AFT,         // Type of neuron activation function (0:sigm, 1:tanh, 2:x/(1+x))
   int    OAF,         // 1 enables the activation function for the output layer; 0 disables
   int    nep,         // Max # of training epochs
   double maxMSE       // Training stops once the MSE falls below maxMSE
);
string Test(
   double inpTest[],   // Input test data (2D data as 1D array, oldest first)
   double outTest[],   // Output 1D array to hold net outputs (oldest first)
   int    ntt,         // # of test sets
   double extInitWt[], // Input 1D array holding a 3D array of external initial weights
   int    numLayers,   // # of layers including input, hidden and output
   int    lSz[],       // # of neurons per layer; lSz[0] is the # of net inputs
   int    AFT,         // Type of neuron activation function (0:sigm, 1:tanh, 2:x/(1+x))
   int    OAF          // 1 enables the activation function for the output layer; 0 disables
);
#import
//===================================== INPUTS ===========================================
extern int lastBar     =0;    // Last bar in the past data
extern int futBars     =5;    // # of future bars to predict
extern int numLayers   =3;    // # of layers including input, hidden & output (2..6)
extern int numInputs   =12;   // # of inputs
extern int numNeurons1 =5;    // # of neurons in the first hidden or output layer
extern int numNeurons2 =1;    // # of neurons in the second hidden or output layer
extern int numNeurons3 =0;    // # of neurons in the third hidden or output layer
extern int numNeurons4 =0;    // # of neurons in the fourth hidden or output layer
extern int numNeurons5 =0;    // # of neurons in the fifth hidden or output layer
extern int ntr         =300;  // # of training sets
extern int nep         =1000; // Max # of epochs
extern int maxMSEpwr   =-20;  // Sets maxMSE=10^maxMSEpwr; training stops below maxMSE
extern int AFT         =2;    // Type of activation function (0:sigm, 1:tanh, 2:x/(1+x))
//====================================== INIT ============================================
// Indicator buffers
double pred[],trainedOut[],realOut[];
// Global variables
int lb,nf,nin,nout,lSz[],prevBars;
double maxMSE;
int init()
{
// Create 1D array describing the NN -----------------------------------------------------+
   ArrayResize(lSz,numLayers);
   lSz[0]=numInputs;
   lSz[1]=numNeurons1;
   if(numLayers>2)
   {
      lSz[2]=numNeurons2;
      if(numLayers>3)
      {
         lSz[3]=numNeurons3;
         if(numLayers>4)
         {
            lSz[4]=numNeurons4;
            if(numLayers>5) lSz[5]=numNeurons5;
         }
      }
   }
// Use shorter names for some external inputs --------------------------------------------+
   lb=lastBar;
   nf=futBars;
   nin=numInputs;
   nout=lSz[numLayers-1];
   maxMSE=MathPow(10.0,maxMSEpwr);
   prevBars=Bars-1;
// Set indicator properties --------------------------------------------------------------+
   IndicatorBuffers(3);
   SetIndexBuffer(0,pred);       SetIndexStyle(0,DRAW_LINE,STYLE_SOLID,2);
   SetIndexBuffer(1,trainedOut); SetIndexStyle(1,DRAW_LINE,STYLE_SOLID,2);
   SetIndexBuffer(2,realOut);    SetIndexStyle(2,DRAW_LINE,STYLE_SOLID,2);
   SetIndexShift(0,nf-lb); // future data vector i=0..nf; nf corresponds to bar=lb
   IndicatorShortName("BPNN");
   return(0);
}
//===================================== DEINIT ===========================================
int deinit(){return(0);}
//===================================== START ============================================
int start()
{
// Retrain and repredict only once per new bar -------------------------------------------+
   if(prevBars==Bars) return(0);
   prevBars=Bars;
// Check the number of layers (2..6) -----------------------------------------------------+
   if(numLayers<2 || numLayers>6)
   {
      Print("The maximum number of layers is 6");
      return(0);
   }
   int i,j,fd,fd1,fd2;
// Count the total number of network weights ---------------------------------------------+
   int nw=0;
   for(i=0;i<numLayers-1;i++) nw+=(lSz[i]+1)*lSz[i+1];
// Prepare training data; needs lb+ntr plus the largest delay bars of history ------------+
   double inpTrain[],outTarget[],extInitWt[];
   ArrayResize(inpTrain,ntr*nin);
   ArrayResize(outTarget,ntr*nout);
   ArrayResize(extInitWt,nw);
// The input data is arranged as follows:
//
// inpTrain[i*nin+j]
//------------------
// j= 0...nin-1
// |
// i=0
// ...
// i=ntr-1
//
// outTarget[i*nout+j]
//--------------------
// j= 0...nout-1
// |
// i=0
// ...
// i=ntr-1
//
// Start with the oldest value first.
// Fill in the input arrays with data; in this example nout=1.
   for(i=ntr-1;i>=0;i--)
   {
      outTarget[i]=Open[lb+ntr-1-i]/Open[lb+ntr-i]-1.0; // one-bar relative change of Open
      fd2=0;
      fd1=1;
      for(j=nin-1;j>=0;j--)
      {
         fd=fd1+fd2; // use Fibonacci delays: 1,2,3,5,8,13,21,34,55,89,144...
         fd2=fd1;
         fd1=fd;
         inpTrain[i*nin+j]=Open[lb+ntr-i]/Open[lb+ntr-i+fd]-1.0;
      }
   }
// Train the NN --------------------------------------------------------------------------+
   double outTrain[],trainedWt[];
   ArrayResize(outTrain,ntr*nout);
   ArrayResize(trainedWt,nw);
// The output data is arranged as follows:
//
// outTrain[i*nout+j]
//-------------------
// j= 0...nout-1
// |
// i=0
// ...
// i=ntr-1
   string status=Train(inpTrain,outTarget,outTrain,ntr,0,extInitWt,trainedWt,numLayers,
                       lSz,AFT,0,nep,maxMSE);
   Print(status);
// Store trainedWt as extInitWt for the next training ------------------------------------+
   int iw=0;
   for(i=1;i<numLayers;i++)
      for(j=0;j<lSz[i];j++)
         for(int k=0;k<=lSz[i-1];k++)
         {
            extInitWt[iw]=trainedWt[iw];
            iw++;
         }
// Plot the net output on the training set next to the actual opens ----------------------+
   for(i=0;i<ntr;i++)
   {
      trainedOut[lb+ntr-1-i]=Open[lb+ntr-i]*(outTrain[i]+1.0);
      realOut[lb+ntr-1-i]=Open[lb+ntr-1-i];
   }
// Test the NN: predict nf future bars recursively ---------------------------------------+
   double inpTest[],outTest[];
   ArrayResize(inpTest,nin);
   ArrayResize(outTest,nout);
// The input data is arranged as follows:
//
// inpTest[i*nin+j]
//-----------------
// j= 0...nin-1
// |
// i=0
// ...
// i=ntt-1
//
// Start with the oldest value first.
//
// The output data is arranged as follows:
//
// outTest[i*nout+j]
//------------------
// j= 0...nout-1
// |
// i=0
// ...
// i=ntt-1
   pred[nf]=Open[lb];
   for(i=0;i<nf;i++)
   {
      fd2=0;
      fd1=1;
      for(j=nin-1;j>=0;j--)
      {
         fd=fd1+fd2; // use Fibonacci delays: 1,2,3,5,8,13,21,34,55,89,144...
         fd2=fd1;
         fd1=fd;
         double o,od;
         if(i>0) o=pred[nf-i]; else o=Open[lb-i];            // use predictions once available
         if(i-fd>0) od=pred[nf-i+fd]; else od=Open[lb-i+fd];
         inpTest[j]=o/od-1.0;
      }
      status=Test(inpTest,outTest,1,extInitWt,numLayers,lSz,AFT,0);
      pred[nf-i-1]=pred[nf-i]*(outTest[0]+1.0); // predicted next open
      Print("Bar -"+DoubleToStr(i+1,0)+": predicted open = "+DoubleToStr(pred[nf-i-1],5));
   }
   return(0);
}
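A note on how the data is fed to the net: Train() and Test() take flattened 1D arrays, so training row i, feature j lives at inpTrain[i*nin+j] (with nin=12, row 3 feature 5 sits at index 3*12+5=41), and each feature is the relative open-price change over a Fibonacci delay. Below is a minimal standalone sketch of that feature construction, my own illustration rather than part of the original indicator; the script name and its numInputs input are hypothetical. Run as an MT4 script, it prints the input vector the indicator would build for the current bar.

// FibInputs.mq4: illustration only; mirrors the indicator's Fibonacci-delay inputs.
// Needs roughly 240 bars of history (the 12th Fibonacci delay is 233 bars).
#property show_inputs
extern int numInputs=12; // same meaning as the indicator's numInputs
int start()
{
   int fd=0,fd1=1,fd2=0;
   for(int j=numInputs-1;j>=0;j--)
   {
      fd=fd1+fd2;                    // Fibonacci delays: 1,2,3,5,8,13,21,...
      fd2=fd1;
      fd1=fd;
      double x=Open[0]/Open[fd]-1.0; // relative open change over fd bars
      Print("input["+DoubleToStr(j,0)+"]  delay="+DoubleToStr(fd,0)+
            "  value="+DoubleToStr(x,6));
   }
   // input[numInputs-1] holds the most recent (delay 1) change;
   // input[0] holds the oldest (largest-delay) change.
   return(0);
}

Also note that the forecast is recursive: each predicted open, pred[nf-i-1]=pred[nf-i]*(outTest[0]+1.0), is fed back as an input for the next step, so prediction errors compound over the futBars horizon.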
Signature:

Read every indicator under the sun.
Brick-moving (arbitrage) since 2014.

