Machine Learning part 5 -- Neural Networks: Learning
 
Cost Function
 
Let's first define a few variables that we will need to use:
 
L = total number of layers in the network
s_l = number of units (not counting bias unit) in layer l
K = number of output units/classes
 
Recall that in neural networks, we may have many output nodes. We denote $h_\Theta(x)_k$ as being a hypothesis that results in the $k^{th}$ output. Our cost function for neural networks is going to be a generalization of the one we used for logistic regression. Recall that the cost function for regularized logistic regression was:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$$
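As a concrete sketch of the formula above (the function name and test data are illustrative, not from the course materials), this NumPy implementation computes the regularized logistic regression cost, skipping the bias parameter $\theta_0$ in the regularization term:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, lam):
    """Regularized logistic regression cost J(theta).
    X: (m, n+1) design matrix whose first column is the bias feature 1;
    y: (m,) labels in {0, 1}; lam: regularization parameter lambda."""
    m = X.shape[0]
    h = sigmoid(X @ theta)
    # Cross-entropy term, averaged over the m training examples
    J = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    # Regularization sums theta_j^2 for j = 1..n, excluding the bias theta_0
    J += lam / (2 * m) * np.sum(theta[1:] ** 2)
    return J
```

With theta initialized to zeros, every prediction is sigmoid(0) = 0.5, so the cost reduces to log(2) regardless of the labels, which is a handy sanity check.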
 
For neural networks, it is going to be slightly more complicated:

$$J(\Theta) = -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ y_k^{(i)} \log((h_\Theta(x^{(i)}))_k) + (1 - y_k^{(i)}) \log(1 - (h_\Theta(x^{(i)}))_k) \right] + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} (\Theta_{j,i}^{(l)})^2$$
 
We have added a few nested summations to account for our multiple output nodes. In the first part of the equation, before the square brackets, we have an additional nested summation that loops through the number of output nodes.
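The nested summations can be sketched in vectorized NumPy (a minimal illustration, assuming sigmoid activations, one-hot labels Y, and a hypothetical `nn_cost` helper; none of these names come from the course):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Thetas, X, Y, lam):
    """Neural-network cost with K output units.
    Thetas: list of L-1 weight matrices; Thetas[l] has shape (s_{l+2}, s_{l+1} + 1).
    X: (m, n) inputs without a bias column; Y: (m, K) one-hot labels."""
    m = X.shape[0]
    A = X
    for Theta in Thetas:
        A = np.hstack([np.ones((m, 1)), A])  # prepend the bias unit
        A = sigmoid(A @ Theta.T)             # forward-propagate one layer
    H = A  # (m, K): hypothesis output (h_Theta(x^(i)))_k
    # Double sum over examples i and output units k
    J = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    # Triple regularization sum over every non-bias weight in every Theta
    J += lam / (2 * m) * sum(np.sum(Theta[:, 1:] ** 2) for Theta in Thetas)
    return J
```

With all weights zero, every output is 0.5 and the cost collapses to K·log(2), mirroring the logistic regression sanity check.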
 
In the regularization part, after the square brackets, we must account for multiple theta matrices. The number of columns in our current theta matrix is equal to the number of nodes in our current layer (including the bias unit). The number of rows in our current theta matrix is equal to the number of nodes in the next layer (excluding the bias unit). As before with logistic regression, we square every term.
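The dimension rule above can be checked directly: $\Theta^{(l)}$ has $s_{l+1}$ rows (next layer, no bias) and $s_l + 1$ columns (current layer plus bias). A short sketch with a hypothetical 3-5-2 architecture (layer sizes chosen only for illustration):

```python
import numpy as np

# Hypothetical architecture: 3 input units, one 5-unit hidden layer, 2 outputs
layer_sizes = [3, 5, 2]

# Theta^(l) maps layer l to layer l+1: rows = units in the next layer
# (bias excluded), columns = units in the current layer plus the bias unit.
Thetas = [np.random.randn(layer_sizes[l + 1], layer_sizes[l] + 1) * 0.01
          for l in range(len(layer_sizes) - 1)]

for l, Theta in enumerate(Thetas, start=1):
    print(f"Theta^({l}) shape: {Theta.shape}")
# Theta^(1) is (5, 4) and Theta^(2) is (2, 6)

# The regularization term squares every weight except the bias column:
reg = sum(np.sum(Theta[:, 1:] ** 2) for Theta in Thetas)
```

Skipping column 0 of each matrix is what "excluding the bias unit" means in the triple regularization sum.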