
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 3)


Contents

MC

HN

BM

RBM

DBN

Related articles
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 1)
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 2)
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 3)
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 4)
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 5)
DL: Overview of Deep Learning Algorithms (A Collection of Neural Network Models): Explanations and Reflections on "THE NEURAL NETWORK ZOO" (Part 6)

MC

       Markov chains (MC, or discrete time Markov Chains, DTMC) are kind of the predecessors to BMs and HNs. They can be understood as follows: from this node where I am now, what are the odds of me going to any of my neighbouring nodes? They are memoryless (i.e. the Markov property), which means that every state you end up in depends completely on the previous state. While not really a neural network, they do resemble neural networks and form the theoretical basis for BMs and HNs. MCs aren't always considered neural networks, and the same goes for BMs, RBMs and HNs. Markov chains aren't always fully connected either.
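
A minimal sketch of the idea in Python (not from the original article; the three states and the transition probabilities below are made up for illustration). The row-stochastic matrix answers exactly the question above: from the node where I am now, what are the odds of going to each neighbour, with no memory of how I got here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Row-stochastic transition matrix: P[i][j] is the probability of moving
# from state i to state j. The zeros show the chain need not be fully
# connected (state 0 cannot jump straight to state 2).
P = np.array([
    [0.7, 0.3, 0.0],   # from state 0
    [0.2, 0.5, 0.3],   # from state 1
    [0.0, 0.4, 0.6],   # from state 2
])

def simulate(P, start, steps):
    """Walk the chain; each next state depends only on the current one
    (the Markov property), never on the earlier history."""
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))   # e.g. [0, 1, 1, 2, 2, ...]
```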

Hayes, Brian. “First links in the Markov chain.” American Scientist 101.2 (2013): 252.
Original Paper PDF


HN

       A Hopfield network (HN) is a network where every neuron is connected to every other neuron; it is a completely entangled plate of spaghetti as even all the nodes function as everything. Each node is input before training, then hidden during training and output afterwards. The networks are trained by setting the value of the neurons to the desired pattern, after which the weights can be computed. The weights do not change after this.
       Once trained for one or more patterns, the network will always converge to one of the learned patterns, because the network is only stable in those states. Note that it does not always conform to the desired state (it's not a magic black box, sadly). It stabilises in part due to the total "energy" or "temperature" of the network being reduced incrementally during training. Each neuron has an activation threshold which scales with this temperature; if the sum of the inputs surpasses it, the neuron takes one of two states (usually -1 or 1, sometimes 0 or 1). Updating the network can be done synchronously, or more commonly one neuron at a time.
       If updated one by one, a fair random sequence is created to organise which cells update in what order (fair random meaning that all n options each occur exactly once every n items). This is so you can tell when the network is stable (done converging): once every cell has been updated and none of them changed, the network is stable (annealed). These networks are often called associative memory because they converge to the state most similar to the input; if humans see half a table we can imagine the other half, and this network will converge to a table if presented with half noise and half a table.
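
To make the training and update procedure concrete, here is a toy Hopfield network in Python (an illustrative sketch, not the paper's exact formulation; the pattern size and the zero activation threshold are assumptions). It computes the weights once from the desired pattern, then updates cells one by one in a fair random order until a full sweep changes nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian one-shot learning: compute the weights from the desired
    patterns (rows of -1/+1); they never change afterwards."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, state, max_sweeps=100):
    """Update cells one by one in a fair random order (each cell exactly
    once per sweep); stop when a full sweep changes nothing (annealed)."""
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(state)):
            new = 1 if W[i] @ state >= 0 else -1    # threshold of 0 is an assumption
            if new != state[i]:
                state[i], changed = new, True
        if not changed:
            break                                    # stable: converged to a learned pattern
    return state

# "Half a table": store one pattern, then present it half replaced by noise.
pattern = rng.choice([-1, 1], size=16)
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:8] = rng.choice([-1, 1], size=8)
print(np.array_equal(recall(W, noisy), pattern))     # usually True for mild corruption
```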

Hopfield, John J. “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the national academy of sciences 79.8 (1982): 2554-2558.
Original Paper PDF


BM

       Boltzmann machines (BM) are a lot like HNs, but: some neurons are marked as input neurons and others remain "hidden". The input neurons become output neurons at the end of a full network update. It starts with random weights and learns through back-propagation, or more recently through contrastive divergence (where a Markov chain is used to determine the gradients between two informational gains). Compared to an HN, the neurons mostly have binary activation patterns. As hinted by being trained by MCs, BMs are stochastic networks. The training and running process of a BM is fairly similar to an HN: one sets the input neurons to certain clamped values, after which the network is set free (it doesn't get a sock). While free, the cells can take any value, and we repetitively go back and forth between the input and hidden neurons. The activation is controlled by a global temperature value; lowering it lowers the energy of the cells, and this lower energy causes their activation patterns to stabilise. The network reaches an equilibrium given the right temperature.
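
Below is a hedged sketch of that sampling process in Python (the tiny network, its random weights and the cooling schedule are invented for illustration). Clamped input units keep their values while the free cells are repeatedly resampled, with the global temperature T controlling how noisy the stochastic activations are:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(W, b, state, T, clamped):
    """One asynchronous sweep of a Boltzmann machine with binary (0/1)
    stochastic units. Clamped input cells keep their values; each free
    cell turns on with a probability given by a sigmoid of its input,
    scaled by the global temperature T."""
    for i in rng.permutation(len(state)):
        if clamped[i]:
            continue
        gap = W[i] @ state + b[i]                 # energy gap for turning unit i on
        p_on = 1.0 / (1.0 + np.exp(-gap / T))     # lower T -> less noise, more stable
        state[i] = 1 if rng.random() < p_on else 0
    return state

# Tiny invented network: 3 clamped visible units + 2 free hidden units.
n = 5
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2                                 # symmetric weights, as in an HN
np.fill_diagonal(W, 0)                            # no self-connections
b = np.zeros(n)
state = np.array([1, 0, 1, 0, 0])
clamped = np.array([True, True, True, False, False])
for T in np.linspace(2.0, 0.1, 20):               # gradually lower the temperature
    state = gibbs_sweep(W, b, state, T, clamped)
print(state)
```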

Hinton, Geoffrey E., and Terrence J. Sejnowski. “Learning and relearning in Boltzmann machines.” Parallel distributed processing: Explorations in the microstructure of cognition 1 (1986): 282-317.
Original Paper PDF


RBM

      Restricted Boltzmann machines (RBM) are remarkably similar to BMs (surprise) and therefore also similar to HNs. The biggest difference between BMs and RBMs is that RBMs are more usable because they are more restricted. They don't trigger-happily connect every neuron to every other neuron, but only connect every different group of neurons to every other group, so no input neurons are directly connected to other input neurons and no hidden-to-hidden connections are made either. RBMs can be trained like FFNNs with a twist: instead of passing data forward and then back-propagating, you forward pass the data and then backward pass the data (back to the first layer). After that you train with forward-and-back-propagation.
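
A minimal Python sketch of that forward-then-backward pass, here written as one step of contrastive divergence (CD-1); the layer sizes, learning rate and data vector are made up. Note the single weight matrix between the two groups: there are no input-to-input or hidden-to-hidden connections:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, bv, bh, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update. Forward pass the data to
    the hidden group, backward pass it to reconstruct the visibles (back
    to the first layer), then move the weights toward the data statistics
    and away from the reconstruction statistics."""
    ph0 = sigmoid(v0 @ W + bh)                     # forward: visible -> hidden
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + bv)                   # backward: hidden -> visible
    ph1 = sigmoid(pv1 @ W + bh)                    # forward again on the reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    bv += lr * (v0 - pv1)
    bh += lr * (ph0 - ph1)
    return W, bv, bh

# Invented sizes: 6 visible and 3 hidden units, one binary training vector.
nv, nh = 6, 3
W = rng.normal(0, 0.1, (nv, nh))
bv, bh = np.zeros(nv), np.zeros(nh)
v = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
for _ in range(100):
    W, bv, bh = cd1_step(W, bv, bh, v)
```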

Smolensky, Paul. Information processing in dynamical systems: Foundations of harmony theory. No. CU-CS-321-86. University of Colorado at Boulder, Dept. of Computer Science, 1986.
Original Paper PDF


DBN

      Deep belief networks (DBN) is the name given to stacked architectures of mostly RBMs or VAEs. These networks have been shown to be effectively trainable stack by stack, where each AE or RBM only has to learn to encode the previous network. This technique is also known as greedy training, where greedy means making locally optimal choices to get to a decent but possibly not optimal answer. DBNs can be trained through contrastive divergence or back-propagation and learn to represent the data as a probabilistic model, just like regular RBMs or VAEs. Once trained or converged to a (more) stable state through unsupervised learning, the model can be used to generate new data. If trained with contrastive divergence, it can even classify existing data, because the neurons have been taught to look for different features.
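
The greedy stack-by-stack idea can be sketched as follows (illustrative Python; `train_rbm` is a placeholder for a real trainer such as the `cd1_step` sketch above, and the layer sizes are arbitrary). Each layer is trained only to encode the output of the frozen layers below it:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden):
    """Stand-in for a real RBM trainer (e.g. repeated CD-1 updates as in
    the sketch above); returns the weights that encode this layer's input."""
    W = rng.normal(0, 0.1, (data.shape[1], n_hidden))
    # ... contrastive-divergence updates on `data` would go here ...
    return W

def train_dbn(data, layer_sizes):
    """Greedy layer-wise training: each RBM only learns to encode the
    output of the previous, already-trained (frozen) layers."""
    weights, layer_input = [], data
    for n_hidden in layer_sizes:
        W = train_rbm(layer_input, n_hidden)       # locally optimal step
        weights.append(W)
        layer_input = sigmoid(layer_input @ W)     # feed the codes upward
    return weights

data = rng.random((32, 16))                        # made-up data: 32 samples, 16 features
weights = train_dbn(data, layer_sizes=[8, 4, 2])
print([W.shape for W in weights])                  # [(16, 8), (8, 4), (4, 2)]
```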

Bengio, Yoshua, et al. “Greedy layer-wise training of deep networks.” Advances in neural information processing systems 19 (2007): 153.
Original Paper PDF

