jiongjiongai

gRPC

Protocol buffer data is structured as messages, where each message is a small logical record of information containing a series of name-value pairs called fields. Service methods: Unary RPC, Server s...
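
A minimal sketch of the unary pattern in Python: the client sends one protobuf message and receives one message back. The service, method, and message names (Greeter, SayHello, HelloRequest) and the generated modules are illustrative assumptions, not from the post; only the grpc calls themselves are real grpcio API.

# Hypothetical modules generated from a Greeter .proto definition.
import grpc
import greeter_pb2
import greeter_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = greeter_pb2_grpc.GreeterStub(channel)
# Unary RPC: one request message in, one response message out.
reply = stub.SayHello(greeter_pb2.HelloRequest(name="world"))
print(reply.message)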

2018-08-21 17:13:03

Deep Learning Notes: Chapter 1 Introduction

Preface: I recently started reading the book "Deep Learning". This gave me the motivation to take notes as I read: it is well worth having notes that let a reader grasp the book's core content smoothly, or at least hold on to its overall structure. Since the English phrasing is ultimately more idiomatic, these notes are excerpted from the book's original text. Readers are welcome to leave suggestions or comments. Thank you! Deep Learning Chapter 1 Introduction Concept Des...

2018-08-18 20:16:48

Pro Git Notes

Git is a Distributed Version Control System (DVCS). Clients fully mirror the repository, including its full history.

2018-08-12 01:38:23

Newton's Method for Multivariate Functions

f(X) = f(X_0) + f'(X_0)\Delta X + \dfrac{1}{2}\left(\Delta X\right)^T f''(X_0)\Delta X, hence f'(X) = f'(X_0) + f''(X_0)\Delta X ...
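
Setting the derivative of this expansion to zero gives the Newton step \Delta X = -\left[f''(X_0)\right]^{-1} f'(X_0). A minimal NumPy sketch of that update; the quadratic test function is my own illustration, not from the post.

import numpy as np

def newton_multivariate(grad, hess, x0, tol=1e-8, max_iter=50):
    # Newton iteration: x <- x - H(x)^{-1} g(x), obtained by setting the
    # gradient of the second-order Taylor expansion to zero.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), grad(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative test: f(x, y) = x^2 + 3y^2 has its minimum at the origin.
grad = lambda x: np.array([2 * x[0], 6 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 6.0]])
print(newton_multivariate(grad, hess, [1.0, 2.0]))  # -> [0. 0.]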

2018-08-09 17:55:05

Newton's Method

If x = F(x) is equivalent to f(x) = 0, then F(x) is called the iteration function. Suppose f(x) has a continuous second derivative and f'(x) \neq 0; then \forall x_0, x \in \mathbb{R}, if f(x_0) = 0 then ...
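
A small sketch of the resulting classical update x_{n+1} = x_n - f(x_n) / f'(x_n); the square-root example is my own illustration, not from the post.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Classical Newton iteration: x <- x - f(x) / f'(x).
    x = x0
    for _ in range(max_iter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Illustrative use: solving x^2 - 2 = 0 gives sqrt(2).
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ~1.41421356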

2018-08-09 09:02:05

Nesterov Momentum

x_ahead = x + mu * v
# evaluate dx_ahead (the gradient at x_ahead instead of at x)
v = mu * v - learning_rate * dx_ahead
x += v
=>
x_prev = x
v_prev = v
x_ahead = x_prev + mu * v_prev
v = mu * v_...

2018-08-09 08:19:11

CS231n Note

CS231n Note. Concepts: Image Classification, Object Detection, Action Classification, Image Captioning, Semantic Segmentation, Perceptual ...

2018-08-04 20:56:43

Clockwise/Spiral Rule to parse C declaration

http://c-faq.com/decl/spiral.anderson.html

2018-05-01 18:32:17

Recommender Systems

Recommendation approaches: social recommendation, content-based filtering, collaborative filtering. Evaluating recommender systems. Experimental methods: offline experiments, user studies, online experiments (A/B testing). Evaluation metrics: user satisfaction, prediction accuracy, coverage, diversity, novelty, serendipity, trust, real-time performance...
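
As a sketch of the collaborative-filtering approach listed above: predict a user's rating of an unseen item from the ratings of similar users. The toy rating matrix and the cosine-similarity weighting are my own illustration, not from the post.

import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def predict(user, item, R):
    # User-based collaborative filtering: weight other users' ratings of
    # the item by their cosine similarity to this user.
    norms = np.linalg.norm(R, axis=1)
    sims = R @ R[user] / (norms * norms[user] + 1e-9)
    rated = R[:, item] > 0
    rated[user] = False  # exclude the user's own (missing) rating
    if not rated.any():
        return 0.0
    return sims[rated] @ R[rated, item] / (np.abs(sims[rated]).sum() + 1e-9)

print(predict(user=1, item=1, R=R))  # predicted rating for an unrated cell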

2018-04-25 21:52:44

Derivative Formulas for Machine Learning

Derivative formulas for the loss function. Let \operatorname{loss}(X) be the loss function of a single sample X, and A = g(Z) = \begin{pmatrix} g(z_1) \\ \vdots \\ g(z_n) \end{pmatrix} ...
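
One standard identity in this direction, written out as a hedged completion (the excerpt is truncated before its own derivation): for an elementwise activation A = g(Z), the Jacobian is diagonal, so the chain rule reduces to an elementwise product.

\dfrac{\partial A}{\partial Z} = \operatorname{diag}\left(g'(z_1), \dots, g'(z_n)\right),
\qquad
\dfrac{\partial \operatorname{loss}}{\partial Z} = \dfrac{\partial \operatorname{loss}}{\partial A} \odot g'(Z)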

2018-04-18 12:30:10

Recurrent Neural Networks

Examples of sequence data: speech recognition, music generation, sentiment classification, DNA sequence analysis, machine translation, video activity recognition, named entity recognition. Notation ...

2018-04-16 06:28:37

Neural Style Transfer

Concept: Content C + Style S = Generated image G. What are deep ConvNets learning? More abstract features in deeper layers. Cost function: \operatorname{loss}(G; C, S) = \alpha \operatorname{loss}_{content}(C, G) + \beta \operatorname{loss}_{style}(S, G) ...

2018-04-16 00:04:20

Face Recognition

Face Verification vs. Face Recognition. Face Verification: input is an image and a name/ID; output answers whether the image is the person with the given ID. Face Recognition: input is an ima...
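
A sketch of how verification is commonly posed with embeddings (the FaceNet-style framing these course notes follow): compare the squared distance between two embedding vectors against a threshold. The stand-in embeddings and threshold value below are my own illustration, not the post's model.

import numpy as np

def verify(emb_a, emb_b, threshold=0.7):
    # Face verification as a distance test on learned embeddings:
    # same identity iff ||f(a) - f(b)||^2 is below a threshold.
    return np.sum((emb_a - emb_b) ** 2) < threshold

# Illustrative 128-d embeddings standing in for a real encoder's output.
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
candidate = anchor + rng.normal(scale=0.01, size=128)
print(verify(anchor, candidate))  # True: the embeddings are close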

2018-04-15 19:39:30

Object Detection

Concepts. Object Classification (at most one object): y = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}. Object Localization (at most ...

2018-04-15 19:17:33

Convolutional Neural Networks

Padding. Output dimension: n + 2p - f + 1. Padding types: Valid: p = 0. Same: n + 2p - f + 1 = n \Rightarrow p = \dfrac{f - 1}{2}. Str...
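
A quick numeric check of the formula, with stride folded in (the general form is \left\lfloor (n + 2p - f) / s \right\rfloor + 1, of which the excerpt's n + 2p - f + 1 is the s = 1 case); the example sizes are my own.

def conv_output_size(n, f, p=0, s=1):
    # floor((n + 2p - f) / s) + 1; with s = 1 this is n + 2p - f + 1.
    return (n + 2 * p - f) // s + 1

print(conv_output_size(6, 3))         # valid (p = 0): 4
print(conv_output_size(6, 3, p=1))    # same (p = (f-1)/2): 6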

2018-04-13 01:34:11

Learning from Multiple Tasks

Where transfer learning from A to B makes sense: tasks A and B have the same input X; you have a lot more data for A than for B; low-level features from A could be helpful for learning B. Where multi-task...

2018-04-12 23:37:42

Bias and Variance with Mismatched Distributions

2018-04-12 22:08:00

Softmax Function

Sigmoid function: \operatorname{sigmoid}(z) = \dfrac{1}{1 + e^{-z}}. Softmax function: \operatorname{softmax}(z_i; Z) = \dfrac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}, \quad 1 \le i \le n ...
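
A numerically stable sketch of the softmax formula: subtracting max(z) before exponentiating leaves the result unchanged, since the shift cancels in the ratio. The stabilization trick is standard practice, not something the excerpt shows.

import numpy as np

def softmax(z):
    # exp(z_i - max(z)) / sum_j exp(z_j - max(z)): identical to the
    # textbook formula, but safe from overflow for large z.
    shifted = np.exp(z - np.max(z))
    return shifted / shifted.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # entries sum to 1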

2018-04-11 22:13:26

Momentum, RMSprop and Adam

Gradient descent with momentum: compute an exponentially weighted average of the gradients, and use that average to update the weights. Algorithm: on iteration t, compute \operatorname{d}W and \operatorname{d}b ...
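
A sketch of that update for a single parameter matrix W, in the standard form the excerpt begins to state; the hyperparameter values are illustrative.

def momentum_step(W, dW, vW, beta=0.9, lr=0.01):
    # vW := beta * vW + (1 - beta) * dW   (exponentially weighted average)
    # W  := W - lr * vW                   (step along the smoothed gradient)
    vW = beta * vW + (1 - beta) * dW
    return W - lr * vW, vW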

2018-04-11 02:03:54

Exponentially Weighted Averages

Exponentially weighted averages: v_t = \beta v_{t-1} + (1 - \beta)\theta_t = \beta\left[\beta v_{t-2} + (1 - \beta)\theta_{t-1}\right] + (1 - \beta)\theta_t ...
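
A direct sketch of the recurrence; the stream of theta values is made up for illustration.

def ewa(thetas, beta=0.9):
    # v_t = beta * v_{t-1} + (1 - beta) * theta_t, starting from v_0 = 0.
    v, out = 0.0, []
    for theta in thetas:
        v = beta * v + (1 - beta) * theta
        out.append(v)
    return out

print(ewa([10, 12, 11, 13]))  # smoothed sequence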

2018-04-11 00:09:48
