# Effective Modern C++: decltype, use {}

### Uses of decltype:

In C++11, perhaps the primary use for decltype is declaring function templates where the function’s return type depends on its parameter types.

decltype is used to obtain the type of some unknown variable. When do we not know a variable’s type? When using templates or auto, of course:

template<typename T>
void funcValue(T param)
{
    auto subParam = param;                  // subParam gets param's (decayed) type
    decltype(auto) subSubParam = subParam;  // C++14: deduced exactly as decltype(subParam)
}
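The return-type use case quoted above can be sketched with a trailing return type. The container/index example below is illustrative (in the spirit of Effective Modern C++, not copied from it):

```cpp
#include <type_traits>
#include <utility>
#include <vector>

// C++11 style: the return type depends on the parameter types,
// so it is expressed with decltype in a trailing return type.
template<typename Container, typename Index>
auto access(Container& c, Index i) -> decltype(c[i])
{
    return c[i]; // for std::vector<int>, decltype(c[i]) is int&
}

// Compile-time check: access returns a reference into the container.
static_assert(std::is_same<decltype(access(std::declval<std::vector<int>&>(), 0)),
                           int&>::value,
              "access(vector<int>&, int) returns int&");
```

Because the result is a reference, callers can assign through it, e.g. `access(v, 1) = 42;`.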


# Effective Modern C++: Template and auto

Type deduction for templates and auto

Template type deduction ignores one level of reference and pointer: when the parameter is declared T& or T*, the argument’s reference- or pointer-ness does not become part of T.

Example 1:

The input argument is passed by reference

template<typename T>
void f(T& param);

int x = 27;         // x is an int
const int cx = x;   // cx is a const int
const int& rx = x;  // rx is a reference to x as a const int

f(x);  // T is int, param's type is int&
f(cx); // T is const int, param's type is const int&
f(rx); // T is const int, param's type is const int&
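The pointer case works the same way. A small sketch (the function name is mine, not from the book):

```cpp
#include <type_traits>

// For a parameter declared T*, deduction strips one level of
// pointer from the argument's type to obtain T, so the constness
// of the pointee ends up inside T.
template<typename T>
bool pointeeIsConst(T* /*param*/)
{
    return std::is_const<T>::value; // true iff the deduced T is const-qualified
}
```

With `int x` and `const int* px = &x`, calling `pointeeIsConst(&x)` deduces T = int (param’s type is int*), while `pointeeIsConst(px)` deduces T = const int (param’s type is const int*).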


# PCA and Face Recognition - Eigen Face

PCA (Principal Component Analysis), just as its name shows, computes a data set’s internal structure, its “principal components”.

Consider a set of 2-dimensional data: each data point has two dimensions, $x_1$ and $x_2$, and we have n such data points. What is the relationship between the first dimension $x_1$ and the second dimension $x_2$? We compute the so-called covariance:
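A standard form of this covariance (notation mine; $\bar{x}_1$, $\bar{x}_2$ denote the means over the n points, and $1/(n-1)$ is often used instead of $1/n$ for the sample version):

```latex
\operatorname{cov}(x_1, x_2)
  = \frac{1}{n} \sum_{i=1}^{n}
    \bigl(x_1^{(i)} - \bar{x}_1\bigr)\bigl(x_2^{(i)} - \bar{x}_2\bigr)
```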

The covariance shows how strong the relationship between $x_1$ and $x_2$ is. Its logic is the same as that of Chebyshev’s sum inequality:
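For reference, Chebyshev’s sum inequality in its standard form (not reproduced from the original post): when both sequences are sorted the same way, the mean of the products is at least the product of the means.

```latex
a_1 \ge a_2 \ge \dots \ge a_n,\quad b_1 \ge b_2 \ge \dots \ge b_n
\;\Longrightarrow\;
\frac{1}{n}\sum_{k=1}^{n} a_k b_k
  \;\ge\; \left(\frac{1}{n}\sum_{k=1}^{n} a_k\right)
          \left(\frac{1}{n}\sum_{k=1}^{n} b_k\right)
```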

# Switch to another window using C#

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

[DllImport("user32.dll")]
public static extern void SwitchToThisWindow(IntPtr hWnd, bool turnOn);

String ProcWindow = "wechat";

private void SwitchToWechat()
{
    // switch to process by name: find every process called "wechat"
    // and bring its main window to the foreground
    Process[] procs = Process.GetProcessesByName(ProcWindow);
    foreach (Process proc in procs)
    {
        SwitchToThisWindow(proc.MainWindowHandle, true);
    }
}


# HoG Feature

The HoG (Histogram of Oriented Gradients) feature is a feature used for human figure detection. In the age before deep learning, it was the best feature for this task.

Just as its name describes, the HoG feature computes the gradients of all pixels in an image patch/block. It computes both the gradient’s magnitude and its orientation (that is why it is called “oriented”), then builds a histogram of the oriented gradients by separating them into 9 ranges.

One image block (upper left corner of the image) consists of 4 cells; each cell owns a 9-bin histogram, so for one image block we get 4 histograms, and these 4 histograms are flattened into one feature vector of length 4x9. Computing the feature vectors for all blocks in the image gives a feature-vector map.

Taking one pixel (marked red) from the yellow cell as an example: compute $\nabla_x$ and $\nabla_y$ of this pixel, which give its magnitude and orientation (represented by an angle). When building the histogram, we vote its magnitude into its two neighboring bins using bilinear interpolation of the angles.

Finally, when we have the 4 histograms of the 4 cells, we normalize them by the sum of all 4x9 values.
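The per-pixel voting step described above can be sketched as follows. The 9 bins are assumed to span 0–180° (unsigned gradients, 20° per bin), which is the conventional choice but not stated explicitly in the post:

```cpp
#include <array>
#include <cmath>

// Vote one pixel's gradient magnitude into a 9-bin orientation
// histogram, splitting the vote between the two nearest bins by
// linear interpolation of the angle.
constexpr int kBins = 9;
constexpr double kBinWidth = 180.0 / kBins;  // 20 degrees per bin

void voteIntoHistogram(double magnitude, double angleDeg,
                       std::array<double, kBins>& hist)
{
    // Unsigned orientation: fold the angle into [0, 180)
    angleDeg = std::fmod(angleDeg, 180.0);
    if (angleDeg < 0) angleDeg += 180.0;

    // Position relative to the centre of the lower bin
    double pos = angleDeg / kBinWidth - 0.5;
    int lo = static_cast<int>(std::floor(pos));
    double w = pos - lo;         // weight that goes to the upper bin
    int hi = (lo + 1) % kBins;   // wrap around: 180 degrees meets 0
    if (lo < 0) lo += kBins;

    hist[lo] += magnitude * (1.0 - w);
    hist[hi] += magnitude * w;
}
```

An angle exactly at a bin centre (e.g. 90°, the centre of bin 4) puts the whole vote in that bin; an angle on a bin boundary splits the vote evenly between the two neighbors.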

The details are described in the following chart:

# Linear Classification

### Loss Function

Now we want to solve an image classification problem, for example classifying an image as cow or cat. The machine learning algorithm scores an unclassified image for each class, and decides which class the image belongs to based on the scores. One of the keys of the classification algorithm is designing this loss function.

The score function maps/computes the image pixels to a confidence score for each class.

Assume a training set:

$x_i$ is the image and $y_i$ is the corresponding class

$i \in 1 \dots N$ means the training set contains N images

$y_i \in 1 \dots K$ means there are K image categories

So a score function maps x to y; for the linear classifier it has the form $f(x_i, W, b) = W x_i + b$.

In the above function, each image $x_i$ is flattened to a 1-dimensional vector.

If one image’s size is 32x32 pixels with 3 channels, $x_i$ will be a 1-dimensional vector of length D=32x32x3=3072.

The parameter matrix W has size [KxD] and is often called the weights; b, of size [Kx1], is often called the bias vector. In this way, W evaluates $x_i$’s confidence scores for all K categories at the same time.
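The linear score computation implied above (W of size [KxD], b of size [Kx1]) can be sketched as follows; the tiny K and D in the usage example are illustrative, not the 3072-dimensional case from the text:

```cpp
#include <cstddef>
#include <vector>

// Computes s = W x + b, where W is K x D (stored row-major),
// x is the flattened image of length D, and b has length K.
// The k-th entry of s is the confidence score for class k.
std::vector<double> scores(const std::vector<double>& W,  // K*D entries
                           const std::vector<double>& x,  // D entries
                           const std::vector<double>& b)  // K entries
{
    const std::size_t K = b.size();
    const std::size_t D = x.size();
    std::vector<double> s(K, 0.0);
    for (std::size_t k = 0; k < K; ++k) {
        for (std::size_t d = 0; d < D; ++d)
            s[k] += W[k * D + d] * x[d];  // dot product of row k with x
        s[k] += b[k];                     // bias shifts each class score
    }
    return s;
}
```

One matrix-vector product therefore evaluates all K class scores at once, which is exactly the point made above about W.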

# Building Github Pages with a Jekyll Template on Windows

### Installing the basic software

• First install Chocolatey, a package manager that runs under Windows

• Jekyll is written in Ruby, so Ruby must be installed: type choco install ruby -y in the console and press Enter

• Close the console, then open it again and type gem install jekyll; Jekyll is now installed. If an ssl3 error appears, fix it with the following steps (see the original post):

In cmd, type gem install --local C:\rubygems-update-x.x.xx.gem, where the path after --local is the gem file you just downloaded

Then type update_rubygems --no-ri --no-rdoc

When that finishes, type gem install jekyll again and it should work

• Reopen the console and type chcp 65001 to avoid encoding problems

• Install the Ruby development kit by typing in the console:

choco install ruby2.devkit

• Open a console in the C:\tools\DevKit2 folder and run ruby dk.rb init to generate the config.yml file

# Technical Details of Customizing a Jekyll Template

#### The framework’s folder structure

_layout: defines the layouts for the two page types; post is the layout designed for a single article, post-index the layout designed for a series of articles.

_posts: holds the md files of all articles; the md files must be named strictly in the “year-month-day-title” format.

_sass: holds the customized css files; for example, _page specifies the width, color and font of each page element, and _variables defines the values of some global variables.

_site: the pages generated when the template is compiled. These are the pages that actually get deployed; normally there is no need to look at them.

_templates: specifies the variables that can be defined in each type of layout file.

images: holds the images

search: holds the search-box page

tags: holds the pages that list all articles by tag

categories: holds the pages that list all articles by category

posts: holds the page that lists all articles