<p>Jekyll feed, generated 2023-06-14T22:49:38+01:00 · https://weihaocao.com/feed.xml · Weihao Cao, a physics PhD candidate at UC San Diego.</p>

<h2>Noise in high frequency signal generators from an amateur</h2>
<p>2021-10-23 · https://weihaocao.com/miscellaneous/2021/10/23/high-freq</p>
<p>Tl;dr: Don’t trust the noise performance of high-frequency signal generators except at the signal frequency.</p>
<!-- more -->
<h5 id="background">Background</h5>
<p>The large background current measured in our sample made me suspicious of the performance of our high frequency signal sources, an HP 8753D Network Analyzer and an E8257D PSG Analog Signal Generator, so I checked the frequency spectrum of the source using an oscilloscope, and the result turned out to be very interesting ┑( ̄Д  ̄)┍</p>
<h6 id="result">Result</h6>
<p>The PSG frequency was set at 51.511 MHz and -20 dBm, and the detected frequency peaks included:<br />
-3 dBm at 51.511 MHz (expected)<br />
-30 dBm at 1.511 MHz<br />
-35 dBm at 11.36 kHz<br />
-50 dBm at 60 Hz</p>
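For a sense of scale, the power levels above can be converted to voltages. This is a minimal sketch, assuming the usual 50 Ω convention behind dBm (the actual scope input impedance may differ):

```python
import math

def dbm_to_vrms(p_dbm, r_ohm=50.0):
    """Convert a power level in dBm to an RMS voltage across r_ohm."""
    p_watt = 1e-3 * 10 ** (p_dbm / 10)  # dBm -> watts
    return math.sqrt(p_watt * r_ohm)    # P = V_rms^2 / R

# -3 dBm carrier -> ~0.158 V rms; -30 dBm spur -> ~7.1 mV rms
```

So the strongest spur is only about 20× smaller in voltage than the carrier.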
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/images/freq_51M.jpg" alt="Image - Expected frequency" />
<figcaption class="caption">Expected frequency</figcaption>
</figure>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/images/freq_11k.jpg" alt="Image - Peak at 11.36kHz" />
<figcaption class="caption">Peak at 11.36kHz</figcaption>
</figure>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/images/freq_55.jpg" alt="Image - Peak at 55Hz" />
<figcaption class="caption">Peak at 55Hz</figcaption>
</figure>
<p>The 1.511 MHz, 11.36 kHz and 60 Hz peaks may come from the working principle of the generators: my guess is that they synthesize the signal with oscillators working in different ranges, like 100 kHz to 10 MHz. Another possibility is that they are simply integration artifacts of the oscilloscope.</p>
<p>Another quick check on the &lt;100 Hz resonance also seems interesting; below are the results:<br />
When the PSG signal is set at 60 MHz, there is a peak at 40 Hz;<br />
40 MHz shows 25 Hz;<br />
30 MHz shows 19 Hz;<br />
10 MHz shows 6.5 Hz.</p>
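A quick numerical check on these pairs (a sketch using only the numbers above; the mechanism itself is unconfirmed) suggests the low-frequency peak scales roughly linearly with the set frequency:

```python
# (set PSG frequency in Hz, observed low-frequency peak in Hz), from the notes above
pairs = [(60e6, 40.0), (40e6, 25.0), (30e6, 19.0), (10e6, 6.5)]

ratios = [f_set / f_peak for f_set, f_peak in pairs]
# All ratios fall near 1.5e6-1.6e6, i.e. the sub-100 Hz peak tracks the set frequency.
```

That near-constant ratio hints at some internal frequency division rather than independent pickup, though this is just an observation.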
<p>In summary: Don’t trust the generators except at the target frequency. Otherwise, use a bandpass filter.</p>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/images/sus_device.jpg" alt="Image - Our suspended graphene device" />
<figcaption class="caption">Our suspended graphene device</figcaption>
</figure>
<p>One (probably) biggest lesson of the year.</p>

<hr />
<h2>Computationally understanding Einstein’s field equation in 1 hour</h2>
<p>2020-12-15 · https://weihaocao.com/physics/2020/12/15/einstein-field-equation</p>
<p>This post tries to explain how to unwrap the Einstein field equations into simple differential equations. When I studied general relativity, I spent quite some time on the geometrical structures instead of the physical meaning, and I want to summarize the mathematical details here in a bottom-up way.</p>
<!-- more -->
<p>So when you open Wikipedia and click into the page on <a href="https://en.wikipedia.org/wiki/General_relativity">General Relativity</a>, the first equation you will encounter is the Einstein field equations:</p>
\[G_{\mu\nu} \equiv R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu}\]
<p>If you have the same background knowledge as I did, then you probably immediately recognize the constants \(G\) and \(c\), but wtf are the other symbols with subscripts? And if you open an orthodox GR textbook, you will probably not meet the equation again until chapter 8 or later, if you haven’t already gotten lost in the first 7 chapters.</p>
<p>Well, totally understandable.</p>
<p>Having gone through all the obstacles (probably not entirely useless) and struggled with all the mathematical details, I want to write these down while I still remember them, at least as a refresher for the future.</p>
<h4 id="from-scalars-to-tensors">From scalars to tensors</h4>
<p>So, obviously, we need to know what \(R_{\mu \nu}\), \(g_{\mu \nu}\) and \(T_{\mu \nu}\) look like. The \(R\) in the equation may seem familiar, but it is actually a scalar derived from \(R_{\mu \nu}\), not a radius. In summary, we need to know how to represent the three “monsters”.</p>
<p>Let’s build them up from a scalar function, step by step. First, what do \(\mu\) and \(\nu\) represent? They are just indices from \(\{0, 1, 2, 3\}\), where 0 represents the “time” component and 1 to 3 are the “space” components. The slightest variation from a scalar function is a vector function, say \(U^{\mu}\), the four-velocity of a particle. It is worth knowing that these variables follow Einstein’s notation: when you see two identical indices (presumably one lower, one upper, as explained next), they represent a summation over all four possible values.</p>
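The summation convention can be made concrete with a short sketch; NumPy’s <code>einsum</code> performs exactly this repeated-index bookkeeping (the arrays below are arbitrary illustrative values, not physical data):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # an example (0-2) tensor: the flat metric
u = np.array([2.0, 1.0, 0.0, 0.0])   # example four-vectors u^mu, v^nu
v = np.array([3.0, 0.0, 1.0, 0.0])

# Repeated indices mean "sum over {0,1,2,3}": g_{mu nu} u^mu v^nu
explicit = sum(g[m, n] * u[m] * v[n] for m in range(4) for n in range(4))
contracted = np.einsum('mn,m,n->', g, u, v)
# Both give the same scalar.
```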
<p>Next, the lower and upper positions. This is why it is called a (0-2) tensor instead of just a matrix. The minimal set of rules you need to know includes:</p>
<ol>
<li>
<p>In GR, the symbols appearing in the superscripts (subscripts) should match exactly those on the other side of the equation; expressions with different index structures cannot be equal (0 can have any structure).</p>
</li>
<li>
<p>A symbol with a superscript behaves like a coordinate function, like \(x^{\mu}\). One with a subscript is a one-form. (No need to shift your focus to this if you haven’t learnt it.)</p>
</li>
<li>
<p>If you need to lower an index, you multiply the tensor by the metric tensor \(g_{\mu\, \nu}\), that is, \({F^{\mu}}_{\gamma} g_{\mu \nu}=F_{\nu \gamma}\). (The index order matters, but we will ignore that here.)</p>
</li>
<li>
<p>(Optional) You can contract a tensor with another tensor carrying the same index in the opposite position. For example, \(T_{\mu\nu}U^{\mu}V^{\nu}\) is the inner product of the T-matrix with the two column vectors.</p>
</li>
</ol>
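Rule 3 (index lowering) can be sketched numerically. With the flat metric as an example, lowering an index just flips the sign of the time components (the \(F\) here is a made-up (1-1) tensor for illustration):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric used to lower the index
F = np.arange(16.0).reshape(4, 4)    # made-up components F^mu_gamma

# F_{nu gamma} = g_{mu nu} F^mu_gamma  (sum over the repeated mu)
F_low = np.einsum('mn,mg->ng', g, F)
# With this diagonal metric, only the mu = 0 row changes sign.
```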
<p>To visualize a tensor (say the (0-2) case), you can roughly understand it as a bundle of 16 scalar functions: pick the corresponding \(\mu,\ \nu\) entries in the row vector shown below, multiply by the corresponding entries of the \(U^{\mu}V^{\nu}\) column vectors, and add up all the combinations. From this we can see why the super/subscript placement matters.</p>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/posts/gr_pic1.png" alt="Image - Structure of (0-2) tensor" />
<figcaption class="caption">Structure of (0-2) tensor</figcaption>
</figure>
<h5 id="the-metric-and-the-stress-energy-tensor">The metric and the stress-energy tensor</h5>
<p>Before getting into the pure mathematical definitions, let’s get acquainted with the two symbols \(g_{\mu \nu}\) and \(T_{\mu \nu}\).</p>
<p>\(g_{\mu \nu}\), the metric tensor, defines the distance between two nearby points and, more importantly, serves as the bridge between lower and upper indices. From special relativity, we know that the proper interval is</p>
\[\Delta S^2= \Delta x^2+ \Delta y^2 + \Delta z^2 - c^2 \Delta t^2,\]
<p>which can be understood as the inner product of the four-vector \(\Delta x^{\mu}\) with itself under the Minkowski metric:</p>
\[\eta= \begin{bmatrix}
-c^2 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}\]
<p>(It should be made clear that even though the metric is commonly represented as a matrix, the two structures are different; the tensor structure above is the more rigorous one, but we will just use the matrix representation.)</p>
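A small sketch of that inner product, using the same convention as the matrix above (coordinate \(x^0 = t\), so the time-time entry is \(-c^2\)); the displacement values are arbitrary:

```python
import numpy as np

c = 3.0e8                               # speed of light, m/s (rounded)
eta = np.diag([-c**2, 1.0, 1.0, 1.0])   # Minkowski metric with x^0 = t

def interval(dx):
    """Proper interval Delta S^2 = eta_{mu nu} dx^mu dx^nu."""
    dx = np.asarray(dx, dtype=float)
    return np.einsum('mn,m,n->', eta, dx, dx)

# A purely spatial separation reduces to the squared Euclidean distance;
# a light ray (dx = c * dt) gives a vanishing interval.
```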
<p>Again, in the simplest language, the metric tensor is a 16-component bundle of scalar functions of the 4-vector position, and it reflects the local structure of spacetime.</p>
<p>The stress-energy tensor, \(T_{\mu \nu}\), reflects the local energy density and energy flux.</p>
<p>The time–time component is the density of relativistic mass, i.e., the energy density divided by the speed of light squared. The flux of relativistic mass across the \(x^k\) surface is equivalent to the density of the k-th component of linear momentum, i.e., the momentum density in the figure; the components \(T^{kl}\) represent the flux of the k-th component of linear momentum across the \(x^l\) surface. (This whole paragraph is copied from <a href="https://en.wikipedia.org/wiki/Stress-energy_tensor">Wikipedia</a>.)</p>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/posts/gr_pic2v3.png" alt="Image - Structure of stress-energy tensor" />
<figcaption class="caption">Structure of stress-energy tensor</figcaption>
</figure>
<p>One example is the case where there is only an EM field in space; then the SE tensor can be built from the energy density, the <a href="https://en.wikipedia.org/wiki/Poynting_vector">Poynting vector</a> and the <a href="https://en.wikipedia.org/wiki/Maxwell_stress_tensor">Maxwell stress tensor</a>:</p>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/posts/gr_pic3.png" alt="Image - Electromagnetic SE tensor" />
<figcaption class="caption">Electromagnetic SE tensor</figcaption>
</figure>
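Here is a sketch of that construction in SI units (the field values are arbitrary, and the \(x^0\) coordinate is taken as \(ct\), so \(\eta = \mathrm{diag}(-1,1,1,1)\), slightly different from the \(t\)-coordinate matrix earlier). A useful sanity check: the electromagnetic stress-energy tensor is symmetric and traceless.

```python
import numpy as np

eps0 = 8.854187817e-12           # vacuum permittivity
mu0 = 4e-7 * np.pi               # vacuum permeability
c = 1.0 / np.sqrt(eps0 * mu0)    # speed of light

def em_stress_energy(E, B):
    """Assemble T^{mu nu} from energy density, Poynting vector, Maxwell stress."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)                        # energy density
    S = np.cross(E, B) / mu0                                        # Poynting vector
    sigma = (eps0 * (np.outer(E, E) - 0.5 * np.eye(3) * (E @ E))
             + (np.outer(B, B) - 0.5 * np.eye(3) * (B @ B)) / mu0)  # Maxwell stress
    T = np.zeros((4, 4))
    T[0, 0] = u
    T[0, 1:] = T[1:, 0] = S / c
    T[1:, 1:] = -sigma
    return T

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T = em_stress_energy([1.0, 0.0, 0.0], [0.0, 1e-8, 0.0])
trace = np.einsum('mn,mn->', eta, T)   # eta_{mu nu} T^{mu nu} should vanish
```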
<h5 id="covariant-derivatives">Covariant derivatives</h5>
<p>So now we can see that the RHS of Einstein’s equation, the energy density and flux, can influence (and be influenced by) the spacetime structure. However, note that if the term \(R_{\mu\nu}\) contained no derivatives, the equation would be purely algebraic in the metric, which doesn’t make sense, as spacetime changes could not propagate. So the first term must involve derivatives, and that’s what we discuss next.</p>
<p>I will state the final form first: the term \(R_{\mu\nu}\) is a function of the metric tensor and its first and second derivatives. Again, we will mainly focus on the tensor structure instead of the physical interpretation.</p>
<p>First we introduce the covariant derivative, the counterpart of the ordinary derivative when the coordinate system itself changes: we know that in polar coordinates the derivative of a vector field is</p>
\[\frac{\partial \vec{V}}{\partial r} =\frac{\partial V^{\alpha}}{\partial r} {\vec{e}_{\alpha}} + V^{\alpha}\frac{\partial \vec{e}_{\alpha}}{\partial r}\]
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/posts/gr_pic4.gif" alt="Image - Covariant and directional derivative" />
<figcaption class="caption">Covariant and directional derivative</figcaption>
</figure>
<p>(Cited from <a href="http://www.gregegan.net/FOUNDATIONS/02/found02.html">here</a> ).</p>
<p>where the second term comes from the rotating coordinate basis. It is similar in general relativity, where the spacetime is curved: we introduce the Christoffel symbol, whose definition is</p>
\[\frac{\partial \vec{e}_{\alpha}}{\partial x^{\beta}} = {\Gamma^{\mu}}_{\alpha\beta} \vec{e}_{\mu}.\]
<p>We can see clearly that the second term captures the change of the basis vectors themselves. Going back to our discussion, the component form of the Christoffel symbol is</p>
\[{\Gamma^{\alpha}}_{\mu\nu} = \frac{1}{2} g^{\alpha\beta} (g_{\beta \mu,\nu}+g_{\beta \nu,\mu}-g_{\mu\nu,\beta})\]
<p>The comma denotes a partial derivative. So it is clear that the Christoffel symbol is a (1-2)-indexed object (not itself a tensor) involving the zeroth- and first-order derivatives of the metric tensor.</p>
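To see the component formula in action, here is a sketch computing Christoffel symbols by finite differences for a unit 2-sphere (a 2D toy example rather than spacetime; coordinates are \((\theta, \phi)\), and the known closed forms \({\Gamma^{\theta}}_{\phi\phi} = -\sin\theta\cos\theta\), \({\Gamma^{\phi}}_{\theta\phi} = \cot\theta\) serve as a check):

```python
import numpy as np

def metric(theta):
    # Unit 2-sphere metric in (theta, phi): g = diag(1, sin^2 theta)
    return np.diag([1.0, np.sin(theta) ** 2])

def christoffel(theta, h=1e-6):
    # dg[b] holds the partial derivative of g w.r.t. coordinate b;
    # phi-derivatives (dg[1]) vanish by symmetry
    dg = np.zeros((2, 2, 2))
    dg[0] = (metric(theta + h) - metric(theta - h)) / (2 * h)  # d/dtheta
    g_inv = np.linalg.inv(metric(theta))
    # Gamma^a_{mn} = 1/2 g^{ab} (g_{bm,n} + g_{bn,m} - g_{mn,b})
    gamma = np.zeros((2, 2, 2))
    for a in range(2):
        for m in range(2):
            for n in range(2):
                gamma[a, m, n] = 0.5 * sum(
                    g_inv[a, b] * (dg[n][b, m] + dg[m][b, n] - dg[b][m, n])
                    for b in range(2)
                )
    return gamma

G = christoffel(1.0)   # evaluate at theta = 1 rad
```

The numerical values match the closed forms, and the symbol is symmetric in its two lower indices, as the formula requires.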
<h5 id="reimann-curvature-and-ricci-tensor">Riemann curvature and Ricci tensor.</h5>
<p>Congratulations, you have made it this far! Now comes the final step before we recover the whole field equation. You may have noticed that what we eventually want is a (0-2) tensor, so some further manipulation is needed. First, we introduce the Riemann tensor, which is defined as</p>
\[{R^{\alpha}}_{\beta\mu\nu} = {\Gamma^{\alpha}}_{\beta \nu,\mu} - {\Gamma^{\alpha}}_{\beta \mu,\nu} + {\Gamma^{\alpha}}_{\sigma\mu} {\Gamma^{\sigma}}_{\beta\nu} - {\Gamma^{\alpha}}_{\sigma\nu} {\Gamma^{\sigma}}_{\beta\mu}.\]
<p>This term comes from parallel-transporting a vector around a closed loop and measuring the resulting change in the vector. A more explicit representation (valid in locally flat coordinates, where the \(\Gamma\Gamma\) terms drop out) is</p>
\[{R^{\alpha}}_{\beta\mu\nu} = \frac{1}{2} g^{\alpha \sigma} (g_{\sigma \nu,\beta \mu} - g_{\sigma \mu,\beta \nu} +g_{\beta \mu,\sigma \nu} -g_{\beta \nu,\sigma \mu}).\]
<p>Then we contract the Riemann tensor, and get Ricci tensor:</p>
\[R_{\alpha \beta} = {R^{\mu}}_{\alpha\mu\beta}\]
<p>and the Ricci scalar</p>
\[R = g^{\mu \nu} R_{\mu \nu} = g^{\mu \nu} g^{\alpha \beta} R_{\alpha \mu \beta \nu}.\]
<p>If you still remember the field equation, you can see that these are the (0-2) R tensor and the R scalar in it, so we can now summarize the structure of the Einstein field equations: the LHS is 16 component functions of the metric tensor and its second derivatives, describing the local geometric deformation, and the RHS is the 16 component functions of the stress-energy tensor. That’s it!</p>

<hr />
<h2>Useful Websites for academic writing</h2>
<p>2018-05-27 · https://weihaocao.com/efficiency/2018/05/27/academic-writing</p>
<p>This is an archive of websites useful for writing reports.</p>
<!-- more -->
<h4 id="preparation">Preparation</h4>
<h5 id="using-word">Using Word</h5>
<ol>
<li>Install LaTeX-style fonts: <a href="https://ctan.org/tex-archive/fonts/cm">Download the cmr font package</a></li>
<li>Enable styles to generate tables of contents: <a href="http://www.addbalance.com/usersguide/styles.htm">Styles</a></li>
<li>Download a TeX-style equation editor: <a href="http://www.amyxun.com/">AxMath editor</a></li>
<li>Download a reference management system: <a href="https://endnote.com/">EndNote Software</a></li>
<li>Enable grammar checking: <a href="https://www.grammarly.com/">Grammarly</a></li>
</ol>
<h5 id="using-latex">Using LaTeX</h5>
<ol>
<li>Online LaTeX editor: <a href="https://overleaf.com">Overleaf</a></li>
</ol>
<h4 id="writing">Writing</h4>
<ol>
<li>Version management system: <a href="https://github.com">Github</a><br />
Useful if you want to keep milestones or collaborate. (Or maybe just use Google Docs?)</li>
<li>Find the most common phrasing (highly recommended!): <a href="http://www.netspeak.org/">Netspeak</a></li>
<li>Find synonyms: <a href="https://www.thesaurus.com/browse/treasure">Thesaurus</a></li>
<li>Foreign language translation: <a href="https://translate.google.com/">Google Translate</a></li>
</ol>
<h5 id="other-websites">Other websites</h5>
<ol>
<li>Contemporary American English: <a href="https://corpus.byu.edu/coca/">COCA Corpus</a></li>
<li>Higher-level grammar checking (recommended!): <a href="http://www.hemingwayapp.com/">Hemingway</a></li>
<li>Word context visualization: <a href="https://visuwords.com/two-dimensional">Visuwords</a></li>
<li>Word pronunciation: <a href="https://zh.forvo.com/">Forvo</a></li>
<li>And finally, some relaxation ^_^ <a href="https://www.netflix.com/">Netflix</a></li>
</ol>

<hr />
<h2>Classification of subgroups of the symmetric group S4</h2>
<p>2017-12-21 · https://weihaocao.com/mathematics/2017/12/21/alg-subgp</p>
<p>This article identifies the subgroups of the symmetric group \(S_4\) using theorems from undergraduate algebra courses.</p>
<!-- more -->
<h1 id="basic-fact">Basic Fact</h1>
<p>Below we will use cycle notation to denote subgroup elements.<br />
\(S_4\) has \(4!\) elements. Categorizing them by cycle pattern, we get<br />
\(\begin{array}
{c|c|c}
\text{Cycle pattern} & \text{No. of elements} & \text{Order} \\
\hline
(1,1,1,1):\ id. & 1 & ord(e)=1 \\
(2,1,1):\ (1,2),(1,3)\ldots & 6 & ord(\alpha)=2 \\
(3,1):\ (1,2,3),(1,3,2)\ldots & 8 & ord(\beta)=3 \\
(2,2):\ (1,2)(3,4)\ldots & 3 & ord(\gamma)=2 \\
(4):\ (1,2,3,4)\ldots & 6 & ord(\delta)=4 \\
\end{array}\)</p>
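The counts in the table can be verified with a quick sketch over all 24 permutations (itertools generates them; cycle types are found by walking each orbit):

```python
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    """Cycle lengths of a permutation of {0,1,2,3} given as a tuple i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(4):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

counts = Counter(cycle_type(p) for p in permutations(range(4)))
# identity: 1, transpositions: 6, 3-cycles: 8, double transpositions: 3, 4-cycles: 6
```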
<h1 id="classification">Classification</h1>
<p>From Lagrange’s Theorem, a nontrivial proper subgroup of \(S_4\) must have order \(p\) with \(p \mid 24\), i.e., \(p = 2, 3, 4, 6, 8\) or 12.</p>
<h3 id="1subgroups-of-order-2">1. Subgroups of order 2</h3>
<p>\(H \cong \mathbb{Z_2}\), which is equivalent to \(H = \langle\sigma\rangle\) for some \(\sigma\) with ord(\(\sigma\))=2. Thus \(S_4\) has 9 such subgroups: \(\lbrace \langle\alpha\rangle \rbrace \cup \lbrace \langle\gamma\rangle \rbrace\).</p>
<h3 id="2subgroups-of-order-3">2. Subgroups of order 3</h3>
<p>\(H \cong \mathbb{Z_3}\), which is equivalent to \(H = \langle\sigma\rangle\) for some \(\sigma\) with ord(\(\sigma\))=3. Each such subgroup contains two elements of order 3, so the 8 elements of order 3 give \(8/2=4\) subgroups: \(\lbrace \langle\beta\rangle \rbrace\).</p>
<h3 id="3subgroups-of-order-4">3. Subgroups of order 4</h3>
<p>The class equation (or just simple reasoning) tells us that any subgroup \(H\) of order 4 is abelian, and thus the fundamental theorem of finite abelian groups tells us \(H \cong \mathbb{Z_4}\) or \(H \cong \mathbb{Z_2} \times \mathbb{Z_2}\). Next we find all subgroups by enumerating the cases.</p>
<h4 id="case-1-h-cong-mathbbz_4">Case 1: \(H \cong \mathbb{Z_4}\)</h4>
<p>From the cycle patterns, we know that there are 3 cyclic subgroups. (Note: not 6, since \(\langle(1,2,3,4)\rangle = \langle(1,4,3,2)\rangle\) and so on.)</p>
<h4 id="case-2-h-cong-mathbbz_2-times-mathbbz_2">Case 2: \(H \cong \mathbb{Z_2} \times \mathbb{Z_2}\)</h4>
<p>Denote \(H= \lbrace id,a,b,c \rbrace\); then \(ord(a)=ord(b)=ord(c)=2\), and \(a*b=c,\ a*c=b,\ b*c=a\).<br />
Consider the conjugacy classes \(\lbrace \alpha \rbrace\) and \(\lbrace \gamma \rbrace\):</p>
<h6 id="1-hbackslash-lbrace-e-rbrace-subset-lbrace-alpha-rbrace">1. \(H\backslash \lbrace e \rbrace \subset \lbrace \alpha \rbrace\):</h6>
<p> No such subgroup exists. (The set is not closed.)</p>
<h6 id="2-hbackslash-lbrace-e-rbrace-subset-lbrace-gamma-rbrace">2. \(H\backslash \lbrace e \rbrace \subset \lbrace \gamma \rbrace\):</h6>
<p> \(H^{(2)} = \lbrace e \rbrace \cup \lbrace \gamma \rbrace\), which is a nontrivial proper abelian (thus normal) subgroup of \(S_4\).</p>
<h6 id="3-hbackslash-lbrace-e-rbrace-notsubset-lbrace-alpha-rbrace-and-hbackslash-lbrace-e-rbrace-notsubset-lbrace-gamma-rbrace">3. \(H\backslash \lbrace e \rbrace \not\subset \lbrace \alpha \rbrace\) and \(H\backslash \lbrace e \rbrace \not\subset \lbrace \gamma \rbrace\):</h6>
<p> There are three such subgroups, each with one element from {\(\gamma\)} and the other two nontrivial elements being the two disjoint transpositions whose product is that element. For example, \(\lbrace id,(1,2)(3,4),(1,2),(3,4) \rbrace\).</p>
<h3 id="4subgroups-of-order-6">4. Subgroups of order 6</h3>
<p>Here \(p=2\) divides \(q-1=3-1=2\), so we cannot use the corollary to Sylow’s Theorem to claim that the subgroup is abelian. However, since no element of \(S_4\) has order 6, \(S_4\) has no abelian subgroup of order 6 by the fundamental theorem of finite abelian groups.<br />
Using a brute-force approach, or the semidirect product of a subgroup \(H\) of order 2 and a normal subgroup \(N\) of order 3, we may claim that any such subgroup is isomorphic to \(S_3\). Thus \(S_4\) has four subgroups of order 6.</p>
<h3 id="5subgroups-of-order-8">5. Subgroups of order 8</h3>
<p>Subgroups of order 8 are 2-Sylow subgroups of \(S_4\). Sylow’s third theorem tells us there are \(r=1\) or \(r=3\) of them. The case \(r=1\) can be ruled out: otherwise \(H\) would be a normal subgroup of \(S_4\), but no union of conjugacy classes (containing the identity) has cardinality 8. Thus \(r=3\).</p>
<p>\(H\) is not normal in \(S_4\), so \(H\) is not abelian. The lemma to Sylow’s First Theorem gives us that the center \(Z\) of \(H\) satisfies \(Z \cong \mathbb{Z_2}\) and \(H/Z \cong \mathbb{Z_2} \times \mathbb{Z_2}\). Referring back to the subgroups of order 4 (Section 3.2), we may simply compose \(H^{(2)}\) with an \(\alpha\) to get the three subgroups, e.g., \(H^{(2)} \cup (1,2) H^{(2)}\).</p>
<h3 id="6subgroups-of-order-12">6. Subgroups of order 12</h3>
<p>\([G:H]=2\), so \(H^{(1)}\) is a normal subgroup of \(S_4\). Checking the cardinalities of the conjugacy classes, we find there exists only one subgroup of order 12, namely \(\lbrace e \rbrace \cup \lbrace \beta \rbrace \cup \lbrace \gamma \rbrace\).<br />
In fact, \(H^{(1)} \cong A_4\), the alternating subgroup: the elements of \(H^{(1)}\) are exactly the even permutations.</p>
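The whole classification can be double-checked by brute force. This sketch assumes every subgroup of \(S_4\) is generated by at most two elements, which holds for each type found above (\(\mathbb{Z_2}\), \(\mathbb{Z_3}\), \(\mathbb{Z_4}\), \(\mathbb{Z_2}\times\mathbb{Z_2}\), \(S_3\), the order-8 subgroups, \(A_4\), \(S_4\)):

```python
from collections import Counter
from itertools import permutations

elems = list(permutations(range(4)))   # S4 as tuples i -> p[i]
identity = tuple(range(4))

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def generated(gens):
    """Close a generating set under composition (inverses come for free in a finite group)."""
    group = set(gens) | {identity}
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:
            return frozenset(group)
        group |= new

subgroups = {generated((a, b)) for a in elems for b in elems}
orders = Counter(len(H) for H in subgroups)
# orders: {1: 1, 2: 9, 3: 4, 4: 7, 6: 4, 8: 3, 12: 1, 24: 1}, i.e. 30 subgroups in total
```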
<h1 id="discussion">Discussion</h1>
<h3 id="1-s_4-is-a-solvable-group">1. \(S_4\) is a solvable group.</h3>
<p>Generate the commutator subgroup sequence, and we get \(S_4 = H^{(0)} \supset H^{(1)} \supset H^{(2)} \supset H^{(3)} = \lbrace e \rbrace\).</p>
<h3 id="2-subgroup-lattice-of-s_4">2. Subgroup lattice of \(S_4\)</h3>
<figure class="figure">
<img class="image" src="https://weihaocao.com/assets/images/S4.jpg" alt="Image - Subgroup Lattice" />
<figcaption class="caption">Subgroup Lattice</figcaption>
</figure>

<hr />
<h2>Quantum Computing - Report</h2>
<p>2017-11-25 · https://weihaocao.com/physics/2017/11/25/quantum-computing</p>
<p>Quantum Computing and its Physical Realization</p>
<!-- more -->
<h1 id="abstract">Abstract</h1>
<p>Quantum computing has recently been a hot topic: the decreasing dimensions of classical computing circuits make it inevitable that quantum effects will come into play. On the other hand, many featured properties of quantum computing have made it a potential candidate for the next generation of computing devices, if the growth of computing power is to keep pace with Moore’s Law. Specifically, some quantum algorithms have shown exponential speedup compared with classical algorithms.</p>
<p>In this report, I first review the formalism of quantum computing, providing mathematical proofs of theorems and some concrete examples. Then one of the most famous algorithms, the quantum Fourier transform (QFT), and its applications will be discussed. Finally, an experiment using the QFT and machine learning techniques to classify digits will be analyzed, and some discussion will be offered.</p>
<h1 id="formalism">Formalism</h1>
<p>First I will introduce the elements of quantum computation. Similar to classical computation, quantum computation may be decomposed into three parts: the qubit, which is the “memory” and computational unit of quantum computing; the quantum gate, which implements unitary transformations on quantum states; and measurement, during which the desired information is extracted. The formalism of each part is discussed below. As for the implementation of quantum computing algorithms, it generally involves three steps: encoding the input state, applying a unitary transformation to that state, and measuring the output state. An example will be postponed until later.</p>
<h2 id="qubits">Qubits</h2>
<p>Analogous to a classical bit in classical computers, a qubit is the basic two-level computational unit governed by quantum mechanics. It is represented as</p>
\[| \psi \rangle = \alpha | 0 \rangle + \beta | 1 \rangle\]
<p>It can be seen that, rather than taking only the values 0 and 1 as in classical computers, the coefficients \(\alpha\) and \(\beta\) take continuous values. Unfortunately, measurement theory tells us that only one bit of information may be acquired when the state is measured: only state \(\mid 0 \rangle\) or state \(\mid 1 \rangle\) may be detected in experiments, and the coefficients cannot be acquired directly.</p>

<hr />
<h2>Convolutional Neural Network Walkthrough - Post 3 | Summer Research Summary</h2>
<p>2017-10-01 · https://weihaocao.com/computer/2017/10/01/conv-net-three</p>
<p>This post summarizes some other interesting topics or concepts.</p>
<!-- more -->
<h5 id="transfer-learning">Transfer Learning</h5>
<p><a href="http://ruder.io/transfer-learning/index.html">Transfer Learning - Machine Learning’s Next Frontier</a> <br />
A general introduction of transfer learning.</p>
<h5 id="one-shot-learning">One-Shot Learning</h5>
<p><a href="https://rylanschaeffer.github.io/content/research/one_shot_learning_with_memory_augmented_nn/main.html">Explanation of
One-shot Learning with Memory-Augmented Neural Networks</a><br />
Reading and writing external memory to augment neural networks’ performance.<br />
<a href="http://rylanschaeffer.github.io/content/research/neural_turing_machine/main.html">Explanation of
Neural Turing Machines</a><br />
Another common one-shot learning method.<br />
<a href="https://www.quora.com/How-is-one-shot-learning-different-from-deep-learning">How is one-shot learning different from deep learning?</a></p>
<h5 id="ensemble-learning">Ensemble Learning</h5>
<p><a href="http://blog.csdn.net/google19890102/article/details/46507387">Ensemble Method</a></p>
<h5 id="lstm">LSTM</h5>
<p><a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">Understanding LSTM Networks</a><br />
Recurrent neural network and LSTM networks.</p>
<h5 id="others">Others</h5>
<p><a href="http://ruder.io/highlights-nips-2016/index.html#thenutsandboltsofmachinelearning">NIPS 2016</a><br />
A concrete walkthrough of the event.<br />
<a href="https://zhuanlan.zhihu.com/p/21414359">Hinton’s Cambridge talk: error backpropagation in the brain’s neurons (Chinese)</a><br />
Some fascinating insights and ideas.</p>

<hr />
<h2>Convolutional Neural Network Walkthrough - Post 2 | Summer Research Summary</h2>
<p>2017-08-31 · https://weihaocao.com/computer/2017/08/31/conv-net-two</p>
<p>This post shows how to understand convolutional neural networks, from shallow to deep.</p>
<!-- more -->
<h5 id="convolutional-neural-network">Convolutional Neural Network</h5>
<p><a href="http://cs231n.github.io/convolutional-networks/">Convolutional Neural Network</a><br />
Understand the basic structure of convolutional neural networks from the Stanford open course. Pay special attention to the convolution layer section, and understand the convolution operation from the GIF given. You may refer back to this website for a more careful read-through later.</p>
<p><a href="https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/">A Beginner’s Guide To Understanding Convolutional Neural Networks</a><br />
This tutorial explains things rather straightforwardly, with nice illustrations and examples. Combine it with the previous tutorial.<br />
<a href="http://papers.nips.cc/paper/293-handwritten-digit-recognition-with-a-back-propagation-network.pdf">Handwritten Digit Recognition with a Back-Propagation Network </a><br />
One of the groundbreaking works in pattern recognition.</p>
<h6 id="detailed-compositions">Detailed Compositions</h6>
<p><a href="https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions">ReLU Layer - New Activation Functions </a><br />
Take a look at the new ReLU activation function, compared with traditional sigmoid functions.<br />
<a href="https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/pooling_layer.html">Pooling Layer</a><br />
Pay attention to how pooling layers are backpropagated.<br />
<a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf">Dropout Layer</a><br />
Another commonly-used technique to prevent overfitting in neural networks.</p>
<p><a href="https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/pooling_layer.html">Visualizing Convolution</a><br />
Watch the dynamic convolution illustration and several visualization of the function of individual layers.<br />
<a href="https://youtu.be/ghEmQSxT6tw">Visualizing Convolution - YouTube Video</a><br />
Visualize each convolution layers’ function and how they form a hierarchical structure.<br />
<a href="https://www.tensorflow.org/tutorials/layers">Tensorflow Realization</a><br />
This page tells how to realize a CNN using TensorFlow. The language may be a little abstruse for newcomers, but it is better than nothing.</p>
<h5 id="general-deep-learning-strategies">General Deep Learning Strategies</h5>
<p><a href="http://lamda.nju.edu.cn/weixs/project/CNNTricks/CNNTricks.html">Must Know Tips/Tricks in Deep Neural Networks</a><br />
The webpage shares some interesting insight into deep networks training.<br />
<a href="https://machinelearningmastery.com/improve-deep-learning-performance/">How To Improve Deep Learning Performance</a><br />
Some general discussion in sequential order.</p>
<h5 id="object-detection">Object Detection</h5>
<p><a href="https://zhuanlan.zhihu.com/p/21412911">Progress in deep-learning-based object detection (Chinese)</a><br />
Introduces object detection algorithms in chronological order.</p>

<hr />
<h2>Convolutional Neural Network Walkthrough - Post 1 | Summer Research Summary</h2>
<p>2017-08-30 · https://weihaocao.com/computer/2017/08/30/conv-net-one</p>
<p>This post records the necessary links and excerpts to understand basic concepts regarding convolutional neural networks.</p>
<!-- more -->
<h5 id="preliminarybuilding-platforms-and-webtools">Preliminary: Building Platforms and Webtools</h5>
<h6 id="building-platforms">Building Platforms:</h6>
<p><a href="http://www.linuxandubuntu.com/home/10-basic-linux-commands-that-every-linux-newbies-should-remember">Linux System</a> Useful commands.</p>
<p><a href="https://www.tensorflow.org/install/" title="Tensorflow">Tensorflow</a><br />
Open-source machine-learning platform. Language: Python [Free]<br />
<a href="http://pytorch.org/" title="PyTorch">PyTorch</a><br />
Succinct and trouble-free building platform. Language: Python [Free]<br />
<a href="http://www.mlpack.org/">mlpack</a><br />
ML library written in C++. [Free]<br />
<a href="https://www.mathworks.com/solutions/machine-learning.html">Matlab</a> / <a href="http://www.vlfeat.org/matconvnet/">MatConvNet</a> <br />
Using Matlab to build ML networks. [May be paid]</p>
<p><a href="https://www.liaoxuefeng.com/" title="Liao Xuefeng’s official website">Python Tutorial</a><br />
A comprehensive website for Python beginners. (In Chinese.)</p>
<h6 id="useful-webtools">Useful webtools:</h6>
<p><a href="https://github.com/">Github</a> – <a href="https://www.howtoforge.com/tutorial/install-git-and-github-on-ubuntu-14.04/">Github Installation</a> / <a href="http://www.ruanyifeng.com/blog/2014/06/git_remote.html">Github Usage</a> <br />
An amazing collaboration platform, and a good place to store your code. <br />
<a href="https://cloud.google.com/">Google VM</a> – <a href="https://haroldsoh.com/2016/04/28/set-up-anaconda-ipython-tensorflow-julia-on-a-google-compute-engine-vm/">VM Setup</a> / <a href="https://medium.com/google-cloud/graphical-user-interface-gui-for-google-compute-engine-instance-78fccda09e5c">GUI Support</a><br />
Three hundred dollars’ worth of credits free to use, but GPUs were not supported at the time.<br />
<a href="https://notepad-plus-plus.org/">Notepad++</a> – <a href="https://sites.google.com/site/fstellari/nppplugins" title="Autofill and Autosave">Notepad++ Plugins</a><br />
Free editing software with multi-functional highlighting.</p>
<h5 id="machine-learning">Machine Learning</h5>
<p><a href="https://see.stanford.edu/Course/CS229">Machine Learning</a> <br />
Understand the basic concepts and category of machine learning.<br />
<a href="http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html">Artificial Neural Network</a><br />
Grasp some basic ideas about the composition of ANN.<br />
<a href="http://playground.tensorflow.org/">Neural Network Playground</a><br />
Have fun with the interactive design! <br />
<a href="https://algobeans.com/2016/03/13/how-do-computers-recognise-handwriting-using-artificial-neural-networks/">ANN</a><br />
A more user-friendly introduction with multiple illustrations.</p>
<p><a href="http://colah.github.io/posts/2015-08-Backprop/">Visualizing Backpropagation</a><br />
Understand backpropagation’s core issue: partial derivative. <br />
<a href="https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/">Visualizing Backpropagation - Part Two</a><br />
Step-by-step backpropagation with numbers.<br />
<a href="https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/">Visualizing optimizers</a><br />
You may refer to individual papers for detailed explanation. <br />
<a href="https://www.tensorflow.org/get_started/mnist/beginners">MNIST using ANN</a><br />
Auxiliary Notes. Be sure to try running real programs!<br />
<a href="https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/tensorboard/README.md">Tensorboard Readme</a><br />
Visualizing and monitoring the training process.</p>