All Posts
This week, while developing the QuTiP Virtual Lab, I ran into an obstacle with the build system: it seems the latest update to empack broke the ability to compile QuTiP to WebAssembly. So I focused on the user interface instead of the simulation logic.
Since the application is meant to be used cross-platform, I started thinking about the user experience for tablets. Initially I thought that the application would feature a canvas with tools for a user to drag-and-drop, but as it turns out, drag-and-drop is not intuitive on a tablet.
↪ Continue Reading

This post is a summary of the work done on the QuTiP Virtual Lab for weeks 4 and 5, since last week I was out for the July 4th holiday. As a result, the application looks very different from last week, but hopefully it’s closer to the final product.
In this post, I will break down all the changes, but here’s what a demo of the QuTiP Virtual Lab looks like when simulating qubit dephasing:
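The demo itself is shown in the full post; as a rough stand-in, here is a minimal QuTiP sketch of the kind of single-qubit dephasing simulation it runs. This is my reconstruction, not the Virtual Lab's actual code, and the dephasing rate is an arbitrary choice.

```python
# Minimal single-qubit dephasing with QuTiP (illustrative only).
import numpy as np
from qutip import basis, sigmax, sigmay, sigmaz, mesolve

# Start in (|0> + |1>)/sqrt(2), a state whose coherence is destroyed by dephasing
psi0 = (basis(2, 0) + basis(2, 1)).unit()

H = 0 * sigmaz()                    # no coherent drive; dephasing only
gamma = 0.5                         # dephasing rate (arbitrary choice)
c_ops = [np.sqrt(gamma) * sigmaz()]

tlist = np.linspace(0, 10, 200)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmax(), sigmay(), sigmaz()])

print(result.expect[0][:5])         # <sigma_x>(t) decays as the coherence is lost
```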
↪ Continue Reading

This week in the development of the QuTiP Virtual Lab, things are starting to take shape. It’s actually beginning to look like an application.
But before I could devote 100% of my time to UI development, I had to take care of a couple of bugs with the compilation of QuTiP to WebAssembly. When the application is built, it compiles QuTiP and its dependencies, and produces a WebAssembly module and a Javascript runtime.
↪ Continue Reading

Picking up from where we left off last week, this week I focused on establishing a build environment for the QuTiP Virtual Lab application. Although this may seem tangential to the actual application, it will allow future contributors to build the application regardless of their platform (Windows, Mac, Linux).
The way to do this is by specifying a container for building the application via a Dockerfile. Then, anyone with the source code and docker installed should be able to build the application.
↪ Continue Reading

Let’s face it: learning quantum mechanics is hard. There are many reasons for this, but one common remedy is to use computer simulations to help students get a feel for real-world quantum systems. One such tool is the Python package QuTiP (Quantum Toolbox in Python). It supports solving the dynamics of a variety of quantum systems and offers some basic visualization tools.
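As a tiny taste of what that looks like in practice (my own minimal example, not taken from the post), here is a qubit undergoing Rabi oscillations, solved with QuTiP's Schrödinger-equation solver:

```python
# Rabi oscillations of a single qubit with QuTiP (illustrative example).
import numpy as np
from qutip import basis, sigmax, sigmaz, sesolve

psi0 = basis(2, 0)                      # start in |0>
H = 2 * np.pi * 0.1 * sigmax()          # drive strength chosen arbitrarily

tlist = np.linspace(0, 10, 200)
result = sesolve(H, psi0, tlist, e_ops=[sigmaz()])

print(result.expect[0][:5])             # <sigma_z>(t) oscillates between +1 and -1
```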
↪ Continue Reading

Qubits are the building blocks of quantum computers and the simplest possible quantum systems, and this is the math you need to understand how they work.
Wavefunction

The simplest quantum system is described fully by a normalized vector \(|\psi\rangle\) in \(\mathbb{C}^2\):
\begin{equation*} |\psi\rangle = \left( \begin{matrix} a \\ b \end{matrix} \right) = a\left|0\right\rangle + b\left|1\right\rangle \end{equation*}
\(|a|^2 + |b|^2 = 1\) is the normalization condition, and \(\{|0\rangle, |1\rangle \}\) are basis vectors.
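The same state can be written directly with QuTiP objects; here is a small sketch with amplitudes \(a\) and \(b\) chosen arbitrarily to satisfy the normalization condition:

```python
# A qubit state |psi> = a|0> + b|1> as a QuTiP ket (amplitudes chosen arbitrarily).
from qutip import basis

a, b = 0.6, 0.8                        # |a|^2 + |b|^2 = 0.36 + 0.64 = 1
psi = a * basis(2, 0) + b * basis(2, 1)

print(psi.norm())                      # 1.0, confirming the normalization condition
```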
↪ Continue Reading

It’s a question that has challenged the smartest physicists of the past century. Why is quantum mechanics so hard to understand? Why is it so hard to communicate to ordinary people? Is it a problem with our minds? Is it because the theory itself is wrong?
Why can’t we find a common-sense interpretation of quantum theory?
There are a number of factors that make quantum theory difficult to understand. Here’s a short list:
↪ Continue Reading

In this post, I’d like to share the answer to the natural question that arises when one first becomes acquainted with supervised machine learning: why are there so many different ways to solve supervised learning problems?
A quick glance at scikit-learn, the (most?) popular machine-learning library, shows there are at least eight:
- Linear models
- Support Vector Machines
- Nearest Neighbors
- Gaussian Processes
- Cross decomposition
- Naive Bayes
- Decision Trees
- Neural Networks

At first glance, one might wonder why there are so many methods to accomplish the same task.
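Whatever the deeper reason, scikit-learn at least makes it painless to try several of them, since every estimator shares the same fit/predict interface. A small sketch of my own (toy data, arbitrary hyperparameters):

```python
# Three of scikit-learn's regressors used interchangeably on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))                  # toy inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

for model in (LinearRegression(),
              KNeighborsRegressor(n_neighbors=5),
              DecisionTreeRegressor(max_depth=3)):
    model.fit(X, y)                                    # identical interface for each
    print(type(model).__name__, model.score(X, y))     # R^2 on the training data
```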
↪ Continue Reading

A common application of IoT sensors is alerting to the presence of a specific event – be it a leaking appliance, a broken piece of equipment, or a home invasion (i.e. a burglar alarm). Due to the imperfect nature of these sensors, and the random, unpredictable world they live in, detecting a specific event cannot be done perfectly. Instead, we can only infer the presence of an event with some “noise”, or uncertainty.
↪ Continue Reading

Most of us have heard the story of “the boy who cried wolf”. A town’s herald boy is charged with alerting the town to the presence of a wolf. However, he has a proclivity for lying, and because of this, his alerts are unreliable. Eventually, the townsfolk have a difficult time inferring the presence of a wolf from the boy’s reports.
Though this story is simple, it is a perfect case to illustrate the application of Bayesian inference.
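To make that concrete, here is a toy Bayesian update for the story; the probabilities below are numbers I made up purely for illustration.

```python
# How much should a cry of "wolf!" move our belief that a wolf is present?
p_wolf = 0.01          # prior: wolves are rare
p_cry_wolf = 0.9       # P(boy cries | wolf): he usually notices a real wolf
p_cry_no_wolf = 0.3    # P(boy cries | no wolf): he often lies

# Bayes' rule: P(wolf | cry) = P(cry | wolf) P(wolf) / P(cry)
p_cry = p_cry_wolf * p_wolf + p_cry_no_wolf * (1 - p_wolf)
posterior = p_cry_wolf * p_wolf / p_cry

print(posterior)       # ~0.03: an unreliable boy barely shifts the townsfolk's belief
```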
↪ Continue Reading

Consider a discrete time series \(\{X_t | t = 0, 1, 2, \ldots\}\), where each \(X_t\) could be either deterministic (like a sample from a sine wave) or a random variable. We can form a simple moving average from this time series by defining a new time series \(Y_t\), where:
\begin{equation*} Y_t = \frac12 \left(X_t + X_{t-1} \right) \end{equation*}
i.e. \(Y_t\) represents the average of the current value of \(X_t\) and the previous value, \(X_{t-1}\).
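In code, with a noisy sine wave standing in for \(X_t\) (a quick illustration of my own), the moving average is a one-liner:

```python
# Two-point moving average of a noisy sine wave.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
X = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(t.size)

Y = 0.5 * (X[1:] + X[:-1])     # Y_t = (X_t + X_{t-1}) / 2
print(X.var(), Y.var())        # the average has damped some of the noise
```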
↪ Continue Reading

The equation of motion of a damped pendulum subject to a stochastic driving force is given by:
\[ X''(t) + 2\alpha X'(t) + (\omega_0^2 + \alpha^2)X(t) = W(t) \]
where \(X(t)\) is the displacement of the pendulum from its resting position, \(\alpha\) is the damping factor, and \(W(t)\) is a stochastic driving force, which may come from immersing the pendulum in a turbulent fluid.
Analytical Solution

This system can be solved analytically, and fairly quickly, so we might as well just do it.
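For orientation (a step I am adding here, not quoted from the post), the homogeneous, undriven equation already fixes the character of the motion. Its characteristic equation

\[ r^2 + 2\alpha r + (\omega_0^2 + \alpha^2) = 0 \]

has roots \(r = -\alpha \pm i\omega_0\), so the undriven pendulum oscillates at frequency \(\omega_0\) inside a decaying envelope \(e^{-\alpha t}\); the stochastic force \(W(t)\) then continually kicks this damped oscillator.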
↪ Continue Reading

Homotopy

The geometric idea of homotopy is an interpolation between two paths on a manifold which share the same start and end point.
Mathematically speaking, given a manifold \(S\) and two fixed points \(p\) and \(q\) on the manifold, two paths \(\gamma_0, \gamma_1 : [0,T] \to S\) are homotopic if there exists a smooth function \(\gamma\):
\[ \gamma: [0,1] \times [0,T] \to S \]
such that \(\gamma(s, \cdot)\) is a path from \(p\) to \(q\) for each \(s\) and
↪ Continue Reading

Motivation

In developing a modern web application, you end up spending a lot of time managing state and developing models of how the state can evolve. As you add features, you have to (1) update your models of how the application should work, (2) implement the actual features according to the model, and (3) test that your application is actually working. Modern web frameworks like React or Vue make it easy to update the view layer in response to transitions between states.
↪ Continue Reading

Quantum systems are inherently difficult to measure precisely, due to the variance in every measurement, as codified by the Heisenberg uncertainty principle. In this post we look at the method of spin squeezing, a way to increase the precision of measurement of a specific quantity by using entanglement.
The Setup

Let’s consider a quantum system of \(N\) spin-\(1/2\) particles, initially prepared in a non-entangled state. These could be electrons, but in this post, we will just refer to them as particles.
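As a rough numerical sketch of this setup (my own, using QuTiP's collective spin operators and an arbitrarily chosen one-axis-twisting interaction, not necessarily what the post uses):

```python
# Spin squeezing of N spin-1/2 particles via one-axis twisting (illustrative sketch).
import numpy as np
from qutip import jmat, spin_coherent, sesolve, variance

N = 20                                   # number of particles (arbitrary choice)
j = N / 2                                # total spin of the symmetric subspace
Jy, Jz = jmat(j, 'y'), jmat(j, 'z')

psi0 = spin_coherent(j, np.pi / 2, 0)    # all spins aligned along +x, no entanglement
H = Jz ** 2                              # one-axis-twisting Hamiltonian (chi = 1)

tlist = np.linspace(0, 0.2, 50)
states = sesolve(H, psi0, tlist).states

# Smallest variance of a spin quadrature transverse to the mean spin direction;
# for the initial coherent state it equals j/2 = N/4, and squeezing pushes it lower.
thetas = np.linspace(0, np.pi, 181)
min_var = [min(variance(np.cos(th) * Jy + np.sin(th) * Jz, s) for th in thetas)
           for s in states]
print(min_var[0], min(min_var))          # ~N/4 initially, smaller once squeezed
```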
↪ Continue Reading

I recently participated in a quantum computing challenge posed by the Quantum Open Source Foundation. In this post, I recap the problem description and my solution. The challenge explores the ability of a certain class of quantum circuits to prepare a desired four-qubit quantum state: the task is to optimize a circuit composed of layers of parameterized gates. To solve this, we implement the circuit in Python and optimize the parameters using the autograd library.
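As a much-simplified sketch of that workflow (a single qubit and a single parameter rather than the challenge's four-qubit, multi-layer circuit), gradient-based optimization with autograd looks like this:

```python
# Optimize one parameterized gate with autograd so the circuit prepares a target state.
import autograd.numpy as np
from autograd import grad

def ry(theta):
    """Single-qubit rotation about the y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

psi0 = np.array([1.0, 0.0])              # start in |0>
target = np.array([0.0, 1.0])            # target state |1>

def cost(theta):
    psi = np.dot(ry(theta), psi0)
    fidelity = np.abs(np.dot(target, psi)) ** 2
    return 1.0 - fidelity                # minimized when the circuit hits the target

dcost = grad(cost)                       # autograd supplies the gradient

theta = 0.1                              # initial guess
for _ in range(200):                     # plain gradient descent
    theta -= 0.1 * dcost(theta)

print(theta, cost(theta))                # theta approaches pi, cost approaches 0
```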
↪ Continue Reading

Let \(R\) be a Rayleigh-distributed random variable and \(\phi\) a uniformly distributed random variable.
The time series defined by:
\[ Y_t = R\cos{(2\pi(ft + \phi))} \]
with \(t\in\mathbb{Z}\), is equivalent to the time series
\[ Y_t' = U\cos{(2\pi f t)} + V\sin{(2\pi f t)} \]
where \(U\) and \(V\) are independent standard normal random variables:
\[ U, V \sim N(0,1) \]
Equivalence in this context means that \(Y_t\) and \(Y'_t\) have the same probability distributions.
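A quick numerical sanity check (my own, with arbitrary values of \(f\) and \(t\), and using the \(\cos/\sin\) form of \(Y'_t\) above): at a fixed time, samples of \(Y_t\) and \(Y'_t\) should have matching moments.

```python
# Compare samples of Y_t and Y'_t at one fixed time; both should look like N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
n, f, t = 100_000, 0.1, 3            # sample size, frequency, fixed time (arbitrary)

R = rng.rayleigh(scale=1.0, size=n)
phi = rng.uniform(0.0, 1.0, size=n)
Y = R * np.cos(2 * np.pi * (f * t + phi))

U, V = rng.standard_normal(n), rng.standard_normal(n)
Yp = U * np.cos(2 * np.pi * f * t) + V * np.sin(2 * np.pi * f * t)

print(Y.mean(), Y.var())             # ~0, ~1
print(Yp.mean(), Yp.var())           # ~0, ~1
```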
↪ Continue Reading

Principal component analysis, or PCA, is used to characterize a high-dimensional data set in terms of its main sources of variation. It is a dimensionality-reduction technique, which helps to avoid the curse of dimensionality. Here’s the main idea:
Let’s say we have an \(n\)-dimensional distribution with random variables \(X_1, X_2, \dots, X_n\). We can take these variables to be uncorrelated, with mean zero, without loss of generality. If we take \(N\) samples of each variable in this distribution, we can calculate the set of sample variances.
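Here is a minimal, self-contained sketch of the end result (my own toy example, using eigendecomposition of the sample covariance matrix with NumPy):

```python
# PCA by eigendecomposition of the sample covariance matrix (toy data).
import numpy as np

rng = np.random.default_rng(0)
cov_true = [[3.0, 1.0, 0.0],
            [1.0, 2.0, 0.0],
            [0.0, 0.0, 0.1]]
X = rng.multivariate_normal([0.0, 0.0, 0.0], cov_true, size=500)

Xc = X - X.mean(axis=0)                    # center each variable
cov = np.cov(Xc, rowvar=False)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]          # largest variance first
components = eigvecs[:, order]             # principal directions
explained = eigvals[order] / eigvals.sum()

print(explained)                           # fraction of variance per component
```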
↪ Continue Reading

In statistical mechanics, we often deal with expressions involving the Gamma function, which generalizes the factorial. When its argument is large, it is often more useful to approximate this function than to work with it directly. Stirling’s approximation is one way to do so; let’s look at its derivation.
The factorial function \(f(n) = n!\) of a positive integer \(n\) is defined as the product of the successively decreasing integers starting from \(n\):

\[ n! = n\,(n-1)\,(n-2)\cdots 2 \cdot 1 \]
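As a quick numerical check of where the derivation is headed (my own comparison, not part of the post), Stirling's formula \(n! \approx \sqrt{2\pi n}\,(n/e)^n\) is already quite accurate for modest \(n\):

```python
# Compare the exact factorial with Stirling's approximation.
import math

def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20, 50):
    exact = math.factorial(n)
    print(n, stirling(n) / exact)      # ratio approaches 1 as n grows
```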
↪ Continue Reading

In introductory statistics, you are taught that when estimating a population mean \(\mu\) and variance \(\sigma^2\), you should use the sample mean:
\[ \bar{x} = \frac{1}{n}\sum_i x_i \]
and sample variance:
\[ \widehat{\sigma^2} = \frac{1}{n-1} \sum_i \left(x_i - \bar{x}\right)^2 \]
with the hand-wavy comment that the \(n-1\) in the denominator is necessary to account for small sample sizes.
At least that was what I was told.
In this post I explain why this factor is necessary.
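As a preview of the punchline, here is a small Monte Carlo experiment of my own (arbitrary population, arbitrary sample size) showing that dividing by \(n\) systematically underestimates the true variance, while dividing by \(n-1\) does not:

```python
# Compare the 1/n and 1/(n-1) variance estimators over many small samples.
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0                    # true population variance (arbitrary choice)
n, trials = 5, 200_000          # small samples, many repetitions

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
xbar = x.mean(axis=1, keepdims=True)
ss = ((x - xbar) ** 2).sum(axis=1)

print((ss / n).mean())          # ~3.2 = (n-1)/n * sigma^2: biased low
print((ss / (n - 1)).mean())    # ~4.0: unbiased
```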
↪ Continue Reading