Markov Matrix and Fourier Series

    Markov Matrix

    An \(n\times n\) matrix whose entries are non-negative and whose columns each sum to \(1\) is known as a Markov matrix (a.k.a. a stochastic matrix). Say
    \[A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \end{bmatrix}\]
    So,
  • \(a_{ij}\geq 0;\quad\forall i,j\in\{1,2,\cdots,n\}\)
  • \(\sum_{i=1}^{n}a_{ij}=1\quad\forall j\in\{1,2,\cdots,n\}\)

  • \(\lambda_1=1\)

    A Markov matrix always has \(1\) as one of its eigenvalues.
    Proof:
    \(A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \end{bmatrix}\)

    \(A-\mathcal{I}=\begin{bmatrix} a_{11}-1 & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-1 \\ \end{bmatrix}\)

    Now we apply the row operation \(R_n \leftarrow R_1 + R_2 + \cdots + R_n\); adding the other rows to \(R_n\) leaves the determinant unchanged:


    \(A-\mathcal{I}\;\rightarrow\;\begin{bmatrix} a_{11}-1 & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{i=1}^{n}a_{i1} -1 & \sum_{i=1}^{n}a_{i2} -1 & \cdots & \sum_{i=1}^{n}a_{in} -1 \\ \end{bmatrix}\)

    And as we know that \(\sum_{i=1}^{n}a_{ij}=1\quad\forall j\in\{1,2,\cdots,n\}\), the last row vanishes:


    \(A-\mathcal{I}\;\rightarrow\;\begin{bmatrix} a_{11}-1 & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \\ \end{bmatrix}\)
    A row of zeros makes the determinant vanish, and the row operation preserved the determinant, so \(\text{det}(A-\mathcal{I})=0\).
    So indeed \(\lambda_1=1\) is an eigenvalue.
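    As a quick numerical sanity check, here is a minimal NumPy sketch with a made-up \(3\times 3\) column-stochastic matrix (the specific entries are arbitrary):

        import numpy as np

        # A hypothetical Markov (column-stochastic) matrix: entries are
        # non-negative and every column sums to 1.
        A = np.array([[0.5, 0.2, 0.3],
                      [0.3, 0.7, 0.1],
                      [0.2, 0.1, 0.6]])

        print(A.sum(axis=0))         # [1. 1. 1.] -- column sums
        print(np.linalg.eigvals(A))  # one eigenvalue is (numerically) 1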

    Example of a Markov Matrix

    Assume that we have two cities \(A\) and \(B\), and at the end of the year we calculate the following:
  • Proportion of people who remained in city \(A\), denoted \(p_{a}\)
  • Proportion of people who remained in city \(B\), denoted \(p_{b}\)
  • Proportion of people who moved from city \(A\) to \(B\), denoted \(p_{ab}\)
  • Proportion of people who moved from city \(B\) to \(A\), denoted \(p_{ba}\)
  • We can represent this information as a Markov matrix (say \(M\)).
    \[\begin{matrix} & \text{From } A & \text{From } B \\ \text{To } A & p_{a} & p_{ba}\\ \text{To } B & p_{ab} & p_{b}\\ \end{matrix}\]
    \[M=\begin{bmatrix} p_{a} & p_{ba}\\ p_{ab} & p_{b}\\ \end{bmatrix}\]
    \(p_a + p_{ab}=1\) (this accounts for everyone who started in city \(A\))
    \(p_b + p_{ba}=1\) (this accounts for everyone who started in city \(B\))
    And the proportions are non-negative and cannot be greater than \(1\).
    So \(M\) is a Markov matrix.

    Now say that at time \(t=k\):
  • Number of people in city \(A\) is \(u_A\)
  • Number of people in city \(B\) is \(u_B\)
  • Then at time \(t=k+1\) we can say,
    \[\begin{bmatrix} u_{A}\\u_{B}\\ \end{bmatrix}_{t=k+1}=\begin{bmatrix} p_{a} & p_{ba}\\ p_{ab} & p_{b}\\ \end{bmatrix}\begin{bmatrix} u_{A}\\u_{B}\\ \end{bmatrix}_{t=k}\]
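    As an illustration, here is one step of this update in NumPy (the proportions below are made-up numbers, assumed purely for illustration):

        import numpy as np

        # Hypothetical migration proportions (assumed for illustration)
        p_a, p_ab = 0.9, 0.1   # p_a + p_ab = 1: everyone who started in A
        p_b, p_ba = 0.8, 0.2   # p_b + p_ba = 1: everyone who started in B

        M = np.array([[p_a, p_ba],
                      [p_ab, p_b]])

        u_k = np.array([1000.0, 1000.0])  # populations of A and B at t = k
        print(M @ u_k)                    # populations at t = k + 1: [1100. 900.]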
    Say that we know \(u_{t=0}\) and we want to find \(u_{t=100}\); then we have to compute \(M^{100}\), which is impractical to compute directly for a large matrix.
    Now say that the eigenvalues of \(M\) are \(\lambda_1\) and \(\lambda_2\) and the eigenvectors of \(M\) are \(\vec{x}_1\) and \(\vec{x}_2\). Expanding \(\vec{u}_{t=0}\) in the eigenvector basis, \(\vec{u}_{t=0}=c_1\vec{x}_1+c_2\vec{x}_2\), and applying \(M\) \(k\) times gives,
    \[ \begin{matrix} \displaystyle \vec{u}_{t=k}=M^k\vec{u}_{t=0}=c_1\lambda_1^k\vec{x}_1 + c_2\lambda_2^k\vec{x}_2 \end{matrix} \]
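    A sketch of this shortcut in NumPy (same made-up \(M\) as above), comparing the eigenvalue route with the direct matrix power:

        import numpy as np

        M = np.array([[0.9, 0.2],
                      [0.1, 0.8]])
        u0 = np.array([1000.0, 1000.0])

        lam, X = np.linalg.eig(M)    # columns of X are eigenvectors x_1, x_2
        c = np.linalg.solve(X, u0)   # expand u0 = c_1 x_1 + c_2 x_2

        k = 100
        u_k = X @ (c * lam**k)       # c_1 lam_1^k x_1 + c_2 lam_2^k x_2

        print(u_k)                                # eigenvalue route
        print(np.linalg.matrix_power(M, k) @ u0)  # direct route, same answer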

    Expansion with an Orthogonal Basis

    Say that we are in an \(n\)-dimensional space and \(\vec{q}_1, \vec{q}_2, \cdots, \vec{q}_n\) are a basis for that space. Here all the \(\vec{q}_i\) are orthonormal: mutually orthogonal and of unit length. We can express any vector in this \(n\)-dimensional space as a linear combination of the basis vectors.
    Assume a vector \(\vec{v}\in\mathbb{R}^n\). We can express \(\vec{v}\) as,
    \(\vec{v}=x_1\vec{q}_1 + x_2\vec{q}_2 + \cdots + x_n\vec{q}_n\)

    OK, we can express \(\vec{v}\) as a linear combination of the basis vectors, but we are interested in the coefficients \(x_1, x_2, \cdots, x_n\).

    How can we compute \(x_1, x_2, \cdots, x_n\)? First let's see how we can get \(x_1\).
    IDEA: All \(\vec{q}_1, \vec{q}_2, \cdots, \vec{q}_n\) are orthonormal, so \(\vec{q}_i^T\vec{q}_j=0;\quad \forall i\neq j\) and \(\vec{q}_i^T\vec{q}_i=1\).
    So take the dot product of both sides with \(\vec{q}_1\), the basis vector that multiplies \(x_1\):
    \(\vec{v}^T\vec{q}_1=x_1\underbrace{\vec{q}_1^T\vec{q}_1}_{=1} + x_2\underbrace{\vec{q}_2^T\vec{q}_1}_{=0} + \cdots + x_n\underbrace{\vec{q}_n^T\vec{q}_1}_{=0}\)
    \(\Rightarrow x_1=\vec{v}^T\vec{q}_1\)

    \[ \begin{matrix} \displaystyle x_i=\vec{v}^T\vec{q}_i;\quad\forall i\in\{1,2,\cdots, n\} \end{matrix} \]
    Say
    \(\begin{bmatrix} \vdots & \vdots & & \vdots \\ \vec{q}_{1} & \vec{q}_{2} & \cdots & \vec{q}_{n} \\ \vdots & \vdots & & \vdots \\ \end{bmatrix}=Q\)
    and
    \(\vec{x}=\begin{bmatrix} x_1\\ \vdots \\x_n\\ \end{bmatrix}\)

    We can also write it in matrix form,
    \[\begin{bmatrix} \vdots & \vdots & & \vdots \\ \vec{q}_{1} & \vec{q}_{2} & \cdots & \vec{q}_{n} \\ \vdots & \vdots & & \vdots \\ \end{bmatrix} \begin{bmatrix} x_1\\ \vdots \\x_n\\ \end{bmatrix} = \vec{v} \]
    \(\Rightarrow Q\vec{x}=\vec{v}\)

    \(\Rightarrow \underbrace{Q^TQ}_{\mathcal{I}}\vec{x}=Q^T\vec{v}\)

    \[ \begin{matrix} \displaystyle \vec{x}=Q^T\vec{v} \end{matrix} \]
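    A minimal NumPy sketch of this expansion; the orthonormal basis comes from the QR factorization of a random matrix, purely as a convenient source of such a \(Q\):

        import numpy as np

        rng = np.random.default_rng(0)

        # QR of a random matrix gives Q with orthonormal columns: Q^T Q = I.
        Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

        v = rng.standard_normal(4)

        x = Q.T @ v                   # coefficients: x_i = q_i^T v
        print(np.allclose(Q @ x, v))  # True: v = x_1 q_1 + ... + x_n q_n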

    Fourier Series

    \[f(x)=a_0 1+a_1\cos(x)+b_1\sin(x)+a_2\cos(2x)+b_2\sin(2x)+\cdots\]
    Here instead of vectors we are using functions, so we are computing linear combinations of functions (instead of vectors). Here our space is \(\infty\)-dimensional.
  • In the vector case the dot product works as follows.
  • Say \(\vec{v}= \begin{bmatrix} v_1\\ \vdots \\v_n\\ \end{bmatrix}\) and \(\vec{u}= \begin{bmatrix} u_1\\ \vdots \\u_n\\ \end{bmatrix}\); then \(\vec{v}\cdot\vec{u}=v_1u_1 + v_2u_2 + \cdots + v_nu_n\)
  • For functions the dot product works as follows.
  • A vector has a finite list of values, like \(\vec{v}= \begin{bmatrix} v_1\\ \vdots \\v_n\\ \end{bmatrix}\). But a function takes values over a whole interval; here we consider \(\cos(x)\) on the interval \([0,2\pi]\). The dot product of two functions \(f\) and \(g\) on the interval \([0,2\pi]\) is defined as,
    \[ \begin{matrix} f\cdot g=\int_{0}^{2\pi}f(x)g(x)dx \end{matrix} \]
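    We can check these inner products numerically, e.g. with the trapezoidal rule (a sketch; np.trapezoid is the NumPy 2 name, older versions call it np.trapz):

        import numpy as np

        x = np.linspace(0.0, 2.0 * np.pi, 10001)

        def inner(f, g):
            # dot product of two functions on [0, 2*pi]
            return np.trapezoid(f(x) * g(x), x)

        print(inner(np.sin, np.cos))                 # ~0  : orthogonal
        print(inner(np.cos, lambda t: np.cos(2*t)))  # ~0  : orthogonal
        print(inner(np.cos, np.cos))                 # ~pi : cos . cos = pi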
    Here our functions are \(1,\cos(x),\sin(x),\cos(2x),\sin(2x),\cdots\)
    \(\cos\) and \(\sin\) are periodic functions with period \(2\pi\), so we restrict to \(0\leq x\leq 2\pi\).

    So we have infinitely many orthogonal basis functions.
    Now we want to find \(a_0,a_1,b_1,a_2,b_2,\cdots\)
    First let's see how to find \(a_1\).
    \(f(x)=a_0 1+a_1\cos(x)+b_1\sin(x)+a_2\cos(2x)+b_2\sin(2x)+\cdots\)

    IDEA: take the dot product of both sides with \(\cos(x)\), the function that multiplies \(a_1\).

    \(f(x)\cdot\cos(x) =a_0 1\cdot\cos(x) +a_1\cos(x)\cdot\cos(x) +b_1\sin(x)\cdot\cos(x) +a_2\cos(2x)\cdot\cos(x) +b_2\sin(2x)\cdot\cos(x) +\cdots\)


    \(\Rightarrow \int_{0}^{2\pi}f(x)\cos(x)dx =\underbrace{\int_{0}^{2\pi}a_0 1\cos(x)dx}_{=0} + \int_{0}^{2\pi}a_1\cos(x)\cos(x)dx + \underbrace{\int_{0}^{2\pi}b_1\sin(x)\cos(x)dx}_{=0} + \underbrace{\int_{0}^{2\pi}a_2\cos(2x)\cos(x)dx}_{=0} + \underbrace{\int_{0}^{2\pi}b_2\sin(2x)\cos(x)dx}_{=0} +\cdots\)

    \(\Rightarrow \int_{0}^{2\pi}f(x)\cos(x)dx =a_1\int_{0}^{2\pi}\cos^2(x)dx\)

    \(\Rightarrow \int_{0}^{2\pi}f(x)\cos(x)dx =a_1\pi \quad\left(\text{since } \int_{0}^{2\pi}\cos^2(x)dx=\pi\right)\)

    So,
    \[ \begin{matrix} a_1=\frac{1}{\pi}\int_{0}^{2\pi}f(x)\cos(x)dx \end{matrix} \]
    Similarly we can find the rest of the \(a_i\)'s and \(b_i\)'s. (Only \(a_0\) differs: since \(\int_{0}^{2\pi}1\cdot 1\,dx=2\pi\), we get \(a_0=\frac{1}{2\pi}\int_{0}^{2\pi}f(x)dx\).)
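    As a numerical check of this formula, take a test function with known coefficients, say \(f(x)=2+3\cos(x)+\sin(2x)\) (a made-up example), and recover \(a_1=3\):

        import numpy as np

        f = lambda t: 2.0 + 3.0 * np.cos(t) + np.sin(2.0 * t)  # known: a_1 = 3

        x = np.linspace(0.0, 2.0 * np.pi, 10001)
        a1 = np.trapezoid(f(x) * np.cos(x), x) / np.pi
        print(a1)  # ~3.0, the coefficient of cos(x) in f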