
    Note
    In this lecture I struggled a lot (I don't have much experience with differential equations), so there will be errors, but as always I am sure you will find something worth considering here.
    If you spot an error, please tell me about it in our discussion forum.

    Matrix Exponential

    Before we dive into differential equations, let's first see what is meant by the matrix exponential.
    We are familiar with the power series of \(e^{x}\).

    \[e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots\]
    Now say we have an \(n\times n\) matrix \(A\) and we want to find \(e^{A}\).

    Then the power series of \(e^A\) is,
    \[ \begin{matrix} \\ \quad \displaystyle e^A=\mathcal{I}_n+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots \quad\\ \\ \end{matrix} \]
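    If you want a numerical sanity check of this series, here is a small Python sketch (using NumPy and SciPy, with an arbitrarily chosen test matrix) that sums the first few terms and compares the result with scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary 2x2 test matrix, chosen only for illustration.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def expm_series(A, terms=30):
    """Approximate e^A by summing the first `terms` terms of its power series."""
    n = A.shape[0]
    result = np.eye(n)   # the I_n term
    term = np.eye(n)     # holds A^k / k!, starting at k = 0
    for k in range(1, terms):
        term = term @ A / k          # A^k/k! = (A^{k-1}/(k-1)!) * A / k
        result = result + term
    return result

print(np.allclose(expm_series(A), expm(A)))   # expected: True
```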
  • \(\displaystyle \frac{d}{dt}e^{tA}=Ae^{tA}\)
  •      Proof:
    \(\frac{d}{dt}e^{tA}=\frac{d}{dt}\left(\mathcal{I}_n+tA+\frac{(tA)^2}{2!}+\frac{(tA)^3}{3!}+\cdots\right)\)

    \(\frac{d}{dt}e^{tA}=\mathbb{0}_n+A+t(A)^2+\frac{t^2(A)^3}{2!}+\frac{t^3(A)^4}{3!}+\cdots\)

    \(\frac{d}{dt}e^{tA}=A\left(\mathcal{I}_n+t(A)+\frac{t^2(A)^2}{2!}+\frac{t^3(A)^3}{3!}+\cdots\right)\)

    \[ \begin{matrix} \\ \quad \displaystyle \frac{d}{dt}e^{tA}=Ae^{tA} \quad\\ \\ \end{matrix}_{\quad✅} \]
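    A quick numerical check of this identity, comparing a central finite difference of \(e^{tA}\) in \(t\) against \(Ae^{tA}\) (the test matrix and the values of \(t\) and \(h\) are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary test matrix
t, h = 0.7, 1e-6

# Central finite difference approximation of d/dt e^{tA} ...
numeric = (expm((t + h) * A) - expm((t - h) * A)) / (2 * h)
# ... compared against the exact expression A e^{tA}.
exact = A @ expm(t * A)

print(np.allclose(numeric, exact, atol=1e-6))   # expected: True
```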

  • \(\displaystyle e^{tA}\vec{x}=e^{\lambda t}\vec{x}\)
  •      Proof:
    Here
    \(\lambda\)
    is an eigenvalue of
    \(A\)
    with corresponding eigenvector
    \(\vec{x}\)

    \(e^A=\mathcal{I}_n+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots\)

    \(\Rightarrow e^{tA}=\mathcal{I}_n+tA+\frac{t^2A^2}{2!}+\frac{t^3A^3}{3!}+\cdots\)

    \(\Rightarrow e^{tA}\vec{x}=\mathcal{I}_n\vec{x}+tA\vec{x}+\frac{t^2A^2}{2!}\vec{x}+\frac{t^3A^3}{3!}\vec{x}+\cdots\)

    We discussed that
    \(A\vec{x}=\lambda\vec{x}\)

    And we also discussed that
    \(A^k\vec{x}=\lambda^k\vec{x}\)
    so,
    \(\Rightarrow e^{tA}\vec{x}=\mathcal{I}_n\vec{x}+t\lambda\vec{x}+\frac{t^2\lambda^2}{2!}\vec{x}+\frac{t^3\lambda^3}{3!}\vec{x}+\cdots\)
    so,
    \[ \begin{matrix} \\ \quad \displaystyle e^{tA}\vec{x}=e^{\lambda t}\vec{x} \quad\\ \\ \end{matrix}_{\quad✅} \]
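    We can also verify this property numerically. The sketch below uses the matrix from the example further down (an arbitrary choice here) and checks \(e^{tA}\vec{x}=e^{\lambda t}\vec{x}\) for each eigenpair returned by numpy.linalg.eig:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])    # the matrix used in the example below
lams, X = np.linalg.eig(A)     # eigenvalues, and eigenvectors as columns of X
t = 0.5

for lam, x in zip(lams, X.T):
    # e^{tA} x should equal e^{lambda t} x for every eigenpair.
    print(np.allclose(expm(t * A) @ x, np.exp(lam * t) * x))   # expected: True, True
```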

  • \(\displaystyle \frac{d}{dt}e^{\lambda t}\vec{x}=Ae^{\lambda t}\vec{x}\)
  •      Proof:
    \(\frac{d}{dt}e^{\lambda t}\vec{x}=\lambda e^{\lambda t}\vec{x}\)

    \(\Rightarrow \frac{d}{dt}e^{\lambda t}\vec{x}=e^{\lambda t}\underbrace{\lambda\vec{x}}_{=A\vec{x}}\)

    \[ \begin{matrix} \\ \quad \displaystyle \frac{d}{dt}e^{\lambda t}\vec{x}=Ae^{\lambda t}\vec{x} \quad\\ \\ \end{matrix}_{\quad✅} \]


    Differential Equation

    We have a vector
    \(\vec{u}= \begin{bmatrix} u_1\\u_2\\\vdots\\u_n \end{bmatrix}\in\mathbb{R}^n\)
    and an \(n\times n\) matrix \(A\), with initial condition \(\vec{u}(t_0)=\vec{u}_0\).

    And we want to solve the differential equation,
    \[\frac{d \vec{u}}{d t}=A\vec{u}\]
    Recall that for \(u,\alpha\in\mathbb{R}\), the solution of \(\frac{d u}{d t}=\alpha u\) is,
    \[u=Ce^{\alpha t}\]

    Say that the eigenvectors of \(A\) are \(\vec{x}_1,\vec{x}_2,\cdots, \vec{x}_n\),
    and the corresponding eigenvalues of \(A\) are \(\lambda_1,\lambda_2 ,\cdots, \lambda_n\)
    (assuming \(A\) has \(n\) independent eigenvectors).
    Then the solution for \(\vec{u}\) is,
    \[ \begin{matrix} \\ \quad \displaystyle \vec{u}(t) = C_1e^{\lambda_1 t}\vec{x}_1 + C_2e^{\lambda_2 t}\vec{x}_2 + \cdots + C_ne^{\lambda_n t}\vec{x}_n \quad\\ \\ \end{matrix} \]

    Example:
    Say \(\vec{u}= \begin{bmatrix} u_1\\u_2\\ \end{bmatrix}\in\mathbb{R}^2\) and \(\vec{u}(0)= \begin{bmatrix} 1\\0\\ \end{bmatrix}\)

    And we are given,
    \(\frac{du_1}{dt}=-u_1+2u_2\)
    and
    \(\frac{du_2}{dt}=u_1-2u_2\)
    so,
    \(A=\begin{bmatrix} -1 & 2\\ 1 & -2\\ \end{bmatrix}\in\mathbb{R}^{2\times 2}\)

    Now we want to solve the differential equation,
    \[\frac{d \vec{u}}{d t}=A\vec{u}\]
    First of all, find its eigenvalues and eigenvectors
    (we have already discussed how to find them).
    So our eigenvalues are
    \(\lambda_1=0\)
    and
    \(\lambda_2=-3\)

    And our eigenvectors are
    \(\vec{x}_1= \begin{bmatrix} 2\\1\\ \end{bmatrix},\quad\)
    \(\vec{x}_2= \begin{bmatrix} 1\\-1\\ \end{bmatrix}\)

    Now that we have our eigenvalues and eigenvectors, the solution is,
    \(\vec{u}(t) = c_1e^{\lambda_1 t}\vec{x}_1 + c_2e^{\lambda_2 t}\vec{x}_2\)

    \(\vec{u}(t) = c_1e^{0 t}\begin{bmatrix} 2\\1\\ \end{bmatrix} + c_2e^{-3 t}\begin{bmatrix} 1\\-1\\ \end{bmatrix}\)

    \(\displaystyle \vec{u}(t) = \underbrace{c_1\begin{bmatrix} 2\\1\\ \end{bmatrix}}_{\text{steady state}} + \underbrace{c_2e^{-3 t}\begin{bmatrix} 1\\-1\\ \end{bmatrix}}_{\xrightarrow [t\to \infty ]{} \vec{0}}\)


    So here we can see that our eigenvalues control the state.
  • For
    \(\lambda_1=0\)
    the state becomes steady; it does not depend on time.
  • For
    \(\lambda_2=-3\)
    the corresponding part of the solution vanishes as
    \(t\)
    increases.
  • As \(t\to\infty\), the part of the solution associated with \(\lambda_2\) vanishes and the part associated with \(\lambda_1\) remains steady.

  • We know that
    \(\vec{u}(0)= \begin{bmatrix} 1\\0\\ \end{bmatrix}\)
    so,
    \(\vec{u}(0)= c_1\begin{bmatrix} 2\\1\\ \end{bmatrix} + c_2\begin{bmatrix} 1\\-1\\ \end{bmatrix} =\begin{bmatrix} 1\\0\\ \end{bmatrix}\)

    By solving we get
    \(c_1=c_2=\frac{1}{3}\)

    So the solution is,
    \[ \begin{matrix} \\ \quad \displaystyle \vec{u}(t) = \underbrace{\frac{1}{3}\begin{bmatrix} 2\\1\\ \end{bmatrix}}_{\text{steady state}} + \underbrace{\frac{1}{3}e^{-3 t}\begin{bmatrix} 1\\-1\\ \end{bmatrix}}_{\xrightarrow [t\to \infty ]{} \vec{0}} \quad\\ \\ \end{matrix} \]
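    As a sanity check, we can compare this hand-derived solution with \(e^{tA}\vec{u}(0)\) computed by scipy.linalg.expm (the chosen time points are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])
u0 = np.array([1.0, 0.0])

def u_closed_form(t):
    # The hand-derived solution: (1/3)[2,1] + (1/3) e^{-3t} [1,-1]
    return (1/3) * np.array([2.0, 1.0]) + (1/3) * np.exp(-3 * t) * np.array([1.0, -1.0])

for t in [0.0, 0.5, 2.0, 10.0]:
    print(np.allclose(expm(t * A) @ u0, u_closed_form(t)))   # expected: True for every t
```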



    If
    \(A\)
    is Diagonalizable

    If
    \(A\)
    is diagonalizable then,
    \[ \begin{matrix} \\ \quad \displaystyle A=S\Lambda S^{-1} \quad\\ \\ \end{matrix} \]
    where the columns of the matrix \(S\) are the eigenvectors of \(A\), and \(\Lambda\) is the diagonal matrix whose diagonal entries are the eigenvalues \(\lambda_1,\lambda_2,\cdots,\lambda_n\).
    \[S=\begin{bmatrix} \vdots & \vdots & \cdots & \vdots \\ \vec{x}_{1} & \vec{x}_{2} & \cdots & \vec{x}_{n} \\ \vdots & \vdots & \cdots & \vdots \\ \end{bmatrix}\]
    (we discussed it HERE) So,

  • \(\displaystyle e^{At}=Se^{(\Lambda t)}S^{-1}\)
  • Proof:
    \(e^A=\mathcal{I}_n+A+\frac{A^2}{2!}+\frac{A^3}{3!}+\cdots\)

    \(\Rightarrow e^{At}=\mathcal{I}_n+At+\frac{t^2A^2}{2!}+\frac{t^3A^3}{3!}+\cdots\)

    As we know that
    \(A=S\Lambda S^{-1}\)

    \(\Rightarrow e^{At}=\mathcal{I}_n+S\Lambda S^{-1}t+\frac{t^2(S\Lambda S^{-1})^2}{2!}+\frac{t^3(S\Lambda S^{-1})^3}{3!}+\cdots\)
    As we discussed that
    \(A^k=(S\Lambda S^{-1})^k=S\Lambda^k S^{-1}\)

    \(\Rightarrow e^{At}=\mathcal{I}_n+S\Lambda S^{-1}t+\frac{t^2 S\Lambda^2 S^{-1}}{2!}+\frac{t^3 S\Lambda^3 S^{-1}}{3!}+\cdots\)
    \(\Rightarrow e^{At}=S\left(\mathcal{I}_n+\Lambda t+\frac{t^2 \Lambda^2}{2!}+\frac{t^3 \Lambda^3}{3!}+\cdots\right)S^{-1}\)
    (here we used \(\mathcal{I}_n=SS^{-1}\) to pull \(S\) and \(S^{-1}\) out of every term)
    So,
    \[ \begin{matrix} \\ \quad \displaystyle e^{At}=S e^{(\Lambda t)} S^{-1} \quad\\ \\ \end{matrix}_{\quad✅} \]
    And remember, this holds only if \(A\) is diagonalizable.
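    A quick numerical check of this factorization, building \(S\) and \(\Lambda\) with numpy.linalg.eig (using the diagonalizable matrix from the earlier example and an arbitrary value of \(t\)):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])   # a diagonalizable matrix
t = 1.3

lams, S = np.linalg.eig(A)              # columns of S are eigenvectors of A
e_Lambda_t = np.diag(np.exp(lams * t))  # e^{Lambda t} is diagonal with entries e^{lambda_i t}

print(np.allclose(expm(t * A), S @ e_Lambda_t @ np.linalg.inv(S)))   # expected: True
```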

  • \(\displaystyle \vec{u}(t)=Se^{\Lambda t}S^{-1}\vec{u}(0)\)
  • Proof:
    We know that
    \(\vec{u}(t)=ce^{\lambda t}\vec{x}\)
    where
    \(c\in\mathbb{R}\)
    is a constant,
    \(\lambda\)
    is our eigenvalue and
    \(\vec{x}\)
    is our eigenvector.
    And as we discussed above
    \(\displaystyle e^{\lambda t}\vec{x}=e^{At}\vec{x}\)
    so,
    \(\vec{u}(t)=ce^{At}\vec{x}\)

    We also discussed that
    \(\displaystyle e^{At}=Se^{(\Lambda t)}S^{-1}\)
    so,
    \(\vec{u}(t)=cSe^{(\Lambda t)}S^{-1}\vec{x}\)

    And we can get the value of \(c\) from the initial condition: setting \(t=0\) gives \(\vec{u}(0)=c\,\vec{x}\) (since \(e^{A\cdot 0}=\mathcal{I}_n\)), so,
    \[ \begin{matrix} \\ \quad \displaystyle \vec{u}(t)=Se^{(\Lambda t)}S^{-1}\vec{u}(0) \quad\\ \\ \end{matrix}_{\quad\quad✅} \]
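    To see this formula in action, the sketch below integrates \(\frac{d\vec{u}}{dt}=A\vec{u}\) numerically with scipy.integrate.solve_ivp and compares the endpoint with \(Se^{(\Lambda t)}S^{-1}\vec{u}(0)\); the matrix, initial vector, and end time are arbitrary choices for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])
u0 = np.array([1.0, 0.0])
T = 3.0

# Integrate du/dt = A u numerically from t = 0 to t = T ...
sol = solve_ivp(lambda t, u: A @ u, (0.0, T), u0, rtol=1e-10, atol=1e-12)

# ... and compare the endpoint with the closed form u(T) = S e^{Lambda T} S^{-1} u(0).
lams, S = np.linalg.eig(A)
u_closed = S @ np.diag(np.exp(lams * T)) @ np.linalg.inv(S) @ u0

print(np.allclose(sol.y[:, -1], u_closed, atol=1e-8))   # expected: True
```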

    States

    \[\vec{u}(t) = C_1e^{\lambda_1 t}\vec{x}_1 + C_2e^{\lambda_2 t}\vec{x}_2 + \cdots + C_ne^{\lambda_n t}\vec{x}_n\]
    \(1.\)
    Stable state
    If \(\vec{u}(t)\xrightarrow [t\to \infty ]{} \vec{0}\) then we say we have a stable state.
    We get this if all the eigenvalues are negative:
    \(\lambda_1\lt 0, \lambda_2\lt 0,\cdots ,\lambda_n\lt 0\)

    If the eigenvalues are complex, say \(\lambda=a+ib;\quad a,b\in\mathbb{R}\), then \(|e^{(a+ib)t}|=e^{at}\), so the imaginary part does not matter here; only the real part \(a\) needs to be negative.
    \(2.\)
    Steady state
    If \(\vec{u}(t)\xrightarrow [t\to \infty ]{} \vec{c}\) then we have a steady state.
    Here \(\vec{c}\) is a constant vector that does not depend on \(t\).
    If at least one of the eigenvalues is \(0\) and the rest of the eigenvalues are negative, then we have a steady state:
    \(\lambda_1 = 0, \lambda_2 = 0,\cdots ,\lambda_k = 0;\quad 1 \leq k \leq n\)

    \(\lambda_{k+1} \lt 0, \lambda_{k+2} \lt 0,\cdots ,\lambda_n \lt 0\)

    \(3.\)
    Blow-up
    If \(\vec{u}(t)\xrightarrow [t\to \infty ]{} \vec{\infty}\) then we have a blow-up.
    Here \(\vec{\infty}\) means that the elements of \(\vec{u}(t)\) grow without bound as \(t\) increases.

    If at least one of the eigenvalues is positive then we have a blow-up.
    So we can say that for a blow-up,
    \(\exists \lambda_k \gt 0 ;\quad k\in\{1,2,3,\cdots,n\}\)
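    Putting the three cases together, here is a small sketch that classifies \(\frac{d\vec{u}}{dt}=A\vec{u}\) from the real parts of the eigenvalues of \(A\); the function name and tolerance are my own choices, and it ignores edge cases such as repeated zero eigenvalues without enough independent eigenvectors:

```python
import numpy as np

def classify_state(A, tol=1e-12):
    """Classify du/dt = A u by the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(A).real
    if np.any(re > tol):
        return "blow-up"   # at least one positive eigenvalue
    if np.all(re < -tol):
        return "stable"    # all eigenvalues negative, so u(t) -> 0
    return "steady"        # some zero eigenvalues, the rest negative

print(classify_state(np.array([[-1.0, 2.0], [1.0, -2.0]])))   # steady (the example above)
print(classify_state(np.array([[-1.0, 0.0], [0.0, -2.0]])))   # stable
print(classify_state(np.array([[1.0, 0.0], [0.0, -2.0]])))    # blow-up
```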