
Unforced LTI differential equations

Linear Differential Equations

In this course, we will mostly be interested in linear differential equations. A linear differential equation of order $n$ is of the form

\begin{eqnarray}
a_n(t)\frac{d^n y}{dt^n } + a_{n-1}(t)\frac{d^{n-1}y}{dt^{n-1}} + \ldots + a_1(t)\frac{dy}{dt } + a_0(t)y = f(t)    \qquad \qquad   (1)
\end{eqnarray}

with the initial conditions $y(t_0)$, $y'(t_0)$, $y^{(2)}(t_0)$, …, $y^{(n-1)}(t_0)$ given. Most of the time $t_0$ is chosen as $0$. The $a_i(t)$’s are called the “coefficients” and $f(t)$ is called the “forcing term”.

A more economical way of writing the same $n$th order differential equation is

\begin{eqnarray}
a_n(t) D^n y + a_{n-1}(t) D^{n-1}y  + \ldots + a_1(t) Dy  + a_0(t)y = f(t)
\end{eqnarray}

A still more economical way is

\begin{eqnarray}
\sum_{k=0}^n a_k(t) D^k y = f(t)
\end{eqnarray}

EXAMPLES

A third order linear equation is

\begin{eqnarray}
t^2 D^3 y + (t-1) D^2y +  \cos(t) Dy + 2y = U(t)
\end{eqnarray}

Here $a_3(t) = t^2$, $a_2(t) = t-1$, $a_1(t) = \cos(t)$, $a_0(t) = 2$, $f(t) = U(t)$.

A second order linear equation is

\begin{eqnarray}
(t+2)(t-2) D^2y - \cos(t)y = tU(t-4)
\end{eqnarray}

Here $a_2(t) = (t+2)(t-2)$, $a_1(t) = 0$, $a_0(t) = -\cos(t)$, $f(t) = tU(t-4)$.

None of the following differential equations

\begin{eqnarray}
4 (D^2y)^2 - 2y = t^2 \\
\sqrt{D^3 y} + y = 4 \\
\sin(4t) (D^2y)^2 – 2y = 0\\
(D^3y) + (2+y) Dy + y = 4\\
t Dy D^2y + (t+2) Dy = 0 \\
Dy + y^2 = U(t)
\end{eqnarray}

are linear, as none of them are in the form given above.

What is Linearity?

Why do we call differential equations of the form (1) “linear” and the rest “nonlinear”? To understand this, we have to revisit the definition of linearity, which you should be familiar with from your linear algebra courses. But before that, we have to define the superposition (or, linear combination) of two functions:

Superposition (or, Linear Combination) of two functions.

Given two functions $x_1(t)$ and $x_2(t)$, their superposition is defined as $\alpha_1 x_1(t) + \alpha_2 x_2(t)$, where $\alpha_1, \alpha_2 \in \mathbb{R}$. This is the same concept which you have seen in your linear algebra classes, applied to functions instead of vectors.

One can form an infinite number of superpositions out of two functions. For example, take $x_1(t) = \sin(t)$ and $x_2(t) = tU(t)$. Then $3\sin(t)+5tU(t)$ is a superposition. $7.4\sin(t)-3.47tU(t)$ is another superposition. $-117tU(t)$ is a third one (in this case $\alpha_1=0$).

The two terms “superposition” and “linear combination” have the same meaning and can be used interchangeably.

Transporting a concept from linear algebra to differential equations is not just a random occurrence. In a deeper sense, functions and vectors are the same thing. Hence the concept of superposition works in both cases. We will understand more of this as the lecture progresses.

The definition of superposition can be enlarged to an arbitrary number of functions. If we have $N$ functions $x_1(t)$, $x_2(t)$, …, $x_N(t)$ and $N$ numbers $\alpha_1$, $\alpha_2$, …, $\alpha_N$, then

\begin{eqnarray}
u(t) = \sum_{i=1}^N \alpha_i x_i(t)
\end{eqnarray}

is a linear combination. $N$ can be chosen as infinity.
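
In code, a superposition can be formed directly. The sketch below is purely illustrative; the helper `superpose` and the modeling of the unit step $U(t)$ by `(t >= 0)` are our own conventions, not notation from these notes:

```python
# A minimal sketch: forming superpositions (linear combinations) of functions.
import numpy as np

def superpose(coeffs, funcs):
    """Return the function t -> sum_i coeffs[i] * funcs[i](t)."""
    return lambda t: sum(a * f(t) for a, f in zip(coeffs, funcs))

x1 = np.sin                          # x1(t) = sin(t)
x2 = lambda t: t * (t >= 0)          # x2(t) = t*U(t); the unit step is modeled by (t >= 0)

u = superpose([3.0, 5.0], [x1, x2])  # u(t) = 3 sin(t) + 5 t U(t)
print(u(np.array([-1.0, 0.0, 1.0, 2.0])))
```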

Linear Systems

In the context of differential equations, linearity is defined with regard to the forcing term. Consider the differential equation (1).

  • Let us solve this differential equation with a forcing term $f_1(t)$ and obtain a solution $y_1(t)$.
  • Let us solve this differential equation with a forcing term $f_2(t)$ and obtain a solution $y_2(t)$.

Note that in (1) we only change the forcing term; we don't touch the rest of the equation (i.e., the order and the coefficients remain the same). Naturally, when the forcing term changes, the solution $y(t)$ also changes.

If we apply a specific superposition of the forcing terms $f_1(t)$ and $f_2(t)$, i.e., $\alpha_1 f_1(t)+\alpha_2 f_2(t)$, to the system (assuming zero initial conditions), the output will be the same superposition of $y_1(t)$ and $y_2(t)$, namely $\alpha_1 y_1(t)+\alpha_2 y_2(t)$. This is called the superposition property.

Example:  Consider a linear differential equation. If a forcing term $f_1(t) = e^{-t}U(t)$ is used, the solution is found to be $y_1(t)=t^2 e^{-2t}$. If a forcing term $f_2(t) = \sin(t)U(t)$ is used, the solution is found to be $y_2(t)=t^3 \cos(t)$.  If the forcing is $f(t) = \sqrt{2}e^{-t}U(t)-\sqrt{3}\sin(t)U(t)$, what is the solution?

Answer: $y(t) = \sqrt{2}t^2 e^{-2t}-\sqrt{3}t^3 \cos(t)$
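
This property is easy to check numerically. The sketch below does not use the example above (whose solutions were simply given); instead it assumes a concrete first-order equation $Dy + 2y = f(t)$ with zero initial condition and compares the response to $\alpha_1 f_1 + \alpha_2 f_2$ with $\alpha_1 y_1 + \alpha_2 y_2$:

```python
# Numerical check of the superposition property for Dy + 2y = f(t), y(0) = 0.
# The equation, the forcings, and the coefficients are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def response(f):
    sol = solve_ivp(lambda t, y: f(t) - 2.0 * y, (0.0, 5.0), [0.0],
                    dense_output=True, rtol=1e-9, atol=1e-12)
    return sol.sol

f1 = lambda t: np.exp(-t)
f2 = lambda t: np.sin(t)
a1, a2 = np.sqrt(2.0), -np.sqrt(3.0)

y1, y2 = response(f1), response(f2)
y12 = response(lambda t: a1 * f1(t) + a2 * f2(t))

t = np.linspace(0.0, 5.0, 11)
print(np.max(np.abs(y12(t) - (a1 * y1(t) + a2 * y2(t)))))  # ~0, up to solver tolerance
```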

Time invariant and time varying linear differential equations.

If all the coefficients of a linear differential equation are independent of time (i.e., constants), that equation is called “linear time invariant”. Otherwise, it is called “linear time varying”.

A time-invariant linear differential equation has the following structure

\begin{eqnarray}
a_n\frac{d^n y}{dt^n } + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \ldots + a_1\frac{dy}{dt } + a_0y = f(t)
\end{eqnarray}

where all the coefficients $a_n$, $a_{n-1}$, …, $a_1$, $a_0$ are constants. Note that the forcing term can still be time dependent.

The differential equations

\begin{eqnarray}
7 D^3 y -3 D^2y +  5 Dy + 2y = U(t) \\
D^2y +  4 Dy + 4y = \sin(t) \\
\end{eqnarray}

are linear time invariant. The equations

\begin{eqnarray}
7 D^3 y -3 D^2y +  5 Dy + 2ty = U(t) \\
D^2y +  4 U(t) Dy + 4y = \sin(t) \\
\end{eqnarray}

are linear time varying.

The property of being linear time invariant is so important that it is usually abbreviated as “LTI”. Hence, rather than saying “linear time invariant differential equation”, we usually say “LTI differential equation”.

Factorization of LTI Differential equations

Consider the LTI differential equation:

\begin{eqnarray}
a_n D^n y + a_{n-1} D^{n-1}y  + \ldots +a_2D^2y+ a_1 Dy  + a_0 y = f(t)
\end{eqnarray}

We first redefine $a_k$ as $a_k \triangleq \frac{a_k}{a_n}$ and $f(t) \triangleq \frac{f(t)}{a_n}$ (assuming $a_n \neq 0$). Doing this allows us to take $a_n = 1$ and transforms the most general LTI differential equation into

\begin{eqnarray}
D^n y + a_{n-1} D^{n-1}y  + \ldots +a_2D^2y+ a_1 Dy  + a_0 y = f(t)
\end{eqnarray}

We can write this equation as

\begin{eqnarray}
\left( D^n  + a_{n-1} D^{n-1}  + \ldots +a_2D^2+ a_1 D  + a_0 \right) y = f(t)
\end{eqnarray}

We may consider the part in parentheses as a polynomial in a variable $x$ (with $D$ replaced by $x$)

\begin{eqnarray}
\left(x^n  + a_{n-1}x^{n-1}  + \ldots + a_1 x   + a_0 \right)
\end{eqnarray}

and factor it over its roots. If the above polynomial has $k$ distinct roots $\alpha_1, \ldots, \alpha_k$ and the root $\alpha_j$ has multiplicity $m_j$, this factorization will take the form:

\begin{eqnarray}
\left(x^n  + a_{n-1}x^{n-1}  + \ldots + a_1 x   + a_0 \right) = (x-\alpha_1)^{m_1} \ldots (x-\alpha_{k-1})^{m_{k-1}}(x-\alpha_k)^{m_k} = \prod_{j=1}^k (x-\alpha_j)^{m_j}
\end{eqnarray}

with the restriction $m_1+\ldots+m_k=n$. This means that the differential equation can be written in factored form as:

\begin{eqnarray}
(D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k} y = f(t)
\end{eqnarray}
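
This factorization step can be carried out symbolically. A minimal sketch with SymPy, using a polynomial that also appears in an example later in these notes:

```python
# Factorizing the characteristic polynomial of an LTI differential equation.
import sympy as sp

x = sp.symbols('x')
p = x**3 + 3*x**2 + 4*x + 2      # stands for D^3 + 3D^2 + 4D + 2

print(sp.factor(p))              # (x + 1)*(x**2 + 2*x + 2): factorization over the reals
print(sp.roots(p))               # {-1: 1, -1 - I: 1, -1 + I: 1}: roots with multiplicities
```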

Forced and unforced linear differential equations.

If the forcing term is $0$, we call the linear differential equation “unforced”. Otherwise, it is forced. The following equations are forced:

\begin{eqnarray}
7 D^3 y -3t D^2y +  5 \sin(t) Dy + 2y = U(t) \\
D^2y +  4 t^2 Dy + 4e^{2t}y = \sin(4t) \\
\end{eqnarray}

with the forcing terms $U(t)$ and $\sin(4t)$, respectively. Their unforced versions are

\begin{eqnarray}
7 D^3 y -3t D^2y +  5 \sin(t) Dy + 2y = 0 \\
D^2y +  4 t^2 Dy + 4e^{2t}y = 0 \\
\end{eqnarray}

 

Solution of the 1st order unforced LTI differential equations

Here we will solve the 1st order unforced LTI equations

\begin{eqnarray}
\frac{dy}{dt} + ay = 0
\end{eqnarray}

With the initial condition

\begin{eqnarray}
y(t_0) = y_0
\end{eqnarray}

in two different ways.

METHOD 1: We can collect $y$ and $t$ terms to the left and right of the equation:

\begin{eqnarray}
\frac{dy}{y} = -a dt
\end{eqnarray}

Let us integrate both sides

\begin{eqnarray}
\int_{y_0}^{y(t)} \frac{dy}{y} = - \int_{t_0}^t a dt\\
\ln(y(t)) - \ln(y_0) = -a(t-t_0)\\
\ln \left( \frac{y(t)}{y_0} \right) = -a(t-t_0)\\
y(t) = y_0 e^{-a(t-t_0)}
\end{eqnarray}

METHOD 2: We multiply both sides of the differential equation by $e^{at}$, which is known as the “integrating factor”:

\begin{eqnarray}
e^{at}(Dy(t) + ay(t)) = e^{at}\cdot 0 = 0
\end{eqnarray}

An integrating factor is a function which, when multiplied by the left-hand side of a differential equation, turns it into an exact derivative:

\begin{eqnarray}
D (e^{at}y(t)) =  0
\end{eqnarray}

Now we can integrate both sides from $t_0$ to $t$:

\begin{eqnarray}
\int_{t_0}^t D (e^{at}y(t)) dt =  \int_{t_0}^t 0 dt \\
e^{at}y(t) - e^{at_0}y(t_0) = 0 \\
e^{at}y(t) - e^{at_0}y_0 = 0
\end{eqnarray}

Rearranging the terms, we arrive at the solution

\begin{eqnarray}
y(t) = y_0 e^{-a(t-t_0)}
\end{eqnarray}

The first method is simpler, but the second method (i.e., the method with integrating factors) is more general, as it is applicable to higher-order equations. Hence we will use it exclusively.
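
Either way, the result is easy to cross-check symbolically; a minimal sketch with SymPy:

```python
# Check that y(t) = y0 * exp(-a*(t - t0)) satisfies dy/dt + a*y = 0 and y(t0) = y0.
import sympy as sp

t, a, y0, t0 = sp.symbols('t a y0 t0')
y = y0 * sp.exp(-a * (t - t0))

print(sp.simplify(sp.diff(y, t) + a * y))   # 0: the equation is satisfied
print(sp.simplify(y.subs(t, t0)))           # y0: the initial condition is satisfied
```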

Solution of the 2nd order unforced LTI differential equations

After solving the first order unforced LTI equations, we will solve the second order unforced LTI equations. Such equations have the form

\begin{eqnarray}
D^2 y + a_1 Dy + a_0 y = (D-\alpha_1)(D-\alpha_2)y = 0
\end{eqnarray}
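
Such an equation can be solved by applying the integrating-factor idea twice: first for $\bar{y} = (D-\alpha_2)y$, then for $y$. This is exactly the strategy used for the general case in the next section. As a concrete illustration (an example of our own choosing, $\alpha_1=-1$, $\alpha_2=-3$, i.e. $D^2y + 4Dy + 3y = 0$), SymPy returns the expected combination of exponentials:

```python
# Solving (D + 1)(D + 3) y = D^2 y + 4 D y + 3 y = 0 symbolically.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) + 4 * y(t).diff(t) + 3 * y(t), 0)
print(sp.dsolve(ode, y(t)))      # y(t) = C1*exp(-3*t) + C2*exp(-t)
```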

Solution of the nth order unforced LTI differential equations

Before constructing the solution to the general $n$th order unforced LTI equations, we need to prove a little theorem which will aid us in this construction.

Theorem 1: Consider the function

\begin{eqnarray}
\left(C_1+C_2t+C_3 t^2 + \ldots + C_n t^{n-1}\right) e^{at} =  \sum_{j=1}^n  C_j t^{j-1} e^{a t}
\end{eqnarray}

where $C_1, \ldots, C_n$ are arbitrary constants and $a \neq 0$. If we integrate this function, its form will not change; only the constants $C_1, \ldots, C_n$ will change. In other words:

\begin{eqnarray}
\int \left(\sum_{j=1}^n  C_j t^{j-1} e^{a t} \right) dt = \sum_{j=1}^n  Q_j t^{j-1} e^{a t} + Q_{n+1}
\end{eqnarray}

In words, if a polynomial of degree $n-1$ multiplied by an exponential is integrated, the result is a polynomial of the same degree (with different coefficients) multiplied by the same exponential, plus a constant of integration.

Some examples:

\begin{eqnarray}
\int 9 e^{-7t}dt &=& -\frac{9}{7}e^{-7t}+K\\
\int \left(-3+2t  \right) e^{-t}dt &=&  \left(1 -2t \right) e^{-t} + K\\
\int \left(1-2t+ t^2  \right) e^{2t}dt &=&  \left(\frac{5}{4} -\frac{6}{4}t+\frac{2}{4} t^2 \right) e^{2t} + K\\
\int \left(2+3t+ t^2 -3 t^3 \right) e^{-4t}dt &=&  \left(-\frac{83}{128} -\frac{76}{128}t+\frac{40}{128} t^2 +\frac{96}{128} t^3 \right) e^{-4t} + K
\end{eqnarray}

The proof is very simple (just take the derivative of both sides), and we will skip it in the interests of time.
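
The examples above can also be reproduced symbolically; here is a sketch checking the third one:

```python
# Check the third example: the integral of (1 - 2t + t^2) e^{2t} dt.
import sympy as sp

t = sp.symbols('t')
computed = sp.integrate((1 - 2*t + t**2) * sp.exp(2*t), t)
claimed = (sp.Rational(5, 4) - sp.Rational(6, 4)*t + sp.Rational(2, 4)*t**2) * sp.exp(2*t)

print(sp.simplify(computed - claimed))   # 0: the two antiderivatives agree
```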

Theorem 2: Consider a linear differential equation of order $n$,

\begin{eqnarray}
(D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k}y =0 \qquad \qquad (2)
\end{eqnarray}

where the roots $\alpha_i$ may be real or complex and $m_1+\ldots+m_k=n$. Then the solution will be of the form

\begin{eqnarray}
y &=& \left( C_1^1 + C_2^1t+ \ldots + C_{m_1}^1 t^{m_1-1}\right)e^{\alpha_1 t}+\ldots+ \left( C_1^k + C_2^k t+ \ldots + C_{m_k}^k  t^{m_k-1}\right) e^{\alpha_k t} \\
&=& \sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}
\end{eqnarray}

where $C^i_j$ denotes the $j$th arbitrary constant belonging to the $i$th root. As usual, these arbitrary constants are determined by the initial conditions.

Proof 2: The proof is by induction on the order. The base case of order one was solved above; we assume the result holds for order $n$ and prove it for order $n+1$.

A differential equation of order $n+1$ can be obtained from a differential equation of order $n$ in two ways.

  • CASE 1:  Increase the multiplicity of one of the existing roots by one. We will increase the multiplicity of the last root $\alpha_k$ by one, without losing any generality. Hence the equation (2) becomes
    \begin{eqnarray}
    (D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k+1}y =0 \qquad \qquad (3)
    \end{eqnarray}
  • CASE 2:  Add a new root $\alpha_{k+1}$. Then the equation (2) becomes
    \begin{eqnarray}
    (D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k}(D-\alpha_{k+1})y =0  \qquad \qquad (4)
    \end{eqnarray}

In both cases, we will show that the solution still has the form given in the theorem. We will analyze each case separately.

Case 1

Define $\bar{y} = (D-\alpha_k)y$.

\begin{eqnarray}
\bar{y} = (D-\alpha_k)y      \qquad \qquad (5)
\end{eqnarray}

Then (3) can be written as

\begin{eqnarray}
(D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k}\bar{y} =0      \qquad \qquad (6)
\end{eqnarray}

But (6) is an $n$th order equation and we know how to solve it. Its solution is

\begin{eqnarray}
\bar{y} &=& \sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}     \qquad \qquad (7)
\end{eqnarray}

If we substitute (7) back into (5) we get

\begin{eqnarray}
(D-\alpha_k)y = \bar{y} =  \sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}    \qquad \qquad (8)
\end{eqnarray}

If we multiply both sides by the integrating factor $e^{-\alpha_k t}$ we get

\begin{eqnarray}
e^{-\alpha_k t}(D-\alpha_k)y &=& \left(\sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}\right) e^{-\alpha_k t}   \qquad \qquad (9)   \\
D\left(e^{-\alpha_k t}y\right) &=& \sum_{i=1}^{k-1}  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{(\alpha_i -\alpha_k)t}+ \sum_{j=1}^{m_k}  C_j^k t^{j-1} \qquad \qquad (10)
\end{eqnarray}

Integrate both sides.

\begin{eqnarray}
\int D \left(e^{-\alpha_k t}y \right) dt &=& \sum_{i=1}^{k-1} \int  \sum_{j=1}^{m_i}  \left( C_j^i t^{j-1} e^{(\alpha_i -\alpha_k)t} \right)dt +  \sum_{j=1}^{m_k}  \int C_j^k t^{j-1} dt \qquad \qquad (11)
\end{eqnarray}

Using theorem 1 above, we arrive at

\begin{eqnarray}
e^{-\alpha_k t}y  &=& \sum_{i=1}^{k-1}   \sum_{j=1}^{m_i}\left( Q_j^i t^{j-1} e^{(\alpha_i -\alpha_k)t} \right) + \sum_{j=1}^{m_k}  \frac{C_j^k}{j} t^j + K  \qquad \qquad (12)
\end{eqnarray}

Defining $Q_{j+1}^k = \frac{C_j^k}{j}$ for $j=1,\ldots,m_k$ and $Q_1^k = K$ we get

\begin{eqnarray}
e^{-\alpha_k t}y  &=& \sum_{i=1}^{k-1}   \sum_{j=1}^{m_i}\left( Q_j^i t^{j-1} e^{(\alpha_i -\alpha_k)t} \right) + \sum_{j=1}^{m_k+1} Q_j^k t^{j-1}   \qquad \qquad (13)
\end{eqnarray}

or

\begin{eqnarray}
y &=& \sum_{i=1}^{k-1}   \sum_{j=1}^{m_i} Q_j^i t^{j-1} e^{ \alpha_it} +  \sum_{j=1}^{m_k+1} Q_j^k t^{j-1} e^{ \alpha_k t}
\end{eqnarray}

which is the result we were looking for.

Case 2

Define $\bar{y} = (D-\alpha_{k+1})y$.

\begin{eqnarray}
\bar{y} = (D-\alpha_{k+1})y      \qquad \qquad (14)
\end{eqnarray}

Then (4) can be written as

\begin{eqnarray}
(D-\alpha_1)^{m_1} \ldots (D-\alpha_{k-1})^{m_{k-1}}(D-\alpha_k)^{m_k}\bar{y} =0      \qquad \qquad (15)
\end{eqnarray}

But (15) is an $n$th order equation and we know how to solve it. Its solution is

\begin{eqnarray}
\bar{y} &=& \sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}     \qquad \qquad (16)
\end{eqnarray}

If we substitute (16) back into (14) we get

\begin{eqnarray}
(D-\alpha_{k+1})y = \bar{y} =  \sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}    \qquad \qquad (17)
\end{eqnarray}

If we multiply both sides by the integrating factor $e^{-\alpha_{k+1} t}$ we get

\begin{eqnarray}
e^{-\alpha_{k+1} t}(D-\alpha_{k+1})y &=& \left(\sum_{i=1}^k  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{\alpha_i t}\right) e^{-\alpha_{k+1} t}   \qquad \qquad (18)   \\
D\left(e^{-\alpha_{k+1} t}y\right) &=& \sum_{i=1}^{k}  \left(\sum_{j=1}^{m_i}  C_j^i t^{j-1} \right)e^{(\alpha_i-\alpha_{k+1})t}  \qquad \qquad (19)
\end{eqnarray}

When we integrate both sides and use theorem 1, we get

\begin{eqnarray}
e^{-\alpha_{k+1} t}y &=& \sum_{i=1}^{k}  \left(\sum_{j=1}^{m_i}  Q_j^i t^{j-1} \right)e^{(\alpha_i-\alpha_{k+1} )t} + Q^{k+1}_1 \qquad \qquad (20)
\end{eqnarray}

where the $Q_j^i$’s are the new constants that replace the $C_j^i$’s, and $Q^{k+1}_1$ is the integration constant. Multiplying both sides by $e^{\alpha_{k+1} t}$ yields

\begin{eqnarray}
y &=& \sum_{i=1}^{k}  \left(\sum_{j=1}^{m_i}  Q_j^i t^{j-1} \right)e^{ \alpha_i t} + Q^{k+1}_1e^{\alpha_{k+1} t}
\end{eqnarray}

which is the desired result.

The case of complex roots

In Theorem 2 the roots were allowed to be complex. In practice, however, our LTI differential equations have real coefficients. Such differential equations may have two different types of roots:

  • Real roots
  • Complex conjugate roots.

Note that complex conjugate roots always come in pairs. Therefore, to have complex conjugate roots, an LTI differential equation must be at least of second order.

Example: Consider the LTI differential equation $(D^3+3D^2+4D+2)y=0$. This differential equation has three roots: one real root at $-1$ and a complex conjugate pair at $-1 \pm i$. The factorized form is $(D+1)(D+1+i)(D+1-i)y=0$, and the solution is

\begin{eqnarray}
y = C_1 e^{-t} + C_2 e^{(-1-i)t} + C_3 e^{(-1+i)t}
\end{eqnarray}

Example: Consider the LTI differential equation $(D^3-2D^2+D-2)y=0$. This differential equation has three roots: one real root at $2$ and a complex conjugate pair at $\pm i$. The factorized form is $(D-2)(D+i)(D-i)y=0$, and the solution is

\begin{eqnarray}
y = C_1 e^{2t} + C_2 e^{-it} + C_3 e^{it}
\end{eqnarray}

Example: Consider the LTI differential equation $(D^5-D^4+8D^3-8D^2+16D-16)y=0$. This differential equation has five roots: one real root at $1$ and a double complex conjugate pair at $\pm 2i$. The factorized form is $(D-1)(D+2i)^2(D-2i)^2 y=0$, and the solution is

\begin{eqnarray}
y = C_1 e^{t} + (C_2+C_3 t) e^{-2it} + (C_4+C_5 t) e^{2it}
\end{eqnarray}
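
These factorizations are easy to double-check symbolically; here is a sketch for the last example:

```python
# Verify that (x - 1)(x + 2i)^2 (x - 2i)^2 expands to x^5 - x^4 + 8x^3 - 8x^2 + 16x - 16.
import sympy as sp

x = sp.symbols('x')
factored = (x - 1) * (x + 2*sp.I)**2 * (x - 2*sp.I)**2
print(sp.expand(factored))   # x**5 - x**4 + 8*x**3 - 8*x**2 + 16*x - 16
```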

The problem with complex conjugate roots is this: we have a real LTI differential equation (remember that all the coefficients are real), our initial conditions are also real, but our solutions stray into the complex domain. We want them to stay in the real domain. Is there a trick up our sleeve to force them to stay in the real world? Yes, there is.

Let us consider the simplest possible differential equation with complex conjugate roots:

\begin{eqnarray}
(D-a-ib)(D-a+ib)y = 0
\end{eqnarray}

The solution will be

\begin{eqnarray}
y &=&  C_1 e^{(a-ib)t} + C_2 e^{(a+ib)t}\\
&=&  C_1 e^{at}e^{-ibt} + C_2 e^{at}e^{ibt} \\
&=&  e^{at} \left( C_1 e^{-ibt} + C_2 e^{ibt} \right)
\end{eqnarray}

Using Euler’s formula

\begin{eqnarray}
e^{ibt} &=&  \cos(bt)+i\sin(bt)\\
e^{-ibt} &=&  \cos(-bt)+i\sin(-bt) = \cos(bt)-i\sin(bt)
\end{eqnarray}

Substituting these into the solution, we get

\begin{eqnarray}
y &=&  e^{at} \left( C_1 (\cos(bt)-i\sin(bt)) + C_2 (\cos(bt)+i\sin(bt)) \right) \\
&=&  e^{at} \left( (C_1+C_2) \cos(bt)+(C_2-C_1)i\sin(bt) \right)
\end{eqnarray}

Now a way to save ourselves from complex solutions becomes apparent. Until now, we always considered $C_1$ and $C_2$ as arbitrary real numbers determined by the initial conditions. Instead, we can take them to be a complex conjugate pair

\begin{eqnarray}
C_1 &=&  P+iQ\\
C_2 &=&  P-iQ
\end{eqnarray}

where $P, Q$ are real. Note that two arbitrary complex numbers mean four real numbers. But because we choose them as complex conjugates, we still end up with only two real numbers, $P$ and $Q$. Substituting them in, we get

\begin{eqnarray}
y &=&  e^{at} \left( 2P \cos(bt)+2Q\sin(bt) \right)
\end{eqnarray}

As $P$ and $Q$ are arbitrary real constants, we can forget about the factor of two and write the solution as

\begin{eqnarray}
y &=&  e^{at} \left( P \cos(bt)+Q\sin(bt) \right)
\end{eqnarray}

where everything is real.
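
A computer algebra system returns the solution directly in this real form. Below is a sketch for an equation of our own choosing, $D^2y + 2Dy + 2y = 0$, whose roots are $-1 \pm i$ (so $a=-1$, $b=1$):

```python
# For the complex conjugate pair -1 +/- i, the solution comes out in the real form
# e^{-t} (P cos(t) + Q sin(t)).
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) + 2 * y(t).diff(t) + 2 * y(t), 0)
print(sp.dsolve(ode, y(t)))      # y(t) = (C1*sin(t) + C2*cos(t))*exp(-t)
```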

Now let us consider the general case. The complex conjugate multiple factor $(D-a+b i)^n (D-a-b i)^n$ in our LTI differential equation will give rise to the following terms in our solution

\begin{eqnarray}
\left( \sum_{k=1}^n R_k t^{k-1} \right) e^{(a-ib)t} + \left( \sum_{k=1}^n S_k t^{k-1} \right) e^{(a+ib)t}\\
=\left( \sum_{k=1}^n R_k t^{k-1} \right) e^{at}e^{-ibt} + \left( \sum_{k=1}^n S_k t^{k-1} \right) e^{at}e^{ibt} \\
=e^{at} \left[ \left( \sum_{k=1}^n R_k t^{k-1} e^{-ibt}\right)  + \left( \sum_{k=1}^n S_k t^{k-1} e^{ibt}\right)  \right] \\
=e^{at} \left[ \left( \sum_{k=1}^n R_k t^{k-1} (\cos(bt)-i\sin(bt))\right)  + \left( \sum_{k=1}^n S_k t^{k-1} (\cos(bt)+i\sin(bt))\right)  \right] \\
=e^{at} \left[ \left( \sum_{k=1}^n (R_k+S_k) t^{k-1} \cos(bt)\right)  + \left( \sum_{k=1}^n (S_k-R_k) t^{k-1}i \sin(bt)\right)  \right] \\
\end{eqnarray}

choosing

\begin{eqnarray}
R_k &=&  P_k+iQ_k\\
S_k &=&  P_k-iQ_k
\end{eqnarray}

we get

\begin{eqnarray}
=e^{at} \left[ \left( \sum_{k=1}^n 2P_k t^{k-1} \cos(bt)\right)  + \left( \sum_{k=1}^n 2Q_k t^{k-1} \sin(bt)\right)  \right] \\
\end{eqnarray}

Absorbing the factor of 2 into new constants $R_k \triangleq 2P_k$, $S_k \triangleq 2Q_k$ and distributing the exponential factor $e^{at}$, we get the final form

\begin{eqnarray}
= \sum_{k=1}^n R_k t^{k-1} e^{at}\cos(bt)  +  \sum_{k=1}^n S_k t^{k-1} e^{at}\sin(bt)
\end{eqnarray}

which is real.

Note that if $b=0$ we recover the solution for the real case.

Summary: Steps in solving an unforced LTI differential equation

  1. Check whether the equation is an unforced LTI equation. If not, the methods of this chapter do not apply.
  2. If $a_n \neq 1$, divide the whole equation by $a_n$ so that the leading coefficient becomes 1.
  3. Factorize the differential equation to find the roots. The roots will either be real or come in complex conjugate pairs.
  4. If the factorization contains a term $(D-a)^n$, this means we have a real root $a$ of multiplicity $n$. Add the term $\sum_{k=1}^n C_k t^{k-1} e^{at}$ to the solution.
  5. If the factorization contains the term $(D-a+ib)^n (D-a-ib)^n$, this means we have a complex conjugate pair $a \pm ib$ of multiplicity $n$. Add the term $\sum_{k=1}^n R_k t^{k-1} e^{at}\cos(bt)  +  \sum_{k=1}^n S_k t^{k-1} e^{at}\sin(bt)$ to the solution.
  6. Use the initial conditions to find the constants. (A code sketch of steps 2-5 follows this list.)
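
Below is a rough sketch of steps 2-5 in code. Everything in it is illustrative: the function name is ours, the roots are found numerically with `numpy.roots`, and equal roots are grouped with a simple tolerance, which is not robust for ill-conditioned polynomials.

```python
# A sketch of steps 2-5 of the summary: given the coefficients [a_n, ..., a_1, a_0]
# of an unforced LTI equation, print the form of its general solution.
import numpy as np

def unforced_solution_form(coeffs, tol=1e-8):
    coeffs = np.array(coeffs, dtype=float)
    coeffs = coeffs / coeffs[0]                          # step 2: make a_n = 1
    roots = np.roots(coeffs)                             # step 3: find the roots

    groups = []                                          # group equal roots -> multiplicities
    for r in roots:
        for g in groups:
            if abs(g[0] - r) < tol:
                g[1] += 1
                break
        else:
            groups.append([r, 1])

    terms, c = [], 0
    def poly(m):                                         # C_{c+1} + C_{c+2} t + ... + C_{c+m} t^{m-1}
        nonlocal c
        parts = [f"C{c + k + 1}*t^{k}" for k in range(m)]
        c += m
        return " + ".join(parts)

    for r, m in groups:
        a, b = r.real, r.imag
        if abs(b) < tol:                                 # step 4: real root a of multiplicity m
            terms.append(f"({poly(m)})*exp({a:.3g}*t)")
        elif b > tol:                                    # step 5: complex pair a +/- ib of multiplicity m
            terms.append(f"({poly(m)})*exp({a:.3g}*t)*cos({b:.3g}*t)")
            terms.append(f"({poly(m)})*exp({a:.3g}*t)*sin({b:.3g}*t)")
        # roots with b < -tol are conjugates of roots already handled, so they are skipped
    return " + ".join(terms)

# Example: D^3 y + 3 D^2 y + 4 D y + 2 y = 0 has roots -1 and -1 +/- i.
print(unforced_solution_form([1, 3, 4, 2]))
```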

Stability and dominant roots

Now that we have an algorithm to solve the most general unforced LTI differential equation, we can say something about the structure of its solutions. The structure of the solutions is determined by the roots of the differential equation.

  • Terms arising from positive real roots blow up to infinity.
  • Terms arising from complex conjugate roots with positive real part blow up to infinity while oscillating.
  • Terms arising from negative real roots decay to zero.
  • Terms arising from complex conjugate roots with negative real part decay to zero while oscillating.
  • A real root at zero gives a polynomial term; the degree of the polynomial is one less than the multiplicity of the root.
  • Complex conjugate roots with zero real part give oscillating terms with polynomial amplitudes, again of degree one less than the multiplicity of the root.

It is customary to show the roots of an LTI differential equation in the complex plane. By looking at the roots, we can conjecture about how the system will behave.
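
As a small numerical illustration of these cases (the sample roots are arbitrary choices), note that the magnitude of a solution term behaves like $|e^{\alpha t}| = e^{\mathrm{Re}(\alpha)\, t}$, so growth or decay is decided purely by the real part of the root:

```python
# |e^{alpha t}| = e^{Re(alpha) t}: growth or decay is decided by the real part of the root.
import numpy as np

t = np.array([1.0, 5.0, 10.0, 20.0])
sample_roots = {
    "positive real (2)": 2 + 0j,
    "complex, positive real part (0.5 + 3i)": 0.5 + 3j,
    "negative real (-1)": -1 + 0j,
    "complex, negative real part (-0.5 + 3i)": -0.5 + 3j,
    "purely imaginary (3i)": 0 + 3j,
}

for name, alpha in sample_roots.items():
    magnitude = np.abs(np.exp(alpha * t))
    print(f"{name:40s}", np.array2string(magnitude, precision=2))
```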