Laplace Transform

The Laplace transform of a matrix-valued function $$z:\mathbb{R}_{+} \to \mathbb{R}^{p\times q}$$ (i.e., a matrix-valued function of time) is denoted $$Z(s)$$ and defined as:

$$Z(s) = \int\limits_{0}^{\infty} e^{-st}z(t) dt$$. The integral of this matrix is computed entry by entry (example below).
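For instance, the entrywise transform can be sketched in SymPy (the vector-valued $$z$$ below is an illustrative choice, not from the text):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# A 2x1 matrix-valued function of time (illustrative choice)
z = sp.Matrix([sp.exp(-2 * t), t])

# The Laplace transform is applied entry by entry
Z = z.applyfunc(lambda f: sp.laplace_transform(f, t, s, noconds=True))

print(Z)  # entries: 1/(s + 2) and 1/s**2
```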

What makes the Laplace transform so useful in the context of linear dynamical systems (LDSs) is its derivative property, that is:

$$L(\dot{z}) = sZ(s) - z(0)$$.
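As a quick symbolic check of the derivative property in SymPy (the scalar signal $$f(t)=e^{-2t}$$ is an illustrative choice):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# An illustrative scalar signal (not from the text)
f = sp.exp(-2 * t)

# Transform of f and of its derivative
F = sp.laplace_transform(f, t, s, noconds=True)
F_dot = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

# Derivative property: L(f') = s F(s) - f(0)
print(sp.simplify(F_dot - (s * F - f.subs(t, 0))))  # 0
```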

Using this property and taking the Laplace transform of both sides, the LDS $$\dot{x}(t) = A x(t)$$ becomes $$sX(s) - x(0) = AX(s)$$, so it can be represented in the Laplace domain as:

$$X(s) = (sI-A)^{-1}x(0)$$.

We can now solve the LDS via the inverse Laplace transform as: $$x(t)=\mathcal{L}^{-1}((sI-A)^{-1})x(0)$$.

The matrix $$(sI-A)^{-1}$$ is called the resolvent matrix. The matrix $$\Phi(t):=\mathcal{L}^{-1}((sI-A)^{-1})$$ is called the state-transition matrix as it maps the initial state to the state at time $$t$$:

$$x(t) = \Phi(t) x(0)$$.
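This recipe can be sketched end to end in SymPy (the matrix $$A$$ and initial state below are illustrative choices, not from the text):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

# Illustrative system and initial state
A = sp.Matrix([[-1, 0], [1, -2]])
x0 = sp.Matrix([1, 0])

# Resolvent (sI - A)^{-1}, then invert the transform entry by entry
resolvent = (s * sp.eye(2) - A).inv()
Phi = resolvent.applyfunc(lambda F: sp.inverse_laplace_transform(F, s, t))

# x(t) = Phi(t) x(0) solves xdot = A x
x = sp.simplify(Phi * x0)
print(x)
```

One can verify that the result satisfies $$\dot{x} = Ax$$ by differentiating it with respect to $$t$$.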

Using matrix inversion via series expansion (see $$(I-C)^{-1}$$ in inverse matrices), we can write $$(sI-A)^{-1} = \frac{1}{s}\left(I - \frac{A}{s}\right)^{-1} = \sum_{k=0}^{\infty} \frac{A^{k}}{s^{k+1}}$$. Applying the linearity of the inverse Laplace transform term by term, together with $$\mathcal{L}^{-1}(1/s^{k+1}) = t^{k}/k!$$, the matrix $$\Phi(t)$$ can be computed as:

$$\Phi(t) = \mathcal{L}^{-1}((sI-A)^{-1}) = I+tA+\frac{(tA)^{2}}{2!}+\frac{(tA)^{3}}{3!}+\cdots$$.

In fact, the RHS above is exactly the matrix exponential of $$tA$$, i.e., $$\Phi(t) = e^{tA}$$.
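A quick numerical sanity check that the partial sums of this series approach `scipy.linalg.expm(t * A)` (the matrix $$A$$ and time $$t$$ below are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative A and time (not from the text)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.5

# Accumulate partial sums of I + tA + (tA)^2/2! + ...
term = np.eye(2)
series = np.eye(2)
for k in range(1, 20):
    term = term @ (t * A) / k  # term is now (tA)^k / k!
    series += term

print(np.allclose(series, expm(t * A)))  # True
```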

Example of Laplace transform
Let $$A = \begin{bmatrix}0 & 1 \\ 0 & 0 \end{bmatrix}$$.

Then, $$sI-A = \begin{bmatrix} s & -1 \\ 0 & s \end{bmatrix}$$ so the resolvent is:

$$(sI-A)^{-1} = \begin{bmatrix}1/s & 1/s^2 \\ 0 & 1/s\end{bmatrix}$$ and the state transition matrix is:

$$\Phi(t) = \mathcal{L}^{-1}\left((sI-A)^{-1}\right) = \begin{bmatrix}1 & t \\ 0 & 1\end{bmatrix}$$
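Note that this $$A$$ is nilpotent ($$A^{2}=0$$), so the exponential series truncates after the linear term and $$\Phi(t) = e^{tA} = I + tA$$ exactly. A quick check with `scipy.linalg.expm` (the value of $$t$$ is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent: A @ A == 0
t = 3.0

# e^{tA} should equal [[1, t], [0, 1]]
Phi = expm(t * A)
print(np.allclose(Phi, [[1.0, t], [0.0, 1.0]]))  # True
```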