# The Convolution Integral

The term convolution means "folding." Convolution is an invaluable tool to the engineer because it provides a means of viewing and characterizing physical systems. For example, it is used in finding the response $y(t)$ of a system to an excitation $x(t)$, knowing the system impulse response $h(t)$. This is achieved through the convolution integral, defined as $$y(t)=\int_{-\infty}^{\infty} x(\lambda) h(t-\lambda) d \lambda \tag{1}$$ or simply $$y(t)=x(t) * h(t) \tag{2}$$ where $\lambda$ is a dummy variable and the asterisk denotes convolution. Equation (1) or (2) states that the output is equal to the input convolved with the unit impulse response. The convolution process is commutative: $$y(t)=x(t) * h(t)=h(t) * x(t)$$ or
$$y(t)=\int_{-\infty}^{\infty} x(\lambda) h(t-\lambda) d \lambda=\int_{-\infty}^{\infty} h(\lambda) x(t-\lambda) d \lambda \tag{3}$$
This implies that the order in which the two functions are convolved is immaterial. We will see shortly how to take advantage of this commutative property when performing graphical computation of the convolution integral.
The convolution of two signals consists of time-reversing one of the signals, shifting it, multiplying it point by point with the second signal, and integrating the product.
The convolution integral in Eq. (1) is the general one; it applies to any linear system. However, it can be simplified under two assumptions. First, if the excitation is causal, that is, $x(t)=0$ for $t < 0$, then
$$y(t)=\int_{-\infty}^{\infty} x(\lambda) h(t-\lambda) d \lambda=\int_{0}^{\infty} x(\lambda) h(t-\lambda) d \lambda \tag{4}$$
Second, if the system's impulse response is causal (i.e., $h(t)=0$ for $t < 0)$, then $h(t-\lambda)=0$ for $t-\lambda < 0$ or $\lambda > t$, so that Eq. (4) becomes
$$\bbox[10px,border:1px solid grey]{y(t)=h(t) * x(t)=\int_{0}^{t} x(\lambda) h(t-\lambda) d \lambda} \tag{5}$$
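When $x(t)$ and $h(t)$ are known only as sampled data, Eq. (5) can be approximated by a Riemann sum. A minimal Python sketch (the function name `convolve_causal` is illustrative, not from the text):

```python
def convolve_causal(x, h, dt):
    """Approximate y(t) = integral from 0 to t of x(lam)*h(t - lam) d(lam),
    i.e. Eq. (5), for causal signals sampled every dt seconds."""
    y = []
    for i in range(len(x)):            # output time t = i*dt
        acc = 0.0
        for k in range(i + 1):         # lam = k*dt runs from 0 to t
            acc += x[k] * h[i - k]     # x(lam) * h(t - lam)
        y.append(acc * dt)             # Riemann-sum weight d(lam) = dt
    return y
```

For instance, convolving a unit step with itself this way yields, to within the step size, the ramp $y(t)=t$ that Eq. (5) predicts.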
Here are some properties of the convolution integral.
• $x(t) * h(t)=h(t) * x(t)$ (Commutative)
• $f(t) *[x(t)+y(t)]=f(t) * x(t)+f(t) * y(t)$ (Distributive)
• $f(t) *[x(t) * y(t)]=[f(t) * x(t)] * y(t)$ (Associative)
• $f(t) * \delta(t)=\int_{-\infty}^{\infty} f(\lambda) \delta(t-\lambda) d \lambda=f(t)$
• $f(t) * \delta\left(t-t_{o}\right)=f\left(t-t_{o}\right)$
• $f(t) * \delta^{\prime}(t)=\int_{-\infty}^{\infty} f(\lambda) \delta^{\prime}(t-\lambda) d \lambda=f^{\prime}(t)$
• $f(t) * u(t)=\int_{-\infty}^{\infty} f(\lambda) u(t-\lambda) d \lambda=\int_{-\infty}^{t} f(\lambda) d \lambda$
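The commutative and distributive properties carry over exactly to the discretized integral, which makes them easy to spot-check numerically; a small sketch with arbitrary sample values (names here are illustrative):

```python
def conv(x, h, dt):
    # Discrete approximation of Eq. (5): y[i] ~ y(i*dt)
    return [dt * sum(x[k] * h[i - k] for k in range(i + 1))
            for i in range(len(x))]

dt = 0.1
x = [1.0, 2.0, 0.5, -1.0]
h = [0.5, 0.5, 1.0, 2.0]
g = [3.0, -1.0, 0.0, 1.0]

# Commutative: x * h = h * x
assert all(abs(a - b) < 1e-12
           for a, b in zip(conv(x, h, dt), conv(h, x, dt)))

# Distributive: x * (h + g) = x * h + x * g
lhs = conv(x, [a + b for a, b in zip(h, g)], dt)
rhs = [a + b for a, b in zip(conv(x, h, dt), conv(x, g, dt))]
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```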
Before learning how to evaluate the convolution integral in Eq. (5), let us establish the link between the Laplace transform and the convolution integral. Given two functions $f_{1}(t)$ and $f_{2}(t)$ with Laplace transforms $F_{1}(s)$ and $F_{2}(s)$, respectively, their convolution is $$f(t)=f_{1}(t) * f_{2}(t)=\int_{0}^{t} f_{1}(\lambda) f_{2}(t-\lambda) d \lambda \tag{5.1}$$ Taking the Laplace transform gives $$F(s)=\mathcal{L}\left[f_{1}(t) * f_{2}(t)\right]=F_{1}(s) F_{2}(s) \tag{6}$$ To prove that Eq. (6) is true, we begin with the fact that $F_{1}(s)$ is defined as $$F_{1}(s)=\int_{0}^{\infty} f_{1}(\lambda) e^{-s \lambda} d \lambda$$ Multiplying this with $F_{2}(s)$ gives
$$F_{1}(s) F_{2}(s)=\int_{0}^{\infty} f_{1}(\lambda)\left[F_{2}(s) e^{-s \lambda}\right] d \lambda \tag{7}$$
We recall from the time shift property
\begin{aligned}F_{2}(s) e^{-s \lambda} &=\mathcal{L}\left[f_{2}(t-\lambda) u(t-\lambda)\right] \\&=\int_{0}^{\infty} f_{2}(t-\lambda) u(t-\lambda) e^{-s t} d t\end{aligned} \tag{8}
Substituting Eq. (8) into Eq. (7) gives
$$F_{1}(s) F_{2}(s)=\int_{0}^{\infty} f_{1}(\lambda)\left[\int_{0}^{\infty} f_{2}(t-\lambda) u(t-\lambda) e^{-s t} d t\right] d \lambda \tag{9}$$
Interchanging the order of integration results in
$$F_{1}(s) F_{2}(s)=\int_{0}^{\infty}\left[\int_{0}^{t} f_{1}(\lambda) f_{2}(t-\lambda) d \lambda\right] e^{-s t} d t \tag{10}$$
The integral in brackets extends only from 0 to $t$ because the delayed unit step $u(t-\lambda)=1$ for $\lambda < t$ and $u(t-\lambda)=0$ for $\lambda > t$. We notice that the integral is the convolution of $f_{1}(t)$ and $f_{2}(t)$ as in Eq. (5.1). Hence, $$F_{1}(s) F_{2}(s)=\mathcal{L}\left[f_{1}(t) * f_{2}(t)\right] \tag{11}$$ as desired. This indicates that convolution in the time domain is equivalent to multiplication in the $s$ domain. For example, if $x(t)=4 e^{-t}$ and $h(t)=5 e^{-2 t}$, applying the property in Eq. (11), we get
\begin{aligned}h(t) * x(t) &=\mathcal{L}^{-1}[H(s) X(s)]=\mathcal{L}^{-1}\left[\left(\frac{5}{s+2}\right)\left(\frac{4}{s+1}\right)\right] \\&=\mathcal{L}^{-1}\left[\frac{20}{s+1}+\frac{-20}{s+2}\right] \\&=20\left(e^{-t}-e^{-2 t}\right), \quad t \geq 0\end{aligned} \tag{12}
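The closed form in Eq. (12) can be cross-checked against a direct numerical evaluation of Eq. (5); a sketch in Python (the step size and tolerance are ad hoc choices, not from the text):

```python
import math

dt = 1e-4
i = int(1.0 / dt)                                          # evaluate at t = 1 s
x = [4.0 * math.exp(-k * dt) for k in range(i + 1)]        # x(t) = 4e^{-t}
h = [5.0 * math.exp(-2.0 * k * dt) for k in range(i + 1)]  # h(t) = 5e^{-2t}

# Riemann-sum approximation of Eq. (5) at t = 1
y1 = dt * sum(x[k] * h[i - k] for k in range(i + 1))

exact = 20.0 * (math.exp(-1.0) - math.exp(-2.0))           # Eq. (12) at t = 1
assert abs(y1 - exact) < 1e-2
```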
Although we can find the convolution of two signals using Eq. (11), as we have just done, if the product $F_{1}(s) F_{2}(s)$ is very complicated, finding the inverse may be tough. Also, there are situations in which $f_{1}(t)$ and $f_{2}(t)$ are available in the form of experimental data and there are no explicit Laplace transforms. In these cases, one must do the convolution in the time domain.
The process of convolving two signals in the time domain is better appreciated from a graphical point of view. The graphical procedure for evaluating the convolution integral in Eq. (5) usually involves four steps.

#### Steps to evaluate the convolution integral:

• Folding: Take the mirror image of $h(\lambda)$ about the ordinate axis to obtain $h(-\lambda)$.
• Displacement: Shift or delay $h(-\lambda)$ by $t$ to obtain $h(t-\lambda)$.
• Multiplication: Find the product of $h(t-\lambda)$ and $x(\lambda)$.
• Integration: For a given time $t$, calculate the area under the product $h(t-\lambda) x(\lambda)$ for $0 < \lambda < t$ to get $y(t)$ at $t$.
The folding operation in step 1 is the reason for the term convolution. The function $h(t-\lambda)$ scans or slides over $x(\lambda)$. In view of this superposition procedure, the convolution integral is also known as the superposition integral.
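The four steps can be mirrored one-for-one in code; the sketch below folds the sampled $h$, slides it across $x$, multiplies, and sums (the function name is illustrative, and equal-length sample arrays are assumed):

```python
def convolve_by_steps(x, h, dt):
    """Evaluate Eq. (5) sample by sample via fold, shift, multiply, integrate.
    Assumes len(x) == len(h)."""
    n = len(h)
    folded = h[::-1]                       # step 1: h(-lam), mirrored about lam = 0
    y = []
    for i in range(len(x)):                # step 2: shift by t = i*dt, so that
        # h(t - lam) at lam = k*dt is folded[(n - 1) - (i - k)] = h[i - k]
        prod = [x[k] * folded[n - 1 - i + k] for k in range(i + 1)]  # step 3
        y.append(dt * sum(prod))           # step 4: area under the product
    return y
```

This reduces to the same double sum as the Riemann-sum form of Eq. (5), but keeps the folding and sliding of step 1 and step 2 explicit.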
To apply the four steps, it is necessary to be able to sketch $x(\lambda)$ and $h(t-\lambda)$. To get $x(\lambda)$ from the original function $x(t)$ involves merely replacing $t$ with $\lambda$. Sketching $h(t-\lambda)$ is the key to the convolution process. It involves reflecting $h(\lambda)$ about the vertical axis and shifting it by $t$. Analytically, we obtain $h(t-\lambda)$ by replacing every $t$ in $h(t)$ by $t-\lambda$. Since convolution is commutative, it may be more convenient to apply steps 1 and 2 to $x(t)$ instead of $h(t)$. The best way to illustrate the procedure is with some examples.
Example 1: Find the convolution of the two signals in Fig. 1.
Fig. 1: Example 1.
Solution: We follow the four steps to get $y(t)=x_{1}(t) * x_{2}(t)$.
First, we fold $x_{1}(t)$ as shown in Fig. 2(a) and shift it by $t$ as shown in Fig. 2(b). For different values of $t$, we now multiply the two functions and integrate to determine the area of the overlapping region.
Fig. 2: (a) Folding $x_1(λ)$, (b) shifting $x_1(−λ)$ by t.
Fig. 3: Overlapping of $x_1(t − λ)$ and $x_2(λ)$ for: (a) 0 < t < 1, (b) 1 < t < 2, (c) 2 < t < 3, (d) 3 < t < 4, (e) t > 4.
For $0 < t < 1$, there is no overlap of the two functions, as shown in Fig. 3(a). Hence, $$y(t)=x_{1}(t) * x_{2}(t)=0, \quad 0 < t < 1 \tag{1.1}$$ For $1 < t < 2$, the two signals overlap between 1 and $t$, as shown in Fig. 3(b):
$$y(t)=\int_{1}^{t}(2)(1) d \lambda=\left.2 \lambda\right|_{1} ^{t}=2(t-1), \quad 1 < t < 2 \tag{1.2}$$
For $2 < t < 3$, the two signals completely overlap between $(t-1)$ and $t$, as shown in Fig. 3(c). It is easy to see that the area under the curve is 2, or
$$y(t)=\int_{t-1}^{t}(2)(1) d \lambda=\left.2 \lambda\right|_{t-1} ^{t}=2, \quad 2 < t < 3 \tag{1.3}$$
For $3 < t < 4$, the two signals overlap between $(t-1)$ and 3, as shown in Fig. 3(d).
\begin{aligned}y(t) &=\int_{t-1}^{3}(2)(1) d \lambda=\left.2 \lambda\right|_{t-1} ^{3} \\&=2(3-t+1)=8-2 t, \quad 3 < t < 4\end{aligned} \tag{1.4}
For $t > 4$, the two signals do not overlap [Fig. 3(e)], and $$y(t)=0, \quad t > 4 \tag{1.5}$$ Combining Eqs. (1.1) to (1.5), we obtain
$$y(t)=\left\{\begin{array}{ll}0, & 0 \leq t \leq 1 \\2 t-2, & 1 \leq t \leq 2 \\2, & 2 \leq t \leq 3 \\8-2 t, & 3 \leq t \leq 4 \\0, & t \geq 4\end{array}\right. \tag{1.6}$$
Fig. 4: Convolution of signals $x_1(t)$ and $x_2(t)$ in Fig. 1.
which is sketched in Fig. 4. Notice that $y(t)$ in this equation is continuous. This fact can be used to check the results as we move from one range of $t$ to another. The result in Eq. (1.6) can also be obtained without the graphical procedure, by directly applying Eq. (5) and the properties of step functions.
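The piecewise result in Eq. (1.6) can also be confirmed numerically. The sketch below assumes, from the integration limits in the solution (Fig. 1 is not reproduced here), that $x_1(t)$ is a unit pulse on $0 < t < 1$ and that $x_2(t)$ has height 2 on $1 < t < 3$:

```python
dt = 1e-3
n = int(5.0 / dt)
# Assumed shapes, inferred from the worked integration limits:
x1 = [1.0 if k * dt < 1.0 else 0.0 for k in range(n)]         # unit pulse, 0 < t < 1
x2 = [2.0 if 1.0 <= k * dt < 3.0 else 0.0 for k in range(n)]  # height 2, 1 < t < 3

def y_exact(t):                        # Eq. (1.6)
    if t < 1.0:
        return 0.0
    if t < 2.0:
        return 2.0 * (t - 1.0)
    if t < 3.0:
        return 2.0
    if t < 4.0:
        return 8.0 - 2.0 * t
    return 0.0

for t in (0.5, 1.5, 2.5, 3.5, 4.5):    # one test point inside each range
    i = int(round(t / dt))
    y = dt * sum(x1[k] * x2[i - k] for k in range(i + 1))
    assert abs(y - y_exact(t)) < 5e-3
```

The continuity of $y(t)$ noted above shows up here as well: adjacent pieces of Eq. (1.6) agree at the breakpoints $t = 1, 2, 3, 4$.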