
Integral Calculus

With differentiation under our belt, we need only a few definitions and we'll get integral calculus for free. That's because integration is antidifferentiation, the inverse process to differentiation. As we'll see, the derivative of a function is unique, but its (indefinite) integral involves one free choice that must be made: the constant of integration. We'll also see that the (definite) integral of a function in one dimension is the area underneath the curve.

There are lots of ways to develop integral calculus. Most calculus books begin (appropriately) by drawing pictures of curves and showing that the area beneath them can be evaluated by summing small discrete sections, and that by means of a limiting process that area is equivalent to the integral of the functional curve. That is, if $ f(x)$ is some curve and we wish to find the area beneath a segment of it (from $ x = x_1$ to $ x = x_2$ , for example), one small piece of that area can be written:

$\displaystyle \Delta A = f(x)\Delta x$ (107)

The total area can then be approximately evaluated by piecewise summing $ N$ rectangular strips of width $ \Delta x = (x_2 - x_1)/N$ :

$\displaystyle A \approx \sum_{n=1}^N f(x_1 + n\cdot \Delta x) \Delta x$ (108)

(Note that one can get slightly different results if one centers the rectangles or begins them on the low side, but we don't care.)

In the limit that $ N \to \infty$ and $ \Delta x \to 0$ , two things happen. First we note that:

$\displaystyle f(x) = \frac{d A}{d x}$ (109)

by the definition of derivative from the previous section. The function $ f(x)$ is the formal derivative of the function representing the area beneath it (independent of the limits, as long as $ x$ is in the domain of the function). The second thing that happens is that we'll get tired adding teensy-weensy rectangles in infinite numbers. We therefore make up a special symbol for this infinite limit sum. $ \Sigma$ clearly stands for sum, so we change to another stylized ``ess'', $ \int$ , to also stand for sum, but now a continuous and infinite sum of all the infinitesimal pieces of area within the range. We now write:

$\displaystyle A = \int_{x_1}^{x_2} f(x) dx$ (110)

as an exact result in this limit.
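Equation (110) is easy to check numerically. The following is a minimal Python sketch, added here as an illustration (it is not part of the original derivation); the function $ f(x) = x^2$ , the interval $ [0,1]$ , and the values of $ N$ are all assumed choices made only for the example.

\begin{verbatim}
# Minimal sketch (assumed example): the rectangle sum of eq. (108) for
# f(x) = x**2 on [x1, x2] = [0, 1] approaches the exact area 1/3 of
# eq. (110) as the number of strips N grows.
import numpy as np

def riemann_sum(f, x1, x2, N):
    dx = (x2 - x1) / N                 # strip width Delta x
    n = np.arange(1, N + 1)            # n = 1, ..., N as in eq. (108)
    return np.sum(f(x1 + n * dx) * dx)

f = lambda x: x**2
exact = 1.0 / 3.0                      # int_0^1 x^2 dx = 1/3
for N in (10, 100, 1000, 10000):
    print(N, riemann_sum(f, 0.0, 1.0, N), exact)
\end{verbatim}

As $ N$ increases the printed sum approaches the exact area, which is the content of the limiting process described above.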

The beauty of this simple approach is that we can now do the following algebra, over and over again, to formulate integrals (sums) of some quantity.

$\displaystyle \frac{d A}{d x}$ $\displaystyle =$ $\displaystyle f(x)$  
$\displaystyle dA$ $\displaystyle =$ $\displaystyle f(x)dx$  
$\displaystyle \int dA$ $\displaystyle =$ $\displaystyle \int f(x)dx$  
$\displaystyle A$ $\displaystyle =$ $\displaystyle \int_{x_1}^{x_2} f(x)dx$ (111)

This areal integral is called a definite integral because it has definite upper and lower bounds. However, we can also do the integral with a variable upper bound:

$\displaystyle A(x) = \int_{x_0}^x f(x')dx'$ (112)

where we indicate how $ A$ varies as we change $ x$ , its upper bound.

We now make a clever observation. $ f(x)$ is clearly the function that we get by differentiating this integrated area with a fixed lower bound (which is still arbitrary) with respect to the variable in its upper bound. That is

$\displaystyle f(x) = \frac{d A(x)}{d x}$ (113)

This slope must be the same for all possible values of $ x_0$ or this relation would not be correct and unique! We therefore conclude that all the various functions $ A(x)$ that can stand for the area differ only by a constant (called the constant of integration):

$\displaystyle A'(x) = A(x) + C$ (114)

so that

$\displaystyle f(x) = \frac{d A'(x)}{d x} = \frac{d A(x)}{d x} + \frac{d C}{d x} = \frac{d A(x)}{d x}$ (115)

From this we can conclude that the indefinite integral of $ f(x)$ can be written:

$\displaystyle A(x) = \int^x f(x)dx + A_0$ (116)

where $ A_0$ is the constant of integration. In physics problems the constant of integration must usually be evaluated algebraically from information given in the problem, such as initial conditions.
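Both points can be checked symbolically. Below is a short sympy sketch, added here for illustration; the choice $ f(x) = \cos(x)$ , the two lower bounds, and the constant-acceleration initial-condition example are all assumptions made only for the demonstration.

\begin{verbatim}
# Sketch (assumed example) of eqs. (112)-(116): area functions with
# different arbitrary lower bounds differ only by a constant, and both
# have derivative f(x).  Here f = cos, chosen purely for illustration.
import sympy as sp

x, xp = sp.symbols('x xp', real=True)
A1 = sp.integrate(sp.cos(xp), (xp, 0, x))   # lower bound x0 = 0
A2 = sp.integrate(sp.cos(xp), (xp, 1, x))   # lower bound x0 = 1
print(sp.simplify(A1 - A2))                 # a pure constant, sin(1)
print(sp.diff(A1, x), sp.diff(A2, x))       # both equal cos(x)

# Fixing the constant of integration from an initial condition
# (hypothetical constant acceleration a0 with v(0) = v0):
t, a0, v0, C = sp.symbols('t a0 v0 C')
v = sp.integrate(a0, t) + C                 # v(t) = a0*t + C
C_val = sp.solve(sp.Eq(v.subs(t, 0), v0), C)[0]
print(v.subs(C, C_val))                     # a0*t + v0
\end{verbatim}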

From this simple definition, we can transform our existing table of derivatives into a table of (indefinite) integrals. Let us compute the integral of $ x^n$ as an example. We wish to find:

$\displaystyle g(x) = \int x^n dx$ (117)

where we will ignore the constant of integration as being irrelevant to this process (we can and should always add it to one side or the other of any formal indefinite integral unless we can see that it is zero). If we differentiate both sides, then since differentiation and integration are inverse operations we know:

$\displaystyle \frac{d g(x)}{d x} = x^n$ (118)

Looking on our table of derivatives, we see that:

$\displaystyle \frac{d x^{n+1}}{d x} = (n+1)x^n$ (119)

or

$\displaystyle \frac{d g(x)}{d x} = x^n = \frac{1}{n+1}\frac{d x^{n+1}}{d x}$ (120)

and hence:

$\displaystyle g(x) = \int^x x^n dx = \frac{1}{n+1}x^{n+1}$ (121)

by inspection (valid for $ n \neq -1$ ; the case $ n = -1$ gives the logarithm that appears in the table below).
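This result is easy to spot-check with a computer algebra system. The sympy sketch below is an added illustration, not part of the original text; sympy also reports the exceptional $ n = -1$ case.

\begin{verbatim}
# Spot-check (assumed example) of eq. (121): integrating x**n gives
# x**(n+1)/(n+1) for n != -1 (sympy omits the constant of integration).
import sympy as sp

x, n = sp.symbols('x n')
print(sp.integrate(x**3, x))    # x**4/4
print(sp.integrate(x**n, x))    # Piecewise: x**(n+1)/(n+1) for n != -1,
                                #            log(x) for n == -1
\end{verbatim}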

Similarly we can match up the other rules with integral equivalents.

$\displaystyle \frac{d (a f(x))}{d x} = a\frac{d f(x)}{d x}$ (122)

leads to:

$\displaystyle \int a f(x) dx = a\int f(x) dx$ (123)

A very important rule follows from the rule for differentiating a product, $ d(gh) = g\ dh + h\ dg$ . If we integrate both sides, this becomes:

$\displaystyle \int d(gh) = gh = \int g dh + \int h dg$ (124)

which we often rearrange as:

$\displaystyle \int g dh = \int d(gh) - \int h dg = gh - \int h dg$ (125)

the rule for integration by parts, which permits us to throw a derivative from one term to another in an integral we are trying to do. This turns out to be very, very useful in evaluating many otherwise extremely difficult integrals.
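As a concrete added illustration of equation (125), the short sympy sketch below applies integration by parts to the assumed pair $ g = x$ , $ dh = \cos(x)\ dx$ (so $ h = \sin(x)$ ) and compares the result with the integral done directly.

\begin{verbatim}
# Sketch (assumed example) of eq. (125): integrate x*cos(x) dx by parts
# with g = x and dh = cos(x) dx, so that h = sin(x).
import sympy as sp

x = sp.symbols('x')
g, h = x, sp.sin(x)                                  # dh = cos(x) dx
by_parts = g*h - sp.integrate(h*sp.diff(g, x), x)    # g*h - int h dg
direct = sp.integrate(x*sp.cos(x), x)                # int g dh directly
print(sp.simplify(by_parts - direct))                # 0
\end{verbatim}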

If we assemble the complete list of (indefinite) integrals that correspond to our list of derivatives, we get something like:

$\displaystyle \int 0\ dx$ $\displaystyle =$ $\displaystyle 0 + c = c \quad\quad {\rm with\ } c\ {\rm constant}$ (126)
$\displaystyle \int a f(x) dx$ $\displaystyle =$ $\displaystyle a \int f(x) dx$ (127)
$\displaystyle \int x^n dx$ $\displaystyle =$ $\displaystyle \frac{1}{n+1}x^{n+1} + c$ (128)
$\displaystyle \int (f + g) dx$ $\displaystyle =$ $\displaystyle \int f\ dx + \int g\ dx$ (129)
$\displaystyle \int f(x)dx$ $\displaystyle =$ $\displaystyle \int f(u) \frac{d x}{d u} du \quad\quad {\rm change\ variables}$ (130)
$\displaystyle \int d(gh)$ $\displaystyle =$ $\displaystyle gh = \int g dh + \int h dg \quad\quad {\rm or}$ (131)
$\displaystyle \int g dh$ $\displaystyle =$ $\displaystyle gh - \int h dg \quad\quad {\rm integration\ by\ parts}$ (132)
$\displaystyle \int e^x dx$ $\displaystyle =$ $\displaystyle e^x + c \quad\quad {\rm or\ change\ variables\ to}$ (133)
$\displaystyle \int e^{ax} dx$ $\displaystyle =$ $\displaystyle \frac{1}{a} \int e^{ax} d(ax) = \frac{1}{a} e^{ax} + c$ (134)
$\displaystyle \int \sin(ax) dx$ $\displaystyle =$ $\displaystyle \frac{1}{a} \int \sin(ax) d(ax) = - \frac{1}{a} \cos(ax) + c$ (135)
$\displaystyle \int \cos(ax) dx$ $\displaystyle =$ $\displaystyle \frac{1}{a} \int \cos(ax) d(ax) = \frac{1}{a} \sin(ax) + c$ (136)
$\displaystyle \int \frac{dx}{x}$ $\displaystyle =$ $\displaystyle \ln(x) + c$ (137)
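A few of these entries can be spot-checked with sympy; the short sketch below is added here for convenience (sympy leaves out the constant of integration $ c$ ).

\begin{verbatim}
# Spot-checks (assumed examples) of table entries (134)-(137).
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)
print(sp.integrate(sp.exp(a*x), x))   # exp(a*x)/a
print(sp.integrate(sp.sin(a*x), x))   # -cos(a*x)/a
print(sp.integrate(sp.cos(a*x), x))   # sin(a*x)/a
print(sp.integrate(1/x, x))           # log(x)
\end{verbatim}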

It's worth doing a couple of examples to show how to do integrals using these rules. One integral that appears in many physics problems in E&M is:

$\displaystyle \int_0^R \frac{r\ dr}{(z^2 + r^2)^{3/2}}$ (138)

This integral is done using $ u$ -substitution, which is just the chain rule used backwards. We look at it for a second or two and note that if we let

$\displaystyle u = (z^2 + r^2)$ (139)

then

$\displaystyle du = 2 r dr$ (140)

and we can rewrite this integral as:
$\displaystyle \int_0^R \frac{r\ dr}{(z^2 + r^2)^{3/2}}$ $\displaystyle =$ $\displaystyle \frac{1}{2} \int_0^R \frac{ 2 r\ dr}{(z^2 + r^2)^{3/2}}$  
  $\displaystyle =$ $\displaystyle \frac{1}{2} \int_{z^2}^{(z^2+R^2)} u^{-3/2}\ du$  
  $\displaystyle =$ $\displaystyle - u^{-1/2} \bigg\vert _{z^2}^{(z^2+R^2)}$  
  $\displaystyle =$ $\displaystyle \frac{1}{z} - \frac{1}{(z^2 + R^2)^{1/2}}$ (141)

The lesson is that we can often do complicated looking integrals by making a suitable $ u$ -substitution that reduces them to a simple integral we know from our table.
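The closed form of equation (141) is also easy to check numerically. The short sketch below is an added check, with the arbitrary illustrative values $ z = 2$ and $ R = 3$ assumed for the comparison.

\begin{verbatim}
# Numerical check (assumed values z = 2, R = 3) of eq. (141).
import numpy as np
from scipy.integrate import quad

z, R = 2.0, 3.0
numeric, _ = quad(lambda r: r / (z**2 + r**2)**1.5, 0.0, R)
closed_form = 1.0/z - 1.0/np.sqrt(z**2 + R**2)
print(numeric, closed_form)           # both approximately 0.2226
\end{verbatim}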

The next one illustrates both integration by parts and doing integrals with infinite upper bounds. Let us evaluate:

$\displaystyle \int_0^\infty x^2 e^{-ax} dx$ (142)

Here we identify two pieces. Let:

$\displaystyle h(x) = x^2$ (143)

and

$\displaystyle d(g(x)) = e^{-ax} dx = -\frac{1}{a} e^{-ax} d(-ax) = - \frac{1}{a}d(e^{-ax})$ (144)

or $ g(x) = -(1/a)e^{-ax}$ . Then our rule for integration by parts becomes (note that the boundary term $ h(x)g(x)$ vanishes at both limits for $ a > 0$ ):
$\displaystyle \int_0^\infty x^2 e^{-ax} dx$ $\displaystyle =$ $\displaystyle \int_0^\infty h(x) dg$  
  $\displaystyle =$ $\displaystyle h(x)g(x) \bigg\vert _0^\infty - \int_0^\infty g(x) dh$  
  $\displaystyle =$ $\displaystyle -\frac{1}{a} x^2 e^{-ax} \bigg\vert _0^\infty + \frac{1}{a}\int_0^\infty e^{-ax} 2x dx$  
  $\displaystyle =$ $\displaystyle \frac{2}{a}\int_0^\infty x e^{-ax} dx$  

We repeat this process with $ h(x) = x$ and with $ g(x)$ unchanged:
$\displaystyle \int_0^\infty x^2 e^{-ax} dx$ $\displaystyle =$ $\displaystyle \frac{2}{a}\int_0^\infty x e^{-ax} dx$  
  $\displaystyle =$ $\displaystyle -\frac{2}{a^2} x e^{-ax} \bigg\vert _0^\infty + \frac{2}{a^2}\int_0^\infty e^{-ax} dx$  
  $\displaystyle =$ $\displaystyle \frac{2}{a^2}\int_0^\infty e^{-ax} dx$  
  $\displaystyle =$ $\displaystyle - \frac{2}{a^3}\int_0^\infty e^{-ax} d(-ax)$  
  $\displaystyle =$ $\displaystyle - \frac{2}{a^3} e^{-ax} \bigg\vert _0^\infty = \frac{2}{a^3}$ (145)

If we work a little more generally (repeating this process $ n$ times for non-negative integer $ n$ and $ a > 0$ ), we can show that:

$\displaystyle \int_0^\infty x^n e^{-ax} dx = \frac{n!}{a^{n+1}}$ (146)

This is just one illustration of the power of integration by parts to help us do integrals that on the surface appear to be quite difficult.
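The worked case above ($ n = 2$ , giving $ 2/a^3$ ) and the general result (146) can both be checked with sympy. The sketch below is an added verification, with a few small values of $ n$ assumed purely for illustration.

\begin{verbatim}
# Check (assumed example) of eq. (146): int_0^infty x**n e^{-a x} dx
# equals n!/a**(n+1) for non-negative integer n and a > 0.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)
for n in (1, 2, 3):
    val = sp.integrate(x**n * sp.exp(-a*x), (x, 0, sp.oo))
    print(n, val, sp.factorial(n) / a**(n+1))   # the two columns agree
\end{verbatim}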


Robert G. Brown 2011-04-19