Differential Forms are simpler than you’ve been told
TL;DR
The great success of differential forms is to generalize determinants and
cofactor expansions via the wedge product, giving a viable theory of
algebraic area, and then to use this theory to relate integrals over
k-dimensional “volumes” with integrals over their (k-1)-dimensional boundary
“surfaces”.
Once you understand the wedge product and the importance of multi-linearity,
the definition of the exterior derivative is almost inevitable, and the rest
is just book-keeping.
This post explains the relationship between the various concepts that are defined in textbooks.
Abstract
I’ve struggled to understand differential forms for many years. I have several books that present differential forms and the generalized Stokes theorem at various levels of sophistication, and none of them made the subject click into place in my head.
I recently had an epiphany, not from a new book but rather from reframing and untangling bits and pieces from those books.
In this post I’ll just explain the roles of the various objects that are defined in books and materials on differential forms, and how they fit together. I won’t go into the heavy stuff, because that is well covered in those materials. The complete theory requires quite a bit of algebraic machinery. This post is meant to help you through the journey.
Red herrings and obscurity
Books on differential forms and Stokes’s Theorem pay a heavy toll for handling some historical baggage:
- A smooth transition from “div, grad, curl” and the classic integral theorems
- The classical dx and dy notation in integrals
I think those are red herrings. Differential forms and generalized Stokes are
clearer than the classical path.
And the books tend to be either full-on axiomatic and algebraic, or frustratingly informal and vague, defining differential forms as “something you integrate” or as “a formalism”. Yikes, what a silly thing to say.
Forms are generalizations of determinants
Determinants are defined on square matrices or, viewed from another vantage point, are defined to take N vectors in an N-dimensional vector space and compute the “volume” of the parallelepiped they span.
When learning the theory of determinants, you probably saw them characterized as multilinear, alternating functions, motivated by the need to measure volumes, and to detect linearly dependent sets of vectors.
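As a quick numerical illustration (a sketch of my own in Python/NumPy, with arbitrarily chosen vectors), here is the determinant doing both jobs at once: measuring signed area and detecting linear dependence.

```python
import numpy as np

# Signed area of the parallelogram spanned by the column vectors u and v.
u, v = np.array([2.0, 0.0]), np.array([1.0, 3.0])
print(np.linalg.det(np.column_stack([u, v])))       # 6.0: the parallelogram's area

# Swapping the vectors flips the sign: the "alternating" property.
print(np.linalg.det(np.column_stack([v, u])))       # -6.0

# Linearly dependent vectors span no area, so the determinant vanishes.
print(np.linalg.det(np.column_stack([u, 2 * u])))   # 0.0
```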
A form is, like a determinant, a multilinear, alternating function of several vectors into the real numbers ℝ.
For example, in ℝ³ we can define a function of two vectors u and v that returns u₁v₂ − u₂v₁, the signed area of the projection of u and v onto the x-y plane.
This function is easily seen to be bilinear and alternating.
Note that we could have two linearly independent vectors on which this function returns zero.
For example, two independent vectors lying in the y-z plane span a perfectly good parallelogram, yet the function returns zero because their projection onto the x-y plane is degenerate.
Once you realize this, it’s easier to understand all the trouble the books will go through to prove that the space of multilinear alternating K-forms on an N-dimensional space is a vector space with dimension “N choose K”, the binomial coefficient N!/(K!(N−K)!).
In ℝ³, let’s define a basis of 1-forms: dx, dy, and dz, where dx(v) reads off the first component of v, and similarly for dy and dz.
And a basis of 2-forms: the three projected-area functions from above, one for each of the x-y, y-z, and z-x coordinate planes.
Note that we said that the space of K-forms on an N-dimensional space is a vector space with dimension “N choose K”; indeed we found “3 choose 1” = 3 basis 1-forms and “3 choose 2” = 3 basis 2-forms.
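Here is a small Python sketch of the discussion above (my own code; the name phi, like the particular vectors, is just an illustrative choice): a bilinear, alternating function on ℝ³ that nevertheless returns zero on two linearly independent vectors.

```python
import numpy as np

def phi(u, v):
    """A 2-form on R^3: signed area of the projection of (u, v) onto the x-y plane."""
    return u[0] * v[1] - u[1] * v[0]

u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, 1.0, 0.0])

print(phi(u, v), phi(v, u))                          # -5.0 5.0: a swap flips the sign
print(phi(2 * u + v, v), 2 * phi(u, v) + phi(v, v))  # -10.0 -10.0: linear in the first slot

# Two linearly independent vectors in the y-z plane still give zero,
# because their projection onto the x-y plane is degenerate.
a = np.array([0.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
print(phi(a, b))                                     # 0.0
```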
The wedge product generalizes determinant cofactors
When computing determinants, one will generally use the Laplace expansion, aka cofactor expansion, to decompose a determinant into a sum of smaller determinants.
Working downward on dimension, we compute a 3×3 determinant as a weighted addition of 2×2 minors.
Or, working upward on dimension: given that we have 2×2 determinants in the x-y plane to measure surface area, how could we build 3×3 determinants to compute volumes with the extra z dimension?
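As a concrete check of the cofactor expansion just described (my own sketch; the matrix is an arbitrary example), here is a 3×3 determinant expanded along its first row and compared with NumPy's built-in determinant.

```python
import numpy as np

def det3_by_cofactors(m):
    """3x3 determinant via Laplace (cofactor) expansion along the first row."""
    total = 0.0
    for j in range(3):
        # Minor: delete row 0 and column j, then take the 2x2 determinant.
        minor = np.delete(np.delete(m, 0, axis=0), j, axis=1)
        total += m[0, j] * (-1) ** j * np.linalg.det(minor)
    return total

m = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 2.0, 5.0]])
print(det3_by_cofactors(m), np.linalg.det(m))   # both ≈ 9.0
```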
We’ve defined the basis 1-forms dx and dy. A first attempt at a 2-form might be the plain product dx(u)·dy(v).
However, note that the expression above could give a positive volume for a degenerate 2-dimensional parallelepiped.
Let’s choose, for instance, u = v = (1, 1, 0), and observe: dx(u)·dy(v) = 1·1 = 1, even though u and v span no area at all.
The failure here is we combined dx and dy as a plain product, which is bilinear but not alternating.
The wedge product can combine the forms while keeping the result alternating:
(dx ∧ dy)(u, v) = dx(u)·dy(v) − dx(v)·dy(u).
Notice that swapping u and v flips the sign, and feeding the same vector twice gives zero.
So, dx ∧ dy is again a bilinear, alternating function: a 2-form.
With the previous example we now get (dx ∧ dy)(u, v) = 1·1 − 1·1 = 0, since u = v.
In determinants, one ends up computing a sum made up of products of permuted vector components.
The wedge product generalizes this: given two alternating forms, it shuffles them in such a way that the result is also an alternating form. And the crazy thing is, by this simple insistence on getting another multilinear alternating function, we end up with a viable and consistent theory of algebraic volume.
The proofs and computations around the wedge product are a bit of a slog, but knowing that they’re just repeating the determinant’s trick of permuting components, it all makes more sense.
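To see the shuffling in the simplest case, here is a sketch (my own code, representing 1-forms as plain Python functions) of the wedge of two basis 1-forms: it reproduces a 2×2 minor and is alternating by construction.

```python
import numpy as np

def wedge(alpha, beta):
    """Wedge of two 1-forms: combine them with alternating signs."""
    return lambda u, v: alpha(u) * beta(v) - alpha(v) * beta(u)

# Basis 1-forms on R^3: each one reads off a single coordinate.
dx = lambda v: v[0]
dy = lambda v: v[1]

dx_dy = wedge(dx, dy)

u = np.array([1.0, 2.0, 5.0])
v = np.array([3.0, 1.0, -2.0])

# The wedge reproduces the 2x2 minor of the x and y components...
print(dx_dy(u, v), np.linalg.det(np.column_stack([u[:2], v[:2]])))   # -5.0 -5.0

# ...and it is alternating: the same vector twice gives zero, a swap flips the sign.
print(dx_dy(u, u), dx_dy(v, u))                                      # 0.0 5.0
```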
Differential forms are just form fields
A vector field is an assignment of a vector to each point in a space. A differential form is an assignment of a form to each point in a space. That’s it, there’s nothing more there. Differential forms should be called form fields. And that’s what I’ll call them for the remainder of this post.
We already know a 1-form field: the derivative.
In the generalized realm of functions on vector spaces, the derivative of a function f: ℝᴺ → ℝ at a point p is a linear map from ℝᴺ to ℝ.
A linear map into ℝ is exactly a 1-form (multilinearity and the alternating property are trivial with a single vector slot), and the derivative assigns one such map to every point: it is a 1-form field.
In several books there is a long preparation leading to the definition of a differential form. Really, it’s nothing much on top of forms. And the term differential forms makes them sound all too fancy. I don’t see anything intrinsically “differential” about them.
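Here is a small sketch of the derivative as a 1-form field (my own code, with an arbitrary function f and a finite-difference gradient): at each point p, the derivative of f is a linear map from vectors to numbers, and that map changes as p changes.

```python
import numpy as np

def f(p):
    """An arbitrary smooth function on R^2, used only for illustration."""
    return p[0] ** 2 * p[1] + np.sin(p[1])

def df(p):
    """The derivative of f at p, returned as a 1-form: a linear map R^2 -> R."""
    h = 1e-6
    grad = np.array([(f(p + h * e) - f(p - h * e)) / (2 * h) for e in np.eye(2)])
    return lambda v: grad @ v        # v -> directional derivative of f at p along v

p = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])

print(df(p)(v))                      # the 1-form at p, applied to v
print(df(np.array([0.0, 0.0]))(v))   # a different 1-form at a different point
```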
Integrals in the small are approximately forms
In the general setting of functions on vector spaces, the derivative of a function at a point is the linear map that best approximates the function in a small neighborhood of the point.
In the same spirit, we can examine the integral of a function over a small k-dimensional parallelepiped.
For a small enough parallelepiped spanned by vectors v₁, …, vₖ based at a point p, the integral is approximately the value of some k-form at p applied to v₁, …, vₖ.
This point especially explains why forms are useful to the theory of integration. While the derivative is the linear function that approximates a given function locally, a form field approximates integrals over k-volumes locally.
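Here is a numerical sketch of that claim in dimension 2 (my own code; the integrand f, the base point, and the edge vectors are arbitrary, positively oriented choices): the integral of f over a tiny parallelogram at p is approximately f(p) times the value of the 2-form dx ∧ dy on the parallelogram's edge vectors.

```python
import numpy as np

def f(p):
    return np.exp(p[0]) * np.cos(p[1])              # an arbitrary smooth integrand

def integral_over_parallelogram(p, u, v, n=50):
    """Midpoint Riemann sum of f over the parallelogram {p + s*u + t*v : 0 <= s, t <= 1}."""
    area = abs(u[0] * v[1] - u[1] * v[0])           # |det [u v]|, the true area
    s = (np.arange(n) + 0.5) / n
    total = sum(f(p + si * u + ti * v) for si in s for ti in s)
    return total / n ** 2 * area

p = np.array([0.3, 0.7])
eps = 1e-2
u, v = eps * np.array([1.0, 0.5]), eps * np.array([0.2, 1.0])

dx_wedge_dy = u[0] * v[1] - u[1] * v[0]             # the 2-form evaluated on (u, v)
print(integral_over_parallelogram(p, u, v))         # the actual integral
print(f(p) * dx_wedge_dy)                           # the form-based approximation
```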
Forms dictate how domains of integration count for the integral
We’ve seen the basis forms dx, dy, and so on.
In n-space, even the simplest integral needs to choose a direction.
Looking at the integral as the limit of a sum, let’s define our domain of integration to be a line segment, chopped into many small displacement vectors Δ₁, …, Δₘ.
Let’s integrate the 1-form dx along it: the integral is the limit of the sums of dx(Δᵢ).
Imagine the segment runs parallel to the y-axis.
Then the dx of every little displacement is zero, and the whole integral is zero, no matter how long the segment is.
Now imagine the segment runs along the x-axis. Then the sum of the dx(Δᵢ) is just the total x-displacement.
And this looks very much like a plain old integral from single-variable calculus.
The form dx only “counts” the part of the domain of integration that points in the x direction.
In the rest of this post we only need to consider domains of integration that are perpendicular or parallel to the form being integrated.
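Here is a sketch of exactly that computation (my own code; the segments are arbitrary): integrating the 1-form dx along a straight segment by chopping it into many small displacements.

```python
import numpy as np

def integrate_1form(omega, start, end, n=1000):
    """Riemann sum of a 1-form along the straight segment from start to end:
    chop the segment into n small displacement vectors and sum omega over them."""
    step = (end - start) / n
    return sum(omega(step) for _ in range(n))

dx = lambda v: v[0]      # the basis 1-form dx just reads off the x-component

# A segment parallel to the y-axis: dx sees no x-displacement, the integral is 0.
print(integrate_1form(dx, np.array([2.0, 0.0]), np.array([2.0, 5.0])))   # 0.0

# A segment along the x-axis from x = 1 to x = 4: the integral is the x-displacement.
print(integrate_1form(dx, np.array([1.0, 0.0]), np.array([4.0, 0.0])))   # ≈ 3.0
```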
The exterior derivative is … what you’d expect
Having established that in the small, integrals are approximately forms, and that forms can be combined into higher-dimensional forms, the exterior derivative is almost low hanging fruit.
Let’s remember that, for a function f, the derivative at a point p is built from differences f(p + v) − f(p) for small displacements v.
We’re going to invent the new operation of the exterior differential, written d, which turns a k-form field into a (k+1)-form field.
Let’s imagine we have a k-form field. The general k-form field is a linear combination of basis k-forms, and as we saw already, there are “N choose k” of those.
Note that the functions serving as coefficients in that combination vary from point to point; that is what makes it a field rather than a single form.
We lose no generality by looking at a single summand: one coefficient function multiplying one basis k-form.
For convenience, let’s just call this summand ω.
Now, a differential or exterior derivative ought to be found by taking differences at points that are close, just as with the derivative of a function: evaluate ω on the same vectors at a point p and at a nearby point, subtract, and antisymmetrize over which of the vectors in play is used as the displacement. In the simplest case of a 1-form field ω and two small vectors u and v at p:
dω(u, v) ≈ [ω(p + u)(v) − ω(p)(v)] − [ω(p + v)(u) − ω(p)(u)],
where ω(q)(w) means the form at the point q applied to the vector w.
The combination has one more input vector than ω itself, and it is again multilinear and alternating.
It’s important, that’s why it gets a frame.
Let’s go back to this part: ω(p + u)(v). That’s a mini-integral at point p + u: as we saw in the last section, evaluating a form on small vectors approximates an integral over the small parallelepiped they span.
Using the approximation of integrals by forms, the whole combination above is a signed sum of four mini-integrals of ω, one for each edge of the small parallelogram spanned by u and v at p. In other words:
dω(u, v) ≈ the integral of ω over the oriented boundary of the parallelogram spanned by u and v at p.
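Here is a numerical sketch of the difference formula (my own code and sign conventions, with an arbitrary coefficient function f): for ω = f dx in the plane, the combination converges to −∂f/∂y times the value of dx ∧ dy on (u, v).

```python
import numpy as np

def f(p):
    return p[0] ** 2 + 3.0 * p[0] * p[1]             # an arbitrary smooth coefficient

def omega(p):
    """The 1-form field f dx: at the point p, the 1-form v -> f(p) * v_x."""
    return lambda v: f(p) * v[0]

def d_omega(p, u, v):
    """Finite-difference exterior derivative: compare omega at nearby points,
    antisymmetrized over which vector plays the role of the displacement."""
    return (omega(p + u)(v) - omega(p)(v)) - (omega(p + v)(u) - omega(p)(u))

p = np.array([1.0, 2.0])
eps = 1e-3
u, v = eps * np.array([1.0, 0.0]), eps * np.array([0.0, 1.0])

# For omega = f dx the limit is -df/dy * (dx ^ dy), here -3x * (u_x v_y - u_y v_x).
print(d_omega(p, u, v))                              # ≈ -3e-06
print(-3.0 * p[0] * (u[0] * v[1] - u[1] * v[0]))     # -3e-06
```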
Bringing it all together … stopping before Stokes
The last equation smacks of the fundamental theorem of calculus. Let’s corroborate this.
We’re going to pick a very simple domain of integration: a square in the x-y plane, chopped into a fine grid of tiny sub-squares. We work with a single-term 1-form field ω = f dx, and we add up dω over all the sub-squares, approximating each tiny contribution using the formula from last section…
As happens in single-variable calculus, the sums of differences have a telescoping effect, and collapse to a single subtraction. One would suspect: the integral of dω over the square should equal the integral of ω over the square’s oriented boundary.
Let’s prove it.
Note that reordering the two arguments of a 2-form just flips its sign, so dy ∧ dx = −(dx ∧ dy).
Much as I commented that getting div, curl, grad is a red herring, it is satisfying to compute d(f dx + g dy) = (∂g/∂x − ∂f/∂y) dx ∧ dy just so easily. Yes, I know, that was implicitly done in dimension 2. If you did it in dimension 3, you would get the other components of the classical curl as well.
OK, back to our proof: summing the difference formula over all the tiny squares of the grid, the terms telescope in the vertical direction, and only two boundary sums survive, one along the bottom edge of the square and one along the top edge, with opposite signs.
We thus get our proof of a sort of “Fundamental Theorem of Calculus”, but we’re not quite there yet.
These two integrals are across the bottom and top edges of the square. That is not yet, literally, an integral of ω across the oriented boundary of the whole square: the two vertical edges are missing, though dx contributes nothing on them anyway, since they are perpendicular to the form.
We’ve only done one term of a general form field, over the simplest possible domain, and we have kept the bookkeeping of orientation informal.
It’s interesting to realize that the fundamental part of this result, and the fundamental part of Stokes’s Theorem as done with forms, is the definition of the exterior derivative.
Once we got to the difference formula defining the exterior derivative, everything else was simple follow-through.
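To close the loop numerically, here is a sketch (my own code; f, the unit square, and the grid size are arbitrary choices) checking that the sum of the exterior-derivative combinations over a grid of tiny squares equals the integral of ω = f dx around the square's oriented boundary, where only the bottom and top edges contribute.

```python
import numpy as np

def f(p):
    return np.sin(p[0]) * p[1] ** 2 + p[0]           # an arbitrary smooth coefficient

def omega(p):
    """The 1-form field f dx: at the point p, the 1-form v -> f(p) * v_x."""
    return lambda v: f(p) * v[0]

n = 200                                              # grid resolution
h = 1.0 / n
ex, ey = np.array([h, 0.0]), np.array([0.0, h])

# Left side: sum the exterior-derivative combination over every tiny grid square.
lhs = 0.0
for i in range(n):
    for j in range(n):
        p = np.array([i * h, j * h])
        lhs += (omega(p + ex)(ey) - omega(p)(ey)) - (omega(p + ey)(ex) - omega(p)(ex))

# Right side: integrate f dx around the unit square's boundary, counterclockwise.
# dx vanishes on the two vertical edges, so only the bottom edge (traversed in the
# +x direction) and the top edge (traversed in the -x direction) contribute.
rhs = 0.0
for i in range(n):
    rhs += omega(np.array([i * h, 0.0]))(ex)         # bottom edge
    rhs += omega(np.array([i * h, 1.0]))(-ex)        # top edge

print(lhs, rhs)                                      # the two sums agree, up to rounding
```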
Michael Spivak writes this in Calculus on Manifolds:
Stokes’ theorem shares three important attributes with many fully evolved major theorems:
- It is trivial.
- It is trivial because the terms appearing in it have been properly defined.
- It has significant consequences.
There’s more… elsewhere
There are more pieces to the narrative, which you can find in the books. Majorly missing from this post:
- integrating the general form with all of its terms
- generalized “cubes” and their boundaries in N-space via chains
- integrating over non-rectangular domains via pullbacks
- a proper discussion of orientation
- defining exterior differentiation in a coordinate-free way
I haven’t found a single book that made everything click, and several popular ones left me frustrated, so I’ll ignore them. However, here are three books I recommend:
- Harold Edwards Advanced Calculus: A Differential Forms Approach
- Serge Lang Undergraduate Analysis
- Bamberg & Sternberg A Course in Mathematics for Students of Physics
Edwards and Bamberg & Sternberg both are introductory and try to motivate things. Lang gives a quick intro without much motivation and gets to Stokes with ease and no fuss.
Honorable mention to Michael Spivak’s Calculus on Manifolds and A Comprehensive Introduction to Differential Geometry. They are a bit uneven, in that Spivak just goes full-steam on multilinear algebra without much in the way of motivation, but then, they do get to the point fast, and Spivak knows how to tell a story with math.
Godspeed.