Classifying PDEs
PDEs often have an infinite number of solutions; however, when applying them to physical systems, we usually assume that a deterministic, or at least a probabilistic, sequence of events will occur. Therefore, we impose additional constraints on a PDE, usually in the form of
- Initial conditions: values of the dependent variables over all space at an initial time and
- Boundary conditions: values of the dependent variables (or their derivatives) on the boundary of the spatial domain for all time.
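For instance (an illustration of our own, not from the text), the one-dimensional heat equation \(\partial_t u = \partial_{xx}^2 u\) on \(0 \le x \le 1\) might be constrained by $$\begin{align} u(x, 0) &= \sin(\pi x) && \text{(initial condition)} \\ u(0, t) = u(1, t) &= 0. && \text{(boundary conditions)} \end{align}$$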
Ideally, imposing such conditions leaves us with a well-posed problem, which has three aspects (Bove, Colombini, and Santo 2006, sec. 1.5):
- Existence: there exists at least one solution.
- Uniqueness: there exists at most one solution.
- Stability: if the PDE, boundary conditions, or initial conditions are changed slightly, the solution changes only slightly.
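The stability property can be sketched numerically (an illustration of our own, not from the text): evolve the 1D heat equation from two slightly different initial conditions and observe that the solutions stay close. The function name and grid parameters here are our own choices.

```python
import numpy as np

# Minimal sketch of stability (our own illustration): the 1D heat
# equation u_t = u_xx with fixed (Dirichlet) boundary values, solved
# from two slightly different initial conditions.

def heat_step(u, r):
    """One explicit finite-difference step; stable for r = dt/dx**2 <= 1/2."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new  # endpoints unchanged (Dirichlet boundary conditions)

x = np.linspace(0.0, 1.0, 51)
u_a = np.sin(np.pi * x)                   # initial condition
u_b = u_a + 1e-3 * np.sin(3 * np.pi * x)  # slightly perturbed initial condition

diff0 = np.max(np.abs(u_a - u_b))
for _ in range(200):
    u_a = heat_step(u_a, 0.4)
    u_b = heat_step(u_b, 0.4)
diff = np.max(np.abs(u_a - u_b))
print(diff0, diff)  # the small initial difference does not grow
```

The maximum difference between the two solutions never exceeds the initial perturbation, which is the sense in which a small change in the data yields only a small change in the solution.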
As with ODEs, PDEs can be linear or nonlinear; that is, the dependent variables and their derivatives can appear in only linear combinations (linear PDE) or in one or more nonlinear combinations (nonlinear PDE). As with ODEs, there are more known analytic solutions to linear PDEs than to nonlinear PDEs.
The order of a PDE is the order of its highest derivative. A great many physical models can be described by second-order PDEs or systems thereof. Let \(u\) be a dependent scalar variable, a function of \(m\) temporal and spatial independent variables \(x_i\). A second-order linear PDE has the form, for coefficients \(\alpha_{ij}\), \(\beta_k\), and \(\gamma\), each a real function of the \(x_i\) (Strauss 2007, sec. 1.6), $$\begin{align} \underbrace{\sum_{i=1}^m\sum_{j=1}^m \alpha_{ij} \partial_{x_i x_j}^2 u}_\text{second-order terms} + \underbrace{\sum_{k=1}^m \beta_{k} \partial_{x_k} u + \gamma u}_\text{first- and zeroth-order terms} = \underbrace{f(x_1,\cdots,x_m)}_\text{forcing} \label{eq:pde_second_order} \end{align}$$ where \(f\) is called a forcing function. When \(f\) is zero, the equation is called homogeneous. We can consider the coefficients \(\alpha_{ij}\) to be components of a matrix \(A\) with rows indexed by \(i\) and columns indexed by \(j\); because mixed partial derivatives of a smooth \(u\) are equal, \(A\) can be taken to be symmetric. There are four prominent classes defined by the eigenvalues of \(A\):
- Elliptic: the eigenvalues all have the same sign.
- Parabolic: the eigenvalues all have the same sign except one, which is zero.
- Hyperbolic: exactly one eigenvalue has the opposite sign of the others.
- Ultrahyperbolic: there are at least two eigenvalues of each sign.
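The eigenvalue test above can be sketched in code (a construction of our own; the function name and tolerance are assumptions, not from the text):

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a second-order PDE by the eigenvalue signs of its
    symmetric coefficient matrix A (sketch of the four classes above)."""
    w = np.linalg.eigvalsh(A)
    n = len(w)
    pos = int(np.sum(w > tol))
    neg = int(np.sum(w < -tol))
    zero = n - pos - neg
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic"
    if zero == 0 and (pos == 1 or neg == 1):
        return "hyperbolic"
    return "ultrahyperbolic"

# Laplace equation u_xx + u_yy = 0: A = diag(1, 1)
print(classify(np.diag([1.0, 1.0])))   # elliptic
# Wave equation u_xx - u_tt = 0: A = diag(1, -1)
print(classify(np.diag([1.0, -1.0])))  # hyperbolic
# Heat equation u_xx - u_t = 0 (no second derivative in t): A = diag(1, 0)
print(classify(np.diag([1.0, 0.0])))   # parabolic
```

Note that the sign convention is symmetric: multiplying the whole equation by \(-1\) flips every eigenvalue but must not change the class, which is why each test accepts either sign pattern.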
The first three of these classes have received extensive treatment. They are named after conic sections due to the analogy between the PDE and a polynomial in which derivatives correspond to powers of the polynomial's variables. For instance, here is a case of each of the first three classes, $$\begin{align} \partial_{xx}^2 u + \partial_{yy}^2 u &= 0 \tag{elliptic} \\ \partial_{xx}^2 u - \partial_{yy}^2 u &= 0 \tag{hyperbolic} \\ \partial_{xx}^2 u - \partial_{t} u &= 0. \tag{parabolic} \end{align}$$ When \(A\) depends on the \(x_i\), the equation may belong to different classes in different regions of its domain. In general, this equation and its associated initial and boundary conditions do not comprise a well-posed problem; however, several special cases have been shown to be well-posed. Thus far, the most general statement of existence and uniqueness is the Cauchy-Kowalevski theorem for Cauchy problems.
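A standard example of the class varying across the domain is the Tricomi equation \(\partial_{yy}^2 u + y\,\partial_{xx}^2 u = 0\), whose coefficient matrix is \(\mathrm{diag}(y, 1)\). The code below is our own sketch of that fact, not from the text:

```python
import numpy as np

# The Tricomi equation u_yy + y u_xx = 0 has coefficient matrix
# A = diag(y, 1), so its class depends on where in the domain we are.
def tricomi_class(y, tol=1e-12):
    w = np.linalg.eigvalsh(np.diag([y, 1.0]))
    if np.all(w > tol):
        return "elliptic"
    if np.any(np.abs(w) <= tol):
        return "parabolic"
    return "hyperbolic"

print(tricomi_class(1.0))   # elliptic where y > 0
print(tricomi_class(-1.0))  # hyperbolic where y < 0
print(tricomi_class(0.0))   # parabolic on the line y = 0
```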