
# Engineering Mathematics 1 NEW: First-Order Linear Differential Equations (linear differential equation)


Engineering Mathematics 1 NEW: First-Order Linear Differential Equations (linear differential equation) part01

Engineering Mathematics 1 NEW: First-Order Linear Differential Equations (linear differential equation) part02

Engineering Mathematics 1 NEW: First-Order Linear Differential Equations (linear differential equation) part03

Engineering Mathematics 1 NEW: First-Order Linear Differential Equations (linear differential equation) part04

In mathematics, linear differential equations are differential equations whose solutions can be added together in particular linear combinations to form further solutions. They equate 0 to a polynomial that is linear in the value and the various derivatives of a variable; linearity means that each term of the polynomial has degree either 0 or 1.

Linear differential equations can be ordinary (ODEs) or partial (PDEs).

The solutions to (homogeneous) linear differential equations form a vector space (unlike non-linear differential equations).

## Basic features

Linear differential equations are of the form

$Ly=f$ where the differential operator L is a linear operator, y is the unknown function, and the right-hand side f is a given function (called the source term) of the same variable. For a function dependent on time we may write the equation more explicitly as

$Ly(t)=f(t)$ and, even more precisely, by bracketing

$L[y(t)]=f(t)$ .

The linear operator L may be considered to be of the form

$L_{n}(y)\equiv {\frac {d^{n}y}{dt^{n}}}+A_{1}(t){\frac {d^{n-1}y}{dt^{n-1}}}+\cdots +A_{n-1}(t){\frac {dy}{dt}}+A_{n}(t)y$ The linearity condition on L rules out operations such as taking the square of the derivative of y; but permits, for example, taking the second derivative of y. It is convenient to rewrite this equation in an operator form

$L_{n}(y)\equiv \left[\,D^{n}+A_{1}(t)D^{n-1}+\cdots +A_{n-1}(t)D+A_{n}(t)\right]y$ where D is the differential operator d/dt (i.e. $Dy=y'=dy/dt$, $D^{2}y=y''=d^{2}y/dt^{2}$, ...), and the $A_{i}$ are given functions.

Such an equation is said to have order n, the index of the highest derivative of y that is involved.

A typical simple example is the linear differential equation used to model radioactive decay. Let N(t) denote the number of radioactive atoms remaining in some sample of material at time t. Then for some constant k > 0, the rate at which the radioactive atoms decay can be modelled by

${\frac {dN}{dt}}=-kN$

If y is assumed to be a function of only one variable, one speaks of an ordinary differential equation; otherwise the derivatives and their coefficients must be understood as (contracted) vectors, matrices or tensors of higher rank, and we have a (linear) partial differential equation.
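The decay model can be checked numerically; below is a minimal sketch comparing forward Euler integration against the exact solution $N(t)=N_{0}e^{-kt}$ (the values of `n0`, `k`, `t`, and the step count are arbitrary illustration choices):

```python
import math

def decay_euler(n0, k, t_end, steps):
    """Integrate dN/dt = -k*N from 0 to t_end with the forward Euler method."""
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        n += dt * (-k * n)    # step along the slope dN/dt = -kN
    return n

n0, k, t = 1000.0, 0.5, 4.0
approx = decay_euler(n0, k, t, steps=100_000)
exact = n0 * math.exp(-k * t)   # closed-form solution of the ODE
print(approx, exact)
```

With a small enough step, the Euler estimate agrees with the closed-form solution to several digits, which is exactly the behaviour the linear model predicts.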

The case where f = 0 is called a homogeneous equation and its solutions are called complementary functions. It is particularly important to the solution of the general case, since any complementary function can be added to a solution of the inhomogeneous equation to give another solution (by a method traditionally called particular integral and complementary function). When the Ai are numbers, the equation is said to have constant coefficients.

## Homogeneous equations with constant coefficients

The first method of solving linear homogeneous ordinary differential equations with constant coefficients is due to Euler, who realized that solutions have the form $e^{zx}$, for possibly complex values of z. The exponential function is one of the few functions to keep its shape after differentiation, allowing the sum of its multiple derivatives to cancel out to zero, as required by the equation. Thus, for constant values $A_{1},\dots ,A_{n}$, to solve:

$y^{(n)}+A_{1}y^{(n-1)}+\cdots +A_{n}y=0\,,$ we set $y=e^{zx}$, leading to

$z^{n}e^{zx}+A_{1}z^{n-1}e^{zx}+\cdots +A_{n}e^{zx}=0.$ Division by $e^{zx}$ gives the nth-order polynomial:

$F(z)=z^{n}+A_{1}z^{n-1}+\cdots +A_{n}=0.\,$ This algebraic equation F(z) = 0 is the characteristic equation considered later by Gaspard Monge and Augustin-Louis Cauchy.

Formally, the terms $y^{(k)}\ (k=1,2,\dots ,n)$ of the original differential equation are replaced by $z^{k}$. Solving the polynomial gives n values of z: $z_{1},\dots ,z_{n}$. Substitution of any of those values for z into $e^{zx}$ gives a solution $e^{z_{i}x}$. Since homogeneous linear differential equations obey the superposition principle, any linear combination of these functions also satisfies the differential equation.

When these roots are all distinct, we have n distinct solutions to the differential equation. It can be shown that these are linearly independent, by applying the Vandermonde determinant, and together they form a basis of the space of all solutions of the differential equation.

### Examples
$y''''-2y'''+2y''-2y'+y=0$ has the characteristic equation

$z^{4}-2z^{3}+2z^{2}-2z+1=0.$ This has zeros i, −i, and 1 (multiplicity 2). The solution basis is then

$e^{ix},\,e^{-ix},\,e^{x},\,xe^{x}.$ This corresponds to the real-valued solution basis

$\cos x,\,\sin x,\,e^{x},\,xe^{x}\,.$ The preceding gave a solution for the case when all zeros are distinct, that is, each has multiplicity 1. For the general case, if z is a (possibly complex) zero (or root) of F(z) having multiplicity m, then, for $k\in \{0,1,\dots ,m-1\}$, $y=x^{k}e^{zx}$ is a solution of the ordinary differential equation. Applying this to all roots gives a collection of n distinct and linearly independent functions, where n is the degree of F(z). As before, these functions make up a basis of the solution space.
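The roots of the example's characteristic polynomial can be found numerically; here is a minimal sketch using NumPy (the tolerance in the checks is an illustration choice, not part of the method):

```python
import numpy as np

# Coefficients of the characteristic polynomial F(z) = z^4 - 2z^3 + 2z^2 - 2z + 1
coeffs = [1, -2, 2, -2, 1]
roots = np.roots(coeffs)

# Expected: i, -i, and a double root at 1, since F factors as (z^2 + 1)(z - 1)^2
for expected in (1.0, 1j, -1j):
    assert min(abs(roots - expected)) < 1e-6
print(roots)
```

Each computed root z then contributes a basis function $e^{zx}$ (and $xe^{zx}$ for the repeated root), matching the basis listed above.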

If the coefficients Ai of the differential equation are real, then real-valued solutions are generally preferable. Since non-real roots z then come in conjugate pairs, so do their corresponding basis functions xkezx, and the desired result is obtained by replacing each pair with their real-valued linear combinations Re(y) and Im(y), where y is one of the pair.

A case that involves complex roots can be solved with the aid of Euler's formula.

...

## Systems of linear differential equations

An arbitrary linear ordinary differential equation or even a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. A linear system can be viewed as a single equation with a vector-valued variable. The general treatment is analogous to the treatment above of ordinary first order linear differential equations, but with complications stemming from noncommutativity of matrix multiplication.
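The reduction to a first-order system can be sketched for a constant-coefficient equation via its companion matrix; the helper name `companion` and the state ordering $(y, y', \dots, y^{(n-1)})$ are illustrative choices, not a standard API:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of y^(n) + A1*y^(n-1) + ... + An*y = 0.

    coeffs = [A1, ..., An]; the state vector is (y, y', ..., y^(n-1)),
    so the n-th order equation becomes the first-order system y_vec' = M @ y_vec.
    """
    n = len(coeffs)
    m = np.zeros((n, n))
    m[:-1, 1:] = np.eye(n - 1)                   # y_i' = y_{i+1} for the added variables
    m[-1, :] = [-c for c in reversed(coeffs)]    # last row: solve the ODE for y^(n)
    return m

# y'' + y = 0 (A1 = 0, A2 = 1) becomes a 2x2 first-order system
M = companion([0.0, 1.0])
print(M)   # eigenvalues of M are the characteristic roots +-i
```

The eigenvalues of the companion matrix are exactly the roots of the characteristic polynomial, which is why the first-order formulation reproduces the scalar theory above.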

To solve

$\left\{{\begin{array}{rl}\mathbf {y} '(x)&=A(x)\mathbf {y} (x)+\mathbf {b} (x)\\\mathbf {y} (x_{0})&=\mathbf {y} _{0}\end{array}}\right.$ (here $\mathbf {y} (x)$ is a vector or matrix, and $A(x)$ is a matrix), let $U(x)$ be the solution of $\mathbf {y} '(x)=A(x)\mathbf {y} (x)$ with $U(x_{0})=I$ (the identity matrix). $U$ is a fundamental matrix for the equation — the columns of $U$ form a complete linearly independent set of solutions for the homogeneous equation. After substituting $\mathbf {y} (x)=U(x)\mathbf {z} (x)$ , the equation $\mathbf {y} '(x)=A(x)\mathbf {y} (x)+\mathbf {b} (x)$ simplifies to $U(x)\mathbf {z} '(x)=\mathbf {b} (x).$ Thus,

$\mathbf {y} (x)=U(x)\mathbf {y_{0}} +U(x)\int _{x_{0}}^{x}U^{-1}(t)\mathbf {b} (t)\,dt$ If $A(x_{1})$ commutes with $A(x_{2})$ for all $x_{1}$ and $x_{2}$ , then

$U(x)=e^{\int _{x_{0}}^{x}A(x)\,dx}$ and thus

$U^{-1}(x)=e^{-\int _{x_{0}}^{x}A(x)\,dx},$ but in the general case there is no closed form solution, and an approximation method such as Magnus expansion may have to be used. Note that the exponentials are matrix exponentials.
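For a constant coefficient matrix (which trivially commutes with itself), the fundamental matrix reduces to $U(x)=e^{A(x-x_{0})}$. Below is a minimal sketch; a truncated Taylor series is used instead of a library routine such as `scipy.linalg.expm` to stay self-contained, and the rotation-generator $A$ and the 30-term cutoff are illustration choices:

```python
import numpy as np

def expm_series(a, terms=30):
    """Matrix exponential by truncated Taylor series: sum of a^n / n!.

    Adequate for matrices of small norm; production code should prefer a
    scaling-and-squaring routine (e.g. scipy.linalg.expm).
    """
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for n in range(1, terms):
        term = term @ a / n          # builds a^n / n! incrementally
        result = result + term
    return result

# A is y'' = -y written as a first-order system; it generates rotations,
# so U(x) = exp(A x) should be the rotation matrix by angle x.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
x = 0.7
U = expm_series(A * x)
expected = np.array([[np.cos(x), -np.sin(x)],
                     [np.sin(x),  np.cos(x)]])
print(np.allclose(U, expected))   # True
```

The columns of U are the two independent solutions $(\cos x, \sin x)$ and $(-\sin x, \cos x)$ of the system, i.e. exactly a fundamental matrix with $U(0)=I$.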

Source: Wikipedia (wikipedia.org)