In Monday's post (6/Jan/2014) we mentioned a proof of the existence of an antiderivative for any function continuous on an interval.
In today's post, I am going to supply an alternative proof of the same proposition. For the reader's convenience, I repeat the statement of the proposition here:
Proposition: Let $f$ be a real function, continuous on an interval $\Delta$, and let $a \in \Delta$ be a fixed point. Then the function $F(x)=\int_{a}^{x}f(t)dt$ is an antiderivative of $f$ on $\Delta$. In other words:
$$
F'(x) = \big( \int_{a}^{x}f(t)dt \big)' = f(x)
$$
for all $x \in \Delta$.
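As a quick sanity check (a concrete illustration of the statement, not part of the proposition itself), take for instance $f(t)=2t$ on $\Delta = \mathbb{R}$ with $a=0$:
$$
F(x) = \int_{0}^{x} 2t \, dt = x^{2}, \qquad F'(x) = 2x = f(x),
$$
so the proposition indeed holds for this particular choice of $f$ and $a$.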
Proof (alternative):
It is sufficient to show that for any fixed point $x_{0} \in \Delta$ we have $F'(x_{0})=f(x_{0})$. Let $x_{0}, x_{0}+h \in \Delta$ with $h \neq 0$. Then we can compute
$$
\begin{array}{c}
F(x_{0}+h) - F(x_{0}) = \int_{a}^{x_{0}+h}f(t)dt - \int_{a}^{x_{0}}f(t)dt = \\
\\
\bigg( \int_{a}^{x_{0}}f(t)dt + \int_{x_{0}}^{x_{0}+h}f(t)dt \bigg)- \int_{a}^{x_{0}}f(t)dt =
\int_{x_{0}}^{x_{0}+h}f(t)dt
\end{array}
$$
and since $h \neq 0$, this implies that
\begin{equation} \label{diff}
\frac{F(x_{0}+h) - F(x_{0})}{h} = \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt
\end{equation}
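For intuition, with the concrete choice $f(t)=2t$, $a=0$ from the sanity check above, the right-hand side of \eqref{diff} can be computed explicitly:
$$
\frac{1}{h} \int_{x_{0}}^{x_{0}+h} 2t \, dt = \frac{(x_{0}+h)^{2}-x_{0}^{2}}{h} = 2x_{0}+h \;\longrightarrow\; 2x_{0} = f(x_{0}) \quad (h \rightarrow 0),
$$
which is exactly the behaviour that the general argument below establishes for an arbitrary continuous $f$.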
In order to proceed, we will distinguish between two cases:
- $h > 0$ $\rightsquigarrow$ (I)
- $h < 0$ $\rightsquigarrow$ (II)
(I). $h > 0$: In this case $[x_{0},x_{0}+h] \subseteq \Delta$, so $f$ is continuous on the closed interval $[x_{0},x_{0}+h]$ and, by the extreme value theorem, it attains there a minimum value $m = f(c)$ and a maximum value $M = f(d)$, at some points $c, d \in [x_{0},x_{0}+h]$. Then
$$
\begin{array}{c}
mh \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq Mh \Leftrightarrow f(c)h \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d)h \Leftrightarrow \\
\\
\Leftrightarrow f(c) \leq \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d) \stackrel{\eqref{diff}}{\Leftrightarrow} f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d)
\end{array}
$$
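For completeness (this step is used implicitly in the leftmost estimate above): since $m \leq f(t) \leq M$ for every $t \in [x_{0},x_{0}+h]$, integrating over an interval of length $h > 0$ gives
$$
mh = \int_{x_{0}}^{x_{0}+h} m \, dt \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq \int_{x_{0}}^{x_{0}+h} M \, dt = Mh.
$$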
So we have concluded that
\begin{equation} \label{sand1}
f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d)
\end{equation}
At this point we have to make an observation: by the application of the extreme value theorem to the continuous function $f$ on the interval $[x_{0},x_{0}+h]$, both $c$ and $d$ depend in general on the value of $h > 0$. In other words, their values are actually functions of the positive $h$, so we can write $c(h)$ and $d(h)$. Not much needs to be said about these functions; their behaviour may be complicated in general (for example, you can provide an argument to show that $c(h), \ d(h)$ need not even be continuous in general!). However, we have:
\begin{equation} \label{concomplim1}
\lim_{h \rightarrow 0^{+}} c(h) = x_{0} \ , \qquad \lim_{h \rightarrow 0^{+}} d(h) = x_{0}
\end{equation}
\eqref{concomplim1} can be proved as a simple application of the $(\varepsilon, \delta)$-definition of the limit. Readers are advised to show this explicitly for practice!
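A possible sketch (the details are left as an exercise above): since $c(h) \in [x_{0},x_{0}+h]$, we have $|c(h) - x_{0}| \leq h$ for every admissible $h > 0$; hence, given $\varepsilon > 0$, the choice $\delta = \varepsilon$ works, because
$$
0 < h < \delta \ \Rightarrow \ |c(h) - x_{0}| \leq h < \varepsilon ,
$$
and the same argument applies verbatim to $d(h)$.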
Given that $f$ is continuous on $\Delta$ and thus on $[x_{0},x_{0}+h]$, the limits in \eqref{concomplim1} imply that
\begin{equation} \label{concomplim2}
\begin{array}{c}
\lim_{h \rightarrow 0^{+}} f(c) = \lim_{h \rightarrow 0^{+}} f(c(h)) = f(x_{0}) \\
\\
\lim_{h \rightarrow 0^{+}} f(d) = \lim_{h \rightarrow 0^{+}} f(d(h)) = f(x_{0})
\end{array}
\end{equation}
Now, combining \eqref{sand1} with \eqref{concomplim2} and applying the squeeze theorem from the right, we get
\begin{equation} \label{from the right}
\lim_{h \rightarrow 0^{+}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0})
\end{equation}
(II). $h < 0$: In this case $[x_{0}+h,x_{0}] \subseteq \Delta$ and we proceed following exactly the same steps as before, keeping in mind however that now $h < 0$. We leave the intermediate details to the reader (a brief sketch is given after the displayed limit below). We finally end up with
\begin{equation} \label{from the left}
\lim_{h \rightarrow 0^{-}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0})
\end{equation}
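For the reader who wants to check the intermediate details, here is a brief sketch of this case. For $h < 0$, the extreme value theorem applied to $f$ on $[x_{0}+h,x_{0}]$ gives a minimum value $m = f(c)$ and a maximum value $M = f(d)$ at some points $c, d \in [x_{0}+h,x_{0}]$, and integrating $m \leq f(t) \leq M$ over this interval of length $-h > 0$ yields
$$
\begin{array}{c}
m(-h) \leq \int_{x_{0}+h}^{x_{0}}f(t)dt \leq M(-h) \Leftrightarrow Mh \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq mh \Leftrightarrow \\
\\
\Leftrightarrow f(c) \leq \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d)
\end{array}
$$
since $\int_{x_{0}}^{x_{0}+h}f(t)dt = -\int_{x_{0}+h}^{x_{0}}f(t)dt$ and dividing by $h < 0$ reverses the inequalities once more. The rest of the argument (the limits of $c(h)$, $d(h)$ and the squeeze theorem, now applied from the left) is identical to case (I).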
Combining \eqref{from the right}, \eqref{from the left} we get the result
$$
\lim_{h \rightarrow 0} \frac{F(x_{0}+h) - F(x_{0})}{h} = F'(x_{0}) = f(x_{0})
$$
which finally concludes the proof!