## Monday, January 20, 2014

### Derivative of a unit vector

In today's post I would like to present a simple but elegant construction: the computation of the derivative of a rotating unit vector, i.e. of a vector function $\vec{\varepsilon}: [0, +\infty) \rightarrow \mathbb{R}^{2}$ whose length equals one, $|\vec{\varepsilon}(t)|=1$, for every $t \in [0, +\infty)$. I will use $t$ to denote the independent variable since it normally stands for time. Thus, at any given instant $t$, the value of the function will be a unit vector $\vec{\varepsilon}(t)$.
We will apply the definition, so we need to compute the vector
$$\frac{d}{dt}\vec{\varepsilon}(t) = \lim_{\Delta t \rightarrow 0} \frac{\vec{\varepsilon}(t + \Delta t) - \vec{\varepsilon}(t)}{\Delta t} \tag{1}\label{def1}$$

First note that since $|\vec{\varepsilon}(t)|^{2} = \vec{\varepsilon}(t) \cdot \vec{\varepsilon}(t) = 1$ by differentiating both sides we get
$$\begin{array}{c} \frac{d}{dt} \big( \vec{\varepsilon}(t) \cdot \vec{\varepsilon}(t) \big) = \frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) + \vec{\varepsilon}(t) \cdot \frac{d \vec{\varepsilon}(t)}{dt} = \\ \\ = 2 \frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) = \frac{d}{dt}(1) = 0 \end{array}$$
Thus $\frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) = 0$ which is equivalent to the fact that $\vec{\varepsilon}(t)$ and its derivative vector $\frac{d \vec{\varepsilon}(t)}{dt}$ are perpendicular
$$\vec{\varepsilon}(t) \bot \frac{d \vec{\varepsilon}(t)}{dt} \tag{2}\label{def2}$$

The situation can be presented in the following picture:
$\Delta s$ is the length of the arc spanned by the tip of the unit vector $\vec{\varepsilon}(t)$ while it rotates by an angle $\Delta \varphi$ during time $\Delta t$. Since $\varphi$ is measured in radians we have $\Delta s = \Delta \varphi \, |\vec{\varepsilon}(t)|$, which, together with $|\vec{\varepsilon}(t)|=1$, implies that
$$\Delta s = \Delta \varphi$$

Notice that (in the figure) we also have:  $|\vec{\eta}(t)|=1$ and $\ \vec{\eta}(t) \bot \vec{\varepsilon}(t)$.
Now, during that time interval $\Delta t$ the vector has changed by
$$\Delta \vec{\varepsilon} = \vec{\varepsilon}(t + \Delta t) - \vec{\varepsilon}(t)$$
While $\Delta t \rightarrow 0$ (the notation $dt \rightarrow 0$ will be used instead from now on) we can notice two things: (i) the direction of the vector $d \vec{\varepsilon} = \vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)$ "tends" to become perpendicular to $\vec{\varepsilon}(t)$, and thus parallel to the direction specified by $\vec{\eta}(t)$; and (ii) the length of the vector $d \vec{\varepsilon}$ "tends" to become equal to the length of the arc $ds$, thus
$$|d \vec{\varepsilon}| = |\vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)| = ds = d \varphi \tag{3}\label{def3}$$

(where $d \varphi$ is measured in radians).
Now, combining \eqref{def2} and \eqref{def3} we can write \eqref{def1} as follows
$$\frac{d}{dt}\vec{\varepsilon}(t) = \lim_{dt \rightarrow 0} \frac{\vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)}{dt} = \lim_{dt \rightarrow 0} \frac{d \vec{\varepsilon}}{dt} = \frac{d \varphi}{dt} \vec{\eta}(t)$$
Finally, if we define a vector $\vec{\omega}$ with direction perpendicular to the plane defined by $(\vec{\varepsilon}(t), \vec{\eta}(t))$ and with length $|\vec{\omega}|=\frac{d \varphi}{dt}$ then we have
$$\frac{d}{dt}\vec{\varepsilon}(t) = \vec{\omega} \times \vec{\varepsilon}(t) = \frac{d \varphi}{dt} \vec{\eta}(t)$$
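The result above can also be sanity-checked numerically. The snippet below is a minimal sketch (not part of the original derivation), assuming a hypothetical rotation angle $\varphi(t)=t^{2}$, so that $\vec{\varepsilon}(t)=(\cos t^{2}, \sin t^{2})$, $\vec{\eta}(t)=(-\sin t^{2}, \cos t^{2})$ and $\frac{d\varphi}{dt}=2t$:

```python
import numpy as np

# Hypothetical rotation angle phi(t) = t**2, so dphi/dt = 2t.
phi = lambda t: t**2
eps = lambda t: np.array([np.cos(phi(t)), np.sin(phi(t))])   # rotating unit vector
eta = lambda t: np.array([-np.sin(phi(t)), np.cos(phi(t))])  # perpendicular unit vector

t, h = 1.3, 1e-7
# Central finite difference approximating d(eps)/dt at time t.
deps_dt = (eps(t + h) - eps(t - h)) / (2 * h)

# The derivative is perpendicular to eps(t) ...
print(np.dot(deps_dt, eps(t)))                           # ~0
# ... and equals (dphi/dt) * eta(t) = 2t * eta(t).
print(np.allclose(deps_dt, 2 * t * eta(t), atol=1e-5))   # True
```

Both the perpendicularity $\frac{d\vec{\varepsilon}}{dt} \cdot \vec{\varepsilon} = 0$ and the formula $\frac{d\vec{\varepsilon}}{dt} = \frac{d\varphi}{dt}\vec{\eta}(t)$ check out to within the discretization error.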

## Friday, January 17, 2014

### Question of the week #6

Let $z_{1}=x_{1}+iy_{1}$ and $z_{2}=x_{2}+iy_{2}$ be two points on the complex plane. They determine a line segment. Consider the perpendicular bisector of this line segment. Find the point where this perpendicular bisector cuts the vertical (i.e. the $Oy$) axis.

Waiting for your thoughts and ideas till next week!

## Thursday, January 16, 2014

### Question of the week #5 - the answer

Let us take a detailed look at the solution of last week's question. For the reader's convenience, we repeat the statement of the question:
Question of the week #5:
(a). Use integration by parts to show that:
$$\int \sin x \cdot \cos x \cdot e^{-\sin x}\, dx = -e^{-\sin x} \cdot \big( 1 + \sin x \big) + c$$
Now consider the following differential equation:
$$\frac{dy}{dx}-y \cdot \cos x = \sin x \cdot \cos x$$
(b). Determine the integrating factor and find the general solution $y=f(x)$.
(c). Find the special solution satisfying $f(0)=-2$.

Solution: (a). Noticing (by applying the chain rule of differentiation) that $\big( e^{-\sin x} \big)' = e^{-\sin x}(-\sin x)' = - \cos x \cdot e^{-\sin x}$ we can proceed with a straightforward integration by parts:
$$\begin{array}{c} \int \sin x \cdot \cos x \cdot e^{-\sin x}\, dx = -\int \sin x \,\frac{d\big( e^{-\sin x} \big)}{dx}\, dx = \\ \\ = -\int \sin x \big( e^{-\sin x} \big)' dx = -\sin x \cdot e^{-\sin x} + \int e^{-\sin x} \cdot \cos x\, dx = \\ \\ = -\sin x \cdot e^{-\sin x} - \int \frac{d\big( e^{-\sin x} \big)}{dx}\, dx = -\sin x \cdot e^{-\sin x} - \int \big( e^{-\sin x} \big)' dx = \\ \\ = -\sin x \cdot e^{-\sin x} - e^{-\sin x} + c = - e^{-\sin x} \big( 1 + \sin x \big) + c \end{array}$$
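For readers who like to double-check by machine, the antiderivative of part (a) can be verified symbolically, e.g. with SymPy (a quick sketch, not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(x) * sp.cos(x) * sp.exp(-sp.sin(x))
antiderivative = -sp.exp(-sp.sin(x)) * (1 + sp.sin(x))

# Differentiating the claimed antiderivative should recover the integrand.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0
```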
(b). The given DE is $y'-y \cdot \cos x = \sin x \cdot \cos x$. So we compute the integrating factor:

• $-\int \cos x\, dx = -\sin x + d$, where $d \in \mathbb{R}$ is a constant of integration. Since any one of the antiderivatives suffices for building an integrating factor, we can pick $d=0$.
• The integrating factor thus reads: $\mu(x) = e^{-\sin x}$

(c). Multiplying both sides of the DE by the integrating factor $\mu$ determined in (b) and using the result of (a) we get:
$$\begin{array}{c} e^{-\sin x} \cdot y'- e^{-\sin x} \cdot y \cdot \cos x = e^{-\sin x} \cdot \sin x \cdot \cos x \Leftrightarrow \\ \\ \Leftrightarrow \big( e^{-\sin x} \cdot y \big)' = e^{-\sin x} \cdot \sin x \cdot \cos x \Leftrightarrow \\ \\ \Leftrightarrow e^{-\sin x} \cdot y = \int e^{-\sin x} \cdot \sin x \cdot \cos x\, dx \Leftrightarrow \\ \\ \Leftrightarrow e^{-\sin x} \cdot y = -\sin x \cdot e^{-\sin x} - e^{-\sin x} + c \Leftrightarrow \\ \\ \Leftrightarrow y = -\sin x + c \cdot e^{\sin x} - 1 \end{array}$$
Hence, we have determined the general solution of the given DE. It is a family of functions parameterized by $c \in \mathbb{R}$. In order to single out the special solution satisfying $x=0$, $y=-2$ we simply substitute these values into the general solution and solve the resulting equation for $c$:
$$-2 = -\sin 0 + c \cdot e^{\sin 0} - 1 \Leftrightarrow -2 = c - 1 \Leftrightarrow c = -1$$
thus, the special solution is
$$y = -\sin x - e^{\sin x} - 1$$
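As an optional check (a SymPy sketch, not part of the original solution), one can verify that the general solution satisfies the DE and that the special solution meets the initial condition:

```python
import sympy as sp

x, c = sp.symbols('x c')

# General solution found above, and the special one with c = -1.
y_general = -sp.sin(x) + c * sp.exp(sp.sin(x)) - 1
y_special = y_general.subs(c, -1)

# Both should satisfy y' - y*cos(x) = sin(x)*cos(x) ...
residual = sp.diff(y_general, x) - y_general * sp.cos(x) - sp.sin(x) * sp.cos(x)
print(sp.simplify(residual))   # 0
# ... and the special solution meets the initial condition y(0) = -2.
print(y_special.subs(x, 0))    # -2
```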

## Sunday, January 12, 2014

### Theoretical Remarks #4

In Monday's 6/Jan/2014 post we mentioned a proof for the existence of an antiderivative for any function continuous on an interval.
In today's post, I am going to supply an alternative proof of the same proposition. For the reader's convenience, I repeat at this point the statement of the proposition:
Proposition: Let $f$ be a real function, continuous on an interval $\Delta$, and let $a \in \Delta$ be a fixed point. Then the function $F(x)=\int_{a}^{x}f(t)dt$ is an antiderivative function of $f$ in $\Delta$. In other words:
$$F'(x) = \big( \int_{a}^{x}f(t)dt \big)' = f(x)$$
for all $x \in \Delta$.
Proof (alternative):
It is sufficient to show that for any fixed point $x_{0} \in \Delta$ we have $F'(x_{0})=f(x_{0})$. Let $x_{0}, x_{0}+h \in \Delta$ with $h \neq 0$. Then we can compute
$$\begin{array}{c} F(x_{0}+h) - F(x_{0}) = \int_{a}^{x_{0}+h}f(t)dt - \int_{a}^{x_{0}}f(t)dt = \\ \\ \bigg( \int_{a}^{x_{0}}f(t)dt + \int_{x_{0}}^{x_{0}+h}f(t)dt \bigg)- \int_{a}^{x_{0}}f(t)dt = \int_{x_{0}}^{x_{0}+h}f(t)dt \end{array}$$
and since $h \neq 0$, this implies that
$$\frac{F(x_{0}+h) - F(x_{0})}{h} = \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt \tag{1}\label{diff}$$

In order to proceed, we will distinguish between two cases:
• $h > 0$   $\rightsquigarrow$  (I)
• $h < 0$   $\rightsquigarrow$  (II)
(I). $h > 0$: Since $[x_{0},x_{0}+h] \subseteq \Delta$, $f$ is continuous on $[x_{0},x_{0}+h]$ and the Extreme value theorem applies: there are numbers $c,d \in [x_{0},x_{0}+h]$ such that $f(c)=m$ and $f(d)=M$ are the absolute minimum and absolute maximum values respectively of $f$ in $[x_{0},x_{0}+h]$. Consequently
$$\begin{array} {c} mh \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq Mh \Leftrightarrow f(c)h \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d)h \Leftrightarrow \\ \\ \\ \Leftrightarrow f(c) \leq \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d) \stackrel{\eqref{diff}}{\Leftrightarrow} f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d) \end{array}$$
So we have concluded that
$$f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d) \tag{2}\label{sand1}$$

At this point, we must observe the following: by the application of the extreme value theorem to the continuous function $f$ on the interval $[x_{0},x_{0}+h]$, both $c$ and $d$ depend in general on the value of $h > 0$. Their values are actually functions of the positive $h$, so we can write $c(h)$ and $d(h)$. Not much needs to be said about these functions; their behaviour may be complicated in general (for example, you can provide an argument to show that $c(h), \ d(h)$ need not even be continuous!). However we have:
$$\lim_{h \rightarrow 0^{+}} c(h) = x_{0}, \qquad \lim_{h \rightarrow 0^{+}} d(h) = x_{0} \tag{3}\label{concomplim1}$$

\eqref{concomplim1} can be proved as a simple application of the $(\varepsilon, \delta)$-definition of the limit. Readers are advised to show this explicitly for practice!
Given that $f$ is continuous on $\Delta$, and thus on $[x_{0},x_{0}+h]$, \eqref{concomplim1} implies that
$$\lim_{h \rightarrow 0^{+}} f(c) = \lim_{h \rightarrow 0^{+}} f(c(h)) = f(x_{0}), \qquad \lim_{h \rightarrow 0^{+}} f(d) = \lim_{h \rightarrow 0^{+}} f(d(h)) = f(x_{0}) \tag{4}\label{concomplim2}$$

Now combining \eqref{sand1} together with \eqref{concomplim2} and applying the squeeze theorem from the right, we get
$$\lim_{h \rightarrow 0^{+}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0}) \tag{5}\label{from the right}$$

(II). $h < 0$: In this case $[x_{0}+h,x_{0}] \subseteq \Delta$ and we proceed following exactly the same steps as before, keeping however in mind that now $h < 0$. We leave the intermediate details to the reader. We finally end up with
$$\lim_{h \rightarrow 0^{-}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0}) \tag{6}\label{from the left}$$

Combining \eqref{from the right} and \eqref{from the left} we get the result
$$\lim_{h \rightarrow 0} \frac{F(x_{0}+h) - F(x_{0})}{h} = F'(x_{0}) = f(x_{0})$$
which finally concludes the proof!
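For a concrete numerical illustration of the proposition (a sketch, assuming the arbitrary continuous integrand $f(t)=e^{-t^{2}}$ and $a=0$), one can approximate $F$ by a midpoint Riemann sum and compare a difference quotient of $F$ with $f(x_{0})$:

```python
import math

def F(x, f, a=0.0, n=100000):
    """Midpoint-rule approximation of the integral of f from a to x."""
    h = (x - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Hypothetical continuous integrand; any continuous f would do.
f = lambda t: math.exp(-t * t)

x0, h = 0.7, 1e-4
# Central difference quotient of F at x0.
derivative = (F(x0 + h, f) - F(x0 - h, f)) / (2 * h)
print(abs(derivative - f(x0)) < 1e-6)  # True: F'(x0) = f(x0)
```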

## Monday, January 6, 2014

### Theoretical Remarks #3

In today's post we supply a proof for a well-known Calculus proposition, which we mentioned (without proof) in Friday's 27/Dec/2013 post:
Proposition: Let $f$ be a real function, continuous on an interval $\Delta$, and let $a \in \Delta$ be a fixed point. Then the function $F(x)=\int_{a}^{x}f(t)dt$ is an antiderivative function of $f$ in $\Delta$. In other words:
$$F'(x) = \big( \int_{a}^{x}f(t)dt \big)' = f(x)$$
for all $x \in \Delta$.

Proof:
It is sufficient to show that for any fixed point $x_{0} \in \Delta$ we have $F'(x_{0})=f(x_{0})$.

Let us first study the difference quotient $\frac{F(x)-F(x_{0})}{x-x_{0}}$ whose limit as $x \rightarrow x_{0}$, $x \neq x_{0}$, defines the value of $F'(x_{0})$:
$$\begin{array}{c} \frac{F(x)-F(x_{0})}{x-x_{0}}= \frac{1}{x-x_{0}}\bigg( \int_{a}^{x}f(t)dt - \int_{a}^{x_{0}}f(t)dt \bigg) = \\ \\ = \frac{1}{x-x_{0}}\bigg( \int_{x_{0}}^{a}f(t)dt + \int_{a}^{x}f(t)dt \bigg) = \frac{1}{x-x_{0}} \int_{x_{0}}^{x}f(t)dt \end{array}$$
thus
$$\frac{F(x)-F(x_{0})}{x-x_{0}}=\frac{1}{x-x_{0}} \int_{x_{0}}^{x}f(t)dt \tag{1}\label{diff*}$$

and since
$$f(x_{0}) = \frac{1}{x-x_{0}}(x-x_{0})f(x_{0}) = \frac{1}{x-x_{0}}\int_{x_{0}}^{x}f(x_{0})\, dt \tag{2}\label{fun}$$

combining \eqref{diff*} and \eqref{fun}, we readily get the following relation:
$$\frac{F(x)-F(x_{0})}{x-x_{0}}-f(x_{0}) = \frac{1}{x-x_{0}} \int_{x_{0}}^{x} \big( f(t) - f(x_{0}) \big) dt \tag{3}\label{diffun}$$

Since $f$ is continuous at $x_{0} \in \Delta$, for any $\varepsilon > 0$ there is a $\delta > 0$ such that: for any $t \in \Delta$ with $|t-x_{0}| < \delta$ we will have $|f(t)-f(x_{0})| < \varepsilon$.

Thus, for any $x \in \Delta$ with $0 < |x-x_{0}| < \delta$, using \eqref{diffun} we get:
$$\begin{array}{c} \bigg| \frac{F(x)-F(x_{0})}{x-x_{0}}-f(x_{0}) \bigg| = \frac{1}{|x-x_{0}|} \bigg| \int_{x_{0}}^{x} \big( f(t) - f(x_{0}) \big) dt \bigg| \leq \\ \\ \leq \frac{1}{|x-x_{0}|} \bigg| \int_{x_{0}}^{x} \big| f(t) - f(x_{0}) \big| dt \bigg| < \frac{1}{|x-x_{0}|} \big| \int_{x_{0}}^{x} \varepsilon dt \big| = \\ \\ = \frac{1}{|x-x_{0}|} \varepsilon |x-x_{0}| = \varepsilon \end{array}$$
But the above means, according to the $(\varepsilon, \delta)$-definition of the limit, that
$$F'(x_{0}) = \lim_{x \rightarrow x_{0}} \bigg( \frac{F(x)-F(x_{0})}{x-x_{0}} \bigg) = f(x_{0})$$
which finally concludes the proof.
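SymPy is aware of this rule: differentiating an unevaluated integral with a variable upper limit returns the integrand evaluated at that limit (a quick symbolic sketch, not part of the proof):

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
f = sp.Function('f')

# F(x) = integral of f from a to x; SymPy applies the Leibniz rule symbolically.
F = sp.Integral(f(t), (t, a, x))
print(sp.diff(F, x))  # f(x)
```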

## Sunday, January 5, 2014

### Question of the week #5

This week's post comes from differential equations:
(a). Use integration by parts to show that:
$$\int \sin x \cdot \cos x \cdot e^{-\sin x}\, dx = -e^{-\sin x} \cdot \big( 1 + \sin x \big) + c$$
Now consider the following differential equation:
$$\frac{dy}{dx}-y \cdot \cos x = \sin x \cdot \cos x$$
(b). Determine the integrating factor and find the general solution $y=f(x)$.
(c). Find the special solution satisfying $f(0)=-2$.

## Thursday, January 2, 2014

### Question of the week #4 - the answer

Question of the week #4: Let $f$ be a continuous real function with $f(x)=e^{\int_{0}^{x}f(t)dt}$ for all $x<1$. Find the formula of the function $f$.

Solution: First we note that $f(0)=e^{\int_{0}^{0}f(t)dt}=e^{0}=1$. Moreover, we can see that
$$f(x)>0, \ \ \forall x<1$$
For any $x<1$ we have:
$$\begin{array}{c} f^{\prime}(x)=\bigg( e^{\int_{0}^{x}f(t)dt} \bigg) ^{\prime}=e^{\int_{0}^{x}f(t)dt} \big( \int_{0}^{x}f(t)dt \big)^{\prime}=f(x) \cdot f(x) \Leftrightarrow \\ \\ \Leftrightarrow f^{\prime}(x)=f^{2}(x) \Leftrightarrow \frac{f^{\prime}(x)}{f^{2}(x)} =1 \Leftrightarrow \\ \\ \Leftrightarrow \bigg( -\frac{1}{f(x)} \bigg)^{\prime}=(x)^{\prime} \Leftrightarrow -\frac{1}{f(x)}=x+c \end{array}$$
where $c \in \mathbb{R}$ is the constant of integration.
But the above relation readily implies (for $x=0$) that $c=-\frac{1}{f(0)}=-1$.
So we finally get:
$$f(x)=\frac{1}{1-x}, \ \ \forall x<1$$
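As a final check (a SymPy sketch, not part of the original answer), $f(x)=\frac{1}{1-x}$ indeed satisfies the differential relation $f'(x)=f^{2}(x)$ derived above, together with the initial value $f(0)=1$:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x)

# Check the ODE derived in the solution, f'(x) = f(x)**2 ...
print(sp.simplify(sp.diff(f, x) - f**2))  # 0
# ... and the initial value f(0) = 1.
print(f.subs(x, 0))                        # 1
```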