Thursday 30 March 2017

From curves to cylinders

We know that from a planar curve $\alpha:I\rightarrow{\mathbb R}^2$ we can define the generalized cylinder as $S=\{(\alpha(s),t): s\in I, t\in{\mathbb R}\}$. This can also be viewed as follows.

Consider a smooth function $f:O\subset{\mathbb R}^2\rightarrow {\mathbb R}$ and let $a\in {\mathbb R}$ be a regular value. Then we know that $C=f^{-1}(\{a\})$ is a regular curve: the $1$-dimensional analogue of a surface. With this function $f$, we define a new function $F$ as $$F:O\times{\mathbb R}\rightarrow{\mathbb R},\ F(x,y,z)=f(x,y),$$
which is differentiable. We prove that $a$ is a regular value of $F$. Indeed, $$\nabla F=(\frac{\partial F}{\partial x},\frac{\partial F}{\partial y},\frac{\partial F}{\partial z})=(\nabla f,0).$$ Consequently, $(x,y,z)$ is a critical point of $F$ if and only if $(x,y)$ is a critical point of $f$. In particular, if $F(x,y,z)=a$, then $f(x,y)=a$ and $(x,y)$ is not a critical point of $f$, and thus, $(x,y,z)$ is not a critical point of $F$. This proves that $a$ is a regular value of $F$. 
As a consequence, $S=F^{-1}(\{a\})$ is a surface. We describe this surface: $$S=\{(x,y,z)\in O\times {\mathbb R}: f(x,y)=a\}=C\times {\mathbb R}.$$ This shows that $S$ is the generalized cylinder over the planar curve $C$.

The next picture is a generalized cylinder based on the cardioid.
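For the record, a minimal Mathematica sketch that produces this kind of picture; here I take the cardioid parametrized by $((1+\cos s)\cos s,(1+\cos s)\sin s)$, which is only one possible choice:

ParametricPlot3D[{(1 + Cos[s]) Cos[s], (1 + Cos[s]) Sin[s], t},
 {s, 0, 2 Pi}, {t, -1, 1}]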

Tuesday 28 March 2017

A surface is locally the graph of a function

We know that any surface is locally the graph of a function. In fact, we proved two things: first, that the graph is taken over one of the three coordinate planes and, second, we know exactly which plane it is. The proof says that if $p\in S$ and $X$ is a parametrization around $p$, then the rank of the derivative $dX_q$, $q=X^{-1}(p)$, is $2$. In particular, some $2\times 2$ submatrix of $$dX_q=\left(\begin{array}{ccc}x_u& y_u&z_u\\ x_v& y_v&z_v\end{array}\right)$$ has nonzero determinant, where $X(u,v)=(x(u,v),y(u,v),z(u,v))$. If $$\left|\begin{array}{cc}x_u& y_u\\ x_v&y_v\end{array}\right|(q)\not=0,$$ then $S$ is, around $p$, a graph over the $xy$-plane.

It is clear that if $S$ is a graph over the $xy$-plane around $p$, then $T_pS$ cannot be vertical, that is, $T_pS$ cannot be orthogonal to the $xy$-plane. This can also be proved in terms of the projection map. Let $\pi:{\mathbb R}^3\rightarrow{\mathbb R}^2$ be the projection $\pi(x,y,z)=(x,y)$ onto the $xy$-plane. This map is smooth, so its restriction to $S$ is a differentiable map, which we denote by $\pi$ again. If $S$ is a graph over the $xy$-plane around $p$, then $\pi$ is a local diffeomorphism there. Indeed, the differential of $\pi$ is $$(d\pi)_p(v)=(\pi\circ\alpha)'(0)=(\alpha_1'(0),\alpha_2'(0))=(v_1,v_2),$$ where $\alpha$ is a curve that represents $v\in T_pS$ and $\alpha(t)=(\alpha_1(t),\alpha_2(t),\alpha_3(t))$. We ask when $(d\pi)_p$ is an isomorphism. If $v\in T_pS$ satisfies $(d\pi)_p(v)=(0,0)$, then $v_1=v_2=0$, which means that $v=(0,0,v_3)$ is a vertical vector. We conclude that $(d\pi)_p$ is an isomorphism if and only if the tangent plane is not orthogonal to the $xy$-plane. In such a case, the inverse function theorem asserts the existence of an open set $V\subset S$ around $p\in S$ and an open set $W\subset{\mathbb R}^2$ around $\pi(p)$ such that $\pi:V\rightarrow W$ is a diffeomorphism. If $\phi=\pi^{-1}:W\rightarrow V$ and we write $\phi=(\phi_1,\phi_2,\phi_3)$, then for any $(x,y,z)\in V$ we have $$(x,y,z)=\phi(\pi(x,y,z))=\phi(x,y)=(\phi_1(x,y),\phi_2(x,y),\phi_3(x,y)).$$ This proves that $$V=\mbox{graph}(f)=\{(x,y,f(x,y)):(x,y)\in W\},$$ where $f=\phi_3:W\rightarrow\mathbb{R}$.
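For example, on the unit sphere ${\mathbb S}^2$ at $p=(0,0,1)$ the tangent plane is $T_p{\mathbb S}^2=\{z=0\}$, which is not orthogonal to the $xy$-plane, and indeed an open neighbourhood of $p$ is the graph $$V=\{(x,y,\sqrt{1-x^2-y^2}): x^2+y^2<1\},$$ that is, $f(x,y)=\sqrt{1-x^2-y^2}$ and $W$ is the open unit disc.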

Sunday 26 March 2017

Ruled surfaces

A ruled surface is a surface constructed by moving a straight line along a given curve. If $\alpha=\alpha(s)$ is this curve and $w(s)$ is the direction of the straight line at $\alpha(s)$, the straight line is the set $\{\alpha(s)+t w(s):t\in{\mathbb R}\}$. Thus the surface is $S=\{\alpha(s)+t w(s): s\in I,t\in{\mathbb R}\}$ and the parametrization is $X(s,t)=\alpha(s)+t w(s)$. Since $X_s=\alpha'(s)+tw'(s)$ and $X_t=w(s)$, we have to assume that these vectors are linearly independent. Then $X$ is a parametrized surface and so, locally, $S$ is a surface.

We show some examples. Suppose $\alpha$ is a planar curve contained in the plane $z=0$. If we take $w(s)=a=(0,0,1)$, we obtain the right cylinder over the curve $\alpha$. In the next figure, $\alpha$ is the parabola $\alpha(s)=(s,s^2,0)$ and $w(s)=(0,0,1)$.



We can take $w(s)$ to be tilted at each point $\alpha(s)$. If $\{T(s), N(s), B(s)\}$ is the Frenet trihedron along $\alpha$, then, up to orientation, $B(s)=(0,0,1)$. If we take $w(s)=\cos(m) N(s)+\sin(m) B(s)$, with $m\in{\mathbb R}$, we obtain a cone along $\alpha$. In the next picture, $\alpha$ is the parabola again.



If we replace the constant $m$ by a function $\theta(s)$, then the direction $w(s)$ changes from point to point. Here we take $\alpha$ to be the circle $\alpha(s)=(\cos(s),\sin(s),0)$. If $\cos\theta(s)$ and $\sin\theta(s)$ are $2\pi$-periodic, then $w$ is also $2\pi$-periodic. This occurs for example if $\theta(s)=s$. The parametrization is $X(s,t)=(\cos (s)-t \cos ^2(s),\sin (s)-t \sin (s) \cos (s),t \sin (s))$ and the surface is:


But if $w$ is only $4\pi$-periodic, so that $w(s+2\pi)=-w(s)$, then we obtain a Möbius strip. For this, we take $\theta(s)=s/2$. Then $$X(s,t)=\left(\cos (s) \left(1-t \cos \left(\frac{s}{2}\right)\right),\sin (s) \left(1-t \cos \left(\frac{s}{2}\right)\right),t \sin \left(\frac{s}{2}\right)\right).$$
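A short Mathematica sketch that draws this Möbius strip (the range of $t$ is chosen only to get a nice picture):

ParametricPlot3D[{Cos[s] (1 - t Cos[s/2]), Sin[s] (1 - t Cos[s/2]), t Sin[s/2]},
 {s, 0, 2 Pi}, {t, -1/2, 1/2}]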

Friday 24 March 2017

Differential of the restriction of a function

We know that if $F:{\mathbb R}^3\rightarrow{\mathbb R}^m$ is a smooth function and $S$ is a surface, then $F_{|S}:S\rightarrow{\mathbb R}^m$ is differentiable. Moreover, if $p\in S$, then $(dF_{|S})_p={(dF)_p}{\Big |}_{T_pS}$. On the left side we have the differential of the restriction of $F$, and on the right side we have the derivative of $F$ restricted to the tangent plane $T_pS$. Denote $f=F_{|S}$. From now on, to simplify, we suppose $m=1$.

It is usual to work with the Jacobian matrix of $dF$, which is nothing but the matrix expression of the linear map $dF$ with respect to the usual bases of the Euclidean spaces, here ${\mathbb R}^3$ and ${\mathbb R}$. By definition, its entries are the partial derivatives of $F$, namely, $$\mbox{Jac }F(p)=M(dF_p,B_u,B_u')=(F_x\ F_y\ F_z)(x,y,z),$$ where $p=(x,y,z)$ and $B_u$, $B_u'$ denote the usual bases.

If we want to write the matrix expression of the linear map $df_p$, we need to fix bases in both spaces, that is, in $T_pS$ and in ${\mathbb R}$. Although for ${\mathbb R}$ we choose the basis $B_u'=\{1\}$, there is no natural basis of $T_pS$. Such a basis only appears once we fix a parametrization $X=X(u,v)$ around $p$. Then the basis could be $B=\{X_u(q),X_v(q)\}$, with $X(q)=p$, and the matrix would be $M(df_p,B,B_u')$. Therefore, there is no direct relation between this matrix and $\mbox{Jac }F(p)$.

We show an example. Consider the elliptic paraboloid $S$ given by $z=x^2+y^2$ and the function $F(x,y,z)=z$. Then $\mbox{Jac }F(x,y,z)=(0\ 0\ 1)$.

Consider now $f=F_{|S}$. Then $(df)_p(v)=\alpha_3'(0)=v_3$, where $\alpha$ is a curve on $S$ representing $v=(v_1,v_2,v_3)$. Let $p=(x,y,z)\in S$ and take the parametrization $X(x,y)=(x,y,x^2+y^2)$. Then $X_x=(1,0,2x)$ and $X_y=(0,1,2y)$. Moreover, using the chain rule, we have
$$(df)_{X(x,y)}(X_x)=(f\circ X)_x=2x,\  (df)_{X(x,y)}(X_y)=(f\circ X)_y=2y.$$
If $B=\{X_x,X_y\}$, we have  $$M((df)_{X(x,y)},B,B_u')=(2x\ 2y),$$
a matrix completely different from $\mbox{Jac }F(p)$.
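This can be checked symbolically in Mathematica; a small sketch with the parametrization above:

X[x_, y_] := {x, y, x^2 + y^2};
F[{x_, y_, z_}] := z;
Grad[F[{x, y, z}], {x, y, z}]            (* Jacobian of F: {0, 0, 1} *)
{D[F[X[x, y]], x], D[F[X[x, y]], y]}     (* matrix of df in the basis {X_x, X_y}: {2 x, 2 y} *)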

Thursday 23 March 2017

Vector structure of the tangent plane

We know that the tangent plane $T_pS$ of $S$ at $p$ is a vector space of dimension $2$. We ask here what the vector structure of $T_pS$ is. Indeed, let $v,w\in T_pS$ and let $\alpha, \beta: I\rightarrow S$ be two curves such that $\alpha(0)=\beta(0)=p$, $\alpha'(0)=v$ and $\beta'(0)=w$. Since $T_pS$ is a vector space, $v+w\in T_pS$. We ask:
Question. What is the curve on $S$ whose tangent vector is $v+w$?
A first attempt would be $\alpha+\beta$, because $(\alpha+\beta)'(0)=v+w$. Of course, this is completely wrong, because $\alpha(t)+\beta(t)$ is a curve in ${\mathbb R}^3$ which need not be contained in $S$! To find the right curve $\gamma:I\rightarrow S$ such that $\gamma(0)=p$ and $\gamma'(0)=v+w$, we have to come back to the moment where we proved that $T_pS$ is a vector space. Recall that $$T_pS=(dX)_q({\mathbb R}^2),\ q=X^{-1}(p),$$ where $X$ is a parametrization around $p$ and $(dX)_q:{\mathbb R}^2\rightarrow {\mathbb R}^3$ is the derivative of the map $X:U\subset {\mathbb R}^2\rightarrow{\mathbb R}^3$. The above identity tells us that $v+w$ is the image of the sum of two vectors in ${\mathbb R}^2$. Thus the idea to find $\gamma$ is: first, compute the preimages of $v$ and $w$, namely $\bar{v}$ and $\bar{w}$; compute $\bar{v}+\bar{w}$; then take a curve passing through $q$ with tangent vector $\bar{v}+\bar{w}$; and finally, consider the image of this curve under the parametrization $X$.

Let $\bar{\alpha}(t)=X^{-1}(\alpha(t))$, $\bar{\beta}(t)=X^{-1}(\beta(t))$. Then $\alpha(t) = X(\bar{\alpha}(t))$, $\beta(t)=X(\bar{\beta}(t))$. Thus $$v=(dX)_q(\bar{\alpha}'(0)),\ w=(dX)_q(\bar{\beta}'(0)).$$ Here we set $\bar{v}= \bar{\alpha}'(0)$ and $\bar{w}= \bar{\beta}'(0)$. Consider $\bar{v}+\bar{w}$ placed at $q$ and take a curve passing through $q$ with this tangent vector: it suffices to take $$\sigma:I\rightarrow U,\ \sigma(t)=q+t(\bar{v}+\bar{w}),$$ for which it is immediate that $\sigma'(0)=\bar{v}+\bar{w}$.


Here we find the key to the vector structure of the tangent plane $T_pS$: of course, we cannot sum the curves $\alpha$ and $\beta$, nor the curves $\bar{\alpha}$ and $\bar{\beta}$ (in this case, the sum may fall outside of $U$). We take another curve that represents the (tangent) vector $\bar{v}+\bar{w}$. In this case a straight line is enough, and here we use strongly that $U$ is an open set of ${\mathbb R}^2$, because at least around $t=0$ the straight line $\sigma$ is contained in $U$.

The last steps are now easy. Consider $\gamma(t)=X(\sigma(t))$, which is a curve on $S$ with $\gamma(0)=X(\sigma(0))=X(q)=p$. Then using the chain rule, we have $$\gamma'(0)=(dX)_{\sigma(0)}(\sigma'(0))= (dX)_q(\bar{v}+\bar{w})=(dX)_q(\bar{v})+(dX)_q(\bar{w})=v+w.$$
Similarly, we can find the curve that represents the tangent vector $\lambda v$, where $\lambda\in{\mathbb R}$.
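A simple illustration: on the paraboloid $z=x^2+y^2$ with the parametrization $X(u,v)=(u,v,u^2+v^2)$, take $p=(0,0,0)$, $v=(1,0,0)$ and $w=(0,1,0)$, represented by the curves $\alpha(t)=(t,0,t^2)$ and $\beta(t)=(0,t,t^2)$. Then $\bar{\alpha}(t)=(t,0)$ and $\bar{\beta}(t)=(0,t)$, so $\bar{v}=(1,0)$, $\bar{w}=(0,1)$, $\sigma(t)=(t,t)$ and $\gamma(t)=X(t,t)=(t,t,2t^2)$, whose velocity at $t=0$ is $(1,1,0)=v+w$.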

Wednesday 22 March 2017

Writing differentiable maps on surfaces

In calculus it is usual to work with smooth functions in terms of 'variables', I mean, something like $f(x,y,z)=x^2+\sin(z)+e^y$, in terms of 'x', 'y' and 'z'. However, working on surfaces we sometimes (or often) prefer not to use 'variables', especially when we need to compute the derivative of the function. The next example clarifies this issue.

If $S$ is a surface, define $$f:S\rightarrow{\mathbb R},\ f(p)=|p|^2=\langle p,p\rangle.$$ Here we use 'p' instead of the variables. This function measures the square of the distance from the point $p$ to the origin of ${\mathbb R}^3$. We observe that if $p=(x,y,z)$, then $f(x,y,z)=x^2+y^2+z^2$, which is a well-known differentiable function on ${\mathbb R}^3$, but now $f$ is defined on a surface. If we want to prove that $f$ is differentiable on $S$, we first consider the function $F:{\mathbb R}^3\rightarrow{\mathbb R}$, $F(p)=\langle p,p\rangle$. Since $p\mapsto p$ is the identity, which is differentiable, $F$ is nothing but the scalar product of a differentiable map with itself. Then $F$ is differentiable. Finally, $f=F_{|S}$, that is, the restriction to $S$ of a differentiable map of ${\mathbb R}^3$. This proves that $f$ is differentiable.

Another example is the height function. Let $a\in {\mathbb R}^3$ be a unit vector and define $$f:S\rightarrow{\mathbb R},\ f(p)= \langle p,a\rangle.$$ This function measures the (signed) distance from the point $p$ to the vector plane $\Pi$ orthogonal to $a$. For this reason, it is called the height function. If we write it in coordinates with $p=(x,y,z)$, we have $f(x,y,z)=a_1 x+a_2 y+a_3 z$, where $a=(a_1,a_2,a_3)$. If we want to prove that $f$ is differentiable on $S$ without using coordinates, define $F:{\mathbb R}^3\rightarrow{\mathbb R}$ by $F(p)=\langle p,a\rangle$. Since $p\mapsto p$ and $p\mapsto a$ are differentiable maps, $F$ is the scalar product of two differentiable vector maps, so $F$ is differentiable. Finally, $f=F_{|S}$, proving that $f$ is differentiable.
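With these descriptions it is also easy to compute the differentials without coordinates: if $\alpha$ is a curve on $S$ with $\alpha(0)=p$ and $\alpha'(0)=v$, then for the first function $$(df)_p(v)=\frac{d}{dt}{\Big |}_{t=0}\langle\alpha(t),\alpha(t)\rangle=2\langle p,v\rangle,$$ and for the height function $(df)_p(v)=\langle v,a\rangle$.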

Tuesday 21 March 2017

Two possible definitions of differentiability between two surfaces

The definition of a differentiable map between two surfaces given in the course is extrinsic. Let me explain. Consider a map $f:S_1\rightarrow S_2$ between two surfaces and $p\in S_1$. Then $f$ is differentiable at $p$ if $i\circ f\circ X:U\rightarrow{\mathbb R}^3$ is differentiable at $q=X^{-1}(p)$, where $i:S_2\rightarrow{\mathbb R}^3$ is the inclusion map (definition I). Here we use strongly that $S_2$ is included in the Euclidean space ${\mathbb R}^3$. If one changes the viewpoint, one would request that the definition does not depend on whether $S_2$ is included in ${\mathbb R}^3$ or not, but only on $S_2$ itself, that is, an intrinsic definition. Then the natural way to do it is by means of parametrizations, and the definition would be: $f$ is differentiable at $p$ if $Y^{-1}\circ f\circ X:U\rightarrow W$ is smooth at $q=X^{-1}(p)$, where $X:U\rightarrow S_1$ and $Y:W\rightarrow S_2$ are parametrizations around $p$ and $f(p)$, respectively (definition II). Now it is not important whether the surface is included in Euclidean space.

This allows us to extend the above definition to objects with properties similar to those of surfaces, that is, objects with a set of parametrizations between open sets of ${\mathbb R}^n$ and open sets of the object. This leads to the concept of a manifold of dimension $n$.

Returning to surfaces, we prove that both definitions are equivalent.
  1. (II) $\Rightarrow$ (I). Suppose that $f$ is differentiable at $p$ according to definition II. When we consider a parametrization $X$ around $p$, then $i\circ f\circ X=(i \circ Y)\circ (Y^{-1} \circ f\circ X)$ and thus it is the composition of two smooth maps between open sets of Euclidean spaces.
  2. (I) $\Rightarrow$ (II). Suppose that $f$ is differentiable at $p$ according to definition I, that is, we know $i\circ f\circ X$ is smooth at $q$ for any $X$. Without loss of generality, writing $Y=(Y_1,Y_2,Y_3)$, we suppose that $$\left|\begin{array}{cc}\frac{\partial Y_1}{\partial u}&\frac{\partial Y_2}{\partial u}\\  \frac{\partial Y_1}{\partial v}& \frac{\partial Y_2}{\partial v}\end{array}\right|\not=0.$$ The inverse function theorem asserts that the map $$(Y_1,Y_2):W'\rightarrow O',\ (u,v)\mapsto (Y_1(u,v),Y_2(u,v))$$ is a diffeomorphism between suitable open sets of ${\mathbb R}^2$. Let $\phi=(Y_1,Y_2)^{-1}$. If $i\circ f\circ X=(f_1,f_2,f_3)$, then $$Y^{-1} \circ f\circ X (u',v')=\phi (f_1(u',v'),f_2(u',v')),$$ which is differentiable because it is the composition of two differentiable maps.

Finally, we will adopt definition I because it is more intuitive, although we are losing 'generality'.

Monday 20 March 2017

Using the theory of differentiability for the properties of differentiability on surfaces

Almost all properties of differentiability of maps on surfaces are proved using the analogous properties of differentiable maps between open sets of Euclidean spaces. I point out two of them.
  1. A parametrization of a surface is differentiable. Here we are saying that the parametrization $X:U\subset{\mathbb R}^2\rightarrow V\subset S$ is differentiable, where $V$ is an open set of a surface $S$. In order to clarify the notation, we denote by $Y$ the above map and by $X:U\rightarrow {\mathbb R}^3$ the parametrization. In fact, $Y$ is nothing but $X$ with restricted codomain. Because $Y$ arrives at a surface, $Y$ is differentiable if $i\circ Y: U\rightarrow{\mathbb R}^3$ is smooth. But this map is just $X$, which is smooth by the second property of a parametrization.
  2. The inverse of a parametrization is differentiable. Here we mean that $X^{-1}:V\rightarrow U\subset{\mathbb R}^2$ is differentiable. Now $X^{-1}$ is a map whose domain is a surface, namely the open set $V$ of $S$, which is indeed a surface. By the definition, we have to prove that $X^{-1}\circ Z$ is smooth for some parametrization $Z$ of $V$. Here we take $Z=X$. Then $X^{-1}\circ X$ is the identity map on the open set $U$, which is trivially smooth.

Sunday 19 March 2017

Surfaces constructed from curves (II)

Surfaces of revolution are another type of surface constructed from curves. In the previous entry, a cylinder is nothing but a planar curve $\alpha$ moved along a fixed direction $a$, that is, we translate $\alpha$ along that direction. If $a$ is the given direction, a translation in this direction is $T_t(x)=x+t a$, $x\in{\mathbb R}^3$, $t\in{\mathbb R}$. Then the cylinder with base $\alpha$ is $$\cup_{t\in {\mathbb R}}\{T_t(\alpha(s)):s\in I\}.$$

In order to define a surface of revolution, we consider a curve $\alpha$ contained in a plane $P$ and we rotate $\alpha$ about a line $L$ contained in the plane $P$. We know that the parametrization is $X(t,\theta)=(f(t)\cos\theta,f(t)\sin\theta,g(t))$, where $\alpha(t)=(f(t),0,g(t))$. Again, the difficulties appear when we prove that $X$ is an embedding. For this reason we assume again that $\alpha$ is an embedding or a simple closed curve.

With the curve $\alpha(t)=( \sin(t),0,1+\cos(t)\cos(2t))$, $t\in (0.5,2.5)$, we observe that there is a self-intersection, so it does not define a surface.


Another problematic example is the 'torus' generated by the circle $\alpha(t)=(1+2\cos(t),0,2\sin(t))$, because the circle intersects the $z$-axis.










In conclusion, we impose that the curve $\alpha$ is an embedding or a simple closed curve (and that it does not meet the rotation axis). In the next picture, the surface is generated by the simple closed curve $\alpha(t)=(2+\sin(t),0,\cos(t)+\cos(2t))$.
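A Mathematica sketch of this surface of revolution (rotating the above curve about the $z$-axis):

ParametricPlot3D[{(2 + Sin[t]) Cos[u], (2 + Sin[t]) Sin[u], Cos[t] + Cos[2 t]},
 {t, 0, 2 Pi}, {u, 0, 2 Pi}]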


Saturday 18 March 2017

Surfaces constructed from curves: cylinders

We have defined some types of surfaces from curves, for example, generalized cylinders. Let $\alpha:I\rightarrow{\mathbb R}^3$ be a curve contained in a plane $P$, which we suppose is the plane $z=0$, and let $a\in {\mathbb R}^3$ be a vector not contained in $P$. The cylinder with base $\alpha$ in the direction of $a$ is the set $$S=\{\alpha(s)+ta:s\in I,t\in{\mathbb R}\}.$$ Of course, the candidate parametrization is $$X:I\times{\mathbb R}\rightarrow{\mathbb R}^3,\ X(s,t)=\alpha(s)+ta.$$ To prove that $S$ is a surface, it is immediate that $X$ is differentiable, that $X_s=\alpha'(s)$, $X_t=a$, and that both vectors are linearly independent. The difficulty appears when we want to prove that $X$ is a parametrization. The sets $I\times{\mathbb R}$ and $S$ are open in ${\mathbb R}^2$ and $S$, respectively. Also, it is immediate that $X$ is continuous. It remains to prove that $X$ is bijective and that $X^{-1}$ is continuous. Of course, if $\alpha$ is not one-to-one, then neither is $X$, as in the next pictures (here the vector $a$ is $a=(1,1,1)$).






For this reason, we consider two cases:
  1. $\alpha:I\rightarrow{\mathbb R}^3$ is an embedding, or
  2. $\alpha:{\mathbb R}\rightarrow {\mathbb R}^3$ is a simple closed curve.
In the first case, $X$ is one-to-one. If $(x,y,z)=(\alpha_1(s)+ta_1,\alpha_2(s)+t a_2,ta_3)$, then $t=z/a_3$ (note that $a_3\not=0$ because $a$ is not contained in $P$) and so $$s=\alpha^{-1}\left(x-\frac{z}{a_3} a_1,y-\frac{z}{a_3} a_2,0\right).$$ It is then immediate that $X^{-1}$ is continuous.

In the second case, $\alpha$ is an embedding on any interval of length less than $T$, where $T>0$ is the period of $\alpha$.

We have the next pictures for the simple closed curve $\alpha(s)=(3 \cos (s),\sin(s)+\cos(s)+\cos(2s),0)$ and $a=(0,0,1)$.
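A Mathematica sketch of this cylinder (the base curve lies in the plane $z=0$ and the direction is $a=(0,0,1)$):

ParametricPlot3D[{3 Cos[s], Sin[s] + Cos[s] + Cos[2 s], t},
 {s, 0, 2 Pi}, {t, -1, 1}]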



Friday 17 March 2017

Surfaces by implicit equations

We have proved that the preimage of a regular value of a function $f:O\subset{\mathbb R}^3\rightarrow{\mathbb R}$ is a surface. Consider the prototypical function $f(x,y,z)=x^n+y^n+z^n$, where $n\in {\mathbb N}$, and let $S_a=f^{-1}(\{a\})$. The gradient of $f$ is $$\nabla f(x,y,z)=n(x^{n-1},y^{n-1},z^{n-1}).$$
First we compute the critical points, that is, the solutions of $\nabla f(x,y,z)=(0,0,0)$, obtaining that there are none if $n=1$ and that for $n>1$ the only critical point is $(0,0,0)$. Then

  1. For $n=1$, any value $a\in {\mathbb R}$ is regular. Here $S_a$ is a plane.
  2. For $n>1$, any value $a\not=0$ is regular and, for $n$ even, $S_a\not=\emptyset$ exactly when $a>0$. Of course, for $n=2$ we have the sphere, but if $n$ is large, $S_a$ looks like a cube, a smooth cube!
Some pictures for $n=4$ and $n=10$:
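These pictures can be produced in Mathematica with ContourPlot3D, for instance (the same command with $x^n+y^n-z^n==\pm 1$ gives the pictures below):

ContourPlot3D[x^4 + y^4 + z^4 == 1, {x, -1.5, 1.5}, {y, -1.5, 1.5}, {z, -1.5, 1.5}]
ContourPlot3D[x^10 + y^10 + z^10 == 1, {x, -1.5, 1.5}, {y, -1.5, 1.5}, {z, -1.5, 1.5}]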


[Remark. We observe that the set of regular values is 'big', in the above case ${\mathbb R}-\{0\}$. There is a result that informs us about the size of $f(A)$, where $A\subset O$ is the set of critical points: Sard's theorem says that $f(A)$ has Lebesgue measure $0$. In particular, the complement of the set of regular values has no interior points.]

Consider now the function $g(x,y,z)=x^n+y^n-z^n$, which for $n=2$ gives the hyperboloids of one and two sheets. The set of regular values is ${\mathbb R}-\{0\}$ again. Using Mathematica, I show some pictures for $a=1$ with $n=4$ and $n=10$


and for $a=-1$ with $n=4$ and $n=10$.

Thursday 16 March 2017

Parametrizations of a surface of revolution

Consider $S$ the surface of revolution obtained by rotating the planar curve $\alpha(t)=(f(t),0,g(t))$, $t\in I$, which is contained in the half-plane $y=0$, $x\geq 0$. We know that $\alpha$ is regular and we have two possible types of curves: i) $\alpha$ is an embedding, or ii) $\alpha$ is a simple closed curve. The surface $S$ is $X(I\times{\mathbb R})$, where $$X(t,\theta)=(f(t)\cos\theta,f(t)\sin\theta,g(t)).$$ In order to prove that $S$ is a surface, the problem appears of how many parametrizations are needed and how to prove that they are homeomorphisms.

First suppose that $\alpha$ is an embedding. We compute the inverse function of $X$, and in doing so we will discover what the right domain of $X$ is. By injectivity, it is necessary that $\theta$ moves in an interval of length at most $2\pi$. Thus a possibility is $X:U_1:=I\times (0,2\pi)\rightarrow X(U_1)$. The set $X(U_1)$ is an open set of $S$ because $$X(U_1)=S-\alpha(I)=S-S\cap(\{y=0,x\geq 0\}).$$ Here we use that $f(t)>0$. Because we need to cover the curve $\alpha(I)$, the other parametrization is $Y:U_2:=I\times (-\pi,\pi)\rightarrow X(U_2)$. For $Y$ we have to remove 'the other side' of $S$, that is, $\Phi_\pi(\alpha(I))$, where $\Phi_\theta$ is the rotation of angle $\theta$ about the $z$-axis. In other words, $Y(U_2)=S-(S\cap\{y=0,x\leq 0\})$.

We compute the inverse of $X$ (or $Y$). We have to write $t$ and $\theta$ in terms of $x,y,z$ from the equations $$\left\{\begin{array}{l}x=f(t)\cos\theta\\ y=f(t)\sin\theta\\ z=g(t)\end{array}\right.$$ From $x^2+y^2=f(t)^2$ we obtain $f(t)=\sqrt{x^2+y^2}$ and, using that $\alpha$ is an embedding, $t=\alpha^{-1}(\sqrt{x^2+y^2},0,z)$. For $\theta$ we have two possibilities:
  1. If we want to write 'something' with the arc tangent, then it would be $\theta=\arctan(y/x)$. But in such a case, $\theta$ is defined in $(-\pi/2,\pi/2)$. Because the initial interval is $(0,2\pi)$, we have to change the domain of $X$. We write now $X:U_1:=I\times(-\pi/2,\pi/2)\rightarrow X(U_1)$. The only difference is that we have to prove that $X(U_1)$ is an open set of $S$. But, by the picture, $X(U_1)=S-(S\cap \{x\leq 0\})$. Another parametrization uses $I\times (\pi/2,3\pi/2)$, where it is also possible to define the inverse of the tangent. And what about the points with $\theta=\pi/2$ or $\theta=3\pi/2$? Here the $x$-coordinate of the point $(x,y,z)\in S$ vanishes. Then we consider the inverse of the cotangent, taking $\theta=\mbox{arccot}(x/y)$, and we need two more domains, namely $I\times (0,\pi)$ and $I\times (\pi,2\pi)$. Finally, we observe that, using the inverses of the tangent and of the cotangent, we need 4 parametrizations: all of them look 'very similar', but the domain keeps changing.
  2. If we do not want to use trigonometric functions, we can do the following. If one looks at the picture of a surface of revolution, it is clear that the parametrization $X$ can be defined on $I\times (0,2\pi)$, because the angle $\theta$ is well defined. The problem was how to recover the variable $\theta$, and the trigonometric functions have added a bit of confusion. Another way is the following. Consider $\beta:(0,2\pi)\rightarrow {\mathbb S}^1-\{(1,0)\}$, $\beta(\theta)=(\cos\theta,\sin\theta)$, a parametrization of the circle minus one point. The key is the following: the map $\beta$ is a homeomorphism! It is clear that $\beta$ is one-to-one and continuous. One could argue as follows: it is well known that ${\mathbb S}^1$ minus one point is homeomorphic to the real line ${\mathbb R}$, which is homeomorphic to the interval $(0,2\pi)$; but the problem is whether $\beta$ itself is a homeomorphism. The only trouble is the continuity of the inverse. But $\beta$ is an open map, because the image of an open interval of $(0,2\pi)$ is an open set of ${\mathbb S}^1$. Once it is proved that $\beta$ is a homeomorphism, from $x=f(t)\cos\theta$ and $y=f(t)\sin\theta$ we conclude $$\theta=\beta^{-1}\left(\frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}}\right).$$ Then $$X^{-1}(x,y,z)=\left(\alpha^{-1}(\sqrt{x^2+y^2},0,z),\beta^{-1}\left(\frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}}\right)\right).$$ Thus we need 2 parametrizations.

If $\alpha$ is a simple closed curve with period $T$, in order to cover the curve $\alpha$ by embeddings we need two parametrizations, namely $\alpha:(0,T)\rightarrow \alpha((0,T))$ and $\alpha:(T/2,3T/2)\rightarrow \alpha((T/2,3T/2))$. Then, for the surface, we need 4 parametrizations.
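For instance, for the torus generated by the circle $\alpha(t)=(2+\cos t,0,\sin t)$ (period $T=2\pi$), one may restrict $X(t,\theta)=((2+\cos t)\cos\theta,(2+\cos t)\sin\theta,\sin t)$ to the four domains $(0,2\pi)\times(0,2\pi)$, $(0,2\pi)\times(-\pi,\pi)$, $(-\pi,\pi)\times(0,2\pi)$ and $(-\pi,\pi)\times(-\pi,\pi)$: each patch misses two circles of the torus, but together they cover it.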

There is another elegant argument. We observe that once one has proved that $X$ is a parametrization, we may think of $Y$ as a 'rotation' of $X$. Recall that if $S$ is a surface and $\psi$ is a diffeomorphism of ${\mathbb R}^3$, then $\psi(S)$ is a surface, and the parametrizations of $\psi(S)$ are of the type $\psi\circ X$, where $X$ is a parametrization of $S$. With this idea in mind, consider now $\Phi_\theta$, the rotation about the $z$-axis of angle $\theta$. By the definition of $S$, we have $S=\Phi_\theta(S)$. Suppose that we have proved that $X:U_1:=I\times (-\pi/2,\pi/2)\rightarrow {\mathbb R}^3$ is a parametrization of $S$: it covers only a part of $S$, namely $V_1=X(U_1)$. Now take $\theta\in{\mathbb R}$. Because $\Phi_\theta$ is a diffeomorphism, it is clear that $\Phi_\theta\circ X$ satisfies the properties of a parametrization, where now the coordinate open set is $\Phi_\theta(V_1)$. We point out that $\Phi_\theta(V_1)$ is an open set of $S$. Therefore, taking sufficiently many values of $\theta$, we cover all the surface $S$ by coordinate open sets of the type $\Phi_\theta(V_1)$. In fact, with this domain the rotations $\Phi_{\pi/2}$, $\Phi_\pi$ and $\Phi_{3\pi/2}$ are enough (and if $X$ is defined on $I\times(0,2\pi)$, as in option 2 above, a single rotation $\Phi_\pi$ suffices). With this argument, the effort lies only in the first parametrization $X$: the other ones are obtained 'by rotating' $X$.

Finally, it is clear that parametrizations of the type $X$ suffice to prove that $S$ is a surface of revolution, for example taking very small domains, or rotations of $X$. However, the proof is a bit tedious if one wants to write precise arguments.

Tuesday 14 March 2017

Surfaces and topology

The surfaces that we have introduced present a variety of topological possibilities.

  1. There are connected surfaces (sphere) and non-connected surfaces (hyperboloid of two sheets, with two connected components).
  2. There are compact surfaces (sphere) and non-compact surfaces (plane).
  3. There are closed surfaces (the sphere) and non-closed surfaces (an open hemisphere).
  4. The topological boundary of the sphere ${\mathbb S}^2$ in ${\mathbb R}^3$ is the sphere ${\mathbb S}^2$ itself.
  5. Every point of a surface has a neighborhood homeomorphic to ${\mathbb R}^2$. In fact, the coordinate open set $V$ of $p\in S$ is homeomorphic to an open set $U\subset{\mathbb R}^2$ via the parametrization $X:U\rightarrow V$. Since $U$ is an open set, there exists a ball $B_r(q)$ around $q=X^{-1}(p)$ with $B_r(q)\subset U$. Then the restriction $$X_{| B_r(q)}:B_r(q)\rightarrow X(B_r(q))$$ is a homeomorphism, where $X(B_r(q))$ is an open set of $V$, hence of $S$. This means that $X(B_r(q))$ is an open set around $p$ homeomorphic to ${\mathbb R}^2$. As a consequence, we conclude:
    • The interior of a surface (as a subset of ${\mathbb R}^3$) is empty.
    • Surfaces 'have no boundary points'; I mean, for example, that the closed hemisphere $T=\{p\in{\mathbb S}^2: z(p)\geq 0\}$ is not a surface, because the above property fails at the points with $z(p)=0$.
    • A point is not a surface.
    • A surface has an uncountable set of points.
  6. If $\phi:{\mathbb R}^3\rightarrow{\mathbb R}^3$ is a diffeomorphism and $S$ is a surface, then $\phi(S)$ is a surface which is homeomorphic to $S$ thanks to the restriction $\phi_{|S}:S\rightarrow \phi(S)$.
  7. An open set of a surface is a surface (proved).
  8. Some closed subsets of a surface are surfaces; others are not. For example, if $S$ is the union of two disjoint spheres, then each sphere is closed in $S$ and is a surface. On the other hand, the closed hemisphere is closed in ${\mathbb S}^2$ but it is not a surface.

Monday 13 March 2017

Curves-maps; surfaces-sets

I remarked in the classroom the differences between the definitions of a curve and of a surface: a curve is a differentiable map, while a surface is a subset of Euclidean space for which parametrizations exist. I return to this point again.

If a curve $\alpha:I\rightarrow{\mathbb R}^3$ is regular at $t_0$, then $\alpha'(t_0)\not=0$. In terms of the differential of $\alpha$, this means that $(d\alpha)_{t_0}:{\mathbb R}\rightarrow {\mathbb R}^3$ is a non-zero linear map. This is equivalent to saying that $\mbox{rank}(d\alpha)_{t_0}=1$, because $$(d\alpha)_{t_0}(1)=\frac{d}{ds}{\Big |}_{s=0}\alpha(t_0+s)=(x'(t_0),y'(t_0),z'(t_0))\not=(0,0,0).$$ Thus the rank of $(d\alpha)_{t_0}$ is the maximum possible (it can only be $0$ or $1$). Furthermore, by using the inverse function theorem, 'the curve is locally a graph around $t_0$'. In fact, it was proved that there exists $\epsilon>0$ such that $$\alpha:J=(t_0-\epsilon,t_0+\epsilon)\rightarrow \alpha(t_0-\epsilon,t_0+\epsilon)$$ coincides with the graph of a function, that is, there exists a differentiable function $f:K\subset {\mathbb R}\rightarrow {\mathbb R}^2$ such that $\{(x,f(x)):x\in K\}=\alpha(J)$. As a consequence, $\alpha(J)$ is homeomorphic to an interval of ${\mathbb R}$.

Then the map $\alpha$ would play (almost) the same role as parametrizations do for a surface. If we want to give the $1$-dimensional analogue of the definition of a surface, then a subset $C\subset{\mathbb R}^3$ is a $1$-surface (= curve) if for each point $p\in C$ there exist an open interval $I\subset {\mathbb R}$ and a homeomorphism $X:I\rightarrow V\subset C$, where $V$ is an open set of $C$ around $p$, $X:I\rightarrow {\mathbb R}^3$ is differentiable and $X'(t)\not=(0,0,0)$.

The question is whether a curve as defined in chapter $1$ is now a $1$-surface, more precisely, whether its trace $\alpha(I)$ is such a $1$-surface. One would think 'yes', by taking around each point $p\in C$ the corresponding restriction of $\alpha$ to a suitable interval $J$. However, there is one problem (the other properties have already been shown): is $\alpha(J)$ an open set of $C$? We find the answer in the curve $\alpha(t)=(\cos(t),\sin(2t))$, $t\in {\mathbb R}$.

This curve self-intersects at the origin. Thus it cannot be a $1$-surface, because this point has no neighbourhood homeomorphic to ${\mathbb R}$. By the inverse function theorem, around a value $t_0$ with $\alpha(t_0)=(0,0)$, $\alpha(J)\cong J$ is a graph, but $\alpha(J)$ is not an open set of $\alpha({\mathbb R})$. This is exactly what happens in our example, as one can see in the next picture: the red arc is $\alpha(J)$, which is not an open set of $C$.
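A Mathematica sketch reproducing this picture (the self-intersection occurs at $t_0=\pi/2$, since $\alpha(\pi/2)=(0,0)$):

Show[
 ParametricPlot[{Cos[t], Sin[2 t]}, {t, 0, 2 Pi}],
 ParametricPlot[{Cos[t], Sin[2 t]}, {t, Pi/2 - 0.5, Pi/2 + 0.5}, PlotStyle -> Red]
]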


Tuesday 7 March 2017

Curvature of a planar curve by reversing its direction

After the last exercise this afternoon, I will revise what happens to the curvature of a planar curve when we reverse its orientation. In order to simplify the notation, we will assume that the domain of the planar curve is $I=\mathbb{R}$, so we have $\alpha:\mathbb{R}\rightarrow{\mathbb R}^2$, which we suppose parametrized by arc length. In the classroom we said: "when we reverse the orientation, the curvature changes sign at each point". First, let us explain the intuition.

When we change the orientation of the curve, we are taking $-s$ instead of $s$ as the new parameter of the curve, and as we increase $s$, we travel along the initial curve in the opposite direction. Then the result that we want to prove is that when we pass through a point of the curve, its curvature is the opposite if we go along the curve in the other direction. That is, at $s_0$ the curvature of $\alpha$ when we reverse the sense of the parameter is the opposite of the curvature in the original direction.

The computations are clear and were made in the classroom. Define $\beta(s)=\alpha(-s)$, the curve $\alpha$ parametrized in the opposite direction. Then $T_\beta(s)=-T_\alpha(-s)$, $N_\beta(s)=JT_\beta(s)=-JT_\alpha(-s)=-N_\alpha(-s)$ and $T_\beta'(s)=T'_\alpha(-s)$. Thus $$\kappa_\beta(s)=\langle T'_\beta(s),N_\beta(s)\rangle=-\langle T'_\alpha(-s),N_\alpha(-s)\rangle=-\kappa_\alpha(-s).\qquad (*)$$
The problem appears when we use words to describe the above result. The symbol $\kappa_\beta(s)$ is the curvature of $\alpha$ traversed in the reverse direction, at the parameter $s$. The right-hand side says that it coincides with the opposite of the curvature of $\alpha$ at $-s$.

An example. Consider the curve $\alpha(t)=(t,t^2+t^3)$. This curve is not parametrized by arc length, but the above argument (which involves only the curvature) holds for any regular curve. The curvature of $\alpha$ is $$\alpha'(t)=(1,2t+3t^2),\ \alpha''(t)=(0,2+6t)\Rightarrow\kappa_\alpha(t)=\frac{2+6t}{(1+(2t+3t^2)^2)^{3/2}}.$$ We fix the point $t=1$, that is, $\alpha(1)=(1,2)$. Then the curvature at $t=1$ is $\kappa_\alpha(1)=8/26^{3/2}$. We reverse the orientation of $\alpha$ by taking $\beta(t)=\alpha(-t)=(-t,t^2-t^3)$. Now the point corresponding to $t=1$ for $\alpha$ is $t=-1$ for $\beta$. The curvature of $\beta$ is $$\kappa_\beta(t)=\frac{-2+6t}{(1+(2t-3t^2)^2)^{3/2}}.$$ Then $\kappa_\beta(-1)=-8/26^{3/2}$, which agrees with the formula (*).
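The computation can be checked in Mathematica; a small sketch using the curvature formula for plane curves:

alpha = {t, t^2 + t^3};
beta = {-t, t^2 - t^3};
curv[c_] := Simplify[(D[c[[1]], t] D[c[[2]], {t, 2}] - D[c[[2]], t] D[c[[1]], {t, 2}])/
   (D[c[[1]], t]^2 + D[c[[2]], t]^2)^(3/2)];
{curv[alpha], curv[beta]}                        (* the two curvature functions above *)
{curv[alpha] /. t -> 1, curv[beta] /. t -> -1}   (* 8/26^(3/2) and -8/26^(3/2) *)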

To finish, I propose a question. Suppose now that the curve is the graph of a function $y=f(x)$, where we know that the curvature of the curve is $f''(x)/(1+f'(x)^2)^{3/2}$. If we traverse it in the opposite direction, then the curve is the graph of the function $y=f(-x)$. Using the above formula, the curvature would be $f''(-x)/(1+f'(-x)^2)^{3/2}$, because the first derivative gives $-f'(-x)$, but the second one yields $f''(-x)$. But according to (*), this should give $-f''(-x)/(1+f'(-x)^2)^{3/2}$, the opposite sign.

Question: what is it happening?

Monday 6 March 2017

Curvature and symmetry about a point

Motivated by one of the exercises proposed today, we observe that the curve $y=x^3$, which is symmetric about the origin of the plane, satisfies that the curvature at the point $-x$ is the opposite of the curvature at $x$: look at which side of the tangent line the graph of $y=x^3$ lies around each point. Thus we prove the next result:

Result. If $\alpha:I\rightarrow{\mathbb R}^2$ is a planar curve which is symmetric about a point $p_0\in\alpha(I)$, then $\kappa(-s)=-\kappa(s)$. Here we are assuming that $\alpha(0)=p_0$.

After a translation, and without loss of generality, we suppose that $p_0=(0,0)$. To say that $\alpha$ is symmetric about the origin means $\alpha(-s)= M\alpha(s)$, where $M$ is the symmetry, that is, $Mx=-x$. Thus $\alpha(-s)=-\alpha(s)$. Let $\beta(s)=\alpha(-s)$. Then $\kappa_\beta(s)=-\kappa_\alpha(-s)$. On the other hand, the curve $\gamma(s)=-\alpha(s)$ is a direct rigid motion of $\alpha$, so $\kappa_\gamma(s)=\kappa_\alpha(s)$. Since $\kappa_\gamma(s)=\kappa_\beta(s)$, we conclude $$-\kappa_\alpha(-s)=\kappa_\alpha(s).$$
This can be checked for curves that are graphs of $y=f(x)$, where we know that $$\kappa_\alpha(x)=\frac{f''(x)}{(1+f'(x)^2)^{3/2}}.$$ Denote by $\alpha$ the curve $y=f(x)$. As $f$ is symmetric about the origin, $f(-x)=-f(x)$. The curvature of $y=g(x)$, where $g(x)=f(-x)$, is
$$g'(x)=-f'(-x), g''(x)=f''(-x)\Rightarrow \kappa_g(x)=\frac{f''(-x)}{(1+f'(-x)^2)^{3/2}}.$$
The curvature of $y=h(x)$, where $h(x)=-f(x)$, is given by $$h'(x)=-f'(x),\ h''(x)=-f''(x)\Rightarrow \kappa_h(x)=\frac{-f''(x)}{(1+f'(x)^2)^{3/2}}.$$ Since $g=h$ by the symmetry, the curvatures of both graphs coincide, and we have $$\kappa_f(-x)=\frac{f''(-x)}{(1+f'(-x)^2)^{3/2}}=\frac{-f''(x)}{(1+f'(x)^2)^{3/2}}=-\kappa_f(x).$$
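For instance, for $f(x)=x^3$ we get $$\kappa(x)=\frac{6x}{(1+9x^4)^{3/2}},$$ which is an odd function of $x$, in agreement with the result.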
The converse is also true:

Theorem. If $\alpha:I=(-a,a)\rightarrow{\mathbb R}^2$ is a planar curve such that $\kappa(-s)=-\kappa(s)$, then the trace of $\alpha$ is symmetric about the point $p=\alpha(0)$.

After a translation, we suppose $\alpha(0)=(0,0)$. Define two curves: $\beta(s)=\alpha(-s)$ and $\gamma(s)=-\alpha(s)$. We compute their curvatures. For $\beta$, $\kappa_\beta(s)=-\kappa(-s)$, and for $\gamma$, $\kappa_\gamma(s)=\kappa(s)$ because the symmetry is a direct rigid motion. By hypothesis, $\kappa_\gamma(s)=\kappa_\beta(s)$, so there is a direct rigid motion $M$ such that $\gamma(s)=M\beta(s)$. Because $\gamma(0)=\alpha(0)=\beta(0)$ and $\gamma'(0)=\beta'(0)$ (check!), their normal vectors also coincide at $s=0$. Then $M$ must be the identity, proving the result.

Saturday 4 March 2017

The mean value theorem

I consider in this entry exercise 2 of the homework page, which some people asked about. The interest of this exercise is to show what we gain in this course compared with what we already know from high school or Calculus I. We learnt the following mean value theorem: if $f$ is a smooth function on $[a,b]$, then there exists $\xi\in (a,b)$ such that $f(b)-f(a)=f'(\xi)(b-a)$. If we write this equality as $$f'(\xi)=\frac{f(b)-f(a)}{b-a},$$ the result asserts that there exists an intermediate point $\xi$ between $a$ and $b$ where the tangent line at $x=\xi$ is parallel to the straight line joining the points $(a,f(a))$ and $(b,f(b))$: the above identity says that the slopes of both lines coincide.

The exercise says that this result holds for any planar curve, without it being the graph of a function $y=f(x)$, as the next picture shows.

A possible starting point for the proof would be to apply the mean value theorem to the functions $x=x(t)$ and $y=y(t)$, where $\alpha(t)=(x(t),y(t))$, $t\in [a,b]$. Then there exist $\xi_1,\xi_2\in (a,b)$ such that $$x(b)-x(a)=x'(\xi_1)(b-a), \ y(b)-y(a)=y'(\xi_2)(b-a).$$ Hence $$x'(\xi_1)=\frac{x(b)-x(a)}{b-a},\ y'(\xi_2)=\frac{y(b)-y(a)}{b-a}.$$
A vector determining the line through $\alpha(a)$ and $\alpha(b)$ is $$\alpha(b)-\alpha(a)=(x(b)-x(a),y(b)-y(a)),$$ and a vector of the tangent line at $t$ is $(x'(t),y'(t))$. We want to prove that the first one is proportional to the second one at some point $t_0\in (a,b)$. But this point is not $\xi_1$ or $\xi_2$, because they do not need to coincide!

Thus the proof follows other ideas. In the picture we observe that the point that we are looking for is the farthest point of the trace $\alpha([a,b])$ from the line $R$ joining $\alpha(a)$ and $\alpha(b)$. Then define $$f:[a,b]\rightarrow\mathbb{R},\ f(s)=\langle \alpha(s)-\alpha(a),N\rangle,$$ where $N$ is a unit vector orthogonal to $\alpha(b)-\alpha(a)$, so that, up to sign, $f(s)$ is the distance from $\alpha(s)$ to $R$. Let us point out that the maximum and the minimum of $f$ are attained because $[a,b]$ is a compact set, and that $f(a)=f(b)=0$. If $f$ is identically zero, then $\alpha([a,b])\subset R$ and the result is immediate; otherwise the maximum or the minimum is attained at some interior point $s_0\in (a,b)$. In particular, $f'(s_0)=0$, and this yields $$0=f'(s_0)=\langle \alpha'(s_0),N\rangle,$$ which proves the result.

In fact, the reasoning shows that the conclusion holds at any critical point of $f$ in $(a,b)$, as it appears in the next figure.
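A concrete instance: take $\alpha(t)=(t^2,t^3)$ on $[1,2]$. The chord joins $(1,1)$ and $(4,8)$, so its direction is $(3,7)$, while $\alpha'(t)=(2t,3t^2)$. They are proportional exactly when $14t-9t^2=0$, that is, at $t_0=14/9\in(1,2)$; on the other hand, the mean value theorem applied to the coordinates gives $\xi_1=3/2$ and $\xi_2=\sqrt{7/3}$, and the three values are different.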

Wednesday 1 March 2017

Evolute of the parabola

I propose an easy example of evolutes, using Mathematica to draw the curves. For this, I take the familiar parabola $y=x^2$ with its parametrization $\alpha(t)=(t,t^2)$. This curve is not parametrized by arc length, so I use the curvature formula for arbitrary curves, obtaining
$$\kappa(t)=\frac{2}{\left(4 t^2+1\right)^{3/2}}.$$
For the normal vector of $\alpha$, we first calculate the tangent vector
$$T_\alpha(t)=\frac{\alpha'(t)}{|\alpha'(t)|}\Rightarrow N_\alpha(t)=J T_\alpha(t)=\left(-\frac{2 t}{\sqrt{4 t^2+1}},\frac{1}{\sqrt{4 t^2+1}}\right).$$ Finally we have $$\beta(t)=\alpha(t)+\frac{1}{\kappa(t)}N_\alpha(t)=\left(-4 t^3,3 t^2+\frac{1}{2}\right).$$ The picture of both curves is:

where the blue curve is the parabola and the green one is its evolute. We check the property, proved in the classroom, that the tangent line of $\alpha$ is the normal line of $\beta$ at the corresponding point. We do it at $t=1$, with $\alpha(1)=(1,1)$ and $\beta(1)=(-4,7/2)$.
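A possible Mathematica sketch of this computation and of the picture:

alpha[t_] := {t, t^2};
curv[t_] := Simplify[(alpha'[t][[1]] alpha''[t][[2]] - alpha'[t][[2]] alpha''[t][[1]])/
   (alpha'[t].alpha'[t])^(3/2)];
nor[t_] := Simplify[{-alpha'[t][[2]], alpha'[t][[1]]}/Sqrt[alpha'[t].alpha'[t]]];   (* N = J T *)
evolute[t_] := Simplify[alpha[t] + nor[t]/curv[t]];
evolute[t]                                       (* {-4 t^3, 1/2 + 3 t^2} *)
ParametricPlot[{alpha[t], evolute[t]}, {t, -1.5, 1.5}]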

Question: do the same with the curve $y=\sin(x)$ at the point $(0,0)$.