
Saturday, January 30, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 7

Problem:


Solution:

The key proposition to prove is
$ GCD(f_1, \cdots, f_s) = GCD(f_1, GCD(f_2, \cdots, f_s)) $.

Once we have this proposition, then all we have to do is to recursively compute the GCDs.
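The same recursion gives an algorithm directly. The blog's code elsewhere is MATLAB, but as a quick illustration here is a Python sketch of the recursion over the integers, where the same identity holds (the polynomial case would substitute a polynomial GCD routine):

```python
from functools import reduce
from math import gcd

def gcd_many(values):
    # GCD(f1, ..., fs) computed by the recursion
    # GCD(f1, ..., fs) = GCD(f1, GCD(f2, ..., fs))
    if len(values) == 1:
        return values[0]
    return gcd(values[0], gcd_many(values[1:]))

nums = [24, 36, 60, 84]
assert gcd_many(nums) == reduce(gcd, nums) == 12
```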

To prove the key proposition, we let $ a = GCD(f_2, \cdots, f_s) $ and $ b = GCD(f_1, a) $; we need to prove $ b = GCD(f_1, \cdots, f_s) $.

First, $ b $ is a factor of $ f_1 $; $ b $ is also a factor of $ a $, and hence a factor of each of $ f_2, \cdots, f_s $.

So we have shown $ b $ is a common factor, but is it the greatest?

Suppose it is not, so that in fact $ c = GCD(f_1, \cdots, f_s) $. Since $ b $ is a common factor of the $ f_i $ and every common factor divides the greatest one, we can write $ c = bd $, where $ d $ is not a constant.

Because $ b = GCD(f_1, a) $, $ c $ cannot be a factor of $ a $: if it were, then $ c $ would be a common factor of $ f_1 $ and $ a $ with higher degree than $ b $, contradicting that $ b $ is their GCD.

Now $ c $ is a factor of $ f_2, \cdots, f_s $ and $ a = GCD(f_2, \cdots, f_s) $, yet $ c $ is not a factor of $ a $: a common factor that fails to divide the greatest common factor. This is impossible (*), and the contradiction proves our key proposition.

Now let us finish the unfinished business in Exercise 6. There we assumed that if $ h = GCD(f_1, \cdots, f_s) $, then we can write $ h = \sum\limits_{i=1}^{s}{p_if_i} $.

If $ s = 2 $ then Exercise 4 showed what we wanted.

Now assume it can be written so for $ s = k $; for $ s = k + 1 $, we have the recursion above

$ GCD(f_1, \cdots, f_{k+1}) = GCD(f_1, GCD(f_2, \cdots f_{k+1})) $

So, by the two-polynomial case again, $ GCD(f_1, \cdots, f_{k+1}) = Af_1 + B \, GCD(f_2, \cdots, f_{k+1}) $.

But we know by our induction hypothesis that

$ GCD(f_2, \cdots f_{k+1}) = \sum\limits_{i = 2}^{k+1}{p_i f_i} $.

Putting it back we get what we wanted.

$ GCD(f_1, \cdots, f_{k+1}) = Af_1 + B\sum\limits_{i = 2}^{k+1}{p_i f_i} = \sum\limits_{i = 1}^{k+1}{q_i f_i}$.

By mathematical induction we proved our unfinished business in Exercise 6!
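The induction is constructive: unwinding it yields the coefficients $ p_i $. Here is a hedged Python sketch over the integers (the names `ext_gcd` and `gcd_combination` are mine), where the two-value base case is the extended Euclidean algorithm:

```python
def ext_gcd(a, b):
    # extended Euclid: returns (g, x, y) with g = gcd(a, b) = x*a + y*b
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def gcd_combination(values):
    # returns (g, coeffs) with g = GCD(values) = sum(coeffs[i] * values[i]),
    # following GCD(f1, ..., f_{k+1}) = A*f1 + B*GCD(f2, ..., f_{k+1})
    if len(values) == 1:
        return values[0], [1]
    g_rest, rest = gcd_combination(values[1:])
    g, A, B = ext_gcd(values[0], g_rest)
    return g, [A] + [B * c for c in rest]

nums = [12, 20, 30]
g, coeffs = gcd_combination(nums)
assert g == 2 and sum(c * n for c, n in zip(coeffs, nums)) == g
```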

(*) To prove this, we rely crucially on the fact that polynomials factorize uniquely.

Suppose $ d $ is a common factor and $ g $ is the greatest common factor. If $ d $ is not a factor of $ g $, then $ \frac{d}{GCD(d, g)}g $ is a common factor of strictly greater degree, contradicting that $ g $ is the greatest. Unique factorization is needed to claim that this product still divides each $ f_i $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 6

Problem:


Solution:

Since $ h = GCD(f_2, \cdots, f_s) $, we have $ f_i = hg_i $ for all $ i \in [2, s] $.

If $ p \in \langle f_2, \cdots, f_s \rangle $, then $ p = \sum\limits_{i = 2}^{s}{p_if_i} = \sum\limits_{i = 2}^{s}{p_ihg_i} = h\sum\limits_{i = 2}^{s}{p_ig_i} $, therefore $ p \in \langle h \rangle $.

Now assume without proof (we will prove it soon) that there exist polynomials $ q_i $ such that $ h = \sum\limits_{i = 2}^{s}{q_if_i} $. Then it is easy to show that if $ p \in \langle h \rangle $, then $ p = rh = r\sum\limits_{i = 2}^{s}{q_if_i} = \sum\limits_{i = 2}^{s}{rq_if_i} $, and therefore $ p \in \langle f_2, \cdots, f_s \rangle $.

So now we have established $ \langle h \rangle = \langle f_2, \cdots, f_s \rangle $.

If $ p \in \langle f_1, h \rangle $, then $ p = af_1 + bh = af_1 + b\sum\limits_{i = 2}^{s}{q_if_i} $, therefore $ p \in \langle f_1, f_2, \cdots, f_s \rangle $.

If $ p \in \langle f_1, f_2, \cdots, f_s \rangle $, then $ p = af_1 + c $ where $ c \in \langle f_2, \cdots, f_s \rangle = \langle h \rangle $, so we can write $ p = af_1 + rh $, so $ p \in \langle f_1, h \rangle $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 5

Problem:


Solution

If $ p \in \langle f - qg, g \rangle $, then $ p = a(f-qg) + bg = af + (b- aq)g \in \langle f, g \rangle $

If $ p \in \langle f, g \rangle $, then $ p = af + bg = a(f-qg) + (b+ aq)g \in \langle f-qg, g \rangle $

Therefore $ \langle f - qg, g \rangle =\langle f, g \rangle $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 4

Problem:


Solution:

Since $ h $ is a factor of $ f $ and $ g $, we have

$ f = ph $ and $ g = qh $

We have $ qf - pg = qph - pqh = qph - qph = 0 $

Therefore $ A = q $ and $ B = -p $

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 3

Problem:


Solution:

It is obvious that $ x \in I $ and $ y \in I $. If $ I $ were a principal ideal, then there would exist $ f \in I $ such that every element $ p \in I $ can be written as $ p = f g_p $.

Now $ f g_x = x $ and $ f g_y = y $

Since $ \deg(x) = 1 $, either $ f $ has degree 1 and $ g_x $ has degree 0, or $ f $ has degree 0 and $ g_x $ has degree 1.

Suppose $ f $ has degree 1, so $ f = ax + by + c $, and therefore $ g_x = d $ has degree 0. Then $ fg_x = adx + bdy + cd = x $, so $ ad = 1 $ and hence $ d \ne 0 $; we also get $ b = c = 0 $, so $ f = ax $ and $ g_x = d $. However, $ f g_y = ax g_y = y $, and no polynomial $ g_y $ can satisfy this, so $ f $ has to have degree 0.

Now $ f = e $ has degree 0, but then $ e \in I $, and there is no way $ e \in I $ because a nonzero constant cannot be written as $ p(x, y)x + q(x, y)y $, so we conclude $ \langle x, y \rangle $ cannot be a principal ideal.

Thursday, January 28, 2016

Differential Geometry and Its Application - Exercise 2.1.22

Problem:



Solution:

Disclaimer: I thought of the solution with the hint from Wikipedia, in particular this spinning model, not the hint text in the problem.

That got me thinking: maybe the lines just join points on the circles with a phase shift. So I tried:

$ (0, 0, -1) \to (\cos \theta, \sin \theta, 1) $: the midpoint is $ \frac{1}{2}(\cos \theta, \sin\theta, 0) $, which is indeed on a circle in the $ z = 0 $ plane.

With that in mind, I now generalize. For the general hyperboloid, the cross-section at $ z = \pm c $ is an ellipse with semi-axes $ \sqrt{2} a $ and $ \sqrt{2} b $.

Consider the 'phase shift' lines:

$ \sqrt{2}((a \cos u, b \sin u, -c) + v(a \cos (u + s), b \sin (u + s), c)) $

Our goal is to find the unknown phase shift $ s $ that makes this work. To do that, we check whether all these points lie on the hyperboloid; the terms involving $ v $ must cancel.

$ \begin{eqnarray*} & & \frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} \\ &=& \frac{(\sqrt{2}(a\cos u + va\cos(u + s)))^2}{a^2} + \frac{\sqrt{2}(b\sin u + vb\sin(u + s)))^2}{b^2} - \frac{(\sqrt{2}((-c) + vc)^2}{c^2} \\ &=& 2(\cos u + v\cos(u + s))^2 + 2(\sin u + v\sin(u + s))^2 - 2(v - 1)^2 \\ &=& 2\cos^2 u + 4v \cos u \cos(u + s) + 2v^2\cos^2(u + s) + 2\sin^2 u + 4v \sin u \sin(u + s) + 2v^2\sin^2(u + s) - 2v^2 + 4v - 2 \\ &=& 2\cos^2 u + 2\sin^2 u + 2v^2\cos^2(u + s) + 2v^2\sin^2(u + s) + 4v \cos u \cos(u + s) + 4v \sin u \sin(u + s) - 2v^2 + 4v - 2 \\ &=& 2\ + 2v^2 + 4v \cos (s) - 2v^2 + 4v - 2 \\ &=& 4v \cos (s) + 4v \\ \end{eqnarray*} $

For the $ v $-dependent terms to vanish we need $ \cos s = -1 $, so $ s = \pm \pi $; these correspond to the two ruling patches of the surface, and the surface is doubly ruled.

Now looking at the hint in the text, I think I can simplify this by making the ellipse at $ z = 0 $ the directrix instead. To do so, we shift $ v $ by 1 as follows:

$ \sqrt{2}((a \cos u, b \sin u, -c) + 1(a \cos (u + s), b \sin (u + s), c) + v(a \cos (u + s), b \sin (u + s), c)) $

$ \sqrt{2}((a (\cos u + \cos (u + s)), b (\sin u +\sin (u + s)),  0) + v(a \cos (u + s), b \sin (u + s), c)) $

Now we can use the sum to product formula to simplify this to:

$ \sqrt{2}((a (2\cos(u + \frac{s}{2})\cos \frac{s}{2}), b (2\sin(u + \frac{s}{2})\cos \frac{s}{2}),  0) + v(a \cos (u + s), b \sin (u + s), c)) $

Keeping $ s = \pm \pi $ in mind, one can verify directly that the following patch lies on the hyperboloid:

$ (a \cos(u + \frac{s}{2}), b \sin(u + \frac{s}{2}),  0) + v(a \cos (u + s), b \sin (u + s), c) $

Shifting the definition of $ u $ by $ \frac{s}{2} $, we get

$ (a \cos(u), b \sin(u),  0) + v(a \cos (u  + \frac{s}{2}), b \sin (u  + \frac{s}{2}), c) $

So finally we have these two ruling patches:

$ (a \cos(u), b \sin(u),  0) + v(-a \sin u, b \cos u, c) $

$ (a \cos(u), b \sin(u),  0) + v(a \sin u, -b \cos u, c) $
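As a quick numerical sanity check that both patches really lie on the hyperboloid (a Python sketch; the helper names and sample values are mine):

```python
from math import cos, sin, isclose

def ruling_point(a, b, c, u, v, sign):
    # sign=+1: (a cos u, b sin u, 0) + v(-a sin u, b cos u, c)
    # sign=-1: (a cos u, b sin u, 0) + v( a sin u, -b cos u, c)
    x = a * cos(u) - sign * v * a * sin(u)
    y = b * sin(u) + sign * v * b * cos(u)
    z = v * c
    return x, y, z

def quadric(a, b, c, x, y, z):
    # x^2/a^2 + y^2/b^2 - z^2/c^2, which equals 1 on the hyperboloid
    return (x / a) ** 2 + (y / b) ** 2 - (z / c) ** 2

for sign in (1, -1):
    for u in (0.0, 0.7, 2.1):
        for v in (-2.0, 0.0, 1.5):
            p = ruling_point(2.0, 3.0, 5.0, u, v, sign)
            assert isclose(quadric(2.0, 3.0, 5.0, *p), 1.0)
```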

Now we have come full circle, back to the hint in the problem text!

Saturday, January 23, 2016

Differential Geometry and Its Application - Exercise 3.2.18

Problem:


Solution:

This is a very long question, and we will tackle it part by part. This is part (a).

Let's start with the formula:

$ x_u = (\beta'(u) + v\delta'(u)) $
$ x_v = (\delta(u)) $
$ x_{uu} = (\beta''(u) + v\delta''(u)) $
$ x_{uv} = (\delta'(u)) $
$ x_{vv} = (0) $

The observation is that $ n = U \cdot  x_{vv} = U \cdot 0 = 0 $. So we have got the first equality.

Now we need to compute $ x_u \times x_v $

$ x_u \times x_v = (\beta'(u) + v\delta'(u)) \times \delta(u) = \beta'(u) \times \delta(u) + v\delta'(u) \times \delta(u) $

So that explains the denominator, as per the hint. Finally we compute the numerator of $ m $:

$ (x_u \times x_v) \cdot x_{uv} = (\beta'(u) \times \delta(u) + v\delta'(u) \times \delta(u)) \cdot \delta'(u) = \beta'(u) \times \delta(u) \cdot \delta'(u) $, since $ \delta'(u) \times \delta(u) \cdot \delta'(u) = 0 $. So we have also explained the numerator!

Notice the numerator is not in exactly the same form as the one I had, but it is the same because the scalar triple product is invariant under circular shifts.

Now we move on to part (b). We can parametrize $ (x, y, xy) = (u, 0, 0) + v(0, 1, u) $ to make it a ruled surface.

The code for computing the Gaussian curvature is as follows:

syms u
syms v
beta = [u; 0; 0];   % directrix beta(u)
delta = [0; 1; u];  % ruling direction delta(u)

beta_u = diff(beta, u);
delta_u = diff(delta, u);

% numerator: -(beta' . (delta x delta'))^2, the squared scalar triple product
n = -(beta_u.' * cross(delta, delta_u))^2
% denominator: |x_u x x_v|^4
D = cross(beta_u, delta) + v * cross(delta_u, delta);
d = (D.' * D)^2;

K = simplify(n/d)

We get the answer as $ -\frac{1}{(u^2 + v^2 + 1)^2} $.
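As a cross-check, the saddle is also a graph $ z = f(x, y) $ with $ f = xy $, for which the classical Monge patch formula $ K = (f_{xx}f_{yy} - f_{xy}^2)/(1 + f_x^2 + f_y^2)^2 $ applies (a standard fact, not derived in this post). On the ruled patch $ (u, v) \mapsto (u, v, uv) $ we have $ x = u $, $ y = v $, so the two answers should agree. A quick exact check in Python:

```python
from fractions import Fraction

def K_graph(fxx, fyy, fxy, fx, fy):
    # Gaussian curvature of a graph z = f(x, y) (Monge patch formula)
    return Fraction(fxx * fyy - fxy * fxy, (1 + fx * fx + fy * fy) ** 2)

# for z = xy: f_x = y, f_y = x, f_xx = f_yy = 0, f_xy = 1
for u in range(-3, 4):
    for v in range(-3, 4):
        # at (u, v, uv): f_x = v, f_y = u
        assert K_graph(0, 0, 1, v, u) == Fraction(-1, (u * u + v * v + 1) ** 2)
```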

For parts (c) and (d), the Gaussian curvatures of both the cone and the cylinder are 0; this is because either $ \beta' = 0 $ or $ \delta' = 0 $.

For part (e), the helicoid has the parametrization $ (v \cos u, v \sin u, bu) = (0, 0, bu) + v(\cos u, \sin u, 0) $, so we use essentially the same code as above except

syms b;
beta = [0;0;b*u];
delta= [cos(u);sin(u);0];

So we get the answer as $ -\frac{b^2}{(b^2 + v^2)^2} $

For part (f), we will use the ruling patch we found in Exercise 2.1.22, so we simply put in yet another $ \beta $ and $ \delta $ into the program and get the answer:

$ -\frac{a^2b^2c^2}{(a^2b^2v^2 + a^2c^2(v\cos u - \sin u)^2 + b^2c^2(v\sin u + \cos u)^2)^2} $

For part (g), in some sense we are already done, for the saddle $ z = xy $ is a hyperbolic paraboloid.

For a more general hyperbolic paraboloid, we consider

$ \frac{z}{c} = \frac{y^2}{b^2} - \frac{x^2}{a^2} $

Now let $ u = \frac{y}{b} + \frac{x}{a} $ and $ v = \frac{y}{b} - \frac{x}{a} $

$ (\frac{a}{2}(u - v), \frac{b}{2}(u + v), cuv) $

Now we obtain the ruled patch $ (\frac{a}{2}u, \frac{b}{2}u, 0) + v(-\frac{a}{2}, \frac{b}{2}, cu) $

Because we want the expression in terms of $ x $ and $ y $ this time, we modify the program a bit as follows:

% defining the ruled patch
syms a
syms b
syms c
beta =  [ 0.5 * a * u; 0.5 * b * u; 0    ];
delta = [-0.5 * a    ; 0.5 * b    ; c * u];

% back substitute the x, y values

syms x
syms y
simplify(subs(subs(K, u, y/b + x/a), v, y/b - x/a))

So the answer is $ -\frac{4a^6b^6c^2}{(a^4b^4 + 4b^4c^2x^2 + 4a^4c^2y^2)^2} $.

Phew, finally!

Thursday, January 21, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 2

Problem:


Solution:

The hint basically spells out the solution. If the determinant is 0, then the columns are linearly dependent.

We can write, with the $ c_i $ not all zero,

$ c_0\left(\begin{array}{c}1\\ \vdots \\ 1\end{array}\right) + c_1 \left(\begin{array}{c}a_1\\ \vdots \\ a_n\end{array}\right) + \cdots + c_{n-1}\left(\begin{array}{c}a_1^{n-1}\\ \vdots \\ a_n^{n-1}\end{array}\right) = 0 $

These equations say that $ a_1, \cdots, a_n $ are $ n $ roots of the polynomial $ c_0 + c_1 x + \cdots + c_{n-1} x^{n-1} $.

A non-zero polynomial of degree at most $ n - 1 $ cannot have $ n $ distinct roots, so we have a contradiction: the determinant is not 0.
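A small numeric check of this argument (a Python sketch; `det` and the sample nodes are mine): for distinct $ a_i $ the Vandermonde determinant equals $ \prod_{i<j}(a_j - a_i) $, which is non-zero.

```python
from itertools import permutations
from math import prod

def det(M):
    # determinant by the Leibniz permutation expansion (fine for small n)
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the sign of the permutation
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod(M[i][perm[i]] for i in range(n))
    return total

a = [2, 3, 5, 7]  # distinct nodes
V = [[ai ** k for k in range(len(a))] for ai in a]
assert det(V) == prod(a[j] - a[i] for j in range(len(a)) for i in range(j))
assert det(V) != 0
```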

UTM Ideals Varieties and Algorithm - Chapter 1 Section 5 Exercise 1

Problem:


Solution:

Theorem 7 is the fundamental theorem of algebra, stating that any non-constant polynomial with coefficients in $ \mathbf{C} $ has a root in $ \mathbf{C} $.

Now suppose $ f $ has degree $ n $ with coefficients in $ \mathbf{C} $. By the fundamental theorem of algebra, $ f $ has a root $ a \in \mathbf{C} $, so using the division theorem we know $ f = (x - a)g(x) $ with $ g(x) $ a polynomial of degree $ n - 1 $.

By induction we get the result we want.

Differential Geometry and Its Application - Exercise 3.2.12

Problem:

Show that the Gaussian curvature of the hyperboloid of one sheet $ x(u, v) = (a\cosh u \cos v, b \cosh u \sin v, c \sinh u) $ may be written in Cartesian coordinates as

$ K = -\frac{1}{a^2b^2c^2[\frac{x^2}{a^4} + \frac{y^2}{b^4} + \frac{z^2}{c^4}]^2} $

Solution:

The following code verifies the identity; it would be too tedious to compute manually.

clear;
clc;
syms a;
syms b;
syms c;
syms x;
syms y;
% Monge patch for the upper half of the hyperboloid
z = sqrt(c^2 * (x^2/a^2 + y^2/b^2 - 1));
s = [x; y; z];
sx = diff(s, x);
sy = diff(s, y);
% first fundamental form
E = simplify(sx.' * sx);
F = simplify(sx.' * sy);
G = simplify(sy.' * sy);
% unit normal
normal = cross(sx, sy);
normal_norm = simplify(sqrt(normal.' * normal));
u = simplify(normal/normal_norm(1));
% second fundamental form
sxx = simplify(diff(sx, x));
sxy = simplify(diff(sx, y));
syy = simplify(diff(sy, y));
l = simplify(u.' * sxx);
m = simplify(u.' * sxy);
n = simplify(u.' * syy);
K = simplify((l * n - m * m)/(E * G - F * F))
PK = -1/(a^2 * b^2 * c^2 * (x^2 / a^4 + y^2 / b^4 + z^2/c^4)^2);
simplify(K - PK)

Now we get from the program that:
$ K = -\frac{c^2a^6b^6}{(c^2x^2b^4+c^2y^2a^4+a^2b^4x^2+a^4b^2y^2-a^4b^4)^2} $

PK represents the $ K $ value we need to prove. The program outputs 0, meaning the $ K $ we found matches the PK we needed to reach!

Saturday, January 16, 2016

Differential Geometry and Its Application - Exercise 2.1.11

Problem:

For a surface of revolution:

$ x(u, v) = (g(u), h(u)\cos v, h(u) \sin v) $

Check that $ x_u \times x_v = h(\frac{dh}{du}, -\frac{dg}{du}\cos v, -\frac{dg}{du}\sin v) $.

Why is $ x_u \times x_v \neq 0 $ for all $ u $, $ v $?

Solution:

$ x_u = (g'(u), h'(u) \cos v, h'(u) \sin v) $
$ x_v = (0, -h(u) \sin v, h(u) \cos v) $

$ \begin{eqnarray*} & & x_u \times x_v \\ &=& \left|\begin{array}{ccc}i & j & k \\ g'(u) & h'(u) \cos v & h'(u) \sin v \\ 0 & -h(u) \sin v & h(u) \cos v\end{array}\right| \\ &=& ((h'(u)\cos v) (h(u) \cos v) - (h'(u)\sin v)(-h(u)\sin v))i + ((h'(u)\sin v) (0) - (g'(u))(h(u)\cos v)) j + ((g'(u)) (-h(u)\sin v) - (h'(u)\cos v)(0)) k \\ &=& h(u)(h'(u), -g'(u)\cos v, -g'(u)\sin v) \end{eqnarray*} $

If $ h(u) = 0 $, all bets are off because the normal is in fact $ 0 $. Assuming $ h(u) \neq 0 $, the vector can only vanish if $ h'(u) = 0 $ and $ g'(u)\cos v = g'(u)\sin v = 0 $; since $ \cos v $ and $ \sin v $ are never 0 at the same $ v $, this would force $ g'(u) = h'(u) = 0 $, contradicting the regularity of the profile curve.

Differential Geometry and Its Application - Exercise 2.1.12

Problem:

Find a patch for the catenoid obtained by revolving the catenary $ y = \cosh(x) $ about the x-axis.

Solution:

$ (u, \cosh u \cos v, \cosh u \sin v) $, $ u \in (-\infty, +\infty) $, $ v \in [-\pi, \pi] $.

We will show the patch is one-to-one by giving its inverse. Given a point $ (x, y, z) $ on the catenoid, we know $ u = x $; since $ y = \cosh x \cos v $, we get $ v = \cos^{-1}(\frac{y}{\cosh x}) $, with the sign of $ v $ determined by the sign of $ z $.
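A quick numeric round-trip of this inverse (a Python sketch; note $ \cos^{-1} $ lands in $ [0, \pi] $, so the sign of $ z $ picks the branch in $ [-\pi, \pi] $):

```python
from math import cosh, cos, sin, acos, copysign, isclose

def patch(u, v):
    # x(u, v) = (u, cosh u cos v, cosh u sin v)
    return (u, cosh(u) * cos(v), cosh(u) * sin(v))

def inverse(x, y, z):
    u = x
    v = acos(y / cosh(x))       # lands in [0, pi]
    return u, copysign(v, z)    # sign of z selects the branch in [-pi, pi]

for u0, v0 in [(0.5, 1.0), (-1.2, -2.5), (2.0, 3.0)]:
    u1, v1 = inverse(*patch(u0, v0))
    assert isclose(u0, u1) and isclose(v0, v1)
```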

Next, we will show the patch is regular. We have

$ x_u = (1, \sinh u \cos v, \sinh u \sin v) $
$ x_v = (0, -\cosh u \sin v, \cosh u \cos v) $.

$ x_u \times x_v = \left|\begin{array}{ccc} i & j & k \\ 1 & \sinh u \cos v & \sinh u \sin v \\ 0 & -\cosh u \sin v & \cosh u \cos v \end{array}\right| = (\sinh u \cosh u \cos^2 v + \sinh u \cosh u \sin^2 v, -\cosh u \cos v, -\cosh u \sin v) $

Now $ \cosh u $ is never 0, and $ \cos v $ and $ \sin v $ are never simultaneously 0, so the vector is never zero and the patch is regular.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 15

Problem:


Solution:

For part (a), we show the set is closed under addition and under multiplication by arbitrary polynomials.

Suppose $ f, g $ are in $ I(S) $ and $ h \in k[x_1, \cdots , x_n] $; then for all $ x \in S $, $ f(x) = g(x) = 0 $.

Now $ (f + g)(x) = 0 $ and also $ (hg)(x) = 0 $, so $ f + g $ and $ hg $ are both in $ I(S) $.

Therefore $ I(S) $ is an ideal.

For part (b), we do exactly the same as in Exercise 10. We have $ f(x, y) = h(x, y)(x - y) + r(x) $ and $ f(a, a) = 0 \implies r(a) = 0 $ for all $ a \neq 1 $. Since $ r $ vanishes at infinitely many points, we still have $ r(x) = 0 $, and therefore the ideal is still $ \langle x - y \rangle $.

For part (c), we already know a polynomial that vanishes on the whole integer grid is necessarily the zero polynomial, so $ I(\mathbf{Z}) $ is simply $ \{ 0 \} $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 14

Problem:


Solution:

For the purpose of practicing, let's just prove Proposition 8.

For (i), if $ V \subset W $, then any polynomial $ p \in I(W) $ must vanish on $ W $ and therefore on $ V $, so $ p \in I(V) $ and $ I(W) \subset I(V) $.

On the other hand, suppose $ I(W) \subset I(V) $. We know $ W $ is the set of common zeros of some polynomials $ g_i $, so for all $ i $, $ g_i \in I(W) \subset I(V) $; therefore every $ v \in V $ satisfies $ g_i(v) = 0 $ for all $ i $, and hence $ v \in W $.

To be honest, I cheated; I couldn't figure out the second part myself. The critical step I missed is that $ W $ is itself a variety, cut out by the $ g_i $. Next time I should try harder before I cheat, and maybe work backwards from the conclusion.

Part (a) is trivial though. We have part (i) already, so $ V = W \implies V \subset W \text { and } W \subset V $, so $ I(W) \subset I(V) \text{ and } I(V) \subset I(W) $, so we got the easy conclusion!

Part (b) is also trivial, it is simply (i) and not (ii) !

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 13

Problem:


Solution:

Another interesting problem. Now we are in the arena of finite fields.

Part (a) is simple. Write $ f(x) = x^2 - x $ and $ g(y) = y^2 - y $. We can simply check that $ f(0) = f(1) = 0 $ and $ g(0) = g(1) = 0 $.

Any polynomial $ p \in \langle x^2 - x, y^2 - y \rangle $ has the form $ a(x, y)f(x) + b(x, y)g(y) $, so $ p(0, 0) = p(0, 1) = p(1, 0) = p(1, 1) = 0 $, so $ p \in I $.

Part (b) is the interesting part. Consider $ a $ as a polynomial in $ \mathbf{F}_2[x][y] $, then the division algorithm shows that $ a(x, y) = (y^2 - y)b(x, y) + c(x)y + d(x) $.

Next, we divide $ c(x) $ and $ d(x) $ by $ (x^2 - x) $ and get:

$ c(x) = e(x)(x^2 - x) + fx + g $ and $ d(x) = h(x)(x^2 - x) + jx + k $, where $ f, g, j, k $ are constants in $ \mathbf{F}_2 $.

Putting these all back together we get:

$ \begin{eqnarray*} a(x, y) &=& (y^2 - y)b(x, y) + c(x)y + d(x) \\ &=& (y^2 - y)b(x, y) + (e(x)(x^2 - x) + fx + g)y + (h(x)(x^2 - x) + jx + k) \\ &=& (y^2 - y)b(x, y) + e(x)(x^2 - x)y + fxy + gy + h(x)(x^2 - x) + jx + k \\ &=& (y^2 - y)b(x, y) + e(x)(x^2 - x)y + h(x)(x^2 - x) + fxy + gy + jx + k \\ &=& (y^2 - y)b(x, y) + (x^2 - x)(e(x)y + h(x)) + fxy + gy + jx + k \\ \end{eqnarray*} $

Now we show the required form.

For part (c), let $ f(x, y) = axy + bx + cy + d $, we have:

$ f(0, 0) = d= 0 $
$ f(1, 0) = b + d = b + 0 = 0 $
$ f(0, 1) = c + d = c + 0 = 0 $.
$ f(1, 1) = a + b + c + d = a + 0 + 0 + 0 = 0 $.

So we showed that $ a = b = c = d = 0 $.

For part (d), we know that any polynomial $ p \in I $ can be written in the form in part (b), and part (c) guarantees $ a = b = c = d = 0 $, so $ p \in \langle x^2 - x, y^2 - y \rangle $.

For part (e), we use the division algorithm we have above:

$ x^2y + y^2x = x(y^2 - y) + (x^2y + xy) = x(y^2 - y) + (x^2 - x)y + 2xy = x(y^2 - y) + (x^2 - x)y $.

The last term disappears because $ 2 = 0 $ in $ \mathbf{F}_2 $.
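The dropped term can be exhibited numerically: over the integers the two sides differ by exactly $ 2xy $, which is $ 0 $ in $ \mathbf{F}_2 $. A small Python check (the helper name is mine):

```python
def residual(x, y):
    # (x^2 y + y^2 x) minus the division expression x(y^2 - y) + (x^2 - x)y
    return (x*x*y + y*y*x) - (x*(y*y - y) + (x*x - x)*y)

for x in range(-5, 6):
    for y in range(-5, 6):
        assert residual(x, y) == 2 * x * y  # the term that vanishes mod 2
        assert residual(x, y) % 2 == 0
```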

Differential Geometry - rational parameterization of the Cissoid of Diocles (2)

Problem:

Find a formula with only $ x $ and $ y $ describing the Cissoid of Diocles as defined in the last post.

Solution:

The key idea is that we should phase-shift the parametrization of the circle. I learned this trick when I worked on this problem.

In particular, we let $ \phi = \theta - \frac{\pi}{2} $, which allows us to write the circle as $ (r \cos \theta, r \sin \theta) = (r \cos (\phi + \frac{\pi}{2}), r \sin(\phi + \frac{\pi}{2})) = (-r\sin \phi, r\cos\phi) $.

The circle looks pretty similar, but the magic happens when we find the parametrization of the cissoid. The code is basically the same as in the last post, except for the phase shift:

syms t;
% circle of radius 0.5 centered at (0, 0.5), with the phase shift
cx = -t/(1 + t*t);
cy = 0.5*(1 - t*t)/(1 + t*t) + 0.5;
m  = cy/cx;   % slope of the line through the origin and the circle point
lx = 1/m;     % x where that line meets the tangent line y = 1
dx = lx - cx;
dy = m * dx;

This time, we get the parametrization as $ x = \frac{-t^3}{t^2 + 1} $, $ y = \frac{t^2}{t^2 + 1} $.

The magic of the phase shift originates from a geometric insight: if we start the circle at the y-axis, we obtain symmetry!

Having the simplified parametrization, now we can simply obtain $ t = \frac{-x}{y} $. Substituting that back to the $ y $ formula, we get:

$ y = \frac{(\frac{-x}{y})^2}{(\frac{-x}{y})^2 + 1} $

That simplifies to $ (x^2 + y^2)y = x^2 $, and this is the formula we sought.
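An exact check that the parametrization satisfies this implicit equation, and that $ t = -\frac{x}{y} $ recovers the parameter (a Python sketch with rational arithmetic; the function name is mine):

```python
from fractions import Fraction

def cissoid_point(t):
    # x = -t^3/(t^2 + 1), y = t^2/(t^2 + 1)
    t = Fraction(t)
    return -t**3 / (t**2 + 1), t**2 / (t**2 + 1)

for t in (Fraction(1, 2), Fraction(-3), Fraction(7, 5)):
    x, y = cissoid_point(t)
    assert (x**2 + y**2) * y == x**2   # the implicit equation
    assert -x / y == t                 # recovering the parameter
```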

So long since the last post, sometimes, ideas just kick in.

Friday, January 15, 2016

Differential Geometry and Its Application - Exercise 2.1.2

Problem:

This exercise is for those with some knowledge of topology. It will be used in Theorem 6.7.7. Suppose $ M $ is connected. Show that a subset $ Z \subseteq M $ which is both open and closed must be either $ M $ or $ \emptyset $.

Solution:

Suppose, for contradiction, that $ \emptyset \neq A \subsetneq M $ is both open and closed. Then we can write $ M = A \cup (M - A) $, where $ A \neq \emptyset $ is open and $ M - A \neq \emptyset $ is also open because $ A $ is closed.

But $ M $ is connected, and a connected set cannot be written as a disjoint union of two non-empty open sets, so we have reached a contradiction, which proves the required proposition.


Differential Geometry and Its Application - Exercise 2.1.19

Problem:

Find the standard patches for the standard cone $ z = \sqrt{x^2 + y^2} $ and the standard cylinder $ x^2 + y^2 = 1 $ which are ruling patches in the sense of Example 2.1.17 and Example 2.1.18 above. This explains why the names cone and cylinder are used for the more general patches given above.

Example 2.1.17 said a cone is $ x(u, v) = p + v\delta(u) $, where $ p $ is a fixed point.
Example 2.1.18 said a cylinder is $ x(u, v) = \beta(u) + vq $, where $ q $ is a fixed direction.

Solution:

For the standard cone, the fixed point is obviously the apex, and the ruling directions point up the side of the cone.

$ x(u, v) = (v\cos u, v\sin u, v) = (0, 0, 0) + v(\cos u, \sin u, 1) $.

For the standard cylinder, the fixed direction is the vertical direction.

$ x(u, v) = (\cos u, \sin u, v) = (\cos u, \sin u, 0) + v(0, 0, 1) $.

Thursday, January 14, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 12

Problem:


Solution:

Part (a) is almost completely analogous to Exercise 11, so it is skipped here for brevity.
Suffice it to say the answer is $ (t^2, t^3, t^4) = V(y^2 - x^3, z - x^2) $.

To complete the rest, we claim $ I(V) = \langle y^2 - x^3, z - x^2 \rangle $.

By substituting $ (t^2, t^3, t^4) $ into any polynomial $ p \in \langle y^2 - x^3, z - x^2 \rangle $, we show that $ \langle y^2 - x^3, z - x^2 \rangle \subset I(V) $.

For the reverse inclusion, the problem already hinted that we cannot use the same approach as in Exercise 11 part (b), so we try the division algorithm.

A polynomial in $ k[x, y, z] $ can be thought of as an element of $ k[x, y][z] $, a polynomial in $ z $ whose coefficients are polynomials $ f(x, y) $, so we can use division to show that for any polynomial

$ a(x, y, z) = (z - x^2)b(x, y, z) + c(x, y) $.

This is really no different from showing $ a(z) = (z - m)b(z) + c $, just simple division.

Doing this again on the remainder gives $ c(x, y) = (y^2 - x^3)d(x) + e(x)y + f(x) $, so we get

$ a(x, y, z) = (z - x^2)b(x, y, z) + (y^2 - x^3)d(x) + e(x)y + f(x) $.

Putting the parametrized curve into the equation, we get

$ a(t^2, t^3, t^4) = e(t^2)t^3 + f(t^2) = 0 $.

Here is the interesting part: we claim $ e(x) $ and $ f(x) $ are both the zero polynomial.

The reason is that if we look at $ e(t^2)t^3 + f(t^2) $ as a polynomial in $ t $, it is identically zero, so all its coefficients are 0. All the even-degree terms have coefficients coming from $ f $ and all the odd-degree terms have coefficients coming from $ e $, so all the coefficients of $ e $ and $ f $ are zero!

This proves the tricky part of this problem.

Writing a polynomial $ a(x, y, z) \in I(V) $ as $ a(x, y, z) = (z - x^2)b(x, y, z) + (y^2 - x^3)d(x) $, we have proved that $ I(V) \subset \langle y^2 - x^3, z - x^2 \rangle $.

So we conclude $ I(V) = \langle y^2 - x^3, z - x^2 \rangle $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 11

Problem:


Solution:

For part (a), it suffices to find the set of polynomials that vanish on the parametric curve.

Analogous to the twisted cubic, it is pretty obvious to me that the polynomials are:

$ y - x^3 $ and $ z - x^4 $.

Let's prove that $ V(y - x^3, z - x^4) $ is the parametric curve $ C = \{(t, t^3, t^4)\} $.

Note that any point in the parametric curve must be in $ V $ by construction, so $ C \subset V(y - x^3, z - x^4) $.

For the reverse inclusion, consider a point $ p $ outside of $ C $; it must have the form $ (t, t^3 + a, t^4 + b) $ where $ a $ and $ b $ are not both 0. Putting $ p $ into the polynomials we get $ y - x^3 = a $ and $ z - x^4 = b $, which means at least one of the polynomials does not vanish, and therefore the point cannot be in $ V $. In other words $ v \in V \implies v \in C $, so $ V(y - x^3, z - x^4) \subset C $, and we have proved that the curve is the variety $ V(y - x^3, z - x^4) $.

For part (b), we claim that $ P = \langle y - x^3, z - x^4 \rangle $ is $ I(V) $.

It is obvious that any polynomial $ p \in P $ must vanish in $ V(y - x^3, z - x^4) $.

We will use the same trick as the last problem. We claim that any polynomial $ f(x, y, z) $ can be written as $ f(x, y, z) = g(x, y, z)(y - x^3) + h(x, y, z)(z - x^4) + r(x) $.

Again, it suffices to prove this for monomials, so we have:

$ \begin{eqnarray*} & & x^ay^bz^c \\ &=& x^a((y - x^3) + x^3)^b((z - x^4) + x^4)^c \\ &=& x^a(x^{3b} + p(x, y)(y - x^3))(x^{4c} + q(x, z)(z - x^4)) \\ &=& x^{a + 3b + 4c} + m(x, y, z)(y - x^3) + n(x, y, z)(z - x^4) \end{eqnarray*} $

As we can represent a monomial, we can also represent any polynomial, so the claim is proved.

Next, for any polynomial $ f(x, y, z) \in I(V) $, we write $ f(x, y, z) = g(x, y, z)(y - x^3) + h(x, y, z)(z - x^4) + r(x) $; it needs to vanish on $ (t, t^3, t^4) $, so we substitute in and get:

$ 0 = f(t, t^3, t^4) = g(t, t^3, t^4)(t^3 - t^3) + h(t, t^3, t^4)(t^4 - t^4) + r(t) $, so $ r $ is the zero polynomial, and therefore $ I(V) \subset \langle y - x^3, z - x^4 \rangle $.

Bitcoin and Cryptocurrency Technologies Index

This page is an index of all my studies on Bitcoin

Coursera Bitcoin and Cryptocurrency Technologies Series

The course is available here.
A Chinese discussion thread talking about BitCoin is available here.

Bitcoin and Cryptocurrency Technologies - Quiz 1 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 1 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 1 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 1 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 1 Problem 5

Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 5
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 6
Bitcoin and Cryptocurrency Technologies - Quiz 2 Problem 7

Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 5
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 6
Bitcoin and Cryptocurrency Technologies - Quiz 3 Problem 7

Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 5
Bitcoin and Cryptocurrency Technologies - Quiz 4 Problem 6

Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 5
Bitcoin and Cryptocurrency Technologies - Quiz 5 Problem 6

... Quiz 6 missed ...

Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 5
Bitcoin and Cryptocurrency Technologies - Quiz 7 Problem 6

Bitcoin and Cryptocurrency Technologies - Quiz 8 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 8 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 8 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 8 Problem 4

Bitcoin and Cryptocurrency Technologies - Quiz 9 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 9 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 9 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 9 Problem 4
Bitcoin and Cryptocurrency Technologies - Quiz 9 Problem 5

Bitcoin and Cryptocurrency Technologies - Quiz 10 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 10 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 10 Problem 3

Bitcoin and Cryptocurrency Technologies - Quiz 11 Problem 1
Bitcoin and Cryptocurrency Technologies - Quiz 11 Problem 2
Bitcoin and Cryptocurrency Technologies - Quiz 11 Problem 3
Bitcoin and Cryptocurrency Technologies - Quiz 11 Problem 4

Tuesday, January 12, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 10

Problem:


Solution:

First, let's look at what $ V(x - y) $ is: it is the set of points where $ x - y = 0 $, which is simply the straight line $ (t, t) $.

Next, we look at what $ \langle x - y \rangle $ is: it is the set of all polynomials with $ (x - y) $ as a factor, so every element vanishes on the straight line, and we have shown $ \langle x - y \rangle \subset I(V(x - y)) $.

For the reverse inclusion, we consider the polynomials in $ I(V(x - y)) $; they must vanish on $ (t, t) $. The challenge is to show any such polynomial must have $ (x - y) $ as a factor.

In order to solve the challenge, we need this result:

We claim that any polynomial $ f(x, y) $ can be written as $ f(x, y) = h(x, y)(x - y) + r(x) $.

To prove the claim, it suffices to show that the trick works when $ f(x, y) $ is a monomial. For general polynomials we just add the terms up.

So for a general monomial, we see that we can indeed express it in the claim form as follow:

$ x^a y^b = x^a(x - (x - y))^b = x^a(x^b + c(x, y)(x-y)) = x^{a+b} + x^ac(x, y)(x - y) $.
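Since $ x^a y^b - x^{a+b} = x^a(y^b - x^b) $ and $ y^b - x^b $ is divisible by $ y - x $, the remainder really is $ x^{a+b} $. A quick integer sanity check (divisibility of the polynomial implies divisibility at every integer point; the helper name is mine):

```python
def remainder_term_check(x, y, a, b):
    # x^a y^b - x^(a+b) = x^a (y^b - x^b) is divisible by (x - y)
    return (x**a * y**b - x**(a + b)) % (x - y) == 0

for x in range(-4, 5):
    for y in range(-4, 5):
        if x != y:
            for a in range(3):
                for b in range(4):
                    assert remainder_term_check(x, y, a, b)
```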

Armed with the claim, we write any polynomial $ f(x, y) \in I(V) $ as $ f(x, y) = h(x, y)(x - y) + r(x) $.

It must vanish on $ (t, t) $, so $ h(t, t)(t - t) + r(t) = 0 \implies r(t) = 0 $ for all $ t $, and since $ k $ is infinite, $ r = 0 $.

Therefore we proved any polynomial $ f(x, y) \in I(V) \implies f(x, y) \in \langle x - y \rangle $.



Below is my previous, wrong proof, kept for reference as a mistake I made.

An easy fact is that such a polynomial must vanish at $ (0, 0) $, so it has no constant term, and any such polynomial can be written as

$ f(x, y) = x a(x, y) + y b(x, y) $.

Next, we substitute $ (t, t) $ and get $ a(t, t) = -b(t, t) $ whenever $ t \ne 0 $.

So we have two polynomials that agree on infinitely many points (as the field $ k $ is infinite). The two polynomials must be equal, so we have $ a(x, y) = -b(x, y) $.

Putting this back in, we have $ f(x, y) = x a(x, y) - y a(x, y) = (x - y) a(x, y) $, so we have "proved" that any such polynomial must have $ (x - y) $ as a factor, i.e. $ I(V(x - y)) \subset \langle x - y \rangle $.

The key mistake is the claim that the two polynomials must be equal. In general, two polynomials in two or more variables can be different even when they agree on infinitely many points.

See:
http://math.stackexchange.com/questions/623981/check-that-two-function-fx-y-and-gx-y-are-identical

This is a good lesson learned.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 9

Problem:


Solution:

The parameterization of the twisted cubic is $ (t, t^2, t^3) $, so all points in $ V $ must be of that form.

Now we test such points on the polynomial $ y^2 - xz = (t^2)^2 - (t)(t^3) = t^4 - t^4 = 0 $, so the polynomial vanishes on all points in the variety, so $ y^2 - xz \in I(V) $.

For part (b), the key observation is that we have $ xz $, so we multiply the second polynomial by $ x $; the rest just follows:

$ (y)(y - x^2) - (x)(z - x^3) = (y^2 - xz) - x^2 y  + x^4 = (y^2 - xz) - x^2(y - x^2) $.

So we just put it back on the left-hand side and get our answer:

$ (x^2 + y)(y - x^2) - (x)(z - x^3) = (y^2 - xz) $.
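Both parts can be confirmed mechanically; a quick check, assuming SymPy is available:

```python
# SymPy check (SymPy assumed available) of both parts of the exercise.
from sympy import symbols, expand, simplify

x, y, z, t = symbols('x y z t')

# Part (a): y^2 - xz vanishes on the twisted cubic (t, t^2, t^3).
assert simplify((y**2 - x*z).subs({x: t, y: t**2, z: t**3})) == 0

# Part (b): the combination found above equals y^2 - xz.
lhs = (x**2 + y)*(y - x**2) - x*(z - x**3)
assert expand(lhs - (y**2 - x*z)) == 0
```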

Monday, January 11, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 8

Problem:


Solution:

For part (a), if $ f^m \in I(V) $, then for all points $ p $ in $ V $, $ f^m(p) = 0 \implies f(p) = 0 \implies f \in I(V) $. Therefore $ I(V) $ is radical.

For part (b), $ x^2 \in \langle x^2, y^2 \rangle $ but $ x \notin \langle x^2, y^2 \rangle $. This shows that $ \langle x^2, y^2 \rangle $ is not radical and therefore cannot be the ideal of a variety.
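One way to see the membership claims computationally, as a sketch assuming SymPy is available: $ \{x^2, y^2\} $ is already a Gröbner basis, so remainders decide ideal membership.

```python
# SymPy sketch (SymPy assumed available): reduce modulo {x^2, y^2}, which is
# already a Groebner basis, so the remainder decides ideal membership.
from sympy import symbols, reduced

x, y = symbols('x y')
_, r1 = reduced(x**2, [x**2, y**2], x, y)
_, r2 = reduced(x, [x**2, y**2], x, y)
assert r1 == 0   # x^2 is in the ideal
assert r2 == x   # x is not: it is its own (nonzero) remainder
```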

Looking forward to the Nullstellensatz, what a long word to type ...

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 7

Problem:


Solution:

Recall the ideal of a variety is the set of all polynomials that vanish on $ V $, but what is $ V $ as a point set?

We claim $ V(x^n, y^m) = \{(0,0)\} $. This is actually pretty obvious, since $ (0, 0) $ is the only solution of $ x^n = y^m = 0 $.

So the problem becomes, what is the set of all polynomials that vanish on $ \{(0,0)\} $?

For one thing, it must not have a constant term, otherwise it would not vanish at $ (0,0) $. With that, we pick all the terms containing $ x $ and factor out $ x $; every remaining term must then contain $ y $, so we factor out $ y $. We get

$ f(x, y) \in I(V(x^n, y^m)) \implies f(x, y) = a(x, y) x + b(x, y) y \in \langle x, y \rangle $, which shows $ I(V(x^n, y^m)) \subset \langle x, y \rangle $.

The other inclusion is just as easy: any $ f(x, y) = a(x, y) x + b(x, y) y \in \langle x, y \rangle $ satisfies $ f(0, 0) = 0 $, so $ f \in I(V(x^n, y^m)) $.

Therefore we proved $ I(V(x^n, y^m)) = \langle x, y \rangle $.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 6

Problem:


Solution:

Another long problem.

For part (a), let's review what is a vector space first.

A vector space is a set of vectors such that we can add these vectors together and do 'scalar' multiplication, where the scalars are elements of a field $ k $; in this case we say it is a vector space over $ k $.

To see how $ k[x] $ can be interpreted as a vector space: we already know the scalars are the elements of $ k $, so the vectors must be the powers of $ x $, and therefore the ideal is a subspace with the 0th power missing.

To show that it must have an infinite basis, we simply note that there is no way to construct $ x^n $ as a linear combination of $ x^k $ with $ k < n $; thus a basis must be infinite.

For part (b), it is trivial. So trivial that I am not sure if I will remember what I meant, so let's be specific. The form of the solution is:

$ c_1 x + c_2 y  = 0 $

So we pick the coefficients $ c_1 = y $ and $ c_2 = -x $; neither of them is zero and yet the result is 0.

Part (c) is equally trivial. We just simply do exactly the same as above:

$ \sum\limits_{i = 1}^{s} c_i f_i = 0 $

So we pick $ c_1 = f_2 $, $ c_2 = -f_1 $ and $ c_k = 0 $ for other $ k $.

For part (d), we need to be a little more creative, the two representations are

$ \begin{eqnarray*} x^2 + xy + y^2 &=& (x + y)(x) &+& (y)(y) \\ x^2 + xy + y^2 &=& (x)(x) &+& (x + y)(y) \end{eqnarray*} $

So there you go, two representations of the same polynomial.
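The two representations can be confirmed by expanding; a quick check, assuming SymPy is available:

```python
# SymPy check (SymPy assumed available): both representations expand to
# the same polynomial x^2 + xy + y^2.
from sympy import symbols, expand

x, y = symbols('x y')
target = x**2 + x*y + y**2
rep1 = (x + y)*x + y*y
rep2 = x*x + (x + y)*y
assert expand(rep1 - target) == 0
assert expand(rep2 - target) == 0
```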

For part (e), it suffices to show three facts:

$ \langle x + x^2, x^2 \rangle = \langle x \rangle $
$ \langle x + x^2 \rangle \ne \langle x \rangle $
$ \langle x^2 \rangle \ne \langle x \rangle $

All of them are pretty obvious.
$ x = (1)(x + x^2) + (-1)(x^2) \in \langle x + x^2, x^2 \rangle $, this implies $ \langle x \rangle \subset \langle x^2 + x, x^2 \rangle $.
$ x^2 = (x)(x) \in \langle x \rangle $ and $ x + x^2 = (1 + x)(x) \in \langle x \rangle $, which implies $ \langle x + x^2, x^2 \rangle \subset \langle x \rangle $.
Together we have shown $ \langle x + x^2, x^2 \rangle = \langle x \rangle $

Note that for the other two statements, any nonzero polynomial in $ \langle x + x^2 \rangle $ or $ \langle x^2 \rangle $ has degree at least 2, so neither ideal can contain the polynomial $ x $, which is in $ \langle x \rangle $. This establishes the other two statements, and together they indicate it is a minimal basis.

In linear algebra, minimal bases always have the same size, namely the dimension of the space. But apparently this does not hold for ideals.

Differential Geometry and Its Application - Exercise 3.1.7

Problem:

Show that the principal curvatures are given in terms of $ K $ and $ H $ by

$ k_1 = H + \sqrt{H^2 - K} $ and $ k_2 = H - \sqrt{H^2 - K} $.

Solution:

We know $ H = \frac{k_1 + k_2}{2} $ and $ K = k_1 k_2 $. Therefore we can form the quadratic equation $ x^2 - 2Hx + K = 0 $ whose roots are $ k_1 $ and $ k_2 $.

Now using the quadratic formula, we get

$ k = \frac{2H \pm \sqrt{4H^2 - 4K}}{2} = H \pm \sqrt{H^2 - K} $, this is exactly what we needed.
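A quick numeric sanity check in plain Python, with arbitrary example values for the principal curvatures:

```python
# Plain-Python numeric check with hypothetical example curvature values.
import math

k1, k2 = 3.0, -0.5
H = (k1 + k2) / 2          # mean curvature
K = k1 * k2                # Gauss curvature

r1 = H + math.sqrt(H**2 - K)
r2 = H - math.sqrt(H**2 - K)
assert math.isclose(r1, max(k1, k2))
assert math.isclose(r2, min(k1, k2))
```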

Differential Geometry and Its Application - Exercise 3.1.6

Problem:

Use Euler's formula (Corollary 2.4.11) to show:

(1) The mean curvature $ H $ at a point is the average normal curvature

$ H = \frac{1}{2\pi}\int\limits_{0}^{2\pi}k(\theta)d\theta $.

(2) $ H = \frac{1}{2}(k(v_1) + k(v_2)) $ for any two unit vectors $ v_1 $ and $ v_2 $ which are perpendicular.

Solution:

The Euler formula is $ k(\theta) = \cos^2(\theta)\lambda_1 + \sin^2(\theta)\lambda_2 $.

For part (1), we simply integrate it.

Note that $ \cos^2 \theta = \frac{1 + \cos 2\theta}{2} $, so $ \int\limits_{0}^{2\pi}\cos^2 \theta \, d\theta = \pi $.
Same for $ \sin^2 \theta = \frac{1 - \cos 2\theta}{2} $, so $ \int\limits_{0}^{2\pi}\sin^2 \theta \, d\theta = \pi $.

Putting them together, we get

$ \begin{eqnarray*} & & \frac{1}{2\pi}\int\limits_{0}^{2\pi}k(\theta)d\theta \\ &=& \frac{1}{2\pi}\int\limits_{0}^{2\pi}(\cos^2(\theta)\lambda_1 + \sin^2(\theta)\lambda_2)d\theta \\ &=& \frac{1}{2\pi}(\lambda_1 \int\limits_{0}^{2\pi}\cos^2(\theta)d\theta + \lambda_2 \int\limits_{0}^{2\pi}\sin^2(\theta)d\theta) \\ &=& \frac{1}{2\pi}(\lambda_1 \pi + \lambda_2 \pi) \\ &=& \frac{\lambda_1+ \lambda_2}{2} \\ &=& H \end{eqnarray*} $

Part (2) is also simple, we have

$ \begin{eqnarray*} & & \frac{1}{2}(k(v_1) + k(v_2)) \\ &=& \frac{1}{2}(k(\theta) + k(\theta + \frac{\pi}{2})) \\ &=& \frac{1}{2}(\lambda_1 \cos^2\theta + \lambda_2 \sin^2\theta + \lambda_1 \cos^2(\theta + \frac{\pi}{2})+ \lambda_2 \sin^2(\theta + \frac{\pi}{2})) \\ &=& \frac{1}{2}(\lambda_1 \cos^2\theta + \lambda_2 \sin^2\theta + \lambda_1 \sin^2(\theta)+ \lambda_2 \cos^2(\theta)) \\ &=& \frac{1}{2}(\lambda_1 + \lambda_2) \\ &=& H \end{eqnarray*} $
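Both parts can be sanity-checked numerically in plain Python, with hypothetical example values for $ \lambda_1 $ and $ \lambda_2 $:

```python
# Plain-Python numeric check with example principal curvatures.
import math

lam1, lam2 = 2.0, -1.0
H = (lam1 + lam2) / 2

def k(theta):
    # Euler's formula
    return lam1 * math.cos(theta)**2 + lam2 * math.sin(theta)**2

# Part (1): the average of k over [0, 2*pi] is H (Riemann sum).
n = 10000
avg = sum(k(2*math.pi*i/n) for i in range(n)) / n
assert math.isclose(avg, H, abs_tol=1e-9)

# Part (2): two perpendicular unit directions, theta and theta + pi/2,
# average to H.
theta = 0.3
assert math.isclose((k(theta) + k(theta + math.pi/2)) / 2, H)
```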

Differential Geometry and Its Application - Exercise 2.2.5

Problem:

Show that the Leibniz Rule (or Product Rule) holds. That is, for $ v \in T_p(M) $, we have $ v[fg] = v[f]g(p) + f(p)v[g] $.

Solution:

Remember the definition $ v[f] = \nabla f \cdot v $, so we can write

$ v[fg] = \nabla (fg) \cdot v $

Now we can expand $ \nabla (fg) $ using the product rule, because the gradient is really just $ n $ partial derivatives (assuming we are in $ \mathbf{R}^n $), so we can write

$ \nabla (fg) = g \nabla f + f \nabla g $

Putting it back we have:

$ \begin{eqnarray*} v[fg] &=& \nabla (fg) \cdot v \\ &=& (g \nabla f + f \nabla g) \cdot v \\ &=& (g \nabla f \cdot v + f \nabla g \cdot v) \\ &=& (g v[f] + f v[g]) \end{eqnarray*} $

So there we go.
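The identity can also be checked numerically in plain Python; the functions $ f $, $ g $, the point $ p $ and the direction $ v $ below are hypothetical examples, and the directional derivatives are finite-difference approximations, so equality is only approximate.

```python
# Plain-Python numeric check of the Leibniz rule with example data.
import math

def f(p):
    return p[0]**2 + p[1]            # example f

def g(p):
    return math.sin(p[0]) * p[1]     # example g

def dirderiv(F, p, v, h=1e-6):
    # forward-difference approximation of the directional derivative v[F] at p
    q = [p[i] + h * v[i] for i in range(len(p))]
    return (F(q) - F(p)) / h

p, v = [0.7, 1.3], [0.4, -0.9]
lhs = dirderiv(lambda q: f(q) * g(q), p, v)
rhs = dirderiv(f, p, v) * g(p) + f(p) * dirderiv(g, p, v)
assert math.isclose(lhs, rhs, rel_tol=1e-4)
```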

Differential Geometry and Its Application - Exercise 1.3.12

Problem:

If a rigid body moves along a curve $ \alpha(s) $ (which we suppose is unit speed), then the motion of the body consists of translation along $ \alpha $ and rotation about $ \alpha $. The rotation is determined by an angular velocity vector $ \omega $ which satisfies $ T' = \omega \times T $, $ N' = \omega \times N $ and $ B' = \omega \times B $. The vector $ \omega $ is called the Darboux vector. Show that $ \omega $, in terms of $ T $, $ N $ and $ B $, is given by $ \omega = \tau T + \kappa B $.

Solution:

$ T $, $ N $ and $ B $ form an orthonormal frame, so we can write:

$ \omega = aT + bN + cB $.

To make things easier later, we document the cross product table here:

$ \begin{array}{c|ccc} \times & T & N & B \\ \hline T & 0 & B & -N \\ N & -B & 0 & T \\ B & N & -T & 0 \end{array} $

$ \begin{eqnarray*} \kappa N = T' &=& \omega \times T &=& (aT + bN + cB) \times T &=& -bB + cN \\ -\kappa T + \tau B = N' &=& \omega \times N &=& (aT + bN + cB) \times N &=& aB - cT \\ -\tau N = B' &=& \omega \times B &=& (aT + bN + cB) \times B &=& -aN + bT \end{eqnarray*} $

Remembering again that $ T $, $ N $, $ B $ is an orthonormal frame, comparing coefficients gives $ a = \tau $, $ b = 0 $ and $ c = \kappa $, so $ \omega = \tau T + \kappa B $.
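Since only the frame algebra matters here, we can check the result with a concrete right-handed orthonormal frame in plain Python; the standard basis and the values of $ \kappa $ and $ \tau $ below are arbitrary examples.

```python
# Plain-Python check: with T, N, B the standard basis and example kappa, tau,
# omega = tau*T + kappa*B reproduces the Frenet equations via cross products.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def mul(c, a):
    return tuple(c * x for x in a)

T, N, B = (1, 0, 0), (0, 1, 0), (0, 0, 1)
kappa, tau = 2.0, 0.5
omega = add(mul(tau, T), mul(kappa, B))

assert cross(omega, T) == mul(kappa, N)                     # T' = kappa N
assert cross(omega, N) == add(mul(-kappa, T), mul(tau, B))  # N' = -kappa T + tau B
assert cross(omega, B) == mul(-tau, N)                      # B' = -tau N
```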

Sunday, January 10, 2016

Differential Geometry and Its Application - Exercise 2.3.9

Problem:

For the cone $ x(u, v) = (v \cos u, v \sin u, v) $, compute the Gauss map and its derivative. Estimate the amount of area the image of the Gauss map takes up on the sphere.

Solution:

We compute the Gauss map step by step; again, we start with the tangent vectors:

$ x_u = (-v\sin u, v \cos u, 0) $
$ x_v = (\cos u, \sin u, 1) $.

Then we compute the normal vector

$ x_u \times x_v = \left|\begin{array}{ccc}i & j & k \\ -v\sin u & v \cos u & 0 \\ \cos u & \sin u & 1\end{array}\right| = (v\cos u, v\sin u, -v)$

This makes intuitive sense: the normal vector points radially outward and downward (for $ v > 0 $).
Also note that when $ v = 0 $, the tangent vector $ x_u $ is $ 0 $ and therefore the surface is not regular there; we will ignore that single point.

The unit normal vector is

$ \frac{1}{\sqrt 2}(\cos u, \sin u, -1) $.

So this is the Gauss map.

To compute the 'derivative', again, it is helpful to think of the Gauss map as a function from $ (x, y, z) $ to the unit sphere, so we can compute the $ 3 \times 3 $ Jacobian matrix.

$ \frac{\partial G_x}{\partial x} = \frac{\partial G_x}{\partial u}\frac{\partial u}{\partial x} = \frac{\partial G_x}{\partial u}\frac{1}{\frac{\partial x}{\partial u}} = \frac{-1}{\sqrt 2}\sin u \frac{1}{-v \sin u} = \frac{1}{\sqrt 2 v} $
$ \frac{\partial G_x}{\partial y} = \frac{\partial G_x}{\partial u}\frac{\partial u}{\partial y} = \frac{\partial G_x}{\partial u}\frac{1}{\frac{\partial y}{\partial u}} = \frac{-1}{\sqrt 2}\sin u \frac{1}{v \cos u} = \frac{-\tan u}{\sqrt 2 v} $
$ \frac{\partial G_x}{\partial z} = \frac{\partial G_x}{\partial u}\frac{\partial u}{\partial z} = \frac{\partial G_x}{\partial u}\frac{1}{\frac{\partial z}{\partial u}} = \infty $

Something seems wrong with the last line. We know that the derivative of the Gauss map is the Weingarten map, which should exist. Why am I getting an infinity there?

Alternatively, we can compute the Weingarten map using a shortcut. We know $ x_u[f] = f_u $, so we know:

$ S(x_u) = -\nabla_{x_u} U = -\frac{1}{\sqrt 2}(\frac{\partial \cos u}{\partial u}, \frac{\partial \sin u}{\partial u}, \frac{\partial (-1)}{\partial u}) = \frac{1}{\sqrt 2}(\sin u, -\cos u, 0) = -\frac{1}{\sqrt 2 v} x_u $.

$ S(x_v) = -\nabla_{x_v} U = -\frac{1}{\sqrt 2}(\frac{\partial \cos u}{\partial v}, \frac{\partial \sin u}{\partial v}, \frac{\partial (-1)}{\partial v}) = (0, 0, 0) $.

As for the image of the Gauss map, it is the circle at height $ z = -\frac{1}{\sqrt 2} $ on the unit sphere, so it takes up no area on the surface of the sphere.

Differential Geometry and Its Application - Exercise 2.2.14

Problem:

Let $ M $ be the cylinder $ x^2 + y^2 = R^2 $ parametrized by $ x(u, v) = (R\cos u, R\sin u, v) $. Show that the shape operator on $ M $ is described on a basis by $ S(x_u) = -\frac{1}{R}x_u $ and $ S(x_v) = 0 $. Therefore, in the u-direction the cylinder resembles a sphere and in the v-direction a plane. Of course, intuitively, this is exactly right. Why?

Solution:

Let's compute the shape operator step by step, we start with the tangent vectors:

$ x_u = (-R\sin u, R \cos u, 0 ) $.
$ x_v = (0, 0, 1) $.

Then we compute the normal vector

$ x_u \times x_v = \left|\begin{array}{ccc}i & j & k \\ -R\sin u & R \cos u & 0 \\ 0 & 0 & 1\end{array}\right| = (R\cos u, R\sin u, 0)$

Intuitively that makes sense: the normal vector points radially outward.

The unit normal vector is therefore $ (\cos u, \sin u, 0) $.

Next, we compute the directional derivatives:

$ v[\cos u] = \nabla \cos u \cdot v $.
$ v[\sin u] = \nabla \sin u \cdot v $.
$ v[0] = \nabla 0 \cdot v = 0 $.

At this point, it is useful to remind ourselves that when we talk about the gradient operator $ \nabla $, we are thinking the unit normal vector as a function of $ (x, y, z) $. Now $ \cos u = \frac{x}{R} $ and $ \sin u = \frac{y}{R} $, so the gradients can be computed easily as:

$ v[\cos u] = \nabla \frac{x}{R} \cdot v = (\frac{1}{R}, 0, 0) \cdot v = \frac{v_x}{R} $
$ v[\sin u] = \nabla \frac{y}{R} \cdot v = (0, \frac{1}{R}, 0) \cdot v = \frac{v_y}{R} $

Therefore the Weingarten map is

$ S_p(v) = -(\frac{1}{R}v_x, \frac{1}{R}v_y, 0) = \left(\begin{array}{ccc}\frac{-1}{R} & 0 & 0 \\ 0 & \frac{-1}{R} & 0 \\ 0 & 0 & 0\end{array}\right)\left(\begin{array}{c}v_x \\ v_y \\ v_z \end{array}\right) $.

This represents the map as a 3-dimensional transform, but we also know the Weingarten map takes tangent vectors to tangent vectors, so we compute (simply by substituting) and get

$ S_p(x_u) = -\frac{1}{R}x_u $.
$ S_p(x_v) = 0 $.

As for the "why": a u-curve on the cylinder is a circle of radius $ R $, which bends exactly like a sphere of radius $ R $, while a v-curve is a straight line, which bends like a plane, that is, not at all.
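The whole computation can be repeated symbolically; a sketch, assuming SymPy is available:

```python
# SymPy sketch (SymPy assumed available): differentiate the unit normal of the
# cylinder along u and v and compare with -x_u/R and 0.
from sympy import symbols, cos, sin, diff, simplify, Matrix

u, v, R = symbols('u v R', positive=True)
X = Matrix([R*cos(u), R*sin(u), v])   # the parametrization
U = Matrix([cos(u), sin(u), 0])       # the unit normal computed above

S_xu = -diff(U, u)   # shape operator applied to x_u
S_xv = -diff(U, v)   # shape operator applied to x_v

assert simplify(S_xu + diff(X, u)/R) == Matrix([0, 0, 0])  # S(x_u) = -(1/R) x_u
assert S_xv == Matrix([0, 0, 0])                           # S(x_v) = 0
```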


Differential Geometry and Its Application - Exercise 1.2.2

Problem:

Recall that the arclength of a curve $ \alpha : [a, b] \to \mathbf{R}^3 $ is given by $ L(\alpha) = \int | \alpha'(t)| dt $. Let $ \beta(r) : [c, d] \to \mathbf{R}^3 $ be a reparametrization of $ \alpha $ defined by taking a map $ h : [c, d] \to [a, b] $ with $ h(c) = a, h(d) = b $ and $ h'(r) \ge 0 $ for all $ r \in [c, d] $. Show that the arclength does not change under this type of reparametrization.

Solution:

Intuitively, arclength of a curve should not change because we parametrize it differently. To show that, let's compute the arclength of $ \beta $.

First, we notice $ \beta(r) = \alpha(h(r)) $. By chain rule, we know $ \frac{d\beta}{dr} = \frac{d\alpha}{dh}\frac{dh}{dr} $

$ \begin{eqnarray*} & & \int\limits_{c}^{d}{|\beta'(r)|dr} \\ &=& \int\limits_{c}^{d}{|\frac{d\alpha}{dh}\frac{dh}{dr}|dr} \\ &=& \int\limits_{c}^{d}{|\frac{d\alpha}{dh}|\frac{dh}{dr}dr} & & (\text{We are using } h'(r) \ge 0 \text{ here.}) \\ &=& \int\limits_{a}^{b}{|\frac{d\alpha}{dh}|dh} \end{eqnarray*} $

So the arclength is unchanged after reparametrization!
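The invariance can be illustrated numerically in plain Python; the helix and the reparametrization $ h(r) = r^2 $ below are hypothetical examples.

```python
# Plain-Python numeric illustration with an example curve and reparametrization.
import math

def speed_alpha(t):
    # |alpha'(t)| for the helix alpha(t) = (cos t, sin t, t): constant sqrt(2)
    return math.sqrt(math.sin(t)**2 + math.cos(t)**2 + 1)

def integrate(f, a, b, n=10000):
    # midpoint Riemann sum
    h = (b - a) / n
    return sum(f(a + (i + 0.5)*h) for i in range(n)) * h

L1 = integrate(speed_alpha, 0.0, 4.0)
# Reparametrize by h(r) = r^2 on [0, 2] (h' = 2r >= 0), so
# |beta'(r)| = |alpha'(h(r))| * h'(r).
L2 = integrate(lambda r: speed_alpha(r**2) * 2*r, 0.0, 2.0)

assert math.isclose(L1, L2, rel_tol=1e-9)
assert math.isclose(L1, 4.0 * math.sqrt(2), rel_tol=1e-9)
```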

Differential Geometry and Its Application - Exercise 1.1.13

Problem:

Suppose a circle of radius $ a $ sits on the x-axis making contact at $ (0, 0) $. Let the circle roll along the positive x-axis. Show that the path $ \alpha $ followed by the point originally in contact with the x-axis is given by:

$ \alpha(t) = (a(t - \sin t), a(1 - \cos t)) $

where $ t $ is the angle formed by the (new) point of contact with the axis, the center and the original point of contact. This curve is called a cycloid.

Solution:

Once we have the diagram, it is a simple matter to argue the formula:



As we can see, the new point of contact has x-coordinate $ at $ because the circle has rolled through $ t $ radians. Now the base of the triangle is $ a \sin t $, so the x-coordinate of the point is $ at - a\sin t = a(t - \sin t) $.

For the y-coordinate, we see the height of the triangle is $ a \cos t $, so the y-coordinate is $ a - a\cos t = a(1 - \cos t) $.

So we have proved the parametric formula for the cycloid.
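A quick plain-Python sanity check of the formula at a few landmark angles; the radius $ a $ is a hypothetical example value.

```python
# Plain-Python sanity check of the cycloid formula at landmark angles.
import math

a = 1.5

def alpha(t):
    return (a * (t - math.sin(t)), a * (1 - math.cos(t)))

# t = 0: the marked point starts at the origin.
assert alpha(0.0) == (0.0, 0.0)

# t = pi: the point is at the top of the circle, height 2a.
x, y = alpha(math.pi)
assert math.isclose(x, math.pi * a) and math.isclose(y, 2 * a)

# t = 2*pi: one full revolution, back on the axis at x = 2*pi*a.
x, y = alpha(2 * math.pi)
assert math.isclose(x, 2 * math.pi * a) and math.isclose(y, 0.0, abs_tol=1e-12)
```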

Saturday, January 9, 2016

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 5

Problem:


Solution:

This is easy.

In Exercise 3, we showed $ \langle x + xy, y + xy, x^2, y^2 \rangle = \langle x, y \rangle $.
In Exercise 4, we showed that if $ \langle f_1, \cdots, f_s \rangle = \langle g_1, \cdots, g_t \rangle $, then $ \mathbf{V}(f_1, \cdots, f_s) = \mathbf{V}(g_1, \cdots, g_t) $.

This is just applying the two facts we have just proved.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 4

Problem:



Solution:

Consider a point $ \vec{x} \in \mathbf{V}(f_1, \cdots, f_s) $, we know $ f_1(\vec{x}) = \cdots = f_s(\vec{x}) = 0 $.

Because $ \langle f_1, \cdots, f_s \rangle = \langle g_1, \cdots, g_t \rangle $, we can write $ g_k = h_{k,1}f_1 + \cdots + h_{k,s}f_s $ for each $ k \in [1, t] $.

So $ g_k(\vec{x}) = h_{k,1}f_1(\vec{x}) + \cdots + h_{k,s}f_s(\vec{x}) = 0 $, so the point $ \vec{x} \in \mathbf{V}(g_1, \cdots, g_t) $.

We have just shown $ \mathbf{V}(f_1, \cdots, f_s) \subset \mathbf{V}(g_1, \cdots, g_t) $.

By symmetry, we also know $ \mathbf{V}(g_1, \cdots, g_t) \subset \mathbf{V}(f_1, \cdots, f_s) $, so the two varieties are equal.

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 3

Problem:


Solution:

Let's discuss how we prove two finitely generated ideals are equal. By the previous problem, we can show one finitely generated ideal $ I $ is a subset of another ideal $ J $ by simply showing that each basis element of $ I $ lies in $ J $.

Therefore, to prove two finitely generated ideals are equal, we simply check that each of the two bases is contained in the other ideal. The rest is just algebra.

Part (a)

$ \begin{eqnarray*} x + y &=& 1x + 1y & \in & \langle x, y \rangle \\ x - y &=& 1x + (-1)y & \in & \langle x, y \rangle \end{eqnarray*} $

Therefore $ \langle x + y, x - y \rangle \subset \langle x, y \rangle $.

$ \begin{eqnarray*} x &=& \frac{1}{2}(x + y) + \frac{1}{2}(x - y) & \in & \langle x + y, x - y \rangle \\ y &=& \frac{1}{2}(x + y) + \frac{-1}{2}(x - y) & \in & \langle x + y, x - y \rangle \end{eqnarray*} $

Therefore $ \langle x, y \rangle \subset \langle x + y, x - y \rangle $.
Together we proved $ \langle x + y, x - y \rangle = \langle x, y \rangle $

Part (b)

$ \begin{eqnarray*} x + xy &=& (1)x + (x)y & \in & \langle x, y \rangle \\ y + xy &=& (y)x + (1)y & \in & \langle x, y \rangle \\ x^2 &=& (x)x + (0)y & \in & \langle x, y \rangle \\ y^2 &=& (x)x + (y)y & \in & \langle x, y \rangle \\ \end{eqnarray*} $

Therefore $ \langle x + xy, y + xy, x^2, y^2 \rangle \subset \langle x, y \rangle $.

$ \begin{eqnarray*} x &=& (1)(x+xy) + (-x)(y + xy) + (y)(x^2) + (0)(y^2) & \in & \langle x + xy, y + xy, x^2, y^2 \rangle \\ y &=& (-y)(x+xy) + (1)(y + xy) + (0)(x^2) + (x)(y^2) & \in & \langle x + xy, y + xy, x^2, y^2 \rangle \end{eqnarray*} $

Therefore $ \langle x, y \rangle \subset \langle x + xy, y + xy, x^2, y^2 \rangle $.
Together we proved $ \langle x + xy, y + xy, x^2, y^2 \rangle = \langle x, y \rangle $

Part (c)

$ \begin{eqnarray*} 2x^2 + 3y^2 - 11 &=& (2)(x^2 - 4) + (3)(y^2 - 1) & \in & \langle x^2 - 4, y^2 - 1 \rangle \\ x^2 - y^2 - 3 &=& (1)(x^2 - 4) + (-1)(y^2 - 1) & \in & \langle x^2 - 4, y^2 - 1 \rangle \end{eqnarray*} $

Therefore $ \langle 2x^2 + 3y^2 - 11, x^2 - y^2 - 3 \rangle \subset \langle x^2 - 4, y^2 - 1 \rangle $.

$ \begin{eqnarray*} x^2 - 4 &=& (\frac{1}{5})(2x^2 + 3y^2 - 11) + (\frac{3}{5})(x^2 - y^2 - 3) & \in & \langle 2x^2 + 3y^2 - 11, x^2 - y^2 - 3 \rangle \\ y^2 - 1 &=& (\frac{1}{5})(2x^2 + 3y^2 - 11) + (\frac{-2}{5})(x^2 - y^2 - 3) & \in & \langle 2x^2 + 3y^2 - 11, x^2 - y^2 - 3 \rangle \end{eqnarray*} $

Therefore $ \langle x^2 - 4, y^2 - 1 \rangle \subset \langle 2x^2 + 3y^2 - 11, x^2 - y^2 - 3 \rangle $.
Together we proved $ \langle 2x^2 + 3y^2 - 11, x^2 - y^2 - 3 \rangle = \langle x^2 - 4, y^2 - 1 \rangle $
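The fraction bookkeeping in part (c) is easy to get wrong, so here is a check of all four representations, assuming SymPy is available:

```python
# SymPy check (SymPy assumed available) of the part (c) algebra: each generator
# of one ideal is the stated combination of the other ideal's generators.
from sympy import symbols, expand, Rational

x, y = symbols('x y')
f1, f2 = 2*x**2 + 3*y**2 - 11, x**2 - y**2 - 3
g1, g2 = x**2 - 4, y**2 - 1

assert expand(2*g1 + 3*g2 - f1) == 0
assert expand(g1 - g2 - f2) == 0
assert expand(Rational(1, 5)*f1 + Rational(3, 5)*f2 - g1) == 0
assert expand(Rational(1, 5)*f1 - Rational(2, 5)*f2 - g2) == 0
```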

UTM Ideals Varieties and Algorithm - Chapter 1 Section 4 Exercise 2

Problem:


Solution:

(i) $ \implies $ (ii)

If $ f_i \in I $, $ \forall i $, then $ \sum\limits_{i = 1}^{s}h_i f_i \in I $ because $ I $ is an ideal; therefore $ \langle f_1, \cdots, f_s \rangle \subset I $.

(ii) $ \implies $ (i)

If $ \langle f_1, \cdots f_s \rangle \subset I $, then $ \sum\limits_{i = 1}^{s}h_i f_i \in \langle f_1, \cdots f_s \rangle \subset I $ because $ \langle f_1, \cdots f_s \rangle $ is an ideal. For $ k \in [1, s] $, set $ h_k = 1 $ and $ h_i = 0 $ for all $ i \ne k $, we get (i).