Instructions

Submission is through GradeScope as a PDF.

Please be considerate of the grader:

  • ensure your answers are legible;
  • if you scan or photograph your handwritten work, adjust your settings so that your handwriting is more visible than the lines on the paper;
  • when you submit to GradeScope, mark the answers for each problem you are submitting: this helps greatly.

Teamwork is encouraged but every person must turn in their own assignment. Please reference who you worked with and all external sources used. External help (not full solutions) or information is permitted but must be acknowledged. Lecture notes, OH help, course books, recorded lectures are not considered external and can be used without reference.

When solving a part of a problem you may take for granted any problem or part that appears before it, including problems on previous HW sheets (even if you didn't manage to solve them), but it must be referenced (e.g. "I use the proven fact that…" or "I use the fact from HW1P2 that…").

All statements require proof, unless specified otherwise.

By submitting this HW assignment you pledge to the UVA honor code.

1

Recall that \(T\) is a Taylor polynomial for \(f\) at \(x_0\) of degree \(d\) if there exists \(\rho\gt 0\) with \(B_\rho(x_0)\subset (a,b)\) such that one has

\begin{equation*} f(x_0+t)=T(t) + |t|^dr_{x_0}(t) \end{equation*}

for any \(t\in B_\rho(0)\) for some function \(r_{x_0}\colon B_\rho(0)\to \R\) with

\begin{equation*} \lim_{t\to 0} r_{x_0}(t) =0. \end{equation*}

1.1

Let \(P\) be a polynomial of degree at most \(d\). Show that if

\begin{equation*} \lim_{t\to 0}\frac{P(t)}{|t|^d}=0 \end{equation*}

then \(P\) is the zero polynomial.

1.1.1 Solution:

Suppose for contradiction that \(P\) is not the zero polynomial, and write \(P(t)=\sum_{k=k_0}^{d} a_{k}t^{k} \) where \(a_k\in\R\), \(a_{k_0}\neq 0\), and \(k_0\) is the smallest degree of a monomial appearing in \(P\).

For \(t\gt 0\) we have that

\begin{equation*} \frac{\sum_{k=k_0}^{d} a_{k}t^{k}}{t^d}= a_{k_0}t^{k_0-d} \Big(1+\sum_{k=k_0+1}^{d} \frac{a_{k}}{a_{k_0}}t^{k-k_0}\Big) \end{equation*}

The second factor converges to \(1\) as \(t\to 0^+\). The term \(a_{k_0}t^{k_0-d}\) either tends to \(a_{k_0}\) if \(k_0=d\) or is unbounded if \(k_0\lt d\) (we always have \(k_0\leq d\)). Thus, as \(t\to 0^+\), the expression above either converges to \(a_{k_0}\neq 0\) or diverges. In either case the limit of \(P(t)/|t|^d\) as \(t\to 0\) cannot be \(0\), contradicting our assumption. Hence \(P\) must be the zero polynomial.
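As a quick numeric sanity check of this point (not part of the proof; the polynomial \(P(t)=2t+3t^2\) and the degree \(d=2\) are arbitrary choices), one can watch the ratio \(P(t)/|t|^d\) blow up as \(t\to 0^+\):

```python
# Numeric illustration of 1.1: for a nonzero polynomial P of degree <= d,
# the ratio P(t)/|t|^d cannot tend to 0.  Here P(t) = 2t + 3t^2 and d = 2,
# so the lowest-order term gives P(t)/t^2 = 2/t + 3, which is unbounded.

def P(t):
    return 2 * t + 3 * t ** 2

d = 2
ratios = [abs(P(t)) / abs(t) ** d for t in (1e-1, 1e-3, 1e-5)]

# The ratio grows as t -> 0^+ instead of vanishing.
assert ratios[0] < ratios[1] < ratios[2]
```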

1.2

Fix a function \(f\colon(a,b)\to \R\), \(x_0\in(a,b)\), and a degree \(d\in \N\). Show using the above that there is at most one Taylor polynomial of degree \(d\) for \(f\) at \(x_0\).

1.2.1 Solution:

Take two Taylor polynomials \(T_1\) and \(T_2\) and write out the definitions

\begin{equation*} \begin{aligned}[t] & f(x_0+t)=T_1(t) + |t|^dr_{x_0,1}(t) \\ & f(x_0+t)=T_2(t) + |t|^dr_{x_0,2}(t) \end{aligned} \end{equation*}

Subtract the above two identities and obtain

\begin{equation*} T_1(t)-T_2(t)= - |t|^d\big(r_{x_0,1}(t) -r_{x_0,2}(t)\big) \end{equation*}

thus

\begin{equation*} \lim_{t\to 0}\frac{T_1(t)-T_2(t)}{|t|^d }= - \lim_{t\to 0} \big(r_{x_0,1}(t) -r_{x_0,2}(t)\big)=0 \end{equation*}

and the limit exists since it exists for the terms on the RHS by definition. Since \(T_1\) and \(T_2\) have degree at most \(d\), it follows from the previous point that \(T_1-T_2\) is the zero polynomial, i.e. \(T_1=T_2\).

1.3

Show that given a polynomial \(P\) of degree at most \(d\), its Taylor polynomial \(T_{x_0}^d\) of degree \(d\) at any point \(x_0\) is given by

\begin{equation*} T_{x_0}^d(t) = P(x_0+t) \end{equation*}

1.3.1 Solution:

Notice that if \(P\) is a polynomial of degree at most \(d\), then \(t\mapsto P(x_0+t)\) is also a polynomial of degree at most \(d\). Setting \(T(t):=P(x_0+t)\), we clearly have

\begin{equation*} P(x_0+t) = T(t) + |t|^d\cdot 0 \end{equation*}

so the definition of a Taylor polynomial is satisfied with the constant error term \(r_{x_0}\equiv 0\). By the uniqueness proven in the previous point, \(T_{x_0}^d(t)=P(x_0+t)\).

1.4

Using examples from the previous HW, find a function \(f\colon(-1,1)\to \R\) that has a Taylor polynomial of order \(100\) at \(0\) but has no second or higher derivatives at \(0\).

1.4.1 Solution:

Take \(f(x)= x^{200}\sin(x^{-500})\) for \(x\neq 0\) and \(f(0)=0\). By the previous HW assignment, \(f'(x)\) exists on \((-1,1)\) but is discontinuous at (and unbounded around) \(x=0\). This shows that \(f''(0)\) does not exist, because otherwise \(f'\) would have to be continuous at \(x=0\).

However notice that \(0\) is a Taylor polynomial of order \(100\) for \(f\) at \(0\) since we have

\begin{equation*} \lim_{t\to 0 } \frac{|f(t)-f(0)|}{|t|^{100}} =0 \end{equation*}

that can be seen by sandwiching:

\begin{equation*} \frac{|f(t)-f(0)|}{|t|^{100}} = |t|^{100} |\sin(t^{-500})| \leq |t|^{100}. \end{equation*}
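The sandwich bound can also be checked numerically (a sanity check only, not part of the proof; \(t\) is kept away from \(0\) so that \(t^{-500}\) stays within floating-point range):

```python
import math

# Numeric check of the sandwich bound in 1.4:
# |f(t) - f(0)| / |t|^100 = |t|^100 |sin(t^-500)| <= |t|^100.

def f(t):
    return t ** 200 * math.sin(t ** -500) if t != 0 else 0.0

for t in (0.9, 0.7, 0.5):
    ratio = abs(f(t) - f(0)) / abs(t) ** 100
    # tiny slack for floating-point rounding
    assert ratio <= abs(t) ** 100 * (1 + 1e-12)
```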

2

This exercise suggests a method for doing power expansions at \(+\infty\). The definitions below are not universal and have been written for the sake of this exercise.

Let \(f\colon \N \to \R\) and define the rate of power growth of \(f\) at \(\infty\) as

\begin{equation*} \mathrm{pow}(f):=\inf\Big\{s\colon \lim_{n} \frac{|f(n)|}{n^s}=0\Big\} \end{equation*}

and

\begin{equation*} \mathrm{pow}(f):=+\infty \end{equation*}

if no such \(s\) exists.

2.1

  • Show that if \(s'\gt \mathrm{pow}(f)\) then \(\lim_{n} \frac{|f(n)|}{n^{s'}}=0\)
  • Show that if \(s'\lt \mathrm{pow}(f)\) then \(\frac{|f(n)|}{n^{s'}}\) is unbounded

2.2

Consider the function on \(\N\) given by

\begin{equation*} f(n):= \Big(n + \sqrt{n+1}\Big)^{3/4} - \Big(n + 3\sqrt{n-1}\Big)^{3/4} \end{equation*}

Find \(\mathrm{pow}(f)\).

2.3

  • Find a power \(s_0\in\R\) and a coefficient \(a_0\in\R\) such that
\begin{equation*} \mathrm{pow}\Big(f(n)-a_0 n^{s_0}\Big) \lt \mathrm{pow}(f) \end{equation*}
  • Find \(\mathrm{pow}(f(n)-a_0 n^{s_0})\) for those \(a_0\) and \(s_0\).

2.4

  • Find a power \(s_1\in\R\) and a coefficient \(a_1\in\R\) such that
\begin{equation*} \mathrm{pow}\Big(f(n)-a_0 n^{s_0}-a_1n^{s_1}\Big) \lt \mathrm{pow}\Big(f(n)-a_0 n^{s_0}\Big) \end{equation*}
  • Find \(\mathrm{pow}\Big(f(n)-a_0 n^{s_0}-a_1n^{s_1}\Big)\) for those \(a_1\) and \(s_1\).

3

3.1

Let \(f\colon(a,b)\to \R\) be differentiable at every point of \((a,b)\) with

\begin{equation*} f(y)=f(x)+f'(x)(y-x)+ \mathrm{err}_x(y-x) \end{equation*}

for all \(x,y\in(a,b)\) with

\begin{equation*} |\mathrm{err}_x(t)| \leq C |t|^{1+\alpha} \end{equation*}

for some \(C\) independent of \(x\). Show that \(f'\) is \(\alpha\)-Hölder continuous.

Hint: Switch the role of \(x\) and \(y\).

3.1.1 Solution:

For any two points \(x,y\in(a,b)\) we have

\begin{equation*} \begin{aligned}[t] & f(y)-f(x)=f'(x)(y-x)+ \mathrm{err}_x(y-x) \\ & f(x)-f(y)=f'(y)(x-y)+ \mathrm{err}_{y}(x-y) \end{aligned} \end{equation*}

Adding the two gives

\begin{equation*} f'(x)-f'(y) = - \frac{\mathrm{err}_{y}(x-y) +\mathrm{err}_{x}(y-x)}{y-x} \end{equation*}

The condition on the error term with \(t=x-y\) and \(t=y-x\) respectively gives

\begin{equation*} |f'(x)-f'(y)| \leq \frac{|\mathrm{err}_{y}(x-y)| }{|y-x|} + \frac{|\mathrm{err}_{x}(y-x)| }{|y-x|}\leq 2 C |y-x|^{\alpha} \end{equation*}

showing that \(f'\) is \(\alpha\)-Hölder continuous. Furthermore, it also follows that \(f'\) is bounded on \((a,b)\) (even though this was not asked by the problem). Fix some \(x_0\in(a,b)\); we have

\begin{equation*} |f'(x)-f'(x_0)| \leq 2 C |x-x_0|^{\alpha}\leq 2C |b-a|^\alpha \end{equation*}

so

\begin{equation*} f'(x_0)- 2C |b-a|^\alpha\leq f'(x) \leq f'(x_0)+ 2C |b-a|^\alpha \end{equation*}
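As an illustration of the statement (not required by the problem; the function \(f(x)=x^2\) is an arbitrary choice), the exact expansion \(f(y)=f(x)+2x(y-x)+(y-x)^2\) satisfies the hypothesis with \(C=1\), \(\alpha=1\), and the proven Hölder bound can be verified on sample points:

```python
# Numeric illustration of 3.1 (not part of the proof): for f(x) = x^2 on (0, 1)
# the exact expansion f(y) = f(x) + 2x(y - x) + (y - x)^2 gives
# err_x(t) = t^2, so the hypothesis holds with C = 1, alpha = 1,
# and the proven bound reads |f'(x) - f'(y)| <= 2 C |x - y|^alpha.

C, alpha = 1.0, 1.0

def fprime(x):
    return 2 * x

pts = [0.1, 0.25, 0.5, 0.9]
for x in pts:
    for y in pts:
        err = (y - x) ** 2            # error term of the exact expansion
        assert abs(err) <= C * abs(y - x) ** (1 + alpha) + 1e-15
        # Hölder bound proved above (small slack for rounding)
        assert abs(fprime(x) - fprime(y)) <= 2 * C * abs(y - x) ** alpha + 1e-12
```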

4

4.1

Let \(f_n\colon (-1,1)\to \R\) be a sequence of \(C^1_b((-1,1);\R)\) functions such that \(f_n'\) is Cauchy in \(\|\cdot\|_{\sup}\) and \(f_n(0)\) is Cauchy in \(\R\).

Show that there exists \(f\in C^1_b((-1,1);\R)\) such that \(f_n\to f\) in \(\|\cdot\|_{C^1}\) (i.e. \(f_n\to f\) uniformly and \(f_n'\to f'\) uniformly).

Hint:

  • The function \(f\) can be found as \(f(x)=\lim_n f_n(x)\). Justify that such a limit exists by using Lagrange applied to
\begin{equation*} f_m(x)-f_n(x)= (f_m-f_n)(x)= (f_m-f_n)(x) - (f_m-f_n)(0) \;+ \; (f_m-f_n)(0) \end{equation*}
  • Using the procedure above show that \(f_n\to f\) uniformly.
  • Conclude using the fact from class about completeness of \(C^1\) and continuity of the derivative map from \(C^1_b\) to \(C^0_b\).

4.1.1 Solution:

We have that \(f_n'\) is Cauchy in \(\|\cdot\|_{\sup}\). If we show that \(f_n\) is Cauchy in \(\|\cdot\|_{\sup}\) then we have shown that \(f_n\) is Cauchy in \(\|\cdot\|_{C^1}\). Since \(C^1_b((-1,1);\R)\) is complete this proves the claim. To show that \(f_n\) is Cauchy we use the triangle inequality and Lagrange

\begin{equation*} \begin{aligned}[t] |f_m(x)-f_n(x)|& \leq |(f_m-f_n)(x) - (f_m-f_n)(0)| \;+ \; |(f_m-f_n)(0)| \\ & \leq \sup_{c\in(-1,1)} |f_n'(c)-f_m'(c)|\, |x-0| + |(f_m-f_n)(0)| \\ &\leq \|f_n'-f_m'\|_{\sup} + |(f_m-f_n)(0)| \end{aligned} \end{equation*}

where in the last step we used that \(|x|\lt 1\).

By choosing \(N=N(\epsilon)\) large enough so that both \( |(f_m-f_n)(0)|\lt \epsilon\) and \( \|f_n'-f_m'\|_{\sup}\lt \epsilon\) for all \(n,m\ge N(\epsilon)\) we conclude that

\begin{equation*} |f_m(x)-f_n(x)|\lt 2\epsilon \end{equation*}

for all \(n,m\ge N(\epsilon)\) showing that \(f_n\) is Cauchy in \(\|\cdot\|_{\sup}\) as required.
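The key estimate above can be sanity-checked numerically on a concrete sequence; the choice of \(f_n\) as the \(n\)-th Taylor partial sum of \(e^x\) below is an arbitrary illustration, not part of the problem:

```python
import math

# Numeric check of the key estimate in 4.1,
#   |f_m(x) - f_n(x)| <= ||f_m' - f_n'||_sup + |(f_m - f_n)(0)|,
# for f_n = n-th Taylor partial sum of e^x, restricted to a grid in (-1, 1).

def f(n, x):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def fprime(n, x):          # derivative of the partial sum
    return sum(k * x ** (k - 1) / math.factorial(k) for k in range(1, n + 1))

grid = [i / 100 for i in range(-99, 100)]
m, n = 8, 4
sup_deriv_gap = max(abs(fprime(m, x) - fprime(n, x)) for x in grid)
gap_at_0 = abs(f(m, 0) - f(n, 0))

for x in grid:
    assert abs(f(m, x) - f(n, x)) <= sup_deriv_gap + gap_at_0 + 1e-12
```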

4.2

Show that the above statement is false if instead of \(C^1_b((-1,1);\R)\) we consider functions in \(C^1_b(\R;\R)\).

4.2.1 Solution:

Take

\begin{equation*} f_n(x) = e^{-(x/n)^2}-1 \end{equation*}

We have that \(f_n(0)=0\) for all \(n\). Pointwise, \(f_n(x)\to 0\) since \(x/n\to 0\) and \(e^{-t^2}\to 1\) as \(t\to 0\).

The functions \(f_n\) do NOT converge uniformly to \(0\) since \(f_n(n)=e^{-1}-1\neq 0\) for every \(n\).

The derivatives \(f_n'\) converge uniformly to \(0\). To see this compute

\begin{equation*} f'_n(x)= -2\frac{x}{n^2} e^{-(x/n)^2} = \frac{1}{n} \Big(-2\frac{x}{n} e^{-(x/n)^2}\Big) = \frac{1}{n} f_1'(x/n). \end{equation*}

where in particular

\begin{equation*} f'_1(x)= -2x e^{-x^2} \end{equation*}

Notice that \(f'_1\) is bounded since \(\lim_{x\to +\infty }-2x e^{-x^2}=\lim_{x\to -\infty }-2x e^{-x^2}=0\) and \(-2xe^{-x^2}\) is continuous (a variant of a problem from Midterm 01). We thus have \(\|f'_1\|_{\sup}\lt \infty\), so \(\|f'_n\|_{\sup}\leq \frac{1}{n}\|f'_1\|_{\sup}\) and hence \(f_n'\rightrightarrows 0\).

This finishes the counterexample.
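The three claims of the counterexample can also be checked numerically (a sanity check on sample points, not a proof):

```python
import math

# Numeric check of the counterexample in 4.2: f_n(x) = exp(-(x/n)^2) - 1.
# f_n(0) = 0, the derivatives shrink uniformly like ||f_1'||_sup / n,
# but f_n(n) = e^{-1} - 1 for every n, so f_n does not converge uniformly to 0.

def f(n, x):
    return math.exp(-(x / n) ** 2) - 1

def fprime(n, x):
    return -2 * x / n ** 2 * math.exp(-(x / n) ** 2)

# sup of |f_1'| over a wide grid (|f_1'| decays at infinity)
grid = [i / 10 for i in range(-100, 101)]
sup_f1p = max(abs(fprime(1, x)) for x in grid)

for n in (1, 10, 100):
    assert f(n, 0) == 0
    # scaling identity f_n'(x) = f_1'(x/n)/n gives ||f_n'||_sup <= ||f_1'||_sup / n
    assert max(abs(fprime(n, n * x)) for x in grid) <= sup_f1p / n + 1e-12
    # uniform convergence fails: |f_n(n)| = 1 - 1/e for every n
    assert abs(f(n, n)) == abs(math.exp(-1) - 1)
```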

4.3

Let \(f_n\colon (-1,1)\to \R\) be a sequence of \(C^1((-1,1);\R)\) functions such that there exists \(M\gt 0\) with

\begin{equation*} \|f_n'\|_{\sup}\leq M\qquad n\in\N \end{equation*}

Show that if \(f_n(x)\to 0\) for \(x\) in a dense subset of \((-1,1)\) then \(f_n\to 0\) in \(\|\cdot\|_{\sup}\).

4.3.1 Solution:

Let \(D\) be the dense set on which pointwise convergence holds.

Fix \(\epsilon \gt 0\) and let \(\mc{A}_\epsilon\) be a finite \(\epsilon\)-net for \((-1,1)\) made of points in \(D\). This can be done by density: choose one point from \(D\) that is \(\epsilon/100\)-close to each of the points \(k\epsilon/100\) with \(k\) an integer, \(- 100/ \epsilon\lt k\lt 100/ \epsilon \).

For each \(z\in\mc{A}_\epsilon\) let \(N(\epsilon,z)\) be an index such that \(|f_n(z)|\lt \epsilon\) for all \(n\gt N(\epsilon,z)\). Such an index exists for each such \(z\) since \(z\in D\) and thus \(f_n(z)\to 0\). Let \(N(\epsilon)=\max_{z\in\mc{A}_\epsilon} {N(\epsilon,z)}\), which is well defined since \(\mc{A}_\epsilon\) is finite. This way \(|f_n(z)|\lt \epsilon\) for all \(n\gt N(\epsilon)\) and all \(z\in\mc{A}_\epsilon\).

Finally for each \(x\in(-1,1)\) choose \(z_x\in\mc{A}_\epsilon\) such that \(|x-z_x|\lt \epsilon\); this is possible since \(\mc{A}_\epsilon\) is an \(\epsilon\)-net.

We have by Lagrange:

\begin{equation*} \begin{aligned}[t] |f_n(x)| &\leq |f_n(x)-f_n(z_x)| +|f_n(z_x)| \\ & \leq \epsilon + |f_n(x) -f_n(z_x)| \leq \epsilon + |z_x-x| \sup_{c\in(-1,1)} |f'_n(c)| \\ & \leq \epsilon + \epsilon M \end{aligned} \end{equation*}

The RHS does not depend on \(x\) so

\begin{equation*} \|f_n\|_{\sup }\leq \epsilon(M+1) \end{equation*}

for all \(n\gt N(\epsilon)\). Since \(\epsilon\gt 0\) was arbitrary, this shows that \(f_n\to 0\) in \(\|\cdot\|_{\sup}\), as required.
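As a numeric illustration of this statement (not part of the proof; the sequence \(f_n(x)=\arctan(x/n)\) is an arbitrary choice satisfying the hypotheses with \(M=1\)):

```python
import math

# Numeric illustration of 4.3 with f_n(x) = arctan(x/n) on (-1, 1):
# |f_n'(x)| = (1/n)/(1 + (x/n)^2) <= 1, so M = 1 works, f_n -> 0 pointwise
# (in particular on the dense set of grid points), and the sup norm over
# a fine grid indeed tends to 0.

def f(n, x):
    return math.atan(x / n)

grid = [i / 1000 for i in range(-999, 1000)]
M = 1.0
sup_norms = [max(abs(f(n, x)) for x in grid) for n in (1, 10, 100)]

# derivative bound |f_n'| <= M for the sampled n
for n in (1, 10, 100):
    assert all(abs((1 / n) / (1 + (x / n) ** 2)) <= M for x in grid)

# uniform smallness: the sup norms decrease towards 0
assert sup_norms[0] > sup_norms[1] > sup_norms[2]
```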

4.4

Find a sequence \(f_n\colon (-1,1)\to \R\) of \(C^1((-1,1);\R)\) functions such that there exists \(M\gt 0\) with

\begin{equation*} \|f_n'\|_{\sup}\leq M\qquad n\in\N, \end{equation*}

and such that \(f_n(x)\to 0\) uniformly on \((-1,1)\) BUT \(f_n'\) does not converge uniformly to \(0\).

4.4.1 Solution:

Either copy the example from class or take

\begin{equation*} f_n(x) = \frac{1}{n}\sin(nx) \end{equation*}

Clearly \(\|f_n\|_{\sup}\leq 1/n\to 0\) while \(f'_n(x) = \cos(nx)\) is bounded in absolute value by \(1\), so we may take \(M=1\). Taking \(x= 0\in(-1,1)\) we have

\begin{equation*} f'_n(0)= \cos(0)=1 \end{equation*}

for every \(n\), so \(f'_n\) does not converge to \(0\) even pointwise.
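This example can be sanity-checked numerically (grid sampling on \((-1,1)\) only, not a proof):

```python
import math

# Numeric check of the example in 4.4: f_n(x) = sin(nx)/n on (-1, 1).
# ||f_n||_sup <= 1/n -> 0 and ||f_n'||_sup <= 1 (so M = 1 works),
# but f_n'(0) = cos(0) = 1 for every n, so f_n' does not tend to 0.

def f(n, x):
    return math.sin(n * x) / n

def fprime(n, x):
    return math.cos(n * x)

grid = [i / 100 for i in range(-99, 100)]
for n in (1, 5, 50):
    assert max(abs(f(n, x)) for x in grid) <= 1 / n
    assert max(abs(fprime(n, x)) for x in grid) <= 1
    assert fprime(n, 0) == 1.0
```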