\documentclass[11pt,onecolumn]{article}
\usepackage{amssymb, amsmath, amsthm,graphicx, paralist,algpseudocode,algorithm,cancel,url,color}
\usepackage{sectsty}
\usepackage{fancyvrb}
\usepackage{mathrsfs}
\usepackage{multirow}
\usepackage{hhline}
\usepackage{booktabs}
\usepackage[table]{xcolor}
\usepackage{tikz}
% \usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\usepackage{listings}
\usepackage{enumitem}
\newcommand{\bvec}[1]{\mathbf{#1}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\tx}{\tilde{x}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rn}{\R^{n\times n}}
\newcommand{\Rmn}{\R^{m\times n}}
\newcommand{\Cn}{\C^{n\times n}}
\newcommand{\Cmn}{\C^{m\times n}}
\newcommand{\cO}{\mathcal{O}}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\trace}{trace}
\DeclareMathOperator{\diag}{diag}
\sectionfont{\Large\sc}
\subsectionfont{\sc}
\usepackage[margin=1in]{geometry}
\begin{document}
\noindent
\textsc{\Large CS 6210: Homework 2}\\
Instructor: Anil Damle\\
Due: February 24, 2022\\
\subsection*{Policies}
You may discuss the homework problems freely with other students, but please
refrain from looking at their code or writeups (or sharing your own).
Ultimately, you must implement your own code and write up your own solution to
be turned in. Your solution, including plots and requested output from your code,
should be typeset and submitted via Gradescope as a PDF file. Additionally, please
submit any code written for the assignment. This can be done
either by including it in your solution as an appendix or by uploading it as a zip
file to the separate Gradescope assignment.\\
\hrule
\vspace{1cm}
\noindent
\subsection*{Question 1:}
In class we saw a backward-stability-style result for pivoted LU factorization that incorporated the growth factor $\rho.$ You may have noticed that we did not see any forward error results, i.e., we never said anything about $\tilde{L}-L$ or $\tilde{U}-U.$ In fact, the forward error can be quite large. However, often what we ultimately care about is the solution to $Ax=b.$ Devise a backward error bound for solving $Ax=b$ with non-singular $A$ using LU with partial pivoting followed by a sequence of triangular solves.\footnote{The computed solution $\tx$ solves a linear system $(A+\delta A)\tx = b$; what can you say about $\|\delta A\|/\|A\|$?} Your bound should explicitly incorporate the growth factor $\rho.$ For this problem you may assume that in exact arithmetic there are no ties in the pivoting procedure and that the unit roundoff $\mu$ is sufficiently small that the computed permutation matches the exact one.
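As a possible starting point (a sketch of standard results, not the requested bound itself), backward error statements for the individual stages of the computation take roughly the following form for the computed factors $\tilde{L},\tilde{U}$ and the computed triangular solves:
\begin{align*}
\tilde{L}\tilde{U} &= PA + E, & \|E\| &\lesssim c_1(n)\,\mu\,\|\tilde{L}\|\|\tilde{U}\|,\\
(\tilde{L}+\delta L)\tilde{y} &= Pb, & \|\delta L\| &\lesssim c_2(n)\,\mu\,\|\tilde{L}\|,\\
(\tilde{U}+\delta U)\tx &= \tilde{y}, & \|\delta U\| &\lesssim c_3(n)\,\mu\,\|\tilde{U}\|,
\end{align*}
where the $c_i(n)$ are modest polynomials in $n.$ The growth factor $\rho$ enters when bounding the entries of $\tilde{U}$ (and of $\tilde{L},$ whose entries are at most one in magnitude under partial pivoting) in terms of those of $A.$ Combining statements of this type into a single bound on $\|\delta A\|/\|A\|$ is the substance of the problem.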
\subsection*{Question 2:}
Implement LU factorizations with no, partial, and complete pivoting for $A\in\Rn$. Then use your code to address the following:
\begin{enumerate}[label=(\alph*)]
\item Demonstrate that your implementations can accurately solve well-conditioned square
non-singular linear systems $Ax=b$ given $A$ and $b,$ and that their cost scales as $\mathcal{O}(n^3).$ Clearly outline how you test this and include your results.
\item Construct test cases where without the use of pivoting your algorithm yields solutions with large relative residuals\textemdash demonstrate this happens using your code.
\item Construct a test case where partial pivoting fails to yield a good solution but complete pivoting does\textemdash demonstrate this happens using your code. Such examples exist, but are typically considered pathological and ``rare'' enough that simply using LU with partial pivoting (despite its poor worst case theoretical performance) is fine in practice.\footnote{One way to accomplish this is to appeal to the matrices that realize the worst case growth factor for LU with partial pivoting\textemdash they can be found in most of the textbooks listed for this course.}
\item Consider computing the LU factorization (all three ways) of the so-called Hilbert matrix of size $n$ (defined as $H_{ij}=1/(i+j-1)$ for $i,j=1,\ldots,n$). For moderate $n$ what do you observe about $\|LU-H\|_2$ (adding permutation matrices as applicable)? What about the accuracy of a solution to $Hx=b$ as quantified by the relative residual? Explain your observations.
\item Since the Hilbert matrix $H$ is symmetric positive definite, presumably it would be natural to use a Cholesky decomposition rather than an LU factorization. Repeat the previous problem using a built-in Cholesky decomposition routine (or implement it yourself if you are so inclined). What do you observe?
\end{enumerate}
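For parts (d) and (e), a minimal NumPy sketch of setting up the Hilbert matrix and measuring the relative residual may be a useful template; here \texttt{np.linalg.solve} stands in for your own LU-based solver, and the variable names are illustrative.

```python
import numpy as np

def hilbert(n):
    # H_ij = 1/(i+j-1) with 1-based i, j, as in the problem statement;
    # np.indices is 0-based, hence the +1 in the denominator.
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

n = 12
H = hilbert(n)
x_true = np.ones(n)
b = H @ x_true

# Stand-in for your LU-based solve (no/partial/complete pivoting).
x = np.linalg.solve(H, b)

rel_residual = np.linalg.norm(H @ x - b) / (np.linalg.norm(H) * np.linalg.norm(x))
forward_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_residual, forward_err)
```

For moderate $n$ the relative residual stays near machine precision while the forward error degrades badly; explaining why is the point of the question.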
\subsection*{Question 3:}
Given a square matrix $A\in\Rn$ and a QR factorization $A=QR,$ is such a factorization unique? (\emph{i.e.,}\ are there some other orthogonal and upper triangular factors $Q_2\neq Q$ and $R_2\neq R$ such that $A=Q_2R_2$?) Justify your response and comment on its implications from your perspective, e.g., does your answer make such factorizations less useful?
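A numerical experiment is not a justification, but it can guide your intuition. The sketch below (illustrative names, NumPy's built-in QR) examines what happens when signs are flipped in a factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q, R = np.linalg.qr(A)

# Flip the signs of some columns of Q and the corresponding rows of R.
D = np.diag([1.0, -1.0, 1.0, -1.0, 1.0])
Q2, R2 = Q @ D, D @ R

print(np.allclose(A, Q2 @ R2))            # True: product is unchanged
print(np.allclose(Q2.T @ Q2, np.eye(5)))  # True: Q2 is still orthogonal
print(np.allclose(R2, np.triu(R2)))       # True: R2 is still upper triangular
```

Note how the diagonal entries of $R_2$ compare with those of $R$; that observation is relevant to how uniqueness statements for QR are usually phrased.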
\subsection*{Question 4:}
Let $V\in\Rmn$ with $m\geq n$ have orthonormal columns. Prove the following:
\begin{enumerate}[label=(\alph*)]
\item For any $x\in\R^m,$ $VV^Tx$ is the unique closest point in the range of $V$ to $x.$
\item Prove that the orthogonal projector (also often referred to as the spectral projector) onto the range of $V$ is unique.
\end{enumerate}
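The following is not a substitute for the proofs, but a quick numerical sanity check of part (a); here $V$ is obtained from a reduced QR factorization of a random matrix, which is one convenient way to produce orthonormal columns.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 3
# Reduced QR of a random matrix gives V with orthonormal columns.
V, _ = np.linalg.qr(rng.standard_normal((m, n)))
x = rng.standard_normal(m)

p = V @ (V.T @ x)  # candidate closest point in range(V)

# Sample other points V @ c in the range: none should be closer to x.
for _ in range(1000):
    c = rng.standard_normal(n)
    assert np.linalg.norm(x - V @ c) >= np.linalg.norm(x - p) - 1e-12
print(np.linalg.norm(x - p))
```

Observing that the residual $x - VV^Tx$ is orthogonal to the range of $V$ suggests the structure a proof might exploit.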
\subsection*{Question 5:}
\begin{enumerate}[label=(\alph*)]
\item Given any $A\in\Rmn$ and $\lambda > 0$ prove that the solution to the regularized least squares problem
\begin{equation}
\min_x \|Ax-b\|_2^2+\lambda\|x\|_2^2
\label{eqn:ls}
\end{equation}
is unique.
\item In practice, we often want to solve~\eqref{eqn:ls} for many values of $\lambda.$ Say we are in a situation where $m\gg n,$ $A$ is full column rank (though potentially ill-conditioned), and we would like to avoid the use of the normal equations. Assume you are given the reduced QR factorization $A=QR.$ Devise a scheme to solve~\eqref{eqn:ls} for $T$ values of $\lambda$ (i.e., $\lambda_1,\ldots,\lambda_T$ with $\lambda_i>0$) that costs at most $\cO(mn+Tn^3).$ Critically, note that the dependence on $T$ only scales with $n$ and not $m.$
\end{enumerate}
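For validating an implementation of part (b), the normal-equations characterization $(A^TA+\lambda I)x = A^Tb$ is convenient as a reference solution, even though the problem asks you to avoid it algorithmically. The sketch below (illustrative names) checks it against an equivalent augmented least squares formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, lam = 50, 4, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Reference solution via the normal equations (verification only):
# (A^T A + lam I) x = A^T b.
x_ne = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Equivalent augmented problem: min || [A; sqrt(lam) I] x - [b; 0] ||_2.
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_aug, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

print(np.allclose(x_ne, x_aug))  # True: the two formulations agree
```

Any scheme you devise for part (b) should reproduce these solutions for each $\lambda_i$ while touching the $m$-dimensional data only once.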
\end{document}