HW9 for CS 6210
You may (and probably should) talk about the problems with each other, with the TA, and with me, providing attribution for any good ideas you might get. Your final write-up should be your own.
1. Approximate inverse
Suppose $A$ is a symmetric positive definite matrix with known spectrum $\lambda_1, \ldots, \lambda_n$. Explain how to use a least squares problem to compute
$$\phi_k = \min_{p \in \mathcal{P}_k} \|p(A) - A^{-1} \|_F$$
for a given $k < n$. Implement this in a function with the interface
frob_poly_approx_error(lambdas, k)
What is $\phi_3$ when the eigenvalues are $1, 2, 3, 5, 7, 9$?
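One way to organize the computation (a minimal sketch, not the only approach): since $A$ is symmetric, $\|p(A) - A^{-1}\|_F^2 = \sum_i (p(\lambda_i) - \lambda_i^{-1})^2$, so the minimization is an ordinary linear least squares problem in the coefficients of $p$. The monomial basis below is chosen only for brevity and may be ill-conditioned for larger $k$.

```julia
using LinearAlgebra

# Sketch: phi_k is the residual norm of a linear least squares problem
# fitting 1/lambda_i by a degree-k polynomial in lambda_i.
function frob_poly_approx_error(lambdas, k)
    n = length(lambdas)
    V = [lambdas[i]^j for i = 1:n, j = 0:k]  # Vandermonde-type basis for P_k
    b = 1.0 ./ lambdas                       # target values 1/lambda_i
    c = V \ b                                # least squares coefficients of p
    return norm(V*c - b)                     # ||p(A) - inv(A)||_F at the optimum
end
```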
2. Grandly Galerkin
Recall the problem introduced in HW 8:
$$TU + UT + h^2 W \odot U + h^2 F = 0$$
with right hand side $f_{ij} = x_{i}^2 + x_{j}^2$ and potential term $w_{ij} = x_i (1-x_i) x_j (1-x_j)$. Now we can write $W = dd^T$ where $d_i = x_i (1-x_i)$, and letting $D = \operatorname{diag}(d)$, we have
$$R(U) = TU + UT + h^2 DUD + h^2 F = 0.$$
Now consider an approximate solution $\hat{U} = QXQ^T$, where $Q \in \mathbb{R}^{n \times k}$ has orthonormal columns, determined by the Galerkin condition $Q^T R(\hat{U}) Q = 0$. We note that this is equivalent to Galerkin approximation with the basis $Q \otimes Q$ applied to the "vec"ed system
$$(T \otimes I + I \otimes T + h^2 D \otimes D) \operatorname{vec}(U) = -h^2 \operatorname{vec}(F)$$
You should be able to write a very similar system for the reduced variable $X$.
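For concreteness, writing $\tilde{T} = Q^T T Q$, $\tilde{D} = Q^T D Q$, and $\tilde{F} = Q^T F Q$, the Galerkin condition $Q^T R(\hat{U}) Q = 0$ reduces (using $Q^T Q = I$) to
$$\tilde{T} X + X \tilde{T} + h^2 \tilde{D} X \tilde{D} + h^2 \tilde{F} = 0,$$
or, after vectorizing,
$$(\tilde{T} \otimes I + I \otimes \tilde{T} + h^2 \tilde{D} \otimes \tilde{D}) \operatorname{vec}(X) = -h^2 \operatorname{vec}(\tilde{F}).$$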
Write a function
solve_schrod_galerkin(Q)
to solve this system for a given $Q$. You may assume that $k$ is small enough that direct factorization of the $k^2$-by-$k^2$ matrix is not too expensive. For $n = 100$, what is the relative residual $\|R(\hat{U})\|/\|h^2 F\|$ if we choose $Q$ to be an orthonormal basis for the polynomials of degree at most 6 sampled on the $n$ internal mesh points $x_i$? What is the relative error (using the iteration from HW 8 to compute a reference solution)?
Hint: It is fine to use Kronecker products to form the small problem, but if it costs you more than $O(n^2 k)$ to set things up, you are on the wrong track. Note that if `X` is a $k \times k$ matrix in Julia, then `x = X[:]` is its "vec", and the expression `X[:] = x` can be used to "undo" a vec. You may find it useful to look at the following code for computing the residual.
`schrod_resid` (defined in the accompanying notebook)
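The notebook's implementation is not reproduced here; the following stand-in, which evaluates the residual exactly as defined above, is offered only as a sketch (the argument list is an assumption, since the notebook's `schrod_resid` may bind the problem data differently).

```julia
using LinearAlgebra

# Hypothetical stand-in for schrod_resid: evaluates
# R(U) = T*U + U*T + h^2*D*U*D + h^2*F with D = diag(d).
function schrod_resid(U, T, d, F, h)
    D = Diagonal(d)
    return T*U + U*T + h^2*(D*U*D) + h^2*F
end
```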
You may also want to use the following helper routines from the previous homework for computing your reference solution.
`dst` (defined in the accompanying notebook)
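Finally, putting the hints together, here is one hedged sketch of how the reduced problem might be assembled and solved. The required interface is `solve_schrod_galerkin(Q)`, with the problem data presumably available from the enclosing notebook; the version below takes $T$, $d$, $F$, and $h$ explicitly so that the example is self-contained, and its name and argument list are placeholders rather than the required interface.

```julia
using LinearAlgebra

# Hypothetical sketch (extra arguments are assumptions; the required
# interface is solve_schrod_galerkin(Q) with problem data from the notebook).
function solve_schrod_galerkin_sketch(Q, T, d, F, h)
    k  = size(Q, 2)
    Tt = Q' * (T * Q)             # k-by-k projection of T
    Dt = Q' * (d .* Q)            # k-by-k projection of D = diag(d)
    Ft = Q' * (F * Q)             # k-by-k projection of F (O(n^2 k) work)
    Ik = Matrix{Float64}(I, k, k)
    A  = kron(Ik, Tt) + kron(Tt, Ik) + h^2 * kron(Dt, Dt)
    x  = A \ (-h^2 * vec(Ft))     # direct solve of the k^2-by-k^2 system
    X  = reshape(x, k, k)
    return Q * X * Q'             # Uhat = Q*X*Q'
end
```

Note that in this sketch the setup cost is dominated by forming $Q^T F Q$, which is $O(n^2 k)$, consistent with the hint above.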