We are almost there. Let us look at a few more things before formulating the algorithm.

First, take the subgradient with respect to \(X\) of our goal function:

$$\underset{X}{\operatorname{argmin}}\left(\frac{1}{2}\left\|W_{11}^{\frac{1}{2}} X-W_{11}^{-\frac{1}{2}} S_{12}\right\|_{2}^{2}+\rho\|X\|_{1}\right).$$

Setting it to 0, we arrive at:

$$W_{11} X-S_{12}+\rho \, v=0,$$

where \(v\) is an element of the subgradient of \(\|X\|_{1}\). On the other hand, recall the original problem formulation:

$$\underset{\Theta}{\operatorname{argmax}} \; \log \operatorname{det} \Theta-\operatorname{trace}(S \Theta)-\rho\|\Theta\|_{1}.$$

Taking its subgradient with respect to \(\Theta\) and setting it to 0, we have:

$$W-S-\rho \, \Gamma=0,$$

where \(\Gamma\) is an element of the subgradient of \(\|\Theta\|_{1}\). Focusing on the upper-right block of every term involved, we have:

$$W_{12}-S_{12}-\rho \, \gamma_{12}=0.$$

Comparing this with the stationarity condition of the lasso problem above, the two coincide under the identification:

$$W_{12}=W_{11} X \quad \text{and} \quad v=-\gamma_{12}.$$

This gives us the update rule for \(W_{12}\) based on \(X\).

Now that we have an algorithm for updating the estimate of the off-diagonal elements of the covariance matrix \(W\), we are ready to deduce from it the rule to recover the inverse covariance matrix \(\Theta\), our ultimate goal.

Observe that \(W\Theta=I\); writing this down in blocks, we have:

$$W_{11} \Theta_{12}+W_{12} \Theta_{22}=0, \qquad W_{12}^{T} \Theta_{12}+W_{22} \Theta_{22}=1.$$

Solving for the \(\Theta\) elements, we have:

$$\Theta_{12}=-W_{11}^{-1} W_{12} \Theta_{22}, \qquad \Theta_{22}=1 /\left(W_{22}-W_{12}^{T} W_{11}^{-1} W_{12}\right).$$

This is exactly what we want. We can now put down the whole algorithm:

1. Let \(W\) be our current estimate of the covariance matrix. Initialize it with the empirical covariance \(S\), or with \(S+\rho I\) to make it more positive definite.

2. Repeat until convergence or until the maximum number of iterations is exhausted: for each dimension \(1\) to \(p\), solve the lasso problem, using the current \(W_{11}\) and the \(S_{12}\) from the empirical covariance as input, and get the output \(X\). Then update \(W_{12}=W_{11} X\).

3. Now that we have our estimated \(W\), recover \(\Theta\) using the two update rules for the \(\Theta\) elements above.

Coming up next, I will try to illustrate this idea of the Graphical Lasso algorithm using easy-to-understand Julia code (a rough preview of the steps above follows right below). Stay tuned!
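As a preview, here is a minimal, illustrative Julia sketch of the three steps above. It is not the polished implementation promised for the next post: the names `graphical_lasso` and `lasso_cd` are placeholders of my own, the lasso subproblem is solved with a naive coordinate descent, and convergence checking is replaced by a fixed number of sweeps.

```julia
using LinearAlgebra

# Soft-thresholding operator used by the lasso coordinate updates.
soft_threshold(z, t) = sign(z) * max(abs(z) - t, 0.0)

# Naive coordinate descent for the subproblem
#   argmin_x (1/2)||W11^(1/2) x - W11^(-1/2) s12||^2 + ρ||x||_1,
# which is equivalent to minimizing (1/2) x'W11 x - s12'x + ρ||x||_1.
function lasso_cd(W11, s12, ρ; sweeps = 100)
    x = zeros(length(s12))
    for _ in 1:sweeps, j in eachindex(x)
        # Partial residual with coordinate j excluded.
        r = s12[j] - dot(W11[j, :], x) + W11[j, j] * x[j]
        x[j] = soft_threshold(r, ρ) / W11[j, j]
    end
    return x
end

# Steps 1-3 from the write-up: initialize W, sweep over the columns, recover Θ.
function graphical_lasso(S, ρ; sweeps = 50)
    p = size(S, 1)
    W = S + ρ * I            # step 1: W = S + ρI, more positive definite than S
    B = zeros(p, p)          # latest lasso solution for each column
    for _ in 1:sweeps        # step 2: fixed number of sweeps stands in for convergence
        for j in 1:p
            rest = setdiff(1:p, j)
            W11, s12 = W[rest, rest], S[rest, j]
            x = lasso_cd(W11, s12, ρ)
            W[rest, j] = W11 * x       # update rule W12 = W11 * X
            W[j, rest] = W[rest, j]    # keep W symmetric
            B[rest, j] = x
        end
    end
    # Step 3: recover Θ column by column, using
    #   Θ22 = 1 / (W22 - W12' W11⁻¹ W12)   and   Θ12 = -W11⁻¹ W12 Θ22 = -X * Θ22.
    Θ = zeros(p, p)
    for j in 1:p
        rest = setdiff(1:p, j)
        x = B[rest, j]
        Θ[j, j] = 1 / (W[j, j] - dot(W[rest, j], x))
        Θ[rest, j] = -x * Θ[j, j]
        Θ[j, rest] = Θ[rest, j]
    end
    return W, Θ
end
```

With a sample covariance matrix `S` in hand, something like `W, Θ = graphical_lasso(S, 0.1)` would return both the regularized covariance estimate and the sparse inverse covariance estimate.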