MoM matrix is very dense and ill-conditioned
Is it normal that the MoM matrix is very dense and ill-conditioned?
regards,
Sherb
Usually it is a dense matrix, unless you use wavelets to sparsify it.
Yes, this is true; however, there are some methods to precondition it. There are also methods to diagonalize it using wavelets.
My experience is that the matrix is very well conditioned.
I think we again arrive at the distinction between formulations. With free-space Green's functions and 2D triangular RWG elements, my experience is that they give a dense and ill-conditioned matrix. They also produce singular integrals which require special techniques to evaluate accurately.
I don't know what the resulting matrix looks like for shielded Green's functions.
There seems to be no exact theory yet. The condition number of the matrix is one factor affecting the accuracy of the final answer. There are many topics discussing the accuracy of MoM in this forum.
a general "rule": the condition number is higher (worse) when the size of the matrix becomes bigger and/or the frequency becomes lower. (..... application engineers should keep this in mind... it means not all "large" problem can be solved with 64bit computing and "un-limited" amount of memory...)
Hi,
Sometimes the matrix gets ill-conditioned if the accuracy of calculating the matrix elements is not sufficient.
flyhigh
I think Flyhigh has a very good point. My experience has been almost exclusively with the shielded MoM formulation. The matrix elements are calculated by FFT to full numerical precision. At low frequency (e.g., 10 MHz with 1 micron cell size), the matrix solve starts to get noisy and quad precision is then needed (or use specialized low-frequency code, which involves its own approximations). To see whether or not a result is noisy, just look at the resulting current distribution. Numerical noise results in a speckled current distribution. I published one really good (bad?) example; I will look up the reference if anyone is interested.
As for matrix size, with 2 GByte of RAM, we go up to 30,000x30,000 matrix (single precision, lossless) and the solution is nice and clean. In house, we are now doing much much larger matrices on 64 bit software and double precision, still nice and clean.
I understand the unshielded solvers do their numerical integrations to a fixed number of digits of precision (I think it is 3 digits of precision, can anyone verify?). Now, I would guess that if you want a good solve for big matrices, you might need to go to an iterative matrix solve, and all the problems that entails.
It is really nice to have the MoM matrix filled to full numerical precision; you can then start doing lots of neat things with no worry.
Because the matrix is ill-conditioned, you need to represent the elements in higher precision in order to get a solution with acceptable accuracy. This is in fact one of the reasons the "condition number" is defined and, in many cases, estimated.
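For example, here is a minimal Octave sketch (the tiny 2x2 test system is just an illustrative stand-in, not a MoM matrix): rounding the matrix elements to single precision before an otherwise identical solve destroys the answer when the matrix is ill-conditioned.

A = [1 1; 1 1+1e-6];                             % nearly singular, cond(A) is roughly 4e6
x_exact = [1; 1];
b = A * x_exact;
x_dbl = A \ b;                                   % elements kept in double: roughly 9 correct digits
x_sgl = double(single(A)) \ double(single(b));   % elements rounded to single: barely 1 correct digit
disp([x_dbl, x_sgl])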
Hi Loucy -- I think there are two things it is important to consider. First is poorly conditioned matrices due to the underlying theory. Second is poorly conditioned matrices due to how well we fill the matrix.
MoM theory results in very well conditioned matrices (the first item above). This is what I am talking about. However, if we do not calculate the correct numbers to go into the matrix, then we get inversion problems. This is what you and, I think, others are talking about.
To illustrate the problem with theory providing a poorly conditioned matrix, take any matrix inversion routine, do everything in double precision, and use full pivoting (for maximum accuracy). Fill each matrix element with consecutive numbers. For example, a 2x2 matrix is filled with 1, 2, 3, 4. The matrix is non-singular, no matter how big the matrix is. The 2x2 case inverts easily. When you get to 4x4 (as I recall, it might be a little larger), inversion fails even using double precision. This is a poorly conditioned problem; it fails even though we know absolutely that the matrix is not singular and even though we use full double precision.
So, give me a 10000x10000 matrix from MoM with every one of the 100000000 numbers calculated to only 3 digits of precision; if the matrix inversion is noisy, I will not be surprised. In fact, I think it is amazing that you can put that much noise into a matrix of that size and still get any kind of solution at all!
With MoM you can get a well-conditioned matrix above a certain (very low) cut-off frequency in all cases I have ever tried. The MoM provides a well-conditioned matrix. If we are sloppy in filling the matrix, that well-conditioned situation goes away.
Hi Dr. Rautio -- I have to disagree with your statement that "MoM theory results in very well condition matrices". For one thing I am not sure which MoM formulation you refer to, the one implemented in Sonnet EM or any formuluation? Secondly, from all of the theoretical analyses that I am aware of, the condition number is proportional to some power of D/h, where D is the size of the problem and the h is the discretization size. Those analyses are for some restricted cases (e.g. no loss or 2D) and particular integral equation formulation. I haven't found a general theory on the condition number of the MoM matrix for EM problems. There are, however, special examples that show MoM matrix tend to be ill-conditioned. I think there is some paper about the matrix for the 2d problem scattering from a circular cylinder.
Your example matrix is not very clear. (What are the elements of the 4x4 matrix? Are you talking about a Hankel matrix?) There are many test matrix patterns (certain ways of filling the matrix) for which the matrix sequence becomes more ill-conditioned as the size increases, regardless of the precision of the floating-point representation. (Their condition numbers have analytical closed-form expressions.) Those examples clearly demonstrate that the condition number is a measure of the closeness to being singular; ill-conditioned problems need to be solved with higher-precision arithmetic. To prove your point, you need to give an example matrix (likely with irrational numbers as the elements) whose "condition number" is high when evaluated in single precision but becomes lower in higher precision.
--------------------------------------------------------------------------------------
I think Sonnet EM doesn't output the condition number. Other codes such as Feko do. For those who have access to some MoM code, or at least to the matrix, it is easy to find the answer. Just compare the condition numbers of the two matrices (for the same problem with the same discretization) resulting from single- and double-precision arithmetic. The order of magnitude should remain the same. Please post the geometry if you find otherwise.
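For example, in Octave (with a standard test matrix standing in for the MoM matrix, since this is only meant as a sketch of the experiment):

A  = hilb(5);          % stand-in for the MoM matrix, filled in double precision
A1 = single(A);        % the same matrix rounded to single precision
cond(A)                % about 4.8e5
cond(A1)               % same order of magnitude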
Hi Loucy -- I think we are both correct and what we are discussing is a matter of point of view, i.e., what is "well conditioned". From my point of view, a matrix is well conditioned if you can invert it with a direct (i.e., LU decomp, etc.) technique and come up with a good result. Except for low frequency where double precision is not enough, this is all we ever see, and we have done some really big matrices. I think the reason we see this is because we fill the matrix to full precision by use of the FFT (shielded planar MoM). If we switch to single precision (easy to do), the low frequency limit increases, but above that point, we still get good solutions.
If a MoM tool gives poorly conditioned matrices (i.e., the resulting current distribution is noisy), do not blame MoM, rather blame the specific implementation. It might be a choice of basis functions problem (this is what results in the low frequency problem I mentioned above, choose different basis functions and that problem goes away, but others come in to play), or limited precision (doing numerical integration to 3 digits of precision is simply not enough for at least some big problems.)
I fill my matrices to double precision and they invert just fine. I'm happy. If you want to invert a matrix filled to less precision, you will consider the same MoM matrix poorly conditioned. We are both right, when measured against our specific requirements.
As for the example matrix that is not singular but is difficult to invert, for 4x4 it is:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Very easy to set up the matrix. We know it is non-singular. Invert it twice and see what comes back. At some relatively small value of N, your direct matrix inversion fails. I don't remember what value of N; if you try it, let us know. The point I am making here is that getting a large, well-conditioned matrix is difficult. Add a little noise to a matrix that solves OK (by, for example, calculating the matrix elements out to only 3 digits), and you get a matrix that no longer inverts well...and we will both call it poorly conditioned.
In Sonnet, analyze any circuit to low enough frequency that the current distribution looks noisy. Then, repeat with "Memory Saver" checked...that invokes single precision. You will find that the low frequency limit has increased quite a bit. If we could limit the precision to 3 digits, the low frequency limit would be even higher.
hello,
I was surprised to read that the simple matrix
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
could be problematic, so I tried it with Octave.
the inverse:
6.3719e+16 -7.7005e+16 -3.7149e+16 5.0434e+16
-5.9168e+16 7.2791e+16 3.1922e+16 -4.5545e+16
-7.2822e+16 8.5432e+16 4.7603e+16 -6.0212e+16
6.8271e+16 -8.1218e+16 -4.2376e+16 5.5323e+16
the inverse of the inverse:
6.4272e-02 6.3027e-02 6.1782e-02 6.0536e-02
1.5741e-01 9.3911e-02 3.0413e-02 -3.3085e-02
2.5054e-01 1.2479e-01 -9.5610e-04 -1.2671e-01
3.4368e-01 1.5568e-01 -3.2325e-02 -2.2033e-01
for both calculations octave gave a warning:
"warning: inverse: matrix singular to machine precision, rcond = 7.90026e-20"
It seems to be problematic indeed; the inverse of the inverse differs strongly from the original matrix.
best regards.
I have used Mathematica to find the inverse of the above matrix and it gives me the error that this matrix is singular. Note that Mathematica is not limited by the machine precision (it is an algebraic manipulation program).
Also, as a simple proof, you can obtain the fourth row by adding the second and third rows and then subtracting the first, which means that this matrix is definitely singular!
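A quick check in Octave confirms it:

A = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16];
A(2,:) + A(3,:) - A(1,:)   % gives 13 14 15 16, i.e. exactly the fourth row
rank(A)                    % gives 2, so the matrix is indeed singular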
Hi Adel_48 -- Very good! I never noticed that before. My statement that it is for sure non-singular is incorrect. Now it gets a little more interesting. Are there any matrices like this with N>4 that are not singular?
Thank you Dr. Rautio. Actually, my mathematical foundations concerning matrices are not very good. However, if we return to the original question about MoM: I noticed that when I use RWG basis functions (as in Makarov's book about EM simulation using MATLAB, based on an old paper by Rao, Wilton and Glisson, after whom the basis functions are named), I rarely get a reciprocal condition number (MATLAB's rcond) better than 10^-5, no matter how large or small the mesh (as long as it is within a reasonable range, of course). Maybe that is another example of an ill-conditioned matrix, though not a very severely ill-conditioned one.
Hi Adel_48 -- Actually, as you can probably guess from my blunder above, I am not very strong in matrix theory either. 1e-5 sounds pretty bad, but frankly, I have no practical idea whether that is good or bad; I have nothing to reference it against. As I mentioned above, if I can invert a matrix that is filled to double precision (or single precision if memory is limited), and get a good clean current distribution out, then I consider it a well-conditioned matrix. If a condition number of 1e-5 will do that, then I will consider that good. But what condition number corresponds to a well-conditioned matrix is a personal, subjective judgement. Someone else might look at exactly the same matrix and say that it is terrible, and be 100% right, which I think is the situation we have in this thread. Two completely different opinions. Both completely correct.
A classical example of an ill-conditioned matrix is the Hilbert matrix, with elements defined by a_ij = 1/(i+j-1).
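A quick Octave illustration (the sizes are just examples):

for n = [4 6 8 10 12]
  printf("n = %2d   cond(hilb(n)) = %.1e\n", n, cond(hilb(n)));
end
% the condition number grows by roughly three orders of magnitude
% for every two additional rows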
A general rule of thumb is that you lose one digit of accuracy in an LU solve for every power of ten in the condition number, e.g. for a condition number of 1e5 (a reciprocal condition number of 1e-5) expect to lose 5 digits of accuracy. Because the integral equation is singular, this is a very conservative estimate, and I doubt you will ever actually lose 5 digits. But it is better to err on the side of caution.
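As a rough illustration of that rule of thumb in Octave, using the Hilbert matrix mentioned above rather than an actual integral-equation matrix (so the numbers are only indicative):

n = 8;
A = hilb(n);                % cond(A) is about 1.5e10, so expect to lose up to ~10 digits
x = A \ (A * ones(n, 1));   % the exact solution is all ones
max(abs(x - 1))             % of the order of 1e-7 to 1e-6, i.e. roughly 10 of 16 digits lost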
The EFIE is known to have an unstable condition number. As the frequency decreases, the condition number worsens. As the number of unknowns increases, the condition number worsens. Contrast that with the MFIE, whose condition number is fairly independent of both frequency and the number of unknowns. Why? The MFIE has a constant term along the diagonal; this is known as an "identity plus compact" operator, which has a very stable condition number.
What does this mean in practice? Except for the low frequency breakdown, I have never seen an LU fail on an EFIE. So for small problems don't worry about it. (For low frequency you may have to use Loop/Tree or Loop/Star). For large problems where you need an iterative solver, you will need a preconditioner.
Is there a precise definition of the concept "low-frequency breakdown"? If a certain code doesn't seem to give the correct answer, how do we know whether it is because of low-frequency breakdown or something else?
It is wise to check the condition number frequently, yet most of the commercial codes do not output such a number. Probably it is considered too academic...