What is meant by a convergence problem?
Could someone explain what is meant by convergence issues in simulations?
Regards
Convergence problems are when the data you actually get back doesn't tally with what the simulator has decided certain points in time should yield.
This can be caused by many things, but most circuits in simulators need careful fine-tuning and an in-depth understanding of the actual simulator and the bounds it uses.
The integration system also requires that the digital time transitions happen at the right times, so that the simulator will run in real time.
In fact, it is also a measure of the hysteresis of the circuit.
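To make that concrete, here is a minimal Python sketch of the Newton-Raphson loop a SPICE-style simulator runs at each time point (the diode circuit, component values, and damping scheme are all made up for illustration, not any particular simulator's algorithm). A convergence failure is the case where the loop exhausts its iterations without the update ever shrinking below the tolerance.

```python
import math

VS, R = 5.0, 1000.0      # hypothetical source voltage and series resistance
IS, VT = 1e-14, 0.02585  # diode saturation current and thermal voltage

def solve_diode_node(v0=0.5, tol=1e-9, max_iters=100):
    """Newton-Raphson on f(v) = (VS - v)/R - IS*(exp(v/VT) - 1) = 0."""
    v = v0
    for i in range(max_iters):
        f = (VS - v) / R - IS * (math.exp(v / VT) - 1.0)
        df = -1.0 / R - (IS / VT) * math.exp(v / VT)
        dv = -f / df
        dv = max(min(dv, 0.1), -0.1)  # damp the step: the diode exponential is stiff
        v += dv
        if abs(dv) < tol:
            return v, i + 1           # converged
    raise RuntimeError("convergence failure: the iteration never settled")

v, iters = solve_diode_node()
print(f"diode voltage {v:.6f} V after {iters} iterations")
```

A real simulator does this with thousands of unknowns at every time step, which is why a single badly behaved nonlinearity can stall the whole run.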
I assume that you work with a simulator that uses the finite element method (for example, HFSS).
Convergence means:
One of the characteristics of the finite element method is that you never get absolutely correct values as a result.
The calculation is based on differential equations solved in small finite subvolumes of your object. The object can be divided into as many tiny elements as you like; this increases accuracy, but also calculation time and the demand for resources (RAM and disk space).
The program starts with quite large subdivisions (the whole thing is called the MESH and can be controlled by the user to a certain extent) and calculates the fields. Then it subdivides the original mesh even further and calculates again. The two solutions are then compared and the difference expressed as a delta value. When the change between two solutions (the delta) is smaller than a user-defined value, the program terminates the solution process and presents the result.
This whole process of getting smaller deltas with finer meshes is called CONVERGENCE.
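In rough pseudocode, the adaptive loop looks like this (a Python sketch; solve() and refine() are hypothetical stand-ins for the solver's internals, and all names are invented):

```python
def adaptive_solve(initial_mesh, solve, refine, max_delta=0.02, max_passes=20):
    """Refine the mesh until the change between two passes drops below max_delta."""
    mesh, previous = initial_mesh, None
    for pass_no in range(1, max_passes + 1):
        solution = solve(mesh)                 # field solution on the current mesh
        if previous is not None:
            delta = abs(solution - previous)   # change relative to the last pass
            if delta < max_delta:              # small enough: call it converged
                return solution
        previous = solution
        mesh = refine(mesh)                    # subdivide the mesh further
    raise RuntimeError("convergence problem: delta never got small enough")

# Toy demo: a "solution" that approaches 1.0 as the element count doubles.
result = adaptive_solve(initial_mesh=10,
                        solve=lambda m: 1.0 - 1.0 / m,
                        refine=lambda m: 2 * m)
print(result)
```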
A convergence problem arises if you never (or only very late) reach deltas small enough to call the calculation a solution. This may happen if you started with a bad initial mesh.
I hope it helps.
D.
Hi coolrak:
There is yet another interesting point to convergence, and that is how much the simulation error changes from pass to pass as you refine your discretizations. Often, we all assume that EM simulators provide monotonic error convergence. That is to say that as we make the discretizations smaller and smaller, the simulation result always gets closer and closer to either:
a) an asymptotic result, or
b) the correct answer.
Simulations may converge to a given result as we make meshes finer and finer, but it may not be the correct result.
I have seen that FEM results often do not converge monotonically as you make the mesh progressively finer. Sometimes you will see error metrics that change in a strange way with finer and finer meshes. You might set a delta_s criterion of 0.01, and the solver may achieve this delta_s level in, say, 7 or 8 adaptive meshing passes. But if you make the simulator take another adaptive pass, suddenly the delta_s jumps to 0.03 or something else.
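As a rough illustration of the per-pass bookkeeping (the S-parameter values below are invented to mimic that behaviour, not taken from any solver):

```python
# Check whether the pass-to-pass delta_s shrinks monotonically.
s11_per_pass = [0.42, 0.37, 0.355, 0.349, 0.3465, 0.3452, 0.3449, 0.375]

deltas = [abs(b - a) for a, b in zip(s11_per_pass, s11_per_pass[1:])]
for pass_no, delta in enumerate(deltas, start=2):
    trend = "ok" if pass_no == 2 or delta <= deltas[pass_no - 3] else "NON-MONOTONIC"
    print(f"pass {pass_no}: delta_s = {delta:.4f}  {trend}")
```

The last pass in that made-up sequence is exactly the kind of jump described above: delta_s settles well below 0.01 and then leaps back up on one more adaptive pass.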
It seems like shielded-environment MoM planar codes yield the best monotonic error convergence. Unshielded MoM planar codes don't do quite as well, and the full 3D codes seem to be the poorest (maybe it is more difficult to eliminate simulation error in 3D discretizations than it is in 2.5D?).
Anyway, this is a good topic to consider. I think there are a number of benchmark examples that one can run to check things like convergence error. There are a few interesting examples on Sonnet Software's web site for starters: http://www.sonnetsoftware.com/produc...nchmarking.asp. I'm sure there are other good benchmarks out there too.
--Max
Thank you very much MAX, DR D, and VSMVDD. I have got a clear idea now.
Just to share the experience, a similar thing happened to me in MWS a few days ago. I was simulating a resonator and had set the accuracy to -50 dB and the simulation time to 50 pulses. By about the 40th pulse the energy had got down to about -40 dB, but suddenly after that it started rising. The port signals started going up exponentially with time.
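For what it's worth, here is a rough sketch of the kind of check that would flag this (the energy samples are invented to mimic that run; this is not MWS's actual API):

```python
# Total field energy in dB relative to its peak; it should decay toward the
# accuracy target, and a sustained rise means the transient run is diverging.
energy_db = [0, -8, -15, -22, -28, -33, -37, -40, -38, -31, -20, -5]
TARGET_DB = -50.0

for step, e in enumerate(energy_db):
    if e <= TARGET_DB:
        print(f"step {step}: reached {e} dB -- converged")
        break
    if step >= 2 and e > energy_db[step - 1] > energy_db[step - 2]:
        print(f"step {step}: energy rising for two steps -- diverging, abort")
        break
else:
    print("ran out of pulses before reaching the accuracy target")
```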
-svarun