Finite Difference Time Domain (FDTD) method and metals
I am working with the FDTD method. I have read many times that the FDTD method in its "simple" form cannot deal with metals, and I understand that this is due to the negative relative permittivity. Indeed, my MATLAB code does not work for these values of the permittivity.
I would like to understand why, but not from a programming point of view (I can see why it diverges once the equations are discretized). I have attached a PDF file in which I derive the curl equations needed for the FDTD method from Maxwell's equations and the constitutive relations:
Question.pdf
My questions are the following:
1) Does the problem come from the assumption made in the constitutive relation? Is it really not valid for metals?
2) If the problem lies in the constitutive relation, then using the FDTD method to describe semiconductors may also be a mistake, because no material is strictly non-dispersive (in time).
3) I would also like to know whether I linked the n and k parameters to epsilon and sigma correctly.
Thanks in advance,
TW
ps: I wrote it in LaTeX because I could not get the equations to work directly in the post...
Let's start with the easiest question:
3) Looks correct
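In case it helps to compare, here is a minimal MATLAB sketch of that link (the numbers are placeholders, not taken from your PDF; I assume the complex relative permittivity is (n + i*k)^2 with an exp(-i*w*t) convention, and the resulting eps and sigma come out the same under the opposite sign convention):
% Converting tabulated n, k at one angular frequency w into eps and sigma.
eps0  = 8.854187817e-12;           % vacuum permittivity [F/m]
n     = 0.05;  k = 3.0;            % placeholder n, k values (metal-like)
lam   = 500e-9;                    % wavelength at which n, k are tabulated [m]
w     = 2*pi*299792458/lam;        % corresponding angular frequency [rad/s]
eps_r = n^2 - k^2;                 % real relative permittivity (negative here)
sigma = 2*n*k*w*eps0;              % effective conductivity [S/m]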
1+2) The problem is really a purely mathematical one; there is no deep physical significance. In setting up the FDTD equations you approximate values in several places, and you also approximate a differential equation by a difference equation.
Now, provided a number of conditions are satisfied, such an approximation can still give good results, but there will always be errors.
The 'mathematical' problem is what happens to these errors when you iterate; more concretely, how does an error introduced at, say, step 100 affect the result at step 200, 300, and so on?
If the effect of an error introduced at step 100 vanishes later, or stays below a certain bound, then you are usually fine. If not, then your method is unstable.
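As a toy illustration of what "bounded" versus "growing" means here (nothing FDTD-specific, just an error multiplied by a fixed factor g at every step):
g_ok  = 0.9;   g_bad = 3;
err   = 1e-12;                     % tiny error introduced at some step
err_ok  = err * g_ok^100;          % ~2.7e-17 : the error has effectively vanished
err_bad = err * g_bad^100;         % ~5e35    : the error has swallowed the solution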
Look at the FDTD update equation for your favorite E-field.
E^{n+1} = c * E^n + c' * (∇ × H)
In particular look at the constant c in front of E^n.
c = (2*eps - sig*dt) / (2*eps + sig*dt)
For positive eps and small enough sig you will have 0 <= c <= 1, which means that the error introduced at the previous step decreases over time (if c < 1) or at least does not grow.
But if, for example, eps = -sig*dt, then c = 3.
That means that at every step the previously introduced error is multiplied by 3 => instability.
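You can check this with a few lines of MATLAB (the material and time-step numbers below are made up; only the sign of eps matters for the argument):
% Update coefficient c = (2*eps - sig*dt) / (2*eps + sig*dt) for two cases.
eps0 = 8.854187817e-12;            % vacuum permittivity [F/m]
dt   = 1e-17;                      % some time step [s]
sig  = 1e4;                        % some conductivity [S/m]
c_dielectric = (2*5*eps0 - sig*dt) / (2*5*eps0 + sig*dt);        % eps = 5*eps0  -> c ~ 0.998
c_metal_like = (2*(-sig*dt) - sig*dt) / (2*(-sig*dt) + sig*dt);  % eps = -sig*dt -> c = 3
% |c_dielectric| <= 1: old values (and old errors) are damped or preserved.
% c_metal_like = 3: every stored E value, errors included, triples each step.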
So, again: this is a very basic question about all numerical methods. Is the method stable? Usually the answer is: yes, provided the following conditions are satisfied: ...
For standard FDTD one such condition is epsilon > 0, but of course there are other conditions as well (the maximal stable time step given by the Courant condition, etc.).
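For example, the usual time-step bound (the Courant condition) for a uniform grid with spacing dx in D dimensions is dt <= dx/(c0*sqrt(D)); as a MATLAB one-off with example numbers:
% Courant (CFL) limit on the time step for a uniform Yee grid in D dimensions.
c0     = 299792458;                % speed of light in vacuum [m/s]
dx     = 10e-9;   D = 3;           % example: 10 nm cells, 3-D grid
dt_max = dx / (c0 * sqrt(D));      % ~1.9e-17 s; a larger dt is unstable even for eps > 0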
Thank you for your answers, iyami.