
Impedance match and S11 vs Noise match and NF

Posted: 04-07 · Compiled by: 3721RD
I searched the forums for similar topics, but none of them really went as in depth as I wanted or came to any hard conclusions.

So I'm trying to match a 50 Ω source to an AD8331 LNA at around 60 MHz. Low noise is critical, so I assumed that matching to the noise impedance (around 300 Ω) would be best, as opposed to the input impedance. However, as I go deeper into the math, this becomes somewhat questionable... My biggest issue is that most methods of measuring NF depend on an impedance match at the source. For instance, if I use a 50 Ω terminator as a noise source, I cannot assume that the power delivered is the usual -174 dBm/Hz, since that figure is based on the assumption of a power match. I'm unsure how to fix the calculations to compensate for this... I also suspect that there exists a compromise between a noise match and an impedance match where the true NF is minimal.

To be specific, here is a rough overview of my measurement procedure:
1. Measure the gain (S21) and bandwidth of the whole amplifier. I get about 68 dB and a bandwidth of 490 kHz.
2. Measure the input impedance (S11). I get Z = 14.8 + j20.6 Ω, so |S11| = -4.45 dB.
3. Put a 50 Ω terminator on the input and measure the output on a spectrum analyzer. I measure a peak power of -48 dBm and a spectral density of -105.5 dBm/Hz (over a 2 MHz bandwidth).
So at this point, if I do calculations based on the assumption that my 50 Ω source provides -174 dBm/Hz, I get NF values from 0.5 dB (way too low to be true) to 0.92 dB (close to the theoretical value for the amp... I don't buy it). So there's definitely something wrong here, and I'm betting it has to do with my source mismatch. Any advice would be appreciated.
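For concreteness, here is the arithmetic behind that 0.5 dB estimate as a minimal Python sketch. The -174 dBm/Hz matched-source density is exactly the assumption in question, and `noise_figure_db` is just a hypothetical helper name:

```python
# Minimal sketch of the NF estimate: NF (dB) = output noise density
# minus gain minus source noise density, all in dB terms.
# The -174 dBm/Hz default is the idealised matched-source assumption.
def noise_figure_db(output_density_dbm_hz, gain_db, source_density_dbm_hz=-174.0):
    return output_density_dbm_hz - gain_db - source_density_dbm_hz

# Values quoted above: 68 dB gain, -105.5 dBm/Hz measured at the output.
nf = noise_figure_db(-105.5, 68.0)
print(f"NF = {nf:.2f} dB")  # -105.5 - 68 - (-174) = 0.5 dB
```

If the source is mismatched, the real delivered noise density differs from -174 dBm/Hz and this simple subtraction no longer holds.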

Hello,

How accurate is your analyzer? Rounding Pn to -174 dBm/Hz also introduces a small error. Did you use the noise bandwidth of the analyser or the RBW value? There may be a small difference between the two.

If you have a signal source (with calibrated output), you may check your analyser.

Other point can be the detector inside the analyser. Only when using RMS detection the analyser will show the RMS noise density.

If your analyser has average IF detection, you have to add 1 dB to the reading when measuring Gaussian noise.

---------- Post added at 20:41 ---------- Previous post was at 20:31 ----------

It is very common that the match for maximum gain doesn't coincide with the match for minimum noise. It has to do with the internal current noise and voltage noise of the input stage. Your calculation method looks OK to me.

Both the network and spectrum analyzers are Agilent units, less than five years old, so they should be quite accurate.

I've used the output of my network analyzer as a stimulus source and it agrees well with the spectrum analyzer.
The analyzer apparently has three detector settings: log power average (video), power average (RMS), and voltage average. I'm not sure how they actually differ, but switching between them does change the measured power by a couple of dB. Up till now I believe it was set to log power average (video). Should I use the RMS power average?
What do you mean by average IF detection? Do you mean trace averaging?

Sure, that's what I'm expecting. I'm not looking for the perfect compromise, but I'd like to get a noise figure of less than 2dB overall, which I think will require some optimization.

The SA noise floor depends not only on the RBW but also on the VBW. Just change the VBW and you will see the noise floor shift by several dB. So I think your measurement is not accurate.

Hello,

Without any further info, I would use RMS power averaging, as this will very likely give the real power of the Gaussian noise.

What type of analyzer do you have, maybe I can find the manual and check how it deals with noise.

IF detection has to do with how the analyzer measures the signal within the resolution bandwidth filter. You can compare it with an AC voltmeter: you can have a true-RMS voltmeter, but also a general (average-measuring) one. The average-measuring voltmeter determines the average value of the rectified signal and only gives correct readings for a sinusoidal input. When you feed a square wave to an average-measuring AC voltmeter, it will not show the RMS value of the square wave.

I would not use trace averaging (since you will be reading dB values from the screen).

I'm not trying to measure the noise floor outside my bandwidth, though. What I really want is a measure of my spectral noise density inside my passband.

I played with a variety of settings, and it seems that the RMS measurement gives higher noise readings than the others. This leads me to suspect it is the more accurate one.

There is a measurement function on the SA called "marker noise density" which, I believe, measures spectral density at a certain frequency. I believe it's equivalent to the normal spectral density measurement (which measures over a specified bandwidth) with a bandwidth approaching zero. Does this make sense? Which density measurement do you think I should use?

I'll post the models of both the NA and SA when I get to work. I've skimmed over the manuals we have, and they don't seem to cover much about how to actually do proper measurements, or explain the internal architecture.
But even with a square wave, won't it read the correct frequency spectrum, allowing you to recover the overall power by summing in all the harmonics?
Without trace averaging, the noise measurement is, well, very noisy. Why would reducing VBW (which I assume is what trace averaging really is) hurt me?

Hello,

You can think of your analyser as a receiver with a certain resolution bandwidth. The receive frequency is swept to show you a spectrum trace.

To get the real power of the signal that passes the resolution bandwidth filter (the IF filter in a receiver), you should carry out an RMS scheme (that is, square the output, average over time, and scale based on the gain and reference impedance). This should be done before the conversion to dB.

If you use trace averaging, the averaging may be done on the dBm values. Averaging dBm values does not give you the real power in dBm. It may be possible to use trace averaging if you put the display in linear power mode.

If VBW is done after lin to log conversion, the averaging will also not show true power within the RBW.
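The point about averaging before versus after the log conversion can be illustrated with a toy Python example (synthetic numbers, no instrument involved):

```python
import math

# Two equally likely power readings, in linear watts
powers_w = [0.5, 2.0]

# RMS scheme: average the linear powers first, then convert to dB
true_avg_db = 10 * math.log10(sum(powers_w) / len(powers_w))

# Trace averaging on dB values: convert each reading to dB, then average
avg_of_db = sum(10 * math.log10(p) for p in powers_w) / len(powers_w)

print(f"{true_avg_db:.2f} dB")  # 10*log10(1.25) ~ 0.97 dB (true power)
print(f"{avg_of_db:.2f} dB")    # 0.00 dB: averaging in dB under-reads
```

Because the log is concave, averaging in dB always reads at or below the true average power for fluctuating signals like noise.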

The square wave example was in combination with an AC voltmeter only.

Regarding spectral density, you are right. I would use that, as the software probably takes the noise bandwidth of the analyser into account. However, if something is wrong in the underlying power measurement, all the mathematics afterwards will be wrong as well.

To be sure, you can also do it manually based on the displayed power and the RBW setting (of course, knowing the noise bandwidth is better).
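The manual conversion from displayed power to density can be sketched as follows (Python; it treats the RBW as a stand-in for the noise bandwidth, which is only approximately true, and the numbers are purely illustrative):

```python
import math

def noise_density_dbm_hz(power_dbm, rbw_hz):
    # Spectral density = channel power minus 10*log10(bandwidth),
    # treating the RBW as the noise bandwidth (an approximation).
    return power_dbm - 10 * math.log10(rbw_hz)

# Illustrative numbers only: -48 dBm measured in a 2 MHz bandwidth
print(f"{noise_density_dbm_hz(-48.0, 2e6):.1f} dBm/Hz")  # about -111.0
```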

You can overcome the analyser hardware issues by using a noise generator. When switching from "thermal noise only" to "thermal + external noise", you only have to observe the change in signal shown at the analyser.

You can check whether your analyzer uses true RMS detection or not.

Feed a carrier into your analyser and observe the power. Now modulate that carrier with 100% AM and use an RBW that fully covers the sidebands and the carrier. Measure the power again; it should be 50% (1.76 dB) higher. Make sure that you have clean AM.

If the output doesn't change, you do not have an RMS detector in the SA, in that case you need to add a correction manually.
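The expected 1.76 dB step follows from the sideband power of full AM; a quick Python check (nothing instrument-specific, just the textbook AM power formula):

```python
import math

m = 1.0               # 100% AM modulation index
carrier_power = 1.0   # normalised carrier power

# Each sideband carries (m/2)^2 of the carrier power, and there are two,
# so total power = carrier * (1 + m^2 / 2)
total_power = carrier_power * (1 + m**2 / 2)
delta_db = 10 * math.log10(total_power / carrier_power)

print(f"{delta_db:.2f} dB")  # 10*log10(1.5) ~ 1.76 dB
```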

Okay, the SA is the CXA N9000A and the NA is the E5051A.
Okay, that makes sense, since averaging is a linear operation but the log is not. I've switched to no trace averaging and RMS detection with a much longer sweep time, and it's giving reasonable power measurements with very good consistency.

We actually bought a noise source specifically for measuring noise figure with the SA, but the idiots who sold it neglected to include the built-in pre-amp that makes the measurements accurate. At some point we might get that fixed, though.

Yeah, I'm seeing this test referenced in the device measurement manual. I'll give it a try.

Another issue that occurred to me is that the gain measured by my NA is likely incorrect due to the mismatch at my input. What I ultimately want to know is the voltage gain, but the NA only tells me power gain. It determines the delivered power based on the assumption that the load is 50 Ω, and if that is not the case, the result will be wrong. I remeasured my gain using the SA to measure the input and output power, and came up with a gain 2 dB higher than what the NA said... yet again another problem arising from using a noise match instead of an impedance match...

Hello,

The analyser tells you the insertion gain. Based on a 50 Ohms reference (both sides).

As there might be significant input mismatch, the actual power gain will be higher.

If you want the voltage gain (referenced to the actual input voltage), you need the input impedance or complex input reflection coefficient (input VSWR is not sufficient).

When you know the input impedance, you can calculate the input voltage based on the EMF of the signal generator. The EMF is double the output voltage for a given power (into a 50 Ohms load).

When the input impedance is significantly above 50 Ohms, the actual voltage gain is less, as the input voltage will be more than the voltage you would expect if the signal source were terminated with 50 Ohms.

If the input impedance is infinite, the actual voltage gain is 6 dB below the insertion gain (as the input voltage is then equal to the EMF of the signal source).
The above assumes direct coupling between the generator and the input (no impedance match).
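Under those assumptions (50 Ω reference on both sides, direct generator coupling, known complex input impedance), the correction can be sketched in Python. `voltage_gain_db` is a hypothetical helper; the 68 dB and 14.8 + j20.6 Ω values are the numbers quoted earlier in the thread:

```python
import math

def voltage_gain_db(insertion_gain_db, z_in, z0=50.0):
    # A 50-ohm insertion gain references the output to EMF/2.
    # The actual input voltage is EMF * Zin/(Zin+Z0), so the voltage
    # gain picks up a correction factor (Zin+Z0)/(2*Zin).
    s21 = 10 ** (insertion_gain_db / 20)
    av = s21 * (z_in + z0) / (2 * z_in)
    return 20 * math.log10(abs(av))

print(voltage_gain_db(68.0, complex(14.8, 20.6)))  # a bit above 68 dB
# Sanity checks: Zin = 50 ohm leaves the gain unchanged;
# Zin -> infinity gives insertion gain minus 6 dB, as stated above.
```

With the low input impedance measured in this thread, the correction comes out positive (a couple of dB), which is consistent with the SA-based gain reading above.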

Right, this is all what I suspected, and also what I observed.

Based on the best measurements I've been able to come up with, I think my circuit has a noise figure between 2.5 and 3.5dB. Not what I was hoping for... but I'm beginning to suspect that the performance claimed by the datasheet isn't really attainable. I've been doing spreadsheet calculations to try and match their results, and I can only duplicate them if I make some very naive assumptions.

I've attached two spreadsheets. The first shows calculations that yield results closely matching figure 30 of the AD8331 datasheet: http://www.analog.com/static/importe..._8332_8334.pdf
These results are obtained by assuming that the voltage supplied by the source equals \sqrt{4kTR}, that is, the unloaded EMF. This is an absurd assumption, since the source will be loaded down by the pre-amp (through a transformation network), and the actual source voltage seen by the amp will be lower. Am I getting this right?

On the second spreadsheet, I take the loading into account and redo the calculations. It yields noise figures much higher than before. More importantly, the noise figure reaches its minimum at a different impedance!
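The loading effect itself is easy to sketch (Python; the 300 Ω value is the noise-match source resistance mentioned earlier, and the helper name is hypothetical):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 290.0         # standard reference temperature, K

def source_noise_voltage(r_source, r_load=None):
    # Open-circuit thermal EMF density of a resistor: sqrt(4kTR), V/sqrt(Hz).
    emf = math.sqrt(4 * k * T * r_source)
    if r_load is None:
        return emf
    # Loaded case: the amplifier input sees the divided-down voltage.
    return emf * r_load / (r_source + r_load)

rs = 300.0
print(source_noise_voltage(rs))       # unloaded EMF, ~2.2 nV/sqrt(Hz)
print(source_noise_voltage(rs, rs))   # matched load: exactly half the EMF
```

Using the unloaded EMF where the loaded voltage belongs overstates the source noise at the amplifier input, which shifts both the computed NF and the impedance at which it is minimised.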

Here's a rar with the excel spreadsheets. It would be great if someone else checked my work.
noise figure.rar

Copyright © 2017-2020 Microwave EDA Network (微波EDA网). All rights reserved.
