The Block Diagram
Is the while loop really a while loop?
<Alex Le Dain alexATicon-tech.com.au>
(Sep 2000)
No. Strictly speaking the while loop is a "do while" (C) or repeat until
(PASCAL) programming structure. In other words the loop always executes at
least once, and the Boolean test for continuation is done at the end of the
loop execution. With a true while loop in a text based program the test is
done prior to executing any of the commands contained within the loop.
To create a "real" while loop wire the output of your terminating condition
to a case structure surrounding the rest of the code within the loop.
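The difference can be sketched in text code. Python has no do-while, so the post-test behaviour is written with an unconditional loop and a trailing break (a conceptual sketch only; the helper names are illustrative):

```python
def do_while(body, should_stop):
    """Post-test loop: like a LabVIEW while loop, the body always runs once."""
    iterations = 0
    while True:
        body()                 # execute the loop contents
        iterations += 1
        if should_stop():      # conditional terminal checked AFTER the body
            break
    return iterations

def pre_test_while(body, should_stop):
    """Pre-test loop: the 'real' while loop; the body may run zero times."""
    iterations = 0
    while not should_stop():   # test BEFORE any of the body executes
        body()
        iterations += 1
    return iterations

# With a stop condition that is already true, only the post-test loop runs:
print(do_while(lambda: None, lambda: True))       # -> 1
print(pre_test_while(lambda: None, lambda: True)) # -> 0
```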
(back)
How do I stop a for loop?
<Scott Hannahs sthATmagnet.fsu.edu, Mark Evans Info-LabVIEWATUltimateG.com>
(Sep 2000)
A for loop by definition executes the requisite number of times (N). If the
number of iterations is not known then you have to use a while loop.
Is there some religious prohibition against the while loop that precludes its
use? There are ugly, inelegant things you could do by putting a case
statement in the for loop to avoid doing anything after some number of
iterations but there is no reason to do this except to introduce wasteful,
inefficient, useless code. Use a while loop, you will like it.
An underappreciated feature of while loops is their ability to index arrays
or produce array outputs similar to for loops. To do this "enable indexing"
on the wire coming into or out of the while loop. Likewise you can disable
indexing on for loops to avoid the indexing feature.
(back)
What does a 0 msec wait function do in a while loop?
<Greg McKaskle>
(Timeless)
If you have multiple loops in your application that don't need to run as fast
as possible, then placing a 1ms delay will limit your loops to at most 1000
iterations per second. Without a delay, the loop rate can be in the millions
of iterations per second range depending on what is inside it. So that means
that your CPU has about 1/1000th as much work to do and can go off and tend
to other tasks. If your loop was already taking several ms, then the 1ms
delay is likely in parallel and it won't affect the loop speed noticeably. So
placing the delay in the loop can drop your CPU usage noticeably and allow
time for the OS to do other work like send messages that your application may
be waiting for, etc.
But what does a wait of 0 ms do? Let's consider the LV execution system to be
a xerox copying machine. Everyone that has something to copy heads for the
copying machine and lines up. If you have never had the pleasure of waiting
at a copy machine, then consider any other time consuming task where people
wait in line. Every time a LV diagram executes a wait function, it is like
releasing the copier and staying out of the line for some time delay. After
the delay has passed, you will get back in line to make your next copy. A
wait of 0 ms is like immediately going to the end of the line to let a
smaller copy-task take place. If nobody is in line behind you, you
immediately start your next copy task. If someone is in line, it lets them
take a turn.
This 0 ms wait is a pretty cool little trick to make LabVIEW loop parallelism
less chunky. It naturally adds some overhead because the loops have some
setup time, but when the loops are doing significant work it is tiny. Use it
whenever you think you need to, but beware that if some loops don't have any
delay, they are still going to hog the CPU, and the wait of 0 ms may turn
into much larger waits because the loops with no waits play by different
rules.
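The "go to the back of the line" effect has a rough analogue in most threaded languages. The sketch below (Python threads standing in for parallel LabVIEW loops; names and timings are illustrative) shows a greedy loop and a polite loop that yields on every pass with a zero-length sleep:

```python
import threading
import time

counts = {"greedy": 0, "polite": 0}
stop = threading.Event()

def worker(name, yield_each_pass):
    # Each pass is one loop iteration; the increment stands in for real work.
    while not stop.is_set():
        counts[name] += 1
        if yield_each_pass:
            time.sleep(0)   # like a 0 ms wait: step to the back of the line

threads = [threading.Thread(target=worker, args=("greedy", False)),
           threading.Thread(target=worker, args=("polite", True))]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()

# Both loops make progress; the polite one yields the processor every pass.
print(counts["greedy"] > 0 and counts["polite"] > 0)  # -> True
```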
(back)
How do I execute a shell command via system exec?
<Alex Le Dain alexATicon-tech.com.au>
(Timeless)
The System Exec VI can be used to send shell commands. Include the shell
call along with the desired commands. Under Windows operating systems, for
example, the call is to command.com, so the string to copy filea to
a drive would be
command "copy filea a:"
with the arguments enclosed in quotes. The exact shell call will depend on
the operating system.
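The same idea, sketched in Python rather than the LabVIEW VI: hand the command line to the platform shell (cmd.exe on Windows, /bin/sh on Unix-like systems), which `shell=True` selects for you:

```python
import subprocess

# Run a command through the OS shell, the way System Exec hands its
# command string to command.com / the system shell.
result = subprocess.run("echo hello", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())  # -> hello
```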
(back)
What is a state machine?
<Alex Le Dain alexATicon-tech.com.au>
(Timeless)
The state machine is a convenient LabVIEW construct in which a case structure
is contained within a while loop. Which case executes is determined by the
output of the previous case (or, on the first iteration, by the case selector
input). The order of case execution is controlled through the use of a shift
register. By wiring the output of one case to the right-hand shift register
(where it determines the subsequent case) and wiring the left-hand shift
register to the case selector, operations are ordered according to which
state is executed.
The state machine has the advantage of allowing several steps to be linked in
series so that each individual step can be executed (or debugged) easily. The
trap for the inexperienced programmer is the use of the state machine to step
in a haphazard fashion through the cases in the loop. This can lead to
complicated pathways through the state machine reminiscent of convoluted goto
statements in conventional text based languages. However state machines are a
valuable tool and can be used very effectively with this in mind.
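The loop-plus-case-plus-shift-register pattern can be sketched in text code. In the sketch below (state names are made up for illustration) the dict of handlers plays the case structure, and the value each case returns plays the wire into the right-hand shift register:

```python
# Each "case" returns the next state, i.e. the value wired into the
# right-hand shift register.
def init():    return "acquire"
def acquire(): return "save"
def save():    return "done"

cases = {"init": init, "acquire": acquire, "save": save}

state = "init"           # initial value wired into the left shift register
visited = []
while state != "done":   # the while loop around the case structure
    visited.append(state)
    state = cases[state]()   # run the case; its output selects the next case

print(visited)  # -> ['init', 'acquire', 'save']
```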
(back)
How do I control the order of execution in a state machine?
<Alex Le Dain alexATicon-tech.com.au>
(Feb 2003)
There are three possible ways to order a LabVIEW state machine. The first and
easiest method (although NOT recommended) is to use an integer. The integer
output from one case is wired as the selector for the subsequent case. The
disadvantage of this method is that the numbers themselves are not very
helpful in tracking through the code and the case selector becomes somewhat
meaningless. Either of the following methods are preferred and there are pros
and cons for each of them. Debate (strong at times) arises on the list quite
frequently as to the best of these methods, but in practice either will work
very nicely.
The second method is to use a string as the case selector. With the correct
choice of string states, this has the advantage of labelling each case so
that your state machine becomes self documenting. A further advantage of this
method is that all cases need not be written in the first instance and it is
somewhat easier to include new states should the need arise (especially in
earlier versions of LabVIEW (< 6.0)). The disadvantage is that it is easy to
misspell a state and leave the state machine buggy because of this. An easy way
to overcome this disadvantage is to include a default case that pops up a
warning including the spelling of the failed case.
The third method is to use an enumerated type as the case selector. In
preference this enum should be typedef'ed so that changes to the states are
reflected easily and transparently when new states are added to the enum (via
the LabVIEW auto update from typedef setting). Note that there are reports
that this method causes problems in versions of LabVIEW prior to 6.0, because
the enum auto updating did not correctly handle newly inserted cases. This
method has the advantage of the string method in code documentation, but
requires the use of a default case if not all the cases are required to be
written when the code is first created.
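The string-versus-enum trade-off can be illustrated outside LabVIEW. With a string selector a misspelled state silently falls into the default case; with an enumerated type the misspelling is an error at the point of use. A sketch (state names invented for illustration):

```python
from enum import Enum, auto

class State(Enum):        # plays the role of a typedef'ed enum selector
    INIT = auto()
    ACQUIRE = auto()
    DONE = auto()

transitions = {State.INIT: State.ACQUIRE, State.ACQUIRE: State.DONE}

state = State.INIT
order = [state]
while state is not State.DONE:
    state = transitions[state]   # a misspelled state is a hard error here,
    order.append(state)          # not a silently-unmatched string case

print([s.name for s in order])  # -> ['INIT', 'ACQUIRE', 'DONE']
```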
(back)
How do occurrences occur?
<Stepan Riha, stepanATnatinst.com>
(Sep 2000)
There seem to be misconceptions about how exactly occurrences work and what
exactly the "ignore previous" parameter on "Wait on Occurrence" (WoO) does.
The following tries to explain, in a long-winded way, how they
(functionally) behave:
How Occurrences are "generated": When a VI is loaded (!) each "Generate
Occurrence" function allocates exactly one unique occurrence. When the VI is
running and this function is called it simply returns this one occurrence --
no matter how many times it is called (try putting one in a loop and
examining its value using the probe, it will always have the same number). If
you stop the VI and run it again, you will get the same value; only removing
the VI from memory and loading it again will give you a "fresh" value.
How Wait On Occurrence (WoO) works: Each WoO function "remembers" what
occurrence it last waited on and what time it continued (because the
occurrence fired or because of a timeout). When a VI is loaded (!) each WoO is
initialized with a non-existing occurrence. When a WoO is called and "ignore
previous" is FALSE there are
four potential cases:
1) The occurrence has *never* been set -> in this case WoO waits
2) The occurrence has been set since this WoO last executed -> WoO does not
wait
3) The occurrence has last been set before this WoO last executed and last
time this WoO was called it waited on the *same* occurrence -> WoO will wait
4) The occurrence has last been set before this WoO last executed but last
time this WoO was called it waited on a *different* occurrence -> WoO will
*not* wait!!!!
The first three cases are pretty clear, the last one may seem a bit strange.
It will only arise if you have a WoO inside a loop (or inside a *re-entrant*
VI in this loop) and it waits on *different* occurrences (out of an array,
for example) or if it is inside a *non re-entrant* VI and the VI is called
with different occurrences. These cases generally do not happen. The reason
the WoO behaves this way is due to its implementation. Each
occurrence "knows" the last time it has been fired, and each WoO remembers
the occurrence it was last called with and what time it fired (or timed out).
When WoO is called and "ignore previous" is FALSE, it will look at its input;
if the input is the same as last time, it will look at the time of the last
firing and wait depending on whether the time was later than last execution;
if the input is *not* the same as last time, it will simply look at the time
and wait depending on whether it has *ever* been fired.
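The four cases can be modelled with timestamps. The sketch below is a behavioural model only (not LabVIEW internals): each occurrence records when it last fired, and each WoO records the occurrence it last saw and when it last returned, exactly as the paragraph above describes:

```python
import itertools

_clock = itertools.count(1)       # a monotonically increasing "time stamp"

class Occurrence:
    def __init__(self):
        self.last_set = 0         # 0 means "never fired"
    def set(self):
        self.last_set = next(_clock)

class WaitOnOccurrence:
    """Models one WoO node: remembers the occurrence it last waited on
    and the time at which it last returned."""
    def __init__(self):
        self.last_occurrence = None
        self.last_return = 0
    def would_wait(self, occ):
        if occ is self.last_occurrence:
            # same occurrence: wait unless it fired since we last returned
            waits = occ.last_set <= self.last_return      # cases 2 and 3
        else:
            # different occurrence: wait only if it has NEVER fired
            waits = occ.last_set == 0                     # cases 1 and 4
        self.last_occurrence = occ
        self.last_return = next(_clock)
        return waits

woo = WaitOnOccurrence()
a, b = Occurrence(), Occurrence()
b.set()                       # b fires once, early on
print(woo.would_wait(a))      # case 1: never set              -> True (waits)
a.set()
print(woo.would_wait(a))      # case 2: set since last call    -> False
print(woo.would_wait(a))      # case 3: no new firing          -> True (waits)
print(woo.would_wait(b))      # case 4: old firing on a
                              # *different* occurrence         -> False
```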
A possible (implementation) problem: It appears that occurrences are
"remembering" that they have been set during previous invocations of the
program. One would think that generating an occurrence should create a clean
"non-set" occurrence. This problem can be illustrated in a program that has
three parallel loops with an abortable wait in each using occurrences. If the
program is stopped with the stop button things are fine. But if one waits
until one of the random stop conditions triggers the end of the loops
(generated from the other loops), the next time the program is run, the loops
will execute only once and not loop at all. (The random terminate condition
in the actual program is an error occurring in some piece of equipment.)
Either this is a bug, or we are completely wrong on the use of occurrences.
Our example here is using the occurrences in the "do not clear previous" mode.
We would think that we will not remember occurrences from previous runs of the
program since a new clear occurrence should be created with the generate
occurrence icon. In this instance we cannot use the clear previous occurrences
mode since we need a single occurrence to stop multiple parallel loops.
The reason for the problem: The first time the occurrence is set because of
an error, the loops terminate, but the "stop button" loop is still running.  
When you click on the stop button, the occurrence gets triggered again
(unnecessarily) and the program stops. The next time the VI runs, the WoO
will not wait because of this extra trigger; and since you'll trigger again
in the "stop button" loop, the VI won't work until it's reloaded from disk.
A solution to the problem: Due to this behavior of occurrences, it is clear
that one cannot use the "timed out" flag to determine when the occurrence
fired. You will have to maintain some global information about your state,
let's say in a global boolean called "FINISHED". At the beginning of the
program you would initialize it to false. If you have an error, set FINISHED
to true, and then trigger the occurrence. After the WoO, see if FINISHED is
true (make sure you don't read the global until *after* WoO has finished
executing); if FINISHED is false, continue with the loop. In the "Stop
button" loop, also set FINISHED before you trigger the occurrence.
BTW, if you don't like globals, you could use a VI with an uninitialized
shift register (LabVIEW 2 global), but the effect would be the same.
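The FINISHED-flag pattern looks like this in Python, with a `threading.Event` standing in for the occurrence (a sketch of the pattern only; the names are illustrative, not LabVIEW API):

```python
import threading
import time

finished = False                 # the "FINISHED" global boolean
occurrence = threading.Event()   # stands in for the LabVIEW occurrence
log = []

def worker():
    while True:
        occurrence.wait()        # Wait on Occurrence
        occurrence.clear()
        if finished:             # read the flag only AFTER the wait returns
            log.append("stopped cleanly")
            return
        log.append("woke for work")

t = threading.Thread(target=worker)
t.start()

occurrence.set()                 # a normal trigger: the loop does one pass
while not log:                   # wait until the worker has handled it
    time.sleep(0.01)

finished = True                  # set FINISHED first...
occurrence.set()                 # ...then trigger the occurrence to stop
t.join()
print(log)  # -> ['woke for work', 'stopped cleanly']
```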
Comments: One may think that occurrences are quirky, a pain to use, and that
they should be avoided. One might be right! Occurrences are very low level and
you often have to add functionality to them in order to use them effectively.
In other words, they are not for the faint of heart. Anything implemented with
occurrences can also be implemented without them, but maybe not as
efficiently.
What occurrences do is to allow you to program synchronization in a way that
does not use polling, and is thus "cheaper" in processor time.
(back)
Why does a typecast enum output not change?
<Paul Sullivan PaulATSULLutions.com>
(May 2002)
If a typecast function is used to convert a number (related, for example, to
the button pressed in an array of buttons) to an enum type, a problem may be
observed in that the type cast doesn't seem to work: the button press is
changing the number on the input, but the enum output does not change. The
answer is that most likely the two inputs to the typecast function have
different representations, probably I32 and U16. The type cast is reading the
high order bits of the I32 (which aren't changing) and casting them into the
U16 enum output. The solution is to convert both to representations having
the same length (signed and unsigned mix fine), either by changing the
representation of the controls themselves or by using a convert function at
the input to the type cast.
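The mechanism is easy to demonstrate with raw bytes. LabVIEW flattens data big-endian, so casting a wider integer to a narrower one reads the high-order bytes first. A sketch of the effect (helper names are illustrative):

```python
import struct

# Reinterpret the raw bytes of a big-endian I32 as a U16, mimicking how
# Type Cast reads the high-order bytes of the wider input first.
def typecast_i32_to_u16(value):
    high_bytes = struct.pack(">i", value)[:2]    # U16 is half an I32's width
    return struct.unpack(">H", high_bytes)[0]

# Small button indices live entirely in the LOW-order bytes, so the U16
# enum output never changes:
print([typecast_i32_to_u16(n) for n in (0, 1, 2, 3)])   # -> [0, 0, 0, 0]

# Matching the widths first makes the cast behave as expected:
def typecast_u16_to_u16(value):
    return struct.unpack(">H", struct.pack(">H", value))[0]

print([typecast_u16_to_u16(n) for n in (0, 1, 2, 3)])   # -> [0, 1, 2, 3]
```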
(back)
How do I make cool looking icons smaller than the entire square?
<Don R. Wagner wagnerATsiliconlight.com, Albert Geven>
(May 2002)
First, draw the border for the icon in the B&W icon view. Be sure to align
the icon to the wiring terminals by using the Show Terminal check box. This
defines the icon outline. If you have no border drawn in the B&W view, it
defaults to the whole square (in my limited tests, you get a "no outline in
B&W view default will be used" dialog). The icon border will actually be the
largest of the drawn areas in the three icon views. So actually, you need
only draw a single dot on the B&W view to define your 256 colour drawing as
the border limit. Something must be drawn in the B&W view to get the
selection border for the icon on your diagrams to conform to the drawn
outline (is this a useful feature? seems like you could have nothing drawn in
this view by default and have it still work).
Second, Click on the 256 colour icon view (if that's what you use), and click
Copy from Black & White to get your border. Then finish drawing the icon to
your liking, link it up, and presto, small icon. Wires to border, transparent
outside, etc. You can even make irregularly shaped icons. The white area
outside will be transparent in your diagrams. Interestingly, if the border
you define is not contiguous, the inside of your icon will also be
transparent (you can see a wire go all the way to its connection point,
visible as a little cross in the "show terminals" icon view). You can
actually have multiple non-transparent areas in your icons if you wish. Very
cool! You could have a wire pass between two blobs to represent some subtle
transformation on the data if you really wanted to go all out, but this
actually requires drawing wires between the connection points on the icon to
make them look contiguous.
(back)
Are there any other icon tricks I should know?
<Don R. Wagner wagnerATsiliconlight.com, Albert Geven>
(May 2002)
Double clicking on some (any?) of the tool icons in the editing palette will
apply that tool to the entire icon square. For instance double click on the
dotted square selection tool will select the entire icon area (handy for
deleting the icon as a starting point in making small icons). Double clicking
on the bordered square will draw a border around the current icon in the
currently selected colour, without affecting the inner pixels, etc.
(back)
Why doesn't my FOR loop return a value?
<Jean-Pierre Drolet jean-pierre.droletATtr.cgocable.ca, Uwe Frenz
Uwe.frenzATgetemed.de, Alex Le Dain alexATicon-tech.com.au>
(Oct 2001)
As a good programming rule in LabVIEW, NEVER output a value from a for loop
with indexing disabled. When the loop does not execute (0 iterations) a
non-indexing output stays undefined and will hold any garbage left there by
previous memory usage. This is because the wire output from the for loop has
no code or data source to get the value from when the loop does not execute.
Instead, use a shift register (SR) to output the value and at the left SR
enter a default value for the case when the loop does not execute. When the
for loop does not execute this default value of the left SR is passed to the
right SR. Similarly passing a refnum through a for loop that never executes
(0 iterations) destroys the reference. Either pass the reference using a SR
as described above or wire around the loop. Note that while loops always
execute at least once so the outputs are always defined.
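The shift-register-with-default pattern is the same idea as giving a reduction an initial value. A sketch in Python (the empty `readings` list plays a for loop with zero iterations):

```python
from functools import reduce

readings = []   # zero iterations: the loop body never runs

# With no initial value there is nothing to output -- the reduction fails,
# just as a non-indexing for-loop output is undefined after 0 iterations:
try:
    total = reduce(lambda acc, x: acc + x, readings)
except TypeError:
    total = None
print(total)  # -> None

# Supplying an initial value is like wiring a default into the left shift
# register: the output is defined even when the loop never executes.
total = reduce(lambda acc, x: acc + x, readings, 0)
print(total)  # -> 0
```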
(back)
What is a reentrant VI and what are the implications of reentrancy?
<David A. Moore David_A_MooreATMooreGoodIdeas.com, Greg McKaskle>
(Feb 2003)
First, consider three VIs, foo.vi, bar.vi, and sub.vi. Both foo.vi and bar.vi
call sub.vi.
If sub.vi is normal (not reentrant), then if foo.vi tries to call sub.vi but
sub.vi is busy servicing a call from bar.vi, then foo.vi has to wait. This
can be both a very GOOD thing and a very BAD thing depending on
circumstances. It's very GOOD when sub.vi controls access to something like a
serial port, where you only want one part of your program accessing it at a
time. It's very BAD when sub.vi controls something like ALL the serial ports,
because you may want foo.vi to be able to use one serial port while bar.vi is
busy using a different serial port. Another very bad circumstance is where
foo.vi is in a critical loop and bar.vi is not, yet because of the contention
for sub.vi, bar.vi can end up blocking foo.vi.
If sub.vi is reentrant, then both foo.vi and bar.vi can call sub.vi at the
same time. In order for this to work, each call to sub.vi needs to have its
own "data space" which is all the internal storage sub.vi uses in order to
execute its code.
Now at this point I need to point out a distinction between LabVIEW and most
other languages. LabVIEW doesn't want to allocate a data space on the fly,
because for LabVIEW that would slow down performance. LabVIEW allocates all
the VI data spaces it needs when VIs are being loaded. Except when you use VI
Server to call VIs dynamically, all the loading happens before any VIs
execute. Therefore, for each place that sub.vi appears on foo.vi's block
diagram, a copy of sub.vi's data space gets embedded in foo.vi's data space,
assuming sub.vi is reentrant. If sub.vi isn't reentrant, it just has its one
data space allocated that each call will use in turn.
In most (all?) other languages, reentrant functions allocate their data
spaces on the fly, so there's no storage that goes with each place that a
particular function is called.
How does LabVIEW's unusual implementation affect us in practical terms? There
are really two ways:
1. If you use uninitialized shift registers to store information, then you
can get two different behaviors depending on the reentrancy of your VI. For a
non-reentrant VI, you get a data sharing function that lets you move large
quantities of information between parallel loops without making copies of it.
For a reentrant VI, you get a reusable storage function that can keep
independent copies each place you use it. There is a PowerPoint presentation
and some example code related to uninitialized shift registers at:
http://www.mooregoodideas.com/Downloads/Downloads.htm#ChangeDetector.
2. The second implication is that you can't do recursion (functions that call
themselves) easily in LabVIEW. In most languages, if a function is reentrant
then it's OK for it to call itself. In LabVIEW, that would require that the
data storage for sub.vi would include a copy of the data storage for sub.vi
which would ... to infinity. You can do recursion in LabVIEW if you use VI
Server to have a VI call itself dynamically, but as I said, allocating data
spaces on the fly is inherently slow. In LabVIEW, it's best to convert
recursive algorithms to their iterative equivalents, which I hear is
mathematically proven to always be possible. In the iterative version, you'll
end up changing the sizes of arrays at each iteration, which is also one of
the slower operations in LabVIEW, but is not nearly as slow as dynamic VI
calls.
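A standard example of such a conversion, sketched in Python (factorial chosen purely for illustration): the recursive form needs a fresh data space per call, while the iterative form reuses one accumulator, which maps directly onto a for loop with a shift register.

```python
# Recursive form: each call would need its own data space, which LabVIEW
# cannot pre-allocate for a VI that contains a call to itself.
def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Iterative equivalent: a single data space (the accumulator) reused each
# pass, like a for loop with a shift register.
def factorial_iterative(n):
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

print(factorial_recursive(10) == factorial_iterative(10))  # -> True
```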
And Greg McKaskle added:
To expand on this, reentrant means that more than one execution is allowed to
take place at the same time. In other languages, it is more a situation than
a setting. You never mark a C function as allowing or disallowing reentrancy,
it is either safe to do so or a source of bugs. In LV it is a setting, and
many times its setting doesn't affect the correctness of a VI, but in some
cases, it can be a source of bugs. It depends on what the VI does.
The setting in LV determines two major attributes about how a VI executes.
First is access. With reentrancy turned off, only one call to the subVI can
be active at a time. When the current call finishes, the next one can begin.
The subVI calls queue up while the VI is busy. For functions that execute
quickly, this is normally fine and reentrancy doesn't affect much.
If you have a function that uses TCP to talk to another computer and waits
for responses, these waits also affect the other subVI calls that are queued
up. So if you have an operation that can occur in parallel and doesn't
consume the CPU, you can make the VI reentrant and the multiple subVI calls
don't enter a queue, and multiple VIs can talk TCP and wait for responses at
once. This allows the wait time of one subVI to be used as work time in
another and increases overall performance.
On the other hand, given a VI that reads a global, modifies it, and writes it
back, a reentrant subVI means that more than one subVI call at a time can be
modifying the global -- a race condition which will cause incorrect answers.
Lots of real-world devices also get confused when more than one subVI tries
to control them at a time. So when trying to protect a global resource, one
of the tools, and frequently the easiest to use, is simply to make sure that
the access goes through a non-reentrant VI.
The second attribute is data side-effects. If a VI has unconnected controls
or uninitialized shift registers on its diagram, then it remembers some
amount of information from call to call. A good example of this is a PID or a
filter. Data from previous calls affect the result of the next call. For
these sorts of VIs, if they are reentrant, then each call gets its own place
to store the previous call's state information. If made non-reentrant, there
will be only one storage location for all calls to share, so the data will
get all jumbled, likely causing an incorrect answer.
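The shared-versus-per-instance storage distinction has a direct analogue in closures. A sketch (the smoothing filter and its coefficients are invented for illustration): the module-level state behaves like a non-reentrant VI's single data space, while each closure behaves like a reentrant clone with its own "shift register".

```python
# Non-reentrant analogue: ONE storage location shared by every caller.
shared_last = [0.0]
def smooth_shared(x):
    shared_last[0] = 0.5 * shared_last[0] + 0.5 * x   # state shared by ALL
    return shared_last[0]

# Reentrant analogue: each call site gets its own copy of the state.
def make_smoother():
    last = [0.0]                     # this instance's "shift register"
    def smooth(x):
        last[0] = 0.5 * last[0] + 0.5 * x
        return last[0]
    return smooth

a, b = make_smoother(), make_smoother()
a(10.0); b(2.0)
print(a(10.0), b(2.0))        # -> 7.5 1.5  (independent histories)

smooth_shared(10.0)
print(smooth_shared(2.0))     # -> 3.5  (the 10.0 history leaks into
                              #          the 2.0 stream: jumbled data)
```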
(back)
What are some of the pitfalls of the event structure?
<Alex Le Dain alexATicon-tech.com.au>
(Feb 2003)
In LabVIEW v6.1 a new structure was introduced: the Event Structure. This
structure allows the user more power over the UI, but at the same time with
the new power comes the potential to lock up a LabVIEW application hard. Below
is a compilation of tips for using the UI Event Structure:
0. Read the LabVIEW Help on "Loops and Case Structures, Case Sequence and
Event Structures, Event Structures, Caveats and Recommendations when Using
Events in LabVIEW"
1. Only place a single event structure within a while loop. In reality the
while loop and the event structure are intimately linked so there should only
ever be one event structure per loop.
2. Never place an event structure within an event structure. It is better to
solve issues where you might want to do this with some thought and perhaps a
second while loop in parallel.
3. It is possible to have more than one event structure while loop
combination on the same block diagram. There are valid reasons why you might
want to do this: eg to have some events handled without pausing the while
loop (Lock Panel Until Handler Completes = False) and others to wait until
the handler completes (= True). The advice here is to separate out those
events that lock the handler and those that do not lock the handler into
separate while loops.
4. Whether the panel is locked until the handler completes is set
individually for EACH case in the event structure.
5. It is advisable to notify the user (ie with a message popup or mouse busy)
when tasks that lock the panel take a long time to execute (> 0.5 s). Another
way is to set some busy status message visible as the first step and then
hide this message when the case exits.
6. Boolean switches and their state require some thought when used within the
event structure. With the default button style ("latch when released")
LabVIEW reads the state of the boolean and only after reading does the button
change back to its other state. If the boolean is the "switched" type then
TWO events are generated when the boolean is switched (ie F->T and T->F). If
a single "value changed" event case is used for a boolean of the "switched"
type, make sure that the code is only executed when the true event is
processed (ie with a true/false case selector). This is most relevant when
the boolean is to be used as a local variable, since then it must be of the
"switched" type.
(back)
Why does the event structure behave as it does?
<Greg McKaskle>
(Feb 2003)
LV BE (Before Events) had the panel and diagram behaving as asynchronously as
possible WRT one another. A control periodically handled UI events from the
user and dropped the value into its terminal. Independently, the diagram
takes the current value when it is needed and does its work. There is no
synchronization between when the UI event is handled, when the value in the
terminal changes, and when the diagram reads it. It is often a very simple
way of writing your code and mimics how most hardware works. So why do events?
The primary reason for the events feature is to allow synchronization between
the UI and the diagram. First off, the diagram gets notification of a value
change. It is guaranteed not to miss user changes or to burn up the CPU
looking for them. In addition to the notification, the diagram gets a chance
to respond, to affect the rest of the UI, before the rest of the user input
is evaluated. Maybe this part needs an example.
Let's look at polled radio buttons. In theory, your diagram code polls fast
enough to see each change in the radio buttons so that it can make sure that
the previous button is set to FALSE. But when the user is faster than the
diagram and presses multiple buttons, what order did they press them in?
There has to be a fixup step to break the tie and pop out all but one button
to FALSE regardless of the order the user clicked them.
With events, when the user clicks on a button, the panel is locked and will
not process the next user click until the diagram finishes. This allows the
diagram to see the button changes one at a time in the same order as the user
presses.
Another example, perhaps a better one, is a panel with three action buttons:
Save, Acquire, and Quit. The order that the presses are responded to is
important, so the polling diagram has to "guess" whether to Save and then
Acquire or Acquire, then Save. The Event Structure knows the order and the
code in it is synchronized with the UI allowing for a better, more friendly
UI.
Getting back on topic, the event structure introduces the synchronization,
but the downside is that as with all synchronization, it allows for deadlocks
when not used in the right way. As already discovered, nesting event
structures is almost never the right thing to do, at least not yet.
As noted earlier, it is possible to leave the Event Structures nested, but
turn off the UI locking. This appears to solve the problem, but it isn't how
I would do it. I think a much better solution is to combine the diagrams of
two structures into one. The value change for hidden controls will not
happen, but it doesn't hurt to have it in the list of things to watch for.
Another option is to place them in parallel loops. This will let the first
structure finish and go back to sleep.
(back)
Why doesn't the event structure register local variable changes?
<Greg McKaskle>
(Feb 2003)
The main reason for not sending events for programmatic value changes is to
avoid feedback.
A happens. In responding to A, you update B and C. If responding to B or C
results in changing A, then you have feedback and your code will behave like
a dog chasing its tail. Sometimes the feedback will die out because the value
set will match what is already there, but often it will continue indefinitely
in a new type of infinite loop.
One solution to this is what is called User Events. You define what data they
carry and when they fire. Then you either have the value change and the set
local both fire the user event, or you can combine the event diagram to
handle both and just fire the event when writing to the locals. Today, you
can accomplish this with the queued state machine using the state machine to
do all the common work and just having the event structure pass things to the
queue.
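The repost-to-a-queue pattern looks like this in Python (a sketch of the queued state machine idea; the event names and handlers are illustrative): both the UI value change and the programmatic write push the same message, so one consumer handles both.

```python
import queue

events = queue.Queue()   # the work queue fed by the event handler

def on_value_change(new_value):
    # UI "Value Change" event case: just repost the work, nothing else here
    events.put(("value_changed", new_value))

def write_local(new_value):
    # programmatic write: fire the same user event by hand, since writing
    # a local variable generates no UI event
    events.put(("value_changed", new_value))

on_value_change(3)   # a user edit
write_local(7)       # a programmatic change

handled = []
while not events.empty():          # the "queued state machine" loop:
    handled.append(events.get())   # both paths arrive through one handler
print(handled)  # -> [('value_changed', 3), ('value_changed', 7)]
```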
(back)
Why is it necessary to lock the panel after an event fires?
<Greg McKaskle>
(Feb 2003)
Given the synchronized mechanism of events, it is pretty easy to repost
events to another queue or turn off locking and synchronization. If the event
structure weren't synchronized to begin with, it would be impossible for the
diagram to add that synchronization and become synchronized with the UI, so it
is at least necessary for event diagrams to be able to lock the panel.
Should it be the default?
In our opinion, yes. When responding to an event, it is pretty common to
enable/disable or show/hide some other part of the display. Until you finish
doing this, it is wrong for LV to process user clicks on those controls, and
LV doesn't know which controls you are changing until you are finished.
Additionally, it isn't the best idea, but what happens when the event handler
does something expensive like write lots of stuff to a database inside the
event structure? If the UI is locked, then the user's events don't do much,
ideally the mouse is made to spin and this works the same as a C program. The
user is waiting for the computer, and the UI more or less tells the user to
be patient.
If the UI isn't locked, the user can change other things, but you can't
execute another frame of your event structure until this one is finished.
This is a node. It must finish and propagate data before it can run again,
and the loop it is probably in can't go to the next iteration until it
completes. You would have low level clicks being interpreted with the
current state of the controls before the diagram has a chance to respond.
This is sometimes the case, so it is possible to turn off panel locking on
the cases where you know that you may take a while and you do not affect the
state of the UI.
If you need to respond to the events in parallel, you can make a parallel
loop, add an event handler for the other controls, and handle them there
while this one churns away. That will work fine and it is IMO clear from the
diagram what is synchronized and what is parallel. Taken to an extreme, each
control has its own loop and this approach stinks, but it is a valid
architecture. Note that for this to work well, you need to turn off the UI
lock or have a node to release it.
Another way of doing expensive tasks is to have the Event Structure do the
minimum amount necessary before unlocking -- treat them like interrupts. Have
the event structure repost expensive operations to a parallel loop or fire up
an asynchronous dynamic VI. Now your event structure is free to handle
events, your UI is live, LV is still a nice parallel-friendly language, and
your diagram just needs to keep track of what parallel tasks it has going on.
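The "repost expensive operations to a parallel loop" idea is the classic
producer/consumer shape. Since LabVIEW is graphical, here is a hypothetical
Python sketch of the same structure (the names `on_button_click`, `consumer`,
and the `jobs` queue are illustrative, not LabVIEW APIs): the event handler
does the minimum work of posting the job, and a parallel loop churns through
the expensive part.

```python
import queue
import threading
import time

# The queue stands in for the LabVIEW queue that carries reposted work
# from the event loop to the parallel consumer loop.
jobs = queue.Queue()
results = []

def consumer():
    # Analogous to the parallel loop that performs the expensive work.
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the loop down
            break
        time.sleep(0.01)         # stand-in for a slow database write
        results.append(f"done: {job}")
        jobs.task_done()

worker = threading.Thread(target=consumer)
worker.start()

def on_button_click(event_data):
    # The "event case" does the minimum: repost the work and return,
    # so the event loop is free to handle the next event immediately.
    jobs.put(event_data)

for i in range(3):
    on_button_click(f"event {i}")   # each handler call returns at once

jobs.join()            # wait until the consumer has drained the queue
jobs.put(None)         # stop the consumer loop
worker.join()
print(results)         # all three jobs completed off the "UI" thread
```

The key property is the same as in the LabVIEW case: the handler never
blocks on the expensive work, so events keep being serviced.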
In the end, I'm not sure I can convince you, but if you continue to
experiment with the different architectures that can be built using the Event
Structure, I think you will come to agree that it normally doesn't matter
whether it is locked or not. There are times where it is really nice that it
is locked, and occasionally you may turn off locking so that the user can do
additional UI things up to the point where synchronization is necessary
again. For correctness, we decided that locking should be the default.
I'd suggest reading the article on devzone sooner or later. It is in the
Developer Insights/LabVIEW Guru section, and it will help start you down the
right path.
(back)
What does an "Insane Object" mean? Should I report this as a bug to NI or do
they already know about it?
<Stephan Mercer stephan.mercerATni.com>
(Feb 2003)
Firstly, there is not necessarily any need to report this to NI, as they do
know about insane objects. Secondly, they are NOT just bugs: insane objects
can be flagged as part of LabVIEW's own verification process.
The insane object message is what LabVIEW puts in a dialog when one of the
objects on the diagram does not meet its "checksum" -- in other words, the
object isn't in a state we expect it to be in. Most of the time these errors
are not fatal: we simply put the object back into the state we expect. But it
raises questions about how the object got into that bad state, and what might
have been done with it between the last time we checked it (when it was good)
and the time it became insane.
The insane object messages are something we work on with each version of LV.
But as it is a generic error message that can apply to anything from a simple
cosmetic to the front panel itself, you'll still see them in any version of
LV that has a bug -- unfortunately a fact of life for the foreseeable future.
If you get such a message, it is good to check the known bugs of LV at ni.com
and if the particular insanity in your dialog is not listed, report the
insanity to NI technical support.
The cryptic nature of the message can be deciphered as follows:
Insane object at FPHP+44 in "name.vi": {dsitem} 0x400: Panel (FPSC)
* FPHP -- this will be either FP or BD for "front panel heap" or "block
diagram heap"
* 44 -- this is a number indicating which object
* name.vi -- which VI had the insanity
* {dsitem} 0x400 -- really only meaningful if you know how our internals
work; I'll skip it here
* Panel (FPSC) -- the type of object that has problems. The four letter codes
are usually descriptive (COSM for a cosmetic or simple drawn part, SGNL for a
signal aka wire)
Most of the time, deleting the offending object and recreating it from
scratch is sufficient to fix your VI and allow you to continue working.
(back)
What are LabVIEW 2 style globals, functional globals, and uninitialised shift
registers (USR's)?
<Stephan Mercer stephan.mercerATni.com>
(Feb 2003)
LabVIEW 2 style globals and functional globals refer to the same code
construct in LV. Most people on the mailing list call these structures LV2
globals because LabVIEW 2 was the version in which they were introduced.
Functional globals is the name NI uses, because they can be more than just
global variables: they can contain code as well as data. Uninitialised shift
registers (USR's) are the LV code primitives used to make these functional
global variables work.
A functional global is created by placing a shift register on a while loop
and wiring a Boolean constant to the conditional terminal so that the loop
executes exactly once. When the shift register is not wired with an input on
the left-hand side (i.e. it is uninitialised), LV retains the memory in use
for that vi between calls. This means the vi stores the value of the shift
register between calls (executions) in the code. The construct can be
extended by adding cases within the loop, e.g. one for read and one for
write, with a 'mode' enumerated type selecting which case executes when the
vi is called. These globals can contain as many cases as needed, and because
they are very similar to state machines, they can be made to execute several
cases at each call for increased functionality.
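The mechanism above can be mimicked in a text language with a closure: the
closed-over variable plays the role of the uninitialised shift register, and
the mode enum selects a "case". This is a hypothetical Python analogy
(`make_functional_global` and `Mode` are illustrative names), not LabVIEW
code:

```python
from enum import Enum

class Mode(Enum):
    READ = 0
    WRITE = 1

def make_functional_global(initial=None):
    # 'state' persists between calls to the inner function, just as an
    # uninitialised shift register retains its value between vi calls.
    state = {"value": initial}

    def functional_global(mode, value=None):
        # Each Mode branch is analogous to a case in the case structure.
        if mode is Mode.WRITE:
            state["value"] = value
        return state["value"]     # READ simply returns the stored value

    return functional_global

counter = make_functional_global(0)
counter(Mode.WRITE, 42)
print(counter(Mode.READ))   # -> 42: the value survived between calls
```

Adding more members to `Mode` (e.g. INCREMENT, RESET) corresponds to adding
more cases inside the loop, which is how these grow into small state
machines.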
Of course all recent versions of LV have global variables, accessed from the
structures palette, so why do people still refer to them or even use them?
Well, there are a couple of good reasons:
1. More efficient memory storage. Because the vi retains the same data space
regardless of where it is used in the code, only a single call to the memory
manager is made. If arrays are stored in the USR, and replace array element
primitives are used, then the memory space is used and accessed efficiently
wherever the vi is called.
2. Built-in data space protection. One problem with normal globals in LV
occurs if you write a value to the global in several places in your code:
updates to the value are unpredictable with respect to time. This means
changes to the global can be subject to race conditions. This is especially
true if you read, modify, and then write back the global, where a race
condition can occur between the read and the write of the variable.
So how does it work? In the LV execution system a single vi, even if called
from multiple places in the code, executes exclusively: a new call will not
start until the previous execution is complete. Because a functional global
is a vi, LV ensures that only one call to it runs at a time, so updating of
the shift registers is protected by the execution system. This mechanism
'protects' the data and prevents the race condition. It should also be noted
that USR's only retain this ability to share data across the code when the
vi is not reentrant. If the vi is reentrant then the USR still stores data,
but only between calls of the vi at that place in the code -- another call
to the vi in another section of the code would have its own data space and
its own saved USR values. A reentrant vi would therefore not transfer data
from one section of the code to another, and by definition would not be a
functional global.
(back)
