The first set of advanced functions, which are trigger-based (TRIGGER_RECEIVE_ANY, SET_WAITING_TRIGGER, and SET_WAITING_TRIGGER_WITH_TIMEOUT), are more flexible to use than the second set, but may slow some simulations down slightly. The second set, which are call-based (CALL_THREAD, CALL_ON_RECEIVE_ALL_PORTS, CALL_ON_RECEIVE_ANY, SET_WAITING_CALL, and SET_WAITING_CALL_WITH_TIMEOUT), can greatly accelerate simulation speeds while reducing thread-counts, but are more restrictive in usage.
Background: CSIM's normal thread-control functions support a descriptive paradigm known as procedural description. Delay, Wait, or Receive statements, also called blocking statements, may be placed within larger procedural blocks of code and are treated no differently than any other statements. Multiple blocking statements may occur within deeply nested for or while loops, or within conditional if blocks. This enables very natural model behavior descriptions in the form of logical procedures, where the procedure boundaries correspond to logical processes, not merely to the points where blocking statements occur. Few simulation tools or environments other than VHDL, Verilog, and CSIM support procedural description paradigms. More typically, discrete-event simulators support only a state-oriented paradigm, which is actually a subset of the procedural paradigm. CSIM can be used in either the procedural or the state-transition paradigm. While elegant and efficient from a descriptive standpoint, blocking statements within procedural descriptions consume threads while waiting for an activation event or time-out.
The advanced thread-control functions enable threads to be relinquished while inactive and re-established only when activated. This requires that the blocking thread: (1) replace the traditional blocking statement (DELAY, RECEIVE, WAIT) with an advanced trigger-based or call-based function, (2) name a thread to be activated on resumption, and (3) exit immediately. Any state values must be preserved in shared variables so that the resuming thread can pick up where the exiting thread left off. As you can see, this mode is ideal for state-transition based modeling. Many models can be converted to this mode by breaking threads into pieces, but this may not be practical for deeply nested procedural threads. Advice: use where appropriate or needed.
In either case, using these new routines will present some coding inconveniences.
They are advanced options; not cure-alls.
Please heed the cautionary notice below.
void TRIGGER_RECEIVE_ANY( thread_name, thread_var, port_name_list );
void SET_WAITING_TRIGGER( thread_name, thread_var, synchron, QUEUABLE/NONQUEUABLE );
void SET_WAITING_TRIGGER_WITH_TIMEOUT( thread_name, thread_var, synchron, QUEUABLE/NONQUEUABLE, timeout );
These call-based routines are similar to the trigger-based methods above, except that instead of starting the named thread-routine as a true thread, the routine is called as a subroutine directly from the simulator's main kernel when the activation event arises. There is no thread creation/destruction overhead, and virtually unlimited stack space is available. The thread routine behaves in the normal way, like any other model thread code, and it has access to the shared variables of the box instance under which it is running, like any other thread. But it cannot block (WAIT, DELAY, RECEIVE). Instead, it can call any of the trigger-based or call-based functions to accomplish the same effect. The called thread-routine must perform its actions and exit immediately. (No other threads can run until the called routine finishes.)
int CALL_THREAD( thread_name, delay_time, thread_var );
void CALL_ON_RECEIVE_ALL_PORTS( thread_name, thread_var );
void CALL_ON_RECEIVE_ANY( thread_name, thread_var, port_name_list );
void SET_WAITING_CALL( thread_name, thread_var, synchron, QUEUABLE/NONQUEUABLE );
void SET_WAITING_CALL_WITH_TIMEOUT( thread_name, thread_var, synchron, QUEUABLE/NONQUEUABLE, timeout );
PLEASE NOTE THE LIMITATIONS DESCRIBED ABOVE !!!
These new routines may not be convenient nor usable in all cases!
However, in situations where they apply, they can reduce thread-counts and/or
speed up simulations. Basically, TRIGGER_RECEIVE_ANY and SET_WAITING_TRIGGER are
safe to use almost anywhere, except where deep nesting impedes breaking up the thread.
However, their only advantage is reducing thread-counts; they may slightly slow simulations.
CALL_ON_RECEIVE_ANY, CALL_THREAD, and SET_WAITING_CALL are even more restrictive,
in that THEY CAN ONLY BE USED WHERE THE CALLED THREAD DOES NOT BLOCK !!!
However, in such cases you both save threads and improve run-time!
(An added benefit of the CALL_xx methods: unlimited stack for the called routine.)
/* Traditional procedural model. */
DEFINE_DEVICE: Spinner1
    DEFINE_THREAD: start_up
    {
        int counter = 0;
        while (counter < 10)
        {
            DELAY( 10.0 + (double)counter );
            printf("%d: The time is now %g\n", counter, CSIM_TIME);
            counter = counter + 1;
        }
    }
    END_DEFINE_THREAD.
END_DEFINE_DEVICE.

/* Model using "zero-thread" call-based (or state-based) method. */
DEFINE_DEVICE: Spinner2
    int counter;    /* Declare persistent state variable, shared between threads. */

    DEFINE_THREAD: start_up
    {
        counter = 0;    /* Initialize the state variable. */
        CALL_THREAD( state2, 10.0 + (double)counter, 0 );  /* Schedule state2 to activate in the future. */
    }
    END_DEFINE_THREAD.

    DEFINE_THREAD: state2
    {
        printf("%d: The time is now %g\n", counter, CSIM_TIME);
        counter = counter + 1;
        if (counter < 10)
            CALL_THREAD( state2, 10.0 + (double)counter, 0 );  /* Re-schedule myself to activate in the future. */
    }
    END_DEFINE_THREAD.
END_DEFINE_DEVICE.

(Note that Spinner2 passed counter to state2 as a shared variable. Alternatively, it could have been passed as a THREAD_VAR.)
/* Traditional procedural model. */
DEFINE_DEVICE: Relay1
    DEFINE_THREAD: start_up
    {
        int *message, *len;
        int numports;
        char **portlist;
        portlist = list_in_ports( &numports );  /* Get the in-port names. */
        while (1)
        {
            RECEIVE( portlist, &message, &len );  /* Wait for and receive incoming messages. */
            DELAY( 2.5 );
            SEND( outport, message, len );        /* Send message out. */
        }  /* Loop back to wait for next message. */
    }
    END_DEFINE_THREAD.
END_DEFINE_DEVICE.

/* Model using "zero-thread" TRIGGER-based (or state-based) method. */
DEFINE_DEVICE: Relay2
    char **portlist;    /* Declare persistent state variable, shared between threads. */

    DEFINE_THREAD: start_up
    {
        int numports;
        portlist = list_in_ports( &numports );       /* Get the in-port names. */
        TRIGGER_RECEIVE_ANY( state2, 0, portlist );  /* Wait for an incoming message. */
    }
    END_DEFINE_THREAD.

    DEFINE_THREAD: state2
    {
        int *message, *len;
        RECEIVE( portlist, &message, &len );         /* Receive the incoming message. */
        DELAY( 2.5 );
        SEND( outport, message, len );               /* Send message out. */
        TRIGGER_RECEIVE_ANY( state2, 0, portlist );  /* Wait for next incoming message. */
    }
    END_DEFINE_THREAD.
END_DEFINE_DEVICE.

/* Model using "zero-thread" CALL-based (or state-based) methods. */
DEFINE_DEVICE: Relay3
    char **portlist;    /* Declare persistent state variables, shared between threads. */
    int *len;

    DEFINE_THREAD: start_up
    {
        int numports;
        portlist = list_in_ports( &numports );       /* Get the in-port names. */
        CALL_ON_RECEIVE_ANY( state2, 0, portlist );  /* Wait for an incoming message. */
    }
    END_DEFINE_THREAD.

    DEFINE_THREAD: state2
    {
        int *message;
        RECEIVE( portlist, &message, &len );  /* Receive the incoming message. */
        CALL_THREAD( state3, 2.5, message );  /* Delay state3 for 2.5 units. */
    }
    END_DEFINE_THREAD.

    DEFINE_THREAD: state3
    {
        int *message;
        message = (int *)THREAD_VAR;                 /* Recover the message from the thread-var. */
        SEND( outport, message, len );               /* Send message out (len is the shared variable). */
        CALL_ON_RECEIVE_ANY( state2, 0, portlist );  /* Wait for next incoming message. */
    }
    END_DEFINE_THREAD.
END_DEFINE_DEVICE.

(Note how the Relay2 and Relay3 models are similar, except that Relay3 needed an extra state (state3) because call-based threads cannot have internal delays. The delay was accomplished by scheduling state3 2.5 units in the future via CALL_THREAD. Notice how state values were passed to state3 by a combination of a thread-var (message) and a shared variable (len).)