== Threads and concurrency - summary points from presentation 2001 ==
=== Introduction ===
* Concurrency: more than one activity occurring at the same time
* Models usually contain concurrent activity
** Models are inherently parallel programs
** The (runtime) engine itself only processes one activity at a time (even on dual CPUs)
=== Events And Their Ordering ===
* Events generate activity in a model
* Activity generates further events
* Events are sorted by time in the FEC (it is not safe to rely on the order in which same-time events in the FEC will be processed)
* Past models forced ordering using delays
* Order can be assured by understanding threads
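To illustrate why same-time ordering is unreliable, a FEC can be sketched as a priority queue keyed on event time. This is a generic Python sketch, not Planimate internals: the `FEC` class and its tie-breaking sequence counter are assumptions for the illustration. The counter keeps the heap stable here, but the whole point of the summary is that a model should not depend on how same-time ties break.

```python
import heapq
from itertools import count

# Minimal Future Event Calendar (FEC) sketch: events are ordered by time.
# The sequence counter is an internal tie-breaker only -- models should
# NOT rely on the order in which same-time events come out.
class FEC:
    def __init__(self):
        self._heap = []
        self._seq = count()  # insertion order, used only to keep the heap stable

    def post(self, time, event):
        heapq.heappush(self._heap, (time, next(self._seq), event))

    def pop(self):
        time, _, event = heapq.heappop(self._heap)
        return time, event

fec = FEC()
fec.post(10.0, "B arrives")
fec.post(5.0, "A arrives")
fec.post(5.0, "A2 arrives")   # same time as "A arrives": tie order is an internal detail

order = [fec.pop() for _ in range(3)]
```

Events are guaranteed to come out in time order; how the two events at time 5.0 are ranked against each other is an implementation detail of the calendar.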
=== Threads ===
* Items flow between points of capacity; a moving (flowing) item is a thread
* Events trigger item movement, hence events trigger threads
* A thread generally cannot be interrupted
* All activities within one thread occur together and in their flow sequence
* A thread ends when an item reaches a point of capacity
* Threads can post “side effect” events
* The next thread to execute is well defined only in specific cases
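The run-to-completion behaviour above can be sketched as follows. All names (`run_thread`, `post_event`, the step labels) are illustrative, not Planimate objects: the thread's flow steps execute in sequence as one uninterruptible unit, and any “side effect” event it raises is only posted for later rather than run in the middle of the thread.

```python
posted = []   # stands in for the FEC: deferred side-effect events
trace = []    # what the thread actually did, in order

def post_event(name):
    posted.append(name)           # deferred: will run in a later thread

def run_thread(item):
    trace.append((item, "leave queue"))
    post_event("item moved")               # side effect: posted, not executed now
    trace.append((item, "enter server"))   # the thread continues uninterrupted
    trace.append((item, "parked"))         # thread ends at a point of capacity

run_thread("item1")
```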
=== Co-ordinating Threads - One to One vs. One to Many ===
* Attribute Gate (One to One)
* Messages (One to One)
** Normal, Directed, Immediate
* Broadcasts (One to Many)
** Global, Scoped, Directed
* Splitters (One to Many)
* Immediate Messages
=== Messaging from a dispatcher ===
* Gives a One to One co-ordination of threads
* The original item waits until the message item completes and “returns”
* Thread order is maintained if the message item leaves within its original thread
* The message can be dynamically directed to a location determined at runtime
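The one-to-one wait-and-return behaviour is analogous to a synchronous call: the original item's thread is suspended while the message item runs its flow, and resumes when the message item returns. A minimal sketch, with all names (`message_flow`, `original_thread`) assumed for illustration:

```python
trace = []

def message_flow(payload):
    # The message item runs its own flow to completion...
    trace.append(f"message item processes {payload}")
    return payload.upper()            # ...then "returns"

def original_thread():
    trace.append("original item sends message")
    result = message_flow("order")    # the original item waits here
    trace.append(f"original item resumes with {result}")

original_thread()
```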
=== Broadcasting from a dispatcher ===
* Combines One To Many and Wait Until
* All broadcasts are sent before the item leaves
* Only the execution of the initial thread after each broadcast entry is guaranteed
* Sending the broadcast to all entries takes higher priority than processing other events at the same time
* It is unsafe to rely on the order in which the entries receive the broadcast
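A one-to-many broadcast can be sketched as below (entry names and handlers are assumptions for the illustration). Every registered entry receives the broadcast before the sending item leaves, but the iteration order over the entries is treated as an implementation detail, which is why the checks afterwards only test membership, never order.

```python
received = []
entries = {
    "entry_a": lambda msg: received.append(("entry_a", msg)),
    "entry_b": lambda msg: received.append(("entry_b", msg)),
    "entry_c": lambda msg: received.append(("entry_c", msg)),
}

def broadcast(msg):
    for handler in entries.values():  # receiver order: do not rely on it
        handler(msg)

def sending_thread():
    broadcast("shift change")         # all entries receive the broadcast first...
    return "item leaves"              # ...then the sender's item moves on

status = sending_thread()
```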
=== Avoiding Concurrency Problems ===
* Minimise creating simultaneous item threads where possible
* Use a single item to process events in a system; concurrency issues and conflicts are more likely with multiple items
* Call common code using messages
* Minimise capacity objects in threads to avoid creating extra events
* Try to perform any graphical iteration without capacity objects in the loop
* Do not rely on the order in which broadcasts get sent, items leave servers, etc. (Splitters are OK)
* Use Immediate Messages to call a thread from within a routine (within a thread)
=== Cases where thread order is defined ===
* An empty Queue / normal Dispatcher which is not blocked moves the item out in the next thread
* A Message/Broadcast Dispatcher behaves the same
* A Splitter which is not blocked
* A zero delay multiserver, but ONLY if the thread is initiated by an FEC event - not a broadcast or splitter
=== Pausing Issues ===
* A model can only be paused between events, never within a thread, broadcast send, etc.
* Earlier versions of Planimate could be paused between any pair of events. This meant that a model could be paused before all events at the current time were processed, leading to incomplete updates.
* The engine now “pauses” only after all events at the current time have been processed
* This can be overridden using the Pauseable Zero Delay multiserver option
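The pausing rule can be sketched as an event loop that drains every event scheduled at the current time before it checks the pause request, so a pause never lands between two same-time events. Everything here (the list-based FEC, `run_until_pause`) is a generic illustration, not engine code.

```python
import heapq
from itertools import count

seq = count()
fec = []  # (time, seq, name)
for t, name in [(1.0, "ev1"), (1.0, "ev2"), (2.0, "ev3")]:
    heapq.heappush(fec, (t, next(seq), name))

processed = []
pause_requested = True   # user hits pause while time 1.0 is being processed

def run_until_pause():
    while fec:
        now = fec[0][0]
        # Drain ALL events at the current time before considering a pause,
        # so updates for this time step are complete.
        while fec and fec[0][0] == now:
            processed.append(heapq.heappop(fec)[2])
        if pause_requested:
            return now   # the pause lands cleanly between time steps

paused_at = run_until_pause()
```

Both events at time 1.0 run before the pause takes effect; the event at time 2.0 stays on the calendar for when the model resumes.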
=== Future Directions ===
* User Thread - Persistent Items (now present as a Restart Dispatcher mode)
* Handling graphical looping better (has been improved to support more complex loops)
* Flexible class changes within a thread (implemented for modules)
* Threads spanning multiple instances/machines (network broadcasts)

<font size="2">idkbase note 206</font>

[[Category:Runtime Engine]]
Latest revision as of 01:46, 13 January 2008