Track Movement Sequencing Logic


A dynamic network model strives for realism. It must strike a balance: enough traffic management rules to reduce the probability of deadlocks, but not so many that the traffic-carrying capability of the system becomes unrealistically low.

The Planimate® Platform contains logic for controlling train item movement around a track network. The underlying philosophy is to aim for simplicity and to provide modellers with immediate traffic management over a network without them having to first design any movement rules of their own.

By design, the logic does not optimise localised train movement scheduling, nor does it perform overall system-wide trade-offs. Train Item movements are resolved on a first-come, first-served basis, and the Platform manages what it is presented with locally.

In the case of simultaneous competition for a section, the train item with the highest ‘priority’ wins out, depending upon the circumstance. In conditions of “moderate congestion”, the movement logic may "relax" priority-based assignments in the interests of moving trains through the system to ease the congestion and release capacity.
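
As a rough illustration of this behaviour (and not the Planimate® engine's actual code), the C++ sketch below shows priority-based arbitration with a congestion-relaxation fallback. The TrainRequest structure and pickWinner function are hypothetical names introduced only for this example.

    // Illustrative sketch only, not Planimate code: priority-based arbitration
    // with a "relaxation" rule under moderate congestion. TrainRequest and
    // pickWinner are hypothetical names used for this example.
    #include <algorithm>
    #include <vector>

    struct TrainRequest {
        int trainId;
        int priority;        // higher value wins under normal conditions
        bool readyToMove;    // the train has somewhere it could move to right now
    };

    // Returns the id of the train allowed to take the contested section,
    // or -1 if no train is competing for it.
    int pickWinner(const std::vector<TrainRequest>& competing, bool moderateCongestion)
    {
        if (competing.empty()) return -1;

        // Under moderate congestion, prefer any train that can actually move,
        // easing congestion and releasing capacity even if it has lower priority.
        if (moderateCongestion) {
            for (const auto& r : competing)
                if (r.readyToMove) return r.trainId;
        }

        // Otherwise the highest-priority train wins the section.
        auto best = std::max_element(competing.begin(), competing.end(),
            [](const TrainRequest& a, const TrainRequest& b) {
                return a.priority < b.priority;
            });
        return best->trainId;
    }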

However, if you move trains into a track network, assigning them to sections, roads and loops using simple rules that merely avoid ‘collisions’, you will end up with a deadlock in short order. The issue of deadlock limits how simple you can make the rule set with which train item movements over a network are managed.

The rules that guide train item movement decisions have to respond to the wider “network context” within which each train item’s movement is contemplated. Each decision requires knowledge of where other relevant train items are in the network and where they are heading. Some movements will increase the chance of deadlock developing more than others.

Making movement decisions by taking full account of the position and routing of every train on even a simple network consumes huge memory and processing resources. The Planimate® Platform avoids this overhead by adopting a general rule set designed to guide decision-making about moving each train item individually, given the “network context” within which it finds itself.

In order to reduce the chance of a deadlock occurring, the Planimate® Platform uses rules developed to favour train item movements that tend not to constrain the movements of other train items. Of the many possible ‘next movements’ in a network, given the specifics of network occupancy at that time, some will not comply with these general rules and will be disallowed.

During a simulation run, the rules in this logic pursue a balance between these basic goals:

1.  To facilitate reasonable sequencing of train item movements over the network,
And at the same time…
2.  To avoid train item movements that increase the chance of a “deadlock” occurring.


Train Movement Principles

Trains in Planimate are Items which use Tracks to move from node to node (Portals). Trains start at a point of capacity in a track node and only move when they have a guaranteed location they can move to.

Before a train can enter a track section, a "far lookahead" is performed to the node on the other side.

That node can respond in three ways: admit, block or defer. Defer means the decision is passed to the next node on the train's route.

The actual decisions are implemented in the model code, and can be as specific as the modeller requires.

They can involve checking local tables, consulting a global train movement co-ordinator, or even calling a C++ API.

Typical considerations in the decision are:

  • whether other trains are approaching the destination node (particularly from the other direction),
  • the number of passing loops (if any) at the node,
  • bookings for pre-scheduled traffic,
  • planned outages or restrictions.

In models configured for train length, the entire train from head to tail is considered.
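
A hedged sketch of this style of far-lookahead decision, pulling together the considerations listed above, follows. It is not Planimate® code; the LookaheadResponse enum, the NodeState fields and the farLookahead function are hypothetical, and the thresholds stand in for whatever rules the modeller implements in model code (tables, a movement co-ordinator, or a C++ API).

    // Hypothetical sketch of a far-lookahead decision at a destination node.
    // LookaheadResponse, NodeState and farLookahead are invented for this
    // example; real models implement the checks in Planimate model code.
    enum class LookaheadResponse { Admit, Block, Defer };

    struct NodeState {
        int  passingLoops;          // loops available at the node (if any)
        int  occupiedLoops;
        int  approachingOpposing;   // trains heading here from the other direction
        bool outageOrRestriction;   // planned outage or restriction in effect
        bool bookedForScheduled;    // capacity reserved for pre-scheduled traffic
    };

    LookaheadResponse farLookahead(const NodeState& node)
    {
        // Bookings and outages rule the movement out immediately.
        if (node.outageOrRestriction || node.bookedForScheduled)
            return LookaheadResponse::Block;

        int freeLoops = node.passingLoops - node.occupiedLoops;

        // Enough room for this train plus any opposing traffic: admit it.
        if (freeLoops > node.approachingOpposing)
            return LookaheadResponse::Admit;

        // Some room, but the picture is unclear: defer the decision to the
        // next node on the train's route.
        if (freeLoops > 0)
            return LookaheadResponse::Defer;

        return LookaheadResponse::Block;
    }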

Deadlock

Deadlock occurs in dynamic rail network models when two or more train items become blocked and cannot proceed past each other at some location in the rail network. Other train items then start to queue behind these blocked trains until all train items in the system become blocked. Cycles and routes do not conclude and the model run cannot be completed.
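
One way to picture this is as a chain of “waiting on” relationships that loops back on itself. The sketch below is only an illustration of that idea, assuming a hypothetical waitingOn map; the Planimate® Platform does not expose blocked trains as a structure like this.

    // Illustration only: deadlock viewed as a chain of "waiting on" relations
    // that loops back on itself. The waitingOn map is hypothetical.
    #include <map>
    #include <set>

    // waitingOn[t] holds the train that t is blocked behind, if any.
    bool isPartOfDeadlock(const std::map<int, int>& waitingOn, int train)
    {
        std::set<int> visited;
        int current = train;
        while (waitingOn.count(current)) {
            if (!visited.insert(current).second)
                return true;              // the chain revisits a train: it loops, nobody can move
            current = waitingOn.at(current);
        }
        return false;                     // the chain ends at a train that can still move
    }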

When your model experiences a deadlock it means that the rules for managing train item movements are insufficient to prevent those movements that bring about that particular deadlock situation. The deficiency could reside in the Planimate® Platform itself, or in the modeller’s own additions, extensions or modifications of the rules used by the Platform. In either case, the specific deadlock conditions can be analysed and the modeller can develop rules to prevent the situation occurring.

As you develop your rail model, growth in network detail, dynamic capacity assignment, more sophisticated ‘look-ahead logic’ and higher traffic levels produce a growing number of network states that can result in traffic deadlocks. The enormous number of possible combinations of traffic movement in a network makes deadlock conditions extremely difficult to predict. They can occur at any level of traffic density. Unfortunately, this means that deadlocks need to be addressed as they appear.

Heavy or extreme congestion can produce deadlocks.

This is taken as a sign that this part of the track network has been forced beyond its actual capacity, and the traffic needs to be reduced or smoothed out, or the area in question requires some investment to raise its capacity.

It is also possible for a deadlock to occur in relatively light traffic circumstances, if an unfortunate arrangement of movements occurs.

Traffic deadlocks are an inevitable part of using dynamic network models with conflict resolution rules. The reality is that deadlock issues will accompany your rail model development and its use.

Whilst no guarantee can be offered that deadlocks will never occur, it is realistic to expect that the frequency of deadlocks will be reduced as more attention is paid to refining the traffic management rules implemented in the rail models themselves, as well as in the Planimate® Platform.

Reducing the Risk of Deadlock

Much effort has been expended over the years (by modellers and by software engineering within the Planimate® Platform itself) to identify causes of deadlocks and to develop solutions that prevent them, or lower the risk of traffic deadlock situations occurring.

Deadlocks reported to InterDynamics are examined to determine whether there is evidence of a general case for which rules may be developed or refined and applied to the Planimate® Platform’s rule set. A general-case rule addition or refinement will be tested, after which an updated platform executable is made available. This lowers the risk of that deadlock situation, or others similar to it, reappearing.

It is important to note that for rail models with a significant history, a rule refinement to address reported deadlocks may alter the results of prior scenarios.

Upon receipt of new executables with changed movement rules, it is important to identify and report differences ahead of any further use of your rail model.

Managing Deadlocks or Undesired Behaviour

Because the Planimate® movement sequence management logic aims to remain simple, while achieving the goals and balance mentioned above, no guarantee can be given that deadlocks will never occur.

However, various options exist to support the modeller in modifying the control logic in local areas, so that this balance may be flexibly adjusted. First of all, there is debugging support that assists you in locating a deadlock’s source.

Track Unblocking Debugging

During runtime, Portal, loop and section menus contain an "unblock" option.

This will report why a train is blocked, starting at the first object to report the blockage and working back towards the start of the unblock.

The Planimate® Platform keeps track of blocked trains, and this information is accessible during runtime from the ‘Blocked Trains’ option in the Tracks Menu of the menu bar.

Interleaving Your Modelling Code into Planimate® Track Movement Code

If you have tried some or all of the above options to address a deadlock and it continues to recur, the issue will probably require intervention on your part, in addition to setting or adjusting these local options.

Likewise, you may desire to take control of the way train movements are treated around a specific intersection.


For either of these requirements, you may need to manage an area of the network using token logic and other devices that count train items into and out of regions, and/or make more extensive use of train items’ system-maintained attributes to detect, identify and decide which train item is to be allowed to move next, into or out of a region of the network.
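
A minimal sketch of the “count train items into and out of a region” idea follows, assuming the modeller keeps a token count per region. The RegionGate class and its capacity value are hypothetical, not Planimate® objects; in a model the equivalent bookkeeping would typically live in tables or attributes.

    // Minimal sketch of region token counting; RegionGate and its capacity are
    // hypothetical, not Planimate objects. The modeller's entry logic would
    // only admit a train while the region has spare capacity.
    class RegionGate {
    public:
        explicit RegionGate(int capacity) : capacity_(capacity), inside_(0) {}

        // Called when a train asks to enter the region.
        bool tryEnter()
        {
            if (inside_ >= capacity_)
                return false;        // region full: block or defer the movement
            ++inside_;
            return true;
        }

        // Called when a train leaves the region, releasing its "token".
        void leave() { if (inside_ > 0) --inside_; }

        int occupancy() const { return inside_; }

    private:
        int capacity_;
        int inside_;
    };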

When Planimate wants to try to move a train to another object, it performs two tests:

TestEnter: Checks for capacity at an adjacent object.
CheckNext: Checks further down the track for impending congestion.


You can take advantage of System Attributes associated with the Planimate® Run Engine to inject your own rule-checking into the train item movement decision-making that goes on during TestEnter and CheckNext.
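
The sketch below illustrates the two-stage test described above, with hypothetical std::function hooks standing in for the modeller's own rule checks (for example, rules driven by system attributes). It is only a conceptual picture, not the Run Engine's actual implementation.

    // Sketch of the two-stage test, with hypothetical callback hooks standing
    // in for the modeller's own rule checks. Not the Run Engine's actual code.
    #include <functional>

    struct MoveTests {
        std::function<bool()> testEnter;   // is there capacity at the adjacent object?
        std::function<bool()> checkNext;   // is congestion building further down the track?
    };

    // A train is only committed to the move when both tests pass.
    bool canMove(const MoveTests& tests)
    {
        return tests.testEnter() && tests.checkNext();
    }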

The Track Lookahead Separator

A Switch object has a mode called "Track Lookahead Separator".

In this mode a switch will forward track "checknext" lookahead tests out the first outgoing path.

Any other tests, including normal item lookaheads and movements, are directed out the second outgoing path.

The "Assume No Blocking" option works on the second path.

Track CheckNext tests (out the first path) do not depend on the setting of this option and are always performed.

Things are easier to work with if the "assume no blocking" option is on.

This mode is useful in track models where the modeller is adding logic to Planimate's track "far" lookahead rules.

It makes it easier to separate the "far lookahead" flow from the actual item movement flow.

It is important to realise that the first outgoing path is only "tested" by the lookahead mechanism; an item will never actually move along it!

Also note that Switches do not cache decisions made during track lookahead. This makes the EnableTrackCheckNext routine operation work properly for switches in the track lookahead path (following the separator).


The suggested method:

  • Separator switches after portal entries connected to tracks should have "assume no blocking" off.
  • Separator switches after modeller capacity/wormhole entries etc. (where the modeller is triggering the track CheckNext lookahead) should have the "assume no blocking" option on. Otherwise they are more complicated to work with (you have to look ahead twice, the second time without track CheckNext).

If you use a Separator switch object, you can direct a “lookahead thread” to a path along which you evaluate system and network conditions.

Based on this evaluation of your own criteria and conditions, you direct the path to either a Portal Exit, a dead end (or blocked switch), or an Exit Object.

This then signals to the lookahead “thread” what to do next:

  • Portal Exit: Continue the lookahead, checking the next track section further on, and repeat the test at the next object along the train's route.
  • Dead end: Stop the lookahead, report it as unsuccessful and “block” the item.
  • Exit Object: Stop the lookahead, report it as successful and allow the item to move.
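
As a conceptual illustration of these three outcomes (and not actual Planimate® logic), the sketch below maps some hypothetical conditions onto the list above. The evaluateConditions function and its parameters are invented for the example; in a real model the “decision” is expressed by routing the lookahead thread to one of the three objects.

    // Conceptual mapping of the three outcomes above; evaluateConditions and
    // its parameters are invented for this example. In a model, the "return
    // value" is expressed by routing the lookahead thread to a Portal Exit,
    // a dead end (or blocked switch), or an Exit Object.
    enum class LookaheadOutcome {
        PortalExit,   // continue the lookahead at the next object on the route
        DeadEnd,      // stop, report unsuccessful, block the item
        ExitObject    // stop, report successful, allow the item to move
    };

    LookaheadOutcome evaluateConditions(bool conditionsAcceptable, bool needToCheckFurther)
    {
        if (!conditionsAcceptable)
            return LookaheadOutcome::DeadEnd;
        if (needToCheckFurther)
            return LookaheadOutcome::PortalExit;
        return LookaheadOutcome::ExitObject;
    }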


Train Unblock Order Engine Option

This Engine Setting enables you to unblock trains in FIFO order instead of the default LIFO order. This may change the behaviour of your train movements during a run.
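
The difference between the two orders can be pictured with a simple double-ended queue of blocked-train identifiers, as in the sketch below; this is only an illustration of the ordering concept, not the engine's internal data structure.

    // Illustration of LIFO (default) versus FIFO unblock order, using a plain
    // deque of blocked-train ids. Only a sketch of the ordering concept.
    #include <deque>

    int nextTrainToUnblock(std::deque<int>& blocked, bool fifoOrder)
    {
        if (blocked.empty()) return -1;

        int train;
        if (fifoOrder) {
            train = blocked.front();   // FIFO: the longest-blocked train goes first
            blocked.pop_front();
        } else {
            train = blocked.back();    // LIFO: the most recently blocked train goes first
            blocked.pop_back();
        }
        return train;
    }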