TransactionalCoherenceAndConsistency-TCC

TCC is being pushed by the Stanford people who did Hydra and Jrpm.

It is strongly related to the BulkSynchronousParallelism (BSP) stuff which was being done by Oxford and others back in the 90s. It is also strongly related to Optimistic Concurrency Control in databases. It also relates closely to Thread Level Speculation (TLS).

The basic idea is fairly simple. It involves a computational model where the program is expressed as a collection of potentially parallel 'transactions'. Each transaction is effectively a thread which (obviously) relies on input data and produces output data. However, the output data is buffered locally (in a cache) and only committed to lower levels of memory when the transaction has completed. So far, virtually identical to BSP. The novelty of the Stanford approach is to couple this with speculation and underlying hardware mechanisms to make it work. This contrasts with BSP where there is the notion of 'supersteps' which are effectively barrier synchronisation points (IIRC).
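
As a rough software analogue (not the hardware mechanism the papers describe), the buffer-then-commit behaviour can be pictured as below. The struct and function names are invented for illustration only:

    #include <stdio.h>

    #define MEM_SIZE   16      /* toy shared memory */
    #define MAX_WRITES 16

    /* Hypothetical transaction state: writes go to a local buffer,
     * not to shared memory, until commit time. */
    typedef struct {
        int addr[MAX_WRITES];
        int val[MAX_WRITES];
        int n_writes;
    } txn_t;

    static int shared_mem[MEM_SIZE];   /* stands in for lower-level memory */

    static void txn_write(txn_t *t, int addr, int val)
    {
        /* buffer the store locally (cache-resident in real TCC) */
        t->addr[t->n_writes] = addr;
        t->val[t->n_writes]  = val;
        t->n_writes++;
    }

    static void txn_commit(txn_t *t)
    {
        /* flush the whole write buffer as one atomic packet */
        for (int i = 0; i < t->n_writes; i++)
            shared_mem[t->addr[i]] = t->val[i];
        t->n_writes = 0;
    }

    int main(void)
    {
        txn_t t = { .n_writes = 0 };
        txn_write(&t, 3, 42);                            /* not yet visible to others */
        printf("before commit: %d\n", shared_mem[3]);    /* prints 0 */
        txn_commit(&t);                                  /* update is published */
        printf("after commit:  %d\n", shared_mem[3]);    /* prints 42 */
        return 0;
    }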

It is assumed that the programmer will write parallel programs in this transaction form. Various constructs are provided. One important facility is to be able to specify the ordering of transaction committal. This may either be unordered, partially ordered or fully ordered.

When a transaction commits, it is assumed to send all its updates in a single go (packet) so that the update process is effectively atomic. Other transactions can 'snoop' these updates. If they see changes to the data which they are using as input, they abandon their execution (and restart, I think, although the exact strategy for this isn't totally clear to me).
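
In the same toy model, the 'snoop' step amounts to comparing an incoming commit packet against the addresses the running transaction has read. This is only a sketch of the restart rule, not of the real cache hardware, and the names are made up:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_SET 16

    /* addresses this (still running) transaction has read so far */
    typedef struct {
        int addr[MAX_SET];
        int n;
    } read_set_t;

    /* the commit packet broadcast by some other transaction */
    typedef struct {
        int addr[MAX_SET];
        int n;
    } commit_packet_t;

    /* Returns true if the packet touches anything in our read set,
     * in which case the speculative work is discarded and restarted. */
    static bool violated(const read_set_t *rs, const commit_packet_t *pkt)
    {
        for (int i = 0; i < rs->n; i++)
            for (int j = 0; j < pkt->n; j++)
                if (rs->addr[i] == pkt->addr[j])
                    return true;
        return false;
    }

    int main(void)
    {
        read_set_t rs       = { .addr = {3, 7}, .n = 2 };
        commit_packet_t pkt = { .addr = {7},    .n = 1 };

        if (violated(&rs, &pkt))
            printf("conflict on a read address: abort and restart\n");
        else
            printf("no overlap: keep running speculatively\n");
        return 0;
    }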

Before a transaction commits, it must ask for global permission to do so. I haven't yet read a detailed description of this process, but the rules are probably obvious. If the transactions have been specified as ordered, then they must wait until all previous transactions have committed. More generally, it is probably necessary to treat the commit request/update as an indivisible action so a transaction can't commit while another request is outstanding. This latter requirement is probably all that is required for unordered transactions.
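
For the ordered case, one plausible reading (my guess at the rule, not a description from the papers) is that each transaction carries a sequence number and may only broadcast its commit when all earlier numbers have committed. A minimal single-token sketch using C11 atomics:

    #include <stdatomic.h>
    #include <stdio.h>

    /* next sequence number allowed to commit; acts as the global
     * arbitration point, so commits are also mutually exclusive */
    static atomic_int commit_turn = 0;

    /* Hypothetical commit protocol for an ordered transaction:
     * wait for our turn, publish updates, then pass the turn on. */
    static void ordered_commit(int my_seq)
    {
        while (atomic_load(&commit_turn) != my_seq)
            ;   /* wait for all earlier transactions to commit */

        printf("transaction %d commits its write buffer\n", my_seq);

        atomic_fetch_add(&commit_turn, 1);   /* hand the token to my_seq + 1 */
    }

    int main(void)
    {
        /* serial demo: with real threads each would call ordered_commit()
         * with its own sequence number and block until its turn came */
        for (int seq = 0; seq < 3; seq++)
            ordered_commit(seq);
        return 0;
    }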

Examples of programs which can be written in this style are:

  • Simple loop parallelisation - each loop body can be specified as an ordered transaction, the order being obvious. The papers suggest t_for and t_while constructs in the language. These are, of course, transparent in a serial environment.
  • Parallel histogram generation (the classic speculative parallelism example). Here the transactions can be totally unordered but a transaction must restart if it sees an update to the bucket that it was using (see the sketch after this list).
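
By way of illustration, here is the histogram case written with an ordinary optimistic retry loop in software. This is only an analogy for the unordered-transaction version (real TCC would do the conflict detection and restart in hardware, and each iteration would be its own transaction); all names here are invented:

    #include <stdatomic.h>
    #include <stdio.h>

    #define N_BUCKETS 8
    #define N_ITEMS   32

    static atomic_int hist[N_BUCKETS];

    /* One "transaction": read the bucket, compute the new count, and try
     * to publish it. If another transaction updated the same bucket in
     * the meantime (the analogue of seeing a conflicting commit), retry. */
    static void add_to_bucket(int b)
    {
        int old, next;
        do {
            old  = atomic_load(&hist[b]);
            next = old + 1;
        } while (!atomic_compare_exchange_weak(&hist[b], &old, next));
    }

    int main(void)
    {
        int data[N_ITEMS];
        for (int i = 0; i < N_ITEMS; i++)
            data[i] = (i * 5) % N_BUCKETS;   /* toy input */

        /* in the TCC version each iteration would be an unordered transaction */
        for (int i = 0; i < N_ITEMS; i++)
            add_to_bucket(data[i]);

        for (int b = 0; b < N_BUCKETS; b++)
            printf("bucket %d: %d\n", b, atomic_load(&hist[b]));
        return 0;
    }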

The papers refer mainly to explicit specification of transactions although clearly examples like loop parallelisation could be done automatically. They talk about tuning the performance by analysing access clashes (using gdb!) and then restructuring code. Future work includes looking at the possibility of dynamic optimisation.

It is suggested that this model can be a total replacement for line-level cache coherence schemes. Traditional locking can (if necessary) be expressed as a collection of speculative transactions trying to write to a lock variable, only one of which will succeed.
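
A conventional software analogue of that picture is a compare-and-swap lock: every contender reads the lock as free and tries to write it as taken, and only one of the competing writes wins, which is roughly what the racing speculative transactions would resolve to. The names below are mine, not anything from the papers:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_int lock_word = 0;   /* 0 = free, 1 = held */

    /* Each caller behaves like a tiny transaction that reads lock_word,
     * sees 0, and speculatively writes 1; the compare-and-swap ensures
     * only one of the competing writes commits. */
    static bool try_acquire(void)
    {
        int expected = 0;
        return atomic_compare_exchange_strong(&lock_word, &expected, 1);
    }

    static void release(void)
    {
        atomic_store(&lock_word, 0);
    }

    int main(void)
    {
        printf("first attempt:  %s\n", try_acquire() ? "got lock" : "lost race");
        printf("second attempt: %s\n", try_acquire() ? "got lock" : "lost race");
        release();
        return 0;
    }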

There is a lot of devil in the detail of any possible implementation, e.g. bandwidth bursts on commit and the complexity of cache updates/invalidates. However, it is certainly worth thinking about.

A good starting point for references is http://tcc.stanford.edu
