Title: Wait-free cache-affinity thread scheduling
Authors: Debattista, Kurt
Cordina, Joseph
Vella, Kevin
Keywords: Cache memory
Simultaneous multithreading processors
Electronic data processing -- Batch processing
Issue Date: 2003-04
Publisher: Institution of Electrical Engineers
Citation: Cordina, J., Debattista, K., & Vella, K. J. (2003). Wait-free cache-affinity thread scheduling. IEE Proceedings - Software, 150(2), 137-146.
Abstract: Cache utilisation is often very poor in multithreaded applications, due to the loss of data access locality incurred by frequent context switching. This problem is compounded on shared memory multiprocessors when dynamic load balancing is introduced, as thread migration disrupts cache content. Batching, a technique for reducing the negative impact of fine-grain multithreading on cache performance, is introduced to mitigate this problem. Moreover, the related issue of contention for shared data within a thread scheduler for shared memory multiprocessors is considered. In this regard, wait-free techniques are applied in lieu of conventional lock-based methods for accessing internal scheduler structures, alleviating serialisation to some extent and hence reducing contention. Prototype schedulers which make use of the above ideas are described, and finally experimental results which illustrate the observed improvements are presented.
Appears in Collections: Scholarly Works - FacICTCS

Files in This Item:
File: Wait-free cache-affinity thread scheduling.pdf (Restricted Access)
Description: Wait-free cache-affinity thread scheduling
Size: 1.19 MB
Format: Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.