Title: Cache-affinity scheduling for fine grain multithreading
Authors: Debattista, Kurt
Vella, Kevin
Cordina, Joseph
Keywords: Simultaneous multithreading processors
Cache memory
Electronic data processing -- Batch processing
Parallel processing (Electronic computers)
Issue Date: 2002
Publisher: WoTUG
Citation: Debattista, K., Vella, K. J., & Cordina, J. (2002). Cache-affinity scheduling for fine grain multithreading. Communicating Process Architectures, Reading, 135-146.
Abstract: Cache utilisation is often very poor in multithreaded applications, due to the loss of data access locality incurred by frequent context switching. This problem is compounded on shared memory multiprocessors when dynamic load balancing is introduced and thread migration disrupts cache content. In this paper, we present a technique which we refer to as ‘batching’ for reducing the negative impact of fine grain multithreading on cache performance. Prototype schedulers running on uniprocessors and shared memory multiprocessors are described, and experimental results that illustrate the improvements observed after applying our techniques are presented.
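The ‘batching’ idea described in the abstract can be illustrated with a toy scheduler sketch. This is a hypothetical illustration, not the paper's actual algorithm or code: ready threads are greedily packed into batches whose combined working-set size fits in the cache, and the scheduler round-robins within one batch before advancing to the next, so consecutively scheduled threads share cache-resident data.

```python
# Toy illustration of batched scheduling (hypothetical; not the paper's code).
# Threads are grouped into batches whose combined working sets fit in cache;
# the scheduler cycles within a batch before advancing to the next batch.

CACHE_SIZE = 256  # cache capacity in bytes, for illustration only


def make_batches(threads, cache_size=CACHE_SIZE):
    """Greedily pack (thread_id, working_set_size) pairs into batches
    so each batch's total working-set size fits within the cache."""
    batches, current, used = [], [], 0
    for tid, ws in threads:
        if current and used + ws > cache_size:
            batches.append(current)
            current, used = [], 0
        current.append(tid)
        used += ws
    if current:
        batches.append(current)
    return batches


def batched_schedule(threads, rounds=2):
    """Return the dispatch order: each batch is round-robined `rounds`
    times before the scheduler moves on, improving data access locality."""
    order = []
    for batch in make_batches(threads):
        for _ in range(rounds):
            order.extend(batch)
    return order


threads = [("t1", 100), ("t2", 100), ("t3", 100), ("t4", 100)]
# t1 and t2 fit one 256-byte batch; t3 and t4 form the next.
print(batched_schedule(threads))
```

Under plain round-robin the dispatch order would interleave all four threads every cycle; batching keeps t1 and t2 co-scheduled until their batch's quantum is spent, so their working sets are revisited while still cache-resident.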
ISSN: 1383-7575
Appears in Collections:Scholarly Works - FacICTCS

Files in This Item:
File: OA Cache-affinity scheduling for fine grain multithreading.pdf
Description: Cache-affinity scheduling for fine grain multithreading
Size: 108.51 kB
Format: Adobe PDF

Items in OAR@UM are protected by copyright, with all rights reserved, unless otherwise indicated.