Hierarchical Bucket Queuing for Fine‐Grained Priority Scheduling on the GPU

dc.contributor.author: Kerbl, Bernhard
dc.contributor.author: Kenzel, Michael
dc.contributor.author: Schmalstieg, Dieter
dc.contributor.author: Seidel, Hans-Peter
dc.contributor.author: Steinberger, Markus
dc.contributor.editor: Chen, Min and Zhang, Hao (Richard)
dc.date.accessioned: 2018-01-10T07:42:52Z
dc.date.available: 2018-01-10T07:42:52Z
dc.date.issued: 2017
dc.description.abstract: While the modern graphics processing unit (GPU) offers massive parallel compute power, the ability to influence the scheduling of these immense resources is severely limited. The GPU is therefore widely considered suitable only as an externally controlled co-processor for homogeneous workloads, which greatly restricts the potential applications of GPU computing. To address this issue, we present a new method to achieve fine-grained priority scheduling on the GPU: hierarchical bucket queuing. By carefully distributing the workload among multiple queues and efficiently deciding which queue to draw work from next, we enable a variety of scheduling strategies, including fair scheduling, earliest-deadline-first scheduling, and user-defined dynamic priority scheduling. In a comparison with a sorting-based approach, we reveal the advantages of hierarchical bucket queuing over previous work. Finally, we demonstrate the benefits of priority scheduling in real-world applications by example of path tracing and foveated micropolygon rendering.
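The abstract's core idea of distributing work among per-priority queues and then deciding which queue to draw from next can be sketched in miniature. This is a CPU-side illustration only, assuming a flat single-level set of buckets; the class name `BucketQueue` is hypothetical, and the paper's actual method is a hierarchical, massively parallel GPU implementation:

```python
from collections import deque

class BucketQueue:
    """Keep one FIFO queue ("bucket") per priority level and always
    draw from the highest-priority non-empty bucket.  Illustrative
    sketch only, not the paper's hierarchical GPU data structure."""

    def __init__(self, num_levels):
        self.buckets = [deque() for _ in range(num_levels)]

    def enqueue(self, item, priority):
        # Lower index means higher priority.
        self.buckets[priority].append(item)

    def dequeue(self):
        # Scan buckets from highest to lowest priority.
        for bucket in self.buckets:
            if bucket:
                return bucket.popleft()
        return None  # all buckets empty

q = BucketQueue(4)
q.enqueue("shade pixel", priority=2)
q.enqueue("trace ray", priority=0)
print(q.dequeue())  # -> "trace ray" (its bucket has higher priority)
```

Because items never need global sorting, enqueue and dequeue are constant-time in the number of items, which is the property that a sorting-based priority scheduler lacks.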
dc.description.number: 8
dc.description.sectionheaders: Articles
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 36
dc.identifier.doi: 10.1111/cgf.13075
dc.identifier.issn: 1467-8659
dc.identifier.pages: 232-246
dc.identifier.uri: https://doi.org/10.1111/cgf.13075
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13075
dc.publisher: © 2017 The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: GPU, queuing, priority scheduling
dc.subject: parallel computing
dc.subject: I.3.1 [Computer Graphics]: Hardware Architecture—Parallel processing
dc.title: Hierarchical Bucket Queuing for Fine-Grained Priority Scheduling on the GPU