Hiding I/O Latency with Parallel Pre-Execution Prefetching

Yue Zhao and Kenji Yoshigoe

Keywords

Parallel application, I/O latency, I/O prefetching, Pre-execution

Abstract

Parallel applications suffer increasingly from I/O latency, as computing power grows faster than memory and storage access performance. I/O prefetching is an effective way to hide this latency, yet existing I/O prefetching techniques are conservative and their effectiveness is limited. A pre-execution prefetching approach, in which a thread dedicated to read operations runs ahead of the main thread to hide I/O latency, was recently proposed to address this “I/O wall” problem. We first identify a limitation of the existing pre-execution prefetching approach caused by read-after-write (RAW) dependencies, and then propose a method that overcomes this limitation by assigning a thread to each dependent read operation. Preliminary experiments, including one using Hill encryption as a real-life application, verify the benefits of the proposed approach.
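The core pre-execution idea described above can be illustrated with a minimal sketch: a dedicated thread issues the application's read operations ahead of the main thread, so that blocks are already in memory when the main thread consumes them. This is an assumption-laden toy (all names, block sizes, and the bounded queue acting as the prefetch cache are hypothetical), and it does not model the RAW-dependency extension that the paper proposes:

```python
import os
import queue
import tempfile
import threading

BLOCK = 4096  # hypothetical block size


def make_test_file(n_blocks):
    # Create a scratch file to stand in for the application's input.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * n_blocks))
    return path


def prefetch(path, n_blocks, cache):
    # Pre-execution thread: performs only the read operations,
    # running ahead of the main thread and filling the cache.
    with open(path, "rb") as f:
        for i in range(n_blocks):
            cache.put((i, f.read(BLOCK)))  # blocks queued in order


def main():
    n = 8
    path = make_test_file(n)
    # Bounded queue limits how far the prefetch thread runs ahead.
    cache = queue.Queue(maxsize=4)
    t = threading.Thread(target=prefetch, args=(path, n, cache))
    t.start()
    total = 0
    for _ in range(n):
        _, data = cache.get()  # block is (ideally) already in memory
        total += len(data)     # stand-in for the compute phase
    t.join()
    os.remove(path)
    return total


print(main())  # total bytes processed: 8 * 4096 = 32768
```

Because the prefetch thread here only reads, it is safe to run ahead; a read whose target depends on data the main thread has not yet written (a RAW dependency) could not be prefetched this way, which is the limitation the proposed per-dependent-read threading is meant to overcome.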
