Abstract:
System-level benchmarks enable performance evaluation early in the design cycle of multi-core architectures. However, benchmark development is costly. Synthetic benchmarks exhibit performance behavior similar to that of the original applications from which they are generated. Additionally, they can run faster, and they can act as proxies for proprietary customer codes that are not available in source form. In this thesis we develop a framework to generate system-level synthetic benchmarks from SystemC/TLM or Pthreads applications. These benchmarks are intended for different use cases: the former target virtual platforms in co-simulation environments, while the latter target simulation platforms lacking either library support or the necessary computing capabilities. Our framework was implemented by extending the synthetic benchmark generation framework developed by Deniz et al. [1] with a SystemC front-end and back-end. In our experiments we observe that our system-level synthetic benchmarks are not only much smaller than the real benchmarks they are generated from but also much faster. For example, when we generate synthetic benchmarks from the well-known multi-core benchmark suite PARSEC, our benchmarks achieve an average speedup of 141x over the PARSEC benchmarks. Benchmarks generated from SystemC/TLM applications also achieve an average speedup of 4x, even for designs with the shortest execution times. We observe that our synthetic benchmarks maintain performance characteristics similar to those of the original benchmarks and are portable across three different multi-core architectures. Specifically, benchmarks generated from PARSEC show more than 81% similarity and benchmarks generated from SystemC show more than 88% similarity on all three architectures.