Fusion11



As I’m heading home after three exciting days at AMD’s Fusion Developer Summit 2011, I’d like to share with you the findings, thoughts, and ideas I got out of this event. It had five fascinating tracks, each with around 10 sessions over the four days. The Programming Models track was the most interesting and exciting, at least to me. It is tightly coupled with the new AMD Fusion System Architecture (FSA), and it brought with it a lot of new concepts. I can also see a lot of interesting challenges ahead.

Let me take you through a series of posts sharing the excitement of these new innovations from AMD. I’ll start with a quick background on why APUs are a good answer to many computation problems, and then I’ll talk about their programming model.

So, the Fusion architecture is a reality now. It starts the era of heterogeneous computing for the common end user by combining heavy-lifting x86 cores with many simpler, fast GPU cores on the same chip. You have probably come across articles or research papers advertising the significant performance improvements that GPUs offer compared to CPUs. Such speedups are often the result of comparing against poorly optimized CPU code, or of algorithms that are inherently massively parallel.

The APU architecture offers a balance between these two worlds. GPU cores are optimized for arithmetic-heavy workloads and latency hiding, while CPU cores handle the branchy code for which branch prediction and out-of-order execution are so valuable. The two are built with different design goals in mind:

  • CPU design is based on maximizing the performance of a single thread. The transistor budget (or chip area) goes into branch prediction, out-of-order execution, extensive caching, and deep pipelines.
  • GPU design aims to maximize throughput at the cost of lower per-thread performance. The area is spent on more cores of simpler design, which omit branch prediction, out-of-order execution, and large caches.

Hence, these architectures hide memory latency in different ways.

In the CPU world, memory stalls are costly and hard to cover: with several levels of cache hierarchy, a cache miss takes many cycles to serve. That is why large caches are necessary to reduce memory stalls. Out-of-order execution also keeps the pipeline busy with useful computation while cache misses for other instructions are being served.
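
As a rough illustration of how much CPUs lean on their caches, here is a minimal C sketch (my own example, not taken from any AFDS session) that sums the same matrix twice: once walking memory sequentially, once striding across it. The second traversal misses the cache far more often and typically runs several times slower on a Phenom-class CPU.

```c
/* locality.c - hypothetical illustration of CPU cache behaviour.
 * Build: cc -O2 locality.c -o locality */
#include <stdio.h>
#include <time.h>

#define N 4096
static float m[N][N];               /* ~64 MB, far larger than any on-chip cache */

int main(void)
{
    double row_sum = 0.0, col_sum = 0.0;

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = 1.0f;

    clock_t t0 = clock();
    /* Row-major walk: consecutive elements share cache lines (good spatial locality). */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            row_sum += m[i][j];

    clock_t t1 = clock();
    /* Column-major walk: nearly every access lands on a different cache line (frequent misses). */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            col_sum += m[i][j];

    clock_t t2 = clock();
    printf("row-major: %.2f s, column-major: %.2f s (sums: %.0f, %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, row_sum, col_sum);
    return 0;
}
```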

GPUs, however, use different techniques to hide memory latency. They issue a single instruction over multiple cycles; for example, a large vector is executed on a narrower vector unit, which reduces instruction decode overhead and improves throughput. Executing many threads concurrently and interleaving their instructions fills the gaps in the instruction stream, so GPUs depend on the aggregate performance of all executing threads rather than on reducing the latency of any single thread. GPU caches, in turn, are designed to exploit spatial locality of execution rather than temporal locality; that is why they are very efficient at fetching large vectors through the many banks they provide for SIMD-style data access.
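
To make the “many threads hide latency” idea concrete, here is a minimal OpenCL C kernel sketch (a hypothetical example of mine, not code shown at the summit). Each work-item adds one element; the GPU keeps many 64-wide wavefronts of such work-items in flight, so while one wavefront waits for its loads from memory, the SIMD engine executes instructions from another.

```c
// vec_add.cl - hypothetical OpenCL C kernel (device side only).
// The host enqueues it over a global range of n work-items, typically
// thousands of them, which is what gives the GPU latency to hide behind.
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float       *c,
                      const unsigned int    n)
{
    size_t i = get_global_id(0);   // unique index of this work-item
    if (i < n)                     // guard in case the global range was rounded up
        c[i] = a[i] + b[i];        // two loads whose latency is covered by other wavefronts
}
```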

So choosing either of these two worlds comes with a cost. For example, a CPU’s large caches (there to maximize the number of cache hits) and its support for out-of-order execution consume much of the transistor budget available on the chip. GPUs, on the other hand, cannot handle branchy code efficiently (see the sketch after the table below); they are most effective on massively parallel algorithms that can be expressed as vectors and many independent threads. So each one suits a specific type of algorithm or problem domain. For a concrete case study, have a look at the table below comparing representatives of the CPU and GPU sides.

AMD Phenom II (x86 CPU):
  • 6 cores, each 4-way SIMD (ALUs)
  • A single set of registers per core
  • Deep pipeline supporting out-of-order execution

AMD Radeon HD6070 (GPU):
  • 24 simple cores, each 16-way SIMD
  • 64-wide SIMD state (thread count per compute unit)
  • Multiple register sets shared
  • 8 or 16 SIMD engines per core
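
Coming back to the branchy-code point above, here is a small hypothetical OpenCL C kernel of mine that shows why GPUs dislike branches: when work-items in the same 64-wide wavefront take different sides of an “if”, the hardware has no branch predictor to lean on and simply executes both paths, masking off the inactive lanes each time.

```c
// divergence.cl - hypothetical sketch of divergent branching on a GPU.
__kernel void branchy(__global const float *in, __global float *out)
{
    size_t i = get_global_id(0);
    // If some lanes of a wavefront see a positive value and others do not,
    // both the 'then' and 'else' paths run serially for the whole wavefront.
    if (in[i] > 0.0f)
        out[i] = in[i] * 2.0f;
    else
        out[i] = -in[i];
}
```

On a CPU the same branch would usually be predicted and cost almost nothing, which is exactly the trade-off the table above summarizes.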

And this is when the Eureka! moment came to AMD’s engineers and researchers: reconsider the design of microprocessors and build the Accelerated Processing Units (APUs). Combining both architectures on a single chip can solve many problems efficiently, especially multimedia- and gaming-related ones. The E-350 APU, for example, combines two “Bobcat” x86 cores and two “Cedar”-like GPU cores with 8-wide SIMD engines, all on the same chip!

So let me take you through an example in my next post to quickly show the current and future programming models on these APUs. I’ll also be writing about the run-time models, the software ecosystem of APUs, and the roadmap of the AMD Fusion System Architecture (FSA).


Posting the slides as they come out!
Speaker: Eric Demers, AMD corporate VP and CTO, Graphics Division

[31 slide photos]


Here are the slides I could capture in this session:

[12 slide photos]


[15 slide photos]


Here are the slides:

[32 slide photos]