University of Florida preps Novo-G FPGA cluster

The CHREC team at the University of Florida has announced a new reconfigurable computing cluster.

The Novo-G system is being built using FPGA accelerator cards provided by GiDEL and Altera. The system will have 96 Altera Stratix-III FPGA devices installed in 24 networked servers with 576GB of memory, interconnected via 20Gb/s InfiniBand.

According to Professor Alan George, the purpose of Novo-G is to “advance and prove reconfigurable computing technologies at a level of scale, performance, and productivity unprecedented in this field, for applications from satellites to supercomputers”.

Novo-G is based on PCI Express FPGA cards provided by GiDEL and populated with FPGAs provided by Altera. Support for C programming of these cards has been enabled with an Impulse C Platform Support Package (PSP) developed by Rafael Garcia at the CHREC lab.

More information about this project can be found here.

Filed under News Shmews

Medical imaging gets an FPGA boost

FPGAs are finding increased use in medical electronics. Frost & Sullivan reported in 2007 that FPGAs in medical imaging, including X-ray, CT, PET, MRI, and ultrasound, already represented as much as $138M in revenue for FPGA companies, with CT alone accounting for $10M or more of that amount. Steady growth in these applications was forecast through 2011.

FPGAs assist medical imaging in two areas: detection and image reconstruction. The detection part of medical imaging is an embedded systems application, with real-time performance requirements and significant hardware interface challenges. Image reconstruction, on the other hand, is more like a high-performance computing problem.

Image capture in computed tomography involves synchronizing large numbers of detectors arranged in a ring around the patient, inside the large doughnut structure that we associate with CT, MRI, and PET scanning. These detectors, many hundreds of them, are often implemented using FPGAs and already represent a large and profitable market for programmable logic devices.

While FPGAs are well-established in the detector part of the imaging problem, they can also help solve a significant problem in image reconstruction. They do this by serving as computing engines – as dedicated software/hardware application accelerators.

Tomographic reconstruction is a compute-intensive problem; the process of creating cross-sectional images from data acquired by a scanner requires a large number of CPU cycles. The primary computational bottleneck after data capture is the back-projection of the acquired data into image space to reconstruct the internal structure of the scanned object.
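
To see where the cycles go, consider a minimal, purely illustrative sketch of unfiltered parallel-beam back-projection in plain C. The image size, sinogram layout, and nearest-neighbor geometry below are assumptions made for illustration, not code from the project described next; the point is simply that every output pixel accumulates a contribution from every projection angle, so the work grows with the pixel count times the number of angles.

```c
/* Minimal back-projection sketch (parallel-beam geometry, nearest-neighbor
 * interpolation). The image size, sinogram layout, and scan geometry are
 * illustrative assumptions, not details of any particular scanner or of
 * the project discussed in this post. */
#include <math.h>

#define NUM_ANGLES  512               /* projection angles in the sinogram   */
#define NUM_BINS    512               /* detector bins per projection        */
#define IMG_DIM     512               /* reconstructed image is IMG_DIM^2 px */

void backproject(const float sinogram[NUM_ANGLES][NUM_BINS],
                 float image[IMG_DIM][IMG_DIM])
{
    const float center = IMG_DIM / 2.0f;
    const float pi = 3.14159265f;

    for (int y = 0; y < IMG_DIM; y++) {
        for (int x = 0; x < IMG_DIM; x++) {
            float sum = 0.0f;
            /* Every pixel sums a contribution from every projection angle,
             * so the total work is IMG_DIM * IMG_DIM * NUM_ANGLES
             * inner-loop iterations (about 134 million here). */
            for (int a = 0; a < NUM_ANGLES; a++) {
                float theta = a * pi / NUM_ANGLES;
                /* Project this pixel's offset from the rotation center
                 * onto the detector array at angle theta. */
                float t = (x - center) * cosf(theta) +
                          (y - center) * sinf(theta);
                int bin = (int)(t + NUM_BINS / 2.0f + 0.5f);
                if (bin >= 0 && bin < NUM_BINS)
                    sum += sinogram[a][bin];
            }
            image[y][x] = sum;
        }
    }
}
```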

University of Washington graduate researchers Nikhil Subramanian and Jimmy Xu, working under the direction of Dr. Scott Hauck, recently completed a project evaluating the use of higher-level programming methods for FPGAs, using back-projection as a benchmark algorithm. Nikhil and Jimmy achieved a speedup of well over 100X for the algorithm over a software-only equivalent. The target hardware for this evaluation was an XtremeData coprocessor module, the XD1000, which is based on Altera FPGA devices and serves as a coprocessor to an AMD Opteron processor running Linux via a HyperTransport socket interface.

This project, which was funded in part by a $100,000 Research and Technology Development grant from the Washington Technology Center, was intended to determine the tradeoffs of using higher-level FPGA programming methods for medical imaging, radar, and other applications requiring high-throughput image reconstruction.

The key to accelerating back-projection is to exploit parallelism in the computation. Working in cooperation with Dr. Adam Alessio of the UW Department of Radiology, the two researchers converted and refactored an existing back-projection algorithm, using both a C-to-FPGA flow (the Impulse C tools) and hand-written Verilog HDL, to evaluate design efficiency and overall performance.
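
As a rough illustration of the kind of refactoring that exposes this parallelism (a hedged sketch only: the Q16.16 fixed-point format, the unroll factor of four, and all names below are assumptions, not the UW team's actual Impulse C or Verilog source), the per-pixel accumulation can be rewritten with precomputed angle tables and several independent accumulators that a C-to-FPGA compiler can map onto parallel multiply-accumulate hardware and pipeline:

```c
/* Sketch of a refactoring that exposes parallelism to a C-to-FPGA compiler:
 * fixed-point arithmetic, precomputed angle tables, and independent
 * accumulators so several angles can be processed per clock. The Q16.16
 * format, the unroll factor of 4, and all names here are illustrative
 * assumptions, not the UW team's actual Impulse C or Verilog source. */
#include <stdint.h>

#define NUM_ANGLES  512
#define NUM_BINS    512
#define UNROLL      4                 /* angles accumulated in parallel */

/* Cosine/sine of each projection angle in Q16.16 fixed point,
 * precomputed once on the host and loaded as constants. */
extern const int32_t cos_q16[NUM_ANGLES];
extern const int32_t sin_q16[NUM_ANGLES];

/* Accumulate one pixel of the image; dx and dy are the pixel's offsets
 * from the rotation center, also in Q16.16. */
int32_t backproject_pixel(const int32_t sinogram[NUM_ANGLES][NUM_BINS],
                          int32_t dx, int32_t dy)
{
    int32_t acc[UNROLL] = {0};

    for (int a = 0; a < NUM_ANGLES; a += UNROLL) {
        /* The UNROLL accumulators are independent of one another, so a
         * hardware compiler can map them to parallel multiply-accumulate
         * units and pipeline the loop. */
        for (int u = 0; u < UNROLL; u++) {
            int64_t t = (int64_t)dx * cos_q16[a + u] +
                        (int64_t)dy * sin_q16[a + u];          /* Q32.32 */
            int32_t bin = (int32_t)(t >> 32) + NUM_BINS / 2;   /* integer part */
            if (bin >= 0 && bin < NUM_BINS)
                acc[u] += sinogram[a + u][bin];
        }
    }
    return acc[0] + acc[1] + acc[2] + acc[3];
}
```

Splitting the accumulation across independent registers removes the loop-carried dependency on a single running sum, which is what allows the compiled pipeline to issue several multiply-accumulates per clock; the same restructuring maps naturally onto hand-written Verilog.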

This conversion, which included refactoring the algorithm for parallel execution in both C and Verilog, took about two-thirds as much time in C as it did in Verilog. Perhaps more importantly, the two researchers found that later design revisions and iterations were much faster in C, with algorithm modifications requiring as little as 1/7 the time needed in Verilog.

The quick success of this project showed that even first-time users of C-to-FPGA methods can rival the results achieved by hand-coding in HDL, with surprisingly little performance penalty and faster time to deployment.

The results of this study have been published as Nikhil’s Master’s Thesis, which is available here: A C-to-FPGA Solution for Accelerating Tomographic Reconstruction.

Filed under News Shmews, Reconshmiguration