Tag Archives: Xilinx

Xilinx earns VDC “Best of Show” at Embedded Systems Conference

VDC Press Release: Best of Show for ESC 2009

I like to fantasize that our two Impulse C demonstrations running in the Xilinx booth had something to do with this. Or perhaps it was the video processing workshop featuring C-to-FPGA methods that impressed the distinguished panel of judges.

More likely it was the timing of announcements from Xilinx of their new Virtex-6 and Spartan-6 devices.


Filed under Embed This!, News Shmews

Finding Nemo at CES 2009


When producing higher-level design tools for FPGAs, it’s important to know the true “pain points” of application developers – what aspects of software-to-hardware are most critical in actual projects, and what barriers there are to success with a given tool flow. There is no better way to learn this than by actually completing a project yourself, with your own tools, on a tight schedule. 

In software circles this is known as “eating your own dog food”.

About a week before the holidays I had a phone call from our friends at Xilinx, asking if we would like to participate in the Consumer Electronics Show in Las Vegas, highlighting the use of FPGAs for HD video processing.

That sounded like a great opportunity. The trouble was, we didn’t have a CES-quality demo ready to go. Something that would suggest how low-power Xilinx Spartan FPGAs could be used in the newest and grooviest consumer and automotive devices. Something other than the usual, yawn-inducing edge detection filters, decompression engines or picture-in-a-picture demos.

As an organization we’ve done a fair amount of video processing work, most of it customer-funded and military/aerospace related. We’ve created configurable filters, combined dual embedded processors with custom video coprocessors, streamed video between TI DSPs and FPGAs, and all kinds of other fun stuff. There are some folks on our team who are real hotshots at this kind of thing.

But what could we build in two or three weeks that would be really fun and different? Could we use the new Xilinx Video Starter Kit, with its DVI input and output interfaces and its Embedded Development Kit (EDK) reference designs, and actually put something together in time?

To make this more interesting, it was two weeks before the holidays, and we were already jammed up with critical customer deliverables. There was nobody available who was actually qualified to do the work. Everyone was busy.

There was only me.

By way of background: I have significant past experience with VHDL, primarily as a synthesis tools developer. And I have plenty of past experience with C programming. But to be honest, I have not written a production-quality line of code in a very long time. I’m in more of a marketing and executive management role. We have very good engineers here who probably laugh at my feeble, occasional efforts to help with new features and bug fixes.

My exposure to the modern Xilinx tools is relatively limited. I know just enough to stumble through our own Impulse CoDeveloper / EDK tutorials, by carefully following the instructions that our more expert staff have written.

I talk a good story, but I am not by any means a professional FPGA developer.

This is all a long-winded way of saying that when we received the new VSK package from Xilinx (a loaner sent the very next day) we had very little time in the lab to bring up a baseline project and begin coding an Impulse C demonstration example for it. Mostly the work would have to happen over the holiday break. In my dining room at home. With curious kids and an impatient spouse hovering nearby. (“So, this is a week off?”)

The demo I had in mind was object recognition in a 720p video stream. One evening I had come across a copy of Finding Nemo in the stack of DVDs, and it had occurred to me that Nemo was probably not so hard to pick out of a video frame in real time, given his bright colors and stripes. Could I actually “Find Nemo” using the Xilinx hardware and Impulse C, starting with a Xilinx DVI reference example?
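The heart of such a color-keyed recognizer can be sketched in ordinary C. The function names and RGB thresholds below are purely hypothetical illustrations of the idea (strong red, moderate green, weak blue), not the values or structure used in the actual demo:

```c
#include <stdbool.h>

/* Hypothetical per-pixel test: clownfish orange has a strong red
   channel, a moderate green channel, and a weak blue channel.
   These thresholds are illustrative guesses, not the demo's values. */
static bool looks_like_nemo(unsigned char r, unsigned char g, unsigned char b)
{
    return r > 180 && g > 60 && g < 160 && b < 90;
}

/* Count matching pixels in a packed RGB frame; a region with many
   hits (plus nearby white stripe pixels) would be flagged as Nemo. */
int count_orange(const unsigned char *rgb, int npixels)
{
    int hits = 0;
    for (int i = 0; i < npixels; i++) {
        const unsigned char *p = rgb + 3 * i;
        if (looks_like_nemo(p[0], p[1], p[2]))
            hits++;
    }
    return hits;
}
```

A per-pixel test like this maps naturally onto a streaming FPGA pipeline, since each pixel is classified independently as it flows through.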

Fortunately the bring-up of the EDK reference examples was painless. The ACE files provided on the Flash card worked flawlessly with multiple input and output devices, verifying the hardware setup within minutes. The provided Platform Studio project for DVI passthrough built in EDK (the Xilinx Platform Studio environment), downloaded, and came up perfectly on the first try. I also tested the camera-input reference example and it worked, although I did not make use of that reference example for this project. (A live-action “Clown Fish in a Fishbowl” demo would have been nice… perhaps next time.)

My first effort (with the help of Mei Xu, Applications Engineer here at Impulse) was to hack into the reference example and its System Generator 2D FIR code, attempting to insert a pre-existing Impulse C 5X5 filter in place of the FIR filter. This was moderately successful (we had edge-enhanced video output with less than two days of effort) but not particularly impressive, either as functionality or as a high-level method of design. As I said earlier, edge detect has been done by everyone, and it’s not all that interesting. And to be honest, the HDL code provided in the reference example (apparently created by Xilinx System Generator) was rather obscure and probably very difficult for a software person to follow. We don’t really want Impulse C users having to muck around in that stuff.

It was then that I got the Finding Nemo idea. Mei (who had quickly gotten the edge detect working) said something like “good idea, good luck with it” and promptly left on holiday. Joe at Xilinx had also made a comment during our first call, something like “you guys should generate a pcore from your compiler”. That seemed like a good idea for productizing the design method, though maybe a bit of extra work setting up (a few days as it turned out… we already generate EDK pcores for use with MicroBlaze and PowerPC embedded processors).

During development in the subsequent week at home, I spent nearly all my development time using C and wrote no application HDL, apart from some trivial wrapper code customization for the pcore generation (based on our existing MicroBlaze PSP). 

The project was a complete success. Once I had a reliable video generation setup and had built the reference example, it was mostly smooth sailing, with just the expected process of debugging and testing C code using GCC, compiling to RTL with our tools, optimizing and synthesizing, downloading and testing… and repeating this process until the demonstration was working to an acceptable level. I estimate that, not counting the time waiting for place and route to complete, I spent a total of 20 hours on the actual demo coding and testing, and then perhaps the same amount again refining the design to create the smoothly moving “spotlight” effect that is shown in the screen capture image above.

A block diagram of the system is shown below:


The demonstration will be shown tomorrow and Friday at CES. There is certainly more that can be done in this application, such as providing run-time configuration from MicroBlaze, improving support for alternate resolutions, and using more intelligent pattern recognition methods. But given the time constraints and the number of tools and hardware “fiddly bits” in the complete system, the speed of bring-up was impressive and encouraging to say the least. We intend to leverage the VSK in our own product promotions.

Next step: finding my car keys.

Thank you, Xilinx!


Filed under Embed This!

PACT files suit against Avnet and Xilinx

In the continuing and messy, decades-old saga of programmable logic litigation, we now have this news from EE Times:

German processor firm alleges Xilinx, Avnet infringe patents

PACT is a reconfigurable computing company that has worked for eight years to promote and sell its XPP reconfigurable device technology, most recently focusing on intellectual property (IP) licensing for HD video applications. The lawsuit appears to be aimed squarely at the DSP48 blocks that now appear in every FPGA family sold by Xilinx. Obviously there is a lot at stake here for Xilinx if the suit has merit. But don’t hold your breath for a result: the case is not scheduled in court until 2011.


Filed under News Shmews

More FPGAs on Mars

The Phoenix Mars lander is scheduled to touch down on the surface of Mars this Sunday. At least one Actel FPGA is on board, handling pressure and temperature data processing.

Note that Actel and Xilinx FPGAs are already rolling around on Mars, in the Spirit and Opportunity rovers.

Details of the Phoenix lander and its gadgets can be found on the JPL Mars Page.


Filed under News Shmews

Two PowerPCs, no waiting

Ed put together another “way cool” demo, this one for Embedded World in Germany.

For this show, we wanted to demonstrate the use of two embedded PowerPC processors operating in concert in a single FPGA, with Impulse-generated C-language coprocessors attached to each PowerPC, along with other needed soft peripherals.

Here’s a block diagram of the demonstration, featuring Ed’s smiling and emboss-filtered face (sorry Ed!):

Dual PowerPC demonstration

All of this stuff is running on a single FPGA device, a Xilinx Virtex-4 FX60, which provides two PowerPC processors. As the block diagram shows, one of the PowerPCs is dedicated to processing images that come in from a network JPEG camera. There is no operating system running on this dedicated PowerPC, just a single application that reads data from the hardware TEMAC interface and performs JPEG decoding of the streaming video. To speed up this decoding, a hardware accelerator has been added as a peripheral to perform an inverse discrete cosine transform (IDCT) operation. This accelerator was written in C and compiled by our tools into a hardware module/peripheral.

After decoding of the image frames, the video data (now in RGB format) is streamed directly to the second PowerPC, again using Impulse-generated hardware modules. One of these Impulse C hardware modules is a configurable image filter that allows such things as emboss, edge detect, blur, color conversion, etc. The processor-to-processor interfaces in the FPGA are implemented using a special Xilinx Virtex-4 feature called the Auxiliary Processing Unit, or APU. Our tools automatically generate the needed APU interfaces for Xilinx, making such connections relatively easy to create.
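A configurable filter of this kind boils down to a 3×3 convolution whose kernel selects the effect. The plain-C sketch below is an illustrative software version, not the Impulse C module from the demo:

```c
/* Apply a 3x3 kernel to an 8-bit grayscale image, replicating
   edge pixels at the borders and clamping results to 0..255.
   Example kernels: edge detect {{-1,-1,-1},{-1,8,-1},{-1,-1,-1}},
   blur (all 1/9), emboss {{-2,-1,0},{-1,1,1},{0,1,2}}. */
void filter3x3(const unsigned char *in, unsigned char *out,
               int w, int h, const double k[3][3])
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double acc = 0.0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0) sx = 0;
                    if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0;
                    if (sy >= h) sy = h - 1;
                    acc += k[dy + 1][dx + 1] * in[sy * w + sx];
                }
            }
            if (acc < 0.0) acc = 0.0;
            if (acc > 255.0) acc = 255.0;
            out[y * w + x] = (unsigned char)(acc + 0.5); /* round */
        }
    }
}
```

Changing the kernel at run time is exactly the kind of configurability a MicroBlaze or PowerPC host can provide by writing new coefficients to the hardware module.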

There is more fun happening in the second PowerPC. This processor runs the VxWorks operating system and an embedded web server. This means that the images being captured and filtered frame-by-frame from the camera can either be displayed on a TFT display (as shown) or be served up via a webpage.

This is a great example of how single-chip, multiple-processor embedded applications can be created using tools like ours, and using the latest FPGA devices. You can bet we’ll be seeing more examples like this in the future, possibly including a large number of embedded processors to create single-chip, accelerated computing clusters.


Filed under Embed This!