
Finding Nemo at CES 2009

[Screen capture: the "Finding Nemo" spotlight demo]

When producing higher-level design tools for FPGAs, it’s important to know the true “pain points” of application developers – which aspects of software-to-hardware compilation are most critical in actual projects, and what barriers stand in the way of success with a given tool flow. There is no better way to learn this than by actually completing a project yourself, with your own tools, on a tight schedule.

In software circles this is known as “eating your own dog food”.

About a week before the holidays I had a phone call from our friends at Xilinx, asking if we would like to participate in the Consumer Electronics Show in Las Vegas, highlighting the use of FPGAs for HD video processing.

That sounded like a great opportunity. The trouble was, we didn’t have a CES-quality demo ready to go. Something that would suggest how low-power Xilinx Spartan FPGAs could be used in the newest and grooviest consumer and automotive devices. Something other than the usual, yawn-inducing edge detection filters, decompression engines or picture-in-a-picture demos.

As an organization we’ve done a fair amount of video processing work, most of it customer-funded and military/aerospace related. We’ve created configurable filters, combined dual embedded processors with custom video coprocessors, streamed video between TI DSPs and FPGAs, and all kinds of other fun stuff. There are some folks on our team who are real hotshots at this kind of thing.

But what could we build in two or three weeks that would be really fun and different? Could we use the new Xilinx Video Starter Kit, with its DVI input and output interfaces and its Embedded Development Kit (EDK) reference designs, and actually put something together in time?

To make this more interesting, it was two weeks before the holidays, and we were already jammed up with critical customer deliverables. There was nobody available who was actually qualified to do the work. Everyone was busy.

There was only me.

By way of background: I have significant past experience with VHDL, primarily as a synthesis tools developer. And I have plenty of past experience with C programming. But to be honest, I have not written a production-quality line of code in a very long time. I’m in more of a marketing and executive management role. We have very good engineers here who probably laugh at my feeble, occasional efforts to help with new features and bug fixes.

My exposure to the modern Xilinx tools is relatively limited. I know just enough to stumble through our own Impulse CoDeveloper / EDK tutorials, by carefully following the instructions that our more expert staff have written.

I talk a good story, but I am not by any means a professional FPGA developer.

This is all a long-winded way of saying that when we received the new VSK package from Xilinx (a loaner sent the very next day) we had very little time in the lab to bring up a baseline project and begin coding an Impulse C demonstration example for it. Mostly the work would have to happen over the holiday break. In my dining room at home. With curious kids and an impatient spouse hovering nearby. (“So, this is a week off?”)

The demo I had in mind was object recognition in a 720p video stream. One evening I had come across a copy of Finding Nemo in the stack of DVDs, and it had occurred to me that Nemo was probably not so hard to pick out of a video frame in real time, given his bright colors and stripes. Could I actually “Find Nemo” using the Xilinx hardware and Impulse C, starting with a Xilinx DVI reference example?
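
The basic idea is a per-pixel color test. Here is a minimal sketch of that idea in plain C, assuming 8-bit RGB channels; the function name and threshold values are illustrative only, not the ones used in the demo:

    #include <stdint.h>

    /* Per-pixel test: does this pixel fall in a hand-tuned "clownfish
     * orange" range? The thresholds below are illustrative only. */
    static int is_nemo_orange(uint8_t r, uint8_t g, uint8_t b)
    {
        return (r > 180) &&             /* strong red           */
               (g > 60) && (g < 160) && /* moderate green       */
               (b < 80) &&              /* very little blue     */
               (r > g + 64);            /* clearly red-dominant */
    }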

Fortunately the bring-up of the EDK reference examples was painless. The ACE files provided on the Flash card worked flawlessly with multiple input and output devices, verifying the hardware setup within minutes. The provided DVI passthrough project built in EDK (the Xilinx Platform Studio environment), downloaded, and came up perfectly on the first try. I also tested the camera-input reference example and it worked, although I did not use it for this project. (A live-action “Clown Fish in a Fishbowl” demo would have been nice… perhaps next time.)

My first effort (with the help of Mei Xu, Applications Engineer here at Impulse) was to hack into the reference example and its System Generator 2D FIR code, to insert a pre-existing Impulse C 5x5 filter in place of the FIR filter. This was moderately successful (we had edge-enhanced video output in less than two days of effort) but not particularly impressive, either as functionality or as a high-level method of design. As I said earlier, edge detect has been done by everyone, and it’s not all that interesting. And to be honest, the HDL code provided in the reference example (apparently created by Xilinx System Generator) was rather obscure and probably very difficult for a software person to follow. We don’t really want Impulse C users having to muck around in that stuff.
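
For reference, the arithmetic at the heart of a 5x5 filter like this is simple enough. Here is a plain-C sketch of one output pixel’s computation, with the Impulse C streaming and line-buffer plumbing omitted; the names and coefficient scaling are illustrative assumptions, not code from the reference example:

    #include <stdint.h>

    #define KSIZE 5

    /* One output pixel of a 5x5 convolution over 8-bit pixels. In the
     * streaming hardware version the window is fed from line buffers,
     * one pixel per clock; that plumbing is omitted here. */
    static uint8_t apply_5x5(const uint8_t win[KSIZE][KSIZE],
                             const int8_t coef[KSIZE][KSIZE],
                             int shift)
    {
        int32_t acc = 0;
        for (int i = 0; i < KSIZE; i++)
            for (int j = 0; j < KSIZE; j++)
                acc += (int32_t)win[i][j] * coef[i][j];

        acc >>= shift;                /* scale back toward pixel range   */
        if (acc < 0)   acc = 0;       /* clamp to the 8-bit output range */
        if (acc > 255) acc = 255;
        return (uint8_t)acc;
    }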

It was then that I got the Finding Nemo idea. Mei (who had quickly gotten the edge detect working) said something like “good idea, good luck with it” and promptly left on holiday. Joe at Xilinx had also made a comment during our first call, something like “you guys should generate a pcore from your compiler”. That seemed like a good idea for productizing the design method, though it meant a bit of extra setup work (a few days, as it turned out… we already generate EDK pcores for use with MicroBlaze and PowerPC embedded processors).

During the subsequent week of development at home, I spent nearly all my time writing C, and wrote no application HDL apart from some trivial wrapper-code customization for the pcore generation (based on our existing MicroBlaze PSP).

The project was a complete success. Once I had a reliable video generation setup and had built the reference example, it was mostly smooth sailing, with just the expected process of debugging and testing C code using GCC, compiling to RTL with our tools, optimizing and synthesizing, downloading and testing… and repeating this process until the demonstration was working at an acceptable level. I estimate that, not counting the time spent waiting for place and route to complete, I spent a total of 20 hours on the actual demo coding and testing, and perhaps the same amount again refining the design to create the smoothly moving “spotlight” effect shown in the screen capture above.
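
For the curious: one simple, hardware-friendly way to get that smooth motion is a fixed-point exponential moving average that pulls the spotlight toward each frame’s detected position. The sketch below illustrates the technique; the names and the blend factor are hypothetical, and this is not the demo’s exact code:

    #include <stdint.h>

    /* Smooth the spotlight with a fixed-point exponential moving average:
     * each frame, the spotlight center moves a fixed fraction of the way
     * toward the newly detected position, which removes jitter. Positions
     * are kept in Q8 fixed point; alpha is the blend weight out of 256. */
    typedef struct { int32_t x_q8, y_q8; } spot_t;

    static void spot_update(spot_t *s, int det_x, int det_y)
    {
        const int32_t alpha = 32;   /* 32/256: move 1/8 of the gap per frame */
        s->x_q8 += (alpha * ((det_x << 8) - s->x_q8)) >> 8;
        s->y_q8 += (alpha * ((det_y << 8) - s->y_q8)) >> 8;
    }

    /* The integer pixel center to draw is (s->x_q8 >> 8, s->y_q8 >> 8). */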

A block diagram of the system is shown below:

[Block diagram of the EDK system]

The demonstration will be shown tomorrow and Friday at CES. There is certainly more that can be done in this application, such as providing run-time configuration from MicroBlaze, improving support for alternate resolutions, and using more intelligent pattern recognition methods. But given the time constraints and the number of tools and hardware “fiddly bits” in the complete system, the speed of bring-up was impressive and encouraging to say the least. We intend to leverage the VSK in our own product promotions.

Next step: finding my car keys.

Thank you, Xilinx!

