Tuesday, August 12, 2014

Image Sensors in Optical Supercomputer

Laser Focus: UK Cambridge University spin-off Optalysys says that it is only months away from launching a prototype optical processor operating at 1.32 gigaFLOPS. An array of fast image sensors is supposed to be used as the output device for the supercomputer. A YouTube video by Emeritus Prof. Heinz Wolff pitches the technology.

4 comments:

  1. Interesting to peruse, but it sure looks more like E-O (electro-optical) computing to me.
    Two comments:
    (1) They say "Based on this, our first demonstrator system, which will operate at a frame rate of 20Hz and resolution 500×500 pixels and produce a vorticity plot, will operate at around 40GFLOPs." I am trying to figure out how a 5 Mpix/s throughput, say doing a gradient image, results in 40 GFLOPS - that means each pixel-wise operation corresponds to 8000 digital operations (see the quick check below). 80 I would believe, perhaps; 8000 seems excessive. What am I missing?
    (2) It looks like a spatially-parallel analog computer. Under "achievements" they show a binary input image and the gradient image output. It definitely "kind of" works, but there seems to be a lot of noise - noise on the order of the gradient they are generating. I would think this would be a giant problem. Even analog computers need about 8-bit equivalent accuracy for reasonably accurate solutions.
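
    A quick Matlab sanity check of that arithmetic (the 20 Hz, 500×500 and 40 GFLOPS figures are taken from the quote; the rest is plain arithmetic):

    >> fps = 20; pix = 500*500;    % quoted frame rate and resolution
    >> pix_per_s = fps*pix         % 5e6 pixel operations per second
    >> 40e9/pix_per_s              % = 8000 digital ops per pixel operation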

  2. According to the equation, we quickly see that this system seems to be performing a Fourier transform. If that is the case, Matlab running on a single core of my desktop computer can do about 150 of these operations per second:

    >> a=rand(500);tic;fft2(a);toc
    Elapsed time is 0.006457 seconds.

    Assuming the operation is a correlation, which requires two of these Fourier transforms per frame (sketched below), that is still 75 Hz. And considering that these days a CPU can easily have 8 cores, that gives 600 Hz.

    I've heard the "speed of light" argument many times; the real bottleneck is the spatial light modulator and the sensor speed. Having said that, I still think it may be possible to build an optical system that is faster than a traditional CPU, but how much would it cost? I think we are still far from a commercial breakthrough.
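
    For reference, a minimal Matlab sketch of that two-transform correlation (the 500×500 size comes from the quote above; the filter spectrum H is assumed to be precomputed offline, so each frame costs one forward and one inverse FFT):

    >> a = rand(500);                   % input frame
    >> H = conj(fft2(rand(500)));       % filter spectrum, computed once
    >> tic; c = ifft2(fft2(a).*H); toc  % correlation plane: two FFTs/frame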

  3. Many years ago, I visited a big company that had built an optical computer using a laser, a computational mask, and an image sensor. They wanted to look into commercializing it. It was very fast at tasks like pattern matching - "at the speed of light" - and the sensor had enough speed, well capacity, resolution, and dynamic range to perform the tasks. The cost and size of the unit would even have been reasonable. However, it had one big problem: there was no way to change the computational mask quickly. Essentially, they had the odd idea that the computer only needed to run one program. Making and installing a new mask took a week. They had not considered how to do it in real time, or even in a few minutes. They had no internal development funds for commercialization, so they just dropped the program.

    I hope this one doesn't repeat that error.

    Replies
    1. Based on the video, the mask is a liquid crystal, so it looks like they can change it very quickly (probably on the order of <1 ms response time). Maybe that's their innovation here.

      We built a similar analog optical computer with a fixed mask for a class. The computation is fast, but quite limited. It's not fair to quote so many gigaFLOPS, because that's not what it is - they're not general-purpose FLOPS.

