Friday, October 28, 2016

ASICFPGA Offers HDR ISP Pipeline

ASICFPGA offers an ISP pipeline with many features, including HDR processing, but apparently without small-pixel support:

  • Supports RGB Bayer progressive image sensors
  • Supports 8 ~ 14 bit Bayer input data
  • Supports image sensor sizes from 256*256 to 8192*8192
  • Defect correction
  • Lens shading correction
  • High-quality interpolation
  • 3D motion-adaptive noise reduction and 2D noise reduction
  • Color correction by 3x3 matrix
  • Gamma correction
  • HDR processing for multiple-exposure images and HDR Bayer images (see the sketch after this list)
  • WDR (shadow/highlight compensation, backlight compensation)
  • 2D edge enhancement
  • Supports AE, AWB and AF
  • Saturation, contrast and brightness control
  • Supports special image effects (sepia, negative, solarization)
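As an illustration of the multiple-exposure HDR item above, here is a minimal sketch of how differently exposed frames are commonly merged into a linear radiance map. This is not ASICFPGA's implementation; the function name and the simple "hat" weighting are assumptions for illustration only:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge differently exposed 8-bit frames of a static scene into a
    linear HDR radiance estimate via a per-pixel weighted average.
    Assumes a linear sensor response; the weights favor mid-tones and
    suppress clipped (very dark or saturated) pixels."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros(frames[0].shape, dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        x = img.astype(np.float64) / 255.0      # normalize to [0, 1]
        w = 1.0 - np.abs(2.0 * x - 1.0)         # "hat" weight: 1 at mid-gray, 0 at clipping
        acc += w * (x / t)                      # divide by exposure time to get relative radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)         # avoid division by zero where all frames clip
```

A tone-mapping or WDR stage would then compress this radiance map back into the display range.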

The company's demo video shows the HDR capabilities:

More Info on Canon IEDM Presentation

The IEDM 2016 press kit has a few words on Canon paper #8.6, “A 1.8e- Temporal Noise Over 90dB Dynamic Range 4k2k Super 35mm Format Seamless Global Shutter CMOS Image Sensor with Multiple-Accumulation Shutter Technology” by K. Kawabata et al.:

"Canon researchers will discuss high-resolution, large-format CMOS imaging technology for use in high-performance cameras large enough to take photographs and videos at ultra-high-definition resolution.

The Canon researchers developed a new architecture that enables the readouts of multiple pixels to be accumulated and stored in memory, and then processed all at once. This technique enabled the implementation of a global shutter while also delivering excellent noise and dark current performance and high dynamic range (92dB at a standard 30fps frame rate)."
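As a back-of-envelope check (my arithmetic, not a figure from the paper), the quoted 92dB dynamic range together with 1.8e- temporal noise implies an effective signal capacity of roughly 72,000 electrons, since dynamic range in dB is 20·log10(max signal / noise floor):

```python
import math

read_noise_e = 1.8     # reported temporal noise, electrons
dr_db = 92.0           # reported dynamic range, dB

# DR(dB) = 20 * log10(max_signal / noise)  =>  max_signal = noise * 10**(DR/20)
max_signal_e = read_noise_e * 10 ** (dr_db / 20.0)
print(f"implied effective signal capacity: {max_signal_e:,.0f} e-")   # ~71,700 e-
```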

Thursday, October 27, 2016

Chronocam Raises $15M Series B Led by Intel Capital

MarketWired, BusinessWire: Chronocam SA announces it has raised $15M in a Series B round led by Intel Capital, along with iBionext, Robert Bosch Venture Capital GmbH, 360 Capital, CEAi and Renault Group.

Chronocam will use the investment to continue building a world-class team to accelerate product development and commercialize its computer vision sensing and processing technology. The funding will also allow the company to expand into key markets, including the US and Asia.

“Conventional computer vision approaches are not well-suited to the requirements of a new generation of vision-enabled systems,” said Luca Verre, CEO and co-founder of Chronocam. “For example, autonomous vehicles require faster sensing systems that can operate in a wider variety of ambient conditions. In the IoT segment, power budgets, bandwidth requirements and integration within sensor networks make today’s vision technologies impractical and ineffective.

“Chronocam’s unique bio-inspired technology introduces a new paradigm in capturing and processing visual data, and addresses the most pressing market challenges head-on. We are well-positioned to capitalize on this significant market opportunity, and we appreciate the confidence demonstrated by our investors as we roll out our technology to an increasing number of customers.”

Light L16 Camera Article

IEEE Spectrum publishes Light Co. founder Rajiv Laroia's article "Inside the Development of Light, the Tiny Digital Camera That Outperforms DSLRs." A few quotes:

"...molded plastic lens technology had been nearly perfected over the previous five years to the point where these lenses were ­“diffraction limited”—that is, for their size, they were as good as the fundamental physics would ever allow them to be. Meanwhile, the cost had dropped dramatically: A five-element smartphone camera lens today costs only about US $1 when purchased in volume. (Elements are the thin layers that make up a plastic lens.) And sensor prices had plummeted as well: A high-resolution (13-megapixel) camera sensor now costs just about $3 in volume.

By using many modules, the camera could capture more light energy. The effective size of each pixel would also increase because each object in the scene would be captured in multiple pictures, increasing the dynamic range and reducing graininess. By using camera modules with different focal lengths, the camera would also gain the ability to zoom in and out. And if we arranged the multiple camera modules to create what was effectively a larger aperture, the photographer could control the depth of field of the final image.
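A quick sketch of the graininess argument (illustrative numbers, not Light's data): averaging N independent captures of the same scene reduces random noise by roughly √N, which is equivalent to having a larger effective pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = 100.0          # "true" signal level, arbitrary units
noise_sigma = 10.0     # per-capture random noise

def snr_of_average(n_frames, trials=20000):
    """Empirical SNR after averaging n_frames noisy captures per trial."""
    frames = scene + noise_sigma * rng.standard_normal((trials, n_frames))
    avg = frames.mean(axis=1)          # combine n_frames captures of the same scene
    return scene / avg.std()

for n in (1, 4, 10):
    print(n, round(snr_of_average(n), 1))   # SNR grows roughly as sqrt(n): ~10, ~20, ~32
```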

The first and current version of the Light camera—called the L16—has 16 individual camera modules with lenses of three different focal lengths—five are 28-mm equivalent, five are 70-mm equivalent, and six are 150-mm equivalent. Each camera module has a lens, an image sensor, and an actuator for moving the lens to focus the image. Each lens has a fixed aperture of F2.4.

Five of these camera modules capture images at what we think of as a 28-mm field of view; that’s a wide-angle lens on a standard SLR. These camera modules point straight out. Five other modules provide the equivalent of 70-mm telephoto lenses, and six work as 150-mm equivalents. These 11 modules point sideways, but each has a mirror in front of the lens, so they, too, take images of objects in front of the camera. A linear actuator attached to each mirror can adjust it slightly to move the center of its field of view.

Each image sensor has a 13-megapixel resolution. When the user takes a picture, depending on the zoom level, the camera normally selects 10 of the 16 modules and simultaneously captures 10 separate images. Proprietary algorithms are then used to combine the 10 views into one high-quality picture with a total resolution of up to 52 megapixels.
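The combining algorithms themselves are proprietary, but the module layout described in the article is easy to capture as data. Below is a hypothetical sketch of picking modules for a requested zoom level; the module table follows the article, while the nearest-focal-length selection rule is my assumption, not Light's published logic:

```python
# Module table as described in the article: 16 modules, three focal lengths,
# each with a 13-megapixel sensor and a fixed f/2.4 aperture.
MODULES = [("28mm", 28)] * 5 + [("70mm", 70)] * 5 + [("150mm", 150)] * 6

def pick_modules(requested_focal_mm, count=10):
    """Hypothetical selection: prefer modules whose focal length is closest
    to the requested zoom (the real camera's choice logic is not public)."""
    ranked = sorted(MODULES, key=lambda m: abs(m[1] - requested_focal_mm))
    return ranked[:count]

print(pick_modules(50))   # a 50-mm-equivalent shot leans on the 70-mm and 28-mm groups
```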

Our first-generation L16 camera will start reaching consumers early next year, for an initial retail price of $1,699. Meanwhile, we have started thinking about future versions. For example, we can improve the low-light performance. Because we are capturing so many redundant images, we don’t need to have every one in color. With the standard sensors we are using, every pixel has a filter in front of it to select red, green, or blue light. But without such a filter we can collect three times as much light, because we don’t filter two-thirds of the light out. So we’d like to mix in camera modules that don’t have the filters, and we’re now working with On Semiconductor, our sensor manufacturer, to produce such image sensors.
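A rough way to quantify that low-light benefit (my estimate, not a figure from the article): if an unfiltered pixel collects about 3x the photons, then shot-noise-limited SNR improves by about √3, i.e. roughly 1.7x:

```python
import math

photon_gain = 3.0    # claim: ~3x more light without the per-pixel color filter
snr_gain = math.sqrt(photon_gain)   # photon shot noise scales as sqrt(signal)
print(f"~{snr_gain:.2f}x SNR improvement in the shot-noise-limited regime")
```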

Wednesday, October 26, 2016

Harvest Imaging Forum 2016 is Almost Sold Out

Albert Theuwissen reports that the Harvest Imaging Forum 2016 is almost sold out: only 4 seats are left for the session on December 8th and 9th, 2016.

Mentor Graphics CEO on Image Sensor Market Growth

Mentor Graphics CEO Wally Rhines presents his view on the semiconductor industry and says a few words on the image sensor market (the link works only in Internet Explorer for me):

SK Hynix to Try Foundry/Custom Model for its CIS Business

ETNews: As Hynix moves 13MP sensor mass production to its 300mm M10 fab in Icheon in 2017, the company plans to reduce production of low-priced, low-resolution CIS at its 200mm M8 plant in Cheongju. Instead, it is going to shift that capacity to other foundry products such as DDICs and PMICs. Eventually, SK Hynix is going to stop low-resolution sensor production at the M8 fab.

As a part of this plan, SK Hynix publicly announced that it has recently received all of SiliconFile's assets, worth $3.98 million (4.5 billion KRW), from SiliconFile, its wholly owned CIS design subsidiary. SiliconFile is becoming SK Hynix's CIS design house and is supposed to find new fabless customers.

"Experts believe that variety of businesses that have been competing against each other in a field of fabless design can become customers of Silicon File." SK Hynix appointed Director Lee Dong-jae serving as the department head of foundry business department, as SiliconFile board director.

“Receiving the company assets from SiliconFile and changing SiliconFile into a design house indicates that SK Hynix is officially promoting its non-memory semiconductor business,” says an ETNews industry source.

Hynix VP KD Yoo, who established and led the Hynix image sensor business over the years, has left the company and is now a professor at Hanyang University.

SK Hynix in Cheongju. White building at 2 o’clock is M8

DENSO Works with Toshiba and Sony on ADAS

JCN Newswire: DENSO and Toshiba have reached a basic agreement to jointly develop a Deep Neural Network Intellectual Property (DNN-IP), which will be used in the image recognition systems that the two companies have been independently developing for ADAS and automated driving technologies.

Because of the rapid progress in DNN technology, the two companies plan to make the technology flexibly extendable to various network configurations. They will also make the technology able to be implemented on in-vehicle processors that are smaller, consume less power, and feature other optimizations.

DENSO has been developing DNN-IP for in-vehicle applications. By incorporating DNN-IP in in-vehicle cameras, DENSO will develop high-performance ADAS and automated driving systems. Toshiba will partition this jointly developed DNN-IP technology into dedicated hardware components and implement them on its in-vehicle image recognition processors to process images using less power than image processing systems based on DSPs or GPUs.

DENSO also invests in the US-based machine learning startup THINCI. “We are thrilled DENSO is our lead investor,” said THINCI CEO Dinakar Munagala. “The automotive industry is one of the earliest adopters of vision processing and deep learning technology. DENSO’s investment in THINCI’s trailblazing solution confirms our own belief that our innovation has much to offer, not only in the automobile but in the wide range of everyday products.”

JCN Newswire: DENSO announces that the image sensors provided by Sony have helped DENSO improve the performance of its in-vehicle vision sensors, which can now detect pedestrians at night.

Sony image sensors, which are also used in surveillance and other monitoring devices, enable cameras to take clear images of objects even at night. DENSO has adapted Sony's image sensors for use in vehicle-mounted vision sensors, improving ease of installation, heat resistance, vibration resistance, and so on. DENSO has also used Sony's ISPs for noise reduction and optimization of camera exposure parameters to better recognize pedestrians and capture clearer images of them at night.

Remote Sensing Trends

SPIE publishes a talk by Stephen Marshall of the University of Strathclyde (UK) on trends in remote sensing, including miniaturized multi- and hyperspectral cameras and other devices incorporated into drones and other flying platforms:

Tuesday, October 25, 2016

IHS on Security Market Trends

IHS publishes its Top Video Surveillance Trends for 2016 report. A few quotes:

"4K video surveillance has been repeatedly touted as a major trend in video surveillance for the last 18 months and it can sometimes be challenging to see past the marketing hype. Yet make no mistake, the video surveillance market is going to 4K cameras; it’s only a matter of when rather than if. For 2016, IHS is predicting:
• Volumes of 4K cameras shipped in 2016 will remain low, less than 1% of the 66 million network cameras projected to be shipped globally. We are unlikely to see over a million 4K network camera units shipped in a calendar year until 2018.
• More “4K-compliant” cameras will be launched because of the increased use of 4Kp30 and above chipsets, meaning more cameras adhering to 4K standards, such as SMPTE ST 2036-1.
• As with early HD surveillance cameras, the first 4K models offered the full resolution only at lower frame rates. We will see more cameras offered with higher frame rates, and closer ties to other video standards.