Tuesday, May 24, 2016

DynaOptics' Free-Form Lens

Singapore-based startup DynaOptics presents its Free-Form Lens and is about to start a Kickstarter campaign for the add-on lens for smartphones:

Sony Earthquake Impact Estimated at 115b Yen

Reuters, Sony: Sony estimates the impact from the quake on its image sensor and digital camera operations would total 105 billion yen this business year. It says the impact on the company as a whole would be 115 billion yen. The devices division, which includes image sensors, is to book an operating loss of 40 billion yen, compared with the previous year's loss of 29.3 billion yen. Sony also says, without elaborating, that the expected loss at its devices segment factored in a 30 billion yen loss from cancelling development of some camera modules:

"In addition, Sony decided to terminate the development and manufacturing of high-functionality camera modules for external sale, the mass production of which was being prepared at the Kumamoto Technology Center, as a result of a reconsideration of the strategy of this business from a long-term perspective. Approximately 30 billion yen in expense is expected to be incurred due to this termination."

Monday, May 23, 2016

More Details on SPI Color Night Vision Sensor

SPI Infrared publishes a few more details on its X27 4K color night vision sensor. A few statements from the SPI Infrared site:

"Unlike other technologies, the x27 low light color security camera always images the full 390-1200 nm band without having to switch camera functions; the user always gets the full broadband.

The x27 low light color sensor has extremely large pixel pitch cells for high light gathering capabilities and is very sensitive in the IR spectra region. The high 5 Million equivalent ISO system has outstanding low lux capabilities with a whopping 85,000x luminance gain.

High definition 10 megapixel sensor works in the daytime and accepts a wide array of standard off the shelf commercially available lenses. The X27 low light sensor produces 4K high definition color imagery even at 1 millilux low light levels.

The x27 low light color security camera vastly outperforms any existing low light color technology such as CMOS, sCMOS, CCD, EMCCD, EBAPS and traditional military grade intensified technologies.

Current day CMOS extreme low light color sensors reach a peak maximum quantum detection limit. An inevitable quality of these sensors is the RGB or color filters that must reside on the sensor to produce a nice color image, along with other filters that enhance color image quality. These filters cut down photons and sensitivity dramatically, but must be present to produce a nice color image. The solid-state x27 color low light night vision sensor utilizes specialty video processing on the chip and on the filters, as well as advanced electronic vis-NIR image enhancement algorithms, allowing it to collect an incredible amount of light and retain full sensitivity without loss of a brilliant color image. Furthermore, the x27's BSTFA (Broad Spectrum Thin Film Array) high fidelity, large pixel pitch sensor architecture achieves incredible bright-as-day, true color imagery at real time full TV frame rates, without image lag and with minimal image noise or grain.

Another drawback of traditional sensors is the infrared cut filter. This filter sits in front of the sensor and cuts out all infrared wavelengths, so the camera does not pick up infrared signals that would benefit a good night vision image, and it also cuts the ability to see infrared lasers, pointers, illuminators and designators. Simply removing the infrared cut filter from a traditional sensor, however, produces a pink/red image and a non-optimal picture. The specialized x27 sensor sees well into the infrared region and still produces a true, brilliant color image, allowing the user to see a full broadband, extended dynamic range image that spans visible to infrared wavelengths. The x27 outperforms any low light technology in existence today within the visible to SWIR spectral region.

Backside illumination is another chip technology that delivers higher performance in low light; the x27 color night vision detector's BSI (backside illumination) is yet another aspect that makes it desirable for imaging in never before seen extreme low light conditions. The x27 ultra extreme low light color night vision complementary metal oxide semiconductor (CMOS) integrated circuit (IC) is a vital, proven technology.

Preliminary Technical Specifications:
  • Sensor & Parameters: Maintenance free, no moving parts, Solid State non intensified BSTFA Extreme low light color FPA w/column amplification
  • Large Format, large pixel pitch architecture w/5,000,000 equivalent ISO
  • Backside Illuminated for light utilization efficiency
  • Extremely low noise – High Dynamic Range, photoconductive & photoresponse gain
  • Very high ISO w/Extremely Low read Noise
  • Auto Black Level Calibration
  • Auto Exposure w/excellent color fidelity
  • Excellent image uniformity
  • Auto hot pixel correction
  • Frame Rate: 60 FPS / optional 120 FPS
  • Day Night Mode: Auto Imaging/Auto Switching
  • Bright light/Blooming compensation: Automatic
  • Photodetector Array Size: 10 Megapixel / HD 4320 x 2432
  • Temperature Range: -30C to +80C
  • Wavelength: 390-1200 nm broadband, extreme high sensitivity
  • IR Response: Yes

Here is one of the recent YouTube demos showing the sensor's night vision capabilities:

Sunday, May 22, 2016

Difference Between Binning and Averaging

Albert Theuwissen publishes a blog post explaining the difference between the various ways to bin pixels and to average them:

"Conclusion: charge domain binning is more efficient in increasing the signal-to-noise ratio compared to binning/averaging in the voltage domain or binning in the digital domain. The explanation of binning and averaging as well as the discussion about signal-to-noise ratio in this blog takes into account that the noise content of the pixel output signals is dominated by readout noise. The story becomes slightly different if the signals are shot-noise limited. This will be explained next time."
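The read-noise-limited case can be made concrete with a small Monte Carlo sketch (the signal level, read noise figure and 2x2 bin size below are illustrative assumptions, not Theuwissen's numbers): summing charge before the single noisy read scales SNR by the number of binned pixels n, while summing values that were each read out separately only scales it by √n.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 10.0      # electrons per pixel (small, so read noise dominates)
read_noise = 5.0   # electrons RMS added by each readout
n = 4              # pixels binned together (e.g. a 2x2 bin)
trials = 200_000

pixels = np.full((trials, n), signal)

# Charge-domain binning: charges are summed in the pixel/FD first,
# then ONE readout adds noise a single time.
charge = pixels.sum(axis=1) + rng.normal(0.0, read_noise, trials)

# Digital-domain binning: every pixel is read out (noise added per pixel),
# and the noisy samples are summed afterwards.
digital = (pixels + rng.normal(0.0, read_noise, (trials, n))).sum(axis=1)

snr_charge = charge.mean() / charge.std()
snr_digital = digital.mean() / digital.std()
print(f"charge-domain SNR  ~ {snr_charge:.2f}")   # ~ n*signal/read_noise
print(f"digital-domain SNR ~ {snr_digital:.2f}")  # ~ sqrt(n)*signal/read_noise
```

With these numbers the charge-binned SNR comes out roughly twice (√4 = 2 times) the digitally binned one, which is the factor the conclusion above refers to.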

NHK on Future Image Sensor Technologies

NHK publishes a flyer for its Open House event, traditionally held at the end of May:

The research on back-illuminated small-size image sensors is jointly being conducted with Shizuoka University. The pixel-parallel processing three-dimensional integrated imaging devices research is jointly being conducted with the University of Tokyo. The organic image sensors research is jointly being conducted with Kochi University of Technology.

Sharp Changes Reporting

Sharp reports the results of its fiscal 2015 year, which ended on March 31, 2016. "CCD/CMOS Imagers," one of the largest product categories in the previous reports, has now disappeared; a camera modules group is reported instead:

Friday, May 20, 2016

CNN on Image Sensor

Nuit Blanche: Rice and Cornell Universities publish a paper on integrating the first layer of a CNN onto the image sensor:

"ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks using Angle Sensitive Pixels" by Huaijin Chen, Suren Jayasuriya, Jiyue Yang, Judy Stephen, Sriram Sivaramakrishnan, Ashok Veeraraghavan, Alyosha Molnar.

Abstract: Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similar to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPS to compute. Our experimental results (both on synthetic data and a hardware prototype) for a variety of vision tasks such as digit recognition, object recognition, and face identification demonstrate 97% reduction in image sensor power consumption and 90% reduction in data bandwidth from sensor to CPU, while achieving similar performance compared to traditional deep learning pipelines.
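As a rough illustration of the idea (not the authors' code, and using plain digital convolution in place of the ASPs' optical computation), a fixed, non-learned Gabor filter bank can stand in for a CNN's first convolutional layer; all names and parameters below are illustrative:

```python
import numpy as np

def gabor_bank(size=7, thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4),
               sigma=2.0, lam=4.0, gamma=0.5):
    """Fixed Gabor kernels at several orientations: no learned weights."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for th in thetas:
        xr = x * np.cos(th) + y * np.sin(th)   # rotate coordinates
        yr = -x * np.sin(th) + y * np.cos(th)
        k = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
            * np.cos(2 * np.pi * xr / lam)
        bank.append(k - k.mean())  # zero-mean: responds to edges, not flat areas
    return np.stack(bank)

def conv2d_valid(img, kernel):
    """Naive 'valid' cross-correlation, enough for a toy demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # a vertical step edge
bank = gabor_bank()
responses = np.stack([conv2d_valid(img, k) for k in bank])

best = int(np.argmax([np.abs(r).max() for r in responses]))
print(f"strongest response from the {[0, 45, 90, 135][best]}-degree filter")
```

The point of the sketch is only that oriented edge filtering happens in a layer with zero trainable parameters, which is what lets the paper move it out of the CNN and into the sensor's optics.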

Thursday, May 19, 2016

e2v Reports Strong Image Sensor Sales

Optics.org: e2v reports 17% sales growth for its imaging division in its annual report for the year ending March 31, 2016. The company CEO Steve Blair declared himself “delighted” with overall results. At £103.5M, the imaging division’s sales were up from £88.7M in the prior year, growing at a much faster rate than e2v’s other divisions.

CEO Steve Blair and chairman Neil Johnson noted a sharp improvement in profit margins for the imaging division. Recent changes have included the summer 2014 acquisition of Anafocus, and the sale of e2v’s thermal imaging unit.

A year ago e2v said that it had ten “problem contracts” in space imaging, of which four have now been completed and five are due for delivery within 12 months. “We have a strong position in Europe, particularly in CCD sensors, and our offering remains attractive to customers due to its long proven performance in flight,” said the e2v executives.

The increased use of sensors for industrial automation has brought on board some new customers. “We are well positioned to take advantage of the five year plan in China for automation to support the quality drive to 'made in China',” added Blair and Johnson.

Looking to the future, CEO Blair sounded a note of caution regarding the broader macroeconomic environment, but told investors that he still expected solid growth from the imaging division.

Autonomous Driving Vision Challenges

As mentioned in comments, there is a nice video lecture by Mobileye algorithm group leader Uri Rokni on challenges in vision algorithms for autonomous driving:

Wednesday, May 18, 2016

Tessera FotoNation Partners with Kyocera on Automotive Camera Technology

BusinessWire: FotoNation, a wholly owned subsidiary of Tessera, partners with Kyocera to develop intelligent vision solutions for automotive applications. As part of the partnership and using FotoNation technology as a foundation, the two companies will jointly develop advanced computer vision solutions for the automotive market.

“FotoNation is focused on delivering complex computational imaging solutions for automotive applications, and together we will develop technologies that will transform the future of driving,” said Norio Okuda, Manager, Kyocera.

Increasing interest from the automotive industry for vision systems to enhance vehicle safety represents an opportunity for significant growth for FotoNation, driven mainly by adoption of our advanced imaging systems by tier-one automotive suppliers and OEMs,” stated Sumat Mehra, SVP of marketing and business development at FotoNation. “Kyocera has a strong reputation as a leading technology innovator, and we are pleased to be working with them as a valued technology partner to bring these cutting-edge vision solutions to market.