Monday, April 23, 2018

Image Sensor Market is Greater than Lamps

IC Insights' Optoelectronic, Sensor, and Discrete (O-S-D) report gives a nice comparison of the image sensor business with the other O-S-D product categories:

LiDAR Patents Review

EETimes publishes Junko Yoshida's article "Who’s the Lidar IP Leader?" A few quotes:

"Pierre Cambou, activity leader for imaging and sensors at market-research firm Yole Développement (Lyon, France), said he can’t imagine a robotic vehicle without lidars.

Qualcomm, LG Innotek, Ricoh and Texas Instruments... contributions are “reducing the size of lidars” and “increasing the speed with high pulse rate” by using non-scanning technologies. Quanergy, Velodyne, Luminar and LeddarTech... focus on highly specific patented technology that leads to product assertion and its application. Active in the IP landscape are Google, Waymo, Uber, Zoox and Faraday Future. Chinese giants such as Baidu and Chery also have lidar IPs.

Notable is the emergence of lidar IP players in China. They include LeiShen, Robosense, Hesai, Bowei Sensor Tech.
"

Sunday, April 22, 2018

Trinamix and Andanta Company Presentations

Spectronet publishes presentations of two small German image sensor companies - Trinamix and Andanta:


As for 3D imaging, Trinamix complements its initial "chemical 3D imager" idea with a more traditional structured light approach:


Andanta too publishes some info about the company and its products:

Saturday, April 21, 2018

Stretchcam

Columbia University, Northwestern University and the University of Tokyo publish a paper "Stretchcam: Zooming Using Thin, Elastic Optics" by Daniel C. Sims, Oliver Cossairt, Yonghao Yu, Shree K. Nayar:

"Stretchcam is a thin camera with a lens capable of zooming with small actuations. In our design, an elastic lens array is placed on top of a sparse, rigid array of pixels. This lens array is then stretched using a small mechanical motion in order to change the field of view of the system. We present in this paper the characterization of such a system and simulations which demonstrate the capabilities of stretchcam. We follow this with the presentation of images captured from a prototype device of the proposed design. Our prototype system is able to achieve 1.5 times zoom when the scene is only 300 mm away with only a 3% change of the lens array's original length."

Friday, April 20, 2018

Thursday, April 19, 2018

Leonardo DRS Launches 10um Pixel Thermal Camera

PRNewswire: The pixel race goes on in microbolometer sensors. Leonardo DRS launches its Tenum 640 thermal imager, the first uncooled 10um pixel thermal camera core for OEMs.

The Tenum 640 thermal camera module combines the small 10um pixel with a sensitive vanadium oxide microbolometer in a 640 x 512 array, providing exceptional LWIR imaging at up to 60fps. The high-resolution LWIR camera core features image contrast enhancement, called "ICE™", 24-bit RGB and YUV (4:2:2) output, and a sensitivity of less than 50 mK NETD.

"The Tenum 640 represents the most advanced, uncooled and cost-effective infrared sensor design available to OEM's today," said Shawn Black, VP and GM of the Leonardo DRS EO&IS business unit. "Our market-leading innovative technologies, such as the Tenum 640, continue to enable greater affordability while delivering uncompromising thermal imaging performance for our customers."

Face Recognition Startup Raises $600m on $3b Valuation

Bloomberg, Teslarati: The 3-year-old Chinese startup SenseTime raises $600m from Alibaba Group, Singaporean state firm Temasek Holdings, retailer Suning.com, and other investors at a valuation of more than $3b ($4.5b, according to Reuters), becoming the world’s most valuable face recognition startup. By the way, the second largest Chinese facial-recognition startup, Megvii, raised $460m last year.

The Qualcomm-backed company specializes in systems that analyse faces and images on an enormous scale. SenseTime turned profitable in 2017 and wants to grow its workforce to 2,000 by the end of this year. With the latest financing deal, SenseTime has doubled its valuation in a few months.


Wednesday, April 18, 2018

Corephotonics Signs Broad Agreement with Oppo

OPPO, one of the largest smartphone manufacturers in China, signs a strategic licensing agreement with Corephotonics, the licensor of dual camera technologies. Under the agreement, OPPO will collaborate with Corephotonics on developing its smartphone camera roadmap – supporting high optical zoom factors, accurate depth mapping, digital bokeh and other advanced features, all involving innovations in optics, mechanics, computational photography, deep learning and other fields.

“Mobile photography is a key focus of OPPO, and we have always been eager to forge strong partnerships with leading suppliers like Corephotonics,” said Dr. King, OPPO’s Hardware Director. “Corephotonics’ dual cameras with wide-angled and telephoto lenses, along with the periscope-style construction, optical image stabilization and image fusion technology, edge mobile photography even closer to what digital cameras are capable of doing.”

“OPPO has the most impressive record of innovation in the field of smartphone imaging,” affirmed David Mendlovic, CEO of Corephotonics. “We are proud to be working closely with the OPPO teams on their next generation camera technologies. This strategic agreement is a major validation of the benefits that our camera designs and imaging algorithms have on the future of mobile photography.”

Tuesday, April 17, 2018

LargeSense Unveils 4K Video-Capable Large Format Camera

PRNewswire: LargeSense LLC launches the LS911, said to be the first 8x10 large format digital camera for sale. Everything in this announcement looks, well, impressively large:

  • 9x11-inch digital sensor. The company tends to refer to it as 8x10, as that is the closest format that people search for.
  • 75 micron pixel size
  • 12MP resolution (a quick consistency check of these figures follows this list)
  • Live view for easy focusing
  • Video modes (all progressive):
    - Up to 26fps: Full sensor scan 3888 x 3072 and optional crop sizes: 3840 x 2160, 3840 x 1600, 1920 x 1080.
    - Up to 70fps with 2x2 pixel binning: Full frame: 1944 x 1536
  • 14 bit ADC
  • The US price is $106,000, available now
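
For readers who like to check the numbers, here is a quick back-of-the-envelope Python sketch confirming that the pixel pitch, resolution and sensor dimensions quoted above are mutually consistent (the values are taken from the announcement; the script itself is just an illustration):

    MM_PER_INCH = 25.4
    pixel_um = 75.0           # pixel pitch from the spec list above
    cols, rows = 3888, 3072   # full sensor scan resolution

    width_in = cols * pixel_um / 1000.0 / MM_PER_INCH    # ~11.5 inches
    height_in = rows * pixel_um / 1000.0 / MM_PER_INCH   # ~9.1 inches
    megapixels = cols * rows / 1e6                       # ~11.9 MP

    print(f"active area ~{width_in:.1f} x {height_in:.1f} in, ~{megapixels:.1f} MP")
    # -> active area ~11.5 x 9.1 in, ~11.9 MP, in line with the 9x11-inch,
    #    12MP, 75um-pixel figures above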

The LS911 sensor uses a rolling shutter. Rolling shutters can produce distortions, but the LS911 is said to handle them well, as shown in the test video:
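
To get a rough feel for how much skew a rolling shutter of this speed could introduce, here is a simple Python estimate; the subject speed and distance below are made-up assumptions for illustration, not LS911 specifications:

    import math

    # Illustrative rolling-shutter skew estimate (assumed numbers, not specs)
    frame_rate_fps = 26.0                   # full-scan video mode quoted above
    readout_time_s = 1.0 / frame_rate_fps   # worst case: readout spans the frame period
    subject_speed_mps = 1.0                 # subject moving sideways at 1 m/s
    subject_distance_m = 3.0                # subject distance from the camera

    shift_m = subject_speed_mps * readout_time_s                 # motion during readout
    skew_deg = math.degrees(math.atan2(shift_m, subject_distance_m))
    print(f"top-to-bottom shear: ~{shift_m * 100:.1f} cm (~{skew_deg:.2f} deg)")
    # -> roughly 3.8 cm of shear across the frame under these assumptions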



Anitoa Ultra-low Light Bio-optical Sensor in Volume Production

PRNewswire: Anitoa, a Menlo Park, CA startup founded in 2012, announces the volume production of its ultra-low-light CMOS bio-optical sensor, ULS24. Capable of 3x10^-6 lux low-light detection, the Anitoa ULS24 is said to be "the world's most sensitive image sensor manufactured with proven low-cost CMOS image sensor (CIS) process technology."

Until now, molecular testing such as DNA or RNA assays, and immunoassay testing (e.g. ELISA), have relied on traditionally bulky and expensive photomultiplier tube (PMT) or cooled CCD technologies. "Following the trend of CMOS image sensors replacing CCDs in consumer cameras, many customers are exploring this CMOS bio-optical sensor to replace CCD or PMT designs for new products," says Anitoa SVP Yuping Chung. With the Anitoa ULS24 now in volume production, its low-light sensitivity is said to rival the PMTs and CCDs used in molecular and immunoassay testing devices. The ULS24 achieves this high level of sensitivity through a temperature-compensated dark current management algorithm.
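
The announcement does not detail Anitoa's algorithm, but the general principle of temperature-compensated dark current management can be sketched: silicon dark current roughly doubles every 6-8 degC, so a stored dark reference can be rescaled to the current die temperature before subtraction. The Python sketch below is a generic illustration of that idea with hypothetical names and values, not Anitoa's actual implementation:

    import numpy as np

    def compensate_dark_current(raw_frame, dark_ref, t_ref_c, t_now_c,
                                doubling_interval_c=7.0):
        """Generic temperature-scaled dark-frame subtraction (illustrative only).

        Silicon dark current roughly doubles every ~6-8 degC, so a dark
        reference captured at t_ref_c is rescaled to the current die
        temperature t_now_c before being subtracted.
        """
        scale = 2.0 ** ((t_now_c - t_ref_c) / doubling_interval_c)
        return raw_frame - dark_ref * scale

    # Toy usage: dark reference taken at 25 degC, frame captured at 32 degC
    rng = np.random.default_rng(0)
    dark_ref = rng.poisson(5.0, size=(24, 24)).astype(float)
    frame = rng.poisson(10.0, size=(24, 24)).astype(float) + 3.0  # ~2x dark + signal
    corrected = compensate_dark_current(frame, dark_ref, t_ref_c=25.0, t_now_c=32.0)
    print(corrected.mean())   # close to the injected signal level of ~3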

Cadence Presents Tensilica Vision Q6 DSP

Cadence announces the Tensilica Vision Q6 DSP, its latest DSP for embedded vision and AI built on a new, faster processor architecture. The fifth-generation Vision Q6 DSP offers 1.5X greater vision and AI performance than its predecessor, the Vision P6 DSP, and 1.25X better power efficiency at the Vision P6 DSP’s peak performance. The Vision Q6 DSP is targeted for embedded vision and on-device AI applications in the smartphone, surveillance camera, automotive, augmented reality (AR)/virtual reality (VR), drone and robotics markets.

NIT Announces HDR Sensor with LED Flicker Mitigation

NIT presents its NSC 1701 HDR Global Shutter CMOS sensor featuring a new Light Flicker Mitigation mode and a 12b digital output.
  • 1280 x 1024 pixels resolution
  • 6.8µm pixel pitch
  • On-chip 12-bit ADC
  • 1.3 MP
  • Light Flicker Mitigation
  • Color or Monochrome
The NSC1701 sensor is aimed at industrial and emerging embedded applications and, with its new flicker mitigation mode, at the automotive market. Engineering samples are available now, and mass production is planned for June 2018.

Monday, April 16, 2018

ABI Research: Face Recognition is 5x Easier to Spoof than Fingerprint Recognition

ABI Research: “Face recognition on smartphones is five times easier to spoof than fingerprint recognition,” stated ABI Research Industry Analyst Dimitrios Pavlakis. “Despite the decision to forgo its trademark sapphire sensor in the iPhone X in favor of face recognition (FaceID), Apple may now be forced to return to fingerprints in the next iPhone,” added Pavlakis.

ABI also comments on Synaptics under-display fingerprint sensor in Vivo X20+ smartphone:

“Vivo may have been cautious to fully commit to the new technology and left room to fall back to a traditional sensor below the display,” said Jim Mielke, ABI Research’s VP of the Teardowns service. “The performance of this first implementation does warrant some caution as the sensor seemed less responsive and required increased pressure to unlock the phone.”

Samsung 0.9um TetraCell Sensor Reverse Engineered

TechInsights publishes a reverse engineering analysis of Samsung's first 0.9um pixel sensor, found in the 24MP selfie camera of the Vivo V7+ smartphone:

"We did not expect to see the Samsung S5K2X7SP image sensor until Q2, 2018…but here it is in the Vivo V7+."

Omnivision Announces 8MP 2um Nyxel Sensor

PRNewswire: OmniVision announces the 8MP OS08A20, equipped with a 2um pixel featuring Nyxel NIR technology. The OS08A20 is the first sensor to combine Nyxel technology with the PureCel pixel architecture.

"With 8 megapixels of resolution and our industry-leading Nyxel technology, the OS08A20 allows surveillance cameras to capture accurate and detailed images at night, without the need for high-power LEDs," said Brian Fang, Business Development Director at OmniVision. "With such capabilities, this sensor also fills a need in emerging applications, such as video analytics, where accurate object and facial recognition is aided by higher resolution and sensitivity."

Demand for surveillance cameras continues to grow, with well over 125 million such cameras expected to ship globally in 2018, according to IHS Markit. Other applications with similar requirements, such as body-worn cameras for law enforcement, represent an additional growth opportunity.

Nyxel technology delivers a QE improvement at 850nm and 940nm while maintaining a high modulation transfer function, allowing the OS08A20 to monitor a larger area compared with legacy technologies. Eliminating the need for external lighting sources reduces power consumption and enables covert surveillance for improved security. The OS08A20 is also a color CMOS image sensor, employing the PureCel pixel architecture with BSI to capture color images during the daytime.

The OS08A20 is currently sampling and is expected to start volume production in Q2 2018.

Friday, April 13, 2018

Omnivision Proposes Adding Shield Bumps to Pixel Level Interconnect

Omnivision patent application US20180097030 "Stacked image sensor with shield bumps between interconnects" by Sohei Manabe, Keiji Mabuchi, Takayuki Goto, Vincent Venezia, Boyd Albert Fowler, and Eric A. G. Webster reduces coupling in a stacked sensor with pixel-level interconnects:

"One of the challenges presented with conventional stacked image sensors is the unwanted capacitive coupling that exists between the adjacent interconnection lines between the first and second dies of the stacked image sensors that connect the photodiodes to the pixel support circuits. The capacitive coupling between the adjacent interconnection lines can cause interference or result in other unwanted consequences between adjacent interconnection lines when reading out image data from the photodiodes."


"As such, there are also shield bumps 520 disposed between adjacent interconnection lines 518 along each of the diagonals A-A′ and/or B-B′ of the pixel array of stacked imaging system 500 in accordance with the teachings of the present invention. As such, when every other pixel cell in two rows of the pixel array included in stacked imaging system 500 are read out at a time, there is a shield bump 520 disposed the corresponding interconnect lines 518 in accordance with the teachings of the present invention. With a shield bump 520 disposed between adjacent interconnection lines 518, the coupling capacitance is eliminated to reduce unwanted interference, crosstalk, and the like, during readouts of stacked image sensor 500 in accordance with the teachings of the present invention."

Image Sensors at 2018 VLSI Symposia

The VLSI Symposia, to be held on June 18-22 in Honolulu, Hawaii, publish their official Circuits and Technology programs. In total, there are 8 image sensor papers:
  • C7‐1 A 252 × 144 SPAD Pixel FLASH LiDAR with 1728 Dual‐clock 48.8 ps TDCs, Integrated Histogramming and 14.9‐to‐1 Compression in 180nm CMOS Technology,
    S. Lindner, C. Zhang*, I. Antolovic*, M. Wolf**, E. Charbon***,
    EPFL/University of Zurich, *TUDelft, **University of Zurich, ***EPFL/TUDelft 
  • C7‐2 A 220 m‐Range Direct Time‐of‐Flight 688 × 384 CMOS Image Sensor with Sub‐Photon Signal Extraction (SPSE) Pixels Using Vertical Avalanche Photo Diodes and 6 kHz Light Pulse Counters,
    S. Koyama, M. Ishii, S. Saito, M. Takemoto, Y. Nose, A. Inoue, Y. Sakata, Y. Sugiura, M. Usuda, T. Kabe, S. Kasuga, M. Mori, Y. Hirose, A. Odagawa, T. Tanaka,
    Panasonic Corporation 
  • C7‐3 Multipurpose, Fully‐Integrated 128x128 Event‐Driven MD‐SiPM with 512 16‐bit TDCs with 45 ps LSB and 20 ns Gating,
    A. Carimatto, A. Ulku, S. Lindner*, E. D’Aillon, S. Pellegrini**, B. Rae**, E. Charbon*,
    TU Delft, *EPFL, **ST Microelectronics 
  • C7‐4 A Two‐Tap NIR Lock‐In Pixel CMOS Image Sensor with Background Light Cancelling Capability for Non‐Contact Heart Rate Detection,
    C. Cao, Y. Shirakawa, L. Tan, M. W. Seo, K. Kagawa, K. Yasutomi, T. Kosugi*, S. Aoyama*, N. Teranishi, N. Tsumura**, S. Kawahito,
    Shizuoka University, *Brookman Technology, **Chiba University
  • T7‐2 An Over 120 dB Wide‐Dynamic‐range 3.0 μm Pixel Image Sensor with In‐pixel Capacitor of 41.7 fF/µm2 and High Reliability Enabled by BEOL 3D Capacitor Process,
    M. Takase, S. Isono, Y. Tomekawa, T. Koyanagi, T. Tokuhara, M. Harada, Y. Inoue,
    Panasonic Corporation
  • T15‐4 Next‐generation Fundus Camera with Full Color Image Acquisition in 0‐lx Visible Light by 1.12‐micron Square Pixel, 4K, 30‐fps BSI CMOS Image Sensor with Advanced NIR Multi‐spectral Imaging System,
    H. Sumi, T. Takehara*, S. Miyazaki*, D. Shirahige*, K. Sasagawa*, T. Tokuda*, Y. Watanabe*, N. Kishi, J. Ohta*, M. Ishikawa,
    The University of Tokyo, *NAIST
  • T15‐2 A Near‐ & Short‐Wave IR Tunable InGaAs Nanomembrane PhotoFET on Flexible Substrate for Lightweight and Wide‐Angle Imaging Applications,
    Y. Li, A. Alian*, L. Huang, K. Ang, D. Lin*, D. Mocuta*, N. Collaert*, A. V-Y Thean,
    National University of Singapore, *IMEC
  • C23‐2 A 2pJ/pixel/direction MIMO Processing based CMOS Image Sensor for Omnidirectional Local Binary Pattern Extraction and Edge Detection,
    X. Zhong, Q. Yu, A. Bermak**, C.‐Y. Tsui, M.‐K. Law*,
    Hong Kong University of Science and Technology, *University of Macau, **also with Hamad Bin Khalifa University

Thursday, April 12, 2018

Luminar Acquires Black Forest Engineering

Optics.org, Techcrunch: Colorado Springs-based image sensor and ROIC design house Black Forest Engineering has been acquired by LiDAR startup Luminar:

"“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”

...the production unit is about 30 percent lighter and more power efficient, can see a bit further (250 meters vs 200), and detect objects with lower reflectivity (think people wearing black clothes in the dark).

The secret is the sensor. Most photosensors in other lidar systems use a silicon-based photodetector. Luminar, however, decided to start from the ground up with InGaAs.

The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by minimizing the amount of InGaAs used: only a tiny sliver of it is used where it’s needed, and they engineered around that rather than use the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single laser method.)

Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked, Luminar bought them. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, brings the company to 350 total.

By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.

“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”

“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
"


Update: IEEE Spectrum publishes a larger image of Luminar's InGaAs sensors:

Spectral Edge Raises $5.3M

Remember the times when ISP startups were popular - Nethra, Insilica, Alphamosaic, Nucore, Atsana, Mtekvision, etc.? With AI and machine learning in fashion, those times might come back. EETimes reports that UK-based Spectral Edge has raised $5.3m. The new startup bets on fusion of IR and RGB images, claiming improvements in image quality:
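
Spectral Edge does not disclose its algorithm, but a very simple flavor of RGB+NIR fusion, transferring high-frequency detail from the NIR channel into the RGB image, can be sketched in a few lines of Python. This is a generic illustration under assumed names and parameters, not Spectral Edge's method:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuse_rgb_nir(rgb, nir, sigma=3.0, gain=0.5):
        """Toy RGB+NIR detail-transfer fusion (illustrative only).

        rgb : HxWx3 float array in [0, 1]
        nir : HxW float array in [0, 1], registered to rgb
        High-frequency detail from the NIR channel is added back into the
        RGB image, which tends to boost texture and low-light detail.
        """
        nir_detail = nir - gaussian_filter(nir, sigma)   # NIR high-pass
        fused = rgb + gain * nir_detail[..., None]       # add detail to all channels
        return np.clip(fused, 0.0, 1.0)

    # Usage with placeholder images
    rgb = np.random.rand(120, 160, 3)
    nir = np.random.rand(120, 160)
    fused = fuse_rgb_nir(rgb, nir)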

SWIR Camera Market

Esticast Research and Consulting publishes a Shortwave Infrared (SWIR) Camera Market report. The key findings in the report:

  • North America held the largest chunk of market share in 2016 owing to rapid technical development and increasing applications.
  • China and other Asian countries are expected to grow the fastest during the forecast period.
  • Area cameras held more than 50% of the global market share. However, linear cameras are expected to grow with the fastest growth rate of 8.31% during the forecast period.
  • Optical communication dominated the global market in 2017, holding nearly three-sevenths (about 43%) of the global market.
  • Aerial SWIR cameras are expected to witness the highest CAGR of 10.03% during the forecast period.


Qualcomm Unveils Vision Intelligence Platform

PRNewswire: Qualcomm announces its Vision Intelligence Platform featuring the company's first family of SoCs for IoT built in a 10nm FinFET process. The QCS605 and QCS603 SoCs deliver computing for on-device camera processing and machine learning across a wide range of IoT applications. The SoCs integrate Qualcomm's most advanced ISP to date and an Artificial Intelligence (AI) Engine, along with a heterogeneous compute architecture including an ARM-based multicore CPU, a vector processor and a GPU. The Vision Intelligence Platform also includes Qualcomm Technologies' advanced camera processing software, machine learning and computer vision SDKs, as well as connectivity and security technologies.

"Our goal is to make IoT devices significantly smarter as we help customers bring powerful on-device intelligence, camera processing and security. AI is already enabling cameras with object detection, tracking, classification and facial recognition, robots that avoid obstacles autonomously, and action cameras that learn and generate a video summary of your latest adventure, but this is really just the beginning," said Joseph Bousaba, VP, product management, Qualcomm. "The Qualcomm Vision Intelligence Platform is the culmination of years of advanced research and development that brings together breakthrough advancements in camera, on-device AI and heterogeneous computing. The platform is a premier launchpad for manufacturers and developers to create a new world of intelligent IoT devices."

The Vision Intelligence Platform supports up to 4K video resolution at 60 fps, or 5.7K at 30 fps, as well as multiple concurrent video streams at lower resolutions. The platform integrates a dual 14-bit Spectra 270 ISP supporting dual 16 MP sensors. In addition, it includes vision processing capabilities needed across IoT segments, such as staggered HDR to prevent the "ghost" effect in HDR video, electronic image stabilization, de-warp, de-noise, chromatic aberration correction, and motion compensated temporal filters in hardware.

The QCS605 and QCS603 are sampling now.

Wednesday, April 11, 2018

SPAD-based HDR Imaging

MDPI Sensors keeps publishing expanded papers from the 2017 International Image Sensor Workshop. ST Micro and the University of Edinburgh present "High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors" by Neale A.W. Dutton, Tarek Al Abbas, Istvan Gyongy, Francescopaolo Mattioli Della Rocca, and Robert K. Henderson.

"This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm."