
Simultaneous alternative views

Embedded vision systems are providing more opportunities for machines to see the world the way our eyes see it (and our brains interpret it), but variants of these technologies are also enabling systems to see things in ways we can’t.

Imec has just announced a new “hyperspectral” camera system for use in medical and industrial inspection systems, or anywhere specific filters are needed to understand particular characteristics of whatever is being viewed. In such situations, simply looking at one band of light may not be enough; a complement of filters may be needed either to provide a signature or to evaluate multiple characteristics at the same time.
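As a toy illustration of the “signature” idea, imagine comparing a pixel’s response across a handful of filter bands against known reference spectra. The band count, class names, and values below are invented for the example, not anything from imec.

```python
import numpy as np

# Toy "signature" matching: a pixel's response through several filter bands is
# compared against known reference spectra. All numbers here are made up.
references = {
    "healthy_tissue": np.array([0.12, 0.35, 0.60, 0.42]),
    "suspect_tissue": np.array([0.15, 0.22, 0.48, 0.70]),
}
measured = np.array([0.14, 0.24, 0.50, 0.66])   # response through four filters

# Pick the reference whose normalized spectrum is closest to the measurement.
best = min(references, key=lambda k: np.linalg.norm(
    references[k] / np.linalg.norm(references[k])
    - measured / np.linalg.norm(measured)))
print(best)   # -> "suspect_tissue"
```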

One way this is done today is to take separate images, each through a different filter, time-domain multiplexed. This slows the overall frame rate, dividing it by the number of filters. The new approach captures full-frame-rate images through all filters simultaneously.
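To make the frame-rate difference concrete, here’s a bit of back-of-the-envelope arithmetic. The sensor speed and filter count are made-up numbers, not imec specs.

```python
# Illustrative arithmetic only: assumed numbers, not imec specifications.
sensor_fps = 180        # hypothetical native frame rate of the imager
num_filters = 16        # hypothetical number of spectral bands

# Time-domain multiplexing: one exposure per filter, so the effective
# hyperspectral frame rate is divided by the number of filters.
tdm_fps = sensor_fps / num_filters          # 11.25 frames/s

# Simultaneous (tiled-filter) capture: every band is acquired in one exposure.
snapshot_fps = sensor_fps                   # 180 frames/s

print(f"time-multiplexed: {tdm_fps} fps, snapshot: {snapshot_fps} fps")
```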

There are actually two versions of this, which imec calls “snapshot” and “linescan.” We’ll look at snapshot first, as this is particularly new. It’s intended for image targets that are either stationary or moving in a random way (or, more specifically, not moving in a line as if on a conveyor belt). The imaging chip is overlaid with tiles, each of which is a different filter.
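As a rough software-side sketch of what that tiled layout implies, the snippet below carves a raw frame into one low-resolution image per filter tile. The 4×4 tile grid and 2048×2048 sensor are assumptions for illustration, not imec’s actual geometry.

```python
import numpy as np

# Hypothetical sensor: 2048 x 2048 pixels overlaid with a 4 x 4 grid of
# filter tiles, so each spectral band sees a 512 x 512 sub-image.
# (Tile count and resolution are assumptions, not imec's actual layout.)
TILES_PER_SIDE = 4
raw = np.random.randint(0, 4096, (2048, 2048), dtype=np.uint16)  # stand-in frame

tile_h = raw.shape[0] // TILES_PER_SIDE
tile_w = raw.shape[1] // TILES_PER_SIDE

# Slice the raw frame into one low-resolution image per filter tile.
bands = [
    raw[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
    for r in range(TILES_PER_SIDE)
    for c in range(TILES_PER_SIDE)
]

cube = np.stack(bands)          # shape: (16, 512, 512) spectral "cube"
print(cube.shape)
```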

The filters are the last masked layers of the chip – this is monolithic integration, not assembly after the fact. Each filter consists of two reflecting surfaces with a cavity between them; the size of the cavity determines the wavelength that passes. This means that the final chip will actually have a non-planar surface because of the different cavity sizes – and therefore different heights – of the filter layer. Because these are the last layers to be processed, base wafers can conveniently be staged and then finished with custom filter patterns.
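Two mirrors with a cavity between them form a Fabry-Pérot filter, so the passed wavelength scales with the cavity height. The sketch below applies the textbook resonance condition m·λ = 2nL; the refractive index and cavity heights are illustrative assumptions, and real filter stacks also have to account for mirror phase shifts and dispersion.

```python
# Simplified Fabry-Perot resonance condition: a cavity of optical thickness
# n * L transmits wavelengths satisfying m * lambda = 2 * n * L (m = 1, 2, ...).
# Mirror phase shifts and dispersion are ignored; the numbers below are
# illustrative, not imec's actual cavity dimensions.

def resonant_wavelengths_nm(cavity_nm, n_index=1.45, orders=(1, 2, 3)):
    """Return the transmitted wavelengths (nm) for a given cavity height."""
    return [2 * n_index * cavity_nm / m for m in orders]

for cavity in (200, 250, 300):   # three hypothetical cavity heights in nm
    print(cavity, "nm cavity ->", resonant_wavelengths_nm(cavity))
```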

The camera lens then directs, or duplicates, the entire scene onto every tile. Perhaps it’s better to think of it as an array of lenses, much like a crude plenoptic lens. This gives every filter the full image at the same time; the tradeoff against time-domain multiplexing filters over a single lens is that each filtered image gets the resolution of one tile of the imaging chip, not of the entire chip.

A camera optimized for linescan applications doesn’t use the plenoptic approach; instead, the filter tiles are made very small and, as the target moves under the camera at a known rate, multiple low-resolution images are captured and then stitched back together using computational photography techniques.
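A toy reconstruction, assuming the target advances exactly one strip height per frame, might look like the snippet below. Real linescan processing adds registration and motion calibration, and each band’s image ends up spatially offset by its strip position; the strip height, band count, and frame count are invented for the sketch.

```python
import numpy as np

# Idealized line-scan reconstruction: the object moves under the camera by
# exactly one strip height per frame, so each narrow filter strip sees a new
# slice of the scene. Real systems need registration, motion calibration, and
# alignment of the per-band offsets; all sizes here are made up.
STRIP_H, WIDTH, NUM_BANDS, NUM_FRAMES = 8, 640, 4, 100

frames = np.random.rand(NUM_FRAMES, NUM_BANDS * STRIP_H, WIDTH)  # stand-in captures

# For each band, take its strip out of every frame and stack the strips in
# acquisition order, rebuilding a full-height image per spectral band.
cube = np.stack([
    np.concatenate([frames[t, b * STRIP_H:(b + 1) * STRIP_H, :]
                    for t in range(NUM_FRAMES)], axis=0)
    for b in range(NUM_BANDS)
])
print(cube.shape)   # (4 bands, 800 rows, 640 columns)
```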

The lensing system also shapes the bandwidth characteristics. The filters themselves can be designed with wider or narrower bandwidths and with or without gaps between them, allowing anything from continuous coverage to discrete lines. A collimating lens directs light onto the filters in straight lines, providing narrow bandwidth; a lens that delivers conical light gives wider bandwidth, since rays arriving at different angles interfere differently in the filter cavities. The aperture size then acts as a bandwidth knob.
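One way to picture the aperture-as-bandwidth-knob effect: in a Fabry-Pérot filter the resonance shifts with the angle of incidence, so a cone of rays smears the transmission peak into a band. The sketch below uses the standard angle-shift approximation; the center wavelength and effective index are made-up numbers, not imec’s.

```python
import math

# Sketch of why a wider aperture broadens the passband: for a Fabry-Perot
# cavity the resonance shifts with angle of incidence roughly as
#   lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)**2),
# so a cone of rays spanning 0..theta_max smears the peak over a band.
# lambda_0 and n_eff below are illustrative assumptions.

def passband_width_nm(lambda0_nm=600.0, n_eff=1.7, theta_max_deg=15.0):
    theta = math.radians(theta_max_deg)
    lambda_min = lambda0_nm * math.sqrt(1 - (math.sin(theta) / n_eff) ** 2)
    return lambda0_nm - lambda_min   # spread between normal and steepest ray

for cone in (5, 10, 20):             # wider aperture -> wider cone of angles
    print(f"{cone:>2} deg cone -> ~{passband_width_nm(theta_max_deg=cone):.1f} nm broadening")
```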

Imec has put together a development kit that allows designers to figure out which filters to use for a given linescan application; they’ll be providing one for snapshot cameras as well. Each filter configuration is likely to be very specific to its application, making this something of a low-volume business for now. Because of that, and in order to grow the market, imec will actually be open for commercial production of these systems.

You can find more details in their release.
