feature article

Reprogrammable Logic Drives Automotive Vision Systems Design

Automotive electronics system development teams increasingly grapple with the challenges of new industry standards and product features.  An automotive vision system is an example of a design that must deliver improved performance, system integration and security.  In turn, these requirements dictate the use of Field Programmable Gate Array (FPGA) devices in an automotive vision system.

Many of the automotive vision systems now being developed specify several cameras in the design for different views, such as reverse, side and forward (Figure 1).  Depending on the application, the system designer can choose from among several types of automotive cameras.  Even different cameras from the same manufacturer may use different physical connections and communications protocols (LVDS, ITU-R BT.656, Camera Link, Digital RGB, PAL, NTSC).  Each of the connection/protocol schemes requires a different process for converting the video data stream into a digital format suitable for processing.  The challenge is to create a transport and processing design that uses the fewest possible components and achieves the desired performance, while also managing the overall system cost to budget.

Figure 1. Automotive Camera Applications
(Picture courtesy of Micron Technology)

To meet that challenge, a growing number of automotive designers are implementing designs using FPGAs for video processing.  Because they are reprogrammable, FPGAs allow the automotive designer to achieve greater performance and flexibility in the design’s feature set.  Off-the-shelf Intellectual Property (IP) cores are available to facilitate and accelerate the design process by performing most of the specific video processing tasks: MPEG decoding / encoding, gamma correction, color separation, scaling, adaptive image enhancement and shape recognition.

Security concerns are another challenge for automotive designs as more aftermarket companies sell “tweaks” that change the operation of automotive electronic systems.  The manufacturer’s IP, in the form of the FPGA or processor operating code and logic, typically is unprotected and can be accessed for reverse engineering, duplication or unauthorized modifications.  These modifications may include changing operational parameters for an engine controller or offering additional features, such as the ability to show DVD movies on the navigation screen.  Frequently these changes can compromise the design specifications for vehicle operation or safety, which can lead to mechanical damage, injuries or loss of life.  Newer FPGAs, such as the LatticeECP2 devices, use internal security to defeat these attacks on the operating code.

Video Processing: Distributed or Centralized?

Automotive designers continuously look for ways to reduce vehicle weight while adding functionality.  If multiple cameras are to be added around the vehicle, the designer must closely review the transport method of the video streams: the wiring medium must be reviewed for speed, noise susceptibility, weight and cost.

Part of the review process is deciding whether the video processing will be distributed and performed remotely at each of the cameras and the results sent to a central controller unit for review and display, or if all the raw video streams will be sent directly to a central control unit for centralized processing.  Several EDA companies have developed design tools that help the automotive system designer analyze automotive systems and decide whether to embed functionality in local silicon or transport the data over a vehicle wiring harness.

Distributed Processing Requirements and Examples

Figure 2. Distributed/Remote Video Processing

Distributed video processing requires an Electronic Control Module mounted on or very near the camera for signal integrity.  Micron Technology1 makes several CMOS Imagers that are suitable for automotive use.  One of the cameras, the MT9V111, outputs 8 bits of parallel data along with two synchronization signals and one clock signal.  The resulting video can be processed within either an FPGA or a microprocessor.  The local processing unit then can use the appropriate vehicle media bus, such as MOST, 1394 or CAN, to send the resulting information to the central control unit.  This information could be a partial or complete video stream, or just a metadata message with a summary of the video content.

Three examples of distributed/remote processing are shown in Figure 2.  The first two examples show the MT9V111 camera, which has an 8-bit parallel output in an ITU-R BT.656 or RGB format.  The third example shows the Micron MT9V125 camera, which has a temperature range of –40°C to +105°C.  This camera is unique because it has several physical output options: 10B/12B LVDS, 8-bit parallel and NTSC analog.  In all three examples, the local processor receives the video output in an ITU-R BT.656 8-bit parallel format for decoding and processing.  In addition, the FPGA or microprocessor generates the 27MHz clock source for the CMOS imager and has a two-wire serial connection to the imager device for control and setup of the internal registers of the camera, with options such as white balance, exposure, color correction and luminance settings.
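The register setup sequence over the two-wire serial bus can be sketched as below.  Note that the device address, register addresses and values here are hypothetical placeholders for illustration only, not the actual MT9V111/MT9V125 register map; a real design would take them from the imager datasheet.

```python
# Hedged sketch: camera setup over the two-wire serial (I2C-style) bus.
# CAMERA_I2C_ADDR and all register addresses/values are HYPOTHETICAL
# placeholders, not the real Micron register map.

CAMERA_I2C_ADDR = 0x48  # hypothetical 7-bit device address


def configure_camera(bus, settings):
    """Write each (register, value) pair to the imager over the bus.

    `bus` stands in for an I2C master; here it is just a dict keyed by
    (device address, register address)."""
    for reg, val in settings.items():
        bus[(CAMERA_I2C_ADDR, reg)] = val  # one two-wire write transaction


# Example power-up configuration (all values hypothetical)
bus = {}
configure_camera(bus, {
    0x05: 0x0001,  # enable automatic white balance
    0x09: 0x0120,  # exposure setting
    0x2B: 0x0040,  # luminance/gain setting
})
```

In a real system the same dictionary of settings could be kept in the local processor's flash, so a field update changes camera behavior without touching the wiring.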

Figure 3. Transmitted Video Data For Centralized Processing

Centralized Processing Requirements and Examples

Transmission of camera video data for centralized processing also requires some minimal intelligence at the camera for setting up the internal registers.  These examples (Figure 3) show a low-cost microcontroller being used to communicate with the camera to perform the setup and control.  The microcontroller also can communicate with the rest of the vehicle systems using a CAN or other vehicle bus interface.  This allows the central control unit to issue configuration commands to the microcontroller over the bus to change camera setups and options.

The first example with the Micron MT9V111 imager has only the 8-bit parallel output, so it requires an LVDS (8B/10B or 10B/12B) serializer to transmit the video data.  The video stream from the camera is sent over LVDS on differential twisted pair wiring to the central control unit for processing.  It is possible to perform the setup and control in a state machine within an AEC-Q100 qualified crossover PLD (see sidebar, “What is a Crossover PLD?”), but unless there is more advanced processing to be performed on or near the camera, the microcontroller is the most cost-effective solution.

What is a Crossover PLD?

Crossover PLD is a new class of Programmable Logic Device, a term coined by Lattice Semiconductor in 2006 to describe its family of MachXO devices.  These devices are characterized by deterministic pin-to-pin timing and a combination of non-volatile FLASH cells and SRAM technology that delivers a single-chip solution supporting “instant-on” start-up and infinite re-configurability.  They support applications that traditionally have been addressed either by high-density CPLDs or low-capacity FPGAs, but with a more comprehensive and cost-effective architecture and technology.

Video Central Control Unit

The central control unit discussed in this article is a LatticeECP2-35 FPGA.  This device receives video data from three distributed cameras and one locally connected camera (Figure 4).  This example is a multipurpose video system that has inputs from external NTSC video and LVDS sources.  It is configured to show the finalized video data on a local video display and to drive Camera Link LVDS data to a rear entertainment display.  Because the focus of this article is video applications, the audio processing portion of the design is omitted.

Here is a description of the functional blocks of Figure 4, starting on the left side of the diagram:
A reprogrammable clock network management device controls the overall timing of the application.  It generates the master system clock for the FPGA and the 14.31818MHz clock required by the Texas Instruments NTSC video decoder devices.
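The 14.31818MHz figure is not arbitrary: it is four times the 3.579545MHz NTSC color subcarrier, a standard clock relationship for NTSC decoders.  The arithmetic can be checked directly:

```python
# The NTSC decoder clock is 4x the color subcarrier frequency.
NTSC_SUBCARRIER_HZ = 3_579_545              # NTSC color burst frequency
DECODER_CLOCK_HZ = 4 * NTSC_SUBCARRIER_HZ   # 14_318_180 Hz = 14.31818 MHz
```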

The first input is a 7:1 LVDS Channel Link data stream.  This interface is processed directly within the I/O structure of the FPGA (see sidebar, “Directly Processing 7:1 LVDS Signals with the LatticeECP2 FPGA”).  The 7:1 interface can be implemented in 4 to 6 twisted pairs depending on the color depth and resolution of the video signal.  This signal is being driven by a DVD player and has a bandwidth up to 75MHz.  Additional decryption logic inside the FPGA is required to support the HDCP protocol of the DVD player.
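To put rough numbers on this interface: each data pair of a 7:1 LVDS link carries 7 serial bits per pixel-clock cycle.  The sketch below is a simplification (it folds the sync bits into the per-clock payload), but it shows why the 75MHz DVD source fits comfortably within the 800Mbps LVDS buffer capability.

```python
def lvds_serial_rate_mbps(pixel_clock_mhz):
    """Serial bit rate per LVDS data pair: 7 bits per pixel clock."""
    return pixel_clock_mhz * 7


def lvds_data_pairs(bits_per_clock):
    """Data pairs needed, at 7 payload bits per pair per clock."""
    return -(-bits_per_clock // 7)  # ceiling division


# The DVD-player link above: 75MHz pixel clock -> 525Mbps per pair.
rate = lvds_serial_rate_mbps(75)

# 24-bit RGB plus HS/VS/DE sync (27 bits per clock) needs 4 data pairs;
# shallower color depths need fewer, hence the 4-to-6-pair range.
pairs = lvds_data_pairs(27)
```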

Figure 4. Central Video Processing System

The locally attached Micron CMOS imager requires a 27MHz clock that is generated using one of the FPGA’s on-chip PLLs.

Analog video input is accomplished using a TI TVP5140V NTSC decoder that accepts two composite inputs or a single S-video input source.  The output is an 8-bit parallel YCbCr 4:2:2 stream with sync signals.  These connections are available for input from external devices, such as a portable electronics player or game console.
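For overlay and display, the YCbCr samples eventually need conversion to RGB inside the FPGA.  A minimal software sketch of the standard ITU-R BT.601 conversion math (full-range 8-bit, one pixel at a time; a real 4:2:2 pipeline would also upsample the shared chroma samples first) is:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range 8-bit YCbCr sample to RGB (BT.601 math)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))  # keep results 8-bit
    return clamp(r), clamp(g), clamp(b)


# Neutral gray stays neutral: mid-scale chroma leaves luma unchanged.
assert ycbcr_to_rgb(128, 128, 128) == (128, 128, 128)
```

In hardware, the same multiplies map naturally onto the FPGA's embedded multiplier blocks with fixed-point coefficients.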

The three remote camera inputs are an example of the cost tradeoff the automotive designer must evaluate.  There are two ways to deal with the de-serializing and clock recovery of the LVDS 10B/12B video data.  One way is to use individual 10:1 LVDS de-serializers.  The de-serializers add a component cost of 2 to 3 dollars each to the design.  The alternative is to use an FPGA that has the ability to decode the LVDS bitstream internally.  However, because the clock must be recovered from the LVDS bitstream, this choice requires a more advanced and costly FPGA device that has internal SERDES channels.

The forward-looking (front) camera is a locally connected CMOS imager that is connected using a parallel interface, and the camera setup and control use the I2C communications bus that is implemented with logic inside the FPGA.

The FPGA configuration memory is on the top right side of the block diagram.  This is a small 8-pin SPI memory device that downloads the logic and function code into the FPGA on power-up.  The example shown has encrypted configuration code, which will be discussed later in this article.

Directly Processing 7:1 LVDS Signals with the LatticeECP2 FPGA

Source synchronous interfaces consisting of multiple data bits and clocks have become a common method for moving image data within electronic systems.  A prevalent standard is the 7:1 LVDS interface (also known as Channel Link, Flat Link and Camera Link), which has become a common standard in many electronic products including consumer devices, industrial control, medical and automotive telematics.
A unique feature of the LatticeECP2 Input/Output structure is the ability to directly receive and transmit 7:1 LVDS data streams.


This diagram shows how the 7:1 LVDS receiver is implemented in hardware.  On the left side, at the top, are the 4 LVDS data channels, with LVDS buffers able to support up to 800Mbps.  The data inputs drive the DDR input blocks, which have a 2x gearing that allows capturing 4 bits of data per clock cycle.  The last signal, at the bottom left, is the LVDS clock input that drives the internal PLL block.  The PLL generates the internal clocks: the ECLK (Edge Clock), driven by the CLKOS output of the PLL block, and the SCLK (System Clock, the FPGA fabric clock), driven by the CLKOK output.  CLKOS has a 3.5x multiplier and a 90-degree phase shift to properly capture the incoming data, whereas CLKOK is half of CLKOP with a 0-degree phase shift.  Inside the FPGA fabric is the 4:7 de-serializer logic that converts the incoming 4-bit parallel data from the I/O logic block into 7-bit parallel data.
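The 4:7 gearbox behavior can be modeled in software: the I/O block delivers a stream of 4-bit words (2 bits per clock edge from the DDR registers), and fabric logic repacks that bit stream into 7-bit pixel words.  The MSB-first bit ordering below is an assumption for illustration only; the real ordering is fixed by the I/O block.

```python
def gearbox_4_to_7(nibbles):
    """Repack a stream of 4-bit input words into 7-bit output words.

    Software model of the 4:7 de-serializer in the FPGA fabric;
    MSB-first bit ordering is assumed here for illustration."""
    # Flatten the 4-bit words into a single bit stream.
    bits = []
    for n in nibbles:
        bits.extend((n >> i) & 1 for i in range(3, -1, -1))

    # Regroup the bit stream into complete 7-bit words.
    words = []
    for i in range(0, len(bits) - len(bits) % 7, 7):
        w = 0
        for b in bits[i:i + 7]:
            w = (w << 1) | b
        words.append(w)
    return words


# 7 input nibbles (28 bits) yield exactly 4 seven-bit words.
assert gearbox_4_to_7([0xF] * 7) == [0x7F] * 4
```

The 7-and-4 periodicity (28 bits) is why the PLL runs the edge clock at 3.5x the pixel clock: two 4-bit captures per edge-clock cycle keep the gearbox fed.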

Software and Intellectual Property

The internal operation of the FPGA is partitioned in two parts.  The first is the customer-specific synchronous and asynchronous logic and IP that are used for most of the video processing such as scaling, switching, routing overlays and split-screen generation.  The other operational part includes the overall system management and control functions.  This part is implemented using the Micrium uC/OS-II Real Time Operating System2 (RTOS) coupled with the LatticeMico323 soft processor core, along with communications IP.

Micrium’s RTOS requires only a small footprint of 2K LUTs and is compliant with the Motor Industry Software Reliability Association4 (MISRA) C Coding Standards.

The LatticeMico32 is an Open-Source 32-bit microprocessor core and peripheral set that is available as a free reference design from Lattice.  Included in the LatticeMico32 are several I/O interfaces (Timer, DMA, UART, Memory Controllers) that connect to the processor core via the dual-WISHBONE interface.  This design also has a CAN interface that uses IP from CAST5, a VGA controller from OPENCORES.org and an optional MediaLB or MOST core.

Both the LatticeMico32 and uC/OS-II software packages are royalty free, so there is no charge for each instance of usage within the vehicle’s systems.

Operating Code Security

The only way for designers to secure proprietary IP in automotive applications is to use a form of data encryption.  Many automotive qualified SRAM-based FPGA devices have an open configuration bitstream between the boot device and the FPGA that allows interception or downloading of the internal FPGA logic by unauthorized users.  That information may be used to reverse engineer the device for duplication or modification.  The LatticeECP2 SRAM-based FPGA devices give the designer the option to use 128-bit AES encryption to secure the configuration code.  A unique 128-bit key is programmed into One-Time-Programmable memory inside the FPGA.  The configuration bitstream is encoded before it is programmed into the external configuration memory.  When the FPGA powers up, it copies the configuration code from the external memory over the SPI bus into the SRAM of the FPGA, decrypting the configuration data on the fly.  The designer has the option to build all units with the same encryption key or, for heightened security, each individual automotive unit can have a unique key.
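The power-up flow described above can be sketched as follows.  A repeating-XOR "cipher" stands in for the device's AES-128 engine purely to keep the example self-contained and runnable; it provides no real security, and the key and bitstream bytes are invented for illustration.

```python
def toy_cipher(data, key):
    """Stand-in for the AES-128 engine: repeating-XOR (NOT secure).

    XOR is its own inverse, so the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


OTP_KEY = bytes(range(1, 17))   # 128-bit key burned into on-chip OTP memory

# Design time: the bitstream is encrypted before the SPI flash is programmed.
bitstream = b"FPGA logic and soft-processor code"
spi_flash = toy_cipher(bitstream, OTP_KEY)

# Power-up: the FPGA reads the flash over SPI and decrypts on the fly,
# so plaintext configuration data never appears on the board.
loaded = toy_cipher(spi_flash, OTP_KEY)
assert loaded == bitstream      # configuration restored inside the FPGA
assert spi_flash != bitstream   # external memory never holds plaintext
```

With per-unit keys, a bitstream copied from one vehicle's flash fails to decrypt on any other unit, which defeats simple cloning.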

Summary

Designers are challenged to create automotive video systems that balance performance, signal integrity and weight, and that will remain cost-effective to manufacture.  Choosing the correct camera and wiring transport can help reduce overall component and wiring costs.  For many applications, there will be a combination of localized and distributed video processing to meet system goals.

Using reprogrammable components such as FPGAs and clock and power controllers reduces system cost by incorporating several individual components into a single device package, which also improves reliability and design flexibility.

Using standard software and IP for the RTOS, soft processor, video functions and communications modules helps reduce the overall software development time.

FPGA boot-code-bit-stream encryption allows automotive designers to develop and deploy secure vehicle vision systems that are safe from tampering, duplication or unauthorized modifications.  Moving the RTOS, processor and its associated program memory inside the FPGA by using a soft processor also protects the manufacturer’s operating code IP.  The soft processor system gives the automotive designer control over the implementation scale and feature set. 
Reprogrammable devices provide the security and reliability that is necessary in today’s automotive vision systems, and the flexibility and performance necessary for future systems.

References

  1. Micron Technology Automotive CMOS Imagers: http://www.micron.com/applications/automotive/
  2. Micrium uC/OS-II Real Time Operating System: http://www.micrium.com
  3. Lattice Semiconductor: http://www.latticesemi.com
  4. Motor Industry Software Reliability Association: http://www.misra.org.uk
  5. Cast, Inc. CAN and DUAL-CAN IP: http://www.cast-inc.com

