
Big Data Moves to “Electronics”

Optimal+ Covers Board, System Manufacturing

Last year, right around this time, we took a look at a company called Optimal+. They were harvesting reams of semiconductor test data so that analysts could rummage through it, looking for clues to manufacturing tweaks that could improve yield or throughput without compromising quality. With a latency of around 10 minutes at most, it provided something close to real-time feedback.

And any new learnings could be tested on mounds of historical data to see “what would have happened” had the new rule been in place. If good things would have happened, then they could immediately roll the new rule out.
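To make that "what would have happened" replay concrete, here's a minimal sketch of the idea in Python. Every name in it – the record fields, the leakage parameter, the proposed limit – is a hypothetical stand-in for illustration, not anything from Optimal+'s actual product:

```python
# Minimal sketch: re-grade historical test results under a proposed new rule.
# All fields and the leakage parameter are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class TestRecord:
    unit_id: str
    leakage_ua: float    # a measured parameter, in microamps
    passed: bool         # verdict under the rule that was in production
    came_back: bool      # did the unit later fail in the field or get returned?

def replay_rule(history: list[TestRecord], new_limit_ua: float):
    """What would a tighter leakage limit have done to these historical units?"""
    newly_failed = [r for r in history if r.passed and r.leakage_ua > new_limit_ua]
    escapes_caught = sum(r.came_back for r in newly_failed)  # bad units stopped
    yield_cost = len(newly_failed) - escapes_caught          # good units sacrificed
    return escapes_caught, yield_cost
```

If the replay shows the candidate rule catching real escapes without gutting yield, rolling it out immediately is an easy call.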

Of course, in the abstract, there’s nothing about these generic concepts that is tied to semiconductors. So, in the way of any ambitious, growing company, you want to expand your technical prowess to other areas. Especially if potential customers from those other areas get jealous of the semi guys and say, “Dude, I want what they’re having.”

So you launch in the direction of the next obvious step up the food chain: circuit boards. Or further yet: systems. Same as chips, right? Just on FR-4 (or whatever) instead of silicon. Right? A little bit right maybe? No?

Once you get to practical matters, chips and boards and systems are very different things – in particular because the supply chain is very different. And the most pressing problems they have may also be different. And then there is the fact that boards can be reworked, while chips can’t. Systems may get returned by end users. (And will also be used by end users… we’ll come back to that in-use thing in a moment.)

And so Optimal+ has announced Global Ops for Electronics. This is not an extension of their semiconductor product, but is a completely separate offering.

Optimal+ uses the term “electronics,” which slightly confused this literal engineer. After all, if anything is an electronic item, it’s a semiconductor, whose only moving parts are electrons. But no, in this case, the word “electronics” refers to systems that use electronic components… um, no, that’s not helping. Heck with it; turn off brain and accept that “electronics” means systems, and chips are but inputs to the system.

[Figure 1: image courtesy Optimal+]

Whereas the semi version focuses on test and yield, Optimal+’s version 6.5 also looks at quality and reliability – and return material authorizations (RMAs) are a good proxy for how you’re doing out there (and likely an underrepresentation, since not everyone goes through the hassle of a return).

So now, Optimal+ gives access to the entire manufacturing history of the system for use in analyzing trends. “Genealogy” is a big part of this – tracking which version of a flow or test program was used and how many rework cycles a system had. And, in a wrinkle that’s distinctly different from semis, it includes which contract manufacturer (CM) was used.
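As a rough sketch of what a per-unit genealogy record might hold – the fields here are illustrative guesses, not Optimal+’s actual schema:

```python
# Illustrative guess at a per-unit "genealogy" record; not Optimal+'s schema.
from dataclasses import dataclass, field

@dataclass
class UnitGenealogy:
    serial_number: str
    flow_version: str              # which manufacturing flow built this unit
    test_program_version: str      # which test program graded it
    rework_cycles: int             # boards, unlike chips, can be reworked
    contract_manufacturer: str     # which CM built it -- a wrinkle semis lack
    component_lots: list[str] = field(default_factory=list)  # incoming parts
```

Trend analysis can then slice outcomes along any of these axes – say, comparing return rates for units with two-plus rework cycles across CMs.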

Semis are certainly built by contracted foundries, but it’s much less common to have a single chip sourced from multiple foundries, since designs are generally optimized for a target foundry’s proprietary process. With mechanical assembly, however, using multiple vendors is common, so that adds a new parameter to a system’s history.

Their first offering in this area, called Global Ops for Electronics, targets yield using assembly house data. In the last quarter of this year, they’ll follow that up with tools for managing new product introductions (NPI) and production ramp-up. Right now, the traceability stops with incoming components, but in the future, a connection to Global Ops for Semiconductors will provide access to semiconductor data history as well. This connection is referred to as the Supplier Quality Network, and it’s still in early stages. For now, Global Ops for Semiconductors and for Electronics remain distinct, disconnected products.

[Figure 2: image courtesy Optimal+]

One of the new angles they’re working aims to reduce expensive returns. Given the genealogy aspect, any return can, in theory, have all of its individual paperwork examined to look for early hints that there might have been a problem. For instance, some parameter might have been within limits, but only just – the tests passed, but then the system got returned. So a tighter test limit might result from such an “aha” moment.
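A toy version of that marginal-pass hunt might look like the following – the guard band, field names, and record format are all assumptions made for the sake of illustration:

```python
# Toy sketch: find returned units that passed a test only barely.
# The 5% guard band and the record keys are assumptions, not real parameters.
def marginal_passes(records, limit, guard_band=0.05):
    """records: dicts with 'passed', 'returned', and 'measurement' keys."""
    near_limit = limit * (1 - guard_band)
    return [r for r in records
            if r["passed"] and r["returned"] and r["measurement"] > near_limit]
```

A cluster of hits here is exactly the “aha” that argues for tightening the limit – which can then be vetted by replaying it against history, as sketched earlier.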

Machine learning generates many of these “ahas” by correlating outcomes with production statistics. It may take many, many samples before a trend becomes evident, so, by definition, this approach requires tons of data – not just current in-process data, but also historical data.
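As a gesture at how such a correlation might be mined, here is a deliberately tiny example with hypothetical features; a real deployment would involve vastly more data, plus care about class imbalance and validation:

```python
# Tiny gesture at correlating production history with returns. The features
# are hypothetical, and four samples prove nothing -- hence "tons of data."
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["rework_cycles", "test_time_s", "leakage_margin"]
X = np.array([[0, 42.1, 0.30],
              [2, 55.0, 0.02],
              [1, 44.7, 0.25],
              [3, 58.3, 0.01]])
y = np.array([0, 1, 0, 1])   # 1 = unit was returned

model = LogisticRegression().fit(X, y)
for name, w in zip(features, model.coef_[0]):
    print(f"{name}: weight {w:+.3f}")  # large |weight| hints at a return driver
```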

Exactly what that data is and how it’s represented will vary by manufacturer. So adopting Optimal+ means working with them to stitch the specific flow together. And, to be clear, it’s the OEM that’s in charge here, not the manufacturer, even though you need the cooperation of any and all CMs in your flow.

In fact, that raises a tricky issue with the data: isolation. The OEM wants to see all data from all CMs, but the CMs don’t want other CMs to see their data. So each CM’s visibility is limited to its own data – it can see its own results, but not those of its competition. Only the OEM gets to see the whole shebang.
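The visibility rule itself is simple to state – something like this toy filter, where the viewer names and record format are invented for illustration:

```python
# Toy visibility filter: the OEM sees everything; each CM sees only itself.
# Viewer names and record format are invented for illustration.
def visible_records(records, viewer):
    if viewer == "OEM":
        return list(records)                          # the whole shebang
    return [r for r in records if r["cm"] == viewer]  # only their own data

data = [{"cm": "AcmeAssembly", "yield": 0.97},
        {"cm": "BoltBuild",    "yield": 0.95}]
assert visible_records(data, "AcmeAssembly") == [data[0]]
assert len(visible_records(data, "OEM")) == 2
```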

There’s one other future aspect to this tool: including in-use data in the history. What if users are “doing it wrong”? Or what if it turns out that humid days cause increased failures? Wouldn’t it be great to know that when trying to sleuth out causes of failure? What about reminding users about preventative maintenance (PM)?

Well, that’s where we get into some murky waters. Folks are (not unreasonably) queasy about being tracked. So learning that every move they make with their gadgetry is being sent to Headquarters might not be welcome news. I have to confess that, a decade or so ago, I wrote off such concerns about deep data mining as belonging to the conspiracy crowd; revelations over the last few years have proven me wrong.

So before Optimal+ adds this last bit, a few things need to be in place:

  • A legal framework of what is and what isn’t OK.
  • Presumably, user anonymization, although it’s never been clear to me how a company can prove it’s doing that (a single programmer or analyst could so easily violate the best of corporate intentions).
  • Assurance to users that the data gathering will be under user control.

There’s no question but that this additional data would add incredible richness to the kinds of analysis that Optimal+ provides, but our entire industry has some work to do before the public will be ready for that.

 

More info:

Optimal+ Global Ops for Electronics
