Maximizing Utility

Conventional economic theory has had a pretty tough couple of years. Markets didn’t behave like markets should have behaved. “Irrationality,” in an exuberant guise, toppled, or threatened to topple, some august institutions.

Of course, any time behavior starts to threaten orthodoxy, it’s explained away in some fashion that fits the orthodoxy for as long as possible. During the Great Depression, when contemporary economic theory didn’t allow for the existence of a depression, Hoover, upon seeing bedraggled gentlemen selling old fruit by the side of the road as their only means of eking out a bit of coin, is said to have commented on the vibrancy of the economy that these entrepreneurial fellows proved.

Today the topic is markets. The meaning of free markets, fair markets, whether to ratchet down, regulate, or rampage (with some voices shrieking, at every proposed limit on behavior, “You’re threatening innovation!!!”), and, of course, what the right thing to do is.

The way the market works, in conventional terms, is relatively simple: rational consumers use perfect information to maximize their utility. “Maximizing utility” being an arcane way of saying “getting their happy on.” But the two questionable elements here are “rational consumer” and “perfect information.”

How anyone can take seriously the concept of a rational consumer after the pet rock craze (not to mention the tulip craze and any number of other crazes) is baffling. And, of course, perfect information is nice, except that purveyors of products spend big bucks on marketing and advertising to ensure that the information you receive is not perfect. Yes, we could all devote our lives to doing individual research on each and every product we purchase, each of us equaling in effort the sum total of all the marketing organizations of all the companies from which we buy. But we don’t. We have jobs and lives.

So economists are actually starting to rethink some of these concepts. Nothing like closing the barn door after the barn burned down when the horse kicked over the lantern on the way out…

Meanwhile, there is an arena where “rational consumers” and “perfect information” can go together in a more credible fashion. There are numerous problems faced by engineers, probably in any type of engineering, where these two concepts could legitimately hold sway.

The last name of EDA is “Automation.” We’re used to taking manual tasks and finding ways for a machine to do them for us. And the holy grail in such situations is to have push-button solutions. You provide an input, push a button, and out comes an optimized result.

I mean, who needs engineers after all that?

Well, not so fast.

There are many problems as yet unsolved, either because they’re too complex to be tractable or because there may not be one single correct answer.

Take chip floorplanning. It can be done automatically to a rough degree, but to do so in a manner that, with no manual intervention, optimally places everything – not just in a way that works, but in a way that works best – that’s a very hard problem. There are so many interdependent elements, not to mention unknowns, that it’s hard to turn floorplanning into as neatly solved a problem as place-and-route.
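
To see why a single “optimal” answer is elusive, here’s a toy sketch in Python (all names, weights, and numbers are invented for illustration) of the kind of weighted cost function an automatic floorplanner might minimize. Collapsing wirelength and overlap into one scalar is exactly what hides the trade-offs an engineer would want to weigh.

```python
from itertools import combinations

def half_perimeter(net, blocks):
    """Half-perimeter wirelength of one net, measured between block centers."""
    xs = [blocks[b][0] + blocks[b][2] / 2 for b in net]
    ys = [blocks[b][1] + blocks[b][3] / 2 for b in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def overlap_area(a, b):
    """Area where two (x, y, w, h) rectangles illegally overlap."""
    dx = min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0])
    dy = min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1])
    return max(dx, 0) * max(dy, 0)

def floorplan_cost(blocks, nets, w_wire=1.0, w_overlap=10.0):
    """One scalar score; tuning the weights trades one goal against another."""
    wire = sum(half_perimeter(net, blocks) for net in nets)
    olap = sum(overlap_area(blocks[a], blocks[b])
               for a, b in combinations(blocks, 2))
    return w_wire * wire + w_overlap * olap

# Blocks as (x, y, width, height); nets as lists of connected blocks.
blocks = {"cpu": (0, 0, 4, 4), "ram": (3, 0, 3, 3), "io": (0, 5, 2, 2)}
nets = [["cpu", "ram"], ["cpu", "io"]]
print(floorplan_cost(blocks, nets))  # 38.0, but is it the *best* floorplan?
```

An optimizer can drive that number down all day; deciding whether the weights reflect what actually matters for this particular chip is the part that stays stubbornly human.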

Partitioning problems can be similar. It’s possible to have a large design split over multiple FPGAs, but that’s typically when the ultimate in performance isn’t needed – say, for SoC prototyping or emulation. If you really want to get the absolute best partition for the lowest-cost, highest-speed production product, you end up having to do it manually.

In the software world, multicore partitioning has the same problem. Attempts at automatic push-button parallelization of sequential code have all come to naught.
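
A minimal Python illustration of one reason why (the filter itself is just an invented example): sequential code is riddled with loop-carried dependencies, where each iteration consumes the previous iteration’s result, so no tool can split the iterations across cores without changing the algorithm.

```python
def smooth(signal):
    """Simple recursive low-pass filter."""
    out = [signal[0]]
    for i in range(1, len(signal)):
        # Loop-carried dependency: iteration i needs out[i - 1],
        # so these iterations cannot safely run in parallel as written.
        out.append(0.5 * out[i - 1] + 0.5 * signal[i])
    return out

print(smooth([0.0, 1.0, 0.0, 1.0]))  # [0.0, 0.5, 0.25, 0.625]
```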

In another example, automation that’s taken for granted in the digital world doesn’t work in the analog world. There are too many intangibles; it’s too hard to corral the behavior of undomesticated analog circuits into a nice algorithm. So such routine things as standard cells or automatic place-and-route may work for digital, but they remain largely manual for analog.

Part of the problem is the tendency to go for complete automation, when in fact we can rely on two concepts that seem quite valid here: perfect information and rational consumers. In this case, engineers are the rational consumers that need to make engineering decisions. And, unlike the frazzled parent clawing his or her way to the last STFU Elmo doll, most engineers have the capacity for rationality. (OK, unless someone suggests their circuit sucks in a design review.)

The problem comes with the load of work that our rational hero has to perform in order to make the decisions and then carry them out. Decisions have to be made on the basis of information, so typically there is a lot of analysis required, much of which is manual. And when the solution space is very large (as it often is), it’s typically impossible to do enough manual analysis to cover all possibilities. So the engineer ends up generating what is hoped to be the most relevant set of facts for making the decision.
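
Back-of-the-envelope arithmetic (with invented numbers) shows how quickly the solution space outruns manual analysis:

```python
# Assigning just 50 design blocks to 4 FPGAs, ignoring all constraints:
# each block has 4 possible homes, so there are 4**50 candidate partitions.
print(4 ** 50)  # 1267650600228229401496703205376, roughly 1.3e30
```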

But, because the information is not perfect or whole, the decision might involve picking a local optimum rather than a global one. Or, heaven forfend, it might even involve some educated guesswork.

Once the decision, however good or bad, is made, execution of that decision is needed. This typically involves a sequence of rote steps that no longer engage the creativity or judgment of the engineer. But, of course, without completion of those steps, the decision is of no value. And so the engineer spends long hours making things work.

So we’ve got these three steps: information gathering, a decision based on the information, and execution of the decision.

The part that’s hardest to automate is the decision process itself. It’s the weakest link. The brain and the computer work so differently that engineering minds can decide things that a computer simply can’t. The human decision isn’t guaranteed to be right, but it can usually be better than what a computer might suggest.

The fact that this end-to-end process has this decision step in the middle is generally what kills attempts to achieve a complete push-button result.

But, in fact, both the front- and back-end segments tend to be done more poorly by engineers. Manual analysis can be incomplete, and it can be wrong. Manual implementation can also introduce errors. These segments, by contrast, lend themselves much more readily to algorithms.

So here you have a three-part problem, two parts of which are better solved by machine, and one part of which is better solved by man. Or woman. Which leads to a middle way: have the computer generate perfect information on which the rational consumer can base his or her decisions, and then, the decision being made, let the computer implement the details of that decision in a deterministic manner.
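
As a sketch of what that middle way could look like (everything here, from option names to metrics to numbers, is hypothetical): the machine enumerates and scores the candidates, the human picks, and the machine executes.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    metrics: dict  # e.g., {"speed_mhz": ..., "cost_usd": ..., "io_pins": ...}

def generate_options():
    """Front end (machine): exhaustive analysis yields the 'perfect information'."""
    return [
        Option("2-chip split", {"speed_mhz": 250, "cost_usd": 120, "io_pins": 380}),
        Option("3-chip split", {"speed_mhz": 310, "cost_usd": 185, "io_pins": 260}),
    ]

def engineer_chooses(options):
    """Middle (human): the 'rational consumer' makes the actual decision."""
    for i, opt in enumerate(options):
        print(f"[{i}] {opt.name}: {opt.metrics}")
    return options[int(input("Pick an option: "))]

def implement(option):
    """Back end (machine): deterministic execution of the rote steps."""
    print(f"Implementing {option.name}...")

implement(engineer_chooses(generate_options()))
```

The decision step stays interactive; everything on either side of it is scripted.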

As a kicker to the information process, the computer might even make some suggestions as to the best solution, but it would be the engineer who actually made the decision, either by validating one of the suggestions or by creating a new one based on the information at hand.

(OK, truth be told, this theory has a bit of a weakness in that it assumes computer programs are bug-free, which they aren’t, but at least they can be made successively more bug-free… much more likely than the possibility of normal consumers being made successively more rational…)

In more concrete terms, most analog engineers would be uncomfortable with a fully automated place-and-route tool. But a tool that offered some alternatives meeting all of the obvious rules, each presented with its host of costs and benefits, would allow the engineer to take control by selecting and perhaps modifying one of the options and then letting the computer complete all of the interconnect details (and other minutiae) to realize that choice.

An FPGA partitioning tool could present partitioning options, each with its performance, power, cost, and I/O implications. Once the engineer chooses one, the computer continues on to generate the bitstreams for each of the FPGAs.

Some tools already take approaches approximating this; it’s sometimes surprising that more don’t. I could be over-simplifying the problem, but, on the other hand, there are obvious cases where, when full automation didn’t work, everyone just gave up. This approach could give new life to such abandoned problems.

It can be a nice way to put “perfect information” and “rational consumer” together in a way that truly does maximize utility.
