posted by Jim Turley
If you store your music digitally -- and don't we all? -- you want audio-grade bits.
Apparently bits aren't just bits. An audiophile site in the UK just published an article claiming "significant" and "quite marked" differences in sound quality among MP3 files stored on a Hitachi hard disk, a Seagate hard disk, and a flash SSD. The lengthy article goes on to note such perceived differences as "rhythmic drive," "image soundstaging," "edgy grain," "musical intent," and other impenetrable audiophile pseudo-babble.
The pièce de résistance comes when the authors theorize that the differences among storage media may come down to the disk controllers' processor architectures (one is ARM-based, one x86). Or perhaps one HDD controller chip is "exerting itself more than usual." They go on to speculate that Ethernet cables might affect "blacker silences" or add "a hint of glaze," too.
Real scientists conducting real studies have shown that humans can't hear frequencies above roughly 20 kHz, and a 44.1-kHz sample rate captures everything up to its 22.05-kHz Nyquist limit. That's why it's the sample rate for CDs. Where and how those bits are stored is... irrelevant.
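The arithmetic behind that sample rate is trivially checkable. A quick sketch (the ~20 kHz hearing ceiling is the commonly cited figure, not a number from the audiophile article):

```python
# Nyquist: a sample rate fs can faithfully capture frequencies up to fs / 2.
CD_SAMPLE_RATE_HZ = 44_100
HEARING_LIMIT_HZ = 20_000   # commonly cited upper bound of human hearing

nyquist_hz = CD_SAMPLE_RATE_HZ / 2          # 22,050 Hz
headroom_hz = nyquist_hz - HEARING_LIMIT_HZ  # margin above audibility

print(f"Nyquist limit: {nyquist_hz:.0f} Hz")            # 22050 Hz
print(f"Headroom above hearing: {headroom_hz:.0f} Hz")  # 2050 Hz
```

Everything a human can hear fits comfortably under the Nyquist limit, which is exactly why the CD standard settled on 44.1 kHz.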
Thankfully, the authors conclude with an appeal. "Maybe we can solicit logical explanations from engineers who understand the low-level mechanics and operation of computer ﬁle and storage technologies..." [my emphasis].
I think our readers know what to do.
posted by Bryon Moyer
Last December’s IEDM conference included energy harvesting as a topic; a couple of papers caught my attention. You could almost think of one of them as bridging batteries and capacitors; the other leverages an everyday household phenomenon in a new way.
The first paper, from a collaboration between Intel, Florida Int'l Univ., and Univ. of Turku, demonstrated a way to create porous silicon to increase surface area in a capacitor. They do this with an etch that, in principle, is capable of a 1000:1 aspect ratio, although other constraints capped the etch depth, as we'll see.
The idea is that, by “hollowing” out solid silicon with numerous small pores, you get the benefit of surface area inside the bulk, not just on the top of the silicon. Smaller pores mean more surface area, but they’re also harder for ions to navigate through. So they used a combination of large and small pores, tapering them slightly so that ions could more easily enter to keep the performance high.
But there’s a catch here: it turns out that, left like this, the silicon surface will oxidize and degrade after repeated cycling; the surface needs to be stabilized through a coating. They had demonstrated carbon as an effective coating, but that required high temperatures (above 650 °C). So in this work, they focused on atomic-layer deposition (ALD) of TiN. They did this at temperatures between 380 and 450 °C (and could have done 280 °C had they used a different precursor).
They used a modification of typical ALD processes, presumably because they were depositing not on a plane, but into a porous material. The normal process is to let the precursor soak for 1–2 seconds; they gave it 30 seconds in a so-called "stop-flow" process.
While the coating stabilized the surface, it also decreased surface area. A 2.5-nm layer reduced the surface area by 13% relative to uncoated; a 10-nm layer reduced it by 53%. So limiting the thickness is important to maintaining performance.
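A rough geometric sketch suggests why thickness costs so much area. Assuming idealized cylindrical pores lined with a conformal coating (my simplification, not the paper's model), a layer of thickness t shrinks the effective pore radius from r to r − t, so the fractional area loss is roughly t/r. Interestingly, both reported data points are consistent with an effective pore radius near 19 nm:

```python
def area_loss_fraction(pore_radius_nm: float, coating_nm: float) -> float:
    """Fractional inner-surface-area loss for an idealized cylindrical pore
    lined with a conformal coating: 1 - (r - t)/r = t/r."""
    return coating_nm / pore_radius_nm

# Invert the model for each reported (thickness, loss) pair: r = t / loss
r_from_thin_coating = 2.5 / 0.13   # ~19.2 nm
r_from_thick_coating = 10 / 0.53   # ~18.9 nm
print(r_from_thin_coating, r_from_thick_coating)

# Cross-check: r ~= 19 nm reproduces both reported measurements.
print(round(area_loss_fraction(19, 2.5), 2))  # ~0.13
print(round(area_loss_fraction(19, 10), 2))   # ~0.53
```

That two independent measurements back out nearly the same radius lends some credibility to the simple conformal-coating picture, though the real pore geometry (tapered, mixed sizes) is of course messier.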
As to the pore depth, they found they could etch as deep as 254 µm. But they found that, upon heating, passivating hydrogen came off – which caused stresses and cracking. This became a problem with pores deeper than 15 µm, so they limited themselves to a range of 2 – 12 µm.
This device inhabits a space between capacitors and batteries. It uses mechanisms similar to capacitors, but because it's leveraging the hollowed-out interior of the bulk, it also shares the 3D characteristics of a battery. The power is going to be determined by the mobility of the ions through the pores. But, as the authors' comparisons show, this competes well in power and energy densities.
And there’s another payoff: While lithium ion or lithium thin-film batteries can be cycled only a hundred or so times, the porous silicon device could be charged hundreds of thousands of times.
Meanwhile, in a totally different vein, a team from KAIST and NASA Ames experimented with “triboelectricity” – essentially, the kind of static electricity you build up when rubbing something. It needs some kind of external pressure to make it work – that’s the source of energy.
The idea goes as follows: a polymer layer is placed over metal; another movable metal surface then contacts the polymer on top so that, when in contact, you effectively have a polymer sandwich. In this configuration, the polymer accepts charge from the top metal, leaving that top metal with a net positive charge. The metal layer is then moved away from the polymer.
The two metal layers are connected through a resistor. So now, because the top metal layer has moved out of range, the polymer's negative charge couples to the lower metal layer instead, and the excess charge on the top metal rushes from the top metal layer, through the resistor – doing work – to the bottom layer.
This process is repeated, and the charges travel back up to the top layer when it comes in contact again.
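As a back-of-the-envelope model of that cycle (all parameter values below are hypothetical illustrations, not numbers from the paper), each contact–separation event moves a fixed charge Q through the external resistor. Treating the separated structure as a capacitor, the energy available per cycle is about Q²/(2C), and average power scales with how often you actuate it:

```python
# Hypothetical parameters for one triboelectric contact-separation cycle.
Q = 50e-9    # transferred charge per cycle, coulombs (assumed)
C = 100e-12  # effective capacitance when separated, farads (assumed)
f = 10       # actuation frequency, Hz (e.g., a slow tapping motion)

energy_per_cycle = Q**2 / (2 * C)  # joules dissipated in the load
avg_power = energy_per_cycle * f   # watts, averaged over many cycles

print(f"{energy_per_cycle * 1e6:.2f} µJ/cycle, "
      f"{avg_power * 1e6:.1f} µW average")  # 12.50 µJ/cycle, 125.0 µW average
```

The quadratic dependence on Q is why the surface engineering below – maximizing contact area to maximize transferred charge – matters so much.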
The physical implementation of this involved small polydimethylsiloxane (PDMS) pyramids that would get squished by the top metal sliver contact. The size of the pyramids matters – smaller ones give more surface area and therefore a higher voltage, and they’re more sensitive. Larger ones, on the other hand, can handle a wider range of pressure because the pressure “saturates” at a higher level (the bigger pyramids have a higher restoring force).
They were able to extract hundreds of µW/cm² – enough to run, for example, some kind of implantable device. Of course, you need a source of motion – either vibration or… makes me wonder if the periodic pressure in the blood could be harvested.
If you have the IEDM proceedings, you can find all the details in papers 8.2 and 8.3.
(All images courtesy IEDM.)
posted by Jim Turley
Semiconductor, meet cycle. It's another up/down go-around for semiconductor capital spending, according to the latest Gartner report that predicts that 2015 will be nearly flat compared to 2014. Non-memory (i.e., logic) chipmakers will spend a total of just 0.8% more this year than last, with 5.6% growth in capital equipment. That works out to $65.7 billion overall, with $41.1 billion in shiny new equipment.
As before, the independent foundries will be spending more than the standalone chip makers (IDMs), and memory will be more active than logic. Gartner further predicts that many memory makers will plow investment into manufacturing DRAM, rather than flash, to take advantage of currently favorable (for them) DRAM pricing.
Looking ahead into 2016, 2017, and 2018, most of Gartner's numbers are positive. Two years from now, the company foresees strong double-digit growth in capex all across the board.