It sounds like something out of a spy thriller. A piece of security software, masquerading as a routine driver update, sniffs out enemy chips and terminates them with extreme prejudice. There is no fix; the chip, and everything it’s connected to, is bricked.
Sneaky, huh? And not really all that hard to implement. With nearly everything connected “to the cloud,” it’s easy to insert new software remotely. And we’re all accustomed to downloading and installing new drivers every few weeks, so there’s nothing suspicious that would tip anyone off.
The case this week involves FTDI, a company that makes popular and inexpensive USB-interface chips. You’ve probably got one inside some device nearby, or you’ve used FTDI chips in your own designs. We’ve covered the company before here and here.
Unfortunately, some bad guys have been cloning FTDI’s chips and selling them as cheap replacements. They’re counterfeit, in other words. That’s not unknown in this business; there have been counterfeit microprocessors, memory chips, and FPGAs for almost as long as there have been silicon chips. It’s a fairly lucrative business if you’ve got the silicon technology to make it work.
To thwart the counterfeiters, FTDI released a driver update. No big deal there. Interface firms update their drivers all the time. But this particular update searched for the subtle telltale signs that the chip was counterfeit and not genuine. If it found a bogus device, it programmed a special combination of configuration bits that instantly rendered the chip useless. The change is permanent and irreversible. Rolling back to the old driver won’t fix it. The chip is nuked; broken; totally useless.
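As widely reported at the time, the trick exploited a behavioral difference between genuine and cloned silicon: the genuine parts ignore certain stray EEPROM writes that the clones dutifully commit, so a single “poison” write could zero the USB product ID (PID) on clones while leaving real chips untouched. Here is a minimal simulation of that idea in Python – the chip classes, EEPROM layout, and write semantics are hypothetical illustrations, not FTDI’s actual driver logic:

```python
# Illustrative simulation of the reported bricking mechanism.
# Chip behavior and EEPROM layout here are hypothetical.

FTDI_VID = 0x0403   # FTDI's real USB vendor ID
FT232_PID = 0x6001  # product ID of a genuine FT232 part
PID_ADDR = 2        # hypothetical EEPROM word holding the PID

class GenuineChip:
    def __init__(self):
        self.eeprom = {PID_ADDR: FT232_PID}

    def eeprom_write(self, addr, value):
        # Genuine silicon (in this simulation) silently drops stray
        # single-word writes that aren't part of a full programming cycle.
        pass

    @property
    def pid(self):
        return self.eeprom[PID_ADDR]

class CloneChip(GenuineChip):
    def eeprom_write(self, addr, value):
        # Clones commit every write, stray or not.
        self.eeprom[addr] = value

def poison_write(chip):
    # Harmless on genuine parts; zeroes the PID on clones. With a PID
    # of 0x0000 the device no longer matches any driver: "bricked".
    chip.eeprom_write(PID_ADDR, 0x0000)
```

This also explains why rolling back the driver doesn’t help: the change lives in the device’s own configuration EEPROM, not on the host PC.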
And as the owner of a product containing this chip, you’re boned. You’ll get no mercy from FTDI.
Therein lies the rub. The people who bought a USB-connected lab instrument, tablet, gadget, game, or other device have absolutely no idea whose chips are inside – why would they? – and even less to say about their provenance. Yet they’re the ones who suffer. Their legitimately purchased product is now useless, all because somebody up the supply chain took a shortcut and bought a cut-price counterfeit chip.
Does the maker of the end product suffer? Perhaps indirectly, when their customers start flooding the tech-support hotline with complaints that the thing just stopped working. Indeed, that’s probably how these companies first found out about the imitation chips lurking inside their products.
Does the hardware OEM take the hit? The board manufacturer? The distributor of the dodgy components? How about the company that illegally counterfeited the chips in the first place? They’re obviously not going to cop to making gray-market products, so their brand and their reputation are unsullied. Their chips look just like real FTDI chips (that’s the whole idea), so there’s nobody to point the finger at. If anything, the failures look like a black mark on FTDI, the apparent maker of the now-useless parts.
So what would have been the right approach here? Was FTDI justified in searching out and destroying the forged silicon, even though the chips had long ago passed through the supply chain and arrived in the hands of perfectly innocent customers? Or was this an egregious betrayal of the customers’ trust in FTDI and its driver updates? Is this warranted vigilantism, or selfish corporate treachery?
FTDI itself is apparently conflicted. After a few weeks, the company removed its malicious driver update, and the CEO apologized (sort of) in the company’s official blog. But that doesn’t help the users whose devices are irretrievably broken.
In the United States, if you buy a used car that later turns out to have been stolen, you have to return it without any compensation. It doesn’t matter that you bought it in good faith and paid good money for it. You get nothing. It doesn’t even matter if you painstakingly traced the car’s history of ownership back through time, or conducted extensive background checks on the seller. It doesn’t matter if anyone involved in the transaction knew it was stolen or not. You’re out the money (and the car), and you get nothing but a great story to tell at the bar.
Similarly, FTDI is essentially reclaiming its stolen property, and the end customer, like the used-car buyer, gets the short end of the stick. When a stolen car is found and reclaimed years later, the buyer gets no reimbursement – who would pay him anyway? – and he becomes just another victim of the thief. The original owner is made whole (more or less) by getting his car back, albeit used. In FTDI’s case, they get to “reclaim” the stolen property by disabling it. And the innocent customer is, again, worse off than anyone.
How would you have handled this? Many disgruntled users in the FTDI incident have suggested that perhaps the company’s approach was just a touch more aggressive than it needed to be. The driver update could have notified users of the counterfeit chip instead of disabling it, along with the valuable product that contains it. Maybe damaging the device beyond repair was a bit extreme, and targeted the wrong parties anyway?
It’s hard to judge how this move will affect FTDI’s reputation among engineers. On one hand, the company is truly the injured party here. Its chips were counterfeited, so at the very least it lost revenue – probably a lot of revenue. And who knows how much time the company wasted providing technical support for chips that weren’t even its own. And how long must it have taken to discover that there even were counterfeit chips circulating in the market? That’s a lot of wasted time and money, all because some shady firm decided to rip off FTDI’s design and sell bogus chips with its name on them. FTDI has a right to some corporate anger.
Yet it seems like the company took the law into its own hands, and outside of Western movies and pulp detective novels, that’s rarely a good thing. We deride lawless territories that don’t share our ideas of propriety, order, and respect for intellectual property (the kind of place where these chips were likely made), but FTDI’s reaction seems similarly lawless and anarchic. It’s a shot across the bow to other potential counterfeiters – “we’ll sink you whenever we find you!” – but who is guilty of retaliatory piracy here? And whose ship are we sinking and whose passengers are we sending down in the battle?
As a designer, are you more or less likely to use FTDI chips? Or, more broadly, are you more likely to check out the origins of all the chips you design-in? And if so, how? Surely your distributor will say all the chips are genuine. And they probably are. But if they’re not, how would they know? And how will you know?
And if by chance they’re not genuine, will your customers be the first ones to tell you? When your tech support forum activity spikes with widespread reports of systems dying in the field, will that be your first indication that something deep inside your product isn’t what you thought it was?
19 thoughts on “I Brick Your Chip”
As long as things continue to work, people can sell counterfeits and make a huge profit.
I think FTDI did the right thing … I think every vendor that has that ability should do it.
There is NOTHING better to stop counterfeits.
The consumer should have the right to have their device replaced as defective … even beyond normal time frames, when the vendor did not take care to make sure the chips purchased were genuine.
Without that liability, it’s way too easy to say no harm, no foul, and purposefully buy counterfeit chips to make a few more dollars/yen …. When the liability exists, companies will think twice.
So my vote is GO FTDI … brick them.
And for consumers of a bricked device … contact your representatives and push for laws mandating replacements or refunds from the merchants that sold them.
I also feel FTDI did no wrong here.
Really they were lucky they could tell the difference between their chip and the counterfeits, and could disable them.
Far worse are counterfeits that identify themselves as larger or better memories, but really are not.
Or, they are out of specification rejects being resold as real parts.
Counterfeiting is a hard problem to solve. How do you really know what is in a device when its build and sourcing are completely outsourced?
There is also a fine line that should not be crossed. Could Microsoft brick your PC because they don’t support XP any longer? Big savings on support, fewer security risks, and a forced upgrade. That sounds like a step too far, but it is possible, and companies would love to do that sort of thing.
Well done FTDI. I just hope they made sure that the end users knew what went wrong.
End users who bought a device claimed to be legitimate should ask or sue the seller for a replacement or their money back, since the end product is faulty.
The seller should also ask/sue his suppliers up the supply chain.
And the “punishment” should be rolled up to the suppliers.
But the important thing here is that serious companies will make sure to keep their good name in the business.
Can you imagine the impact on companies such as Dell/HP/Lenovo or others, if it turned out they had used counterfeit parts?
Can you imagine the impact on suppliers such as Digi-Key/Mouser or others?
As a designer, I would definitely continue using FTDI devices. But if I heard that a supplier was caught selling counterfeit components, I’d stop working with them, or continue only under a strict agreement with penalties.
> I just hope they made sure that the end users knew what went wrong.
Okay, but *how* will end users know what went wrong?
For example, my wife’s FitBit stopped working a few weeks ago. Was it because of a counterfeit USB chip, or was it just a coincidence? I don’t know if the FitBit even has a USB chip, much less who made it. How would I know?
Are we going to be second-guessing every USB-related failure from now on?
What is a counterfeit FTDI chip? USB-to-serial chips are generic. So are these chips that have their VID set to FTDI’s? And what exactly was being counterfeited? Was it just maliciously labelling generic USB-to-serial devices as FTDI parts and using their VID?
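For context on that question: the clones were generally functional workalikes that report FTDI’s real vendor ID (0x0403) and product ID, so that FTDI’s own driver binds to them. Hosts match drivers to USB devices by the (VID, PID) pair, which is also why rewriting the PID to 0x0000 effectively orphans the device. A simplified sketch – the table and function below are illustrative, not any real OS’s code:

```python
# How a host OS picks a USB driver, heavily simplified: it matches the
# device's (vendor ID, product ID) pair against driver ID tables.
# 0x0403 is FTDI's real VID; 0x6001 is the FT232 product ID.
# DRIVER_TABLE and match_driver are illustrative, not a real OS's code.

DRIVER_TABLE = {
    (0x0403, 0x6001): "ftdi_sio",  # FTDI serial driver claims this pair
}

def match_driver(vid, pid):
    return DRIVER_TABLE.get((vid, pid), "none")

# A clone that reports FTDI's IDs gets FTDI's driver loaded for it...
assert match_driver(0x0403, 0x6001) == "ftdi_sio"
# ...and once its PID has been rewritten to 0x0000, nothing binds to it.
assert match_driver(0x0403, 0x0000) == "none"
```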
If this is justified, it would also justify rigging a self-destruct in stolen cars. And what if the device was a safety-critical appliance? This was unethical. A nagging dialog box is okay, but this is unethical.
Hey Lord Loh, personally I don’t think the stolen car analogy really works … you have traffic cops for that, but there’s nobody policing the counterfeit chip business. As TotallyLost states “when the liability exists, companies will think twice” but until there is something making them do this they will just carry on giving money to the counterfeiters 🙂
So this is one more reason not to subscribe to automatic updates. If the software ain’t broke – don’t fix it! It’s been a long time since a software update has done anything that I’d consider useful. Mostly they are closing “security loopholes” that prevent the publisher from restricting the use of their product. While they’d like to make you think these security updates are to protect you, it’s pretty obvious to me that they’re protecting their own interests. This case is just a bit more blatant than most.
Posted from my Windows XP SP2 system with updates shut off for many years…
Let me propose a hypothetically extreme example for comparison:
Someone takes a rifle to the top of a building and starts shooting pedestrians down below, all to protest world hunger.
His intentions may be good, but his methods are criminally extreme. No jury in the world would say his rampage was justified, no matter how worthy his cause.
We call those people terrorists. They sacrifice innocent bystanders to make their own point.
I think we can draw a parallel to FTDI. They deliberately sacrificed the products owned by people utterly unrelated to the issue. Granted, counterfeiting is an important issue (if you’re in that business). But the company’s methods can be judged as extreme. Did the punishment — and the punished — fit the crime?
FTDI’s actions are dangerously irresponsible and indicate a breathtaking level of corporate hubris. They are not a law enforcement agency; they are just a commercial company.
FTDI cannot possibly have known what the counterfeit chips were being used for at the instant they chose to destroy them. They are used in a wide variety of devices worldwide, some of which will inevitably be safety-critical, medical, or involve heavy machinery.
For example, if FTDI’s “update” destroys the USB blood-glucose monitor you rely on, and you then go into a coma and die, surely that is corporate manslaughter?
If FTDI want counterfeit goods seized and destroyed then they need to get a law enforcement agency, (with a valid warrant) to enter the suspect’s property and seize the goods. Under such circumstances, the electrical equipment can be safely turned off, or if it is safety critical, it can be replaced by alternative equipment before the counterfeit goods are impounded. There would also be the right to challenge the seizure in court. In the event of a successful legal defence the private property must be capable of being returned to the owner unharmed.
By using a remote deactivation procedure, FTDI have bypassed all legal due process, tricked their way into homes and businesses, and then destroyed private property.
If FTDI’s “hacking” had damaged anything my company owned here in the UK I would have been sure to sue them for consequential damages, destruction of private property, and probably a few other things too.
In addition, computer hacking compounded by the intent to cause criminal damage is almost certainly a breach of the UK Computer Misuse Act 1990 – a criminal offence, not just a civil matter.
The Computer Misuse Act:
“unauthorised access with intent to commit or facilitate commission of further offences, punishable by 6 months/maximum fine on summary conviction or 5 years/fine on indictment;”
Sets PID to 0? Driver rollback works? That’s…more moderate than the guard ring bricking I imagined. I guess on the attack readiness side I could stand to see more people ready to trace/patch from JTAG… Definitely nicer than spurious write, fuzzing host bus, camera RAW mode gone bad and even electrical randomness; though I suppose there’s a possibility to find the first POV ticker display, printer or WiFi printer and waste printing supplies complaining that the IP or other abuse kills kittens (in 14 languages.)
The brickable phone SoC people seem more committed to the impermanence thing happening near the AES (or PKI otherwise) unit. No faking retina prints and swapping phones with a similarly keyed device on their watch!
That’s a really rough way to go after IP slippage, not least for a UK firm; somewhere in the world though, a drivers team got wild enough review feedback (some Android phone’s team really, really going through the updates, getting them excited over the integrity of every third T-Mobile phone or such,) a fantastic report back from the fab on what happened to their off batches (destroy ’til sold, right?), and/or bad enough chai (Donetsk Bru? Wildcrafted ginseng?) that they thought to release that. US oversight perhaps granted them the Oskar Pistorius 2014 prize (third party depositions!) for fitful sleepers?
That would be excellent wizardry if it actually affected the brokers responsible, nevermind any ODM (FTDI from any 2 sources…) or OEM (oops, I spec’d it again) error. In some universe, perhaps Beacons with 20-byte UUID are bricking things for coming too close. Work fast!
Let me ask you this:
Assume your smart phone got stolen. Besides wiping the personal data, would you brick it if you could? Or would you let the thief enjoy his theft and practically encourage him?
Now assume he sold it to someone else. Does your answer change?
In my view, it shouldn’t.
As for your wife’s FitBit:
She should sue or ask for compensation from the seller/manufacturer.
More importantly, I guess she won’t buy one again, and will recommend that everyone stay away from it.
That’s exactly the punishment that companies who cut such corners should get.
A very interesting legal and ethical dilemma! Longer term, this may be a sound strategy for the chip vendor. Consider:
– You’re an engineer. You design an FTDI chip into your product.
– You hear from your support people that there is a high field failure rate. Perhaps even your manufacturing line is experiencing these failures.
– The engineer (you) gets involved. You make measurements of voltages and timings on failed units, double- and triple-check the data sheet, but can find nothing wrong.
– You send some of the failed chips back to FTDI, who eventually tell you that they’re counterfeit.
– Now your purchasing people have to get involved, and they have to investigate up the supply chain.
In the end, your company has devoted a lot of time and cost to this, and has little to show for it but lost customer confidence.
You can be sure that there is an edict coming down the corporate pike from way up high to the effect that parts must be purchased from reputable distributors, there must be supply chain transparency, etc., etc. What this means is that if you design in an FTDI part, the purchasing people will have to go to greater lengths to make sure they’re really buying FTDI parts. Arguably, that’s part of their job.
FTDI’s goal is to make the pain of counterfeit parts affect not just their bottom line, but the bottom line of the people who purchase and use them.
I think this bricking is more than inconvenient, it is unethical and potentially dangerous. What if they brick devices in our weapons systems, or in our medical devices, or in some important control system?
FTDI should have some recourse, but this isn’t it.