
Where Does an Engineer’s Responsibility End?

Whom Can You Blame for Misused Technology?

EEJournal, as its name implies, concentrates on the bits and bytes, or the chips and boards, of the electronics industry. But there are times when it seems like a good idea to look at wider issues, and this may be one of them. What has triggered it is a series of news stories demonstrating how technology failings can lead to much broader consequences.

Botnets and passwords

An engineer created a cheap digital video recorder (DVR) designed to be connected to surveillance cameras, and so it was given an IP address. He was foresighted enough to realise that it would be a good idea to give it password protection, so the DVR was shipped with a default password and access by telnet. It was sold to, and branded by, a number of companies, and then distributed through a mix of channels to installers and then to users. Somewhere along the way, the knowledge that there is a password and that it needs resetting by the installer and/or the end user got lost. Some bad guys discovered this and used the default password to gain access from the Internet. They weren’t interested in the normal operations of the DVR but wanted it as a node in a network of other devices that they controlled. Once in the network, the DVRs were initially used to identify further potential nodes and bring them into the network. Then the nodes simultaneously sent messages to specific web sites, creating a Distributed Denial of Service (DDoS) attack, which overwhelmed the targets. Now, who was responsible for this – the Mirai botnet attack? Obviously, the perpetrators carry the bulk of the blame, but who else in the chain of design, manufacture, distribution, and use should share in it?

As a postscript, PenTest, a UK security consultancy, recently discovered that a DVR supplier had closed off telnet access – but PenTest were easily able to re-open it.
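
It is worth seeing just how low the technical bar for this sort of audit (or attack) is. The sketch below – a minimal, hypothetical Python example, using only the standard library and a placeholder address – simply probes whether a device still answers on the telnet port and lists a few of the default credentials from the leaked Mirai source. Run it only against equipment you are authorised to test:

```python
import socket

# A few of the default username/password pairs from the leaked Mirai source.
# Listed here purely so you can check (and then change!) your own devices.
DEFAULT_CREDENTIALS = [("root", "xc3511"), ("admin", "admin"), ("root", "admin")]

def telnet_port_open(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Return True if the device accepts a TCP connection on the telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address (TEST-NET-1); substitute a device you own
    if telnet_port_open(host):
        print(f"{host}: telnet is reachable - check whether it still accepts:")
        for user, password in DEFAULT_CREDENTIALS:
            print(f"  {user} / {password}")
    else:
        print(f"{host}: telnet port closed - one attack surface fewer")
```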

Death on the Internet

A young woman in a Paris suburb live-streamed, on social media, her suicide under a train. While she was doing so, some of those watching were urging her on, often in the most offensive way. Similar events have happened elsewhere. When that particular live-streaming service was being created, should the specification have included a monitoring service for events such as this?

Free speech vs neo-Nazis

The Daily Stormer, an extreme neo-Nazi web site, has, following its unpleasant comments after the Charlottesville events, been kicked off several hosts and dropped by Cloudflare, which had been protecting the Daily Stormer’s domain from DDoS attacks. While this was widely applauded, there were some who questioned the decision. One of them was the CEO of Cloudflare himself. Matthew Prince wrote, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” He also wrote, “My rationale for making this decision was simple: the people behind the Daily Stormer are assholes, and I’d had enough.” His concerns were echoed by the Electronic Frontier Foundation (EFF), which calls itself “the leading non-profit organization defending civil liberties in the digital world.” Most of the time, the EFF fights for the rights of individuals and small companies against large companies and governments. Its statement on this case says, “we must also recognise that, on the Internet, any tactic used now to silence neo-Nazis will soon be used against others.”

Whom can we trust to police the net?

Self-driving cars

I would have said that a great deal of ink has been spilt on the debate over decision-making by autonomous vehicles – although perhaps it is electrons that have been mangled. But the issue is not going to go away. One simple version is: should the car favour its passengers over other people? The car can hit a large object – say, a stationary truck – which would kill or badly injure the passengers, or it can swerve to avoid the object and hit, and possibly kill, a pedestrian. This idea is explored in detail by the Moral Machine project, where you get to make decisions for multiple scenarios. While fascinating, it often gives you far more information than would be available in real life – which rather undermines the exercise.

Back to real life. You are on the project team working on the decision-making for an autonomous vehicle. What criteria do you use for these decisions? Who is going to sign them off? What are the legal constraints? Is the company’s insurer happy? Will you be prepared to stand up in court to defend your decisions when it is your car in the inevitable lawsuit? And all of this will vary with the legal jurisdiction in which the car is manufactured and in which it is used.
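
To see why those questions bite, it helps to look at how crude any implementable answer turns out to be. The toy sketch below (all names, harm estimates, and weights are invented for illustration, not drawn from any real vehicle) reduces the swerve-or-brake dilemma to an explicit cost function – and that is exactly the problem: someone has to choose those numbers, and someone has to sign them off:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: float        # crude expected-harm estimates -
    expected_serious_injuries: float  # invented, for illustration only

def cost(outcome: Outcome, w_fatality: float = 10.0, w_injury: float = 3.0) -> float:
    """Lower is 'better' - but who approved these weights, and who will
    defend them in court?"""
    return (w_fatality * outcome.expected_fatalities
            + w_injury * outcome.expected_serious_injuries)

brake = Outcome("brake hard into the stationary truck", 0.4, 1.5)
swerve = Outcome("swerve and risk hitting the pedestrian", 0.6, 0.2)

chosen = min((brake, swerve), key=cost)
print(f"Toy policy chooses: {chosen.description}")
```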

Building bias into machine learning

Some machine-learning systems start with sets of data and rules that are used to train them; they then build on experience of the real world to develop more sophisticated and complex behaviour. But if the data and rules are incomplete in some way – or, worse, have inbuilt bias (not necessarily deliberate) – then the resultant systems may reflect or even exaggerate that bias. For example, if a certain group of people is under-represented in the starting set (say, women with deep voices for a voice-recognition system), then the system might class them as men. That is a fairly harmless example, but a bigger difficulty lies in recognising the cultural biases of the people developing the data. These may be intrinsic and almost invisible within the group developing the software – software engineers, probably – but people from a different world (I was tempted to say the real world) may easily recognise them. One of the takeaways I got from the Google employee’s rant about women was that he drew a conclusion that seemed to him completely logical from what one might regard as insufficient data, and, because of that logic, he saw no reason not to state his views publicly, without any consideration of how they might appear to others.
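
The deep-voice example is easy to demonstrate. The sketch below (illustrative only – the pitch values and class sizes are invented, and it needs numpy and scikit-learn) trains a toy classifier on synthetic data in which deep-voiced women are deliberately scarce; the misclassification falls straight out of the training set, with no malice anywhere in the code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: 'men' centred around 120 Hz, 'women' around 210 Hz,
# with deep-voiced women (around 150 Hz) almost absent - the built-in bias.
men = rng.normal(120, 20, size=(500, 1))
women = rng.normal(210, 20, size=(495, 1))
deep_voiced_women = rng.normal(150, 10, size=(5, 1))  # deliberately under-represented

X = np.vstack([men, women, deep_voiced_women])
y = np.array([0] * 500 + [1] * 500)  # 0 = labelled male, 1 = labelled female

clf = LogisticRegression().fit(X, y)

# A deep-voiced woman at 150 Hz is confidently classed as male, purely
# because the training set contained almost no one like her.
print(clf.predict([[150.0]]))                  # -> [0], i.e. 'male'
print(clf.predict_proba([[150.0]]).round(2))
```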

Another issue with machine learning is acting on data without considering the social context. My favourite example is the American store Target. Its research team developed an analysis that tracked increased purchases of things like unscented body lotions and certain diet supplements and realised that these could be linked with pregnancy; after some further work, they were able to predict due dates within a fairly narrow window. Target used the tracking data to mail purchasers pregnancy-specific offers. According to a story in the New York Times about five years ago, a Target store manager got an angry call from a man complaining that the store was bombarding his live-at-home teenage daughter with coupons for baby clothes and the like. The manager apologised, checked the records, and saw that, indeed, this was happening. A few days later, he rang the customer to make sure everything was fine, only to have the father apologise. He hadn’t checked with his daughter before sounding off, and he was, indeed, about to become a grandfather.
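
Stripped of the statistics, the mechanism is easy to caricature. The sketch below is a hypothetical, hugely simplified stand-in for that kind of model – the products, weights, and threshold are invented, not Target’s, which reportedly scored a couple of dozen products against purchase history – but it shows how a purely mechanical score can trigger a mailing with no human judgement anywhere in the loop:

```python
# Invented signal products and weights - a caricature, not Target's model.
PREGNANCY_SIGNAL_WEIGHTS = {
    "unscented body lotion": 0.30,
    "calcium supplement": 0.25,
    "zinc supplement": 0.20,
    "large tote bag": 0.10,
}

def pregnancy_score(basket_history: list[str]) -> float:
    """Sum the (invented) weights of any signal products in recent baskets."""
    seen = set(basket_history)
    return sum(w for product, w in PREGNANCY_SIGNAL_WEIGHTS.items() if product in seen)

history = ["unscented body lotion", "calcium supplement", "shampoo"]
if pregnancy_score(history) > 0.5:  # threshold invented too
    print("flagged for pregnancy-specific mailings - no human check in the loop")
```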

While this is relatively harmless, one can see that acting on similar information in a medical context without checking could have much more serious consequences, which is why there has been some debate in medical circles on these issues.

Abandoning the user

Jim Turley, at the end of August, wrote “Help! My Product Is Suing Me! What’s Your Recourse When a Product Changes Features Under Your Nose?” It looks at the issues raised when manufacturers make changes that alter the way in which their products work. This is not about the lock manufacturer who bricked several hundred installed locks by sending an update for the wrong version of the software, but about where you stand when features that you have come to rely on are deleted. It isn’t simple.

What responsibility does a manufacturer of hardware and software have to continue to support all features, and for how long can that support reasonably continue?

Twits

And, since I don’t live in the United States, I won’t make any comment on a President who seems to govern by Twitter in the early hours of the morning, although I notice there is a case going through the courts as to whether he can legally block followers.

Pulling it together

One thing that emerges from all these events is that technology is moving far faster than legislation or public opinion. This is nothing new – with the harnessing of steam, it took decades before legislation was passed to provide for the safe construction and use of boilers, and, in the meantime, many people died or were horrifically injured. In these circumstances, shouldn’t the tech world be starting a debate on these matters? Most hardware engineers are members of professional societies and are expected to conform to those societies’ codes of ethics, and these could be the starting point for such a debate. Indeed, there are people who are approaching the problem, even if only sideways – such as those who are addressing security and safety. But there is no joined-up thinking yet. And, while most politicians are woefully ignorant on these matters, as are many commentators, that doesn’t seem to stop them spouting the most arrant nonsense. Senior management in many companies also seem ignorant of these issues – how does the engineering team change that?

I am sorry I have no answers, just a very strong concern that we are sleepwalking into a world where decisions are being made without a full understanding of their consequences.

5 thoughts on “Where Does an Engineer’s Responsibility End?”

  1. While it’s high time that this subject is discussed, one should not forget that the full responsibility lies with management. Real engineers take their job seriously but are often forced by management (time and cost pressure) to cut corners. Does management also spend the resources to train the engineers as they should have been trained? Can one write software with knowing about safety engineering? How many have been trained on formal methods? How much of the free open source software is really trustworthy, read certifiable? The list is much longer. My assessment is that safety-critical software is 2/3 code that takes care of potential faults, and only 1/3 core functionality. Often that 2/3 might take 10 times more effort to get right. Only if management understands this and makes sure that this is the way it is done, then and only then, can one start to blame the engineer who translates requirements (assuming these were not ill-defined as well) into code. Note that such good practices are being applied, e.g., in the aviation industry. Unfortunately, the same can’t be said of most other industries. The more consumer-oriented the products are, the worse the practice. When will we start to certify autonomously driving cars?

  2. Of course, I meant without knowing about safety engineering? Who is the engineer that implemented this comment feature without the functionality to correct typos? 🙂
