
Where Does an Engineer’s Responsibility End?

Whom Can You Blame for Mis-used Technology?

EEJournal, as its name implies, concentrates on the bits and bytes and the chips and boards of the electronics industry. But there are times when it seems like a good idea to look at wider issues, and this may be one of them. What has triggered it is a series of news stories demonstrating technology failings with broader consequences.

Botnets and passwords

An engineer created a cheap digital video recorder (DVR) designed to be connected to surveillance cameras, and so it was given an IP address. He was foresighted enough to realise that it would be a good idea to give it password protection, so the DVR was shipped with a default password and access by telnet. It was sold to, and branded by, a number of companies, and then distributed through a mix of channels to installers and then to users. Somewhere along the way, the knowledge that there is a password, and that it needs resetting by the installer and/or the end user, got lost. Some bad guys discovered this and used the default password to gain access from the Internet. They weren’t interested in the normal operations of the DVR but wanted it as a node in a network of other devices that they controlled. Once in the network, the DVRs were initially used to identify further potential nodes and bring them into the network. Then the nodes simultaneously sent messages to specific web sites, creating a Distributed Denial of Service (DDoS) attack, which overwhelmed the targets. Now, who was responsible for this – the Mirai botnet attack? Obviously, the perpetrators carry the bulk of the blame, but who else in the chain of design, manufacture, distribution, and use should share in it?

As a postscript, PenTest, a UK security consultancy, recently discovered that a DVR supplier had closed off telnet access, but PenTest were able to easily re-open it.
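One common mitigation for the default-password chain described above is firmware that refuses to expose its remote interface at all until the factory credential has been replaced. A minimal sketch of that idea, with entirely hypothetical names and logic (not any real vendor’s firmware):

```python
# Hypothetical sketch: keep remote access disabled until the installer
# replaces the factory default password.

import hashlib

FACTORY_HASH = hashlib.sha256(b"admin").hexdigest()  # shipped default

class Device:
    def __init__(self):
        self.password_hash = FACTORY_HASH
        self.telnet_enabled = False  # remote access off out of the box

    def set_password(self, new_password: str) -> None:
        new_hash = hashlib.sha256(new_password.encode()).hexdigest()
        if new_hash == FACTORY_HASH:
            raise ValueError("new password must differ from the factory default")
        self.password_hash = new_hash
        self.telnet_enabled = True  # only now expose the remote interface

    def login(self, password: str) -> bool:
        if not self.telnet_enabled:
            return False  # default-credential scans fail here
        return hashlib.sha256(password.encode()).hexdigest() == self.password_hash
```

Had something like this been in the chain, a device that was never properly installed would simply be unreachable, rather than reachable with a well-known password.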

Death on the Internet

A young woman in a Paris suburb live-streamed, on social media, her suicide under a train. While she was doing so, some of the other members of the group were urging her on, often in the most offensive way. Similar events have happened elsewhere. When that particular live-streaming service was being created, should the specification have included a monitoring service for events such as this?

Free speech vs neo-Nazis

The Daily Stormer, an extreme neo-Nazi web site, has, following its unpleasant comments after the Charlottesville events, been kicked off several hosts and dropped by Cloudflare, which had protected the Daily Stormer’s domain from DDoS attacks. While this was widely applauded, some questioned the decision. One of them was the CEO of Cloudflare himself. Matthew Prince wrote, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” He also wrote, “My rationale for making this decision was simple: the people behind the Daily Stormer are assholes, and I’d had enough.” His concerns were echoed by the Electronic Frontier Foundation (EFF), which calls itself “the leading non-profit organization defending civil liberties in the digital world.” Most of the time the EFF fights for the rights of individuals and small companies against large companies and governments. Its statement says, “we must also recognise that, on the Internet, any tactic used now to silence neo-Nazis will soon be used against others.”

Whom can we trust to police the net?

Self-driving cars

I would have said that a great deal of ink has been spilt on the debate over decision-making by autonomous vehicles, though perhaps it is more accurate to say that many electrons have been mangled. Either way, the issue is not going to go away. One simple version is: should the car favour its passengers over other people? Should it hit a large object – say a stationary truck – which would kill or badly injure the passengers in the car, or swerve to avoid the object and hit, and possibly kill, a pedestrian? This idea is explored in detail by the Moral Machine project, where you get to make decisions for multiple scenarios. While fascinating, it often gives you far more information than would be available in real life – which rather negates the experience.

Back to real life. You are the project team working on the decision-making for an autonomous vehicle. What criteria do you use for these decisions? Who is going to sign them off? What are the legal constraints? Is the company’s insurer happy? Will you be prepared to stand up in a court to defend your decision if it is your car in the inevitable court case? And all this will vary depending on the legal jurisdiction of where the car is manufactured and where it is used.

Building bias into machine learning

Some machine learning systems start with sets of data and rules that are used to train them, building on experience of the real world to develop more sophisticated and complex behaviour. But if the data and rules are incomplete in some way – or, worse, have inbuilt bias (not necessarily deliberate) – then the resultant systems may reflect or exaggerate that bias. For example, if a certain group of people is under-represented in the starting set (say women with deep voices, for a voice recognition system), then the system might class them as men. That is a fairly harmless example, but a major difficulty lies in recognising cultural biases in the people developing the data. These may be intrinsic and almost unrecognisable within the group developing the software – software engineers, probably – but people from a different world (I was tempted to say the real world) may easily recognise that bias. One of the takeaways I got from the Google employee’s rant about women was that he saw something as completely logical, derived from what one might regard as insufficient data, and, because of that logic, he saw no reason not to state his views publicly, without any consideration of how it might appear to others.
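The deep-voice example can be made concrete with a toy sketch. Everything here is synthetic – invented frequency ranges and sample counts, not any real recogniser – but it shows the mechanism: a trivial classifier trained on a skewed sample ends up labelling deep-voiced women as men, because getting the plentiful cases right outweighs the rare ones.

```python
# Toy illustration (entirely synthetic data): a one-dimensional "decision
# stump" trained to separate voices by fundamental frequency in Hz.

import random

random.seed(0)

# Hypothetical training set. Deep-voiced women (165-200 Hz) are
# deliberately under-represented compared with other women.
train  = [(random.uniform(85, 180),  "M") for _ in range(500)]
train += [(random.uniform(200, 255), "F") for _ in range(480)]
train += [(random.uniform(165, 200), "F") for _ in range(20)]   # the rare cases

# "Learn" the pitch threshold that minimises training error:
# everything below the threshold is labelled male.
def errors(t):
    return sum((p < t) != (label == "M") for p, label in train)

threshold = min((p for p, _ in train), key=errors)

def classify(pitch):
    return "M" if pitch < threshold else "F"

# The learned cut-off settles near the top of the male range (~180 Hz),
# so a deep-voiced woman at 175 Hz is labelled "M".
print(round(threshold), classify(175))
```

Nothing here is malicious; the training procedure did exactly what it was asked to do on the data it was given.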

Another issue with machine learning is acting on data without considering wider social issues. My favourite example is the American retailer Target. Its research team developed an analysis that tracked increased purchases of items like unscented body lotions and certain diet supplements and realised that these could be linked with pregnancy; after some further research, they were able to predict due dates within a fairly narrow window. Target used the tracking data to mail the purchasers pregnancy-specific offers. According to a story in the New York Times about five years ago, a Target store manager got an angry call from a man complaining that the store was bombarding his live-at-home teenage daughter with coupons for baby clothes and the like. The manager apologised, checked the records, and saw that, indeed, this was happening. A few days later, he rang the customer to make sure everything was fine, only to have the father apologise. He hadn’t checked with the daughter before sounding off, and he was, indeed, about to become a grandfather.

While this is relatively harmless, one can see that acting on similar information in a medical context without checking could have much more serious consequences, which is why there has been some debate in medical circles on these issues.

Abandoning the user

Jim Turley, at the end of August, wrote, “Help! My Product Is Suing Me! What’s Your Recourse When a Product Changes Features Under Your Nose?” This discusses the issues raised when manufacturers make changes that alter the way in which their product works. It is not about the lock manufacturer who bricked several hundred installed locks by sending an update for the wrong version of the software, but instead about your status when features that you have come to rely on are deleted. It isn’t simple.

What responsibility does a manufacturer of hardware and software have to continue to support all features, and for how long can that support logically continue?

Twits

And, since I don’t live in the United States, I won’t make any comment on a President who seems to govern by Twitter in the early hours of the morning, although I notice there is a case going through the courts as to whether he can legally block followers.

Pulling it together

One thing that emerges from all these events is that technology is moving far faster than legislation or public opinion. This is nothing new – with the harnessing of steam, it took decades before legislation was passed to provide for safe construction and use of boilers, and, in the meantime, many people died or were horrifically injured. In these circumstances, shouldn’t the tech world be starting a debate on these matters? Most hardware engineers are members of professional societies and are expected to conform to the societies’ codes of ethics, and these could be the starting point for a debate. Indeed, there are people who are approaching the problem, even if only sideways – such as those who are addressing security and safety. But there is no joined-up thinking yet. And, while most politicians are woefully ignorant on these matters, as are many commentators, it doesn’t seem to stop them spouting the most arrant nonsense. Senior management in many companies also seem to be ignorant of these issues – how does the engineering team change this?

I am sorry I have no answers, just a very strong concern that we are sleepwalking into a world where decisions are being made without a full understanding of their consequences.

5 thoughts on “Where Does an Engineer’s Responsibility End?”

  1. While it’s high time that this subject is discussed, one should not forget that the full responsibility lies with management. Real engineers take their job seriously but are often forced by management (time and cost pressure) to cut corners. Does management also spend the resources to train the engineers as they should have been trained? Can one write software with knowing about safety engineering? How many have been trained on formal methods? How much of the free open source software is really trustworthy, read certifiable? The list is much longer. My assessment is that safety critical software is 2/3 code that takes care of potential faults and only 1/3 core functionality. Often that 2/3 might take 10 times more effort to get right. Only if management understands this, and makes sure that this is the way it is done, can one start to blame the engineer who translates requirements (assuming these were not ill-defined as well) into code. Note that such good practices are being applied, e.g., in the aviation industry. Unfortunately, this can’t be said of most other industries. The more consumer oriented the products are, the worse the practice. When will we start to certify autonomously driving cars?

  2. Of course, I meant without knowing about safety engineering. Who is the engineer that implemented this comment feature without the functionality to correct typos? 🙂

