Why study Ethics in the context of Computer Science and Engineering?

Ethics is important, especially if you work in technology.

The study of ethics is sometimes dismissed as common sense. Some argue it is common sense that studying ethics is worthwhile; others argue that what is ethically permissible is itself common sense. I tend to agree with the former.

There are plenty of examples where two ethical paradigms disagree (Kant’s duty ethics and Mill’s utilitarian ethics famously tend to disagree), and so it is important for one to explore the field of ethics at a deeply personal level.

So if ethics is important, why is it especially important in the context of computer science? One of the most compelling reasons is that, according to Marc Andreessen, software is eating the world. In his words, ‘we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy’. Software companies will increasingly control larger shares of the global economy, and as Uncle Ben famously says, ‘with great power comes great responsibility’.

Those who work in software companies are implicitly or explicitly granted great power, and as a result everyone in the field must grapple with some of the most difficult ethical problems of our time. I specifically want to discuss three of them: quality, data privacy, and automation.

Being ethical in ensuring quality is uniquely important in computer science. It is impossible to engineer perfection, and there are always trade-offs, so to what extent must one ensure quality in software, and how does one define quality in the first place? For contrast, in consumer packaged goods, QA is often in charge of ensuring that the product ships with the marketed specifications. When I worked at Procter & Gamble, our QA folks had to make sure that we had the right sheet count on our paper towel rolls, because we claimed a certain number of sheets on our packaging. In computer science, the stakes are often higher, less well defined, and less predictable. I say the stakes are often higher because if a software program fails, any number of catastrophic events can take place: a nuclear meltdown, a data security breach, and so on. Not only are the failures more severe, but the client or user is often unaware of how to specify the quality of the product. While someone knows that their paper towels need to be absorbent, they might not know that they need the bank information they submit online to be encrypted. Moreover, because software is so ubiquitous, it is often difficult to anticipate every potential use for the software you develop, which leads to poor-quality use in unintended applications. So how thorough must a computer scientist be in ensuring the quality of her work in order to be ethical?

Big data capability is expanding, allowing us to do more than ever before: we can track and predict everything from terrorist activity to disease outbreaks with new-found success. In his State of the Union address, President Obama put Joe Biden in charge of taking on cancer by leveraging big data. However, all of this requires the use of personal information. Data mining, big data analysis, and related fields all have to grapple with the ethical issues surrounding consent, notification, and security for personal information. Facebook and Amazon both notoriously use personal information for targeted marketing: is that wrong? Both provide a great service to the user, but at what cost? The cost of relinquishing personal privacy? Some say: so what, the user consented to Facebook’s data privacy policy. But I’m not so sure that any twenty-year-old in the western world can elect not to have a Facebook account without suffering some social setbacks. Again, all of this power over personal information comes with great responsibility, and every Facebook programmer has to make the call for herself as to whether she believes this is ethical.

Programmers call it ‘automation’, but laborers call it ‘unemployment’. From one perspective, a 5 percent cut in costs and a 10 percent increase in output sounds like a no-brainer. From the other, three months without work sounds pretty grim. You could argue that the cost savings can be passed on to the customer. You could also argue that without the automation, the company might lose business to competitors who did automate, costing everyone in the company their job. That might not make it any easier to tell your employee that they should probably update their resume. While my own views on automation tend to be positive, it is another example where the ethical decision isn’t black and white.

So software is eating the world, and software companies are at the helm. The world is becoming increasingly digital, and there are a number of unique ethical issues facing computer science. With the power placed in the hands of software companies, it is exceptionally important that technologists concern themselves with ethics.