
Exploring Ethics in Technology

Last week, I attended a lecture on Prognostics and Health Management, a field that studies how to detect and predict failures in mechanical systems and components. It is essentially IoT for failure detection. I posed a question about security concerns, to which the lecturer responded, “That will be considered during commercialization; we just focus on perfecting the design.” I wasn’t satisfied, and that got me thinking about the philosophy and ethics of technology, a vital and often ignored dimension of technology. According to the Stanford Encyclopedia of Philosophy, there are four main recurrent themes in the ethics of technology: moral agency, responsibility, design, and technological risks.

In this era of digital transformation and hyper-connectivity, technologists must increasingly consider philosophy and ethics. Moral agency asks whether technology is value-laden or value-neutral. I have often viewed technology as neutral, but a deeper look reveals that technology is always designed with a purpose, one that imbues it with values. These purposeful designs make technologies fit for some functions and less effective for others. We should pause here to acknowledge how technologies are converging in design and what that means for moral agency. It is important to consider this at every point during the technology development phase.

Responsibility is a classic ethical concern for technology, especially when we think of autonomous systems. Who will bear responsibility for the consequences of actions taken by autonomous systems? Here, I think we need to take a step back and redefine “autonomous”. Humans are autonomous, but are “autonomous” systems acting with a will of their own?

There is consensus that technologies are more malleable during design than in use, hence the need for ethical considerations during design. “Security by design” is a principle that prioritizes security during the design phase, and it is being adopted by many organizations. There are always trade-offs, but during design we get to decide what those are. As you can tell, this also poses a philosophical problem: which trade-offs are justifiable?

Technological risks make up another classic ethical issue. Here, the main questions are, “What is safe enough?” and “What makes a risk (un)acceptable?” Risk assessment is a demanding field, one that requires carefully balancing several dichotomies and “multichotomies”. To exemplify the point: there are no perfectly safe systems, so what level of safety is acceptable, and who gets to make that decision for whom?

There are other layers to discussions of ethics in technology, such as historical and political approaches, and ontological and epistemological approaches, to name a few. But that’s beside the point. We live in a world where technology drives development, inequality, social reform, and innovation, among other things. Shouldn’t we be thinking about these issues holistically? I think we should. So, when researchers are developing technology in their labs, maybe there should be a philosopher next to them during the entire process, not just at commercialization.

Notes

Franssen, Maarten, Gert-Jan Lokhorst, and Ibo van de Poel, “Philosophy of Technology”, The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2015/entries/technology/>.
