Journal Home > Volume 2, Issue 4

Apologos: A Lightweight Design Method for Sociotechnical Inquiry

Luke Stark
Faculty of Information and Media Studies, Western University, London, N6A 0A2, Canada

Abstract

While scholars involved in studying the ethics and politics flowing from digital information and communication systems have sought to impact the design and deployment of digital technologies, the fast pace and iterative tempo of technical development in these contexts, and the lack of structured engagement with sociotechnical questions, have been major barriers to ensuring values are considered explicitly in the R&D process. Here I introduce Apologos, a lightweight design methodology informed by the author’s experience of the challenges and opportunities of interdisciplinary collaboration between computational and social sciences over a five-year period. Apologos, which is inspired by “design apologetics”, is intended as an initial mechanism to introduce technologists to the process of considering how human values impact the digital design process.

Keywords:

values in design, value sensitive design (VSD), artificial intelligence, Values@Play, design methods, sociotechnical, ethics, values
Received: 20 May 2021; Revised: 20 November 2021; Accepted: 25 November 2021; Published: 30 January 2022; Issue date: December 2021
References (74)
1
L. Winner, Do artifacts have politics? in The Whale and the Reactor: A Search for Limits in an Age of High Technology, L. Winner, ed. Chicago, IL, USA: University of Chicago Press, 1986, pp. 19–39.
2

B. Friedman and H. Nissenbaum, Bias in computer systems, ACM Transactions on Information Systems, vol. 14, no. 3, pp. 330–347, 1996.

3
C. O’Neil, Weapons of Math Destruction. New York, NY, USA: Broadway Books, 2017.
4
A. Selbst, S. Friedler, D. Boyd, S. Venkatasubramanian, and J. Vertesi, Fairness and abstraction in sociotechnical systems, in Proc. the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 2019, pp. 59–68, doi: 10.1145/3287560.3287598.
5

B. Pfaffenberger, “If I want it, it’s OK”: Usenet and the (outer) limits of free speech, The Information Society, vol. 12, no. 4, pp. 365–386, 1996.

6

L. Suchman, J. Blomberg, J. E. Orr, and R. Trigg, Reconstructing technologies as social practice, American Behavioral Scientist, vol. 43, no. 3, pp. 392–408, 1999.

7

B. Friedman, P. H. Kahn Jr., J. Hagman, R. L. Severson, and B. Gill, The watcher and the watched: Social judgments about privacy in a public place, Human-Computer Interaction, vol. 21, no. 2, pp. 235–272, 2006.

8

K. Shilton, Engaging values despite neutrality, Science, Technology, & Human Values, vol. 43, no. 2, pp. 247–269, 2018.

9

J. Gaboury, Becoming NULL: Queer relations in the excluded middle, Women & Performance: A Journal of Feminist Theory, vol. 28, no. 2, pp. 143–158, 2018.

10
M. K. Scheuerman, E. Denton, and A. Hanna, Do datasets have politics? Disciplinary values in computer vision dataset development, Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–37, 2021, doi: 10.1145/3476058.
11

R. Benjamin, Assessing risk, automating racism, Science, vol. 366, no. 6464, pp. 421–422, 2019.

12
S. U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY, USA: New York University Press, 2018, doi: 10.2307/j.ctt1pwt9w5.
13
J. Buolamwini and T. Gebru, Gender shades: Intersectional accuracy disparities in commercial gender classification, in Proc. the 1st Conference on Fairness, Accountability and Transparency, New York, NY, USA, 2018, pp. 77–91.
14

S. Barocas and A. D. Selbst, Big data’s disparate impact, California Law Review, vol. 104, pp. 671–732, 2016.

15
A. L. Hoffmann, Data violence and how bad engineering choices can damage society, Medium, https://medium.com/s/story/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4, 2018.
16
A. L. Hoffmann and L. Stark, Hard feelings — inside out, Silicon Valley, and why technologizing emotion and memory is a dangerous idea, https://lareviewofbooks.org/essay/hard-feelings-inside-out-silicon-valley-and-why-technologizing-emotion-and-memory-is-a-dangerous-idea, 2015.
17
M. M. Malik, A hierarchy of limitations in machine learning, https://arxiv.org/pdf/2002.05193.pdf, 2020.
18
M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy, Mitigating bias in algorithmic employment screening: Evaluating claims and practices, arXiv preprint arXiv:1906.09208, 2019, doi: 10.2139/ssrn.3408010.
19
B. Fish and L. Stark, Reflexive design for fairness and other human values in formal models, in Proc. 2021 AAAI/ACM Conference on AI, Ethics and Society, Virtual Event, USA, 2021, pp. 89–99, doi: 10.1145/3461702.3462518.
20
V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY, USA: St. Martin’s Press, 2018.
21
D. Greene, A. L. Hoffmann, and L. Stark, Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning, https://hdl.handle.net/10125/59651, 2019, doi: 10.24251/HICSS.2019.258.
22

A. Rességuier and R. Rodrigues, AI ethics should not remain toothless! A call to bring back the teeth of ethics, Big Data & Society, vol. 7, no. 2, p. 205395172094254, 2020.

23

S. Mohamed, M.-T. Png, and W. Isaac, Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence, Philosophy & Technology, vol. 33, pp. 659–684, 2020.

24
B. Green, The contestation of tech ethics: A sociotechnical approach to technology ethics in practice, Journal of Social Computing, doi: 10.23919/JSC.2021.0018.
25
M. A. Madaio, L. Stark, J. W. Vaughan, and H. Wallach, Co-designing checklists to understand organizational challenges and opportunities around fairness in AI, in Proc. the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 2020, pp. 1–14, doi: 10.1145/3313831.3376445.
26

N. Manders-Huits and M. Zimmer, Values and pragmatic action: The challenges of introducing ethical intelligence in technical design communities, International Review of Information Ethics, pp. 1–8, 2009.

27

I. van de Poel, An ethical framework for evaluating experimental technology, Science and Engineering Ethics, vol. 22, no. 3, pp. 667–686, 2015.

28
J. van den Hoven, P. E. Vermaas, and I. van de Poel, Design for values: An introduction, in Handbook of Ethics, Values, and Technological Design, J. van den Hoven, P. E. Vermaas, and I. van de Poel, eds. Dordrecht, the Netherlands: Springer, 2015, pp. 1–7, doi: 10.1007/978-94-007-6970-0_40.
29

I. van de Poel, J. N. Fahlquist, N. Doorn, S. Zwart, and L. Royakkers, The problem of many hands: Climate change as an example, Science and Engineering Ethics, vol. 18, no. 1, pp. 49–67, 2011.

30
S. Gürses and J. van Hoboken, Privacy after the agile turn, in Cambridge Handbook of Consumer Privacy, J. Polonetsky, O. Tene, and E. Selinger, eds. Cambridge, UK: Cambridge University Press, 2018, pp. 579–601, doi: 10.1017/9781316831960.032.
31

E. Donahoe and M. M. Metzger, Artificial intelligence and human rights, Journal of Democracy, vol. 30, no. 2, pp. 115–126, 2019.

32
Y. Stevens and A. Brandusescu, Weak privacy, weak procurement: The state of facial recognition in Canada, SSRN Electronic Journal, doi: 10.2139/ssrn.3857355.
33
N. Shedroff and C. Noessel, Make It So: Interaction Design Lessons from Science Fiction. New York, NY, USA: Rosenfeld Books, 2012, doi: 10.1145/2254556.2254561.
34

B. Friedman, D. G. Hendry, and A. Borning, A survey of value sensitive design methods, Foundations and Trends® in Human–Computer Interaction, vol. 11, no. 2, pp. 63–125, 2017.

35
M. Flanagan, D. C. Howe, and H. Nissenbaum, Embodying values in technology: Theory and practice, in Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert, eds. Cambridge, UK: Cambridge University Press, 2008, pp. 322–353, doi: 10.1017/CBO9780511498725.017.
36
M. Flanagan and H. Nissenbaum, Values at Play in Digital Games. Cambridge, MA, USA: The MIT Press, 2014, doi: 10.7551/mitpress/9016.001.0001.
37

K. Shilton, Values and ethics in human-computer interaction, Foundations and Trends® in Human–Computer Interaction, vol. 12, no. 2, pp. 107–171, 2018.

38
R. Y. Wong and D. K. Mulligan, Bringing design to the privacy table, in Proc. the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2019, pp. 1–17, doi: 10.1145/3290605.3300492.
39
A. Dunne and F. Raby, Towards a critical design: Consuming monsters: Big, perfect, infectious, http://dunneandraby.co.uk/content/bydandr/42/0, 2005.
40
S. Lawson, B. Kirman, C. Linehan, T. Feltwell, and L. Hopkins, Problematising upstream technology through speculative design, in Proc. the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 2015, pp. 2663–2672, doi: 10.1145/2702123.2702260.
41
M. Malik and M. M. Malik, Critical technical awakenings, Journal of Social Computing, doi: 10.23919/JSC.2021.0035.
42

N. JafariNaimi, L. Nathan, and I. Hargraves, Values as hypotheses: Design, inquiry, and the service of values, Design Issues, vol. 31, no. 4, pp. 91–104, 2015.

43

C. A. L. Dantec and C. DiSalvo, Infrastructuring and the formation of publics in participatory design, Social Studies of Science, vol. 43, no. 2, pp. 241–264, 2013.

44

K. M. MacQueen, N. T. Eley, M. Frick, L. R. Mingote, A. Chou, S. S. Seidel, S. Hannah, and C. Hamilton, Developing a framework for evaluating ethical outcomes of good participatory practices in TB clinical drug trials, Journal of Empirical Research on Human Research Ethics, vol. 11, no. 3, pp. 203–213, 2016.

45
D. J. Mir, Y. Shvartzshnaider, and M. Latonero, It takes a village: A community based participatory framework for privacy design, presented at 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), London, UK, 2018, doi: 10.1109/EuroSPW.2018.00022.
46

K. Shilton, Values levers: Building ethics into design, Science, Technology, & Human Values, vol. 38, no. 3, pp. 374–397, 2013.

47
E. P. S. Baumer and M. S. Silberman, When the implication is not to design (technology), in Proc. the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, Canada, 2011, pp. 2271–2274, doi: 10.1145/1978942.1979275.
48
B. Friedman and D. G. Hendry, Value Sensitive Design. Cambridge, MA, USA: The MIT Press, 2019, doi: 10.7551/mitpress/7585.001.0001.
49
D. D. Clark, Designing an Internet. Cambridge, MA, USA: The MIT Press, 2018, doi: 10.7551/mitpress/11373.001.0001.
50

D. D. Clark, The contingent internet, Daedalus, vol. 145, no. 1, pp. 9–17, 2016.

51

D. D. Clark, The design philosophy of the DARPA Internet Protocols, ACM SIGCOMM Computer Communication Review, vol. 25, no. 1, pp. 102–111, 1995.

52

D. D. Clark, J. Wroclawski, K. R. Sollins, and R. Braden, Tussle in cyberspace: Defining tomorrow’s Internet, IEEE/ACM Transactions on Networking, vol. 13, no. 3, pp. 462–475, 2005.

53

S. Braman, The interpenetration of technical and legal decision-making for the internet, Information, Communication & Society, vol. 13, no. 3, pp. 309–324, 2010.

54
H. Abelson and G. J. Sussman, Structure and Interpretation of Computer Programs. Cambridge, MA, USA: The MIT Press, 1996.
55

E. Fisher, M. O’Rourke, R. Evans, E. B. Kennedy, M. E. Gorman, and T. P. Seager, Mapping the integrative field: Taking stock of socio-technical collaborations, Journal of Responsible Innovation, vol. 2, no. 1, pp. 39–61, 2015.

56

A. S. Balmer, J. Calvert, C. Marris, S. Molyneux-Hodgson, E. Frow, M. Kearnes, K. Bulpin, P. Schyfter, A. Mackenzie, and P. Martin, Taking roles in interdisciplinary collaborations: Reflections on working in Post-ELSI spaces in the UK synthetic biology community, Science and Technology Studies, vol. 28, no. 3, pp. 3–25, 2015.

57
F. Callard and D. Fitzgerald, Rethinking Interdisciplinarity Across the Social Sciences and Neurosciences. London, UK: Palgrave Macmillan, 2015, doi: 10.1057/9781137407962.
58

A. Quan-Haase, J. L. Suarez, and D. M. Brown, Collaborating, connecting, and clustering in the humanities, American Behavioral Scientist, vol. 59, no. 5, pp. 565–581, 2015.

59

D. Fitzgerald and F. Callard, Social science and neuroscience beyond interdisciplinarity: Experimental entanglements, Theory, Culture & Society, vol. 32, no. 1, pp. 3–32, 2015.

60

F. Callard, D. Fitzgerald, and A. Woods, Interdisciplinary collaboration in action: Tracking the signal, tracing the noise, Palgrave Communications, vol. 1, no. 1, p. 15019, 2015.

61
A. Lucero, P. Dalsgaard, K. Halskov, and J. Buur, Designing with cards, in Collaboration in Creative Design, P. Markopoulos, J.-B. Martens, J. Malins, K. Coninx, and A. Liapis, eds. Cham, Switzerland: Springer International Publishing, 2016, pp. 75–95, doi: 10.1007/978-3-319-29155-0_5.
62
B. Friedman and D. G. Hendry, The envisioning cards: A toolkit for catalyzing humanistic and technical imaginations, in Proc. the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 2012, pp. 1145–1148, doi: 10.1145/2207676.2208562.
63
A. Fedosov, M. Kitazaki, W. Odom, and M. Langheinrich, Sharing economy design cards, in Proc. the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2019, pp. 1–14, doi: 10.1145/3290605.3300375.
64
J. Knapp, J. Zeratsky, and B. Kowitz, Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. New York, NY, USA: Simon and Schuster, 2016.
65
S. Umbrello and O. Gambelin, Agile as a vehicle for values: A value sensitive design toolkit, doi: 10.13140/RG.2.2.17064.08965/1.
66
C. Fiesler, Ethical considerations for research involving (speculative) public data, Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. GROUP, pp. 1–13, 2019, doi: 10.1145/3370271.
67
R. Y. Wong and V. Khovanskaya, Speculative design in HCI: From corporate imaginations to critical orientations, in New Directions in Third Wave Human-Computer Interaction: Volume 2 – Methodologies, M. Filimowicz and V. Tzankova, eds. Cham, Switzerland: Springer International Publishing, 2018, pp. 175–202, doi: 10.1007/978-3-319-73374-6_10.
68
G. Cockton, Designing worth is worth designing, in Proc. the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, Oslo, Norway, 2006, pp. 165–174, doi: 10.1145/1182475.1182493.
69

G. Cockton, From doing to being: Bringing emotion into interaction, Interacting with Computers, vol. 14, no. 2, pp. 89–92, 2002.

70
P. Sengers, K. Boehner, S. David, and J. “Jofish” Kaye, Reflective design, in Proc. the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility, Aarhus, Denmark, 2005, pp. 49–58, doi: 10.1145/1094562.1094569.
71
C. DiSalvo, Adversarial Design. Cambridge, MA, USA: The MIT Press, 2012, doi: 10.7551/mitpress/8732.001.0001.
72
M. Sloane, E. Moss, O. Awomolo, and L. Forlano, Participation is not a design fix for machine learning, arXiv preprint arXiv:2007.02423, 2020.
73
L. Irani, Chasing Innovation: Making Entrepreneurial Citizens in Modern India. Princeton, NJ, USA: Princeton University Press, 2019, doi: 10.23943/princeton/9780691175140.001.0001.
74
L. Stark, D. Greene, and A. L. Hoffmann, Critical perspectives on governance mechanisms for AI/ML systems, in The Cultural Life of Machine Learning, J. Roberge and M. Castelle, eds. Cham, Switzerland: Palgrave Macmillan, 2021, pp. 257–280, doi: 10.1007/978-3-030-56286-1_9.

Copyright

© The author(s) 2021

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Reprints and permission requests may be sought directly from the editorial office.
