Volume 5, Issue 3


Crowd modelling: aggregating non-expert views as a method for theorizing

Octavio González Aguilar
Felizmente Verde, Tepic, Mexico

Abstract

Purpose

This paper aims to introduce a crowd-based method for theorizing. The purpose is not to produce a scientific theory; rather, it is to produce a model that may challenge current scientific theories or steer research toward new phenomena.

Design/methodology/approach

This paper describes a case study of theorizing with a crowd-based method. The first section reviews what is known about crowdsourcing, crowd science and the aggregation of non-expert views. The second section details the case study. The third section analyses the aggregation. Finally, the fourth section presents the conclusions, limitations and future research.

Findings

This paper assesses the extent to which the crowd-based method produces results similar to theories tested and published by experts.

Research limitations/implications

From a theoretical perspective, this study provides evidence supporting the research agenda associated with crowd science. The main limitation of this study is that the crowd-generated research models and the expert research models are compared only in terms of their graphs. Moreover, some academics may argue that theory building is rooted in an academic heritage.

Practical implications

This paper exemplifies how to obtain an expert-level research model by aggregating the views of non-experts.

Social implications

This study is particularly important for institutions with limited access to costly databases, labs and researchers.

Originality/value

Previous research suggested that a collective of individuals may help conduct all the stages of a research endeavour. However, no formal method for theorizing based on the aggregation of non-expert views has been available. This paper provides such a method and evidence of its practical implications.

Keywords: Crowdsourcing, Crowd science, Citizen science, Crowdsourced research, Theory building, Theorizing

Publication history

Received: 17 April 2021
Revised: 29 July 2021
Accepted: 31 July 2021
Published: 03 October 2021
Issue date: November 2021

Copyright

© The author(s)

Acknowledgements

The author wants to thank Professor Christian Wagner for all the meaningful guidance and advice. The author also wants to thank Professor Carlos Jimenez for providing the space and access to participants during the data collection.

Rights and permissions

Octavio González Aguilar. Published in International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
