This paper aims to introduce a crowd-based method for theorizing. The purpose is not to produce a scientific theory; rather, it is to produce a model that may challenge current scientific theories or guide research into new phenomena.
This paper describes a case study of theorizing using a crowd-based method. The first section of the paper reviews what is known about crowdsourcing, crowd science and the aggregation of non-expert views. The second section details the case study. The third section analyses the aggregation. Finally, the fourth section elaborates the conclusions, limitations and future research.
This paper assesses the extent to which the crowd-based method produces results similar to theories tested and published by experts.
From a theoretical perspective, this study provides evidence supporting the research agenda associated with crowd science. The main limitation of this study is that the crowd-generated research models and the expert research models are compared only in terms of their graph structure. Moreover, some academics may argue that theory building depends on an academic heritage.
This paper exemplifies how to obtain an expert-level research model by aggregating the views of non-experts.
This study is particularly important for institutions with limited access to costly databases, labs and researchers.
Previous research suggests that a collective of individuals may help to conduct all the stages of a research endeavour. Nevertheless, no formal method for theorizing based on the aggregation of non-expert views has existed. This paper provides such a method and evidence of its practical implications.
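The aggregation idea described above can be illustrated with a minimal sketch. This is not the paper's actual procedure; it assumes (hypothetically) that each participant's view is elicited as a causal map, encoded as a set of directed edges, that crowd edges are retained by majority endorsement, and that the crowd model is compared with an expert model by edge-set overlap:

```python
def aggregate_causal_maps(maps, threshold=0.5):
    """Keep edges endorsed by at least `threshold` of participants.

    `maps` is a list of sets of (cause, effect) tuples, one set per
    participant. Returns the aggregated edge set.
    """
    counts = {}
    for participant_map in maps:
        for edge in participant_map:
            counts[edge] = counts.get(edge, 0) + 1
    n = len(maps)
    return {edge for edge, c in counts.items() if c / n >= threshold}


def edge_jaccard(graph_a, graph_b):
    """Jaccard similarity between two edge sets (1.0 = identical)."""
    if not graph_a and not graph_b:
        return 1.0
    return len(graph_a & graph_b) / len(graph_a | graph_b)


# Hypothetical causal maps from three non-expert participants.
maps = [
    {("ease of use", "usefulness"), ("usefulness", "intention")},
    {("ease of use", "usefulness"), ("usefulness", "intention"),
     ("enjoyment", "intention")},
    {("usefulness", "intention")},
]

crowd_model = aggregate_causal_maps(maps, threshold=0.5)
expert_model = {("ease of use", "usefulness"), ("usefulness", "intention")}
similarity = edge_jaccard(crowd_model, expert_model)  # 1.0: majority edges match
```

The threshold and the Jaccard measure are illustrative choices; other aggregation rules (e.g. approval voting or Delphi-style rounds, as in the literature the paper cites) would slot into the same structure.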
The author wants to thank Professor Christian Wagner for all the meaningful guidance and advice. The author also wants to thank Professor Carlos Jimenez for providing the space and access to participants during the data collection.
Octavio González Aguilar. Published in International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode