Research project: Functionality and usability requirements for a crowdsourcing task interface that supports rich data collection and volunteer participation / A case study: The New Zealand Reading Experience Database
Researcher: Donelle McKinley
Department of Information Studies & Wai-te-ata Press Research Team
Victoria University of Wellington
Completion: Feb 2013
Crowdsourcing is “an umbrella term for a highly varied group of approaches” to outsourcing tasks traditionally performed by specific individuals to a group of people or community through an open call (Howe, 2009, p. 280). More specifically, it has been described as “harnessing online activity to aid in large-scale projects that require human cognition” (Terras, 2012, p. 175). Increasingly, academic researchers and collecting institutions are crowdsourcing to create and enhance online collections and resources more cost-effectively, enable research, and engage the wider community. Projects that invite volunteers to participate in relatively complex tasks, such as manuscript transcription, text encoding, and data collection, rely heavily on task interfaces that capture sufficiently rich information for future research, and support ease of contribution and sustained participation. In this context, requirements are the needs that a task interface must satisfy to provide value to its stakeholders, who include the target research community, other resource users, and resource contributors (Rogers, Sharp, & Preece, 2011, p. 349). Requirements are the outcome of the requirements activity, which aims to:
- Understand the users, their activities, and the context of that activity, so that the system under development can support them in achieving their goals
- Produce a set of stable requirements that form a sound basis to start designing and inviting user feedback (Rogers et al., 2011, pp. 352–353).
While the design of crowdsourcing task interfaces is often influenced by projects that have preceded them, design guidelines “must be tailored for and validated against unique requirements” (Mayhew & Follansbee, 2012, p. 946). For this reason, this research takes a case study approach, focusing on one academic research project in development: the New Zealand Reading Experience Database (NZ-RED). Wai-te-ata Press, Victoria University of Wellington, is one of four international partners collaborating on a World Reading Experience Database (World-RED) with the Open University, UK. Based on the UK project launched in 1996, the NZ-RED will collect reading experiences of New Zealanders from the nineteenth century to the present day. Volunteers will be invited to identify instances of reading in diaries, letters, biographies and memoirs, from private collections, libraries and archives, and contribute their discoveries to the online database. Collecting data about what, where, when and how people read will enable patterns to emerge, and new research questions about the history of reading to be explored (Crone, Hammond, & Towheed, 2011; Crump, 1995; Eliot, 1996; Halsey, 2008; Liebich, 2012, p. 5; Towheed, Crone, & Halsey, 2011).
Purpose of the study
Like the REDs being developed in Australia, Canada, and the Netherlands, work on the NZ-RED to date has been based on the UK-RED template. The UK-RED task interface is a one-page online form with six compulsory and thirteen optional sub-sections. Only previous versions of this form have been subjected to usability testing, and only to a limited extent, which raises the question, “How effectively and efficiently does the task interface support rich data collection and volunteer participation?” The purpose of this study is to investigate the suitability of the UK-RED task interface template for the NZ-RED, and to produce functionality and usability requirements for an NZ-RED task interface prototype, which will be developed in the next stage of the project. The requirements produced by this study will also be of value to the UK-RED project team, who are investigating a revised RED model, and to World-RED project partners, who are in the early stages of development.
To answer the main research question, “What are the functionality and usability requirements for an NZ-RED task interface that supports rich data collection and volunteer participation?”, the requirements activity will be driven by three key research questions:
- What are the needs and objectives of RED contributors?
- How efficiently and effectively does the UK-RED task interface support rich data collection and volunteer participation?
- What are some alternative approaches to task interface design that might better support rich data collection and volunteer participation?
Significance of the study
For collecting institutions, crowdsourcing “can continue a long standing tradition of volunteerism and involvement of citizens in the creation and continued development of public goods” (Owens, 2012). For academic researchers, this can extend to digital resources that support research and interpretation methods such as data visualisation, data mining and computational analysis. As Oomen and Aroyo (2011, p. 139) point out, not only can these new forms of collections usage lead to a deeper level of involvement with the collections, but these initiatives will also be of growing importance from a managerial and public relations perspective, as funding of many heritage organizations is based on their societal impact. For Humanities scholars in particular, “social research models offer one way to show relevance through involving a larger community”, at a time when Humanities’ “social contract with society is being challenged” (Rockwell, 2012, p. 151). However, research that focuses on volunteer task interface requirements is extremely limited. Accurate, unambiguous and verifiable requirements are important, as misconceptions about target users can result in an inappropriately designed user experience (IEEE, 1998; Mayhew & Follansbee, 2012, p. 952). For projects reliant on sufficient volunteer contributions, this could seriously undermine the project’s success. This study will begin to address this gap in the literature with a view to contributing to the strategic planning, development and evaluation of crowdsourced projects.
Crone, R., Hammond, M., & Towheed, S. (2011). The Reading Experience Database 1450-1945 (RED). In S. Towheed, R. Crone, & K. Halsey (Eds.), The History of Reading: A Reader (pp. 427–436). London and New York: Routledge.
Crump, M. J. (1995). The reading experience database. Library Review, 44(6), 28–29.
Eliot, S. (1996). The Reading Experience Database: Problems and possibilities. Publishing History, 39, 87–97.
Halsey, K. (2008). Reading the evidence of reading: An introduction to the Reading Experience Database, 1450-1945. Popular Narrative Media, 1(2), 123–137.
Howe, J. (2009). Crowdsourcing: why the power of the crowd is driving the future of business. New York: Three Rivers Press.
IEEE. (1998). IEEE Recommended Practice for Software Requirements Specifications. New York: Institute of Electrical and Electronics Engineers, Inc. Retrieved from http://www.math.uaa.alaska.edu/~afkjm/cs401/IEEE830.pdf
Liebich, S. (2012). Connected Readers: Reading Practices and Communities Across the British Empire c.1890-1930 (PhD). Victoria University of Wellington, Wellington, New Zealand.
Mayhew, D. J., & Follansbee, T. J. (2012). User Experience Requirements Analysis within the Usability Engineering Lifecycle. In J. A. Jacko (Ed.), The human-computer interaction handbook: Fundamentals, evolving technologies, and emerging applications (pp. 945–953). Boca Raton, FL: CRC Press.
Oomen, J., & Aroyo, L. (2011). Crowdsourcing in the cultural heritage domain: Opportunities and challenges. In C&T ’11 Proceedings of the 5th International Conference on Communities and Technologies (pp. 138–149). doi:10.1145/2103354.2103373
Owens, T. (2012, May 20). The Crowd and The Library [Weblog post]. Retrieved from http://www.trevorowens.org/2012/05/the-crowd-and-the-library/
Rockwell, G. (2012). Crowdsourcing the Humanities: Social Research and Collaboration. In M. Deegan & W. McCarty (Eds.), Collaborative Research in the Digital Humanities (pp. 135–154). Farnham: Ashgate.
Rogers, Y., Sharp, H., & Preece, J. (2011). Establishing requirements. In Interaction Design: Beyond Human-Computer Interaction (pp. 352–388). Chichester, West Sussex: Wiley. Retrieved from http://books.google.co.nz/books?id=b-v_6BeCwwQC&printsec=frontcover&source=gbs_ViewAPI&redir_esc=y#v=onepage&q&f=false
Terras, M. (2012). Present, Not Voting: Digital Humanities in the Panopticon. In D. Berry (Ed.), Understanding Digital Humanities (pp. 172–190). Hampshire, UK: Palgrave Macmillan. Retrieved from http://www.palgrave.com/products/title.aspx?pid=493310
Towheed, S., Crone, R., & Halsey, K. (Eds.). (2011). The History of Reading: A Reader. London and New York: Routledge.