This pattern suits a large asynchronous online cohort where an open call needs to be made to the group to collectively generate responses.
The context for this pattern is the rise of large cohort and class sizes in the tertiary sector. While this “massification” of education (Hornsby & Osman, 2014) has been identified as negatively impacting student experience (Cuseo, 2007), large class sizes can also provide an opportunity for educational interventions that capitalise on the “wisdom of the crowd” (Wexler, 2011) to support interconnected, peer-driven learning (Tyrrell & Shalavin, 2022).
Crowdsourcing is an approach to leveraging the ideation and problem-solving capacity of large, networked groups (Howe, 2009). It has become popular in commercial innovation contexts and for collectively ideating within organisations, for example as a form of staff consultation in developing corporate strategy.
Crowdsourcing is increasingly being applied in education contexts (Jiang et al., 2018). After analysing 51 initiatives where crowdsourcing was employed in tertiary education settings, Jiang et al. (2018) establish a helpful taxonomy by proposing four categories that differentiate the purposes for which educators may employ an open call. They note that crowdsourcing for education may be used to (2018, p.7):
- “create educational contents”, such as sourcing open textbook content from a large group;
- “provide practical experience” by allowing students to participate in a relevant activity such as testing software;
- exchange “complementary knowledge”, such as swapping coding problems and solutions; and
- augment “abundant feedback”, for example by sourcing a large range of critiques on creative work.
Large class sizes can negatively impact student experience (Cuseo, 2007). This creates a challenge for meaningfully connecting large groups of learners online, especially in novel ways that leverage the diversity of the cohort and capitalise on the size of the ‘crowd’. The need to connect a large group of students may arise because there is a question to be collectively answered, a task to be collectively undertaken, or data to be collectively gathered.
The CROWDSOURCING FOR LEARNING pattern re-frames the problem of large cohorts to explore the problem-solving and collaborative potential of large groups to generate forms of collective intelligence. It uses the affordances of large networked learning communities to support students in collective ideation, problem-solving and peer learning.
This pattern was tested in an extracurricular context as an informal learning opportunity. If the pattern is being applied in a formal unit of study/subject, it could be implemented as a learning activity or summative assessment.
01 > Consider the aims of the crowdsourcing initiative. Crowdsourcing can be employed to support a range of different learning outcomes and objectives. Along with Jiang et al.’s (2018) taxonomy above, we provisionally identify three broad crowdsourcing modes:
- Quick collection or decision: a basic, ‘quick and dirty’ mode of crowdsourcing in which an open call asks the crowd a question or collects content, for example by asking the cohort to add examples or share other kinds of content or data. A question may be posed to ascertain the priorities of the group, e.g. What are we struggling with? What do we value most? What questions do we want to go over in a revision session?
- Collaborating on solutions to a problem: a more complex mode of crowdsourcing in which a common challenge or problem is posed and the crowd works to find the best solution/s over time. Within this mode, the duration could be a short, sharp engagement of 48 hours to one week (hackathons are an example that often use short timeframes to catalyse creativity), or a longer program of 10–13 weeks.
- Creating an ongoing common resource: similar to the mode above, but ongoing in duration. This mode brings a large group together online to undertake the shared maintenance of a common resource as a living document, such as an open textbook or wiki (Kimmerle, 2011). Wikipedia is the prototypical example of this mode of crowdsourcing.
02 > Select your crowdsourcing platform. Depending on the aims and mode of the crowdsourcing initiative (see Step 1), different technical features will be required. This pattern has been tested using a custom ideas management platform (examples of commercial crowdsourcing platforms include Crowdicity, IdeaScale and ThoughtExchange). If a custom platform is out of scope, consider a DIY approach using a more general tool, for example an online discussion tool or collaboration tool that allows for voting, commenting and contributing. A DIY tool would be particularly appropriate for the first mode of crowdsourcing identified above. Along with considering the various features of different platforms, it’s worthwhile factoring in time for technical, accessibility and data privacy checks.
03 > Decide on the structure of your crowdsourcing experiment. When working with large crowds, having a logical structure and process for dealing with contributions is imperative. How will participants sort or filter submissions? Crowdsourcing usually makes use of gamification features, so consider applying a game logic, such as through the use of voting or awarding points and incentives for top-voted contributions. A common structure might employ phases such as:
- Ideation – the crowd generates as many ideas or contributions as possible around a common question or goal. Consider providing guidelines or examples around what constitutes a quality contribution.
- Refinement – similar ideas/contributions are consolidated or synthesised. Depending on the number of submissions, the refinement stage may also include grouping or categorising ideas/contributions around themes or other criteria. Decide whether the moderators or the crowd itself will do the refinement process.
- Voting – the crowd votes on the ideas/contributions that they think are the best. Consider whether you want to provide criteria for voting to support evaluative judgement. Some platforms allow the use of a “Wallet” to limit the number of votes each user can award.
See Nassar & Karray (2019) for more information on crowdsourcing structures and processes in information management contexts.
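For a DIY implementation of the voting phase described above, the “Wallet” mechanic can be modelled in a few lines. The following is a minimal sketch only: the class and method names, and the three-vote limit, are illustrative assumptions rather than features of any particular platform.

```python
# Hypothetical sketch of a wallet-limited voting phase for a DIY
# crowdsourcing tool. Names and the vote limit are illustrative.
from collections import Counter

VOTE_WALLET = 3  # assumed limit: each participant may cast at most 3 votes


class VotingPhase:
    def __init__(self, ideas, wallet=VOTE_WALLET):
        self.ideas = set(ideas)    # ideas carried over from refinement
        self.votes = Counter()     # idea -> vote count
        self.spent = Counter()     # participant -> votes used
        self.wallet = wallet

    def cast_vote(self, participant, idea):
        """Record a vote if the idea exists and the wallet isn't empty."""
        if idea not in self.ideas or self.spent[participant] >= self.wallet:
            return False
        self.spent[participant] += 1
        self.votes[idea] += 1
        return True

    def leaders(self, n=5):
        """Top-n ideas by votes, e.g. for selecting winning contributions."""
        return self.votes.most_common(n)
```

A facilitator script could call `leaders()` at the close of the voting phase to identify the top-voted contributions; rejected votes (exhausted wallet or unknown idea) simply return `False`, which a front end could surface to the participant.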
04 > Decide on rewards. The crowd needs motivation and an incentive to perform the task, so make the incentive for different kinds of engagement clear to participants. Rewards are linked to the overall gamification logic of the crowdsourcing process, which is goal-oriented by design. It’s important that the crowdsourcing design not be extractive: the experience should benefit the whole crowd (not just the winners), and the facilitators should design the process to create value for all participants.
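Where a platform does not provide gamification out of the box, an engagement-points scheme of the kind used in the example below (points for ideas, comments and votes, plus a leaderboard) can be sketched simply. The point values and action names here are assumptions for illustration, not part of the pattern.

```python
# Illustrative engagement-points scheme for a DIY platform.
# Point values and action names are assumed, not prescribed.
POINTS = {"idea": 10, "comment": 3, "vote": 1}


def award_points(scores, participant, action):
    """Add points for a recognised engagement action; returns the new total."""
    scores[participant] = scores.get(participant, 0) + POINTS.get(action, 0)
    return scores[participant]


def leaderboard(scores, n=5):
    """Top-n participants by accumulated points, for display to the crowd."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Weighting ideas more heavily than votes reflects the relative effort of each contribution; the weights should be tuned so that low-effort actions cannot dominate the leaderboard.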
05 > Recruit the crowd. Considering that all online communities are likely to have a number of “lurkers” (Garfield, 2020), recruiting a crowd of 500 or more will work best to create diversity of contributions.
06 > Communicate the task to be performed to the crowd. It’s important that the crowd is aware of the aims and purpose of their participation. Clear and enticing wording of the challenge or question is important to both communicate the nature of the task and motivate the crowd to engage. Use diagrams or other visual means to communicate the structure and process of the crowdsourcing experiment to support textual explanation and instructions.
07 > Consider how you plan to facilitate the crowd. Consistent communication within the platform helps maintain a sense of community. For example, facilitators might acknowledge all contributions with a thank you or feedback, or post comments encouraging participants to vote. It’s also recommended to develop and make available community behaviour guidelines on the platform, and to devise a moderation plan for how any unacceptable behaviour will be dealt with.
08 > Create a communications plan. Consider when and how you will communicate with the crowd: for example, a weekly email update to keep the crowd aware of which stage of the process they are at, which submissions are currently in the lead, or how much time is left to cast a vote. Depending on the crowdsourcing platform being used, there may be an option to set up automated system messages or notifications, for example letting a participant know when their contribution has received a vote. These communications help to close the feedback loop and create a sense of an ongoing conversation, which helps maintain engagement.
09 > Feed the results of the crowdsourcing back to the crowd. How will winners be announced? Are solutions being implemented? If so, how will you update the crowd?
10 > Evaluate the crowdsourcing. Design a way to gather participant feedback at the end of the crowdsourcing experiment in order to inform future iterations. Communicate the evaluation findings to key stakeholders and make a plan to use and share learnings.
The 100%Open innovation platform provides a helpful crowdsourcing tool with detailed information on implementing crowdsourcing in an open innovation context. Many of the considerations laid out in this tool are transferable to crowdsourcing for learning, especially for the ‘collaborating on solutions to a problem’ mode of crowdsourcing.
Examples of pattern in use
Example 1: Future Makers
This pattern was tested in an extracurricular crowdsourcing initiative called Future Makers that was run with students in Semester 2, 2020 at the University of Sydney Business School. Over 1,300 students signed up to participate.
The pattern was iteratively developed and implemented. We acknowledge the Associate Dean (Education), Associate Professor Peter Bryant, as well as members of the research and project team, including Courtney Shalavin, Jacqui Ruello, Tingting Yu, Stacey Petersen, Iris Zeng and Dewa Wardak.
Future Makers was designed and delivered by Business Co-Design (BCD) in Semester 1 and 2 2020 as an interdisciplinary, extracurricular initiative. All Business School students were invited to participate in a 9-week crowdsourcing experience to debate and discuss the critical global, local and personal challenges of the future. Across the duration of the challenge, 1,314 students joined the Future Makers community and submitted 36 ideas, cast 151 votes, and made 126 comments. There were a total of 5,012 page views (of student submissions and learning resources on the platform) and 1,923 blog post views (of a curated suite of academic and student blogs on topics related to global and local challenges). At the end of the 9-week crowdsourcing challenge, 5 winning ideas with the most engagement from the community were selected to be rewarded with a bespoke industry networking session designed with partners from the Business School’s External Engagement team.
The Business School’s 2025 Strategy aims to “enhance our learning community’s capacity to interrogate and address critical global and local challenges”. Future Makers contributes to this endeavour by inviting students to participate in dialogue around what they perceive the most important challenges of the future to be, and to work with each other to design solutions and ideate the core capabilities they see as necessary to respond to such challenges as future leaders. This both builds student capacity to learn with and from each other in extra-curricular contexts, and produces findings (as part of the broader Work. Live. Play. Learn. research project) on student perspectives of global and local challenges that can be used in educational and curriculum development, especially informing authentic contexts and cases through the Connected Learning at Scale (CLaS) project.
Future Makers represents the School’s first engagement with a large-scale crowdsourcing approach, and the initiative has served as a pilot to explore the pedagogical potential of crowdsourcing for learning, as well as to understand the affordances of such platforms as innovative learning and engagement technologies. The research team who delivered the project are in the process of publishing more detailed findings that build on those summarised in this pattern.
Technology / resources used
Future Makers was hosted on a professional innovation and idea management platform called Crowdicity. This platform afforded voting, commenting and the submission of contributions. Other gamification features included points awarded to participants based on their level of engagement, a leaderboard showing leading users, and live feeds of recent activity such as new idea submissions and comments.
Student evaluation data on the crowdsourcing experience was captured through a feedback survey administered within the platform in the final week of the crowdsourcing experience. The feedback survey included Likert scale questions and open text questions. The response rate was low, with only n=7 respondents. Due to the low response rate, only qualitative comments were analysed. Qualitative feedback revealed that participating in Future Makers supported learning, with one student noting the experience “promoted important discourse and critical thinking”. Further comments revealed students found the platform was “good to collaborate [on] student’s idea[s], anyone can easily respond and comment” and the “platform was designed well to share ideas”. When asked about specific benefits of participation, students noted that they benefited from “Knowing about other student’s idea[s] which […] could open [their] mind in solving certain problem” and that the experience “Improves critical thinking skills” and encourages students to “…look into topics [they weren’t] familiar with.” This sentiment was echoed by another student who noted the experience “Broadened perspective, especially when thinking about key issues such as Climate Change.” Suggested improvements included giving more detail about what constituted a good contribution or idea worth submitting.