A grainy black-and-white photograph of a group of children crowding around a circular display. They lean on and over it, observing and reading.

Crowdsourcing Metadata in Museums: Expanding Descriptions, Access, Transparency, and Experience

Museums have experienced a technological boom over the last fifty years, and digital access to collections has evolved from searchable catalogues available only on-site to a variety of modalities ranging from web-based, publicly available databases to interaction through social media platforms. As museums look to capitalize on the new ways in which their collections are being discovered, cataloguing visual data and expanding metadata are necessary for staying relevant, on trend, and engaged with audiences.[1] As usually defined, metadata is a set of data that provides information about other data.[2] A piece of metadata typically consists of a set of properties (elements or fields) and a set of values for each property: for example, Title Field: “The Mona Lisa,” Accession Number: “2010.030.0001,” and so on. Metadata allows people to perform various operations with data, including searching, managing, structuring, preserving, and authenticating resources. Creating metadata can be labor intensive, and one solution to the need for more extensive cataloguing is crowdsourcing, which over the last two decades has proven not only to increase access points to collections but also to enrich catalogue data.[3] As well, crowdsourcing presents an opportunity for museums to make what has long been an opaque back-end process more transparent, turning metadata creation into a mission-supporting activity.[4] As Meghan Ferriter, Samantha Blickhan, and Mia Ridge, the leaders of the Collective Wisdom project, state, at their best, crowdsourcing projects from cultural institutions create “meaningful opportunities for the public to experience collections while undertaking a range of tasks that make those collections more easily discoverable.”[5]
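The property-and-value structure described above can be sketched in a few lines of code. This is an illustrative model only: the "Title" and "Accession Number" fields come from the example in the text, while the other fields (and the search helper) are hypothetical stand-ins for a typical descriptive schema, not the Adler's actual catalogue structure.

```python
# A metadata record as a set of property-value pairs. "Title" and
# "Accession Number" echo the example in the text; the remaining
# fields are hypothetical illustrations of a descriptive schema.
record = {
    "Title": "The Mona Lisa",
    "Accession Number": "2010.030.0001",
    "Creator": "Leonardo da Vinci",   # hypothetical field
    "Medium": "oil on poplar panel",  # hypothetical field
}

def find_by_property(records, prop, value):
    """Return every record whose property exactly matches the value."""
    return [r for r in records if r.get(prop) == value]

# Searching by a structured property, one of the operations metadata enables.
matches = find_by_property([record], "Accession Number", "2010.030.0001")
```

Exact-match lookups like this are what structured metadata supports well; as the next sections show, they are also where public-facing discovery tends to break down.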

Using an adapted practice-based methodology, this article takes a project I devised and led at Chicago’s Adler Planetarium, Tag Along with Adler, as a case study in the benefits of crowdsourcing projects (and metadata tagging projects in particular) within museums, not as mere outsourcing of labor but rather as participatory, even transformational experiences for an engaged public that also enhance and expand cataloguing.[6] It also explores the successes and shortcomings of this ongoing project and what these early results suggest for the field at large with respect to language and metadata production. In particular, it demonstrates that there exists a semantic gap in the language and descriptive styles of museum professionals, on the one hand, and the public, on the other, and that crowdsourcing demonstrates promise to help bridge this gap while also providing an opportunity for the public to engage with museums directly.

The Development of Tag Along with Adler

The Adler Planetarium’s Tag Along with Adler project is an ideal case study because of the varied nature of the Adler’s collection, which encompasses archival (100,000+ linear feet), library (6,000+ books), and object (3,000+ historic artifacts) holdings, and because the project uses Zooniverse, the world’s largest and most-used platform for citizen science, with over two million registered users globally.[7] Zooniverse began in 2007 as a single astrophysics citizen-science project, Galaxy Zoo, which was a collaboration between the Adler Planetarium and Oxford University.[8] Since Galaxy Zoo, Zooniverse has grown into an online platform hosting humanities- and collections-based crowdsourcing projects, and has also expanded to include mobile app capabilities.[9] Moreover, the Adler’s audience has already been introduced to the Zooniverse platform through the Adler’s marketing and social media, as well as on-site exhibition interactives.[10]

The Adler collections selected for Tag Along with Adler were chosen based on a need to present the public with collection objects they could describe relatively easily; for this reason, the project focused on two-dimensional, pictorial works. Professional cataloguing often centers on what something is (its materiality, date, creator, location, etc.) but not on what it may be about or might reflect. The distinction is not a mere matter of semantics. While the descriptive text on a wall label in a gallery, for example, might address an object’s subject matter—its “aboutness”—this dimension of the objects has not usually been represented in catalogues; that is, these objects and their images have not traditionally been catalogued according to their “aboutness.”[11] Instead, museum records often focus on the facts of an object’s creation, such as who made it, what it is made of, what size it is, when it was made, and where it was made. Although useful, this information does not help a person find what they’re looking for when they query a database seeking images that feature a specific visual element such as children or snow.

Museum educator Kris Wetterlund provides a clear example of how the difference between technical descriptive metadata and everyday language can hide materials from the wider audiences who may be searching for them—in other words, how prioritizing what objects are can prevent the discovery of what they’re about. As Wetterlund writes, “Curators in art museums describe the medium of works of art very specifically. An oil painting in an art museum is often catalogued as oil on canvas and a black-and-white photograph is called a silver gelatin print. Thus, when teachers search museum sites [using words like] paintings or photographs, no items are returned even though art museums obviously are filled with paintings or fine art photographs.”[12] There is also evidence that users struggle to find materials when catalogues use only minimally descriptive metadata (a core set of metadata fields the EMDaWG [Embedded Metadata Working Group of the Smithsonian Institution] has designated essential for all collections to provide better online access to images and ensure preservation of these images in the future).[13] As Jennifer Schaffner of the research division of OCLC (the Online Computer Library Center) stated in 2009:

Structured metadata can be useful internally for collection management and public services, but is not always what users need most to discover primary sources, especially minimally described collections and “hidden collections.” We understand archival standards for description and cataloging, but our users by and large don’t. Studies show that users often do not want to search for collections by provenance, for example, as important as this principle is for archival collections.[14]

Schaffner argues that thirty years of user studies show that aboutness and relevance matter most for public discovery, especially when that discovery is happening online instead of on-site.[15]
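Wetterlund’s scenario can be made concrete in a short sketch: a catalogue that records only the curatorial medium term returns nothing for an everyday query, while a single crowdsourced “aboutness” tag restores discoverability. The records, tags, and search function below are invented for illustration, not actual Adler or museum data.

```python
# Minimal sketch of the search gap Wetterlund describes.
# Records and tags are illustrative examples only.
catalogue = [
    {"title": "Untitled landscape", "medium": "oil on canvas", "tags": []},
    {"title": "Portrait study", "medium": "silver gelatin print", "tags": []},
]

def search(records, query):
    """Naive keyword search over every text field of each record."""
    query = query.lower()
    results = []
    for r in records:
        text = " ".join([r["title"], r["medium"]] + r["tags"]).lower()
        if query in text:
            results.append(r)
    return results

# A teacher searching for "painting" finds nothing: "oil on canvas"
# never contains the everyday word...
before = search(catalogue, "painting")

# ...until a crowdsourced tag supplies it.
catalogue[0]["tags"].append("painting")
after = search(catalogue, "painting")
```

The failure here is not in the curatorial term, which is precise and correct, but in the absence of the vocabulary the searcher actually uses.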

With this problem in mind, I selected 1,090 objects from the Adler Planetarium’s collection specifically because they were two-dimensional and pictorial; early results from similar projects like steve.museum and Your Paintings Tagger found such objects easiest for users to work with.[16] The group included 613 works on paper, 195 archival photographs, and 282 rare book illustrations, used consistently across conditions to test how variations in task (verification vs. free tagging), audience (Adler audience vs. Zooniverse audience), and platform (Zooniverse online, Zooniverse MuseumMode on-site, and a gamified workflow) affected user engagement and the tags created.

The project team began by pulling all Adler catalogue data for each of the 1,090 objects, logging only publicly searchable text in order to focus on the terms that currently facilitated or hindered public inquiry. The team then turned to artificial intelligence (AI). Machine learning, the branch of AI at work here, is “a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.”[17] In the case of this study, two separate AI tagging models were applied to the selected image sets, providing two examples of algorithms trained to generate metadata tags for images.[18] The inclusion of AI tags in this project accords with recent museum projects from the last five to ten years. Several institutions—including the Barnes Foundation, Philadelphia; the Harvard Art Museums, Cambridge; the Massachusetts Institute of Technology, Cambridge; the Metropolitan Museum of Art, New York; and the Philadelphia Museum of Art—have employed machine vision to analyze, categorize, and interpret their collections’ images, and although such application of AI is still in its infancy, the reported results show promise.[19] AI already underlies many routine aspects of our lives, and part of the motivation to include AI tags in this project was to explicitly call out the ways in which these tags are instrumental to all our daily search and discovery tasks, often in ways we do not realize.[20]

Machine vision and AI tagging are now advanced enough to detect subject matter depicted in paintings and photographs.[21] Several institutions have already used AI to expand and enrich existing metadata tags, including as part of the Your Paintings Tagger project, a collaboration between the Public Catalogue Foundation and the BBC launched in 2011 to tag paintings with searchable terms.[22] When volunteer taggers participating in the project were taking longer than anticipated to complete the tagging, the project team turned to automation to speed up the process.[23]

The application of machine learning to museum collections has long been subject to doubts about its accuracy and utility. In the words of Cuseum founder and CEO Brendan Ciecko, “Just how well does machine vision do? Can it offer accurate tags? Is the metadata generated useful and correct?”[24] One scientific study found that “although computer-based analysis can address many research questions, it is yet to surpass human ability in a number of areas.”[25] Not only are AI models limited in their ability to process complexity, but they are also still trained by humans based on the cataloguing practices of museums. Thus these models can create exemplary tags for religious, Christian, and Western canonical images, for example, but often run into difficulties creating tags for anything “other.”[26] For Tag Along with Adler, it is critically important to recognize that utilizing a machine removes neither bias nor semantic gaps, because machines are inevitably trained on data that embeds both.

The Adler team opted to use two AI tagging models for Tag Along with Adler: the iMet Collection Attribute Classifier and Google Cloud Vision API. The team selected them because they have been trained using more images than the Adler has access to, and both are publicly available for use by any institution. They also offer two different tagging models: one specifically trained on a museum image-based collection (the iMet Collection 2019) and one similar to the algorithms users routinely encounter online through Google Image Search (Google Cloud Vision API). Ultimately, the Adler team included AI tags to expose project participants to this emerging technology and to survey whether the presence of AI in the project motivated them to participate; the aim was not to replace the participatory component of human tagging, as happened in the Your Paintings Tagger project.

Once the tagging models generated the AI tags, the Adler created its project using the Zooniverse Project Builder.[27] Although the Project Builder does not permit customization, it is a free tool that is relatively easy to use for people with little coding knowledge. Thus, the team decided to use it to make the case study replicable by any museum regardless of its budget or technical prowess. Tag Along with Adler launched on Zooniverse on March 23, 2021.

The “Get Started” section of the homepage greeted users with descriptions of the two different workflows available to Zooniverse volunteers.[28] The “Verify AI Tags” workflow aimed first and foremost at gauging the accuracy of AI tagging models and introducing the public to the positive and negative effects these models can have in their everyday lives. The second, though not subsidiary, workflow, “Tag Images,” focused on user-generated language and gathering a diversity of opinions and perspectives. As the Adler team developed the project’s descriptive text for the Zooniverse project page, we emphasized how the project diverges from other Zooniverse projects: we were not looking for consensus or a single “right” answer. For many Zooniverse volunteers, this would prove to be the most challenging aspect of the project.
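The “Verify AI Tags” workflow can be thought of as vote counting: each volunteer confirms or rejects a machine-generated tag, and the per-tag agreement rate gauges how the model is faring. The sketch below illustrates that tallying logic; the images, tags, and votes are invented, and the data structure is an assumption, not the project's actual pipeline.

```python
from collections import defaultdict

# Hypothetical verification votes: (image_id, ai_tag, confirmed?).
votes = [
    ("img1", "telescope", True),
    ("img1", "telescope", True),
    ("img1", "snow", False),
    ("img1", "snow", True),
    ("img2", "children", True),
]

def agreement_rates(votes):
    """Fraction of volunteers confirming each (image, tag) pair."""
    tally = defaultdict(lambda: [0, 0])  # (image, tag) -> [confirms, total]
    for image, tag, confirmed in votes:
        tally[(image, tag)][1] += 1
        if confirmed:
            tally[(image, tag)][0] += 1
    return {key: confirms / total for key, (confirms, total) in tally.items()}

rates = agreement_rates(votes)
# rates[("img1", "telescope")] is 1.0; rates[("img1", "snow")] is 0.5
```

Note that this kind of tallying suits the verification workflow, where a tag is either apt or not; it deliberately does not apply to the “Tag Images” workflow, where divergent answers were the point.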

Evaluating Tag Along with Adler

The project was completed on March 12, 2022, approximately a year after it began. Tag Along with Adler used eleven subject sets of images, and as each subject set was retired, the team processed the data from both workflows (verification and textual), ultimately evaluating the results for all 1,090 images.[29] In this section, I examine the ways in which this data supports adopting the citizen-science method of crowdsourcing metadata tags to demonstrate—and address—the semantic gaps among the language of professional cataloguers, AI algorithms, and general public users. I also argue that bridging these gaps can comprise one part of a participatory, mission-driven experience of a cultural institution.

Over the twelve months of Tag Along with Adler, the project amassed 3,557 registered volunteers with 6,976 individual participants.[30] Of these participants, one noticeable subset stood out: the superuser. A known entity in crowdsourcing projects, superusers are a small number of users who contribute a large percentage of the activity (in contrast to the larger number of users who make fewer contributions in total).[31] In one review of more than sixty online citizen-science projects, researchers confirmed that the presence of superusers can be at odds with project models meant to capture a diversity of perspectives.[32] Though engaging a community of dedicated and experienced volunteers who consistently return to a project enables quicker and more accurate data processing, metadata projects that are specifically not looking for a consensus or single accurate answer must be implemented in ways that encourage the participation of a broad range of users alongside a dedicated base of superusers. Specific ways Tag Along with Adler aimed to accomplish this included requiring fifty people to classify an image before it was retired from the project, and releasing small amounts of data incrementally (eleven sets of one hundred images). These design decisions helped prevent superusers from racing through the project data and thereby limiting the diversity of user approaches.[33]
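The two design decisions just described, a fifty-classification retirement threshold and incremental releases of roughly one hundred images per set, reduce to a simple gate. The function names below are mine, not Zooniverse's, and this is a sketch of the rule rather than the platform's implementation (the final set holds the remaining 90 of the 1,090 images).

```python
RETIREMENT_LIMIT = 50   # classifications per image before retirement
SET_SIZE = 100          # images released per subject set

def is_retired(classification_count):
    """An image leaves the active pool once fifty volunteers classify it."""
    return classification_count >= RETIREMENT_LIMIT

def next_release(all_images, sets_released):
    """Release the next batch of images, one set at a time."""
    start = sets_released * SET_SIZE
    return all_images[start:start + SET_SIZE]

images = list(range(1090))            # stand-ins for the 1,090 objects
first_set = next_release(images, 0)   # a full set of 100
last_set = next_release(images, 10)   # the final, smaller set of 90
```

The retirement cap bounds how much any one volunteer can shape a given image's tags, while incremental release keeps fresh material arriving for casual participants rather than being exhausted by the most active ones.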

Initial processing of Tag Along with Adler’s data confirmed the presence of superusers, although it appears that the design choices just mentioned were ultimately effective in encouraging the participation of both superusers and a larger group of users. Of the 3,557 registered volunteers who have participated in Tag Along with Adler at the time of writing, only 807 users participated in more than one subject set (that is, returned to the project for additional releases of more images). Across the eleven subject sets, these 3,557 registered volunteers created 322,993 individual metadata tags. Those who participated in only one subject set (that is, who did not return to the project over time) submitted 163,342 of these tags (50.6%), averaging 59 tags per user. In comparison, those who worked on multiple subject sets (that is, who returned to the project more than once over the course of its twelve-month duration) submitted 159,651 of the tags (49.4%), averaging 197.8 tags per user over the course of their participation.
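The figures in this paragraph can be recomputed directly from the reported totals. This is a sketch of the arithmetic, not the project's actual analysis code, but every input below comes from the numbers stated above.

```python
# Totals reported for Tag Along with Adler.
total_volunteers = 3557
returning = 807                               # worked on more than one set
one_set_only = total_volunteers - returning   # 2,750 one-time participants

total_tags = 322_993
one_set_tags = 163_342     # tags from one-time participants
returning_tags = 159_651   # tags from returning participants

# Shares of the total tag count.
one_set_share = one_set_tags / total_tags      # about 50.6%
returning_share = returning_tags / total_tags  # about 49.4%

# Average tags per user in each group.
avg_one_set = one_set_tags / one_set_only      # about 59 tags per user
avg_returning = returning_tags / returning     # about 197.8 tags per user
```

The roughly 3.4x gap between the two per-user averages is the superuser effect in miniature: returning users are 22.7% of registered volunteers yet account for nearly half the tags.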

These results reflect previous analyses of superusers, including a 2019 study of the virtual citizen-science project Shakespeare’s World (also on Zooniverse), which found that 37% of the total content was created by only 3% of the project participants.[34] Tag Along with Adler results show that roughly 22% of the project participants are responsible for the creation of 49.4% of the project’s data. Figure 1 shows the number of users returning for each subject set and the median number of tags created per user, per set. The inverse relationship demonstrates that although the number of returning users generally falls off with each set, the median number of tags per user continues to rise as they return to the project (those who return create more tags per image with each additional set than those who participate only once), attesting to the involvement of a dedicated base of engaged superusers alongside a large group of users who together generate the bulk of the tags.

A line graph titled “Superuser Data.” A dark red line indicates “Users” and a turquoise line indicates “Tags created (median per person/set).” The x-axis shows the number of subject sets a user participated in (numbers ranging from 2 to 11, left to right). The y-axis shows numbers, starting with “0” at the bottom and ending with “300” at the top. The red “Users” line begins slightly above 600 users and drops down to below 10 across the 11 subject sets. The turquoise line of “Tags Created” starts below 100 and increases to over 400 across 11 subject sets. The graph indicates an inverse relationship between the number of users and the median number of tags created.

Fig. 1


Superuser data for the Tag Along with Adler project, including the number of users who completed more than one subject set and the median number of tags they created for each set. All charts and diagrams in this essay were designed by Ben Bertin.

Several considerations motivated the choice to center this research within the collections of the Adler, to use the preexisting third-party platform Zooniverse, and to test iteratively across various workflows and projects. However, each of these choices comes with limitations that must be acknowledged to accurately assess the potential of such projects outside of the Adler and across the cultural heritage sector more broadly. We immediately recognized the limitation of this case study’s reliance on English at the expense of other languages. Over two-thirds of Zooniverse’s users identify as residents of the United States or the United Kingdom, and the majority of the public projects on the site are only available in English.[35] The Adler does not have demographic information on the languages our guests use, but it is worth noting that 35.8% of Chicago residents speak a language other than English.[36] Although these kinds of metrics do not provide a clear idea of how many non–English speakers are precluded from participating, as residents who speak a language other than English may also speak English, they do remind us that engaging publics in only one language will inevitably lead to some degree of exclusion.[37]

Other types of diversity among project participants are also a critical marker of representational equity. Because an explicit purpose of our team’s crowdsourcing project was to enhance and expand the accessibility and representation of catalogue data, ensuring the involvement of a representative public was vital. The Tag Along with Adler header on the project’s Zooniverse landing page collected demographic information through a voluntary survey.[38] We made the survey voluntary because we recognized that it would be unethical to require demographic information as a condition of participation: such a requirement could be a barrier to entry for those uncomfortable identifying aspects of their identity, especially to an institution by which they may feel tokenized or othered, or from which they may feel excluded.

At the time of writing, 107 of the project’s 3,557 registered users (or roughly 5.5%) have participated in this survey. The data presented here breaks down responses to questions centered on education level (see fig. 2), ethnicity (see fig. 3), gender (see fig. 4), and age (see fig. 5).[39] These results show a strong alignment with those of previous surveys on traditional crowdsourcing platforms. Most notably, roughly three-fifths of respondents self-identified as white/Caucasian, and ethnic diversity overall was the least distributed of the four demographics gauged. A 2020 study published in the journal Citizen Science: Theory and Practice found that data from the surveys of an Illinois citizen-science project called RiverWatch indicated that participants were “disproportionately white, highly educated, and affluent compared with the Illinois general population.”[40] Education levels among those surveyed for Tag Along with Adler varied significantly more than racial identity, however, in part due to the active participation of many high school students, and the percentage of participants with bachelor’s degrees (28.7%) closely matched the percentage of Americans with bachelor’s degrees as recorded by the US Census between 2015 and 2019 (32.1%). [41] Participants also logged a diverse array of ages in their survey responses, with the largest proportion coming from school-aged students eighteen or under and a decent distribution among the remaining age groups. This data comes with limitations, clearly—as already stated, only 5.5% of users thus far have opted in to provide it—but it does help provide a basis for claiming that the project data was produced by a group more representative of the general public than is the predominantly white and female professional museum staff who otherwise would have catalogued these images.[42]

A pie chart titled “What is the highest level of education you have completed?” Moving clockwise around the circle, a red section of 28.6% represents “Bachelor’s degree,” a turquoise section of 25% represents “Currently in middle or high school,” a dark gray section of 19.9% represents “High school diploma or GED,” a light gray section of 15.8% represents “Master’s degree,” an orange section of 4.6% represents “Doctorate,” a blue section of 3.1% represents “Prefer not to answer,” and a white section of 3.1% represents “Did not respond.”

Fig. 2


Demographic data on education level from the user survey for Tag Along with Adler, compiled March 2021–March 2022.

A pie chart titled “How do you identify your ethnicity?” Moving clockwise around the circle, a red section of 59.7% represents “White/Caucasian,” a turquoise section of 14.8% represents “Asian,” a dark gray section of 8.7% represents “None of these/Self-reported,” a light gray section of 5.1% represents “Prefer not to answer,” an orange section of 4.1% represents “Black/African,” a blue section of 3.6% represents “Two or more categories,” a light blue section of 2.6% represents “Hispanic/Latinx,” and a white section of 1.5% represents “Did not respond.”

Fig. 3


Demographic data on ethnicity from the user survey for Tag Along with Adler, compiled March 2021–March 2022.

A pie chart titled “How do you identify your gender?” Moving clockwise around the circle, a red section of 62.2% represents “Woman,” a turquoise section of 17.3% represents “Man,” a dark gray section of 5.1% represents “Non-binary,” a light gray section of 4.1% represents “Prefer not to answer,” an orange section of 3.6% represents “Two or more categories,” a blue section of 1% represents “None of these/self-reported,” and a white section of 0.5% represents “Did not respond.”

Fig. 4


Demographic data on gender from the user survey for Tag Along with Adler, compiled March 2021–March 2022.

A pie chart titled “How old are you?” Moving clockwise around the circle, a red section of 30.1% represents “18 years or less,” a turquoise section of 22.4% represents “19–30 years,” a dark gray section of 17.3% represents “60 years or more,” a light gray section of 8.7% represents “31–40 years,” an orange section of 7.7% represents “51–60 years,” a blue section of 7.1% represents “41–50 years,” a light blue section of 5.1% represents “Prefer not to answer,” and a white section of 1.5% represents “Did not respond.”

Fig. 5


Demographic data on age from the user survey for Tag Along with Adler, compiled March 2021–March 2022.

In addition to providing the quantitative data discussed above, the optional survey also provided the project team with qualitative baselines on the engagement of users with the project itself and with its overall learning objectives. This qualitative data—along with that collected from other components like the online platform’s discussion threads—is also essential for discussing crowdsourcing projects as the kind of mission-centric, participatory experience that this paper asserts they are. One set of survey questions was particularly helpful in gauging audience perspectives on concepts like trust and representation among the publics we aim to reach.

Figure 6 graphs the responses to eight survey questions about museums, science, communities, and representation. Response options ranged from Strongly Disagree to Strongly Agree. Even with only 5.5% of users reporting, the responses support the project team’s initial hypotheses about the value of museum crowdsourcing projects. Approximately 23.9% of survey respondents did not agree with the statement “Stories like mine are in museum collections,” 69% did not agree with the statement “Stories like mine are included in museum exhibitions,” and 39.6% did not agree with the statement that “I see people like me in science today.” The extremely high percentage of participants who felt museums were essential to communities (94.4%) and communities were essential to museums (94.9%) points to a clear opportunity for museums to leverage their position within the community to initiate participatory experiences that bring the public into the process of description, helping not only to increase the representation that is notably lacking in professional cataloguing staff but also to bring the community transparently into the essential work of the museum.

A bar chart titled “Please select the choice that best represents your feelings/agreement for each phrase below.” The y-axis features numbers, with “0” at the bottom and “150” at the top. The x-axis features eight different statements with five color-coded bars rising from each. The five bars indicate “strongly disagree” (dark red), “disagree” (orange), “neutral” (dark gray), “agree” (light gray), and “strongly agree” (turquoise). The height of each bar indicates the number of respondents who picked that category for each statement. The first statement is “I trust museums reflect multiple perspectives,” and the responses are mixed. The second statement is “Stories like mine are in museum collections,” and the response is mixed, with approximately equal numbers agreeing, and disagreeing or strongly disagreeing. The third statement is “Stories like mine are in museum exhibitions,” and the response is mixed, with approximately equal numbers agreeing, and disagreeing or strongly disagreeing. The fourth statement is “I see people like me in science today,” and the response is mixed, with approximately equal numbers agreeing, and disagreeing or strongly disagreeing. The fifth statement is “Museums are essential to communities,” and the most popular response by far is “strongly agree.” The sixth statement is “Communities are essential to museums,” and the most popular response by far is “strongly agree.” The seventh statement is “I trust what I find online,” and the most popular response is neutral. The eighth statement is “I can find things online easily,” and the most popular response is “agree.”

Fig. 6


Qualitative responses to eight prompts from the Tag Along with Adler user survey.

There is also an opportunity for museums—as institutions with recognized standing in the community—to help foster discussions about searchability and discovery on the internet. Our survey results show that only 18.7% of participants agreed they could trust what they find online but, conversely, 76% of them believe they can find things online easily. This study’s degree of transparency—and that of crowdsourcing projects in general—could be adapted to empower communities to better distinguish between fact and fiction online and to increase their trust of online searches by imparting knowledge of how to identify bias and recognize shortcomings in automation and algorithms, including but hardly limited to searches of a museum’s collection.

The data thus far has illuminated user behavior and the limitations typical of any crowdsourcing project, from the presence of superusers to the demographics of participants. However, it is also possible to use this data to demonstrate both that the semantic gap between the language and description style museum professionals use (e.g., technical language, focus on physicality and provenance) and the language and description style the public uses (e.g., conversational language, focus on context and aboutness) does in fact exist and that projects like these can begin to bridge it. As discussed above, the Adler project team conducted a full survey of the Adler cataloguing data in conjunction with the design of the project, and the most frequent terms across extant Adler records (see fig. 7) were locations of objects (where objects were created, where books were published), item types (instrument names, book types [folios, manuscripts], document types), dates of creation, and creators (object makers, authors, etc.). Although this data is clearly important for recording the provenance and overall historicity of the objects, it does not contribute significantly to an understanding of their aboutness.
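Frequency surveys like the one described here reduce to counting terms across records. The sketch below shows the counting logic with a handful of invented sample terms standing in for the full Adler data; the logic is the same at any scale.

```python
from collections import Counter

# Invented samples standing in for the full catalogue and tag data.
catalogue_terms = ["London", "England", "celestial", "London", "1933"]
user_tags = ["illustration", "astronomy", "drawing", "illustration"]

def top_terms(terms, n=30):
    """The n most frequent terms and their counts, as in a bar chart."""
    return Counter(terms).most_common(n)

top_catalogue = top_terms(catalogue_terms)  # [("London", 2), ...]
top_user = top_terms(user_tags)             # [("illustration", 2), ...]
```

Ranking the two vocabularies side by side is what makes the semantic gap visible: place names and dates rise to the top of the catalogue list, while subject words rise to the top of the user list.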

A horizontal bar chart titled “Adler Search Terms.” The terms are listed in descending order from the highest frequency (with a frequency around 175) to the lowest frequency (with a frequency around 40). The terms in order from top to bottom are: London, England, Adler Planetarium, Chicago, celestial, cartography, Chicago city, 1933, Century of Progress International Exposition, James Weber Linn, Kaufman & Fabry Co., Pictorial works, Reuben H. Donnelley Corporation, France, Paris, diagrams, Germany, instruments, Portrait, Netherlands, Astronomy, Isaac Taylor, Ephraim Chambers, Amsterdam, 1870, exhibition, 1786, 1728, armillary sphere.

Fig. 7


The top thirty terms already applied by the Adler itself to images included in the Tag Along with Adler project, and those terms’ frequency.

By comparison, the participants working with the eleven subject sets of Tag Along with Adler eschewed terms focusing on the “isness” of makers, locations, and dates, instead producing language geared toward describing what is represented in an object (although importantly still including terms related to object type such as “diagram,” “drawing,” and “photograph”) (see fig. 8). Comparing just the thirty most frequent terms from the Adler catalogue and the thirty most frequent tags from the Tag Along with Adler project reveals a distinct gap between the way museums and the public describe collections. These results help to show that crowdsourcing does have the desired effect of enhancing collections records to better suit the language of their users, which will go a long way toward improving the searchability of collections, especially for the public.[43]

A horizontal bar chart titled “User Generated Tags.” The terms are listed in descending order from the highest frequency (around 800) to the lowest frequency (around 270). The tags in order from top to bottom are: illustration, astronomy, drawing, black and white, science, diagram, space, people, men, history, Art, Circle, globe, photograph, stars, map, earth, planets, man, moon, sun, sky, chart, planet, telescope, building, instrument, engraving, women.

Fig. 8


The top thirty terms participants in the Tag Along with Adler project created for images, and those terms’ frequency.

Figure 9 visualizes the importance of adding user language to institutions’ digital catalogues. Next to the image, two columns of text show the terms that were present in the Adler catalogue for this image before the Tag Along with Adler project (left) and the tags generated by users during the project (right). This comparison lays bare not only the difference in how museum cataloguers and general audiences describe images but also the sheer number of access points that can be added by a crowdsourcing project, again increasing the discoverability of a collection’s contents for staff and public alike. These expanded access points will allow internal staff to query the database of images for specific subjects in order to select objects for virtual exhibitions, public programming, or social media. As collaborative hashtags such as #MuseumSunshine spread across museum social media, having discoverable language focused on the context and pictorial dimension of objects (i.e., their aboutness) helps staff members find appropriate collections to include, and also helps share collections within wider hashtags and movements that may expand the reach of the institution to new publics.[44]

On the left a black-and-white photograph shows a light-skinned blond woman sitting on top of a small rug partly covering a large rock. She sports a complicated up-do, earrings, a knee-length black dress, stockings, and high-heeled shoes. Behind her are framed photographs of planetary bodies. To the right of the photograph are two lists of terms. The first is titled “Terms from the Adler’s collection catalogue,” and lists the following: 1945, 1950, Adler Planetarium, Chicago, Chicago Park District, meteorite, and woman. The second is titled “Terms Created by Users of Tag Along with Adler” and lists the following: Asteroid, Blonde, Display, exhibit, Exhibition, Fossil, Girl, meteor, Meteorite, Moon rock, Museum, Planets, Photograph, portrait, pose, Rock, sitting, sitting on, Space rock, vintage, Woman, Young Woman.

Fig. 9


A historic photograph (APHP.S5.C.F1.1) from the Adler’s collection, juxtaposed with two sets of tags for it. The left column of text shows the searchable terms for the image from the Adler’s collection catalogue, and the right column of text shows the terms created by users of the Tag Along with Adler project. Historic photograph courtesy of the Adler Planetarium, Chicago.

Finally, when processing the data for the Tag Along with Adler project, we counted how many tags participants added and analyzed their overlap with, on the one hand, tags already in the Adler catalogue and, on the other, tags created by AI models. For this project, only 12.2% of user-generated tags were already in the Adler catalogue, meaning 87.8% of their tags were new, and only about 7.25% of the tags users generated were also created by the AI models, meaning 92.75% of the tags users created were not generated by AI. This was an exciting early assurance of the importance of using crowdsourcing for metadata creation: although AI has some promise for metadata and tag creation, its success is still heavily dependent on the dataset on which the model was trained, and it remains far inferior to the work of human participants, at least in terms of describing varied materials from multiple points of view. For example, AI is often unable to pick up the nuance or situational knowledge that humans can; for archival photographs at the Adler Planetarium, the AI inaccurately tagged scientists in lab coats as doctors or nurses. Human volunteers added tags for “scientists,” “labcoats,” and “lab coats,” but did not include tags for “doctor” or “nurse,” as the images did not show the medical equipment or hospital backgrounds that would prompt such tags. However, the inclusion of AI tags in this project clearly enticed and motivated some volunteers’ participation: the “Verify AI Tags” workflow consistently saw two to three times more engagement than the “Tag Images” workflow, demonstrating the draw that AI, automation, and algorithms can offer.
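The overlap percentages above amount to simple set arithmetic over normalized tag vocabularies. The sketch below is purely illustrative and is not the Adler team’s actual analysis pipeline; the tag lists are invented examples, and the normalization (lowercasing and trimming whitespace) is an assumption about how matching tags would be reconciled.

```python
# Illustrative sketch: measuring what share of user-generated tags already
# appear in a reference vocabulary (e.g., the existing catalogue terms or
# the AI-generated tags). Hypothetical data; only the method is shown.

def tag_overlap(user_tags, reference_tags):
    """Return the percentage of distinct user tags that also appear in the
    reference set. Tags are lowercased and stripped so that, e.g.,
    'Meteorite' and 'meteorite' count as the same tag."""
    users = {t.strip().lower() for t in user_tags}
    reference = {t.strip().lower() for t in reference_tags}
    if not users:
        return 0.0
    return 100 * len(users & reference) / len(users)

# Invented example tags, loosely echoing figure 9.
user_tags = ["Meteorite", "Woman", "exhibit", "vintage", "Space rock"]
catalogue_tags = ["meteorite", "woman", "Adler Planetarium", "Chicago"]
ai_tags = ["rock", "woman"]

print(f"{tag_overlap(user_tags, catalogue_tags):.1f}% of user tags already in catalogue")
print(f"{100 - tag_overlap(user_tags, catalogue_tags):.1f}% of user tags are new")
print(f"{tag_overlap(user_tags, ai_tags):.1f}% of user tags also produced by AI")
```

On this toy data the catalogue overlap is 40%, so 60% of the user tags would be new access points; the reported figures (12.2% and 7.25% overlap) would come from running the same comparison over the project’s full tag exports.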

A Case for Crowdsourcing

Crowdsourcing offers the opportunity for museums to leverage a novel methodology to build new relationships with their audiences, disrupting the usual relationship between the museum and the user by inviting the public to act as curators, experts, and researchers. In the process, it simultaneously enriches the user’s experience and the museum’s data and access points.[45] Crowdsourcing also ultimately expands the museum’s voice by incorporating a vocabulary and style of description aligned with the public’s own intellectual interests and perceptions. It thus expands who can access these collections while also allowing for mission-driven experiences that encourage engagement with the institution.

If we as museum professionals acknowledge that, in the contemporary online ecosystem, the public contends with misinformation, inherent biases in search results, and frequently invisible AI underpinning their methods of discovery, then now is not the time to uphold a status quo that hinders the very missions of our institutions by hampering the ability to discover our collections. In the words of the Collective Wisdom project, “doing nothing is also a decision. Doing nothing in this context, by choosing not to engage with values, is likely to support the status quo, including existing power structures, instead of taking the opportunity for challenge and consciously course-setting.”[46] It is time to consider projects like crowdsourcing an extension of museums’ mission-driven work and to see the value of including the voices of the communities we serve in the work that we present and the projects we initiate. By considering such participation part of the mission-centric work of the institution, it is possible to devote the staff, time, and resources needed to address contemporary discussions about the issues affecting the lives of the public both inside and outside of museums.[47]

The results of Tag Along with Adler not only further our understanding of a research topic with decades of scholarly literature behind it but also provide a specific and concrete case study evolving in real time through the actual work undertaken at the Adler Planetarium. As the public’s online habits and access to collections have shifted, it has become more important for both museums and the public that cataloguing focus as much on aboutness as on isness. This essay bolsters the call for more institutions to engage in crowdsourcing and metadata-generating projects that utilize the enthusiasm and insight of their audiences. These projects create meaningful opportunities for the public to experience collections while making them more easily discoverable by others. Increasing transparency, accessibility, and representation within collections is a central component of museums’ online presences, and crowdsourcing is proving to be one of the most effective ways to do this work.

Banner image: Schoolchildren watching a demonstration at the Adler Planetarium, Chicago, mid-twentieth century. Courtesy of the Adler Planetarium Collections, APHP.S2.F18.1.


Notes

  1. Brendan Ciecko, Hilary-Morgan Watt, and Emily Haight, “How Museums Can Experiment with Social Media to Boost Audience Engagement During Coronavirus,” April 1, 2020, Cuseum, webinar, 58:02, https://cuseum.com/webinars/how-museums-can-experiment-with-social-media-to-boost-audience-engagement-during-coronavirus-overview.
  2. Steven Miller, Metadata for Digital Collections (New York: Neal-Schuman Publishers, 2011), 179.
  3. Jennifer Trant, “Tagging, Folksonomy and Art Museums: Results of steve.museum’s Research,” University of Arizona, Digital Library of Information Science & Technology, January 2009, http://hdl.handle.net/10150/105627.
  4. Mary Flanagan et al., “Citizen Archivists at Play: Game Design for Gathering Metadata for Cultural Heritage Institutions,” Proceedings of DiGRA 2013: DeFragging Game Studies, August 2014, http://www.digra.org/digital-library/publications/citizen-archivists-at-play-game-design-for-gathering-metadata-for-cultural-heritage-institutions/.
  5. “About,” Collective Wisdom: The State of the Art in Crowdsourcing in Cultural Heritage, https://collectivewisdomproject.org.uk/about/.
  6. Laura Carletti et al., “Digital Humanities and Crowdsourcing: An Exploration,” Museums and the Web 2013, April 17–20, 2013, https://mw2013.museumsandtheweb.com/paper/digital-humanities-and-crowdsourcing-an-exploration-4/.
  7. Citizen science involves the collaboration of the general public with professional scientists to conduct research. The principal benefit of this method is that it enables research that would not otherwise be possible, but it can also provide the opportunity to engage a more diverse audience that may include typically underrepresented skills or demographics. See Helen Spiers et al., “Everyone Counts?,” Journal of Science Communication 18, no. 1 (January 2019), https://jcom.sissa.it/archive/18/01/JCOM_1801_2019_A04.
  8. Chris Lintott et al., “Galaxy Zoo: Morphologies Derived from Visual Inspection of Galaxies from the Sloan Digital Sky Survey,” Monthly Notices of the Royal Astronomical Society 389, no. 3 (2008): 1179–89, https://doi.org/10.1111/j.1365-2966.2008.13689.x.
  9. Samantha Blickhan et al., “Individual vs. Collaborative Methods of Crowdsourced Transcription,” in “Collecting, Preserving, and Disseminating Endangered Cultural Heritage for New Understandings through Multilingual Approaches,” special issue, Journal of Data Mining and Digital Humanities (December 2019), https://hal.archives-ouvertes.fr/hal-02280013.
  10. The ongoing Tag Along with Adler project includes various workflows and project designs to test for optimal diversity in tag generation, user motivation, and engagement with the project, but for the purposes of this paper I will focus on the Zooniverse-hosted online workflows only. As defined by Jennifer Trant, tagging is a process with the focus on user choice of terminology; an individual tag can be a word or phrase chosen by a user. Jennifer Trant, “Studying Social Tagging and Folksonomy: A Review and Framework,” Journal of Digital Information 10, no. 1 (January 2009), 43.
  11. Alyx Rossetti, “Subject Access and ARTstor: Preliminary Research and Recommendations for the Development of an Expert Tagging Program,” Art Documentation: Journal of the Art Libraries Society of North America 32, no. 2 (Fall 2013): 284–300.
  12. Kris Wetterlund, “Flipping the Field Trip: Bringing the Art Museum to the Classroom,” Theory Into Practice 47, no. 2 (Spring 2008): 110–17.
  13. Stephanie Ogeneski Christensen et al., “Basic Guidelines for Minimal Descriptive Embedded Metadata in Digital Images,” Embedded Metadata Working Group, Smithsonian Institution, April 2010, http://www.digitizationguidelines.gov/guidelines/GuidelinesEmbeddedMetadata.pdf.
  14. Jennifer Schaffner, “The Metadata Is the Interface: Better Description for Better Discovery of Archives and Special Collections, Synthesized from User Studies,” OCLC: A Publication of OCLC Research, May 2009, https://library.oclc.org/digital/collection/p267701coll27/id/444/. The term “hidden collections” describes collections in institutions that could deepen public understanding of the histories of people of color and other communities whose work and experiences have been insufficiently recognized by traditional narratives. The term originates in a project initiated by CLIR, Digitizing Hidden Collections: Amplifying Unheard Voices, https://www.clir.org/hiddencollections/. The term “structured metadata” (also “structural metadata”) is defined by the CLIR as follows: “metadata that describes the types, versions, relationships and other characteristics of digital materials.” See William Arms, Christophe Blanchi, and Edward Overly, “An Architecture for Information in Digital Libraries,” D-Lib Magazine, February 1997, https://www.dlib.org/dlib/february97/cnri/02arms1.html.
  15. Ibid.
  16. Michael Hancher, “Seeing and Tagging Things in Pictures,” Representations 155, no. 1 (Summer 2021): 82–109, https://doi.org/10.1525/rep.2021.155.4.82.
  17. Brendan Ciecko, “Examining the Impact of Artificial Intelligence in Museums,” MW17: Museums and the Web 2017, accessed February 24, 2020, https://mw17.mwconf.org/paper/exploring-artificial-intelligence-in-museums/.
  18. The two tagging models selected for this project were the image attribute classifier trained on the Metropolitan Museum of Art’s iMet Collection 2019 dataset (https://tfhub.dev/metmuseum/vision/classifier/imet_attributes_V1/1) and the Google Cloud Vision API (https://cloud.google.com/vision).
  19. For the Metropolitan Museum of Art, see Chenyang Zhang et al., “The iMet Collection 2019 Challenge Dataset,” arXiv.org, June 3, 2019, http://arxiv.org/abs/1906.00901. For the Barnes Foundation, see Shelley Bernstein, “Using Computer Vision to Tag the Collection,” Medium, October 26, 2017, https://medium.com/barnes-foundation/using-computer-vision-to-tag-the-collection-f467c4541034. For the Massachusetts Institute of Technology, see Maria Kessler, “The Met × Microsoft × MIT: A Closer Look at the Collaboration,” Metropolitan Museum of Art, February 21, 2019, video, 2:40, https://www.metmuseum.org/blogs/now-at-the-met/2019/met-microsoft-mit-reveal-event-video. For the Philadelphia Museum of Art, see Penn Engineering, “Penn Engineering and the Philadelphia Museum of Art Join Forces to Envision the Future,” Medium, November 12, 2019, https://medium.com/penn-engineering/penn-engineering-and-the-philadelphia-museum-of-art-join-forces-to-envision-the-future-bde4cbfc282f. For the Harvard Art Museums, see Harvard Art Museums, “AI Explorer,” https://ai.harvardartmuseums.org/. For comprehensive results of such programs, see Brendan Ciecko, “AI Sees What? The Good, the Bad, and the Ugly of Machine Vision for Museum Collections,” Museums and the Web 2020, March 31–April 4, 2020, https://mw20.museweb.net/paper/ai-sees-what-the-good-the-bad-and-the-ugly-of-machine-vision-for-museum-collections/.
  20. Brendan Ciecko, “AI Sees What? The Good, the Bad, and the Ugly of Machine Vision for Museum Collections,” Museums and the Web 2020, March 31–April 4, 2020, https://mw20.museweb.net/paper/ai-sees-what-the-good-the-bad-and-the-ugly-of-machine-vision-for-museum-collections/.
  21. Ibid.
  22. “Your Paintings Project,” culture ant, https://culture-ant.com/your-paintings-project/, accessed February 28, 2022.
  23. Andrew Greg, quoted in Hancher, “Seeing and Tagging,” 82–109.
  24. Ciecko, “AI Sees What?”
  25. Spiers et al., “Everyone Counts?” See also Zooniverse Help, “How to Create a Project with Our Project Builder,” Zooniverse, https://help.zooniverse.org/getting-started/?_ga=2.172897696.2010710078.1613746401-631436202.1612287942.
  26. Ciecko, “AI Sees What?”
  27. Zooniverse Help, “How to Create a Project.” See also Laura Trouille et al., “DIY Zooniverse Citizen Science Project: Engaging the Public with Your Museum’s Collections and Data,” Museums and the Web 2017, April 19–22, 2017, https://mw17.mwconf.org/paper/diy-your-own-zooniverse-citizen-science-project-engaging-the-public-with-your-museums-collections-and-data/.
  28. “Home,” Tag Along with Adler, Zooniverse, https://www.zooniverse.org/projects/webster-institute/tag-along-with-adler.
  29. The project was designed with incremental releases of subject sets, or groups of images. That is, instead of releasing all 1,090 images at once, the images were broken into 10 sets of 100 images, and 1 set of 90 images. The sets were uploaded and made available to the public one at a time.
  30. Zooniverse projects do not require participants to register to participate, but any unregistered user who participates is assigned a single-use ID number for each session, making it difficult to ascertain whether such users participate more than once. Tag Along with Adler had 3,557 registered volunteers, with an additional 3,419 single-use identification sessions, for a maximum total of 6,976 individual users.
  31. Frauke Rohden et al., “Tagging, Pinging and Linking—User Roles in Virtual Citizen Science Forums,” Citizen Science: Theory and Practice 4, no. 1 (June 7, 2019): 19, https://doi.org/10.5334/cstp.181.
  32. Spiers et al., “Everyone Counts?”
  33. Simon Fuger et al., “User Roles and Team Structures in a Crowdsourcing Community for International Development—A Social Network Perspective,” Information Technology for Development 23, no. 3 (July 2017): 438–62, https://doi.org/10.1080/02681102.2017.1353947.

    See also Lesandro Ponciano and Francisco Brasileiro, “Finding Volunteers’ Engagement Profiles in Human Computation for Citizen Science Projects,” Human Computation 1, no. 2 (December 2014), https://doi.org/10.15346/hc.v1i2.12; and “Project Update: Gamifying the Transcription of Bentham’s Writings,” Transcribe Bentham, University College London, February 2019, https://blogs.ucl.ac.uk/transcribe-bentham/category/events/.

  34. Rohden et al., “Tagging, Pinging and Linking,” 19.
  35. Robert Simpson, “Who are the Zooniverse Community? We Asked Them . . . ,” Zooniverse (blog), Zooniverse, March 5, 2015, https://blog.zooniverse.org/2015/03/05/who-are-the-zooniverse-community-we-asked-them.
  36. United States Census Bureau, “Non-English Speakers: Most Common Languages, Chicago, IL,” accessed August 20, 2021, https://data.census.gov/table?q=languages+in+Chicago+&tid=ACSST1Y2021.S1601.
  37. Conversations are still ongoing at the Adler about the possibilities of incorporating catalogue and metadata tags in non-English languages.
  38. The survey that accompanies the project can be accessed here: https://forms.gle/JZ3fuZhKdvahe5dm7.
  39. Spiers et al. identify additional user demographic statistics for age and gender across five Zooniverse projects, although no information was recorded for race or education level.
  40. Charlie Blake, Allison Rhanor, and Cody Pajic, “The Demographics of Citizen Science Participation and Its Implications for Data Quality and Environmental Justice,” Citizen Science: Theory and Practice 5, no. 1 (October 2020): 21, https://doi.org/10.5334/cstp.320.
  41. Michael T. Nietzel, “New from U.S. Census Bureau: Number of Americans with a Bachelor’s Degree Continues to Grow,” Forbes, February 22, 2021, https://www.forbes.com/sites/michaeltnietzel/2021/02/22/new-from-us-census-bureau-number-of-americans-with-a-bachelors-degree-continues-to-grow/?sh=106c61957bbc.
  42. In 2018 the Andrew W. Mellon Foundation, the Association of Art Museum Directors, and the American Alliance of Museums published a report on the demographics of current museum staff. The report indicates that, in art museums, staff are predominantly female, though most senior leadership positions are held by men. Staff are also predominantly white (72%), with leadership positions approximately 80% white. The report did not study the age or educational level of museum staff. See Mariët Westermann, Liam Sweeney, and Roger Schonfeld, “Art Museum Staff Demographic Survey 2018,” Ithaka S+R, January 28, 2019, https://doi.org/10.18665/sr.310935.
  43. Work continues at the Adler on a process for maintaining quality control in user-generated tags. Accuracy is extremely important—looking at spelling, facticity, polysemy, and plurality, for example—but as this is ongoing work, this essay will not expand on it but only mention here that it is a necessary step to either be designed into the project for future users to assist with, or assigned as part of staff workload.
  44. On #MuseumSunshine, see “Closed Art Institutions Brighten Up the Day With #MuseumSunshine Images,” Observer, April 22, 2020, https://observer.com/2020/04/museum-sunshine-art-institutions-twitter-hashtag/. On museums and hashtags more generally, see Brendan Ciecko, Hilary-Morgan Watt, and Emily Haight, “How Museums Can Experiment with Social Media to Boost Audience Engagement During Coronavirus,” Cuseum, April 1, 2020, https://cuseum.com/webinars/how-museums-can-experiment-with-social-media-to-boost-audience-engagement-during-coronavirus-overview.
  45. Mia Ridge et al., “Introduction and Colophon,” The Collective Wisdom Handbook: Perspectives on Crowdsourcing in Cultural Heritage—Community Review Version, April 29, 2021, https://britishlibrary.pubpub.org/pub/introduction-and-colophon.
  46. Ibid.
  47. Ibid.

How to Cite

Jessica BrodeFrank, “Crowdsourcing Metadata in Museums: Expanding Descriptions, Access, Transparency, and Experience,” in Perspectives on Data, ed. Emily Lew Fry and Erin Canning (Art Institute of Chicago, 2022).

This essay has been peer reviewed through an open-review process.

© 2022 by The Art Institute of Chicago. This work is licensed under a CC BY-NC 4.0 license: https://creativecommons.org/licenses/by-nc/4.0/

https://doi.org/10.53269/9780865593152/02
