At the Creator Economy Lab, we study the policies and industry practices of the creator economy. We focus on how creators and their audiences experience, enact, and challenge the governance of social media platforms. Our goal is to understand the possibilities for political participation in sociotechnical systems and incorporate user perspectives into policy recommendations.
Publications
The TikTok caliphate: How jihadist supporters exploit algorithmic recommendations and evade content moderation [link]
Global creator culture? Converging values and generic practices in YouTube Reviews [link]
Stuck in the middleware with you: The challenges of capitalizing a market-oriented approach to platform governance [link]
Hot tubs, yoga pants, and gamba: Twitch’s controversial metas as cultural negotiations of platform governance [link]
Priorities and exclusions in Trust and Safety industry standards [link]
Aspirational platform governance: How creators legitimise content moderation through accusations of bias [link]
Parasocial media: The mass production of intimacy on a Chinese pop idol mobile application [link]
User-generated accountability: Public participation in algorithmic governance on YouTube [link]
Copyright callouts and the promise of creator-driven platform governance [link]
Projects
Chatbot vs. the Chat: How an AI streamer gamifies content moderation circumvention on Twitch
Content moderation begets content moderation circumvention, trapping platforms in a never-ending game with adversarial actors. The accessibility of generative AI has raised the stakes of the game, enhancing the capabilities of platforms and adversaries alike. It has also transformed and multiplied the game, bringing new actors into content moderation roles. Nowhere is this more evident than in the case of AI streamers who interact with chatting audiences in real time. To comply with platform policies, creators of AI streamers must design content moderation systems for an AI agent, the audience, and their often antagonistic interactions. We illustrate this process through the case of Neuro-sama, the leading AI streamer active on Twitch, YouTube, and Bilibili. Through an analysis of news articles (n=65), commentary videos (n=26), and threads of a popular drama subreddit (n=113), we show how an edgy persona is key to Neuro-sama’s appeal, driving attention and monetization in the form of subscriptions and paid chat interactions. We identify the tactics Vedal, Neuro-sama’s creator, uses to turn content moderation circumvention into a game, incentivizing the audience to bait the AI agent into saying bad things while maintaining necessary levels of compliance with transnational platform policies.
Team: Blake Hallinan, CJ Reynolds
Donation, subscription, purchase, token: A vocabulary of payments on social platforms
Mechanisms for payments between audiences and creators are now common features of social platforms. They constitute a new mode of value generation, independent of advertisers, which provides the financial foundation of patronage platforms like Patreon and OnlyFans, and supplements advertising revenue on multi-purpose platforms like YouTube and TikTok. While such mechanisms promise audiences the opportunity to directly support preferred creators, platforms govern social transactions through design and policy. Drawing on existing research and an analysis of ten mainstream and adult-oriented social platforms, we introduce a vocabulary of payment mechanisms, consisting of donations, subscriptions, purchases, and tokens. We describe how each mechanism typically configures the relationship between audiences, creators, and platforms, and present notable variants. We also discuss how platforms build inequality through tiering, obfuscate the economic aspects of payments through rhetoric and interface design, and restrict information about transactions in official APIs and transparency reports.
Team: Blake Hallinan, Dana Theiler, Kai Roland Green, Isabell Knief, CJ Reynolds
Tracking informal platform governance and alternative copyright norms through fan wikis
Platform governance research prioritizes formal mechanisms that can be codified in law or policy and tracked with trace data. While such mechanisms merit attention, the focus risks overlooking informal practices that both shape user experience and enable users to shape platform culture. We introduce a methodological approach to track informal platform governance through fan wikis, focusing on copyright enforcement on YouTube. We annotated 154 copyright controversies from Wikitubia to understand how creators and their communities understand, use, and evaluate copyright reporting tools. The wiki’s picture of copyright enforcement looks very different from YouTube’s official accounts: contributors rarely discussed Content ID and focused on political, economic, or interpersonal disputes, evaluating the legitimacy of enforcement through the moral status of the target. Our analysis demonstrates that informal platform governance is a semi-autonomous sphere with distinct actors, values, and practices that can complement or conflict with mainstream approaches to copyright and related issues.
Team: CJ Reynolds, D. Bondy Valdovinos Kaye, Landrous Shen, Blake Hallinan
Value alignment practices: The negotiation of generative AI on Chinese art commission platforms
This study examines the value alignment practices—actions that attribute and attempt to shape the values of sociotechnical systems—of the Chinese “original character” (OC) community regarding generative AI on art commission platforms. Through our analysis of policies, social media discourse, and interviews, we map stakeholder responses. While platforms deploy strategic ambiguity to balance political pressure to promote AI with the risk of user backlash, artists’ responses vary by market position, and consumers oppose AI-generated art as an ethical violation of OCs’ existence as digital beings. The study reveals how socio-political contexts and subcultural affective attachments shape our understandings of ethical AI.
Team: Landrous Shen, Blake Hallinan
Creator protection practices: Publicizing the financial risks of social payments
In circumstances where platforms delegate the financial responsibility for refunds, what issues do creators experience? And how do they respond to these challenges? To answer these questions, we created a boutique dataset of content creators discussing issues around refunds on X. Starting in November 2025, we began searching weekly for combinations of [creator, streamer, model] and [refund, chargeback] on X, manually reading and coding all relevant conversations on the topic, resulting in 50 posts with over 3300 replies thus far, collected with Zeeschuimer. Although data collection and analysis are ongoing, we already see how creators publicize the financial risks of social payments, drawing attention to risks poorly documented in platform policies and academic literature. They share strategies for mitigating risks, including how to configure donation settings and respond to bad actors; engage in fan pedagogy around proper social payment conduct; and provide social and emotional support to creator peers.
Team: Blake Hallinan, CJ Reynolds, Emilija Jokubauskaitė
Goodbye YouTube! A longitudinal analysis of creators leaving the platform
The phenomenon of videos where a creator announces their departure from the platform—what we call “quit vids,” referencing the genre of “quit lit” as narrative writing about the decision to leave a career—appears in the margins of journalistic and academic discourse on creators. Yet by discussing their reasons for leaving (or thinking about leaving), creators provide a self-produced “exit interview,” addressing their audiences, peers, and the platform itself in the absence of a regular employer. And in YouTube’s competitive attention economy, where views and often dollars are on the line, announcements of leaving are also strategic plays for visibility. We examine the phenomenon longitudinally, based on a manually vetted dataset of over 13,000 English-language quit vids published between 2006 and 2025. We are conducting a content analysis of 500 quit vids, randomly sampled from yearly intervals. Our codebook includes the tone of the video, reasons for leaving, framing of being a creator, and framing of YouTube as a platform. Computationally, we track links to other social media platforms in the descriptions of the videos, seeing how alternatives to YouTube change over time. We investigate which channels continue to post after their “goodbye” video and compare the engagement metrics of the video to channel averages to assess the role of quit vids as a visibility strategy. Taken together, our study provides a holistic account of the quit vid phenomenon, a creator-centric history of the platform, and a complementary perspective on working conditions within the creator economy.
Team: Blake Hallinan, Manon Raynaud, CJ Reynolds, Maria Rasskazova
Coordinated reporting and the question of harm
The problem of online harm is particularly thorny because harm is essentially contested and inescapably political; people, platforms, and regulators disagree, sometimes profoundly, over what counts as harm. To investigate community responses to harmful content, we focused on the use of social media among Jewish Israelis following 7 October 2023. We conducted 30 semi-structured, in-depth interviews, focused on participants’ involvement in coordinated actions aimed at eliciting platform responses (taking down content, banning an account, etc.). On the one hand, interviewees joined coordinated reporting campaigns as part of a social media turf war. On the other hand, interviewees included videos of the 7 October atrocities within their definition of harmful content and described intense psychological reactions to watching them, yet felt that more people should be exposed to them, and that the videos served both an important political role and contributed to Jewish-Israeli unity. Not only do we see that notions of harm are contextual, but in some contexts, harmful content can be seen as good, or even as serving a noble cause.
Team: Nicholas A. John, Blake Hallinan, Tommaso Trillò, Noa Niv, Omer Rothenstein, Dana Theiler
Community Notes as participatory consumer protection
X — then Twitter — launched Community Notes in 2021, allowing users to attach “notes” that contextualize, contest, or clarify posts on the platform. While Community Notes engages in conventional fact-checking tasks of verifying news and political discourse, it also plays an important role drawing attention to spam, scams, fraud, and other consumer protection issues. Through a combination of qualitative and computational text analysis of consumer protection-oriented Community Notes, we identify the types of consumer protection issues that the program flags, the sources of evidence participants use, and the relationship between the presence of community notes and other top-down content moderation responses. We reflect on the potential and limitations of participatory approaches for addressing consumer harms on social media.
Team: CJ Reynolds, Omer Rothenstein, Noa Niv, Yehonatan Kuperberg
Collaborators
CJ Reynolds, University of Copenhagen
Maria Rasskazova, Université Sorbonne Paris Nord
Rebecca Scharlach, University of Bremen [link]
Nicholas A. John, University of Manchester [link]
Landrous (Xinyue) Shen, Indiana University [link]
Tommaso Trillò, Hebrew University of Jerusalem [link]
D. Bondy Valdovinos Kaye, University of Leeds [link]
Kai Roland Green, Aarhus University [link]
Dana Theiler, Hebrew University of Jerusalem
Isabell Knief, Independent Researcher
Former Members
Gilad Karo, MA Student, Hebrew University of Jerusalem
Noa Niv, PhD Student, Hebrew University of Jerusalem
Yehonatan Kuperberg, MA Student, Hebrew University of Jerusalem
Omer Rothenstein, Incoming PhD Student, University of Zurich
Tom Divon, PhD Candidate, Hebrew University of Jerusalem [link]