Disclaimer: This Manifesto is independent of the studies listed below and references their findings solely to inform its conceptual development.
Studies supporting “AI contributions must be transparent”
- Lessons learned: Transparency around using AI needs to be specific to maintain trust – Source
  Randomized experiment (N = 1,483) shows AI labels can lower organisational trust unless paired with precise source lists, highlighting the nuance required in disclosure.
- Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects – PDF
  Survey experiment (N = 1,601) finds authorship labels boost transparency yet do not affect persuasion.
Studies supporting “Collaboration is layered”
- When humans and AI work best together — and when each is better alone – Source
  Review of 106 experiments shows human-AI combinations excel when roles are complementary but underperform on tasks where algorithms alone do better, underscoring the need to match roles to layers.
- When combinations of humans and AI are useful: A systematic review and meta-analysis – Source
  Meta-analysis of 370 effect sizes finds content-creation tasks gain the most from human-AI synergy, while decision tasks often suffer.
- Examining human–AI collaboration in hybrid intelligence learning environments – Source
  Classroom case study proposes a Synergy Degree Model; synergy fluctuates between low and moderate and maps onto clear teacher/AI layers.
- Human-AI Co-Creativity: Exploring Synergies Across Levels of Creative Collaboration – PDF
  Framework details four collaboration levels (Digital Pen → AI Co-Creator), with empirical cases showing creative gains when responsibilities are explicitly layered.
- Human-generative AI collaboration enhances task performance but undermines human's intrinsic motivation – Source
  Series of four studies (N = 3,562) finds immediate performance boosts but drops in long-term motivation, stressing careful layering for sustainable teamwork.
Studies supporting “Responsibility stays with people”
- Influence of AI behavior on human moral decisions, agency, and responsibility – Source
  Drone-operator experiment shows AI advice shifts moral choices and reduces explicit responsibility, providing evidence of real “responsibility gaps.”
- AI Explainability: How to Avoid Rubber-Stamping Recommendations – Source
  Expert panel and a 1,221-respondent survey conclude that explainability and human oversight are complementary safeguards; without explanations, oversight devolves into rubber-stamping.
- Accountable Artificial Intelligence: Holding Algorithms to Account – PDF
  Public-sector analysis urges transparent, interpretable models so humans can answer for algorithmic decisions (the information-explanation-consequence triad).