Creating AI Art Responsibly: A Field Guide for Artists

Claire R. Leibowicz
Emily Saltz
Lia Coleman

Abstract

Machine learning tools for generating synthetic media enable creative expression, but they can also produce content that misleads and causes harm. The Responsible AI Art Field Guide offers designers, artists, and other makers a starting point for using AI techniques responsibly. We suggest that artists and designers using AI situate their work within the broader context of responsible AI, attending to the potentially unintended harmful consequences of their work as understood in domains such as information security, misinformation, the environment, copyright, and biased and appropriative synthetic media. First, we describe the broader dynamics of generative media to emphasize that artists and designers using AI operate within a field with complex societal characteristics. We then describe our project, a guide focused on four key checkpoints in the lifecycle of AI creation: (1) dataset, (2) model code, (3) training resources, and (4) publishing and attribution. Ultimately, we emphasize that artists and designers using AI should treat these checkpoints and provocations as a starting point for building out a creative AI field attentive to the societal impacts of their work.
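The 'training resources' checkpoint, for example, asks makers to account for the energy and carbon costs of model training (cf. Lacoste et al., 2019, in the references below). A minimal sketch of what such accounting can look like in practice, assuming the open-source codecarbon Python package and a placeholder train_model() function standing in for an artist's actual training loop:

```python
# A minimal sketch of the 'training resources' checkpoint, assuming the
# open-source codecarbon package (pip install codecarbon); train_model()
# is a placeholder for an artist's actual training loop.
from codecarbon import EmissionsTracker

def train_model():
    """Placeholder: e.g., fine-tune a generative model on a custom dataset."""
    pass

tracker = EmissionsTracker(project_name="ai-art-training")  # name is arbitrary
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq

print(f"Estimated training emissions: {emissions_kg:.3f} kg CO2-eq")
```

Recording an estimate like this alongside a published piece is one way the guide's 'publishing and attribution' checkpoint can also be made concrete.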


Article Details

How to Cite
Leibowicz, C., Saltz, E., & Coleman, L. (2021). Creating AI Art Responsibly: A Field Guide for Artists. Diseña, (19), Article.5. https://doi.org/10.7764/disena.19.Article.5
Section
Projects
Author Biographies

Claire R. Leibowicz, Partnership on AI

BA in Psychology and Computer Science, Harvard University. Master in the Social Science of the Internet, University of Oxford (as a Clarendon Scholar). She is the Head of the AI and Media Integrity program at the Partnership on AI, a global multistakeholder nonprofit devoted to responsible AI. Under her leadership, the AI and Media Integrity team investigates the impact of emerging AI technology on digital media and online information. She is also a 2021 Journalism Fellow at Tablet Magazine, where she is exploring questions at the intersection of technology, society, and digital culture, and an incoming doctoral candidate at the Oxford Internet Institute. Her latest publications include ‘Encounters with Visual Misinformation and Labels Across Platforms: An Interview and Diary Study to Inform Ecosystem Approaches to Misinformation Interventions’ (with E. Saltz and C. Wardle; Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Article 340) and ‘The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media’ (with A. Ovadya and S. McGregor; Proceedings of the 2021 ACM Conference on Artificial Intelligence, Ethics, and Society).

Emily Saltz, The New York Times

Master in Human-Computer Interaction, Carnegie Mellon University. She is a UX Researcher studying media and misinformation, working with organizations like the Partnership on AI and First Draft. At The New York Times, where she works as a UX researcher, she led UX for The News Provenance Project. Some of her work includes a collaboration on an AI-generated op-ed for author Oobah Butler on being catfished by AI (The Independent, 2021); explorations of text prediction software such as ‘Human-Human Autocompletion’ (presented at WordHack at Babycastles, 2020) and ‘Super Sad Googles’ (presented at Eyeo 2019); and ‘Filter Bubble Roulette’, a mobile VR experience to inhabit user-specific social media feeds (presented at The Tech Interactive in San Jose, 2018).

Lia Coleman, Rhode Island School of Design

BSc in Computer Science, Massachusetts Institute of Technology. She is an artist, AI researcher, and educator. Adjunct Professor at Rhode Island School of Design, she teaches machine learning art. She is the author of ‘Machines Have Eyes’ (with A. Raina, M. Binnette, Y. Hu, D. Huang, Z. Davey, and Q. Li; in Big Data. Big Design: Why Designers Should Care About Machine Learning; Princeton Architectural Press, 2021), ‘Art'ificial’ (with E. Lee; Neocha Magazine, 2020), and ‘Flesh & Machine’ (with E. Lee; Neocha Magazine, 2020). Some of her recent workshops and talks include ‘How to Play Nice with Artificial Intelligence: Artist and AI Co-creation’ (presented at Burg Giebichenstein University of Art and Design, 2021); ‘A Field Guide to Making AI Art Responsibly’ (presented at Art Machines: International Symposium on Machine Learning and Art); and ‘How to Use AI for Your Art Responsibly’ (presented at Mozilla Festival, 2020, and Gray Area, 2020).

References

Bhatt, U., Andrus, M., Weller, A., & Xiang, A. (2020). Machine Learning Explainability for External Stakeholders. ArXiv Preprint, (arXiv:2007.05408). http://arxiv.org/abs/2007.05408

Bickert, M. (2020, January 6). Enforcing Against Manipulated Media. Facebook Blog. https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/

Buolamwini, J. (2016). Project Overview: Algorithmic Justice League. MIT Media Lab. https://www.media.mit.edu/projects/algorithmic-justice-league/overview/

Chesney, R., & Citron, D. K. (2018). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107, 1753. https://doi.org/10.15779/Z38RV0D15J

Costanza-Chock, S. (2018). Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice. Proceedings of the Design Research Society 2018. https://doi.org/10.21606/drs.2018.679

Crawford, K., & Paglen, T. (2019). Excavating AI: The Politics of Images in Machine Learning Training Sets. Excavating AI. https://excavating.ai

Diehm, C., & Sinders, C. (2020, May 14). “Technically” Responsible: The Essential, Precarious Workforce that Powers A.I. The New Design Congress Essays. https://newdesigncongress.org/en/pub/trk

Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., & Ferrer, C. C. (2020). The DeepFake Detection Challenge (DFDC) Dataset. ArXiv Preprint, (arXiv:2006.07397). https://arxiv.org/abs/2006.07397v4

Epstein, Z., Levine, S., Rand, D. G., & Rahwan, I. (2020). Who Gets Credit for AI-Generated Art? iScience, 23(9), 101515. https://doi.org/10.1016/j.isci.2020.101515

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. ArXiv Preprint, (arXiv:1803.09010). https://arxiv.org/abs/1803.09010v1

Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., & Waldo, J. (2018). Embedded EthiCS: Integrating Ethics Broadly Across Computer Science Education. ArXiv Preprint, (arXiv:1808.05686). https://arxiv.org/abs/1808.05686

Hao, K. (2019, June 6). Training a Single AI Model Can Emit as Much Carbon as Five Cars in Their Lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

Hara, N. (2020). Pause Fest [AI-Generated Image]. http://www.n-hara.com

Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the Carbon Emissions of Machine Learning. ArXiv Preprint, (arXiv:1910.09700). https://arxiv.org/abs/1910.09700

Leibowicz, C. R. (2020). The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity. Partnership on AI. https://www.partnershiponai.org/wp-content/uploads/2020/03/671004_Format-Report-for-PDF_031120-1.pdf

Leibowicz, C. R., Stray, J., & Saltz, E. (2020, July 13). Manipulated Media Detection Requires More Than Tools: Community Insights on What’s Needed. Partnership on AI. https://www.partnershiponai.org/manipulated-media-detection-requires-more-than-tools-community-insights-on-whats-needed/

Li, Y., & Lyu, S. (2019). De-identification Without Losing Faces. Proceedings of the ACM Workshop on Information Hiding and Multimedia Security, 2019, 83–88. https://doi.org/10.1145/3335203.3335719

Lomas, N. (2020, August 17). Deepfake Video App Reface Is Just Getting Started on Shapeshifting Selfie Culture. TechCrunch. https://social.techcrunch.com/2020/08/17/deepfake-video-app-reface-is-just-getting-started-on-shapeshifting-selfie-culture/

Lyons, M. J. (2020). Excavating “Excavating AI”: The Elephant in the Gallery. ArXiv Preprint, (arXiv:2009.01215). https://doi.org/10.5281/zenodo.4037538

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596

Mix. (2020, May 7). This AI Spits Out an Infinite Feed of Fake Furry Portraits. The Next Web. https://thenextweb.com/news/ai-generated-furry-portraits

Moisejevs, I. (2019, July 14). Will My Machine Learning System Be Attacked? Towards Data Science. https://towardsdatascience.com/will-my-machine-learning-be-attacked-6295707625d8

Nicolaou, E. (2020, August 27). Chrissy Teigen Swapped Her Face with John Legend’s and We Can’t Unsee It. Oprah Daily. https://www.oprahdaily.com/entertainment/a33821223/reface-app-how-to-use-deepfake/

Paris, B., & Donovan, J. (2019). Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/

Patrini, G. (2019, October 7). Mapping the Deepfake Landscape. Sensity. https://sensity.ai/mapping-the-deepfake-landscape/

Posters, B. (2019, May 29). Gallery: “Spectre” Launches (Press Release). http://billposters.ch/spectre-launch/

Raji, I. D., & Yang, J. (2019). ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles. ArXiv Preprint, (arXiv:1912.06166v1). http://arxiv.org/abs/1912.06166

Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2021). Where Responsible AI Meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 7. https://doi.org/10.1145/3449081

Roth, Y., & Achuthan, A. (2020, February 4). Building Rules in Public: Our Approach to Synthetic & Manipulated Media. Twitter Blog. https://blog.twitter.com/en_us/topics/company/2020/new-approach-to-synthetic-and-manipulated-media

Rothkopf, J. (2020, July 1). Deepfake Technology Enters the Documentary World. The New York Times. https://www.nytimes.com/2020/07/01/movies/deepfakes-documentary-welcome-to-chechnya.html

Salgado, E. (2020, August 5). Yaku with Circular Loops [AI-Generated Image]. https://www.youtube.com/watch?v=kSQW8Q2WV9c

Saltz, E., Coleman, L., & Leibowicz, C. R. (2020). Making AI Art Responsibly: A Field Guide [Zine]. Partnership on AI. https://www.partnershiponai.org/wp-content/uploads/2020/09/Partnership-on-AI-AI-Art-Field-Guide.pdf

Schultz, D. (2019). Faces2flowers. Artificial Images. https://artificial-images.com/project/faces-to-flowers-machine-learning-portraits/

Schultz, D. (2020). Artificial Images. https://artificial-images.com/

Simonite, T. (2018, November 28). How a Teenager’s Code Spawned a $432,500 Piece of Art. Wired. https://www.wired.com/story/teenagers-code-spawned-dollar-432500-piece-of-art/

Twitter Safety [@TwitterSafety]. (2020, October 30). Our policies are living documents. We’re willing to update and adjust them when we encounter new scenarios or receive important… [Tweet]. Twitter. https://twitter.com/TwitterSafety/status/1322298208236830720