Stanford AI Index - Impact - Summer 2024

https://aiindex.stanford.edu/

The presence of AI in every aspect of life has become a fact of the past 20 months. With the publication of the Stanford AI Index, two areas have come into focus. For museums: how to work with industry giants without having their offering "distanced" by the summarising power of AI. For artists: how to thrive when the sources of production are being monetised in Silicon Valley.

Twenty months into the global focus on artificial intelligence (AI) and its increasing presence in daily life, the nature and scale of the AI challenge to artists and art institutions, and to academics seeking transparent access to information, has become clearer following the publication of the seventh annual edition of Stanford University’s Artificial Intelligence Index Annual Report.

The report, the most comprehensive research study of the state of AI, reveals an industry increasingly dominated, and monetised, by the tech giants Google, Microsoft, OpenAI and Meta, and where the cost of advanced breakthroughs in the functioning of AI—once made by academic institutions—has become prohibitively expensive. Meanwhile, it finds that new closed, proprietary, AI models—whose lack of transparency is a source of increasing concern to the tech giants themselves—have been outperforming the capabilities of open-source models, which are fully available to academics, artists and developers.

Existential challenges

Both art institutions and artists face existential challenges in negotiating this AI universe and its emerging financial model. Art-world experts consulted by The Art Newspaper have offered a mix of pragmatic, creative, combative and hopeful responses to the report and to the state of the AI industry.

The museum sector is at a technological tipping point and will soon have to engage with industry giants such as Google to disseminate information and data, the museum director Thomas Campbell told The Art Newspaper in Hong Kong earlier this year. “It’s just a matter of months before these systems are going to be telling you about Monet, Medieval tapestries or Damien Hirst,” he said. “They’re going to be doing that whether we are participating or not.”

The challenges of cost and accessibility, and of protecting intellectual property, that have emerged in the age of AI, and the financial challenges written into the Stanford report, are far from unique to the art world. But for museums, some of the biggest questions are those of distance and control: how they can harness the power of AI to classify and offer new insights into their collections, as a partner or licensee of tech giants, without finding themselves further away from the audience that is looking to access and understand their art and activities, distanced by the summarising power of AI chatbots. (The challenge of AI "distancing" is one that news publications face this month as Google rolls out, first in the US, an AI summary in response to word searches, in place of the longstanding interface that ranked stories while displaying links back to their sources.)

The Future Art Ecosystems (FAE) team at Serpentine, in London, which has been leading research for the past decade on how cultural institutions work with AI, takes a less binary view. "Lines of power distribution are still being drawn", the team tells The Art Newspaper, because of legal challenges—largely on copyright to generative AI models that depend on scraping vast amounts of image and text data from the internet—and because of growing public awareness around the "cultural, regulatory and ownership interests" attached to the functioning of leading AI models such as GPT-4 and Stable Diffusion.

The FAE team, which published Future Art Ecosystems 4: Art x Public AI (FAE 4), their fourth annual report aimed at encouraging new thinking and collaboration around the interaction between art and technology, says, “The mandate of cultural institutions is to make informed decisions that serve the public interest. This does not mean there should be an absolute embargo on partnering with large corporate actors, but the terms of that partnership should benefit the public” above and beyond whether they have access to advanced AI or not.

How to work with big tech

At the heart of the 2024 report is an emerging dilemma that every artist and every art institution seeking to engage AI will have to face: will that engagement be through and with AI created by the world’s giant technology companies, or will they look to develop their own, or work with open-source models freely available on the web?

Access to, and control of, the technologies of production is a critical part of artistic, democratic and institutional freedom. AI’s complexity and cost mean that disparities between proprietary “closed” technologies and the open sharing and re-use of technology, data and ideas will likely increase—as may the impact of those disparities on democratic and creative freedoms.

According to the Stanford report, the most dramatic breakthroughs in AI in the past year have been made in the closed, proprietary, approach, and as the report’s editor-in-chief Nestor Maslej said: “If it’s the case that closed developers are substantially outpacing or substantially outperforming developers that are open, this could have a lot of implications for how democratic and how widely distributed the benefits of the AI revolution could possibly be.”

The Artificial Intelligence Index Report shows how rapidly the dynamics of AI development are changing. In the 18 months since ChatGPT was released, AI has surpassed human capability in entire task categories, including some forms of image classification, visual reasoning and English understanding. Crucially, this capability is starting to have decisive outcomes in science, where new AI applications such as GNoME, which accelerates the process of materials discovery, have emerged in the last year.

Driving these advances is corporate investment in proprietary models—and the cost of those advances is growing. Industry now clearly leads advanced, or “frontier”, AI research, the cost of which is moving beyond the capacity of states or academic institutions to lead. The cost of training a “frontier” AI model—one which may do or discover something new—is already over $100m and will increase. These costs are driving ever-larger investment rounds, with over $25.2bn of private investment in the last year.

However, the report’s authors point out that university researchers, who have been sidelined financially from the recent AI breakthroughs—dominated by the tech giants—may regain their place in the vanguard through research breakthroughs, not least in the fields of efficiency in how data can be used, the kind of breakthrough that might change the stakes in the innovation “space race”. It also highlights that 2023 saw the launch of 21 notable AI models through industry/academic collaborations, “a new high”.

The main social outcome of this so far is anxiety. The world is noticing the incursion of AI into everyday life and getting increasingly nervous about it, with recent data from the Pew Research Center showing 52% of people were more concerned than excited about AI, up from 37% two years earlier. If the gap between proprietary technologies and those made directly by or openly (and fully) available to artists and institutions continues to grow, what might that mean for what we understand an AI artist to be? And how can art institutions maintain the trust and data integrity they depend on to fulfil their public roles?

The emerging dilemma about the power and ownership of AI means we may already have passed a threshold where what it meant to be an “AI artist” for the last 50 years is obsolete, and what it will mean has not yet emerged.

Harold Cohen is commonly accepted as the first AI artist, and others have since followed his version of what an "AI artist" is. In the late 1960s Cohen created AARON, the software that painted his paintings. Cohen was both the creator of the technology and a collaborator with it: he made his AI model in his own artistic image, with AARON painting in the style Cohen had worked in successfully for some years. AARON and Cohen became inseparable from each other.

This conjoined role—humans making the AI, and then acting as creative collaborators with the technology they made—has been the basis of what we have meant by an "AI artist" for the last 50 years. It is the model underlying the last decade's generation of artists whose use of AI has brought them, and the wider medium, to prominence.

The notable AI artists of the last decade—Refik Anadol, Mario Klingemann and others—have worked primarily with a type of AI called GANs (generative adversarial networks). GANs are technologically complex, but within reach of technically skilled, independent artists; they are not, however, as complex or capable as the new generation of Generative AIs unleashed since 2022.

Take an example such as Jake Elwes’s 2019 Zizi—Queering the Dataset, which brilliantly skewered the biases in facial recognition systems by injecting 1,000 images of drag and gender fluid faces into a 70,000-strong image set used to “train” an AI model—so as to reimagine what “normal” looks like.

Anadol, like many other notable AI artists, spent time experimenting with AI at Google, with a residency at Artists and Machine Intelligence (AMI) in 2016, but his approach has ultimately been the same as Cohen’s: to build his own technologies and then collaborate with the technology to produce work.

But Generative AI has shifted the dynamics of AI's technical complexity and its potential for creativity, radically and at speed. GANs work from relatively small bodies of data to complete very specific tasks. Generative AI builds from millions, hundreds of millions, even billions of pieces of data—with outputs in multiple formats—and is only at the beginning of being explored. It has evolved at such speed in the last 20 months that previous generations of AI have been completely left behind.

This year Anadol has become the first significant figure to move his production—maintaining his role as artist and technology creator—into this much more complex, more expensive domain. His recent project for the World Economic Forum in Davos, later shown at the Serpentine in London, is based on millions of images, sounds and texts inspired by data on flora, fungi and fauna from over 16 rainforest locations globally.

Full article: https://www.theartnewspaper.com/2024/05/30/the-art-worlds-ai-dilemma-how-can-artists-and-museums-thrive-when-big-tech-controls-the-monetising-of-artificial-intelligence
