There is no AI!

I read a really interesting take on AI from Jaron Lanier in the New Yorker.

He also spoke about the same issues during a recent talk at Berkeley.

The premise of his argument is that what we are calling “AI” is really just a super-clever suite of mashup technologies that take training data and use it to generate ‘new’ information. He argues from a standpoint of what he calls “Data Dignity” – if AI were to provide a record of how it arrived at its answers (if it were to show its workings, as my maths teacher used to insist), then we’d be better served as a society. We’d be able to see AI for what it is and would re-privilege humans as the ultimate creators and arbiters of what is ‘creative’ and ‘intelligent’.

To quote from the New Yorker piece:

A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it’s people who have written the text and furnished the images. The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.
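
To make the “mashup with provenance” idea a bit more concrete, here is a deliberately tiny toy sketch in Python – my own illustration, not Lanier’s actual proposal, and nothing like how GPT-4 really works. It ‘answers’ a prompt by stitching together whichever snippets in a hand-made corpus share the most words with it, and reports which humans those snippets came from. All the names, data and scoring are invented for the example.

from collections import Counter

TRAINING_DATA = {
    "alice": "Photosynthesis converts sunlight, water and carbon dioxide into sugars.",
    "bob": "Plants use chlorophyll to capture light energy.",
    "carol": "The Krebs cycle releases energy stored in glucose.",
}

def word_overlap(a, b):
    # Crude similarity: the number of lowercase words the two strings share.
    return len(set(a.lower().split()) & set(b.lower().split()))

def answer_with_provenance(prompt, top_k=2):
    # Score every training snippet against the prompt, keep the best matches,
    # and return the stitched-together "answer" along with its human sources.
    scores = Counter({author: word_overlap(prompt, text)
                      for author, text in TRAINING_DATA.items()})
    sources = [author for author, score in scores.most_common(top_k) if score > 0]
    mashup = " ".join(TRAINING_DATA[author] for author in sources)
    return mashup, sources

answer, sources = answer_with_provenance("how do plants capture light energy")
print(answer)
print("Drawing on work by:", sources)

The interesting part, for Lanier’s argument, is only the shape of the return value: an answer plus the people it was mashed up from.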

Given that there’s a special emphasis in this course on AI, I thought this was worthy of discussion. What does everyone else think? I find Lanier’s argument compelling and humanistic.

About Jaron Lanier.



5 responses to “There is no AI!”

  1. David Vogt

    Thanks for this, Andrew –

    I read the New Yorker piece as well. While this perspective on AI is reasonable in terms of potentially mitigating our angst, I find it (and similar arguments) disturbingly xenophobic in seeking to buttress how special the human mind and human creativity are. The trouble is that we actually don’t understand yet in any systematic or quantifiable way what human intelligence, consciousness or creativity is, so we have no firm ground from which to differentiate ourselves from AI.

    Put another way, speaking to an audience of educators, who invest their careers in helping students learn to mash up hidden concordances across the work of other humans in (hopefully) original and valuable ways, is there any way you can, or could, ever unambiguously determine that true invention (whatever that is) is involved in anything that another human creates?

    Or more bluntly, if human creativity is not some form of algorithm, what else could it be?

    Provocation for the purpose of debate…

    David


    1. andrew dunn

      “we actually don’t understand yet in any systematic or quantifiable way what human intelligence, consciousness or creativity is, so we have no firm ground from which to differentiate ourselves from AI”

      But I think Lanier would dispute the very premise of this objection: if AI is to be understood (albeit simplistically) as a clever way to create new concordances within information created by ‘actual intelligences’, then it isn’t in a category that we can compare with human (or other non-human) intelligences. It’s like saying that a table is intelligent if we choose to define ‘intelligence’ in ways that rely on its table-ness, and that we’re being anti-furniture if we don’t select a definition that can include the table.

      Perhaps if/when AI becomes general as opposed to narrow (I would argue that GPT-4 and its cousins are the most recent and most impressive instances of the latter), then we can worry about whether privileging humans when it comes to defining ‘intelligence’ is a problem.

      As to what human (or any) creativity is – I should probably leave that to trained philosophers, but I’d see it more as an emergent property than as the result of a stepwise algorithmic process. Happy to concede that my understanding of how we measure consciousness/creativity is pretty limited, but I am confident that what ChatGPT and other ‘AI’ tools are able to do is not ‘creative’ in a truly meaningful way.


      1. andrew dunn

        I asked ChatGPT for its thoughts.

        Prompt: Given that ChatGPT uses training data and does not create data of its own, can it be said to be ‘creative’ or does it merely provide very clever mash-ups of pre-existing information in ways that appear novel?

        ChatGPT, like other AI language models, operates by generating responses based on patterns and information present in its training data. It does not possess true creativity or original thought in the way humans do. Instead, it generates text by drawing from its extensive knowledge base to provide responses that are contextually relevant, coherent, and, at times, can appear novel. However, this novelty is a result of cleverly combining existing information rather than generating entirely new ideas or insights.

        In essence, ChatGPT is not creative in the traditional sense. It doesn’t have independent thought, consciousness, or the capacity to create novel concepts or ideas that were not present in its training data. Its responses are constrained by the data it was trained on and the statistical patterns it learned from that data.

        When ChatGPT generates text that appears creative or novel, it’s more accurately described as a form of “creativity by combination” or “creative synthesis.” It can combine information in ways that may surprise or engage users, and this can be a valuable tool for generating ideas or solutions, but it’s not the result of true creative thinking.

        In summary, ChatGPT doesn’t possess creativity in the human sense but can provide responses that appear creative by recombining and repurposing the vast amount of information present in its training data. Its “creativity” is a product of its ability to generate text based on patterns and associations learned during training.


        1. David Vogt

          No worries, I’m not claiming to be an expert on intelligence or creativity either, but the central problem remains that we don’t have any way to describe, demonstrate or prove that “actual intelligence” is something that humans have, as distinct from an AI. Yes, humans create lots of things, but what is special about our creative process relative to what an AI does: why can’t all of human creativity also be reduced to “a product of [our] ability to generate text [etc] based on patterns and associations learned during training”? Can you offer an example of anything that a human has ever created that can’t be fully encompassed by this phrase? Over millions of years we’ve generated lots of patterns and associations, and have become very good at training, but is there any roadblock that will prevent an AI from catching up quickly?


          1. andrew dunn

            “Can you offer an example of anything that a human has ever created that can’t be fully encompassed by this phrase?”

            I’ll give it a shot. Not being a physicist, I have probably got some of the specifics of this wrong, and it’s probably a foolhardy example to offer an astronomer, but here goes…

            I’m thinking of the cognitive/creative leap Einstein made when he imagined light as being quantized into photons, each carrying a discrete amount of energy. Up to that point scientists had understood light as a continuous wave: its energy could take any value in between (i.e. it was analog), and in the wave picture that energy depends on the light’s intensity rather than its frequency. He arrived at the insight while trying to resolve the problem of the photoelectric effect (electrons are only ejected above a threshold frequency, no matter how intense the light), which the wave theory was unable to account for. Recombining his ‘training data’ – everything he had learned up to that point – in novel ways could not have led to this insight, I would argue. Or to put it another way, if we fed all of physics up to 1905 (when he proposed his theory) into an LLM and asked it to account for the photoelectric effect, it would never be able to do so.
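
            For anyone who wants the formula (and, again, I’m simplifying): Einstein’s proposal was that each light quantum carries energy E = hf, where h is Planck’s constant and f the frequency; this gives K_max = hf - W for the maximum kinetic energy of an ejected electron, with W the metal’s work function. That frequency dependence is exactly what the continuous wave picture couldn’t supply.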

            I don’t know what the difference is between what Einstein was able to do and what the tools we call AI can do now, but I suspect there is a difference, and it’s qualitative.

            None of this is to say that today’s AI tools aren’t extraordinarily useful. They will certainly have a huge impact on our field. But I’m with Lanier when it comes to how we conceive of them, and how they differ from the creativity and imagination other entities are capable of (specifically humans, but who knows?).

            Edit: I should also give due credit to Max Planck, who came up with the idea of quantized radiation in the first place – perhaps his, and not Einstein’s, was the creative leap, and Einstein just rehashed Planck’s ideas in a novel way?


