IP 2: Artificial Intelligence

I

Who were/are these people, and how did/does each contribute to the development of artificial intelligence? How did/does each think “intelligence” could be defined?

Alan Mathison Turing (1912-1954) – Mathematician / Computer Scientist / Cryptanalyst

Turing asked “can machines think?” (Turing, 1950, p. 433) by means of a hypothetical game in which one of three participants – two human, one machine – analyzes textual communication to determine the identities of the others (Turing, 1950, p. 433); the game has since gained shorthand status in the domain of AI as the Turing Test. Passing refers to whether a machine’s output reads as human. Turing was an early advocate for the essentiality of knowledge acquisition to machine intelligence (Chollet, 2019, p. 6).

John McCarthy (1927-2011) – Computer Scientist

McCarthy is co-credited with coining the term artificial intelligence (AI) in a proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, initiating AI as a field of study. He is additionally known for seminal contributions to early networking systems, functionality necessary for the internet to exist (Woo, 2014). Generality permeated his perspective on AI: to McCarthy, intelligence meant adaptability and flexibility in novel situations (Chollet, 2019, pp. 5-6, 9).

Herbert Simon (1916-2001) – Political Scientist / Social Scientist

Engaged in computer science, economics, and psychology, Simon co-created the first artificially intelligent computer program, the Logic Theorist, which was able to solve complex logic problems (Gugerty, 2006, p. 880). Simon researched decision making and believed that to problem-solve effectively, one must first collect and analyze relevant data until it is fully understood. To Simon, intelligence advances as each experience or decision point informs future action, not unlike learning (The Decision Lab, n.d.).

Marvin Minsky (1927-2016) – Computer Scientist / Cognitive Scientist

Co-credited with coining the term artificial intelligence (AI), Minsky co-founded MIT’s artificial intelligence laboratory, a forerunner of today’s Computer Science and Artificial Intelligence Laboratory. He authored philosophical texts that explore the nature of human intelligence in relation to AI (Rifkin, 2016), yet his perspective on intelligence in AI is retrospectively limited, reflecting a collection of logical, task-specific skills (Chollet, 2019, p. 5). Minsky’s expertise informed HAL 9000 in Kubrick’s 2001: A Space Odyssey (Stork, 1997).

Timnit Gebru (b. 1982) – Computer Scientist

Gebru published evidence of racial and gender bias within AI. She recognizes that data and programming are imbued with developer and societal bias: AI is not objective (Buolamwini & Gebru, 2018). Gebru gained public recognition when she was fired from Google for identifying risks associated with training “large language models” (Hao, 2020), highlighting concerns about AI development being exploited by massive companies in positions of wealth and power.

II

How do “machine (programming) languages” differ from human (natural) ones?

Harris (2018) describes programming languages as “logical, precise, perfectly unambiguous”: codes designed by humans to perform specific operations, where each phrase represents a particular command. The languages of human-to-human communication (speech, writing, body language, etc.) are saturated with nuance, context, history, and shared cultural perspectives. They are not only “imperfect,” as Harris describes; their fluctuating and implicit nature is essential to their definition. Jones (2020) describes the study of pragmatics in linguistics as being “concern[ed] with how people communicate and discern intentions below the level of explicit meaning” (pp. 24-25). Although programming languages are inherently explicit, Jones (2020) explains (or perhaps warns) that via algorithmic pragmatics, algorithms gather extensive data about human behaviour and use it to uncover implicit information, even beyond what is communicated through natural language (pp. 29-32).

III

How does “machine (artificial) intelligence” differ from the human version?

Chollet (2019) argues that part of the struggle to develop sophisticated artificial intelligence (AI) reflects the challenge of defining what exactly intelligence is; without a clear definition, AI technology has failed to compare to human intelligence. Advanced AI excels at specifically programmed task-based functionality, at times performing far beyond human ability, but this functionality lacks generalization, “the ability to handle situations (or tasks) that differ from previously encountered situations” (Chollet, 2019, pp. 9-10). In contrast, human intelligence is highly generalized: we are constantly facing new experiences and using prior knowledge to determine and adapt our behaviour accordingly. Generality is essential to human nature, and it is an aspect that developers are only beginning to program, in comparatively rudimentary ways, in contemporary AI.

IV

How does “machine learning” differ from human learning?

When compared to machine learning, human learning is slow, and it comes from everywhere: intentional education, reading, passive observation, interactions, perpetual multifaceted lived experience. As we encounter life, we build our own subjectivity, shaped by the cultures we associate with; each new experience is weighed against our prior knowledge, reflected on, synthesized, and integrated in accordance with our worldview. Machine learning consists of gathering massive amounts of data, systematically analyzing it to discover patterns, and then using that information as per programmed metrics (Heilweil, 2020). This data is not representative of one individual’s complex yet cohesive lived experience; it represents the fragmented digital output of millions. Human learning is closely attached to emotional experience, while machine learning is emotionally vacant; yet the data it gleans is embedded with infinite emotion (infinite bias) that the machine cannot be programmed to accurately decipher.

V

How do YOUR answers to these questions differ from what a machine could generate?

It took several iterations to write the short biographies in question one. On the first attempt, I reviewed many web-based articles to gather a sense of the individuals. I spent hours wordsmithing to cleverly shorten my responses, only to realize I had failed to provide any insight into their perspectives on intelligence. I had skimmed Chollet’s (2019) article enough to know that within it was information about intelligence, yet I had not read it thoroughly enough to achieve comprehension. I read it in depth and re-worked each biography with a new understanding of task-based versus generalized intelligence.

Writing is difficult for me. Also, and perhaps more importantly, thinking about what to write is difficult for me and is often a laborious process. To feel confident in my writing, I must read everything available, think about it extensively, talk about it to those who might engage (e.g., pragmatics with a friend who studied linguistics, algorithmic bias with an archivist friend).

Additionally, part of my personal synthesis is looking for connections. Although not explicitly stated, I always consider Vygotsky’s concept of sociocultural theory when I think of humans learning; I thought of how Jones (2020) implied ideas of actor-network theory and noted that he cited Latour – and from my prior learning, I know why Vygotsky and Latour are relevant.

I print out my texts, I make marginalia!

I like to think that this work was produced idiosyncratically and that an AI-generated version of these responses would be too uncanny valley to be mistaken for human-made.

References

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the Conference on Fairness, Accountability and Transparency (pp. 77-91). PMLR.

Chollet, F. (2019). On the measure of intelligence. https://arxiv.org/pdf/1911.01547.pdf

Gugerty, L. (2006). Newell and Simon’s Logic Theorist: Historical Background and Impact on Cognitive Modeling. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(9), 880–884. https://doi.org/10.1177/154193120605000904

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru

Harris, A. (2018, October 31). Human languages vs. programming languages. Medium. https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252

Heilweil, R. (2020, February 18). Why algorithms can be racist and sexist. A computer can make a decision faster. That doesn’t make it fair. Vox. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

Jones, R. (2020). The rise of the Pragmatic Web: Implications for rethinking meaning and interaction. In C. Tagg & M. Evans (Eds.), Message and medium: English language practices across old and new media (pp. 17-37). De Gruyter Mouton. https://doi.org/10.1515/9783110670837

Rifkin, G. (2016, January 25). Marvin Minsky, pioneer in artificial intelligence, dies at 88. The New York Times. https://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html

Stork, D. (Ed.). (1997). HAL’s legacy: 2001’s computer as dream and reality. MIT Press.

The Decision Lab. (n.d.). Thinker: Herbert Simon. The Decision Lab. https://thedecisionlab.com/thinkers/computer-science/herbert-simon

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

Woo, E. (2014, March 20). John McCarthy dies at 84; the father of artificial intelligence. Los Angeles Times. https://www.latimes.com/local/obituaries/la-me-john-mccarthy-20111027-story.html
