Question 1: Who were these people and how do/did they contribute to the field of AI? How do they think “intelligence” could be identified?
| My Response | ChatGPT 5.2 |
|---|---|
| Who was Alan Turing? | |
| A British mathematician who, among many other things, is credited with developing conceptual computational hardware and the foundational algorithmic theories used to solve calculable problems. His “imitation game” forms the basis of the Turing test, a measure of a machine’s “intelligence” (Biography.com, 2020). | Alan Turing (1912–1954) was a British mathematician, logician, and cryptanalyst whose work founded computer science. He formalized computation through the Turing machine, helped break Nazi Enigma codes during WWII, and shaped debates on artificial intelligence, notably through the Turing Test. |
| Who was John McCarthy? | |
| Dr. McCarthy is credited with coining the term “artificial intelligence”, and he contributed to the field of AI through his work on translating human commonsense reasoning into algorithms that computer systems could read and use. He defined “intelligence” as “…the computational part of the ability to achieve goals in the world” (McCarthy, n.d.). | John McCarthy (1927–2011) was an American computer scientist who coined the term artificial intelligence, organized the 1956 Dartmouth Conference, and developed foundational ideas such as Lisp, time-sharing, and formal approaches to machine intelligence that shaped AI as a research field. |
| Who was Herb Simon? | |
| An American behavioural scientist whose theory of “rational” decision-making re-prioritized the context/environment, the known information, and the cognitive capabilities of the processor (the individual) in the eventual decision outcome. He sought to model human processes through mathematical models based on pattern recognition and inference (UBS, n.d.). According to Simon, “intelligence”, if it could be applied to computers, was based on a machine’s ability to problem-solve heuristically. | Herbert A. Simon (1916–2001) was a polymath whose work bridged economics, psychology, and computer science. A pioneer of artificial intelligence and cognitive science, he introduced bounded rationality, co-developed early AI programs, and won the Nobel Prize for reshaping theories of human decision-making. |
| Who was Marvin Minsky? | |
| A mathematician and computer scientist credited with the development of “symbolic manipulation”. Paired with heuristic pattern recognition and inductive planning, this research laid the foundation for neural networks, information representation, and machine reasoning. Minsky would have defined “intelligence” as a machine’s ability to have “common sense”. | Marvin Minsky (1927–2016) was an American cognitive scientist and cofounder of MIT’s Artificial Intelligence Laboratory. A central figure in symbolic AI, he advanced theories of mind such as the Society of Mind, shaping debates on intelligence, cognition, and the limits of machine reasoning. |
| Who was Timnit Gebru? | |
| A computer scientist and AI ethicist whose groundbreaking work on facial recognition biases and the socio-environmental costs of training large language models has contributed to broader awareness of the risks associated with AI. Gebru’s “intelligence” centers on machines’ ability to discern the nuances of social, economic, and racial inequalities (Wikipedia, 2026). | Timnit Gebru is a computer scientist known for foundational work in algorithmic bias, ethical AI, and data documentation. She co-founded the Distributed AI Research Institute (DAIR) and has critically shaped debates on power, race, and accountability in machine learning systems. |
Question 2: How do “machine languages” differ from natural ones?
| My Response | ChatGPT 5.2 |
|---|---|
| Although human languages require the communicator – or the person being communicated to – to understand the definitions of the vocabulary being used, the words are not often “encoded” as they are in computer programming languages. In English, a phrase generally refers to names and words with well-understood meanings, whereas in HashiCorp Configuration Language (HCL), a “phrase” such as a for expression requires the reader to know the context of the function to understand the code’s meaning (HashiCorp, n.d.). Machine languages also tend to be less dynamic than human languages because the syntax – the rules around how terms can be used – is defined and “standardized” in the creation of the language, rather than being developed “on-the-fly” to accommodate new representations or phenomena. | Machine languages are formal, explicitly defined systems designed for exact interpretation and execution by computers. Their syntax and semantics are fixed, unambiguous, and closed, minimizing interpretation. Natural languages, by contrast, evolve socially, tolerate ambiguity, and depend on context, culture, and shared human experience for meaning. While machine languages optimize precision, efficiency, and reproducibility, natural languages prioritize expressiveness, flexibility, and interpersonal communication, allowing meaning to shift across situations, speakers, and time. |
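The kind of HCL “phrase” alluded to above can be sketched with a for expression of the sort the HashiCorp documentation cited below describes; the variable name `names` and the output label are illustrative, not from the original text:

```hcl
# A Terraform "for expression": builds a new list by applying upper()
# to every element of var.names. A reader who does not know HCL's fixed,
# standardized syntax cannot recover this meaning from the words alone.
output "upper_names" {
  value = [for s in var.names : upper(s)]
}
```

An English speaker can guess at “upper names”, but only the language’s predefined grammar tells the reader that `[for s in var.names : upper(s)]` iterates over a list, binds each element to `s`, and collects the transformed results.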
Question 3: How does machine/artificial intelligence differ from the human version?
| My Response | ChatGPT 5.2 |
|---|---|
| Chollet (2019) argues that parsing intelligence into a set of concrete parameters is essential to formulate a definition that can be used to compare a “generally-intelligent system to human intelligence”; these include parameters such as “…scope, potential, generalization difficulty, experience, and priors” (Chollet, 2019, pp. 43–44). As we hurtle towards super-intelligence, I am reminded of philosopher Nick Bostrom’s (2015) salient point that machines are not limited to a cranium like humans are; our biology and chemistry also put physical limits on the speed of electrical signal transmission, which is orders of magnitude slower than the speed of light. At the moment, human intelligence is superior at providing capabilities that sustain human life: things like constructing shelter, providing sustenance, and making social-emotional connections. | Machine or artificial intelligence differs from human intelligence in that it is engineered, task-specific, and data-driven, operating through formal models, algorithms, and optimization processes. Human intelligence is embodied, context-rich, and value-laden, shaped by emotion, culture, consciousness, and lived experience. AI excels at speed, scale, and pattern recognition within defined domains, while human intelligence integrates meaning, judgment, creativity, and ethical reasoning across uncertain, open-ended situations. |
Question 4: How does machine learning differ from human learning?
| My Response | ChatGPT 5.2 |
|---|---|
| Current machine learning methodologies rely on a computer’s ability to scan vast datasets and develop probability-based responses from patterns between various parameters within those datasets. Probabilistic inference, coupled with a machine’s ability to iterate extremely quickly and assisted by human validation, allows responses to be corrected and refined. It is the speed and breadth of this process that separates machine learning from human learning. There is no doubt that the content of the datasets used to train models is critical to the outcome of generated responses, but it could also be argued that a similar process occurs with the information gathered by, or taught to, humans, albeit at a slower pace. | Machine learning differs from human learning in that it relies on statistical optimization over large datasets, adjusting parameters to improve performance on defined tasks. Human learning is developmental, embodied, and meaning-making, integrating experience, emotion, social interaction, and values. Humans learn with limited data, transfer knowledge flexibly across contexts, and reflect on purposes and consequences, whereas machine learning remains constrained by objectives, representations, and training conditions. |
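The pattern-scanning and probabilistic-inference loop described above can be sketched in miniature; the toy corpus, the bigram counting, and the `predict` helper are all invented for illustration, standing in for the vast datasets and model parameters of real systems:

```python
from collections import Counter, defaultdict

# A toy "dataset" standing in for the vast corpora the text describes.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Pattern extraction: count how often each word follows another (bigrams).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("the"))  # "cat" follows "the" in 2 of 4 cases -> ('cat', 0.5)
```

Scaled up by many orders of magnitude, and with human validation feeding corrections back into the counts, this is the speed-and-breadth loop that the response above contrasts with human learning.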
Question 5: How do these answers differ from the ones generated by ChatGPT?
In re-reading my work and the responses from ChatGPT, I get the impression that the AI’s responses are sharper and more formulaic. The tone of the language feels clinical, and although the output is very effective at conveying meaning, I feel more distanced from it than from my own responses. I wonder whether I would get the same impression if I were comparing a classmate’s work.
In the chat interface there are “knobs and buttons” that can be used to adjust the tone, verbosity, and level of detail, but I haven’t adjusted the defaults of my premium subscription, so the output is correspondingly “middle-of-the-road”. It’s also worth noting that any instructions or context provided in the prompt are extremely influential in shaping the response (Liu et al., 2024).

As I reflect on how my responses were derived, I note that my choice of words was intentionally colourful: “…scan vast datasets…”, or “…iterate extremely quickly…”. This tends to draw out the emotional component of the vocabulary and gives depth to the meaning of the thought.
My writing is certainly more error-prone and “tangly”, which is likely something that could be identified as “of human origin”, but as my mother reminds me every so often, “the world would be pretty dull if we were all the same”!
References
Association for Computing Machinery. (n.d.). A.M. Turing Award: Marvin Minsky. Retrieved January 26, 2026, from https://amturing.acm.org/award_winners/minsky_7440781.cfm
BBC News. (2016, January 26). AI pioneer Marvin Minsky dies aged 88. BBC News. https://www.bbc.com/news/technology-35409119
Biography.com Editors. (2020, July 22). Alan Turing. Biography.com. https://www.biography.com/scientists/alan-turing
Bostrom, N. (2015, April). What happens when our computers get smarter than we are? [Video]. TED Conferences. Retrieved January 26, 2026, from https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are
Chollet, F. (2019, November 5). On the measure of intelligence [Preprint]. arXiv. Retrieved January 26, 2026, from https://arxiv.org/abs/1911.01547
Distributed AI Research Institute. (n.d.). DAIR Institute projects. Retrieved January 26, 2026, from https://www.dair-institute.org/projects/
Harris, A. (2018, October 31). Human languages vs. programming languages. Medium. Retrieved January 26, 2026, from https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252
HashiCorp. (n.d.). For expressions — Terraform configuration language documentation. HashiCorp Developer. Retrieved January 27, 2026, from https://developer.hashicorp.com/terraform/language/expressions/for
Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12. https://doi.org/10.1162/tacl_a_00638
McCarthy, J. (n.d.). What is AI? Basic questions. Stanford University. Retrieved January 24, 2026, from http://www-formal.stanford.edu/jmc/whatisai/node1.html
UBS. (n.d.). Meet the Nobel Laureates in economics: Do we understand human behaviour?
Wikipedia contributors. (2026, January 26). Timnit Gebru. In Wikipedia, The Free Encyclopedia. Retrieved January 27, 2026, from https://en.wikipedia.org/w/index.php?title=Timnit_Gebru&oldid=1334851328