Beginning to Integrate a Framework for AI Literacy Into Existing Heuristics

Within education, we are likely familiar with the many cognitive models and heuristics used to depict learning stages or provide frameworks for approaching the art and science of teaching. Bloom’s Taxonomy, Maslow’s Hierarchy of Needs, Piaget’s Theory of Cognitive Development, Vygotsky’s Zone of Proximal Development, and many other models and theories provide conceptualizations of the individual steps, thoughts, stages, or actions taken in the internalization and mastery of concepts, both for students and instructors. It seems a natural progression, then, that a similar framework would begin to develop in the age of artificial intelligence: one that helps instructors and students alike understand the stages of development, or the work to be done, in understanding, testing, and applying AI workflows to our current practices of learning and teaching.

In a recent article in EDUCAUSE Review, Hibbert et al. describe a simplified framework for AI literacy developed by the authors at Barnard College in New York. The authors built on work by researchers at the University of Hong Kong and the Hong Kong University of Science and Technology, who, in turn, had worked to redefine Bloom’s Taxonomy to include concepts from the emerging age of AI. In reading the article, I was struck by the overlap between the various theories, and by how quickly that overlap leads to concrete steps to take, or questions to answer, while working toward AI literacy or developing AI workflows for teaching and learning. Beginning with the entry points for several of these frameworks, I began to outline an integrated heuristic of the AI framework.

The authors begin their conceptualization of Stage 1 with the concept of Understanding AI; this overlaps in several ways with other models.

Stage One: Physiological Needs, Basic Knowledge and Recall

Integrating the initial stage of Maslow’s Hierarchy of Needs seems a bit silly at first: what physical needs are met by AI, and how does AI affect one’s physiological needs? The key, at least for me, comes in understanding the bridge between my physical space and the AI tools or environments I am trying to connect to.

Taken broadly to refer to anything within the physical lives we lead, this step in the framework simply refers to the need for physical access to the tools necessary to begin integrating AI into existing workflows for teaching and learning. In this case, that means knowing what AI tools are available and where: are they integrated into the operating system on your laptop, or only available via tablets and cellphones? Can they be installed as an app or integrated into existing software (e.g., Microsoft has begun adding Copilot to many existing tools, such as Microsoft Office and Outlook)? What is the cost to the user: is there a free tier, and what are its limitations? If there isn’t a free tier, is the cost covered by an existing university contract, or do you have to cover it yourself? Once installed, how easy are the tools to access or interact with: do they work with voice, or are they text only?

As students and teachers begin to decide how, when, where, or even if AI should fit into their existing workflows, they first need to know where and how these tools have been made available to them. What barriers to entry exist (limitations in hardware or software, cost or access, accessibility or fit for purpose)? This maps to Bloom’s initial stages of knowing and recalling where to interact with the AI and recognizing tools that are better fitted to one task or situation than another.

Start by understanding the various AI platforms and where or how they are made available to you specifically. Take note of the ones that are most readily available, whether through your institution, your operating system, or other tools and services you already have access to. Once you have a sense of which tools and platforms are available, the initial stage of understanding is in place and you can continue.

Stage Two: Assimilation and Accommodation, Safety and Security, Understanding and Application

Once a user has developed an initial knowledge of which tools are available and how and where to access them, that knowledge needs to be expanded by understanding and integrating more general concepts of AI: what Hibbert et al. refer to as the “basic AI terms and concepts” of the Understand AI stage. The authors recommend familiarity with key terms and definitions, recognizing benefits and limitations, and identifying differences between various types of AI. This allows the user to demonstrate basic understanding, mapping to Bloom’s second stage, where the user can explain, describe, and interpret concepts and use cases for AI.

The list below defines (very briefly) some of the key concepts I explored, but is certainly not exhaustive of all terms that may be used. 

Key Terms to Know:

  • Artificial Intelligence (AI): Machines designed to perform tasks by simulating human intelligence.
  • Machine Learning (ML): A subset of AI where algorithms learn from data to improve performance over time.
  • Deep Learning: A type of ML using neural networks with many layers to identify complex patterns.
  • Neural Network: A system of interconnected nodes, modeled after the human brain, that processes information.
  • Algorithm: A set of rules a computer follows to complete a specific task.
  • Data: Information processed by AI to learn or make decisions.
  • Training: Teaching a machine learning model by feeding it data to learn from.
  • Testing: Evaluating a model’s performance using new, unseen data.
  • Prompt Engineering: Crafting prompts or questions to guide AI’s responses effectively.
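
To make a few of these terms concrete, here is a toy sketch of the Algorithm, Data, Training, and Testing concepts, using a tiny 1-nearest-neighbor classifier in plain Python. The dataset and labels are invented purely for illustration; real machine learning systems train on vastly larger data.

```python
# Toy illustration of "training data" vs. "testing" on unseen data.
# Training data (invented for this example): (hours_studied, hours_slept) -> outcome.
training_data = [
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((6.0, 7.0), "pass"),
    ((8.0, 8.0), "pass"),
]

def distance(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(features):
    """Testing: classify a new, unseen point by its nearest training example."""
    nearest = min(training_data, key=lambda item: distance(item[0], features))
    return nearest[1]

# Evaluate the "model" on a data point it has never seen before.
print(predict((7.0, 7.4)))
```

The point of the sketch is the distinction itself: the algorithm (nearest neighbor) is a fixed set of rules, the data it memorizes is its training, and asking about a new point is the test of how well that training generalizes.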

Assimilation and Accommodation, taken from Piaget’s theory of cognitive development, happens when an individual begins to recognize external factors (such as the terms and concepts above, an AI’s performance on a specific task, the quality of the AI’s output compared to the requirements of a given scenario, etc.) and begins to interpret them according to existing schemas, experiences, or knowledge (i.e. your knowledge as a scholar and researcher, a student’s existing educational background and skill set, etc.). This maps most directly to the Use and Apply AI stage in the AI framework. 

In the early stages of experimenting with AI, I would recommend using the tool to perform fairly basic tasks, specifically those in which you, as an expert in certain subjects or tasks, can judge how well the AI system is performing and whether the output is accurate and fitting for the situation. Some of the common pitfalls to watch for are “AI hallucination,” bias in the training data, and outright manipulation through misinformation or bad data. In this manner, you are able to integrate your lived experiences with workflows common to the AI you are using, while also forming benchmarks for performance and building an understanding of how your workflows match (or don’t) the way the AI expects to perform work (i.e., prompt engineering and testing). This then leads to accommodation, where, as a user, you modify existing schemas to accommodate new information, scenarios, or experiences. In prompt engineering, this could involve adjusting the prompt based on the AI’s response or your perception of the quality of that response. Building on this feedback loop allows you to work toward more complex tasks for the AI with an understanding of when it may perform better based on the prompt or task.
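
The feedback loop above can be sketched in code. This is a minimal, hypothetical sketch only: `ask_model` stands in for whatever AI tool you actually use (here it is a stub so the example runs), and `meets_requirements` encodes your expert judgment as a simple check.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real AI service call.
    if "cite your sources" in prompt:
        return "Answer with citations ... [Source 1] [Source 2]"
    return "Answer without citations ..."

def meets_requirements(response: str) -> bool:
    # Expert judgment encoded as a check: require citations before
    # trusting the output (a simple guard against hallucination).
    return "[Source" in response

def refine(prompt: str, max_rounds: int = 3) -> str:
    """Accommodation: adjust the prompt when the response falls short."""
    response = ask_model(prompt)
    for _ in range(max_rounds):
        if meets_requirements(response):
            break
        prompt += " Please cite your sources."  # revise the prompt (schema)
        response = ask_model(prompt)
    return response

print(refine("Summarize Bloom's Taxonomy."))
```

Assimilation is the evaluation step (comparing the output against what you, the expert, require); accommodation is the revision step (changing the prompt, and over time your whole workflow, in response).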

Further integrating with Bloom and Maslow, and increasing the complexity of the AI tasks or experiments, users need to demonstrate situational understanding and application, and to recognize the various aspects of safety and security that exist around AI. Taken alongside the above steps for assimilation and accommodation, a user will also begin to develop a sense of safety and security: knowing when an AI is performing accurately and when human intervention or revision may be necessary before an output is used beyond personal testing or research. In essence, how safe and secure will you be using this AI-produced content in a professional setting? Can you be sure you will be free of charges of plagiarism or data manipulation? Can you trust and feel reasonably secure that the information is accurate and that the formatting, tone, and terminology are correct? These elements require a higher degree of complexity and understanding than earlier stages, but they can still be handled by a student with guidance from an expert.

This entire experiment also encompasses the idea of scaffolding the tasks and supervising the AI, similar to Vygotsky’s Zone of Proximal Development or Piaget’s concrete operational stage, where users test, supervise, and revise the work to ensure accuracy and proficiency over time. Only direct experience and a foundational understanding of the content and workflows will allow a user, as an expert, to integrate their existing knowledge with an AI workflow: supervising the outputs, intervening to make corrections, or simply performing a task without the AI when experience deems it necessary.

Stage Three: Belongingness, Esteem, Cognitive Equilibrium, Analysis and Evaluation, and the Formal Operational Stage

Taken broadly, Belongingness and Esteem (Maslow), Equilibrium and the Formal Operational Stage (Piaget), and Analysis and Evaluation (Bloom) describe concepts of fitness or appropriateness, both of AI within specific tasks and of AI within the field and profession itself. As a user begins to explore or deploy AI for more and more professional tasks, there needs to be a level of comfort and connection with the tasks and the quality of the outputs. If a user feels that the AI lacks essential elements of what the user values in their work, or if the use of an AI tool violates core components of the field and the way that work is expected to be done, a clear mismatch in purpose or belonging exists, causing the user to feel at odds with the community around them, or to lose esteem with colleagues and supervisors.

Hibbert et al. describe this as the third part of their pyramid, Use and Apply AI, but it also closely maps to Bloom’s middle stages of Apply, Analyze, and Evaluate. Users can further test the fitness of the tools to the tasks, but also evaluate the reception of that work, or even the acceptance of AI tools in general, among peers and others. This helps to form healthy boundaries for where the tool should or should not be used, as well as formalizing elements of professional decorum, such as how to cite the work of an AI in accordance with professional organizations like the APA or MLA. This stage formalizes the metacognition surrounding your use of AI and how you understand your positionality within this discussion. (For a more insightful discussion of your feelings toward AI integrations in your workflows, see Josh Lund’s post on AI’s role in education.)

Stage Four: Creation, Self-Actualization, and User Independence

The final stage of each of these heuristics implies self-actualization and independence: a user’s ability to work comfortably and confidently on generative tasks within a specific domain, using a variety of tools to solve novel problems. Hibbert et al. define this stage as being able to create AI, but realistically it encompasses any number of creative endeavors using AI, such as training new AI models on a specific data set, utilizing AI to create novel works or arguments, or compiling and interpreting data. An additional interpretation may be that the user is now able to guide others toward self-actualization, i.e., to teach AI use or prompt engineering so that a novice user can build proficiency and resilience.



References

Hibbert, et al. “A Framework for AI Literacy.” EDUCAUSE Review, 5 June 2024.

“Key Stages in the Design Thinking Process.” Elsevier, 2021.

Brundage, Kyle, and Hongyan Zhang. “Key Insights on Design Thinking for AI Solutions.” Patterns, vol. 2, no. 3, 2021.

“Maslow’s Hierarchy of Needs.” Wikipedia: The Free Encyclopedia, Wikimedia Foundation, 29 Sept. 2024.

“Piaget’s Theory of Cognitive Development.” Wikipedia: The Free Encyclopedia, Wikimedia Foundation, 21 Sept. 2024.

“Zone of Proximal Development.” Wikipedia: The Free Encyclopedia, Wikimedia Foundation, 27 Sept. 2024.

“Bloom’s Taxonomy.” Center for Teaching, Vanderbilt University.


About Kevin Lyon

Kevin is a Double-Demon, receiving his Bachelor’s degree in English with a minor in Professional Writing from DePaul in 2009, and staying on for his Master’s in Writing, Rhetoric, and Discourse with dual concentrations in Technical and Professional Writing and Teaching Writing and Language. He is now an Instructional Technology Consultant and a Writing, Rhetoric and Discourse instructor. His research interests include technology in education, education and identity formation/negotiation, and online learning and interaction.
