AI in Pedagogy

From AI Literacy to AI Readiness: Teaching Economics for the Algorithmic Era

As a Northeastern economics professor, I neither treat AI as a shortcut nor ban it outright. I ask students to build foundations first, then use AI as a co-pilot under clear verification rules. My aim is technical sovereignty: graduates who can think independently, audit machine output, and apply AI responsibly.

How I Talk About This Work with Colleagues

Figure: roadmap from an AI literacy baseline through the UNESCO progression to validated AI readiness outcomes.
I run the course as a staged path: establish literacy, scaffold practice through the UNESCO progression, and then document readiness with auditable evidence.

Why this shift matters

Over the past two years, I've watched higher education move from AI prohibition toward guided integration. Generative models can widen support and speed up iteration, but they also bring hallucinations, bias, and over-reliance.

In my classroom, the real challenge is capturing AI's scale benefits without eroding human reasoning. I want students to use AI to extend their capability, not outsource their understanding.

AI Literacy vs AI Readiness

Figure: framework linking AI literacy to AI readiness through the UNESCO Understand → Apply → Create progression.
In my framework, literacy is the baseline; readiness is what students can actually do under uncertainty, accountability, and transfer.

AI Literacy (know)

  • Understand how AI systems are trained and where they fail.
  • Recognize ethical risks: bias, privacy, and misuse.
  • Evaluate outputs rather than accepting them at face value.

AI Readiness (do)

  • Apply AI strategically to academic and professional tasks.
  • Adapt workflows as tools and constraints evolve.
  • Preserve human advantage: judgment, causal reasoning, and ethics.

Bridge: I treat literacy as foundational knowledge and readiness as operational competence students can demonstrate with transparent evidence.

UNESCO Progression: Understand → Apply → Create

  • Understand: Students learn core concepts, limits, and social impact of AI.
  • Apply: Students use tools in domain tasks with attribution, validation, and accountability.
  • Create: Students build or adapt AI-enabled systems, then improve through feedback loops.

I scaffold this progression across the term so students revisit the same concepts with deeper technical demands and increasing ownership of method quality.

Cognitive Offloading, TAM, and What I See in Class

Figure: risk-control loop showing cognitive offloading risk mitigated through PRIME checks and feedback.
In practice, productive friction protects learning quality: students verify and revise before they accept AI outputs.
  • I see AI reduce cognitive load, but unmanaged use can offload the exact struggle that builds durable skill.
  • When trust in AI rises too quickly, evaluation effort often drops unless verification habits are explicit.
  • Research on the Technology Acceptance Model (TAM) tracks with what I observe: critical thinking mediates whether perceived usefulness turns into higher-order learning.

Teaching principle: productive friction is a feature, not a bug.

Practical Classroom Policy

Foundations First (Phase 1: Manual Mastery)

In new modules, I restrict AI code generation. Students manually write and debug core workflows to build syntax fluency, error diagnosis, and conceptual understanding.
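
A minimal sketch of the kind of core workflow students write and debug by hand at this stage; the dataset, file name, and variable names here are hypothetical placeholders, not the actual course assignment.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Load and clean a small demand dataset (file and column names are illustrative).
    df = pd.read_csv("gasoline_demand.csv")
    df = df.dropna(subset=["quantity", "price", "income"])
    df = df[(df["price"] > 0) & (df["quantity"] > 0)]

    # Log-log specification so coefficients read directly as elasticities.
    df["ln_q"] = np.log(df["quantity"])
    df["ln_p"] = np.log(df["price"])
    df["ln_m"] = np.log(df["income"])

    # Students state expected signs before estimating: price elasticity negative,
    # income elasticity positive for a normal good.
    model = smf.ols("ln_q ~ ln_p + ln_m", data=df).fit()
    print(model.summary())

Writing this by hand forces students to confront missing data, transformation choices, and specification decisions before any AI assistance enters the workflow.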

Co-Pilot Rule (Phase 2: AI-Assisted Expansion)

Once students demonstrate competency, I require them to use AI for scaling tasks (dashboards, advanced cleaning, complex modeling). They submit verification logs documenting prompts, revisions, and checks.

P.R.I.M.E. workflow (AI auditing routine)

  • P — Prep: Set role, context, constraints, and assumptions.
  • R — Request: Ask for a precise, bounded task.
  • I — Iterate: Debug intentionally using output and error evidence.
  • M — Mechanism Check: Verify libraries, methods, and statistical logic.
  • E — Evaluate: Run economic and causal sanity checks before acceptance.
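
To make the last two letters concrete, here is a minimal sketch of how the Mechanism Check and Evaluate steps can be expressed as explicit code-level checks on an AI-assisted regression. It builds on the hypothetical log-log demand example sketched under Phase 1; the function name, thresholds, and expected signs are assumptions for illustration, not a prescribed rubric.

    def prime_checks(model, data):
        """Mechanism Check and Evaluate steps for an AI-assisted regression.
        Thresholds and expected signs below are illustrative, not a fixed rubric."""
        checks = {}

        # Mechanism Check (M): confirm the fitted model used the data students think it did.
        checks["n_obs_matches_data"] = int(model.nobs) == len(data)
        checks["no_missing_regressors"] = not data[["ln_p", "ln_m"]].isna().any().any()

        # Evaluate (E): economic sanity checks before accepting the output.
        price_elasticity = model.params["ln_p"]
        checks["demand_slopes_down"] = price_elasticity < 0           # law of demand
        checks["elasticity_plausible"] = -5 < price_elasticity < 0    # rough plausibility band
        checks["income_effect_positive"] = model.params["ln_m"] > 0   # normal-good assumption

        return checks

    # Students attach the resulting dictionary to their verification log, e.g.:
    # print(prime_checks(model, df))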

What I'm Borrowing from Peer Institutions

For colleagues, the key question is not who has the best AI slogan—it's which implementation patterns hold up across courses. These are the four patterns I find most reusable.

Northeastern University

I borrow the network model: pair faculty pilots with practical support, then circulate shared templates so attribution language, prompt-use boundaries, and assessment updates travel across departments.

Stanford University (AIMES)

I borrow Stanford's strategy framing for assignment design—clearly signaling when AI is encouraged, limited, or prohibited—so students understand intent before they touch a tool.

MIT (RAISE)

I borrow the metacognition emphasis: students do not just submit outputs, they explain how they prompted, validated, and revised AI work so process quality is visible.

Harvard

I borrow Harvard's policy-clarity approach: explicit course-level expectations plus assessment redesign (oral checks, drafts, and reflection) to reward reasoning over polished AI prose.

Video & Resource Briefs

I keep a short list of peer resources that are practical for faculty planning meetings and quick course-policy refreshes.

Northeastern · AI in Teaching and Learning

Network-level initiatives, scholar cohorts, and implementation examples for integrating AI into teaching.

Stanford · AIMES Teaching Strategies

Practical strategy library for when to allow, limit, or prohibit AI by assignment type and learning goal.

MIT · RAISE Resource Hub

Curated AI literacy and responsible-use resources with a strong emphasis on creation, reflection, and ethics.

Harvard · Bok Center AI Teaching Guidance

Faculty guidance on assignment redesign, policy language, and responsible classroom use of generative AI.

How I Translate This into Economics Courses

Dynamic Pricing

I use live platform comparisons so students can observe algorithmic price discrimination in real markets.
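
When live comparisons are not practical, a toy simulation can stand in for the exercise. The two consumer segments and willingness-to-pay figures below are made up for illustration; the point is only to make the seller's incentive to segment visible before students look for the same pattern on real platforms.

    import numpy as np

    # Toy stand-in for the live platform comparison: two consumer segments with
    # different (made-up) willingness-to-pay distributions.
    rng = np.random.default_rng(42)
    students = rng.normal(30, 5, 1_000)     # lower willingness to pay
    business = rng.normal(80, 15, 1_000)    # higher willingness to pay

    def revenue(price, wtp):
        """Revenue at a posted price: everyone whose willingness to pay clears it buys."""
        return price * np.sum(wtp >= price)

    prices = range(10, 121)
    everyone = np.concatenate([students, business])

    best_uniform = max(revenue(p, everyone) for p in prices)
    best_segmented = max(revenue(p, students) for p in prices) + \
                     max(revenue(p, business) for p in prices)

    print(f"Best single-price revenue:   {best_uniform:,.0f}")
    print(f"Best segmented revenue:      {best_segmented:,.0f}")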

Labor and Automation

I ask students to classify tasks by likely replacement versus complementarity with AI-enabled systems.

Externalities

We evaluate productivity gains against compute intensity and energy-cost tradeoffs.

Bias and Detection

We assess false positives and fairness risks in AI-detection systems and grading contexts.
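
A short base-rate exercise pairs well with this discussion; all three rates below are hypothetical classroom numbers, not measurements of any real detector.

    # Hypothetical classroom numbers, not measurements of any real detector.
    honest_share = 0.90   # assumed share of submissions written without AI
    fpr = 0.03            # assumed false-positive rate of the detector
    tpr = 0.85            # assumed true-positive rate (sensitivity)

    ai_share = 1 - honest_share
    p_flagged = tpr * ai_share + fpr * honest_share
    p_ai_given_flag = (tpr * ai_share) / p_flagged

    print(f"Share of all submissions flagged:   {p_flagged:.1%}")
    print(f"P(AI-written | flagged):            {p_ai_given_flag:.1%}")
    print(f"P(falsely accused | flagged):       {1 - p_ai_given_flag:.1%}")

Even with a seemingly low false-positive rate, roughly a quarter of flagged submissions in this toy setup are honest work, which is exactly the fairness risk the class is asked to weigh.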

Closing Takeaway

I'm not trying to produce AI dependence. I'm trying to build AI fluency with intellectual independence: graduates who can use AI to accelerate their work while retaining responsibility for truth, method, and ethics.

What students leave with

  • Work without AI when first-principles reasoning is required.
  • Scale with AI once competency and controls are in place.
  • Defend analytical claims with transparent, reproducible evidence.