AI in Pedagogy

From AI Literacy to AI Readiness in Quantitative Social Science

At Northeastern, I approach AI as a tool that should widen intellectual range without narrowing human judgment. In quantitative social science, students still need to formulate questions, understand data, interpret evidence, and defend their choices. My teaching asks them to build those foundations first, then use AI as a transparent, auditable co-pilot across economics, political science, sociology, public policy, and related fields.

How I Frame This Work

[Figure: roadmap from an AI-literacy baseline through the UNESCO progression to validated AI-readiness outcomes]
I treat the classroom as a staged path: build literacy, scaffold practice through the UNESCO progression, and document readiness with auditable evidence.

Why this shift matters

Across CSSH, quantitative work now spans survey data, administrative records, text, networks, geospatial evidence, and digital traces. Students are increasingly expected to move between substantive theory, computational tools, and public-facing interpretation.

Generative AI can support brainstorming, coding, literature mapping, data-cleaning drafts, and explanatory writing. It can also flatten reasoning, hide assumptions, and produce fluent but unreliable answers. The pedagogical question is not whether AI exists in the classroom; it is how to use it without weakening the habits that make quantitative social science credible.

AI Literacy vs AI Readiness

[Figure: framework linking AI literacy to AI readiness through the UNESCO Understand → Apply → Create progression]
Literacy is the baseline. Readiness is what students can actually do with AI under conditions of uncertainty, accountability, and transfer.

AI Literacy (know)

  • Understand how AI systems generate outputs and where they routinely fail.
  • Recognize risks around bias, privacy, citation, and inappropriate use.
  • Evaluate AI responses against theory, evidence, and disciplinary standards.

AI Readiness (do)

  • Translate a research or classroom task into a precise, bounded AI workflow.
  • Use AI strategically for analysis, revision, and iteration without outsourcing judgment.
  • Defend methods, interpretations, and outputs with transparent evidence.

Bridge: I treat literacy as foundational knowledge and readiness as demonstrated methodological competence.

UNESCO Progression: Understand → Apply → Create

  • Understand: Students learn what AI can do, what it cannot do, and how its limits matter for research and learning.
  • Apply: Students use AI in bounded social science tasks with attribution, validation, and clear accountability.
  • Create: Students build or adapt AI-assisted workflows, then improve them through reflection, feedback, and revision.

I scaffold this progression across the term so students revisit the same concepts with deeper technical demands and stronger expectations around explanation, documentation, and transfer across contexts.

Productive Friction, Trust, and What I See in Class

[Figure: risk-control loop showing cognitive-offloading risk mitigated through P.R.I.M.E. checks and feedback]
Productive friction protects learning quality: students verify and revise before they accept AI outputs.
  • AI reduces friction in coding, memo drafting, and exploratory analysis, but some friction is pedagogically useful because it forces students to inspect variables, assumptions, and model choice.
  • When trust in fluent output rises too quickly, evaluation effort often drops. Students need explicit routines for checking provenance, logic, and fit with the actual research question.
  • In quantitative social science, the durable skill is not producing polished output on the first try. It is knowing what should be checked, how to diagnose problems, and when to reject a plausible-looking result.

Teaching principle: productive friction is a design choice, not a bug.

Practical Classroom Policy

Foundations First (Phase 1: Manual Mastery)

In new units, I limit AI for core tasks. Students practice question formation, variable definition, descriptive analysis, interpretation, and core coding or statistical workflows themselves, so they can see where errors come from and what their assumptions are doing.

Co-Pilot Rule (Phase 2: AI-Assisted Expansion)

Once competency is visible, I allow AI for higher-order work: cleaning messy data, comparing model specifications, generating visualization drafts, organizing literature, documenting reproducible workflows, and stress-testing explanations. Students still need to disclose use and verify every claim.

P.R.I.M.E. workflow (AI auditing routine)

  • P — Prep: Set role, context, constraints, and assumptions.
  • R — Request: Ask for a precise, bounded task.
  • I — Iterate: Debug intentionally using output and error evidence.
  • M — Mechanism Check: Verify libraries, methods, variable construction, and statistical logic.
  • E — Evaluate: Run substantive, statistical, and ethical sanity checks before acceptance.
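As a concrete illustration, a P.R.I.M.E.-style audit of an AI-drafted data-cleaning step might look like the following minimal Python sketch. The dataset, age bounds, and median-imputation rule are illustrative assumptions, not a prescribed recipe; the point is that the student verifies what the code actually did before accepting it.

```python
from statistics import median

# Hypothetical raw survey rows (age, income); all values are invented for illustration.
raw = [(34, 42000), (29, None), (-1, 38000), (51, 61000), (210, 55000)]

# AI-drafted cleaning step (Request + Iterate): drop impossible ages, impute missing income.
kept = [(age, inc) for age, inc in raw if 18 <= age <= 99]
observed = [inc for _, inc in kept if inc is not None]
fill = median(observed)
clean = [(age, inc if inc is not None else fill) for age, inc in kept]

# Mechanism Check (the M in P.R.I.M.E.): verify what the code did, not what it claims.
assert all(18 <= age <= 99 for age, _ in clean), "out-of-range ages remain"
assert all(inc is not None for _, inc in clean), "missing incomes remain"

# Evaluate: substantive sanity check before accepting the AI's draft.
print(f"Kept {len(clean)} of {len(raw)} rows; imputed income = {fill:.0f}")
```

The assertions are the pedagogically important part: they force the student to state, in checkable form, what a correct cleaning step must guarantee.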

Design Principles for AI in Quantitative Social Science

The strongest institutional models do not treat AI as universally good or universally forbidden. They align tool use with learning goals, method standards, and disciplinary norms.

  • Method-Sensitive Guardrails: AI policy should vary by assignment. Early problem sets, replication exercises, survey design, coding labs, and theory-building tasks each require different boundaries.
  • Experiential Integration: Use AI to support the kinds of work students will encounter in research, co-op, public service, and industry—without skipping the reasoning that makes those outputs credible.
  • Documentation and Attribution: When AI materially shapes a workflow, students should show prompts, revisions, source use, and validation steps.
  • Ethics and Interpretation: Good social science practice includes privacy, bias, representativeness, fairness, and communicating uncertainty—not just technical correctness.

Cultivating the AI-Ready Social Scientist

For me, AI readiness means students can move between theory, data, method, and explanation with their judgment intact. In economics, that may include connecting econometrics with machine learning. In political science, sociology, public policy, or adjacent fields, it may mean combining causal inference, survey design, text analysis, network methods, or administrative data work with transparent AI-assisted workflows. The common standard is accountable analysis.

Selected Northeastern + Public Resources

These are useful starting points for faculty conversations about pedagogy, policy, and responsible AI use in quantitative social science contexts.

Northeastern · AI in Teaching and Learning

University-wide guidance, events, and examples for AI-informed teaching, with an emphasis on critical thinking and clear learning design.


Northeastern CSSH / NULab · Faculty Guide to AI Resources

A practical guide to Northeastern library tutorials, teaching resources, policies, and critical-use support for faculty working with AI.


Northeastern CSSH · Social Sciences for the 21st Century

A concise CSSH framing of interdisciplinary social science training that brings together research, data, technology, and public-facing problem solving.


UNESCO · AI in Education

Human-centered guidance on AI, inclusion, equity, and competency development in teaching and learning.


How This Shows Up Across Quantitative Social Science

Research Design and Causal Inference

Students can use AI to compare identification strategies, debug code, and clarify assumptions, but they still need to explain the estimand, threats to validity, and why the design fits the question.
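A small worked example can make the stakes concrete: with made-up observational data, a naive difference in means and a confounder-stratified comparison can disagree sharply, and explaining that gap is exactly the kind of reasoning students must own rather than outsource. All numbers below are invented for illustration.

```python
from statistics import mean

# Hypothetical observational rows: (treated, confounder_group, outcome).
rows = [
    (1, "high", 10), (1, "high", 11), (1, "low", 6),
    (0, "high", 9),  (0, "low", 4),   (0, "low", 5),
]

def diff_in_means(data):
    treated = [y for d, _, y in data if d == 1]
    control = [y for d, _, y in data if d == 0]
    return mean(treated) - mean(control)

# Naive comparison ignores that treatment is concentrated in the "high" group.
naive = diff_in_means(rows)

# Stratified estimate: compare within confounder strata, then average across strata.
strata = {g for _, g, _ in rows}
adjusted = mean(diff_in_means([r for r in rows if r[1] == g]) for g in strata)

print(f"naive = {naive:.2f}, stratified = {adjusted:.2f}")
```

Here the naive estimate is double the stratified one; an AI assistant can compute both, but only the student can say which estimand the design supports and why.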

Text, Media, and Digital Trace Analysis

AI can assist with preprocessing, coding schemes, summarization, and exploratory pattern finding, while students remain responsible for sampling, categories, interpretation, and representational limits.

Surveys, Measurement, and Policy Work

AI can help draft survey items, revise questionnaires, and scaffold policy memos, but students must still defend measurement choices, question wording, and the meaning of the evidence they present.

Economics as One Applied Example

In economics courses, I use cases like pricing algorithms, labor-market change, and platform behavior to show how AI intersects with data, markets, and policy. The same readiness principles travel well across CSSH fields.

Closing Takeaway

I am not trying to normalize AI dependence. I am trying to develop students who can use AI without giving up responsibility for evidence, method, and ethics.

What students leave with

  • Work without AI when first-principles reasoning is required.
  • Use AI to extend workflows once competency and controls are in place.
  • Defend analytical claims with transparent, reproducible evidence.