AI Literacy (know)
- Understand how AI systems generate outputs and where they routinely fail.
- Recognize risks around bias, privacy, citation, and inappropriate use.
- Evaluate AI responses against theory, evidence, and disciplinary standards.
AI in Pedagogy
At Northeastern, I approach AI as a tool that should widen intellectual range without narrowing human judgment. In quantitative social science, students still need to formulate questions, understand data, interpret evidence, and defend their choices. My teaching asks them to build those foundations first, then use AI as a transparent, auditable co-pilot across economics, political science, sociology, public policy, and related fields.
Across CSSH, quantitative work now spans survey data, administrative records, text, networks, geospatial evidence, and digital traces. Students are increasingly expected to move between substantive theory, computational tools, and public-facing interpretation.
Generative AI can support brainstorming, coding, literature mapping, data-cleaning drafts, and explanatory writing. It can also flatten reasoning, hide assumptions, and produce fluent but unreliable answers. The pedagogical question is not whether AI exists in the classroom; it is how to use it without weakening the habits that make quantitative social science credible.
I treat AI literacy as foundational knowledge and AI readiness as demonstrated methodological competence.
I scaffold this progression across the term so students revisit the same concepts with deeper technical demands and stronger expectations around explanation, documentation, and transfer across contexts.
Teaching principle: productive friction is a design choice, not a bug.
In new units, I limit AI use for core tasks. Students practice question formation, variable definition, descriptive analysis, interpretation, and core coding or statistical workflows themselves, so they can see where errors come from and what work their assumptions are doing.
Once competency is visible, I allow AI for higher-order work: cleaning messy data, comparing model specifications, generating visualization drafts, organizing literature, documenting reproducible workflows, and stress-testing explanations. Students still need to disclose use and verify every claim.
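One way to make AI-assisted data cleaning auditable is to log every transformation so that drafted code can be disclosed and checked claim by claim. The sketch below is a hypothetical illustration of that habit; the field name (`income`) and the parsing rules are invented for the example, not drawn from any specific course assignment.

```python
# A minimal sketch of an auditable cleaning step: every transformation is
# logged so AI-drafted code can be disclosed and verified line by line.
# The field name and parsing rules are hypothetical illustrations.

def clean_income(records, log):
    """Standardize a hypothetical 'income' field, recording each change."""
    cleaned = []
    for i, rec in enumerate(records):
        raw = rec.get("income")
        value = None
        if isinstance(raw, str):
            stripped = raw.replace("$", "").replace(",", "").strip()
            if stripped.replace(".", "", 1).isdigit():
                value = float(stripped)
                log.append(f"row {i}: parsed '{raw}' -> {value}")
            else:
                log.append(f"row {i}: could not parse '{raw}', set to None")
        elif isinstance(raw, (int, float)):
            value = float(raw)
        cleaned.append({**rec, "income": value})
    return cleaned

audit_log = []
rows = [{"income": "$45,000"}, {"income": 52000}, {"income": "refused"}]
result = clean_income(rows, audit_log)
# The log makes each suggested rule inspectable, so the student can
# verify and defend every cleaning decision rather than trusting the draft.
assert result[0]["income"] == 45000.0
assert result[2]["income"] is None
```

The point of the log is pedagogical: the student, not the tool, signs off on each rule before the cleaned data enters the analysis.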
The strongest institutional models do not treat AI as universally good or universally forbidden. They align tool use with learning goals, method standards, and disciplinary norms.
For me, AI readiness means students can move between theory, data, method, and explanation with their judgment intact. In economics, that may include connecting econometrics with machine learning. In political science, sociology, public policy, or adjacent fields, it may mean combining causal inference, survey design, text analysis, network methods, or administrative data work with transparent AI-assisted workflows. The common standard is accountable analysis.
These resources are useful starting points for faculty conversations about pedagogy, policy, and responsible AI use in quantitative social science contexts.
- University-wide guidance, events, and examples for AI-informed teaching, with an emphasis on critical thinking and clear learning design.
- A practical guide to Northeastern library tutorials, teaching resources, policies, and critical-use support for faculty working with AI.
- A concise CSSH framing of interdisciplinary social science training that brings together research, data, technology, and public-facing problem solving.
- Human-centered guidance on AI, inclusion, equity, and competency development in teaching and learning.
Students can use AI to compare identification strategies, debug code, and clarify assumptions, but they still need to explain the estimand, threats to validity, and why the design fits the question.
AI can assist with preprocessing, coding schemes, summarization, and exploratory pattern finding, while students remain responsible for sampling, categories, interpretation, and representational limits.
AI can help draft survey items, revise questionnaires, and scaffold policy memos, but students must still defend measurement choices, question wording, and the meaning of the evidence they present.
In economics courses, I use cases like pricing algorithms, labor-market change, and platform behavior to show how AI intersects with data, markets, and policy. The same readiness principles travel well across CSSH fields.
I am not trying to normalize AI dependence. I am trying to develop students who can use AI without giving up responsibility for evidence, method, and ethics.