Jessica:

I think paper and pen exams are pretty straightforward to administer. A room, paper, pen, 3 hours. No electronics in the room. Simple. This is how whole generations of students were examined for centuries until about 2013. If students are angry about having to write their thoughts with a pen, so what? They probably should not be in university if they have a problem with pen and paper exams.

coffeebits:

My college wants the students who probably shouldn't be in university to also be in university 🙁

Treekllr:

Of course. They get paid regardless of how a student does at school or afterward. They have a huge incentive to keep this AI farce rolling.

Loren Thacker:

I’m 63 years old and a retired lawyer. I am also an undergraduate again at the University of Minnesota. A key problem relates to student goals and incentives: most students are there “to get a degree” — something they need in order to enhance their job prospects. How they earn that degree is far less important. And so their goal is to get the degree with the minimum amount of effort, and that means extensive use of AI bots.

That said, I think Yascha’s approach is worth careful consideration. Students must be made to think and write for themselves, and the best method of assessing those skills — and ensuring they are developed — is to use both written essay exams and one-on-one oral exams. But Yascha is also correct that students need to be adept at the proper use of AI bots. It’s the combination of those two things that will likely yield the most interesting and useful work after university.

Edward Scizorhands:

I have a kid finishing high school soon, and I don't see the value in sending him to college to cheat his way to a fake degree. I wish someone would just enforce some rules, whatever is necessary to do that. Maybe go Butlerian and ban computers entirely.

barry dank:

Mounk overlooks the fact that all too many professors have little or no interest in grading and testing. Generally, grading is outsourced to teaching assistants, and I have little doubt that will continue to be the case. It is indicative of how little grading concerns professors that, in my long professorial career serving on promotion and tenure committees, grading practices never came up. Of course, professors become professors to teach, not to grade. The bottom line here is that grading is considered dirty work, and as long as that is the case, significant changes in grading are unlikely to occur no matter how prevalent AI becomes.

Suki Wessling:

I teach teenagers, and that is what I always tell them: you can't know what you really think until you write it down. Universities and high schools are going to have to grapple with this. Now that it is easy to produce something, the value of the product should go down. Students should be graded on the work they do and what they learn, not just on a product that they produce.

Chris Myers Asch:

I always appreciate your perspective. As a history professor at a small arts college, I encounter the challenge of AI on a daily basis. Our college has attempted to be ahead of the curve in embracing AI and offering its tools to the faculty, but we are still trying to figure out how to make it work. I see it as an opportunity to get back to some of the basic assessments and rigor that we have slowly abandoned over the last couple of generations: the return of handwritten and oral exams, the use of in-class presentations and peer review, and other “old school” methods. We are in a completely new world now, and we have to adjust accordingly.

James Andrews:

Yascha, your “both/and” prescription is right, but there’s a structural roadblock that makes the problem worse than you describe.

Federal Title IV rules (RSI: Regular and Substantive Interaction) were written to block old-fashioned correspondence courses from receiving aid. Those rules now end up penalizing the very proctored, verification-heavy assessment models that would actually address the AI problem you outline. The “path of least resistance” isn’t just habit or denial; it’s baked into the funding architecture.

Under the current compliance framework, oral exams and in-person verification are disappearing at the exact moment we need them most.

JakeH:

Hi James, I don't follow this. How does RSI penalize in-person exams? Really just asking for clarification.

James Andrews:

RSI doesn’t ban in-person exams, but it disincentivizes them in practice.

Here’s the short version:

RSI requires that online courses show continuous, documented “regular and substantive interaction” throughout the term in order to remain eligible for Title IV aid. A course that relies primarily on a midterm and final with proctoring technically looks too much like a “correspondence course,” which is ineligible for federal aid.

So colleges build course shells packed with weekly graded interactions, discussion posts, polls, etc., because that’s what auditors look for. High-stakes, proctored assessments don’t generate RSI “activity,” so they get pushed to the margins even though they’re the only reliable way to verify student work in the age of AI.

That’s the sense in which RSI penalizes in-person verification — not legally, but structurally.

Eugene Earnshaw:

I will just report that at my institution I am making extensive use of oral tests and they are GREAT. And the practice seems to be spreading.

JdL:

I agree that the solution to overuse of AI is to administer exams in a setting where AI is not available, but actual "pen and paper" is overly restrictive, I believe, as it makes it nearly impossible to edit and rearrange on the fly what one is composing. Much better would be to provide students with a laptop or desktop computer which can't be connected to the web, at least from the room where the exam is being given. One could argue that it's valid to test the ability of students to imagine the entire essay in their heads before writing a single line, but this skill is not particularly valuable in the real world: what counts is the worthiness of the final product, not whether it was completed in one go without ever backspacing vs. refined with a profusion of tweaks.

Gil Moss:

I'm a math professor. Pen and paper exams are the bread and butter of mathematics education and have been for a long time. Students can easily cheat on homework, and have done so since time immemorial, but it doesn't help them come exam time.

Deepa:

Hasn't it always been the case that you cannot force an unmotivated student to learn? Cheating has become easier. But motivated students who genuinely want to learn are the ones worth caring about, I think, rather than transforming the industry to force unmotivated students to become motivated.

JakeH:

I see your point, which is a version of the "extrinsic" vs. "intrinsic" motivation idea. The idea is that we learn more and better when intrinsically motivated, when we're doing it because we want to and not for an "extrinsic" reason, like, if I don't do it, I'll get a bad grade or disappoint my parents or something.

True enough, but also glib and unrealistic. The problem is that students are largely extrinsically motivated, even earnest and intelligent ones who have a genuine interest in the subject, especially when it comes to doing the difficult, annoying thing at any given moment, like plowing through challenging reading, and doing it tonight rather than tomorrow or next week or the night before the final. People do respond to incentives, and taking cheating and shortcuts off the table will do many fine and capable minds, and by extension their society, a great service by forcing them to engage seriously.

There's a limit, of course, as you suggest. You can't force a student who doesn't even care about their grade to learn. But you can actually force most students to learn, simply by making it required, by incentivizing it appropriately. This is pretty much how all school works and has always worked. Extrinsic? Sure, not ideal maybe, but unavoidable. Extrinsic motivation, after all, is still motivation.

Deepa:

This is pretty eye-opening for me. I agree that idealism is not good.

chris moore:

As a professor teaching Social Psychology, I have adapted to the AI threat by asking students to reflect on what they have learned in the course. Recognizing that this may not work for many disciplines, I find that requiring a written essay on "What aspect of this course has most changed the way you think about human social life" generates the most interesting and often beautifully moving responses. The students really seem to like it. It has the added benefit of giving me feedback on those aspects of the course that are working well!

THPacis:

I’m sure this is meaningful to some students, but beware that others may be outsourcing that to AI as well. I have seen many convincing outputs with precisely this prompt in a recent AI-in-pedagogy seminar. In other words, “personal reflections” can be outsourced and cheated on just as easily as anything else, in fact more so, because they don’t require source citations. Some students may like the assignment because it is even easier to cheat their way through.

Petula:

Sure, but if you require personal reflections about *this course* then you can usefully hold students to a standard of showing they have been engaging in *this course* - something the AI bot won't have any training on

THPacis:

Why not? You upload the course readings and it will!

Petula:

There's more to a course than just the readings

THPacis:

Here’s an idea. Take your course readings plus your syllabus. Give that to Gemini or GPT or whatever and ask it to use those materials to write a reflection as if it were a student in your course. Use the same wording as in the assignment you give to your students. Compare the results to what your students are handing in.

Thomas P. Balazs:

Yeah, good luck with that.

I require my students to write reflections on my course every semester. Several of them this semester were written with AI, and they were not easy to spot at first. You can feed ChatGPT rubrics, assignments, anything: whatever you give to your students, they can give to ChatGPT, and it can generate a pretty believable reflection.

Stephen Riddell:

Hi Yascha, very interesting stuff! I went to university well before the 'chatbot' craze, but this type of creativity has been an issue ever since 'neural networks' became a part of programmes like Premiere Pro, well before the advent of LLMs.

I think, based on my knowledge and operation of computers, it would be quite easy for supervising lecturers to check whether there were discrepancies between an oral dissertation and a written dissertation?

I mean, obviously we can check for token fragments to see if AI was used, but if someone has obvious learning disabilities then that shouldn't count against them?

David Wilchfort:

When I was a student in the 1960s, there was a lively debate about whether the newly introduced multiple-choice exams would lower academic standards compared to traditional essay questions. In reality, crafting good multiple-choice questions is far more difficult than coming up with an essay topic. An essay topic usually arises quite naturally from the subject matter at hand, whereas well-designed multiple-choice questions—which truly test a student’s knowledge—require creativity and careful thought from the examiner.

I see a similar challenge emerging today. Once again, it’s about assigning tasks to students that both assess their knowledge and test their ability to work with new tools in ways that will prepare them for future advances in AI.

That’s why I fundamentally appreciate the article’s approach. However, I believe that making effective use of AI requires us to recognize this: the goal should not primarily be to prevent students from taking shortcuts, but for teachers themselves to embrace the challenge of leveraging the benefits of these new instruments—so they can truly fulfill their teaching mission.

David Wilchfort:

After posting my previous comment, I kept thinking about what kind of assignment a professor could give to see AI not as a problem, but as a solution. For example, a student could be asked to research a specific topic and use an AI tool to formulate an initial question. The answer from the AI would then serve as the basis for generating the next question, and so on. The key would be for the professor to grade the quality of the student’s questions, not the AI’s answers.

This would have the added benefit of getting much closer to the skills that will actually matter in students’ professional lives—namely, the ability to ask thoughtful, creative, and probing questions in collaboration with new tools.

JakeH:

I like this: grade the prompts rather than the answers. The only problem I see with it is that you can ask AI to generate prompts too!

David Wilchfort:

I appreciate your point—the process-based assessment idea absolutely builds on the core principle I was aiming for. If we see education as “manager training” for a world shaped by AI, then it’s less about testing facts and more about developing the judgment required to manage, coordinate, and synthesize the outputs of different AI systems.

With every new leap in AI capability, the “management layer” shifts higher. Today’s students should practice not just using one tool, but choosing when, which, and how to use multiple AIs—a process that inevitably requires reflection, adaptation, and strategic orchestration. That’s exactly what process-based assessment can capture: not just what answers are produced, but how well the student navigates, evaluates, and integrates the potential of digital tools.

In this model, an exam would focus on how thoughtfully a student manages their engagement with AI, documents decision points, and justifies choices. The principle remains constant: it's not just about leveraging technology, but cultivating the human skill of directing it toward meaningful outcomes.

Would you agree that this shift places educators and students alike in a new “meta-managerial” role—one where learning how to guide, question, and re-evaluate AI itself becomes the true exam?

JakeH:

Hi, thanks for your response. I think that describes well how a skilled professional would be expected and encouraged to use AI. I'm less sure that this model is appropriate for the lion's share of educational pursuits.

A litigator writing a brief or preparing for an oral argument, for example, might have AI do preliminary research, evaluate drafts, and provide oral argument outlines. Lawyers would likely also rely on AI to do "document review," preliminary evaluation of the copious material exchanged in discovery to locate relevant information, a task AI could probably do better and far more quickly than a young associate, who would be in charge of coordinating that process rather than doing it manually.

It would then fall on the lawyer to evaluate and incorporate those outputs based on what the lawyer knows about the law and the case and the points they're trying to make, all informed by their experience and expertise.

That's all well and good, and people will have to learn how to do this at some point: to an extent in their brief-writing seminars or clinic activities in law school, but most saliently, I'm guessing, in the on-the-job training of summer internships and first jobs, when most lawyers first learn how to practice law on the ground in their chosen sub-field anyway.

But managing AI interaction and, for the professor, meta-managing AI interaction through process-based assessments, strikes me as a problematic conception of legal education generally. The purpose of that education is to instill deep background understanding -- the legal literacy, the practice in legal reasoning -- that furnishes the foundation upon which they can do all they do professionally, with or without AI assistance.

The key question for educators designing coursework or assessments should be, Are students able to use AI to offload the cognitive lift they would ordinarily be required to do, and required to do for good reason? We must make peace with the fact that education is not generally about producing useful product, as in a job. Rather, the point is the mental exertion in the area of study, exertion that builds the "muscles" that will enable skilled professionals to produce products later, with or without AI interaction.

I'm persuaded by the analogy to physical exercise. The point of physical exercise is not to get something done, i.e., to get somewhere by running there or to lift some iron discs for some reason. This is intuitively obvious. If the point were to get from point A to point B, you'd drive. Now, you should learn how to drive. It's an important skill in most places for getting around and functioning in the world. But driving elides the main purpose of going for a run, which is the exercise itself.

Depending on the subject or the assessment design, I would be concerned that leaning too heavily into AI too early would allow students to avoid building that mental muscle, which is the point. I like the sound of Mounk's two-phase approach, and I'm open to the "free but documented" use concept, though I'd have to think quite a lot about how to ensure that it maintains a high level of cognitive challenge that can't be gamed. I would really not want to take on a policing role, in this regard, especially because unauthorized AI use typically can't be conclusively proved.

David Wilchfort:

Thank you for such a thoughtful response. Before adding onto your reflections, allow me to clarify—I have no illusions that two people in a comment section might solve a dilemma that's occupying millions of educators worldwide.

Your “mental muscle” analogy perfectly captures what matters in education. This, too, is my hope: to steer the discussion constructively—“how might we use this as an opportunity?”—rather than destructively—“how could we stop it?”

From a pragmatic point of view, AI is making the educational landscape a moving target. Any measure we put in place to “stop” a shortcut today may be obsolete tomorrow as technology evolves. That's even more reason for teachers to focus on strengthening those intellectual muscles—guiding students to grapple deeply with reasoning, reflection, and judgment, not just answers.

Could it be, then, that the new educational challenge is less about avoiding shortcuts and more about teaching the skills to master, evaluate, and even orchestrate those shortcuts wisely? Perhaps we need to see "learning to manage AI" as part of the mental exertion itself—a new form of cognitive training, foundational for the future.

Would you agree that, while the “mental workout” must remain central, integrating AI management—when, where, and how to use it—will be a skill as valuable to future lawyers and professionals as legal reasoning itself?

JakeH:

Hi, thanks, I guess I'm not very squeamish about basically "stop[ping] it," because that's pretty easy to do most of the time and it's the best way to incentivize genuine learning, building the mental muscle I'm talking about. Law school assessments in most classes, for example, have traditionally been 3-hour, in-person essay-style exams, which made up 100% of your grade in the class. When I went to law school, some were open-book, some were not, but in either case, you had to know your stuff to earn a good grade. I don't see any reason to depart from that model. Indeed, AI suggests doubling down on it (in my day, a few exams were one-day take-homes, which AI would seem to foreclose) and perhaps even extending it to college coursework as Mounk proposes.

I don't see how tech poses a moving target here. You just ban the tech from the assessment. This is not in any way weird or unusual and should not be seen as excessively burdensome. See AP exams (which, though digital as of last year, use a "lockdown browser" that prevents you from leaving the test screen while taking it). See the bar exam, or any typical exam in any area, which has always restricted access to outside help, simply forbidding it as cheating without a trace of embarrassment.

I'm not enamored of the idea of mastering the shortcuts as a major portion of the educational process, except as, at most, a supplement to the usual. Shortcuts are fine for the person who has already done the work to etch in their mind the grooves of genuine understanding of the area. Shortcuts shortchange that process. I have no problem with my pilot using the autopilot to fly the plane, of course -- and using the autopilot is surely part of their training -- but I very much hope that they know how to fly the plane without it and I would imagine that learning how to do that would take up the lion's share of their training.

That analogy exposes another issue here, which is that using AI, like using the autopilot, isn't that hard. There's not so very much to master. The key, as you say, is knowing when it's making a fruitful contribution and knowing how to guide it to offer one. But you won't know that unless you know what you're talking about in the first place, and you won't know that if you rely on shortcuts to learn the stuff in the first instance. You'll forever be grasping at shadows, always fearing, if you're reflective, that you're a bit of a fraud.

So, no, I don't agree that integrating AI will be a skill as valuable to lawyers and professionals as legal reasoning itself. Legal reasoning and literacy comprise the hard-won intellectual foundation. AI is a tool, like, in my day as a practicing lawyer, Westlaw and Lexis electronic research that supplanted earlier generations' hours in the library. The tools are, you're correct, always changing, but that's not an argument for going all in on them at the learning stage. That's an argument, rather, for focusing on what is more constant, which is the core substance of the thing.

I would heartily endorse using AI as a personalized tutor. Here, there is a genuine gap that AI could fill. Teachers can't administer to every student's particular needs, and many students wouldn't want to expose their struggles to the teacher anyway. AI as tutor is infinitely patient. If a student doesn't get something they read, they can ask the AI tutor and ask it again and again.

The only problem I see with my "just say no" approach in school when it comes to assessments is in the area of the research paper, the thesis, the law review comment (i.e., student-authored article), the dissertation, that sort of thing. The little response paper, the little take-home essay, is done, sadly, or should be. But you would still want students to practice producing those larger formal written works. My general sense is that the requirement of making a serious, at least somewhat original point, the requirement of meeting with a professor/advisor to help develop it, the requirement of citing and quoting accurately a plethora of authoritative sources which AI yet struggles with, and the danger that an AI-produced work would be suspected to be such, would all conspire to make excessive reliance on AI (i.e., offloading too much of the work to the machine) more trouble than it would be worth. But I'm not sure.

Anyway, nice chatting with you. Cheers!

Natividad Cruz:

I remember being blown away by the things I learned and heard and read my first semester in college, 32 years ago... I suspect that when my 15-year-old gets to college, she will not experience this awe and wonder. She will have seen and heard and learned so many things, randomly and aimlessly, via the internet...

It is very likely that she will do her work, she will read if she must, she will learn and she will graduate. She will use AI, just like she will use electronic books, and go to class, and take exams. But university, the university, as a very significant and special liminal space, has lost its significance. It is no longer a long, oddly drawn out "process of passage" from curious youth to humanist adult. Humanist, whether or not one graduated from the sciences. Humanist because the university was supposed to ground you in the Universe, through philosophy and observation, through critical thinking and skeptical pursuits, through following the footsteps of people who Came Before You and Produced Knowledge...

If it is a place you go to complete your training to get a job that pays your bills, it matters little. And it will matter little to students. The misuse of technology is not the problem. The problem is thinking that thinking, writing, and learning are not ends in and of themselves, but tools for a job, which is also not meaningful enough to be an end in and of itself, but a means to pay bills. The bills, in our contemporary postmo universe where everything, from babies to wombs to genitals to presidential pardons to islands to care to time, can be bought and sold in The Market, seem to be the End in and of Itself. Being able to consume and pay for what we consume.

It is no wonder students and professors are caving in to AI. There is little point to struggle.

Twirling Towards Freedom:

I like the idea of encouraging the use of AI to write. But of course the bar should be much higher. Complexity and depth of argumentation, and citation, all become much more important than grammar or superficial critiques. But ultimately we do need to teach these kids how to use these tools effectively rather than pretend we live in a world where they don't exist.
