teaching

An “AI Student Agent” Takes an Asynchronous Online Course

Published by Anonymous (not verified) on Fri, 19/04/2024 - 12:33am in

Tags 

teaching

The earlier we all start thinking about this problem, the sooner we can start generating ideas and potential solutions.

Given the magnitude of impact generative AI is having and will have in education (and many other aspects of life), I’m working with some diligence to keep up to date with developments in the field. Recently, I noticed how a couple of the emerging capabilities of generative AI will come together in the future in a way that will impact education much more dramatically than I am hearing anyone talking about currently (if I’m missing this conversation somewhere, please help me connect to it!). But before I give away the punch line, let me share the individual pieces. Maybe you’ll see what I saw.

The Pieces

“Agents” are ways of using generative AI to interact with the world outside the large language model (LLM). Some recent examples (shown in videos embedded in the original post) include:

In these two examples, we see LLMs searching the web to find and read technical documentation; writing, debugging, and running computer code; playing songs on Spotify; and creating images on Midjourney. But we also see an LLM ordering food via DoorDash, booking a ride from Uber, and purchasing a plane ticket. These LLMs aren’t just writing essays or conducting mock job interviews. They’re reaching outside themselves to navigate the web, use a wide range of services, and take actions in the real world (in some cases spending real money to do so).

The rabbit r1 takes a standard approach to connecting to and using other services – you individually authenticate with each service you want the r1 to be able to access and use (e.g., you can see Jesse authenticating with Spotify around 11:31 in the video above).

Open Interpreter takes a radically different approach to connecting to and using other services.

Open Interpreter is a kind of integration layer that allows LLMs to take actions directly using your computer – operating the keyboard and mouse autonomously. Rather than authenticating with Spotify and operating it via an API like the r1 did, Open Interpreter would simply open the Spotify app, click in the search box, type the name of a song, hit enter, and then double click the song title to start playing. (Open Interpreter is open source and you can check out the repo on Github.)
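
To give a feel for how little scaffolding this takes, here is roughly what driving Open Interpreter from a Python script looks like. This is a sketch based on the project’s documented interface (exact attribute names vary across versions), and the Spotify instruction is just an illustration:

```python
# pip install open-interpreter -- a sketch; the exact interface varies by version
from interpreter import interpreter

interpreter.auto_run = False  # keep confirmations on, so you can review each action before it runs

# One natural-language instruction; the LLM plans and performs the individual
# steps (opening the app, clicking the search box, typing, hitting enter) itself.
interpreter.chat("Open the Spotify app, search for 'Here Comes the Sun', and play it.")
```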

In the video below, introducing the 01 hardware device made to work with Open Interpreter, Killian says, “You can think of the 01 as a smart person in front of your computer” (3:50 mark in the video).

In the 01 demo (starting about 4:10) we see Open Interpreter use Slack by pressing hotkeys on the keyboard, seeing and interpreting what’s on the screen, clicking on user interface elements, typing, and hitting enter. This is absolutely incredible. And it connects to a topic that was discussed briefly on a recent episode of the Latent Space podcast (starting around 35:11).

While many computer vision models have been trained on datasets like COCO, which is comprised of photos of a wide range of objects in a wide range of contexts, the kind of computer vision that’s needed to support knowledge work is the capacity to understand PDFs, charts, graphs, screenshots, etc. And while they’re playing catch-up, the capabilities of vision models in this area are advancing quickly. As the 01 demo shows, this kind of multimodal support in LLMs is already pretty good.

(And you likely noticed that the r1 and the 01 both include a learning function, which you can use to teach them how to perform new skills. That’s an entirely different essay.)

Now let’s add one more piece. Here’s an oldie-but-a-goldie by Ethan Mollick from over a year ago (a tweet embedded in the original post).

And of course the capability of frontier models has only increased in the 13 months since this tweet was published. (And yes, I called it a tweet.)

Ok, those are the main pieces. Do you see what I see?

Putting the Pieces Together

As we’ve seen above, generative AI is capable of opening programs on your computer and using those programs autonomously. It can use a web browser to open webpages, navigate between them, and click buttons, form fields, radio buttons, and other UI elements. And as we already knew, generative AI can write essays and pass a wide range of very difficult exams with flying colors. In other words,

All the technology necessary for an “AI student agent” to autonomously complete a fully asynchronous online course already exists today. I’m not talking about an “unsophisticated” kind of cheating where a student uses ChatGPT to write their history essay. I’m talking about an LLM opening the student’s web browser, logging into Canvas, navigating through the course, checking the course calendar, reading, replying to, and making posts in discussion forums, completing and submitting written assignments, taking quizzes, and doing literally everything fully autonomously – without any intervention from the learner whatsoever.

Putting these pieces together to build an AI student agent will require some technical sophistication. But in terms of overall difficulty, it feels like the kind of thing that could be done by a team of two during a weekend AI Hackathon.
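
To make that concrete, here is a hypothetical sketch of the core loop such an agent would run. pyautogui is a real keyboard-and-mouse automation library, but ask_vision_model() is a stand-in of my own invention for whatever multimodal LLM call you wire in; none of this is anyone’s actual implementation:

```python
# Hypothetical skeleton of the screenshot -> reason -> act loop described above.
import time

import pyautogui  # real GUI-automation library: moves the mouse, types, takes screenshots


def ask_vision_model(screenshot, goal):
    """Stand-in (not a real API): send the screenshot and the goal to a multimodal
    LLM and get back one action, e.g. {"op": "click", "x": 412, "y": 305}."""
    raise NotImplementedError("wire in your multimodal LLM of choice here")


def run_agent(goal, max_steps=50):
    for _ in range(max_steps):
        action = ask_vision_model(pyautogui.screenshot(), goal)
        if action["op"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["op"] == "type":
            pyautogui.write(action["text"], interval=0.05)
        elif action["op"] == "press":
            pyautogui.press(action["key"])  # e.g. "enter"
        elif action["op"] == "done":
            break
        time.sleep(1)  # let the UI settle before taking the next screenshot


run_agent("Log into the LMS, open this week's module, and complete the reading quiz.")
```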

I’ve experimented with setting up a toy version of an AI student agent on my laptop using Open Interpreter with GPT-4. It’s probably prohibitively expensive to do an entire course this way today – it would cost well over $100 to have an AI student agent complete a single class in this configuration. With more time and effort, you might be able to use a cheaper model (like GPT-3.5-turbo). But either way, the price per token will keep going down in the future, so high prices are likely only a temporary barrier to the adoption of AI student agents.

Of course, the way to avoid paying for API calls altogether is to run an LLM locally. So I also tried using Open Interpreter with Mistral-7b running locally via LM Studio (and therefore costing me essentially $0 per token). This was slower and not as accurate, but with enough time and effort I think you could get an AI student agent working using a local LLM. The “problem” then becomes that, in order to use an AI student agent in this configuration, a real student would have to download, install, and run a large language model on a pretty powerful laptop. But again, these barriers are also likely only temporary – the UI/UX for running local models will keep improving and computers will keep getting more powerful.
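
For the curious, the local configuration is mostly a matter of pointing an OpenAI-style client at LM Studio’s built-in server, which speaks the OpenAI API (by default at http://localhost:1234/v1). A minimal sketch, with the model name as a placeholder for whatever identifier your LM Studio instance reports:

```python
# Sketch: chatting with a local Mistral-7b served by LM Studio, at $0 per token.
from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the API key is ignored locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # placeholder: use the model name LM Studio shows
    messages=[{"role": "user", "content": "Draft a short discussion-forum reply about Locke on personal identity."}],
)
print(response.choices[0].message.content)
```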

With OpenAI widely rumored to be releasing updated functionality this summer specifically designed to make agents easier to create and control, and with GPT-5 rumored to be coming toward the end of this year (the CEO of OpenAI recently said that GPT-4 “kind of sucks” compared to what’s coming), the tasks of building and running this AI student agent will only get easier as time goes on. I’m not sure what the odds are that this tech exists by Fall semester of 2024, but it seems highly likely it exists by Fall 2025.

The implications for formal education are obvious, if hard to fully appreciate. But then there’s also corporate training, safety and compliance training, etc. to consider. The overwhelming majority of this kind of training is delivered fully asynchronously.

So now what?

I’ll share some early ideas in another post. I’m anxious to hear yours. We have work to do.

Reviving the Philosophical Dialogue with Large Language Models (guest post)

Published by Anonymous (not verified) on Thu, 14/03/2024 - 11:27pm in

“Far from abandoning the traditional values of philosophical pedagogy, LLM dialogues promote these values better than papers ever did.”

ChatGPT and other large language models (LLMs) have philosophy professors worried about the death of the philosophy paper as a valuable form of student assessment, particularly in lower level classes. But is there a kind of assignment that we’d recognize as a better teaching tool than papers, that these technologies make more feasible?

Yes, say Robert Smithson and Adam Zweber, who both teach philosophy at the University of North Carolina, Wilmington. In the following guest post, they discuss why philosophical dialogues may be an especially valuable kind of assignment to give students, and explain how LLMs facilitate them.


[digital manipulation of “Three Women Conversing” by Ernst Ludwig Kirchner]

Reviving the Philosophical Dialogue with Large Language Models
by
Robert Smithson and Adam Zweber

How will large language models (LLMs) affect philosophy pedagogy? Some instructors are enthusiastic: with LLMs, students can produce better work than they could before. Others are dismayed: if students use LLMs to produce papers, have we not lost something valuable?

This post aims to respect both such reactions. We argue that, on the one hand, LLMs raise a serious crisis for traditional philosophy paper assignments. But they also make possible a promising new assignment: “LLM dialogues”.

These dialogues look both forward and backward: they take advantage of new technology while also respecting philosophy’s dialogical roots. Far from abandoning the traditional values of philosophical pedagogy, LLM dialogues promote these values better than papers ever did.

Crisis

Here is one way in which LLMs undermine traditional paper assignments:

Crisis: With LLMs, students can produce papers with minimal cognitive effort. For example, students can simply paste prompts into ChatGPT, perhaps using a program to paraphrase the output. These students receive little educational benefit.

In past courses, we tried preventing this “mindless” use of LLMs:

  1. We used prompts on which current LLMs fail miserably. We demonstrated these failures to students by giving the actual prompts to ChatGPT during class.
  2. Because LLMs often draw on external content, we sought to discourage their use by prohibiting external sources.
  3. We told students about the dozens of academic infractions involving LLMs that we had prosecuted.

Despite this, many students still submitted (mindless) LLM papers. In hindsight, this is unsurprising. Students get conflicting messages over appropriate AI use. Despite warnings, students may still believe that LLM papers are less risky than traditional plagiarism. And, crucially, LLM papers take even less effort than traditional plagiarism.

The above crisis is independent of two other controversies:

Controversy 1: Can LLMs help students produce better papers?

Suppose that they can. Even so, the crisis remains. This is because the main value of an undergraduate paper is not the product, but instead the opportunity to practice cognitive skills. And, by using LLMs mindlessly, many students will not get such practice.

Controversy 2: Can we reliably detect AI-generated content?

Suppose that we can. (We, the authors, were at least reliable enough to prosecute dozens of cases.) It doesn’t matter: our experience shows that, even when warned, many students will still use LLMs mindlessly.

Roots of the crisis

With LLMs, many students will not put the proper kind of effort into their papers. But then, at some level of description, a version of this problem existed even before LLMs. Consider:

  • Student A feeds their prompt to an LLM.
  • Student B’s paper mirrors a sample paper, substituting trivial variants of examples.
  • Student C, familiar with research papers from other classes, stumbles through the exposition of a difficult online article, relying on long quotations.
  • Student D merely summarizes lecture notes.

Taking the series together, the problem is not just about LLMs or even about student effort per se (C may have worked very hard indeed). The problem is that students often fail to properly appreciate the value of philosophy papers.

Students who see no value at all will be tempted to take the path of least resistance. Perhaps this now involves LLMs. But, even if not, they may still write papers like B. Other students will fail to understand why philosophy papers are valuable (see students C and D). This, we suggest, is because of two flaws with these assignments.

First, they are idiosyncratic. Not expecting to write philosophy papers again, many students will question these papers’ relevance. Furthermore, the goals of philosophy papers may conflict with years of writing habits drilled into students from other sources.

Second, with papers, there is a significant gulf between the ultimate product and the thought processes underlying it. If students could directly see the proper thought process, they would probably understand why papers are valuable. But, instead, they see a product governed by its own opaque conventions. This gulf is what enables students to submit papers with the wrong kind of effort.

For instructors, this gulf manifests as a diagnostic problem. We often wonder whether someone really understands an argument. We want to ask further questions but the paper cannot answer. In the Phaedrus, Plato himself laments this feature of writing. For Plato, written philosophy was always second best.

The Value of Dialogue

The best philosophy, thought Plato, involves active, critically-engaged dialogue. In place of the above flaws, dialogue manifests two virtues.

First, dialogue manifests the social character of philosophy. Most students are already familiar with discussing philosophical issues with friends and family. And, looking ahead, dialogue will be the main context where most students use their philosophical skills. (Imagine, years from now, a former student serving as a juror on a delicate case. She will converse with her fellow jurors, explaining subtle distinctions, asking careful questions, and identifying crucial issues.)

Second, dialogue draws us near to students’ actual thought processes. With papers, the gulf between thought process and product made it possible for someone to submit work with the wrong kind of effort. But it is difficult to imagine this in a dialogue with an experienced interlocutor.

A Promising Alternative to Paper Assignments

We all know the value of philosophical conversation. But our assessments often look different. This is because dialogues have always been difficult to administer in a fair, practical way.

But LLMs can help revive dialogue as a pedagogical instrument. We propose that, at least in intro classes, instructors shift emphasis from papers to “LLM dialogues”: philosophical conversations between the student and an LLM.

We have used many versions of this assignment in recent courses. Here is one example:

[assignment prompt reproduced in the original post]

To show the assignment’s promise, here is an excerpt from a recent student’s ensuing dialogue (ChatGPT speaks first):

[dialogue transcript reproduced in the original post]

We offer several observations. First, the above student practiced philosophy in a serious way. In particular, they practiced the crucial skill of tracking an argument in the direction of greater depth.

Second, the transcript clearly exhibits the student’s thought process. This makes it difficult for students to (sincerely) attempt the assignment without practicing their philosophical skills.

Third, this dialogue is transparently similar to students’ ordinary conversations. Accordingly, we have not yet received dialogues that simply “miss the point” by, e.g., copying class notes, pretending to be research papers, etc. (Though, of course, we still have received poor assignments.)

Certainly, it is possible for students to submit dialogues that merely copy notes, just as this is possible for papers. But there is a difference. With papers, these students may genuinely think that they are completing the assignment well. But, with dialogues, students already know that they must address the interlocutor’s remarks and not just copy unrelated notes.

Cheating?

But can ChatGPT complete the dialogue on its own? If so, LLM dialogues do not avoid the crisis with papers.

Here, we begin with a blunt comparison. Of the 500+ dialogues we graded in 2023, there were only two suspected infractions (both painfully obvious). Of the 300+ papers from 2023, we successfully prosecuted dozens of infractions. There were also many cases where we suspected, but were uncertain, that students had used LLMs.

What explains this? First, there are technical obstacles. Students cannot just type: “Produce a philosophical dialogue between a student and ChatGPT about X”. This is because instructors can require a link (provided by OpenAI) that shows the student’s specific inputs.

Thus, cheating requires an incremental approach, e.g., ask ChatGPT to begin a dialogue, copy this output into a new chat and ask ChatGPT for a reply, copy this reply back into the original chat, etc., for every step.

But this method is difficult to use convincingly. The difficulty is not merely stylistic. There are many “moves” which come naturally to students but not to ChatGPT:

  • Requesting clarification of an argument
  • “Calling out” an interlocutor’s misstep
  • Revising arguments to address misunderstandings
  • Setting up objections with a series of pointed questions

Of course, one can get ChatGPT to perform these actions. But this requires philosophical work. For example, the instruction “Call out a misstep” only makes sense in appropriate contexts. But identifying such contexts itself requires philosophical effort, a fact that makes cheating unlikely. (Could LLMs be trained to make these moves? We discuss this issue here.)

There are also positive incentives for honesty. Because most students already understand why dialogues are valuable, these assignments are unlikely to seem like mere “hoops to jump through”. Indeed, many students have told us how fun these assignments are. (Fewer students have told us how much they enjoyed papers.)

A Good Tool For Teaching Philosophy

LLM dialogues help students practice many of the skills that made undergraduate papers valuable in the first place. Indeed, far from being a concession to new technological realities, LLM dialogues are a better way to teach philosophy (at least to intro students) than papers ever were.

This brief post leaves many issues unaddressed. How does one “clean up” dialogues so that they are not dominated by pages of AI text? What is the experience of grading like? If students are just completing dialogues, how will they ever learn to write?

We address these and other issues in a forthcoming paper at Teaching Philosophy. (This paper provides a concrete example of an assignment and other practical advice.) We hope at this point that philosophers will experiment with these assignments and refine them.


The post Reviving the Philosophical Dialogue with Large Language Models (guest post) first appeared on Daily Nous.

Learning to Teach Philosophy You Don’t Already Know

Published by Anonymous (not verified) on Wed, 28/02/2024 - 5:17am in

You may occasionally come across a topic you think you should add to a course you teach, but put off doing so because you don’t believe you know enough about it to teach it well.


[Starr Hardridge, “Black Snake” (detail)]

The preparation required could be significant, the subject could be challenging, and, in some cases, the materials and ideas might be from a philosophical tradition you’re not familiar with.

That last kind of obstacle is the focus of the NEWLAMP series.

NEWLAMP is the Northeast Workshop to Learn About Multicultural Philosophy. Initiated in 2022, it consists of week-long residential workshops that are “designed to give philosophy teachers the tools to approach, and successfully integrate” philosophy from traditions and regions underrepresented in mainstream U.S. philosophy curricula.

Each summer’s workshop has had a different focus. This year, the topic is “contemporary issues in Native American, Indigenous and Land-Based social and political philosophy. The curriculum will center on 5 key concepts in Indigenous resistance work: Sovereignty, Land, Decolonization, Indigenous Feminisms, and Cultural Reclamation.” You can learn more about it here.

NEWLAMP is being put on this summer as an NEH Institute for Higher Education Faculty at Northeastern University, and is being coordinated by Candice Delmas (Northeastern).

Up to 20 faculty will be accepted into the program. Applications are due March 5th.


The post Learning to Teach Philosophy You Don’t Already Know first appeared on Daily Nous.

Using Generative AI to Teach Philosophy (w/ an interactive demo you can try) (guest post)

Published by Anonymous (not verified) on Fri, 23/02/2024 - 1:25am in

Philosophy teachers—Michael Rota, a professor of philosophy at the University of St. Thomas (Minnesota), is about to make your teaching a bit better and your life a bit easier.

Professor Rota recently began learning about how to use artificial intelligence tools to teach philosophy. In the following guest post, he not only shares some suggestions, but also lets you try out two demos of his GPT-4-based interactive course tutor.

The course tutor is part of a program he is helping develop, and which should be available for other professors to use and customize sometime this summer.

Using Generative AI to Teach Philosophy
by Michael Rota

I have a friend who leads AI product strategy at a medium-sized tech company, and for about a year he’s been telling me about various impressive tasks one can accomplish with Large Language Models, like OpenAI’s GPT-4. In December I finally started listening, and began investigating how one might use AI tools as a teacher. (I’m a philosophy professor at the University of St. Thomas.) I’ve been amazed by the promise this new technology holds for instructors—in part because of the potential to increase productivity (of the teacher), but even more because of the potential to improve student learning.

In this post I’ll focus on the practical and discuss three free or low-cost tools that can be employed by a philosophy professor without any special technical expertise: (1) an interactive course tutor for your students, which you can load with your own questions and answers from your course, (2) a tool for quickly drafting a new exam, quiz, or HW assignment, and (3) a chatbot created from your own syllabus and lecture notes, so your students can query the content of your course.

The interactive course tutor

GPT-4 mimics human reasoning remarkably well (it scored in the 88th percentile on the LSAT). But it sometimes just makes stuff up. What if you could provide GPT-4 with good answers to questions you wanted your students to work through? It turns out you can, and thus it is possible to create an alarmingly capable course tutor by supplying GPT-4 with a series of question/answer pairs. This allows each student to have a one-on-one tutoring experience, and get immediate feedback as they work through an assignment.

You can play with a demo of this here.

Take the first assignment in the first module of this demo: “Think up a false conjunctive proposition.” This task has an infinite number of possible correct responses, yet GPT-4 can competently assess student answers, because the instructor-provided answer passed to GPT-4 by the program is general—it’s a recipe for correct answers, as it were. In this demo, the instructor-provided answer GPT-4 has been given is this:

A conjunctive proposition is any proposition of the form A and B, where A is a complete proposition and B is a complete proposition. A and B are called the ‘conjuncts’ of the conjunctive proposition. A conjunctive proposition is false if and only if A is false or B is false or both A and B are false. It counts as true otherwise.

That’s it. That’s enough for the AI tutor to respond accurately to almost any possible student response. A student can get the question wrong in a number of ways: for example, by entering a conjunctive proposition that’s true, or a proposition that’s not a conjunction, or something that’s not a proposition at all. GPT-4 handles all of these possibilities.
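
For readers who want to see the pattern under the hood, here is a minimal sketch (my illustration, not the actual product): the instructor’s answer goes into the system prompt, and the model grades each free-form student response against it.

```python
# Minimal sketch of an answer-key-driven tutor; not the implementation described above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

QUESTION = "Think up a false conjunctive proposition."
ANSWER_KEY = (
    "A conjunctive proposition is any proposition of the form A and B, where A and B are "
    "complete propositions. It is false if and only if at least one conjunct is false."
)

def tutor_feedback(student_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "You are a patient course tutor. Judge the student's answer using ONLY "
                "the instructor's answer key, then explain briefly.\n\n"
                f"Question: {QUESTION}\nAnswer key: {ANSWER_KEY}")},
            {"role": "user", "content": student_answer},
        ],
    )
    return response.choices[0].message.content

# A true conjunction, a non-conjunction, or a correct answer all get apt feedback.
print(tutor_feedback("Grass is green and snow is purple."))
```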

Using generative AI in this way offers several advantages over traditional homework assignments:

(a) students get immediate, specific feedback on each question
(b) students who need more practice can get it without having to make other students do busy work
(c) there’s less grading for teachers
(d) there is a decreased need for the teacher to explain the same thing multiple times.

How will grading work? In my view it’s too soon to hand grading over to AIs, so in my classes I plan to split the grading and the learning. The grading will be based on class participation and in-class, pen and paper exams. The learning will be facilitated in the standard ways but also with the help of an interactive course tutor based on questions and answers from my course.

Here is a second demo, allowing an instructor to test functionality by inputting a single question/answer pair and then checking how well the AI tutor handles mock student answers.

The demos linked above use an early version of the product I’m helping to design. It should be available by the summer, at which point professors will be able to create an account, input their own modules of question/answer pairs, and hit ‘submit’ to create a tutor based on their material, accessible for their students as a Web App.

For a larger discussion of the promise of interactive tutors in education, see this TED talk by Sal Khan of Khan Academy.

Assignment generation

The creation of new questions for homework assignments, quizzes, and exams can be time-consuming, whether one is designing a new course or just creating a new version of an existing assignment or test. Large language models are great for speeding up this process.

If you go to chat.openai.com, you can sign up for a free account with OpenAI and use GPT-3.5 at no cost. That allows you to type into a textbox, entering a prompt like “Can you give me ten sample questions on ____, suitable for the college level” or “Here’s a question on this topic {insert a question from an old assignment}. Can you give me a very similar question, but with different details?” Used in this way, GPT-3.5 can provide some value.

But GPT-4 is much better, both because it is better at mimicking human reasoning and because it allows you to attach files. So you can attach an old assignment and ask it for a very similar assignment in the same format. The downside here is that to use GPT-4 you need a ChatGPT Plus account, which costs $20 a month. An upside is that additional functionality comes along with a ChatGPT Plus account: you can access the GPT store. There you will find customized versions of GPT-4 like the “Practice Exam/Quiz/Test Creator for School” GPT, which allows you to upload course content (e.g. your lesson plans on a topic), and then ask for sample questions based on that material. With only a little more work, you can create your own GPT with access to knowledge about your own course, uploading up to 20 files, and use it to generate drafts of assignments tailored to your material.
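
If you would rather script this than work in the chat interface, the same workflow is a few lines against the API. A sketch, with the file name and prompt purely illustrative:

```python
# Sketch: drafting a new version of an existing quiz via the API.
from openai import OpenAI

client = OpenAI()

with open("old_quiz.txt") as f:
    old_quiz = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": (
        "Here is last semester's quiz:\n\n" + old_quiz +
        "\n\nWrite a new quiz in the same format and at the same difficulty, "
        "covering the same topics but with different details and examples.")}],
)
print(response.choices[0].message.content)  # a first draft to evaluate and refine
```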

As with any AI-generated content, think of the process like this: with the right prompting from you, the AI produces initial drafts almost instantaneously, but you’ll need to evaluate and refine before you have a final product.

Course chatbot

Another thing you can do with ChatGPT Plus is to create a course chatbot. If you create a GPT and upload files with information about the content in your course (your syllabus, lesson plans, handouts, etc.), then anyone with access to the chatbot can ask questions like “When is the final exam in this class?”, “How did we define a ‘level of confidence’?”, or “What are the steps in Aristotle’s function argument?”. And you can give others access to the chatbot by making it available to anyone with a link. However, your students would need a ChatGPT Plus account to use it, and that may not be feasible. But there is a free workaround: if you put your course content in a PDF that is no more than 120 pages (or break it up into several), you can give the PDF(s) to your students and direct them to the landing page of ChatPDF, where they can upload the PDF and then query it for free.
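
To demystify what a tool like ChatPDF is doing, here is a rough sketch of the same idea (my guess at the general pattern, not ChatPDF’s implementation): extract the PDF’s text and prepend it to each question. The crude truncation is only there to respect the model’s context window.

```python
# Sketch: a tiny "query your course PDF" chatbot.
from openai import OpenAI
from pypdf import PdfReader  # real library for extracting text from PDFs

client = OpenAI()

course_text = "\n".join(page.extract_text() or "" for page in PdfReader("syllabus.pdf").pages)

def ask_course(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only this course material:\n\n" + course_text[:50000]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_course("When is the final exam in this class?"))
```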

If you have further questions about any of this, raise them in the comments or email them to me.


The post Using Generative AI to Teach Philosophy (w/ an interactive demo you can try) (guest post) first appeared on Daily Nous.

Are Your Students Doing The Reading?

Published by Anonymous (not verified) on Sat, 17/02/2024 - 1:04am in

And if they’re not, what can be done to get them to do it? Or is that the wrong way to think about it?

[Note: This was originally posted on February 16, 2024, 9:04am, but was lost when a problem on February 17th, 2024 required the site to be reset. I’m reposting it on February 18th with its original publication date, but I’m sorry to report that the comments, many of which contained helpful suggestions, may have been lost; I’m looking into the matter.]

These questions come up in response to a recent piece by Adam Kotsko (North Central College) at Slate. He writes about the “diffuse confluence of forces that are depriving students of the skills needed to meaningfully engage” with books:

As a college educator, I am confronted daily with the results of that conspiracy-without-conspirators. I have been teaching in small liberal arts colleges for over 15 years now, and in the past five years, it’s as though someone flipped a switch. For most of my career, I assigned around 30 pages of reading per class meeting as a baseline expectation—sometimes scaling up for purely expository readings or pulling back for more difficult texts. (No human being can read 30 pages of Hegel in one sitting, for example.) Now students are intimidated by anything over 10 pages and seem to walk away from readings of as little as 20 pages with no real understanding. Even smart and motivated students struggle to do more with written texts than extract decontextualized take-aways. Considerable class time is taken up simply establishing what happened in a story or the basic steps of an argument—skills I used to be able to take for granted.

Kotsko anticipates one kind of reaction to this complaint:

Hasn’t every generation felt that the younger cohort is going to hell in a handbasket? Haven’t professors always complained that educators at earlier levels are not adequately equipping their students? And haven’t students from time immemorial skipped the readings?

He reassures himself with the thought that other academics agree with him and that he is “not simply indulging in intergenerational grousing.” That’s not a good response, because the intergenerational divide is not as relevant as the divide between academics and non-academics (i.e., nearly all of their students): professors were not, and are not, normal.

Still, I’m a professor, too, and despite my anti-declinist sentiments and worries about my own cognitive biases, I can’t help but agree that students do not seem as able or willing to actually do the reading, and as able or willing to put in the work to try to understand it, as they have in the past (though I probably don’t think the decline is as steep as Kotsko thinks it is).

Kotsko identifies smartphones and pandemic lockdowns as among the culprits responsible for poor student reading, but acknowledges we “can’t go back in time” and undo their effects. Nor does he offer any solutions in this article.

Are there any solutions? What can we do? What should we do? What do you do?

Related:
How Do You Teach Your Students to Read
The Point and Selection of Readings in Introductory Philosophy Courses
Why Students Aren’t Reading


The post Are Your Students Doing The Reading? first appeared on Daily Nous.

New Teaching Philosophy with Technology Prize

Published by Anonymous (not verified) on Tue, 30/01/2024 - 10:00pm in

Oxford University Press and the American Philosophical Association (APA) have teamed up to launch the new “Oxford University Press Teaching with Technology Prize.”

The prize “recognizes outstanding use of technology in the teaching of philosophy and philosophical pedagogy by philosophers at a junior career stage” who are also members of the APA.

The prize is $2000 and a certificate, plus funds for travel to the APA meeting at which the prize is awarded. There is also the possibility of a $500 honorable mention prize being awarded.

The contest has two stages: a nomination stage (self-nominations are allowed), and then a stage in which selected contestants submit more detailed information and materials.

The first deadline is February 25th.

More details are here.

The post New Teaching Philosophy with Technology Prize first appeared on Daily Nous.

How To Write A Philosophy Paper: Online Guides

Published by Anonymous (not verified) on Wed, 24/01/2024 - 9:40am in

Some philosophy professors, realizing that many of their students are unfamiliar with writing philosophy papers, provide them with “how-to” guides to the task.

[Originally posted on January 15, 2019. Reposted by reader request.]

I thought it might be useful to collect examples of these. If you know of any already online, please mention them in the comments and include links.

If you have a PDF of one that isn’t online that you’d like to share, you can email it to me and I can put it online and add it to the list below.

Guidelines for Students on Writing Philosophy Papers

(Crossed-out text indicates outdated link.)

The post How To Write A Philosophy Paper: Online Guides first appeared on Daily Nous.

Teachers: Was the Semester AI-pocalyptic or Was It AI-OK?

Published by Anonymous (not verified) on Tue, 12/12/2023 - 1:18am in

A survey conducted at the end of last year indicated that 30% of college students had used ChatGPT for schoolwork. Undoubtedly, the number has gone up since then. Teachers: what have your experiences been like with student use of ChatGPT and other large language models (LLMs)?

Here are some questions I’m curious about:

  • Did you talk to your students about how they may or may not use LLMs on work for your courses?
  • Have you noticed, or do you suspect, that your students have used LLMs illicitly on assignments for your courses?
  • Have you attempted to detect illicit LLM use among your students, and if so, what methods or technology did you use?
  • If you reported a student for illicit LLM use, how did your institution investigate and adjudicate the case?
  • Have you noticed a change in student performance that you suspect is attributable to increased prevalence of LLMs?
  • Did you incorporate LLM-use into assignments, and if so, how did that go?
  • Did you change or add assignments (or their mechanics/administration) in response to increased awareness of LLMs that do not ask the students to use the technology? (e.g., blue book exams in class, oral exams)
  • Have your LLM-related experiences this semester prompted you to think you ought to change how you teach?

I’m also curious about which other questions I should be asking about this.


The post Teachers: Was the Semester AI-pocalyptic or Was It AI-OK? first appeared on Daily Nous.

The Near-term Impact of Generative AI on Education, in One Sentence

Published by Anonymous (not verified) on Wed, 18/10/2023 - 3:22am in

Tags 

teaching

Preparing to participate in a panel on generative AI and education at this week’s AECT convention gave me the excuse to carve out some dedicated time to think about the question, “how would you summarize the impact generative AI is going to have on education?” This question is impossible to answer over the medium to long-term, but maybe I could give an answer addressing the near-term?

My approach to this question was to look for a different, comparable example and try to work my way into the question from that more familiar territory. The internet seems like the obvious choice here, as no other recent advance can even begin to compare to the potential impact generative AI will have.

If I had to summarize the impact the internet (not computers, but the internet) has had on education in a single sentence, it would be something like “the internet greatly decreases the degree to which time and distance are obstacles to education.” While it’s a simple statement, I would argue that 30 years later we’re still struggling to fully unpack its implications. We certainly haven’t fully harnessed these time-shifting and place-shifting superpowers to their fullest extent yet in the service of student learning.

This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this: 

“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”

We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.

What does it mean for education that, at any point when a learner needs an answer to a question, needs a different explanation, needs another example, needs some practice, needs some feedback, or needs any of the myriad other things that someone with expertise in a discipline could provide in support of their learning, they have immediate access to that expertise? (Also, keep in mind that disciplinary expertise includes expertise in pedagogy, ethics, etc.)  Our current instructional design approaches assume that access to expertise is scarce, expensive, and delayed. That’s why we “capture” disciplinary expertise in “content” – so we can economically provide access to expertise to learners. But what if access to expertise was abundant, cheap, and immediate? If your students have access to the internet, that’s the world your students are now living in. How should that fact change the design of your instruction?

THE STRIKE

Published by Anonymous (not verified) on Wed, 07/12/2022 - 5:27am in

The Strike continues with no end in sight.  Although there have been tentative agreements concerning Post-Docs and Academic Researchers, in the Academic Student Employee and Student Researcher units, the parties appear to remain well apart on the fundamental economic issues.  This distance is most easily seen in the ASE category: although the UAW made significant adjustments in its proposal UC responded with little change.  You can see the latest UAW wage proposal here and the latest UC wage proposal here.  

It is impossible from the outside to tell where the negotiations are headed.  But what I want to try to do here is offer some suggestions for how we could think about the gap, how we got here, and what we might do in the future to alter the conditions that have created what is undoubtedly a crisis at the University, and a depressing foreshadowing of the end of UC as a serious research university.  If the latter does happen the responsibility will ultimately lie with UCOP and the Regents with some support from the campus Chancellors.

The first point is that it seems clear that there is a fundamental gap in the way that each side is defining these negotiations.  UC is approaching this as if it were a conventional labor negotiation with a class of workers whose position is fundamentally stable.  The UAW and its supporters, on the other hand, start from the position that they have been placed in an untenable economic position.  Given that TA wages have barely kept up with national inflation over the years, combined with the extreme cost of housing in California, graduate workers cannot continue with relatively minor adjustments in the dollar amount of their monthly pay.  To make matters worse, UC's latest offer has a first-year adjustment that is about equal to current inflation.  In this light, UCOP appears completely out of touch with the reality of life on campuses and indifferent to its lack of knowledge.

This image of autocratic disregard was only deepened by Provost Brown's appalling letter to the faculty last week.  Although much of it was standard UCOP pablum, he inspired widespread faculty hostility with his closing flourish threatening faculty members who refused to pick up the work of striking workers with discipline beyond the docking of pay.  For the last three years, faculty and lecturers  have performed an enormous amount of additional labor to keep the university afloat during the pandemic: transforming their courses, spending additional time with students, planning for campus transformations, and putting their research duties on the side to maintain "instructional continuity" as the administration likes to put it.  After all this effort, for the Provost to threaten disciplinary action for those who choose not to pick up the work of striking TAs or to act upon their own convictions about academic integrity, manifests a contempt for the faculty that is hard to ignore.

It's important to grasp UC's budgetary situation correctly.  Most importantly, the usual invocation of the university's 46 billion dollar budget needs to be put aside.  Most of that budget is tied up in the medical centers or in funding for designated purposes.  The real budget that is relevant is the core budget made up of tuition, state funding, and some UC funds.  It comes in closer to $10 billion (Display 1) and is largely tied up in salaries across the campuses.  As Chris and I have been pointing out for nearly 15 years, UC has been subject to core educational austerity surrounded by compartmentalized privatized wealth (although we should notice that the medical centers barely stay in the black).  This crisis will not be overcome by hidden caches of money floating around the university.  The problem is deeper than that:  its roots lie in the combination of state underfunding and the expansion of expensive non-instructional (often non-academic research) activities that have taken up too much of campuses' payrolls.

But I want to stress that this reality does not mean that the graduate students are being unreasonable in seeking wages that enable them to perform their employment duties and pursue their studies.  Instead, it is a sign of how deep the failure of the University has been in (not) providing a sustainable funding model both for students and faculty supporting students.  The Academic Senate has been pointing to this problem for at least two decades.  In statements and reports from 2006, 2012, and 2020, the Senate has repeatedly insisted that graduate student support was insufficient and proposed steps to improve it.  Even the administration itself has sometimes recognized its depth.  To take only one example from 2019, UCOP's Academic Planning Council declared that:  

UC must do better at financially supporting its doctoral students, particularly as it seeks to diversify the graduate student body. The University cannot compete with its peers for talented candidates if it does not offer competitive support. In 2017 the gap in average net stipend between UC and its peers was nominally $680. In actuality the gap is much greater due to California’s high cost of living - with [cost of living] factored in, the average gap in doctoral support is closer to $3,400. This is a huge difference but not insurmountable. The Workgroup urges UC leadership to make every effort to close the gap so that the quality of UC’s doctoral programs is maintained and enhanced.

UC campuses, with planning and prioritization, could guarantee five-year multi-year funding to doctoral students upon admission. According to current data, about 77 percent of doctoral students across UC receive stable or increasing net stipends for five consecutive years. (Appendix 1.) With some exceptions, this multi-year funding is relatively consistent across campuses and disciplines. However, this funding is typically not presented as a full five-year multi-year guaranteed package upon admission. Offering five-year funding upon admission would enhance recruitment of high-potential students, offer financial security, and address one of the chief stressors for doctoral students - worry over continued funding while in the program.

In addition to offering guaranteed five-year funding, the University must address the issue of graduate student housing. Graduate students, many of whom have family responsibilities, face enormous challenges in finding affordable housing. Without a targeted effort to address graduate student housing, UC’s capacity to attract and retain qualified candidates is at serious risk.  (4-5)

And yet the problem persists.  The Academic Senate has stressed this issue repeatedly and with great force.  A recent letter from the UCLA Divisional Senate's Executive Board has pointed its finger at the problem--the need for renewed state funding.   It is time for the administration to do something to fix it--and something that doesn't simply damage other parts of the academic endeavor.

UCOP will continue--as they always do--to insist that we cannot get more money out of the state to pay for what needs to be done.  But let's press on that point a little more.  It is certainly possible that we are heading for a recession--the Federal Reserve seems determined to induce one to put labor in its place.  But does that mean that the state doesn't have the capacity to respond to an emergency at the University?  Despite all the talk about a budget shortfall, Dan Mitchell at the UCLA Faculty Association Blog has been pointing out that the situation is far less clear than the Legislative Analyst is insisting (and the University is repeating).  For one thing, revenues have been higher than expected and that even with the possibility of a downturn the state has around 90 billion dollars in usable reserves. If the state won't help it's not because of economic necessity but a matter of political choice.  After all, the Governor had no problem finding $500 million to pay for a private immunology research park at UCLA that provides little, if any, real benefit to the campus academic program.  The Governor and the state can do more for the educational core of the University than they are doing: and if UCOP and the Regents can't show the state how necessary that is, then one wonders again what their purpose is.  

I want to make one final point.  UC is the research university of the state and UC insists that graduate education is at the heart of its purpose.  But if UCOP actually agrees with that then the question must be: what do we need to do to have academic graduate education in a sustainable form?  What resources do we need to enable students to both contribute to the larger functioning of the university and to pursue their studies?  Are we willing to have only graduate programs where students have family money or have already flipped a startup?  Or where they are here to gain an additional credential to take back to their jobs?  Does UCOP remain committed to UC's contributions to disciplines across the spectrum of knowledge?  Or does it only care about graduate students (and others) as cheap and disposable labor?  

I don't expect that these negotiations or this strike can answer or settle these questions.  But UC is at a crossroads and the university--especially its leadership--must face up to that.  The long-term question raised by the strike is whether UC will continue as a research university; if we don’t make it possible for future scholars to attend, we will have forfeited our purpose.  There is an opportunity here to take the first steps towards creating a new sustainable vision of a twenty-first century research university.  Or we can continue as we have in decline.  The choice ultimately is UCOP's and the Regents'.

****

(I've focused here on the ASE unit because the Student Researcher Unit is admittedly a more complicated problem.  The vast majority of GSRs are supported by external grants and those grants have both limits and their own rules.  To some extent UC has been negotiating with someone else's money.  That doesn't mean the situation is impossible but rather that it has to be implemented in such a fashion as to protect Principal Investigators from damaging unintended consequences.)
