Archive for Copilot

The Next Semester

Posted in Artificial Intelligence, Education, mathematics, Maynooth on January 26, 2026 by telescoper

There’s just a week to go before the next Semester at Maynooth University so I’ve been looking at my calendar for the weeks ahead. Actually, I won’t start teaching again until Tuesday 3rd February, because Monday 2nd February is a national holiday. As it turns out, however, I don’t have any lectures, labs or tutorials on Mondays anyway so I won’t be missing a session either on February 2nd or on May 4th, another holiday. I will have to miss one on Friday 3rd April (Good Friday), though.

The Timetable has given me two 9 o’clock lectures a week for the forthcoming Semester, one on Tuesdays and the other on Thursdays. I don’t think the students like 9am lectures very much, but I don’t mind them at all. I find it quite agreeable to have accomplished something concrete by 10am, which I don’t always do. This schedule might mean that I defer publishing papers at the Open Journal of Astrophysics on those days. I usually do this before breakfast, but I might not have time if I have to be on campus and ready to teach for 9am.

As usual, Semester 2 is a stop-start affair. We have six weeks until the Study Break, which includes the St Patrick’s Day holiday, then we’re back for two weeks (minus Good Friday) before another week off for Easter. We return on Monday April 13th to complete the Semester; the last lectures are on Friday 8th May and exams start a week later. This arrangement creates no problems for lecture-based teaching, but it takes some planning to organize labs and project deadlines around the breaks. I’ll have to think about that for my Computational Physics module.

A more serious issue for Computational Physics is how to deal with the use of Generative AI. I’ve written about this before, in general terms, but now it’s time to write down some specific rules for a specific module. A default position favoured by some in the Department is that students should not use GenAI at all. I think that would be silly. Graduates will definitely be using Copilot or equivalent if they write code in the world outside university, so we should teach them how to use it properly and effectively.

In particular, such tools usually produce a plausible answer, but how can a student be sure it is correct? It seems to me that we should place the emphasis on the steps a student has taken to check an answer, which of course they should do whether they used GenAI or did the work themselves. If it’s a piece of code to do a numerical integration of a differential equation, for example, the student should test it against known analytic solutions and check that it reproduces them. If it’s the answer to a mathematical problem, one can check whether it does indeed solve the original equation (with the appropriate boundary conditions).
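To make this concrete, here is the kind of check I have in mind, sketched in Python. The test problem (dy/dt = -y with y(0) = 1, whose exact solution is exp(-t)) and the tolerances are just illustrative choices for this post, not taken from any actual assignment: integrate the equation numerically and compare the result with the known analytic solution.

```python
# Sketch: checking a numerical ODE integration against a known analytic solution.
# Test problem: dy/dt = -y with y(0) = 1, whose exact solution is y(t) = exp(-t).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Right-hand side of the test equation dy/dt = -y
    return -y

t_eval = np.linspace(0.0, 5.0, 101)
sol = solve_ivp(rhs, (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-8, atol=1e-10)

exact = np.exp(-t_eval)                        # analytic solution for comparison
max_error = np.max(np.abs(sol.y[0] - exact))
print(f"maximum absolute error: {max_error:.2e}")
assert max_error < 1e-6, "numerical solution disagrees with the analytic one"
```

The same idea generalises: pick a special case with a closed-form answer, run the code on it, and quantify the discrepancy rather than just eyeballing a plot.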

If anyone out there reading this blog has any advice to share, or even a link to their own Department’s policy on the use of GenAI in computational physics for me to copy and adapt for use in Maynooth, I’d be very grateful!

(My backup plan is to ask ChatGPT to generate an appropriate policy…)

Generative AI in Physics?

Posted in Artificial Intelligence, Education, mathematics, Maynooth on August 11, 2025 by telescoper

As a new academic year approaches we are thinking about updating our rules for the use of Generative AI by physics students. The use of GenAI for writing essays, etc, has been a preoccupation for many academic teachers. Of course in Physics we ask our students to write reports and dissertations, but my interest is in what we should do about the more mathematical and/or computational types of work. A few years ago I looked at how well ChatGPT could do our coursework assignments, especially Computational Physics, and it was hopeless. Now it’s much better, though still by no means flawless, and there are also many other variants on the table.

The basic issue here relates to something that I have mentioned many times on this blog, which is the fact that modern universities place too much emphasis on assessment and not enough on genuine learning. Students may use GenAI to pass assessments, but if they do so they don’t learn as much as they would had they done the working out for themselves. In the jargon, the assessments are meant to be formative rather than purely summative.

There is a school of thought that, in the era of GenAI, formative assessments should not gain credit at all, since “cheating” is likely to be widespread, and that the only secure method of assessment is the invigilated written examination. Students will be up in arms if we cancel all the continuous assessment (CA), but a system based on 100% written examinations is one with which those of us of a certain age are very familiar.

Currently, most of our modules in theoretical physics in Maynooth involve 20% coursework and 80% unseen written examination. That is enough credit to ensure most students actually do the assignments, but the real purpose is that the students learn how to solve the sort of problems that might come up in the examination. A student who gets ChatGPT to do their coursework for them might get 20%, but they won’t know enough to pass the examination. More importantly, they won’t have learnt anything. The learning is in the doing. It is the same for mathematical work as it is for a writing task; the student is supposed to think about the subject, not just produce an essay.

Another set of issues arises with computational and numerical work. I’m currently teaching Computational Physics, so am particularly interested in what rules we might adopt for that subject. A default position favoured by some is that students should not use GenAI at all. I think that would be silly. Graduates will definitely be using Copilot or equivalent if they write code in the world outside university, so we should teach them how to use it properly and effectively.

In particular, such tools usually produce a plausible answer, but how can a student be sure it is correct? It seems to me that we should place the emphasis on the steps a student has taken to check an answer, which of course they should do whether they used GenAI or did the work themselves. If it’s a piece of code to do a numerical integration of a differential equation, for example, the student should test it against known analytic solutions and check that it reproduces them. If it’s the answer to a mathematical problem, one can check whether it does indeed solve the original equation (with the appropriate boundary conditions).
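To illustrate the second kind of check, here is a minimal sketch in Python using sympy to substitute a proposed answer back into the original equation and its boundary conditions. The toy problem (y'' + y = 0 with y(0) = 1 and y'(0) = 0, proposed answer y = cos x) is my own choice for the example, not something taken from a module.

```python
# Sketch: verifying a proposed answer by substituting it back into the problem.
# Toy problem: y'' + y = 0 with y(0) = 1 and y'(0) = 0; proposed answer y(x) = cos(x).
import sympy as sp

x = sp.symbols('x')
y = sp.cos(x)                                  # the answer to be checked

residual = sp.simplify(sp.diff(y, x, 2) + y)   # should simplify to zero
print("residual of the equation:", residual)

print("y(0)  =", y.subs(x, 0))                 # should be 1 (boundary condition)
print("y'(0) =", sp.diff(y, x).subs(x, 0))     # should be 0 (boundary condition)
```

If the residual does not simplify to zero, or a boundary condition fails, the answer is wrong, however plausible it looked.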

Anyway, my reason for writing this piece is to see if anyone out there reading this blog has any advice to share, or even a link to their own Department’s policy on the use of GenAI in physics for me to copy and adapt for use in Maynooth! My backup plan is to ask ChatGPT to generate an appropriate policy…