Archive for Artificial Intelligence

Writing as Thinking

Posted in Artificial Intelligence, Education with tags , , , , , on March 10, 2026 by telescoper

The other day I was informed that WordPress has an “AI Tool” which can write blog posts for me. I suspect most people think writing a blog is a waste of time, and that is even more the case if you get AI to write posts for you. I write a blog for many reasons besides the fact that, after 17 years, it has become a habit. One reason is that writing a post sometimes helps me tease out what I actually think about things. If I don’t feel I can express my thoughts in a reasonably coherent way, it is possible that my thoughts are themselves incoherent. Of course sometimes the lack of clarity in a post is indeed because I didn’t write it very well. Nevertheless, the process of writing helps even if it doesn’t lead to anything like a perfect result.

Writing isn’t just about blog posts, of course. In academic life we write articles and books and other pieces. Some academics give the impression that we do the writing after we’ve done the thinking – or, in scientific fields, after doing the calculations or measurements – but I think writing is an intrinsic part of the process, not something done right at the end.

It was with these thoughts in mind that I decided to share the following post, written by Pat Thomson, a former Professor of Education in the School of Education at the University of Nottingham, which makes a number of points that are valid across different disciplines.

The Next Semester

Posted in Artificial Intelligence, Education, mathematics, Maynooth with tags , , , , , , , on January 26, 2026 by telescoper

There’s just a week to go before the next Semester at Maynooth University so I’ve been looking at my calendar for the weeks ahead. Actually, I won’t start teaching again until Tuesday 3rd February, because Monday 2nd February is a national holiday. As it turns out, however, I don’t have any lectures, labs or tutorials on Mondays anyway so I won’t be missing a session either on February 2nd or on May 4th, another holiday. I will have to miss one on Friday 3rd April (Good Friday), though.

The Timetable has given me two 9 o’clock lectures a week for the forthcoming Semester, one on Tuesdays and the other on Thursdays. I don’t think the students like 9am lectures very much, but I don’t mind them at all. I find it quite agreeable to have accomplished something concrete by 10am, which I don’t always do. This schedule might mean that I defer publishing papers at the Open Journal of Astrophysics on those days. I usually do this before breakfast, but I might not have time if I have to be on campus and ready to teach for 9am.

As usual, Semester 2 is a stop-start affair. We have six weeks until the Study Break, which includes the St Patrick’s Day holiday, then we’re back for two weeks (minus Good Friday) before another week off for Easter. We return on Monday April 13th to complete the Semester; the last lectures are on Friday 8th May and exams start a week later. This arrangement creates no problems for lecture-based teaching, but it takes some planning to organize labs and project deadlines around the breaks. I’ll have to think about that for my Computational Physics module.

A more serious issue for Computational Physics is how to deal with the use of Generative AI. I’ve written about this before, in general terms, but now it’s time to write down some specific rules for a specific module. A default position favoured by some in the Department is that students should not use GenAI at all. I think that would be silly. Graduates will definitely be using CoPilot or equivalent if they write code in the world outside university so we should teach them how to use it properly and effectively.

In particular, such methods usually produce a plausible answer, but how can a student be sure it is correct? It seems to me that we should place an emphasis on what steps a student has taken to check an answer, which of course they should do whether they used GenAI or did it themselves. If it’s a piece of code to do a numerical integration of a differential equation, for example, the student should test it using known analytic solutions to check it gets them right. If it’s the answer to a mathematical problem, one can check whether it does indeed solve the original equation (with the appropriate boundary conditions).
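To illustrate the kind of check I have in mind, here is a minimal sketch in Python (the function names and the choice of test problem are mine, purely for illustration): a hand-rolled fourth-order Runge-Kutta integrator applied to y′ = −y with y(0) = 1, which a student can verify against the known analytic solution y(t) = exp(−t).

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

def integrate(f, t0, y0, t_end, n):
    """Integrate y' = f(t, y) from t0 to t_end in n steps; return final y."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem with a known analytic solution: y' = -y, y(0) = 1,
# so y(t) = exp(-t). Compare the numerical result at t = 2.
numeric = integrate(lambda t, y: -y, 0.0, 1.0, 2.0, 200)
exact = math.exp(-2.0)
assert abs(numeric - exact) < 1e-8, (numeric, exact)
```

Whether the code was written by the student or by a chatbot, a test like this against an exact solution (and, ideally, a check that the error shrinks at the expected rate as the step size decreases) is what distinguishes a verified answer from a merely plausible one.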

If anyone out there reading this blog has any advice to share, or even a link to their own Department’s policy on the use of GenAI in computational physics for me to copy or adapt for use in Maynooth, I’d be very grateful!

(My backup plan is to ask ChatGPT to generate an appropriate policy…)

The Counties of the United Kingdom (according to ChatGPT)

Posted in Artificial Intelligence with tags , , , , on December 23, 2025 by telescoper

Regular readers will know that I sometimes use this blog to educate the Great Unwashed about the facts of British geography (including where the North begins). I have decided to enlist the help of Generative AI to support me with this task so, with a little help from social media, here is a response from ChatGPT to a prompt requesting a map showing all the counties of the United Kingdom with their names. The result, as you can see, is truly spectacular:

I began my research career at the University of Bulgaria, by the way.

Denario

Posted in The Universe and Stuff with tags , , , on November 3, 2025 by telescoper

I’ve been alerted (by one of the authors) to a paper in the computer science/artificial intelligence section on arXiv, called The Denario project: Deep knowledge AI agents for scientific discovery by Francisco Villaescusa-Navarro et al. The abstract follows:

We present Denario, an AI multi-agent system designed to serve as a scientific research assistant. Denario can perform many different tasks, such as generating ideas, checking the literature, developing research plans, writing and executing code, making plots, and drafting and reviewing a scientific paper. The system has a modular architecture, allowing it to handle specific tasks, such as generating an idea, or carrying out end-to-end scientific analysis using Cmbagent as a deep-research backend. In this work, we describe in detail Denario and its modules, and illustrate its capabilities by presenting multiple AI-generated papers generated by it in many different scientific disciplines such as astrophysics, biology, biophysics, biomedical informatics, chemistry, material science, mathematical physics, medicine, neuroscience and planetary science. Denario also excels at combining ideas from different disciplines, and we illustrate this by showing a paper that applies methods from quantum physics and machine learning to astrophysical data. We report the evaluations performed on these papers by domain experts, who provided both numerical scores and review-like feedback. We then highlight the strengths, weaknesses, and limitations of the current system. Finally, we discuss the ethical implications of AI-driven research and reflect on how such technology relates to the philosophy of science. We publicly release the code at this https URL. A Denario demo can also be run directly on the web at this https URL, and the full app will be deployed on the cloud.

arXiv:2510.26887

Here’s a random picture from the paper:

I haven’t had time to read the paper yet – it’s 270 pages long – but I’m sure it will provoke strong reactions both in favour and against the idea of an AI research assistant. Comments are welcome through the box below.

P.S. The name Denario appears to be derived from the Latin “denarius”, a coin roughly equivalent to a day’s pay for a skilled worker in the days of the Roman Empire. More amusingly, “denarius” is the origin of the Polari word “dinarly”, meaning “money”. If I get time I must generate a Polari version of this manuscript.

An AI Guide to Europe

Posted in Artificial Intelligence, Barcelona with tags , , , on May 6, 2025 by telescoper

To assist those readers who might be planning conference trips or vacations in Europe I thought I’d share this helpful map (which I found here) that was generated by one of those famously accurate AI apps. There may be a few small errors, but I’m sure they are insignificant:

Apart from everything else, this explains why I found Barcelona much warmer than I had expected when I was there last year…

Meta Theft

Posted in Art, Books, Television with tags , , , , , , on March 21, 2025 by telescoper

Beware, all thieves and imitators of other people’s labour and talents, of laying your audacious hands upon our work.

Albrecht Dürer, 1511

I’ve remembered that quotation since it was uttered by Inspector Morse in the episode Who Killed Harry Field? Albrecht Dürer wasn’t referring to Artificial Intelligence when he said it, but it does seem pertinent to what’s going on today.

There’s an article in The Atlantic about a huge database of pirated work called LibGen that has been used by Mark Zuckerberg’s corporation Meta to train its artificial intelligence system. Instead of acquiring such materials from publishers – or, Heaven forbid, authors! – they decided simply to steal them. That’s theft on a grand scale: 7.5 million books and 81 million research papers.

The piece provides a link to LibGen so you can search for your own work there. I searched it yesterday and found 137 works by “Peter Coles”. Not all of them are by me, as there are other authors with the same name, but all my books are there, as well as numerous research articles, reviews and other pieces:

I suppose many think I should be flattered that my works are deemed to be of sufficiently high quality to be used to train a large language model, but I’m afraid I don’t see it that way at all. I think, at least for the books, this is simply theft. I understand that there may be a class action in the USA against Meta for this larceny, which I hope succeeds.

I think I should make a few points about copyright and authorship. I am a firm advocate of open access to the scientific literature, so I don’t think research articles should be under copyright. Meta can access them along with everyone else on the planet; it’s not really piracy if the material is freely available anyway. Although it would be courteous of Meta to acknowledge its sources, lack of courtesy is not the worst of Meta’s areas of misconduct.

In a similar vein, when I started writing this blog back in 2008 I did wonder about copyright. Over the years, quite a lot of my ramblings here have been lifted by journalists, etc. Again a bit of courtesy would have been nice. I did make the decision, however, not to bother about this as (a) it would be too much hassle to chase down every plagiarist and (b) I don’t make money from this site anyway. As far as I’m concerned as soon as I put anything on here it is in the public domain. I haven’t changed that opinion with the advent of ChatGPT etc. Indeed, I am pretty sure that all 7000+ articles from this blog were systematically scraped last year.

Books are, however, in a different category. I have never made a living from writing books, but it is dangerous to the livelihood of those who do to have their work systematically stolen in this way.

The Dangers of AI in Science Education

Posted in Education with tags , , , , on January 17, 2025 by telescoper

I’m taking the liberty of reblogging this post from an experienced university teacher of chemistry and physics outlining some of the dangers posed by the encroachment of Artificial Intelligence into science education. It’s quite a long piece, but well worth reading in its entirety.

Machine-based Censorship

Posted in Biographical with tags , , , , , , , on November 25, 2024 by telescoper

A very noticeable manifestation of the rise of so-called Artificial Intelligence has been the use of AI bots in censoring posts. The most recent example of this I’ve seen was on Saturday when I wrote a post about the general election candidates for my constituency, Kildare North. As usual when I write an article here it gets posted automatically on a variety of other platforms, including LinkedIn. However, Saturday’s post was blocked:

The powers that be did not tell me which of the “Professional Community Policies” that post might have violated so I looked through them all and couldn’t find any plausible reason for blocking that post. I can only assume some defect in the algorithm deployed by LinkedIn had been triggered wrongly. Unfortunately, all this is run by machine so there is no possibility of appeal.

I’ve noticed quite a few bizarre things like this over the past few weeks. The worst offender when it comes to random censorship is Meta (which runs Facebook, Instagram and Threads). I have been posting content automatically on Meta platforms, Facebook and Threads. Recently, however, Meta’s AI algorithm has gone berserk. A couple of weeks ago it blocked this post (about the Edgeworth family) on the grounds that it violated rules concerning “nudity or sexual activity”. Heaven knows how it decided that; you can read the post yourself. I defy you to find any nudity or sexual activity, or reference thereto, or link to any post that mentions such things, anywhere in it!

When I appealed the decision I got this.

Truly bizarre.

More recently, it blocked this post (one of my regular weekly updates for OJAp) on the grounds that it was identified as spam. I can see the need for automatic screening given the huge volume of posts, but the problem is that my Facebook feed is full of actual spam that gets through these filters while innocent posts get blocked. In other words, the algorithm is crap. If you ask for a review of the decision, all Meta does is run the algorithm again – with the same results, which is a waste of time.

The algorithm that screens comments on this blog for spam has also been playing up, with some comments from regular contributors being tagged as spam.

None of these is in itself of any consequence to me personally, not least because I’m not trying to run a business using these platforms. However, such AI engines are being deployed nowadays in a huge range of contexts primarily in order to save money. No doubt such processes do save money, but if they are based on poorly constructed algorithms – which they seem to be – the consequences could be dire. Imagine the horror of a health service based on poorly trained AI…

Is machine learning good or bad for the natural sciences?

Posted in The Universe and Stuff with tags , , , , , , , on May 30, 2024 by telescoper

Before I head off on a trip to various parts of not-Barcelona, I thought I’d share a somewhat provocative paper by David Hogg and Soledad Villar. In my capacity as journal editor over the past few years I’ve noticed that there has been a phenomenal increase in astrophysics papers discussing applications of various forms of Machine Learning (ML). This paper looks into issues around the use of ML not just in astrophysics but elsewhere in the natural sciences.

The abstract reads:

Machine learning (ML) methods are having a huge impact across all of the sciences. However, ML has a strong ontology – in which only the data exist – and a strong epistemology – in which a model is considered good if it performs well on held-out training data. These philosophies are in strong conflict with both standard practices and key philosophies in the natural sciences. Here, we identify some locations for ML in the natural sciences at which the ontology and epistemology are valuable. For example, when an expressive machine learning model is used in a causal inference to represent the effects of confounders, such as foregrounds, backgrounds, or instrument calibration parameters, the model capacity and loose philosophy of ML can make the results more trustworthy. We also show that there are contexts in which the introduction of ML introduces strong, unwanted statistical biases. For one, when ML models are used to emulate physical (or first-principles) simulations, they introduce strong confirmation biases. For another, when expressive regressions are used to label datasets, those labels cannot be used in downstream joint or ensemble analyses without taking on uncontrolled biases. The question in the title is being asked of all of the natural sciences; that is, we are calling on the scientific communities to take a step back and consider the role and value of ML in their fields; the (partial) answers we give here come from the particular perspective of physics

arXiv:2405.18095

P.S. The answer to the question posed in the title is probably “yes”.

On Papers Written Using Large Language Models

Posted in Uncategorized with tags , , , , , , , on March 26, 2024 by telescoper

There’s an interesting preprint on arXiv by Andrew Gray entitled ChatGPT “contamination”: estimating the prevalence of LLMs in the scholarly literature that tries to estimate how many research articles there are out there that have been written with the help of Large Language Models (LLMs) such as ChatGPT. The abstract of the paper is:

The use of ChatGPT and similar Large Language Model (LLM) tools in scholarly communication and academic publishing has been widely discussed since they became easily accessible to a general audience in late 2022. This study uses keywords known to be disproportionately present in LLM-generated text to provide an overall estimate for the prevalence of LLM-assisted writing in the scholarly literature. For the publishing year 2023, it is found that several of those keywords show a distinctive and disproportionate increase in their prevalence, individually and in combination. It is estimated that at least 60,000 papers (slightly over 1% of all articles) were LLM-assisted, though this number could be extended and refined by analysis of other characteristics of the papers or by identification of further indicative keywords.

Andrew Gray, arXiv:2403.16887

The method employed to make the estimate involves identifying certain words that LLMs seem to love, whose usage has increased substantially in recent years. For example, twice as many papers call something “intricate” nowadays compared to the past; there are also increases in the use of the words “commendable” and “meticulous”.
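The flavour of the approach can be sketched in a few lines of Python. The paper’s actual methodology is considerably more careful; this toy version, with made-up mini-corpora standing in for abstracts from two publication years, just shows the basic idea of comparing marker-word prevalence between periods.

```python
import re

# Hypothetical mini-corpora standing in for abstracts from two years.
abstracts_before = [
    "We present a survey of galaxy clustering.",
    "A detailed study of stellar populations.",
]
abstracts_after = [
    "We present a meticulous and intricate analysis of galaxy clustering.",
    "This commendable survey offers an intricate view of stellar populations.",
]

# Words observed to be disproportionately common in LLM-generated text.
MARKER_WORDS = {"intricate", "commendable", "meticulous"}

def marker_rate(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    hits = 0
    for text in abstracts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & MARKER_WORDS:
            hits += 1
    return hits / len(abstracts)

print(marker_rate(abstracts_before))  # 0.0
print(marker_rate(abstracts_after))   # 1.0
```

A jump in this rate between years does not prove any individual paper was LLM-assisted, of course; it only supports an aggregate estimate of prevalence, which is exactly how the paper frames its 60,000-paper figure.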

I found this a commendable paper, which is both meticulous and intricate. I encourage you to read it.

P.S. I did not use ChatGPT to write this blog post.