Archive for Artificial Intelligence

Denario

Posted in The Universe and Stuff on November 3, 2025 by telescoper

I’ve been alerted (by one of the authors) to a paper in the computer science/artificial intelligence section on arXiv, called The Denario project: Deep knowledge AI agents for scientific discovery by Francisco Villaescusa-Navarro et al. The abstract follows:

We present Denario, an AI multi-agent system designed to serve as a scientific research assistant. Denario can perform many different tasks, such as generating ideas, checking the literature, developing research plans, writing and executing code, making plots, and drafting and reviewing a scientific paper. The system has a modular architecture, allowing it to handle specific tasks, such as generating an idea, or carrying out end-to-end scientific analysis using Cmbagent as a deep-research backend. In this work, we describe in detail Denario and its modules, and illustrate its capabilities by presenting multiple AI-generated papers generated by it in many different scientific disciplines such as astrophysics, biology, biophysics, biomedical informatics, chemistry, material science, mathematical physics, medicine, neuroscience and planetary science. Denario also excels at combining ideas from different disciplines, and we illustrate this by showing a paper that applies methods from quantum physics and machine learning to astrophysical data. We report the evaluations performed on these papers by domain experts, who provided both numerical scores and review-like feedback. We then highlight the strengths, weaknesses, and limitations of the current system. Finally, we discuss the ethical implications of AI-driven research and reflect on how such technology relates to the philosophy of science. We publicly release the code at this https URL. A Denario demo can also be run directly on the web at this https URL, and the full app will be deployed on the cloud.

arXiv:2510.26887

Here’s a random picture from the paper:

I haven’t had time to read the paper yet – it’s 270 pages long – but I’m sure it will provoke strong reactions both for and against the idea of an AI research assistant. Comments are welcome through the box below.

P.S. The name Denario appears to be derived from the Latin “denarius”, a coin roughly equivalent to a day’s pay for a skilled worker in the days of the Roman Empire. More amusingly, “denarius” is the origin of the Polari word “dinarly”, meaning “money”. If I get time I must generate a Polari version of this manuscript.

An AI Guide to Europe

Posted in Artificial Intelligence, Barcelona on May 6, 2025 by telescoper

To assist those readers who might be planning conference trips or vacations in Europe I thought I’d share this helpful map (which I found here) that was generated by one of those famously accurate AI apps. There may be a few small errors, but I’m sure they are insignificant:

Apart from everything else, this explains why I found Barcelona much warmer than I had expected when I was there last year…

Meta Theft

Posted in Art, Books, Television on March 21, 2025 by telescoper

Beware, all thieves and imitators of other people’s labour and talents, of laying your audacious hands upon our work.

Albrecht Dürer, 1511

I’ve remembered that quotation since it was uttered by Inspector Morse in the episode Who Killed Harry Field? Albrecht Dürer wasn’t referring to Artificial Intelligence when he wrote it, but it does seem pertinent to what’s going on today.

There’s an article in The Atlantic about a huge database of pirated work called LibGen that has been used by Mark Zuckerberg’s corporation Meta to train its artificial intelligence system. Instead of acquiring such materials from publishers – or, Heaven forbid, authors! – they decided simply to steal it. That’s theft on a grand scale: 7.5 million books and 81 million research papers.

The piece provides a link to LibGen so you can search for your own work there. I searched it yesterday and found 137 works by “Peter Coles”. Not all of them are by me, as there are other authors with the same name, but all my books are there, as well as numerous research articles, reviews and other pieces:

I suppose many think I should be flattered that my works are deemed to be of sufficiently high quality to be used to train a large language model, but I’m afraid I don’t see it that way at all. I think, at least for the books, this is simply theft.

I think I should make a few points about copyright and authorship. I am a firm advocate of open access to the scientific literature, so I don’t think research articles should be under copyright. Meta can access them along with everyone else on the planet; since they are freely available, using them is not really piracy. Although it would be courteous of Meta to acknowledge its sources, lack of courtesy is far from the worst of Meta’s misconduct.

In a similar vein, when I started writing this blog back in 2008 I did wonder about copyright. Over the years, quite a lot of my ramblings here have been lifted by journalists, etc. Again, a bit of courtesy would have been nice. I did make the decision, however, not to bother about this as (a) it would be too much hassle to chase down every plagiarist and (b) I don’t make money from this site anyway. As far as I’m concerned, as soon as I put anything on here it is in the public domain. I haven’t changed that opinion with the advent of ChatGPT etc. Indeed, I am pretty sure that all 7000+ articles from this blog were systematically scraped last year.

Books are, however, in a different category. I have never made a living from writing books, but it is dangerous to the livelihood of those that do to have their work systematically stolen in this way. I understand that there may be a class action in the USA against Meta for this blatant larceny, which I hope succeeds.

The Dangers of AI in Science Education

Posted in Education on January 17, 2025 by telescoper

I’m taking the liberty of reblogging this post from an experienced university teacher of chemistry and physics outlining some of the dangers posed by the encroachment of Artificial Intelligence into science education. It’s quite a long piece, but well worth reading in its entirety.

Machine-based Censorship

Posted in Biographical on November 25, 2024 by telescoper

A very noticeable manifestation of the rise of so-called Artificial Intelligence has been the use of AI bots in censoring posts. The most recent example of this I’ve seen was on Saturday when I wrote a post about the general election candidates for my constituency, Kildare North. As usual when I write an article here it gets posted automatically on a variety of other platforms, including LinkedIn. However, Saturday’s post was blocked:

The powers that be did not tell me which of the “Professional Community Policies” that post might have violated so I looked through them all and couldn’t find any plausible reason for blocking that post. I can only assume some defect in the algorithm deployed by LinkedIn had been triggered wrongly. Unfortunately, all this is run by machine so there is no possibility of appeal.

I’ve noticed quite a few bizarre things like this over the past few weeks. The worst offender when it comes to random censorship is Meta (which runs Facebook, Instagram and Threads). I have been posting content automatically on two of its platforms, Facebook and Threads. Recently, however, Meta’s AI algorithm has gone berserk. A couple of weeks ago it blocked this post (about the Edgeworth family) on the grounds that it violated rules concerning “nudity or sexual activity”. Heaven knows how it decided that; you can read the post yourself. I defy you to find any nudity or sexual activity, or reference thereto, or link to any post that mentions such things, anywhere in it!

When I appealed the decision I got this.

Truly bizarre.

More recently, it blocked this post (one of my regular weekly updates for OJAp) on the grounds that it was identified as spam. I can see the need for automatic screening given the huge volume of posts, but the problem is that my Facebook feed is full of actual spam that gets through these filters while innocent posts get blocked. In other words, the algorithm is crap. If you ask for a review of the decision, all Meta does is run the algorithm again – with the same results, which is a waste of time.

The algorithm that screens comments on this blog for spam has also been playing up, with some comments from regular contributors being tagged as spam.

None of these is in itself of any consequence to me personally, not least because I’m not trying to run a business using these platforms. However, such AI engines are being deployed nowadays in a huge range of contexts primarily in order to save money. No doubt such processes do save money, but if they are based on poorly constructed algorithms – which they seem to be – the consequences could be dire. Imagine the horror of a health service based on poorly trained AI…

Is machine learning good or bad for the natural sciences?

Posted in The Universe and Stuff on May 30, 2024 by telescoper

Before I head off on a trip to various parts of not-Barcelona, I thought I’d share a somewhat provocative paper by David Hogg and Soledad Villar. In my capacity as journal editor over the past few years I’ve noticed that there has been a phenomenal increase in astrophysics papers discussing applications of various forms of Machine Learning (ML). This paper looks into issues around the use of ML not just in astrophysics but elsewhere in the natural sciences.

The abstract reads:

Machine learning (ML) methods are having a huge impact across all of the sciences. However, ML has a strong ontology – in which only the data exist – and a strong epistemology – in which a model is considered good if it performs well on held-out training data. These philosophies are in strong conflict with both standard practices and key philosophies in the natural sciences. Here, we identify some locations for ML in the natural sciences at which the ontology and epistemology are valuable. For example, when an expressive machine learning model is used in a causal inference to represent the effects of confounders, such as foregrounds, backgrounds, or instrument calibration parameters, the model capacity and loose philosophy of ML can make the results more trustworthy. We also show that there are contexts in which the introduction of ML introduces strong, unwanted statistical biases. For one, when ML models are used to emulate physical (or first-principles) simulations, they introduce strong confirmation biases. For another, when expressive regressions are used to label datasets, those labels cannot be used in downstream joint or ensemble analyses without taking on uncontrolled biases. The question in the title is being asked of all of the natural sciences; that is, we are calling on the scientific communities to take a step back and consider the role and value of ML in their fields; the (partial) answers we give here come from the particular perspective of physics.

arXiv:2405.18095

P.S. The answer to the question posed in the title is probably “yes”.

On Papers Written Using Large Language Models

Posted in Uncategorized on March 26, 2024 by telescoper

There’s an interesting preprint on arXiv by Andrew Gray entitled ChatGPT “contamination”: estimating the prevalence of LLMs in the scholarly literature that tries to estimate how many research articles there are out there that have been written with the help of Large Language Models (LLMs) such as ChatGPT. The abstract of the paper is:

The use of ChatGPT and similar Large Language Model (LLM) tools in scholarly communication and academic publishing has been widely discussed since they became easily accessible to a general audience in late 2022. This study uses keywords known to be disproportionately present in LLM-generated text to provide an overall estimate for the prevalence of LLM-assisted writing in the scholarly literature. For the publishing year 2023, it is found that several of those keywords show a distinctive and disproportionate increase in their prevalence, individually and in combination. It is estimated that at least 60,000 papers (slightly over 1% of all articles) were LLM-assisted, though this number could be extended and refined by analysis of other characteristics of the papers or by identification of further indicative keywords.

Andrew Gray, arXiv:2403.16887

The method employed to make the estimate involves identifying certain words that LLMs seem to love, the usage of which has increased substantially over the past year. For example, twice as many papers call something “intricate” nowadays compared to the past; there are also marked increases in the use of the words “commendable” and “meticulous”.
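The idea behind the estimate can be sketched in a few lines of Python. This is a toy illustration of the keyword-excess approach, not Gray’s actual code, and the counts below are made-up numbers purely for demonstration: compare each indicator word’s rate of occurrence in a baseline year with its rate in 2023, and treat any papers above the expected number as a rough lower bound on LLM-assisted writing.

```python
# Hypothetical per-year counts (illustrative only, not real data):
# total papers published, and how many contain each indicator keyword.
baseline = {"total": 1_000_000, "intricate": 3_000, "commendable": 400, "meticulous": 900}
year_2023 = {"total": 1_100_000, "intricate": 7_500, "commendable": 1_300, "meticulous": 2_600}

def excess_papers(base, recent, keyword):
    """Papers using `keyword` beyond the number expected from the baseline rate."""
    expected_rate = base[keyword] / base["total"]       # pre-LLM rate of use
    expected = expected_rate * recent["total"]          # scale to 2023 output
    return max(0.0, recent[keyword] - expected)         # excess attributed to LLMs

for word in ("intricate", "commendable", "meticulous"):
    print(word, round(excess_papers(baseline, year_2023, word)))
```

Summing such excesses over many indicator words (while avoiding double-counting papers that use several of them) gives a lower bound of the kind the paper reports; the real analysis of course has to control for genuine shifts in vocabulary fashion as well.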

I found this a commendable paper, which is both meticulous and intricate. I encourage you to read it.

P.S. I did not use ChatGPT to write this blog post.

The Existentialist University: A Strategic Plan for Authentic Learning

Posted in Education, Maynooth on August 8, 2023 by telescoper

Guess who has been playing with an AI text generator again?

I. Introduction

In the vast cosmos of knowledge, we find ourselves, a university, a microcosm of the universe itself. We are not merely an institution, but a collective entity, a gathering of minds seeking to understand the essence of existence. Our strategic plan, therefore, is not a mere roadmap, but a philosophical treatise, a reflection of our existentialist ethos.

II. Vision

Our vision is to cultivate an environment where students are not just passive recipients of knowledge, but active seekers of truth. We aim to foster a culture of questioning, where the essence of learning is not in the answers, but in the pursuit of questions. We envision a university where education is not a means to an end, but an end in itself.

III. Mission

Our mission is to create a space where students can confront the absurdity of existence, and find meaning in their own individuality. We strive to nurture minds that are not confined by societal norms, but are free to explore the depths of their own consciousness. We aim to produce graduates who are not just equipped with skills, but with the courage to face the existential angst of life.

IV. Core Values

1. Authenticity: We value authenticity over conformity. We encourage our students to be true to themselves, to embrace their uniqueness, and to resist the pressure to fit into predefined molds.

2. Freedom: We uphold the freedom of thought, the freedom to question, and the freedom to dissent. We believe that true learning can only occur in an environment where minds are free to explore, to challenge, and to innovate.

3. Responsibility: We emphasize the responsibility that comes with freedom. We instill in our students the understanding that they are the authors of their own lives, and that every choice they make shapes their destiny.

V. Strategic Goals

1. Curriculum Development: To develop a curriculum that encourages critical thinking, fosters existential inquiry, and promotes self-discovery.

2. Faculty Development: To attract and retain faculty who are not just experts in their fields, but are also committed to our existentialist ethos.

3. Student Engagement: To create opportunities for students to engage in meaningful dialogues, to participate in existential debates, and to express their individuality.

4. Community Engagement: To extend our existentialist ethos beyond the university, and to engage with the wider community in discussions about the meaning of life, the nature of existence, and the purpose of education.

VI. Conclusion

In the face of an ever-changing world, our strategic plan is not a rigid blueprint, but a flexible guide. It is a reflection of our commitment to existentialist principles, and our belief in the power of education to transform lives. We invite all members of our university community to join us in this existential journey, as we strive to create a university that is not just a place of learning, but a space for authentic living.

Branding Physics

Posted in Maynooth on July 9, 2023 by telescoper

How do you like the AI-generated image for new corporate branding of the Department of Theoretical Physics at Maynooth University?

I quite like it, as it accurately portrays the friendly and welcoming atmosphere of the Department, but it’s a bit misleading because the sinister-looking men in the picture are wearing ties.

ChatGPT and Physics Education

Posted in Education, The Universe and Stuff on January 24, 2023 by telescoper

Following on the theme of ChatGPT, I see that Phil Moriarty has done a blog post about its use in Physics Education, which many of my readers will find well worth reading in full. His findings are well in accord with mine, although I haven’t had as much time to play with it as he has. In particular, it is easily defeated by figures and pictures, so if you want to make your assessment ChatGPT-proof all you need to do is include lots of graphics. More generally, ChatGPT is trained to produce waffle, so avoid questions that require students to produce waffle. This shouldn’t pose too many problems, except for disciplines in which waffle is all there is.

Phil Moriarty has also done a video in the Sixty Symbols series, on that YouTube thing that young people look at, which you can view here:

I start teaching Computational Physics next week and will be seeing how ChatGPT does at the Python coding exercises I was planning to set!