We are currently, once again, in a very intense phase of AI development, one in which it is difficult to keep a cool head. Progress over the course of 2024 was rather incremental (for instance the upgrade from GPT-4 to GPT-4o in May 2024, still the current model for regular chats in ChatGPT), and more and more voices predicted a plateau, first in the scientific community, then in the broader public discourse. Now, suddenly, a switch seems to have been flipped, as if the singularity were within reach.
One could simply dismiss this and point out how, almost exactly a year ago, the initial hype about the supposedly economy-revolutionizing GPT Store quickly dissolved into thin air. However, the rapid development of the "reasoning" models, which I described in a recent post (by now there is another one – Grok-3), seems to me to be qualitatively on a different level. Certainly, to some extent it is once again the form that amazes us rather than the substance that truly impresses. When it is claimed, for example, that Deep Research from OpenAI can already produce entire doctoral dissertations, closer inspection (there are numerous critical analyses, one example being this one) shows this to be quite exaggerated. Nevertheless: we are now at a point where reports on many topics can be produced at the push of a button and within a few minutes – reports that are currently being written, in various economic contexts, by humans, and highly educated ones at that. Tests in which Deep Research performs poorly usually come down to the availability of digital sources, which is fixable in principle. The fact that o3 will not be available for testing outside of Deep Research anytime soon (see here) makes reliable, concrete assessments difficult. My impression, however, is that the ability to reliably extract information from various sources and synthesize it meaningfully is already highly developed in o3, and that results are possible here that I would expect only from my most cognitively gifted students.
The latest results on the programming ability of o3 confirm, in my view, the great progress that has been made here. What is impressive is not only that o3 performed better at the 2024 International Olympiad in Informatics than 99.8% of human participants, but also the gap to a version of o1 specifically fine-tuned for this task, whose result landed in the 49th percentile. The trend that models with higher general intelligence often outperform highly specialized models is thus confirmed once again – and the possibility of a broadly applicable AI thereby gains at least intuitive plausibility.
Just a few days after I gained access to Deep Research, I was able to take part in an event in Austria, the Theoforum 2025 of the Catholic Church in Vorarlberg. Its aim is not only to give young people who are about to graduate from school an overview of church professions, but also to offer fundamental orientation for their future. A really lovely event. I myself offered several workshops on the topic of "Profession and Vocation in the Age of AI." (In German, the title is a wordplay on "Beruf" and "Berufung"...)

I was very pleased that the event was well received. At the same time, I must say that I have never felt such a sense of responsibility in a workshop or lecture on AI. It is one thing to try out AI tools out of pure curiosity or to speculate theoretically about future developments. It is something else entirely to face people who will be existentially affected by the coming changes.
What can one, in good conscience, advise young people facing career choices, given the predictions about how AI will affect the working world? The seriousness with which the participants spoke to me about the topic vividly demonstrated how real these questions are for this generation. They stand at a very peculiar point in history: it is already foreseeable that AI will play a lasting role, yet the various future scenarios are still fraught with uncertainty. At the same time, one notion runs through all of these scenarios like a common thread: future workers will probably be expected to show an extraordinary degree of flexibility.
In my workshop, we went through short-, medium-, and long-term development scenarios. That is a topic in itself, which I may address in more detail elsewhere. Even in the short period since the event in early February, new data points have appeared: this study, which suggests time savings of over 66% for many everyday work tasks, and the "Anthropic Economic Index," which shows which tasks Claude is primarily used for – and what share of the US economy the corresponding occupational fields represent.
In any case, it seems increasingly obvious to me that we are moving rather quickly from machine agents supplementing human ones to substituting for them. And it is precisely experiences like the interaction with those directly affected students that make me emphasize, more and more, that we as a society urgently need to think about how we want to shape the future.
I find one aspect particularly important: the behavior of leading representatives of the AI industry clearly indicates that they themselves genuinely assume that the development of AGI – and with it a revolution of the labor market – is possible – no, very likely! – within the next one, two, or at most three years. This must be taken seriously. When Sam Altman, in September 2024, was still raving about the AI paradise that superintelligence would bring us within a decade, I was willing to dismiss much of it as a publicity measure. Now the situation is different. The very concrete predictions of imminent AGI (one step below the AI deity, but science fiction enough!) are not just a marketing strategy. So: yes, they really believe it.
Now, two things must be clearly separated. On the one hand, one can be skeptical about whether these claims are realistic. On the other hand, that changes nothing about the fact that they are meant seriously. And that alone must call politics to action. Because even if these developments turn out to be a complete figment of the imagination – I personally find it hard to imagine that the limits of scaling should have been reached right now, of all times, but be that as it may – we are confronted with the reality of an industry with a great sense of mission and an entirely new self-confidence. Politics must respond to this in a way that takes larger social contexts into account and respects the democratic will.
Now one might think that the developments of recent weeks finally point in a good direction. With the AI Action Summit 2025, the global political significance of AI seems suddenly, almost miraculously, to have been recognized in the media. Moreover, there are growing signs that AI is "finally" becoming a campaign issue. Shouldn't I be happy about that?
To be honest, right now I am not at all.
Neutrally, one can first note: the EU wants to mobilize €200 billion for AI development. €50 billion of this comes from public funds, and €20 billion is to be invested in AI gigafactories. Quite obviously, the aim is to shed the image of pettiness and regulatory zeal that many have attached to the AI Act, and for which Brussels is, not unjustly, known.
So instead, it is now full throttle on investment and progress. Even among the Greens, the recent line is that AI must become more of a topic in the election campaign, that Europe must position itself as an AI hub in its own right, that investment and the commercialization of business models must catch up dramatically, and that development must generally be accelerated enormously. Apparently, no one wants to show any weakness amid the current AI enthusiasm. It must have hurt when Christian Lindner pointed to the 500 billion dollars of private investment capital in the USA and contrasted it with the fact that (my translation) "simultaneously in Berlin ... a Robert Habeck has announced that he wants to create 129 new official positions, i.e., civil servants, for the regulation and supervision of artificial intelligence. Do you notice something? In the USA, growth of value creation; in Germany, growth of bureaucracy."
I understand it: regulation is not sexy. In the heated atmosphere around this topic, there are no prizes – and certainly no elections – to be won with it. Nevertheless, one has to wonder a bit about such statements. Wasn't it said in the last federal election campaign that the climate catastrophe was an existential threat to humanity that only the incoming government could avert? Did I miss something – has that been taken care of already? It may be that some believe in a super-AI that will solve our environmental problems, and that this mental detour justifies the obviously massive emissions that the demanded AI infrastructure will bring with it. But it is striking that the breathless calls for investment do not devote even a subordinate clause to this. Has it been forgotten how emphatically Sam Altman stressed the importance of energy production for AI development in Davos a year ago? Is it not known what role global warming plays in the eschatology of many leading AI enthusiasts?
Now, unfortunately, as so often, different things can be true at the same time. The fact is that the German federal government has been terribly off the mark on AI in recent years and bet on the wrong horse, as this 2019 interview with Florian Gallwitz, which has aged remarkably well, impressively demonstrates. When you are advised by a "Digital Strategy Germany" advisory board whose members were still making fun of ChatGPT in March 2023 because it hallucinated about their oh-so-important biographies, that is hardly surprising. Wanting to do better now is a good start.
That the AI industry wants to seize the moment is also understandable. Of course it represents its own interests. And that it makes itself heard loudly is understandable and at least partly justified by the successes it can point to. To a considerable extent, it is indeed plainly in society's overall interest to develop a certain AI independence at the European and national level, so that companies with many employees across numerous industries do not become dependent on American or Chinese AI corporations. (A little realism would be nice here, however, but that is another topic.)
I can also understand that AI researchers are excited and consider AI "our best insurance for economic growth and value creation in the future." It must be nice when one's own research area finally gets attention – including in the form of funding. (Though one might raise critical questions here too, such as whether we are not breeding rather unhealthy funding dynamics of the kind becoming increasingly visible in physics.)
What I have little understanding for, and what actually shocks me, is how fully and without any reservations Ursula von der Leyen adopted these buzzwords in her statement after the AI Action Summit in Paris:
"AI will improve our healthcare, spur our research and innovation and boost our competitiveness. We want AI to be a force for good and for growth. We are doing this through our own European approach – based on openness, cooperation and excellent talent. But our approach still needs to be supercharged. This is why, together with our Member States and with our partners, we will mobilise unprecedented capital through InvestAI for European AI gigafactories. This unique public-private partnership, akin to a CERN for AI, will enable all our scientists and companies – not just the biggest - to develop the most advanced very large models needed to make Europe an AI continent."
Not a skeptical word about the AI industry is uttered here, not even lip service to keeping larger societal contexts in view – if one may even be so optimistic as to assume they are on the radar at all. I would like to know: how does politics intend to shape a future with AGI – an AGI it now suddenly intends to help bring about through investment?
What about the young people who put their burning questions about their future to me? They will, in fact, receive politics' answers in the form of concrete decisions – from a political establishment that is busy investing in the automation of the very jobs these young adults are simultaneously expected to qualify for, and whose undifferentiated statements on the topic give little hope that it is aware of what it is trying to accelerate.
The constellation is complex – too complex to be dealt with exhaustively in a blog post like this. So let me at least close with a few brief remarks on what I am not arguing for, to head off some obvious misunderstandings.
I don't think "regulation" is the right keyword to counterbalance "investment." With a technology so far advanced that the best models from two years ago now run offline on any phone, control cannot be maintained by regulation alone.
It's also not about bringing more "stakeholders" to the table – I would find even that framing of those affected problematic. The young adults I spoke with are not simply stakeholders; they are people who look anxiously to the future and wonder what kind of existence, what kind of life they are heading towards. It's not just about how regulation might artificially preserve a few jobs for them. It's about what kind of society – beyond the job market – they are growing into, how they can find fulfillment in their lives and in what matters to them, when the shape of their working lives is increasingly beyond their control.
As I said, this topic is complex – much too complex for politicians who have not cared about it until now to suddenly conjure up meaningful positions a few days before the federal election. Is it boldness or callousness? Either way, it annoys me. Because I find it quite disrespectful, given what is at stake, to take such unqualified shots in the dark.
The parties each bring emphases that, given the fundamental challenges AI poses, could – with an appropriate investment of brainpower – be cast in an entirely new light, also and especially against the background of the political traditions they carry, which I do not want to belittle at all. That the Greens, from their perspective, would quite likely attend to the ecological aspect seems obvious. And, dear CDU, what does conservatism mean in this rapidly changing time; what needs to be rethought, for example, regarding family and security? There can be no reflexive answers here, where even the questions often take astonishingly new forms. The FDP, for instance, should now ask itself what new common ground it might find with the Left, now that its own highly qualified clientele will suddenly also be affected by automation.
This discourse challenges all of us. We are all constantly learning, getting things wrong, reorienting ourselves. One should admit that, instead of papering over the confusion with concrete-sounding plans. That, I find, is just one more strategy I would not prefer to the generic progress slogans. If you look around on LinkedIn, for example, you get the impression that education policy consists solely of the question of how to get as much AI as possible into schools (or straight into kindergarten!) as quickly as possible. The "tools" are all quite nice and often impressive. But how can anyone seriously believe that the future of education in Germany hinges on these questions? This activism, which completely overlooks the larger context, looks wonderfully qualified at first glance, but it is just as unhelpful as grandiose announcements of overtaking the Americans in AI development with sheer sums of money. For all its digital glitter, this discourse simply contributes little to the question of what kind of future awaits us and what kind of people we want to educate ourselves to be for this era.
Instead, I simply wish for a conversation about AI that genuinely considers the human being in full. AI is a topic for politics, because politics is conducted by humans for (among others) humans, and numerous areas of their lives will be quite fundamentally affected by this technology – and in what way depends precisely on the decisions we make as a society. We are not simply at this development's mercy, as if only a single future scenario at the feet of a hopefully well-disposed super-AI were conceivable, and as if we had to follow deterministically a specific path drawn up by a few people along the way. We have the freedom to choose what is important for us to preserve, and in what way. For this to succeed in the spirit of a democratic society, the media must of course ensure that the tone is not simply set by an industry shaped, to a non-negligible degree, by long-term ethical visions that would not command majority support in society – indeed, that would cause dismay if they were spoken about openly.
Perhaps one could also – now it is getting somewhat utopian – involve cultural actors in the conversation, since such profound cultural changes are at stake. This almost never happens. At most, humanities scholars get a word in who emphasize that literature, for example, has a genuine contribution to make when it comes to reflecting on questions about the future – such as my colleague Nina Beguš from UC Berkeley (we will soon be organizing a conference together), who was interviewed for a Breitband broadcast. Interestingly, the editor Jochen Dreier took this lesson away from the conversation: "Practice thought experiments and don't immediately try to do everything with artificial intelligence."
Engaging in thought experiments, of course, requires an appropriate playground. With an EU cultural funding program that, for the years 2021–2027, provides just a tenth of the money now being put into AI gigafactories, things will of course be somewhat tight...