
“With Process Mining, a lot could be improved”

Prof. Wil van der Aalst is considered one of the founding fathers of Process Mining. His field of research addresses the point where many organizations operate in the dark: the actual workflows behind ERP systems, tickets, orders, and approvals. He is not only a professor at RWTH Aachen University, but also Chief Scientist at Celonis, the world’s leading provider in the field of Process Mining and Process Intelligence. Process Mining uses digital traces to make the processes running in IT systems visible – not as described on slides, but as they actually run in real life. Where do loops occur? Where do cases pile up? Where does reality deviate from the intended process?


In this interview, Prof. van der Aalst talks about why this “understanding” is more than just transparency – and why acceptance, data access, and internal interests are often more important than the next wave of technology. At the RWTH Tech Impact Festival, he will take the stage in the “Deep Tech Innovation & Transfer” stream to present precisely this perspective: what it takes to turn a powerful approach not just into a pilot project, but into a solution that works and scales in day-to-day operations.


Photo: Celosphere

Professor van der Aalst, Process Mining is now considered a key technology for data-driven organizational development. Looking back, what was the pivotal moment for you when a scientific idea became an approach with real industrial relevance?


I think the pivotal moment has always been that point in projects when we show companies their actual processes – and the first reaction is disbelief: “That can’t be right, that’s not how things work here.” But then, with a drill-down into the data, we can prove: yes, that is exactly what’s happening. I’ve seen this “aha” moment repeatedly for about 20 years now – for instance, when analyzing data from SAP, Oracle, or Salesforce and suddenly realizing how much reality and assumptions diverge.


I’ve been working on this topic for a very long time. Even though companies around the world are using our technologies today, I’m still surprised by how long it took for Process Mining to become widely adopted. I saw its potential early on. Between 2008 and 2012, I founded several start-ups with my PhD students. And I’m still amazed that many more companies are not yet using Process Mining.



Your research aims to make processes visible and understandable. Why, in your view, is this “understanding” a critical success factor – especially for companies under high pressure to become more efficient and to transform?


Transparency isn’t the ultimate goal – but it is the first step. And that step is often missing. A typical Process Mining project begins by extracting data from systems like Oracle, Microsoft Dynamics, or SAP and reconstructing the actual events from it. Process models are then generated automatically from these events, showing what’s really happening. This is incredibly valuable because you quickly realize that processes are much more complex than expected – and deviations, bottlenecks, or loops become immediately visible.
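The discovery step he describes – grouping events into traces per case and deriving a model automatically – can be illustrated with a minimal sketch. This is plain Python, not the Celonis or PM4Py implementation, and the case IDs and activity names are invented for illustration; the structure it computes (a directly-follows graph) is one of the simplest model forms used in Process Mining:

```python
from collections import Counter

# A toy event log: one (case_id, activity) pair per event, already
# ordered by timestamp within each case. Data is invented for illustration.
events = [
    ("order-1", "Create Order"), ("order-1", "Approve"), ("order-1", "Ship"),
    ("order-2", "Create Order"), ("order-2", "Change Price"),
    ("order-2", "Approve"), ("order-2", "Ship"),
    ("order-3", "Create Order"), ("order-3", "Approve"),
    ("order-3", "Change Price"), ("order-3", "Approve"), ("order-3", "Ship"),
]

def discover_dfg(events):
    """Group events into one trace per case, then count how often each
    activity is directly followed by another (a directly-follows graph)."""
    traces = {}
    for case, activity in events:
        traces.setdefault(case, []).append(activity)
    dfg = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

dfg = discover_dfg(events)
for (a, b), count in sorted(dfg.items()):
    print(f"{a} -> {b}: {count}")
```

Even on three toy cases the graph exposes a loop (Approve → Change Price → Approve) that the “official” process description would not show – the kind of surprise he mentions above.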


In the next step, you overlay a normative process model – that is, what you expect or would like to see. Then you show where reality and expectations diverge – not as a one-time report, but continuously, like a dashboard you use daily.

And only when you have your processes under that level of control do you enter the “Process Intelligence” phase: then you can automatically intervene and steer when something isn’t running optimally. But without that first phase – without genuine understanding through transparency – the next steps are simply worthless.
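The second step – overlaying a normative model and continuously reporting deviations – can be sketched in a few lines. This is a deliberately simplified stand-in (exact trace matching against one prescribed sequence; real conformance checking aligns traces against a process model, e.g. via token-based replay), and all case data below is invented:

```python
from collections import Counter

# Normative ("to-be") process: the sequence the handbook prescribes.
NORMATIVE = ("Create Order", "Approve", "Ship")

# Observed traces, one per case (data invented for illustration).
observed = {
    "order-1": ("Create Order", "Approve", "Ship"),
    "order-2": ("Create Order", "Change Price", "Approve", "Ship"),
    "order-3": ("Create Order", "Approve", "Change Price", "Approve", "Ship"),
}

def conformance_report(observed, normative):
    """Count the distinct process variants and flag every case whose
    trace deviates from the normative sequence."""
    variants = Counter(observed.values())
    deviating = sorted(case for case, trace in observed.items()
                       if trace != normative)
    return variants, deviating

variants, deviating = conformance_report(observed, NORMATIVE)
print(f"{len(variants)} variants, {len(deviating)} deviating cases: {deviating}")
```

Run continuously over fresh event data, exactly this kind of report becomes the daily dashboard he describes – and the variant counter is what explodes into hundreds of thousands of entries in a real company.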


One example: people quickly assume sales processes are simple. But in practice, there are several ERP systems and a huge number of variants – at Siemens, for instance, there are about 900,000 process variants within just one company. It’s similar at a university: there’s a curriculum that describes how students are “supposed” to study. But when you look at their actual paths, reality often looks quite different. You can only improve what you’ve first made truly visible and understood.



Process Mining is technically mature, but in practice it sometimes faces acceptance problems. From your experience, what are the biggest hurdles – and what do companies need to do differently so that deep-tech solutions are not just introduced, but actually used?


From my point of view, there are several hurdles. The first is very simple: many companies don’t even know Process Mining exists. Many immediately talk about AI or ChatGPT, but the gap between what people imagine and reality is often enormous. Process Mining can help bridge that gap – but many just aren’t aware of it. In Europe, there is now more awareness; in the U.S., it’s harder: some universities there don’t even offer relevant courses.


The second hurdle: many companies like to talk about data and AI but have enormous difficulty even extracting data from their systems. Data engineering isn’t as “sexy” as ChatGPT, but it’s critical. The data exists, but the expertise to know where it is and how to make it usable is lacking. Data protection is sometimes used as an excuse to cover up this lack of capability.


The third challenge is organizational: Process Mining reveals what needs to be improved. If “everything is going well,” you don’t need it. That’s why many bottom-up initiatives fail: someone finds the technology exciting, shows problems – and the organization often doesn’t want those problems to be visible.



The stream “Deep Tech Innovation & Transfer” poses the question: “How can we turn deep tech into scalable solutions?” What organizational or cultural conditions must companies establish to ensure that technologies like Process Mining truly scale?


There are many perspectives. For me, scaling starts with transfer: everything we’ve researched, we’ve made consistently accessible – as open source. We built a framework early on that still covers a wide range of functionalities, and in recent years we’ve worked specifically on PM4Py to make Process Mining more accessible and usable for Process Intelligence users. This helps others build on it. The story of Celonis is a great example: the founders read my articles, used my software, and then started a company. That’s how many other companies emerged as well.


At the same time, in my view, open source is only viable to a limited extent in the enterprise context. In areas with a large number of users, an open-source community can work well. But in production and process software, it’s often different: these solutions need to be built, operated, supported, and introduced in organizations – and for that, you need scaling via companies, not via a university chair with a few students. In the end, you need firms that can continue developing and supporting this professionally. My model is: we provide example software that shows what’s possible – and companies drive widespread adoption.


Then there’s the question of framework conditions: there is generally infrastructure to support start-ups, but with larger organizations it gets more difficult. In my field, I’m also frustrated that in Germany, people talk a lot about such topics, but German companies aren’t consistently supported. Instead, in the end, they buy solutions from large providers like Microsoft and Palantir. I think that’s a mistake. Especially considering the bureaucracy problem, a lot could be improved with Process Mining – but there’s often a big gap between what’s said and what’s actually done. Using Process Mining software developed in Germany would have a double positive effect: it would immediately improve efficiency and transparency in administration and companies and also strengthen technological sovereignty, value creation, and innovative strength in Germany.



What impact has Process Mining had already – and what do you expect in five to ten years?


A lot has already happened in recent years. At the same time, I believe much more is possible – especially as the Process Mining market continues to grow. Currently, there are about 50 companies offering Process Mining as a service; Celonis is the largest and most successful among them.


In recent years, we’ve also seen many smaller companies being acquired by larger ones. That can make sense – but it becomes critical when such projects fail. I often see large companies paying a lot of money, but then not actually using the acquired know-how. That’s frustrating.



The Tech Impact Festival deliberately brings researchers, companies, and start-ups together. From your perspective, what role does the festival play in the transfer of deep tech – and what can companies concretely gain by engaging early in this dialogue?


It’s a wonderful initiative. At a university like RWTH, an incredible amount of exciting work is being done – but that’s not always visible from the outside. That’s why I’m looking forward to exchanging ideas with companies about deep tech.


What’s important to me is that companies express more clearly: What specific question do we actually want to ask? And just as importantly: What expertise is actually available? Often, companies don’t see exactly what RWTH does and can offer. From the outside, a lot looks similar – from the inside, you realize that for very specific topics there are people with deep experience. A festival can help build that connection. I like to use this analogy: if I have eye problems, I don’t go to a dermatologist at the hospital, but to a specialist. In AI, you sometimes feel like you come out with lots of information – but no clear diagnosis. The Tech Impact Festival can help turn that diversity into concrete questions and connect with the right people.

