I'm an AI researcher working at the intersection of generative modeling, geometric deep learning, and molecular design. I build models that inverse-design molecules (proteins, drugs, materials) by learning to navigate high-dimensional chemical and biological spaces. I also do freelance engineering consulting. I hold an MPhil in Machine Learning from Cambridge (Distinction) and a BSc in Mathematics from VU Amsterdam (Cum Laude).
Some problems I find interesting:
-
Modeling and sampling arbitrary data distributions
Generative modeling is one of the most ambitious frontiers in AI. At its limit, it would let us generate anything on demand (images, video, proteins, molecular therapeutics), first in simulation (bits), then translated to physical reality (atoms) through printer-like technology. I find this problem especially exciting because it is both hugely impactful and often mathematically elegant: think transport maps via diffusion or flow models with strong physics inductive biases, and clever sampling methods that tame the curse of dimensionality, rather than just autoregressive models that work by scaling deep learning architectures.
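To make the transport-map view concrete, here is a toy, self-contained sketch: it transports samples from N(0, 1) to N(mu, s^2) by Euler-integrating the marginal velocity field of a linear interpolant. In a real flow-matching model the velocity field would be a learned neural network; the closed-form field, `mu`, and `s` below are purely illustrative assumptions.

```python
import numpy as np

# Toy flow sampler: transport N(0,1) to N(mu, s^2) by integrating
# dx/dt = v(x, t), the marginal velocity of the linear interpolant
# x_t = (1-t) x0 + t x1 under an independent coupling x0 ~ N(0,1),
# x1 ~ N(mu, s^2). Here v is known in closed form; in practice it
# would be a learned neural vector field.
mu, s = 3.0, 0.5  # illustrative target mean and std

def velocity(x, t):
    var_t = (1 - t) ** 2 + (t * s) ** 2    # Var[x_t]
    coef = (t * s ** 2 - (1 - t)) / var_t  # Cov[x1 - x0, x_t] / Var[x_t]
    return mu + coef * (x - t * mu)        # E[x1 - x0 | x_t = x]

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # draws from the base distribution
n_steps = 200
for k in range(n_steps):          # forward Euler from t=0 to t=1
    t = k / n_steps
    x = x + velocity(x, t) / n_steps

print(x.mean(), x.std())          # ≈ mu, ≈ s
```

The same recipe scales to high dimensions; what changes is that the velocity field must be learned from data rather than derived analytically.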
-
How can we leverage AI to accelerate science
I see two main paths towards AI-accelerated science:
Foundation models for science. In a world where scaling laws work, language was the low-hanging fruit (i.e. internet text made LLMs feasible). Now it is time to apply the same playbook to data-scarce domains (genomes, molecules, proteins, materials) and build foundation models that do for biology and chemistry what GPT did for language.
Agentic exploration of the hypothesis space. Edison supposedly said genius is 1% inspiration and 99% perspiration. Imagine an autonomous agent that can take a high-level idea and then actually code, run experiments, debug, analyse results, scan the research literature, and suggest what to try next. Even crazier, tie this to embodied AI and it could run wet-lab experiments too. I don't think we'll get fully autonomous researchers anytime soon; what's far more realistic in the near term is tireless research assistants that amplify the impact of a sharp PI who guides them well. We're already seeing early sparks of this (see research paper, AI scientist startup).
-
How can we leverage verifiable rewards to unlock RL at scale
The verifier's rule states that the ease of training AI to solve a task is proportional to how verifiable the task is. This asymmetry (where verification is orders of magnitude easier than generation) is becoming one of the most powerful levers in AI. Think: Sudoku takes minutes to solve but seconds to verify; Instagram took years to build but anyone can check if it works; complex proofs are hard to discover but (relatively) straightforward to validate. What excites me: we're entering an era where any problem with objective truth, fast verification, and scalable feedback becomes tractable.
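The Sudoku example makes the asymmetry tangible in code. The minimal checker below (names and structure are my own illustration, not any library's API) verifies a completed 9x9 grid in a single linear pass, whereas solving from a blank grid is search over an exponential space:

```python
def is_valid_sudoku(grid):
    """Verify a completed 9x9 Sudoku grid with 81 cell reads.

    Solving a general n^2 x n^2 Sudoku is NP-hard, but verifying a
    candidate solution is trivial: every row, column, and 3x3 box
    must contain exactly the digits 1-9.
    """
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[3 * br + i][3 * bc + j] for i in range(3) for j in range(3)}
        for br in range(3)
        for bc in range(3)
    ]
    return all(group == digits for group in rows + cols + boxes)
```

The gap between the cost of this check and the cost of the search it certifies is exactly the lever that verifiable rewards exploit.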
-
What are the computational principles of intelligence
What does it take to explain abstraction, reasoning, even consciousness, in computational terms? Given that neurons and synapses give rise to human-level cognition, what are the universal principles we need to capture to build algorithms capable of autonomous abstraction (i.e. general intelligence)?
-
How can we engineer human biology
Think of designing interventions that repair and improve our bodies, and monitoring them reliably once deployed. At the limit of medicine 3.0, I imagine nanorobots circulating through our bloodstream, constantly tracking cellular function and flagging risks before they become problems. These might interface with AI doctors capable of proposing personalized therapies (e.g. some of them drugs generated on demand). Building the hardware–software infrastructure for this will be hard, but so was sending humans to the Moon.
-
How to redesign geopolitical governance to avoid wars and promote economic prosperity
The Westphalian system of nation-states creates perverse incentives: wars emerge not from irrationality but from misaligned structures where conflict can be profitable. The question isn't "how do we find better leaders?" but "how do we design systems where rational actors can't profit from conflict?". Real progress requires mechanisms over mandates: network states that enable governance-as-a-service, forcing institutions to compete for citizens; asymmetric economic interdependence where warfare becomes suicidal for all parties; and algorithmic institutions (smart contracts, prediction markets, executable policy) that replace opacity with legibility. The uncomfortable truth is achieving this demands either voluntary adoption by existing powers (unlikely) or parallel institution-building that gradually outcompetes legacy systems (generational).
I'm based in London.
In my spare time, I run, swim, surf, lift, explore nature, eat, code, travel, and read.