Research
Bridging AI with the physical world

Normal Computing was founded in the USA by former members of Google Brain and Google X who helped pioneer AI for the physical world and developed leading ML frameworks for Probabilistic and Quantum AI.

The infrastructure powering AI models was never designed for today's scale, complexity, or energy demands. General-purpose architectures leave the physical potential of the hardware itself largely untapped.

Exploring the limits of new and custom silicon, including chips that optimize their own physics, requires AI assistance, better EDA software, and ultimately a virtuous cycle of self-improving AI hardware.

Algorithms and AI Engineering
We are advancing the limits of reasoning and simulation from two convergent perspectives.

Formal Logic Meets LLMs

We are building auto-formalizing systems that combine the deductive rigor of symbolic proofs with the flexible abstraction of large language models. Inspired by AlphaGeometry and similar systems, we use synthetic data and advanced reinforcement learning to align LLMs with formal models, enabling complex reasoning and deductive logic for chip design from specifications.
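
As a rough illustration of the shape such a system can take (not Normal's actual pipeline), auto-formalization can be framed as a generate-and-verify loop: an LLM proposes a formal statement, a symbolic checker either accepts it or returns feedback, and the loop repeats. The `llm_formalize` and `check_proof` functions below are hypothetical stubs standing in for a model call and a verifier.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    spec: str       # natural-language specification
    formal: str     # candidate formalization (e.g., an SMT-LIB assertion or proof script)
    verified: bool


def llm_formalize(spec: str, feedback: str) -> str:
    """Hypothetical LLM call: propose a formal statement for `spec`,
    optionally conditioned on verifier feedback from the previous round."""
    return f"(assert ...) ; formalization of: {spec!r}, feedback: {feedback!r}"


def check_proof(formal: str) -> tuple[bool, str]:
    """Hypothetical symbolic checker: accept, or reject with feedback
    (e.g., a counterexample or a failed proof obligation)."""
    return False, "counterexample: ..."


def autoformalize(spec: str, max_rounds: int = 4) -> Candidate:
    """Generate-and-verify loop: the LLM proposes, the checker disposes."""
    feedback, formal = "", ""
    for _ in range(max_rounds):
        formal = llm_formalize(spec, feedback)
        ok, feedback = check_proof(formal)
        if ok:
            return Candidate(spec, formal, verified=True)
    return Candidate(spec, formal, verified=False)
```

Verified loop transcripts of this kind are also a natural source of the synthetic training data mentioned above.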

Probabilistic Reasoning

The real world is uncertain, noisy, and bound to the laws of physics. To simulate and reason within it, AI systems must be fundamentally probabilistic. We are co-designing generative, diffusion-like models grounded in unstructured information to reason coherently about the physical world without hallucination.

Thermodynamic Computing and Silicon
We are scaling AI with Physics-Based ASICs.

Physics-Based Approach

Because probabilistic reasoning is ubiquitous in nature, we look to the natural world for clues on how to build such AI systems in silicon, using the physics-based principles of thermodynamics and Bayesian learning.

Scaling Diffusion and SDEs

To reduce AI costs by orders of magnitude, we are developing new silicon built for scaling diffusion-like models. Our thermodynamic computing chips leverage principles such as Langevin dynamics, non-equilibrium driving, and asynchrony to emulate stochastic differential equations efficiently in physical hardware.
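
For intuition, the kind of SDE involved can be sketched numerically. Below is a minimal Euler-Maruyama integration of overdamped Langevin dynamics, dx = -∇U(x) dt + sqrt(2/β) dW, whose long-run statistics approach the Gibbs distribution p(x) ∝ exp(-βU(x)). This is a classical CPU simulation of the dynamics such chips are designed to realize physically, not a model of the hardware itself.

```python
import numpy as np


def overdamped_langevin(grad_U, x0, n_steps=50_000, dt=1e-3, beta=1.0, seed=0):
    """Euler-Maruyama integration of dx = -grad_U(x) dt + sqrt(2/beta) dW.

    The trajectory's long-run statistics approach the Gibbs distribution
    p(x) proportional to exp(-beta * U(x))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for t in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x - grad_U(x) * dt + np.sqrt(2.0 * dt / beta) * noise
        traj[t] = x
    return traj


# Example: sample a 2-D Gaussian via the quadratic potential U(x) = 0.5 x^T A x.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
traj = overdamped_langevin(lambda x: A @ x, x0=np.zeros(2))
```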

The New "Fluctuation" Frontier

Thermodynamic computing was pioneered at Normal by the company's founders alongside Dr. Patrick Coles and Dr. Gavin Crooks, recognized experts in quantum physics and statistical mechanics. We are supported by public sector programs and publish regularly in venues spanning AI, semiconductors, physics, and hardware systems.

Our research team is behind innovations that enabled the scaling of probabilistic programming and probabilistic machine learning, as well as the modern approach to near-term quantum computation (NISQ). Together, the Normal team has pioneered Thermodynamic AI, a physics-based computing paradigm for accelerating the key primitives of probabilistic machine learning.
We also created Posteriors and Outlines, open-source frameworks for uncertainty quantification and controlled LLM generation, and help maintain DSPy, the leading framework for language model program optimization.
11.9.2023
A First Demonstration of Thermodynamic Matrix Inversion

Thermodynamic computing offers a natural approach for fast, energy-efficient computations. We report on the first-ever experiment towards thermodynamic artificial intelligence: solving matrix inversion problems by allowing a system of coupled electrical oscillators to thermally equilibrate with its environment.
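
The underlying principle can be illustrated in simulation (a software analogue, not the hardware experiment): for a quadratic potential U(x) = ½xᵀAx, the equilibrium Gibbs distribution is a Gaussian with covariance A⁻¹/β, so the empirical covariance of equilibrated samples estimates the matrix inverse.

```python
import numpy as np

# Classical simulation of the principle: for U(x) = 0.5 x^T A x, the
# equilibrium density exp(-beta U) is Gaussian with covariance A^{-1}/beta,
# so the covariance of equilibrated Langevin samples estimates A^{-1}.
rng = np.random.default_rng(0)
A = np.array([[3.0, 0.7],
              [0.7, 2.0]])            # symmetric positive definite
beta, dt = 1.0, 1e-3
n_steps, burn_in = 200_000, 20_000    # discard pre-equilibration samples

x = np.zeros(2)
samples = np.empty((n_steps - burn_in, 2))
for t in range(n_steps):
    x = x - (A @ x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)
    if t >= burn_in:
        samples[t - burn_in] = x

A_inv_est = beta * np.cov(samples, rowvar=False)
print(np.round(A_inv_est, 2))         # approaches...
print(np.round(np.linalg.inv(A), 2))  # ...the true inverse
```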

11.5.2023
Developing Advanced Reasoning and Planning Algorithms with LLMs

In this post, we introduce Branches, our tool for prototyping and visualizing advanced LLM reasoning and planning algorithms. We apply Branches to the problem of generating Python code for HumanEval.

10.24.2023
Supersizing Transformers: Beyond RAG with Extended Minds for LLMs

In this blog post, we discuss how the transformer architecture naturally extends over external memories and share empirical results that leverage this capability. These methods are innate (they require no fine-tuning) and outperform popular retrieval-augmented generation methods.
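
The core idea can be sketched as a single attention head whose keys and values are the local context concatenated with external memory tokens. This is a minimal illustration under that assumption; the post's actual method adds details such as per-query top-k selection of memories, which this omits.

```python
import numpy as np


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def attention_with_memory(q, k_self, v_self, k_mem, v_mem):
    """Single-head attention where queries attend jointly to the local
    context and to external memory key/value pairs."""
    k = np.concatenate([k_mem, k_self], axis=0)  # prepend memory tokens
    v = np.concatenate([v_mem, v_self], axis=0)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v


# Toy shapes: 8 context tokens, 32 memory tokens, head dimension 16.
rng = np.random.default_rng(0)
d, n_ctx, n_mem = 16, 8, 32
out = attention_with_memory(
    rng.standard_normal((n_ctx, d)),   # queries
    rng.standard_normal((n_ctx, d)),   # context keys
    rng.standard_normal((n_ctx, d)),   # context values
    rng.standard_normal((n_mem, d)),   # memory keys
    rng.standard_normal((n_mem, d)),   # memory values
)
print(out.shape)  # (8, 16)
```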

10.20.2023
Explainable Language Models: Existing and Novel Approaches

We review key aspects of explainability for language models and introduce some of Normal's innovations in this area.
