## Conventional Computing is Hindering AI Progress

Energy consumption has become the primary bottleneck for AI, with growing concerns around the overall cost, waste heat, and emissions associated with AI infrastructure. This highlights a deep problem due to the mismatch between the underlying digital hardware and the computations required for AI. Moreover, there is a clear need to democratize AI research, which has become inaccessible to many due to massive energy consumption, cost, and compute requirements. These issues are especially pronounced when one desires AI with high-level reasoning, where probabilistic AI plays a key role but is highly computationally expensive. Overall, this motivates a novel computational paradigm to make AI faster, more capable, and more energy-efficient.

## Democratizing Thermodynamic Computing

At Normal Computing, our team has been pioneering physics-based hardware utilizing thermodynamics (i.e., a thermodynamic computer) to address the above issues. To accelerate the development of efficient AI with reasoning capabilities, we aim to grow the community around thermodynamic computing. To this end, we have made many of our hardware designs and results publicly available in our publications and GitHub repositories. Advancing thermodynamic computing out of the lab and into large-scale devices will be accelerated by coordination between public, private, and academic entities. This is why we have made publicly available the results necessary to understand how thermodynamic computing devices work, and the knowledge necessary to build them. We are now taking another step towards democratization by open-sourcing a cutting-edge tool developed at Normal: `thermox`, **a fast and exact thermodynamic computing simulator**.

At a fundamental level, the thermodynamic computer we are building, coined the Stochastic Processing Unit (SPU), is a physical device that evolves according to a stochastic differential equation (SDE).

Possibly the simplest SDE, and the focus of our first prototype, is the Ornstein-Uhlenbeck (OU) process, which takes the form:

$$dx_t = -A(x_t - b)\,dt + \mathcal{N}(0, D\,dt),$$

where x and b are d-dimensional vectors, A and D are d×d matrices^{1}, and N(0, D) denotes a normal distribution with covariance D.

## Accelerating Linear Algebra with OU Processes

As we describe in our foundational paper, one may perform computations by building a device that physically implements this OU process. One may send the problem inputs to the SPU, wait for some time, and measure the state of the SPU to obtain a solution to the problem at hand. One reason why this is exciting is that, for a scaled-up system, we can get a speedup over digital computers on a range of linear algebra primitives. Let us take solving a linear system as an example. We wish to solve:

$$A x^* = b,$$

that is, find the x* that satisfies this equation. By sending A and b into the device, after some relaxation time, the average value of x will be x*, up to some error. It turns out that with this method we obtain an asymptotic speedup over digital methods, with a runtime that is linear in the dimension d!
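To make this concrete, here is a minimal numpy sketch of the same principle on a digital machine: we integrate the OU process with a simple Euler-Maruyama loop (standing in for the physical dynamics of the SPU) and time-average the state after a relaxation period. The matrices, step size, and noise level below are illustrative choices, not parameters of the actual device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric positive-definite drift matrix A and target vector b; the
# stationary mean of dx = -(A x - b) dt + noise is the solution x* = A^{-1} b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

dt = 1e-3            # integration step size (illustrative)
n_burn = 10_000      # steps discarded while the process relaxes
n_avg = 100_000      # steps averaged after relaxation
D = 0.1 * np.eye(2)  # diffusion matrix (illustrative noise level)
sqrt_D = np.linalg.cholesky(D)

x = np.zeros(2)
running_sum = np.zeros(2)
for step in range(n_burn + n_avg):
    noise = sqrt_D @ rng.standard_normal(2) * np.sqrt(dt)
    x = x - dt * (A @ x - b) + noise
    if step >= n_burn:
        running_sum += x

x_est = running_sum / n_avg
```

Averaging the relaxed trajectory recovers the solution of `np.linalg.solve(A, b)` up to Monte Carlo error.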

Thermodynamic linear system solving is part of a broader family of thermodynamic linear algebra primitives, including the full matrix inverse A^{-1} and matrix exponentials.
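For intuition on the inverse primitive, here is a hedged numpy sketch: for symmetric positive-definite A and diffusion D = 2I, the stationary covariance of the zero-mean OU process is exactly A^{-1} (since it solves AΣ + ΣA^T = 2I), so the empirical covariance of stationary samples estimates the inverse. Drawing directly from the stationary Gaussian here stands in for measuring a relaxed device, and the matrix is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)

# For symmetric positive-definite A with diffusion D = 2I, the stationary
# covariance Sigma of dx = -A x dt + N(0, 2I dt) solves
# A Sigma + Sigma A^T = 2I, i.e. Sigma = A^{-1}
A = np.array([[3.0, 1.0], [1.0, 2.0]])
Sigma = np.linalg.inv(A)  # exact stationary covariance, used to draw samples

# Drawing from the stationary distribution stands in for device measurements
samples = rng.multivariate_normal(np.zeros(2), Sigma, size=100_000)
A_inv_est = samples.T @ samples / len(samples)  # empirical second moment
```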

## *thermox*: Fast and Exact OU Processes

Studying our SPU at large scale (think thousands of dimensions) requires solving SDEs of the form described above. This is computationally expensive, so we really needed a fast and efficient tool to simulate OU processes, and for it to work on any machine. So, as `jax` aficionados, we built `thermox`.

On a digital computer, one needs to discretize time to simulate an OU process (and this is what is typically done in the community). SDEs are especially sensitive to this discretization, and although some second-order solvers are more robust, simulating SDEs accurately is generally a pain.

Let us consider the case where we have an initial point x_0 and want to sample a point from the process x_T at time T. The simplest way to do this is to use the Euler-Maruyama method, a first-order method that discretizes the SDE as:

$$x_{t+\delta} = x_t - \delta A (x_t - b) + \sqrt{\delta}\, w,$$

with w a random draw from N(0, D). Here δ is a step size that is required to be small enough to ensure the discretization error is small. The process is repeated until we reach time T.
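In plain numpy, that update rule looks as follows (a hedged sketch of the textbook method, with arbitrary illustrative parameters; as discussed next, `thermox` itself avoids this loop entirely):

```python
import numpy as np

def euler_maruyama(x0, A, b, D, T, delta, rng):
    """Integrate dx = -A(x - b) dt + N(0, D dt) from time 0 to T with step delta."""
    sqrt_D = np.linalg.cholesky(D)
    x = x0.copy()
    for _ in range(int(T / delta)):
        w = sqrt_D @ rng.standard_normal(x.shape[0])  # draw from N(0, D)
        x = x - delta * (A @ (x - b)) + np.sqrt(delta) * w
    return x

# Illustrative 2-dimensional example
rng = np.random.default_rng(1)
A = np.array([[2.0, 0.5], [0.5, 2.0]])
b = np.zeros(2)
D = np.eye(2)
x_T = euler_maruyama(np.zeros(2), A, b, D, T=1.0, delta=1e-3, rng=rng)
```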

The time complexity of running n_steps steps is O(d²n_steps) because of the matrix-vector multiplications, which doesn't look too bad at first. But remember there will be some discretization error, which can only be kept down by taking a large number of steps^{2}. For long OU trajectories where we want samples at many times, this can quickly become intractable. As an example, in molecular dynamics simulations, up to billions of time steps are needed. Most existing libraries, such as `diffrax`, would typically run OU processes in a similar way, with better solvers that still suffer from discretization error. Thankfully, it turns out that we don't need to discretize time at all if we want to simulate multivariate OU processes.

With `thermox`, a trajectory is generated very differently. For a multivariate OU process, there are analytical expressions for the mean and covariance matrix at all times. The mean reads:

$$m(t) = b + e^{-At}(x_0 - b),$$

and the covariance:

$$\Sigma(t) = \Sigma_s - e^{-At}\, \Sigma_s\, e^{-A^\top t},$$

where the stationary covariance Σ_s solves the Lyapunov equation AΣ_s + Σ_s A^⊤ = D, assuming Σ(0) = 0. By diagonalizing A and D, it turns out we can construct these matrices (with an O(d³) preprocessing step), and simply sample from N(m(t), Σ(t)) for any time t to obtain a sample. This is convenient as it avoids discretization errors altogether and eliminates the dependence on the number (and spacing) of time steps from the time complexity. The total `thermox` complexity is O(d³ + nd²) to collect n samples, which can provide a huge improvement over the O(nd²n_steps) complexity of discretized methods.^{3}
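For intuition, here is a small numpy/scipy sketch of such exact, discretization-free sampling using these closed-form expressions. This is only a conceptual sketch with illustrative matrices; `thermox` implements the idea more efficiently via diagonalization of A and D.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def exact_ou_sample(x0, A, b, D, t, rng):
    """Draw one exact sample of the OU process at time t (no discretization).

    Uses the closed-form mean m(t) = b + e^{-At}(x0 - b) and covariance
    Sigma(t) = Sigma_s - e^{-At} Sigma_s e^{-A^T t}, where the stationary
    covariance Sigma_s solves the Lyapunov equation A Sigma + Sigma A^T = D.
    """
    E = expm(-A * t)
    mean = b + E @ (x0 - b)
    Sigma_s = solve_continuous_lyapunov(A, D)
    cov = Sigma_s - E @ Sigma_s @ E.T
    return rng.multivariate_normal(mean, cov)

# Illustrative 2-dimensional example
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 2.0]])
b = np.zeros(2)
D = np.eye(2)
sample = exact_ou_sample(np.zeros(2), A, b, D, t=5.0, rng=rng)
```

No matter how large t is, a sample costs a single Gaussian draw, whereas a discretized solver would need t/δ steps to get there.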

## Comparison to *diffrax*

`diffrax` is an awesome library for discretizing generic SDEs, but as shown above we can simulate OU processes **exactly**. Let's show that numerically! Here we compare `thermox` and `diffrax` to simulate a multivariate OU process. Details of the code can be found here. Here are the results we get:

And there you see it. For d = 100 and a large number of time steps, you get up to an 800× speedup *and* the simulation is exact (no discretization error).

## Quick Example

Letβs simulate a 5-dimensional OU process with `thermox`

:

```python
import thermox
import jax
import jax.numpy as jnp

# Set random seed
key = jax.random.PRNGKey(0)

# Timeframe
ts = jnp.arange(0, 1, 0.01)

# System parameters for a 5-dimensional OU process
A = jnp.array([[2.0, 0.5, 0.0, 0.0, 0.0],
               [0.5, 2.0, 0.5, 0.0, 0.0],
               [0.0, 0.5, 2.0, 0.5, 0.0],
               [0.0, 0.0, 0.5, 2.0, 0.5],
               [0.0, 0.0, 0.0, 0.5, 2.0]])

b, x0 = jnp.zeros(5), jnp.zeros(5)  # Zero drift displacement vector and initial state

# Diffusion matrix with correlations between x_1 and x_2
D = jnp.array([[2, 1, 0, 0, 0],
               [1, 2, 0, 0, 0],
               [0, 0, 2, 0, 0],
               [0, 0, 0, 2, 0],
               [0, 0, 0, 0, 2]])

# Collect samples
samples = thermox.sample(key, ts, x0, A, b, D)
```


Ok, let's plot those OU samples:


```python
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 5))
plt.plot(ts, samples, label=[f'Dimension {i+1}' for i in range(5)])
plt.xlabel('Time', fontsize=16)
plt.ylabel('Value', fontsize=16)
plt.title('Trajectories of 5-Dimensional OU Process', fontsize=16)
plt.legend(fontsize=16)
plt.show()
```


How pretty!

## A First Thermodynamic Computing Simulator

As mentioned, `thermox` is a great tool to run benchmarks for thermodynamic computers. With `thermox` you can run such thermodynamic simulations in a single line of code (similarly to the widely used `scipy.linalg.solve`):

```python
x_s = thermox.linalg.solve(A, b, num_samples=1000)
```


Here, 1000 samples are collected from an OU process and then averaged to obtain an approximate solution to the linear system Ax = b, with a preset sampling interval. In fact, we believe that any thermodynamic advantage experiment would have to beat `thermox`, as it provides a baseline for digital computers simulating OU processes. We can therefore view `thermox` as the first open-source thermodynamic computing simulator. This is analogous to how quantum computing simulators (e.g., IBM Qiskit or tensor-network methods) provide a classical baseline for quantum computers to beat. For further details on this, please take a look at this notebook. We hope to see the community run its own thermodynamic experiments, propose new ones, and build custom applications with `thermox`!

## *posteriors* and *thermox*

We recently released `posteriors`, a Python library for uncertainty quantification which features a few methods that can leverage `thermox`. For example, suppose you wish to apply a Laplace approximation to a desired application and sample from it. You can do this with `thermox` to get an idea of how using a thermodynamic device to collect samples would influence your end result. In the long term, our vision is to use a real thermodynamic device in conjunction with `posteriors` for uncertainty quantification in machine learning.
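To illustrate the connection in the simplest possible terms (a sketch with hypothetical numbers, not the `posteriors` API): a Laplace approximation is a Gaussian N(θ*, H^{-1}), with H the Hessian of the negative log-posterior at the MAP estimate θ*, and an OU process with drift matrix A = H, displacement b = θ*, and diffusion D = 2I has exactly this Gaussian as its stationary distribution (for symmetric H).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Laplace approximation: MAP estimate theta_star and Hessian H
# of the negative log-posterior, giving the Gaussian N(theta_star, H^{-1})
theta_star = np.array([1.0, -0.5])
H = np.array([[4.0, 1.0], [1.0, 3.0]])

# The OU process dx = -H(x - theta_star) dt + N(0, 2I dt) has stationary
# covariance Sigma solving H Sigma + Sigma H^T = 2I, i.e. Sigma = H^{-1},
# so sampling the relaxed process samples the Laplace approximation
Sigma = solve_continuous_lyapunov(H, 2 * np.eye(2))

# Drawing from the stationary Gaussian stands in for device measurements
rng = np.random.default_rng(0)
laplace_samples = rng.multivariate_normal(theta_star, Sigma, size=10_000)
```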

## Whatβs next?

`thermox`

is an open source project, we welcome contributions, and particularly further research into thermodynamic computation and its applications in machine learning and other topics, such as finance and evolutionary biology! We hope that `thermox`

can accelerate research and our journey to practical thermodynamic computation. Come join the `thermox`

fun on GitHub!


- With the matrix D being symmetric positive-definite.
- And therefore a small step size δ.
- Of course, thermodynamic devices will ultimately be faster than `thermox`. More specifically, for solving a linear system, we expect a runtime scaling as O(dκ²), with κ the condition number of A, as shown here.