A 1 GW wind portfolio produces roughly 3 TWh of electricity annually. Under typical hedging and imbalance-penalty structures, a 20-percentage-point improvement in forecast accuracy translates to approximately €25M per year in avoided imbalance costs.
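The arithmetic behind that figure can be sanity-checked in a few lines. The capacity factor and the blended per-MWh imbalance cost below are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope check of the forecast-value claim.
# Both parameters below are assumptions for illustration only.

capacity_gw = 1.0
capacity_factor = 0.34            # assumed; a typical value for a wind portfolio
hours_per_year = 8760

# Annual energy produced, in TWh.
annual_twh = capacity_gw * capacity_factor * hours_per_year / 1000

accuracy_gain = 0.20              # 20 pp of output no longer mis-forecast
imbalance_cost_eur_mwh = 42       # assumed blended hedging/imbalance cost

# MWh of mis-forecast volume avoided, and its annual value.
mwh_avoided = annual_twh * 1e6 * accuracy_gain
annual_value_eur = mwh_avoided * imbalance_cost_eur_mwh

print(f"{annual_twh:.1f} TWh/yr, ~€{annual_value_eur / 1e6:.0f}M/yr")
```

Under these assumptions the output lands at roughly 3 TWh and €25M per year, consistent with the claim; the real number depends on the portfolio's actual capacity factor and market's penalty structure.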
In physics-governed industries, most R&D money buys physical experiments that fail. Compressing that experiment loop with learned models of the underlying physics is one of the largest unsolved problems in industry. It is the shape of problem this approach is built for.
Atmosphere, energy, manufacturing, transport, materials, life sciences. Each is governed by physics. Each is a candidate for a foundation model trained on its own data. Atmosphere is the one we have shipped. The rest is the thesis we are testing.
That is the value of one objective in one domain. There are many.
We trained EPT-2 on the atmosphere because it is the largest continuous record of physics on Earth. The base learned the underlying mechanics, not the weather. Specialize it for an airfoil or a shock wave and the physics carries over. The list below is what the same base extends to, the way a language foundation model extends from text to code. One foundation, many use cases. Not a separate model per industry.