New AI Approach Weighs Data ‘Temperature’ to Improve Prediction Accuracy
By Deborah Borfitz
February 12, 2026 | Anything that can be defined as a system—a mixed-bag list that includes physical materials like silicon and graphene as well as patient-specific implantable devices and the human brain—is subject to the law of entropy, meaning its disorder irreversibly increases over time if left alone. It turns out that this fundamental concept now has a more comprehensive treatment, dubbed zentropy by its creators at Penn State.
The approach dates to 2008 but acquired the name at the suggestion of an attendee at a COVID-era seminar out of Duke University, says Zi-Kui Liu, Ph.D., professor of materials science and engineering and lead developer of zentropy. The “z” stands for a German word used in statistical mechanics to represent the partition function that sums the probabilities of all possible configurations of a system.
Zentropy serves as a framework for predicting thermodynamic properties, which are possessed by almost all physical matter and systems, by combining microscopic quantum data with macroscopic statistical mechanics. It was taken to a whole new level once Liu started working with Wenrui Hao, Ph.D., professor of mathematics and director of the Center for Mathematical Biology at Penn State, who suggested that zentropy be used to solve the AI problem of making sense of heterogeneous, multi-source scientific data.
It took less than two years for the collaboration to give birth to ZENN, short for zentropy-embedded neural networks (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.2511227122). The theory had previously been used for feature selection in AI, but they had now solved “the whole AI problem,” says Liu, much as zentropy had captured all entropy of a system.
ZENN breaks the properties of data down into two parts: its “energy,” capturing the meaningful patterns or signals in the data, and its “intrinsic entropy,” capturing the noise, uncertainty or disorder in the measurements. The model also employs a “temperature” parameter that helps it recognize hidden differences between datasets.
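The decomposition described above can be sketched in a few lines. This is a minimal illustration, not the published ZENN architecture: the free-energy combination F = E − T·S, the function names, and the example logits are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def free_energy(logits, temperature=1.0):
    """Toy ZENN-style score combining the two parts described above:
    'energy' (the signal term) and 'intrinsic entropy' (the noise term),
    weighted by a per-dataset 'temperature'. Illustrative only."""
    p = softmax(logits / temperature)
    energy = -np.sum(p * logits)               # expected energy: the signal
    entropy = -np.sum(p * np.log(p + 1e-12))   # Shannon entropy: the noise
    return energy - temperature * entropy      # F = E - T*S

# A "hotter" (noisier) source weights the entropy term more heavily,
# letting a model discount it relative to a confident, "cold" source.
noisy = free_energy(np.array([0.1, 0.0, -0.1]), temperature=2.0)
clean = free_energy(np.array([3.0, -1.0, -2.0]), temperature=0.5)
```

Here the sharp, low-temperature source ends up with the lower (more favorable) free energy, mirroring how the temperature parameter lets the model weight trustworthy data over noisy data.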
Getting to ZENN
Entropy refers to the top-down view of the global property of a system, the scale at which observations can be made about uncertainties in a message or data (Shannon entropy), a physical system (quantum entropy), or decision tree algorithms (machine learning entropy), Liu explains. None of them, however, covers the whole entropy of a system.
The Penn State team's remedy was to add a partition function (commonly denoted by the letter Z in statistical mechanics) so that entropy can be calculated over multiple configurations of a system. The approach covers the disorder, fluctuations, and statistical probabilities across and within different configurations of a system, and it is designed to be a model- and parameter-free way of predicting material properties. "Zentropy captures the ensemble behavior of the system," Liu says.
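In reduced units, the idea reads as a short calculation: weight each configuration by its Boltzmann factor, normalize by the partition function Z, then combine the entropy within each configuration with the entropy across configurations. The configuration energies and entropies below are made-up numbers for illustration, not real material data.

```python
import numpy as np

k_B = 1.0                               # Boltzmann constant, reduced units
energies = np.array([0.0, 0.5, 1.2])    # energy of each configuration (toy values)
intra_S  = np.array([0.0, 0.3, 0.8])    # entropy *within* each configuration

def zentropy(T):
    """Total entropy over the ensemble of configurations:
    probability-weighted intra-configuration entropy, plus the
    Shannon entropy of the configuration probabilities themselves."""
    # Partition function Z sums Boltzmann factors of each
    # configuration's free energy F_k = E_k - T * S_k
    w = np.exp(-(energies - T * intra_S) / (k_B * T))
    Z = w.sum()
    p = w / Z                                   # probability of each configuration
    return p @ intra_S - k_B * np.sum(p * np.log(p))

S_total = zentropy(T=1.0)
```

The second term is what ordinary single-configuration treatments miss: even if every configuration were internally perfectly ordered, the system would still carry entropy from fluctuating among them.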
Putting thermodynamic entropy on top of a neural network model made zentropy “more physical, more generalizable, [and] more reliable,” adds Hao. A better model results from incorporating more fundamental, universal laws.
It was Hao’s suggestion to adapt zentropy to information theory where different types of data—e.g., imaging, blood biomarkers, and clinical records—are getting pulled for analysis from a variety of places using different machines with different measurement errors. When those data get integrated, there is no way to distinguish the data sources, he says.
Hao’s idea was to differentiate those sources by introducing the temperature of the data. In this way, scientists can determine how much of the data originated from one place or another. “Once you know how to cluster them, you can improve your prediction accuracy,” he says.
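A toy version of this "data temperature" idea: treat the local spread of each record's measurements as its temperature, then cluster records by it to recover the hidden sources. The variance-as-temperature mapping, the window size, and the threshold are all illustrative assumptions, not the method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical sources measuring the same quantity with different noise:
# a precise lab instrument ("cold") and a noisy field sensor ("hot").
lab   = rng.normal(loc=5.0, scale=0.1, size=200)
field = rng.normal(loc=5.0, scale=1.5, size=200)
mixed = np.concatenate([lab, field])    # sources are no longer labeled

def estimated_temperature(samples, window=20):
    """Rolling variance as a crude per-record 'temperature'."""
    return np.array([samples[max(0, i - window):i + 1].var()
                     for i in range(len(samples))])

temps = estimated_temperature(mixed)
# Records separate by temperature, recovering the hidden sources,
# so downstream predictions can weight each cluster appropriately.
labels = temps > 0.5
```

Once records are clustered by temperature, a model can weight or calibrate each source separately instead of treating the pooled data as homogeneous, which is the accuracy gain Hao describes.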
This is believed to be the first time thermodynamic laws have been embedded directly into deep learning architectures, Hao continues. When put to the test, ZENN outperformed larger, more complex neural networks while remaining more robust when data quality varied.
Creating Digital Twins
ZENN has potential applications in various fields, including material science, medical research, and quantum computing, Hao points out. He and Liu are currently engaged in a dozen projects with different Penn State colleagues using the novel framework.
One of those projects is a collaboration with a neurologist in the College of Medicine that aims to create digital twins intended to map the trajectory of Alzheimer’s disease and the impact of various treatments, says Hao. Researchers will develop a ZENN model based on data from the Alzheimer’s Disease Neuroimaging Initiative and then combine that with anonymized clinical data from Penn State College of Medicine (e.g., blood work, brain scans, and genetics and behavioral information) to create digital twins at both the population and individual levels.
In this case, ZENN is being applied to PET scan and MRI data to determine when a model renders a good prediction. Based on preliminary results, prediction accuracy sits at about 90%, versus between 50% and 60% with traditional machine learning methods, Hao says.
Ultimately, researchers hope that the personalized predictions will be used to inform the optimal treatment for real-world patients, he adds. Several monoclonal antibody therapies designed to remove beta-amyloid plaques from the brain are now on the market, and many experimental Alzheimer’s drugs are currently under development. The digital twins will use reinforcement learning to maximize predictive performance, enabling personalized and real-time clinical decision-making.
The work is being supported by Penn State’s Institute for Computational Data Science, a one-year grant from its Eberly College of Science Office for Innovation, and a seed grant from its Center for Biodevices.
Customizing Implants
In another project at Penn State with a group of orthopedic surgeons, ZENN will be utilized to better customize femur implants for individual patients to prevent long-term failure and intraoperative injury, reports Liu. It is highly challenging to create a device that perfectly matches a person’s anatomy, a process that involves the use of 3D models of the bone based on a patient’s X-rays and CT scans.
“The initial focus is on quantifying fractures and understanding how the artificial structure functions so we can detect early whether healing is progressing toward union or drifting toward nonunion, rather than discovering failure only long after the healing process has already stopped,” Liu says.
ZENN is to be used in a series of simulation exercises to assess when the structure of the implant is good enough to proceed with surgery, says Liu. It could also be used on X-rays taken at follow-up patient visits to reassess structural stability of the implant and, if there are any issues detected, promptly take corrective action.
“A big problem for these doctors is they must wait for a problem to show up to treat it,” he notes. The discovery can sometimes take more than a year, by which time the implant may have caused significant damage. Surgeons would rather have a read on needed changes months earlier when the situation is easier to remedy, and a patient’s pain has not become severe.
Parallel Paths
The fastest-moving ZENN-related project, and the subject of another soon-to-publish paper, is the use of the model to predict housing prices, mental health ratios, and PM2.5 particulate matter in the air, Liu shares. When put to the test, the correlation between ZENN predictions and actual housing-price data was twice that achieved by existing machine learning methods.
Liu says he is currently in discussions about starting a company to develop accessible, real‑world products, such as foundation models designed to generate predictions across a wide range of geographic data for local governments.
In yet another forthcoming paper, Liu and Hao together with Penn State statistician Bing Li, Ph.D., will be examining similarities between trajectories of human and artificial intelligence. Like neural networks, the human brain does a lot of abstraction to turn information into coherent laws, which aptly describes thermodynamics, says Liu. “It covers everything—how a system changes, how a system responds, how a system evolves, [and] how a system crashes.”
AI also follows a “very similar trajectory” to human intelligence, Liu points out. Both started with neurons, with AI evolving to graph neural networks and then large language models, and humans to spoken and written language and the development of the sciences of calculus, thermodynamics, and quantum mechanics. The paper will be the second published by the open-access publication ZENNtropy.
Furthermore, the team is actively investigating ZENN’s potential to provide internal containment for AI safety through two intrinsic mechanisms: structural containment via configuration partitioning and dynamic containment via zentropy‑based regulation of driving forces and stability, complementary to the external containment approaches broadly discussed in the community.