>Here we demonstrate how this vision became reality by first combining state-of-the-art artificial intelligence (AI) models and traditional physics-based models on cloud high-performance computing (HPC) resources to quickly navigate through more than 32 million candidates and predict around half a million potentially stable materials. By focusing on solid-state electrolytes for battery applications, our discovery pipeline further identified 18 promising candidates with new compositions and rediscovered a decade's worth of collective knowledge in the field as a byproduct. By employing around one thousand virtual machines (VMs) in the cloud, this process took less than 80 hours.
>rediscovered a decade's worth of collective knowledge
Unless one can guarantee this knowledge wasn't in the training data, it's not rediscovered; it's regurgitated.
Good example of the bias that exists with AI: when the output/performance can be interpreted in two different manners, I consistently find people unconsciously choosing the option painting the AI in a better light.
I don't think AI / deep learning is useless, but I do think it's overrated at the moment.
There are two major frameworks for managing DFT calculations: AiiDA and Fireworks. What's strange to me is that Microsoft has put a lot of support into AiiDA (including hiring a few high-profile developers), yet in the paper they use Fireworks to run the simulations.
What AI? From the article they iterated through a lot of possible materials and calculated how they'd fare according to some physics based criteria. They also mention HPC.
So they did a brute force search through possible candidates to find promising ones. Which makes a lot of sense but ... AI?
Guessing they need more funding now so they're making it sound like it's part of the LLM fad.
> Machine learning (ML) models for materials science have the potential to vastly expedite the computational discovery process. State-of-the-art ML models can predict the results of physics-based quantum mechanical calculations but are several orders of magnitude faster, making them ideal for predicting general material properties [5–7]. In addition to direct property prediction, combining universal ML potentials such as M3GNet [8], CHGNet [9], and GNoME [10] has made it possible to perform geometric optimization, and hence evaluate thermodynamic stability, for arbitrary combinations of elements and structures. The significant speed advantage of ML-based techniques over direct simulation has made it possible to explore materials across a vast chemical space that greatly exceeds the number of known materials.
Models mentioned:
M3GNet: M3GNet is a new materials graph neural network architecture that incorporates 3-body interactions. A key difference with prior materials graph implementations such as MEGNet is the addition of the coordinates for atoms and the 3×3 lattice matrix in crystals, which are necessary for obtaining tensorial quantities such as forces and stresses via auto-differentiation.
CHGNet: A pretrained universal neural network potential for charge-informed atomistic modeling (see publication). Crystal Hamiltonian Graph neural Network is pretrained on the GGA/GGA+U static and relaxation trajectories from the Materials Project, a comprehensive dataset consisting of more than 1.5 million structures from 146k compounds spanning the whole periodic table.
GNoME: GNoME is a state-of-the-art graph neural network (GNN) model. The input data for GNNs take the form of a graph that can be likened to connections between atoms, which makes GNNs particularly suited to discovering new crystalline materials.
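Since several comments ask what these potentials actually do: a minimal sketch of the workflow they enable, with a Lennard-Jones dimer standing in for the learned model. Everything here is a toy stand-in, not the M3GNet/CHGNet API; the real pipelines replace the hand-written energy function with a graph network trained on DFT data.

```python
# What a "universal ML potential" is used for: given atomic positions,
# return energy and forces, then walk the geometry downhill until the
# forces vanish. A Lennard-Jones dimer stands in for the learned model
# (a toy, not the real GNN API).

def energy_and_force(r, eps=1.0, sigma=1.0):
    """Energy of a two-atom system at separation r, plus the force -dE/dr."""
    sr6 = (sigma / r) ** 6
    energy = 4 * eps * (sr6 ** 2 - sr6)
    dedr = 4 * eps * (-12 * sr6 ** 2 + 6 * sr6) / r
    return energy, -dedr

def relax(r0, step=1e-3, iters=20_000):
    """Steepest-descent geometric optimization, the step ML potentials make cheap."""
    r = r0
    for _ in range(iters):
        _, force = energy_and_force(r)
        r += step * force
    return r

# The dimer relaxes to the analytic LJ minimum at 2**(1/6) * sigma.
```

The point of the speed-up claims in the abstract is that each call to the surrogate replaces a quantum mechanical calculation, so this kind of relaxation becomes cheap enough to run across millions of candidate structures.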
They used a machine-learning model to predict the physical properties of each material. That is how they managed to narrow it down from millions of possibilities and find something worth actually trying to synthesize.
It seems like a reasonable strategy. They have way too many candidates so it isn’t as if throwing away a bunch because the AI is misfiring would be a problem.
In an age where everyone questions whether conventional intelligence works, artificial intelligence will surely be popular.
But yes, it probably is just expert software. We sometimes call it intelligent, although it may just be a physics model with a lot of engineering experience baked into pragmatic assumptions that reduce the brute forcing needed to reach likely-optimal solutions.
Honestly I hate this new trend of AI/ML papers not even describing the model anymore. They used a graph neural network model called M3GNet. It's mentioned in the APPENDIX.
Ok so they did use ML for something. They did a brute-force exhaustive search across the whole solution space - which is not ML - and used neural networks to reduce the computing power needed for the "fitness function" (yes, a GA term, but I understand they didn't use GAs here).
Nowhere in the article linked by HN do they say that, so it's easy to assume their use of ML is just marketing.
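The two-stage funnel described above (cheap surrogate first, expensive physics only on survivors) can be sketched in a few lines. The candidate pool, the surrogate, and the thresholds below are all invented for illustration; "true stability" is trivially the candidate's own value so the toy stays self-contained.

```python
import random

random.seed(0)

# Stand-in candidate pool: in the paper this is ~32M element/structure
# combinations; here it is just numbers whose "true stability" is the
# value itself.
candidates = [random.random() for _ in range(100_000)]

def cheap_surrogate(x):
    """Fast but noisy stability estimate (the ML 'fitness function')."""
    return x + random.gauss(0, 0.05)

def expensive_check(x):
    """Slow, accurate physics-based calculation (trivially exact here)."""
    return x

# Stage 1: the surrogate scores every candidate; keep the top 1%.
shortlist = sorted(candidates, key=cheap_surrogate, reverse=True)[:1000]

# Stage 2: the expensive check runs only on the shortlist.
confirmed = [x for x in shortlist if expensive_check(x) > 0.99]
```

The surrogate's noise loses some genuinely good candidates, but with this many to start from that hardly matters, which is why the strategy is sound even if the model misfires on individual cases.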
Potatoes work as batteries with very little lithium. New Scientist apparently doesn't care about those properties that make batteries useful, so maybe they'll write up potatoes next.
This is the next thing, a renewable type of battery storage device that is also carbon net negative ? Sign me the fuck up ! Who wants to fund the pot-A.I.-to battery startup with me ?
Replacing Lithium with Sodium, the only information this article suggests came from the AI, is about as trivial as you get in chemistry. That is a question that a high school teacher would ask their class after introducing the periodic table and expect everyone to know.
> Vijay Murugesan at the Pacific Northwest National Laboratory in Washington state was one of the scientists who picked up the phone. He and his colleagues suggested additional screening criteria for the AI. After more elimination rounds, Murugesan’s team ultimately picked one of the AI’s suggestions to synthesise in the lab. It stood out because half of what Murugesan would have expected to be lithium atoms were replaced with sodium. He says that this is a very novel recipe for an electrolyte and that having the two elements together opens questions about the basic physics of how the material works inside a battery.
This article would be a lot better if there were anything material about what was novel. Mixing the two isn't new. If the AI were anything more than a word-mashing stochastic parrot, it could have come up with a representation that piqued Prof. Murugesan's interest, or it could be that he had an idea while looking at gibberish elucubrations, the way people find inspiration in their shower.
But unless the article explains what those representations are, we can’t tell.
> If the AI were anything more than a word-mashing stochastic parrot, it could have come up with a representation that piqued Prof. Murugesan's interest or it could be that he had an idea while looking at gibberish elucubrations
A lazy critique of llms but that is not what was used here. The paper is linked in the comments, if you're actually interested.
It does look like they are describing a sort of Lithium/Sodium mix rather than a 'pure' Sodium-ion battery, which is (at least to me) slightly novel.
It would then raise a bigger question: If the AI 'came up with it', does that mean that it can't be patented? Or does that depend on whether it came up with the manufacturing technique?
There are other axes to consider: cost, thermal stability, supply chain reliability, toxicity. Any of these might make sodium batteries "better" than lithium-ion for some applications (such as home or datacenter UPS batteries).
Yes, agree with you there: for static storage, weight is much less of a concern, which is probably why we'll see sodium batteries take over some of this space.
For primary applications (cars, portable devices, etc.), sadly it still looks like lithium is the best option.
I mean, you can call it AI to sound fancy, but in the end they're just using computation to predict the properties of each material and then optimizing for the properties they want. If that's all it takes to call something AI, then it would have been more impressive to find a new design without using AI.
Basically I'm not sure why this is touted as 'AI comes up', instead of 'scientists use computer to'.
They used a machine-learning model to predict the physical properties of each candidate material. Since ML and AI are (incorrectly) basically synonymous at this point, the news outlet is probably the one that s/ML/AI/'d.
The original paper is a bit better, but focusses a lot on the novelty of combining pre-existing models and datasets with a lot of computation resources (courtesy of Microsoft, presumably) and calling it AI, and a lot less on the actual material.
That doesn't really excuse it; the news outlet has a responsibility to educate itself on basic matters (e.g. ML vs AI), even if they have refused to do so before now.
Granted, I've not met many newsrooms with a serious conceptualization of "responsibility" beyond whatever bullshit they learned in school.
ML is either a subset of or synonymous with AI depending on how you want to frame it. This has been true for a long time, it's not a new thing. What does seem new is people trying to argue that ML isn't AI. It's not a historically useful distinction and it's not useful for understanding articles now.
> What does seem new is people trying to argue that ML isn't AI.
Well, ML is a well-defined class of processes; calling it AI seems a little disingenuous. Is a beam search still considered AI? How about Markov chains? It's much easier to refer to the specific processes rather than vague floating signifiers if you want to communicate clearly, which I would argue is a primary responsibility of journalists. It doesn't bode well for reporting if journalists aren't zeroed in on this problem of de-jargonizing tech reporting in the first place, and this leaves them vulnerable to essentially marketing ploys that inherently misrepresent the capabilities of the software.
Why? It's been done for a long time. The whole field has been referred to as AI for decades, nobody was standing up in my lectures saying "No! SVMs aren't AI!".
> It doesn't bode well for reporting if journalists aren't zeroed in on this problem of de-jargonizing tech reporting in the first place, and this leaves them vulnerable to essentially marketing ploys that inherently misrepresent the capabilities of the software.
Referring to the things used here as AI is entirely consistent with how I've seen the term used dating back beyond when people would ask me if AI was to do with aliens. Simpler things have been called AI in the public sphere too, so it's not a new thing being sprung on people. I don't think people have generally been confused by a camera that says it has AI thinking it's sentient.
> The whole field has been referred to as AI for decades
Yea, everyone else has been laughing at you for this the whole time. It's a dumb term for anything but encouraging rubes to fork over cash for stuff that looks like magic.
> Referring to the things used here as AI is entirely consistent with how I've seen the term used dating back beyond when people would ask me if AI was to do with aliens. Simpler things have been called AI in the public sphere too, so it's not a new thing being sprung on people.
Maybe you should consider communicating more directly and effectively.
> I don't think people have generally been confused by a camera that says it has AI thinking it's sentient.
You should talk to more people. This is 100% a problem.
> Yea, everyone else has been laughing at you for this the whole time
Almost nobody really knew about the field not that long ago.
> Maybe you should consider communicating more directly and effectively.
I don't understand what you mean. People had seen the film AI and misremembered it having aliens in, and when I said my course was AI that's what they thought about.
> You should talk to more people. This is 100% a problem.
I don't believe you that people generally believe that cameras and the like have been sentient for years.
Yes. There's plenty of youtube clips from last week's CES that make this point amusingly. The word "AI" out of the mouth of every presenter, every 2 seconds.
https://arxiv.org/abs/2401.04070