Theoretical and Applied Mechanics Letters (May 2025)

Fine-tuning a large language model for automating computational fluid dynamics simulations

Zhehao Dong, Zhen Lu, Yue Yang

DOI: https://doi.org/10.1016/j.taml.2025.100594
Journal volume & issue: Vol. 15, No. 3, p. 100594

Abstract


Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce an approach centered on domain-specific LLM adaptation: fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, with 88.7% solution accuracy and an 82.6% first-attempt success rate. Our fine-tuned model significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
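To make the generate-run-correct loop described above concrete, the sketch below shows one plausible shape for such a pipeline. All helper names (query_llm, write_case, run_case) and the iteration cap are assumptions for illustration, not the authors' API; the actual system is in the repository linked above.

```python
# Minimal sketch of a generate-run-correct loop for LLM-driven CFD setup.
# Hypothetical helpers: query_llm, write_case; not the authors' implementation.
import subprocess
from pathlib import Path

MAX_ITERATIONS = 5  # assumed cap on correction rounds


def query_llm(prompt: str) -> str:
    """Placeholder for a call to the fine-tuned Qwen2.5-7B-Instruct model."""
    raise NotImplementedError


def write_case(config: str, case_dir: Path) -> None:
    """Placeholder: materialize the generated OpenFOAM case files on disk."""
    raise NotImplementedError


def run_case(case_dir: Path) -> subprocess.CompletedProcess:
    """Run the case; an Allrun script is a common OpenFOAM convention."""
    return subprocess.run(["./Allrun"], cwd=case_dir,
                          capture_output=True, text=True)


def auto_cfd(description: str, case_dir: Path) -> bool:
    """Translate a natural-language description into an OpenFOAM case,
    run it, and feed solver errors back to the model until it succeeds."""
    config = query_llm(f"Generate an OpenFOAM case for: {description}")
    for _ in range(MAX_ITERATIONS):
        write_case(config, case_dir)
        result = run_case(case_dir)
        if result.returncode == 0:
            return True  # success on the first pass needs no correction
        # Corrector step: return the solver's error log to the model.
        config = query_llm(
            f"The case failed with this error:\n{result.stderr}\n"
            f"Revise the configuration for: {description}"
        )
    return False
```

In this reading, the first-attempt success rate reported in the abstract corresponds to cases where the loop returns on its first iteration, and the correction iterations correspond to subsequent passes through the error-feedback step.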

Keywords