Abstract
Background: Large Language Models (LLMs) have transformed research and industry applications; however, cloud deployment decisions remain complex and poorly documented, particularly for academic researchers operating under budget constraints. Systematic guidance on infrastructure selection for LLM-based research is limited.

Objective: This study provides a comprehensive empirical evaluation of cloud-based LLM deployment architectures, examining inference efficiency, serverless platform availability, and architectural trade-offs across major cloud providers to deliver actionable guidance for budget-constrained researchers.

Methods: The author evaluated 32 open-source LLMs ranging from 0.6 billion to 1 trillion parameters across serverless and Bring Your Own Container (BYOC) deployment configurations. Using the Belebele benchmark, the author analyzed cost-efficiency relationships, serverless platform availability, and metrics exposure across Amazon SageMaker, Amazon Bedrock, Azure Serverless, and Hugging Face-compatible providers.

Results: Model performance follows a logarithmic scaling relationship with parameter count (R² = 0.727) and deployment cost (R² = 0.639). Models in the 30-50B parameter range achieve 85-90% of maximum accuracy at a fraction of the cost of frontier models. However, serverless availability remains fragmented: only 34.4% of the examined models are accessible via serverless endpoints, with minimal cross-platform redundancy (6.2%). Deployment architecture introduces a fundamental trade-off: serverless platforms expose 71% fewer metrics than BYOC approaches while eliminating infrastructure management overhead and idle costs.

Conclusion: These findings provide practical guidance for researchers selecting cloud infrastructure under budget constraints. Models in the 7-14B range offer optimal cost efficiency, while the 30-50B range maximizes accuracy per dollar for demanding tasks. The results also challenge the prevailing emphasis on ever-larger models, as diminishing returns become substantial beyond 30B parameters. Persistent gaps in serverless availability and observability highlight the need for greater standardization across cloud platforms.
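The logarithmic scaling relationship reported in the Results can be made concrete with a short fit. The sketch below, in Python with NumPy, fits accuracy = a + b·ln(parameter count) by ordinary least squares and computes R²; the data arrays are hypothetical placeholder values chosen only to illustrate the procedure, not the paper's Belebele measurements.

```python
# Minimal sketch of a logarithmic scaling fit of the kind described in the
# abstract: accuracy ≈ a + b * ln(parameter count). The arrays below are
# hypothetical placeholders; in the study, accuracy would come from the
# Belebele benchmark and params from each model's parameter count.
import numpy as np

params_b = np.array([0.6, 7.0, 14.0, 32.0, 70.0, 405.0])  # size in billions (hypothetical)
accuracy = np.array([0.41, 0.62, 0.68, 0.74, 0.78, 0.81])  # benchmark accuracy (hypothetical)

# Fit accuracy = a + b * ln(params) by least squares on the log-transformed x.
x = np.log(params_b)
b, a = np.polyfit(x, accuracy, 1)  # polyfit returns [slope, intercept]

# Coefficient of determination R^2 for the fitted model.
pred = a + b * x
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"accuracy ≈ {a:.3f} + {b:.3f}·ln(params), R² = {r2:.3f}")
```

Fitting against ln(params) rather than the raw parameter count is what produces the diminishing-returns shape: each doubling of model size adds a constant b·ln 2 to predicted accuracy, consistent with the abstract's observation that returns diminish substantially beyond 30B parameters.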