Zero-shot performance analysis of large language models in sumrate maximization

Abstract

Large language models (LLMs) have revolutionized the field of natural language processing and are increasingly used as a one-stop solution for a wide variety of tasks. In networking, LLMs can also play a major role in resource optimization and sharing. Sumrate maximization has long been a crucial objective for resource optimization in the networking domain, yet the optimal or sub-optimal algorithms it requires can be cumbersome to understand and implement. An attractive alternative is to leverage the generative power of LLMs for such tasks, since doing so demands no prior algorithmic or programming knowledge. A zero-shot analysis of these models is therefore necessary to determine the feasibility of using them for this purpose. Using different combinations of total cellular users and total device-to-device (D2D) pairs, our empirical results suggest that the maximum average efficiency of these models for sumrate maximization, relative to state-of-the-art approaches, is around 58%, obtained with GPT. The experiments also indicate that some of the large language model variants currently in use are not suitable for numerical and structured data without fine-tuning of their parameters.
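
For readers unfamiliar with the objective, the sum rate of a shared-spectrum system is conventionally the aggregate Shannon spectral efficiency, the sum of log2(1 + SINR) over all cellular and D2D links, and the efficiency figure quoted above is the ratio of the LLM-achieved sum rate to that of a state-of-the-art baseline. The Python sketch below illustrates this metric under those standard conventions; the scenario, SINR values, and variable names are illustrative assumptions, not data or code from the paper.

```python
import math

def sum_rate(sinr_values):
    """Aggregate Shannon spectral efficiency (bits/s/Hz): sum of log2(1 + SINR) over all links."""
    return sum(math.log2(1.0 + sinr) for sinr in sinr_values)

# Hypothetical scenario: 3 cellular users and 2 D2D pairs sharing the spectrum.
# The SINR values below are illustrative placeholders, not results from the paper.
llm_allocation_sinr      = [6.0, 4.5, 3.0, 12.0, 9.0]   # SINRs under an LLM-suggested allocation
baseline_allocation_sinr = [9.0, 7.0, 5.0, 18.0, 14.0]  # SINRs under a state-of-the-art optimizer

llm_rate = sum_rate(llm_allocation_sinr)
baseline_rate = sum_rate(baseline_allocation_sinr)

# Relative efficiency of the LLM allocation, analogous to the ~58% figure quoted in the abstract.
print(f"LLM sum rate:        {llm_rate:.2f} bits/s/Hz")
print(f"Baseline sum rate:   {baseline_rate:.2f} bits/s/Hz")
print(f"Relative efficiency: {llm_rate / baseline_rate:.2%}")
```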
