Research Article · Open Access

Benchmarking Llama 3 70B for Code Generation: A Comprehensive Evaluation

Pınar Ersoy1,
Mustafa Erşahin2
1Dataroid
2Commencis
Published: May 31, 2024
DOI: 10.56038/oprd.v4i1.444
Vol. 4, No. 1 · pp. 52–58

Abstract

This study benchmarks the capabilities of Llama 3 70B, a 70-billion-parameter large language model (LLM), for code generation tasks. To effectively train and fine-tune this massive model, we integrate PyTorch Fully Sharded Data Parallel (FSDP) [1], [2] for distributed training and Quantized Low-Rank Adaptation (Q-LoRA) [7] for efficient fine-tuning. We address the challenges of distributed training, including communication overhead and synchronization complexity, through optimization strategies such as gradient accumulation, optimizer state sharding, and mixed precision training. Additionally, we employ advanced training techniques such as curriculum learning, dynamic batch sizing, and adaptive optimization algorithms to enhance model performance and training efficiency. Our primary focus is evaluating the performance of the fine-tuned Llama 3 70B model on two widely recognized code generation benchmarks: HumanEval [8] and MBPP [9]. HumanEval assesses the model's ability to translate natural language problem descriptions into functionally correct code, while MBPP evaluates its proficiency in solving complex programming problems by generating accurate Python code. We present detailed performance results on these benchmarks, analyzing the model's strengths and limitations in various code generation scenarios. Furthermore, we compare the impact of our training and fine-tuning methodologies on scalability, memory efficiency, and training speed, demonstrating the feasibility and efficiency of our approach. This benchmark study offers valuable insights for researchers and practitioners exploring the application of LLMs to code generation. It provides a comprehensive evaluation of Llama 3 70B's capabilities, sheds light on the effectiveness of various training and fine-tuning techniques, and emphasizes the importance of rigorous benchmark evaluation in driving progress within this rapidly evolving field.
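The Q-LoRA fine-tuning setup described above can be sketched as a configuration fragment. This is not the authors' code: the paper does not publish its hyperparameters, so the quantization settings, LoRA rank, and target modules below are illustrative defaults, and the sketch assumes the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries with a CUDA GPU available.

```python
# Illustrative Q-LoRA configuration sketch (hypothetical hyperparameters).
# Requires: transformers, peft, bitsandbytes, and GPU access to the weights.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the Q-LoRA paper [7]
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # mixed-precision compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                    # low-rank adapter dimension (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only adapter weights stay trainable
model.print_trainable_parameters()
```

In this scheme only the small LoRA adapters receive gradients, which is what makes fine-tuning a 70B model tractable alongside FSDP sharding.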
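Gradient accumulation, one of the optimization strategies named above, can be illustrated with a toy scalar model (this is not the paper's training loop): gradients from several equal-sized micro-batches are averaged before a single optimizer step, emulating a larger effective batch without the memory cost.

```python
def grad(w, xs, ys):
    # d/dw of mean squared error for the linear model y_hat = w * x
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def train_step_accumulated(w, micro_batches, lr=0.01):
    accum = 0.0
    for xs, ys in micro_batches:
        accum += grad(w, xs, ys) / len(micro_batches)  # average micro-batch grads
    return w - lr * accum                              # one step for all micro-batches

# With equal-sized micro-batches, the accumulated step matches a single
# step computed over the full batch.
full_x = [1.0, 2.0, 3.0, 4.0]
full_y = [2.0, 4.0, 6.0, 8.0]
micro = [(full_x[:2], full_y[:2]), (full_x[2:], full_y[2:])]
w_accum = train_step_accumulated(0.0, micro)
w_full = 0.0 - 0.01 * grad(0.0, full_x, full_y)
```

The equivalence is what lets a sharded setup trade per-step memory for extra forward/backward passes.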
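HumanEval results are conventionally reported with the unbiased pass@k estimator introduced alongside the benchmark [8]: given n generated samples per problem of which c pass the unit tests, it estimates the probability that at least one of k randomly drawn samples passes. The abstract does not state which k the authors report, so this is a general sketch of the metric.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:  # fewer than k failing samples: every k-draw contains a pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 3 correct -> pass@1 reduces to c / n.
print(round(pass_at_k(10, 3, 1), 6))  # 0.3
```

Python's exact integer `math.comb` avoids the overflow that motivates the product-form computation used in floating-point implementations.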

Keywords
Large Language ModelsLlama 3 70BPyTorch FSDPQ-LoRA

References

  1. PyTorch FSDP Documentation, [Online]. Available: https://pytorch.org/docs/stable/fsdp.html
  2. PyTorch FSDP Tutorial, [Online]. Available: https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html
  3. Megatron-LM Usage Guide, [Online]. Available: https://huggingface.co/docs/accelerate/en/usage_guides/megatron_lm
  4. NVIDIA Megatron-LM, [Online]. Available: https://github.com/NVIDIA/Megatron-LM
  5. DeepSpeed, [Online]. Available: https://www.deepspeed.ai/
  6. Llama 2: Open Foundation and Fine-Tuned Chat Models, [Online]. Available: https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/
  7. Q-LoRA: Efficient Finetuning of Quantized LLMs, [Online]. Available: https://arxiv.org/abs/2305.14314
  8. OpenAI Codex, [Online]. Available: https://openai.com/blog/openai-codex/
  9. MBPP: Mostly Basic Python Problems Dataset, [Online]. Available: https://github.com/google-research/google-research/tree/master/mbpp
  10. Introducing Code Llama, a state-of-the-art large language model for coding, [Online]. Available: https://ai.meta.com/blog/code-llama-large-language-model-coding/
  11. Introducing Meta Llama 3: The most capable openly available LLM to date, [Online]. Available: https://ai.meta.com/blog/meta-llama-3/
Cite This Article
Ersoy, P., & Erşahin, M. (2024). Benchmarking Llama 3 70B for Code Generation: A Comprehensive Evaluation. Orclever Proceedings of Research and Development, 4(1), 52–58. https://doi.org/10.56038/oprd.v4i1.444

Bibliographic Info

Journal: Orclever Proceedings of Research and Development
Volume: 4
Issue: 1
Pages: 52–58
Published: May 31, 2024
eISSN: 2980-020X