How to Run LLM Inference on Multiple GPUs with the Accelerate Library

冬至子 · Source: 思否AI · Author: 思否AI · 2023-12-01 10:24

Large language models (LLMs) have transformed natural language processing. As these models grow in size and complexity, the computational demands of inference grow significantly as well. Using multiple GPUs has become essential to meet this challenge.

This article therefore runs inference in parallel across multiple GPUs. It covers an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks using multiple GPUs.

We use several RTX 3090 cards to scale llama2-7b inference across multiple GPUs.

Basic Example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate.

from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()

# each GPU creates a string
message=[ f"Hello this is GPU {accelerator.process_index}" ]

# collect the messages from all GPUs
messages=gather_object(message)

# output the messages only on the main process with accelerator.print()
accelerator.print(messages)

The output is:

['Hello this is GPU 0', 
   'Hello this is GPU 1', 
   'Hello this is GPU 2', 
   'Hello this is GPU 3', 
   'Hello this is GPU 4']
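
To produce this output, the script has to be started once per GPU, which is what Accelerate's launcher does. As a minimal sketch (the file name message_passing.py is just an example, not part of the original article), on a 5-GPU machine something like the following should work:

accelerate launch --num_processes 5 message_passing.py

Alternatively, running accelerate config once stores a default configuration, after which accelerate launch message_passing.py picks up the saved settings.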

Multi-GPU Inference

Below is a simple, non-batched inference approach. The code stays short because the Accelerate library already does most of the work for us; we can use it directly:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    # store output of generations in dict
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference, prompt by prompt
    for prompt in prompts:
        prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
        output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

        # remove prompt from output
        output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

        # store outputs and number of tokens in result{}
        results["outputs"].append( tokenizer.decode(output_tokenized) )
        results["num_tokens"] += len(output_tokenized)

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")

Using multiple GPUs introduces some communication overhead: performance scales roughly linearly up to 4 GPUs and then levels off in this particular setup. Of course, the numbers depend on many parameters, such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so we only discuss the general trend.

1 GPU: 44 tokens/sec, time: 225.5s

2 GPUs: 88 tokens/sec, time: 112.9s

3 GPUs: 128 tokens/sec, time: 77.6s

4 GPUs: 137 tokens/sec, time: 72.7s

5 GPUs: 119 tokens/sec, time: 83.8s
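
As a rough sanity check on these numbers: with 100 prompts and max_new_tokens=100, a full run generates on the order of 10,000 tokens (assuming most generations hit the 100-token cap), and 10,000 tokens / 225.5 s ≈ 44 tokens/sec, which matches the single-GPU figure.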

Batching on Multiple GPUs

In real-world use we can speed things up with batched inference. Batching reduces the communication between GPUs and makes inference faster. We only need to add a prepare_prompts function that feeds the model a batch of prompts instead of one prompt at a time:

from accelerate import Accelerator
from accelerate.utils import gather_object
from transformers import AutoModelForCausalLM, AutoTokenizer
from statistics import mean
import torch, time, json

accelerator = Accelerator()

def write_pretty_json(file_path, data):
    import json
    with open(file_path, "w") as write_file:
        json.dump(data, write_file, indent=4)

# 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
prompts_all=[
    "The King is dead. Long live the Queen.",
    "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
    "The story so far: in the beginning, the universe was created.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
    "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
    "The sweat wis lashing oafay Sick Boy; he wis trembling.",
    "124 was spiteful. Full of Baby's venom.",
    "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
    "I write this sitting in the kitchen sink.",
    "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
] * 10

# load a base model and tokenizer
model_path="models/llama2-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map={"": accelerator.process_index},
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token

# batch, left pad (for inference), and tokenize
def prepare_prompts(prompts, tokenizer, batch_size=16):
    batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]
    batches_tok=[]
    tokenizer.padding_side="left"
    for prompt_batch in batches:
        batches_tok.append(
            tokenizer(
                prompt_batch,
                return_tensors="pt",
                padding='longest',
                truncation=False,
                pad_to_multiple_of=8,
                add_special_tokens=False).to("cuda")
            )
    tokenizer.padding_side="right"
    return batches_tok

# sync GPUs and start the timer
accelerator.wait_for_everyone()
start=time.time()

# divide the prompt list onto the available GPUs
with accelerator.split_between_processes(prompts_all) as prompts:
    results=dict(outputs=[], num_tokens=0)

    # have each GPU do inference in batches
    prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

    for prompts_tokenized in prompt_batches:
        outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

        # remove prompt from gen. tokens
        outputs_tokenized=[ tok_out[len(tok_in):]
            for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ]

        # count and decode gen. tokens
        num_tokens=sum([ len(t) for t in outputs_tokenized ])
        outputs=tokenizer.batch_decode(outputs_tokenized)

        # store in results{} to be gathered by accelerate
        results["outputs"].extend(outputs)
        results["num_tokens"] += num_tokens

    results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

# collect results from all the GPUs
results_gathered=gather_object(results)

if accelerator.is_main_process:
    timediff=time.time()-start
    num_tokens=sum([r["num_tokens"] for r in results_gathered ])

    print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")

As you can see, batching speeds things up considerably.

1 GPU: 520 tokens/sec, time: 19.2s

2 GPUs: 900 tokens/sec, time: 11.1s

3 GPUs: 1205 tokens/sec, time: 8.2s

4 GPUs: 1655 tokens/sec, time: 6.0s

5 GPUs: 1658 tokens/sec, time: 6.0s

Summary

As of this writing, llama.cpp and ctransformers do not yet support multi-GPU inference. llama.cpp appears to have merged multi-GPU support back in June, but I have not seen it in an official release, so for now I treat it as unsupported. If anyone can confirm that multi-GPU works there, please leave a comment.

Hugging Face's Accelerate package gives us a convenient way to use multiple GPUs. Inference on multiple GPUs can significantly improve throughput, but the communication overhead between GPUs grows noticeably as the number of GPUs increases.
