Chinese-LLaMA-Alpaca contains the Chinese LLaMA model and the Chinese Alpaca model, a large model fine-tuned on instruction data.

Based on the original LLaMA, these models expand the Chinese vocabulary and are further pre-trained on Chinese data, which improves their understanding of basic Chinese semantics. On top of that, the Chinese Alpaca model is fine-tuned on Chinese instruction data, which significantly improves its ability to understand and follow instructions.
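
To make the encoding-efficiency claim concrete, here is a minimal sketch comparing how the original LLaMA tokenizer and an expanded Chinese tokenizer split the same sentence. The paths and the token counts in the comments are illustrative assumptions, not artifacts shipped by the project.

```python
# Sketch: why vocabulary expansion matters. The expanded tokenizer should
# split Chinese text into far fewer tokens than the original LLaMA one.
from transformers import LlamaTokenizer

text = "人工智能是计算机科学的一个分支。"  # "AI is a branch of computer science."

# Original LLaMA tokenizer: most Chinese characters fall back to
# byte-level pieces, so one character often costs 2-3 tokens.
original = LlamaTokenizer.from_pretrained("path/to/original-llama")   # placeholder path
# Tokenizer with the expanded Chinese vocabulary (placeholder path).
expanded = LlamaTokenizer.from_pretrained("path/to/chinese-llama")

print(len(original.tokenize(text)))  # e.g. dozens of byte-level tokens
print(len(expanded.tokenize(text)))  # e.g. roughly one token per word/character
```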

Main content of this project

  • Expands the Chinese vocabulary of the original LLaMA model, improving Chinese encoding and decoding efficiency
  • Open-sources the Chinese LLaMA model pre-trained on Chinese text data and the Chinese Alpaca model fine-tuned on Chinese instruction data
  • Quantizes and deploys a large model locally using only the CPU/GPU of a laptop (personal PC)
  • Supports the Hugging Face transformers, llama.cpp, text-generation-webui, and LlamaChat ecosystems (see the sketches after this list)
  • Currently open-sourced model versions: 7B (standard and Plus versions), 13B (standard version)
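
As one example of the ecosystem support, the sketch below loads a Chinese Alpaca model through the Hugging Face transformers and PEFT stack; the project releases LoRA weights that are applied on top of the original LLaMA. All paths are placeholders, and the standard Alpaca instruction template is assumed here.

```python
# Sketch: applying released LoRA weights on top of the original LLaMA
# with PEFT, then generating from an Alpaca-style instruction prompt.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/chinese-alpaca-tokenizer")
base = LlamaForCausalLM.from_pretrained(
    "path/to/original-llama", torch_dtype=torch.float16
)
base.resize_token_embeddings(len(tokenizer))  # account for the expanded vocabulary
model = PeftModel.from_pretrained(base, "path/to/chinese-alpaca-lora")

# Standard Alpaca instruction template (assumed, not project-specific).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n请介绍一下自己。\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```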

The following picture shows the actual experience of the Chinese Alpaca-7B model after quantized deployment on a local CPU (the GIF is not accelerated; measured on an M1 Max):
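
For reference, a minimal sketch of such a local CPU deployment through the llama-cpp-python bindings (pip install llama-cpp-python). It assumes the model has already been converted and quantized with llama.cpp's conversion and quantization tools; the model path is a placeholder, and the exact file format depends on the llama.cpp version used.

```python
# Sketch: running a quantized Chinese Alpaca model on a laptop CPU.
from llama_cpp import Llama

# Placeholder path to a 4-bit quantized model file produced by llama.cpp.
llm = Llama(model_path="path/to/chinese-alpaca-7b-q4_0.bin", n_threads=8)

result = llm(
    "### Instruction:\n用一句话介绍北京。\n\n### Response:\n",
    max_tokens=128,
    stop=["###"],
)
print(result["choices"][0]["text"])
```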

