ChatGLM2-6B is the second-generation version of the open-source Chinese-English bilingual dialogue model ChatGLM-6B. While retaining many excellent features of the first-generation model, such as smooth dialogue and a low deployment threshold, ChatGLM2-6B introduces the following new features:

  1. More powerful performance: Building on the development experience of the first-generation ChatGLM model, the base model of ChatGLM2-6B has been fully upgraded. ChatGLM2-6B uses the hybrid objective function of GLM and has undergone pre-training on 1.4T Chinese and English tokens as well as human preference alignment training. Evaluation results show that, compared with the original model, ChatGLM2-6B achieves substantial improvements on datasets such as MMLU (+23%), C-Eval (+33%), GSM8K (+571%), and BBH (+60%), making it strongly competitive among open-source models of the same size.
  2. Longer context: Based on Flash Attention technology, the context length of the base model has been extended from 2K in ChatGLM-6B to 32K, and a context length of 8K is used for training in the dialogue stage, allowing more rounds of dialogue. However, the current version of ChatGLM2-6B has limited ability to understand single-turn ultra-long documents; this will be a focus of optimization in subsequent iterative upgrades.
  3. More efficient inference: Based on Multi-Query Attention technology, ChatGLM2-6B has faster inference and lower GPU memory usage: under the official model implementation, inference speed is 42% higher than the first generation, and under INT4 quantization, the dialogue length supported by 6 GB of GPU memory has increased from 1K to 8K (a minimal usage sketch follows this list).
  4. More open license: The weights of ChatGLM2-6B are fully open for academic research, and commercial use is also allowed after obtaining official written permission.
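As a minimal usage sketch of the features above, the snippet below loads the model via Hugging Face transformers and applies INT4 quantization. The `chat()` and `quantize()` helpers come from the model's bundled custom code (hence `trust_remote_code=True`); exact method signatures may vary between releases.

```python
from transformers import AutoModel, AutoTokenizer

# trust_remote_code is required: ChatGLM2-6B ships its own model code.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# INT4 quantization (feature 3): lets a ~6 GB GPU hold dialogues up to 8K tokens.
# quantize() is provided by the model's custom code; its call pattern may differ by version.
model = model.quantize(4).cuda()
model = model.eval()

# Multi-turn dialogue: `history` carries earlier (question, answer) pairs,
# so successive calls stay within one conversation.
response, history = model.chat(tokenizer, "Hello! What can you do?", history=[])
print(response)
response, history = model.chat(tokenizer, "Summarize that in one sentence.", history=history)
print(response)
```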

The ChatGLM2-6B open-source model aims to promote the development of large-model technology together with the open-source community. Developers and users are asked to abide by the open-source license and not to use the open-source model, code, or derivatives of this open-source project for any purpose that may cause harm to the country or society, or for any service that has not undergone security assessment and filing. Currently, the project team has not developed any applications based on ChatGLM2-6B, including web, Android, Apple iOS, or Windows apps.

Although every effort is made to ensure the compliance and accuracy of the data at all stages of training, due to the small scale of the ChatGLM2-6B model and its susceptibility to probabilistic randomness, the accuracy of output content cannot be guaranteed, and the model can easily be misled. This project does not assume the risks and responsibilities of data security or public-opinion risks caused by the open-source model and code, nor any risks and responsibilities arising from the model being misled, misused, disseminated, or improperly exploited.

Evaluation results

The following are the evaluation results of the ChatGLM2-6B model on MMLU (English), C-Eval (Chinese), GSM8K (math), and BBH (English).

MMLU

| Model | Average | STEM | Social Sciences | Humanities | Others |
| --- | --- | --- | --- | --- | --- |
| ChatGLM-6B | 40.63 | 33.89 | 44.84 | 39.02 | 45.71 |
| ChatGLM2-6B (base) | 47.86 | 41.20 | 54.44 | 43.66 | 54.46 |
| ChatGLM2-6B | 45.46 | 40.06 | 51.61 | 41.23 | 51.24 |

The Chat model is tested using the zero-shot CoT (Chain-of-Thought) method, and the Base model is tested using the few-shot answer-only method.
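To make the two settings concrete, here is an illustrative sketch of how the prompt styles differ. The template wording below is a hypothetical example, not the prompts actually used in the official evaluation.

```python
# Zero-shot CoT (used for the Chat model): no solved exemplars, but the model
# is nudged to reason step by step before producing an answer.
zero_shot_cot = (
    "Question: {question}\n"
    "Options: {options}\n"
    "Let's think step by step."
)

# Few-shot answer-only (used for the Base model): a handful of solved
# exemplars with bare answers, then the test question.
few_shot_answer_only = (
    "Question: {example_question_1}\n"
    "Answer: {example_answer_1}\n"
    "\n"
    "Question: {example_question_2}\n"
    "Answer: {example_answer_2}\n"
    "\n"
    "Question: {question}\n"
    "Answer:"
)
```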

C-Eval

| Model | Average | STEM | Social Sciences | Humanities | Others |
| --- | --- | --- | --- | --- | --- |
| ChatGLM-6B | 38.9 | 33.3 | 48.3 | 41.3 | 38.0 |
| ChatGLM2-6B (base) | 51.7 | 48.6 | 60.5 | 51.3 | 49.8 |
| ChatGLM2-6B | 50.1 | 46.4 | 60.4 | 50.6 | 46.9 |

The Chat model is tested using the zero-shot CoT method, and the Base model is tested using the few-shot answer-only method.

GSM8K

| Model | Accuracy | Accuracy (Chinese)* |
| --- | --- | --- |
| ChatGLM-6B | 4.82 | 5.85 |
| ChatGLM2-6B (base) | 32.37 | 28.95 |
| ChatGLM2-6B | 28.05 | 20.45 |

All models are tested using the few-shot CoT method; the CoT prompts are from http://arxiv.org/abs/2201.11903 (an illustrative exemplar appears below).

* We translated 500 questions and CoT prompts from GSM8K using a translation API and manually proofread them.
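For reference, the CoT prompts in the cited paper consist of worked exemplars such as the following widely quoted one (adapted from arXiv:2201.11903); in few-shot CoT evaluation, several such exemplars are prepended before each test question.

```python
# A well-known chain-of-thought exemplar from Wei et al. (arXiv:2201.11903).
# The worked reasoning shows the model the step-by-step answer format to imitate.
cot_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11."
)
```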

BBH

| Model | Accuracy |
| --- | --- |
| ChatGLM-6B | 18.73 |
| ChatGLM2-6B (base) | 33.68 |
| ChatGLM2-6B | 30.00 |

All models are tested using the few-shot CoT method; the CoT prompts are from https://github.com/suzgunmirac/BIG-Bench-Hard/tree/main/cot-prompt
