“Enlightenment” is a bilingual multimodal pre-trained model with a scale of 1.75 trillion parameters. The project currently provides 7 open-source models; the model parameter files must be requested for download on the Enlightenment platform.

Image and text

  • CogView

    CogView has 4 billion parameters. The model generates images from text and, after fine-tuning, can produce images in styles such as Chinese painting, oil painting, watercolor, and outline drawing. It currently surpasses OpenAI's DALL·E on the widely used MS COCO text-to-image task, ranking first in the world.

  • BriVL

    BriVL (Bridging Vision and Language Model) is the first large-scale multimodal pre-trained model for general-purpose Chinese image-text data. BriVL achieves excellent results on image-text retrieval tasks, surpassing other contemporaneous multimodal pre-trained models such as UNITER and CLIP (a minimal retrieval sketch follows this list).
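
To make the image-text retrieval task concrete, the sketch below scores each caption against each image by cosine similarity between their embeddings, which is the general dual-encoder recipe behind models such as BriVL and CLIP. The encoders here are stand-in linear layers with random weights, purely for illustration; they are not BriVL's actual architecture or API.

```python
import torch
import torch.nn.functional as F

# Stand-in encoders: random projections in place of BriVL's real
# image and text towers (illustration only, not the actual model).
image_encoder = torch.nn.Linear(2048, 256)   # e.g. pooled image features -> shared space
text_encoder = torch.nn.Linear(768, 256)     # e.g. pooled token features -> shared space

# Fake pre-extracted features for 4 images and 4 captions.
image_features = torch.randn(4, 2048)
text_features = torch.randn(4, 768)

# Project both modalities into the shared embedding space and L2-normalize,
# so that a dot product becomes cosine similarity.
image_emb = F.normalize(image_encoder(image_features), dim=-1)
text_emb = F.normalize(text_encoder(text_features), dim=-1)

# Similarity matrix: entry [i, j] scores caption j against image i.
similarity = image_emb @ text_emb.T

# Text-to-image retrieval: for each caption, pick the best-matching image.
best_image_per_caption = similarity.argmax(dim=0)
print("best image index for each caption:", best_image_per_caption.tolist())
```

In a real retrieval system the two encoders are trained with a contrastive objective so that matching image-caption pairs score higher than mismatched ones; the ranking step itself is exactly the argmax over the similarity matrix shown here.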

Text

  • GLM

    GLM is a series of pre-trained language models with English as the core. Based on a new pre-training paradigm (autoregressive blank infilling), a single model achieves the best results on both language understanding and generation tasks, surpassing common pre-trained models trained on the same amount of data (such as BERT, RoBERTa, and T5). Models with 110 million, 335 million, 410 million, 515 million, and 10 billion parameters are currently open source; a sketch of the blank-infilling objective appears after this list.

  • CPM

    The CPM series consists of pre-trained language models that combine understanding and generation capabilities, covering Chinese and Chinese-English bilingual models. Models with 2.6 billion, 11 billion, and 198 billion parameters are currently open source.

  • Transformer-XL

    Transformer-XL is a pre-trained language generation model with Chinese as the core, with 2.9 billion parameters. It currently supports mainstream NLG tasks including article generation, poetry composition, and comment/abstract generation.

  • EVA

    EVA is an open-domain Chinese dialogue pre-trained model. With 2.8 billion parameters, it is currently the largest Chinese dialogue model. It is pre-trained on the Enlightenment Dialogue Dataset (WDC), which contains 1.4 billion Chinese dialogue samples from different domains.

  • Lawformer

    Lawformer is the world's first long-text Chinese pre-trained model for the legal domain, with 100 million parameters.
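
As a rough illustration of the blank-infilling pre-training paradigm mentioned for GLM above, the sketch below corrupts a token sequence by replacing one random span with a [MASK] placeholder and appends the span as an autoregressive target. The tokenization, special-token names, and span-length choice are simplified assumptions for illustration, not GLM's exact data pipeline.

```python
import random

MASK = "[MASK]"   # placeholder for the blanked span (name chosen for illustration)
SOP = "[SOP]"     # start-of-piece token preceding the span to be generated

def blank_infill_example(tokens, span_len=3, rng=random):
    """Corrupt `tokens` by blanking one contiguous span, GLM-style.

    Returns (model_input, target): the model reads the corrupted context
    plus [SOP], then learns to generate the blanked span autoregressively.
    """
    span_len = min(span_len, len(tokens))
    start = rng.randrange(0, len(tokens) - span_len + 1)
    span = tokens[start:start + span_len]

    corrupted = tokens[:start] + [MASK] + tokens[start + span_len:]
    model_input = corrupted + [SOP] + span[:-1]   # teacher forcing: span shifted by one
    target = span                                 # tokens to predict, in order
    return model_input, target

tokens = "the quick brown fox jumps over the lazy dog".split()
model_input, target = blank_infill_example(tokens, span_len=3)
print("input :", " ".join(model_input))
print("target:", " ".join(target))
```

Because the blanked span is generated left to right while the surrounding context is read bidirectionally, the same objective covers both understanding-style and generation-style tasks, which is the unification the GLM description above refers to.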

Protein

