Language

Enlightenment Homepage, Documentation and Downloads – Bilingual Multimodal Large Language Model – News Fast Delivery

“Enlightenment” (WuDao) is a bilingual multimodal pre-training model with 1.75 trillion parameters. The project currently includes 7 open-source models, and the model parameter files require a download request on the Enlightenment platform. The text-to-image model CogView has 4 billion parameters; it can generate images from text, and after fine-tuning it…

Pengcheng·Pangu α Homepage, Documentation and Downloads – Chinese Pre-trained Language Model – News Fast Delivery

Pengcheng·Pangu α is the industry’s first 200-billion-parameter pre-trained language model with Chinese at its core. Two versions are currently open source, Pengcheng·Pangu α and the Pengcheng·Pangu α enhanced version, with support for both NPU and GPU. It supports rich application scenarios and performs outstandingly in text generation tasks such as knowledge…

Baize Homepage, Documentation and Downloads – Large Language Model Trained Using LoRA – News Fast Delivery

Baize is an open-source chat model trained with LoRA. It improves on the open-source large language model LLaMA by fine-tuning it with a newly generated chat corpus, and it runs on a single GPU, making it accessible to a wider range of researchers. Baize currently includes four English models: Baize-7B, 13B, and…
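
As a rough illustration of this approach, here is a minimal sketch of LoRA fine-tuning with the Hugging Face peft library. The checkpoint path, rank, and target modules are illustrative assumptions, not Baize’s actual recipe:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "path/to/llama-7b"  # placeholder: a local or Hub LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# LoRA injects small trainable low-rank matrices into the attention
# projections, so only a tiny fraction of the parameters is updated,
# which is what lets the fine-tuning run on a single GPU.
config = LoraConfig(
    r=8,                                  # rank of the low-rank updates (assumed)
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # prints the trainable-parameter ratio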

Linly Homepage, Documentation and Downloads – Large Scale Chinese Language Model – News Fast Delivery

This project provides the community with the Chinese dialogue model Linly-ChatFlow, the Chinese foundation model Linly-Chinese-LLaMA, and their training data. The models are implemented on the TencentPretrain pre-training framework and fully fine-tuned on 32 A100 GPUs; Chinese model weights at the 7B, 13B, 33B, and 65B scales will be opened up gradually. The Chinese foundation model is based…

yaklang Homepage, Documentation and Downloads – Network Security Programming Language – News Fast Delivery

yaklang is China’s first programming language for the network security domain, launched by the Institute of Cyberspace Security at the University of Electronic Science and Technology of China and the yaklang.io team. To accelerate the engineering of security products and security tools, we created a new language (Yaklang) and implemented a stack-based virtual…

Multimodal Large Language Model mPLUG-Owl

mPLUG-Owl is a multimodal GPT model proposed by Alibaba’s DAMO Academy: a multimodal large language model built on mPLUG’s modular design. It can understand and reason over not only text but also visual information, and it has excellent cross-modal alignment ability. Paper: https://arxiv.org/abs/2304.14178 Demo: https://huggingface.co/spaces/MAGAer13/mPLUG-Owl The example highlights a modular training paradigm for multimodal language models. It can…

CINO Homepage, Documentation and Downloads – Minority Language Pre-training Model – News Fast Delivery

In natural language processing, the pre-trained language model (PLM) has become an important foundational technology, and the use of pre-trained models is increasingly common in multilingual research. To promote research and development in information processing for Chinese minority languages, the Harbin Institute of Technology and iFLYTEK Joint…

MOSS Homepage, Documentation and Downloads – Dialogue Large Language Model – News Fast Delivery

MOSS is an open-source dialogue language model that supports Chinese-English bilingual conversation and various plug-ins. The moss-moon series models have 16 billion parameters; they can run on a single A100/A800 or two 3090 graphics cards at FP16 precision, and on a single 3090 graphics card at INT4/8 precision. The MOSS base language model was pre-trained…
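
For illustration, here is a minimal sketch of loading a moss-moon model at FP16 on a single GPU with Hugging Face transformers. The repository id is an assumption based on the project’s public release; check the MOSS homepage for the exact usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fnlp/moss-moon-003-sft"  # assumed Hugging Face repository id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,     # MOSS ships custom modeling code
    torch_dtype=torch.float16,  # FP16 fits on a single A100/A800
).cuda().eval()

inputs = tokenizer("Hello, MOSS!", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))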

MiniGPT-4 Homepage, Documentation and Downloads – Enhancing Visual Language Understanding with LLM – News Fast Delivery

MiniGPT-4 enhances visual-language understanding with an advanced large language model. It aligns a frozen vision encoder from BLIP-2 with the frozen LLM Vicuna using only one projection layer. The training of MiniGPT-4 is divided into two stages: the first, a traditional pre-training stage, is trained on about 5 million aligned image-text pairs in 10 hours using…
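
A conceptual sketch of that alignment idea, with illustrative dimensions rather than MiniGPT-4’s actual code: a single trainable linear layer maps the frozen vision encoder’s output tokens into the frozen LLM’s embedding space, where they act as a soft prompt.

import torch
import torch.nn as nn

class VisionToLLMProjection(nn.Module):
    # The only trainable piece; vision encoder and LLM both stay frozen.
    def __init__(self, vision_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # (batch, num_tokens, vision_dim) -> (batch, num_tokens, llm_dim)
        return self.proj(vision_features)

# Projected tokens are prepended to the text embeddings, so the frozen
# LLM (Vicuna) attends to image content as if it were a soft prompt.
features = torch.randn(1, 32, 768)  # e.g. 32 query tokens from a Q-Former
soft_prompt = VisionToLLMProjection()(features)
print(soft_prompt.shape)            # torch.Size([1, 32, 4096])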

StableLM Homepage, Documentation and Downloads – Language Model Developed by Stability AI – News Fast Delivery

The StableLM repository contains Stability AI’s ongoing development of the StableLM series of language models. Stability AI has released the initial StableLM-alpha model set with 3 billion and 7 billion parameters; models with 15 billion and 30 billion parameters are in development. Stability AI is currently hosting a demo on Hugging Face Spaces…

RedPajama Homepage, Documentation and Downloads – Large Language Model – News Fast Delivery

The RedPajama project aims to create a set of leading, fully open-source large language models. The project has completed its first step: successfully reproducing the LLaMA training dataset of more than 1.2 trillion tokens. The project is jointly developed by Together, Ontocord.ai, ETH DS3Lab, Stanford University CRFM, Hazy Research, and MILA Québec…

Web LLM Homepage, Documentation and Downloads – Bringing Language Model Chat Directly to Your Web Browser – News Fast Delivery

Web LLM is a project that brings large language models and LLM-based chatbots to web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU. This opens up many interesting opportunities to build AI assistants for everyone and to achieve privacy while enjoying GPU acceleration. Check out the demo page to try…
