Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit (Ray AIR) for accelerating ML workloads.
Ray 2.0 is a substantially updated version of Ray with enhancements to all libraries in the Ray ecosystem. With this important release, Ray is making great strides toward its goal of making distributed computing scalable, unified, and open.
To achieve these goals, Ray 2.0 introduces features that unify the machine learning (ML) ecosystem, improve Ray’s production support, and make Ray’s libraries easier than ever for ML practitioners to use.
Highlights of this release include:
- Ray AIR: an extensible, unified toolkit for ML applications, now in beta (see the training sketch after this list)
  - Simplifies ML application development, increases developer velocity, and interoperates with frameworks such as TensorFlow, PyTorch, Hugging Face, and more.
- Ray now supports shuffling 100TB or more of data using the Ray Datasets library (see the shuffle sketch after this list)
- KubeRay: a toolkit for running Ray on Kubernetes, now in beta. It replaces the legacy Python-based Ray operator.
- Ray Serve’s Deployment Graph API, a new and easier way to build, test, and deploy inference graphs composed of multiple deployments, was released as a beta in 2.0 (see the example after this list).
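To give a flavor of the AIR beta, here is a minimal training sketch using the distributed XGBoost trainer; the toy dataset, label column, and hyperparameters are illustrative, and the beta module paths shown here may shift in later releases.

```python
import ray
from ray.air.config import ScalingConfig
from ray.train.xgboost import XGBoostTrainer

# Toy Ray Dataset; in practice this would come from e.g. ray.data.read_parquet(...).
train_dataset = ray.data.from_items([{"x": x, "y": 2 * x} for x in range(32)])

# Distribute XGBoost training across two Ray workers.
trainer = XGBoostTrainer(
    label_column="y",
    params={"objective": "reg:squarederror"},
    scaling_config=ScalingConfig(num_workers=2),
    datasets={"train": train_dataset},
)

result = trainer.fit()
print(result.metrics)
```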
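The Datasets shuffle is a one-liner at the API level. The sketch below shuffles a small in-memory dataset; reaching the 100TB scale additionally relies on the new push-based shuffle backend (opt-in via an environment variable in 2.0) and a correspondingly sized cluster.

```python
import ray

# Toy dataset; a real workload would read from cloud storage,
# e.g. ray.data.read_parquet("s3://...").
ds = ray.data.range(1_000_000)

# Globally shuffle rows across the cluster.
shuffled = ds.random_shuffle()

print(shuffled.take(5))
```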
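And for Ray Serve, a minimal deployment graph sketch: two hypothetical deployments (Preprocess and Model) are composed into an inference graph and exposed through the DAGDriver ingress. The class names and logic are purely illustrative.

```python
import ray
from ray import serve
from ray.serve.dag import InputNode
from ray.serve.drivers import DAGDriver

@serve.deployment
class Preprocess:
    def run(self, x: int) -> int:
        # Hypothetical preprocessing: add one.
        return x + 1

@serve.deployment
class Model:
    def predict(self, x: int) -> int:
        # Hypothetical model: double the input.
        return x * 2

# Author the graph: preprocess the request, then run the model.
with InputNode() as user_input:
    prep = Preprocess.bind()
    model = Model.bind()
    output = model.predict.bind(prep.run.bind(user_input))

# DAGDriver is the ingress that exposes the graph.
handle = serve.run(DAGDriver.bind(output))
print(ray.get(handle.predict.remote(1)))  # (1 + 1) * 2 == 4
```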
For more details, please check: https://github.com/ray-project/ray/releases/tag/ray-2.0.0