Anyscale Teams Up With NVIDIA to Scale Generative AI Models Into Production

Published via GlobeNewswire and other sources, 2024-03-19 06:00

SAN FRANCISCO, March 18, 2024 (GLOBE NEWSWIRE) -- Anyscale, the AI infrastructure company built by the creators of Ray, the world’s fastest growing open-source unified framework for scalable computing, today announced a collaboration with NVIDIA to integrate the NVIDIA AI Enterprise software platform into the Anyscale platform, enabling customers to accelerate and scale large language models (LLMs) into a production environment with security, support, and stability.

The integration brings support for NVIDIA NIM inference microservices announced at NVIDIA GTC today. Customers will benefit from the combined power of Ray and Anyscale’s managed runtime environment, providing capabilities like container orchestration, observability, and autoscaling, as well as access to NVIDIA AI Enterprise to improve security and LLM performance.

Increasingly, AI workloads demand more performance from infrastructure. Dynamically scaling that infrastructure while balancing cost remains a pervasive challenge. Anyscale’s integration with NVIDIA AI Enterprise will enhance the scalability of AI workloads, enabling training and deployment of larger and more complex models and support the optimization of smaller models for specific tasks.
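The cost-versus-capacity balancing described above is typically handled by request-based autoscaling. As a minimal sketch, the heuristic that autoscalers such as Ray Serve document — provision enough replicas that each carries roughly a target number of in-flight requests, clamped to configured bounds — can be written as a single function. The name `desired_replicas` and its parameters are illustrative, not part of either platform's API.

```python
import math

def desired_replicas(ongoing_requests: int,
                     target_ongoing: int,
                     min_replicas: int,
                     max_replicas: int) -> int:
    """Hypothetical sketch of a request-based autoscaling rule.

    Choose enough replicas that each handles about `target_ongoing`
    in-flight requests, then clamp to the configured bounds so the
    service neither scales to zero unexpectedly nor runs away on cost.
    """
    want = math.ceil(ongoing_requests / target_ongoing)
    return max(min_replicas, min(max_replicas, want))

# e.g. 10 in-flight requests at a target of 2 per replica -> 5 replicas
```

In practice the bounds (`min_replicas`, `max_replicas`) are where the cost/performance trade-off lives: a higher floor buys latency headroom, a lower ceiling caps spend.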

Access to NVIDIA’s accelerated computing infrastructure will simplify the deployment and management of distributed machine learning (ML) applications, leading to more efficient resource utilization, faster iteration, and reduced costs. In addition, access to NIM minimizes restrictions and time spent on infrastructure, allowing developers to remain focused on driving innovation forward for their organization.
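The deployment pattern behind such distributed ML applications is fan-out/fan-in: shard the work, run shards in parallel, gather results. Ray generalizes this shape across a cluster; as a hedged single-machine analogue, the same structure can be sketched with the standard library. `embed_batch` and `run` are hypothetical names, and the length-based "inference" is a stand-in for a real model.

```python
from concurrent.futures import ThreadPoolExecutor

def embed_batch(texts):
    # Stand-in for real model inference on one shard of the data.
    return [len(t) for t in texts]

def run(shards):
    # Fan out one task per shard, fan the per-shard results back in.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(embed_batch, shards))
```

With Ray, the executor and `map` are replaced by remote tasks scheduled across cluster nodes, which is what makes the resource-utilization and autoscaling claims above possible.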

“This enhanced integration with NVIDIA AI Enterprise makes it simpler than ever for customers to get access to cutting-edge infrastructure software and top.
