Anyscale Introduces New Replica Compaction to Optimize Resource Usage

Felix Pinkston
Jul 15, 2024 18:56

Anyscale launches Replica Compaction to address resource fragmentation, enhancing resource utilization and reducing costs for Ray Serve deployments.

Companies embracing AI are increasingly facing the issue of resource utilization and cost management. Model serving and inference in particular need to be able to scale up and down over time in response to traffic. Ray Serve is a scalable model serving library built on Ray to help handle these dynamics. And while open source systems like Ray Serve help manage increased traffic, even sophisticated systems struggle to scale down once traffic abates. This type of resource fragmentation inevitably leads to underutilized resources and higher costs.

Anyscale’s new Replica Compaction feature helps to solve resource fragmentation by optimizing resource usage for online inference and model serving. Take a look at how this feature works, as well as how you can use it in practice.

Background: What is Ray Serve?

Ray Serve has several key concepts:

  • Deployment: A deployment contains business logic or an ML model to handle incoming requests.

  • Replica: A replica is an instance of a deployment that can handle requests. These are implemented with Ray Actors. The number of replicas can be scaled up or down (or even autoscaled) to match the incoming request load.

  • Application: An application is the unit of upgrade in a Ray Serve cluster. An application consists of one or more deployments.

  • Service: A Service is a Ray Serve cluster that can consist of one or more applications.

Deployments handle incoming requests independently, which allows parallel processing and efficient resource utilization in most cases. For example, Ray Serve makes it possible to create deployments for Llama-3-8B and Llama-3-70B on the same Service with different resource requirements (1 GPU and 4 GPUs per replica, respectively). Both deployments scale independently in response to their respective traffic.
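
To make this concrete, here is a minimal sketch of two such deployments (the class names, autoscaling bounds, and request handling are illustrative assumptions; model loading and actual inference are elided):

# Sketch of two Ray Serve deployments with different per-replica GPU
# requirements. Names and autoscaling bounds are illustrative, and the
# actual model loading/inference is elided.
from ray import serve

@serve.deployment(
    ray_actor_options={"num_gpus": 1},  # 1 GPU per replica
    autoscaling_config={"min_replicas": 1, "max_replicas": 8},
)
class Llama3_8B:
    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        return {"model": "llama-3-8b", "echo": prompt}  # inference elided

@serve.deployment(
    ray_actor_options={"num_gpus": 4},  # 4 GPUs per replica
    autoscaling_config={"min_replicas": 1, "max_replicas": 4},
)
class Llama3_70B:
    async def __call__(self, request):
        prompt = (await request.json())["prompt"]
        return {"model": "llama-3-70b", "echo": prompt}  # inference elided

# Two applications deployed side by side on the same Serve cluster (Service).
serve.run(Llama3_8B.bind(), name="llama-8b", route_prefix="/8b")
serve.run(Llama3_70B.bind(), name="llama-70b", route_prefix="/70b")

Because each deployment declares its own ray_actor_options and autoscaling_config, Serve scales the 1-GPU and 4-GPU replicas independently of one another.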

The Problem of Resource Fragmentation

Resource fragmentation occurs when scaling activity leaves resources unevenly utilized across nodes. As replicas scale up, the autoscaler starts new nodes to handle the increased deployment load. But when traffic decreases and models scale down, the same nodes that were needed at peak become underutilized; for example, three 4-GPU nodes might each be left hosting a single 1-GPU replica, stranding nine of twelve GPUs. This is one of the most common causes of increased costs and reduced cluster performance.

Essentially, when scaling a specific deployment or model (e.g. Model A), Ray Serve takes into account the traffic and resource requirements for that particular deployment alone. The state, replicas, and traffic of any other deployments (e.g. Models B and C) are not taken into account during the scaling process. Because scaling only considers a single deployment at a time, resource fragmentation is inevitable as traffic changes and the cluster scales up and down.


Solving the Resource Fragmentation Issue with Anyscale’s Replica Compaction

Anyscale introduces Replica Compaction to address resource fragmentation. With Replica Compaction, Anyscale will automatically migrate replicas onto fewer nodes in order to optimize resource use and reduce costs. There are three main components to the Replica Compaction feature (a simplified sketch of the idea follows the list):

  • Replica Migration: Compaction monitors the cluster for opportunities to migrate replicas. If a node is minimally used, Anyscale’s Replica Compaction automatically moves its replicas to other nodes with sufficient capacity. Every node in the cluster is checked, and lightly used nodes whose replicas can be relocated are prioritized for release.

  • Zero Downtime: Migration is effortless. Anyscale Services seamlessly spins up a new replica, monitors its health, reroutes traffic, and removes the old replica.

  • Autoscaler Integration: The Anyscale Autoscaler continuously searches for idle nodes post-migration and spins them down as needed, reducing node count—and costs.
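
Anyscale has not published the exact compaction algorithm, but the behavior described above can be pictured as a greedy pass that tries to drain lightly used nodes onto peers with spare capacity. The sketch below is only an illustration of that idea, with hypothetical Node and replica structures; it is not Anyscale’s implementation:

# Illustrative greedy compaction pass. The Node/replica structures are
# hypothetical; this is not Anyscale's actual algorithm.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gpu_capacity: float
    replicas: list = field(default_factory=list)  # (deployment, gpus) pairs

    @property
    def gpus_used(self) -> float:
        return sum(gpus for _, gpus in self.replicas)

def compact(nodes: list[Node]) -> list[str]:
    """Greedily drain lightly used nodes whose replicas fit elsewhere."""
    released: list[str] = []
    for donor in sorted(nodes, key=lambda n: n.gpus_used):  # fewest GPUs first
        if not donor.replicas:
            continue
        moved, drained = [], True
        for replica in list(donor.replicas):
            _, gpus = replica
            # First-fit search for a target node with enough spare capacity.
            target = next(
                (n for n in nodes
                 if n is not donor
                 and n.name not in released
                 and n.gpu_capacity - n.gpus_used >= gpus),
                None,
            )
            if target is None:
                drained = False  # this replica fits nowhere; keep the node
                break
            donor.replicas.remove(replica)
            target.replicas.append(replica)
            moved.append((replica, target))
        if drained:
            released.append(donor.name)  # node is empty and can be spun down
        else:
            for replica, target in moved:  # roll back the partial migration
                target.replicas.remove(replica)
                donor.replicas.append(replica)
    return released

nodes = [
    Node("node-a", gpu_capacity=4, replicas=[("model-c", 1)]),
    Node("node-b", gpu_capacity=4, replicas=[("model-a", 1), ("model-c", 1)]),
    Node("node-c", gpu_capacity=4, replicas=[("model-c", 1)]),
]
print(compact(nodes))  # ['node-a', 'node-c']: their replicas fit on node-b

In a real Service the zero-downtime step matters: a new replica is started and health-checked on the target node before traffic is rerouted and the old replica is removed, rather than moving state directly as this sketch pretends.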

Let’s take a look at our same example from above, now with Anyscale’s Replica Compaction. With Replica Compaction, Anyscale is able to detect when Model A is downscaled, and it automatically migrates the excess Model C replicas into a single node.

Example of Anyscale Replica Compaction. Anyscale Replica Compaction detects that resource fragmentation is causing unnecessary resource usage. The replicas are automagically shifted (without interrupting production traffic) to a single node, thereby reducing costs and boosting utilization.

Replica Compaction in Action: Practical Results

To test the new Replica Compaction feature, Anyscale enabled it on a live production workload that had been running for several months. Here is what was run, and how Replica Compaction decreased cost and increased efficiency.

Case Study:

Anyscale offers a serverless API to prompt LLMs including Mistral, Mixtral, Llama3, and more. These models are deployed as replicas in an Anyscale Service. This service has been running for several months, serving 10+ models to users at scale with widely varying traffic patterns.

After enabling Anyscale Replica Compaction, significant savings and efficiency improvements were observed in tokens served per GPU second. With no other changes (i.e., no changes to tensor parallelism, the models being served, or the hardware used), the overall efficiency improvement post Replica Compaction was ~10% on average. In the first day after enabling the feature, instance seconds declined 3.7% even as traffic, measured in tokens, increased 11.2% over the same period. Since high-end GPUs like A100s and H100s are used for serving these models, this translates to substantial cost savings.
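
As a rough cross-check on those first-day numbers (derived from the figures above, not a separately reported metric): tokens served per instance-second changed by a factor of about 1.112 / 0.963 ≈ 1.15, i.e. roughly 15% more work per unit of compute on that day, in line with (and somewhat above) the ~10% average efficiency gain over the longer period.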

The impact and savings from Replica Compaction vary widely depending on the distribution of traffic, the number of deployments, and the underlying instances. In smaller-scale scenarios, costs can be reduced by 50% (or more!).

What’s Next for Replica Compaction

The team is continuing to improve the Replica Compaction algorithm, including work to factor in node costs and resource types to further optimize usage and overall costs. Stay tuned for more exciting updates in the coming months.

Get Started with Anyscale

Anyscale’s new Replica Compaction feature significantly improves resource management in distributed clusters by addressing resource fragmentation. This ensures an efficient, cost-effective infrastructure for Ray Serve deployments, with ongoing enhancements promising even smarter resource management. Anyscale Replica Compaction is enabled by default for Ray Serve applications deployed on the Anyscale Platform.

Get started today!

Image source: Shutterstock



Advantages of Mobile Apps in Gambling: The Example of Pin Up App

By Terry Ashton, updated August 31, 2024

Online gambling is going mobile: over 50% of players already play casino games on their mobile devices, and their number is expected to keep growing. But does a mobile app have real advantages over browser-based play? We took a closer look, trying the same casino on a desktop browser, a mobile browser, and the app, which let us identify the key benefits and drawbacks of casino mobile applications. If you’re considering using one, keep reading; we share some helpful insights below.

Benefits of Mobile Play at Pin Up Casino

Mobile play is on the rise for several reasons, including the following:

  • Ultimate accessibility. You can access the app anywhere, even on the go. No extra steps are needed: the casino opens with just one click. 
  • Lower Internet requirements and offline play. If you play for fun, you can do it even without an Internet connection. If you play for real money, the Internet connection requirements are still much lower, because most data is already downloaded to your device. 
  • Push notifications. You can immediately learn about new top promotions and the hottest games without checking your email. 
  • Special bonuses. Some casinos occasionally grant special bonuses to mobile players to encourage them to play on the app. 
  • The same game selection. If a casino is modern and cooperates with top providers, all games will be compatible with mobile devices. For instance, if you play at Pin Up casino online, you can access the same collection of games. That goes not only for slots but also for live games, table games, etc. 
  • Higher security standards. The app is often protected even better than the site. Data is encrypted, and the chance that anyone else will access your account is close to zero. 

Registration also goes smoothly. Once you sign up on the browser or app, you can access the platform with just one click by entering your Pin Up login and password. 

Considering the Cons: Potential Drawbacks of Using a Pin-Up Mobile App 

Nothing is perfect, and neither are casino apps. Gamblers should also consider the drawbacks, and the most common ones are as follows: 

  • Installing software is a must. You need to install the software on your phone. This is safe if you download the official casino app, but clicking on the wrong link and downloading a rogue APK file may cause problems. 
  • Battery drain and storage space. Charging the phone all the time is annoying, and innovative slots with rich graphics can drain your battery quickly. And though most apps don’t take up much space (in the case of Pin Up, only about 100 MB), they still occupy storage you may need to manage. 
  • Compatibility requirements. Any app has technical requirements, and most aren’t compatible with old mobile devices and tablets. You’ll also need to install updates quite regularly. 
  • Smaller screen. This is a disadvantage for those who prefer playing on larger screens, particularly fans of live dealer games. 

Do the pros outweigh the cons for you? If yes, the mobile app will boost your experience. If not, browser play may be a better option. 

Final Thoughts: The App vs. Browser Play at Pin-Up Casino

Technology is shaping the industry. Nowadays, there’s no significant difference between playing on a mobile app and in a mobile or desktop browser: you get the same game selection, the same bonuses, and the same smooth experience. So it’s a matter of taste. Choose what works best for you and enjoy your play.


NVIDIA Introduces Fast Inversion Technique for Real-Time Image Editing

Terrill Dicki
Aug 31, 2024 01:25

NVIDIA’s new Regularized Newton-Raphson Inversion (RNRI) method offers rapid and accurate real-time image editing based on text prompts.

NVIDIA has unveiled an innovative method called Regularized Newton-Raphson Inversion (RNRI) aimed at enhancing real-time image editing capabilities based on text prompts. This breakthrough, highlighted on the NVIDIA Technical Blog, promises to balance speed and accuracy, making it a significant advancement in the field of text-to-image diffusion models.

Understanding Text-to-Image Diffusion Models

Text-to-image diffusion models generate high-fidelity images from user-provided text prompts by drawing a random sample from a high-dimensional noise space and mapping it, through a series of denoising steps, to a representation of the corresponding image. The technology has applications beyond simple image generation, including personalized concept depiction and semantic data augmentation.

The Role of Inversion in Image Editing

Inversion involves finding a noise seed that, when processed through the denoising steps, reconstructs the original image. This process is crucial for tasks like making local changes to an image based on a text prompt while keeping other parts unchanged. Traditional inversion methods often struggle with balancing computational efficiency and accuracy.

Introducing Regularized Newton-Raphson Inversion (RNRI)

RNRI is a novel inversion technique that outperforms existing methods by offering rapid convergence, superior accuracy, reduced execution time, and improved memory efficiency. It achieves this by solving an implicit equation using the Newton-Raphson iterative method, enhanced with a regularization term to ensure the solutions are well-distributed and accurate.
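
As a toy illustration of the underlying numerical idea (a one-dimensional stand-in, not NVIDIA’s implementation, which operates on high-dimensional diffusion latents with its own residual and regularizer):

import math

def regularized_newton_raphson(g, z0, lam=0.01, iters=10, eps=1e-6):
    """Solve the implicit equation z - g(z) + lam*(z - z0) = 0 for scalar z.

    The lam*(z - z0) term is a toy regularizer pulling the solution toward
    the starting point, loosely analogous to how RNRI keeps inversion
    solutions well-distributed.
    """
    def residual(z):
        return z - g(z) + lam * (z - z0)
    z = z0
    for _ in range(iters):
        f = residual(z)
        fp = (residual(z + eps) - f) / eps  # forward-difference derivative
        z -= f / fp                         # Newton-Raphson update
    return z

# Example: solve z = cos(z) with light regularization toward z0 = 1.0.
print(regularized_newton_raphson(math.cos, z0=1.0))  # ~0.74

Each Newton step reuses only forward evaluations of g, which is one reason such iterations can converge in a handful of steps, consistent with the rapid convergence the post reports for RNRI.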

Comparative Performance

Figure 2 on the NVIDIA Technical Blog compares the quality of reconstructed images using different inversion methods. RNRI shows significant improvements in PSNR (Peak Signal-to-Noise Ratio) and run time over recent methods, tested on a single NVIDIA A100 GPU. The method excels in maintaining image fidelity while adhering closely to the text prompt.

Real-World Applications and Evaluation

RNRI has been evaluated on 100 MS-COCO images, showing superior performance in both CLIP-based scores (for text prompt compliance) and LPIPS scores (for structure preservation). Figure 3 demonstrates RNRI’s capability to edit images naturally while preserving their original structure, outperforming other state-of-the-art methods.

Conclusion

The introduction of RNRI marks a significant advancement in text-to-image diffusion models, enabling real-time image editing with unprecedented accuracy and efficiency. This method holds promise for a wide range of applications, from semantic data augmentation to generating rare-concept images.

For more detailed information, visit the NVIDIA Technical Blog.

Image source: Shutterstock



AMD Radeon PRO GPUs and ROCm Software Expand LLM Inference Capabilities

Felix Pinkston
Aug 31, 2024 01:52

AMD’s Radeon PRO GPUs and ROCm software enable small enterprises to leverage advanced AI tools, including Meta’s Llama models, for various business applications.

AMD has announced advancements in its Radeon PRO GPUs and ROCm software, enabling small enterprises to leverage Large Language Models (LLMs) like Meta’s Llama 2 and 3, including the newly released Llama 3.1, according to AMD.com.

New Capabilities for Small Enterprises

With dedicated AI accelerators and substantial on-board memory, AMD’s Radeon PRO W7900 Dual Slot GPU offers market-leading performance per dollar, making it feasible for small firms to run custom AI tools locally. This includes applications such as chatbots, technical documentation retrieval, and personalized sales pitches. The specialized Code Llama models further enable programmers to generate and optimize code for new digital products.

The latest release of AMD’s open software stack, ROCm 6.1.3, supports running AI tools on multiple Radeon PRO GPUs. This enhancement allows small and medium-sized enterprises (SMEs) to handle larger and more complex LLMs, supporting more users simultaneously.

Expanding Use Cases for LLMs

While AI techniques are already prevalent in data analysis, computer vision, and generative design, the potential use cases for AI extend far beyond these areas. Specialized LLMs like Meta’s Code Llama enable app developers and web designers to generate working code from simple text prompts or debug existing code bases. The parent model, Llama, offers extensive applications in customer service, information retrieval, and product personalization.

Small enterprises can utilize retrieval-augmented generation (RAG) to make AI models aware of their internal data, such as product documentation or customer records. This customization results in more accurate AI-generated outputs with less need for manual editing.
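
As a rough sketch of that RAG pattern (the lexical scoring below is a toy stand-in for real vector embeddings, and in practice the assembled prompt would be sent to a locally hosted LLM):

# Minimal retrieval-augmented generation (RAG) sketch. The scoring is a toy
# lexical overlap; a production system would use vector embeddings and pass
# the prompt to a locally hosted LLM.

def score(query: str, doc: str) -> float:
    # Cosine similarity between binary bag-of-words vectors.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / ((len(q) * len(d)) ** 0.5) if q and d else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Return the k documents most relevant to the query.
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def rag_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved internal documents so the model answers from them.
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The return policy allows returns within 30 days of purchase.",
    "The Radeon PRO W7900 ships with 48GB of GDDR6 memory.",
]
print(rag_prompt("What is the return policy?", docs))

Because the retrieved context travels with every request, the base model needs no fine-tuning on the internal data, which is what makes the pattern practical for small enterprises.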

Local Hosting Benefits

Despite the availability of cloud-based AI services, local hosting of LLMs offers significant advantages:

  • Data Security: Running AI models locally eliminates the need to upload sensitive data to the cloud, addressing major concerns about data sharing.
  • Lower Latency: Local hosting reduces lag, providing instant feedback in applications like chatbots and real-time support.
  • Control Over Tasks: Local deployment allows technical staff to troubleshoot and update AI tools without relying on remote service providers.
  • Sandbox Environment: Local workstations can serve as sandbox environments for prototyping and testing new AI tools before full-scale deployment.

AMD’s AI Performance

For SMEs, hosting custom AI tools need not be complex or expensive. Applications like LM Studio facilitate running LLMs on standard Windows laptops and desktop systems. LM Studio is optimized to run on AMD GPUs via the HIP runtime API, leveraging the dedicated AI Accelerators in current AMD graphics cards to boost performance.

Professional GPUs like the 32GB Radeon PRO W7800 and 48GB Radeon PRO W7900 offer sufficient memory to run larger models, such as the 30-billion-parameter Llama-2-30B-Q8. ROCm 6.1.3 introduces support for multiple Radeon PRO GPUs, enabling enterprises to deploy systems with multiple GPUs to serve requests from numerous users simultaneously.
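
As a rough back-of-the-envelope check (an approximation, not an AMD figure): at 8-bit (Q8) quantization each parameter occupies about one byte, so a 30-billion-parameter model needs on the order of 30 GB for its weights alone, before activations and the KV cache. That is why the 32GB W7800 is a workable fit and the 48GB W7900 leaves comfortable headroom.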

Performance tests with Llama 2 indicate that the Radeon PRO W7900 offers up to 38% higher performance-per-dollar compared to NVIDIA’s RTX 6000 Ada Generation, making it a cost-effective solution for SMEs.

With the evolving capabilities of AMD’s hardware and software, even small enterprises can now deploy and customize LLMs to enhance various business and coding tasks, avoiding the need to upload sensitive data to the cloud.

Image source: Shutterstock


