NEUREALITY REDEFINES AI ECONOMICS, DELIVERS INSTANT ACCESS TO LLMS OUT OF THE BOX WHILE LOWERING TOTAL COST OF AI INFERENCE

The NR1® AI Inference Appliance, powered by the first true AI-CPU, now comes pre-optimized with Llama, Mistral, Qwen, Granite, and other generative and agentic AI models – making it 3x faster to deploy with far better price/performance results.

NeuReality, a pioneer in reimagining AI inferencing architecture for the demands of today’s AI models and workloads, announced that its NR1 Inference Appliance now comes pre-loaded with popular enterprise AI models, including Llama, Mistral, Qwen, and Granite,¹ along with support for private generative AI clouds and on-premises clusters. Up and running in under 30 minutes, the generative and agentic AI-ready appliance delivers 3x better time-to-value, allowing customers to innovate faster. Current proofs of concept demonstrate up to 6.5x more token output for the same cost and power envelope compared to x86 CPU-based inference servers – making AI more affordable and accessible to businesses and governments of all sizes.
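
The release does not document the appliance’s developer interface, so purely as illustration, here is a minimal Python sketch of what querying one of the pre-loaded models might look like, assuming a hypothetical OpenAI-compatible chat endpoint (the URL and model identifier below are invented for the example, not a documented NeuReality API):

    # Hypothetical quick-start sketch; endpoint and model id are assumptions,
    # not a documented NeuReality API.
    import requests

    APPLIANCE_URL = "http://nr1-appliance.local:8000/v1/chat/completions"  # hypothetical

    payload = {
        "model": "llama-3.3-70b",  # stand-in id for a pre-loaded model named in the release
        "messages": [{"role": "user", "content": "Summarize this quarter's support tickets."}],
        "max_tokens": 256,
    }

    resp = requests.post(APPLIANCE_URL, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])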

Inside the appliance, the NR1® Chip is the first true AI-CPU purpose-built for inference orchestration – the management of data, tasks, and integration – with built-in software, services, and APIs. It not only subsumes the traditional CPU and NIC architecture into one device but also packs 6x the processing power onto the chip, keeping pace with the rapid evolution of GPUs while removing traditional CPU bottlenecks.

The NR1 Chip pairs with any GPU or AI accelerator inside its appliance to deliver breakthrough cost, energy, and real-estate efficiencies critical for broad enterprise AI adoption. For example, running the same Llama 3.3 70B model on an identical GPU or AI accelerator setup, NeuReality’s AI-CPU-powered appliance achieved a lower total cost per million AI tokens than x86 CPU-based servers.
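
To make that metric concrete, cost per million tokens follows directly from sustained token throughput and the hourly cost of running the server. The sketch below uses made-up placeholder figures, not NeuReality or x86 benchmark numbers:

    # Illustrative arithmetic only; the inputs are placeholders, not measured results.
    def cost_per_million_tokens(server_cost_per_hour: float, tokens_per_second: float) -> float:
        # Dollars spent to generate one million output tokens at a sustained rate.
        return server_cost_per_hour / (tokens_per_second * 3600) * 1_000_000

    baseline = cost_per_million_tokens(server_cost_per_hour=12.0, tokens_per_second=900)
    print(f"baseline: ${baseline:.2f} per million tokens")
    # The same cost and power envelope producing 6.5x the tokens divides the unit cost by 6.5:
    print(f"with 6.5x throughput: ${baseline / 6.5:.2f} per million tokens")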

“No one debates the incredible potential of AI. The challenge is how to make it economical enough for companies to deploy AI inferencing at scale. NeuReality’s disruptive AI-CPU technology removes the bottlenecks allowing us to deliver the extra performance punch needed to unleash the full capability of GPUs, while orchestrating AI queries and tokens that maximize performance and ROI of those expensive AI systems,” said Moshe Tanach, Co-founder and CEO at NeuReality.

“Now, we are taking ease-of-use to the next level with an integrated silicon-to-software AI inference appliance. It comes pre-loaded with AI models and all the tools to help AI software developers deploy AI faster, easier, and cheaper than ever before, allowing them to divert resources to applying AI in their business instead of to infrastructure integration and optimization,” continued Tanach.

A recent study found that roughly 70% of businesses report using generative AI in at least one business function, a sign of growing demand. Yet, according to Exploding Topics, only 25% have processes fully enabled by AI with widespread adoption, and only one-third have begun implementing limited AI use cases.

Today, CPU performance bottlenecks on servers managing multi-modal and large language model workloads are a driving factor behind average GPU utilization rates as low as 30-40%. The result is expensive silicon waste in AI deployments and underserved markets that still face complexity and cost barriers to entry.
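
Readers who want to gauge this on their own NVIDIA-based inference servers can sample utilization with the standard nvidia-smi tool while traffic is flowing; a rough Python sketch:

    # Samples GPU utilization once per second for a minute and prints the average.
    # Requires an NVIDIA driver; uses only documented nvidia-smi query flags.
    import statistics
    import subprocess
    import time

    samples = []
    for _ in range(60):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
            text=True,
        )
        samples.extend(int(line) for line in out.strip().splitlines())
        time.sleep(1)

    print(f"average GPU utilization: {statistics.mean(samples):.1f}%")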

“Enterprise and service providers are deploying AI applications and agents at record pace and are laser focused on delivering performance economically,” said Rashid Attar, senior vice president of engineering, Qualcomm Technologies, Inc. “By integrating the Qualcomm Cloud AI 100 Ultra accelerators with NeuReality's AI-CPU architecture, users can achieve new levels of cost efficiency and AI performance without compromising ease of deployment and scaling.”

Already deployed with cloud and financial services customers, NeuReality’s NR1 Appliance was specifically designed to accelerate AI adoption through its affordability, accessibility, and space efficiency for both on-premises and cloud inference-as-a-service options. Alongside the new pre-loaded generative and agentic AI models, refreshed with new releases each quarter, it comes fully optimized with preconfigured software development kits and APIs for computer vision, conversational AI, and custom requests, supporting a variety of business use cases and markets (e.g., financial services, life sciences, government, cloud service providers).

The first NR1 Appliance unifies NR1® Modules (PCIe cards) with Qualcomm® Cloud AI 100 Ultra accelerators. For more information on the NR1 Appliance, Module, Chip, and NeuReality® Software and Services, visit: https://www.neureality.ai/solution.

Join NeuReality at InnoVEX 2025

NeuReality will be at InnoVEX (co-located with Computex in Taipei, Taiwan) on May 20-23, 2025, in the Israel Pavilion, Hall 2, Booth S0912 (near Center Stage). The company will host live demonstrations of the NR1 Inference Appliance, including migrating a chat application in minutes and a performance demo of the NR1 Chip running Smooth Factory Models and DeepSeek-R1-Distill-Llama-8B.

About NeuReality

Founded in 2019, NeuReality is a pioneer in purpose-built AI inferencing architecture powered by the NR1® Chip – the first AI-CPU for inference orchestration. Based on an open, standards-based approach, the NR1 is fully compatible with any AI accelerator. NeuReality’s mission is to make AI accessible and ubiquitous by lowering barriers associated with prohibitive cost, power consumption, and complexity, and to scale AI inference adoption through its disruptive technology. It employs 80 people across facilities in Israel, Poland, and the U.S. To learn more, visit http://www.neureality.ai.

¹ AI models pre-loaded and pre-optimized for enterprise customers include: Llama 3.3 70B and Llama 3.1 8B (with the Llama 4 series coming soon); Mistral 7B, Mixtral 8x7B, and Mistral Small; Qwen 2.5, including Coder (with Qwen 3 coming soon); DeepSeek R1-Distill-Llama 8B and 70B; and Granite 3.0 and 3.1 8B (with Granite 3.3 coming soon).

Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.

NeuReality announces pre-loaded LLMs for its NR1 Inference Orchestration Appliance.

Contacts
