NeuReality honored with a Special Award from InnoVEX – the global startup arm of Computex in Taiwan – recognizing its innovation and market potential in advancing more affordable and accessible AI inferencing. There, NeuReality demonstrated its new generative and agentic AI-ready NR1® AI Inference Appliance, powered by the NR1® Chip, the first true AI-CPU designed for inferencing at scale.
NeuReality, a pioneer in reimagining AI inferencing architecture for the demands of today’s AI models and workloads, announced that its NR1® Inference Appliance now comes preloaded with popular enterprise AI models, including Llama, Mistral, Qwen, and Granite, plus support for private generative AI clouds and on-premises clusters. Up and running in under 30 minutes, the generative and agentic AI-ready appliance delivers 3x better time-to-value, allowing customers to innovate faster. Current proofs of concept demonstrate up to 6.5x more token output for the same cost and power envelope compared with x86 CPU-based inference servers – making AI more affordable and accessible to businesses and governments of all sizes.
Inside the appliance, the NR1® Chip is the first true AI-CPU purpose-built for inference orchestration – the management of data, tasks, and integration – with built-in software, services, and APIs. It not only subsumes traditional CPU and networking functions into a single chip but also packs in 6x the processing power to keep pace with the rapid evolution of GPUs, while removing traditional CPU bottlenecks.
The NR1 Chip pairs with any GPU or AI accelerator inside its appliance to deliver breakthrough cost, energy, and real-estate efficiencies critical for broad enterprise AI adoption. For example, running the same Llama 3.3-70B model on an identical GPU or AI accelerator setup, NeuReality’s AI-CPU-powered appliance achieved a lower total cost per million AI tokens than x86 CPU-based systems.
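To make the cost-per-million-tokens comparison concrete, the sketch below shows how such a figure is typically derived. The hourly cost and throughput numbers are hypothetical placeholders for illustration only, not NeuReality’s published benchmark results.

```python
# Illustrative sketch of a cost-per-million-tokens comparison.
# All numbers are hypothetical placeholders, not measured figures.

def cost_per_million_tokens(system_cost_per_hour: float,
                            tokens_per_second: float) -> float:
    """Hourly system cost divided by hourly token output, scaled to 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return system_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical scenario: identical accelerators, different host architectures.
# A host CPU that bottlenecks the accelerators yields fewer tokens per second
# for a similar hourly cost, so its cost per million tokens is higher.
x86_cost = cost_per_million_tokens(system_cost_per_hour=12.0, tokens_per_second=900)
ai_cpu_cost = cost_per_million_tokens(system_cost_per_hour=12.0, tokens_per_second=2700)

print(f"x86 host:    ${x86_cost:.2f} per 1M tokens")
print(f"AI-CPU host: ${ai_cpu_cost:.2f} per 1M tokens")
```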
“No one debates the incredible potential of AI. The challenge is how to make it economical enough for companies to deploy AI inferencing at scale. NeuReality’s disruptive AI-CPU technology removes the bottlenecks, allowing us to deliver the extra performance punch needed to unleash the full capability of GPUs, while orchestrating AI queries and tokens to maximize the performance and ROI of those expensive AI systems,” said Moshe Tanach, Co-founder and CEO at NeuReality.
“Now, we are taking ease of use to the next level with an integrated silicon-to-software AI inference appliance. It comes pre-loaded with AI models and all the tools to help AI software developers deploy AI faster, easier, and cheaper than ever before, allowing them to divert resources to applying AI in their business instead of to infrastructure integration and optimization,” continued Tanach.
A recent study found that roughly 70% of businesses report using generative AI in at least one business function, a sign of growing demand. Yet, according to Exploding Topics, only 25% have processes fully enabled by AI with widespread adoption, and only one-third have begun implementing limited AI use cases.
Today, CPU performance bottlenecks on servers managing multi-modal and large language model workloads are a driving factor behind average GPU utilization rates as low as 30-40%. The result is expensive silicon sitting idle in AI deployments, and underserved markets that still face complexity and cost barriers to entry.
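A rough back-of-the-envelope calculation shows why low utilization is so costly: every idle hour still bills at the full rate, so the effective cost of useful GPU work scales inversely with utilization. The hourly rate below is a hypothetical figure chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope sketch of the "silicon waste" implied by low GPU
# utilization. Numbers are hypothetical, chosen only to illustrate the math.

def effective_gpu_cost(gpu_cost_per_hour: float, utilization: float) -> float:
    """Cost per hour of *useful* GPU work when the host CPU starves the GPU."""
    return gpu_cost_per_hour / utilization

nominal = 4.00  # hypothetical $/GPU-hour
for util in (0.35, 0.90):  # 35% (bottlenecked host) vs. 90% (well-fed GPU)
    print(f"{util:.0%} utilization -> "
          f"${effective_gpu_cost(nominal, util):.2f} per useful GPU-hour")
```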
“Enterprise and service providers are deploying AI applications and agents at record pace and are laser focused on delivering performance economically,” said Rashid Attar, senior vice president of engineering, Qualcomm Technologies, Inc. “By integrating the Qualcomm Cloud AI 100 Ultra accelerators with NeuReality’s AI-CPU architecture, users can achieve new levels of cost efficiency and AI performance without compromising ease of deployment and scaling.”
Already deployed with cloud and financial services customers, NeuReality’s NR1 Appliance was specifically designed to accelerate AI adoption through its affordability, accessibility, and space efficiency for both on-premises and cloud inference-as-a-service options. In addition to the pre-loaded generative and agentic AI models, refreshed with new releases each quarter, the appliance ships fully optimized with preconfigured software development kits and APIs for computer vision, conversational AI, and custom requests, supporting a variety of business use cases and markets (e.g., financial services, life sciences, government, cloud service providers).
NeuReality participated in the InnoVEX 2025 exhibition, held in Taipei, Taiwan from May 20–23, showcasing its new generative and agentic AI inference server to investors, customers, and potential technology partners. Of 150 global submissions, NeuReality was the only Top 15 finalist from Israel. Following a live pitch by Tanach on NeuReality’s disruptive AI-CPU technology inside the new generative and agentic AI-ready NR1 Appliance and its redefined price/performance, NeuReality won a Special Award from Applied Ventures. The prize recognized NeuReality’s leadership story, innovation, and vast potential in the growing AI inferencing market as a GPU-agnostic solution that unleashes the full performance capability of any AI accelerator.
The NR1 Appliance demonstrated at InnoVEX unifies the NR1® Chip with Qualcomm® Cloud AI 100 Ultra accelerators, but it is compatible with any type or number of AI accelerators. For more information on the NR1 Appliance, Module, Chip, and NeuReality® Software, please visit https://www.neureality.ai/solution.