NVIDIA and ServiceNow Partner on New Autonomous AI Agents for Enterprises

As enterprises push AI beyond basic generation and reasoning, a new frontier emerges: autonomous decision-making. A partnership between NVIDIA and ServiceNow is pioneering the integration of sophisticated agent systems into large-scale enterprise environments, where AI must navigate complex workflows, interact with diverse data sources, and adapt to evolving business needs. This marks a critical step towards widespread adoption of AI-driven automation.

NVIDIA and ServiceNow are expanding their collaboration to deliver specialized autonomous AI agents for enterprise environments, integrating NVIDIA's accelerated computing, open models, and secure runtime with ServiceNow's workflow and governance platform.

Overview

The partnership, announced at ServiceNow Knowledge 2026, focuses on moving enterprise AI beyond generation and reasoning into autonomous decision-making. The companies are combining NVIDIA's hardware and software stack with ServiceNow's Action Fabric and AI Control Tower to create agents that can operate across complex, multistep workflows with built-in governance and security.

Project Arc: Autonomous Desktop Agent

ServiceNow introduced Project Arc, a long-running, self-evolving autonomous desktop agent designed for knowledge workers including developers, IT teams, and administrators. Unlike standalone AI agents, Project Arc connects natively to the ServiceNow AI Platform through ServiceNow Action Fabric, providing governance, auditability, and workflow intelligence for every action the agent takes.

Project Arc can access local file systems, terminals, and installed applications to complete complex, multistep tasks that traditional automation cannot handle. It is built on three requirements for long-running autonomous agents: open models and domain-specific skills that can be customized; security that lets agents act without exposing sensitive data or systems; and AI factories that deliver efficient tokenomics.

Security and Open Source Runtime

Project Arc uses NVIDIA OpenShell, an open source secure runtime for developing and deploying autonomous agents in sandboxed, policy-governed environments. ServiceNow is building on and contributing to OpenShell to advance a common foundation for secure, enterprise-grade agent execution. With OpenShell, enterprises can define what an agent can see, which tools it can use, and how each action is contained.
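To make the idea of policy-governed execution concrete, the sketch below shows what "define what an agent can see, which tools it can use, and how each action is contained" could look like in code. All names here (`AgentPolicy`, `permits`, the tool and path strings) are invented for illustration; this is not OpenShell's actual API or configuration format.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a policy gate for agent actions.
# Names and structure are invented, not OpenShell's real interface.

@dataclass
class AgentPolicy:
    readable_paths: set = field(default_factory=set)   # what the agent can see
    allowed_tools: set = field(default_factory=set)    # which tools it can use

    def permits(self, tool: str, path: str) -> bool:
        """An action is contained unless both the tool and the target path are allowed."""
        return tool in self.allowed_tools and any(
            path.startswith(prefix) for prefix in self.readable_paths
        )

policy = AgentPolicy(
    readable_paths={"/workspace/project"},
    allowed_tools={"read_file", "run_tests"},
)

print(policy.permits("read_file", "/workspace/project/app.py"))  # True
print(policy.permits("shell", "/etc/passwd"))                    # False
```

The point of the sketch is the shape of the guarantee: every action the agent attempts is checked against an explicit allowlist before it runs, so denial is the default rather than the exception.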

Open Models and Agent Skills

The companies are building on an open ecosystem that allows organizations to tailor models and applications to their specific domains and data. NVIDIA agent skills enable specialized agents, such as ServiceNow AI Specialists, to deliver targeted capabilities across enterprise workflows. The NVIDIA AI-Q Blueprint for building specialized deep research agents empowers ServiceNow AI Specialists to gather context, synthesize information, and support more complex decision-making across business functions.

The NVIDIA Agent Toolkit, including NVIDIA Nemotron open models, provides flexible building blocks and specialized skills for developing customized AI applications.

Benchmarking and Performance

To support real-world performance, the companies are advancing NOWAI-Bench, an open benchmarking suite for enterprise AI agents, integrated with the NVIDIA NeMo Gym library. NOWAI-Bench includes EnterpriseOps-Gym, one of the industry's most challenging enterprise agent benchmarks, where Nemotron 3 Super currently ranks No. 1 among open source models. These evaluations focus on multistep workflows, the setting where enterprise AI systems most often break down in practice.
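To illustrate why multistep workflows are the hard case, here is a minimal, invented scoring sketch: an agent's trajectory earns credit only for required steps completed in order, so skipping one step forfeits everything that depends on it. This is not NOWAI-Bench's actual scoring method; the function and workflow names are assumptions for illustration.

```python
# Illustrative sketch of in-order multistep scoring (invented,
# not NOWAI-Bench's real metric).

def score_trajectory(required_steps, trajectory):
    """Return the fraction of required steps completed in order."""
    idx = 0
    for action in trajectory:
        if idx < len(required_steps) and action == required_steps[idx]:
            idx += 1
    return idx / len(required_steps)

workflow = ["open_ticket", "gather_context", "apply_fix", "close_ticket"]
good = ["open_ticket", "gather_context", "apply_fix", "close_ticket"]
partial = ["open_ticket", "apply_fix"]  # skipped gather_context

print(score_trajectory(workflow, good))     # 1.0
print(score_trajectory(workflow, partial))  # 0.25
```

Under a metric like this, a model that is strong at single-turn answers can still score poorly, which is why multistep benchmarks separate enterprise-ready agents from chat-oriented ones.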
