Corporate AI strategy has spent the last three years tethered to the cloud. Accessing large language models meant paying variable API costs, accepting latency, and navigating the compliance risks of sending corporate data to third-party servers. AMD is trying to break that dependency.

In March, the chipmaker released official configurations for running OpenClaw, an open-source AI agent framework, entirely on local hardware. AMD is pitching this as the dawn of the "agent computer," a workstation where the primary user is an autonomous AI rather than a human. For technology executives, the move signals a shift from cloud-dependent AI experiments to locally hosted, private agent networks.

The hardware divide

The configurations target developers and engineers on high-end workstation hardware. AMD has split its approach into two tiers to accommodate different corporate workloads.

The first configuration, dubbed RyzenClaw, runs on the Ryzen AI Max+ processor equipped with 128GB of unified memory. This setup prioritises concurrency and context capacity. It can run up to six AI agents simultaneously, managing a 260,000-token context window. The trade-off is speed. It generates about 45 tokens per second and takes roughly 19.5 seconds to process an initial 10,000-token input.

Organisations that need faster throughput can use the RadeonClaw configuration, which runs on the workstation-grade Radeon AI PRO R9700 graphics card. This setup processes tokens almost three times faster, hitting 120 tokens per second, and digests a 10,000-token prompt in 4.4 seconds. However, it restricts capacity, supporting only two concurrent agents and a 190,000-token context window.
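The trade-off between the two tiers can be sketched with back-of-envelope arithmetic. The figures below are AMD's published numbers; the model itself is a simplifying assumption that prefill and generation rates stay constant regardless of load:

```python
# Back-of-envelope latency model for AMD's two tiers.
# Assumes constant prefill and generation rates (a simplification).

def job_seconds(prompt_tokens: int, output_tokens: int,
                prefill_tps: float, gen_tps: float) -> float:
    """Total seconds to prefill a prompt and then generate a reply."""
    return prompt_tokens / prefill_tps + output_tokens / gen_tps

# Prefill rates derived from the article's 10,000-token timings.
RYZENCLAW_PREFILL = 10_000 / 19.5    # ~513 tokens/s
RADEONCLAW_PREFILL = 10_000 / 4.4    # ~2,273 tokens/s

# A hypothetical job: 10,000-token prompt, 2,000-token reply.
ryzen = job_seconds(10_000, 2_000, RYZENCLAW_PREFILL, 45)
radeon = job_seconds(10_000, 2_000, RADEONCLAW_PREFILL, 120)

print(f"RyzenClaw:  {ryzen:.1f} s")    # ~63.9 s
print(f"RadeonClaw: {radeon:.1f} s")   # ~21.1 s
```

On this rough model, RadeonClaw finishes the same job about three times faster, while RyzenClaw compensates by running three times as many agents in parallel.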

Breaking cloud dependency

For the last three years, corporate AI adoption has been dominated by cloud providers. Companies scaling multi-agent systems quickly discover that variable API pricing can make widespread deployment economically unviable. Every token processed, every internet search executed by an agent, and every context window refreshed incurs a micro-transaction.

Running these models locally flips that financial model. Enterprises face a steep, fixed upfront cost for hardware like the Ryzen AI Max+ or Radeon AI PRO R9700, but the marginal cost of running continuous agent workflows drops to the cost of electricity. For an AI agent designed to endlessly crawl datasets, draft reports, or monitor internal communications, a local workstation provides a predictable expenditure.

Beyond economics, data sovereignty remains a persistent barrier to corporate AI adoption. Regulated industries, such as finance and healthcare, often prohibit sending sensitive documents to external APIs. By outfitting local hardware with massive memory pools, such as the 96GB allocated to variable graphics memory in the RyzenClaw setup, companies can process proprietary data entirely on-premises.

Bridging Windows and Linux

Both systems use the Windows Subsystem for Linux (WSL2), bridged with LM Studio and llama.cpp. This is a practical compromise for enterprise IT departments. It allows developers to deploy complex, Linux-native AI environments without leaving the heavily managed Windows operating system that dominates corporate networks.
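In practice, both LM Studio and llama.cpp's server expose an OpenAI-compatible HTTP API on localhost, which is how local tooling talks to the model. A minimal sketch of that wiring follows; the port and model name are assumptions and should be checked against the local setup:

```python
# Minimal sketch of querying a locally hosted model via the
# OpenAI-compatible HTTP API exposed by LM Studio / llama.cpp's server.
# The port and model name are assumptions; check your local configuration.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"   # LM Studio's default server port

def build_chat_request(prompt: str, model: str = "local-model") -> bytes:
    """Build the JSON body for a /chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode("utf-8")

def ask_local_model(prompt: str) -> str:
    """Send the prompt to the local server; no data leaves the machine."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mirrors the cloud providers' API shape, existing agent code can often be pointed at the local server by changing only the base URL.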

The framework enables local agents to execute automated background tasks, such as web scraping or internet research, and even integrates with platforms like Discord for remote monitoring. By processing all embeddings and memory functions locally, enterprise users can run complex autonomous tasks without exposing sensitive data to external networks.

The reality of deployment

Although AMD frames this as a deployment for personal systems, the reality is strictly enterprise and high-end research. A machine with 128GB of memory or a Radeon AI PRO R9700 GPU carries a workstation price tag, not a consumer one. The hardware eliminates ongoing cloud subscription costs, but it requires a heavy upfront capital expenditure.

Furthermore, the implementation remains highly technical. Setting up OpenClaw requires comfort with command-line interfaces, GitHub repositories, and port forwarding. It is not a plug-and-play solution for general employees.

There is also a gap between the marketing claims and the benchmark realities. While AMD's introductory materials suggest massive models like the 122-billion-parameter Qwen 3.5 can run locally, the actual performance benchmarks and setup instructions rely entirely on the much smaller 35-billion-parameter version.

AMD also included a legal disclaimer regarding the dangers and risks of AI agents in its setup guide, though it declined to specify what those liabilities might be.

The shift from personal computers to agent computers is a compelling vision for corporate productivity. By bringing multi-agent frameworks inside the firewall, companies can build custom automated workflows without leaking proprietary data to cloud providers. AMD has proven the hardware is finally capable of supporting this architecture. The next hurdle will be making the software deployment simple enough for mainstream enterprise adoption.