Vitalik Buterin, co-founder of Ethereum, published a blog post warning that modern AI tools create serious privacy and security risks. He argued that cloud-based systems should be replaced with local, on-device alternatives.
Buterin said AI has moved beyond simple chat tools: newer systems act as autonomous agents that can carry out long-running tasks using hundreds of tools. That shift, he argued, increases the risk of data exposure and unauthorized actions.

He wrote that he has already stopped using cloud-based AI. He described his setup as “self-sovereign, local, private, and secure.”
He cited research showing that about 15% of AI agent skills contain malicious instructions. Some tools were also found to send data to external servers without the user's knowledge.
Buterin warned that certain AI models may contain hidden backdoors. These could activate under specific conditions and act in the developer’s interest rather than the user’s.
He also noted that many models described as open-source are only “open-weights”: the trained weights are published, but the code and data used to train them are not, which leaves room for unknown risks.
To address these concerns, Buterin built a system around on-device inference, local storage, and process sandboxing. His setup runs on NixOS, with llama-server handling local inference and bubblewrap used to isolate processes.
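Buterin's post does not publish configuration files, but the shape of such a setup can be sketched. The following Python snippet, with illustrative paths and flags, launches llama-server inside a bubblewrap (bwrap) sandbox so the inference process sees only the files it is explicitly given:

```python
# Minimal sketch: run llama-server inside a bubblewrap sandbox.
# Paths, the model file, and flag choices are illustrative assumptions;
# this is not Buterin's actual configuration.
import subprocess

MODEL = "/models/qwen.gguf"  # hypothetical local model path

cmd = [
    "bwrap",
    "--unshare-all",               # drop network, PID, and other namespaces
    "--share-net",                 # re-enable networking so the local API is reachable
    "--die-with-parent",           # kill the sandboxed server if this process exits
    "--ro-bind", "/nix", "/nix",   # NixOS keeps its binaries under /nix/store
    "--ro-bind", MODEL, MODEL,     # model weights mounted read-only
    "--dev", "/dev",
    "--proc", "/proc",
    "llama-server",
    "-m", MODEL,
    "--host", "127.0.0.1",         # serve only on loopback, never the open network
    "--port", "8080",
    "-ngl", "99",                  # offload all layers to the GPU
]
subprocess.run(cmd, check=True)    # blocks while the sandboxed server runs
```

The key property is that the model server runs with read-only access to its own weights and nothing else, so even a compromised agent skill cannot reach the rest of the filesystem.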
He tested several hardware configurations using the Qwen3.5 35B model. A laptop with an NVIDIA RTX 5090 GPU delivered around 90 tokens per second, an AMD Ryzen AI Max Pro system reached about 51 tokens per second, and DGX Spark hardware hit around 60 tokens per second.
He said performance below 50 tokens per second felt too slow for regular use. Based on his tests, he preferred high-performance laptops over specialized hardware.
For those who cannot afford such setups, he suggested groups of friends pool resources to buy a shared computer and GPU and connect to it remotely.
Buterin uses a “2-of-2” confirmation model for sensitive actions. Tasks like sending messages or transactions go through only when both the AI and a human sign off.
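For illustration, a minimal sketch of such a gate in Python might look like this (the function names and the execute() callback are assumptions for the example; Buterin's post does not publish code):

```python
# Minimal 2-of-2 gate: an action fires only when the AI check and the
# human reviewer both approve. All names here are illustrative.
def run_sensitive_action(description: str, ai_approves: bool, execute) -> bool:
    if not ai_approves:
        print(f"Blocked by AI check: {description}")
        return False
    # The human is the second key: nothing runs on AI output alone.
    if input(f"Approve '{description}'? [y/N] ").strip().lower() != "y":
        print("Blocked by human reviewer.")
        return False
    execute()
    return True

# Example: a transaction goes out only if both checks pass.
run_sensitive_action(
    "send 0.1 ETH to a saved contact",
    ai_approves=True,                           # stand-in for the model's verdict
    execute=lambda: print("Transaction submitted."),
)
```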
He said combining human and AI decisions is safer than relying on either alone. When he does use remote models, his requests are first filtered through a local model that removes sensitive information before anything leaves his machine.
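The filtering step can be pictured the same way. In this minimal sketch, a regex pass stands in for the local model that performs the redaction, and send_remote() is a hypothetical stand-in for the remote API call:

```python
# Sketch of the filter-before-send idea: a local pass strips sensitive
# strings before a prompt reaches a remote model. The regexes here stand
# in for the local model Buterin describes; send_remote() is hypothetical.
import re

PATTERNS = [
    re.compile(r"0x[0-9a-fA-F]{40}"),         # Ethereum-style addresses
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),  # email addresses
]

def redact(prompt: str) -> str:
    for pattern in PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def ask_remote(prompt: str, send_remote) -> str:
    # Only the filtered text ever leaves the machine.
    return send_remote(redact(prompt))

print(redact("Pay 0x1234567890abcdef1234567890abcdef12345678, cc alice@example.com"))
# -> Pay [REDACTED], cc [REDACTED]
```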
He compared AI systems to smart contracts, saying they can be useful but should not be fully trusted.
The use of AI agents is growing. Projects like OpenClaw are expanding what autonomous agents can do, letting them operate independently and complete tasks across multiple tools.
Industry estimates put the AI agents market at around $8 billion in 2025. That figure is projected to exceed $48 billion by 2030, a compound annual growth rate of more than 43%.
Some agents can modify system settings or alter prompts without user approval, which increases the risk of unauthorized access.