
A Real-World Review of the 50,000 Yuan Apple AIPC: Better Than We Expected | M5 Max MacBook Pro Review

If you had a budget of 50,000 yuan to build a personal computer, what would you choose?

In the past, you might have put the bulk of your budget into the graphics card. After all, whether you "play games" for a living or only "play games after work," a powerful GPU is never a bad thing.

▲ Image | Internet

But now, the problem has become more complicated.

The budget that used to be split sensibly among CPU, GPU, motherboard, memory, storage, and peripherals has suddenly been thrown off balance by memory, a money-devouring behemoth.

Now, no matter what you plan to do with your computer, you'll find yourself robbing Peter to pay Paul:

Plenty of RAM, plenty of VRAM, and a large SSD are all essential, but each one drains your wallet.

The Mac, which has emerged as a dark horse amidst the memory chaos, is precisely the best solution to the problem mentioned above.

The most powerful AI Mac to date

At the recent spring launch event, Apple unveiled the upgraded M5 MacBook Pro as expected, along with the accompanying M5 Pro and M5 Max processors.

With Apple Silicon fully moved to TSMC's 3nm N3P process, the two new processors certainly don't disappoint on paper.

The M5 Pro comes in two configurations: 15+16 and 18+20 cores. Both are equipped with the neural network accelerator from last year's M5, which is the "Apple version of Tensor Core".

▲ Image|Apple

The M5 Max offers 18+32 and 18+40 core options, as well as a 16-core neural network accelerator. In terms of chip area alone, both the M5 Pro and M5 Max are undoubtedly GPU-first.

This tendency is also reflected in the microarchitecture design of the new processors.

Currently, all M5 series processors have been upgraded with LPDDR5X 9600 unified memory. According to Apple, the M5 Pro has a maximum memory bandwidth of 307GB/s, while the M5 Max has 614GB/s.

▲ Image|Apple

Since both the M5 Pro and M5 Max come standard with an 18-core CPU, the difference in memory bandwidth is most likely due to the GPU specifications.

Based on pre-release predictions, this discrepancy suggests that the memory controller for the M5 series is likely located on the GPU core cluster.

This strategy is remarkably similar to the Panther Lake architecture that iFanr saw during their visit to Intel's factory last year:

The benefit is obvious: placing the GPU closer to the memory controller cuts the latency of moving data between memory and the GPU cores, which indirectly improves GPU efficiency.

And what are faster GPUs with more VRAM best at? Local AI, of course.

This is one of the reasons why Apple mentioned "AI" so frequently on its official website this time.

Take iFanr's 14-inch MacBook Pro review unit as an example. Ours is this year's top-of-the-line M5 Max with a 40-core GPU, paired with 128GB of unified memory and an 8TB SSD: a performance monster costing over 55,000 yuan.

Generally speaking, when we run a local model on a Windows PC, the biggest bottleneck is often not the exorbitantly priced "motherboard memory," but rather the VRAM (video memory) inside the graphics card.

The biggest advantage of Apple's unified memory is that it can be directly accessed by the GPU.

For example, our 128GB M5 Max review unit can theoretically provide the GPU with nearly 100GB of video memory:
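A rough rule of thumb for what fits: a model's resident weight size is its parameter count times bytes per weight. A minimal sketch, where the 100GB usable figure comes from this review rather than any Apple spec, and the 10% overhead factor is an assumption:

```python
def model_footprint_gb(params_b: float, bits: int, overhead: float = 1.1) -> float:
    """Approximate resident size of an LLM in GB: parameters (in billions)
    times bytes per weight, plus ~10% for runtime buffers (an assumption)."""
    return params_b * (bits / 8) * overhead

USABLE_GB = 100  # rough usable pool from this review, not an Apple spec

for params_b in (35, 80, 235):
    size = model_footprint_gb(params_b, bits=8)
    print(f"{params_b}B @ 8-bit: ~{size:.0f} GB, fits={size <= USABLE_GB}")
```

By this estimate, an 8-bit 80B model (~88GB) squeezes in, while 8-bit models in the 200B-plus class do not, which lines up with the tiers below.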

Now that we have such ample memory, we should, as Apple advertises, run those large-scale local AI models that we couldn't run before.

In llmfit, you can see that a 128GB M5 Max can comfortably run every model up to roughly 125B parameters.

Only models of 220B parameters and above, such as MiniMax M2.5, Qwen3, and DeepSeek v2.5, fall into the "barely runs" (marginal) tier.

▲ M5 Max 128GB

In comparison, a 32GB M1 Max, according to llmfit, can at most run models of around 35B parameters at 2- or 4-bit quantization.

▲ M1 Max 32GB

Considering ease of deployment and room for long contexts, we used LM Studio to test qwen3.5-35b-a3b and the MLX-optimized qwen3-next-80b. Both are 8-bit quantized MoE models.

For an MoE model like qwen3.5-35b-a3b, with a modest total parameter count and only a small fraction of parameters active per token, the M5 Max often finishes before it even has a chance to warm up.

▲ qwen3.5-35b-a3b

Even with source texts of nearly 3,000 words, after manually raising the model's token limit, the M5 Max produced its first token in about 1.7 seconds (TTFT) and sustained roughly 65 tokens/s (TPOT) in every round of rewriting and imitation. Nothing overflowed even after nearly 10,000 words of accumulated reasoning and output.

▲ qwen3.5-35b-a3b
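For readers who want to reproduce numbers like these, TTFT and TPOT can be measured around any streaming generation API. A minimal sketch; `fake_stream` is an invented stand-in for illustration, not LM Studio's API:

```python
import time
from typing import Callable, Iterable

def measure_stream(generate_stream: Callable[[], Iterable[str]]) -> dict:
    """Time a streaming generation: TTFT is the wait before the first token;
    TPOT here is the steady decode rate over the remaining tokens."""
    start = time.perf_counter()
    first_at = None
    count = 0
    for _token in generate_stream():
        if first_at is None:
            first_at = time.perf_counter()
        count += 1
    end = time.perf_counter()
    tpot = (count - 1) / (end - first_at) if count > 1 and end > first_at else 0.0
    return {"ttft_s": first_at - start, "tokens": count, "tpot_tps": tpot}

# Demo with a fake generator standing in for a real streaming API
def fake_stream():
    time.sleep(0.05)            # pretend prefill latency
    for _ in range(10):
        time.sleep(0.005)       # pretend per-token decode time
        yield "tok"

stats = measure_stream(fake_stream)
print({k: round(v, 3) for k, v in stats.items()})
```

Note that excluding the first token from the TPOT window is what separates decode speed from prefill time.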

The qwen3-next-80b, with its MLX optimization, 8-bit quantization, and larger parameter count, is even more powerful on the M5 Max.

Although it requires manually loading a model that is nearly 80GB while ignoring memory warnings, the running results are truly remarkable:

The same prompt took nearly 30 seconds of thinking on qwen3.5-35b-a3b, while qwen3-next-80b responded almost instantly: a TTFT of about 3 seconds and a TPOT of about 72 tokens/s.

▲ qwen3-next-80b

This is partly because only about 3B of its 80B parameters are active per token, and more importantly because it is an optimized build on Apple's open-source MLX framework, which squeezes the most out of Apple Silicon.

Besides the MoE model, how does M5 Max perform with dense models like Llama 3.3?

▲ Image|Tom's Guide

Although the 8-bit quantized Llama 3.3 70b is only about 75GB, the huge KV cache required for its 128k context still overflows memory, and LM Studio fails to load it.
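The overflow is easy to sanity-check. For a grouped-query-attention model, the KV cache grows linearly with context length. Using Llama 3.3 70B's published configuration (80 layers, 8 KV heads, head dimension 128), and assuming the cache is kept in fp16:

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """Attention KV cache size: K and V tensors per layer, per KV head,
    per cached position, at the given element width (fp16 by default)."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Llama 3.3 70B public config: 80 layers, 8 KV heads (GQA), head_dim 128
full_ctx = kv_cache_gb(80, 8, 128, 131_072)   # 128k-token context
print(f"~{full_ctx:.0f} GB of KV cache on top of ~75 GB of 8-bit weights")
```

Roughly 43GB of cache on top of about 75GB of weights comfortably exceeds the ~100GB the GPU can claim, which matches the failure to load.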

After switching to the smaller Llama 3.3 70b Q4_K_M, the M5 Max finally loaded it normally. Running the same prompt put the system load at roughly 95GB, with a generation speed of 9.95 tokens/s.
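The gap between the ~72 tokens/s of the MoE model and the roughly 10 tokens/s here has a simple first-order explanation: decoding is memory-bound, so each token can at best stream the active weights once over the memory bus. A back-of-the-envelope sketch; the per-token byte counts ignore KV-cache reads and other overhead, so these are loose upper bounds:

```python
def decode_tps_upper_bound(bandwidth_gbs: float, bytes_read_per_token_gb: float) -> float:
    """Roofline bound: every generated token must stream its weights from
    memory at least once, so tokens/s <= bandwidth / bytes read per token."""
    return bandwidth_gbs / bytes_read_per_token_gb

M5_MAX_BW = 614  # GB/s, Apple's stated M5 Max memory bandwidth

# Dense Llama 70B at 4-bit: roughly 35 GB of weights touched per token
dense_bound = decode_tps_upper_bound(M5_MAX_BW, 70 * 0.5)
# qwen3-next-80b MoE with ~3B active params at 8-bit: roughly 3 GB per token
moe_bound = decode_tps_upper_bound(M5_MAX_BW, 3 * 1.0)
print(f"dense 70B Q4: <= ~{dense_bound:.0f} tps; MoE 3B-active Q8: <= ~{moe_bound:.0f} tps")
```

The measured numbers sit well below these ceilings, as expected, but the order-of-magnitude difference between dense and MoE decode speed falls straight out of the arithmetic.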

In other words, for dense models of this size you would still need an M3 Ultra with even more memory.

However, the highest resource usage we observed on the M5 Max this time came not from the dense Llama 3.3 but from deepseek-r1 running in Msty Studio:

In Msty Studio, we loaded a 75GB deepseek-r1 70b-llama-distill-q8_0 file, and in two minutes, it used 122GB of memory to write you a haiku:

▲ deepseek-r1 70b-llama-distill-q8_0

This is just the result for the local language model. Even in some traditional performance projects, M5 Max's performance has not disappointed us.

In Cinebench 2026, the M5 Max posted a GPU score of 79,295, more than 15% higher than the M4 Max and only about 5% below the M3 Ultra, currently Apple's largest chip.

▲ After continuous stress testing, the score dropped to around 77,000.

What does a score like that mean in games?

We played Cyberpunk 2077 on the M5 Max again, using the same parameters as when we reviewed the standard version of the M5 last year.

Using the default "for this Mac" preset, the M5 Max holds a stable frame rate of around 59fps. Compared with the standard M5, this preset runs at a higher resolution with more detail, yet the frame rate is still more than double.

After manually tuning the settings (high detail, 1.5K, ray tracing, FSR/MetalFX super-resolution and frame generation), the M5 Max holds a stable 50-60fps in dense scenes, albeit with the fans at full speed.

This performance is certainly far from that of a gaming laptop, but 2077 is a very demanding game, and it's still quite surprising that the M5 Max can run it to this level in a 14-inch chassis without being plugged in.

As for other smaller, better-optimized games, such as Control: Ultimate Collection and Escape from Durkoff, as long as you don't mess with the settings, the M5 Max can generally maintain a stable 60 frames per second.

Whether it's for AI workflows or gaming, this MacBook Pro with the M5 Max chip is undoubtedly a powerful beast.

The best Apple screen to date

Besides the M5 Pro/Max, another "Pro-level" new product launched at this spring's product launch event was the long-awaited new generation Studio Display.

More specifically, it's the new Studio Display and Studio Display XDR.

After the Pro Display XDR was discontinued, the Studio Display XDR has taken over its mantle as Apple's flagship professional monitor, starting at 24,999 yuan.

Our initial experience with the Studio Display XDR was consistent with that at the Apple event:

The smaller screen size is barely noticeable; instead, ProMotion grabbed our attention from the very first second.

Thanks to a mini-LED panel with 2,304 dimming zones, 1,000 nits of peak SDR brightness, and 2,000 nits of peak HDR brightness, it is impossible to call it anything but eye-catching.

▲ The halo effect of mini-LEDs is only visible under very extreme conditions.

Beyond the well-worn topic of wide color gamut HDR content creation, the Studio Display XDR is also a formidable player in audio-visual entertainment.

Especially if you have some HDR-enabled "blockbuster" movies on hand, the experience of using a MacBook Pro with the Studio Display XDR is unparalleled in the current Apple product line:

The same assessment applies to this year's new Studio Display.

In fact, aside from ProMotion, peak brightness, and charging power, the Studio Display's screen panel quality is completely on par with the new Studio Display XDR.

After all, Apple had given prior notice: 5K 120Hz is not something that just any processor can handle. If your Mac is using the M1 series, M2, or M3 standard version, it can only display at a maximum of 60Hz when you plug in a Studio Display XDR.

This aligns with our experience with the Studio Display XDR.

And if your macOS version is too old, the monitor may charge your Mac without outputting a picture at all.

Interestingly, when iFanr spoke with Apple staff at the launch event, they mentioned that both displays carry an iPhone SoC.

By digging through the firmware update code for the two new displays, MacRumors found that Apple equipped them with A19 and A19 Pro processors respectively.

Unsurprisingly, this is intended for 5K video decoding, backlight control, Center Stage camera, and other display features.

But this has also led to more and more "processor jokes" about Apple:

At the very beginning of 2026, you will be able to buy an iPad Pro with an M5 chip, a MacBook Neo with an A18 Pro chip, and a monitor with an A19 Pro chip.

Overall, this year's Studio Display XDR is a very timely update.

Its most important significance lies in filling the gap in Apple's professional product line with ProMotion, and also in making the product interaction experience more seamless.

When Apple started talking about AI

At this spring launch event, in addition to changing its previous launch format, Apple also began to openly discuss AI.

This AI is neither the repeatedly delayed Apple Intelligence nor the machine learning Apple has long emphasized; it is AI in the plain, general sense.

Judging from the current performance of its products, Apple is indeed ready when it starts talking about AI.

Back when Apple switched to Apple Silicon and a unified memory architecture in 2020, it probably didn't anticipate the explosive growth in demand for AI models and the ensuing memory crisis.

The most straightforward example is the 128GB of unified memory on this M5 Max:

  • If you stick to consumer-grade DDR5 6400, buying 128GB isn't hard and costs only around 10,000 yuan, but it will never come close to 614GB/s of bandwidth.
  • If you want 128GB of VRAM from graphics cards, and professional cards are out of reach, you'd need five RTX 5090D cards, and that's before counting the communication latency between them.

In such situations, small business teams, individual developers, AI practitioners, and others with local AI needs will find themselves in a dilemma:

Either they spread a limited budget across memory, graphics card, CPU, and storage when building a PC, diluting overall compute performance.

Or they grit their teeth, raise the budget, and spend tens or even hundreds of thousands of yuan to enter the world of self-built servers.

▲ Image|Servermall

At this point, a MacBook Pro priced under 60,000 yuan, with 128GB of high-bandwidth memory, a top-tier HDR screen and speakers, and an 8TB SSD, becomes the most cost-effective choice for individual and studio users.

Even if you don't need the aforementioned "peripherals," or your local AI requirements are low, you can opt for a Mac Studio or Mac mini as a second choice.

The latter has already enjoyed a boom of its own amid the recent "lobster craze".

▲ Image|Apple Must

Apple Intelligence may be laughable, but in this era of big AI, the potential of Apple Silicon and unified memory has only just begun to show.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.

ifanr | Original Link · View Comments · Sina Weibo


Midea launches MevoX, a smart home system that aims to make the smart home understand you better.

If we were to draw a simple line through the development of smart homes over the past decade or so, we could roughly divide it into three stages: networking, interconnection, and understanding.

The earliest stage involved connecting home appliances to the internet. This transformed standalone devices into products that could be controlled via smartphones, which is what many people initially understood as "smart home"—remotely turning on the air conditioner, checking the refrigerator's status, and controlling the robot vacuum cleaner with a smartphone.

The second stage involves the devices starting to work together. For example, air conditioners and air purifiers work in tandem, washing machines and dryers automatically take turns completing the washing and care process, and simple scene combinations are formed between kitchen appliances. This step addresses the question of "whether the devices can work together."

However, as the industry has grown, it has come to realize a deeper problem: even with interconnected devices, the home still doesn't truly understand people. Users still need to give explicit commands, manually create scenes, and constantly adjust parameters. Many smart home devices appear intelligent in demonstrations, but are rarely used frequently in real life. The problem isn't with the devices themselves, but with the system lacking genuine "understanding."

At its whole-house smart strategy launch event in Shanghai, Midea attempted to answer this question. The two core announcements were: the launch of the MevoX smart home system and the proposal of its "three ones" strategy for whole-house intelligence: a smart home network, a smart brain, and an open platform. The logic behind this combination is to upgrade the home from a "controllable system" to a "system that understands people."

Intelligent agents enter the home: from controlling devices to understanding life

Over the past year, "agent" has become one of the hottest keywords in the AI industry. Unlike traditional AI, intelligent agents don't just answer questions or perform single tasks; they can understand goals, retain memory, and make decisions in complex environments.

When this concept is placed in a family setting, its meaning becomes very concrete.

The logic of traditional smart homes is: human – command – device execution.

The logic of an intelligent agent is more like: human – intent – system understands and schedules the devices.

Midea's newly released MevoX smart home system aims to bring this capability into the home. Its two core capabilities are higher-order reasoning and persistent memory. The former means the system not only recognizes voice commands but also understands what the user wants to achieve; the latter means it can gradually learn the habits of family members, such as their sleep schedules, temperature preferences, and bathing/care methods.

In other words, the value of an intelligent agent lies not in "turning on the lights for you," but in knowing when to turn on the lights, without even needing you to say anything.

This is why Midea describes the goal of home AI as "intent-driven space". When the system can understand people's intentions, the home environment will no longer just respond passively, but can actively adjust.

For example:

  • Before you get out of bed in the morning, the system has already set the room temperature and hot water.
  • On your way home, the car and home systems coordinate in advance.
  • Kitchen appliances run automatically while dinner is being prepared.

These tasks cannot be accomplished by a single device; they require an "intelligent agent" capable of coordinating the entire home device network.

From a broader perspective, this actually represents a structural shift taking place in the home appliance industry. For the past few decades, competition among home appliance companies has primarily focused on product performance, manufacturing capabilities, and channel scale. However, in the era of intelligent technology, new dimensions of competition are emerging: equipment scale, data capabilities, and system scheduling capabilities.

The key to intelligent agents lies in their ability to integrate these capabilities. No matter how intelligent a single device is, it cannot understand the entire home; only system-level AI can do that.

This is why more and more technology companies are starting to focus on the home environment. Because the home is actually an extremely complex yet frequently used living space. For home appliance companies, this is precisely their advantage: the devices themselves are the entry point.

Having a large number of devices means having real-life data, which also means that AI can continuously learn from real-world scenarios. This is a capability that many internet companies find difficult to replicate.

The "Three Ones" Strategy: The True Infrastructure of Smart Homes

However, an AI brain alone is not enough to build a truly smart home. The complexity of a home space far exceeds that of a mobile phone or computer system—there are many more devices, brands, and scenarios, and any disconnect in any link will disrupt the experience.

This is also the reason why Midea proposed the "Three Ones" strategy.

First, there's the network of devices. In the smart home industry, many companies emphasize algorithms and interaction, but what truly determines the user experience is often the scale of the devices. Without a sufficient number of connected devices, the so-called smart scenarios can only remain in the demonstration stage.

Currently, Midea has over 500 million connected home appliances, of which 140 million are online. This means that a large number of devices, from air conditioners and refrigerators to water heaters and washing machines, have become part of the home network. For AI, this is equivalent to having a complete "perception system."

In other words, the more home appliances you have, the more complete your home data becomes, and the more capable the intelligent agent is of understanding life.

Second, there's the smart brain. This part is MevoX and the home navigation system MIA 1.0 behind it. If the home network is the body, then the AI brain is the nervous system, responsible for unified scheduling and decision-making across all devices.

Midea calls this system "home autonomous driving". This analogy is actually quite apt: autonomous vehicles need to perceive the environment in real time, predict behavior and control the vehicle, while home AI also needs to make comprehensive judgments on multiple systems such as air, temperature, water, electricity, and diet.

True automation of the home space is only possible when both the scale of the equipment and the ability of AI inference are present.

Third, it needs to be an open platform. This has been a long-standing challenge for the smart home industry. Over the past decade, most households have owned devices from multiple brands, making it difficult for different systems to interoperate.

Midea's approach is not to create a closed system, but to try to promote a larger interconnected ecosystem. For example, it connects with mobile phone manufacturers and car manufacturers to establish connections between mobile phones and home devices, and between vehicles and home devices.

When mobile phones, cars, and home appliances can share data, the boundaries of smart scenarios will expand significantly. For example, the car's infotainment system can notify home devices in advance while you're on your way home, or home energy consumption data can be linked with the vehicle's charging strategy.

From a technological perspective, this change has occurred at a crucial stage. On the one hand, large-scale models have significantly improved AI's understanding capabilities, and voice, semantic, and multimodal interactions have gradually matured; on the other hand, the penetration rate of smart home appliances is already high enough, and connecting devices to the network is no longer a problem.

In other words, software capabilities and hardware infrastructure matured simultaneously for the first time.

When these two factors combine, a true upgrade in the home experience is possible. In the coming years, the smart home industry is likely to shift from a "race to connect devices" to a "race to make systems intelligent."

In this process, whoever can build a more complete device network, more real-world scenario data, and more stable system scheduling capabilities will have a greater chance of defining the next generation of home experience.

The theme of this press conference is "Intelligent Beauty in All Things." If we interpret it from another perspective, it actually describes a larger change: the home is transforming from a collection of devices into a computable living system.

When the air, water, food, and laundry in a home can all be sensed and managed, the home is no longer just a physical space but a continuously operating system. In this system, devices are no longer isolated products but nodes, and the AI agent acts as the hub connecting them. Midea's MevoX and "Three Ones" strategy are essentially building that infrastructure.

Whether it can truly transform the home experience remains to be seen. But what is certain is that as intelligent agents begin to enter homes, the narrative of smart homes has already shifted, moving from "controlling devices" to "understanding life."

The situation is stable and improving.



Lobster Uninstallation Guide

Even real lobsters aren't suitable for everyone.

That sentence is a perfect description of OpenClaw, the hottest thing in AI right now.

The screenshots circulating on social media always show the lobster at its fattest and most delicious: the agent automatically processes emails, schedules tasks across applications, and acts like a digital employee that never rests and is never unresponsive in group chats.

This visual created a strong sense of FOMO, making countless people think, "I want one too."

Thus began a collective craze for lobsters. However, no one mentioned what kind of pot this "lobster" should be paired with, how much firewood to burn, or whether it would empty the entire refrigerator once it entered your kitchen.

Today, we won't talk about those grand narratives that change the world; let's just calculate the costs that an ordinary person would have to pay to own an OpenClaw.

A monthly salary of 20,000 yuan isn't enough to support even one lobster.

First, let's talk about how to experience OpenClaw.

The most complete setup today is a dedicated, always-online local machine. Peter Steinberger, the creator of OpenClaw, himself runs the agent on a Mac mini, connecting it to local files, attaching various tools, and letting it continuously handle tasks.

As a result, the Apple Mac mini quickly sold out on major e-commerce platforms. Apple's official website shows that orders placed now will not arrive until the end of April at the earliest. Furthermore, some second-hand platforms have even spawned services such as "renting a Mac mini to raise lobsters".

However, if you want to reduce API costs using a local model, the hardware requirements will increase dramatically.

If you want to save on hardware costs, you can choose a cloud server. Tencent Cloud and Alibaba Cloud both offer one-click deployment solutions, with prices ranging from tens to hundreds of yuan, as well as Kimi Claw, MaxClaw, and AutoClaw, which officially launched today, all emphasizing out-of-the-box usability.

What if you can't buy a machine? Then it's your old computer. But OpenClaw is extremely finicky about the system environment, especially the Node.js version, and countless enthusiastic users have spent all night following tutorials only to end up stuck at a command-line error.

This anxiety of wanting in but not getting it running has spawned a lucrative OpenClaw installation-service industry: on domestic platforms, remote installation starts at tens of yuan, while on-site service generally runs 500 to 1,500 yuan. A foreign site called SetupClaw quotes between 3,000 and 6,000 US dollars.

Even if you successfully deploy the lobster, it's advisable to be aware of potential pitfalls later on.

In the chatbot era, users paid a monthly subscription and the cost was static: one question, one answer. But once an agent starts running tasks, every webpage it reads, every tool it calls, every file it opens, and every retry after an error burns tokens at a frantic pace.

This reminds me of a popular saying recently: "A monthly salary of 20,000 yuan is not enough to support OpenClaw."

OpenClaw's official documentation is quite straightforward: the cost of "raising lobsters" comes not only from the core model's response, but also from webpage reading, memory retrieval, compression and summarization, tool calls, and the workspace file and bootstrap configuration stuffed into the system prompts.

With long contexts and repeated calls, the tokens burn fast. Concretely, at March 2026 market rates, running OpenClaw on Claude Sonnet through ten million total input and output tokens in a month would cost nearly $180 in fees.
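The arithmetic behind figures like this is simple, even if the real bill has more moving parts. A sketch with an assumed input/output split and placeholder per-million-token prices (neither is Anthropic's actual rate card):

```python
def monthly_cost_usd(input_mtok: float, output_mtok: float,
                     price_in: float, price_out: float) -> float:
    """API bill = input Mtok * $/Mtok input + output Mtok * $/Mtok output."""
    return input_mtok * price_in + output_mtok * price_out

# Agents are read-heavy, so assume a 9M-in / 1M-out split of the article's
# "ten million tokens a month" (an assumption). The per-million prices are
# illustrative placeholders, not a real vendor's rate card.
cost = monthly_cost_usd(input_mtok=9, output_mtok=1, price_in=3, price_out=15)
print(f"~${cost:.0f}/month at these placeholder rates")
```

At these placeholder rates the raw input/output bill lands far below the article's ~$180, which hints that much of a real agent's spend comes from cache writes, retries, and tool-call overhead rather than the headline per-token prices.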

If you treat it as a 24/7 agent and use advanced models to run more difficult tasks, it's not uncommon for monthly fees to exceed a thousand dollars.

Market data also confirms this strategy. The amount of tokens processed by OpenRouter jumped from 6.4 trillion per week to 13 trillion per week.

In this ecosystem, the biggest winners are always the large AI companies that find consumer scenarios and cash in on compute and APIs; the next tier is the cloud providers and paid-knowledge sellers who profit from services and information asymmetry; the only losers are ordinary users, who pay to burn tokens and shoulder the system risks.

Paying the first security tuition before OpenClaw is even installed

Even if money is no object, security is the real minefield, the kind that should keep you up at night.

Microsoft's security team has warned of the dangers of OpenClaw: OpenClaw should be considered an "untrusted code execution environment with persistent credentials" and is not suitable for running directly on standard personal computers or enterprise workstations.

The problem isn't whether it works; the problem is that it sits in an inherently dangerous position. High privileges, high connectivity, high automation: that combination should never let you drop your guard. Yet many people install OpenClaw the way they'd install a chat app, and that's an easy recipe for a mess.

Shodan platform monitoring shows that there are over 100,000 OpenClaw instances globally that are directly exposed on the public internet and are in a state of zero authentication. Qi An Xin data shows that a significant number of these are located within China.

The Ministry of Industry and Information Technology has also issued a risk warning, stating that the OpenClaw gateway does not verify the source of requests under default configuration. If a user accidentally clicks a malicious link in their browser, an attacker can take over all system privileges of the Agent through the local port.
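One quick self-check is whether a locally running gateway answers from anywhere beyond the loopback interface. A minimal sketch; the port number is a placeholder for illustration, not OpenClaw's documented default:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP service at host:port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

GATEWAY_PORT = 18789  # placeholder port; substitute your gateway's actual port

# Safe state: the gateway answers on loopback but NOT on the machine's
# LAN address, i.e. it is bound to 127.0.0.1 rather than 0.0.0.0.
print("loopback:", is_reachable("127.0.0.1", GATEWAY_PORT))
try:
    lan_ip = socket.gethostbyname(socket.gethostname())
    if lan_ip != "127.0.0.1":
        print("lan     :", is_reachable(lan_ip, GATEWAY_PORT))
except OSError:
    pass  # hostname may not resolve in some environments
```

If the service answers on the LAN address, it is visible to the network; binding it to 127.0.0.1, or putting authentication in front, is the safer configuration.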

What's more troublesome is that some people have already paid their first tuition fee before even installing the genuine version.

In February 2026, the security research firm Huntress discovered that someone was taking advantage of the popularity of OpenClaw to create a fake installation package on GitHub, implanting a Vidar information-stealing trojan and GhostSocks proxy malware.

Even Bing search ads were used for traffic generation; when users searched for "OpenClaw Windows," the AI-recommended link directly pointed to a newly created malicious GitHub repository. These fake installation packages were uploaded on February 2nd and were not discovered and taken down until February 10th, a full eight days later.

▲Bing AI search results linked to a malicious installer hosted on GitHub.  https://www.huntress.com/blog/openclaw-github-ghostsocks-infostealer

The plugin ecosystem is also a hidden minefield.

A cybersecurity audit found that about 12% of the skills in the ClawHub plugin marketplace contained malicious code. These skills typically disguised themselves as popular categories such as cryptocurrency assistants or YouTube tools, performing normal tasks while stealing SSH keys, browser passwords, and API keys in the background.

Since most plugins are stored in Markdown or YAML format, ordinary users cannot distinguish them by sight. Even worse, even if the official repository removes known malicious plugins, the GitHub repository still retains historical backups. What exactly was added to the copy you had someone else install? Often, even the person who did the installation might not be able to explain it clearly.

These kinds of risks do not automatically disappear just because the user is professional enough.

After Summer Yue, the Director of Security Research at Meta AI, connected her work email to OpenClaw, the agent began deleting emails at high speed and did not respond to her repeated "STOP" commands. In the end, she had to physically disconnect the machine to stop the loss.

The problem isn't that the model isn't smart enough. Rather, OpenClaw's context compression mechanism, when processing large volumes of emails, simply filtered out and forgot her previously set bottom-line instruction of "no confirmation, no execution." The system's design priorities didn't include the option for "users to stop at any time."

Even a top expert specializing in AI security risks couldn't stop the ship from crashing at a critical moment. The risks faced by ordinary users are therefore easy to imagine.

Ultimately, people's anxiety is not without reason. Last year's DeepSeek is like today's OpenClaw. Every now and then, a new species of AI emerges, pushing people to the psychological edge of "if we don't use it, we'll be left behind."

But often, what truly wears people down isn't the lack of advanced tools, but rather the sheer number, complexity, and noise of those tools. A Harvard Business Review study from March of this year provided data to confirm this situation.

After surveying 1,488 full-time workers, researchers found that using more than three AI tools simultaneously actually reduced productivity.

They call this state "AI brain overload," with typical symptoms including attention saturation, decision fatigue, and persistent mental fog. Employees in this state are 39% more likely to voluntarily leave their jobs than their peers. Even the most skilled AI users can be worn down by AI in another form.

So looking back, if you use OpenClaw as a toy, or for high-value, low-frequency tasks, the costs are generally controllable, and the risks are manageable. But if you treat it like a 24/7 digital employee, the costs, risks, and management complexity will rise rapidly.

For the vast majority of ordinary users, waiting for the next generation of more stable, safer, and cheaper products is often much more rational than rushing in now to become one of the first guinea pigs.

The first person to eat crab deserves respect. But the hundredth person to eat crab usually eats it better and cheaper.

Uninstallation Guide

If you've read this far and have already concluded that the costs and risks of OpenClaw far outweigh the benefits, and you're ready to say goodbye to this "lobster" with dignity, there are ways to do so. Uninstalling it is different from uninstalling ordinary software; it's not simply a matter of dragging it to the trash.

Uninstallation follows two paths: if the CLI is still running, use the simplified path; if the CLI is no longer found but the service is still running, use the manual cleanup path.
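The choice between the two paths can be sketched as a small POSIX-shell helper (illustrative only; it merely reports which path applies, and assumes the CLI, when installed, is named `openclaw` on your PATH):

```shell
#!/bin/sh
# Report which uninstall path applies for a given CLI name:
# "simplified" if the command is still on PATH, "manual" otherwise.
choose_uninstall_path() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "simplified"   # run: openclaw uninstall --all --yes --non-interactive
    else
        echo "manual"       # fall back to the per-OS service cleanup
    fi
}

choose_uninstall_path openclaw
```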

Simplified path (CLI still available)

The easiest way is to use its built-in uninstall command:

openclaw uninstall

To clear all configurations with one click and skip all confirmation prompts, add the following parameter:

openclaw uninstall --all --yes --non-interactive

If you prefer to use npx, that's fine too:

npx -y openclaw uninstall --all --yes --non-interactive

If you want to do it manually step by step, the effect will be exactly the same; just execute the steps in order:

Step 1: Stop the gateway service:

openclaw gateway stop

Step 2: Uninstall the gateway service itself:

openclaw gateway uninstall

Step 3: Delete local state and configuration files:

rm -rf "${OPENCLAW_STATE_DIR:-$HOME/.openclaw}"

Note: If you set OPENCLAW_CONFIG_PATH to a custom path outside the state directory, you must also delete that file manually, or it will be left behind.

Step 4: Delete the workspace (optional, but recommended, as it will also delete files generated by the Agent during runtime):

rm -rf ~/.openclaw/workspace

Step 5: Uninstall the CLI program, choosing the command that matches how it was originally installed:

# Installed via npm

npm rm -g openclaw

# Installed via pnpm

pnpm remove -g openclaw

# Installed via Bun

bun remove -g openclaw

If you also have the macOS desktop version installed, remember to handle that as well:

rm -rf /Applications/OpenClaw.app

Manual cleanup path (CLI no longer available, but the service is still running)

If the CLI is no longer available, but the gateway service is still running silently in the background, then it needs to be handled separately according to the operating system.

macOS users:

The default launchd service label is ai.openclaw.gateway. Execute:

launchctl bootout gui/$UID/ai.openclaw.gateway
rm -f ~/Library/LaunchAgents/ai.openclaw.gateway.plist

If you used the `--profile` parameter, replace the label and plist filename in the commands with `ai.openclaw.<profile name>`. Additionally, delete any leftover plists in the `com.openclaw.*` format inherited from older versions of OpenClaw.

Linux users:

The default service unit name is openclaw-gateway.service. Execute:

systemctl --user disable --now openclaw-gateway.service
rm -f ~/.config/systemd/user/openclaw-gateway.service
systemctl --user daemon-reload

For those using `--profile`, the corresponding unit name is `openclaw-gateway-<profile name>.service`; substitute it in the commands above.

Windows users:

The default task name is OpenClaw Gateway. Execute:

schtasks /Delete /F /TN "OpenClaw Gateway"
Remove-Item -Force "$env:USERPROFILE\.openclaw\gateway.cmd"

If `--profile` is used, the corresponding task name is OpenClaw Gateway (<profile name>), and the file to delete is `~\.openclaw-<profile name>\gateway.cmd`.

Several easily overlooked details

  • For multiple profiles: If you created multiple configurations using the `--profile` parameter, each profile has its own independent state directory, with the default path being `~/.openclaw-<profile name>`. Find and delete each of them; miss one and residual data will remain.
  • In remote mode: If you are using remote mode, the state directory is not on your local machine but on the gateway host. The steps above, such as stopping the service and deleting the state directory, must therefore be performed after logging into the gateway host; local operations are not enough.
  • For installations from source: If you pulled the source code with `git clone`, the order matters. First uninstall the gateway service (via the simplified path or the manual cleanup path above), then delete the repository directory, and finally clean up the state directory and workspace. Reversing the order leaves the service running, and deleting the repository alone will not clean it up completely.
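The per-profile cleanup above can be sketched as a dry-run loop (an illustration, assuming the default `~/.openclaw` and `~/.openclaw-<profile>` locations; it only prints candidate directories so you can review the list before deleting anything):

```shell
#!/bin/sh
# Print every OpenClaw state directory under $1: the default profile
# plus any --profile variants. Nothing is deleted; review the output,
# then pipe it to `xargs rm -rf` only once it looks right.
list_profile_state_dirs() {
    base="$1"
    for d in "$base"/.openclaw "$base"/.openclaw-*; do
        [ -d "$d" ] && echo "$d"
    done
    return 0
}

list_profile_state_dirs "$HOME"
```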

Only after doing all this can you truly say goodbye to this lobster.

#Welcome to follow iFanr's official WeChat account: iFanr (WeChat ID: ifanr), where more exciting content will be presented to you as soon as possible.



Only one can be kept! Porsche, unable to afford a loss, is forced to choose between the Panamera and Taycan.

In September 2019, the Porsche Taycan made its global debut simultaneously on three continents.

At the Niagara Falls hydroelectric power station in Canada, massive plumes of water rise into the sky, their deafening roar showcasing nature's most primal gravitational potential energy; at the vast solar farm in Neuhardenberg, Germany, tens of thousands of matrix-arranged silicon crystal panels silently absorb the blinding sunlight; and along the coastline of Pingtan Island in Fujian, China, the blades of giant wind turbines, hundreds of meters high, carry the salty sea breeze, emitting a deep and rhythmic whistling sound.

In an extremely rare move, Porsche has crossed three continents and used the three largest and purest clean energy sources—wind, solar, and water—to create a grand stage for its first all-electric sports car.

▲ The stage for the press conference located in Fujian

As the extremely low-slung Taycan silently slid into the spotlight, the exhaust note of Porsche's iconic horizontally opposed engines, a hallmark of its century-long history, was replaced by a low-frequency, futuristic electrical hum.

The visual impact of this moment is extremely strong; the Taycan is like Stuttgart's elegant yet resolute declaration of war against the old era.

Subsequent market feedback confirmed the success of this move; in 2021, Taycan sales even surpassed those of Porsche's iconic 911.

Despite this glory, as late as early 2024, Kevin Giek, Vice President of the Porsche Taycan product line, remained steadfast in an interview, stating that Taycan was a "long-term" model name, with a strategic position on par with the 911.

However, the script of the business world often changes faster than the magic tricks of a press conference.

A sharp turn

As 2026 dawned, faced with two consecutive years of declining sales figures for its all-electric vehicles and dismal financial reports, Porsche's management was forced to reconsider a decision that would have been unthinkable two years prior.

The Taycan was stripped of its independent status and incorporated into the Panamera product line, forming a unified series of high-performance four-door sedans.

Taycan and Panamera are currently two completely independent product projects. Although they are similar in positioning, there is almost no overlap between them at the engineering level.

The current Panamera is built on the Volkswagen Group's mature MSB platform. According to Porsche's original plan, the third-generation Panamera would smoothly transition to the newer PPC (Premium Platform Combustion) platform within the next decade, continuing to rely on gasoline and plug-in hybrid powertrains as its main sales engines.

The Taycan, on the other hand, is built on the J1 platform, which is specifically developed for high-performance all-electric vehicles and shares this expensive underlying architecture with the Audi e-tron GT within the group. According to the original plan, the Taycan's successor should have smoothly transitioned to the Volkswagen Group's latest generation SSP Sport all-electric platform, completing the iteration of electrification technology.

In an era of a favorable macroeconomy and soaring sales of pure electric vehicles, Porsche has been able to comfortably support two completely independent R&D systems, two incompatible parts supply chains, and two parallel engineering teams, thanks to its extremely high profit margin per vehicle.

However, as market sentiment cooled and sales declined, the cost of maintaining this "dual-track system" began to become unbearable.

In 2025, global deliveries of the Taycan fell to 16,339 units, while the Panamera continued its steady performance, delivering 27,701 units.

With the sales volume of pure electric vehicles shrinking significantly, it has become uneconomical to continue to maintain an expensive and massive underlying platform iteration path for a model with annual sales of less than 20,000 units.

The internal software architecture crisis within the Volkswagen Group has exacerbated the situation. The ongoing turmoil and restructuring of the CARIAD software division has directly led to significant delays in the development of the SSP Sport all-electric platform, originally slated for release in recent years.

The difficulty in developing the underlying architecture put Porsche's huge upfront R&D investment at extremely high risk of being lost, forcing the company to record an impairment loss of up to 1.8 billion euros in its financial statements.

Faced with such a heavy financial burden, the new Porsche CEO, Michael Leiters, had no choice but to implement a tough plan to "cut redundant development spending."

▲ New Porsche CEO Michael Leiters

Same name, different bodies

Fortunately, Porsche does not intend to discontinue one of its models directly. Instead, it prefers to adopt a "same name, different bodies" approach, which means using a single product line name to cover different platforms and powertrains, while offering gasoline, plug-in hybrid, and pure electric options.

This practice is not unfamiliar within Porsche. Currently, gasoline and all-electric versions of the Macan and Cayenne are sold concurrently in multiple markets, despite being built on completely different platforms.

If Porsche extends this logic to its sedan product line, the general direction will be that the gasoline and plug-in hybrid versions will use the PPC platform, while the pure electric versions will use the SSP Sport platform, and they will be presented as a single product line to the outside world.

▲ Currently, various platforms under Volkswagen

According to reports from foreign media outlets such as Autocar, Porsche is currently exploring the possibility of wider parts sharing and establishing a common product identity, even if subsequent models continue to use different underlying platforms.

To maximize cost reduction and efficiency, Porsche engineers are seeking a universal solution that transcends existing chassis architectures. Even if future product lines are still built on two different chassis architectures, Porsche strives to achieve large-scale modular sharing across platforms in areas such as electronic and electrical architecture, the underlying code of the infotainment system, the seat assembly frame, the air conditioning temperature control module, and even the Porsche Active Ride active suspension technology, which best represents the soul of Porsche's handling.

▲ Porsche Active Ride active suspension technology

Besides the chassis architecture, if the two models eventually merge into a product line, integrating the exterior design and body shape will be another challenge.

Currently, there is a significant personality clash between these two cars in terms of their visual characteristics.

Thanks to the physical advantages of its all-electric architecture, the Taycan has an extremely low center of gravity, giving it a compact and aggressive overall stance, with highly aggressive aerodynamic design.

The Panamera, on the other hand, is a standard flagship four-door sedan with a longer and wider body, exuding the composed cruising aura of a classic luxury car.

Furthermore, once the sedan product lines are consolidated, in order to streamline complex production lines and reduce manufacturing costs as much as possible, wagon variants like the Taycan Cross Turismo, which have high R&D costs but a very narrow audience, will most likely face the fate of being ruthlessly canceled.

▲Taycan Cross Turismo

Regarding the final visual presentation, industry insiders generally speculate that Porsche is very likely to refer to the approach taken by the new Cayenne.

This means that while maintaining a unified family-style body outline and interior layout, the gasoline and pure electric versions are allowed to retain their own exclusive front grille styles, air intake shapes, and specific aerodynamic details.

In this way, consumers can clearly identify whether the engine or the motor is running inside the car by just glancing at the front, while also allowing the pure electric version to retain a bit of the futuristic visual DNA that the Taycan was once proud of.

▲ All-electric Cayenne

As for which name to use after the merger—Panamera or Taycan—it remains an unresolved point of contention at Porsche's senior management meetings.

The name "Panamera" undoubtedly boasts a deeper global base, a larger high-net-worth car owner group, and indelible glorious memories of the era of gasoline-powered cars; but "Taycan" represents Porsche's first sharp blade cutting into the electric era, carrying the brand's determination and vigor in its transformation into the new century.

Erasing any one of them would be a regret and a loss.

▲ The first-generation Panamera

Fortunately, Porsche still has some time before the final verdict is delivered.

The third-generation Panamera made its debut at the end of 2023, while the Taycan underwent a major mid-cycle refresh in 2024. Following the typical seven- to eight-year iteration cycle of traditional luxury car manufacturers, the complete replacement of these two models will likely occur around 2029 to 2030.

This means that Porsche still has relatively ample time to complete the demonstration and engineering layout of the platform integration plan. Before the official replacement plan is implemented, the two current models will continue to operate independently.

Looking back at that morning in front of Niagara Falls in 2019, the Taycan was seen by countless people as the beginning of Porsche's future.

After the initial rapid advancement of electrification, and with the cooling of high-end pure electric performance cars and the strong resurgence of plug-in hybrid technology, using a more intensive and efficient R&D approach to cover the most comprehensive powertrain matrix possible may be the most sober defensive counterattack this giant can make.



After 11 years of losses, “idealist” NIO has finally turned a profit.

After 11 years of operation and long-term losses, NIO has finally achieved profitability.

According to the recently released financial report, NIO's operating profit in the fourth quarter of last year was 1.25 billion yuan, marking the company's first quarterly profit.

Full-year revenue reached RMB 87.49 billion, a year-on-year increase of 33.1%, and total gross profit reached RMB 11.92 billion, a year-on-year increase of 83.5%, both setting new historical records.

NIO attributed its "turnaround" primarily to "continued sales growth in the fourth quarter of 2025" and "optimized vehicle gross margins driven by a favorable product mix."

Delivery data also strongly supports NIO's positive momentum.

In 2025, NIO delivered a total of 326,028 new vehicles; in January of this year, even during the off-season, NIO's deliveries still grew 96% year-on-year.

Ledao's performance was also a surprise, with its sales surpassing those of NIO's main brand in the third quarter of 2025, becoming the main driver of growth.

The wind direction can change.

If we turn back the clock to the beginning of 2025, when Li Bin proclaimed that the company would be profitable in the fourth quarter, almost no one believed him.

Various rumors of the brand's collapse were rampant on social media platforms. Li Bin admitted that "the Ledao brand previously suffered a loss of up to 40% of its orders, mainly due to continually fermenting negative public opinion."

Over the past few years, NIO has faced plenty of criticism, but interestingly, most of the controversy focused not on the products themselves, but on questions of operational efficiency, shifts in technical route, and whether the company could weather economic cycles.

In other words, many potential customers don't think the car is bad; they're worried about whether the company will stay in the market long-term after they buy it.

As Li Bin said—

Currently, 30%-40% of potential customers are not buying NIO cars because they are worried about the company going bankrupt. If we become profitable, this rumor will be dispelled.

When the market begins to believe that NIO can survive, the pent-up wait-and-see sentiment will be transformed into orders, and the product's popularity will be reignited.

The shift in trend began with the launch of the Ledao L90.

With its extra-large 240-liter front trunk and highly competitive pricing, this model wins out in a head-to-head battle with the Li Auto i8.

Subsequently, the launch of the all-new NIO ES8 once again triggered a surge in orders and a sharp increase in store traffic.

At this point, the claim that "NIO will go bankrupt" quietly faded away, and profitability was no longer seen as mere boasting, but as a foreseeable reality.

The success of the Ledao L90 and NIO ES8 is ultimately due to NIO's long-term commitment to "long-termism".

Choices previously considered "too heavy and too slow" underwent a qualitative change at a certain moment, suddenly becoming a moat.

The battery swapping system is the most controversial part of this long-term approach, and it best embodies the "hard work" required.

Since the concept of battery swapping emerged, the controversy surrounding it and the BAAS (Battery as a Service) scheme has never ceased.

NIO repeatedly explained its logic in the early days: use a small battery for daily use and replace it with a large battery for long-distance travel, which reduces the burden of one-time car purchase for users and improves the utilization efficiency of battery assets.

The problem is that when battery swapping stations are still concentrated in a few areas, the threshold for understanding these benefits is too high, most people lack a perceptible experience, and naturally they are unwilling to pay for them.

However, the continuous investment finally led to a qualitative change from the quantitative one.

On the same day that it released its profit forecast, Li Bin announced that NIO's battery swap count was about to exceed 100 million, and the number of battery swap stations had expanded to 3,719, covering most parts of China and travel routes.

After experiencing it firsthand, users will naturally figure out which option is more beneficial to them. After the L90 became a hit, when only 85 kWh batteries were initially provided, many users spontaneously wanted to rent 60 kWh batteries because this was more in line with their usage scenarios and saved them money.

Earlier this year, in an internal letter to employees, Li Bin cited a set of data—in the first 11 months of 2025, the proportion of pure electric vehicles in the domestic new energy vehicle market rebounded to 61.9%, with a growth rate significantly surpassing that of range-extended and plug-in hybrid vehicles.

This trend validates NIO's long-held judgment that, with the improvement of infrastructure, the experiential benefits brought by pure electric technology have officially surpassed the experiential losses caused by charging anxiety.

▲ NIO's current battery swapping system layout

Battery swapping offers a "long-term" approach at the infrastructure level, while the product competitiveness of the L90 and ES8 comes from the "long-term" approach of their technology platforms.

For example, the Ledao L90's 240-liter front trunk is the result of NIO's systematic optimization of the front cabin space through technologies such as the 900V architecture, integrated thermal management, and miniaturized drive units.

Similarly, the highly praised third-row seating experience also benefits from the advantages of the all-electric architecture. The elimination of the need for an exhaust and transmission system allows the L90's chassis to be completely flat, providing more legroom and a more comfortable seating position for third-row passengers.

Based on these technological advancements, the Ledao L90 caught the Li Auto i8 completely off guard as soon as it hit the market.

In terms of price, Li Auto struggles to convince users to pay nearly 80,000 yuan more for supercharging and NVH performance; technically, it's even harder for Li Auto to explain why the more expensive i8 lacks a front trunk and has far less storage space than the Ledao L90.

The NIO ES8's BAAS price of 290,000 yuan is based on the same logic.

In Li Bin's own words—

The R&D and production costs of the 900V platform can be spread across the ET9 and L90. This includes the self-developed Shenji chip, which can save a lot of money. As the scale increases to a certain extent, the costs of R&D chips, technological innovations, and intelligent systems can all be spread. However, the older models used the technology too early and the scale was not large enough, resulting in high costs.

This is where the principle of "great things come from small beginnings" comes into play.

Believe and strive for it.

In the course of business operations, mistakes are inevitable. Pitfalls can occur in product direction, configuration choices, and timing, leading to external scrutiny.

So when mistakes are made and doubts are faced, what is the most essential force that helps one get out of trouble or wait for help?

Various works have explored this issue, but in the end, it all boils down to two words: "belief" or "vision".

NIO is the same. Even when it was being criticized by everyone and the company was on the verge of collapse, NIO never gave up on its core strategy of "battery swapping".

The most fundamental source of this strategy is NIO's vision of "creating a sustainable and better future together".

NIO's ability to persist for 10 years despite huge losses and to continuously raise funds is largely due to its battery swapping system.

Battery swapping centralizes batteries into a unified system for management, making charging rhythm more controllable, increasing the proportion of slow charging, and improving battery turnover efficiency; it is more cost-effective for users, operators, and the battery assets themselves.

It is precisely because this model is more "compliant and explainable" in terms of resource utilization and management efficiency that battery swapping can enjoy policy support such as exemption from purchase tax.

The same applies to funding. Li Bin's ability to repeatedly secure funds from the market is not due to personal charisma, but rather because there are people willing to pay for the sustainability of this path.

Latepost once reported a story about the development of the Ledao L90, which may also reflect NIO's commitment to "values" from another perspective.

Li Bin has always opposed installing rear entertainment screens in cars, citing concerns that "in-car screens would deprive children of the opportunity to explore the world and communicate with their families."

This idealistic persistence made NIO's early products stand out against the competition's "fridge, color TV, and big sofa" feature offensive.

Later, when the team approached Li Bin with a solution featuring a rear-row screen during the development of the Ledao L90, they were unsurprisingly "scolded".

However, the team explained that they needed to consider families with three-year-old children as well as the needs of other families. After several rounds of discussions, the final solution was to make the rear screen an optional feature and use the front screen to control and lock the rear screen for content guidance.

This process of constantly seeking a balance between profit and ideals, technology and warmth, is perhaps why many people hope that NIO will win.

In both visible and invisible places, what NIO has done can be said to be the most far-reaching thing among domestic brands.

In a business world increasingly dominated by algorithms and programs, we can still see a glimmer of idealism in NIO.

At the NIO Day event in 2023 when ET9 was released, Li Bin said:

I believe that Chinese car brands will eventually achieve technological and brand advancement, but this process will be much longer than people imagine. Therefore, we must remain steadfast and patient, and avoid turning back and forth during this process.

Ultimately, our hope that Li Bin and NIO can win and survive stems from the belief that people and companies that consistently do difficult but right things should be rewarded.
